WorldWideScience

Sample records for reliable analytical method

  1. Application of system reliability analytical method, GO-FLOW

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi; Fukuto, Junji; Mitomo, Nobuo; Miyazaki, Keiko; Matsukura, Hiroshi; Kobayashi, Michiyuki

    1999-01-01

    The Ship Research Institute has been carrying out a development study on the GO-FLOW method, adding various advanced capabilities to this system reliability analysis technique, which forms a major part of PSA (Probabilistic Safety Assessment). The work reported here aimed to upgrade the functionality of the GO-FLOW method, to develop an analysis capability that integrates dynamic behaviour analysis, physical behaviour and probabilistic state transitions, and to prepare a function for extracting the main accident sequences. In fiscal year 1997, an analytical function was added to the dynamic event-tree analysis system to handle dependencies between headings. With the accident-sequence simulation function, the main accident sequences of the MRX improved marine propulsion reactor could be covered completely. In addition, a function was prepared that allows the analyst to set up the input data easily. (G.K.)

  2. Precision profiles and analytic reliability of radioimmunologic methods

    International Nuclear Information System (INIS)

    Yaneva, Z.; Popova, Yu.

    1991-01-01

    The aim of the present study is to investigate and compare several methods for constructing 'precision profiles' (PP) and to clarify their usefulness for determining the analytical reliability of RIA. Only methods without complicated mathematical calculations were used. Reproducibility was studied in sera with concentrations of the hormone of interest spanning the whole range of the calibration curve. The radioimmunoassay was performed with a TSH-RIA kit (ex East Germany), and comparative evaluations with commercial kits from HOECHST (Germany) and AMERSHAM (GB). Three methods for obtaining the relationship between concentration (IU/l) and reproducibility (C.V., %) are used and their corresponding profiles are compared: a preliminary rough profile, the Rodbard PP and the Ekins PP. It is concluded that constructing a precision profile is obligatory, and that the method of construction does not influence the course of the relationship. The PP allows determination of the concentration range giving stable results, which improves the efficiency of the analytical work. 16 refs., 4 figs

  3. Evaluation of structural reliability using simulation methods

    Directory of Open Access Journals (Sweden)

    Baballëku Markel

    2015-01-01

    Full Text Available Eurocode describes the 'index of reliability' as a measure of structural reliability, related to the 'probability of failure'. This paper is focused on the assessment of this index for a reinforced concrete bridge pier. It is rare to explicitly use reliability concepts for design of structures, but the problems of structural engineering are better known through them. Some of the main methods for the estimation of the probability of failure are the exact analytical integration, numerical integration, approximate analytical methods and simulation methods. Monte Carlo Simulation is used in this paper, because it offers a very good tool for the estimation of probability in multivariate functions. Complicated probability and statistics problems are solved through computer aided simulations of a large number of tests. The procedures of structural reliability assessment for the bridge pier and the comparison with the partial factor method of the Eurocodes have been demonstrated in this paper.
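
    As a rough illustration of the Monte Carlo step described above, the sketch below estimates a probability of failure and the corresponding reliability index for a simple limit state g = R - S; the lognormal resistance and normal load parameters are invented for the example and are not the bridge-pier values from the paper.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(42)
    n = 1_000_000  # number of Monte Carlo trials

    # Illustrative limit state g = R - S:
    # R = resistance (lognormal), S = load effect (normal). Values are made up.
    R = rng.lognormal(mean=np.log(500.0), sigma=0.10, size=n)   # kNm
    S = rng.normal(loc=350.0, scale=50.0, size=n)               # kNm

    failures = np.count_nonzero(R - S < 0.0)
    pf = failures / n                      # estimated probability of failure
    beta = -norm.ppf(pf)                   # Eurocode-style reliability index

    # Coefficient of variation of the pf estimator (checks sampling accuracy)
    cov_pf = np.sqrt((1.0 - pf) / (n * pf))

    print(f"pf   = {pf:.3e}")
    print(f"beta = {beta:.2f}")
    print(f"CoV of pf estimate = {cov_pf:.2%}")
    ```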

  4. Evidential analytic hierarchy process dependence assessment methodology in human reliability analysis

    International Nuclear Information System (INIS)

    Chen, Lu Yuan; Zhou, Xinyi; Xiao, Fuyuan; Deng, Yong; Mahadevan, Sankaran

    2017-01-01

    In human reliability analysis, dependence assessment is an important issue in large, high-risk complex systems such as the operation of a nuclear power plant. Many existing methods depend on expert judgment, which introduces subjectivity and limits the results. Recently, a computational method based on the Dempster-Shafer evidence theory and the analytic hierarchy process has been proposed to handle dependence in human reliability analysis. The model can deal with uncertainty in an analyst's judgment and reduce the subjectivity in the evaluation process. However, the computation is somewhat heavy and complicated. More importantly, the existing method evaluates dependence only from a positive (optimistic) viewpoint, which may cause an underestimation of the risk. In this study, a new evidential analytic hierarchy process dependence assessment methodology, based on improvements to the existing method, is proposed; it is expected to be easier to apply and more effective
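
    For readers unfamiliar with the evidence-theory ingredient of this approach, the following sketch shows Dempster's rule of combination, the basic operation used to fuse two experts' basic probability assignments; the frame of dependence levels and the mass values are hypothetical and are not taken from the paper.

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Combine two basic probability assignments (dicts: frozenset -> mass)
        with Dempster's rule, normalising out the conflicting mass."""
        combined = {}
        conflict = 0.0
        for (a, x), (b, y) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + x * y
            else:
                conflict += x * y
        if conflict >= 1.0:
            raise ValueError("Total conflict: sources cannot be combined")
        return {s: v / (1.0 - conflict) for s, v in combined.items()}

    # Hypothetical frame: dependence levels between two operator actions
    # (zero, low, moderate), in the spirit of THERP-style categories.
    ZD, LD, MD = frozenset({"ZD"}), frozenset({"LD"}), frozenset({"MD"})
    theta = ZD | LD | MD   # ignorance: mass assigned to the whole frame

    m_expert1 = {ZD: 0.5, LD: 0.3, theta: 0.2}
    m_expert2 = {LD: 0.6, MD: 0.2, theta: 0.2}

    print(dempster_combine(m_expert1, m_expert2))
    ```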

  5. Evidential Analytic Hierarchy Process Dependence Assessment Methodology in Human Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Luyuan Chen

    2017-02-01

    Full Text Available In human reliability analysis, dependence assessment is an important issue in large, high-risk complex systems such as the operation of a nuclear power plant. Many existing methods depend on expert judgment, which introduces subjectivity and limits the results. Recently, a computational method based on the Dempster–Shafer evidence theory and the analytic hierarchy process has been proposed to handle dependence in human reliability analysis. The model can deal with uncertainty in an analyst's judgment and reduce the subjectivity in the evaluation process. However, the computation is somewhat heavy and complicated. More importantly, the existing method evaluates dependence only from a positive (optimistic) viewpoint, which may cause an underestimation of the risk. In this study, a new evidential analytic hierarchy process dependence assessment methodology, based on improvements to the existing method, is proposed; it is expected to be easier to apply and more effective.

  6. Evidential analytic hierarchy process dependence assessment methodology in human reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Lu Yuan; Zhou, Xinyi; Xiao, Fuyuan; Deng, Yong [School of Computer and Information Science, Southwest University, Chongqing (China); Mahadevan, Sankaran [School of Engineering, Vanderbilt University, Nashville (United States)

    2017-02-15

    In human reliability analysis, dependence assessment is an important issue in large, high-risk complex systems such as the operation of a nuclear power plant. Many existing methods depend on expert judgment, which introduces subjectivity and limits the results. Recently, a computational method based on the Dempster-Shafer evidence theory and the analytic hierarchy process has been proposed to handle dependence in human reliability analysis. The model can deal with uncertainty in an analyst's judgment and reduce the subjectivity in the evaluation process. However, the computation is somewhat heavy and complicated. More importantly, the existing method evaluates dependence only from a positive (optimistic) viewpoint, which may cause an underestimation of the risk. In this study, a new evidential analytic hierarchy process dependence assessment methodology, based on improvements to the existing method, is proposed; it is expected to be easier to apply and more effective.

  7. NUMERICAL AND ANALYTIC METHODS OF ESTIMATION BRIDGES’ CONSTRUCTIONS

    Directory of Open Access Journals (Sweden)

    Y. Y. Luchko

    2010-03-01

    Full Text Available In this article, numerical and analytical methods for calculating the stress-strain state of bridge structures are considered. The task of increasing the reliability and accuracy of the numerical method is formulated and solved by performing the calculations in two bases. The analytical solution of the differential equation of deformation of a reinforced-concrete plate under the action of local loads is also obtained.

  8. Effects of Analytical and Holistic Scoring Patterns on Scorer Reliability in Biology Essay Tests

    Science.gov (United States)

    Ebuoh, Casmir N.

    2018-01-01

    Literature revealed that the patterns/methods of scoring essay tests had been criticized for not being reliable and this unreliability is more likely to be more in internal examinations than in the external examinations. The purpose of this study is to find out the effects of analytical and holistic scoring patterns on scorer reliability in…

  9. Statistically qualified neuro-analytic failure detection method and system

    Science.gov (United States)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaptation and stochastic modification of the deterministic model. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation-error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates the measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. To illustrate the method and apparatus, the approach is applied to a peristaltic pump system.
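
    The sequential probability ratio test mentioned for validation can be sketched in a few lines; the Gaussian residual model, error probabilities and data below are illustrative assumptions, not the patent's actual implementation.

    ```python
    import numpy as np

    def sprt(residuals, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
        """Wald sequential probability ratio test on model residuals.
        H0: residual mean = mu0 (model tracks the process)
        H1: residual mean = mu1 (process change / failure).
        Returns ('H0' | 'H1' | 'continue', sample index at decision)."""
        upper = np.log((1.0 - beta) / alpha)   # accept H1 above this
        lower = np.log(beta / (1.0 - alpha))   # accept H0 below this
        llr = 0.0
        for i, r in enumerate(residuals, start=1):
            # log-likelihood ratio increment for Gaussian residuals
            llr += ((r - mu0) ** 2 - (r - mu1) ** 2) / (2.0 * sigma ** 2)
            if llr >= upper:
                return "H1", i
            if llr <= lower:
                return "H0", i
        return "continue", len(residuals)

    rng = np.random.default_rng(0)
    healthy = rng.normal(0.0, 1.0, 50)   # residuals of a well-tracking model
    faulty = rng.normal(1.2, 1.0, 50)    # residuals after a process change

    print(sprt(healthy))   # typically decides H0 within roughly ten samples
    print(sprt(faulty))    # typically decides H1 within roughly ten samples
    ```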

  10. Calculation of the reliability of large complex systems by the relevant path method

    International Nuclear Information System (INIS)

    Richter, G.

    1975-03-01

    In this paper, analytical methods are presented and tested with which the probabilistic reliability data of technical systems can be determined for given fault trees and block diagrams and known reliability data of the components. (orig./AK) [de

  11. An analytical framework for reliability growth of one-shot systems

    International Nuclear Information System (INIS)

    Hall, J. Brian; Mosleh, Ali

    2008-01-01

    In this paper, we introduce a new reliability growth methodology for one-shot systems that is applicable to the case where all corrective actions are implemented at the end of the current test phase. The methodology consists of four model equations for assessing: expected reliability, the expected number of failure modes observed in testing, the expected probability of discovering new failure modes, and the expected portion of system unreliability associated with repeat failure modes. These model equations provide an analytical framework for which reliability practitioners can estimate reliability improvement, address goodness-of-fit concerns, quantify programmatic risk, and assess reliability maturity of one-shot systems. A numerical example is given to illustrate the value and utility of the presented approach. This methodology is useful to program managers and reliability practitioners interested in applying the techniques above in their reliability growth program

  12. Application of analytical procedure on system reliability, GO-FLOW

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi; Fukuto, Junji; Mitomo, Nobuo; Miyazaki, Keiko; Matsukura, Hiroshi; Kobayashi, Michiyuki

    2000-01-01

    At the Ship Research Institute, research and development of the GO-FLOW method, a system reliability analysis technique with various advanced functions that forms a major part of probabilistic safety assessment (PSA), has been pursued. In this study, aiming at a fundamental upgrade of the GO-FLOW method as an important evaluation technique for performing PSA below level 3, a safety assessment system using GO-FLOW was developed, together with an analysis capability that couples dynamic behaviour analysis and the physical behaviour of the system with stochastic changes of state. In fiscal year 1998, functions such as adding dependencies between headings, rearranging headings in time order, assigning the same heading to several positions, and calculating occurrence frequencies as a function of elapsed time were prepared and verified. For the accident-sequence simulation function, it was confirmed that all of the main accident sequences of the MRX improved marine reactor could be analysed. In addition, a function for producing the analysis input data nearly automatically was prepared. As a result, analysis results that previously were not easy to understand for anyone but a PSA expert can now be interpreted readily, and understanding of the accident phenomena, verification of the validity of the analysis, and feedback to the analysis and to the design can be carried out easily. (G.K.)

  13. A method of predicting the reliability of CDM coil insulation

    International Nuclear Information System (INIS)

    Kytasty, A.; Ogle, C.; Arrendale, H.

    1992-01-01

    This paper presents a method of predicting the reliability of the Collider Dipole Magnet (CDM) coil insulation design. The method proposes a probabilistic treatment of electrical test data, stress analysis, material property variability and loading uncertainties to give the reliability estimate. The approach taken to predict the reliability of design-related failure modes of the CDM is to form analytical models of the various possible failure modes and their related mechanisms or causes, and then to statistically assess the contributions of the contributing variables. The probability of the failure mode occurring is interpreted as the number of times one would expect certain extreme situations to combine and randomly occur. One of the more complex failure modes of the CDM will be used to illustrate this methodology

  14. A Novel Analytic Technique for the Service Station Reliability in a Discrete-Time Repairable Queue

    Directory of Open Access Journals (Sweden)

    Renbin Liu

    2013-01-01

    Full Text Available This paper presents a decomposition technique for the service station reliability in a discrete-time repairable Geom^X/G/1 queueing system, in which the server takes exhaustive service and a multiple adaptive delayed vacation discipline. Using such a novel analytic technique, some important reliability indices and reliability relation equations of the service station are derived. Furthermore, the structures of the service station indices are also found. Finally, special cases and numerical examples validate the derived results and show that our analytic technique is applicable to reliability analysis of some complex discrete-time repairable bulk arrival queueing systems.

  15. Measurement of HDO Products Using GC-TCD: Towards Obtaining Reliable Analytical Data

    Directory of Open Access Journals (Sweden)

    Zuas Oman

    2018-03-01

    Full Text Available This paper reports the development and validation of a gas chromatography with thermal conductivity detector (GC-TCD) method for the measurement of the gaseous products of hydrodeoxygenation (HDO). The method validation parameters include selectivity, precision (repeatability and reproducibility), accuracy, linearity, limit of detection (LoD), limit of quantitation (LoQ), and robustness. The results showed that the developed method was able to separate the target components (H2, CO2, CH4 and CO) from their mixtures without any special sample treatment. The validated method was selective, precise, accurate, and robust. Application of the developed and validated GC-TCD method to the measurement of by-product components of HDO of bio-oil revealed good performance, with a relative standard deviation (RSD) of less than 1.0% for all target components, implying that the method development and validation process provides a trustworthy way of obtaining reliable analytical data.
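
    The arithmetic behind several of the quoted validation parameters can be sketched as follows, using invented repeatability and calibration data and the common ICH-style formulas LoD = 3.3*s/slope and LoQ = 10*s/slope (assumed here, not quoted from the paper).

    ```python
    import numpy as np

    # Hypothetical repeatability data: six replicate injections of an H2 standard
    # (peak areas, arbitrary units) -- not data from the paper.
    areas = np.array([10234., 10190., 10255., 10221., 10208., 10243.])
    rsd = areas.std(ddof=1) / areas.mean() * 100.0
    print(f"Repeatability RSD = {rsd:.2f}%")   # acceptance often set at < 1-2 %

    # Hypothetical calibration: concentration (% mol) vs. peak area.
    conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
    area = np.array([520., 1015., 2060., 5110., 10180.])
    slope, intercept = np.polyfit(conc, area, 1)
    residual_sd = np.std(area - (slope * conc + intercept), ddof=2)

    # ICH-style estimates based on the residual standard deviation of the fit
    lod = 3.3 * residual_sd / slope
    loq = 10.0 * residual_sd / slope
    print(f"LoD = {lod:.3f} %mol, LoQ = {loq:.3f} %mol")
    ```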

  16. An exact method for solving logical loops in reliability analysis

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi

    2009-01-01

    This paper presents an exact method for solving logical loops in reliability analysis. Systems that include logical loops are usually described by simultaneous Boolean equations. First, a basic rule for solving simultaneous Boolean equations is presented. Next, the analysis procedure for a three-component system with external supports is shown. Third, more detailed discussion is given of how the logical loop relations are established. Finally, two typical structures which include more than one logical loop are taken up; their analysis results and the corresponding GO-FLOW charts are given. The proposed analytical method is applicable to loop structures that can be described by simultaneous Boolean equations, and it is very useful in evaluating the reliability of complex engineering systems.
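
    The fixed-point idea behind solving such simultaneous Boolean equations can be illustrated with a toy two-component loop in which each unit is backed up by the other; this is a hypothetical example of the kind of loop structure discussed, not the paper's worked case.

    ```python
    from itertools import product

    # Hypothetical looped system: components A and B back each other up.
    #   A = extA OR (a_ok AND B)
    #   B = extB OR (b_ok AND A)
    # The loop (A depends on B, B on A) is resolved by fixed-point iteration
    # starting from the conservative assumption A = B = False.
    def solve_loop(extA, extB, a_ok, b_ok):
        A = B = False
        while True:
            A_new = extA or (a_ok and B)
            B_new = extB or (b_ok and A)
            if (A_new, B_new) == (A, B):      # monotone iteration -> converges
                return A, B
            A, B = A_new, B_new

    # Exhaustive check over all basic-event combinations, e.g. to build a
    # truth table for the system top event.
    for extA, extB, a_ok, b_ok in product([False, True], repeat=4):
        A, B = solve_loop(extA, extB, a_ok, b_ok)
        print(extA, extB, a_ok, b_ok, "->", A, B)
    ```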

  17. Pre-analytical and analytical aspects affecting clinical reliability of plasma glucose results.

    Science.gov (United States)

    Pasqualetti, Sara; Braga, Federica; Panteghini, Mauro

    2017-07-01

    The measurement of plasma glucose (PG) plays a central role in recognizing disturbances in carbohydrate metabolism, with established decision limits that are globally accepted. This requires that PG results are reliable and unequivocally valid no matter where they are obtained. To control the pre-analytical variability of PG and prevent in vitro glycolysis, the use of citrate as a rapidly effective glycolysis inhibitor has been proposed. However, the commercial availability of several tubes with studies showing different performance has created confusion among users. Moreover, and more importantly, studies have shown that tubes promptly inhibiting glycolysis give PG results that are significantly higher than tubes containing sodium fluoride only, used in the majority of studies generating the current PG cut-points, with a different clinical classification of subjects. From the analytical point of view, to be equivalent among different measuring systems, PG results should be traceable to a recognized higher-order reference via the implementation of an unbroken metrological hierarchy. In doing this, it is important that manufacturers of measuring systems consider the uncertainty accumulated through the different steps of the selected traceability chain. In particular, PG results should fulfil analytical performance specifications defined to fit the intended clinical application. Since PG has tight homeostatic control, its biological variability may be used to define these limits. Alternatively, given the central diagnostic role of the analyte, an outcome model showing the impact of the analytical performance of the test on clinical classifications of subjects can be used. Using these specifications, performance assessment studies employing commutable control materials with values assigned by reference procedure have shown that the quality of PG measurements is often far from desirable and that problems are exacerbated using point-of-care devices.

  18. Analytical approach for confirming the achievement of LMFBR reliability goals

    International Nuclear Information System (INIS)

    Ingram, G.E.; Elerath, J.G.; Wood, A.P.

    1981-01-01

    The approach, recommended by GE-ARSD, for confirming the achievement of LMFBR reliability goals relies upon a comprehensive understanding of the physical and operational characteristics of the system and the environments to which the system will be subjected during its operational life. This kind of understanding is required for an approach based on system hardware testing or analyses, as recommended in this report. However, for a system as complex and expensive as the LMFBR, an approach which relies primarily on system hardware testing would be prohibitive both in cost and time to obtain the required system reliability test information. By using an analytical approach, results of tests (reliability and functional) at a low level within the specific system of interest, as well as results from other similar systems can be used to form the data base for confirming the achievement of the system reliability goals. This data, along with information relating to the design characteristics and operating environments of the specific system, will be used in the assessment of the system's reliability

  19. Nuclear analytical methods for platinum group elements

    International Nuclear Information System (INIS)

    2005-04-01

    Platinum group elements (PGE) are of special interest for analytical research due to their economic importance, their chemical peculiarities as catalysts, their medical applications as anticancer drugs, and their possible detrimental environmental impact as exhaust from automobile catalytic converters. Natural levels of PGE are so low in concentration that most of the current analytical techniques approach their limits of detection. In addition, Ru, Rh, Pd, Re, Os, Ir, and Pt analyses still constitute a challenge in accuracy and precision of quantification in natural matrices. Nuclear analytical techniques, such as neutron activation analysis, X ray fluorescence, or proton-induced X ray emission (PIXE), which are generally considered as reference methods for many analytical problems, are useful as well. However, due to methodological restrictions, they can, in most cases, only be applied after pre-concentration and under special irradiation conditions. This report was prepared following a coordinated research project and a consultants meeting addressing the subject from different viewpoints. The experts involved suggested discussing the issue according to (1) the application, hence the concentration levels encountered, and (2) the method applied for analysis. Each of the different fields of application needs special consideration for sample preparation, PGE pre-concentration, and determination. Additionally, each analytical method requires special attention regarding the sensitivity and sample type. Quality assurance/quality control aspects are considered towards the end of the report. It is intended to provide the reader of this publication with state-of-the-art information on the various aspects of PGE analysis and to advise which technique might be most suitable for a particular analytical problem related to platinum group elements. In particular, many case studies described in detail from the authors' laboratory experience might help to decide which way to go. As in many cases

  20. Structural Reliability Methods

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager; Madsen, H. O.

    Structural reliability methods quantitatively treat the uncertainty of predicting the behaviour and properties of a structure given the uncertain properties of its geometry, materials, and the actions it is supposed to withstand. This book addresses the probabilistic methods for evaluation of structural reliability, including the theoretical basis for these methods. Partial safety factor codes under current practice are briefly introduced and discussed. A probabilistic code format for obtaining a formal reliability evaluation system that catches the most essential features of the nature of the uncertainties and their interplay is then developed, step by step. The concepts presented are illustrated by numerous examples throughout the text.

  1. Examination of fast reactor fuels, FBR analytical quality assurance standards and methods, and analytical methods development: irradiation tests. Progress report, April 1--June 30, 1976, and FY 1976

    International Nuclear Information System (INIS)

    Baker, R.D.

    1976-08-01

    Characterization of unirradiated and irradiated LMFBR fuels by analytical chemistry methods will continue, and additional methods will be modified and mechanized for hot cell application. Macro- and microexaminations will be made on fuel and cladding using the shielded electron microprobe, emission spectrograph, radiochemistry, gamma scanner, mass spectrometers, and other analytical facilities. New capabilities will be developed in gamma scanning, analyses to assess spatial distributions of fuel and fission products, mass spectrometric measurements of burnup and fission gas constituents and other chemical analyses. Microstructural analyses of unirradiated and irradiated materials will continue using optical and electron microscopy and autoradiographic and x-ray techniques. Analytical quality assurance standards tasks are designed to assure the quality of the chemical characterizations necessary to evaluate reactor components relative to specifications. Tasks include: (1) the preparation and distribution of calibration materials and quality control samples for use in quality assurance surveillance programs, (2) the development of and the guidance in the use of quality assurance programs for sampling and analysis, (3) the development of improved methods of analysis, and (4) the preparation of continuously updated analytical method manuals. Reliable analytical methods development for the measurement of burnup, oxygen-to-metal (O/M) ratio, and various gases in irradiated fuels is described

  2. Comparison of Three Analytical Methods for Separation of Mineral and Chelated Fraction from an Adulterated Zn-EDTA Fertilizer

    International Nuclear Information System (INIS)

    Khan, M.S.; Qazi, M.A.; Khan, N.A.; Mian, S.M.; Ahmed, N.; Ahmed, N.

    2013-01-01

    Summary: Different analytical procedures are employed around the world to quantify the chelated portion of a Zn-EDTA fertilizer. The Agriculture Department, Government of the Punjab, follows Shahid's analytical method for this purpose. The method is based on ion chromatography (IC), which separates the mineral zinc (Zn) from an adulterated Zn-EDTA fertilizer sample, i.e. a mixture of mineral and chelated Zn fractions. To assess its effectiveness and suitability, this comparative study analysed, in triplicate, an adulterated sample, a non-adulterated Zn-EDTA standard and Zn-EDTA samples taken from the market, using three methods: Shahid's (IC) analytical method; an atomic absorption spectrophotometric (AAS) method based on precipitating the mineral Zn fraction at high pH with an alkali solution of suitable concentration and analysing the filtrate containing only the chelated fraction; and the Association of Official Analytical Chemists (AOAC) method FM-841. Adulterated Zn-EDTA samples were prepared by mixing a known quantity of mineral Zn with the chelated Zn-EDTA standard. The results showed that Shahid's analytical method and the AAS method both successfully estimated the chelated fraction. The AOAC FM-841 method was not sensitive enough to set a ceiling on the mineral fraction and hence did not furnish reliable results. Shahid's analytical method was selected as being equally effective in producing reliable results for both solid and liquid Zn-EDTA samples; the AAS method was comparable for liquid samples only. (author)

  3. Bayesian methods in reliability

    Science.gov (United States)

    Sander, P.; Badoux, R.

    1991-11-01

    The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
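
    As a minimal sketch of the kind of Bayesian updating covered in such a course, the snippet below applies a conjugate gamma prior to a Poisson event rate (for instance a leak rate) and reports the posterior mean and a credible interval; the prior parameters and observed counts are invented for illustration.

    ```python
    from scipy import stats

    # Gamma(a, b) prior on a failure rate lambda [events per year];
    # values are purely illustrative, e.g. encoded from expert judgment.
    a_prior, b_prior = 2.0, 10.0          # prior mean = a/b = 0.2 per year

    # Observed operating experience: 3 events in 25 years.
    events, exposure = 3, 25.0

    # Conjugate update: posterior is Gamma(a + n_events, b + exposure).
    a_post, b_post = a_prior + events, b_prior + exposure

    posterior = stats.gamma(a_post, scale=1.0 / b_post)
    print(f"posterior mean = {posterior.mean():.3f} per year")
    print(f"90% credible interval = "
          f"({posterior.ppf(0.05):.3f}, {posterior.ppf(0.95):.3f}) per year")
    ```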

  4. An interactive website for analytical method comparison and bias estimation.

    Science.gov (United States)

    Bahar, Burak; Tuncel, Ayse F; Holmes, Earle W; Holmes, Daniel T

    2017-12-01

    Regulatory standards mandate laboratories to perform studies to ensure accuracy and reliability of their test results. Method comparison and bias estimation are important components of these studies. We developed an interactive website for evaluating the relative performance of two analytical methods using R programming language tools. The website can be accessed at https://bahar.shinyapps.io/method_compare/. The site has an easy-to-use interface that allows both copy-pasting and manual entry of data. It also allows selection of a regression model and creation of regression and difference plots. Available regression models include Ordinary Least Squares, Weighted-Ordinary Least Squares, Deming, Weighted-Deming, Passing-Bablok and Passing-Bablok for large datasets. The server processes the data and generates downloadable reports in PDF or HTML format. Our website provides clinical laboratories a practical way to assess the relative performance of two analytical methods. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
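
    One of the regression options listed, Deming regression, has a simple closed form; the sketch below implements it together with a mean-bias estimate, assuming equal error variances for the two methods (lam = 1) and using fabricated paired results rather than anything from the site.

    ```python
    import numpy as np

    def deming(x, y, lam=1.0):
        """Deming regression slope and intercept.
        lam is the ratio of the error variances var(y_err)/var(x_err);
        lam = 1 assumes both methods are equally imprecise."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        xm, ym = x.mean(), y.mean()
        sxx = np.sum((x - xm) ** 2)
        syy = np.sum((y - ym) ** 2)
        sxy = np.sum((x - xm) * (y - ym))
        slope = ((syy - lam * sxx) +
                 np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
        intercept = ym - slope * xm
        return slope, intercept

    # Fabricated paired results: reference method (x) vs. candidate method (y).
    x = np.array([4.1, 5.5, 6.8, 8.0, 9.7, 11.2, 13.5, 15.1])
    y = np.array([4.4, 5.6, 7.1, 8.4, 9.9, 11.8, 13.9, 15.8])

    slope, intercept = deming(x, y)
    mean_bias = np.mean(y - x)   # Bland-Altman style average difference
    print(f"y = {slope:.3f} x + {intercept:.3f}, mean bias = {mean_bias:.3f}")
    ```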

  5. Application of Statistical Methods to Activation Analytical Results near the Limit of Detection

    DEFF Research Database (Denmark)

    Heydorn, Kaj; Wanscher, B.

    1978-01-01

    Reporting actual numbers instead of upper limits for analytical results at or below the detection limit may produce reliable data when these numbers are subjected to appropriate statistical processing. Particularly in radiometric methods, such as activation analysis, where individual standard deviations of analytical results may be estimated, improved discrimination may be based on the Analysis of Precision. Actual experimental results from a study of the concentrations of arsenic in human skin demonstrate the power of this principle.

  6. Big data analytics for the Future Circular Collider reliability and availability studies

    Science.gov (United States)

    Begy, Volodimir; Apollonio, Andrea; Gutleber, Johannes; Martin-Marquez, Manuel; Niemi, Arto; Penttinen, Jussi-Pekka; Rogova, Elena; Romero-Marin, Antonio; Sollander, Peter

    2017-10-01

    Responding to the European Strategy for Particle Physics update 2013, the Future Circular Collider study explores scenarios of circular frontier colliders for the post-LHC era. One branch of the study assesses industrial approaches to model and simulate the reliability and availability of the entire particle collider complex based on the continuous monitoring of CERN’s accelerator complex operation. The modelling is based on an in-depth study of the CERN injector chain and LHC, and is carried out as a cooperative effort with the HL-LHC project. The work so far has revealed that a major challenge is obtaining accelerator monitoring and operational data with sufficient quality, to automate the data quality annotation and calculation of reliability distribution functions for systems, subsystems and components where needed. A flexible data management and analytics environment that permits integrating the heterogeneous data sources, the domain-specific data quality management algorithms and the reliability modelling and simulation suite is a key enabler to complete this accelerator operation study. This paper describes the Big Data infrastructure and analytics ecosystem that has been put in operation at CERN, serving as the foundation on which reliability and availability analysis and simulations can be built. This contribution focuses on data infrastructure and data management aspects and presents case studies chosen for its validation.

  7. An automated method for estimating reliability of grid systems using Bayesian networks

    International Nuclear Information System (INIS)

    Doguc, Ozge; Emmanuel Ramirez-Marquez, Jose

    2012-01-01

    Grid computing has become relevant due to its applications to large-scale resource sharing, wide-area information transfer, and multi-institutional collaboration. In general, in grid computing a service requests the use of a set of resources, available in a grid, to complete certain tasks. Although analysis tools and techniques for these types of systems have been studied, grid reliability analysis is generally computation-intensive due to the complexity of the system. Moreover, conventional reliability models have some common assumptions that cannot be applied to grid systems. Therefore, new analytical methods are needed for effective and accurate assessment of grid reliability. This study presents a new method for estimating grid service reliability which, unlike previous studies, does not require prior knowledge about the grid system structure. Moreover, the proposed method does not rely on any assumptions about the link and node failure rates. The approach uses a data-mining algorithm, K2, to discover the grid system structure from raw historical system data, which allows minimum resource spanning trees (MRST) to be found within the grid, and then uses Bayesian networks (BN) to model the MRST and estimate grid service reliability.
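
    The BN and K2 machinery of the proposed method is beyond a short snippet, but the quantity it targets, the probability that at least one working resource path connects the service to its resources, can be illustrated with a crude Monte Carlo over a toy grid; the topology and the node and link reliabilities below are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy grid: root R needs the resource at node D, reachable via two paths
    # R-A-D and R-B-C-D. Node and link reliabilities are invented.
    node_rel = {"R": 0.999, "A": 0.95, "B": 0.97, "C": 0.96, "D": 0.98}
    link_rel = {("R", "A"): 0.99, ("A", "D"): 0.98,
                ("R", "B"): 0.99, ("B", "C"): 0.97, ("C", "D"): 0.98}
    paths = [["R", "A", "D"], ["R", "B", "C", "D"]]

    def path_up(path, node_state, link_state):
        nodes_ok = all(node_state[n] for n in path)
        links_ok = all(link_state[(a, b)] for a, b in zip(path, path[1:]))
        return nodes_ok and links_ok

    n_trials = 100_000
    ok = 0
    for _ in range(n_trials):
        node_state = {n: rng.random() < p for n, p in node_rel.items()}
        link_state = {l: rng.random() < p for l, p in link_rel.items()}
        ok += any(path_up(p, node_state, link_state) for p in paths)

    print(f"Estimated grid service reliability: {ok / n_trials:.4f}")
    ```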

  8. 7 CFR 94.303 - Analytical methods.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.303 Section 94.303 Agriculture... POULTRY AND EGG PRODUCTS Processed Poultry Products § 94.303 Analytical methods. The analytical methods... latest edition of the Official Methods of Analysis of AOAC INTERNATIONAL, Suite 500, 481 North Frederick...

  9. 7 CFR 98.4 - Analytical methods.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 98.4 Section 98.4 Agriculture....4 Analytical methods. (a) The majority of analytical methods used by the USDA laboratories to perform analyses of meat, meat food products and MRE's are listed as follows: (1) Official Methods of...

  10. Analytical method comparisons for the accurate determination of PCBs in sediments

    Energy Technology Data Exchange (ETDEWEB)

    Numata, M.; Yarita, T.; Aoyagi, Y.; Yamazaki, M.; Takatsu, A. [National Metrology Institute of Japan, Tsukuba (Japan)

    2004-09-15

    The National Metrology Institute of Japan within the National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) has been developing several matrix reference materials, for example sediments, water and biological tissues, for the determination of heavy metals and organometallic compounds. The matrix compositions of these certified reference materials (CRMs) are similar to those of actual samples, which makes them useful for validating analytical procedures. 'Primary methods of measurement' are essential to obtain accurate and SI-traceable certified values for the reference materials, because such methods have the highest quality of measurement. However, inappropriate analytical operations, such as incomplete extraction of analytes or cross-contamination during the analytical procedure, will cause errors in analytical results, even if one of the primary methods, isotope dilution, is utilized. To avoid possible procedural bias in the certification of reference materials, we employ more than two analytical methods which have been optimized beforehand. Because the accurate determination of trace POPs in the environment is important for evaluating their risk, reliable CRMs are required by environmental chemists. Therefore, we have also been preparing matrix CRMs for the determination of POPs. To establish accurate analytical procedures for the certification of POPs, extraction is one of the critical steps, as described above. In general, conventional extraction techniques for the determination of POPs, such as Soxhlet extraction (SOX) and saponification (SAP), have been well characterized and introduced as official methods for environmental analysis. On the other hand, emerging techniques, such as microwave-assisted extraction (MAE), pressurized fluid extraction (PFE) and supercritical fluid extraction (SFE), give higher recovery yields of analytes with relatively short extraction times and small amounts of solvent, by reason of the high

  11. Origin Determination and Differentiation of Gelatin Species of Bovine, Porcine, and Piscine through Analytical Methods

    Directory of Open Access Journals (Sweden)

    Hatice Saadiye Eryılmaz

    2017-06-01

    Full Text Available Gelatin origin determination has been a crucial issue with respect to religious and health concerns. It is necessary to analyze the origin of gelatin with reliable methods to ensure not only consumer choice but also safety and legal requirements such as labeling. Many analytical methods have been developed for the detection and/or quantification of gelatin from different sources, including bovine, porcine and piscine. These analytical methods can be divided into physicochemical, chromatographic, immunochemical, spectroscopic and molecular methods. Moreover, computational methods have in some cases been used subsequently to improve the sensitivity of the analytical methods. Each method has different advantages and limitations owing to its underlying principles, the applied food matrix and the processing conditions of the material. The present review intends to give insight into novel analytical methods and perspectives that have been developed to differentiate porcine, bovine and piscine gelatins and to establish their authenticity. Almost every method can succeed in origin determination; however, sensitivity remains an issue, as some studies fail to achieve sufficient differentiation.

  12. 7 CFR 93.4 - Analytical methods.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 93.4 Section 93.4 Agriculture... PROCESSED FRUITS AND VEGETABLES Citrus Juices and Certain Citrus Products § 93.4 Analytical methods. (a) The majority of analytical methods for citrus products are found in the Official Methods of Analysis of AOAC...

  13. OSS reliability measurement and assessment

    CERN Document Server

    Yamada, Shigeru

    2016-01-01

    This book analyses quantitative open source software (OSS) reliability assessment and its applications, focusing on three major topic areas: the Fundamentals of OSS Quality/Reliability Measurement and Assessment; the Practical Applications of OSS Reliability Modelling; and Recent Developments in OSS Reliability Modelling. Offering an ideal reference guide for graduate students and researchers in reliability for open source software (OSS) and modelling, the book introduces several methods of reliability assessment for OSS including component-oriented reliability analysis based on analytic hierarchy process (AHP), analytic network process (ANP), and non-homogeneous Poisson process (NHPP) models, the stochastic differential equation models and hazard rate models. These measurement and management technologies are essential to producing and maintaining quality/reliable systems using OSS.
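
    One of the NHPP software reliability growth models referred to, the Goel-Okumoto model with mean value function m(t) = a(1 - e^(-bt)), can be fitted to cumulative fault counts as sketched below; the weekly fault data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        """NHPP mean value function: expected cumulative faults by time t."""
        return a * (1.0 - np.exp(-b * t))

    # Synthetic test data: week number vs. cumulative detected faults.
    t = np.arange(1, 13)
    faults = np.array([8, 15, 21, 26, 30, 33, 36, 38, 39, 41, 42, 42])

    (a_hat, b_hat), _ = curve_fit(goel_okumoto, t, faults, p0=[50.0, 0.1])

    residual_faults = a_hat - faults[-1]                      # faults still latent
    intensity_now = a_hat * b_hat * np.exp(-b_hat * t[-1])    # current failure intensity
    print(f"a = {a_hat:.1f} total expected faults, b = {b_hat:.3f}")
    print(f"expected remaining faults ~ {residual_faults:.1f}")
    print(f"failure intensity at t=12 ~ {intensity_now:.2f} faults/week")
    ```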

  14. 7 CFR 94.103 - Analytical methods.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.103 Section 94.103 Agriculture... POULTRY AND EGG PRODUCTS Voluntary Analyses of Egg Products § 94.103 Analytical methods. The analytical methods used by the Science and Technology Division laboratories to perform voluntary analyses for egg...

  15. The application of the analytic hierarchy process (AHP) in uranium mine mining method of the optimal selection

    International Nuclear Information System (INIS)

    Tan Zhongyin; Kuang Zhengping; Qiu Huiyuan

    2014-01-01

    The analytic hierarchy process (AHP) is an analysis method that combines qualitative and quantitative, systematic and hierarchical approaches. In this article, the basic decision theory of the analytic hierarchy process is applied, taking a project in the northern Guangdong region as the research object: a hierarchical analysis model for selecting the optimal in-situ mining method is established and analysed. The results show that the AHP model for mining method selection is reliable, that the optimization results are in conformity with the in-situ mining method actually used, and that the approach has good practicability. (authors)
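
    The core AHP computation behind such a selection (priority weights from a pairwise comparison matrix via the principal eigenvector, plus Saaty's consistency ratio) can be sketched as follows; the 3x3 comparison matrix and the criteria it compares are hypothetical, not taken from the article.

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for three
    # criteria, e.g. ore recovery, dilution, and cost -- illustrative only.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                    # normalised priority vector

    n = A.shape[0]
    lambda_max = eigvals.real[k]
    ci = (lambda_max - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index
    cr = ci / ri                                # consistency ratio (< 0.10 is acceptable)

    print("weights:", np.round(weights, 3))
    print(f"lambda_max = {lambda_max:.3f}, CR = {cr:.3f}")
    ```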

  16. 7 CFR 94.4 - Analytical methods.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 3 2010-01-01 2010-01-01 false Analytical methods. 94.4 Section 94.4 Agriculture... POULTRY AND EGG PRODUCTS Mandatory Analyses of Egg Products § 94.4 Analytical methods. The majority of analytical methods used by the USDA laboratories to perform mandatory analyses for egg products are listed as...

  17. A Comparison of Result Reliability for Investigation of Milk Composition by Alternative Analytical Methods in Czech Republic

    Directory of Open Access Journals (Sweden)

    Oto Hanuš

    2014-01-01

    Full Text Available The reliability of milk analysis results is important for assuring the quality of the food chain. There are several direct and indirect methods for measuring milk composition (fat (F), protein (P), lactose (L) and solids non fat (SNF) content). The goal was to evaluate some reference and routine milk analytical procedures on the basis of their results. The direct reference analyses were: F, fat content (Röse–Gottlieb method); P, crude protein content (Kjeldahl method); L, lactose (monohydrate, polarimetric method); SNF, solids non fat (gravimetric method). F, P, L and SNF were also determined by various indirect methods: – MIR (infrared (IR) technology with optical filters), 7 instruments in 4 labs; – MIR–FT (IR spectroscopy with Fourier transformation), 10 in 6; – ultrasonic method (UM), 3 in 1; – analysis by the blue and red box (BRB), 1 in 1. Ten reference milk samples were used. The coefficient of determination (R2), correlation coefficient (r) and standard deviation of the mean of individual differences (MDsd, for n) were evaluated. All correlations (r; for all indirect and alternative methods and all milk components) were significant (P ≤ 0.001). MIR and MIR–FT (conventional methods) explained a considerably higher proportion of the variability in the reference results than the UM and BRB methods (alternative). All r average values (x minus 1.64 × sd, for a 95% confidence interval) can be used as standards for calibration quality evaluation (MIR, MIR–FT, UM and BRB): – for F 0.997, 0.997, 0.99 and 0.995; – for P 0.986, 0.981, 0.828 and 0.864; – for L 0.968, 0.871, 0.705 and 0.761; – for SNF 0.992, 0.993, 0.911 and 0.872. Similarly MDsd (x plus 1.64 × sd): – for F 0.071, 0.068, 0.132 and 0.101%; – for P 0.051, 0.054, 0.202 and 0.14%; – for L 0.037, 0.074, 0.113 and 0.11%; – for SNF 0.052, 0.068, 0.141 and 0.204.

  18. An integrated approach to human reliability analysis -- decision analytic dynamic reliability model

    International Nuclear Information System (INIS)

    Holmberg, J.; Hukki, K.; Norros, L.; Pulkkinen, U.; Pyy, P.

    1999-01-01

    The reliability of human operators in process control is sensitive to the context. In many contemporary human reliability analysis (HRA) methods, this is not sufficiently taken into account. The aim of this article is to argue that integration between probabilistic and psychological approaches to human reliability should be attempted. This is achieved, first, by adopting methods that adequately reflect the essential features of the process control activity and, secondly, by carrying out an interactive HRA process. Description of the activity context, probabilistic modeling, and psychological analysis form an iterative interdisciplinary sequence of analysis in which the results of one sub-task may be input to another. The analysis of the context is carried out first, with the help of a common set of conceptual tools. The resulting descriptions of the context support the probabilistic modeling, through which new results regarding the probabilistic dynamics can be achieved. These can be incorporated in the context descriptions used as a reference in the psychological analysis of actual performance. The results also provide new knowledge of the constraints on the activity, by providing information on the premises of the operator's actions. Finally, the stochastic marked point process model provides a tool by which psychological methodology may be interpreted and utilized for reliability analysis

  19. AK-SYS: An adaptation of the AK-MCS method for system reliability

    International Nuclear Information System (INIS)

    Fauriat, W.; Gayton, N.

    2014-01-01

    A lot of research work has been proposed over the last two decades to evaluate the probability of failure of a structure involving a very time-consuming mechanical model. Surrogate model approaches based on Kriging, such as the Efficient Global Reliability Analysis (EGRA) or the Active learning and Kriging-based Monte-Carlo Simulation (AK-MCS) methods, are very efficient and each has advantages of its own. EGRA is well suited to evaluating small probabilities, as the surrogate can be used to classify any population. AK-MCS is built in relation to a given population and requires no optimization program for the active learning procedure to be performed. It is therefore easier to implement and more likely to spend computational effort on areas with a significant probability content. When assessing system reliability, analytical approaches and first-order approximation are widely used in the literature. However, in the present paper we rather focus on sampling techniques and, considering the recent adaptation of the EGRA method for systems, a strategy is presented to adapt the AK-MCS method for system reliability. The AK-SYS method, “Active learning and Kriging-based SYStem reliability method”, is presented. Its high efficiency and accuracy are illustrated via various examples
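
    A compact sketch of the AK-MCS-style loop this work builds on is given below: a Kriging surrogate (here scikit-learn's GaussianProcessRegressor) classifies a fixed Monte Carlo population and the U learning function selects the next point to evaluate. The toy limit-state function, stopping threshold and population size are illustrative choices, not the paper's examples.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def g(x):
        """Toy limit-state function (failure when g < 0); illustrative only."""
        return 3.0 + 0.1 * (x[:, 0] - x[:, 1]) ** 2 - (x[:, 0] + x[:, 1]) / np.sqrt(2)

    rng = np.random.default_rng(0)
    population = rng.standard_normal((20_000, 2))   # fixed MC population (std normal)

    # Initial design of experiments
    idx = rng.choice(len(population), 12, replace=False)
    X, y = population[idx], g(population[idx])

    for _ in range(60):                              # active-learning iterations
        gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
        gp.fit(X, y)
        mu, sd = gp.predict(population, return_std=True)
        U = np.abs(mu) / np.maximum(sd, 1e-12)       # U learning function
        if U.min() >= 2.0:                           # usual AK-MCS stopping rule
            break
        x_new = population[np.argmin(U)]             # most ambiguous point
        X = np.vstack([X, x_new])
        y = np.append(y, g(x_new[None, :]))

    pf = np.mean(mu < 0.0)                           # surrogate-based classification
    print(f"Estimated pf ~ {pf:.4f} after {len(X)} limit-state evaluations")
    ```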

  20. FORECASTING PILE SETTLEMENT ON CLAYSTONE USING NUMERICAL AND ANALYTICAL METHODS

    Directory of Open Access Journals (Sweden)

    Ponomarev Andrey Budimirovich

    2016-06-01

    Full Text Available The article reviews the problem of designing pile foundations on claystones. The purpose of this paper is a comparative analysis of analytical and numerical methods for forecasting the settlement of piles on claystones. The following tasks were addressed during the study: 1) existing research on pile settlement is analyzed; 2) the characteristics of the experimental studies and the parameters for numerical modeling are presented, and the methods of field testing of single piles are described; 3) the settlement of a single pile is calculated using numerical methods in the software package Plaxis 2D and the analytical method according to the requirements of SP 24.13330.2011; 4) the experimental data are compared with the results of the analytical and numerical calculations; 5) based on these results, recommendations for forecasting pile settlement on claystone are presented. Much attention is paid to calculating pile settlement with account for the affected zones of soil around the pile and to comparison with the results of field experiments. Based on the obtained results, for predicting the settlement of a single pile on claystone the authors recommend using the analytical method of SP 24.13330.2011 with account for the affected zones of soil around the driven pile. When forecasting the settlement of a single pile on claystone by numerical methods in Plaxis 2D, the authors recommend using the Hardening Soil model, again considering the affected zones around the driven pile. The analyses of the results and calculations are presented for examination and verification; it is therefore necessary to continue the research work on deep foundations at other experimental sites to improve the reliability of pile foundation settlement calculations. The work is of interest for geotechnical engineers engaged in the research, design and construction of pile foundations.

  1. Comparison of nuclear analytical methods with competitive methods

    International Nuclear Information System (INIS)

    1987-10-01

    The use of nuclear analytical techniques, especially neutron activation analysis, already have a 50 year old history. Today several sensitive and accurate, non-nuclear trace element analytical techniques are available and new methods are continuously developed. The IAEA is supporting the development of nuclear analytical laboratories in its Member States. In order to be able to advise the developing countries which methods to use in different applications, it is important to know the present status and development trends of nuclear analytical methods, what are their benefits, drawbacks and recommended fields of application, compared with other, non-nuclear techniques. In order to get an answer to these questions the IAEA convened this Advisory Group Meeting. This volume is the outcome of the presentations and discussions of the meeting. A separate abstract was prepared for each of the 21 papers. Refs, figs, tabs

  2. Reliability of stellar inclination estimated from asteroseismology: analytical criteria, mock simulations and Kepler data analysis

    Science.gov (United States)

    Kamiaka, Shoya; Benomar, Othman; Suto, Yasushi

    2018-05-01

    Advances in the asteroseismology of solar-like stars now provide a unique method to estimate the stellar inclination i⋆. This enables evaluation of the spin-orbit angle of transiting planetary systems in a fashion complementary to the Rossiter-McLaughlin effect, a well-established method to estimate the projected spin-orbit angle λ. Although the asteroseismic method has been broadly applied to the Kepler data, its reliability has yet to be assessed intensively. In this work, we evaluate the accuracy of i⋆ from asteroseismology of solar-like stars using 3000 simulated power spectra. We find that the low signal-to-noise ratio of the power spectra induces a systematic under-estimate (over-estimate) bias for stars with high (low) inclinations. We derive analytical criteria for a reliable asteroseismic estimate, which indicate that reliable measurements are possible in the range 20° ≲ i⋆ ≲ 80° only for stars with a high signal-to-noise ratio. We also analyse and measure the stellar inclination of 94 Kepler main-sequence solar-like stars, among which 33 are planetary hosts. According to our reliability criteria, a third of them (9 with planets, 22 without) have accurate stellar inclinations. Comparison of our asteroseismic estimate of v sin i⋆ against spectroscopic measurements indicates that the latter suffers from a large uncertainty, possibly due to the modeling of macro-turbulence, especially for stars with projected rotation speed v sin i⋆ ≲ 5 km/s. This reinforces earlier claims that the stellar inclination estimated from the combination of spectroscopic measurements and photometric variation for slowly rotating stars needs to be interpreted with caution.

  3. Life cycle management of analytical methods.

    Science.gov (United States)

    Parr, Maria Kristina; Schmidt, Alexander H

    2018-01-05

    In modern process management, the life cycle concept is gaining more and more importance. It focusses on the total costs of the process, from investment through operation to final retirement. In recent years there has also been increasing interest in this concept for analytical procedures. The life cycle of an analytical method consists of design, development, validation (including instrumental qualification, continuous method performance verification and method transfer) and, finally, retirement of the method. It appears that regulatory bodies have also increased their awareness of life cycle management for analytical methods. Thus, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), as well as the United States Pharmacopeial Forum, are discussing the introduction of new guidelines that include life cycle management of analytical methods. The US Pharmacopeia (USP) Validation and Verification expert panel has already proposed a new General Chapter 〈1220〉 "The Analytical Procedure Lifecycle" for integration into the USP. Furthermore, growing interest in life cycle management is also seen in the non-regulated environment. Quality-by-design based method development results in increased method robustness. Thereby, less effort is needed for method performance verification and post-approval changes, and the risk of method-related out-of-specification results is minimized. This strongly contributes to reduced costs of the method during its life cycle. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. 40 CFR 141.704 - Analytical methods.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.704 Section... Monitoring Requirements § 141.704 Analytical methods. (a) Cryptosporidium. Systems must analyze for Cryptosporidium using Method 1623: Cryptosporidium and Giardia in Water by Filtration/IMS/FA, 2005, United States...

  5. Analytical quality control in environmental analysis - Recent results and future trends of the IAEA's analytical quality control programme

    Energy Technology Data Exchange (ETDEWEB)

    Suschny, O; Heinonen, J

    1973-12-01

    The significance of analytical results depends critically on the degree of their reliability; an assessment of this reliability is indispensable if the results are to have any meaning at all. Environmental radionuclide analysis is a relatively new analytical field in which new methods are continuously being developed and into which many new laboratories have entered during the last ten to fifteen years. The scarcity of routine methods and the lack of experience of the new laboratories have made the need for the assessment of the reliability of results particularly urgent in this field. The IAEA, since 1962, has provided assistance to its member states by making available to their laboratories analytical quality control services in the form of standard samples, reference materials and the organization of analytical intercomparisons. The scope of this programme has increased over the years and now includes, in addition to environmental radionuclides, non-radioactive environmental contaminants which may be analysed by nuclear methods, materials for forensic neutron activation analysis, bioassay materials and nuclear fuel. The results obtained in recent intercomparisons demonstrate the continued need for these services. (author)

  6. 40 CFR 141.89 - Analytical methods.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods. 141.89 Section 141...) NATIONAL PRIMARY DRINKING WATER REGULATIONS Control of Lead and Copper § 141.89 Analytical methods. (a... shall be conducted with the methods in § 141.23(k)(1). (1) Analyses for alkalinity, calcium...

  7. A Hybrid Approach for Reliability Analysis Based on Analytic Hierarchy Process and Bayesian Network

    International Nuclear Information System (INIS)

    Zubair, Muhammad

    2014-01-01

    Using the analytic hierarchy process (AHP) and a Bayesian network (BN), the present research examines the technical and non-technical issues behind nuclear accidents. The study showed that technical faults were one major cause of these accidents. From another point of view, it becomes clear that human factors such as dishonesty, insufficient training, and selfishness also play a key role in causing these accidents. In this study, a hybrid approach for reliability analysis based on AHP and BN has been developed to increase nuclear power plant (NPP) safety. Using AHP, the best alternatives for improving safety, design and operation, and for allocating budget across all technical and non-technical factors related to nuclear safety, have been investigated. We use a special BN structure based on the AHP method. The graphs of the BN and the probabilities associated with its nodes are designed to translate the experts' knowledge about the selection of the best alternative. The results show that improvement in the regulatory authorities will decrease failure probabilities and increase safety and reliability in the industrial area.

  8. Nuclear analytical methods in the life sciences

    NARCIS (Netherlands)

    de Goeij, J.J.M.

    1994-01-01

    A survey is given of various nuclear analytical methods. The type of analytical information obtainable and advantageous features for application in the life sciences are briefly indicated. These features are: physically different basis of the analytical method, isotopic rather than elemental

  9. Review of Quantitative Software Reliability Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chu, T.L.; Yue, M.; Martinez-Guridi, M.; Lehner, J.

    2010-09-17

    The current U.S. Nuclear Regulatory Commission (NRC) licensing process for digital systems rests on deterministic engineering criteria. In its 1995 probabilistic risk assessment (PRA) policy statement, the Commission encouraged the use of PRA technology in all regulatory matters to the extent supported by the state-of-the-art in PRA methods and data. Although many activities have been completed in the area of risk-informed regulation, the risk-informed analysis process for digital systems has not yet been satisfactorily developed. Since digital instrumentation and control (I&C) systems are expected to play an increasingly important role in nuclear power plant (NPP) safety, the NRC established a digital system research plan that defines a coherent set of research programs to support its regulatory needs. One of the research programs included in the NRC's digital system research plan addresses risk assessment methods and data for digital systems. Digital I&C systems have some unique characteristics, such as using software, and may have different failure causes and/or modes than analog I&C systems; hence, their incorporation into NPP PRAs entails special challenges. The objective of the NRC's digital system risk research is to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems into NPP PRAs, and (2) using information on the risks of digital systems to support the NRC's risk-informed licensing and oversight activities. For several years, Brookhaven National Laboratory (BNL) has worked on NRC projects to investigate methods and tools for the probabilistic modeling of digital systems, as documented mainly in NUREG/CR-6962 and NUREG/CR-6997. However, the scope of this research principally focused on hardware failures, with limited reviews of software failure experience and software reliability methods. NRC also sponsored research at the Ohio State University investigating the modeling of

  10. Evaluating the reliability of multi-body mechanisms: A method considering the uncertainties of dynamic performance

    International Nuclear Information System (INIS)

    Wu, Jianing; Yan, Shaoze; Zuo, Ming J.

    2016-01-01

    Mechanism reliability is defined as the ability of a certain mechanism to maintain output accuracy under specified conditions. Mechanism reliability is generally assessed by the classical direct probability method (DPM) derived from the first order second moment (FOSM) method. The DPM relies strongly on the analytical form of the dynamic solution so it is not applicable to multi-body mechanisms that have only numerical solutions. In this paper, an indirect probability model (IPM) is proposed for mechanism reliability evaluation of multi-body mechanisms. IPM combines the dynamic equation, degradation function and Kaplan–Meier estimator to evaluate mechanism reliability comprehensively. Furthermore, to reduce the amount of computation in practical applications, the IPM is simplified into the indirect probability step model (IPSM). A case study of a crank–slider mechanism with clearance is investigated. Results show that relative errors between the theoretical and experimental results of mechanism reliability are less than 5%, demonstrating the effectiveness of the proposed method. - Highlights: • An indirect probability model (IPM) is proposed for mechanism reliability evaluation. • The dynamic equation, degradation function and Kaplan–Meier estimator are used. • Then the simplified form of indirect probability model is proposed. • The experimental results agree well with the predicted results.
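    Because the IPM couples the dynamic equation and degradation function with the Kaplan–Meier estimator, a minimal sketch of that last building block may help. The code below is the standard Kaplan–Meier reliability estimate, not the authors' IPM, and the failure/censoring data are hypothetical.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier reliability (survival) estimate.
    times  : observed times (failure or right-censoring)
    events : 1 = failure observed, 0 = right-censored
    """
    order = np.lexsort((-np.asarray(events), times))   # at ties, failures first
    t = np.asarray(times, float)[order]
    e = np.asarray(events, int)[order]
    at_risk, R, curve = len(t), 1.0, []
    for ti, ei in zip(t, e):
        if ei == 1:
            R *= (at_risk - 1) / at_risk               # step down at each failure
        curve.append((ti, R))
        at_risk -= 1
    return curve

# Hypothetical cycles until output accuracy is lost for a clearance joint
for time, rel in kaplan_meier([1200, 1500, 1500, 1800, 2100], [1, 0, 1, 1, 1]):
    print(time, round(rel, 3))
```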

  11. Nuclear analytical methods: Past, present and future

    International Nuclear Information System (INIS)

    Becker, D.A.

    1996-01-01

    The development of nuclear analytical methods as an analytical tool began in 1936 with the publication of the first paper on neutron activation analysis (NAA). This year, 1996, marks the 60th anniversary of that event. This paper attempts to look back at the nuclear analytical methods of the past, to look around and to see where the technology is right now, and finally, to look ahead to try and see where nuclear methods as an analytical technique (or as a group of analytical techniques) will be going in the future. The general areas which the author focuses on are: neutron activation analysis; prompt gamma neutron activation analysis (PGNAA); photon activation analysis (PAA); charged-particle activation analysis (CPAA)

  12. Selection of analytical methods for mixed waste analysis at the Hanford Site

    International Nuclear Information System (INIS)

    Morant, P.M.

    1994-09-01

    This document describes the process that the US Department of Energy (DOE), Richland Operations Office (RL) and contractor laboratories use to select appropriate or develop new or modified analytical methods. These methods are needed to provide reliable mixed waste characterization data that meet project-specific quality assurance (QA) requirements while also meeting health and safety standards for handling radioactive materials. This process will provide the technical basis for DOE's analysis of mixed waste and support requests for regulatory approval of these new methods when they are used to satisfy the regulatory requirements of the Hanford Federal Facility Agreement and Consent Order (Tri-party Agreement) (Ecology et al. 1992)

  13. Reactor Section standard analytical methods. Part 1

    Energy Technology Data Exchange (ETDEWEB)

    Sowden, D.

    1954-07-01

    The Standard Analytical Methods manual was prepared for the purpose of consolidating and standardizing all current analytical methods and procedures used in the Reactor Section for routine chemical analyses. All procedures are established in accordance with accepted practice and the general analytical methods specified by the Engineering Department. These procedures are specifically adapted to the requirements of the water treatment process and related operations. The methods included in this manual are organized alphabetically within the following five sections which correspond to the various phases of the analytical control program in which these analyses are to be used: water analyses, essential material analyses, cotton plug analyses, boiler water analyses, and miscellaneous control analyses.

  14. Supercritical fluid analytical methods

    International Nuclear Information System (INIS)

    Smith, R.D.; Kalinoski, H.T.; Wright, B.W.; Udseth, H.R.

    1988-01-01

    Supercritical fluids are providing the basis for new and improved methods across a range of analytical technologies. New methods are being developed to allow the detection and measurement of compounds that are incompatible with conventional analytical methodologies. Characterization of process and effluent streams for synfuel plants requires instruments capable of detecting and measuring high-molecular-weight compounds, polar compounds, or other materials that are generally difficult to analyze. The purpose of this program is to develop and apply new supercritical fluid techniques for extraction, separation, and analysis. These new technologies will be applied to previously intractable synfuel process materials and to complex mixtures resulting from their interaction with environmental and biological systems

  15. Comparison of sample preparation methods for reliable plutonium and neptunium urinalysis using automatic extraction chromatography

    DEFF Research Database (Denmark)

    Qiao, Jixin; Xu, Yihong; Hou, Xiaolin

    2014-01-01

    This paper describes improvement and comparison of analytical methods for simultaneous determination of trace-level plutonium and neptunium in urine samples by inductively coupled plasma mass spectrometry (ICP-MS). Four sample pre-concentration techniques, including calcium phosphate, iron......), it endows urinalysis methods with better reliability and repeatability compared with co-precipitation techniques. In view of the applicability of different pre-concentration techniques proposed previously in the literature, the main challenge behind relevant method development is pointed to be the release...

  16. Waste minimization in analytical methods

    International Nuclear Information System (INIS)

    Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.

    1995-01-01

    The US Department of Energy (DOE) will require a large number of waste characterizations over a multi-year period to accomplish the Department's goals in environmental restoration and waste management. Estimates vary, but two million analyses annually are expected. The waste generated by the analytical procedures used for characterizations is a significant source of new DOE waste. Success in reducing the volume of secondary waste and the costs of handling this waste would significantly decrease the overall cost of this DOE program. Selection of appropriate analytical methods depends on the intended use of the resultant data. It is not always necessary to use a high-powered analytical method, typically at higher cost, to obtain data needed to make decisions about waste management. Indeed, for samples taken from some heterogeneous systems, the meaning of high accuracy becomes clouded if the data generated are intended to measure a property of this system. Among the factors to be considered in selecting the analytical method are the lower limit of detection, accuracy, turnaround time, cost, reproducibility (precision), interferences, and simplicity. Occasionally, there must be tradeoffs among these factors to achieve the multiple goals of a characterization program. The purpose of the work described here is to add waste minimization to the list of characteristics to be considered. In this paper the authors present results of modifying analytical methods for waste characterization to reduce both the cost of analysis and volume of secondary wastes. Although tradeoffs may be required to minimize waste while still generating data of acceptable quality for the decision-making process, they have data demonstrating that wastes can be reduced in some cases without sacrificing accuracy or precision

  17. Evaluation of the quality of results obtained in institutions participating in interlaboratory experiments and of the reliability characteristics of the analytical methods used on the basis of certification of standard soil samples

    Energy Technology Data Exchange (ETDEWEB)

    Parshin, A.K.; Obol'yaninova, V.G.; Sul'dina, N.P.

    1986-08-20

    Rapid monitoring of the level of pollution of the environment and, especially, of soils necessitates preparation of standard samples (SS) close in properties and material composition to the objects to be analyzed. During 1978-1982 four sets (three types of samples in each) of State Standard Samples of different soils were developed: soddy-podzolic sandy-loamy, typical chernozem, krasnozem, and calcareous sierozem. The certification studies of the SS of the soils were carried out in accordance with the classical scheme of interlab experiment (ILE). More than 100 institutions were involved in the ILE and the total number of independent analytical results was of the order of 10⁴. With such a volume of analytical information at their disposal they were able to find some general characteristics intrinsic to certification studies, to assess the quality of work of the ILE participants with due regard for their specialization, and the reliability characteristics of the analytical methods used.

  18. An Evaluation Method of Equipment Reliability Configuration Management

    Science.gov (United States)

    Wang, Wei; Feng, Weijia; Zhang, Wei; Li, Yuan

    2018-01-01

    At present, many equipment development companies are aware of the great significance of reliability in equipment development. However, due to the lack of an effective management evaluation method, it is very difficult for an equipment development company to manage its own reliability work. An evaluation method for equipment reliability configuration management is used to determine the reliability management capabilities of an equipment development company. Reliability is not only designed in, but also achieved through management. This paper evaluates reliability management capabilities using a reliability configuration capability maturity model (RCM-CMM) evaluation method.

  19. An accurate and efficient reliability-based design optimization using the second order reliability method and improved stability transformation method

    Science.gov (United States)

    Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo

    2018-05-01

    The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
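    The symmetric rank-one (SR1) update mentioned above builds a Hessian approximation of the performance function from successive gradients, avoiding explicit second derivatives. The snippet below is a generic SR1 update with the usual skip rule, not the authors' full RBDO loop; the quadratic test function is an assumption used only to exercise the update.

```python
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """Symmetric rank-one update of a Hessian approximation B.
    s : step in the variables, x_new - x_old
    y : change in the gradient, g_new - g_old
    """
    r = y - B @ s
    denom = r @ s
    if abs(denom) < eps * np.linalg.norm(r) * np.linalg.norm(s):
        return B                      # skip the update when it is ill-conditioned
    return B + np.outer(r, r) / denom

# Exercise the update on a quadratic with true Hessian diag(2, 6)
grad = lambda x: np.array([2.0 * x[0], 6.0 * x[1]])
B = np.eye(2)
x_old, x_new = np.array([1.0, 1.0]), np.array([0.5, 0.2])
B = sr1_update(B, x_new - x_old, grad(x_new) - grad(x_old))
print(B)
```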

  20. HTGR analytical methods and design verification

    International Nuclear Information System (INIS)

    Neylan, A.J.; Northup, T.E.

    1982-05-01

    Analytical methods for the high-temperature gas-cooled reactor (HTGR) include development, update, verification, documentation, and maintenance of all computer codes for HTGR design and analysis. This paper presents selected nuclear, structural mechanics, seismic, and systems analytical methods related to the HTGR core. This paper also reviews design verification tests in the reactor core, reactor internals, steam generator, and thermal barrier

  1. Analytical Methods INAA and PIXE Applied to Characterization of Airborne Particulate Matter in Bandung, Indonesia

    Directory of Open Access Journals (Sweden)

    D.D. Lestiani

    2011-08-01

    Full Text Available Urbanization and industrial growth have deteriorated air quality and are major causes of air pollution. Air pollution through fine and ultra-fine particles is a serious threat to human health. The sources of air pollution must be known quantitatively through elemental characterization in order to design appropriate air quality management. Suitable methods for the analysis of airborne particulate matter, such as nuclear analytical techniques, are greatly needed to solve the air pollution problem. The objectives of this study are to apply nuclear analytical techniques to airborne particulate samples collected in Bandung, to assess the accuracy, and to ensure the reliability of analytical results through the comparison of instrumental neutron activation analysis (INAA) and particle-induced X-ray emission (PIXE). Particle samples in the PM2.5 and PM2.5-10 ranges were collected in Bandung twice a week for 24 hours using a Gent stacked filter unit. The results showed that there was generally a systematic difference between INAA and PIXE results, with the values obtained by PIXE lower than those determined by INAA. INAA is generally more sensitive and reliable than PIXE for Na, Al, Cl, V, Mn, Fe, Br and I, and therefore INAA data are preferred, while PIXE usually gives better precision than INAA for Mg, K, Ca, Ti and Zn. Nevertheless, both techniques provide reliable results and complement each other. INAA is still a prospective method, while PIXE, with its special capabilities, is a promising tool that can complement the limitations of NAA in the determination of lead, sulphur and silicon. The combination of INAA and PIXE can advantageously be used in air pollution studies to extend the number of important elements measured as key elements in source apportionment.

  2. Analytical Methods INAA and PIXE Applied to Characterization of Airborne Particulate Matter in Bandung, Indonesia

    International Nuclear Information System (INIS)

    Lestiani, D.D.; Santoso, M.

    2011-01-01

    Urbanization and industrial growth have deteriorated air quality and are major causes of air pollution. Air pollution through fine and ultra-fine particles is a serious threat to human health. The sources of air pollution must be known quantitatively through elemental characterization in order to design appropriate air quality management. Suitable methods for the analysis of airborne particulate matter, such as nuclear analytical techniques, are greatly needed to solve the air pollution problem. The objectives of this study are to apply nuclear analytical techniques to airborne particulate samples collected in Bandung, to assess the accuracy, and to ensure the reliability of analytical results through the comparison of instrumental neutron activation analysis (INAA) and particle-induced X-ray emission (PIXE). Particle samples in the PM2.5 and PM2.5-10 ranges were collected in Bandung twice a week for 24 hours using a Gent stacked filter unit. The results showed that there was generally a systematic difference between INAA and PIXE results, with the values obtained by PIXE lower than those determined by INAA. INAA is generally more sensitive and reliable than PIXE for Na, Al, Cl, V, Mn, Fe, Br and I, and therefore INAA data are preferred, while PIXE usually gives better precision than INAA for Mg, K, Ca, Ti and Zn. Nevertheless, both techniques provide reliable results and complement each other. INAA is still a prospective method, while PIXE, with its special capabilities, is a promising tool that can complement the limitations of NAA in the determination of lead, sulphur and silicon. The combination of INAA and PIXE can advantageously be used in air pollution studies to extend the number of important elements measured as key elements in source apportionment. (author)

  3. Method for effective usage of Google Analytics tools

    Directory of Open Access Journals (Sweden)

    Ирина Николаевна Егорова

    2016-01-01

    Full Text Available Modern Google Analytics tools have been investigated with respect to identifying effective channels for attracting users and detecting bottlenecks. The investigation made it possible to propose a modern method for the effective use of Google Analytics tools. The method is based on the analysis of the main traffic indicators, as well as on a deep analysis of goals and their consecutive tweaking. The method allows website conversion to be increased and might be useful for SEO and Web analytics specialists

  4. A platform analytical quality by design (AQbD) approach for multiple UHPLC-UV and UHPLC-MS methods development for protein analysis.

    Science.gov (United States)

    Kochling, Jianmei; Wu, Wei; Hua, Yimin; Guan, Qian; Castaneda-Merced, Juan

    2016-06-05

    A platform analytical quality by design (AQbD) approach for method development is presented in this paper. This approach is not limited to method development following the same logical AQbD process; it is also exploited across a range of applications in method development with commonality in equipment and procedures. As demonstrated by the development process of three methods, the systematic approach strategy offers a thorough understanding of the scientific strength of the methods. The knowledge gained from the UHPLC-UV peptide mapping method can be easily transferred to the UHPLC-MS oxidation method and the UHPLC-UV C-terminal heterogeneity method of the same protein. In addition, the platform AQbD method development strategy ensures that method robustness is built in during development. In early phases, a good method can generate reliable data for product development, allowing confident decision making. Methods generated following the AQbD approach have great potential for avoiding extensive post-approval analytical method changes. In the commercial phase, high-quality data ensure timely data release, reduced regulatory risk, and lowered lab operational cost. Moreover, the large, reliable database and knowledge gained during AQbD method development provide strong justification during regulatory filing for the selection of important parameters or parameter change needs for method validation, and help to justify the removal of unnecessary tests used for product specifications. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. HUMAN RELIABILITY ANALYSIS DENGAN PENDEKATAN COGNITIVE RELIABILITY AND ERROR ANALYSIS METHOD (CREAM

    Directory of Open Access Journals (Sweden)

    Zahirah Alifia Maulida

    2015-01-01

    Full Text Available Workplace accidents in the grinding and welding areas have ranked highest over the last five years at PT. X. These accidents were caused by human error, which occurs under the influence of the physical and non-physical working environment. This study uses scenarios to predict and reduce the likelihood of human error with the CREAM (Cognitive Reliability and Error Analysis Method) approach. CREAM is a human reliability analysis method used to obtain the Cognitive Failure Probability (CFP), which can be determined in two ways: the basic method and the extended method. The basic method yields only a general failure probability, whereas the extended method yields a CFP for each task. The results show that the factors influencing the occurrence of error in grinding and welding work are the adequacy of the organization, the adequacy of the Man Machine Interface (MMI) and operational support, the availability of procedures/planning, and the adequacy of training and experience. The cognitive aspect with the highest error value in grinding work is planning, with a CFP of 0.3, while in welding work it is the cognitive aspect of execution, with a CFP of 0.18. To reduce cognitive error in grinding and welding work, the recommendations given are routine training, more detailed work instructions, and familiarisation with the equipment. Keywords: CREAM (cognitive reliability and error analysis method), HRA (human reliability analysis), cognitive error

  6. A Method of Nuclear Software Reliability Estimation

    International Nuclear Information System (INIS)

    Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol

    2011-01-01

    A method for estimating software reliability for nuclear safety software is proposed. The method is based on the software reliability growth model (SRGM), in which the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects based on a small amount of software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is identified that this method is capable of accurately estimating the remaining number of software defects of the on-demand type that directly affect safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining the software reliability is proposed
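    As a concrete illustration of an SRGM of the non-homogeneous Poisson process type, the sketch below fits the Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)) to a handful of hypothetical failure times by maximum likelihood. This is one common NHPP form fitted in a frequentist way, not necessarily the modeling scheme or the Bayesian treatment used in the cited work.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, failure_times):
    """Negative NHPP log-likelihood for the Goel-Okumoto model."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    t = np.asarray(failure_times, float)
    T = t.max()
    # log L = sum_i log(a*b*exp(-b*t_i)) - a*(1 - exp(-b*T))
    return -(np.sum(np.log(a * b) - b * t) - a * (1.0 - np.exp(-b * T)))

times = [5, 11, 18, 30, 42, 60, 85, 120]        # hypothetical failure times (h)
res = minimize(neg_log_likelihood, x0=[10.0, 0.01], args=(times,),
               method="Nelder-Mead")
a_hat, b_hat = res.x
print("expected total defects:", round(a_hat, 1),
      "estimated remaining:", round(a_hat - len(times), 1))
```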

  7. Nonlinear ordinary differential equations analytical approximation and numerical methods

    CERN Document Server

    Hermann, Martin

    2016-01-01

    The book discusses the solutions to nonlinear ordinary differential equations (ODEs) using analytical and numerical approximation methods. Recently, analytical approximation methods have been largely used in solving linear and nonlinear lower-order ODEs. It also discusses using these methods to solve some strongly nonlinear ODEs. There are two chapters devoted to solving nonlinear ODEs using numerical methods, as in practice high-dimensional systems of nonlinear ODEs that cannot be solved by analytical approximate methods are common. Moreover, it studies analytical and numerical techniques for the treatment of parameter-dependent ODEs. The book explains various methods for solving nonlinear-oscillator and structural-system problems, including the energy balance method, harmonic balance method, amplitude frequency formulation, variational iteration method, homotopy perturbation method, iteration perturbation method, homotopy analysis method, simple and multiple shooting method, and the nonlinear stabilized march...

  8. Workshop on Analytical Methods in Statistics

    CERN Document Server

    Jurečková, Jana; Maciak, Matúš; Pešta, Michal

    2017-01-01

    This volume collects authoritative contributions on analytical methods and mathematical statistics. The methods presented include resampling techniques; the minimization of divergence; estimation theory and regression, eventually under shape or other constraints or long memory; and iterative approximations when the optimal solution is difficult to achieve. It also investigates probability distributions with respect to their stability, heavy-tailness, Fisher information and other aspects, both asymptotically and non-asymptotically. The book not only presents the latest mathematical and statistical methods and their extensions, but also offers solutions to real-world problems including option pricing. The selected, peer-reviewed contributions were originally presented at the workshop on Analytical Methods in Statistics, AMISTAT 2015, held in Prague, Czech Republic, November 10-13, 2015.

  9. 40 CFR 141.25 - Analytical methods for radioactivity.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Analytical methods for radioactivity... § 141.25 Analytical methods for radioactivity. (a) Analysis for the following contaminants shall be conducted to determine compliance with § 141.66 (radioactivity) in accordance with the methods in the...

  10. Improving Wind Turbine Drivetrain Reliability Using a Combined Experimental, Computational, and Analytical Approach

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Y.; van Dam, J.; Bergua, R.; Jove, J.; Campbell, J.

    2015-03-01

    Nontorque loads induced by the wind turbine rotor overhang weight and aerodynamic forces can greatly affect drivetrain loads and responses. If not addressed properly, these loads can result in a decrease in gearbox component life. This work uses analytical modeling, computational modeling, and experimental data to evaluate a unique drivetrain design that minimizes the effects of nontorque loads on gearbox reliability: the Pure Torque(R) drivetrain developed by Alstom. The drivetrain has a hub-support configuration that transmits nontorque loads directly into the tower rather than through the gearbox as in other design approaches. An analytical model of Alstom's Pure Torque drivetrain provides insight into the relationships among turbine component weights, aerodynamic forces, and the resulting drivetrain loads. Main shaft bending loads are orders of magnitude lower than the rated torque and are hardly affected by wind conditions and turbine operations.

  11. Analytical methods used at model facility

    International Nuclear Information System (INIS)

    Wing, N.S.

    1984-01-01

    A description of analytical methods used at the model LEU Fuel Fabrication Facility is presented. The methods include gravimetric uranium analysis, isotopic analysis, fluorimetric analysis, and emission spectroscopy

  12. Analytical methods for heat transfer and fluid flow problems

    CERN Document Server

    Weigand, Bernhard

    2015-01-01

    This book describes useful analytical methods by applying them to real-world problems rather than solving the usual over-simplified classroom problems. The book demonstrates the applicability of analytical methods even for complex problems and guides the reader to a more intuitive understanding of approaches and solutions. Although the solution of Partial Differential Equations by numerical methods is the standard practice in industries, analytical methods are still important for the critical assessment of results derived from advanced computer simulations and the improvement of the underlying numerical techniques. Literature devoted to analytical methods, however, often focuses on theoretical and mathematical aspects and is therefore useless to most engineers. Analytical Methods for Heat Transfer and Fluid Flow Problems addresses engineers and engineering students. The second edition has been updated; the chapters on non-linear problems and on axial heat conduction problems were extended. And worked out exam...

  13. A new, rapid and reliable method for the determination of reduced sulphur (S2-) species in natural water discharges

    International Nuclear Information System (INIS)

    Montegrossi, Giordano; Tassi, Franco; Vaselli, Orlando; Bidini, Eva; Minissale, Angelo

    2006-01-01

    The determination of reduced S species in natural waters is particularly difficult due to their high instability and chemical and physical interferences in the current analytical methods. In this paper a new, rapid and reliable analytical procedure is presented, named the Cd-IC method, for their determination as ΣS²⁻ via oxidation to SO₄²⁻ after chemical trapping with an ammonia-cadmium solution that allows precipitation of all the reduced S species as CdS. The S²⁻-derived SO₄²⁻ is analysed by ion-chromatography. The main advantages of this method are: low cost, high stability of the CdS precipitate, absence of interferences, low detection limit (0.01 mg/L as SO₄ for 10 mL of water) and low analytical error (about 5%). The proposed method has been applied to more than 100 water samples from different natural systems (water discharges and cold wells from volcanic and geothermal areas, crater lakes) in central-southern Italy
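    Because the reduced sulphur is measured as sulphate after oxidation, reporting the result as a sulphide-equivalent concentration is a simple stoichiometric back-calculation. Assuming the result is expressed as elemental S (the standard molar masses below are not a detail stated in the abstract), the conversion is:

```latex
\Sigma S^{2-}\ (\mathrm{mg/L})
  = \frac{M_{\mathrm{S}}}{M_{\mathrm{SO_4}}}\,[\mathrm{SO_4^{2-}}]_{\mathrm{IC}}
  = \frac{32.06}{96.06}\,[\mathrm{SO_4^{2-}}]_{\mathrm{IC}}
  \approx 0.334\,[\mathrm{SO_4^{2-}}]_{\mathrm{IC}}
```

    On this basis the 0.01 mg/L (as SO₄) detection limit corresponds to roughly 0.003 mg/L expressed as sulphide.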

  14. Basic Concepts in Classical Test Theory: Tests Aren't Reliable, the Nature of Alpha, and Reliability Generalization as a Meta-analytic Method.

    Science.gov (United States)

    Helms, LuAnn Sherbeck

    This paper discusses the fact that reliability is about scores and not tests and how reliability limits effect sizes. The paper also explores the classical reliability coefficients of stability, equivalence, and internal consistency. Stability is concerned with how stable test scores will be over time, while equivalence addresses the relationship…
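    Since coefficient alpha is the internal-consistency estimate discussed, a minimal computation of Cronbach's alpha from an examinee-by-item score matrix is sketched below; the scores are invented for illustration and the formula is the standard one rather than anything specific to the cited paper.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha from an (examinees x items) score matrix."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]
    item_var_sum = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

# Hypothetical 5-examinee, 4-item score matrix (illustrative only)
scores = [[3, 4, 3, 5],
          [2, 2, 3, 2],
          [4, 5, 4, 4],
          [1, 2, 1, 2],
          [3, 3, 4, 3]]
print(round(cronbach_alpha(scores), 2))
```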

  15. Prioritizing pesticide compounds for analytical methods development

    Science.gov (United States)

    Norman, Julia E.; Kuivila, Kathryn; Nowell, Lisa H.

    2012-01-01

    The U.S. Geological Survey (USGS) has a periodic need to re-evaluate pesticide compounds in terms of priorities for inclusion in monitoring and studies and, thus, must also assess the current analytical capabilities for pesticide detection. To meet this need, a strategy has been developed to prioritize pesticides and degradates for analytical methods development. Screening procedures were developed to separately prioritize pesticide compounds in water and sediment. The procedures evaluate pesticide compounds in existing USGS analytical methods for water and sediment and compounds for which recent agricultural-use information was available. Measured occurrence (detection frequency and concentrations) in water and sediment, predicted concentrations in water and predicted likelihood of occurrence in sediment, potential toxicity to aquatic life or humans, and priorities of other agencies or organizations, regulatory or otherwise, were considered. Several existing strategies for prioritizing chemicals for various purposes were reviewed, including those that identify and prioritize persistent, bioaccumulative, and toxic compounds, and those that determine candidates for future regulation of drinking-water contaminants. The systematic procedures developed and used in this study rely on concepts common to many previously established strategies. The evaluation of pesticide compounds resulted in the classification of compounds into three groups: Tier 1 for high priority compounds, Tier 2 for moderate priority compounds, and Tier 3 for low priority compounds. For water, a total of 247 pesticide compounds were classified as Tier 1 and, thus, are high priority for inclusion in analytical methods for monitoring and studies. Of these, about three-quarters are included in some USGS analytical method; however, many of these compounds are included on research methods that are expensive and for which there are few data on environmental samples. The remaining quarter of Tier 1
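    A screening rule of the kind described can be expressed as a small tiering function; the thresholds, field names and logic below are purely illustrative assumptions and do not reproduce the USGS screening criteria.

```python
# Illustrative tiering sketch; thresholds and field names are assumed, not USGS criteria.
def assign_tier(compound):
    high = (compound["detection_freq"] >= 0.10
            or compound["max_conc_ug_L"] >= compound["toxicity_benchmark_ug_L"]
            or compound["other_agency_priority"])
    moderate = compound["detection_freq"] >= 0.01 or compound["predicted_occurrence"]
    return 1 if high else (2 if moderate else 3)

example = {"detection_freq": 0.15, "max_conc_ug_L": 0.8,
           "toxicity_benchmark_ug_L": 1.2, "other_agency_priority": False,
           "predicted_occurrence": True}
print("Tier", assign_tier(example))
```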

  16. Laser induced uranium fluorescence as an analytical method

    International Nuclear Information System (INIS)

    Krutman, I.

    1985-01-01

    A laser induced fluorescence system was developed to measure trace-level amounts of uranium in aqueous solution with reliable and simple materials and electronics. A nitrogen pulsed laser was built with the storage energy capacitor directly coupled to the laser tube electrodes as a transmission line device. This laser operated at a 3 Hz repetition rate with a peak intensity around 21 kW and a temporal width of 4.5 × 10⁻⁹ s. A sample compartment made of rigid PVC and a photomultiplier housing of aluminium were constructed and assembled, forming a single integrated device. With this prototype system we made several analytical measurements of U dissolved in nitric acid to obtain a calibration curve. A plot of U concentration versus fluorescence intensity gave a straight line, fitted by a least squares method, with a regression coefficient of 0.994. The lower limit of U determination was 30 ppb ± 3.5%. (Author) [pt
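    The calibration step described reduces to an ordinary least-squares line plus an inverse prediction for an unknown sample; the numbers below are invented for illustration and are not the measurements reported in the cited work.

```python
import numpy as np

# Hypothetical calibration points: U concentration (ppb) vs. fluorescence signal
conc = np.array([0, 50, 100, 200, 400, 800], dtype=float)
signal = np.array([12, 105, 198, 395, 790, 1585], dtype=float)

slope, intercept = np.polyfit(conc, signal, 1)         # least-squares straight line
r = np.corrcoef(conc, signal)[0, 1]                    # regression coefficient

unknown_signal = 260.0
estimated_conc = (unknown_signal - intercept) / slope  # inverse prediction
print(round(slope, 3), round(intercept, 1), round(r, 4), round(estimated_conc, 1))
```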

  17. Analytical Method of Calculation of the Current and Torque of a Reluctance Stepper Motor via Fourier Series

    Directory of Open Access Journals (Sweden)

    Pavel Zaskalicky

    2008-01-01

    Full Text Available Reluctance stepper motors are becoming a very attractive transducer for the conversion of an electric signal into a mechanical position. Due to its simple construction, the reluctance machine is considered a very reliable machine that does not require any maintenance. The present paper proposes a mathematical method for the analytical calculation of the phase current and electromagnetic torque of the motor via Fourier series. The saturation effect and winding reluctance are neglected.

  18. Analytical detection methods for irradiated foods

    International Nuclear Information System (INIS)

    1991-03-01

    The present publication is a review of scientific literature on the analytical identification of foods treated with ionizing radiation and the quantitative determination of the absorbed dose of radiation. Because of the extremely low level of chemical changes resulting from irradiation, or because of the lack of specificity to irradiation of any chemical changes, only a few methods of quantitative determination of absorbed dose have shown promise until now. On the other hand, the present review has identified several possible methods which could be used, following further research and testing, for the identification of irradiated foods. An IAEA Co-ordinated Research Programme on Analytical Detection Methods for Irradiation Treatment of Food ('ADMIT'), established in 1990, is currently investigating many of the methods cited in the present document. Refs and tab

  19. Suitability of analytical methods to measure solubility for the purpose of nanoregulation.

    Science.gov (United States)

    Tantra, Ratna; Bouwmeester, Hans; Bolea, Eduardo; Rey-Castro, Carlos; David, Calin A; Dogné, Jean-Michel; Jarman, John; Laborda, Francisco; Laloy, Julie; Robinson, Kenneth N; Undas, Anna K; van der Zande, Meike

    2016-01-01

    Solubility is an important physicochemical parameter in nanoregulation. If a nanomaterial is completely soluble, then from a risk assessment point of view, its disposal can be treated much in the same way as "ordinary" chemicals, which will simplify testing and characterisation regimes. This review assesses potential techniques for the measurement of nanomaterial solubility and evaluates their performance against a set of analytical criteria (based on satisfying the requirements governed by the cosmetic regulation as well as the need to quantify the concentration of free (hydrated) ions). Our findings show that no universal method exists. A complementary approach is thus recommended, comprising an atomic spectrometry-based method in conjunction with an electrochemical (or colorimetric) method. This article shows that although some techniques are more commonly used than others, a huge research gap remains, related to the need to ensure data reliability.

  20. Validation of Land Cover Products Using Reliability Evaluation Methods

    Directory of Open Access Journals (Sweden)

    Wenzhong Shi

    2015-06-01

    Full Text Available Validation of land cover products is a fundamental task prior to data applications. Current validation schemes and methods are, however, suited only for assessing classification accuracy and disregard the reliability of land cover products. The reliability evaluation of land cover products should be undertaken to provide reliable land cover information. In addition, the lack of high-quality reference data often constrains validation and affects the reliability results of land cover products. This study proposes a validation schema to evaluate the reliability of land cover products, including two methods, namely, result reliability evaluation and process reliability evaluation. Result reliability evaluation computes the reliability of land cover products using seven reliability indicators. Process reliability evaluation analyzes the reliability propagation in the data production process to obtain the reliability of land cover products. Fuzzy fault tree analysis is introduced and improved in the reliability analysis of a data production process. Research results show that the proposed reliability evaluation scheme is reasonable and can be applied to validate land cover products. Through the analysis of the seven indicators of result reliability evaluation, more information on land cover can be obtained for strategic decision-making and planning, compared with traditional accuracy assessment methods. Process reliability evaluation without the need for reference data can facilitate the validation and reflect the change trends of reliabilities to some extent.

  1. Application of reliability methods in Ontario Hydro

    International Nuclear Information System (INIS)

    Jeppesen, R.; Ravishankar, T.J.

    1985-01-01

    Ontario Hydro have established a reliability program in support of its substantial nuclear program. Application of the reliability program to achieve both production and safety goals is described. The value of such a reliability program is evident in the record of Ontario Hydro's operating nuclear stations. The factors which have contributed to the success of the reliability program are identified as line management's commitment to reliability; selective and judicious application of reliability methods; establishing performance goals and monitoring the in-service performance; and collection, distribution, review and utilization of performance information to facilitate cost-effective achievement of goals and improvements. (orig.)

  2. A new, rapid and reliable method for the determination of reduced sulphur (S2-) species in natural water discharges

    Energy Technology Data Exchange (ETDEWEB)

    Montegrossi, Giordano [C.N.R. - Institute of Geosciences and Earth Resources, Via G. La Pira 4, 50121 Florence (Italy)]. E-mail: giordano@geo.unifi.it; Tassi, Franco [Department of Earth Sciences, University of Florence, Via G. La Pira 4, 50121 Florence (Italy); Vaselli, Orlando [C.N.R. - Institute of Geosciences and Earth Resources, Via G. La Pira 4, 50121 Florence (Italy); Department of Earth Sciences, University of Florence, Via G. La Pira 4, 50121 Florence (Italy); Bidini, Eva [Department of Earth Sciences, University of Florence, Via G. La Pira 4, 50121 Florence (Italy); Minissale, Angelo [C.N.R. - Institute of Geosciences and Earth Resources, Via G. La Pira 4, 50121 Florence (Italy)

    2006-05-15

    The determination of reduced S species in natural waters is particularly difficult due to their high instability and chemical and physical interferences in the current analytical methods. In this paper a new, rapid and reliable analytical procedure is presented, named the Cd-IC method, for their determination as ΣS²⁻ via oxidation to SO₄²⁻ after chemical trapping with an ammonia-cadmium solution that allows precipitation of all the reduced S species as CdS. The S²⁻-derived SO₄²⁻ is analysed by ion-chromatography. The main advantages of this method are: low cost, high stability of the CdS precipitate, absence of interferences, low detection limit (0.01 mg/L as SO₄ for 10 mL of water) and low analytical error (about 5%). The proposed method has been applied to more than 100 water samples from different natural systems (water discharges and cold wells from volcanic and geothermal areas, crater lakes) in central-southern Italy.

  3. Manual of selected physico-chemical analytical methods. IV

    International Nuclear Information System (INIS)

    Beran, M.; Klosova, E.; Krtil, J.; Sus, F.; Kuvik, V.; Vrbova, L.; Hamplova, M.; Lengyel, J.; Kelnar, L.; Zakouril, K.

    1990-11-01

    The Central Testing Laboratory of the Nuclear Research Institute at Rez has for a decade been participating in the development of analytical procedures and has been providing analyses of samples of different types and origin. The analytical procedures developed have been published in special journals and a number of them in the Manuals of analytical methods, in three parts. The 4th part of the Manual contains selected physico-chemical methods developed or modified by the Laboratory in the years 1986-1990 within the project ''Development of physico-chemical analytical methods''. In most cases, techniques are involved for non-nuclear applications. Some can find wider applications, especially in analyses of environmental samples. Others have been developed for specific cases of sample analyses or require special instrumentation (mass spectrometer), which partly restricts their applicability by other institutions. (author)

  4. Measurement methods to assess diastasis of the rectus abdominis muscle (DRAM): A systematic review of their measurement properties and meta-analytic reliability generalisation.

    Science.gov (United States)

    van de Water, A T M; Benjamin, D R

    2016-02-01

    Systematic literature review. Diastasis of the rectus abdominis muscle (DRAM) has been linked with low back pain, abdominal and pelvic dysfunction. Measurement is used to either screen or to monitor DRAM width. Determining which methods are suitable for screening and monitoring DRAM is of clinical value. To identify the best methods to screen for DRAM presence and monitor DRAM width. AMED, Embase, Medline, PubMed and CINAHL databases were searched for measurement property studies of DRAM measurement methods. Population characteristics, measurement methods/procedures and measurement information were extracted from included studies. Quality of all studies was evaluated using 'quality rating criteria'. When possible, reliability generalisation was conducted to provide combined reliability estimations. Thirteen studies evaluated measurement properties of the 'finger width'-method, tape measure, calipers, ultrasound, CT and MRI. Ultrasound was most evaluated. Methodological quality of these studies varied widely. Pearson's correlations of r = 0.66-0.79 were found between calipers and ultrasound measurements. Calipers and ultrasound had Intraclass Correlation Coefficients (ICC) of 0.78-0.97 for test-retest, inter- and intra-rater reliability. The 'finger width'-method had weighted Kappa's of 0.73-0.77 for test-retest reliability, but moderate agreement (63%; weighted Kappa = 0.53) between raters. Comparing calipers and ultrasound, low measurement error was found (above the umbilicus), and the methods had good agreement (83%; weighted Kappa = 0.66) for discriminative purposes. The available information support ultrasound and calipers as adequate methods to assess DRAM. For other methods limited measurement information of low to moderate quality is available and further evaluation of their measurement properties is required. Copyright © 2015 Elsevier Ltd. All rights reserved.
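    For the categorical agreement statistics quoted (weighted kappa), a compact computation from a two-rater confusion matrix is sketched below; the 3-category DRAM table and the linear weighting are assumptions for illustration, not data or choices from the cited review.

```python
import numpy as np

def weighted_kappa(conf_matrix, weights="linear"):
    """Cohen's weighted kappa from a square two-rater confusion matrix."""
    cm = np.asarray(conf_matrix, dtype=float)
    n, k = cm.sum(), cm.shape[0]
    i, j = np.indices((k, k))
    w = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    observed = cm / n
    expected = np.outer(cm.sum(axis=1), cm.sum(axis=0)) / n**2
    return 1.0 - (w * observed).sum() / (w * expected).sum()

# Hypothetical ratings (normal / mild / marked DRAM) from two raters
cm = [[20, 4, 0],
      [3, 15, 2],
      [0, 2, 10]]
print(round(weighted_kappa(cm), 2))
```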

  5. Analytical methods under emergency conditions

    International Nuclear Information System (INIS)

    Sedlet, J.

    1983-01-01

    This lecture discusses methods for the radiochemical determination of internal contamination of the body under emergency conditions, here defined as a situation in which results on internal radioactive contamination are needed quickly. The purpose of speed is to determine the necessity for medical treatment to increase the natural elimination rate. Analytical methods discussed include whole-body counting, organ counting, wound monitoring, and excreta analysis. 12 references

  6. The fitness for purpose of analytical methods applied to fluorimetric uranium determination in water matrix

    International Nuclear Information System (INIS)

    Grinman, Ana; Giustina, Daniel; Mondini, Julia; Diodat, Jorge

    2008-01-01

    Full text: This paper describes the steps which should be followed by a laboratory in order to validate the fluorimetric method for natural uranium in a water matrix. The validation of an analytical method is a necessary requirement prior to accreditation of a non-normalized method under the ISO/IEC 17025 standard. Different analytical techniques differ in the set of variables to be validated. Depending on the chemical process, measurement technique, matrix type, data fitting and measurement efficiency, a laboratory must set up experiments to verify the reliability of data, through the application of several statistical tests and by participating in quality programs (QP) organized by reference laboratories such as the National Institute of Standards and Technology (NIST), the National Physical Laboratory (NPL), or the Environmental Measurements Laboratory (EML). However, participation in QPs involves not only international reference laboratories, but also national ones which are able to prove proficiency to the Argentinean Accreditation Board. The parameters that the ARN laboratory had to validate in the fluorimetric method, in accordance with the Eurachem guide and IUPAC definitions, are: Detection Limit, Quantification Limit, Precision, Intra-laboratory Precision, Reproducibility Limit, Repeatability Limit, Linear Range and Robustness. Assays to fit the above parameters were designed on the basis of statistical requirements, and a detailed data treatment is presented together with the respective tests in order to show the parameters validated. As a final conclusion, uranium determination by fluorimetry is a reliable method for direct measurement to meet radioprotection requirements in a water matrix, within its linear range, which is fixed every time a calibration is carried out at the beginning of the analysis. The detection limit (depending on the blank standard deviation and the slope) varies between 3 µg U and 5 µg U, which yields minimum detectable concentrations (MDC) of
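    Of the parameters listed, the detection and quantification limits are typically estimated from the scatter of blank measurements and the calibration slope; the sketch below uses the common 3s/10s convention with invented blank readings, which may differ from the ARN laboratory's actual procedure.

```python
import numpy as np

# Hypothetical blank fluorescence readings and calibration slope (illustrative only)
blank_readings = np.array([0.8, 1.1, 0.9, 1.0, 1.2, 0.7])   # instrument units
slope = 0.35                                                 # units per ug U

s_blank = blank_readings.std(ddof=1)
lod = 3 * s_blank / slope       # detection limit, ug U
loq = 10 * s_blank / slope      # quantification limit, ug U
print(round(lod, 2), round(loq, 2))
```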

  7. The effect on reliability and sensitivity to level of training of combining analytic and holistic rating scales for assessing communication skills in an internal medicine resident OSCE.

    Science.gov (United States)

    Daniels, Vijay John; Harley, Dwight

    2017-07-01

    Although previous research has compared checklists to rating scales for assessing communication, the purpose of this study was to compare the effect on reliability and sensitivity to level of training of an analytic, a holistic, and a combined analytic-holistic rating scale in assessing communication skills. The University of Alberta Internal Medicine Residency runs OSCEs for postgraduate year (PGY) 1 and 2 residents and another for PGY-4 residents. Communication stations were scored with an analytic scale (empathy, non-verbal skills, verbal skills, and coherence subscales) and a holistic scale. Authors analyzed reliability of individual and combined scales using generalizability theory and evaluated each scale's sensitivity to level of training. For analytic, holistic, and combined scales, 12, 12, and 11 stations respectively yielded a Phi of 0.8 for the PGY-1,2 cohort, and 16, 16, and 14 stations yielded a Phi of 0.8 for the PGY-4 cohort. PGY-4 residents scored higher on the combined scale, the analytic rating scale, and the non-verbal and coherence subscales. A combined analytic-holistic rating scale increased score reliability and was sensitive to level of training. Given increased validity evidence, OSCE developers should consider combining analytic and holistic scales when assessing communication skills. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. A novel reliability evaluation method for large engineering systems

    Directory of Open Access Journals (Sweden)

    Reda Farag

    2016-06-01

    Full Text Available A novel reliability evaluation method for large nonlinear engineering systems excited by dynamic loading applied in the time domain is presented. For this class of problems, the performance functions are expected to be functions of time and implicit in nature. Available first- and second-order reliability methods (FORM/SORM) are challenging to apply when estimating the reliability of such systems. Because of its inefficiency, the classical Monte Carlo simulation (MCS) method also cannot be used for large nonlinear dynamic systems. In the proposed approach, only tens instead of hundreds or thousands of deterministic evaluations at intelligently selected points are used to extract the reliability information. A hybrid approach, consisting of the stochastic finite element method (SFEM) developed by the author and his research team using FORM, the response surface method (RSM), an interpolation scheme, and advanced factorial schemes, is proposed. The method is clarified with the help of several numerical examples.
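    The FORM building block used inside such hybrid schemes searches for the most probable failure point in standard normal space. The snippet below is a bare HL-RF iteration on an explicit linear limit state, shown only to make the reliability-index machinery concrete; in the cited approach the explicit g would be replaced by a response surface built from the selected deterministic evaluations.

```python
import numpy as np

def form_hlrf(g, grad_g, u0, tol=1e-6, max_iter=100):
    """HL-RF iteration for the FORM reliability index in standard normal space."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        grad = grad_g(u)
        u_new = (grad @ u - g(u)) * grad / (grad @ grad)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u           # (beta, design point)

g = lambda u: 3.0 - u[0] - 0.5 * u[1]     # illustrative linear limit state
grad_g = lambda u: np.array([-1.0, -0.5])
beta, u_star = form_hlrf(g, grad_g, [0.0, 0.0])
print(round(beta, 3))                      # about 3/sqrt(1.25) = 2.683
```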

  9. Application of Dynamic Analysis in Semi-Analytical Finite Element Method.

    Science.gov (United States)

    Liu, Pengfei; Xing, Qinyan; Wang, Dawei; Oeser, Markus

    2017-08-30

    Analyses of dynamic responses are of great importance for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional and only requires a two-dimensional FE discretization by incorporating Fourier series in the third dimension. In this paper, the algorithm used to apply the dynamic analysis to SAFEM is introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and in the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is sufficiently high and its computational time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out and the prediction derived from SAFEM is consistent with the measurement. Therefore, SAFEM is able to reliably predict the dynamic response of asphalt pavement under moving loads, thus proving beneficial to road administrations in assessing pavement condition.

  10. Reliability and maintainability assessment factors for reliable fault-tolerant systems

    Science.gov (United States)

    Bavuso, S. J.

    1984-01-01

    A long term goal of the NASA Langley Research Center is the development of a reliability assessment methodology of sufficient power to enable the credible comparison of the stochastic attributes of one ultrareliable system design against others. This methodology, developed over a 10 year period, is a combined analytic and simulative technique. An analytic component is the Computer Aided Reliability Estimation capability, third generation, or simply CARE III. A simulative component is the Gate Logic Software Simulator capability, or GLOSS. Discussed are the numerous factors that potentially have a degrading effect on system reliability and the ways in which these factors, peculiar to highly reliable fault-tolerant systems, are accounted for in credible reliability assessments. Also presented are the modeling difficulties that result from their inclusion and the ways in which CARE III and GLOSS mitigate the intractability of the heretofore unworkable mathematics.

  11. Structural reliability calculation method based on the dual neural network and direct integration method.

    Science.gov (United States)

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty receives wide attention from engineers and scholars because it reflects the structural characteristics and the actual bearing conditions. The direct integration method, which starts from the definition in reliability theory, is easy to understand, but there are still mathematical difficulties in the calculation of multiple integrals. Therefore, a dual neural network method is proposed for calculating multiple integrals in this paper. The dual neural network consists of two neural networks. Neural network A is used to learn the integrand function, and neural network B is used to simulate the original (antiderivative) function. According to the derivative relationship between the network output and the network input, neural network B is derived from neural network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean value first-order second-moment method have demonstrated that the proposed method is an efficient and accurate reliability method for structural reliability problems.
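    The multiple integral targeted by the direct integration method is the failure probability Pf = ∫_{g(x)<0} f_X(x) dx. The sketch below evaluates it for a simple linear limit state with independent standard normal variables by reducing the double integral to a one-dimensional quadrature; this is an illustrative stand-in for the integrand handling, not the dual-neural-network approximation proposed in the paper.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

# Limit state g = 3 - X1 - 0.5*X2 with X1, X2 independent standard normals.
# P(g < 0 | X1 = x1) = P(X2 > (3 - x1)/0.5) = Phi((x1 - 3)/0.5)
inner = lambda x1: norm.cdf((x1 - 3.0) / 0.5)
pf, _ = integrate.quad(lambda x1: norm.pdf(x1) * inner(x1), -10, 10)

beta = 3.0 / np.sqrt(1.0 + 0.5**2)         # exact index for the linear case
print(round(pf, 6), round(norm.sf(beta), 6))
```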

  12. Propulsion and launching analysis of variable-mass rockets by analytical methods

    OpenAIRE

    D.D. Ganji; M. Gorji; M. Hatami; A. Hasanpour; N. Khademzadeh

    2013-01-01

    In this study, applications of some analytical methods to the nonlinear equation of the launching of a rocket with variable mass are investigated. The differential transformation method (DTM), homotopy perturbation method (HPM) and least squares method (LSM) were applied, and their results are compared with the numerical solution. Excellent agreement between the analytical methods and the numerical one is observed in the results, and this reveals that the analytical methods are effective and convenient. Also a paramet...
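    The governing relation behind such launch problems is the variable-mass rocket equation. The sketch below integrates a vertical-launch form (drag neglected) with a standard ODE solver and checks it against the closed-form solution; the parameters are illustrative and the numerical solver is a stand-in for the DTM/HPM/LSM treatments compared in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# m(t) dv/dt = mdot*ve - m(t)*g, with m(t) = m0 - mdot*t (illustrative parameters)
m0, mdot, ve, g = 500.0, 5.0, 2000.0, 9.81   # kg, kg/s, m/s, m/s^2
T_burn = 60.0                                 # burn time, s

def rhs(t, v):
    m = m0 - mdot * t
    return [(mdot * ve) / m - g]

sol = solve_ivp(rhs, (0.0, T_burn), [0.0], rtol=1e-8)
v_numeric = sol.y[0, -1]
v_closed_form = ve * np.log(m0 / (m0 - mdot * T_burn)) - g * T_burn
print(round(v_numeric, 2), round(v_closed_form, 2))
```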

  13. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    OpenAIRE

    Hai An; Ling Zhou; Hui Sun

    2016-01-01

    Aiming to resolve the problems of the variety of uncertainty variables that coexist in engineering structural reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article. The convergent solving method is also presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new...

  14. Rigid inclusions-Comparison between analytical and numerical methods

    International Nuclear Information System (INIS)

    Gomez Perez, R.; Melentijevic, S.

    2014-01-01

    This paper compares different analytical methods for the analysis of rigid inclusions with finite element modeling. First of all, the load transfer in the distribution layer is analyzed for different layer thicknesses and different inclusion grids, to define the range between the results obtained by analytical and numerical methods. The interaction between the soft soil and the inclusion in the estimation of settlements is studied as well. Considering different stiffnesses of the soft soil, settlements obtained analytically and numerically are compared. The influence of the soft soil modulus of elasticity on the neutral point depth was also studied by finite elements. This depth has a great importance for the definition of the total length of the rigid inclusion. (Author)

  15. New analytical exact solutions of time fractional KdV-KZK equation by Kudryashov methods

    Science.gov (United States)

    Saha Ray, S

    2016-04-01

    In this paper, new exact solutions of the time fractional KdV-Khokhlov-Zabolotskaya-Kuznetsov (KdV-KZK) equation are obtained by the classical Kudryashov method and modified Kudryashov method respectively. For this purpose, the modified Riemann-Liouville derivative is used to convert the nonlinear time fractional KdV-KZK equation into the nonlinear ordinary differential equation. In the present analysis, the classical Kudryashov method and modified Kudryashov method are both used successively to compute the analytical solutions of the time fractional KdV-KZK equation. As a result, new exact solutions involving the symmetrical Fibonacci function, hyperbolic function and exponential function are obtained for the first time. The methods under consideration are reliable and efficient, and can be used as an alternative to establish new exact solutions of different types of fractional differential equations arising from mathematical physics. The obtained results are exhibited graphically in order to demonstrate the efficiencies and applicabilities of these proposed methods of solving the nonlinear time fractional KdV-KZK equation.

  16. Reliability Evaluation of Bridges Based on Nonprobabilistic Response Surface Limit Method

    Directory of Open Access Journals (Sweden)

    Xuyong Chen

    2017-01-01

    Full Text Available Due to the many uncertainties in the nonprobabilistic reliability assessment of bridges, the limit state function is generally unknown. The traditional nonprobabilistic response surface method involves a lengthy, oscillating iteration process and makes the nonprobabilistic reliability index difficult to solve. This article proposes a nonprobabilistic response surface limit method based on the interval model. The intention of this method is to solve the upper and lower limits of the nonprobabilistic reliability index and to narrow its range. If the range of the reliability index reduces to an acceptable accuracy, the solution is considered convergent and the nonprobabilistic reliability index is obtained. The case study indicates that, compared with the traditional nonprobabilistic response surface method, the proposed method avoids an oscillating iteration process, makes the iteration stable and convergent, reduces the number of iteration steps significantly, and significantly improves computational efficiency and precision. Finally, a nonprobabilistic reliability evaluation process for bridges is established by evaluating the reliability of a three-span PC continuous rigid frame bridge with the proposed method, which proves simpler and more reliable when samples and parameters for the bridge nonprobabilistic reliability evaluation are lacking.

  17. Jet substructure with analytical methods

    Energy Technology Data Exchange (ETDEWEB)

    Dasgupta, Mrinal [University of Manchester, Consortium for Fundamental Physics, School of Physics and Astronomy, Manchester (United Kingdom); Fregoso, Alessandro; Powling, Alexander [University of Manchester, School of Physics and Astronomy, Manchester (United Kingdom); Marzani, Simone [Durham University, Institute for Particle Physics Phenomenology, Durham (United Kingdom)

    2013-11-15

    We consider the mass distribution of QCD jets after the application of jet-substructure methods, specifically the mass-drop tagger, pruning, trimming and their variants. In contrast to most current studies employing Monte Carlo methods, we carry out analytical calculations at the next-to-leading order level, which are sufficient to extract the dominant logarithmic behaviour for each technique, and compare our findings to exact fixed-order results. Our results should ultimately lead to a better understanding of these jet-substructure methods which in turn will influence the development of future substructure tools for LHC phenomenology. (orig.)

  18. Analytical techniques and method validation for the measurement of selected semivolatile and nonvolatile organofluorochemicals in air.

    Science.gov (United States)

    Reagen, William K; Lindstrom, Kent R; Thompson, Kathy L; Flaherty, John M

    2004-09-01

    The widespread use of semi- and nonvolatile organofluorochemicals in industrial facilities, concern about their persistence, and relatively recent advancements in liquid chromatography/mass spectrometry (LC/MS) technology have led to the development of new analytical methods to assess potential worker exposure to airborne organofluorochemicals. Techniques were evaluated for the determination of 19 organofluorochemicals and for total fluorine in ambient air samples. Due to the potential biphasic nature of most of these fluorochemicals when airborne, Occupational Safety and Health Administration (OSHA) versatile sampler (OVS) tubes were used to simultaneously trap fluorochemical particulates and vapors from workplace air. Analytical methods were developed for OVS air samples to quantitatively analyze for total fluorine using oxygen bomb combustion/ion selective electrode and for 17 organofluorochemicals using LC/MS and gas chromatography/mass spectrometry (GC/MS). The experimental design for this validation was based on the National Institute of Occupational Safety and Health (NIOSH) Guidelines for Air Sampling and Analytical Method Development and Evaluation, with some revisions of the experimental design. The study design incorporated experiments to determine analytical recovery and stability, sampler capacity, the effect of some environmental parameters on recoveries, storage stability, limits of detection, precision, and accuracy. Fluorochemical mixtures were spiked onto each OVS tube over a range of 0.06-6 microg for each of 12 compounds analyzed by LC/MS and 0.3-30 microg for 5 compounds analyzed by GC/MS. These ranges allowed reliable quantitation at 0.001-0.1 mg/m3 in general for LC/MS analytes and 0.005-0.5 mg/m3 for GC/MS analytes when 60 L of air are sampled. The organofluorochemical exposure guideline (EG) is currently 0.1 mg/m3 for many analytes, with one exception being ammonium perfluorooctanoate (EG is 0.01 mg/m3). Total fluorine results may be used

  19. On the NPP structural reliability

    International Nuclear Information System (INIS)

    Klemin, A.I.; Polyakov, E.F.

    1980-01-01

    Reviewed are the main statements, peculiarities and possibilities of the first branch guiding technical material (GTM), 'The methods of calculation of structural reliability of NPP and its systems at the design stage'. It is stated that the GTM presents recommendations on the calculation of the reliability of such specific systems as the reactor control and protection system, the system of measuring instruments and automatics, and the safety systems. The GTM is based on analytical methods of the modern theory of reliability, using the methodology of minimal cut sets of complex systems. It is stressed that calculations by the proposed methods permit evaluation of a wide set of reliability parameters, reflecting separately or jointly the dependability and maintainability properties of the NPP. For an NPP operating on a variable load-following schedule, parameters are additionally considered that characterize reliability with account of the intended regime of power change, i.e. taking into account failures caused by the delivered power falling below the required power or the required power rising above the delivered power
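
    The GTM approach rests on minimal cut sets; the sketch below shows the basic calculation such methods automate, namely bounding the system failure probability from component failure probabilities and a list of minimal cut sets. The cut sets and probabilities are hypothetical, and the components are assumed independent.

      from itertools import combinations
      import math

      p = {"A": 1e-3, "B": 2e-3, "C": 5e-4, "D": 1e-3}   # component failure probabilities
      cut_sets = [{"A", "B"}, {"C"}, {"B", "D"}]         # hypothetical minimal cut sets

      def prob(cs):
          # probability that every component in the cut set has failed (independence assumed)
          return math.prod(p[c] for c in cs)

      # rare-event (first-order) approximation and a second-order pairwise correction
      p_first = sum(prob(cs) for cs in cut_sets)
      p_pairs = sum(prob(a | b) for a, b in combinations(cut_sets, 2))
      print(f"first-order: {p_first:.3e}, with pairwise correction: {p_first - p_pairs:.3e}")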

  20. Reliability demonstration methodology for products with Gamma Process by optimal accelerated degradation testing

    International Nuclear Information System (INIS)

    Zhang, Chunhua; Lu, Xiang; Tan, Yuanyuan; Wang, Yashun

    2015-01-01

    For products with high reliability and long lifetimes, accelerated degradation testing (ADT) may be adopted during the product development phase to verify whether reliability satisfies the predetermined level within a feasible test duration. Actual degradation in engineering is usually a strictly monotonic process, such as fatigue crack growth, wear, and erosion. However, a method for reliability demonstration by ADT with a monotonic degradation process has not been investigated so far. This paper proposes a reliability demonstration methodology by ADT for this kind of product. We first apply a Gamma process to describe the monotonic degradation. Next, we present a reliability demonstration method that converts the required reliability level into an allowable cumulative degradation in the ADT and compares the actual accumulated degradation with the allowable level. Further, we suggest an analytical optimal ADT design method for more efficient reliability demonstration, minimizing the asymptotic variance of the decision variable under constraints on sample size, test duration, test cost, and the predetermined decision risks. The method is validated and illustrated with an example on reliability demonstration of an alloy product, and is finally applied to demonstrate the wear reliability of a spherical plain bearing over a long service duration. - Highlights: • We present a reliability demonstration method by ADT for products with a monotonic degradation process, which may be applied to verify reliability over a long service life within a feasible test duration. • We suggest an analytical optimal ADT design method for more efficient reliability demonstration, which differs from existing optimal ADT designs aimed at more accurate reliability estimation in its objective function and constraints. • The methods are applied to demonstrate the wear reliability within long service duration of
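
    A simulation sketch of the degradation model underlying the method: for a stationary Gamma process the cumulative degradation at time t is Gamma distributed, so the reliability R(t) = P{X(t) < D} has a closed form that Monte Carlo paths reproduce. The shape rate, scale and failure threshold below are assumed illustrative values; the acceleration model and the demonstration decision rule of the paper are not reproduced.

      import numpy as np
      from scipy.stats import gamma

      rng = np.random.default_rng(0)
      eta, scale, D = 0.8, 0.05, 1.0        # shape rate per hour, scale, failure threshold
      t_grid = np.arange(1, 51)             # hours

      # 10 000 monotonic degradation paths built from independent gamma increments
      inc = rng.gamma(shape=eta, scale=scale, size=(10_000, t_grid.size))
      paths = np.cumsum(inc, axis=1)

      R_mc = (paths < D).mean(axis=0)                        # empirical reliability
      R_exact = gamma.cdf(D, a=eta * t_grid, scale=scale)    # closed-form reliability
      print(np.max(np.abs(R_mc - R_exact)))                  # small -> simulation agrees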

  1. Reliability Based Optimization of Structural Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    1987-01-01

    The optimization problem to design structural systems such that the reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability...... is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisfies given requirements or such that the systems reliability satisfies a given requirement....... For these optimization problems it is described how a sensitivity analysis can be performed. Next, new optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability based optimization problem sequentially using quasi-analytical derivatives. Finally...
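
    A minimal sketch of the FORM step referred to above, using the Hasofer-Lind / Rackwitz-Fiessler recursion to locate the design point of a hypothetical limit state already expressed in standard normal space; the cost function, system formulation and quasi-analytical sensitivities of the paper are not reproduced.

      import numpy as np
      from scipy.stats import norm

      def g(u):                         # hypothetical limit state, failure when g <= 0
          return 3.0 + u[0] - 0.5 * u[1] ** 2

      def grad_g(u):
          return np.array([1.0, -u[1]])

      u = np.array([0.0, 1.0])          # start away from the symmetry axis
      for _ in range(100):
          grad = grad_g(u)
          # HL-RF update: project onto the linearized limit-state surface
          u_new = grad * (grad @ u - g(u)) / (grad @ grad)
          if np.linalg.norm(u_new - u) < 1e-10:
              u = u_new
              break
          u = u_new

      beta = np.linalg.norm(u)          # reliability index = distance to the design point
      print(f"beta = {beta:.3f}, P_f = {norm.cdf(-beta):.3e}")   # beta tends to sqrt(5) here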

  2. Analytical N beam position monitor method

    Directory of Open Access Journals (Sweden)

    A. Wegscheider

    2017-11-01

    Full Text Available Measurement and correction of focusing errors is of great importance for performance and machine protection of circular accelerators. Furthermore LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of the optics commissioning, as the foreseen operation with β^{*}-leveling on luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs. A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.

  3. Analytical N beam position monitor method

    Science.gov (United States)

    Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.

    2017-11-01

    Measurement and correction of focusing errors is of great importance for performance and machine protection of circular accelerators. Furthermore LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of the optics commissioning, as the foreseen operation with β*-leveling on luminosity will require many operational optics. A fast measurement of the β -function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.

  4. Systems and Methods for Composable Analytics

    Science.gov (United States)

    2014-04-29

    simplistic module that performs a mathematical operation on two numbers. The most important method is the Execute() method. This will get called when it is...addition, an input control is also specified in the example below. In this example, the mathematical operator can only be chosen from a preconfigured...approaches. Some of the industries that could benefit from Composable Analytics include pharmaceuticals, health care, insurance, actuaries, and

  5. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    Energy Technology Data Exchange (ETDEWEB)

    Ronald Laurids Boring

    2010-11-01

    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  6. Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues

    International Nuclear Information System (INIS)

    Boring, Ronald Laurids

    2010-01-01

    This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.

  7. 7 CFR 93.13 - Analytical methods.

    Science.gov (United States)

    2010-01-01

    Title 7 (Agriculture), Vol. 3, 2010-01-01, Section 93.13: Analytical methods. Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Standards...; ... No. 1, USDA, Agricultural Marketing Service, Science and Technology, 3521 South Agriculture Building...

  8. Secondary waste minimization in analytical methods

    International Nuclear Information System (INIS)

    Green, D.W.; Smith, L.L.; Crain, J.S.; Boparai, A.S.; Kiely, J.T.; Yaeger, J.S.; Schilling, J.B.

    1995-01-01

    The characterization phase of site remediation is an important and costly part of the process. Because toxic solvents and other hazardous materials are used in common analytical methods, characterization is also a source of new waste, including mixed waste. Alternative analytical methods can reduce the volume or form of hazardous waste produced either in the sample preparation step or in the measurement step. The authors are examining alternative methods in the areas of inorganic, radiological, and organic analysis. For determining inorganic constituents, alternative methods were studied for sample introduction into inductively coupled plasma spectrometers. Figures of merit for the alternative methods, as well as their associated waste volumes, were compared with the conventional approaches. In the radiological area, the authors are comparing conventional methods for gross α/β measurements of soil samples to an alternative method that uses high-pressure microwave dissolution. For determination of organic constituents, microwave-assisted extraction was studied for RCRA regulated semivolatile organics in a variety of solid matrices, including spiked samples in blank soil; polynuclear aromatic hydrocarbons in soils, sludges, and sediments; and semivolatile organics in soil. Extraction efficiencies were determined under varying conditions of time, temperature, microwave power, moisture content, and extraction solvent. Solvent usage was cut from the 300 mL used in conventional extraction methods to about 30 mL. Extraction results varied from one matrix to another. In most cases, the microwave-assisted extraction technique was as efficient as the more common Soxhlet or sonication extraction techniques

  9. Review of methods for the integration of reliability and design engineering

    International Nuclear Information System (INIS)

    Reilly, J.T.

    1978-03-01

    A review of methods for the integration of reliability and design engineering was carried out to establish a reliability program philosophy, an initial set of methods, and procedures to be used by both the designer and reliability analyst. The report outlines a set of procedures which implements a philosophy that requires increased involvement by the designer in reliability analysis. Discussions of each method reviewed include examples of its application

  10. Analytic of elements for the determination of soil->plant transfer factors

    International Nuclear Information System (INIS)

    Liese, T.

    1985-02-01

    This article describes a part of the conventional analytical work that was done to determine soil-to-plant transfer factors. The analytical methods, the experiments to find the best way of sample digestion, and the resulting analytical procedures are described. The analytical methods are graphite furnace atomic absorption spectrometry (GFAAS) and inductively coupled plasma atomic emission spectrometry (ICP-AES). In the case of ICP-AES, the necessity of proper background correction and of correcting spectral interferences is shown. The reliability of the analytical procedure is demonstrated by measuring different kinds of standard reference materials and by comparison of AAS and AES. (orig./HP) [de

  11. Analytical method for solving radioactive transformations

    International Nuclear Information System (INIS)

    Vudakin, Z.

    1999-01-01

    An analytical method for solving radioactive transformations is presented in this paper. A high-accuracy series expansion of the depletion function and nonsingular Bateman coefficients are used to overcome the numerical difficulties that arise when applying the well-known Bateman solution of a simple radioactive decay. The generality and simplicity of the method are found to be useful in evaluating nuclide chains with one hundred or more nuclides in the chain. The method enables evaluation of the complete chain, without elimination of short-lived nuclides. It is efficient and accurate
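
    For reference, the classical Bateman solution that such series-expansion formulations are checked against: the amount of the last member of a linear decay chain, starting from a pure parent. The decay constants below are illustrative; note that the classical coefficients become singular when two decay constants coincide, which is exactly the difficulty a nonsingular formulation avoids.

      import numpy as np

      def bateman_last(lambdas, n1_0, t):
          """Amount of the last nuclide of a linear chain N1 -> N2 -> ... at time t,
          starting from n1_0 atoms of the first nuclide only (classical Bateman form;
          singular if two decay constants are equal)."""
          lam = np.asarray(lambdas, dtype=float)
          total = 0.0
          for i, li in enumerate(lam):
              denom = np.prod([lj - li for j, lj in enumerate(lam) if j != i])
              total += np.exp(-li * t) / denom
          return n1_0 * np.prod(lam[:-1]) * total

      lam = [np.log(2) / 8.02, np.log(2) / 2.3]   # illustrative two-member chain, half-lives in days
      print([bateman_last(lam, 1e20, ti) for ti in (0.0, 5.0, 15.0, 30.0)])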

  12. Analytical procedures. Pt. 4

    International Nuclear Information System (INIS)

    Rackwitz, R.

    1985-01-01

    The semi-analytical procedures are summarized under the heading 'first- or second-order reliability method'. The asymptotic nature of the theory was repeatedly pointed out. In load-bearing structures the failure probability of a component is always also a function of the condition of all other components. It depends moreover on the loading, which affects nearly all components. This fact causes a marked reduction of the effect of redundant component arrangements in the system. It moreover requires very special formulations. Although theoretically interesting and practically important developments will leave their mark on the further progress of the theory, the statements obtained by these approaches will continue to depend on how closely the chosen physical relationships and stochastic models can approximate the scattering quantities. Sensitivity studies show that these aspects are in part of substantially higher importance for decision criteria than further refinement of the (probabilistic) method. Questions of the relevance and reliability of data, and of their adequate treatment in reliability analyses, seem to rank higher than exaggerated methodological demands. (orig./HP) [de

  13. A New Method to Study Analytic Inequalities

    Directory of Open Access Journals (Sweden)

    Xiao-Ming Zhang

    2010-01-01

    Full Text Available We present a new method to study analytic inequalities involving n variables. Regarding its applications, we proved some well-known inequalities and improved Carleman's inequality.

  14. Reliability of Estimation Pile Load Capacity Methods

    Directory of Open Access Journals (Sweden)

    Yudhi Lastiasih

    2014-04-01

    Full Text Available For none of the numerous previous methods for predicting pile capacity is it known how accurate they are when compared with the actual ultimate capacity of piles tested to failure. The authors of the present paper have conducted such an analysis, based on 130 data sets of field loading tests. Out of these 130 data sets, only 44 could be analysed, of which 15 were conducted until the piles actually reached failure. The pile prediction methods used were: Brinch Hansen's method (1963), Chin's method (1970), Decourt's Extrapolation Method (1999), Mazurkiewicz's method (1972), Van der Veen's method (1953), and the Quadratic Hyperbolic Method proposed by Lastiasih et al. (2012). It was found that all the above methods were sufficiently reliable when applied to data from pile loading tests in which the piles were loaded to failure. However, when applied to data from pile loading tests in which failure was not reached, the methods that yield lower values of the correction factor N are recommended. Finally, the empirical method of Reese and O'Neill (1988) was found to be reliable enough to be used to estimate the Qult of a pile foundation based on soil data only.
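
    As an illustration of one of the extrapolation methods listed, Chin (1970) assumes the load-settlement curve is close to a hyperbola, so the plot of settlement/load against settlement is linear and the ultimate capacity is the inverse of the slope. The load-settlement pairs below are synthetic, not from the 130 field tests.

      import numpy as np

      s = np.array([2.0, 4.0, 6.0, 9.0, 12.0, 16.0, 20.0])         # settlement, mm (synthetic)
      Q = np.array([380., 610., 760., 900., 990., 1070., 1120.])   # load, kN (synthetic)

      slope, intercept = np.polyfit(s, s / Q, 1)   # linear fit of s/Q versus s
      Q_ult = 1.0 / slope                          # Chin's estimate of the ultimate capacity
      print(f"Chin ultimate capacity = {Q_ult:.0f} kN")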

  15. A SOFTWARE RELIABILITY ESTIMATION METHOD TO NUCLEAR SAFETY SOFTWARE

    Directory of Open Access Journals (Sweden)

    GEE-YONG PARK

    2014-02-01

    Full Text Available A method for estimating the software reliability of nuclear safety software is proposed in this paper. The method is based on a software reliability growth model (SRGM), in which the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Two types of modeling schemes based on a particular underlying method are proposed in order to more precisely estimate and predict the number of software defects from very rare software failure data. Bayesian statistical inference is employed to estimate the model parameters, incorporating software test cases as a covariate in the model. It was found that these models are capable of reasonably estimating the remaining number of software defects, which directly affects the reactor trip functions. The software reliability can be estimated from these modeling equations, and one approach to obtaining a software reliability value is proposed in this paper.
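
    A minimal non-homogeneous Poisson process sketch with the Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)), fitted by maximum likelihood to hypothetical failure times; the Bayesian inference and test-case covariate of the proposed method are not reproduced.

      import numpy as np
      from scipy.optimize import minimize

      t_fail = np.array([12., 31., 55., 70., 108., 150., 201., 280., 400., 520.])  # hypothetical
      T = 600.0                                            # total test time

      def neg_log_lik(theta):
          a, b = np.exp(theta)                             # positivity via log-parameters
          lam = a * b * np.exp(-b * t_fail)                # NHPP intensity at the failure times
          return -(np.sum(np.log(lam)) - a * (1.0 - np.exp(-b * T)))

      res = minimize(neg_log_lik, x0=np.log([15.0, 0.005]), method="Nelder-Mead")
      a_hat, b_hat = np.exp(res.x)
      remaining = a_hat * np.exp(-b_hat * T)               # expected residual defects
      print(f"a = {a_hat:.1f}, b = {b_hat:.4f}, expected remaining defects = {remaining:.1f}")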

  16. Method matters: Understanding diagnostic reliability in DSM-IV and DSM-5.

    Science.gov (United States)

    Chmielewski, Michael; Clark, Lee Anna; Bagby, R Michael; Watson, David

    2015-08-01

    Diagnostic reliability is essential for the science and practice of psychology, in part because reliability is necessary for validity. Recently, the DSM-5 field trials documented lower diagnostic reliability than past field trials and the general research literature, resulting in substantial criticism of the DSM-5 diagnostic criteria. Rather than indicating specific problems with DSM-5, however, the field trials may have revealed long-standing diagnostic issues that have been hidden due to a reliance on audio/video recordings for estimating reliability. We estimated the reliability of DSM-IV diagnoses using both the standard audio-recording method and the test-retest method used in the DSM-5 field trials, in which different clinicians conduct separate interviews. Psychiatric patients (N = 339) were diagnosed using the SCID-I/P; 218 were diagnosed a second time by an independent interviewer. Diagnostic reliability using the audio-recording method (N = 49) was "good" to "excellent" (M κ = .80) and comparable to the DSM-IV field trials' estimates. Reliability using the test-retest method (N = 218) was "poor" to "fair" (M κ = .47) and similar to the DSM-5 field trials' estimates. Despite low test-retest diagnostic reliability, self-reported symptoms were highly stable. Moreover, there was no association between change in self-report and change in diagnostic status. These results demonstrate the influence of method on estimates of diagnostic reliability. (c) 2015 APA, all rights reserved.
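
    The reported reliabilities are chance-corrected agreement statistics; the sketch below computes Cohen's kappa for a hypothetical 2x2 table of present/absent diagnoses from two independent interviewers (the table is invented, not the field-trial data).

      import numpy as np

      table = np.array([[38, 14],     # rows: interviewer 1 (present, absent)
                        [17, 131]])   # columns: interviewer 2 (present, absent)
      n = table.sum()
      p_obs = np.trace(table) / n                          # observed agreement
      p_exp = (table.sum(1) @ table.sum(0)) / n**2         # agreement expected by chance
      kappa = (p_obs - p_exp) / (1.0 - p_exp)
      print(f"observed = {p_obs:.2f}, chance = {p_exp:.2f}, kappa = {kappa:.2f}")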

  17. Decision analytic methods in RODOS

    International Nuclear Information System (INIS)

    Borzenko, V.; French, S.

    1996-01-01

    In the event of a nuclear accident, RODOS seeks to provide decision support at all levels ranging from the largely descriptive to providing a detailed evaluation of the benefits and disadvantages of various countermeasure strategies and ranking them according to the societal preferences as perceived by the decision makers. To achieve this, it must draw upon several decision analytic methods and bring them together in a coherent manner so that the guidance offered to decision makers is consistent from one stage of an accident to the next. The methods used draw upon multi-attribute value and utility theories

  18. A critique of reliability prediction techniques for avionics applications

    Directory of Open Access Journals (Sweden)

    Guru Prasad PANDIAN

    2018-01-01

    Full Text Available Avionics (aeronautics and aerospace) industries must rely on components and systems of demonstrated high reliability. For this, handbook-based methods have traditionally been used to design for reliability, develop test plans, and define maintenance requirements and sustainment logistics. However, these methods have been criticized as flawed and as leading to inaccurate and misleading results. In its report on enhancing defense system reliability, the U.S. National Academy of Sciences has recently discredited these methods, judging the Military Handbook (MIL-HDBK-217) and its progeny as invalid and inaccurate. This paper discusses the issues that arise with the use of handbook-based methods in commercial and military avionics applications. Alternative approaches to reliability design (and its demonstration) are also discussed, including similarity analysis, testing, physics-of-failure, and data analytics for prognostics and systems health management.

  19. New analytical exact solutions of time fractional KdV–KZK equation by Kudryashov methods

    International Nuclear Information System (INIS)

    Saha Ray, S

    2016-01-01

    In this paper, new exact solutions of the time fractional KdV–Khokhlov–Zabolotskaya–Kuznetsov (KdV–KZK) equation are obtained by the classical Kudryashov method and modified Kudryashov method respectively. For this purpose, the modified Riemann–Liouville derivative is used to convert the nonlinear time fractional KdV–KZK equation into the nonlinear ordinary differential equation. In the present analysis, the classical Kudryashov method and modified Kudryashov method are both used successively to compute the analytical solutions of the time fractional KdV–KZK equation. As a result, new exact solutions involving the symmetrical Fibonacci function, hyperbolic function and exponential function are obtained for the first time. The methods under consideration are reliable and efficient, and can be used as an alternative to establish new exact solutions of different types of fractional differential equations arising from mathematical physics. The obtained results are exhibited graphically in order to demonstrate the efficiencies and applicabilities of these proposed methods of solving the nonlinear time fractional KdV–KZK equation. (paper)

  20. Quantifying the measurement uncertainty of results from environmental analytical methods.

    Science.gov (United States)

    Moser, J; Wegscheider, W; Sperka-Gottlieb, C

    2001-07-01

    The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.
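
    The central step of the Guide is combining individual uncertainty components into a combined standard uncertainty; below is a GUM-style first-order sketch for a hypothetical multiplicative measurand, with invented values and component uncertainties.

      import numpy as np

      # (value, standard uncertainty) for each input quantity; all numbers are invented
      c = (2.40, 0.08)      # analyte concentration in the extract, mg/L
      V = (0.100, 0.001)    # extract volume, L
      m = (0.5012, 0.0005)  # sample mass, kg

      y = c[0] * V[0] / m[0]                # measurand y = c*V/m, mg/kg
      # for a pure product/quotient model the relative uncertainties add in quadrature
      u_rel = np.sqrt((c[1] / c[0]) ** 2 + (V[1] / V[0]) ** 2 + (m[1] / m[0]) ** 2)
      u_y = y * u_rel
      print(f"y = {y:.3f} mg/kg, u(y) = {u_y:.3f} mg/kg, U(k=2) = {2 * u_y:.3f} mg/kg")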

  1. Structural reliability analysis and seismic risk assessment

    International Nuclear Information System (INIS)

    Hwang, H.; Reich, M.; Shinozuka, M.

    1984-01-01

    This paper presents a reliability analysis method for the safety evaluation of nuclear structures. By utilizing this method, it is possible to estimate the limit state probability over the lifetime of a structure and to generate analytically the fragility curves for PRA studies. The earthquake ground acceleration, in this approach, is represented by a segment of a stationary Gaussian process with zero mean and a Kanai-Tajimi spectrum. All possible seismic hazard at a site, represented by a hazard curve, is also taken into consideration. Furthermore, the limit state of a structure is analytically defined and the corresponding limit state surface is then established. Finally, the fragility curve is generated and the limit state probability is evaluated. In this paper, using a realistic reinforced concrete containment as an example, results of the reliability analysis of the containment subjected to dead load, live load and ground earthquake acceleration are presented, and a fragility curve for PRA studies is also constructed
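
    A sketch of the two-parameter lognormal fragility form commonly used in such PRA studies, P(failure | a) = Phi(ln(a / A_m) / beta); the median ground-acceleration capacity and the composite log-standard deviation below are assumed values, not the containment results of the paper.

      import numpy as np
      from scipy.stats import norm

      A_m, beta_c = 0.9, 0.35             # median capacity (g) and composite beta, assumed
      a = np.linspace(0.05, 2.0, 6)       # peak ground acceleration (g)
      fragility = norm.cdf(np.log(a / A_m) / beta_c)
      for ai, fi in zip(a, fragility):
          print(f"a = {ai:.2f} g  ->  P(failure) = {fi:.3f}")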

  2. Development, validation and evaluation of an analytical method for the determination of monomeric and oligomeric procyanidins in apple extracts.

    Science.gov (United States)

    Hollands, Wendy J; Voorspoels, Stefan; Jacobs, Griet; Aaby, Kjersti; Meisland, Ane; Garcia-Villalba, Rocio; Tomas-Barberan, Francisco; Piskula, Mariusz K; Mawson, Deborah; Vovk, Irena; Needs, Paul W; Kroon, Paul A

    2017-04-28

    There is a lack of data for individual oligomeric procyanidins in apples and apple extracts. Our aim was to develop, validate and evaluate an analytical method for the separation, identification and quantification of monomeric and oligomeric flavanols in apple extracts. To achieve this, we prepared two types of flavanol extracts from freeze-dried apples; one was an epicatechin-rich extract containing ∼30% (w/w) monomeric (-)-epicatechin which also contained oligomeric procyanidins (Extract A), the second was an oligomeric procyanidin-rich extract depleted of epicatechin (Extract B). The parameters considered for method optimisation were HPLC columns and conditions, sample heating, mass of extract and dilution volumes. The performance characteristics considered for method validation included standard linearity, method sensitivity, precision and trueness. Eight laboratories participated in the method evaluation. Chromatographic separation of the analytes was best achieved utilizing a HILIC column with a binary mobile phase consisting of acidic acetonitrile and acidic aqueous methanol. The final method showed linearity for epicatechin in the range 5-100 μg/mL with a correlation coefficient >0.999. Intra-day and inter-day precision of the analytes ranged from 2 to 6% and 2 to 13%, respectively. Up to dp3, trueness of the method was >95% but decreased with increasing dp. Within-laboratory precision showed RSD values of <5% and <10% for monomers and oligomers, respectively. Between-laboratory precision was 4 and 15% (Extract A) and 7 and 30% (Extract B) for monomers and oligomers, respectively. An analytical method for the separation, identification and quantification of procyanidins in an apple extract was developed, validated and assessed. The results of the inter-laboratory evaluation indicate that the method is reliable and reproducible. Copyright © 2017. Published by Elsevier B.V.

  3. Characterization of Catalytic Fast Pyrolysis Oils: The Importance of Solvent Selection for Analytical Method Development

    Energy Technology Data Exchange (ETDEWEB)

    Ferrell, Jack R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Ware, Anne E [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-02-25

    Two catalytic fast pyrolysis (CFP) oils (bottom/heavy fraction) were analyzed in various solvents that are used in common analytical methods (nuclear magnetic resonance - NMR, gas chromatography - GC, gel permeation chromatography - GPC, thermogravimetric analysis - TGA) for oil characterization and speciation. A more accurate analysis of the CFP oils can be obtained by identification and exploitation of solvent miscibility characteristics. Acetone and tetrahydrofuran can be used to completely solubilize CFP oils for analysis by GC and tetrahydrofuran can be used for traditional organic GPC analysis of the oils. DMSO-d6 can be used to solubilize CFP oils for analysis by 13C NMR. The fractionation of oils into solvents that did not completely solubilize the whole oils showed that miscibility can be related to the oil properties. This allows for solvent selection based on physico-chemical properties of the oils. However, based on semi-quantitative comparisons of the GC chromatograms, the organic solvent fractionation schemes did not speciate the oils based on specific analyte type. On the other hand, chlorinated solvents did fractionate the oils based on analyte size to a certain degree. Unfortunately, like raw pyrolysis oil, the matrix of the CFP oils is complicated and is not amenable to simple liquid-liquid extraction (LLE) or solvent fractionation to separate the oils based on the chemical and/or physical properties of individual components. For reliable analyses, for each analytical method used, it is critical that the bio-oil sample is both completely soluble and also not likely to react with the chosen solvent. The adoption of the standardized solvent selection protocols presented here will allow for greater reproducibility of analysis across different users and facilities.

  4. Further HTGR core support structure reliability studies. Interim report No. 1

    International Nuclear Information System (INIS)

    Platus, D.L.

    1976-01-01

    Results of a continuing effort to investigate high temperature gas cooled reactor (HTGR) core support structure reliability are described. Graphite material and core support structure component physical, mechanical and strength properties required for the reliability analysis are identified. Also described are experimental and associated analytical techniques for determining the required properties, a procedure for determining number of tests required, properties that might be monitored by special surveillance of the core support structure to improve reliability predictions, and recommendations for further studies. Emphasis in the study is directed towards developing a basic understanding of graphite failure and strength degradation mechanisms; and validating analytical methods for predicting strength and strength degradation from basic material properties

  5. Teaching Analytical Method Transfer through Developing and Validating Then Transferring Dissolution Testing Methods for Pharmaceuticals

    Science.gov (United States)

    Kimaru, Irene; Koether, Marina; Chichester, Kimberly; Eaton, Lafayette

    2017-01-01

    Analytical method transfer (AMT) and dissolution testing are important topics required in industry that should be taught in analytical chemistry courses. Undergraduate students in senior level analytical chemistry laboratory courses at Kennesaw State University (KSU) and St. John Fisher College (SJFC) participated in development, validation, and…

  6. Design Protocols and Analytical Strategies that Incorporate Structural Reliability Models

    Science.gov (United States)

    Duffy, Stephen F.

    1997-01-01

    Ceramic matrix composites (CMC) and intermetallic materials (e.g., single crystal nickel aluminide) are high performance materials that exhibit attractive mechanical, thermal and chemical properties. These materials are critically important in advancing certain performance aspects of gas turbine engines. From an aerospace engineer's perspective the new generation of ceramic composites and intermetallics offers a significant potential for raising the thrust/weight ratio and reducing NO(x) emissions of gas turbine engines. These aspects have increased interest in utilizing these materials in the hot sections of turbine engines. However, as these materials evolve and their performance characteristics improve a persistent need exists for state-of-the-art analytical methods that predict the response of components fabricated from CMC and intermetallic material systems. This need provided the motivation for the technology developed under this research effort. Continuous ceramic fiber composites exhibit an increase in work of fracture, which allows for "graceful" rather than catastrophic failure. When loaded in the fiber direction, these composites retain substantial strength capacity beyond the initiation of transverse matrix cracking despite the fact that neither of its constituents would exhibit such behavior if tested alone. As additional load is applied beyond first matrix cracking, the matrix tends to break in a series of cracks bridged by the ceramic fibers. Any additional load is born increasingly by the fibers until the ultimate strength of the composite is reached. Thus modeling efforts supported under this research effort have focused on predicting this sort of behavior. For single crystal intermetallics the issues that motivated the technology development involved questions relating to material behavior and component design. Thus the research effort supported by this grant had to determine the statistical nature and source of fracture in a high strength, Ni

  7. Advances in methods and applications of reliability and safety analysis

    International Nuclear Information System (INIS)

    Fieandt, J.; Hossi, H.; Laakso, K.; Lyytikaeinen, A.; Niemelae, I.; Pulkkinen, U.; Pulli, T.

    1986-01-01

    VTT's know-how in reliability and safety design and analysis techniques has been established over several years of analyzing the reliability of the Finnish nuclear power plants Loviisa and Olkiluoto. This experience has later been applied and developed for use in the process industry, conventional power industry, automation and electronics. VTT develops and transfers methods and tools for reliability and safety analysis to the private and public sectors. The technology transfer takes place in joint development projects with potential users. Several computer-aided methods, such as RELVEC for reliability modelling and analysis, have been developed. The tools developed are today used by major Finnish companies in the fields of automation, nuclear power, shipbuilding and electronics. Development of computer-aided and other methods needed in the analysis of operating experience, reliability or safety continues in a number of research and development projects

  8. Recent developments in analytical detection methods for radiation processed foods

    International Nuclear Information System (INIS)

    Wu Jilan

    1993-01-01

    A short summary of the programmes of 'ADMIT' (FAO/IAEA) and the developments in analytical detection methods for radiation processed foods has been given. It is suggested that for promoting the commercialization of radiation processed foods and controlling its quality, one must pay more attention to the study of analytical detection methods of irradiated food

  9. Reliability-based design optimization via high order response surface method

    International Nuclear Information System (INIS)

    Li, Hong Shuang

    2013-01-01

    To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and uncertainty contribution concept to construct a high order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which can be used to approximate response surface function without cross terms, to identify the highest order of each random variable and to determine the significant variables connected with point estimate method. The cross terms between two significant random variables are added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored in solving RBDO problems. Additionally, a sampling based reliability sensitivity analysis method is employed to reduce the computational effort further when design variables are distributional parameters of input random variables. The proposed methodology is applied on two test problems to validate its accuracy and efficiency. The proposed methodology is more efficient than first order reliability method based RBDO and Monte Carlo simulation based RBDO, and enables the use of RBDO as a practical design tool.
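
    A condensed illustration of the response-surface idea the method builds on: fit a polynomial surrogate to a (hypothetically expensive) limit-state function at a small set of design points, then estimate the failure probability cheaply on the surrogate. Ordinary least squares and plain Monte Carlo are used here, not the Hermite-polynomial, cross-term and point-estimate machinery of the proposed HORSM.

      import numpy as np

      rng = np.random.default_rng(1)

      def g_true(x):                       # stand-in for an expensive limit-state evaluation
          return 3.0 - x[..., 0] ** 2 / 4.0 - x[..., 1]

      def features(x):                     # quadratic basis without cross terms
          x1, x2 = x[..., 0], x[..., 1]
          return np.stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2], axis=-1)

      # design points: centre plus axial and diagonal points at two standard deviations
      pts = np.array([[0, 0], [2, 0], [-2, 0], [0, 2], [0, -2], [2, 2], [-2, -2]], float)
      coef, *_ = np.linalg.lstsq(features(pts), g_true(pts), rcond=None)

      u = rng.standard_normal((200_000, 2))            # standard normal inputs
      pf_surrogate = np.mean(features(u) @ coef <= 0.0)
      pf_direct = np.mean(g_true(u) <= 0.0)            # reference Monte Carlo on the true g
      print(f"P_f on surrogate = {pf_surrogate:.4f}, direct MC = {pf_direct:.4f}")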

  10. Reliability methods in nuclear power plant ageing management

    International Nuclear Information System (INIS)

    Simola, K.

    1999-01-01

    The aim of nuclear power plant ageing management is to maintain an adequate safety level throughout the lifetime of the plant. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent degradation. The phases of ageing analyses are generally the identification of critical components, identification and evaluation of ageing effects, and development of mitigation methods. This thesis focuses on the use of reliability methods and analyses of plant- specific operating experience in nuclear power plant ageing studies. The presented applications and method development have been related to nuclear power plants, but many of the approaches can also be applied outside the nuclear industry. The thesis consists of a summary and seven publications. The summary provides an overview of ageing management and discusses the role of reliability methods in ageing analyses. In the publications, practical applications and method development are described in more detail. The application areas at component and system level are motor-operated valves and protection automation systems, for which experience-based ageing analyses have been demonstrated. Furthermore, Bayesian ageing models for repairable components have been developed, and the management of ageing by improving maintenance practices is discussed. Recommendations for improvement of plant information management in order to facilitate ageing analyses are also given. The evaluation and mitigation of ageing effects on structural components is addressed by promoting the use of probabilistic modelling of crack growth, and developing models for evaluation of the reliability of inspection results. (orig.)

  11. Reliability methods in nuclear power plant ageing management

    Energy Technology Data Exchange (ETDEWEB)

    Simola, K. [VTT Automation, Espoo (Finland). Industrial Automation

    1999-07-01

    The aim of nuclear power plant ageing management is to maintain an adequate safety level throughout the lifetime of the plant. In ageing studies, the reliability of components, systems and structures is evaluated taking into account the possible time-dependent degradation. The phases of ageing analyses are generally the identification of critical components, identification and evaluation of ageing effects, and development of mitigation methods. This thesis focuses on the use of reliability methods and analyses of plant- specific operating experience in nuclear power plant ageing studies. The presented applications and method development have been related to nuclear power plants, but many of the approaches can also be applied outside the nuclear industry. The thesis consists of a summary and seven publications. The summary provides an overview of ageing management and discusses the role of reliability methods in ageing analyses. In the publications, practical applications and method development are described in more detail. The application areas at component and system level are motor-operated valves and protection automation systems, for which experience-based ageing analyses have been demonstrated. Furthermore, Bayesian ageing models for repairable components have been developed, and the management of ageing by improving maintenance practices is discussed. Recommendations for improvement of plant information management in order to facilitate ageing analyses are also given. The evaluation and mitigation of ageing effects on structural components is addressed by promoting the use of probabilistic modelling of crack growth, and developing models for evaluation of the reliability of inspection results. (orig.)

  12. Description of JNC's analytical method and its performance for FBR cores

    International Nuclear Information System (INIS)

    Ishikawa, M.

    2000-01-01

    The description of JNC's analytical method and its performance for FBR cores includes: an outline of JNC's analytical system compared with ERANOS; a standard data base for FBR nuclear design in JNC; the JUPITER critical experiment; details of the analytical method and its effects on JUPITER; performance of the JNC analytical system (effective multiplication factor keff, control rod worth, and sodium void reactivity); design accuracy of a 600 MWe-class FBR core. JNC developed a consistent analytical system for FBR core evaluation, based on the JENDL library, the f-table method, and three-dimensional diffusion/transport theory, which includes comprehensive sensitivity tools to improve the prediction accuracy of core parameters. The JNC system was verified by analysis of the JUPITER critical experiment and other facilities. Its performance can be judged quite satisfactory for FBR-core design work, though there is room for further improvement, such as more detailed treatment of cross-section resonance regions

  13. Extension of the analytic nodal method to four energy groups

    International Nuclear Information System (INIS)

    Parsons, D.K.; Nigg, D.W.

    1985-01-01

    The Analytic Nodal Method is one of several recently-developed coarse mesh numerical methods for efficiently and accurately solving the multidimensional static and transient neutron diffusion equations. This summary describes a mathematically rigorous extension of the Analytic Nodal Method to the frequently more physically realistic four-group case. A few general theoretical considerations are discussed, followed by some calculated results for a typical steady-state two-dimensional PWR quarter core application. 8 refs

  14. Vertical equilibrium with sub-scale analytical methods for geological CO2 sequestration

    KAUST Repository

    Gasda, S. E.; Nordbotten, J. M.; Celia, M. A.

    2009-01-01

    equilibrium with sub-scale analytical method (VESA) combines the flexibility of a numerical method, allowing for heterogeneous and geologically complex systems, with the efficiency and accuracy of an analytical method, thereby eliminating expensive grid

  15. Application of an analytical method for solution of thermal hydraulic conservation equations

    Energy Technology Data Exchange (ETDEWEB)

    Fakory, M.R. [Simulation, Systems & Services Technologies Company (S3 Technologies), Columbia, MD (United States)

    1995-09-01

    An analytical method has been developed and applied for the solution of the two-phase flow conservation equations. Test results for the application of the model to the simulation of BWR transients are presented and compared with the results obtained from application of the explicit method for integration of the conservation equations. The test results show that with application of the analytical method for integration of the conservation equations, the Courant limitation associated with the explicit Euler method of integration was eliminated. The results obtained from application of the analytical method (with large time steps) agreed well with the results obtained from application of the explicit method of integration (with time steps smaller than the size imposed by the Courant limitation). The results demonstrate that application of the analytical approach significantly improves numerical stability and computational efficiency.
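
    A toy illustration of the stability point made above: an explicit Euler update of a stiff linear decay diverges once the time step exceeds the stability bound, while the exact (analytical) update is stable for any step size. The scalar decay equation is only a stand-in for the two-phase conservation equations, and the numbers are arbitrary.

      import numpy as np

      tau, dt, steps = 0.01, 0.05, 40          # time step is 5x the time constant
      u_euler, u_exact = 1.0, 1.0
      for _ in range(steps):
          u_euler += dt * (-u_euler / tau)     # explicit Euler update (unstable for dt > 2*tau)
          u_exact *= np.exp(-dt / tau)         # exact integration of du/dt = -u/tau
      print(f"explicit Euler: {u_euler:.3e} (diverged), analytical: {u_exact:.3e}")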

  16. Development and Validation of Analytical Method for Losartan ...

    African Journals Online (AJOL)

    Development and Validation of Analytical Method for Losartan-Copper Complex Using UV-Vis Spectrophotometry. ... Tropical Journal of Pharmaceutical Research ... Purpose: To develop a new spectrophotometric method for the analysis of losartan potassium in pharmaceutical formulations by making its complex with ...

  17. Structural hybrid reliability index and its convergent solving method based on random–fuzzy–interval reliability model

    Directory of Open Access Journals (Sweden)

    Hai An

    2016-08-01

    Full Text Available Aiming to resolve the problems posed by the variety of uncertainty variables that coexist in engineering structural reliability analysis, a new hybrid reliability index to evaluate structural hybrid reliability, based on the random–fuzzy–interval model, is proposed in this article, and its convergent solving method is also presented. First, the truncated probability reliability model, the fuzzy random reliability model, and the non-probabilistic interval reliability model are introduced. Then, the new hybrid reliability index definition is presented based on the random–fuzzy–interval model. Furthermore, the calculation flowchart of the hybrid reliability index is presented, and the index is solved using a modified limit-step-length iterative algorithm, which ensures convergence. The validity of the convergent algorithm for the hybrid reliability model is verified through calculation examples from the literature. In the end, a numerical example demonstrates that the hybrid reliability index is applicable to the wear reliability assessment of mechanisms, where truncated random variables, fuzzy random variables, and interval variables coexist. The demonstration also shows the good convergence of the iterative algorithm proposed in this article.
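
    For the interval part of such models, a minimal sketch of the non-probabilistic reliability index for a linear limit state with interval variables, eta = midpoint(g) / radius(g), where eta > 1 means the limit state stays positive for every admissible value; the coefficients and interval bounds are invented, and the article's full random–fuzzy–interval combination and convergent iteration are not reproduced.

      import numpy as np

      # linear limit state g(x) = a0 + a.x with interval variables x_i in [lo_i, hi_i]
      a0, a = 20.0, np.array([2.0, -3.0, 1.5])
      lo = np.array([1.0, 2.0, -1.0])
      hi = np.array([3.0, 4.0, 1.0])

      mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
      g_mid = a0 + a @ mid                 # midpoint of the interval of g
      g_rad = np.abs(a) @ rad              # radius of the interval of g
      eta = g_mid / g_rad                  # non-probabilistic reliability index
      print(f"g lies in [{g_mid - g_rad:.1f}, {g_mid + g_rad:.1f}], eta = {eta:.2f}")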

  18. Comparison of Model Reliabilities from Single-Step and Bivariate Blending Methods

    DEFF Research Database (Denmark)

    Taskinen, Matti; Mäntysaari, Esa; Lidauer, Martin

    2013-01-01

    Model based reliabilities in genetic evaluation are compared between three methods: animal model BLUP, single-step BLUP, and bivariate blending after genomic BLUP. The original bivariate blending is revised in this work to better account animal models. The study data is extracted from...... be calculated. Model reliabilities by the single-step and the bivariate blending methods were higher than by animal model due to genomic information. Compared to the single-step method, the bivariate blending method reliability estimates were, in general, lower. Computationally bivariate blending method was......, on the other hand, lighter than the single-step method....

  19. Comparison of soil sampling and analytical methods for asbestos at the Sumas Mountain Asbestos Site-Working towards a toolbox for better assessment.

    Science.gov (United States)

    Wroble, Julie; Frederick, Timothy; Frame, Alicia; Vallero, Daniel

    2017-01-01

    Established soil sampling methods for asbestos are inadequate to support risk assessment and risk-based decision making at Superfund sites due to difficulties in detecting asbestos at low concentrations and difficulty in extrapolating soil concentrations to air concentrations. Environmental Protection Agency (EPA)'s Office of Land and Emergency Management (OLEM) currently recommends the rigorous process of Activity Based Sampling (ABS) to characterize site exposures. The purpose of this study was to compare three soil analytical methods and two soil sampling methods to determine whether one method, or combination of methods, would yield more reliable soil asbestos data than other methods. Samples were collected using both traditional discrete ("grab") samples and incremental sampling methodology (ISM). Analyses were conducted using polarized light microscopy (PLM), transmission electron microscopy (TEM) methods or a combination of these two methods. Data show that the fluidized bed asbestos segregator (FBAS) followed by TEM analysis could detect asbestos at locations that were not detected using other analytical methods; however, this method exhibited high relative standard deviations, indicating the results may be more variable than other soil asbestos methods. The comparison of samples collected using ISM versus discrete techniques for asbestos resulted in no clear conclusions regarding preferred sampling method. However, analytical results for metals clearly showed that measured concentrations in ISM samples were less variable than discrete samples.

  20. Glycoprotein Enrichment Analytical Techniques: Advantages and Disadvantages.

    Science.gov (United States)

    Zhu, R; Zacharias, L; Wooding, K M; Peng, W; Mechref, Y

    2017-01-01

    Protein glycosylation is one of the most important posttranslational modifications, and numerous biological functions are related to it. However, analytical challenges remain in glycoprotein analysis. To overcome these challenges, many analytical techniques have been developed in recent years. Enrichment methods were used to improve detection sensitivity, while HPLC and mass spectrometry methods were developed to facilitate the separation of glycopeptides/proteins and to enhance detection, respectively. Fragmentation techniques applied in modern mass spectrometers allow the structural interpretation of glycopeptides/proteins, while automated software tools have started replacing manual processing to improve the reliability and throughput of the analysis. In this chapter, the current methodologies of glycoprotein analysis are discussed. Multiple analytical techniques are compared, and the advantages and disadvantages of each technique are highlighted. © 2017 Elsevier Inc. All rights reserved.

  1. Analytical modeling of nuclear power station operator reliability

    International Nuclear Information System (INIS)

    Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    The operator-plant interface is a critical component of power stations which requires the formulation of mathematical models to be applied in plant reliability analysis. The human model introduced here is based on cybernetic interactions and allows for use of available data from psychological experiments, hot and cold training and normal operation. The operator model is identified and integrated in the control and protection systems. The availability and reliability are given for different segments of the operator task and for specific periods of the operator life: namely, training, operation and vigilance or near retirement periods. The results can be easily and directly incorporated in system reliability analysis. (author)

  2. Mathematical methods for physical and analytical chemistry

    CERN Document Server

    Goodson, David Z

    2011-01-01

    Mathematical Methods for Physical and Analytical Chemistry presents mathematical and statistical methods to students of chemistry at the intermediate, post-calculus level. The content includes a review of general calculus; a review of numerical techniques often omitted from calculus courses, such as cubic splines and Newton's method; a detailed treatment of statistical methods for experimental data analysis; complex numbers; extrapolation; linear algebra; and differential equations. With numerous example problems and helpful anecdotes, this text gives chemistry students the mathematical

  3. Analytical Methods for Biomass Characterization during Pretreatment and Bioconversion

    Energy Technology Data Exchange (ETDEWEB)

    Pu, Yunqiao [ORNL; Meng, Xianzhi [University of Tennessee, Knoxville (UTK); Yoo, Chang Geun; Li, Mi; Ragauskas, Arthur J [ORNL

    2016-01-01

    Lignocellulosic biomass has been introduced as a promising resource for alternative fuels and chemicals because of its abundance and its potential to complement petroleum resources. Biomass is a complex biopolymer, and its compositional and structural characteristics vary widely depending on species as well as growth environment. Because of the complexity and variety of biomass, understanding its physicochemical characteristics is a key to effective biomass utilization. Characterization not only provides critical information about biomass during pretreatment and bioconversion, but also gives valuable insights into how to utilize it. To better understand biomass characteristics, a good grasp and proper selection of analytical methods are necessary. This chapter introduces existing analytical approaches that are widely employed for biomass characterization during pretreatment and conversion processes. Diverse analytical methods using Fourier transform infrared (FTIR) spectroscopy, gel permeation chromatography (GPC), and nuclear magnetic resonance (NMR) spectroscopy for biomass characterization are reviewed. In addition, methods for assessing biomass accessibility by analyzing the surface properties of biomass are also summarized in this chapter.

  4. Statistical models and methods for reliability and survival analysis

    CERN Document Server

    Couallier, Vincent; Huber-Carol, Catherine; Mesbah, Mounir; Huber -Carol, Catherine; Limnios, Nikolaos; Gerville-Reache, Leo

    2013-01-01

    Statistical Models and Methods for Reliability and Survival Analysis brings together contributions by specialists in statistical theory as they discuss their applications providing up-to-date developments in methods used in survival analysis, statistical goodness of fit, stochastic processes for system reliability, amongst others. Many of these are related to the work of Professor M. Nikulin in statistics over the past 30 years. The authors gather together various contributions with a broad array of techniques and results, divided into three parts - Statistical Models and Methods, Statistical

  5. Case study on the use of PSA methods: Human reliability analysis

    International Nuclear Information System (INIS)

    1991-04-01

    The overall objective of treating human reliability in a probabilistic safety analysis is to ensure that the key human interactions of typical crews are accurately and systematically incorporated into the study in a traceable manner. An additional objective is to make the human reliability analysis (HRA) as realistic as possible, taking into account the emergency procedures, the man-machine interface, the focus of the training process, and the knowledge and experience of the crews. Section 3 of the paper gives an overview of this analytical process, which leads to three more detailed example problems described in Section 4. Section 5 discusses a peer review process. References useful in performing HRAs are presented. In addition, appendices are provided for definitions, selected data and a generic list of performance shaping factors. 35 refs, figs and tabs

  6. Hanford environmental analytical methods: Methods as of March 1990

    International Nuclear Information System (INIS)

    Goheen, S.C.; McCulloch, M.; Daniel, J.L.

    1993-05-01

    This paper from the analytical laboratories at Hanford describes the method used to measure pH of single-shell tank core samples. Sludge or solid samples are mixed with deionized water. The pH electrode used combines both a sensor and reference electrode in one unit. The meter amplifies the input signal from the electrode and displays the pH visually

  7. Analytical methods for determination of mycotoxins: a review.

    Science.gov (United States)

    Turner, Nicholas W; Subrahmanyam, Sreenath; Piletsky, Sergey A

    2009-01-26

    Mycotoxins are small (MW approximately 700), toxic chemical products formed as secondary metabolites by a few fungal species that readily colonise crops and contaminate them with toxins in the field or after harvest. Ochratoxins and aflatoxins are mycotoxins of major significance, and hence there has been significant research on a broad range of analytical and detection techniques that could be useful and practical. Due to the variety of structures of these toxins, it is impossible to use one standard technique for analysis and/or detection. Practical requirements for high-sensitivity analysis and the need for a specialist laboratory setting create challenges for routine analysis. Several existing analytical techniques, which offer flexible and broad-based methods of analysis and in some cases detection, are discussed in this manuscript. A number of methods are used, many of them lab-based, but to our knowledge no single technique stands out above the rest, although analytical liquid chromatography, commonly linked with mass spectrometry, is likely to be popular. This review discusses (a) sample pre-treatment methods such as liquid-liquid extraction (LLE), supercritical fluid extraction (SFE) and solid phase extraction (SPE), (b) separation methods such as thin-layer chromatography (TLC), high performance liquid chromatography (HPLC), gas chromatography (GC) and capillary electrophoresis (CE), and (c) others such as ELISA. Current trends, advantages and disadvantages, and future prospects of these methods are also discussed.

  8. Results of a Demonstration Assessment of Passive System Reliability Utilizing the Reliability Method for Passive Systems (RMPS)

    Energy Technology Data Exchange (ETDEWEB)

    Bucknor, Matthew; Grabaskas, David; Brunett, Acacia; Grelle, Austin

    2015-04-26

    Advanced small modular reactor designs include many advantageous design features such as passively driven safety systems that are arguably more reliable and cost effective relative to conventional active systems. Despite their attractiveness, a reliability assessment of passive systems can be difficult using conventional reliability methods due to the nature of passive systems. Simple deviations in boundary conditions can induce functional failures in a passive system, and intermediate or unexpected operating modes can also occur. As part of an ongoing project, Argonne National Laboratory is investigating various methodologies to address passive system reliability. The Reliability Method for Passive Systems (RMPS), a systematic approach for examining reliability, is one technique chosen for this analysis. This methodology is combined with the Risk-Informed Safety Margin Characterization (RISMC) approach to assess the reliability of a passive system and the impact of its associated uncertainties. For this demonstration problem, an integrated plant model of an advanced small modular pool-type sodium fast reactor with a passive reactor cavity cooling system is subjected to a station blackout using RELAP5-3D. This paper discusses important aspects of the reliability assessment, including deployment of the methodology, the uncertainty identification and quantification process, and identification of key risk metrics.

  9. Assessment of modern methods of human factor reliability analysis in PSA studies

    International Nuclear Information System (INIS)

    Holy, J.

    2001-12-01

    The report is structured as follows: Classical terms and objects (Probabilistic safety assessment as a framework for human reliability assessment; Human failure within the PSA model; Basic types of operator failure modelled in a PSA study and analyzed by HRA methods; Qualitative analysis of human reliability; Quantitative analysis of human reliability used; Process of analysis of nuclear reactor operator reliability in a PSA study); New terms and objects (Analysis of dependences; Errors of omission; Errors of commission; Error forcing context); and Overview and brief assessment of human reliability analysis (Basic characteristics of the methods; Assets and drawbacks of the use of each of HRA method; History and prospects of the use of the methods). (P.A.)

  10. Food irradiation. An update of legal and analytical aspects

    International Nuclear Information System (INIS)

    Masotti, P.; Zonta, F.

    1999-01-01

    A new European directive concerning ionising radiation treatment of foodstuffs has recently been adopted, although national laws may continue to be applied at least until 31 December 2000. A brief updated review dealing with the legal and analytical aspects of food irradiation is presented. The legal status of food irradiation presently in force in Italy, in the European Union and in the USA is discussed. Some of the most used and reliable analytical methods for detecting irradiated foodstuffs, with special reference to the standardised methods of the European Committee for Standardization, are listed.

  11. Method for assessing reliability of a network considering probabilistic safety assessment

    International Nuclear Information System (INIS)

    Cepin, M.

    2005-01-01

    A method for assessing the reliability of a network is developed which uses the features of fault tree analysis. The method is formulated so that enlarging the network under consideration does not require a significant enlargement of the model. It is applied to small example networks consisting of a small number of nodes and connections. The results give the network reliability. They identify equipment that must be carefully maintained so that network reliability is not reduced, as well as equipment that is a candidate for redundancy, since redundancy would improve network reliability significantly. (author)
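
    As a rough illustration of the kind of quantity such a fault-tree-based network model exposes, the sketch below evaluates the unreliability of a small hypothetical network from its minimal cut sets, comparing the first-order (rare-event) approximation with inclusion-exclusion. The component unavailabilities and cut sets are invented for illustration and are not taken from the paper.

    ```python
    import math
    from itertools import combinations

    # Component unavailabilities (illustrative values)
    q = {"A": 1e-3, "B": 2e-3, "C": 5e-4, "D": 1e-3}

    # Hypothetical minimal cut sets: the network fails if every component
    # in at least one cut set is unavailable at the same time.
    cut_sets = [{"A", "B"}, {"C", "D"}, {"A", "D"}]

    def cut_probability(cs):
        """Probability that all components of one cut set are down (independence assumed)."""
        return math.prod(q[c] for c in cs)

    # First-order (rare-event) approximation: sum of the cut-set probabilities.
    q_approx = sum(cut_probability(cs) for cs in cut_sets)

    # Exact unavailability by inclusion-exclusion over the cut-set events.
    q_exact = 0.0
    for k in range(1, len(cut_sets) + 1):
        for combo in combinations(cut_sets, k):
            q_exact += (-1) ** (k + 1) * math.prod(q[c] for c in set().union(*combo))

    print(f"first-order approximation: {q_approx:.3e}")
    print(f"inclusion-exclusion exact: {q_exact:.3e}")
    ```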

  12. Reliability analysis of self-actuated shutdown system

    International Nuclear Information System (INIS)

    Itooka, S.; Kumasaka, K.; Okabe, A.; Satoh, K.; Tsukui, Y.

    1991-01-01

    An analytical study was performed of the reliability of a self-actuated shutdown system (SASS) under the unprotected loss of flow (ULOF) event in a typical loop-type liquid metal fast breeder reactor (LMFBR), using the response-surface Monte Carlo analysis method. Dominant parameters for the SASS, such as Curie point characteristics, subassembly outlet coolant temperature, and electromagnetic surface condition, were selected, and their probability density functions (PDFs) were determined from design study information and experimental data. To obtain the response surface function (RSF) for the maximum coolant temperature, transient analyses of ULOF were performed, with the analytical cases chosen by the experimental design method; the RSF was then derived by multi-variable regression analysis. The unreliability of the SASS was evaluated as the probability that the maximum coolant temperature exceeds an acceptable level, employing a Monte Carlo calculation using the above PDFs and RSF. Sensitivities to the dominant parameters were compared in this study, and the dispersion of the subassembly outlet coolant temperature near the SASS was found to be one of the most sensitive parameters. Fault tree analysis was performed using this value for the SASS in order to evaluate the shutdown system reliability. As a result of this study, the effectiveness of the SASS in improving the reliability of the LMFBR shutdown system was analytically confirmed. This study was performed as part of joint research and development projects for DFBR under the sponsorship of the nine Japanese electric power companies, Electric Power Development Company and the Japan Atomic Power Company. (author)
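
    The response-surface Monte Carlo idea described above can be sketched in a few lines: sample the dominant parameters from their PDFs, evaluate a fitted response surface for the maximum coolant temperature, and count exceedances of the acceptance limit. The parameter distributions, the quadratic response surface, and the temperature limit below are all illustrative assumptions, not the SASS design data.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000

    # Illustrative PDFs for three dominant parameters (assumed, not SASS design data):
    curie   = rng.normal(610.0, 5.0, size=n)     # Curie-point temperature [deg C]
    outlet  = rng.normal(550.0, 8.0, size=n)     # subassembly outlet coolant temperature [deg C]
    surface = rng.uniform(0.9, 1.0, size=n)      # electromagnetic-surface holding factor [-]

    # Hypothetical quadratic response surface function (RSF) for the maximum
    # coolant temperature, standing in for the fitted regression.
    t_max = (700.0
             + 0.8 * (outlet - 550.0)
             - 0.5 * (curie - 610.0)
             + 120.0 * (1.0 - surface)
             + 0.02 * (outlet - 550.0) ** 2)

    limit = 730.0                                # acceptable maximum coolant temperature [deg C]
    unreliability = np.mean(t_max > limit)       # P(T_max exceeds the limit)
    print(f"estimated unreliability: {unreliability:.2e}")
    ```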

  13. Big data analytics methods and applications

    CERN Document Server

    Rao, BLS; Rao, SB

    2016-01-01

    This book has a collection of articles written by Big Data experts to describe some of the cutting-edge methods and applications from their respective areas of interest, and provides the reader with a detailed overview of the field of Big Data Analytics as it is practiced today. The chapters cover technical aspects of key areas that generate and use Big Data such as management and finance; medicine and healthcare; genome, cytome and microbiome; graphs and networks; Internet of Things; Big Data standards; bench-marking of systems; and others. In addition to different applications, key algorithmic approaches such as graph partitioning, clustering and finite mixture modelling of high-dimensional data are also covered. The varied collection of themes in this volume introduces the reader to the richness of the emerging field of Big Data Analytics.

  14. An Investigation to Manufacturing Analytical Services Composition using the Analytical Target Cascading Method.

    Science.gov (United States)

    Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas

    2017-01-01

    As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. The trend is attractive for analytical problems in the manufacturing system design and performance improvement domain because 1) finding a global optimum for the system is a complex problem and 2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems by independent services can result in a sub-optimal solution at the system level. This paper investigates the technique called Analytical Target Cascading (ATC) to coordinate the optimization of loosely-coupled sub-problems, each of which may be modularly formulated by differing departments and solved by modular analytical services. The result demonstrates that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular executions while allowing easier management of the problem formulation.

  15. Exact combinatorial reliability analysis of dynamic systems with sequence-dependent failures

    International Nuclear Information System (INIS)

    Xing Liudong; Shrestha, Akhilesh; Dai Yuanshun

    2011-01-01

    Many real-life fault-tolerant systems are subject to sequence-dependent failure behavior, in which the order in which the fault events occur matters to the system reliability. Such systems can be modeled by dynamic fault trees (DFT) with priority-AND (pAND) gates. Existing approaches for the reliability analysis of systems subject to sequence-dependent failures are typically state-space-based, simulation-based or inclusion-exclusion-based methods. Those methods either suffer from the state-space explosion problem or require long computation times, especially when results with a high degree of accuracy are desired. In this paper, an analytical method based on sequential binary decision diagrams is proposed. The proposed approach can analyze the exact reliability of non-repairable dynamic systems subject to sequence-dependent failure behavior. Moreover, the approach is combinatorial and applicable to systems with arbitrary component time-to-failure distributions. The application and advantages of the proposed approach are illustrated through the analysis of several examples. - Highlights: → We analyze the sequence-dependent failure behavior using combinatorial models. → The method has no limitation on the type of time-to-failure distributions. → The method is analytical and based on sequential binary decision diagrams (SBDD). → The method is computationally more efficient than existing methods.
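
    For intuition about what a pAND gate quantifies, the sketch below evaluates the standard closed-form probability that component A fails before component B with both failing within the mission time, for independent exponential components, and cross-checks it by Monte Carlo. This is a textbook special case, not the paper's sequential-BDD algorithm; the failure rates and mission time are illustrative.

    ```python
    import math
    import numpy as np

    lam_a, lam_b, t = 1e-3, 2e-3, 1000.0   # failure rates [1/h] and mission time [h] (illustrative)

    # Closed-form probability of the pAND event "A fails, then B fails, both within t"
    # for independent exponential components:
    p_analytic = (1 - math.exp(-lam_b * t)) \
        - lam_b / (lam_a + lam_b) * (1 - math.exp(-(lam_a + lam_b) * t))

    # Monte Carlo cross-check of the same event.
    rng = np.random.default_rng(1)
    ta = rng.exponential(1 / lam_a, size=2_000_000)
    tb = rng.exponential(1 / lam_b, size=2_000_000)
    p_mc = np.mean((ta < tb) & (tb <= t))

    print(f"analytic: {p_analytic:.5f}   Monte Carlo: {p_mc:.5f}")
    ```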

  16. Parameters estimation of the single and double diode photovoltaic models using a Gauss–Seidel algorithm and analytical method: A comparative study

    International Nuclear Information System (INIS)

    Et-torabi, K.; Nassar-eddine, I.; Obbadi, A.; Errami, Y.; Rmaily, R.; Sahnoun, S.; El fajri, A.; Agunaou, M.

    2017-01-01

    Highlights: • Comparative study of two methods: a Gauss–Seidel method and an analytical method. • Five models are implemented to estimate the five parameters of the single diode. • Two models are used to estimate the seven parameters of the double diode. • The parameters are estimated under changing environmental conditions. • The aim is to choose the method/model combination most adequate for each PV module technology. - Abstract: In the field of photovoltaic (PV) panel modeling, this paper presents a comparative study of two parameter estimation methods: the iterative Gauss–Seidel method, applied to the single diode model, and the analytical method, used on the double diode model. These parameter estimation methods are based on the manufacturers' datasheets. They are also tested on three PV modules of different technologies: multicrystalline (Kyocera KC200GT), monocrystalline (Shell SQ80), and thin film (Shell ST40). For the iterative method, five existing mathematical models, classified from 1 to 5, are used to estimate the parameters of these PV modules under varying environmental conditions. Only two of these models are used for the analytical method. Each model is based on combining the expressions for the photocurrent and the reverse saturation current in terms of temperature and irradiance. In addition, the results of the models' simulation are compared with the experimental data obtained from the PV modules' datasheets in order to evaluate the accuracy of the models. The simulation shows that the obtained I-V characteristics match the experimental data. In order to validate the reliability of the two methods, both the Absolute Error (AE) and the Root Mean Square Error (RMSE) were calculated. The results suggest that the analytical method can be very useful for monocrystalline and multicrystalline modules, but for the thin film module the iterative method is the most suitable.
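
    The single-diode model leads to an implicit I-V equation, which a Gauss–Seidel-style fixed-point iteration can solve once the five parameters are known. The sketch below only illustrates that iterative solution step with assumed module-level parameter values; it does not reproduce the paper's parameter extraction from datasheets.

    ```python
    import math

    # Assumed module-level single-diode parameters (illustrative, not a datasheet fit)
    I_ph, I_0 = 8.2, 1e-7        # photocurrent [A], diode saturation current [A]
    R_s, R_sh = 0.35, 300.0      # series / shunt resistance [ohm]
    n, N_s, V_t = 1.3, 54, 0.0257
    a = n * N_s * V_t            # modified ideality factor for the whole module

    def module_current(V, tol=1e-9, max_iter=500):
        """Solve the implicit single-diode equation for I at module voltage V by
        simple fixed-point (Gauss-Seidel-style) iteration.  Near open circuit the
        plain iteration converges slowly and may need damping."""
        I = I_ph                                   # short-circuit current as initial guess
        for _ in range(max_iter):
            I_new = I_ph - I_0 * (math.exp((V + I * R_s) / a) - 1.0) - (V + I * R_s) / R_sh
            if abs(I_new - I) < tol:
                return I_new
            I = I_new
        return I

    for V in (0.0, 10.0, 20.0, 26.0, 30.0):
        print(f"V = {V:5.1f} V  ->  I = {module_current(V):6.3f} A")
    ```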

  17. Assessment of the reliability of ultrasonic inspection methods

    International Nuclear Information System (INIS)

    Haines, N.F.; Langston, D.B.; Green, A.J.; Wilson, R.

    1982-01-01

    The reliability of NDT techniques has remained an open question for many years. A reliable technique may be defined as one that, when rigorously applied by a number of inspection teams, consistently finds and then correctly sizes all defects of concern. In this paper we report an assessment of the reliability of defect detection by manual ultrasonic methods applied to the inspection of thick-section pressure vessel weldments. Initially we consider the available data relating to the inherent physical capability of ultrasonic techniques to detect cracks in weldments, and then, independently, we assess the likely variability in team-to-team performance when several teams are asked to follow the same specified test procedure. The two aspects of 'capability' and 'variability' are brought together to provide quantitative estimates of the overall reliability of ultrasonic inspection of thick-section pressure vessel weldments based on currently existing data. The final section of the paper considers current research programmes on reliability and presents a view on how these will help to further improve NDT reliability. (author)

  18. An analytic data analysis method for oscillatory slug tests.

    Science.gov (United States)

    Chen, Chia-Shyun

    2006-01-01

    An analytical data analysis method is developed for slug tests in partially penetrating wells in confined or unconfined aquifers of high hydraulic conductivity. As adapted from the van der Kamp method, the determination of the hydraulic conductivity is based on the occurrence times and displacements of the extreme points measured from the oscillatory data and their theoretical counterparts available in the literature. This method is applied to two sets of slug test response data presented by Butler et al.: one set shows slow damping with seven discernible extreme points, and the other shows rapid damping with three extreme points. The estimates of the hydraulic conductivity obtained by the analytic method are in good agreement with those determined by an available curve-matching technique.
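
    The core of such an analysis is extracting the occurrence times and displacements of the extreme points from the oscillatory record. A minimal sketch, using a synthetic damped-cosine response in place of field data, is given below; the final van der Kamp-type mapping from log decrement and frequency to hydraulic conductivity is not reproduced.

    ```python
    import numpy as np

    # Synthetic oscillatory slug-test record (illustrative): a damped cosine standing
    # in for the measured water-level displacement.
    t = np.linspace(0.0, 60.0, 1201)                    # time [s]
    w = 0.45 * np.exp(-0.06 * t) * np.cos(0.9 * t)      # displacement [m]

    # Locate interior extreme points from sign changes of the first difference.
    dw = np.diff(w)
    idx = np.where(np.sign(dw[:-1]) != np.sign(dw[1:]))[0] + 1
    t_ext, w_ext = t[idx], w[idx]

    # Logarithmic decrement between extrema one full period apart, and the angular
    # frequency from the spacing of successive extrema (half a period each).
    delta = np.log(abs(w_ext[0] / w_ext[2]))
    period = 2.0 * np.mean(np.diff(t_ext))
    omega = 2.0 * np.pi / period
    print(f"log decrement = {delta:.3f}, angular frequency = {omega:.3f} rad/s")
    # The van der Kamp relations would then map (delta, omega) and the well geometry
    # to hydraulic conductivity; that step is not reproduced here.
    ```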

  19. A SIMPLE ANALYTICAL METHOD TO DETERMINE SOLAR ENERGETIC PARTICLES' MEAN FREE PATH

    International Nuclear Information System (INIS)

    He, H.-Q.; Qin, G.

    2011-01-01

    To obtain the mean free path of solar energetic particles (SEPs) for a solar event, one usually has to fit the time profiles of both flux and anisotropy from spacecraft observations to numerical simulations of SEP transport processes. This can be called a simulation method. However, a reasonably good fit requires many simulations, which demand a large amount of computational resources. Sometimes it is necessary to obtain the mean free path of SEPs quickly, for example in space weather practice. Recently, Shalchi et al. provided an approximate analytical formula for the SEP anisotropy time profile as a function of the particles' mean free path for impulsive events. In this paper, we determine the SEP mean free path by fitting the anisotropy time profiles from Shalchi et al.'s analytical formula to spacecraft observations. This new method can be called an analytical method. In addition, we obtain the SEP mean free path with the traditional simulation methods. Finally, we compare the mean free path obtained with the simulation method to that of the analytical method to show that the analytical method, with some minor modifications, can give a good, quick approximation of the SEP mean free path for impulsive events.

  20. New Analytic Solution to the Lane-Emden Equation of Index 2

    Directory of Open Access Journals (Sweden)

    S. S. Motsa

    2012-01-01

    Full Text Available We present two new analytic methods that are used for solving initial value problems that model polytropic and stellar structures in astrophysics and mathematical physics. The applicability, effectiveness, and reliability of the methods are assessed on the Lane-Emden equation, which is described by a second-order nonlinear differential equation. The results obtained in this work are also compared with the numerical results of Horedt (1986), which are widely used as a benchmark for testing new methods of solution. Good agreement is observed between the present results and the numerical results. Comparison is also made between the proposed new methods and existing analytical methods, and it is found that the new methods are more efficient and have several advantages over some of the existing analytical methods.
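
    For readers who want the numerical benchmark referred to above, a standard Runge-Kutta integration of the Lane-Emden equation of index 2 is sketched below; a series expansion near the origin avoids the singularity at xi = 0, and the first zero xi_1 can be compared with the tabulated value of about 4.35.

    ```python
    import numpy as np

    def lane_emden_first_zero(n=2.0, h=1e-3, xi_max=15.0):
        """Integrate theta'' + (2/xi) theta' + theta**n = 0, theta(0)=1, theta'(0)=0,
        with RK4, starting from a series expansion to avoid the singularity at xi = 0.
        Returns the first zero xi_1 of theta."""
        xi = 1e-3
        y = np.array([1.0 - xi**2 / 6.0 + n * xi**4 / 120.0,   # theta (series start)
                      -xi / 3.0 + n * xi**3 / 30.0])           # theta'

        def rhs(xi, y):
            th, dth = y
            return np.array([dth, -max(th, 0.0) ** n - 2.0 / xi * dth])

        while y[0] > 0.0 and xi < xi_max:
            k1 = rhs(xi, y)
            k2 = rhs(xi + h / 2, y + h / 2 * k1)
            k3 = rhs(xi + h / 2, y + h / 2 * k2)
            k4 = rhs(xi + h, y + h * k3)
            y_new = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            if y_new[0] <= 0.0:                                # interpolate the zero crossing
                return xi + h * y[0] / (y[0] - y_new[0])
            xi, y = xi + h, y_new
        return xi

    print(f"first zero for n = 2: xi_1 ~ {lane_emden_first_zero():.4f}")   # ~4.3529 in the literature
    ```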

  1. RELIABILITY ASSESSMENT OF ENTROPY METHOD FOR SYSTEM CONSISTED OF IDENTICAL EXPONENTIAL UNITS

    Institute of Scientific and Technical Information of China (English)

    Sun Youchao; Shi Jun

    2004-01-01

    Reliability assessment across the two adjacent levels of unit and system is the most important part of the multi-level reliability synthesis of complex systems. Introducing information theory into system reliability assessment and using the additive property of information quantity together with the principle of equivalence of information quantity, an entropy method of data information conversion is presented for systems consisting of identical exponential units. The basic conversion formulae of the entropy method for unit test data are derived from the principle of information quantity equivalence. General models of entropy-method synthesis assessment for approximate lower limits of system reliability are established according to the fundamental principle of unit reliability assessment. The applications of the entropy method are discussed by way of practical examples. Compared with the traditional methods, the entropy method is found to be valid and practicable, and the assessment results are very satisfactory.

  2. Critical node treatment in the analytic function expansion method for Pin Power Reconstruction

    International Nuclear Information System (INIS)

    Gao, Z.; Xu, Y.; Downar, T.

    2013-01-01

    Pin Power Reconstruction (PPR) was implemented in PARCS using the eight-term analytic function expansion method (AFEN). This method has been demonstrated to be both accurate and efficient. However, similar to all methods involving analytic functions, such as the analytic nodal method (ANM) and AFEN for the nodal solution, the use of AFEN for PPR also has a potential numerical issue with critical nodes. The conventional analytic functions are trigonometric or hyperbolic sine or cosine functions with an angular frequency proportional to the buckling. For a critical node the buckling is zero, so the sine functions become zero and the cosine functions become unity. In this case, the eight terms of the analytic functions are no longer distinguishable from each other, and their corresponding coefficients can no longer be determined uniquely. The flux distribution of a critical node can be linear, while the conventional analytic functions can only express a uniform distribution. If there is a critical or near-critical node in a plane, the reconstructed pin power distribution often shows negative or very large values with the conventional method. In this paper, we propose a new method to avoid the numerical problem with critical nodes, which uses modified trigonometric or hyperbolic sine functions defined as the ratio of the trigonometric or hyperbolic sine to its angular frequency. If no critical or near-critical nodes are present, the new pin power reconstruction method with modified analytic functions is equivalent to the conventional one. The new method is demonstrated using the L336C5 benchmark problem. (authors)

  3. Critical node treatment in the analytic function expansion method for Pin Power Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Z. [Rice University, MS 318, 6100 Main Street, Houston, TX 77005 (United States); Xu, Y. [Argonne National Laboratory, 9700 South Case Ave., Argonne, IL 60439 (United States); Downar, T. [Department of Nuclear Engineering, University of Michigan, 2355 Bonisteel blvd., Ann Arbor, MI 48109 (United States)

    2013-07-01

    Pin Power Reconstruction (PPR) was implemented in PARCS using the eight-term analytic function expansion method (AFEN). This method has been demonstrated to be both accurate and efficient. However, similar to all methods involving analytic functions, such as the analytic nodal method (ANM) and AFEN for the nodal solution, the use of AFEN for PPR also has a potential numerical issue with critical nodes. The conventional analytic functions are trigonometric or hyperbolic sine or cosine functions with an angular frequency proportional to the buckling. For a critical node the buckling is zero, so the sine functions become zero and the cosine functions become unity. In this case, the eight terms of the analytic functions are no longer distinguishable from each other, and their corresponding coefficients can no longer be determined uniquely. The flux distribution of a critical node can be linear, while the conventional analytic functions can only express a uniform distribution. If there is a critical or near-critical node in a plane, the reconstructed pin power distribution often shows negative or very large values with the conventional method. In this paper, we propose a new method to avoid the numerical problem with critical nodes, which uses modified trigonometric or hyperbolic sine functions defined as the ratio of the trigonometric or hyperbolic sine to its angular frequency. If no critical or near-critical nodes are present, the new pin power reconstruction method with modified analytic functions is equivalent to the conventional one. The new method is demonstrated using the L336C5 benchmark problem. (authors)
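
    The fix described above amounts to replacing sin(Bx) by sin(Bx)/B, which tends to x as the frequency B goes to zero, so a critical node still admits a linear flux shape. A minimal sketch of such a modified basis function (with a Taylor fallback near B = 0) follows; the values are illustrative and the full eight-term AFEN expansion is not reproduced.

    ```python
    import math

    def modified_sin(B, x, eps=1e-6):
        """Modified basis function sin(B*x)/B used in place of sin(B*x).
        As the buckling-related frequency B -> 0 (a critical node) it tends to x,
        so the expansion can still represent a linear flux shape and the
        coefficients remain uniquely determined."""
        if abs(B) < eps:
            # Taylor expansion: sin(B*x)/B = x * (1 - (B*x)**2/6 + ...)
            return x * (1.0 - (B * x) ** 2 / 6.0)
        return math.sin(B * x) / B

    x = 0.7
    for B in (1.0, 1e-2, 1e-4, 0.0):
        print(f"B = {B:8.1e}  ->  sin(Bx)/B = {modified_sin(B, x):.6f}   (limit is x = {x})")
    ```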

  4. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. The uncertainty could be due to variations in the geometry of the part, in material properties, or to a lack of knowledge about the phenomena being modeled. Deterministic design optimization does not take uncertainty into account, and worst-case scenario assumptions lead to vastly over-conservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions in the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure of optimization and iterative probabilistic assessment, which results in high computational demand. This demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the Sequential Optimization and Reliability Assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment, which are decoupled within each cycle. This leads to rapid improvement of the design from one cycle to the next and an increase in computational efficiency. This paper demonstrates the effectiveness of the SORA method when applied to the design of a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive finite element simulations.

  5. Interim reliability evaluation program, Browns Ferry fault trees

    International Nuclear Information System (INIS)

    Stewart, M.E.

    1981-01-01

    An abbreviated fault tree method is used to evaluate and model Browns Ferry systems in the Interim Reliability Evaluation Program, simplifying the recording and display of events while maintaining the system for identifying faults. The level of investigation is not changed, and the analytical thought process inherent in the conventional method is not compromised, but the abbreviated method takes less time and makes the fault modes much more visible.

  6. Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Baran, Ismet; Tutum, Cem Celal; Hattel, Jesper Henri

    2013-01-01

    In the present study, the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first-order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles...... with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (Tmax) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative...
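
    As a hedged illustration of FORM itself (not the pultrusion process model), the sketch below runs the Hasofer-Lind/Rackwitz-Fiessler iteration on a simple stand-in limit state with two independent normal variables playing the role of a critical temperature and a computed maximum temperature; all numbers are assumptions.

    ```python
    import math
    import numpy as np

    # Stand-in limit state: failure when the computed maximum temperature X2 exceeds
    # the critical temperature X1.  Independent normal variables (values assumed).
    mu    = np.array([420.0, 385.0])
    sigma = np.array([8.0, 12.0])

    def g(x):                                  # limit state function, g <= 0 means failure
        return x[0] - x[1]

    def grad_g(x):
        return np.array([1.0, -1.0])

    u = np.zeros(2)                            # start at the mean (origin of standard space)
    for _ in range(50):                        # Hasofer-Lind / Rackwitz-Fiessler iteration
        x = mu + sigma * u
        grad_u = grad_g(x) * sigma             # chain rule into standard normal space
        u_new = (grad_u @ u - g(x)) / (grad_u @ grad_u) * grad_u
        if np.linalg.norm(u_new - u) < 1e-10:
            u = u_new
            break
        u = u_new

    beta = np.linalg.norm(u)
    p_f = 0.5 * math.erfc(beta / math.sqrt(2.0))   # first-order failure probability
    print(f"reliability index beta = {beta:.3f},  P_f ~ {p_f:.3e}")
    ```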

  7. A method to assign failure rates for piping reliability assessments

    International Nuclear Information System (INIS)

    Gamble, R.M.; Tagart, S.W. Jr.

    1991-01-01

    This paper reports on a simplified method that has been developed to assign failure rates for use in reliability and risk studies of piping. The method can be applied on a line-by-line basis by identifying line- and location-specific attributes that can lead to piping unreliability from in-service degradation mechanisms and random events. A survey of service experience for nuclear piping reliability was also performed. The data from this survey provide a basis for identifying in-service failure attributes and assigning failure rates for risk and reliability studies

  8. Reliability analysis of neutron transport simulation using Monte Carlo method

    International Nuclear Information System (INIS)

    Souza, Bismarck A. de; Borges, Jose C.

    1995-01-01

    This work presents a statistical and reliability analysis of data obtained by computer simulation of the neutron transport process using the Monte Carlo method. A general description of the method and its applications is presented. Several simulations, corresponding to slowing-down and shielding problems, have been performed. The influence of the physical dimensions of the materials and of the sample size on the reliability of the results was investigated. The objective was to optimize the sample size so as to obtain reliable results while minimizing computation time. (author). 5 refs, 8 figs
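
    The trade-off between sample size and statistical reliability mentioned above can be illustrated with a toy transmission problem whose answer is known analytically: sample exponential free paths through a slab and watch the relative standard error shrink roughly as 1/sqrt(N). The cross-section and slab thickness are illustrative, and the sketch is not the authors' simulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    sigma_t, thickness = 0.5, 4.0              # total cross-section [1/cm], slab thickness [cm]
    p_true = np.exp(-sigma_t * thickness)      # uncollided transmission, known analytically

    for n in (10**3, 10**4, 10**5, 10**6):
        free_paths = rng.exponential(1.0 / sigma_t, size=n)
        crossed = free_paths > thickness                    # particle crosses without colliding
        p_hat = crossed.mean()
        rel_err = crossed.std(ddof=1) / np.sqrt(n) / p_hat  # relative standard error of the estimate
        print(f"N = {n:>8}: p = {p_hat:.4f} (exact {p_true:.4f}), rel. std. error = {rel_err:.2%}")
    ```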

  9. Studies on analytical method and nondestructive measuring method on the sensitization of austenitic stainless steels

    International Nuclear Information System (INIS)

    Onimura, Kichiro; Arioka, Koji; Horai, Manabu; Noguchi, Shigeru.

    1982-03-01

    Austenitic stainless steels are widely used as structural materials for machines and equipment in various kinds of plants, such as thermal power, nuclear power, and chemical plants. Machines and equipment using this kind of material, however, may suffer corrosion damage while in service, and this damage is in some cases considered to be largely due to sensitization of the material. It is therefore necessary to develop an analytical method for understanding the sensitization of the material in more detail, together with a quantitative nondestructive measuring method applicable to various kinds of structures, in order to prevent corrosion damage. From this viewpoint, studies have been made on an analytical method based on the theory of chromium diffusion in austenitic stainless steels and on the Electro-Potentiokinetic Reactivation method (EPR method) as a nondestructive measuring method, using 304 and 316 austenitic stainless steels with different carbon contents in the base metals. This paper introduces the results of EPR tests on the sensitization of austenitic stainless steels and the correlation between analytical and experimental results. (author)

  10. Nuclear and nuclear related analytical methods applied in environmental research

    International Nuclear Information System (INIS)

    Popescu, Ion V.; Gheboianu, Anca; Bancuta, Iulian; Cimpoca, G. V; Stihi, Claudia; Radulescu, Cristiana; Oros Calin; Frontasyeva, Marina; Petre, Marian; Dulama, Ioana; Vlaicu, G.

    2010-01-01

    Nuclear analytical methods can be used for research activities in environmental studies such as water quality assessment, pesticide residues, global climatic change (transboundary), pollution and remediation. Heavy metal pollution is a problem associated with areas of intensive industrial activity. In this work, the moss biomonitoring technique was employed to study atmospheric deposition in Dambovita County, Romania. Complementary nuclear and atomic analytical methods were also used: neutron activation analysis (NAA), atomic absorption spectrometry (AAS) and inductively coupled plasma atomic emission spectrometry (ICP-AES). These high-sensitivity analytical methods were used to determine the chemical composition of moss samples placed in different areas with different industrial pollution sources. The concentrations of Cr, Fe, Mn, Ni and Zn were determined. The concentration of Fe in the same samples was determined using all of these methods, and very good agreement, within statistical limits, was obtained, which demonstrates the capability of these analytical methods to be applied to a large spectrum of environmental samples with consistent results. (authors)

  11. Analytical resource assessment method for continuous (unconventional) oil and gas accumulations - The "ACCESS" Method

    Science.gov (United States)

    Crovelli, Robert A.; revised by Charpentier, Ronald R.

    2012-01-01

    The U.S. Geological Survey (USGS) periodically assesses petroleum resources of areas within the United States and the world. The purpose of this report is to explain the development of an analytic probabilistic method and spreadsheet software system called the Analytic Cell-Based Continuous Energy Spreadsheet System (ACCESS). The ACCESS method is based upon mathematical equations derived from probability theory. The ACCESS spreadsheet can be used to calculate estimates of the undeveloped oil, gas, and NGL (natural gas liquids) resources in a continuous-type assessment unit. An assessment unit is a mappable volume of rock in a total petroleum system. In this report, the geologic assessment model is defined first, the analytic probabilistic method is described second, and the ACCESS spreadsheet is described third. In this revised version of Open-File Report 00-044, the text has been updated to reflect modifications that were made to the ACCESS program. Two versions of the program are added as appendixes.

  12. Propulsion and launching analysis of variable-mass rockets by analytical methods

    Directory of Open Access Journals (Sweden)

    D.D. Ganji

    2013-09-01

    Full Text Available In this study, applications of some analytical methods to the nonlinear equation of the launching of a rocket with variable mass are investigated. The differential transformation method (DTM), homotopy perturbation method (HPM) and least squares method (LSM) were applied, and their results are compared with the numerical solution. Excellent agreement between the analytical methods and the numerical one is observed, which shows that the analytical methods are effective and convenient. A parametric study is also performed, which includes the effects of exhaust velocity (Ce), fuel burn rate (BR) and diameter of the cylindrical rocket (d) on the motion of a sample rocket, and contours showing the sensitivity to these parameters are plotted. The main results indicate that rocket velocity and altitude increase with increasing Ce and BR and decrease with increasing rocket diameter and drag coefficient.
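
    For comparison with such analytical solutions, the variable-mass launch equation can also be integrated directly. The sketch below does a simple explicit time-stepping of a rocket with thrust Ce·BR, quadratic drag on a cylindrical body of diameter d, and constant air density; all parameter values are invented for illustration and are not the paper's case study.

    ```python
    import math

    # Assumed parameters: exhaust velocity Ce [m/s], burn rate BR [kg/s],
    # diameter d [m], drag coefficient Cd, initial and dry mass [kg].
    Ce, BR, d, Cd = 2000.0, 5.0, 0.3, 0.4
    m0, m_dry, rho, g = 300.0, 100.0, 1.225, 9.81
    A = math.pi * d ** 2 / 4.0

    t, dt = 0.0, 0.01
    m, v, h = m0, 0.0, 0.0
    v_max = h_max = 0.0
    while h >= 0.0 and t < 300.0:
        thrust = Ce * BR if m > m_dry else 0.0             # thrust = exhaust velocity * mass flow
        drag = 0.5 * rho * Cd * A * v * abs(v)             # quadratic drag opposing the motion
        a = (thrust - drag) / m - g
        v += a * dt
        h += v * dt
        if m > m_dry:
            m = max(m_dry, m - BR * dt)
        v_max, h_max = max(v_max, v), max(h_max, h)
        t += dt

    print(f"burnout mass {m:.0f} kg, max velocity {v_max:.0f} m/s, "
          f"apogee {h_max / 1000:.1f} km, flight time {t:.0f} s")
    ```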

  13. Human reliability analysis methods for probabilistic safety assessment

    International Nuclear Information System (INIS)

    Pyy, P.

    2000-11-01

    Human reliability analysis (HRA) of a probabilistic safety assessment (PSA) includes identifying human actions from the safety point of view, modelling the most important of them in PSA models, and assessing their probabilities. As manifested by many incidents and studies, human actions may have both positive and negative effects on safety and economy. Human reliability analysis is one of the areas of probabilistic safety assessment (PSA) that has direct applications outside the nuclear industry. The thesis focuses on developments in human reliability analysis methods and data. The aim is to support PSA by extending the applicability of HRA. The thesis consists of six publications and a summary. The summary includes general considerations and a discussion about human actions in the nuclear power plant (NPP) environment. A condensed discussion about the results of the attached publications is then given, including new developments in methods and data. At the end of the summary part, the contribution of the publications to good practice in HRA is presented. In the publications, studies based on the collection of data on maintenance-related failures, simulator runs and expert judgement are presented in order to extend the human reliability analysis database. Furthermore, methodological frameworks are presented to perform a comprehensive HRA, including shutdown conditions, to study the reliability of decision making, and to study the effects of wrong human actions. In the last publication, an interdisciplinary approach to analysing human decision making is presented. The publications also include practical applications of the presented methodological frameworks. (orig.)

  14. A reliability evaluation method for NPP safety DCS application software

    International Nuclear Information System (INIS)

    Li Yunjian; Zhang Lei; Liu Yuan

    2014-01-01

    In the field of digital instrumentation and control (I&C) applications in nuclear power plants (NPP), reliability evaluation of safety DCS application software is a key obstacle to be removed. In order to quantitatively evaluate the reliability of NPP safety DCS application software, this paper proposes a reliability evaluation method based on the verification and validation (V&V) defect density characteristics of each stage of the software development life cycle, by which the operational reliability level of the software can be predicted before delivery, helping to improve the reliability of safety-important NPP software. (authors)

  15. Characteristics and application study of AP1000 NPPs equipment reliability classification method

    International Nuclear Information System (INIS)

    Guan Gao

    2013-01-01

    The AP1000 nuclear power plant applies an integrated approach to establish equipment reliability classification, which includes probabilistic risk assessment techniques, the maintenance rule administrative process, power production reliability classification, and the functional equipment group bounding method, and eventually classifies equipment reliability into four levels. This classification process and its results are very different from classical RCM and streamlined RCM. This paper studies the characteristics of the AP1000 equipment reliability classification approach, considers that equipment reliability classification should effectively support maintenance strategy development and work process control, and recommends the use of a combined RCM method to establish the future equipment reliability program of AP1000 nuclear power plants. (authors)

  16. Biodiesel Analytical Methods: August 2002--January 2004

    Energy Technology Data Exchange (ETDEWEB)

    Van Gerpen, J.; Shanks, B.; Pruszko, R.; Clements, D.; Knothe, G.

    2004-07-01

    Biodiesel is an alternative fuel for diesel engines that is receiving great attention worldwide. The material contained in this book is intended to provide the reader with information about biodiesel engines and fuels, analytical methods used to measure fuel properties, and specifications for biodiesel quality control.

  17. Human reliability

    International Nuclear Information System (INIS)

    Embrey, D.E.

    1987-01-01

    Concepts and techniques of human reliability have been developed and are used mostly in probabilistic risk assessment. For this, the major application of human reliability assessment has been to identify the human errors which have a significant effect on the overall safety of the system and to quantify the probability of their occurrence. Some of the major issues within human reliability studies are reviewed, and it is shown how these are applied to the assessment of human failures in systems. This is done under the following headings: models of human performance used in human reliability assessment, the nature of human error, classification of errors in man-machine systems, practical aspects, human reliability modelling in complex situations, quantification and examination of human reliability, judgement-based approaches, holistic techniques and decision analytic approaches. (UK)

  18. A Generalized Pivotal Quantity Approach to Analytical Method Validation Based on Total Error.

    Science.gov (United States)

    Yang, Harry; Zhang, Jianchun

    2015-01-01

    The primary purpose of method validation is to demonstrate that the method is fit for its intended use. Traditionally, an analytical method is deemed valid if its performance characteristics, such as accuracy and precision, are shown to meet prespecified acceptance criteria. However, these acceptance criteria are not directly related to the method's intended purpose, which is usually a guarantee that a high percentage of the test results of future samples will be close to their true values. Alternative "fit for purpose" acceptance criteria based on the concept of total error have been increasingly used. Such criteria allow for assessing method validity, taking into account the relationship between accuracy and precision. Although several statistical test methods have been proposed in the literature to test the "fit for purpose" hypothesis, the majority of the methods are not designed to protect against the risk of accepting unsuitable methods, thus having the potential to cause uncontrolled consumer's risk. In this paper, we propose a test method based on generalized pivotal quantity inference. Through simulation studies, the performance of the method is compared to five existing approaches. The results show that both the new method and the method based on the β-content tolerance interval with a confidence level of 90%, hereafter referred to as the β-content (0.9) method, control Type I error and thus consumer's risk, while the other existing methods do not. It is further demonstrated that the generalized pivotal quantity method is less conservative than the β-content (0.9) method when the analytical methods are biased, whereas it is more conservative when the analytical methods are unbiased. Therefore, selection of either the generalized pivotal quantity or β-content (0.9) method for an analytical method validation depends on the accuracy of the analytical method. It is also shown that the generalized pivotal quantity method has better asymptotic properties than all of the current
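
    A minimal sketch of the generalized-pivotal-quantity idea, assuming a normal measurement model and invented validation data, is given below: GPQs for the mean and standard deviation are simulated, converted into the proportion of future results falling within the acceptance limits, and the lower generalized confidence bound is compared with the required proportion. The acceptance rule and numbers are illustrative, not the authors' exact procedure.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical validation data: n replicate measurements of a reference sample
    # whose true value is T; acceptance limit lam; required in-spec proportion pi0.
    T, lam, pi0 = 100.0, 15.0, 0.90
    n, xbar, s = 24, 102.1, 4.8

    rng = np.random.default_rng(7)
    B = 100_000
    R_sigma = s * np.sqrt((n - 1) / rng.chisquare(n - 1, size=B))   # GPQ for sigma
    R_mu = xbar - rng.standard_normal(B) * R_sigma / np.sqrt(n)     # GPQ for mu

    # GPQ for the proportion of future results within T +/- lam (total-error view).
    p = stats.norm.cdf((T + lam - R_mu) / R_sigma) - stats.norm.cdf((T - lam - R_mu) / R_sigma)

    lower_bound = np.quantile(p, 0.10)       # 90% lower generalized confidence bound
    verdict = "accept" if lower_bound >= pi0 else "reject"
    print(f"90% lower bound on in-spec proportion: {lower_bound:.3f} -> {verdict} the method")
    ```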

  19. Reliability studies in a developing technology

    International Nuclear Information System (INIS)

    Mitchell, L.A.; Osgood, C.; Radcliffe, S.J.

    1975-01-01

    The standard methods of reliability analysis can only be applied if valid failure statistics are available. In a developing technology the statistics which have been accumulated, over many years of conventional experience, are often rendered useless by environmental effects. Thus new data, which take account of the new environment, are required. This paper discusses the problem of optimizing the acquisition of these data when time-scales and resources are limited. It is concluded that the most fruitful strategy in assessing the reliability of mechanisms is to study the failures of individual joints whilst developing, where necessary, analytical tools to facilitate the use of these data. The approach is illustrated by examples from the field of tribology. Failures of rolling element bearings in moist, high-pressure carbon dioxide illustrate the important effects of apparently minor changes in the environment. New analytical techniques are developed from a study of friction failures in sliding joints. (author)

  20. Reliability and risk analysis methods research plan

    International Nuclear Information System (INIS)

    1984-10-01

    This document presents a plan for reliability and risk analysis methods research to be performed mainly by the Reactor Risk Branch (RRB), Division of Risk Analysis and Operations (DRAO), Office of Nuclear Regulatory Research. It includes those activities of other DRAO branches which are very closely related to those of the RRB. Related or interfacing programs of other divisions, offices and organizations are merely indicated. The primary use of this document is envisioned as an NRC working document, covering about a 3-year period, to foster better coordination in reliability and risk analysis methods development between the offices of Nuclear Regulatory Research and Nuclear Reactor Regulation. It will also serve as an information source for contractors and others to more clearly understand the objectives, needs, programmatic activities and interfaces together with the overall logical structure of the program

  1. Approximate analytical methods for solving ordinary differential equations

    CERN Document Server

    Radhika, TSL; Rani, T Raja

    2015-01-01

    Approximate Analytical Methods for Solving Ordinary Differential Equations (ODEs) is the first book to present all of the available approximate methods for solving ODEs, eliminating the need to wade through multiple books and articles. It covers both well-established techniques and recently developed procedures, including the classical series solution method, diverse perturbation methods, pioneering asymptotic methods, and the latest homotopy methods.The book is suitable not only for mathematicians and engineers but also for biologists, physicists, and economists. It gives a complete descripti

  2. Analytical method of waste allocation in waste management systems: Concept, method and case study

    Energy Technology Data Exchange (ETDEWEB)

    Bergeron, Francis C., E-mail: francis.b.c@videotron.ca

    2017-01-15

    Waste is no longer merely a rejected item to dispose of but increasingly a secondary resource to exploit, which influences waste allocation among treatment operations in a waste management (WM) system. The aim of this methodological paper is to present a new method for the assessment of the WM system, the “analytical method of the waste allocation process” (AMWAP), based on the concept of the “waste allocation process”, defined as the aggregation of all processes of apportioning waste among alternative waste treatment operations inside or outside the spatial borders of a WM system. AMWAP contains a conceptual framework and an analytical approach. The conceptual framework includes, first, a descriptive model that focuses on the description and classification of the WM system and, second, an explanatory model that serves to explain and predict the operation of the WM system. The analytical approach consists of a step-by-step analysis for the empirical implementation of the conceptual framework. With its multiple purposes, AMWAP provides an innovative and objective modular method to analyse a WM system, which may be integrated in the framework of impact assessment methods and environmental systems analysis tools. Its originality comes from the interdisciplinary analysis of the WAP used to develop the conceptual framework. AMWAP is applied in the framework of an illustrative case study on the household WM system of Geneva (Switzerland). It demonstrates that this method provides an in-depth and contextual knowledge of WM. - Highlights: • The study presents a new analytical method based on the waste allocation process. • The method provides an in-depth and contextual knowledge of the waste management system. • The paper provides a reproducible procedure for professionals, experts and academics. • It may be integrated into impact assessment or environmental system analysis tools. • An illustrative case study is provided based on household waste

  3. Analytical method of waste allocation in waste management systems: Concept, method and case study

    International Nuclear Information System (INIS)

    Bergeron, Francis C.

    2017-01-01

    Waste is no longer merely a rejected item to dispose of but increasingly a secondary resource to exploit, which influences waste allocation among treatment operations in a waste management (WM) system. The aim of this methodological paper is to present a new method for the assessment of the WM system, the “analytical method of the waste allocation process” (AMWAP), based on the concept of the “waste allocation process”, defined as the aggregation of all processes of apportioning waste among alternative waste treatment operations inside or outside the spatial borders of a WM system. AMWAP contains a conceptual framework and an analytical approach. The conceptual framework includes, first, a descriptive model that focuses on the description and classification of the WM system and, second, an explanatory model that serves to explain and predict the operation of the WM system. The analytical approach consists of a step-by-step analysis for the empirical implementation of the conceptual framework. With its multiple purposes, AMWAP provides an innovative and objective modular method to analyse a WM system, which may be integrated in the framework of impact assessment methods and environmental systems analysis tools. Its originality comes from the interdisciplinary analysis of the WAP used to develop the conceptual framework. AMWAP is applied in the framework of an illustrative case study on the household WM system of Geneva (Switzerland). It demonstrates that this method provides an in-depth and contextual knowledge of WM. - Highlights: • The study presents a new analytical method based on the waste allocation process. • The method provides an in-depth and contextual knowledge of the waste management system. • The paper provides a reproducible procedure for professionals, experts and academics. • It may be integrated into impact assessment or environmental system analysis tools. • An illustrative case study is provided based on household waste

  4. SPANDOM - source projection analytic nodal discrete ordinates method

    International Nuclear Information System (INIS)

    Kim, Tae Hyeong; Cho, Nam Zin

    1994-01-01

    We describe a new discrete ordinates nodal method for the two-dimensional transport equation. We solve the discrete ordinates equation analytically after the source term is projected onto and represented by polynomials. The method is applied to two fast reactor benchmark problems and compared with the TWOHEX code. The results indicate that the present method accurately predicts not only the multiplication factor but also the flux distribution

  5. Reliability research to nuclear power plant operators based on several methods

    International Nuclear Information System (INIS)

    Fang Xiang; Li Fu; Zhao Bingquan

    2009-01-01

    The paper draws on many kinds of international reliability research methods and summarizes the reliability research on Chinese nuclear power plant operators over the past ten-plus years, based on nuclear power plant simulator platforms. The paper shows the necessity and feasibility of research on nuclear power plant operators from many angles, including human cognitive reliability, fuzzy mathematical models and psychological research models. Such research, based on many kinds of methods, will benefit the safe operation of nuclear power plants. (authors)

  6. Transport methods: general. 1. The Analytical Monte Carlo Method for Radiation Transport Calculations

    International Nuclear Information System (INIS)

    Martin, William R.; Brown, Forrest B.

    2001-01-01

    We present an alternative Monte Carlo method for solving the coupled equations of radiation transport and material energy. This method is based on incorporating the analytical solution to the material energy equation directly into the Monte Carlo simulation for the radiation intensity. This method, which we call the Analytical Monte Carlo (AMC) method, differs from the well-known Implicit Monte Carlo (IMC) method of Fleck and Cummings because there is no discretization of the material energy equation: it is solved as a by-product of the Monte Carlo simulation of the transport equation. Our method also differs from the method recently proposed by Ahrens and Larsen, since they use Monte Carlo to solve both equations, while we solve only the radiation transport equation with Monte Carlo, albeit with effective sources and cross sections to represent the emission sources. Our method bears some similarity to a method developed and implemented by Carter and Forest nearly three decades ago, but there are substantive differences. We have implemented our method in a simple zero-dimensional Monte Carlo code to test the feasibility of the method, and the preliminary results are very promising, justifying further extension to more realistic geometries. (authors)
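
    As an aside, the abstract does not spell out the transport simulation itself. The sketch below is a minimal analog Monte Carlo estimate of transmission through a one-dimensional slab with isotropic scattering, written in Python with hypothetical cross sections; it only illustrates the kind of particle tracking such methods build on and is not the coupled radiation-material AMC scheme of the paper.

    import math
    import random

    def transmission_probability(sigma_t, sigma_s, thickness, n_histories=100_000, seed=1):
        """Analog Monte Carlo estimate of the probability that a particle entering
        a 1-D slab normally is transmitted (hypothetical, illustrative model)."""
        rng = random.Random(seed)
        transmitted = 0
        for _ in range(n_histories):
            x, mu = 0.0, 1.0                                  # position and direction cosine
            while True:
                s = -math.log(1.0 - rng.random()) / sigma_t   # sampled distance to next collision
                x += mu * s
                if x >= thickness:                            # escaped through the far face
                    transmitted += 1
                    break
                if x < 0.0:                                   # leaked back out of the entrance face
                    break
                if rng.random() < sigma_s / sigma_t:          # scattering: new isotropic direction
                    mu = 2.0 * rng.random() - 1.0
                else:                                         # absorption ends the history
                    break
        return transmitted / n_histories

    print(transmission_probability(sigma_t=1.0, sigma_s=0.5, thickness=2.0))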

  7. Designing Glass Panels for Economy and Reliability

    Science.gov (United States)

    Moore, D. M.

    1983-01-01

    Analytical method determines probability of failure of rectangular glass plates subjected to uniformly distributed loads such as those from wind, earthquake, snow, and deadweight. Developed as aid in design of protective glass covers for solar-cell arrays and solar collectors, method is also useful in estimating the reliability of large windows in buildings exposed to high winds and is adapted to nonlinear stress analysis of simply supported plates of any elastic material.
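
    The analytical details of the method (plate geometry, load duration, area scaling, nonlinear stress analysis) are not reproduced here. As a purely illustrative sketch of the underlying statistical idea, the Python snippet below evaluates a two-parameter Weibull strength model with hypothetical scale and modulus values.

    import math

    def failure_probability(stress, scale, modulus):
        """Two-parameter Weibull strength model: P_f = 1 - exp(-(stress/scale)**modulus)."""
        return 1.0 - math.exp(-((stress / scale) ** modulus))

    # Hypothetical values: 30 MPa peak stress, 70 MPa characteristic strength, Weibull modulus 7
    print(f"P_f = {failure_probability(30.0, 70.0, 7.0):.2e}")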

  8. Analytical method for optimization of maintenance policy based on available system failure data

    International Nuclear Information System (INIS)

    Coria, V.H.; Maximov, S.; Rivas-Dávalos, F.; Melchor, C.L.; Guardado, J.L.

    2015-01-01

    An analytical optimization method for preventive maintenance (PM) policy with minimal repair at failure, periodic maintenance, and replacement is proposed for systems with historical failure time data influenced by a current PM policy. The method includes a new imperfect PM model based on the Weibull distribution and incorporates the current maintenance interval T0 and the optimal maintenance interval T to be found. The Weibull parameters are analytically estimated using maximum likelihood estimation. Based on this model, the optimal number of PM actions and the optimal maintenance interval for minimizing the expected cost over an infinite time horizon are also analytically determined. A number of examples are presented involving different failure time data and current maintenance intervals to analyze how the proposed analytical optimization method for periodic PM policy performs in response to changes in the distribution of the failure data and in the current maintenance interval. - Highlights: • An analytical optimization method for preventive maintenance (PM) policy is proposed. • A new imperfect PM model is developed. • The Weibull parameters are analytically estimated using maximum likelihood. • The optimal maintenance interval and number of PM are also analytically determined. • The model is validated by several numerical examples
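
    One building block mentioned above, analytical maximum-likelihood estimation of the Weibull parameters, can be sketched as follows. The Python snippet assumes complete (uncensored) failure-time data and hypothetical failure times; it solves the profile-likelihood score equation for the shape parameter and does not reproduce the paper's imperfect-PM model or its cost optimization.

    import numpy as np
    from scipy.optimize import brentq

    def weibull_mle(times):
        """Maximum-likelihood Weibull shape (beta) and scale (eta) estimates
        from complete, uncensored failure-time data."""
        t = np.asarray(times, dtype=float)
        log_t = np.log(t)

        # For a given beta the scale satisfies eta**beta = mean(t**beta); the shape
        # solves the profile-likelihood score equation below.
        def score(beta):
            tb = t ** beta
            return 1.0 / beta + log_t.mean() - (tb * log_t).sum() / tb.sum()

        beta = brentq(score, 0.01, 50.0)
        eta = np.mean(t ** beta) ** (1.0 / beta)
        return beta, eta

    failures = [210.0, 350.0, 420.0, 510.0, 640.0, 780.0, 905.0]   # hypothetical hours
    shape, scale = weibull_mle(failures)
    print(f"shape = {shape:.2f}, scale = {scale:.0f} h")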

  9. Investigation of MLE in nonparametric estimation methods of reliability function

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

    There have been many attempts to estimate a reliability function. In the ESReDA 20th seminar, a new nonparametric method was proposed. The major point of that paper is how to use censored data efficiently. Generally there are three kinds of approaches to estimating a reliability function nonparametrically, i.e., the Reduced Sample Method, the Actuarial Method and the Product-Limit (PL) Method. These three methods have some limitations, so we suggest an advanced method that reflects censored information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by differentiation. It is well known that the three methods generally used to estimate a reliability function nonparametrically have maximum likelihood estimators that exist uniquely. The MLE of the new method is therefore derived in this study. The procedure for calculating the MLE is similar to that of the PL estimator; the difference between the two is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL estimator it does not
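
    For reference, the classical Product-Limit (Kaplan-Meier) estimator that the proposed method modifies can be sketched as below; the failure and censoring data are hypothetical, and the new weight-redistribution scheme described in the paper is not implemented here.

    def product_limit(times, events):
        """Kaplan-Meier (Product-Limit) estimate of the reliability function R(t);
        events[i] is 1 for an observed failure and 0 for a right-censored time."""
        data = sorted(zip(times, events))
        n_at_risk = len(data)
        R, curve, i = 1.0, [], 0
        while i < len(data):
            t = data[i][0]
            at_t = [e for (tt, e) in data if tt == t]
            deaths = sum(at_t)
            if deaths:
                R *= 1.0 - deaths / n_at_risk
                curve.append((t, R))
            n_at_risk -= len(at_t)
            i += len(at_t)
        return curve

    times = [5, 8, 8, 12, 15, 17, 21, 21]    # hypothetical operating hours
    events = [1, 1, 0, 1, 0, 1, 1, 0]        # 1 = failure, 0 = censored
    for t, r in product_limit(times, events):
        print(f"R({t}) = {r:.3f}")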

  10. Reliability of the k{sub 0}-standardization method using geological sample analysed in a proficiency test

    Energy Technology Data Exchange (ETDEWEB)

    Pelaes, Ana Clara O.; Menezes, Maria Ângela de B.C., E-mail: anacpelaes@gmail.com, E-mail: menezes@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-11-01

    Neutron Activation Analysis (NAA) is an analytical technique for determining the elemental chemical composition of samples of several matrices. It has been applied by the Laboratory for Neutron Activation Analysis, located at Centro de Desenvolvimento da Tecnologia Nuclear/Comissão Nacional de Energia Nuclear (Nuclear Technology Development Center/Brazilian Commission for Nuclear Energy), CDTN/CNEN, since the start-up of the TRIGA MARK I IPR-R1 reactor in 1960. Among the ways of applying the technique, the k{sub 0}-standardization method, established at CDTN in 1995, is the most efficient; in 2003 it was re-established and optimized. In order to verify the reproducibility of the results generated by the application of the k{sub 0}-standardization method at CDTN, aliquots of a geological sample sent by WEPAL (Wageningen Evaluating Programs for Analytical Laboratories) were analysed and the results were compared with those obtained through the intercomparison of results organized by the International Atomic Energy Agency in 2015. WEPAL is an accredited institution for the organisation of interlaboratory studies, preparing and organizing proficiency testing schemes all over the world. The comparison with the provided results therefore aims to contribute to the continuous improvement of the quality of the results obtained by the CDTN. The objective of this study was to verify the reliability of the method applied two years after the intercomparison round. (author)

  11. New Analytical Method for the Determination of Metronidazole in ...

    African Journals Online (AJOL)

    New Analytical Method for the Determination of Metronidazole in Human Plasma: Application to Bioequivalence Study. ... Methods: Metronidazole was extracted from human plasma through one step of ... http://dx.doi.org/10.4314/tjpr.v11i5.14.

  12. Some recent developments in the risk- and reliability analysis of structures

    International Nuclear Information System (INIS)

    Bauer, J.; Choi, H.S.; Kappler, H.; Melzer, H.J.; Panggabean, H.; Reichmann, K.H.; Schueller, G.I.; Schwarz, R.F.

    1979-01-01

    This report consists of six contributions divided into four general topics. While Part I concentrates on the development of Analytical Methods in Structural Reliability, Parts II to IV are devoted to the application of these methods to Offshore, Nuclear and, more generally, Wind- and Earthquake-Exposed Structures. (orig.) [de

  13. Development and validation of analytical methods for dietary supplements

    International Nuclear Information System (INIS)

    Sullivan, Darryl; Crowley, Richard

    2006-01-01

    The expanding use of innovative botanical ingredients in dietary supplements and foods has resulted in a flurry of research aimed at the development and validation of analytical methods for accurate measurement of active ingredients. The pressing need for these methods is being met through an expansive collaborative initiative involving industry, government, and analytical organizations. This effort has resulted in the validation of several important assays as well as important advances in the method engineering procedures, which have improved the efficiency of the process. The initiative has also allowed researchers to overcome many of the barriers that have hindered accurate analysis, such as the lack of reference standards and comparative data. As the availability of nutraceutical products continues to increase, these methods will provide consumers and regulators with the scientific information needed to assure safety and dependable labeling

  14. Performance of analytical methods for tomographic gamma scanning

    International Nuclear Information System (INIS)

    Prettyman, T.H.; Mercer, D.J.

    1997-01-01

    The use of gamma-ray computerized tomography for nondestructive assay of radioactive materials has led to the development of specialized analytical methods. Over the past few years, Los Alamos has developed and implemented a computer code, called ARC-TGS, for the analysis of data obtained by tomographic gamma scanning (TGS). ARC-TGS reduces TGS transmission and emission tomographic data, providing the user with images of the sample contents, the activity or mass of selected radionuclides, and an estimate of the uncertainty in the measured quantities. The results provided by ARC-TGS can be corrected for self-attenuation when the isotope of interest emits more than one gamma-ray. In addition, ARC-TGS provides information needed to estimate TGS quantification limits and to estimate the scan time needed to screen for small amounts of radioactivity. In this report, an overview of the analytical methods used by ARC-TGS is presented along with an assessment of the performance of these methods for TGS

  15. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    Science.gov (United States)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distributions of wind speed and solar irradiance, respectively, and models of the wind farm, solar park and local load are built for reliability assessment. Then, based on power system production cost simulation, probability discretization and linearized power flow, an optimal power flow problem that minimizes the cost of conventional power generation is solved. In this way, a reliability assessment of the distribution grid is carried out quickly and accurately. The Loss of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as reliability indices; a MATLAB simulation of the IEEE RBTS BUS6 system indicates that the fast assessment method computes these indices much faster than the Monte Carlo method while maintaining accuracy.
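
    The paper's contribution is an analytical assessment that avoids repeated sampling; for orientation, the sketch below shows the kind of Monte Carlo baseline such methods are compared against, estimating LOLP and EENS from Weibull-distributed wind speed, Beta-distributed solar irradiance and a normally distributed load. All numerical values (power curve, capacities, load statistics) are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def wind_power(v, rated=2.0, v_in=3.0, v_rated=12.0, v_out=25.0):
        """Piecewise wind-turbine power curve in MW (hypothetical parameters)."""
        ramp = np.where((v >= v_in) & (v < v_rated),
                        rated * (v - v_in) / (v_rated - v_in), 0.0)
        return np.where((v >= v_rated) & (v < v_out), rated, ramp)

    n = 200_000                                  # sampled hourly system states
    v = rng.weibull(2.0, n) * 8.0                # Weibull wind speed (shape 2, scale 8 m/s)
    s = rng.beta(2.0, 2.0, n)                    # Beta-distributed solar irradiance factor
    supply = 6.0 + wind_power(v) + 1.5 * s       # 6 MW conventional + wind + 1.5 MW solar park
    load = rng.normal(7.0, 1.0, n)               # hypothetical hourly load in MW

    shortfall = np.maximum(load - supply, 0.0)
    lolp = np.mean(shortfall > 0)
    eens = shortfall.mean() * 8760               # MWh per year, treating each sample as one hour
    print(f"LOLP = {lolp:.4f}, EENS = {eens:.1f} MWh/yr")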

  16. Analytic continuation of quantum Monte Carlo data. Stochastic sampling method

    Energy Technology Data Exchange (ETDEWEB)

    Ghanem, Khaldoon; Koch, Erik [Institute for Advanced Simulation, Forschungszentrum Juelich, 52425 Juelich (Germany)

    2016-07-01

    We apply Bayesian inference to the analytic continuation of quantum Monte Carlo (QMC) data from the imaginary axis to the real axis. Demanding a proper functional Bayesian formulation of any analytic continuation method leads naturally to the stochastic sampling method (StochS) as the Bayesian method with the simplest prior, while it excludes the maximum entropy method and Tikhonov regularization. We present a new efficient algorithm for performing StochS that reduces computational times by orders of magnitude in comparison to earlier StochS methods. We apply the new algorithm to a wide variety of typical test cases: spectral functions and susceptibilities from DMFT and lattice QMC calculations. Results show that StochS performs well and is able to resolve sharp features in the spectrum.

  17. Modifying nodal pricing method considering market participants optimality and reliability

    Directory of Open Access Journals (Sweden)

    A. R. Soofiabadi

    2015-06-01

    This paper develops a method for nodal pricing and a market clearing mechanism that consider the reliability of the system. The effects of component reliability on electricity price, market participants’ profit and system social welfare are considered. Reliability is taken into account both in the evaluation of market participants’ optimality and in the fair pricing and market clearing mechanism. To achieve fair pricing, nodal prices are obtained through a two-stage optimization problem, and to achieve a fair market clearing mechanism, comprehensive criteria are introduced for evaluating the optimality of market participants. System social welfare and efficiency are increased under the proposed modified nodal pricing method.

  18. Five-point Element Scheme of Finite Analytic Method for Unsteady Groundwater Flow

    Institute of Scientific and Technical Information of China (English)

    Xiang Bo; Mi Xiao; Ji Changming; Luo Qingsong

    2007-01-01

    In order to improve the adaptability of the finite analytic method to irregular elements, this paper establishes a five-point element scheme of the finite analytic method using a coordinate-rotation technique. The scheme not only solves the unsteady groundwater flow equation but also handles the boundary conditions. It can be used to calculate three typical groundwater problems. Compared with previously computed results, the results of this method are more satisfactory.

  19. A reliability analysis tool for SpaceWire network

    Science.gov (United States)

    Zhou, Qiang; Zhu, Longjiang; Fei, Haidong; Wang, Xingyou

    2017-04-01

    SpaceWire is a standard for on-board satellite networks and the basis for future data-handling architectures. It is becoming more and more popular in space applications due to its technical advantages, including reliability, low power and fault protection. High reliability is a vital issue for spacecraft, so it is very important to analyze and improve the reliability performance of SpaceWire networks. This paper deals with the problem of reliability modeling and analysis of SpaceWire networks. Based on the functional division of the distributed network, a task-based reliability analysis method is proposed: the reliability analysis of each task yields a system reliability matrix, and the reliability of the network system is deduced by integrating all the reliability indices in this matrix. Using this method, we develop a VC-based reliability analysis tool for SpaceWire networks in which the computation schemes for the reliability matrix and for multi-path task reliability are implemented. With this tool, we analyze several cases on typical architectures; the analytical results indicate that a redundant architecture has better reliability performance than a basic one. In practice, a dual-redundancy scheme has been adopted for some key units to improve the reliability index of the system or task. Finally, this reliability analysis tool will guide both task division and topology selection in the design phase of SpaceWire network systems.
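
    The finding that a redundant architecture outperforms a basic one follows from elementary series/parallel reliability algebra. The sketch below uses hypothetical unit reliabilities for a single task path and is not the task-based reliability-matrix computation implemented in the tool.

    def series(*r):
        """All units must work: the reliabilities multiply."""
        out = 1.0
        for x in r:
            out *= x
        return out

    def dual_redundant(r):
        """Two identical units in parallel: the path fails only if both fail."""
        return 1.0 - (1.0 - r) ** 2

    # Hypothetical unit reliabilities for one task path: node -> router -> node
    basic = series(0.995, 0.990, 0.995)
    redundant = series(0.995, dual_redundant(0.990), 0.995)
    print(f"basic path:            {basic:.5f}")
    print(f"dual-redundant router: {redundant:.5f}")

    With these illustrative numbers, duplicating the router raises the path reliability from about 0.980 to about 0.990.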

  20. Reliability studies of diagnostic methods in Indian traditional Ayurveda medicine

    DEFF Research Database (Denmark)

    Kurande, Vrinda Hitendra; Waagepetersen, Rasmus; Toft, Egon

    2013-01-01

    as prakriti classification), method development (pulse diagnosis), quality assurance for diagnosis and treatment and in the conduct of clinical studies. Several reliability studies are conducted in western medicine. The investigation of the reliability of traditional Chinese, Japanese and Sasang medicine...

  1. Course on Advanced Analytical Chemistry and Chromatography

    DEFF Research Database (Denmark)

    Andersen, Jens Enevold Thaulov; Fristrup, Peter; Nielsen, Kristian Fog

    2011-01-01

    Methods of analytical chemistry constitute an integral part of decision making in chemical research, and students must master a high degree of knowledge, in order to perform reliable analysis. At DTU departments of chemistry it was thus decided to develop a course that was attractive to master...... students of different direction of studies, to Ph.D. students and to professionals that need an update of their current state of skills and knowledge. A course of 10 ECTS points was devised with the purpose of introducing students to analytical chemistry and chromatography with the aim of including theory...

  2. Level III Reliability methods feasible for complex structures

    NARCIS (Netherlands)

    Waarts, P.H.; Boer, A. de

    2001-01-01

    The paper describes a comparison between three types of reliability methods: a code-type level I method used by a designer, a full level I method and a level III method. Two cases that are typical of civil engineering practice are considered: a cable-stayed bridge subjected to traffic load and the installation of a soil retaining sheet

  3. Directory of Analytical Methods, Department 1820

    International Nuclear Information System (INIS)

    Whan, R.E.

    1986-01-01

    The Materials Characterization Department performs chemical, physical, and thermophysical analyses in support of programs throughout the Laboratories. The department has a wide variety of techniques and instruments staffed by experienced personnel available for these analyses, and we strive to maintain near state-of-the-art technology by continued updates. We have prepared this Directory of Analytical Methods in order to acquaint you with our capabilities and to help you identify personnel who can assist with your analytical needs. The descriptions of the various capabilities are requester-oriented and have been limited in length and detail. Emphasis has been placed on applications and limitations with notations of estimated analysis time and alternative or related techniques. A short, simplified discussion of underlying principles is also presented along with references if more detail is desired. The contents of this document have been organized in the order: bulk analysis, microanalysis, surface analysis, optical and thermal property measurements

  4. The reliability of radiochemical and chemical trace analyses in environmental materials

    International Nuclear Information System (INIS)

    Heinonen, Jorma.

    1977-12-01

    After a theoretical exploration of the factors that influence the quality of analytical data and of the means by which sufficient quality can be assured and controlled, schemes of different kinds were developed and applied in order to demonstrate analytical quality assurance and control in practice. Methods were developed for the determination of cesium, bromine and arsenic by neutron activation analysis at the natural "background" concentration level in environmental materials. The calibration of the methods is described. The methods were also applied in practical routine analysis, the results of which are briefly reviewed. In the case of Cs, the precision of a comprehensive calibration was found to vary between 5.2-9.2% as a relative standard deviation, which agrees well with the calculated statistical random error of 5.7-8.7%. In the case of Br, the method showed reasonable precision, about 11% on average, and reasonable accuracy. In employing the method to analyse dried samples containing Br at 3 to 12 ppm, a continuous control of precision was performed. The analysis of As demonstrates the many problems and difficulties associated with environmental analysis. In developing the final method, four former IAEA intercomparison materials were utilized in the calibration. The tests performed revealed a systematic error, and in this case a scheme was developed for the continuous control of both precision and accuracy. The results of radiochemical analyses in environmental materials show a reliability somewhat better than that occurring in the determination of stable trace elements. According to a rough classification, about 15% of the results show excellent reliability, some 60% show a reliability adequate for certain purposes, and the remainder are of little or no value. The reasons for the often insufficient reliability of results are both organizational and technical. With reasonable effort and

  5. Methods of analytical check for highly pure tungsten

    International Nuclear Information System (INIS)

    Miklin, D.G.; Karpov, Yu.A.; Orlova, V.A.

    1993-01-01

    The review is devoted to methods for the analysis of high-purity tungsten. Current trends in the development of this branch of analytical chemistry are considered. The most promising approaches are expected to be instrumental mass spectrometry, optical-spectral and activation methods, and inductively coupled plasma mass spectrometry, in combination with preliminary separation of the matrix and preconcentration of the impurities

  6. Recent advances in computational structural reliability analysis methods

    Science.gov (United States)

    Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.

    1993-10-01

    The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor is their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.

  7. A reliable method for ageing of whiting (Merlangius merlangus) for use in stock assessment and management

    DEFF Research Database (Denmark)

    Ross, Stine Dalmann; Hüssy, Karin

    2013-01-01

    Accurate age estimation is important for stock assessment and management. The importance of reliable ageing is emphasized by the impending analytical assessment of whiting (Merlangius merlangus) in the Baltic Sea. Whiting is a top predator in the western Baltic Sea, where it is fished commercially...

  8. Pyrrolizidine alkaloids in honey: comparison of analytical methods

    NARCIS (Netherlands)

    Kempf, M.; Wittig, M.; Reinhard, A.; Ohe, von der K.; Blacquière, T.; Raezke, K.P.; Michel, R.; Schreier, P.; Beuerle, T.

    2011-01-01

    Pyrrolizidine alkaloids (PAs) are a structurally diverse group of toxicologically relevant secondary plant metabolites. Currently, two analytical methods are used to determine PA content in honey. To achieve reasonably high sensitivity and selectivity, mass spectrometry detection is demanded. One

  9. Reliability assessment and probability based design of reinforced concrete containments and shear walls

    International Nuclear Information System (INIS)

    Hwang, H.; Reich, M.; Ellingwood, B.; Shinozuka, M.

    1986-03-01

    This report summarizes work completed under the program entitled, ''Probability-Based Load Combinations for Design of Category I Structures.'' Under this program, the probabilistic models for various static and dynamic loads were formulated. The randomness and uncertainties in material strengths and structural resistance were established. Several limit states of concrete containments and shear walls were identified and analytically formulated. Furthermore, the reliability analysis methods for estimating limit state probabilities were established. These reliability analysis methods can be used to evaluate the safety levels of nuclear structures under various combinations of static and dynamic loads. They can also be used to generate analytically the fragility data for PRA studies. In addition to the development of reliability analysis methods, probability-based design criteria for concrete containments and shear wall structures have also been developed. The proposed design criteria are in the load and resistance factor design (LRFD) format. The load and resistance factors are determined for several limit states and target limit state probabilities. Thus, the proposed design criteria are risk-consistent and have a well-established rationale. 73 refs., 18 figs., 16 tabs

  10. Long-Term Prediction of Satellite Orbit Using Analytical Method

    Directory of Open Access Journals (Sweden)

    Jae-Cheol Yoon

    1997-12-01

    A long-term prediction algorithm for geostationary orbits was developed using an analytical method. The perturbation force models include the geopotential up to fifth order and degree, luni-solar gravitation, and solar radiation pressure. All of the perturbation effects were analyzed in terms of secular, short-period, and long-period variations of the equinoctial elements: the semi-major axis, eccentricity vector, inclination vector, and mean longitude of the satellite. The results of the analytical orbit propagator were compared with those of a Cowell orbit propagator for KOREASAT. The comparison indicated that the analytical solution could predict the semi-major axis with an accuracy of better than ~35 meters over a period of 3 months.

  11. Assessment and Improving Methods of Reliability Indices in Bakhtar Regional Electricity Company

    Directory of Open Access Journals (Sweden)

    Saeed Shahrezaei

    2013-04-01

    The reliability of a system is its ability to perform its expected duties in the future, i.e., the probability of desirable operation in carrying out predetermined duties. Failure data for power system elements are the main input to reliability assessment of the network. The goal of reliability assessment is to determine characteristic reliability parameters from system history data. These parameters help to identify weak points of the system. In other words, the goal of reliability assessment is to improve operation and to reduce failures and power outages. This paper assesses the reliability indices of the Bakhtar Regional Electricity Company up to 1393, together with improvement methods and their effects on the reliability indices of this network. DIgSILENT Power Factory software is employed for simulation. The simulation results show the positive effect of the improvement methods on the reliability indices of the Bakhtar Regional Electricity Company.

  12. Reliability Impact of Stockpile Aging: Stress Voiding; TOPICAL

    International Nuclear Information System (INIS)

    ROBINSON, DAVID G.

    1999-01-01

    The objective of this research is to statistically characterize the aging of integrated circuit interconnects. This report supersedes the stress void aging characterization presented in SAND99-0975, ''Reliability Degradation Due to Stockpile Aging,'' by the same author. The physics of stress voiding before and after wafer processing has recently been characterized by F. G. Yost in SAND99-0601, ''Stress Voiding during Wafer Processing''. The current effort extends this research to account for uncertainties in grain size, storage temperature, void spacing and initial residual stress and their impact on interconnect failure after wafer processing. The sensitivity of the life estimates to these uncertainties is also investigated. Various methods for characterizing the probability of failure of a conductor line were investigated, including Latin hypercube sampling (LHS) and quasi-Monte Carlo sampling (qMC), as well as analytical methods such as the advanced mean value (AMV) method. The comparison was aided by the use of the Cassandra uncertainty analysis library. It was found that the only viable uncertainty analysis methods were those based on either LHS or quasi-Monte Carlo sampling. Analytical methods such as AMV could not be applied due to the nature of the stress voiding problem. The qMC method was chosen since it provided a smaller estimation error for a given number of samples. The preliminary results indicate that the reliability of integrated circuits with respect to stress voiding is very sensitive to the underlying uncertainties associated with grain size and void spacing. In particular, accurate characterization of IC reliability depends heavily not only on the first and second moments of the uncertainty distribution, but more specifically on the particular form of the underlying distribution
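
    As an illustration of the sampling-based uncertainty propagation discussed above, the sketch below draws a Latin hypercube sample of two uncertain inputs (grain size and void spacing, with hypothetical distributions) and estimates a failure probability for a purely illustrative limit state; it is not the stress-voiding model of SAND99-0601 or the Cassandra library.

    import numpy as np
    from scipy.stats import lognorm, norm, qmc

    # Latin hypercube sample of two uncertain inputs (hypothetical distributions):
    # grain size ~ lognormal, void spacing ~ normal.
    sampler = qmc.LatinHypercube(d=2, seed=42)
    u = sampler.random(n=10_000)                       # stratified points in [0, 1)^2

    grain = lognorm.ppf(u[:, 0], s=0.3, scale=1.0)     # micrometres
    spacing = norm.ppf(u[:, 1], loc=10.0, scale=2.0)   # micrometres

    # Purely illustrative limit state: "failure" when spacing/grain drops below 6.
    g = spacing / grain - 6.0
    print(f"estimated failure probability: {np.mean(g < 0.0):.4f}")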

  13. An Intelligent Method for Structural Reliability Analysis Based on Response Surface

    Institute of Scientific and Technical Information of China (English)

    桂劲松; 刘红; 康海贵

    2004-01-01

    As water depth increases, the structural safety and reliability of a system become more and more important and challenging. Therefore, structural reliability methods must be applied in ocean engineering design, such as offshore platform design. If the performance function is known in structural reliability analysis, the first-order second-moment method is often used. If the performance function cannot be expressed explicitly, the response surface method is commonly used because its reasoning is clear and its programming is simple. However, the traditional response surface method fits a quadratic-polynomial response surface, which leaves the accuracy problem unsolved because the true limit state surface can be fitted well only in the area near the checking point. In this paper, an intelligent computing method based on the whole response surface is proposed, which can be used when the performance function cannot be expressed explicitly in structural reliability analysis. In this method, a fuzzy-neural-network response surface for the whole area is constructed first, and then the structural reliability is calculated by a genetic algorithm. Because all the sample points for training the network come from the whole area, the true limit state surface over the whole area can be fitted. Computational examples and comparative analysis show that the proposed method is much better than the traditional quadratic-polynomial response surface method: the amount of finite element analysis is greatly reduced, the accuracy of calculation is improved, and the true limit state surface can be fitted very well over the whole area. The method proposed in this paper is therefore suitable for engineering application.
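
    For contrast with the proposed fuzzy-neural-network surface, the sketch below shows the traditional quadratic-polynomial response surface approach that the paper criticizes: a quadratic surrogate is fitted to a small set of evaluations of a hypothetical limit-state function and then sampled by Monte Carlo. All functions, distributions and sample sizes here are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def g(x1, x2):
        """Hypothetical limit-state function; failure corresponds to g < 0."""
        return 3.0 - x2 + 0.1 * x1 ** 3

    def quad_features(Z):
        return np.column_stack([np.ones(len(Z)), Z[:, 0], Z[:, 1],
                                Z[:, 0] ** 2, Z[:, 1] ** 2, Z[:, 0] * Z[:, 1]])

    # Fit the quadratic surrogate to a small design of experiments.
    X = rng.normal(0.0, 2.0, size=(30, 2))
    coef, *_ = np.linalg.lstsq(quad_features(X), g(X[:, 0], X[:, 1]), rcond=None)

    # Monte Carlo on the cheap surrogate instead of the expensive model.
    S = rng.normal(0.0, 1.0, size=(500_000, 2))
    p_fail = np.mean(quad_features(S) @ coef < 0.0)
    print(f"surrogate-based failure probability: {p_fail:.2e}")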

  14. Efficient surrogate models for reliability analysis of systems with multiple failure modes

    International Nuclear Information System (INIS)

    Bichon, Barron J.; McFarland, John M.; Mahadevan, Sankaran

    2011-01-01

    Despite many advances in the field of computational reliability analysis, the efficient estimation of the reliability of a system with multiple failure modes remains a persistent challenge. Various sampling and analytical methods are available, but they typically require accepting a tradeoff between accuracy and computational efficiency. In this work, a surrogate-based approach is presented that simultaneously addresses the issues of accuracy, efficiency, and unimportant failure modes. The method is based on the creation of Gaussian process surrogate models that are required to be locally accurate only in the regions of the component limit states that contribute to system failure. This approach to constructing surrogate models is demonstrated to be both an efficient and accurate method for system-level reliability analysis. - Highlights: → Extends efficient global reliability analysis to systems with multiple failure modes. → Constructs locally accurate Gaussian process models of each response. → Highly efficient and accurate method for assessing system reliability. → Effectiveness is demonstrated on several test problems from the literature.

  15. Reliability Evaluation of Bridges Based on Nonprobabilistic Response Surface Limit Method

    OpenAIRE

    Chen, Xuyong; Chen, Qian; Bian, Xiaoya; Fan, Jianping

    2017-01-01

    Due to many uncertainties in the nonprobabilistic reliability assessment of bridges, the limit state function is generally unknown. The traditional nonprobabilistic response surface method involves a lengthy, oscillating iteration process and makes the nonprobabilistic reliability index difficult to solve. This article proposes a nonprobabilistic response surface limit method based on the interval model. The intention of this method is to solve the upper and lower limits of the nonprobabilistic ...

  16. Survey of methods used to asses human reliability in the human factors reliability benchmark exercise

    International Nuclear Information System (INIS)

    Poucet, A.

    1988-01-01

    The Joint Research Centre of the European Commission has organised a Human Factors Reliability Benchmark Exercise (HF-RBE) with the aim to assess the state-of-the-art in human reliability modelling and assessment. Fifteen teams from eleven countries, representing industry, utilities, licensing organisations and research institutes, participate in the HF-RBE, which is organised around two study cases: (1) analysis of routine functional test and maintenance procedures, with the aim to assess the probability of test-induced failures, the probability of failures to remain unrevealed, and the potential to initiate transients because of errors performed in the test; and (2) analysis of human actions during an operational transient, with the aim to assess the probability that the operators will correctly diagnose the malfunctions and take proper corrective action. The paper briefly reports how the HF-RBE was structured and gives an overview of the methods that have been used for predicting human reliability in both study cases. The experience in applying these methods is discussed and the results obtained are compared. (author)

  17. An analytical-numerical comprehensive method for optimizing the fringing magnetic field

    International Nuclear Information System (INIS)

    Xiao Meiqin; Mao Naifeng

    1991-01-01

    The criterion for optimizing the fringing magnetic field is discussed, and a comprehensive analytical-numerical method for realizing the optimization is introduced. The method consists of two parts: the analytical part calculates the field of the shims, which correct the fringing magnetic field, using the uniform magnetizing method; the numerical part completes the calculation of the field distribution by solving the equation for the magnetic vector potential A over a region covered by arbitrary triangular meshes, using the finite difference method and the successive over-relaxation method. On the basis of this method, the optimization of the fringing magnetic field for a large-scale electromagnetic isotope separator has been completed
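
    As an illustration of the numerical part, the sketch below applies successive over-relaxation to a finite-difference Poisson problem for the vector potential on a regular square grid, with a hypothetical current patch as the source term; the paper itself works on arbitrary triangular meshes, which this simplified example does not reproduce.

    import numpy as np

    def solve_poisson_sor(source, h=1.0, omega=1.8, tol=1e-6, max_iter=20_000):
        """Successive over-relaxation for the 2-D Poisson equation
        laplacian(A) = -source on a square grid, with A = 0 on the boundary."""
        A = np.zeros_like(source)
        for _ in range(max_iter):
            max_change = 0.0
            for i in range(1, A.shape[0] - 1):
                for j in range(1, A.shape[1] - 1):
                    new = (1.0 - omega) * A[i, j] + omega * 0.25 * (
                        A[i + 1, j] + A[i - 1, j] + A[i, j + 1] + A[i, j - 1]
                        + h * h * source[i, j])
                    max_change = max(max_change, abs(new - A[i, j]))
                    A[i, j] = new
            if max_change < tol:
                break
        return A

    J = np.zeros((41, 41))
    J[18:23, 18:23] = 1.0            # a small current patch as the source term
    A = solve_poisson_sor(J)
    print(f"peak vector potential: {A.max():.4f}")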

  18. Limitations in simulator time-based human reliability analysis methods

    International Nuclear Information System (INIS)

    Wreathall, J.

    1989-01-01

    Human reliability analysis (HRA) methods have evolved slowly. Current methods are little changed from those of almost a decade ago, particularly in the use of time-reliability relationships. While these methods were suitable as an interim step, the time (and the need) has come to specify the next evolution of HRA methods. As with any performance-oriented data source, power plant simulator data have no direct connection to HRA models. Errors reported in data are normal deficiencies observed in human performance; failures are events modeled in probabilistic risk assessments (PRAs). Not all errors cause failures; not all failures are caused by errors. Second, the times at which actions are taken provide no measure of the likelihood of failures to act correctly within an accident scenario. Inferences can be made about human reliability, but they must be made with great care. Specific limitations are discussed. Simulator performance data are useful in providing qualitative evidence of the variety of error types and their potential influences on operating systems. More work is required to combine recent developments in the psychology of error with the qualitative data collected at simulators. Until data become openly available, however, such an advance will not be practical

  19. Development of an analytical method for the simultaneous analysis of MCPD esters and glycidyl esters in oil-based foodstuffs.

    Science.gov (United States)

    Ermacora, Alessia; Hrnčiřík, Karel

    2014-01-01

    Substantial progress has been recently made in the development and optimisation of analytical methods for the quantification of 2-MCPD, 3-MCPD and glycidyl esters in oils and fats, and there are a few methods currently available that allow a reliable quantification of these contaminants in bulk oils and fats. On the other hand, no standard method for the analysis of foodstuffs has yet been established. The aim of this study was the development and validation of a new method for the simultaneous quantification of 2-MCPD, 3-MCPD and glycidyl esters in oil-based food products. The developed protocol includes a first step of liquid-liquid extraction and purification of the lipophilic substances of the sample, followed by the application of a previously developed procedure based on acid transesterification, for the indirect quantification of these contaminants in oils and fats. The method validation was carried out on food products (fat-based spreads, creams, margarine, mayonnaise) manufactured in-house, in order to control the manufacturing process and account for any food matrix-analyte interactions (the sample spiking was carried out on the single components used for the formulations rather than the final products). The method showed good accuracy (the recoveries ranged from 97% to 106% for bound 3-MCPD and 2-MCPD and from 88% to 115% for bound glycidol) and sensitivity (the LOD was 0.04 and 0.05 mg kg(-1) for bound MCPD and glycidol, respectively). Repeatability and reproducibility were satisfactory (RSD below 2% and 5%, respectively) for all analytes. The levels of salts and surface-active compounds in the formulation were found to have no impact on the accuracy and the other parameters of the method.

  20. Reliability studies of diagnostic methods in Indian traditional Ayurveda medicine: An overview

    Science.gov (United States)

    Kurande, Vrinda Hitendra; Waagepetersen, Rasmus; Toft, Egon; Prasad, Ramjee

    2013-01-01

    Recently, a need to develop supportive new scientific evidence for contemporary Ayurveda has emerged. One of the research objectives is an assessment of the reliability of diagnoses and treatment. Reliability is a quantitative measure of consistency. It is a crucial issue in classification (such as prakriti classification), method development (pulse diagnosis), quality assurance for diagnosis and treatment and in the conduct of clinical studies. Several reliability studies are conducted in western medicine. The investigation of the reliability of traditional Chinese, Japanese and Sasang medicine diagnoses is in the formative stage. However, reliability studies in Ayurveda are in the preliminary stage. In this paper, examples are provided to illustrate relevant concepts of reliability studies of diagnostic methods and their implication in practice, education, and training. An introduction to reliability estimates and different study designs and statistical analysis is given for future studies in Ayurveda. PMID:23930037

  1. Modern methods in analytical acoustics lecture notes

    CERN Document Server

    Crighton, D G; Williams, J E Ffowcs; Heckl, M; Leppington, F G

    1992-01-01

    Modern Methods in Analytical Acoustics considers topics fundamental to the understanding of noise, vibration and fluid mechanisms. The series of lectures on which this material is based began some twenty-five years ago and has been developed and expanded ever since. Acknowledged experts in the field have given this course many times in Europe and the USA. Although the scope of the course has widened considerably, the primary aim of teaching analytical techniques of acoustics alongside specific areas of wave motion and unsteady fluid mechanisms remains. The distinguished authors of this volume are drawn from Departments of Acoustics, Engineering or Applied Mathematics in Berlin, Cambridge and London. Their intention is to reach a wider audience of all those concerned with acoustic analysis than has been able to attend the course.

  2. Reliably detectable flaw size for NDE methods that use calibration

    Science.gov (United States)

    Koshti, Ajay M.

    2017-04-01

    Probability of detection (POD) analysis is used in assessing the reliably detectable flaw size in nondestructive evaluation (NDE). MIL-HDBK-1823 and the associated mh1823 POD software give the most common methods of POD analysis. In this paper, POD analysis is applied to an NDE method, such as eddy current testing, where calibration is used. NDE calibration standards have artificial flaws of known size, such as electro-discharge machined (EDM) notches and flat bottom hole (FBH) reflectors, which are used to set instrument sensitivity for detection of real flaws. Real flaws such as cracks and crack-like flaws are to be detected using these NDE methods. A reliably detectable crack size is required for safe-life analysis of fracture-critical parts. Therefore, it is important to correlate signal responses from real flaws with signal responses from the artificial flaws used in the calibration process in order to determine the reliably detectable flaw size.

  3. OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Maja Vrkljan

    2004-12-01

    The purpose of this study was to determine an optimal dissolution method for silicate rock samples for further analytical purposes. An analytical FAAS method for determining the cobalt, chromium, copper, nickel, lead and zinc content of a gabbro sample and of the geochemical standard AGV-1 was applied for verification. Dissolution in mixtures of various inorganic acids was tested, as well as the Na2CO3 fusion technique. The results obtained by the different methods were compared, and dissolution in the mixture of HNO3 + HF is recommended as optimal.

  4. Statistical Bayesian method for reliability evaluation based on ADT data

    Science.gov (United States)

    Lu, Dawei; Wang, Lizhi; Sun, Yusheng; Wang, Xiaohong

    2018-05-01

    Accelerated degradation testing (ADT) is frequently conducted in the laboratory to predict a product's reliability under normal operating conditions. Two kinds of methods, degradation path models and stochastic process models, are utilized to analyze degradation data, and the latter is the more popular. However, limitations remain, such as an imprecise solution process and imprecise estimation of the degradation rate, which may affect the accuracy of the acceleration model and of the extrapolated values. Moreover, the usual solution to this problem, the Bayesian method, loses key information when unifying the degradation data. In this paper, a new data processing and parameter inference method based on the Bayesian method is proposed to handle degradation data and solve the problems above. First, a Wiener process and an acceleration model are chosen; second, the initial values of the degradation model and the parameters of the prior and posterior distributions at each stress level are calculated by updating and iterating the estimates; third, lifetime and reliability values are estimated on the basis of the estimated parameters; finally, a case study is provided to demonstrate the validity of the proposed method. The results illustrate that the proposed method is effective and accurate in estimating the lifetime and reliability of a product.
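
    For a linear-drift Wiener degradation process, the reliability function follows the inverse Gaussian first-passage law, which is the kind of closed-form building block used in such analyses. The sketch below evaluates it for hypothetical drift, diffusion and threshold values; it is not the paper's full Bayesian updating scheme across stress levels.

    import numpy as np
    from scipy.stats import norm

    def wiener_reliability(t, drift, sigma, threshold):
        """R(t) for a linear-drift Wiener degradation process: the first-passage
        time of the failure threshold follows an inverse Gaussian distribution."""
        t = np.asarray(t, dtype=float)
        a = (threshold - drift * t) / (sigma * np.sqrt(t))
        b = -(threshold + drift * t) / (sigma * np.sqrt(t))
        return norm.cdf(a) - np.exp(2.0 * drift * threshold / sigma ** 2) * norm.cdf(b)

    # Hypothetical parameters extrapolated to use conditions: drift per hour,
    # diffusion coefficient, and failure threshold in degradation units.
    drift, sigma, threshold = 0.02, 0.05, 1.0
    for t in (10.0, 25.0, 50.0, 75.0):
        print(f"R({t:>4.0f} h) = {wiener_reliability(t, drift, sigma, threshold):.4f}")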

  5. Analytical quality control [An IAEA service

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1973-07-01

    In analytical chemistry the determination of small or trace amounts of elements or compounds in different types of materials is increasingly important. The results of these findings have a great influence on different fields of science, and on human life. Their reliability, precision and accuracy must, therefore, be checked by analytical quality control measures. The International Atomic Energy Agency (IAEA) set up an Analytical Quality Control Service (AQCS) in 1962 to assist laboratories in Member States in the assessment of their reliability in radionuclide analysis, and in other branches of applied analysis in which radionuclides may be used as analytical implements. For practical reasons, most analytical laboratories are not in a position to check accuracy internally, as frequently resources are available for only one method; standardized sample material, particularly in the case of trace analysis, is not available and can be prepared by the institutes themselves only in exceptional cases; intercomparisons are organized rather seldom and many important types of analysis are so far not covered. AQCS assistance is provided by the shipment to laboratories of standard reference materials containing known quantities of different trace elements or radionuclides, as well as by the organization of analytical intercomparisons in which the participating laboratories are provided with aliquots of homogenized material of unknown composition for analysis. In the latter case the laboratories report their data to the Agency's laboratory, which calculates averages and distributions of results and advises each laboratory of its performance relative to all the others. Throughout the years several dozens of intercomparisons have been organized and many thousands of samples provided. The service offered, as a consequence, has grown enormously. The programme for 1973 and 1974, which is currently being distributed to Member States, will contain 31 different types of materials.

  6. Mathematical Methods in Survival Analysis, Reliability and Quality of Life

    CERN Document Server

    Huber, Catherine; Mesbah, Mounir

    2008-01-01

    Reliability and survival analysis are important applications of stochastic mathematics (probability, statistics and stochastic processes) that are usually covered separately in spite of the similarity of the involved mathematical theory. This title aims to redress this situation: it includes 21 chapters divided into four parts: Survival analysis, Reliability, Quality of life, and Related topics. Many of these chapters were presented at the European Seminar on Mathematical Methods for Survival Analysis, Reliability and Quality of Life in 2006.

  7. Developing a reliable signal wire attachment method for rail.

    Science.gov (United States)

    2014-11-01

    The goal of this project was to develop a better attachment method for rail signal wires to improve the reliability of signaling : systems. EWI conducted basic research into the failure mode of current attachment methods and developed and tested a ne...

  8. Analytic Methods Used in Quality Control in a Compounding Pharmacy.

    Science.gov (United States)

    Allen, Loyd V

    2017-01-01

    Analytical testing will no doubt become a more important part of pharmaceutical compounding as the public and regulatory agencies demand increasing documentation of the quality of compounded preparations. Compounding pharmacists must decide what types of testing and what amount of testing to include in their quality-control programs, and whether testing should be done in-house or outsourced. Like pharmaceutical compounding, analytical testing should be performed only by those who are appropriately trained and qualified. This article discusses the analytical methods that are used in quality control in a compounding pharmacy. Copyright© by International Journal of Pharmaceutical Compounding, Inc.

  9. Analytical maximum-likelihood method to detect patterns in real networks

    International Nuclear Information System (INIS)

    Squartini, Tiziano; Garlaschelli, Diego

    2011-01-01

    In order to detect patterns in real networks, randomized graph ensembles that preserve only part of the topology of an observed network are systematically used as fundamental null models. However, generating them is still problematic. Existing approaches are either computationally demanding and beyond analytic control, or analytically accessible but highly approximate. Here, we propose a solution to this long-standing problem by introducing a fast method that allows one to obtain expectation values and standard deviations of any topological property analytically, for any binary, weighted, directed or undirected network. Remarkably, the time required to obtain the expectation value of any property analytically across the entire graph ensemble is as short as that required to compute the same property using the adjacency matrix of the single original network. Our method reveals that the null behavior of various correlation properties is different from what was believed previously, and is highly sensitive to the particular network considered. Moreover, our approach shows that important structural properties (such as the modularity used in community detection problems) are currently based on incorrect expressions, and provides the exact quantities that should replace them.

  10. Advancing methods for reliably assessing motivational interviewing fidelity using the motivational interviewing skills code.

    Science.gov (United States)

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C

    2015-02-01

    The current paper presents novel methods for collecting MISC data and accurately assessing reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance-level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher compared to reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. EPA's analytical methods for water: The next generation

    International Nuclear Information System (INIS)

    Hites, R.A.; Budde, W.L.

    1991-01-01

    By the late 1970s, it had become clear to EPA that organic compounds were polluting many of the nation's waters. By 1977, as a result of a lawsuit by several environmentally concerned plaintiffs, EPA had focused on a list of 114 'priority' organic pollutants. Its long-term goal was the regulation of specific compounds that were found to pose significant environmental problems, a daunting task. Tens of thousands of samples needed to be measured by hundreds of different laboratories. Clearly, there were concerns about the comparability of data among laboratories. The result was a series of laboratory-based analytical 'methods.' These EPA methods are detailed, step-by-step directions (recipes) that describe everything the analyst needs to know to complete a satisfactory analysis. During the 1970s the first set of methods was developed; this was the '600 series' for the analysis of organic compounds in wastewater. In 1979 and the 1980s, a set of '500 series' methods, focusing on drinking water, was developed. By now, many of the 500 and 600 series methods are in widespread use, and it is clear that there is considerable overlap among the methods in terms of both procedures and analytes. Indiana University was asked by EPA to consider the question, 'Is it possible to revise or eliminate some of the 500 and 600 series methods and effect a savings of time and money?' This and related questions were studied and recommendations were developed

  12. Application of capability indices and control charts in the analytical method control strategy.

    Science.gov (United States)

    Oliva, Alexis; Llabres Martinez, Matías

    2017-08-01

    In this study, we assessed the usefulness of control charts in combination with the process capability indices, Cpm and Cpk, in the control strategy of an analytical method. The traditional X-chart and moving range chart were used to monitor the analytical method over a 2-year period. The results confirmed that the analytical method is in control and stable. Different criteria were used to establish the specification limits (i.e. analyst requirements) for fixed method performance (i.e. method requirements). If the specification limits and control limits are equal in breadth, the method can be considered "capable" (Cpm = 1), but it does not satisfy the minimum method capability requirements proposed by Pearn and Shu (2003). Similar results were obtained using the Cpk index. The method capability was also assessed as a function of method performance for fixed analyst requirements. The results indicate that the method does not meet the requirements of the analytical target approach. Real data from a SEC method with light-scattering detection were used as a model, whereas previously published data were used to illustrate the applicability of the proposed approach. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
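
    A minimal sketch of the quantities discussed above is given below: Cpk and Cpm computed from control-sample results, together with Shewhart individuals-chart limits based on the average moving range. The recovery data and specification limits are hypothetical, not those of the SEC method in the study.

    import numpy as np

    def capability(x, lsl, usl, target):
        """Process-capability indices from analytical control data."""
        x = np.asarray(x, dtype=float)
        mu, sd = x.mean(), x.std(ddof=1)
        cpk = min(usl - mu, mu - lsl) / (3.0 * sd)
        cpm = (usl - lsl) / (6.0 * np.sqrt(sd ** 2 + (mu - target) ** 2))
        return cpk, cpm

    def individuals_chart_limits(x):
        """Shewhart individuals (X) chart limits from the average moving range."""
        x = np.asarray(x, dtype=float)
        mr_bar = np.mean(np.abs(np.diff(x)))
        return x.mean() - 2.66 * mr_bar, x.mean(), x.mean() + 2.66 * mr_bar

    # Hypothetical recoveries (%) of a control sample run with each analytical batch
    recoveries = [99.2, 100.4, 98.7, 101.1, 99.8, 100.9, 99.5, 100.2, 98.9, 100.6]
    print("Cpk, Cpm:", capability(recoveries, lsl=97.0, usl=103.0, target=100.0))
    print("X-chart limits (LCL, CL, UCL):", individuals_chart_limits(recoveries))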

  13. Optimally Fortifying Logic Reliability through Criticality Ranking

    Directory of Open Access Journals (Sweden)

    Yu Bai

    2015-02-01

    With CMOS technology aggressively scaling towards the 22-nm node, modern FPGA devices face tremendous aging-induced reliability challenges due to bias temperature instability (BTI) and hot carrier injection (HCI). This paper presents a novel anti-aging technique at the logic level that is both scalable and applicable for VLSI digital circuits implemented with FPGA devices. The key idea is to prolong the lifetime of FPGA-mapped designs by strategically elevating the VDD values of some LUTs based on their modular criticality values. Although the idea of scaling VDD in order to improve either energy efficiency or circuit reliability has been explored extensively, our study distinguishes itself by approaching this challenge through an analytical procedure, therefore being able to maximize the overall reliability of the target FPGA design by rigorously modeling the BTI-induced device reliability and optimally solving the VDD assignment problem. Specifically, we first develop a systematic framework to analytically model the reliability of an FPGA LUT (look-up table), which consists of both RAM memory bits and the associated switching circuit. We also, for the first time, establish the relationship between signal transition density and a LUT’s reliability in an analytical way. This key observation further motivates us to define the modular criticality as the product of signal transition density and the logic observability of each LUT. Finally, we analytically prove, for the first time, that the optimal way to improve the overall reliability of a whole FPGA device is to fortify individual LUTs according to their modular criticality. To the best of our knowledge, this work is the first to draw such a conclusion.

  14. Reliability analysis for thermal cutting method based non-explosive separation device

    International Nuclear Information System (INIS)

    Choi, Jun Woo; Hwang, Kuk Ha; Kim, Byung Kyu

    2016-01-01

    In order to increase the reliability of a separation device for a small satellite, a new non-explosive separation device is invented. This device is activated using a thermal cutting method with a Ni-Cr wire. A reliability analysis is carried out for the proposed non-explosive separation device by applying the Fault tree analysis (FTA) method. In the FTA results for the separation device, only ten single-point failure modes are found. The reliability modeling and analysis for the device are performed considering failure of the power supply, failure of the Ni-Cr wire to burn and unwind, holder separation failure, ball separation failure, and pin release failure. Ultimately, the reliability of the proposed device is calculated as 0.999989 with five Ni-Cr wire coils.

  15. Reliability analysis for thermal cutting method based non-explosive separation device

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Woo; Hwang, Kuk Ha; Kim, Byung Kyu [Korea Aerospace University, Goyang (Korea, Republic of)

    2016-12-15

    In order to increase the reliability of a separation device for a small satellite, a new non-explosive separation device is invented. This device is activated using a thermal cutting method with a Ni-Cr wire. A reliability analysis is carried out for the proposed non-explosive separation device by applying the Fault tree analysis (FTA) method. In the FTA results for the separation device, only ten single-point failure modes are found. The reliability modeling and analysis for the device are performed considering failure of the power supply, failure of the Ni-Cr wire to burn and unwind, holder separation failure, ball separation failure, and pin release failure. Ultimately, the reliability of the proposed device is calculated as 0.999989 with five Ni-Cr wire coils.
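
    The figure quoted above depends on the paper's FTA model; as a generic illustration only, the sketch below shows how the reliability of a device that succeeds if at least one of n redundant Ni-Cr coils fires follows from an assumed per-coil success probability (the 0.92 value is invented, not taken from the paper).

    ```python
    def parallel_reliability(p_single, n):
        """Reliability of n redundant elements when any single success is sufficient."""
        return 1.0 - (1.0 - p_single) ** n

    # Invented per-coil reliability; with 5 coils the overall figure rises sharply.
    for n in (1, 3, 5):
        print(n, parallel_reliability(0.92, n))
    ```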

  16. Analytical chromatography. Methods, instrumentation and applications

    International Nuclear Information System (INIS)

    Yashin, Ya I; Yashin, A Ya

    2006-01-01

    The state-of-the-art and the prospects in the development of main methods of analytical chromatography, viz., gas, high performance liquid and ion chromatographic techniques, are characterised. Achievements of the past 10-15 years in the theory and general methodology of chromatography and also in the development of new sorbents, columns and chromatographic instruments are outlined. The use of chromatography in the environmental control, biology, medicine, pharmaceutics, and also for monitoring the quality of foodstuffs and products of chemical, petrochemical and gas industries, etc. is considered.

  17. The Emergence of the Analytical Method

    DEFF Research Database (Denmark)

    Plum, Maja

    2012-01-01

    accountability, visibility and documentation. It is argued that pedagogy is generated as a sequential and unit-specified way of working on the production of ‘the learning child’, forming a time- and material-optimising approach. Hereby, the nursery teacher, as a daily scientific researcher, comes to serve...... the nation by an ongoing observational intervention, producing the learning foundation for the entrepreneurial citizen, and thus the nation as a knowledge society in a globalised world. This is what this article terms the emergence of the analytical method....

  18. An Investment Level Decision Method to Secure Long-term Reliability

    Science.gov (United States)

    Bamba, Satoshi; Yabe, Kuniaki; Seki, Tomomichi; Shibaya, Tetsuji

    The slowdown in power demand growth and in facility replacement causes aging and lower reliability of power facilities. This aging will be followed by a rapid increase in repair and replacement needs when many facilities reach the end of their lifetimes in the future. This paper describes a method to estimate future repair and replacement costs by applying a life-cycle cost model and renewal theory to the historical data. This paper also describes a method to decide the optimum investment plan, which replaces facilities in order of cost-effectiveness by setting a replacement priority formula, and the minimum investment level needed to keep the reliability. Estimation examples applied to substation facilities show that a reasonable and leveled future cash-out can maintain the reliability by lowering the percentage of replacements caused by fatal failures.

  19. Analytical approach to linear fractional partial differential equations arising in fluid mechanics

    International Nuclear Information System (INIS)

    Momani, Shaher; Odibat, Zaid

    2006-01-01

    In this Letter, we implement relatively new analytical techniques, the variational iteration method and the Adomian decomposition method, for solving linear fractional partial differential equations arising in fluid mechanics. The fractional derivatives are described in the Caputo sense. The two methods can be used as alternative approaches for obtaining analytic and approximate solutions for different types of fractional differential equations. In these methods, the solution takes the form of a convergent series with easily computable components. The corresponding solutions of the integer-order equations are found to follow as special cases of those of the fractional-order equations. Some numerical examples are presented to illustrate the efficiency and reliability of the two methods.
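
    For reference, the Caputo fractional derivative used in such formulations, and the convergent series form in which both methods return the solution, are conventionally written as follows (the symbols are standard, not specific to this record):

    ```latex
    % Caputo fractional derivative of order \alpha (m-1 < \alpha \le m) and the
    % series form in which both VIM and ADM return the solution.
    D^{\alpha}_{*} f(t) = \frac{1}{\Gamma(m-\alpha)}
      \int_{0}^{t} (t-\tau)^{\,m-\alpha-1} f^{(m)}(\tau)\, d\tau ,
    \qquad
    u(x,t) = \sum_{n=0}^{\infty} u_n(x,t).
    ```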

  20. An analytical optimization method for electric propulsion orbit transfer vehicles

    International Nuclear Information System (INIS)

    Oleson, S.R.

    1993-01-01

    Due to electric propulsion's inherent propellant mass savings over chemical propulsion, electric propulsion orbit transfer vehicles (EPOTVs) are a highly efficient mode of orbit transfer. When selecting an electric propulsion device (ion, MPD, or arcjet) and propellant for a particular mission, it is preferable to use quick, analytical system optimization methods instead of time-intensive numerical integration methods. It is also of interest to determine each thruster's optimal operating characteristics for a specific mission. Analytical expressions are derived which determine the optimal specific impulse (Isp) for each type of electric thruster to maximize payload fraction for a desired thrusting time. These expressions take into account the variation of thruster efficiency with specific impulse. Verification of the method is made with representative electric propulsion values on a LEO-to-GEO mission. Application of the method to specific missions is discussed.
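
    As general background rather than the paper's own derivation, the classical electric-propulsion trade-off is often summarized by the characteristic velocity, which sets the scale of the optimal exhaust velocity (and hence the optimal Isp) for a given thrusting time t, thruster efficiency η and power-plant specific mass α:

    ```latex
    % Classical characteristic-velocity relation (often attributed to Stuhlinger);
    % the optimal exhaust velocity, and hence the optimal Isp, scales with v_{ch}.
    v_{ch} = \sqrt{\frac{2\,\eta\, t}{\alpha}}, \qquad I_{sp}^{\,opt} \sim \frac{v_{ch}}{g_0}.
    ```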

  1. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    Directory of Open Access Journals (Sweden)

    Nielsen Morten

    2009-07-01

    Full Text Available Abstract Background Estimation of the reliability of specific real value predictions is nontrivial and the efficacy of this is often questionable. It is important to know if you can trust a given prediction and therefore the best methods associate a prediction with a reliability score or index. For discrete qualitative predictions, the reliability is conventionally estimated as the difference between output scores of selected classes. Such an approach is not feasible for methods that predict a biological feature as a single real value rather than a classification. As a solution to this challenge, we have implemented a method that predicts the relative surface accessibility of an amino acid and simultaneously predicts the reliability for each prediction, in the form of a Z-score. Results An ensemble of artificial neural networks has been trained on a set of experimentally solved protein structures to predict the relative exposure of the amino acids. The method assigns a reliability score to each surface accessibility prediction as an inherent part of the training process. This is in contrast to the most commonly used procedures where reliabilities are obtained by post-processing the output. Conclusion The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best public available method, Real-SPINE. Both methods associate a reliability score with the individual predictions. However, our implementation of reliability scores in the form of a Z-score is shown to be the more informative measure for discriminating good predictions from bad ones in the entire range from completely buried to fully exposed amino acids. This is evident when comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. For this subset, values of 0
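
    As a purely illustrative aside (the record does not give the actual scoring formula), the sketch below shows one generic way an ensemble of real-valued predictions can be combined into a mean prediction plus a Z-score-like agreement index; the array shapes and numbers are assumptions.

    ```python
    import numpy as np

    def ensemble_prediction_with_z(ensemble_outputs):
        """Combine per-network predictions of relative surface accessibility.

        ensemble_outputs: array of shape (n_networks, n_residues) with real-valued
        predictions in [0, 1]. Returns the mean prediction and a simple reliability
        index that is larger when the networks agree with each other.
        """
        outputs = np.asarray(ensemble_outputs, dtype=float)
        mean = outputs.mean(axis=0)
        spread = outputs.std(axis=0, ddof=1)
        # Small spread -> high agreement -> high reliability score.
        z_like = 1.0 / (spread + 1e-6)
        return mean, z_like

    preds = np.array([[0.10, 0.85, 0.40],
                      [0.12, 0.80, 0.60],
                      [0.08, 0.90, 0.20]])
    print(ensemble_prediction_with_z(preds))
    ```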

  2. Bearing Procurement Analysis Method by Total Cost of Ownership Analysis and Reliability Prediction

    Science.gov (United States)

    Trusaji, Wildan; Akbar, Muhammad; Sukoyo; Irianto, Dradjad

    2018-03-01

    In bearing procurement analysis, price and reliability must both be considered as decision criteria, since price determines the direct (acquisition) cost and bearing reliability determines indirect costs such as maintenance cost. Although the indirect cost is hard to identify and measure, it contributes substantially to the overall cost that will be incurred, so the indirect cost of reliability must be considered when making a bearing procurement analysis. This paper explains a bearing evaluation method based on total cost of ownership analysis that considers price and maintenance cost as decision criteria. Furthermore, since failure data are scarce at the bearing evaluation phase, a reliability prediction method is used to predict bearing reliability from the bearing's dynamic load rating parameter. With this method, a bearing with a higher price but higher reliability is preferable for long-term planning, while for short-term planning the cheaper bearing with lower reliability is preferable. This context dependence can give rise to conflict between stakeholders; thus, the planning horizon needs to be agreed by all stakeholders before making a procurement decision.
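
    The reliability prediction mentioned above starts from the bearing's dynamic load rating; as a hedged illustration only, the sketch below uses the standard basic rating life relation L10 = (C/P)^p (p = 3 for ball bearings, 10/3 for roller bearings), with load values that are invented rather than taken from the paper.

    ```python
    def l10_life_revolutions(C, P, ball_bearing=True):
        """Basic rating life L10 (revolutions with 90 % survival probability).

        C : basic dynamic load rating of the bearing [N]
        P : equivalent dynamic load on the bearing [N]
        """
        p = 3.0 if ball_bearing else 10.0 / 3.0
        return (C / P) ** p * 1e6

    # Two candidate bearings: the pricier one has a higher load rating, hence longer life.
    cheap = l10_life_revolutions(20_000, 5_000)
    premium = l10_life_revolutions(26_000, 5_000)
    print(f"cheap: {cheap:.2e} rev, premium: {premium:.2e} rev")
    ```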

  3. The design and use of reliability data base with analysis tool

    Energy Technology Data Exchange (ETDEWEB)

    Doorepall, J.; Cooke, R.; Paulsen, J.; Hokstadt, P.

    1996-06-01

    With the advent of sophisticated computer tools, it is possible to give a distributed population of users direct access to reliability component operational histories. This allows the user a greater freedom in defining statistical populations of components and selecting failure modes. However, the reliability data analyst's current analytical instrumentarium is not adequate for this purpose. The terminology used in organizing and gathering reliability data is standardized, and the statistical methods used in analyzing this data are not always suitably chosen. This report attempts to establish a baseline with regard to terminology and analysis methods, to support the use of a new analysis tool. It builds on results obtained in several projects for the ESTEC and SKI on the design of reliability databases. Starting with component socket time histories, we identify a sequence of questions which should be answered prior to the employment of analytical methods. These questions concern the homogeneity and stationarity of (possible dependent) competing failure modes and the independence of competing failure modes. Statistical tests, some of them new, are proposed for answering these questions. Attention is given to issues of non-identifiability of competing risk and clustering of failure-repair events. These ideas have been implemented in an analysis tool for grazing component socket time histories, and illustrative results are presented. The appendix provides background on statistical tests and competing failure modes. (au) 4 tabs., 17 ills., 61 refs.

  4. The design and use of reliability data base with analysis tool

    International Nuclear Information System (INIS)

    Doorepall, J.; Cooke, R.; Paulsen, J.; Hokstadt, P.

    1996-06-01

    With the advent of sophisticated computer tools, it is possible to give a distributed population of users direct access to reliability component operational histories. This allows the user a greater freedom in defining statistical populations of components and selecting failure modes. However, the reliability data analyst's current analytical instrumentarium is not adequate for this purpose. The terminology used in organizing and gathering reliability data is standardized, and the statistical methods used in analyzing this data are not always suitably chosen. This report attempts to establish a baseline with regard to terminology and analysis methods, to support the use of a new analysis tool. It builds on results obtained in several projects for the ESTEC and SKI on the design of reliability databases. Starting with component socket time histories, we identify a sequence of questions which should be answered prior to the employment of analytical methods. These questions concern the homogeneity and stationarity of (possible dependent) competing failure modes and the independence of competing failure modes. Statistical tests, some of them new, are proposed for answering these questions. Attention is given to issues of non-identifiability of competing risk and clustering of failure-repair events. These ideas have been implemented in an analysis tool for grazing component socket time histories, and illustrative results are presented. The appendix provides background on statistical tests and competing failure modes. (au) 4 tabs., 17 ills., 61 refs
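
    The stationarity questions raised in this record are commonly approached with trend tests on the component socket time histories; as a generic illustration (not the specific tests proposed in the report), the sketch below implements the classical Laplace trend test for a time-truncated observation window, with an invented failure history.

    ```python
    import math

    def laplace_trend_test(failure_times, observation_end):
        """Laplace trend test for a time-truncated failure history.

        failure_times   : cumulative operating times at which failures occurred
        observation_end : total observed operating time T
        Returns the statistic U, approximately N(0,1) under the no-trend hypothesis;
        U >> 0 suggests a deteriorating (non-stationary) component.
        """
        n = len(failure_times)
        t_bar = sum(failure_times) / n
        return (t_bar - observation_end / 2.0) / (observation_end / math.sqrt(12.0 * n))

    # Invented socket history: failures clustering toward the end of the window.
    print(laplace_trend_test([400, 900, 1300, 1600, 1800], observation_end=2000))
    ```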

  5. Computer controlled quality of analytical measurements

    International Nuclear Information System (INIS)

    Clark, J.P.; Huff, G.A.

    1979-01-01

    A PDP 11/35 computer system is used in evaluating analytical chemistry measurements quality control data at the Barnwell Nuclear Fuel Plant. This computerized measurement quality control system has several features which are not available in manual systems, such as real-time measurement control, computer-calculated bias corrections and standard deviation estimates, surveillance applications, evaluation of measurement system variables, records storage, immediate analyst recertification, and the elimination of routine analysis of known bench standards. The effectiveness of the Barnwell computer system has been demonstrated in gathering and assimilating the measurements of over 1100 quality control samples obtained during a recent plant demonstration run. These data were used to determine equations for predicting measurement reliability estimates (bias and precision); to evaluate the measurement system; and to provide direction for modification of chemistry methods. The analytical chemistry measurement quality control activities represented 10% of the total analytical chemistry effort.
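
    A minimal sketch of the kind of real-time check described above, flagging a blind quality-control-standard result against ± 3 sigma control limits; the reference value and sigma are invented, not Barnwell data.

    ```python
    def within_control(measured, reference, sigma):
        """Flag a quality-control-standard measurement against +/- 3 sigma limits."""
        return abs(measured - reference) <= 3.0 * sigma

    # Invented QCS: reference value 10.00 (concentration units), historical sigma 0.05.
    for shift_result in (10.03, 9.82, 10.17):
        status = "in control" if within_control(shift_result, 10.00, 0.05) else "out of control"
        print(shift_result, status)
    ```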

  6. A first course in ordinary differential equations analytical and numerical methods

    CERN Document Server

    Hermann, Martin

    2014-01-01

    This book presents a modern introduction to analytical and numerical techniques for solving ordinary differential equations (ODEs). Contrary to the traditional format—the theorem-and-proof format—the book focuses on analytical and numerical methods. The book supplies a variety of problems and examples, ranging from the elementary to the advanced level, to introduce and study the mathematics of ODEs. The analytical part of the book deals with solution techniques for scalar first-order and second-order linear ODEs, and systems of linear ODEs—with a special focus on the Laplace transform, operator techniques and power series solutions. In the numerical part, theoretical and practical aspects of Runge-Kutta methods for solving initial-value problems and shooting methods for linear two-point boundary-value problems are considered. The book is intended as a primary text for courses on the theory of ODEs and numerical treatment of ODEs for advanced undergraduate and early graduate students. It is assumed t...
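
    As a small companion example in the spirit of the book's numerical part, the sketch below implements the classical fourth-order Runge-Kutta method for an initial-value problem; the test equation and step size are chosen purely for illustration.

    ```python
    def rk4(f, t0, y0, h, n_steps):
        """Classical 4th-order Runge-Kutta for the IVP y' = f(t, y), y(t0) = y0."""
        t, y = t0, y0
        for _ in range(n_steps):
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h * k1 / 2)
            k3 = f(t + h / 2, y + h * k2 / 2)
            k4 = f(t + h, y + h * k3)
            y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
            t += h
        return y

    # y' = -2y, y(0) = 1; the exact solution gives y(1) = exp(-2) ~ 0.1353.
    print(rk4(lambda t, y: -2.0 * y, 0.0, 1.0, h=0.1, n_steps=10))
    ```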

  7. The Reliability, Impact, and Cost-Effectiveness of Value-Added Teacher Assessment Methods

    Science.gov (United States)

    Yeh, Stuart S.

    2012-01-01

    This article reviews evidence regarding the intertemporal reliability of teacher rankings based on value-added methods. Value-added methods exhibit low reliability, yet are broadly supported by prominent educational researchers and are increasingly being used to evaluate and fire teachers. The article then presents a cost-effectiveness analysis…

  8. Reliability of a semi-quantitative method for dermal exposure assessment (DREAM)

    NARCIS (Netherlands)

    Wendel de Joode, B. van; Hemmen, J.J. van; Meijster, T.; Major, V.; London, L.; Kromhout, H.

    2005-01-01

    Valid and reliable semi-quantitative dermal exposure assessment methods for epidemiological research and for occupational hygiene practice, applicable for different chemical agents, are practically nonexistent. The aim of this study was to assess the reliability of a recently developed

  9. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    Science.gov (United States)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composites (CMCs) components has been proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of the CMCs components. By comparison with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability-analysis-based finite element modeling in engineering practice.
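
    The full RSBDO machinery is beyond the scope of an abstract; as background only, the sketch below computes a first-order second-moment reliability index and the corresponding failure probability for a simple limit state g = R - S with independent normal variables, the kind of quantity that is typically checked against Monte Carlo simulation. The strength and stress numbers are invented.

    ```python
    import math

    def second_moment_reliability(mu_r, sigma_r, mu_s, sigma_s):
        """First-order second-moment index for the limit state g = R - S
        with independent, normally distributed resistance R and load effect S."""
        beta = (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)
        p_fail = 0.5 * math.erfc(beta / math.sqrt(2.0))  # Phi(-beta)
        return beta, p_fail

    # Invented component numbers: strength 300 +/- 30 MPa, stress 180 +/- 25 MPa.
    print(second_moment_reliability(300.0, 30.0, 180.0, 25.0))
    ```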

  10. System reliability with correlated components: Accuracy of the Equivalent Planes method

    NARCIS (Netherlands)

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, A.C.W.M.

    2015-01-01

    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing

  11. System reliability with correlated components : Accuracy of the Equivalent Planes method

    NARCIS (Netherlands)

    Roscoe, K.; Diermanse, F.; Vrouwenvelder, T.

    2015-01-01

    Computing system reliability when system components are correlated presents a challenge because it usually requires solving multi-fold integrals numerically, which is generally infeasible due to the computational cost. In Dutch flood defense reliability modeling, an efficient method for computing

  12. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement.

    Science.gov (United States)

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-12-01

    The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken on all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via AutoCAD software and flexible ruler methods. The current study is accomplished in 2 parts: intratester and intertester evaluations of reliability as well as the validity of the flexible ruler and software methods. Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. AutoCAD showed to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis.

  13. Selected Methods For Increases Reliability The Of Electronic Systems Security

    Directory of Open Access Journals (Sweden)

    Paś Jacek

    2015-11-01

    Full Text Available The article presents issues related to different methods of increasing the reliability of electronic security systems (ESS), for example a fire alarm system (SSP). In the descriptive sense, the reliability of an SSP is its capacity to perform a preset function (e.g. fire protection of an airport, a port, a logistics base, etc.) for a certain time and under certain conditions (e.g. environmental), despite possible failures of a specific subset of the system's elements. A review of the available literature on ESS-SSP shows that studies on methods of increasing reliability are not available (several works address similar topics, but with respect to burglary and robbery, i.e. intrusion). The analysis is based on the set of all paths in the system that determine the suitability of the SSP for the mentioned fire-event scenarios (devices critical for security).

  14. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    Science.gov (United States)

    Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties in the design variables; common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and has been used to study four different test cases that have been
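
    As a generic illustration of what a sampling-based module computes (not NESSUS itself), the sketch below estimates a response density's mean, a percentile and a failure probability by plain Monte Carlo sampling of an invented limit state and invented input distributions.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    # Invented limit state: failure occurs when the load effect exceeds the resistance.
    resistance = rng.normal(300.0, 30.0, n)
    load = rng.lognormal(mean=5.1, sigma=0.15, size=n)
    response = resistance - load

    p_fail = np.mean(response < 0.0)
    print("mean response:", response.mean())
    print("5th percentile:", np.percentile(response, 5))
    print("estimated P(failure):", p_fail, "+/-", (p_fail * (1 - p_fail) / n) ** 0.5)
    ```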

  15. Analytical procedures for determining the impacts of reliability mitigation strategies.

    Science.gov (United States)

    2013-01-01

    Reliability of transport, especially the ability to reach a destination within a certain amount of time, is a regular concern of travelers and shippers. The definition of reliability used in this research is how travel time varies over time. The vari...

  16. Use of scientometrics to assess nuclear and other analytical methods

    International Nuclear Information System (INIS)

    Lyon, W.S.

    1986-01-01

    Scientometrics involves the use of quantitative methods to investigate science viewed as an information process. Scientometric studies can be useful in ascertaining which methods have been most employed for various analytical determinations as well as for predicting which methods will continue to be used in the immediate future and which appear to be losing favor with the analytical community. Published papers in the technical literature are the primary source materials for scientometric studies; statistical methods and computer techniques are the tools. Recent studies have included growth and trends in prompt nuclear analysis, the impact of research published in a technical journal, and institutional and national representation, speakers and topics at several IAEA conferences, at modern trends in activation analysis conferences, and at other non-nuclear oriented conferences. Attempts have also been made to predict future growth of various topics and techniques. 13 refs., 4 figs., 17 tabs.

  17. Use of scientometrics to assess nuclear and other analytical methods

    Energy Technology Data Exchange (ETDEWEB)

    Lyon, W.S.

    1986-01-01

    Scientometrics involves the use of quantitative methods to investigate science viewed as an information process. Scientometric studies can be useful in ascertaining which methods have been most employed for various analytical determinations as well as for predicting which methods will continue to be used in the immediate future and which appear to be losing favor with the analytical community. Published papers in the technical literature are the primary source materials for scientometric studies; statistical methods and computer techniques are the tools. Recent studies have included growth and trends in prompt nuclear analysis, the impact of research published in a technical journal, and institutional and national representation, speakers and topics at several IAEA conferences, at modern trends in activation analysis conferences, and at other non-nuclear oriented conferences. Attempts have also been made to predict future growth of various topics and techniques. 13 refs., 4 figs., 17 tabs.

  18. Prediction of polymer flooding performance using an analytical method

    International Nuclear Information System (INIS)

    Tan Czek Hoong; Mariyamni Awang; Foo Kok Wai

    2001-01-01

    The study investigated the applicability of an analytical method developed by El-Khatib to polymer flooding. Results from the UTCHEM simulator and from experiments were compared with the El-Khatib prediction method. In general, by assuming a constant-viscosity polymer injection, the method gave much higher recovery values than the simulation runs and the experiments. A modification of the method gave a better correlation, albeit only for oil production. Investigation is continuing on modifying the method so that a better overall fit can be obtained for polymer flooding. (Author)

  19. Solution standards for quality control of nuclear-material analytical measurements

    International Nuclear Information System (INIS)

    Clark, J.P.

    1981-01-01

    Analytical chemistry measurement control depends upon reliable solution standards. At the Savannah River Plant Control Laboratory over a thousand analytical measurements are made daily for process control, product specification, accountability, and nuclear safety. Large quantities of solution standards are required for a measurement quality control program covering the many different analytical chemistry methods. Savannah River Plant produced uranium, plutonium, neptunium, and americium metals or oxides are dissolved to prepare stock solutions for working or Quality Control Standards (QCS). Because extensive analytical effort is required to characterize or confirm these solutions, they are prepared in large quantities. These stock solutions are diluted and blended with different chemicals and/or each other to synthesize QCS that match the matrices of different process streams. The target uncertainty of a standard's reference value is 10% of the limit of error of the methods used for routine measurements. Standard Reference Materials from NBS are used according to special procedures to calibrate the methods used in measuring the uranium and plutonium standards so traceability can be established. Special precautions are required to minimize the effects of temperature, radiolysis, and evaporation. Standard reference values are periodically corrected to eliminate systematic errors caused by evaporation or decay products. Measurement control is achieved by requiring analysts to analyze a blind QCS each shift a measurement system is used on plant samples. Computer evaluation determines whether or not a measurement is within the +- 3 sigma control limits. Monthly evaluations of the QCS measurements are made to determine current bias correction factors for accountability measurements and detect significant changes in the bias and precision statistics. The evaluations are also used to plan activities for improving the reliability of the analytical chemistry measurements

  20. Reliability evaluation of deregulated electric power systems for planning applications

    International Nuclear Information System (INIS)

    Ehsani, A.; Ranjbar, A.M.; Jafari, A.; Fotuhi-Firuzabad, M.

    2008-01-01

    In a deregulated electric power utility industry in which a competitive electricity market can influence system reliability, market risks cannot be ignored. This paper (1) proposes an analytical probabilistic model for reliability evaluation of competitive electricity markets and (2) develops a methodology for incorporating the market reliability problem into HLII reliability studies. A Markov state space diagram is employed to evaluate the market reliability. Since the market is a continuously operated system, the concept of absorbing states is applied to it in order to evaluate the reliability. The market states are identified by using market performance indices, and the transition rates are calculated by using historical data. The key point in the proposed method is the concept that the reliability level of a restructured electric power system can be calculated using the availability of the composite power system (HLII) and the reliability of the electricity market. Two case studies are carried out on the Roy Billinton Test System (RBTS) to illustrate interesting features of the proposed methodology.
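
    For background on the underlying machinery (not the paper's market model), the steady-state availability of a single two-state Markov component with failure rate λ and repair rate μ is:

    ```latex
    % Steady-state availability and unavailability of a two-state (up/down) Markov component.
    A = \frac{\mu}{\lambda + \mu}, \qquad U = 1 - A = \frac{\lambda}{\lambda + \mu}.
    ```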

  1. Groundwater Seepage Estimation into Amirkabir Tunnel Using Analytical Methods and DEM and SGR Method

    OpenAIRE

    Hadi Farhadian; Homayoon Katibeh

    2015-01-01

    In this paper, groundwater seepage into Amirkabir tunnel has been estimated using analytical and numerical methods for 14 different sections of the tunnel. The Site Groundwater Rating (SGR) method has also been applied for qualitative and quantitative classification of the tunnel sections. The results of the above-mentioned methods were compared and show reasonable agreement, except for two sections of the tunnel. In these t...

  2. Theoretical, analytical, and statistical interpretation of environmental data

    International Nuclear Information System (INIS)

    Lombard, S.M.

    1974-01-01

    The reliability of data from radiochemical analyses of environmental samples cannot be determined from nuclear counting statistics alone. The rigorous application of the principles of propagation of errors, an understanding of the physics and chemistry of the species of interest in the environment, and the application of information from research on the analytical procedure are all necessary for a valid estimation of the errors associated with analytical results. The specific case of the determination of plutonium in soil is considered in terms of analytical problems and data reliability. (U.S.)
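
    For reference, the propagation-of-errors principle invoked here is conventionally written, for a result y = f(x_1, ..., x_n) with independent input quantities, as:

    ```latex
    % First-order propagation of uncertainty for independent input quantities.
    \sigma_y^{2} \;=\; \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^{\!2} \sigma_{x_i}^{2}
    ```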

  3. Issues in benchmarking human reliability analysis methods: A literature review

    International Nuclear Information System (INIS)

    Boring, Ronald L.; Hendrickson, Stacey M.L.; Forester, John A.; Tran, Tuan Q.; Lois, Erasmia

    2010-01-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessments (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study comparing and evaluating HRA methods in assessing operator performance in simulator experiments is currently underway. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  4. Issues in benchmarking human reliability analysis methods : a literature review.

    Energy Technology Data Exchange (ETDEWEB)

    Lois, Erasmia (US Nuclear Regulatory Commission); Forester, John Alan; Tran, Tuan Q. (Idaho National Laboratory, Idaho Falls, ID); Hendrickson, Stacey M. Langfitt; Boring, Ronald L. (Idaho National Laboratory, Idaho Falls, ID)

    2008-04-01

    There is a diversity of human reliability analysis (HRA) methods available for use in assessing human performance within probabilistic risk assessment (PRA). Due to the significant differences in the methods, including the scope, approach, and underlying models, there is a need for an empirical comparison investigating the validity and reliability of the methods. To accomplish this empirical comparison, a benchmarking study is currently underway that compares HRA methods with each other and against operator performance in simulator studies. In order to account for as many effects as possible in the construction of this benchmarking study, a literature review was conducted, reviewing past benchmarking studies in the areas of psychology and risk assessment. A number of lessons learned through these studies are presented in order to aid in the design of future HRA benchmarking endeavors.

  5. Waste Tank Organic Safety Program: Analytical methods development. Progress report, FY 1994

    International Nuclear Information System (INIS)

    Campbell, J.A.; Clauss, S.A.; Grant, K.E.

    1994-09-01

    The objectives of this task are to develop and document extraction and analysis methods for organics in waste tanks, and to extend these methods to the analysis of actual core samples to support the Waste Tank Organic Safety Program. This report documents progress at Pacific Northwest Laboratory during FY 1994 on methods development, the analysis of waste from Tank 241-C-103 (Tank C-103) and T-111, and the transfer of documented, developed analytical methods to personnel in the Analytical Chemistry Laboratory (ACL) and 222-S laboratory. This report is intended as an annual report, not a completed work.

  6. Evaluation of Information Requirements of Reliability Methods in Engineering Design

    DEFF Research Database (Denmark)

    Marini, Vinicius Kaster; Restrepo-Giraldo, John Dairo; Ahmed-Kristensen, Saeema

    2010-01-01

    This paper aims to characterize the information needed to perform methods for robustness and reliability, and verify their applicability to early design stages. Several methods were evaluated on their support to synthesis in engineering design. Of those methods, FMEA, FTA and HAZOP were selected...

  7. Human reliability analysis of Lingao Nuclear Power Station

    International Nuclear Information System (INIS)

    Zhang Li; Huang Shudong; Yang Hong; He Aiwu; Huang Xiangrui; Zheng Tao; Su Shengbing; Xi Haiying

    2001-01-01

    The necessity of human reliability analysis (HRA) for Lingao Nuclear Power Station is analyzed, and the method and operating procedures of HRA are briefly described. One of the human factors events (HFE) is analyzed in detail and some questions concerning HRA are discussed. The authors present the analytical results of 61 HFEs and briefly describe the contribution of HRA to Lingao Nuclear Power Station.

  8. Use of reference materials for validating analytical methods. Applied to the determination of As, Co, Na, Hg, Se and Fe using neutron activation analysis

    International Nuclear Information System (INIS)

    Munoz, L; Andonie, O; Kohnenkamp, I

    2000-01-01

    The main purpose of an analytical laboratory is to provide reliable information on the nature and composition of the materials submitted for analysis. This purpose can only be attained if analytical methodologies that have the attributes of accuracy, precision, specificity and sensitivity, among others, are used. The process by which these attributes are evaluated is called validation of the analytical method. The Chilean Nuclear Energy Commission's Neutron Activation Analysis Laboratory is applying a quality assurance program to ensure the quality of its analytical results, which aims, as well, to attain accreditation for some of its measurements. Validation of the analytical methodologies used is an essential part of applying this program. There are many forms of validation, from comparison with reference techniques to participation in inter-comparison rounds. Certified reference materials were used in this work in order to validate the application of neutron activation analysis in determining As, Co, Na, Hg, Se and Fe in shellfish samples. The use of reference materials was chosen because it is a simple option that easily detects sources of systematic errors. Neutron activation analysis is an instrumental analytical method that does not need chemical treatment and that is based on processes which take place in the nuclei of atoms, making matrix effects unimportant, so that different biological reference materials can be used. The following certified reference materials were used for validating the method: BCR human hair 397, NRCC dogfish muscle DORM-2, NRCC dogfish liver DOLT-2, NIST oyster tissue 1566, NIES mussel 6 and BCR tuna fish 464. The reference materials were analyzed using the procedure developed for the shellfish samples and the above-mentioned elements were determined. With the results obtained, the parameters of accuracy, precision, detection limit, quantification limit and uncertainty associated with the method were determined for each

  9. Tank 48H Waste Composition and Results of Investigation of Analytical Methods

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D.D. [Westinghouse Savannah River Company, AIKEN, SC (United States)

    1997-04-02

    This report serves two purposes. First, it documents the analytical results of Tank 48H samples taken between April and August 1996. Second, it describes investigations of the precision of the sampling and analytical methods used on the Tank 48H samples.

  10. Literature Review on Processing and Analytical Methods for ...

    Science.gov (United States)

    Report The purpose of this report was to survey the open literature to determine the current state of the science regarding the processing and analytical methods currently available for recovery of F. tularensis from water and soil matrices, and to determine what gaps remain in the collective knowledge concerning F. tularensis identification from environmental samples.

  11. An analytical method of estimating Value-at-Risk on the Belgrade Stock Exchange

    Directory of Open Access Journals (Sweden)

    Obadović Milica D.

    2009-01-01

    Full Text Available This paper presents market risk evaluation for a portfolio consisting of shares that are continuously traded on the Belgrade Stock Exchange, by applying the Value-at-Risk model - the analytical method. It describes the manner of analytical method application and compares the results obtained by implementing this method at different confidence levels. Method verification was carried out on the basis of the failure rate that demonstrated the confidence level for which this method was acceptable in view of the given conditions.
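
    As an illustration of the analytical (variance-covariance) approach in general, and not necessarily the exact procedure of the paper, the sketch below computes a parametric VaR under a normality assumption; the portfolio weights, covariance matrix and confidence level are invented.

    ```python
    import math
    import numpy as np

    def analytical_var(weights, cov, portfolio_value, z_alpha):
        """Parametric (variance-covariance) Value-at-Risk under normal returns.

        weights         : portfolio weights of the shares
        cov             : covariance matrix of the shares' returns per holding period
        portfolio_value : current market value of the portfolio
        z_alpha         : standard-normal quantile, e.g. 1.645 for 95 %, 2.326 for 99 %
        """
        w = np.asarray(weights, dtype=float)
        sigma_p = math.sqrt(w @ np.asarray(cov, dtype=float) @ w)
        return z_alpha * sigma_p * portfolio_value

    # Invented two-share portfolio.
    cov = [[0.0004, 0.0001], [0.0001, 0.0009]]
    print(analytical_var([0.6, 0.4], cov, portfolio_value=1_000_000, z_alpha=1.645))
    ```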

  12. A generic method for estimating system reliability using Bayesian networks

    International Nuclear Information System (INIS)

    Doguc, Ozge; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples

  13. A generic method for estimating system reliability using Bayesian networks

    Energy Technology Data Exchange (ETDEWEB)

    Doguc, Ozge [Stevens Institute of Technology, Hoboken, NJ 07030 (United States); Ramirez-Marquez, Jose Emmanuel [Stevens Institute of Technology, Hoboken, NJ 07030 (United States)], E-mail: jmarquez@stevens.edu

    2009-02-15

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples.

  14. An Analytical Method of Auxiliary Sources Solution for Plane Wave Scattering by Impedance Cylinders

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2004-01-01

    Analytical Method of Auxiliary Sources solutions for plane wave scattering by circular impedance cylinders are derived by transformation of the exact eigenfunction series solutions employing the Hankel function wave transformation. The analytical Method of Auxiliary Sources solution thus obtained...

  15. Continuous Analytical Performances Monitoring at the On-Site Laboratory through Proficiency, Inter-Laboratory Testing and Inter-Comparison Analytical Methods

    International Nuclear Information System (INIS)

    Duhamel, G.; Decaillon, J.-G.; Dashdondog, S.; Kim, C.-K.; Toervenyi, A.; Hara, S.; Kato, S.; Kawaguchi, T.; Matsuzawa, K.

    2015-01-01

    Since 2008, as one measure to strengthen its quality management system, the On-Site Laboratory for nuclear safeguards at the Rokkasho Reprocessing Plant, has increased its participation in domestic and international proficiency and inter-laboratory testing for the purpose of determining analytical method accuracy, precision and robustness but also to support method development and improvement. This paper provides a description of the testing and its scheduling. It presents the way the testing was optimized to cover most of the analytical methods at the OSL. The paper presents the methodology used for the evaluation of the obtained results based on Analysis of variance (ANOVA). Results are discussed with respect to random, systematic and long term systematic error. (author)

  16. Analytical Energy Gradients for Excited-State Coupled-Cluster Methods

    Science.gov (United States)

    Wladyslawski, Mark; Nooijen, Marcel

    The equation-of-motion coupled-cluster (EOM-CC) and similarity transformed equation-of-motion coupled-cluster (STEOM-CC) methods have been firmly established as accurate and routinely applicable extensions of single-reference coupled-cluster theory to describe electronically excited states. An overview of these methods is provided, with emphasis on the many-body similarity transform concept that is the key to a rationalization of their accuracy. The main topic of the paper is the derivation of analytical energy gradients for such non-variational electronic structure approaches, with an ultimate focus on obtaining their detailed algebraic working equations. A general theoretical framework using Lagrange's method of undetermined multipliers is presented, and the method is applied to formulate the EOM-CC and STEOM-CC gradients in abstract operator terms, following the previous work in [P.G. Szalay, Int. J. Quantum Chem. 55 (1995) 151] and [S.R. Gwaltney, R.J. Bartlett, M. Nooijen, J. Chem. Phys. 111 (1999) 58]. Moreover, the systematics of the Lagrange multiplier approach is suitable for automation by computer, enabling the derivation of the detailed derivative equations through a standardized and direct procedure. To this end, we have developed the SMART (Symbolic Manipulation and Regrouping of Tensors) package of automated symbolic algebra routines, written in the Mathematica programming language. The SMART toolkit provides the means to expand, differentiate, and simplify equations by manipulation of the detailed algebraic tensor expressions directly. The Lagrangian multiplier formulation establishes a uniform strategy to perform the automated derivation in a standardized manner: A Lagrange multiplier functional is constructed from the explicit algebraic equations that define the energy in the electronic method; the energy functional is then made fully variational with respect to all of its parameters, and the symbolic differentiations directly yield the explicit

  17. Research on Control Method Based on Real-Time Operational Reliability Evaluation for Space Manipulator

    Directory of Open Access Journals (Sweden)

    Yifan Wang

    2014-05-01

    Full Text Available A control method based on real-time operational reliability evaluation for space manipulator is presented for improving the success rate of a manipulator during the execution of a task. In this paper, a method for quantitative analysis of operational reliability is given when manipulator is executing a specified task; then a control model which could control the quantitative operational reliability is built. First, the control process is described by using a state space equation. Second, process parameters are estimated in real time using Bayesian method. Third, the expression of the system's real-time operational reliability is deduced based on the state space equation and process parameters which are estimated using Bayesian method. Finally, a control variable regulation strategy which considers the cost of control is given based on the Theory of Statistical Process Control. It is shown via simulations that this method effectively improves the operational reliability of space manipulator control system.

  18. A graph algebra for scalable visual analytics.

    Science.gov (United States)

    Shaverdian, Anna A; Zhou, Hao; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Visual analytics (VA), which combines analytical techniques with advanced visualization features, is fast becoming a standard tool for extracting information from graph data. Researchers have developed many tools for this purpose, suggesting a need for formal methods to guide these tools' creation. Increased data demands on computing require redesigning VA tools to consider performance and reliability in the context of analysis of exascale datasets. Furthermore, visual analysts need a way to document their analyses for reuse and results justification. A VA graph framework encapsulated in a graph algebra helps address these needs. Its atomic operators include selection and aggregation. The framework employs a visual operator and supports dynamic attributes of data to enable scalable visual exploration of data.

  19. Analytical chemistry methods for boron carbide absorber material. [Standard]

    Energy Technology Data Exchange (ETDEWEB)

    DELVIN WL

    1977-07-01

    This standard provides analytical chemistry methods for the analysis of boron carbide powder and pellets for the following: total C and B, B isotopic composition, soluble C and B, fluoride, chloride, metallic impurities, gas content, water, nitrogen, and oxygen. (DLC)

  20. Review and evaluation of spark source mass spectrometry as an analytical method

    International Nuclear Information System (INIS)

    Beske, H.E.

    1981-01-01

    The analytical features and most important fields of application of spark source mass spectrometry are described with respect to the trace analysis of high-purity materials and the multielement analysis of technical alloys, geochemical and cosmochemical, biological and radioactive materials, as well as in environmental analysis. Comparisons are made to other analytical methods. The distribution of the method as well as opportunities for contract analysis are indicated and developmental tendencies discussed. (orig.) [de

  1. Study of Fuze Structure and Reliability Design Based on the Direct Search Method

    Science.gov (United States)

    Lin, Zhang; Ning, Wang

    2017-03-01

    Redundant design is one of the important methods for improving system reliability, but mutual coupling of multiple factors is often involved in the design. In this study, the Direct Search Method is introduced into the optimum redundancy configuration for design optimization, in which the reliability, cost, structural weight and other factors can be taken into account simultaneously, and the redundancy allocation and reliability design of an aircraft-critical system are computed. The results show that this method is convenient and workable and, with appropriate modifications, applicable to the redundancy configuration and optimization of various designs. This method therefore has good practical value.

  2. An analytic method for S-expansion involving resonance and reduction

    Energy Technology Data Exchange (ETDEWEB)

    Ipinza, M.C.; Penafiel, D.M. [Departamento de Fisica, Universidad de Concepcion (Chile); DISAT, Politecnico di Torino (Italy); Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Torino (Italy); Lingua, F. [DISAT, Politecnico di Torino (Italy); Ravera, L. [DISAT, Politecnico di Torino (Italy); Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Torino (Italy)

    2016-11-15

    In this paper we describe an analytic method able to give the multiplication table(s) of the set(s) involved in an S-expansion process (with either resonance or 0_S-resonant-reduction) for reaching a target Lie (super)algebra from a starting one, after having properly chosen the partitions over subspaces of the considered (super)algebras. This analytic method gives us a simple set of expressions to find the subset decomposition of the set(s) involved in the process. Then, we use the information coming from both the initial (super)algebra and the target one for reaching the multiplication table(s) of the mentioned set(s). Finally, we check associativity with an auxiliary computational algorithm, in order to understand whether the obtained set(s) can describe semigroup(s) or just abelian set(s) connecting two (super)algebras. We also give some interesting examples of application, which check and corroborate our analytic procedure and also generalize some results already presented in the literature. (copyright 2016 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  3. Control Chart on Semi Analytical Weighting

    Science.gov (United States)

    Miranda, G. S.; Oliveira, C. C.; Silva, T. B. S. C.; Stellato, T. B.; Monteiro, L. R.; Marques, J. R.; Faustino, M. G.; Soares, S. M. V.; Ulrich, J. C.; Pires, M. A. F.; Cotrim, M. E. B.

    2018-03-01

    Semi-analytical balance verification intends to assess the balance performance using graphs that illustrate measurement dispersion through time, and to demonstrate that measurements were performed in a reliable manner. This study presents the internal quality control of a semi-analytical balance (GEHAKA BG400) using control charts. From 2013 to 2016, two weight standards were monitored before any balance operation. This work intended to evaluate whether any significant difference or bias appeared in the weighing procedure over time, in order to check the reliability of the generated data. This work also exemplifies how control intervals are established.

  4. Analytical method used for intermediate products in continuous distillation of furfural

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Z.L.; Jia, M.; Wang, L.J.; Deng, Y.X.

    1981-01-01

    During distillation of furfural, analysis of the main components in the crude furfural condensate and intermediate products is very important. Since furfural and methylfurfural are homologous, and both furfural and acetone contain a carbonyl group, components in the sample must be separated before analysis. An improved analytical method has been studied, the accuracy and precision of which meet the requirements of industrial standards. The analytical procedure is as follows: the furfural content is determined by the barbituric acid gravimetric method; the methanol content by the dichromate method, after precipitating furfural and acetone and distilling the liquid for analysis; and the methylfurfural content by the bromide-bromate method, which can be used only for samples with a higher methylfurfural content. For samples with low content, the gas-liquid chromatographic method can be used. 7 references.

  5. Method of sections in analytical calculations of pneumatic tires

    Science.gov (United States)

    Tarasov, V. N.; Boyarkina, I. V.

    2018-01-01

    Analytical calculations in pneumatic tire theory are preferable to experimental methods. The method of sections applied to a pneumatic tire shell makes it possible to obtain equations for the intensities of internal forces in carcass elements and bead rings. Analytical dependencies for the intensity of distributed forces have been obtained at tire equator points, on side walls (poles) and at pneumatic tire bead rings. For the first time, cylindrical surfaces are used as secant surfaces together with secant planes. The tire capacity equation has been obtained using the method of sections, by means of which a contact body is cut off from the tire carcass along the contact perimeter by a surface normal to the bearing surface. It has been established that the Laplace equation for this class of pneumatic tire problems contains two unknowns, which requires the generation of additional equations. The developed computational schemes of pneumatic tire sections and the new equations make it possible to accelerate the improvement of pneumatic tire structures during engineering.

  6. Approximate Analytic Solutions for the Two-Phase Stefan Problem Using the Adomian Decomposition Method

    Directory of Open Access Journals (Sweden)

    Xiao-Ying Qin

    2014-01-01

    Full Text Available An Adomian decomposition method (ADM is applied to solve a two-phase Stefan problem that describes the pure metal solidification process. In contrast to traditional analytical methods, ADM avoids complex mathematical derivations and does not require coordinate transformation for elimination of the unknown moving boundary. Based on polynomial approximations for some known and unknown boundary functions, approximate analytic solutions for the model with undetermined coefficients are obtained using ADM. Substitution of these expressions into other equations and boundary conditions of the model generates some function identities with the undetermined coefficients. By determining these coefficients, approximate analytic solutions for the model are obtained. A concrete example of the solution shows that this method can easily be implemented in MATLAB and has a fast convergence rate. This is an efficient method for finding approximate analytic solutions for the Stefan and the inverse Stefan problems.
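
    The decomposition idea can be shown on a much simpler problem than the two-phase Stefan model. The sketch below is an illustrative toy example, not the paper's implementation: it applies ADM with symbolically computed Adomian polynomials to y'(t) = -y(t)^2, y(0) = 1, whose exact solution 1/(1+t) is recovered term by term.

```python
# Toy Adomian decomposition method (ADM) sketch for y' = -y**2, y(0) = 1.
import sympy as sp

t, lam = sp.symbols("t lambda")
N = lambda u: u**2          # nonlinear operator N(y) = y^2
n_terms = 5

y = [sp.Integer(1)]         # y0 from the initial condition
for n in range(n_terms - 1):
    # Adomian polynomial A_n = (1/n!) d^n/dlam^n N(sum_k lam^k y_k) evaluated at lam = 0
    series = sum(lam**k * y[k] for k in range(len(y)))
    A_n = sp.diff(N(series), lam, n).subs(lam, 0) / sp.factorial(n)
    # recursion y_{n+1} = -Integral_0^t A_n dt'; the indefinite integral suffices
    # because A_n is a polynomial in t with no constant of integration needed
    y.append(sp.integrate(-A_n, t))

approx = sp.expand(sum(y))
print(approx)               # 1 - t + t**2 - t**3 + t**4, the series of 1/(1 + t)
```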

  7. 40 CFR 425.03 - Sulfide analytical methods and applicability.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sulfide analytical methods and applicability. 425.03 Section 425.03 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS LEATHER TANNING AND FINISHING POINT SOURCE CATEGORY General Provisions...

  8. Elasto-plastic strain analysis by a semi-analytical method

    Indian Academy of Sciences (India)

    deformation problems following a semi-analytical method, incorporating the com- ..... The set of equations in (8) are non-linear in nature, which is solved by direct ...... Here, [K] and [M] are stiffness matrix and mass matrix which are of the form ...

  9. A two-dimensional, semi-analytic expansion method for nodal calculations

    International Nuclear Information System (INIS)

    Palmtag, S.P.

    1995-08-01

    Most modern nodal methods used today are based upon the transverse integration procedure in which the multi-dimensional flux shape is integrated over the transverse directions in order to produce a set of coupled one-dimensional flux shapes. The one-dimensional flux shapes are then solved either analytically or by representing the flux shape by a finite polynomial expansion. While these methods have been verified for most light-water reactor applications, they have been found to have difficulty predicting the large thermal flux gradients near the interfaces of highly-enriched MOX fuel assemblies. A new method is presented here in which the neutron flux is represented by a non-separable, two-dimensional, semi-analytic flux expansion. The main features of this method are (1) the leakage terms from the node are modeled explicitly and therefore the transverse integration procedure is not used, (2) the corner point flux values for each node are directly edited from the solution method, and a corner-point interpolation is not needed in the flux reconstruction, (3) the thermal flux expansion contains hyperbolic terms representing analytic solutions to the thermal flux diffusion equation, and (4) the thermal flux expansion contains a thermal-to-fast flux ratio term which reduces the number of polynomial expansion functions needed to represent the thermal flux. This new nodal method has been incorporated into the computer code COLOR2G and has been used to solve a two-dimensional, two-group colorset problem containing uranium and highly-enriched MOX fuel assemblies. The results from this calculation are compared to the results found using a code based on the traditional transverse integration procedure.

  10. The analytic regularization ζ function method and the cut-off method in Casimir effect

    International Nuclear Information System (INIS)

    Svaiter, N.F.; Svaiter, B.F.

    1990-01-01

    The zero point energy associated with a Hermitian massless scalar field in the presence of perfectly reflecting plates in a three-dimensional flat space-time is discussed. A new technique to unify two different methods - the ζ function and a variant of the cut-off method - used to obtain the so-called Casimir energy is presented, and the proof of the analytic equivalence between both methods is given. (author)

  11. Extended block diagram method for a multi-state system reliability assessment

    International Nuclear Information System (INIS)

    Lisnianski, Anatoly

    2007-01-01

    The presented method extends the classical reliability block diagram method to a repairable multi-state system. It is very suitable for engineering applications since the procedure is well formalized and based on the natural decomposition of the entire multi-state system (the system is represented as a collection of its elements). Until now, the classical block diagram method did not provide a reliability assessment for repairable multi-state systems. The straightforward stochastic process methods are very difficult to apply in engineering practice in such cases due to the 'dimension damnation', the huge number of system states. The suggested method is based on combined random processes and the universal generating function technique and drastically reduces the number of states in the multi-state model.
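
    A hedged sketch of the universal generating function (UGF) composition on which the method relies is given below; the two-element flow-transmission system, its state performances and probabilities are hypothetical, chosen only to show how element UGFs are combined and how availability for a given demand is read off.

```python
# Universal generating function sketch: each element is a map {performance: probability}.
from itertools import product
from collections import defaultdict

def compose(u1, u2, op):
    """Combine two element UGFs with a structure operator (min for series capacity, sum for parallel)."""
    out = defaultdict(float)
    for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
        out[op(g1, g2)] += p1 * p2
    return dict(out)

# hypothetical two-element flow-transmission system (performance = capacity)
u_pump  = {0: 0.05, 50: 0.15, 100: 0.80}
u_valve = {0: 0.02, 100: 0.98}

u_series = compose(u_pump, u_valve, min)                         # series: min of capacities
availability = sum(p for g, p in u_series.items() if g >= 100)   # demand = 100
print(u_series, availability)
```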

  12. Development of CAD implementing the algorithm of boundary elements’ numerical analytical method

    Directory of Open Access Journals (Sweden)

    Yulia V. Korniyenko

    2015-03-01

    Full Text Available Up to recent days, the algorithms of the numerical-analytical boundary elements method had been implemented in programs written in the MATLAB environment language. Each program had a local character, i.e. it was used to solve a particular problem: calculation of a beam, frame, arch, etc. Constructing matrices in these programs was carried out “manually” and was therefore time-consuming. The research was aimed at a reasoned choice of a programming language for a new CAD system that implements the algorithm of the numerical-analytical boundary elements method and provides visualization tools for the initial objects and the calculation results. The research shows that, among a wide variety of programming languages, the most efficient one for developing a CAD system employing the numerical-analytical boundary elements method algorithm is Java. This language provides tools not only for the development of the calculating part of the CAD system, but also for building the graphic interface for constructing geometrical models and interpreting calculated results.

  13. Risk-based methods for reliability investments in electric power distribution systems

    Energy Technology Data Exchange (ETDEWEB)

    Alvehag, Karin

    2011-07-01

    Society relies more and more on a continuous supply of electricity. However, while underinvestment in reliability leads to an unacceptable number of power interruptions, overinvestment results in too high costs for society. To give incentives for a socioeconomically optimal level of reliability, quality regulations have been adopted in many European countries. These quality regulations imply new financial risks for the distribution system operator (DSO), since poor reliability can reduce the allowed revenue for the DSO and compensation may have to be paid to affected customers. This thesis develops a method for evaluating the incentives for reliability investments implied by different quality regulation designs. The method can be used to investigate whether socioeconomically beneficial projects are also beneficial for a profit-maximizing DSO subject to a particular quality regulation design. To investigate which reinvestment projects are preferable for society and a DSO, risk-based methods are developed. With these methods, the probability of power interruptions and their consequences can be simulated. The consequences of interruptions for the DSO will to a large extent depend on the quality regulation. The consequences for the customers, and hence also society, will depend on factors such as the interruption duration and time of occurrence. The proposed risk-based methods consider extreme outage events in the risk assessments by incorporating the impact of severe weather, estimating the full probability distribution of the total reliability cost, and formulating a risk-averse strategy. Results from case studies performed show that quality regulation design has a significant impact on reinvestment project profitability for a DSO. In order to adequately capture the financial risk that the DSO is exposed to, detailed risk-based methods, such as the ones developed in this thesis, are needed. Furthermore, when making investment decisions, a risk

  14. Analytical method for reconstruction pin to pin of the nuclear power density distribution

    International Nuclear Information System (INIS)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.

    2013-01-01

    An accurate and efficient method for reconstructing pin to pin of the nuclear power density distribution, involving the analytical solution of the diffusion equation for two-dimensional neutron energy groups in homogeneous nodes, is presented. The boundary conditions used for the analytic solution are the four currents or fluxes on the surface of the node, which are obtained by the Nodal Expansion Method (NEM), and four fluxes at the vertices of a node calculated using the finite difference method. The analytical solution found is the homogeneous distribution of neutron flux. Detailed distributions pin to pin inside a fuel assembly are estimated by the product of the homogeneous flux distribution and a local heterogeneous form function. Furthermore, the form functions of flux and power are used. The results obtained with this method have a good accuracy when compared with reference values. (author)

  15. Analytical method for reconstruction pin to pin of the nuclear power density distribution

    Energy Technology Data Exchange (ETDEWEB)

    Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: ppessoa@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@imp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    An accurate and efficient method for reconstructing pin to pin of the nuclear power density distribution, involving the analytical solution of the diffusion equation for two-dimensional neutron energy groups in homogeneous nodes, is presented. The boundary conditions used for the analytic solution are the four currents or fluxes on the surface of the node, which are obtained by the Nodal Expansion Method (NEM), and four fluxes at the vertices of a node calculated using the finite difference method. The analytical solution found is the homogeneous distribution of neutron flux. Detailed distributions pin to pin inside a fuel assembly are estimated by the product of the homogeneous flux distribution and a local heterogeneous form function. Furthermore, the form functions of flux and power are used. The results obtained with this method have a good accuracy when compared with reference values. (author)

  16. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    CERN Document Server

    Zio, Enrico

    2013-01-01

    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling. Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples are provided in support of the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies are introduced to demonstrate the practical value of the most advanced techniques. This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...
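
    As a generic illustration of the idea (a sketch under assumed data, not an example taken from the book), the mission reliability of a 2-out-of-3 system with exponentially distributed component lifetimes can be estimated by direct Monte Carlo sampling:

```python
# Direct Monte Carlo estimate of mission reliability for a k-out-of-n system.
# Failure rates, mission time and the 2-out-of-3 structure are assumed for illustration.
import random

def system_survives(t_mission, rates, k=2):
    lifetimes = [random.expovariate(lam) for lam in rates]
    return sum(t > t_mission for t in lifetimes) >= k

def mc_reliability(n_samples=100_000, t_mission=1000.0, rates=(1e-4, 1e-4, 2e-4)):
    hits = sum(system_survives(t_mission, rates) for _ in range(n_samples))
    return hits / n_samples

print(mc_reliability())
```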

  17. Analytical methods and laboratory facility for the Defense Waste Processing Facility

    International Nuclear Information System (INIS)

    Coleman, C.J.; Dewberry, R.A.; Lethco, A.J.; Denard, C.D.

    1985-01-01

    This paper describes the analytical methods, instruments, and laboratory that will support vitrification of defense waste. The Defense Waste Processing Facility (DWPF) is now being constructed at Savannah River Plant (SRP). Beginning in 1989, SRP high-level defense waste will be immobilized in borosilicate glass for disposal in a federal repository. The DWPF will contain an analytical laboratory for performing process control analyses. Additional analyses will be performed for process history and process diagnostics. The DWPF analytical facility will consist of a large shielded sampling cell, three shielded analytical cells, a laboratory for instrumental analysis and chemical separations, and a counting room. Special instrumentation is being designed for use in the analytical cells, including microwave drying/dissolution apparatus, and remote pipetting devices. The instrumentation laboratory will contain inductively coupled plasma, atomic absorption, Moessbauer spectrometers, a carbon analyzer, and ion chromatography equipment. Counting equipment will include intrinsic germanium detectors, scintillation counters, Phoswich alpha, beta, gamma detectors, and a low-energy photon detector

  18. Frontier in nanoscale flows fractional calculus and analytical methods

    CERN Document Server

    Lewis, Roland; Liu, Hong-yan

    2014-01-01

    This ebook covers the basic properties of nanoscale flows, and various analytical and numerical methods for nanoscale flows and environmental flows. This ebook is a good reference not only for the audience of the journal, but also for various communities in mathematics, nanotechnology and environmental science.

  19. Analytical method for determining colour intensities based on Cherenkov radiation colour quenching

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez-Gomez, C; Lopez-Gonzalez, J deD; Ferro-Garcia, M A [Univ. of Granada, Granada (Spain). Faculty of Sciences, Dept. of Inorganic Chemistry. Radiochemistry Section; Consejo Superior de Investigaciones Cientificas, Granada (Spain). Dept. of Chemical Research Coordinated Centre]

    1983-01-01

    A study was made of determining color intensities using, as a non-monochromatic luminous source, the Cherenkov emission produced in the walls of a glass capillary, which acts as a luminous source itself inside the colored solution to be evaluated. The reproducibility of this method has been compared with the spectrophotometric assay; the relative errors of both analytical methods have been calculated for different concentrations of congo red solution in the range of minimal error, according to Ringbom's criterion. The sensitivity of this analytical method has been studied for the two beta-emitters employed: 90Sr/90Y and 204Tl.

  20. Analytical Methodology for the Determination of Radium Isotopes in Environmental Samples

    International Nuclear Information System (INIS)

    2010-01-01

    Reliable, comparable and 'fit for purpose' results are an essential requirement for any decision based on analytical measurements. For the analyst, the availability of tested and validated analytical procedures is an extremely important tool for production of such analytical measurements. For maximum utility, such procedures should be comprehensive, clearly formulated, and readily available to both the analyst and the customer for reference. Since 2004, the environment programme of the IAEA has included activities aimed at the development of a set of procedures for the determination of radionuclides in terrestrial environmental samples. Measurements of radium isotopes are important for radiological and environmental protection, geochemical and geochronological investigations, hydrology, etc. The suite of isotopes creates and stimulates continuing interest in the development of new methods for determination of radium in various media. In this publication, the four most routinely used analytical methods for radium determination in biological and environmental samples, i.e. alpha spectrometry, gamma spectrometry, liquid scintillation spectrometry and mass spectrometry, are reviewed

  1. Psychometrics Matter in Health Behavior: A Long-term Reliability Generalization Study.

    Science.gov (United States)

    Pickett, Andrew C; Valdez, Danny; Barry, Adam E

    2017-09-01

    Despite numerous calls for increased understanding and reporting of reliability estimates, social science research, including the field of health behavior, has been slow to respond and adopt such practices. Therefore, we offer a brief overview of reliability and common reporting errors; we then perform analyses to examine and demonstrate the variability of reliability estimates by sample and over time. Using meta-analytic reliability generalization, we examined the variability of coefficient alpha scores for a well-designed, consistent, nationwide health study, covering a span of nearly 40 years. For each year and sample, reliability varied. Furthermore, reliability was predicted by a sample characteristic that differed among age groups within each administration. We demonstrated that reliability is influenced by the methods and individuals from which a given sample is drawn. Our work echoes previous calls that psychometric properties, particularly reliability of scores, are important and must be considered and reported before drawing statistical conclusions.
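
    For readers unfamiliar with the statistic whose variability is generalized in the study, a minimal sketch of coefficient (Cronbach's) alpha on synthetic item scores is shown below; the data and item structure are invented for illustration only.

```python
# Coefficient (Cronbach's) alpha on simulated item-level scores.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))
scores = true_score + rng.normal(scale=0.8, size=(200, 5))  # 5 noisy parallel items
print(round(cronbach_alpha(scores), 3))
```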

  2. Reliability and validity of non-radiographic methods of thoracic kyphosis measurement: a systematic review.

    Science.gov (United States)

    Barrett, Eva; McCreesh, Karen; Lewis, Jeremy

    2014-02-01

    A wide array of instruments is available for non-invasive thoracic kyphosis measurement. Guidelines for selecting outcome measures for use in clinical and research practice recommend that properties such as validity and reliability are considered. This systematic review reports on the reliability and validity of non-invasive methods for measuring thoracic kyphosis. A systematic search of 11 electronic databases located studies assessing reliability and/or validity of non-invasive thoracic kyphosis measurement techniques. Two independent reviewers used a critical appraisal tool to assess the quality of retrieved studies. Data were extracted by the primary reviewer. The results were synthesized qualitatively using a level-of-evidence approach. 27 studies satisfied the eligibility criteria and were included in the review. Reliability, validity, and both reliability and validity were investigated by sixteen, two and nine studies respectively. 17/27 studies were deemed to be of high quality. In total, 15 methods of thoracic kyphosis measurement were evaluated in the retrieved studies. All investigated methods showed high (ICC ≥ .7) to very high (ICC ≥ .9) levels of reliability. The validity of the methods ranged from low to very high. The strongest level of evidence for reliability exists in support of the Debrunner kyphometer, Spinal Mouse and Flexicurve index, and for validity in support of the arcometer and Flexicurve index. Further reliability and validity studies are required to strengthen the level of evidence for the remaining methods of measurement; this should be addressed by future research. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Modelling of packet traffic with matrix analytic methods

    DEFF Research Database (Denmark)

    Andersen, Allan T.

    1995-01-01

    A heuristic formula for the tail behaviour of a single server queue fed by a superposition of renewal processes has been evaluated. The evaluation was performed by applying Matrix Analytic methods. The heuristic formula has applications in the Call Admission Control (CAC) procedure of the future BISDN network. The heuristic formula did not seem to yield substantially better results than already available approximations. Finally, some results for the finite capacity BMAP/G/1 queue have been obtained; the steady state probability vector of the embedded chain is found by a direct method.

  4. Analytical methods manual for the Mineral Resource Surveys Program, U.S. Geological Survey

    Science.gov (United States)

    Arbogast, Belinda F.

    1996-01-01

    The analytical methods validated by the Mineral Resource Surveys Program, Geologic Division, are the subject of this manual. This edition replaces the methods portion of Open-File Report 90-668, published in 1990. Newer methods may be used if they have been approved by the quality assurance (QA) project and are on file with the QA coordinator. This manual is intended primarily for use by laboratory scientists; it can also assist laboratory users in evaluating the data they receive. The analytical methods are written in a step-by-step approach so that they may be used as a training tool and provide detailed documentation of the procedures for quality assurance. A "Catalog of Services" is available for customer (submitter) use, with brief listings of: the element(s)/species determined, method of determination, reference to cite, contact person, summary of the technique, and analyte concentration range. For a copy please contact the Branch office at (303) 236-1800 or fax (303) 236-3200.

  5. Method for assessing software reliability of the document management system using the RFID technology

    Directory of Open Access Journals (Sweden)

    Kiedrowicz Maciej

    2016-01-01

    Full Text Available The deliberations presented in this study refer to a method for assessing the software reliability of a document management system using RFID technology. A method for determining the reliability structure of the discussed software, understood as the index vector for assessing the reliability of its components, is proposed. The model of the analyzed software is the control transfer graph, in which the probability of activating individual components during the system's operation results from the so-called operational profile, which characterizes the actual working environment. The reliability structure is established as a result of the solution of a specific mathematical software task. Knowledge of the reliability structure of the software makes it possible to properly plan the time and financial expenses necessary to build software that meets the reliability requirements. The application of the presented method is illustrated by a numerical example corresponding to the software of the RFID document management system.

  6. Structural system reliability calculation using a probabilistic fault tree analysis method

    Science.gov (United States)

    Torng, T. Y.; Wu, Y.-T.; Millwater, H. R.

    1992-01-01

    The development of a new probabilistic fault tree analysis (PFTA) method for calculating structural system reliability is summarized. The proposed PFTA procedure includes: developing a fault tree to represent the complex structural system, constructing an approximation function for each bottom event, determining a dominant sampling sequence for all bottom events, and calculating the system reliability using an adaptive importance sampling method. PFTA is suitable for complicated structural problems that require computationally intensive calculations. A computer program has been developed to implement the PFTA.
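
    The sampling step can be illustrated with a deliberately simple limit state, a hypothetical g = R - S with normal variables rather than the PFTA code itself: importance sampling shifts the sampling density toward the failure region and corrects each sample with the likelihood ratio.

```python
# Importance-sampling estimate of a small failure probability P[g(X) < 0], g = R - S.
# Distribution parameters and the shifted sampling density are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
mu_R, sig_R, mu_S, sig_S = 6.0, 0.8, 3.0, 0.6
n = 50_000

# sample R from a density shifted toward the failure region: N(mu_R - 2*sig_R, sig_R)
R = rng.normal(mu_R - 2 * sig_R, sig_R, n)
S = rng.normal(mu_S, sig_S, n)

def norm_pdf(x, mu, sig):
    return np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

w = norm_pdf(R, mu_R, sig_R) / norm_pdf(R, mu_R - 2 * sig_R, sig_R)  # likelihood ratio
pf = np.mean((R - S < 0) * w)
print(pf)   # should be close to the exact value Phi(-3) ~ 1.35e-3
```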

  7. An analytical nodal method for time-dependent one-dimensional discrete ordinates problems

    International Nuclear Information System (INIS)

    Barros, R.C. de

    1992-01-01

    In recent years, relatively little work has been done in developing time-dependent discrete ordinates (S_N) computer codes. Therefore, the topic of time integration methods certainly deserves further attention. In this paper, we describe a new coarse-mesh method for time-dependent monoenergetic S_N transport problems in slab geometry. This numerical method preserves the analytic solution of the transverse-integrated S_N nodal equations by constants, so we call our method the analytical constant nodal (ACN) method. For time-independent S_N problems in finite slab geometry and for time-dependent infinite-medium S_N problems, the ACN method generates numerical solutions that are completely free of truncation errors. Based on this positive feature, we expect the ACN method to be more accurate than conventional numerical methods for S_N transport calculations on coarse space-time grids.

  8. Mission Reliability Estimation for Repairable Robot Teams

    Science.gov (United States)

    Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen

    2010-01-01

    A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the
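
    A heavily simplified sketch of the kind of roll-up such a method performs is given below; the module names, reliability values and the k-out-of-n survival criterion are assumptions for illustration, not the model described in the record.

```python
# Hypothetical illustration: robot reliability as the product of its module
# reliabilities, and team mission reliability as "at least k of n robots survive".
from math import comb, prod

def robot_reliability(module_reliabilities):
    return prod(module_reliabilities)

def team_reliability(r_robot, n_robots, k_required):
    return sum(comb(n_robots, j) * r_robot**j * (1 - r_robot)**(n_robots - j)
               for j in range(k_required, n_robots + 1))

r = robot_reliability([0.99, 0.97, 0.95, 0.98])   # mobility, power, comms, arm (assumed)
print(round(r, 4), round(team_reliability(r, n_robots=4, k_required=3), 4))
```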

  9. Survey of Technetium Analytical Production Methods Supporting Hanford Nuclear Materials Processing

    International Nuclear Information System (INIS)

    TROYER, G.L.

    1999-01-01

    This document provides a historical survey of analytical methods used for measuring 99Tc in nuclear fuel reprocessing materials and wastes at Hanford. Method challenges, including special sludge matrices tested, are discussed. Special problems and recommendations are presented.

  10. Analytical Evaluation of Beam Deformation Problem Using Approximate Methods

    DEFF Research Database (Denmark)

    Barari, Amin; Kimiaeifar, A.; Domairry, G.

    2010-01-01

    The beam deformation equation has very wide applications in structural engineering. As a differential equation, it has its own problems concerning existence, uniqueness and methods of solution. Often, the original forms of governing differential equations used in engineering problems are simplified, and this process produces noise in the obtained answers. This paper deals with the solution of the second-order differential equation governing beam deformation using four analytical approximate methods, namely the Perturbation, Homotopy Perturbation Method (HPM), Homotopy Analysis Method (HAM) and Variational Iteration Method (VIM). The comparisons of the results reveal that these methods are very effective, convenient and quite accurate for systems of non-linear differential equations.

  11. Assessment of Two Analytical Methods in Solving the Linear and Nonlinear Elastic Beam Deformation Problems

    DEFF Research Database (Denmark)

    Barari, Amin; Ganjavi, B.; Jeloudar, M. Ghanbari

    2010-01-01

    Purpose – In the last two decades with the rapid development of nonlinear science, there has appeared ever-increasing interest of scientists and engineers in the analytical techniques for nonlinear problems. This paper considers linear and nonlinear systems that are not only regarded as general … and fluid mechanics. Design/methodology/approach – Two new but powerful analytical methods, namely, He's VIM and HPM, are introduced to solve some boundary value problems in structural engineering and fluid mechanics. Findings – Analytical solutions often fit under classical perturbation methods. However, as with other analytical techniques, certain limitations restrict the wide application of perturbation methods, most important of which is the dependence of these methods on the existence of a small parameter in the equation. Disappointingly, the majority of nonlinear problems have no small parameter at all.

  12. A human reliability based usability evaluation method for safety-critical software

    International Nuclear Information System (INIS)

    Boring, R. L.; Tran, T. Q.; Gertman, D. I.; Ragsdale, A.

    2006-01-01

    Boring and Gertman (2005) introduced a novel method that augments heuristic usability evaluation methods with that of the human reliability analysis method of SPAR-H. By assigning probabilistic modifiers to individual heuristics, it is possible to arrive at the usability error probability (UEP). Although this UEP is not a literal probability of error, it nonetheless provides a quantitative basis to heuristic evaluation. This method allows one to seamlessly prioritize and identify usability issues (i.e., a higher UEP requires more immediate fixes). However, the original version of this method required the usability evaluator to assign priority weights to the final UEP, thus allowing the priority of a usability issue to differ among usability evaluators. The purpose of this paper is to explore an alternative approach to standardize the priority weighting of the UEP in an effort to improve the method's reliability. (authors)

  13. Method Development in Forensic Toxicology.

    Science.gov (United States)

    Peters, Frank T; Wissenbach, Dirk K; Busardo, Francesco Paolo; Marchei, Emilia; Pichini, Simona

    2017-01-01

    In the field of forensic toxicology, the quality of analytical methods is of great importance to ensure the reliability of results and to avoid unjustified legal consequences. A key to high quality analytical methods is a thorough method development. The presented article will provide an overview on the process of developing methods for forensic applications. This includes the definition of the method's purpose (e.g. qualitative vs quantitative) and the analytes to be included, choosing an appropriate sample matrix, setting up separation and detection systems as well as establishing a versatile sample preparation. Method development is concluded by an optimization process after which the new method is subject to method validation. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  14. Analytical method for the identification and assay of 12 phthalates in cosmetic products: application of the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques".

    Science.gov (United States)

    Gimeno, Pascal; Maggio, Annie-Françoise; Bousquet, Claudine; Quoirez, Audrey; Civade, Corinne; Bonnet, Pierre-Antoine

    2012-08-31

    Esters of phthalic acid, more commonly named phthalates, may be present in cosmetic products as ingredients or contaminants. Their presence as contaminants can be due to the manufacturing process, to the raw materials used, or to the migration of phthalates from packaging when plastic (polyvinyl chloride--PVC) is used. 8 phthalates (DBP, DEHP, BBP, DMEP, DnPP, DiPP, DPP, and DiBP), classified H360 or H361, are forbidden in cosmetics according to the European regulation on cosmetics 1223/2009. A GC/MS method was developed for the assay of 12 phthalates in cosmetics, including the 8 regulated phthalates. Analyses are carried out on a GC/MS system in electron impact ionization mode (EI). The separation of phthalates is obtained on a cross-linked 5%-phenyl/95%-dimethylpolysiloxane capillary column, 30 m × 0.25 mm (i.d.) × 0.25 μm film thickness, using a temperature gradient. Phthalate quantification is performed by external calibration using an internal standard. Validation elements obtained on standard solutions highlight a satisfactory system conformity (resolution>1.5), a common quantification limit at 0.25 ng injected, an acceptable linearity between 0.5 μg mL⁻¹ and 5.0 μg mL⁻¹, as well as a precision and an accuracy in agreement with in-house specifications. Cosmetic samples ready for analytical injection are analyzed after dilution in ethanol, whereas more complex cosmetic matrices, like milks and creams, are assayed after a liquid/liquid extraction using tert-butyl methyl ether (TBME). Depending on the type of cosmetics analyzed, the common limits of quantification for the 12 phthalates were set at 0.5 or 2.5 μg g⁻¹. All samples were assayed using the analytical approach described in the ISO 12787 international standard "Cosmetics-Analytical methods-Validation criteria for analytical results using chromatographic techniques". This analytical protocol is particularly adapted when it is not possible to make reconstituted sample matrices. Copyright © 2012
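
    As a purely illustrative sketch of the quantification step (the calibration levels and area ratios are invented, not the validated values of the published method), a linear calibration on analyte/internal-standard peak-area ratios can be evaluated as follows:

```python
# Hypothetical internal-standard calibration: concentrations are read from the
# ratio of analyte to internal-standard peak areas via a linear response model.
import numpy as np

conc  = np.array([0.5, 1.0, 2.0, 3.0, 5.0])        # calibration levels, ug/mL (assumed)
ratio = np.array([0.11, 0.21, 0.43, 0.62, 1.05])   # analyte/IS area ratios (assumed)

slope, intercept = np.polyfit(conc, ratio, 1)

def quantify(sample_ratio, dilution_factor=1.0):
    return (sample_ratio - intercept) / slope * dilution_factor

print(round(quantify(0.35, dilution_factor=10), 2))  # ug/mL in the original sample
```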

  15. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, Chris; Little, Mike; Huang, Thomas; Jacob, Joseph; Yang, Phil; Kuo, Kwo-Sen

    2016-01-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based file systems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  16. Benchmark Comparison of Cloud Analytics Methods Applied to Earth Observations

    Science.gov (United States)

    Lynnes, C.; Little, M. M.; Huang, T.; Jacob, J. C.; Yang, C. P.; Kuo, K. S.

    2016-12-01

    Cloud computing has the potential to bring high performance computing capabilities to the average science researcher. However, in order to take full advantage of cloud capabilities, the science data used in the analysis must often be reorganized. This typically involves sharding the data across multiple nodes to enable relatively fine-grained parallelism. This can be either via cloud-based filesystems or cloud-enabled databases such as Cassandra, Rasdaman or SciDB. Since storing an extra copy of data leads to increased cost and data management complexity, NASA is interested in determining the benefits and costs of various cloud analytics methods for real Earth Observation cases. Accordingly, NASA's Earth Science Technology Office and Earth Science Data and Information Systems project have teamed with cloud analytics practitioners to run a benchmark comparison on cloud analytics methods using the same input data and analysis algorithms. We have particularly looked at analysis algorithms that work over long time series, because these are particularly intractable for many Earth Observation datasets which typically store data with one or just a few time steps per file. This post will present side-by-side cost and performance results for several common Earth observation analysis operations.

  17. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    Science.gov (United States)

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and up-dating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method or on the design of a validation scheme for a complex multi-residue method require a well-considered strategy, based on a current knowledge of international guidance documents and regulatory requirements, as well the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  18. Development of analytical methods for the determination of trace elements in sediment with Neutron Activation method (NAA) and Inductively Coupled Plasma Mass Spectrometry (ICP-MS)

    International Nuclear Information System (INIS)

    Nam, Sang Ho; Kim, Jae Jin; Chung, Yong Sam; Kim, Sun Ha

    2005-01-01

    The analytical methods for the determination of major elements (Al, Ca, K, Fe, Mg) in sediment have been investigated with ICP-MS. The analytical results for major elements with cool-plasma ICP-MS were much better than those with normal ICP-MS. The analytical results were compared with those of NAA. NAA was a little superior to ICP-MS for the determination of major elements in sediment, and NAA is a non-destructive analytical method. The analytical methods for the determination of minor elements (Cr, Ce, U, Co, Pb, As, Se) in sediment have also been studied with ICP-MS. The analytical results obtained by standard calibration with ICP-MS were not accurate due to matrix interferences. Thus, an internal standard method was applied, and the analytical results for minor elements with ICP-MS were greatly improved. The analytical results obtained by ICP-MS were compared with those obtained by NAA. It was shown that the two analytical methods have great capabilities for the determination of minor elements in sediments.

  19. Advantages of Analytical Transformations in Monte Carlo Methods for Radiation Transport

    International Nuclear Information System (INIS)

    McKinley, M S; Brooks III, E D; Daffin, F

    2004-01-01

    Monte Carlo methods for radiation transport typically attempt to solve an integral by directly sampling analog or weighted particles, which are treated as physical entities. Improvements to the methods involve better sampling, probability games or physical intuition about the problem. We show that significant improvements can be achieved by recasting the equations with an analytical transform to solve for new, non-physical entities or fields. This paper looks at one such transform, the difference formulation for thermal photon transport, showing a significant advantage for Monte Carlo solution of the equations for time dependent transport. Other related areas are discussed that may also realize significant benefits from similar analytical transformations

  20. THE QuEChERS ANALYTICAL METHOD COMBINED WITH LOW ...

    African Journals Online (AJOL)

    The method has also been applied to different cereal samples and satisfactory average recoveries ... Analysis of multiclass pesticide residues in foods is a challenging task because of the ... compounds set by regulatory bodies. ..... analytes were used to evaluate the influences of the selected factors on performance of the.

  1. The use of analytical procedures in the internal audit of the restaurant business expenses

    Directory of Open Access Journals (Sweden)

    T.Yu. Kopotienko

    2015-06-01

    Full Text Available An important task in carrying out the internal audit of expenses is to obtain sufficient and reliable audit evidence. This can be achieved by using analytical procedures in the audit process. Identifying analytical procedures with the financial analysis of business activities prevents their efficient use in the internal audit of restaurant business expenses. Internal auditors' knowledge of the techniques of analytical procedures, and of their tasks at particular verification steps, is insufficient. The purpose of the article is to develop a method for the internal audit of restaurant business expenses based on an integrated application of analytical procedures. The nature and purpose of analytical procedures have been investigated in the article. The factors influencing the auditor's decision about the choice of the complex of analytical procedures have been identified. It is recommended to consider among them the purpose of the analytical procedures, the type and structure of the enterprise, the sources of the available information, the existence of financial and non-financial information, and the reliability and comparability of the available information. The tasks of analytical procedures have been identified for each verification step. A complex of analytical procedures has been proposed as part of the internal audit of restaurant business expenses. This complex contains a list of the analytical procedures, the analysis techniques used in each procedure, and a brief overview of the content of each procedure.

  2. MULTIPLE CRITERIA METHODS WITH FOCUS ON ANALYTIC HIERARCHY PROCESS AND GROUP DECISION MAKING

    Directory of Open Access Journals (Sweden)

    Lidija Zadnik-Stirn

    2010-12-01

    Full Text Available Managing natural resources is a group multiple criteria decision making problem. In this paper, the analytic hierarchy process is the chosen method for handling natural resource problems. The single decision maker problem is discussed, and three methods for deriving the priority vector are presented: the eigenvector method, the data envelopment analysis method, and the logarithmic least squares method. Further, the group analytic hierarchy process is discussed, and six methods for the aggregation of individual judgments or priorities are compared: the weighted arithmetic mean method, the weighted geometric mean method, and four methods based on data envelopment analysis. A case study on land use in Slovenia is presented. The conclusions review consistency, sensitivity analyses, and some future directions of research.
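
    A brief sketch of two of the ingredients discussed, the eigenvector method for a single pairwise comparison matrix and aggregation of individual judgments by the weighted geometric mean, is given below; the comparison matrices and decision-maker weights are illustrative, not the Slovenian land-use data.

```python
# Eigenvector priorities for an AHP pairwise comparison matrix, plus aggregation
# of two decision makers' judgments by the element-wise weighted geometric mean.
import numpy as np

def ahp_priorities(A: np.ndarray) -> np.ndarray:
    """Principal right eigenvector of a pairwise comparison matrix, normalised to sum to 1."""
    vals, vecs = np.linalg.eig(A)
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()

A1 = np.array([[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]])   # decision maker 1 (assumed)
A2 = np.array([[1, 2, 4], [1/2, 1, 3], [1/4, 1/3, 1]])   # decision maker 2 (assumed)

weights = np.array([0.5, 0.5])
A_group = np.exp(weights[0] * np.log(A1) + weights[1] * np.log(A2))
print(ahp_priorities(A_group).round(3))
```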

  3. Analytical methods of heat transfer compared with numerical methods as related to nuclear waste repositories

    International Nuclear Information System (INIS)

    Estrada-Gasca, C.A.

    1986-01-01

    Analytical methods were applied to the prediction of the far-field thermal impact of a nuclear waste repository. Specifically, the transformation of coordinates and the Kirchhoff transformation were used to solve one-dimensional nonlinear heat conduction problems. Calculations for the HLW and TRU nuclear waste with initial areal thermal loadings of 12 kW/acre and 0.7 kW/acre, respectively, are carried out for various models. Also, finite difference and finite element methods are applied. The last method is used to solve two-dimensional linear and nonlinear heat conduction problems. Results of the analysis are temperature distributions and temperature histories. Explicit analytical expressions of the maximum temperature rise as a function of the system parameters are presented. The theoretical approaches predict maximum temperature increases in the overburden with an error of 10%. When the finite solid one-dimensional NWR thermal problem is solved with generic salt and HLW thermal load as parameters, the maximum temperature rises predicted by the finite difference and finite element methods had maximum errors of 2.6 and 6.7%, respectively. In all the other cases the finite difference method also gave a smaller error than the finite element method
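
    As a minimal illustration of the numerical side of such comparisons (a generic explicit scheme with assumed properties, not the repository models of the study), one-dimensional transient heat conduction can be advanced by finite differences:

```python
# Generic explicit finite-difference solution of 1D transient heat conduction,
# dT/dt = alpha * d2T/dx2, with fixed-temperature boundaries. All values are assumed.
import numpy as np

alpha = 1.0e-6            # thermal diffusivity, m^2/s (assumed)
L, nx = 1.0, 51
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha  # satisfies the explicit stability limit dt <= dx^2 / (2*alpha)

T = np.full(nx, 20.0)     # initial temperature, deg C
T[0] = T[-1] = 100.0      # boundary temperatures

for _ in range(5000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(T[nx // 2])          # mid-plane temperature after 5000 steps
```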

  4. Developing automated analytical methods for scientific environments using LabVIEW.

    Science.gov (United States)

    Wagner, Christoph; Armenta, Sergio; Lendl, Bernhard

    2010-01-15

    The development of new analytical techniques often requires the building of specially designed devices, each requiring its own dedicated control software. Especially in the research and development phase, LabVIEW has proven to be one highly useful tool for developing this software. Yet, it is still common practice to develop individual solutions for different instruments. In contrast to this, we present here a single LabVIEW-based program that can be directly applied to various analytical tasks without having to change the program code. Driven by a set of simple script commands, it can control a whole range of instruments, from valves and pumps to full-scale spectrometers. Fluid sample (pre-)treatment and separation procedures can thus be flexibly coupled to a wide range of analytical detection methods. Here, the capabilities of the program have been demonstrated by using it for the control of both a sequential injection analysis - capillary electrophoresis (SIA-CE) system with UV detection, and an analytical setup for studying the inhibition of enzymatic reactions using a SIA system with FTIR detection.

  5. Vertical equilibrium with sub-scale analytical methods for geological CO2 sequestration

    KAUST Repository

    Gasda, S. E.

    2009-04-23

    Large-scale implementation of geological CO2 sequestration requires quantification of risk and leakage potential. One potentially important leakage pathway for the injected CO2 involves existing oil and gas wells. Wells are particularly important in North America, where more than a century of drilling has created millions of oil and gas wells. Models of CO2 injection and leakage will involve large uncertainties in parameters associated with wells, and therefore a probabilistic framework is required. These models must be able to capture both the large-scale CO2 plume associated with the injection and the small-scale leakage problem associated with localized flow along wells. Within a typical simulation domain, many hundreds of wells may exist. One effective modeling strategy combines both numerical and analytical models with a specific set of simplifying assumptions to produce an efficient numerical-analytical hybrid model. The model solves a set of governing equations derived by vertical averaging with assumptions of a macroscopic sharp interface and vertical equilibrium. These equations are solved numerically on a relatively coarse grid, with an analytical model embedded to solve for wellbore flow occurring at the sub-gridblock scale. This vertical equilibrium with sub-scale analytical method (VESA) combines the flexibility of a numerical method, allowing for heterogeneous and geologically complex systems, with the efficiency and accuracy of an analytical method, thereby eliminating expensive grid refinement for sub-scale features. Through a series of benchmark problems, we show that VESA compares well with traditional numerical simulations and to a semi-analytical model which applies to appropriately simple systems. We believe that the VESA model provides the necessary accuracy and efficiency for applications of risk analysis in many CO2 sequestration problems. © 2009 Springer Science+Business Media B.V.

  6. Structural reliability methods: Code development status

    Science.gov (United States)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-05-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.

  7. Reliability evaluation of power supply and distribution for special heat removal systems in nuclear power stations

    International Nuclear Information System (INIS)

    Jazbec, D.

    1982-01-01

    An example of the power supply and distribution of a Special Emergency Heat Removal System (SEHR) shows how an engineering organization may, with the aid of the analytical method of min-cut sets, optimize the system reliability. The necessary simple calculation methods are given herein. (Auth.)
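
    A minimal sketch of the min-cut-set calculation referred to is given below; the component labels, unavailabilities and cut sets are hypothetical, not the SEHR system data.

```python
# Hypothetical min-cut-set evaluation of system unavailability: a cut set fails
# only if all of its components are unavailable. Two standard approximations are shown.
q = {"DG1": 1e-2, "DG2": 1e-2, "BUS_A": 5e-4, "BUS_B": 5e-4}     # component unavailabilities
min_cut_sets = [{"DG1", "DG2"}, {"BUS_A", "BUS_B"}, {"DG1", "BUS_B"}]

def cut_prob(cut):
    p = 1.0
    for comp in cut:
        p *= q[comp]
    return p

rare_event = sum(cut_prob(c) for c in min_cut_sets)              # rare-event approximation
mcub = 1.0
for c in min_cut_sets:                                           # min-cut upper bound
    mcub *= 1.0 - cut_prob(c)
print(rare_event, 1.0 - mcub)
```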

  8. Integrated Markov-neural reliability computation method: A case for multiple automated guided vehicle system

    International Nuclear Information System (INIS)

    Fazlollahtabar, Hamed; Saidi-Mehrabad, Mohammad; Balakrishnan, Jaydeep

    2015-01-01

    This paper proposes an integrated Markovian and back-propagation neural network approach to compute the reliability of a system. Since the states in which failures occur are significant elements for accurate reliability computation, a Markovian-based reliability assessment method is designed. Due to drawbacks of the Markovian model for steady-state reliability computations and of the neural network for the initial training pattern, an integrated approach, called Markov-neural, is developed and evaluated. To show the efficiency of the proposed approach, comparative analyses are performed. Also, for managerial implications, an application case for multiple automated guided vehicles (AGVs) in manufacturing networks is conducted. - Highlights: • Integrated Markovian and back-propagation neural network approach to compute reliability. • Markovian-based reliability assessment method. • Managerial implications shown in an application case for multiple automated guided vehicles (AGVs) in manufacturing networks
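
    The Markovian building block of such an approach can be sketched for a single repairable unit; the failure and repair rates below are assumed for illustration, and the neural-network part is not shown.

```python
# Two-state repairable unit: state 0 = up, state 1 = down, failure rate lam, repair rate mu.
# Transient state probabilities follow dP/dt = P @ Q; the steady state has a closed form.
import numpy as np
from scipy.linalg import expm

lam, mu = 1e-3, 1e-1                      # per hour (assumed)
Q = np.array([[-lam,  lam],
              [  mu,  -mu]])              # generator matrix

p0 = np.array([1.0, 0.0])                 # starts in the up state
p_t = p0 @ expm(Q * 100.0)                # state probabilities at t = 100 h
steady = mu / (lam + mu)                  # analytical steady-state availability
print(p_t[0], steady)
```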

  9. Reliability Analysis Of Fire System On The Industry Facility By Use Fameca Method

    International Nuclear Information System (INIS)

    Sony T, D.T.; Situmorang, Johnny; Ismu W, Puradwi; Demon H; Mulyanto, Dwijo; Kusmono, Slamet; Santa, Sigit Asmara

    2000-01-01

    FAMECA is one of the analysis methods used to determine system reliability in an industrial facility. The analysis follows a procedure consisting of identification of component functions and determination of failure modes, severity levels and failure effects. The reliability value is determined by combining three factors: severity level, component failure value and component criticality. A reliability analysis has been performed for the fire system of an industrial facility using the FAMECA method. The critical components identified are the pump, air release valve, check valve, manual test valve, isolation valve, control system, etc.
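
    As a generic illustration of how such a criticality ranking can be tabulated (the scales and scores below are assumptions, not the values used for the fire system in the study), components can be ordered by the product of their severity, failure and criticality scores:

```python
# Generic FMECA-style ranking sketch: each component gets a severity score,
# a failure-probability score and a criticality score; their product orders
# the components for attention. All scores are hypothetical.
components = {
    "pump":              {"severity": 9, "failure": 4, "criticality": 8},
    "air release valve": {"severity": 6, "failure": 3, "criticality": 5},
    "check valve":       {"severity": 7, "failure": 2, "criticality": 6},
    "control system":    {"severity": 8, "failure": 3, "criticality": 9},
}

def risk_index(scores):
    return scores["severity"] * scores["failure"] * scores["criticality"]

ranking = sorted(components.items(), key=lambda kv: risk_index(kv[1]), reverse=True)
for name, scores in ranking:
    print(f"{name:18s} risk index = {risk_index(scores)}")
```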

  10. Determination of 237Np in environmental and nuclear samples: A review of the analytical method

    International Nuclear Information System (INIS)

    Thakur, P.; Mulholland, G.P.

    2012-01-01

    A number of analytical methods have been developed and used for the determination of neptunium in environmental and nuclear fuel samples using alpha spectrometry, ICP-MS, and other analytical techniques. This review summarizes and discusses the development of radiochemical procedures for the separation of neptunium (Np) since the beginning of the nuclear industry, followed by a more detailed discussion of recent trends in the separation of neptunium. This article also highlights the progress in analytical methods and the issues associated with the determination of neptunium in environmental samples. - Highlights: ► Determination of Np in environmental and nuclear samples is reviewed. ► Various analytical methods used for the determination of Np are listed. ► Progress and issues associated with the determination of Np are discussed.

  11. Advances in the Analytical Methods for Determining the Antioxidant ...

    African Journals Online (AJOL)

    Advances in the Analytical Methods for Determining the Antioxidant Properties of Honey: A Review. M Moniruzzaman, MI Khalil, SA Sulaiman, SH Gan. Abstract. Free radicals and reactive oxygen species (ROS) have been implicated in contributing to the processes of aging and disease. In an effort to combat free radical ...

  12. Analytic thinking reduces belief in conspiracy theories.

    Science.gov (United States)

    Swami, Viren; Voracek, Martin; Stieger, Stefan; Tran, Ulrich S; Furnham, Adrian

    2014-12-01

    Belief in conspiracy theories has been associated with a range of negative health, civic, and social outcomes, requiring reliable methods of reducing such belief. Thinking dispositions have been highlighted as one possible factor associated with belief in conspiracy theories, but actual relationships have only been infrequently studied. In Study 1, we examined associations between belief in conspiracy theories and a range of measures of thinking dispositions in a British sample (N=990). Results indicated that a stronger belief in conspiracy theories was significantly associated with lower analytic thinking and open-mindedness and greater intuitive thinking. In Studies 2-4, we examined the causational role played by analytic thinking in relation to conspiracist ideation. In Study 2 (N=112), we showed that a verbal fluency task that elicited analytic thinking reduced belief in conspiracy theories. In Study 3 (N=189), we found that an alternative method of eliciting analytic thinking, which related to cognitive disfluency, was effective at reducing conspiracist ideation in a student sample. In Study 4, we replicated the results of Study 3 among a general population sample (N=140) in relation to generic conspiracist ideation and belief in conspiracy theories about the July 7, 2005, bombings in London. Our results highlight the potential utility of supporting attempts to promote analytic thinking as a means of countering the widespread acceptance of conspiracy theories. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Analytical studies on the Benney-Luke equation in mathematical physics

    Science.gov (United States)

    Islam, S. M. Rayhanul; Khan, Kamruzzaman; Woadud, K. M. Abdul Al

    2018-04-01

    The enhanced (G‧/G)-expansion method presents wide applicability to handling nonlinear wave equations. In this article, we find new exact traveling wave solutions of the Benney-Luke equation by using the enhanced (G‧/G)-expansion method. This method is a useful, reliable, and concise way to solve nonlinear evolution equations (NLEEs) easily. The traveling wave solutions are expressed in terms of hyperbolic and trigonometric functions. We also plot 2D and 3D graphics of some of the analytical solutions obtained in this paper.
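
    For orientation, the Benney-Luke equation is commonly written in the form below, and the basic (G′/G)-expansion ansatz (of which the enhanced variant used in the article is a refinement) seeks traveling wave solutions as a finite power series in G′/G; the notation here is the standard textbook form, not necessarily the exact parametrization of the cited article.

```latex
% Benney-Luke equation (a, b > 0 are dispersion parameters):
u_{tt} - u_{xx} + a\,u_{xxxx} - b\,u_{xxtt} + u_t u_{xx} + 2 u_x u_{xt} = 0 .

% Basic (G'/G)-expansion ansatz for a traveling wave u(x,t) = U(\xi), \xi = x - Vt:
U(\xi) = \sum_{i=0}^{N} a_i \left( \frac{G'(\xi)}{G(\xi)} \right)^{i},
\qquad G'' + \lambda G' + \mu G = 0 .
```

    Balancing the highest-order derivative against the nonlinearity fixes N, after which the constants a_i, λ, μ and V are determined from the resulting algebraic system.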

  14. The development of precisely analytical method for the concentrated boric acid solution in the NPP systems

    Energy Technology Data Exchange (ETDEWEB)

    Sung, G. B.; Jung, K. H.; Kang, D. W. [KEPRI, Taejon (Korea, Republic of); Park, C. S. [KEPCO, Taejon (Korea, Republic of)

    1999-05-01

    Boric acid is used for reactivity control in nuclear reactors, which frequently results in leftover boric acid. This extra boric acid is stored in a boric acid storage tank after being concentrated by the boric acid evaporator. Apart from this excess, highly concentrated boric acid is stored in a safety-related boric acid storage tank. Accordingly, proper management of this boric acid is one of the greatest safety concerns. The solubility of boric acid decreases with decreasing temperature, resulting in precipitation. Consequently, the boric acid storage tanks are maintained at high temperature. The subsequent analysis should also be performed at a similarly high temperature to prevent precipitation of boric acid, which is difficult to achieve and affects the accuracy of the analytical results. This paper presents a new sampling and measuring technique that overcomes the difficulties mentioned above and offers several advantages, including improved reliability and short analysis time. The method is based on gravimetry and a dilution method and is expected to be widely used in field applications.

  15. AN ANALYTICAL FRAMEWORK FOR ASSESSING RELIABLE NUCLEAR FUEL SERVICE APPROACHES: ECONOMIC AND NON-PROLIFERATION MERITS OF NUCLEAR FUEL LEASING

    International Nuclear Information System (INIS)

    Kreyling, Sean J.; Brothers, Alan J.; Short, Steven M.; Phillips, Jon R.; Weimar, Mark R.

    2010-01-01

    The goal of international nuclear policy since the dawn of nuclear power has been the peaceful expansion of nuclear energy while controlling the spread of enrichment and reprocessing technology. Numerous initiatives undertaken in the intervening decades to develop international agreements on providing nuclear fuel supply assurances, or reliable nuclear fuel services (RNFS) attempted to control the spread of sensitive nuclear materials and technology. In order to inform the international debate and the development of government policy, PNNL has been developing an analytical framework to holistically evaluate the economics and non-proliferation merits of alternative approaches to managing the nuclear fuel cycle (i.e., cradle-to-grave). This paper provides an overview of the analytical framework and discusses preliminary results of an economic assessment of one RNFS approach: full-service nuclear fuel leasing. The specific focus of this paper is the metrics under development to systematically evaluate the non-proliferation merits of fuel-cycle management alternatives. Also discussed is the utility of an integrated assessment of the economics and non-proliferation merits of nuclear fuel leasing.

  16. COMPOSITE METHOD OF RELIABILITY RESEARCH FOR HIERARCHICAL MULTILAYER ROUTING SYSTEMS

    Directory of Open Access Journals (Sweden)

    R. B. Tregubov

    2016-09-01

    Full Text Available The paper deals with the idea of a research method for hierarchical multilayer routing systems. The method is a composition of methods from graph theory, reliability theory, probability theory, etc. These methods are applied to the solution of different particular analysis and optimization tasks and are systemically connected and coordinated with each other through a uniform set-theoretic representation of the object of research. The hierarchical multilayer routing systems are considered as infrastructure facilities (gas and oil pipelines, automobile and railway networks, power supply and communication systems) that distribute material resources, energy or information with the use of hierarchically nested routing functions. For clarity, the theoretical constructions are illustrated by the example of determining the probability of the up state of a specific infocommunication system. The author shows the possibility of constructively combining a graph representation of the structure of the object of research with a logical-probabilistic method for analyzing its reliability indices through a uniform set-theoretic representation of its elements and the processes proceeding in them.
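
    As a toy illustration of the logical-probabilistic reliability evaluation mentioned above, the sketch below computes the probability that a source and a sink stay connected in a small network by enumerating all edge up/down states; the topology and edge reliabilities are hypothetical, and the brute-force enumeration merely stands in for the paper's set-theoretic machinery, which scales far better.

```python
# Toy logical-probabilistic reliability evaluation: probability that node "s"
# can still reach node "t" when each edge fails independently.
# Topology and edge reliabilities are hypothetical placeholders.
from itertools import product

edges = {  # edge: probability that the edge is up
    ("s", "a"): 0.95, ("s", "b"): 0.90,
    ("a", "t"): 0.92, ("b", "t"): 0.93, ("a", "b"): 0.85,
}

def connected(up_edges, src="s", dst="t"):
    """Depth-first search over the edges that are currently up."""
    stack, seen = [src], {src}
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        for (u, v) in up_edges:
            for nxt in ((v,) if u == node else (u,) if v == node else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return False

edge_list = list(edges)
p_up = 0.0
for states in product([True, False], repeat=len(edge_list)):
    prob = 1.0
    for (edge, up) in zip(edge_list, states):
        prob *= edges[edge] if up else 1.0 - edges[edge]
    if connected([e for e, up in zip(edge_list, states) if up]):
        p_up += prob

print(f"P(system up) = {p_up:.5f}")
```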

  17. Analytical method and result of radiation exposure for depressurization accident of HTTR

    International Nuclear Information System (INIS)

    Sawa, K.; Shiozawa, S.; Mikami, H.

    1990-01-01

    The Japan Atomic Energy Research Institute (JAERI) is now proceeding with the construction design of the High Temperature Engineering Test Reactor (HTTR). Since the HTTR has some characteristics different from those of LWRs, the analytical methods of radiation exposure in accidents developed for LWRs cannot be applied directly. This paper describes the analytical method of radiation exposure developed by JAERI for the depressurization accident, which is the most severe accident with respect to radiation exposure among the design basis accidents of the HTTR. The result is also described in this paper.

  18. Pilot testing of SHRP 2 reliability data and analytical products: Washington.

    Science.gov (United States)

    2014-07-30

    The second Strategic Highway Research Program (SHRP 2) addresses the challenges of moving people and goods efficiently and safely on the nation's highways. In its Reliability focus area, the research emphasizes improving the reliability of highway ...

  19. A study in the reliability analysis method for nuclear power plant structures (I)

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Byung Hwan; Choi, Seong Cheol; Shin, Ho Sang; Yang, In Hwan; Kim, Yi Sung; Yu, Young; Kim, Se Hun [Seoul, Nationl Univ., Seoul (Korea, Republic of)

    1999-03-15

    Nuclear power plant structures may be exposed to aggressive environmental effects that cause their strength and stiffness to decrease over their service life. Although the physics of these damage mechanisms is reasonably well understood and quantitative evaluation of their effects on time-dependent structural behavior is possible in some instances, such evaluations are generally very difficult and remain novel. The assessment of existing steel containments in nuclear power plants for continued service must provide quantitative evidence that they are able to withstand future extreme loads during a service period with an acceptable level of reliability. Rational methodologies for the reliability assessment can be developed from mechanistic models of structural deterioration, using time-dependent structural reliability analysis to take loading and strength uncertainties into account. The final goal of this study is to develop an analysis method for the reliability of containment structures. The cause and mechanism of corrosion are first clarified and the reliability assessment method is established. By introducing the equivalent normal distribution, a reliability analysis procedure that can determine the failure probabilities has been established. The influence of design variables on reliability and the relation between reliability and service life will be studied in the second year of the research.
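
    As a compact statement of the time-dependent reliability problem sketched above (a generic textbook formulation, not the report's specific corrosion model), the failure probability over a service period [0, T] can be written as a first-passage probability with a degrading resistance:

```latex
P_f(T) \;=\; 1 - \Pr\!\left\{\, R_0\, g(t) - S(t) > 0 \quad \forall\, t \in [0,T] \,\right\},
```

    where R_0 is the initial resistance, g(t) ≤ 1 is a (possibly corrosion-driven) degradation function, and S(t) is the stochastic load process; the equivalent-normal transformation mentioned in the abstract is the standard device for evaluating such probabilities when R_0 and S are non-Gaussian.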

  20. Simplified Analytical Methods to Analyze Lock Gates Submitted to Ship Collisions and Earthquakes

    Directory of Open Access Journals (Sweden)

    Buldgen Loic

    2015-09-01

    Full Text Available This paper presents two simplified analytical methods to analyze lock gates submitted to two different accidental loads. The case of an impact involving a vessel is first investigated. In this situation, the resistance of the struck gate is evaluated by assuming a local and a global deforming mode. The super-element method is used in the first case, while an equivalent beam model is simultaneously introduced to capture the overall bending motion of the structure. The second accidental load considered in this paper is the seismic action, for which an analytical method is presented to evaluate the total hydrodynamic pressure applied on a lock gate during an earthquake, due account being taken of the fluid-structure interaction. For each of these two actions, numerical validations are presented and the analytical results are compared to finite-element solutions.

  1. Selecting analytical tools for characterization of polymersomes in aqueous solution

    DEFF Research Database (Denmark)

    Habel, Joachim Erich Otto; Ogbonna, Anayo; Larsen, Nanna

    2015-01-01

    Selecting the appropriate analytical methods for characterizing the assembly and morphology of polymer-based vesicles, or polymersomes, is required for them to reach their full potential in biotechnology. This work presents and compares 17 different techniques for their ability to adequately report size …/purification. Of the analytical methods tested, Cryo-transmission electron microscopy and atomic force microscopy (AFM) turned out to be advantageous for polymersomes with diameters smaller than 200 nm, whereas confocal microscopy is ideal for diameters >400 nm. Polymersomes in the intermediate diameter range can be characterized … using freeze fracture Cryo-scanning electron microscopy (FF-Cryo-SEM) and nanoparticle tracking analysis (NTA). Small angle X-ray scattering (SAXS) provides reliable data on bilayer thickness and internal structure, Cryo-TEM on multilamellarity. Taken together, these tools are valuable …

  2. Investigation by perturbative and analytical method of electronic properties of square quantum well under electric field

    Directory of Open Access Journals (Sweden)

    Mustafa Kemal BAHAR

    2010-06-01

    Full Text Available In this study, the effects of an applied electric field on an isolated square quantum well were investigated by analytic and perturbative methods. The energy eigenvalues and wave functions in the quantum well were found by the perturbative method. The electric field effects were then investigated by the analytic method, and the results of the perturbative and analytic methods were compared. Besides the two sets of results agreeing well with each other, it was observed that the externally applied electric field significantly changes the electronic properties of the system.
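
    For reference, the perturbative treatment referred to above is ordinarily standard stationary perturbation theory with the field term V = eFx added to the well Hamiltonian (a generic textbook expression, not necessarily the article's exact parametrization); for a well symmetric about x = 0 the first-order shift of a bound level vanishes and the leading Stark shift is second order in the field:

```latex
E_n \;\approx\; E_n^{(0)} + \langle n|\,eFx\,|n\rangle
      + \sum_{m \neq n} \frac{\left|\langle m|\,eFx\,|n\rangle\right|^{2}}{E_n^{(0)} - E_m^{(0)}},
\qquad \langle n|\,eFx\,|n\rangle = 0 \ \text{for a symmetric well},
```

    so the ground-state energy is lowered by an amount proportional to F² (the quantum-confined Stark effect), which an analytic treatment of the tilted well (typically in terms of Airy functions) reproduces without truncating the sum.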

  3. Microgenetic Learning Analytics Methods: Workshop Report

    Science.gov (United States)

    Aghababyan, Ani; Martin, Taylor; Janisiewicz, Philip; Close, Kevin

    2016-01-01

    Learning analytics is an emerging discipline and, as such, benefits from new tools and methodological approaches. This work reviews and summarizes our workshop on microgenetic data analysis techniques using R, held at the second annual Learning Analytics Summer Institute in Cambridge, Massachusetts, on 30 June 2014. Specifically, this paper…

  4. Differential reliability : probabilistic engineering applied to wood members in bending-tension

    Science.gov (United States)

    Stanley K. Suddarth; Frank E. Woeste; William L. Galligan

    1978-01-01

    Reliability analysis is a mathematical technique for appraising the design and materials of engineered structures to provide a quantitative estimate of probability of failure. Two or more cases which are similar in all respects but one may be analyzed by this method; the contrast between the probabilities of failure for these cases allows strong analytical focus on the...

  5. Reliable determination of new lipid peroxidation compounds as potential early Alzheimer Disease biomarkers.

    Science.gov (United States)

    García-Blanco, Ana; Peña-Bautista, Carmen; Oger, Camille; Vigor, Claire; Galano, Jean-Marie; Durand, Thierry; Martín-Ibáñez, Nuria; Baquero, Miguel; Vento, Máximo; Cháfer-Pericás, Consuelo

    2018-07-01

    Lipid peroxidation plays an important role in Alzheimer Disease, so corresponding metabolites found in urine samples could be potential biomarkers. The aim of this work is to develop a reliable ultra-performance liquid chromatography-tandem mass spectrometry analytical method to determine a new set of lipid peroxidation compounds in urine samples. Excellent sensitivity was achieved with limits of detection between 0.08 and 17 nmol L−1, which renders this method suitable to monitor analyte concentrations in real samples. The method's precision was satisfactory with coefficients of variation around 5-17% (intra-day) and 8-19% (inter-day). The accuracy of the method was assessed by analysis of spiked urine samples, obtaining recoveries between 70% and 120% for most of the analytes. The utility of the described method was tested by analyzing urine samples from patients early diagnosed with mild cognitive impairment or mild dementia Alzheimer Disease following the clinical standard criteria. As preliminary results, some analytes (17(RS)-10-epi-SC-Δ15-11-dihomo-IsoF, PGE2) and total parameters (Neuroprostanes, Isoprostanes, Isofurans) show differences between the control and the clinical groups. So, these analytes could be potential early Alzheimer Disease biomarkers assessing the patients' pro-oxidant condition. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Comparative analysis of methods for real-time analytical control of chemotherapies preparations.

    Science.gov (United States)

    Bazin, Christophe; Cassard, Bruno; Caudron, Eric; Prognon, Patrice; Havard, Laurent

    2015-10-15

    Control of chemotherapy preparations is now an obligation in France, though analytical control is not compulsory. Several methods are available and none of them is presumed to be ideal. We wanted to compare them so as to determine which one could be the best choice. We compared non-analytical (visual and video-assisted, gravimetric) and analytical (HPLC/FIA, UV/FT-IR, UV/Raman, Raman) methods on the basis of our experience and a SWOT analysis. The results of the analysis show great differences between the techniques, but as expected none of them is without defects. However, they can probably be used in synergy. Overall, for the pharmacist willing to get involved, the implementation of the control of chemotherapy preparations must be anticipated well in advance, with the listing of every parameter, and remains, in our view, an analyst's job. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Application of Multi-Analyte Methods for Pesticide Formulations

    Energy Technology Data Exchange (ETDEWEB)

    Lantos, J.; Virtics, I. [Plant Protection & Soil Conservation Service of Szabolcs-Szatmár-Bereg County, Nyíregyháza (Hungary)

    2009-07-15

    The application of multi-analyte methods for pesticide formulations by GC analysis is discussed. HPLC was used to determine active ingredients. HPLC elution sequences were related to individual n-octanol/water partition coefficients. Real laboratory data are presented and evaluated with regard to validation requirements. The retention time data of pesticides on different HPLC columns under gradient and isocratic conditions are compared to illustrate the applicability of the methodologies. (author)

  8. Experimental setup and analytical methods for the non-invasive determination of volatile organic compounds, formaldehyde and NOx in exhaled human breath

    International Nuclear Information System (INIS)

    Riess, Ulrich; Tegtbur, Uwe; Fauck, Christian; Fuhrmann, Frank; Markewitz, Doreen; Salthammer, Tunga

    2010-01-01

    Different analytical devices were tested and evaluated for their suitability for breath gas analysis by examining the physiological parameters and chemical substances in the exhaled breath of ten healthy probands during light cycling as a function of methanol-rich nutrition. The probands exercised under normal breathing conditions on a bicycle ergometer. Breath air was exhaled into a glass cylinder and collected under steady-state conditions. Non-invasively measured parameters were pulse rate, breath frequency, temperature, relative humidity, NOx, total volatile organic compounds (TVOCPAS), carbon dioxide (CO2), formaldehyde, methanol, acetaldehyde, acetone, isoprene and volatile organic compounds (VOCs). Methanol-rich food and beverages strongly influenced the concentration of methanol and other organic substances in human breath. On the other hand, nutrition and smoking had no clear effect on the physical condition of the probands. The proton transfer reaction mass spectrometry (PTR-MS) method was found to be very suitable for the analysis of breath gas, but the m/z 31 signal, if assigned to formaldehyde, is sensitive to interferences. The time vs. concentration curves of nitric oxide showed sudden peaks up to 120 ppb in most of the measurements. In one case a strong interference with the NOx signal was observed. Time-resolved analysis of exhaled breath gas is highly capable and significant for different applications if reliable analytical techniques are used. Some compounds like nitric oxide (NO), methanol and different VOCs, as well as sum parameters like TVOCPAS, are especially suitable as markers. Formaldehyde, which is rapidly metabolized in the human body, could be measured reliably as a trace component by the acetylacetone (acac) method but not by PTR-MS.

  9. Analytic methods for field induced tunneling in quantum wells

    Indian Academy of Sciences (India)

    Analytic methods for field induced tunneling in quantum wells with arbitrary potential profiles ... Electric field induced tunneling is studied in three different types of quantum wells by solving time-independent effective mass ...

  10. Research on the reliability of friction system under combined additive and multiplicative random excitations

    Science.gov (United States)

    Sun, Jiaojiao; Xu, Wei; Lin, Zifei

    2018-01-01

    In this paper, the reliability of a non-linearly damped friction oscillator under combined additive and multiplicative Gaussian white noise excitations is investigated. The stochastic averaging method, which is usually applied to smooth systems, has been extended to the study of the reliability of a non-smooth friction system. The results indicate that the reliability of the friction system can be improved by Coulomb friction and is reduced by the random excitations. In particular, the effect of the external random excitation on the reliability is larger than that of the parametric random excitation. The validity of the analytical results is verified by numerical results.
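
    The numerical verification mentioned above is usually a direct Monte Carlo first-passage estimate; the sketch below is a generic version of such a check (Euler-Maruyama integration of a Coulomb-friction oscillator with both an additive and a multiplicative white-noise term), with all parameter values chosen as hypothetical placeholders rather than taken from the paper.

```python
# Monte Carlo first-passage reliability of a friction oscillator driven by
# combined additive and multiplicative white noise (Euler-Maruyama scheme).
# All parameters are illustrative placeholders, not the paper's values.
import numpy as np

rng = np.random.default_rng(0)
zeta, omega, f_c = 0.05, 1.0, 0.10      # damping ratio, natural freq., Coulomb friction level
sig_add, sig_mul = 0.20, 0.10           # additive / multiplicative noise intensities
barrier, T, dt, n_samples = 2.0, 20.0, 0.005, 300

def survives(rng):
    x, v = 0.0, 0.0
    for _ in range(int(T / dt)):
        dw1, dw2 = rng.normal(0.0, np.sqrt(dt), size=2)
        a = (-2.0 * zeta * omega * v
             - f_c * np.sign(v)
             - omega**2 * x)
        v += a * dt + sig_add * dw1 + sig_mul * x * dw2   # additive + multiplicative noise
        x += v * dt
        if abs(x) >= barrier:                             # first passage = failure
            return False
    return True

reliability = np.mean([survives(rng) for _ in range(n_samples)])
print(f"Estimated reliability over [0, {T}]: {reliability:.3f}")
```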

  11. A sample preparation method for recovering suppressed analyte ions in MALDI TOF MS

    NARCIS (Netherlands)

    Lou, X.; Waal, de B.F.M.; Milroy, L.G.; Dongen, van J.L.J.

    2015-01-01

    In matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI TOF MS), analyte signals can be substantially suppressed by other compounds in the sample. In this technical note, we describe a modified thin-layer sample preparation method that significantly reduces the analyte

  12. A comparative study on the HW reliability assessment methods for digital I and C equipment

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Hoan Sung; Sung, T. Y.; Eom, H. S.; Park, J. K.; Kang, H. G.; Lee, G. Y. [Korea Atomic Energy Research Institute, Taejeon (Korea); Kim, M. C. [Korea Advanced Institute of Science and Technology, Taejeon (Korea); Jun, S. T. [KHNP, Taejeon (Korea)

    2002-03-01

    It is necessary to predict or evaluate the reliability of electronic equipment for the probabilistic safety analysis of digital instrumentation and control equipment. However, most databases for reliability prediction have no data for up-to-date equipment, and the failure modes are not classified. The prediction results for a specific component show different values according to the methods and databases used, and the same holds for boards and systems. This study concerns the reliability prediction of the PDC system of Wolsong NPP1 as a piece of digital I and C equipment. Various reliability prediction methods and failure databases are used in the calculation of the reliability to compare the sensitivity and accuracy of each model and database. Many considerations for the reliability assessment of digital systems are derived from the results of this study. 14 refs., 19 figs., 15 tabs. (Author)
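
    Most handbook-style predictions of this kind reduce to a parts-count calculation: sum the (environment- and quality-adjusted) failure rates of all parts on a board and convert the total to reliability or MTBF. The sketch below shows that generic calculation; the part list, failure rates and adjustment factors are hypothetical placeholders standing in for whatever the chosen handbook or database prescribes.

```python
# Generic parts-count reliability prediction for one digital I&C board.
# Part names, base failure rates (failures per 1e6 h) and pi-factors are
# hypothetical placeholders, not values from any specific handbook.
import math

parts = [
    # (name, quantity, base failure rate [1/1e6 h], adjustment factor pi)
    ("CPU",           1, 0.50, 2.0),
    ("DRAM",          4, 0.10, 1.5),
    ("A/D converter", 2, 0.20, 1.5),
    ("power supply",  1, 1.00, 1.2),
    ("connector",     6, 0.05, 1.0),
]

lam_board = sum(qty * lam * pi for _, qty, lam, pi in parts)   # total rate per 1e6 h
mtbf_hours = 1e6 / lam_board
mission_t = 8760.0                                             # one year of operation
reliability = math.exp(-lam_board * mission_t / 1e6)

print(f"board failure rate = {lam_board:.3f} /1e6 h, MTBF = {mtbf_hours:,.0f} h")
print(f"R(1 year) = {reliability:.4f}")
```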

  13. Analytical methods in rotor dynamics

    CERN Document Server

    Dimarogonas, Andrew D; Chondros, Thomas G

    2013-01-01

    The design and construction of rotating machinery operating at supercritical speeds was, in the 1920s, an event of revolutionary importance for the then new branch of dynamics known as rotor dynamics. In the 1960s, another revolution occurred: In less than a decade, imposed by operational and economic needs, an increase in the power of turbomachinery by one order of magnitude took place. Dynamic analysis of complex rotor forms became a necessity, while the importance of approximate methods for dynamic analysis was stressed. Finally, the emergence of fracture mechanics, as a new branch of applied mechanics, provided analytical tools to investigate crack influence on the dynamic behavior of rotors. The scope of this book is based on all these developments. No topics related to the well-known classical problems are included, rather the book deals exclusively with modern high-power turbomachinery.

  14. Dynamic reliability and risk assessment of the accident localization system of the Ignalina NPP RBMK-1500 reactor

    International Nuclear Information System (INIS)

    Kopustinskas, V.; Augutis, J.; Rimkevicius, S.

    2005-01-01

    The paper presents a reliability and risk analysis of the RBMK-1500 reactor accident localization system (ALS) (confinement), which prevents radioactive releases to the environment. The reliability of the system was estimated and compared by two methods: the conventional fault tree method and an innovative dynamic reliability model based on stochastic differential equations. The frequency of radioactive release through the ALS was also estimated. The results of the study indicate that conventional fault tree modeling techniques in this case introduce a high degree of conservatism into the system reliability estimates. One of the purposes of the ALS reliability study was to demonstrate the advantages of dynamic reliability analysis over the conventional fault/event tree methods. The Markovian framework used to deal with dynamic aspects of system behavior is presented. Although not analyzed in detail, the framework is also capable of accounting for non-constant component failure rates. Computational methods are proposed to solve the stochastic differential equations, including the analytical solution, which is possible only for relatively small and simple systems. Other numerical methods, like Monte Carlo and numerical schemes for differential equations, are analyzed and compared. The study is finalized with concluding remarks regarding both the studied system reliability and the computational methods used.
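
    To make the "analytical solution for small systems" remark concrete, the simplest Markovian building block is a single repairable component with constant failure rate λ and repair rate μ, whose availability obeys a linear ODE with a closed-form solution (a standard textbook result, not a model of the ALS itself):

```latex
\frac{dA(t)}{dt} = -\lambda A(t) + \mu\,\bigl(1 - A(t)\bigr), \qquad A(0) = 1
\;\;\Longrightarrow\;\;
A(t) = \frac{\mu}{\lambda + \mu} + \frac{\lambda}{\lambda + \mu}\, e^{-(\lambda + \mu)t}.
```

    Larger systems couple many such states, and evaluating the resulting matrix exponential analytically quickly becomes impractical, which is where the Monte Carlo and numerical integration schemes compared in the paper take over.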

  15. Analytic function expansion nodal method for nuclear reactor core design

    International Nuclear Information System (INIS)

    Noh, Hae Man

    1995-02-01

    In most advanced nodal methods the transverse integration is commonly used to reduce the multi-dimensional diffusion equation into equivalent one-dimensional diffusion equations when deriving the nodal coupling equations. But the use of the transverse integration results in some limitations. The first limitation is that the transverse leakage term which appears in the transverse integration procedure must be appropriately approximated. The second limitation is that the one-dimensional flux shapes in each spatial direction resulting from the nodal calculation are not accurate enough to be directly used in reconstructing the pinwise flux distributions. Finally, the transverse leakage defined for a non-rectangular node such as a hexagonal node or a triangular node is too complicated to be easily handled and may contain non-physical singular terms of step-function and delta-function types. In this thesis, the Analytic Function Expansion Nodal (AFEN) method and its two variations: the Polynomial Expansion Nodal (PEN) method and the hybrid of the AFEN and PEN methods, have been developed to overcome the limitations of the transverse integration procedure. All of the methods solve the multidimensional diffusion equation without the transverse integration. The AFEN method, which we believe is the major contribution of this study to reactor core analysis, expands the homogeneous flux distributions within a node in non-separable analytic basis functions satisfying the neutron diffusion equations at any point of the node and expresses the coefficients of the flux expansion in terms of the nodal unknowns which comprise a node-average flux, node-interface fluxes, and corner-point fluxes. Then, the nodal coupling equations composed of the neutron balance equations, the interface current continuity equations, and the corner-point leakage balance equations are solved iteratively to determine all the nodal unknowns. Since the AFEN method does not use the transverse integration in

  16. Improvement of spatial discretization error on the semi-analytic nodal method using the scattered source subtraction method

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Tatsumi, Masahiro

    2006-01-01

    In this paper, the scattered source subtraction (SSS) method is newly proposed to improve the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. In the SSS method, the scattered source is subtracted from both sides of the diffusion or transport equation to make the spatial variation of the source term small. The same neutron balance equation is still used in the SSS method. Since the SSS method just modifies the coefficients of the node coupling equations (those used in evaluating the response of partial currents), its implementation is easy. The validity of the present method is verified through test calculations carried out in PWR multi-assembly configurations. The calculation results show that the SSS method can significantly reduce the spatial discretization error. Since the SSS method does not have any negative impact on execution time, convergence behavior or memory requirements, it will be useful for reducing the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. (author)
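
    As a schematic of the idea (shown here for one-group diffusion with isotropic scattering, which is only an illustrative special case, not the paper's multigroup semi-analytic formulation), moving the within-group scattered source to the left-hand side leaves a right-hand-side source that varies much less over the node, so the flat-source approximation does less damage:

```latex
-D\nabla^{2}\phi + \Sigma_t\,\phi = \Sigma_s\,\phi + Q
\quad\Longrightarrow\quad
-D\nabla^{2}\phi + \bigl(\Sigma_t - \Sigma_s\bigr)\phi = Q ,
```

    i.e. the strongly space-dependent term Σ_s φ is absorbed into the removal term on the left, and only the smoother external/fission source Q remains to be flattened over the node.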

  17. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...
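
    The FORM building block that the model correction factor method leans on can be illustrated with the simplest possible limit state, g = R − S with independent normal resistance and load, for which the Hasofer-Lind reliability index and failure probability are available in closed form (a generic textbook example, unrelated to the random-field integrals treated in the paper):

```python
# First-order reliability for the elementary limit state g = R - S with
# independent normal R (resistance) and S (load). Numbers are illustrative.
from scipy.stats import norm

mu_R, sigma_R = 10.0, 1.5   # resistance mean / std
mu_S, sigma_S = 6.0, 2.0    # load mean / std

beta = (mu_R - mu_S) / (sigma_R**2 + sigma_S**2) ** 0.5   # reliability index
p_f = norm.cdf(-beta)                                     # failure probability

print(f"beta = {beta:.3f}, P_f = {p_f:.3e}")
```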

  18. The application of analytical methods to the study of Pareto - optimal control systems

    Directory of Open Access Journals (Sweden)

    I. K. Romanova

    2014-01-01

    … for Pareto problems, in the case of tangency, a system of algebraic equations is formed whose solution yields the equation of the Pareto line in the parameter space. Mapping into the criteria space then provides the required Pareto frontier; in the absence of tangency, the monotonicity property is analyzed. Analysis of the derivatives makes it possible to define two boundaries in the parameter space (the abscissa and ordinate axes, or lines parallel to them). The obtained analytical results are applied to the problem of parametric synthesis of control systems. Within the given structure of a double-circuit system, a system of differential equations is used instead of the traditional structural schemes. A formula is given that shows the change of the dynamic coefficients of the uncontrolled system after the correction is applied. The choice of direct quality criteria, calculated from the transient response, makes the decision problem most transparent and effective. Rise time and overshoot are selected as the quality criteria, and the gains of the angular velocity and linear acceleration sensors are used as the parameters. Regions of stability and of oscillatory behavior are isolated in the parameter space. The author's formulas for the gradients of the direct quality criteria, rise time and overshoot, are given; the antigradient fields and the level lines of the criteria are constructed, and the contra-monotonicity of the criteria with respect to the specified parameters is analyzed. In accordance with the stated requirements, the trade-off lines in the parameter space are constructed and mapped as Pareto lines in the criteria space. The use of traditional parameter-space scanning confirmed the reliability of the results obtained analytically. The calculation results for the synthesis of double-circuit motion control systems of flying vehicles can be used to select possible directions for the improvement of conflicting criteria. The proposed method is able to replace traditional methods of synthesis of the systems under consideration …
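
    As a self-contained illustration of mapping a parameter sweep into a (rise time, overshoot) trade-off of the kind discussed above, the sketch below uses the closed-form step-response formulas of a standard second-order system; this is a generic textbook example, not the double-circuit flight-control model of the article.

```python
# Rise time vs. overshoot trade-off for a standard second-order system
# G(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2), swept over the damping ratio zeta.
# Illustrative example only (closed-form underdamped step-response formulas).
import math

wn = 2.0  # natural frequency [rad/s], arbitrary illustrative value

def rise_time(zeta, wn):
    """0-100% rise time of the underdamped step response."""
    wd = wn * math.sqrt(1.0 - zeta**2)
    return (math.pi - math.acos(zeta)) / wd

def overshoot(zeta):
    """Peak overshoot as a fraction of the final value."""
    return math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta**2))

print(" zeta   t_rise [s]   overshoot [%]")
for zeta in [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]:
    print(f" {zeta:.1f}    {rise_time(zeta, wn):8.3f}   {100*overshoot(zeta):10.1f}")
# Decreasing overshoot (larger zeta) increases rise time and vice versa,
# so the swept points trace a Pareto front in the (t_rise, overshoot) plane.
```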

  19. A dynamic discretization method for reliability inference in Dynamic Bayesian Networks

    International Nuclear Information System (INIS)

    Zhu, Jiandao; Collette, Matthew

    2015-01-01

    The material and modeling parameters that drive structural reliability analysis for marine structures are subject to a significant uncertainty. This is especially true when time-dependent degradation mechanisms such as structural fatigue cracking are considered. Through inspection and monitoring, information such as crack location and size can be obtained to improve these parameters and the corresponding reliability estimates. Dynamic Bayesian Networks (DBNs) are a powerful and flexible tool to model dynamic system behavior and update reliability and uncertainty analysis with life cycle data for problems such as fatigue cracking. However, a central challenge in using DBNs is the need to discretize certain types of continuous random variables to perform network inference while still accurately tracking low-probability failure events. Most existing discretization methods focus on getting the overall shape of the distribution correct, with less emphasis on the tail region. Therefore, a novel scheme is presented specifically to estimate the likelihood of low-probability failure events. The scheme is an iterative algorithm which dynamically partitions the discretization intervals at each iteration. Through applications to two stochastic crack-growth example problems, the algorithm is shown to be robust and accurate. Comparisons are presented between the proposed approach and existing methods for the discretization problem. - Highlights: • A dynamic discretization method is developed for low-probability events in DBNs. • The method is compared to existing approaches on two crack growth problems. • The method is shown to improve on existing methods for low-probability events

  20. Heat Conduction Analysis Using Semi Analytical Finite Element Method

    International Nuclear Information System (INIS)

    Wargadipura, A. H. S.

    1997-01-01

    Heat conduction problems are very often found in science and engineering. It is of crucial importance to obtain quantitative descriptions of this important physical phenomenon. This paper discusses the development and application of a numerical formulation and computation that can be used to analyze heat conduction problems. The mathematical equation governing the physical behaviour of heat conduction is a second-order partial differential equation. The numerical resolution used in this paper is performed using the finite element method combined with Fourier series, which is known as the semi-analytical finite element method. The numerical solution results in simultaneous algebraic equations, which are solved using Gaussian elimination. The computer implementation is carried out in the FORTRAN language. In the final part of the paper, a heat conduction problem in a rectangular plate domain with isothermal boundary conditions on its edges is solved to show the application of the computer program developed, and a comparison with the analytical solution is discussed to assess the accuracy of the numerical solution obtained.
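
    As a minimal sketch of the finite-element-plus-elimination workflow described above (one-dimensional steady conduction with fixed end temperatures, assembled with linear elements and solved by direct elimination; a simplified stand-in for the paper's 2D semi-analytical FORTRAN code):

```python
# 1D steady heat conduction -k d2T/dx2 = q on [0, L] with T(0)=T0, T(L)=TL,
# discretized with linear finite elements and solved by Gaussian elimination.
import numpy as np

k, q, L = 1.0, 5.0, 1.0          # conductivity, uniform heat source, length
T0, TL, n_el = 0.0, 0.0, 10      # boundary temperatures, number of elements
n_nodes, h = n_el + 1, L / n_el

K = np.zeros((n_nodes, n_nodes))  # global conductance (stiffness) matrix
F = np.zeros(n_nodes)             # global load vector
for e in range(n_el):             # assemble element contributions
    i, j = e, e + 1
    K[np.ix_([i, j], [i, j])] += (k / h) * np.array([[1, -1], [-1, 1]])
    F[[i, j]] += q * h / 2.0

# Impose Dirichlet boundary conditions by row replacement.
for node, value in ((0, T0), (n_nodes - 1, TL)):
    K[node, :] = 0.0
    K[node, node] = 1.0
    F[node] = value

T = np.linalg.solve(K, F)         # direct (elimination-type) solve
x = np.linspace(0.0, L, n_nodes)
T_exact = q / (2 * k) * x * (L - x) + T0 + (TL - T0) * x / L
print("max nodal error vs. analytical solution:", np.abs(T - T_exact).max())
```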

  1. Multiplier ideal sheaves and analytic methods in algebraic geometry

    International Nuclear Information System (INIS)

    Demailly, J.-P.

    2001-01-01

    Our main purpose here is to describe a few analytic tools which are useful to study questions such as linear series and vanishing theorems for algebraic vector bundles. One of the early successes of analytic methods in this context is Kodaira's use of the Bochner technique in relation with the theory of harmonic forms, during the decade 1950-60. The idea is to represent cohomology classes by harmonic forms and to prove vanishing theorems by means of suitable a priori curvature estimates. We pursue the study of L2 estimates, in relation with the Nullstellensatz and with the extension problem. We show how subadditivity can be used to derive an approximation theorem for (almost) plurisubharmonic functions: any such function can be approximated by a sequence of (almost) plurisubharmonic functions which are smooth outside an analytic set, and which define the same multiplier ideal sheaves. From this, we derive a generalized version of the hard Lefschetz theorem for cohomology with values in a pseudo-effective line bundle; namely, the Lefschetz map is surjective when the cohomology groups are twisted by the relevant multiplier ideal sheaves. These notes are essentially written with the idea of serving as an analytic toolbox for algebraic geometers. Although efficient algebraic techniques exist, our feeling is that the analytic techniques are very flexible and offer a large variety of guidelines for more algebraic questions (including applications to number theory which are not discussed here). We made a special effort to use as few prerequisites as possible and to be as self-contained as possible; hence the rather long preliminary sections dealing with basic facts of complex differential geometry.

  2. Multiplier ideal sheaves and analytic methods in algebraic geometry

    Energy Technology Data Exchange (ETDEWEB)

    Demailly, J -P [Universite de Grenoble I, Institut Fourier, Saint-Martin d' Heres (France)

    2001-12-15

    Our main purpose here is to describe a few analytic tools which are useful to study questions such as linear series and vanishing theorems for algebraic vector bundles. One of the early successes of analytic methods in this context is Kodaira's use of the Bochner technique in relation with the theory of harmonic forms, during the decade 1950-60. The idea is to represent cohomology classes by harmonic forms and to prove vanishing theorems by means of suitable a priori curvature estimates. We pursue the study of L2 estimates, in relation with the Nullstellensatz and with the extension problem. We show how subadditivity can be used to derive an approximation theorem for (almost) plurisubharmonic functions: any such function can be approximated by a sequence of (almost) plurisubharmonic functions which are smooth outside an analytic set, and which define the same multiplier ideal sheaves. From this, we derive a generalized version of the hard Lefschetz theorem for cohomology with values in a pseudo-effective line bundle; namely, the Lefschetz map is surjective when the cohomology groups are twisted by the relevant multiplier ideal sheaves. These notes are essentially written with the idea of serving as an analytic toolbox for algebraic geometers. Although efficient algebraic techniques exist, our feeling is that the analytic techniques are very flexible and offer a large variety of guidelines for more algebraic questions (including applications to number theory which are not discussed here). We made a special effort to use as few prerequisites as possible and to be as self-contained as possible; hence the rather long preliminary sections dealing with basic facts of complex differential geometry.

  3. Status of photonuclear method of analysis among other nuclear analytical methods and main fields of its application

    International Nuclear Information System (INIS)

    Burmistenko, Yu.N.

    1986-01-01

    Technical, organizational and economic aspects of the application of photonuclear methods for the analysis of substance composition are considered. Regarding the technical aspect, the most important factors are the nuclear-physical characteristics of the elements to be determined and of the elements composing the sample matrix. Regarding the organizational aspect, the governing factor in a number of cases is the availability of an irradiation facility in the close vicinity of the analytical laboratory. By studying the technical and organizational aspects while choosing the proper method, one can obtain the main input data needed to perform feasibility studies of a nuclear analytical complex with a particular activation source. The economic aspect is then decisive for the final choice of the method.

  4. A Table Lookup Method for Exact Analytical Solutions of Nonlinear Fractional Partial Differential Equations

    Directory of Open Access Journals (Sweden)

    Ji Juan-Juan

    2017-01-01

    Full Text Available A table lookup method for solving nonlinear fractional partial differential equations (fPDEs) is proposed in this paper. Looking up the corresponding tables, we can quickly obtain the exact analytical solutions of fPDEs by using this method. To illustrate the validity of the method, we apply it to construct the exact analytical solutions of four nonlinear fPDEs, namely, the time fractional simplified MCH equation, the space-time fractional combined KdV-mKdV equation, the (2+1)-dimensional time fractional Zoomeron equation, and the space-time fractional ZKBBM equation. As a result, many new types of exact analytical solutions are obtained, including triangular periodic solution, hyperbolic function solution, singular solution, multiple solitary wave solution, and Jacobi elliptic function solution.

  5. An analytically based numerical method for computing view factors in real urban environments

    Science.gov (United States)

    Lee, Doo-Il; Woo, Ju-Wan; Lee, Sang-Hyun

    2018-01-01

    A view factor is an important morphological parameter used in parameterizing the in-canyon radiative energy exchange process as well as in characterizing local climate over urban environments. For realistic representation of the in-canyon radiative processes, a complete set of view factors at the horizontal and vertical surfaces of urban facets is required. Various analytical and numerical methods have been suggested to determine the view factors for urban environments, but most of the methods provide only the sky-view factor at the ground level of a specific location or assume simplified morphology of complex urban environments. In this study, a numerical method that can determine the sky-view factors (ψ_ga and ψ_wa) and wall-view factors (ψ_gw and ψ_ww) at the horizontal and vertical surfaces is presented for application to real urban morphology; these are derived from an analytical formulation of the view factor between two blackbody surfaces of arbitrary geometry. The established numerical method is validated against the analytical sky-view factor estimation for ideal street canyon geometries, showing consolidated confidence in its accuracy with errors of less than 0.2%. Using a three-dimensional building database, the numerical method is also demonstrated to be applicable in determining the sky-view factors at the horizontal (roofs and roads) and vertical (walls) surfaces in real urban environments. The results suggest that the analytically based numerical method can be used for the radiative process parameterization of urban numerical models as well as for the characterization of local urban climate.
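
    For the "ideal street canyon" validation case mentioned above, closed-form view factors exist and make a handy sanity check; the sketch below evaluates the standard crossed-strings result for an infinitely long symmetric canyon of aspect ratio H/W (a classical 2D formula used here purely as a reference, not the paper's 3D numerical method).

```python
# Analytical view factors for an infinitely long symmetric street canyon
# of height H and width W (classical crossed-strings result).
import math

def canyon_view_factors(H, W):
    """Return (sky view factor of the floor, view factor floor -> one wall)."""
    ar = H / W
    psi_sky = math.sqrt(1.0 + ar**2) - ar          # floor -> sky opening
    psi_wall = 0.5 * (1.0 - psi_sky)               # symmetry: two identical walls
    return psi_sky, psi_wall

for ar in (0.5, 1.0, 2.0):
    sky, wall = canyon_view_factors(ar, 1.0)
    # The three view factors from the floor must sum to one (closure check).
    print(f"H/W={ar:3.1f}: psi_ga={sky:.3f}, psi_gw={wall:.3f}, sum={sky + 2*wall:.3f}")
```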

  6. Reliability and risk functions for structural components taking into account inspection results

    International Nuclear Information System (INIS)

    Rackwitz, R.; Schall, G.

    1989-01-01

    The method of outcrossings has been shown to be efficient when calculating the failure probability of metallic structural components under ergodic Gaussian loading. Using the Paris/Erdogan crack growth law, it is possible to develop a semi-analytical calculation model for both the reliability and the risk function. For numerical studies, an approximate method of asymptotic nature is proposed. The same methodology also makes it possible to incorporate inspection observations. (orig.)
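
    For reference, the Paris/Erdogan crack growth law invoked above relates the crack growth per load cycle to the stress intensity factor range (standard notation; the geometry factor Y and the material constants C and m are whatever the specific component and data dictate):

```latex
\frac{da}{dN} = C\,\bigl(\Delta K\bigr)^{m},
\qquad \Delta K = Y(a)\,\Delta\sigma\,\sqrt{\pi a},
```

    so integrating from the initial crack size a₀ to a critical size a_c gives the random number of cycles to failure that enters the reliability and risk functions.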

  7. An analytical study on the thermal stress of mass concrete

    International Nuclear Information System (INIS)

    Yoshida, H.; Sawada, T.; Yamazaki, M.; Miyashita, T.; Morikawa, H.; Hayami, Y.; Shibata, K.

    1983-01-01

    The thermal stress in mass concrete occurs as a result of the heat of hydration of the cement. Sometimes the excessive stresses cause cracking or other tensile failure in the concrete. Therefore it is becoming necessary in the design and construction of mass concrete to predict the thermal stress. The thermal stress analysis of mass concrete requires taking into account the dependence of the elastic modulus on the age of the concrete as well as the stress relaxation due to creep. Studies of those phenomena and the corresponding analytical methods have been reported previously. The paper presents the analytical method and discusses its reliability through the application of the method to an actual structure in which the temperatures and thermal stresses were measured. The method is a time-dependent thermal stress analysis based on the finite element method, which takes account of the creep effect, the aging of the concrete and the effect of temperature variation in time. (orig./HP)

  8. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  9. Analytical method for establishing indentation rolling resistance

    Science.gov (United States)

    Gładysiewicz, Lech; Konieczna, Martyna

    2018-01-01

    Belt conveyors are highly reliable machines able to work in demanding operating conditions. The harsh environment, long transport distances and great mass of transported material are the causes of high energy usage. That is why research in the field of belt conveyor transportation nowadays focuses on reducing power consumption without lowering efficiency. In this paper, previous methods for testing rolling resistance are described, and a new method designed by the authors is presented. The new method of testing rolling resistance is quite simple and inexpensive. Moreover, it allows experimental tests of the impact of different parameters on the value of indentation rolling resistance, such as core design, cover thickness, ambient temperature, idler travel frequency, or load value. Finally, results of tests of the relationship between rolling resistance and idler travel frequency, and between rolling resistance and idler travel speed, are presented.

  10. Analytical quality by design: a tool for regulatory flexibility and robust analytics.

    Science.gov (United States)

    Peraman, Ramalingam; Bhadraya, Kalva; Padmanabha Reddy, Yiragamreddy

    2015-01-01

    Very recently, the Food and Drug Administration (FDA) has approved a few new drug applications (NDAs) with regulatory flexibility for a quality by design (QbD) based analytical approach. The concept of QbD applied to analytical method development is now known as AQbD (analytical quality by design). It allows the analytical method to move within the method operable design region (MODR). Unlike current methods, an analytical method developed using the analytical quality by design (AQbD) approach reduces the number of out-of-trend (OOT) results and out-of-specification (OOS) results due to the robustness of the method within the region. It is a current trend in the pharmaceutical industry to implement analytical quality by design (AQbD) in the method development process as a part of risk management, pharmaceutical development, and the pharmaceutical quality system (ICH Q10). Owing to the lack of explanatory reviews, this paper has been written to discuss different views of analytical scientists about the implementation of AQbD in the pharmaceutical quality system and also to correlate it with product quality by design and pharmaceutical analytical technology (PAT).

  11. Enhancing reproducibility of SALDI MS detection by concentrating analytes within laser spot.

    Science.gov (United States)

    Teng, Fei; Zhu, Qunyan; Wang, Yalei; Du, Juan; Lu, Nan

    2018-03-01

    Surface-assisted laser desorption/ionization time-of-flight mass spectrometry (SALDI TOF MS) has become one of the most important analytical methods due to its lower interference in the low molecular weight range. However, it is still a challenge to obtain good reproducibility in SALDI TOF MS because of the inhomogeneous distribution of analyte molecules induced by the coffee ring effect. We propose a universal and reliable method to eliminate the coffee ring effect by concentrating all the analyte molecules within the laser spot. This method exhibits excellent spot-to-spot and substrate-to-substrate reproducibility, and the relative standard deviations (RSDs) for different concentrations are lower than 12.6%. It also shows good linearity (R2 > 0.98) in the log-log plot over the concentration range of 1 nM to 1 μM, and the limit of detection for R6G is down to 1 fmol. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Simplified Analytical Method for Optimized Initial Shape Analysis of Self-Anchored Suspension Bridges and Its Verification

    Directory of Open Access Journals (Sweden)

    Myung-Rag Jung

    2015-01-01

    Full Text Available A simplified analytical method providing accurate unstrained lengths of all structural elements is proposed to find the optimized initial state of self-anchored suspension bridges under dead loads. For this, equilibrium equations of the main girder and the main cable system are derived and solved by evaluating the self-weights of cable members using unstrained cable lengths and iteratively updating both the horizontal tension component and the vertical profile of the main cable. Furthermore, to demonstrate the validity of the simplified analytical method, the unstrained element length method (ULM is applied to suspension bridge models based on the unstressed lengths of both cable and frame members calculated from the analytical method. Through numerical examples, it is demonstrated that the proposed analytical method can indeed provide an optimized initial solution by showing that both the simplified method and the nonlinear FE procedure lead to practically identical initial configurations with only localized small bending moment distributions.

  13. Reliability and discriminatory power of methods for dental plaque quantification

    Directory of Open Access Journals (Sweden)

    Daniela Prócida Raggio

    2010-04-01

    Full Text Available OBJECTIVE: This in situ study evaluated the discriminatory power and reliability of methods of dental plaque quantification and the relationship between visual indices (VI) and fluorescence camera (FC) to detect plaque. MATERIAL AND METHODS: Six volunteers used palatal appliances with six bovine enamel blocks presenting different stages of plaque accumulation. The presence of plaque with and without disclosing was assessed using VI. Images were obtained with the FC and a digital camera in both conditions. The area covered by plaque was assessed. Examinations were done by two independent examiners. Data were analyzed by Kruskal-Wallis and Kappa tests to compare different conditions of samples and to assess the inter-examiner reproducibility. RESULTS: Some methods presented adequate reproducibility. The Turesky index and the assessment of area covered by disclosed plaque in the FC images presented the highest discriminatory powers. CONCLUSION: The Turesky index and images with FC with disclosing present good reliability and discriminatory power in quantifying dental plaque.

  14. Application of reliability analysis methods to the comparison of two safety circuits

    International Nuclear Information System (INIS)

    Signoret, J.-P.

    1975-01-01

    Two circuits of different design, intended to perform the "Low Pressure Safety Injection" function in PWR reactors, are analyzed using reliability methods. The reliability analysis of these circuits allows the fault trees to be established and the failure probability to be derived. The dependence of these results on testing and maintenance, as well as the critical paths, is emphasized. The large number of results obtained may allow a well-informed choice taking account of the reliability required for this type of circuit.

  15. Reliability of Lyapunov characteristic exponents computed by the two-particle method

    Science.gov (United States)

    Mei, Lijie; Huang, Li

    2018-03-01

    For highly complex problems, such as the post-Newtonian formulation of compact binaries, the two-particle method may be a better, or even the only, choice to compute the Lyapunov characteristic exponent (LCE). This method avoids the complex calculations of variational equations required by the variational method. However, the two-particle method sometimes provides spurious estimates of LCEs. In this paper, we first analyze the equivalence in the definition of the LCE between the variational and two-particle methods for Hamiltonian systems. Then, we develop a criterion to determine the reliability of LCEs computed by the two-particle method by considering the magnitude of the initial tangent (or separation) vector ξ0 (or δ0), renormalization time interval τ, machine precision ε, and global truncation error ɛT. The reliable Lyapunov characteristic indicators estimated by the two-particle method form a V-shaped region, which is restricted by δ0, ε, and ɛT. Finally, the numerical experiments with the Hénon-Heiles system, the spinning compact binaries, and the post-Newtonian circular restricted three-body problem strongly support the theoretical results.
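
    The two-particle recipe itself is simple enough to sketch: integrate a reference and a shadow trajectory, renormalize their separation back to δ0 every τ, and average the logarithmic growth. The code below does this for the Hénon-Heiles system mentioned in the abstract; the initial condition, δ0 and τ are arbitrary illustrative choices, and no attempt is made here to apply the paper's reliability criterion.

```python
# Two-particle estimate of the maximal Lyapunov characteristic exponent
# for the Henon-Heiles system H = (px^2+py^2)/2 + (x^2+y^2)/2 + x^2*y - y^3/3.
# Initial condition, delta0 and tau are illustrative choices only.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    x, y, px, py = s
    return [px, py, -x - 2.0 * x * y, -y - x**2 + y**2]

def mlce_two_particle(state0, delta0=1e-8, tau=1.0, n_renorm=1000):
    ref = np.array(state0, dtype=float)
    shadow = ref + np.array([delta0, 0.0, 0.0, 0.0])
    log_sum = 0.0
    for _ in range(n_renorm):
        ref = solve_ivp(rhs, (0.0, tau), ref, rtol=1e-10, atol=1e-12).y[:, -1]
        shadow = solve_ivp(rhs, (0.0, tau), shadow, rtol=1e-10, atol=1e-12).y[:, -1]
        d = np.linalg.norm(shadow - ref)
        log_sum += np.log(d / delta0)
        shadow = ref + (shadow - ref) * (delta0 / d)   # renormalize the separation
    return log_sum / (n_renorm * tau)

# A moderately energetic orbit (E ~ 0.12); chaotic behaviour is not guaranteed.
print("estimated mLCE:", mlce_two_particle([0.0, -0.1, 0.48, 0.0]))
```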

  16. Characterization of Some Real Mixed Plastics from WEEE: A Focus on Chlorine and Bromine Determination by Different Analytical Methods

    Directory of Open Access Journals (Sweden)

    Beatrice Beccagutti

    2016-10-01

    Full Text Available Bromine and chlorine are almost ubiquitous in waste of electrical and electronic equipment (WEEE), and the knowledge of their content in the plastic fraction is an essential step for proper end-of-life management. The aim of this study is to compare the following analytical methods: energy dispersive X-ray fluorescence spectroscopy (ED-XRF), ion chromatography (IC), ion-selective electrodes (ISEs), and elemental analysis for the quantitative determination of chlorine and bromine in four real samples taken from different WEEE treatment plants, identifying the best analytical technique for waste management workers. Home-made plastic standard materials with known concentrations of chlorine or bromine have been used for calibration of ED-XRF and to test the techniques before the sample analysis. Results showed that IC and ISEs, based upon dissolution of the products of the sample combustion, have not always achieved a quantitative absorption of the analytes in the basic solutions, and that bromine could be underestimated since several oxidation states occur after combustion. Elemental analysis designed for chlorine determination is subject to strong interference from bromine and required frequent regeneration and recalibration of the measurement cell. The most reliable method seemed to be the non-destructive ED-XRF. Calibration with home-made standards, having a plastic matrix similar to that of the samples, enabled us to carry out quantitative determinations, which proved to be satisfactorily accurate and precise. In all the analyzed samples a total concentration of chlorine and/or bromine between 0.6 and 4 w/w% was detected, compromising the feasibility of mechanical recycling and suggesting the exploration of alternative routes for managing these plastic wastes.

  17. Dependent systems reliability estimation by structural reliability approach

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2014-01-01

    Estimation of system reliability by classical system reliability methods generally assumes that the components are statistically independent, thus limiting its applicability in many practical situations. A method is proposed for estimation of the system reliability with dependent components, where...... the leading failure mechanism(s) is described by physics of failure model(s). The proposed method is based on structural reliability techniques and accounts for both statistical and failure effect correlations. It is assumed that failure of any component is due to increasing damage (fatigue phenomena...... identification. Application of the proposed method can be found in many real world systems....

  18. Analytical method for estimating the thermal expansion coefficient of metals at high temperature

    International Nuclear Information System (INIS)

    Takamoto, S; Izumi, S; Nakata, T; Sakai, S; Oinuma, S; Nakatani, Y

    2015-01-01

    In this paper, we propose an analytical method for estimating the thermal expansion coefficient (TEC) of metals at high temperatures. Although the conventional method based on the quasiharmonic approximation (QHA) shows good results at low temperatures, anharmonic effects caused by large-amplitude thermal vibrations reduce its accuracy at high temperatures. Molecular dynamics (MD) naturally includes the anharmonic effect. However, since the computational cost of MD is relatively high, in order to make an interatomic potential capable of reproducing the TEC, an analytical method is essential. In our method, an analytical formulation of the radial distribution function (RDF) at finite temperature enables the estimation of the TEC. Each peak of the RDF is approximated by a Gaussian distribution. The average and variance of the Gaussian distribution are formulated by decomposing the fluctuation of the interatomic distance into independent elastic waves. We incorporated two significant anharmonic effects into the method. One is the increase in the averaged interatomic distance caused by large-amplitude vibration. The second is the variation in the frequency of elastic waves. As a result, the TECs of fcc and bcc crystals estimated by our method show good agreement with those of MD. Our method enables us to make an interatomic potential that reproduces the TEC at high temperature. We developed the GEAM potential for nickel. The TEC of the fitted potential showed good agreement with experimental data from room temperature to 1000 K. As compared with the original potential, it was found that the third derivative of the wide-range curve was modified, while the zeroth, first and second derivatives were unchanged. This result supports the conventional theory of solid state physics. We believe our analytical method and developed interatomic potential will contribute to future high-temperature material development. (paper)
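
    For context, the conventional quasiharmonic baseline the authors improve upon is usually summarized by the Grüneisen relation for the linear thermal expansion coefficient, which is what loses accuracy once large-amplitude anharmonic vibrations dominate (a standard solid-state expression, not the paper's RDF-based formula):

```latex
\alpha_L(T) \;=\; \frac{\gamma\, C_V(T)}{3\, B_T\, V},
```

    with γ the Grüneisen parameter, C_V the heat capacity of the volume V, and B_T the isothermal bulk modulus; the RDF-based method instead builds the temperature dependence of the mean interatomic distance directly, so the anharmonic shift of the peak positions is captured explicitly.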

  19. A fast approximation method for reliability analysis of cold-standby systems

    International Nuclear Information System (INIS)

    Wang, Chaonan; Xing, Liudong; Amari, Suprasad V.

    2012-01-01

    Analyzing reliability of large cold-standby systems has been a complicated and time-consuming task, especially for systems with components having non-exponential time-to-failure distributions. In this paper, an approximation model, which is based on the central limit theorem, is presented for the reliability analysis of binary cold-standby systems. The proposed model can estimate the reliability of large cold-standby systems with binary-state components having arbitrary time-to-failure distributions in an efficient and easy way. The accuracy and efficiency of the proposed method are illustrated using several different types of distributions for both 1-out-of-n and k-out-of-n cold-standby systems.
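
    The central-limit-theorem idea can be sketched for the simplest case, a 1-out-of-n cold-standby system with perfect switching: the system lifetime is the sum of the n component lifetimes, so its reliability is approximately a normal survival function. The snippet below (a sketch, not the paper's full model) compares this approximation with the exact Erlang result available for exponential components.

      import numpy as np
      from scipy import stats

      # 1-out-of-n cold standby, perfect switching: system life = sum of n lives.
      # CLT approximation: R(t) ~ 1 - Phi((t - n*mu) / (sqrt(n)*sigma)).
      n, rate = 20, 1.0 / 100.0            # 20 units, mean life 100 h each
      mu, sigma = 1.0 / rate, 1.0 / rate   # exponential mean and std

      t = np.linspace(1000.0, 3000.0, 5)
      R_clt = 1.0 - stats.norm.cdf(t, loc=n * mu, scale=np.sqrt(n) * sigma)
      R_exact = stats.gamma.sf(t, a=n, scale=1.0 / rate)   # Erlang survival

      for ti, rc, re in zip(t, R_clt, R_exact):
          print(f"t = {ti:6.0f} h   R_CLT = {rc:.4f}   R_exact = {re:.4f}")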

  20. A Novel Reliability Enhanced Handoff Method in Future Wireless Heterogeneous Networks

    Directory of Open Access Journals (Sweden)

    Wang YuPeng

    2016-01-01

    Full Text Available As demand increases, future networks will follow the trends of network variety and service flexibility, which require heterogeneous network deployment and reliable communication methods. In practice, most communication failures happen because of bad radio link quality; high-speed users in particular suffer from radio link failure, which causes communication interrupts and the need for radio link recovery. To make communication more reliable, especially for high-mobility users, we propose a novel handoff mechanism that reduces the occurrence of service interruptions. Computer simulations show that service reliability is greatly improved.

  1. A Bayesian reliability evaluation method with integrated accelerated degradation testing and field information

    International Nuclear Information System (INIS)

    Wang, Lizhi; Pan, Rong; Li, Xiaoyang; Jiang, Tongmin

    2013-01-01

    Accelerated degradation testing (ADT) is a common approach in reliability prediction, especially for products with high reliability. However, the laboratory conditions of ADT often differ from the field conditions; thus, to predict field failure, one needs to calibrate the prediction made using ADT data. In this paper a Bayesian evaluation method is proposed to integrate ADT data from the laboratory with failure data from the field. Calibration factors are introduced to capture the difference between the lab and field conditions so as to predict a product's actual field reliability more accurately. The information fusion and statistical inference are carried out through a Bayesian approach and Markov chain Monte Carlo methods. The proposed method is demonstrated by two examples and by a sensitivity analysis of the prior distribution assumption.
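
    A minimal sketch of the calibration idea, under the simplifying assumption of log-normal lifetimes: the lab ADT yields a predicted mean log-life, a calibration factor shifts it towards the field condition, and a random-walk Metropolis sampler (standing in for the paper's MCMC machinery) draws the posterior of that factor. All numbers are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical inputs: ADT-based prediction of mean log-lifetime (lab)
      # and a few field failure times.  The calibration factor delta shifts
      # the lab prediction:  log(t_field) ~ N(mu_lab + delta, s^2).
      mu_lab, s = np.log(5000.0), 0.30
      field_times = np.array([3200.0, 4100.0, 2900.0, 3800.0])
      y = np.log(field_times)

      def log_post(delta):
          prior = -0.5 * (delta / 1.0) ** 2                      # delta ~ N(0, 1)
          like = -0.5 * np.sum(((y - mu_lab - delta) / s) ** 2)  # Gaussian likelihood
          return prior + like

      # Random-walk Metropolis sampling of the calibration factor.
      delta, samples = 0.0, []
      for _ in range(20000):
          prop = delta + rng.normal(scale=0.1)
          if np.log(rng.uniform()) < log_post(prop) - log_post(delta):
              delta = prop
          samples.append(delta)
      post = np.array(samples[5000:])
      print(f"posterior delta: mean = {post.mean():.3f}, sd = {post.std():.3f}")
      print(f"calibrated field median life ~ {np.exp(mu_lab + post.mean()):.0f} h")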

  2. How to simplify the analytics for input-output accountability measurements in a reprocessing plant

    International Nuclear Information System (INIS)

    Ottmar, H.; Eberle, H.; Matussek, P.; Michel-Piper, I.

    1986-02-01

    An analytical approach to high-performance uranium and plutonium accountancy measurements in reprocessing input and output solutions is presented, which offers greater operational simplicity than the conventionally applied chemical methods. The proposed alternative is based on energy-dispersive absorption-edge and fluorescence X-ray spectrometry, using the proven and reliable K-edge densitometry technique as a reference method. Two X-ray densitometers developed for accurate and reliable uranium and plutonium analysis in both the feed and product solutions are described. Practical experience and results from their performance evaluation on actual process solutions from a reprocessing plant are presented and discussed. (orig.)

  3. Reliability analysis of reactor systems by applying probability method; Analiza pouzdanosti reaktorskih sistema primenom metoda verovatnoce

    Energy Technology Data Exchange (ETDEWEB)

    Milivojevic, S [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)

    1974-12-15

    The probability method chosen for analysing reactor system reliability is considered realistic since it is based on verified experimental data; it is, in essence, a statistical method. The probability method developed here takes into account the probability distributions of the permitted levels of the relevant parameters and their particular influence on the reliability of the system as a whole. The proposed method is rather general and was applied to the problem of thermal safety analysis of a reactor system. This analysis makes it possible to evaluate the basic properties of the system under different operating conditions; expressed in the form of probabilities, the results show the reliability of the system as a whole as well as the reliability of each component.

  4. [Progress in sample preparation and analytical methods for trace polar small molecules in complex samples].

    Science.gov (United States)

    Zhang, Qianchun; Luo, Xialin; Li, Gongke; Xiao, Xiaohua

    2015-09-01

    Small polar molecules such as nucleosides, amines and amino acids are important analytes in the biological, food, environmental and other fields. It is necessary to develop efficient sample preparation and sensitive analytical methods for the rapid analysis of these polar small molecules in complex matrices. Some typical sample-preparation materials, including silica, polymers, carbon and boric acid, are introduced in this paper. Meanwhile, the applications and developments of analytical methods for polar small molecules, such as reversed-phase liquid chromatography and hydrophilic interaction chromatography, are also reviewed.

  5. Review of Reliability-Based Design Optimization Approach and Its Integration with Bayesian Method

    Science.gov (United States)

    Zhang, Xiangnan

    2018-03-01

    Many uncertain factors arise in practical engineering, such as the external load environment, material properties, geometrical shape, initial conditions and boundary conditions. Reliability methods measure the structural safety condition and determine the optimal combination of design parameters based on probabilistic theory. Reliability-based design optimization (RBDO), which combines reliability theory and optimization, is the most commonly used approach to minimize the structural cost or other performance measures under uncertain variables. However, it cannot handle various kinds of incomplete information. The Bayesian approach is utilized to incorporate this kind of incomplete information in the uncertainty quantification. In this paper, the RBDO approach and its integration with the Bayesian method are introduced.
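
    To make the RBDO idea concrete, the following sketch (illustrative only, not the paper's formulation) sizes a single design variable by searching for the smallest bar radius whose Monte Carlo failure probability stays below a target value, i.e. a cost minimization under a reliability constraint.

      import numpy as np

      rng = np.random.default_rng(1)

      # Minimise cross-section area of a bar subject to P(stress > yield) <= p_target,
      # with random axial load F and random yield strength S (values are assumptions).
      N, p_target = 200_000, 1e-3
      F = rng.normal(50e3, 5e3, N)        # axial load [N]
      S = rng.normal(250e6, 20e6, N)      # yield strength [Pa]

      def failure_prob(r):
          stress = F / (np.pi * r**2)
          return np.mean(stress > S)

      best = None
      for r in np.linspace(0.008, 0.020, 121):   # candidate radii [m], ascending
          if failure_prob(r) <= p_target:
              best = r                            # smallest feasible radius wins
              break
      if best is None:
          print("no feasible radius in the search grid")
      else:
          print(f"smallest feasible radius ~ {best*1e3:.1f} mm, Pf = {failure_prob(best):.1e}")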

  6. A GPU code for analytic continuation through a sampling method

    Directory of Open Access Journals (Sweden)

    Johan Nordström

    2016-01-01

    Full Text Available We present a code for performing analytic continuation of fermionic Green's functions and self-energies as well as bosonic susceptibilities on a graphics processing unit (GPU). The code is based on the sampling method introduced by Mishchenko et al. (2000) and is written for the widely used CUDA platform from NVIDIA. Detailed scaling tests are presented for two different GPUs in order to highlight the advantages of this code with respect to standard CPU computations. Finally, as an example of possible applications, we provide the analytic continuation of model Gaussian functions, as well as more realistic test cases from many-body physics.

  7. Application of an analytical method for the field calculation in superconducting magnets

    International Nuclear Information System (INIS)

    Martinelli, G.; Morini, A.

    1983-01-01

    Superconducting magnets are taking on ever-growing importance due to their increasing prospects of utilization in electrical machines, nuclear fusion, MHD conversion and high-energy physics. These magnets are generally composed of cylindrical or saddle coils, with a ferromagnetic shield generally situated outside them. This paper uses an analytical method for calculating the magnetic field at every point in a superconducting magnet composed of cylindrical or saddle coils. The method takes into account the real lengths and finite thickness of the coils as well as their radial and axial ferromagnetic shields, if present. The values and distribution of the flux density for some large, high-field superconducting magnets composed of cylindrical or saddle coils are also given. The results obtained with the analytical method are compared with those obtained using numerical methods.
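
    As a flavour of such analytical field calculations (the simplest case only, not the authors' saddle-coil formulation with shields), the on-axis flux density of a finite thin-walled solenoid has a closed-form expression that is easy to evaluate:

      import numpy as np

      MU0 = 4e-7 * np.pi  # vacuum permeability [T m / A]

      def solenoid_axis_field(z, N_turns, I, L, R):
          """On-axis flux density B_z(z) of a finite thin-walled solenoid of
          length L, radius R, N_turns turns and current I (analytical result)."""
          a = (z + L / 2) / np.hypot(R, z + L / 2)
          b = (z - L / 2) / np.hypot(R, z - L / 2)
          return MU0 * N_turns * I / (2 * L) * (a - b)

      # Hypothetical coil: 2000 turns, 1 kA, 1 m long, 0.3 m radius.
      for zi in np.linspace(-1.0, 1.0, 5):
          Bz = solenoid_axis_field(zi, 2000, 1000.0, 1.0, 0.3)
          print(f"z = {zi:+.2f} m   Bz = {Bz:.3f} T")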

  8. A reliable method for the stability analysis of structures ...

    African Journals Online (AJOL)

    The detection of structural configurations with singular tangent stiffness matrix is essential because they can be unstable. The secondary paths, especially in unstable buckling, can play the most important role in the loss of stability and collapse of the structure. A new method for reliable detection and accurate computation of ...

  9. Flammable gas safety program. Analytical methods development: FY 1994 progress report

    International Nuclear Information System (INIS)

    Campbell, J.A.; Clauss, S.; Grant, K.; Hoopes, V.; Lerner, B.; Lucke, R.; Mong, G.; Rau, J.; Wahl, K.; Steele, R.

    1994-09-01

    This report describes the status of developing analytical methods to account for the organic components in Hanford waste tanks, with particular focus on tanks assigned to the Flammable Gas Watch List. The methods that have been developed are illustrated by their application to samples obtained from Tank 241-SY-101 (Tank 101-SY)

  10. Application of model-based and knowledge-based measuring methods as analytical redundancy

    International Nuclear Information System (INIS)

    Hampel, R.; Kaestner, W.; Chaker, N.; Vandreier, B.

    1997-01-01

    The safe operation of nuclear power plants requires the application of modern and intelligent methods of signal processing for normal operation as well as for the management of accident conditions. Such modern and intelligent methods are model-based and knowledge-based ones, founded on analytical knowledge (mathematical models) as well as experience (fuzzy information). In addition to the existing hardware redundancies, analytical redundancies will be established with the help of these modern methods. These analytical redundancies support the operating staff during decision-making. The design of a hybrid model-based and knowledge-based measuring method will be demonstrated by the example of a fuzzy-supported observer, in which a classical linear observer is combined with a fuzzy-supported adaptation of the model matrices of the observer model. This application is realized for the estimation of non-measurable variables such as steam content and mixture level within pressure vessels containing a water-steam mixture during accidental depressurizations. For this example the existing non-linearities will be classified and the verification of the model will be explained. The advantages of the hybrid method in comparison to classical model-based measuring methods will be demonstrated by the estimation results. The consideration of the parameters which have an important influence on the non-linearities requires the inclusion of high-dimensional fuzzy-logic structures within the model-based measuring methods. Therefore, methods will be presented which allow the conversion of these high-dimensional structures to two-dimensional fuzzy-logic structures. As an efficient solution of this problem, a method based on cascaded fuzzy controllers will be presented. (author). 2 refs, 12 figs, 5 tabs

  11. A functional-analytic method for the study of difference equations

    Directory of Open Access Journals (Sweden)

    Siafarikas Panayiotis D

    2004-01-01

    Full Text Available We will give the generalization of a recently developed functional-analytic method for studying linear and nonlinear, ordinary and partial, difference equations in the … and … spaces, p∈ℕ. The method will be illustrated by use of two examples concerning a nonlinear ordinary difference equation known as the Putnam equation, and a linear partial difference equation of three variables describing the discrete Newton law of cooling in three dimensions.

  12. Analytical method for establishing indentation rolling resistance

    Directory of Open Access Journals (Sweden)

    Gładysiewicz Lech

    2018-01-01

    Full Text Available Belt conveyors are highly reliable machines able to work in demanding operating conditions. Harsh environments, long transport distances and the great mass of transported material cause high energy usage. That is why research in the field of belt conveyor transportation nowadays focuses on reducing power consumption without lowering efficiency. In this paper, previous methods for testing rolling resistance are described and a new method designed by the authors is presented. The new method of testing rolling resistance is quite simple and inexpensive. Moreover, it allows experimental testing of the impact of different parameters on the value of the indentation rolling resistance, such as core design, cover thickness, ambient temperature, idler travel frequency and load. Finally, results on the relationship between rolling resistance and idler travel frequency, and between rolling resistance and idler travel speed, are presented.

  13. An analytical method for calculating stresses and strains of ATF cladding based on thick walled theory

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Hyun; Kim, Hak Sung [Hanyang University, Seoul (Korea, Republic of); Kim, Hyo Chan; Yang, Yong Sik; In, Wang kee [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In this paper, an analytical method based on thick-walled theory has been studied to calculate the stresses and strains of ATF cladding. In order to prescribe the boundary conditions of the analytical method, two algorithms were employed, the subroutines 'Cladf' and 'Couple' of FRACAS. To evaluate the developed method, an equivalent model using the finite element method was established and the stress components of the method were compared with those of the equivalent FE model. One promising ATF concept is the coated cladding, which offers advantages such as a high melting point, high neutron economy and a low tritium permeation rate. To evaluate the mechanical behavior and performance of the coated cladding, a dedicated model is needed to simulate ATF behavior in the reactor. In particular, a model for the simulation of stress and strain in the coated cladding should be developed because the previous model, 'FRACAS', is a one-body model. The FRACAS module employs an analytical method based on thin-walled theory, in which the radial stress is defined as zero; this assumption is not suitable for ATF cladding because the radial stress is not negligible in that case. Recently, a structural model for multi-layered ceramic cylinders based on thick-walled theory was developed. Also, FE-based numerical simulations such as BISON have been developed to evaluate fuel performance. An analytical method that calculates the stress components of ATF cladding was developed in this study. Thick-walled theory was used to derive the equations for calculating stress and strain. To solve these equations, boundary and loading conditions were obtained by the subroutines 'Cladf' and 'Couple' and applied to the analytical method. To evaluate the developed method, an equivalent FE model was established and its results were compared to those of the analytical model. Based on the
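
    The thick-walled (Lamé) stress field referred to above has a well-known closed form for a single-layer cylinder under internal and external pressure; the sketch below evaluates it for an illustrative cladding-like geometry (the numbers are assumptions, not values from the paper).

      import numpy as np

      def lame_stresses(r, r_i, r_o, p_i, p_o=0.0):
          """Radial and hoop stresses in a thick-walled cylinder (Lame solution)
          under internal pressure p_i and external pressure p_o."""
          A = (p_i * r_i**2 - p_o * r_o**2) / (r_o**2 - r_i**2)
          B = (p_i - p_o) * r_i**2 * r_o**2 / (r_o**2 - r_i**2)
          return A - B / r**2, A + B / r**2   # sigma_r, sigma_theta

      # Hypothetical geometry: 4.18 mm inner, 4.75 mm outer radius, 15 MPa inside.
      r = np.linspace(4.18e-3, 4.75e-3, 5)
      sr, st = lame_stresses(r, 4.18e-3, 4.75e-3, 15e6)
      for ri, a, b in zip(r, sr, st):
          print(f"r = {ri*1e3:.3f} mm  sigma_r = {a/1e6:6.2f} MPa  sigma_theta = {b/1e6:6.2f} MPa")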

  14. Reliability analysis based on a novel density estimation method for structures with correlations

    Directory of Open Access Journals (Sweden)

    Baoyu LI

    2017-06-01

    Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, since the failure probability can then be easily obtained by integration over the failure domain. However, efficiently estimating the PDF is still an open problem. The existing fractional-moment-based maximum entropy approach provides a very advanced method for PDF estimation, but its main shortcoming is that it limits the application of the reliability analysis method to structures with independent inputs. In fact, structures with correlated inputs are common in engineering; thus, this paper improves the maximum entropy method and applies the Unscented Transformation (UT) technique to compute the fractional moments of the performance function for structures with correlations, which is a very efficient moment estimation method for models with any type of inputs. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Moreover, the number of function evaluations required by the proposed method in reliability analysis, which is determined by the UT, is very small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.

  15. An overview of reliability methods in mechanical and structural design

    Science.gov (United States)

    Wirsching, P. H.; Ortiz, K.; Lee, S. J.

    1987-01-01

    An evaluation is made of modern methods of fast probability integration and Monte Carlo treatment for the assessment of structural systems' and components' reliability. Fast probability integration methods are noted to be more efficient than Monte Carlo ones. This is judged to be an important consideration when several point probability estimates must be made in order to construct a distribution function. An example illustrating the relative efficiency of the various methods is included.
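
    A minimal illustration of the Monte Carlo treatment for the classic margin g = R − S with normal resistance and load, including the reliability index β = −Φ⁻¹(Pf) and its closed-form check:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)

      # Crude Monte Carlo for g = R - S with normal R and S; the exact index
      # beta = (muR - muS) / sqrt(sR^2 + sS^2) lets us check the estimate.
      muR, sR, muS, sS = 300.0, 30.0, 200.0, 25.0
      N = 1_000_000
      g = rng.normal(muR, sR, N) - rng.normal(muS, sS, N)

      pf = np.mean(g < 0.0)
      beta_mc = -stats.norm.ppf(pf)
      beta_exact = (muR - muS) / np.hypot(sR, sS)
      print(f"Pf(MC) = {pf:.2e}   beta(MC) = {beta_mc:.3f}   beta(exact) = {beta_exact:.3f}")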

  16. Toxicologic evaluation of analytes from Tank 241-C-103

    International Nuclear Information System (INIS)

    Mahlum, D.D.; Young, J.Y.; Weller, R.E.

    1994-11-01

    Westinghouse Hanford Company requested PNL to assemble a toxicology review panel (TRP) to evaluate analytical data compiled by WHC and provide advice concerning potential health effects associated with exposure to tank-vapor constituents. The team's objectives were to (1) review procedures used for sampling vapors from tanks, (2) identify constituents in tank-vapor samples that could be related to symptoms reported by workers, (3) evaluate the toxicological implications of those constituents by comparison to established toxicological databases, (4) provide advice for additional analytical efforts, and (5) support other activities as requested by WHC. The TRP represents a wide range of expertise, including toxicology, industrial hygiene, and occupational medicine. The TRP prepared a list of target analytes that chemists at the Oregon Graduate Institute/Sandia (OGI), Oak Ridge National Laboratory (ORNL), and PNL used to establish validated methods for quantitative analysis of head-space vapors from Tank 241-C-103. This list was used by the analytical laboratories to develop appropriate analytical methods for samples from Tank 241-C-103. Target compounds on the list included acetone, acetonitrile, ammonia, benzene, 1,3-butadiene, butanal, n-butanol, hexane, 2-hexanone, methylene chloride, nitric oxide, nitrogen dioxide, nitrous oxide, dodecane, tridecane, propanenitrile, sulfur oxide, tributyl phosphate, and vinylidene chloride. The TRP considered constituent concentrations, current exposure limits, reliability of data relative to toxicity, consistency of the analytical data, and whether the material was carcinogenic or teratogenic. A final consideration in the analyte selection process was to include representative chemicals for each class of compounds found

  17. Reliability and Validity of the Footprint Assessment Method Using Photoshop CS5 Software.

    Science.gov (United States)

    Gutiérrez-Vilahú, Lourdes; Massó-Ortigosa, Núria; Costa-Tutusaus, Lluís; Guerra-Balic, Myriam

    2015-05-01

    Several sophisticated methods of footprint analysis currently exist. However, it is sometimes useful to apply standard measurement methods of recognized evidence with an easy and quick application. We sought to assess the reliability and validity of a new method of footprint assessment in a healthy population using Photoshop CS5 software (Adobe Systems Inc, San Jose, California). Forty-two footprints, corresponding to 21 healthy individuals (11 men with a mean ± SD age of 20.45 ± 2.16 years and 10 women with a mean ± SD age of 20.00 ± 1.70 years) were analyzed. Footprints were recorded in static bipedal standing position using optical podography and digital photography. Three trials for each participant were performed. The Hernández-Corvo, Chippaux-Smirak, and Staheli indices and the Clarke angle were calculated by manual method and by computerized method using Photoshop CS5 software. Test-retest was used to determine reliability. Validity was obtained by intraclass correlation coefficient (ICC). The reliability test for all of the indices showed high values (ICC, 0.98-0.99). Moreover, the validity test clearly showed no difference between techniques (ICC, 0.99-1). The reliability and validity of a method to measure, assess, and record the podometric indices using Photoshop CS5 software has been demonstrated. This provides a quick and accurate tool useful for the digital recording of morphostatic foot study parameters and their control.
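
    For reference, the test-retest reliability statistic used here, the intraclass correlation coefficient, can be computed directly from the repeated measurements; the sketch below implements the two-way random-effects, absolute-agreement, single-measure form ICC(2,1) of Shrout and Fleiss on invented trial data (not the study's measurements).

      import numpy as np

      def icc_2_1(data):
          """ICC(2,1): two-way random effects, absolute agreement, single measure.
          data has shape (n_subjects, k_trials)."""
          data = np.asarray(data, dtype=float)
          n, k = data.shape
          grand = data.mean()
          row_means = data.mean(axis=1)          # per subject
          col_means = data.mean(axis=0)          # per trial
          msr = k * np.sum((row_means - grand) ** 2) / (n - 1)
          msc = n * np.sum((col_means - grand) ** 2) / (k - 1)
          sse = np.sum((data - row_means[:, None] - col_means[None, :] + grand) ** 2)
          mse = sse / ((n - 1) * (k - 1))
          return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

      # Hypothetical example: a footprint index measured in 3 trials for 6 feet.
      trials = np.array([[0.62, 0.61, 0.63],
                         [0.55, 0.56, 0.55],
                         [0.71, 0.70, 0.72],
                         [0.48, 0.49, 0.47],
                         [0.66, 0.66, 0.65],
                         [0.58, 0.57, 0.58]])
      print(f"ICC(2,1) = {icc_2_1(trials):.3f}")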

  18. Two-dimensional semi-analytic nodal method for multigroup pin power reconstruction

    International Nuclear Information System (INIS)

    Seung Gyou, Baek; Han Gyu, Joo; Un Chul, Lee

    2007-01-01

    A pin power reconstruction method applicable to multigroup problems involving square fuel assemblies is presented. The method is based on a two-dimensional semi-analytic nodal solution which consists of eight exponential terms and 13 polynomial terms. The 13 polynomial terms represent the particular solution obtained under the condition of a 2-dimensional 13 term source expansion. In order to achieve better approximation of the source distribution, the least square fitting method is employed. The 8 exponential terms represent a part of the analytically obtained homogeneous solution and the 8 coefficients are determined by imposing constraints on the 4 surface average currents and 4 corner point fluxes. The surface average currents determined from a transverse-integrated nodal solution are used directly whereas the corner point fluxes are determined during the course of the reconstruction by employing an iterative scheme that would realize the corner point balance condition. The outgoing current based corner point flux determination scheme is newly introduced. The accuracy of the proposed method is demonstrated with the L336C5 benchmark problem. (authors)

  19. Network reliability analysis of complex systems using a non-simulation-based method

    International Nuclear Information System (INIS)

    Kim, Youngsuk; Kang, Won-Hee

    2013-01-01

    Civil infrastructures such as transportation, water supply, sewer, telecommunication, and electrical and gas networks often form highly complex networks, due to their multiple source and distribution nodes, complex topology, and the functional interdependence between network components. To understand the reliability of such complex network systems under catastrophic events such as earthquakes, and to support proper emergency management actions in such situations, efficient and accurate reliability analysis methods are necessary. In this paper, a non-simulation-based network reliability analysis method is developed based on the Recursive Decomposition Algorithm (RDA) for risk assessment of generic networks whose operation is defined by the connections of multiple initial and terminal node pairs. The proposed method has two separate decomposition processes for the two logical functions, intersection and union, and combinations of these processes are used for the decomposition of any general system event with multiple node pairs. The proposed method is illustrated through numerical network examples with a variety of system definitions, and is applied to a benchmark gas transmission pipe network in Memphis, TN, to estimate the seismic performance and functional degradation of the network under a set of earthquake scenarios.
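
    A brute-force reference for two-terminal network reliability (not the Recursive Decomposition Algorithm itself, which avoids full enumeration) is useful for checking any decomposition-based implementation on small graphs; the edge reliabilities below are hypothetical.

      from itertools import product

      # Enumerate all edge states of a small undirected network and sum the
      # probabilities of the states in which source and terminal are connected.
      edges = [(0, 1, 0.9), (1, 3, 0.9), (0, 2, 0.8), (2, 3, 0.8), (1, 2, 0.95)]
      source, terminal = 0, 3

      def connected(up_edges, s, t):
          reach, frontier = {s}, [s]
          while frontier:
              u = frontier.pop()
              for a, b in up_edges:
                  for x, y in ((a, b), (b, a)):
                      if x == u and y not in reach:
                          reach.add(y)
                          frontier.append(y)
          return t in reach

      rel = 0.0
      for states in product([0, 1], repeat=len(edges)):
          prob, up = 1.0, []
          for (a, b, p), s in zip(edges, states):
              prob *= p if s else (1.0 - p)
              if s:
                  up.append((a, b))
          if connected(up, source, terminal):
              rel += prob
      print(f"two-terminal reliability {source} -> {terminal}: {rel:.4f}")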

  20. Solution of the isotopic depletion equation using decomposition method and analytical solution

    Energy Technology Data Exchange (ETDEWEB)

    Prata, Fabiano S.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: fprata@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@lmp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    In this paper an analytical calculation of the isotopic depletion equations is proposed, featuring a chain of the major isotopes found in a typical PWR reactor. Part of this chain allows feedback reactions of the (n,2n) type. The method is based on decoupling the equations describing feedback from the rest of the chain by using the decomposition method, with analytical solutions for the other isotopes present in the chain. The method was implemented in a PWR reactor simulation code that uses the nodal expansion method (NEM) to solve the neutron diffusion equation, which describes the spatial distribution of the neutron flux inside the reactor core. Because the isotopic depletion calculation module is the most computationally intensive part of nuclear reactor core simulation systems, it is justified to look for a method that is both efficient and fast, with the objective of evaluating a larger number of core configurations in a short amount of time. (author)
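
    The "analytical solution" part of such depletion schemes can be illustrated with the Bateman solution of a simple two-member chain without (n,2n) feedback; the transmutation constants below are illustrative only.

      import numpy as np

      # Analytical (Bateman) solution for a two-member chain
      #   N1 --l1--> N2 --l2--> (removed),
      # a minimal illustration of the analytical building block; the (n,2n)
      # feedback handled by the decomposition method is omitted here.
      l1, l2 = 1.0e-2, 2.0e-3          # effective removal constants [1/s]
      N1_0, N2_0 = 1.0e20, 0.0

      def chain(t):
          N1 = N1_0 * np.exp(-l1 * t)
          N2 = (N1_0 * l1 / (l2 - l1)) * (np.exp(-l1 * t) - np.exp(-l2 * t)) \
               + N2_0 * np.exp(-l2 * t)
          return N1, N2

      for t in (0.0, 100.0, 500.0, 1000.0):
          n1, n2 = chain(t)
          print(f"t = {t:6.0f} s   N1 = {n1:.3e}   N2 = {n2:.3e}")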

  1. Solution of the isotopic depletion equation using decomposition method and analytical solution

    International Nuclear Information System (INIS)

    Prata, Fabiano S.; Silva, Fernando C.; Martinez, Aquilino S.

    2011-01-01

    In this paper an analytical calculation of the isotopic depletion equations is proposed, featuring a chain of the major isotopes found in a typical PWR reactor. Part of this chain allows feedback reactions of the (n,2n) type. The method is based on decoupling the equations describing feedback from the rest of the chain by using the decomposition method, with analytical solutions for the other isotopes present in the chain. The method was implemented in a PWR reactor simulation code that uses the nodal expansion method (NEM) to solve the neutron diffusion equation, which describes the spatial distribution of the neutron flux inside the reactor core. Because the isotopic depletion calculation module is the most computationally intensive part of nuclear reactor core simulation systems, it is justified to look for a method that is both efficient and fast, with the objective of evaluating a larger number of core configurations in a short amount of time. (author)

  2. Analytical Method Development for the Determination of Α-Endosulfan and Bifenthrin Pesticide Residues in Tea

    Directory of Open Access Journals (Sweden)

    Dyah Styarini

    2014-03-01

    Full Text Available The development of an analytical method for the determination of α-endosulfan and bifenthrin residues in tea has been carried out. The complex matrix, and in particular its pigments, poses a challenge for the quantification of pesticide residues in tea. In order to obtain an appropriate analysis method for the determination of pesticide residues in tea, a modification was made to the analytical method for the determination of organochlorine multiresidues in non-fat matrices (seasonings and spices) published by the Directorate General of Food Crops, Directorate of Food Plant Protection. The modification was made particularly in the clean-up step, to remove interferences from the tea extract, such as the pigments that usually interfere with measurement by gas chromatography (GC). The results showed that the MDL value for both analytes was 0.5 ng/g, much lower than the MRLs. The recoveries obtained with the method were 78.58% and 90.19% for α-endosulfan and bifenthrin, respectively. The precision of the analysis method for both analytes was good, since the %RSD values were below the Horwitz value of 19.18% at a spiking concentration of 300 ng/g.
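
    The Horwitz criterion quoted above is easy to reproduce: the predicted relative standard deviation is PRSD% = 2^(1 − 0.5·log10 C), with C the analyte concentration expressed as a mass fraction, which gives about 19.2% at 300 ng/g. The observed RSDs in the sketch are hypothetical, used only to show the HorRat ratio.

      import numpy as np

      def horwitz_prsd(conc_mass_fraction):
          """Horwitz predicted relative standard deviation (%) for a given
          analyte concentration expressed as a dimensionless mass fraction."""
          return 2.0 ** (1.0 - 0.5 * np.log10(conc_mass_fraction))

      c = 300e-9                       # 300 ng/g as a mass fraction
      prsd = horwitz_prsd(c)
      print(f"Horwitz PRSD at 300 ng/g: {prsd:.2f} %")   # ~19.2 %, matching 19.18 %

      # HorRat = observed RSD / Horwitz PRSD (values around 0.5-2 are typical).
      for analyte, rsd_obs in (("alpha-endosulfan", 9.5), ("bifenthrin", 7.8)):  # hypothetical RSDs
          print(f"{analyte:16s} HorRat = {rsd_obs / prsd:.2f}")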

  3. Analytical Methods for Mycotoxin Detection in Southeast Asian Nations (ASEAN).

    Science.gov (United States)

    Lim, Chee Wei; Chung, Gerald; Chan, Sheot Harn

    2017-10-03

    Aflatoxins B₁ (AFB₁), B₂ (AFB₂), G₁ and G₂ remain the top mycotoxins routinely analyzed and monitored by Association of Southeast Asian Nations (ASEAN) national laboratories, primarily for food safety regulation in the major food commodities, nuts and spices. LC with fluorescence detection (LC-fluorescence) represents the current mainstream analytical method, with a progressive migration to LC tandem MS (MS/MS) as the primary method over the next half decade. Annual proficiency testing (PT) for mycotoxin testing is conducted by the ASEAN Food Reference Laboratories (AFRLs) as part of capability building in national laboratories, with the scope of PT materials spanning from naturally mycotoxin-contaminated spices and nuts in the early 2010s to contaminated corn flour in 2017 for total aflatoxin assay development. The merits of the mainstream LC-fluorescence method are witnessed by a significant improvement (P < 0.05) in PT z-score passing rates (|z| ≤ 2) from 11.8 to 79.2% for AFB₁, 23.5 to 83.3% for AFB₂, and 23.5 to 79.2% for total aflatoxins over the last 5 years. This paper discusses the journey of ASEAN national laboratories in analytical testing through the AFRLs, and the progressive collective adoption of a multimycotoxin LC-MS/MS method aided by an isotope dilution assay as a future primary method for safer food commodities.
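
    The proficiency-test passing criterion mentioned above is the usual z-score, z = (x − x_assigned)/σ_p with |z| ≤ 2 deemed satisfactory; a minimal sketch with made-up laboratory results:

      # Proficiency-test scoring sketch; all numbers are invented for illustration.
      x_assigned, sigma_p = 8.0, 1.2          # assigned value and PT std dev (ug/kg)
      reported = {"lab A": 8.9, "lab B": 5.1, "lab C": 10.9}

      for lab, x in reported.items():
          z = (x - x_assigned) / sigma_p
          verdict = "satisfactory" if abs(z) <= 2 else "questionable/unsatisfactory"
          print(f"{lab}: z = {z:+.2f} ({verdict})")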

  4. A generic method for assignment of reliability scores applied to solvent accessibility predictions

    DEFF Research Database (Denmark)

    Petersen, Bent; Petersen, Thomas Nordahl; Andersen, Pernille

    2009-01-01

    The performance of the neural networks was evaluated on a commonly used set of sequences known as the CB513 set. An overall Pearson's correlation coefficient of 0.72 was obtained, which is comparable to the performance of the currently best publicly available method, Real-SPINE. Both methods associate a reliability … comparing the Pearson's correlation coefficient for the upper 20% of predictions sorted according to reliability. For this subset, values of 0.79 and 0.74 are obtained using our method and the compared method, respectively. This tendency holds for any selected subset.
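
    The reliability-stratified comparison described above amounts to sorting predictions by their reliability score and recomputing Pearson's r on the most reliable fraction; a sketch on synthetic data (not the CB513 predictions):

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic truth/prediction pairs with per-item noise; the reliability
      # score is assumed to track the (inverse) noise level.
      n = 5000
      truth = rng.normal(size=n)
      noise_scale = rng.uniform(0.2, 1.5, size=n)
      pred = truth + rng.normal(scale=noise_scale)
      reliability = 1.0 / noise_scale

      r_all = np.corrcoef(truth, pred)[0, 1]
      top = np.argsort(reliability)[-n // 5:]         # upper 20 % by reliability
      r_top = np.corrcoef(truth[top], pred[top])[0, 1]
      print(f"Pearson r, all predictions:        {r_all:.3f}")
      print(f"Pearson r, top 20% by reliability: {r_top:.3f}")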

  5. Study on reliability analysis based on multilevel flow models and fault tree method

    International Nuclear Information System (INIS)

    Chen Qiang; Yang Ming

    2014-01-01

    Multilevel flow models (MFM) and the fault tree method describe system knowledge in different forms, so the two methods express an equivalent logic of system reliability under the same boundary conditions and assumptions. Based on this, and combined with the characteristics of MFM, a method for mapping MFM to fault trees was put forward, providing a way to establish fault trees rapidly and to realize qualitative reliability analysis based on MFM. Taking the safety injection system of a pressurized water reactor nuclear power plant as an example, its MFM was established and its reliability was analyzed qualitatively. The analysis result shows that the logic of mapping MFM to fault trees is correct. The MFM is easily understood, created and modified. Compared with traditional fault tree analysis, the workload is greatly reduced and modeling time is saved. (authors)
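
    Once a fault tree has been derived (from MFM or otherwise), the quantitative step for independent basic events reduces to the standard gate formulas, P_AND = Π p_i and P_OR = 1 − Π (1 − p_i); the toy tree below is only an illustration, not the safety-injection-system tree of the paper.

      from math import prod

      def p_and(*p):
          return prod(p)

      def p_or(*p):
          return 1.0 - prod(1.0 - x for x in p)

      # Toy top event: (pump A fails OR valve A fails) AND (pump B fails OR valve B fails).
      pump_a, valve_a, pump_b, valve_b = 1e-3, 5e-4, 1e-3, 5e-4
      train_a = p_or(pump_a, valve_a)
      train_b = p_or(pump_b, valve_b)
      top = p_and(train_a, train_b)
      print(f"P(train A fails) = {train_a:.2e}")
      print(f"P(top event)     = {top:.2e}")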

  6. Laser: a Tool for Optimization and Enhancement of Analytical Methods

    Energy Technology Data Exchange (ETDEWEB)

    Preisler, Jan [Iowa State Univ., Ames, IA (United States)

    1997-01-01

    In this work, we use lasers to enhance the possibilities of laser desorption methods and to optimize the coating procedure for capillary electrophoresis (CE). We use several different instrumental arrangements to characterize matrix-assisted laser desorption (MALD) at atmospheric pressure and in vacuum. In imaging mode, a 488-nm argon-ion laser beam is deflected by two acousto-optic deflectors to scan, via absorption, plumes desorbed at atmospheric pressure. All absorbing species, including neutral molecules, are monitored. Interesting features, e.g. differences between the initial plume and subsequent plumes desorbed from the same spot, or the formation of two plumes from one laser shot, are observed. The total plume absorbance can be correlated with the acoustic signal generated by the desorption event. A model equation for the plume velocity as a function of time is proposed. Alternatively, the use of a static laser beam for observation enables reliable determination of plume velocities even when they are very high. Static scattering detection reveals the negative influence of particle spallation on the MS signal. Ion formation during MALD was monitored using 193-nm light to photodissociate a portion of an insulin ion plume. These results define the optimal conditions for desorbing analytes from matrices, as opposed to achieving a compromise between efficient desorption and efficient ionization as is practiced in mass spectrometry. In the CE experiment, we examined changes in a poly(ethylene oxide) (PEO) coating by continuously monitoring the electroosmotic flow (EOF) in a fused-silica capillary during electrophoresis. An imaging CCD camera was used to follow the motion of a fluorescent neutral marker zone, excited by a 488-nm Ar-ion laser, along the length of the capillary. The PEO coating was shown to reduce the velocity of the EOF by more than an order of magnitude compared to a bare capillary at pH 7.0. The coating protocol was important, especially at an intermediate pH of 7.7. The increase of p

  7. A new diffusion nodal method based on analytic basis function expansion

    International Nuclear Information System (INIS)

    Noh, J.M.; Cho, N.Z.

    1993-01-01

    The transverse integration procedure commonly used in most advanced nodal methods results in some limitations. The first is that the transverse leakage term that appears in the transverse integration procedure must be appropriately approximated; in most advanced nodal methods, this term is expanded in a quadratic polynomial. The second arises when reconstructing the pinwise flux distribution within a node: the available one-dimensional flux shapes from the nodal calculation in each spatial direction cannot be used directly in the flux reconstruction. Finally, the transverse leakage defined for a hexagonal node becomes so complicated as not to be easily handled and contains nonphysical singular terms. In this paper, a new nodal method called the analytic function expansion nodal (AFEN) method is described for both rectangular and hexagonal geometry in order to overcome these limitations. This method does not solve the transverse-integrated one-dimensional diffusion equations but instead solves directly the original multidimensional diffusion equation within a node. This is accomplished by expanding the solution (or the intranodal homogeneous flux distribution) in terms of nonseparable analytic basis functions satisfying the diffusion equation at any point in the node

  8. Recent applications of nuclear analytical methods to the certification of elemental content in NIST standard reference materials

    International Nuclear Information System (INIS)

    Greenberg, R.R.; Zeisler, R.; Mackey, E.A.

    2006-01-01

    Well-characterized certified reference materials (CRMs) play an essential role in assuring the quality of analytical measurements. NIST has been producing CRMs, currently called NIST Standard Reference Materials (SRMs), to validate analytical measurements for nearly one hundred years. The predominant mode of certifying inorganic constituents in complex-matrix SRMs is through the use of two critically evaluated, independent analytical techniques at NIST. These techniques should have no significant sources of error in common. The use of nuclear analytical methods in combination with one of the chemically based analytical methods at NIST eliminates the possibility of any significant common error source. The inherent characteristics of the various forms of nuclear analytical methods make them extremely valuable for SRM certification. Instrumental NAA is nondestructive, which eliminates the possibility of any dissolution problems, and often provides homogeneity information. Radiochemical NAA typically provides nearly blank-free determinations of some highly important but difficult elements at very low levels. Prompt-gamma NAA complements INAA and provides independent determinations of some key elements. In addition, all significant uncertainty components can be evaluated for these techniques, and we believe these methods can meet all the requirements of a primary method of measurement as defined by ISO and the CCQM. NIST has certified several SRMs using INAA and RNAA as primary methods. In addition, NIST has compared measurements by INAA and PGAA with other primary methods as part of the CCQM intercomparisons of national metrology institutes. Some significant SRMs recently certified for inorganic constituents with contributions from the nuclear analytical methods include: Toxic Substances in Urine (SRM 2670a), Lake Superior Fish Tissue (SRM 1946), Air Particulate on Filter Media (SRM 2783), Inorganics in Marine Sediment (SRM 2702), Sediment for Solid Sampling (Small

  9. Go-flow: a reliability analysis methodology applicable to piping system

    International Nuclear Information System (INIS)

    Matsuoka, T.; Kobayashi, M.

    1985-01-01

    Since the completion of the Reactor Safety Study, the use of probabilistic risk assessment techniques has become more widespread in the nuclear community. Several analytical methods are used for the reliability analysis of nuclear power plants; the GO methodology is one of them. Using the GO methodology, the authors performed a reliability analysis of the emergency decay heat removal system of the nuclear ship Mutsu, in order to examine its applicability to piping systems. Through this analysis, the authors identified some disadvantages of the GO methodology. In the GO methodology, the signal is an on-to-off or off-to-on signal; therefore GO finds the time point at which the state of a system changes and cannot treat a system whose state changes as off-on-off. Several computer runs are required to obtain the time-dependent failure probability of a system. In order to overcome these disadvantages, the authors propose a new analytical methodology: GO-FLOW. In GO-FLOW, the modeling method (chart) and the calculation procedure are similar to those of the GO methodology, but the meaning of signal and time point, and the definitions of the operators, are essentially different. In the paper, the GO-FLOW methodology is explained and two examples of analysis by GO-FLOW are given

  10. En introduktion til CARM: The Conversation Analytic Role-Play Method

    DEFF Research Database (Denmark)

    Lange, Simon Bierring

    2014-01-01

    This working paper is an introduction to and a brief discussion of the workshop method Conversation Analytic Role-play Method (CARM), a method developed for running workshops on the basis of results from interaction analyses. The paper is the first introduction to the CARM method in Danish, and

  11. Optimization of offshore wind turbine support structures using analytical gradient-based method

    OpenAIRE

    Chew, Kok Hon; Tai, Kang; Ng, E.Y.K.; Muskulus, Michael

    2015-01-01

    Design optimization of offshore wind turbine support structures is an expensive task due to the highly constrained, non-convex and non-linear nature of the design problem. This report presents an analytical gradient-based method to solve this problem in an efficient and effective way. The design sensitivities of the objective and constraint functions are evaluated analytically while the optimization of the structure is performed, subject to sizing, eigenfrequency, extreme load an

  12. Dynamic reliability networks with self-healing units

    International Nuclear Information System (INIS)

    Jenab, K.; Seyed Hosseini, S.M.; Dhillon, B.S.

    2008-01-01

    This paper presents an analytical approach for dynamic reliability networks used for the failure-limit strategy in maintenance optimization. The proposed approach utilizes the moment generating function (MGF) and the flow-graph concept to depict the functional and reliability diagrams of a system comprised of series, parallel or mixed configurations of self-healing units. The self-healing unit is characterized by embedded failure detection and recovery mechanisms, represented by self-loops in the flow-graph networks. The newly developed analytical approach provides the probability of system failure and time-to-failure data, i.e., the mean and standard deviation of the time to failure, used for maintenance optimization

  13. The analytic nodal method in cylindrical geometry

    International Nuclear Information System (INIS)

    Prinsloo, Rian H.; Tomasevic, Djordje I.

    2008-01-01

    Nodal diffusion methods have been used extensively in nuclear reactor calculations, specifically for their performance advantage, but also for their superior accuracy. More specifically, the Analytic Nodal Method (ANM), utilising the transverse integration principle, has been applied to numerous reactor problems with much success. In this work, a nodal diffusion method is developed for cylindrical geometry. Application of this method to three-dimensional (3D) cylindrical geometry has never been satisfactorily addressed and we propose a solution which entails the use of conformal mapping. A set of 1D-equations with an adjusted, geometrically dependent, inhomogeneous source, is obtained. This work describes the development of the method and associated test code, as well as its application to realistic reactor problems. Numerical results are given for the PBMR-400 MW benchmark problem, as well as for a 'cylindrisized' version of the well-known 3D LWR IAEA benchmark. Results highlight the improved accuracy and performance over finite-difference core solutions and investigate the applicability of nodal methods to 3D PBMR type problems. Results indicate that cylindrical nodal methods definitely have a place within PBMR applications, yielding performance advantage factors of 10 and 20 for 2D and 3D calculations, respectively, and advantage factors of the order of 1000 in the case of the LWR problem

  14. What we-authors, reviewers and editors of scientific work-can learn from the analytical history of biological 3-nitrotyrosine.

    Science.gov (United States)

    Tsikas, Dimitrios

    2017-07-15

    Tyrosine and tyrosine residues in proteins are attacked by the reactive oxygen and nitrogen species peroxynitrite (O=N-OO−) to generate 3-nitrotyrosine (3-NT) and 3-nitrotyrosine-proteins (3-NTProt), respectively. 3-NT and 3-NTProt are widely accepted as biomarkers of nitr(os)ative stress. Over the years many different analytical methods have been reported for 3-NT and 3-NTProt. Reported concentrations often differ by more than three orders of magnitude, indicative of serious analytical problems. Strategies to overcome pre-analytical and analytical shortcomings and pitfalls have been proposed. The present review investigated whether recently published work on the quantitative measurement of biological 3-nitrotyrosine adequately considered the analytical past of this biomolecule. 3-Nitrotyrosine was taken as a representative of biomolecules that occur in biological samples in the pM-to-nM concentration range. This examination revealed that in many cases the main protagonists involved in the publication of scientific work, i.e., authors, reviewers and editors, failed to do so. Learning from the analytical history of 3-nitrotyrosine means advancing analytical and biological science and implies the following key issues. (1) Choosing the most reliable analytical approach in terms of sensitivity and accuracy; presently this is best achieved by stable-isotope dilution tandem mass spectrometry coupled with gas chromatography (GC-MS/MS) or liquid chromatography (LC-MS/MS). (2) Minimizing artificial formation of 3-nitrotyrosine during sample work-up, a major pitfall in 3-nitrotyrosine analysis. (3) Adequately validating the final method in the intended biological matrix and the established concentration range. (4) Inviting experts in the field for critical evaluation of the novelty and reliability of the proposed analytical method, placing special emphasis on the compliance of the analytical outcome with 3-nitrotyrosine concentrations obtained by validated GC-MS/MS and

  15. Validation of an analytical method for simultaneous high-precision measurements of greenhouse gas emissions from wastewater treatment plants using a gas chromatography-barrier discharge detector system.

    Science.gov (United States)

    Pascale, Raffaella; Caivano, Marianna; Buchicchio, Alessandro; Mancini, Ignazio M; Bianco, Giuliana; Caniani, Donatella

    2017-01-13

    Wastewater treatment plants (WWTPs) emit CO₂ and N₂O, which may contribute to climate change and global warming. Over the last few years, awareness of greenhouse gas (GHG) emissions from WWTPs has increased. Moreover, the development of valid, reliable, and high-throughput analytical methods for simultaneous gas analysis is an essential requirement for environmental applications. In the present study, an analytical method based on a gas chromatograph (GC) equipped with a barrier ionization discharge (BID) detector was developed for the first time. This new method simultaneously analyses CO₂ and N₂O and has a precision, measured in terms of the relative standard deviation (RSD%), equal to or less than 6.6% and 5.1%, respectively. The method's detection limits are 5.3 ppmv for CO₂ and 62.0 ppbv for N₂O. The method's selectivity, linearity, accuracy, repeatability, intermediate precision, limit of detection and limit of quantification were good at trace concentration levels. After validation, the method was applied to a real case of N₂O and CO₂ emissions from a WWTP, confirming its suitability as a standard procedure for simultaneous GHG analysis in environmental samples containing CO₂ levels less than 12,000 mg/L. Copyright © 2016 Elsevier B.V. All rights reserved.
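
    For context, detection and quantification limits of this kind are commonly estimated from a calibration line as LOD = 3.3·s/b and LOQ = 10·s/b, with b the slope and s the residual standard error; the calibration points below are invented, not the GC-BID data of the paper.

      import numpy as np

      # Invented calibration data (concentration in ppmv vs. detector response).
      conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
      signal = np.array([0.8, 5.3, 10.1, 19.6, 40.7, 79.9])

      b, a = np.polyfit(conc, signal, 1)                  # slope, intercept
      resid = signal - (a + b * conc)
      s = np.sqrt(np.sum(resid**2) / (len(conc) - 2))     # residual std error

      print(f"slope = {b:.3f}, intercept = {a:.3f}")
      print(f"LOD = {3.3 * s / b:.2f} ppmv,  LOQ = {10 * s / b:.2f} ppmv")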

  16. Analytical method for the isotopic characterization of soils

    International Nuclear Information System (INIS)

    Sibello Hernandez, Rita; Cozzella, Maria Letizia; Mariani, Mario

    2014-01-01

    The aim of this work was to develop an analytical method to determine the isotopic composition of different elements in soil samples and to establish whether contamination is present. The samples were digested using method EPA 3050B, and several metal concentrations were determined, including uranium and thorium. For elements with even lower concentrations, such as plutonium and radium, a further treatment after the EPA mineralization was necessary. The measurement technique used was quadrupole inductively coupled plasma mass spectrometry (ICP-MS). The results of the analyses performed in two laboratories showed good agreement. This method allowed the isotopic characterization of the studied soils; the results showed that the soils do not present any local pollution and that the presence of plutonium-239 is due to global fallout

  17. Radiochemical methods. Analytical chemistry by open learning

    Energy Technology Data Exchange (ETDEWEB)

    Geary, W.J.; James, A.M. (ed.)

    1986-01-01

    This book presents the analytical uses of radioactive isotopes within the context of radiochemistry as a whole. It is designed for scientists with relatively little background knowledge of the subject. Thus the initial emphasis is on developing the basic concepts of radioactive decay, particularly as they affect the potential usage of radioisotopes. Discussion of the properties of various types of radiation, and of factors such as half-life, is related to practical considerations such as counting and preparation methods, and handling/disposal problems. Practical aspects are then considered in more detail, and the various radioanalytical methods are outlined with particular reference to their applicability. The approach is 'user friendly' and the use of self assessment questions allows the reader to test his/her understanding of individual sections easily. For those who wish to develop their knowledge further, a reading list is provided.

  18. Simulation of an Electromagnetic Acoustic Transducer Array by Using Analytical Method and FDTD

    Directory of Open Access Journals (Sweden)

    Yuedong Xie

    2016-01-01

    Full Text Available Previously, we developed a method based on FEM and FDTD for the study of an Electromagnetic Acoustic Transducer Array (EMAT). This paper presents a new analytical solution to the eddy current problem for the meander coil used in an EMAT, adapted from the classic Deeds and Dodd solution originally intended for circular coils. The analytical solution resulting from this novel adaptation exploits the large-radius extrapolation and shows several advantages over the finite element method (FEM), especially in the higher frequency regime. The calculated Lorentz force density from the analytical EM solver is then coupled to the ultrasonic simulations, which exploit the finite-difference time-domain (FDTD) method to describe the propagation of ultrasound waves, in particular Rayleigh waves. The radiation pattern, obtained by applying the Hilbert transform to the time-domain waveforms, is proposed to characterise the sensor in terms of its beam directivity and field distribution along the steering angle; this yields performance parameters for an EMAT array, facilitating the optimum design of such sensors.
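
    The FDTD part of such a model is, at its core, an explicit leapfrog update of the wave equation; the one-dimensional sketch below (a generic acoustic example with assumed parameters, not the Rayleigh-wave EMAT code) shows the basic scheme.

      import numpy as np

      c = 3000.0                 # wave speed [m/s] (illustrative)
      dx = 1.0e-4                # grid spacing [m]
      dt = 0.9 * dx / c          # CFL-stable time step
      nx, nt = 400, 400

      u_prev = np.zeros(nx)      # displacement at t - dt
      u = np.zeros(nx)           # displacement at t
      r2 = (c * dt / dx) ** 2

      for n in range(nt):
          u_next = np.zeros(nx)
          # Leapfrog update of the 1-D wave equation with fixed ends.
          u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                          + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
          # Gaussian-modulated tone burst injected near the left end,
          # mimicking a transducer pulse.
          t = n * dt
          u_next[1] += np.exp(-((t - 1.0e-6) / 3.0e-7) ** 2) * np.sin(2 * np.pi * 5e6 * t)
          u_prev, u = u, u_next

      print(f"max |u| after {nt} steps: {np.abs(u).max():.3e}")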

  19. A dynamic particle filter-support vector regression method for reliability prediction

    International Nuclear Information System (INIS)

    Wei, Zhao; Tao, Tao; ZhuoShu, Ding; Zio, Enrico

    2013-01-01

    Support vector regression (SVR) has been applied to time series prediction, and some works have demonstrated the feasibility of its use to forecast system reliability. The selection of SVR's parameters is important for the accuracy of reliability forecasting. Existing research on SVR parameter selection divides the example dataset into training and test subsets and tunes the parameters on the training data. However, these fixed parameters can lead to poor prediction capabilities if the data of the test subset differ significantly from those of the training set. In contrast, the novel method proposed in this paper uses particle filtering to estimate the SVR model parameters according to the whole measurement sequence up to the last observation instance. By treating the SVR training model as the observation equation of a particle filter, our method allows the SVR model parameters to be updated dynamically when a new observation arrives. Because of the adaptability of the parameters to dynamic data patterns, the new PF–SVR method has superior prediction performance over standard SVR. Four application results show that PF–SVR is more robust than SVR to a decrease in the number of training data and to changes in the initial SVR parameter values. Also, even if there are trends in the test data different from those in the training data, the method can capture the changes, correct the SVR parameters and obtain good predictions. -- Highlights: •A dynamic PF–SVR method is proposed to predict the system reliability. •The method can adjust the SVR parameters according to the change of data. •The method is robust to the size of training data and initial parameter values. •Some cases based on both artificial and real data are studied. •PF–SVR shows superior prediction performance over standard SVR

  20. Methods to compute reliabilities for genomic predictions of feed intake

    Science.gov (United States)

    For new traits without historical reference data, cross-validation is often the preferred method to validate reliability (REL). Time truncation is less useful because few animals gain substantial REL after the truncation point. Accurate cross-validation requires separating genomic gain from pedigree...

  1. Verification of practicability of quantitative reliability evaluation method (De-BDA) in nuclear power plants

    International Nuclear Information System (INIS)

    Takahashi, Kinshiro; Yukimachi, Takeo.

    1988-01-01

    A variety of methods have been applied to the study of reliability analysis in which human factors are included, in order to enhance the safety and availability of nuclear power plants. De-BDA (Detailed Block Diagram Analysis) is one such method, developed with the objective of creating a more comprehensive and understandable tool for the quantitative analysis of reliability associated with plant operations. The practicability of this method has been verified by applying it to reliability analyses of various phases of plant operation, as well as to the evaluation of an enhanced man-machine interface in the central control room. (author)

  2. MS-Based Analytical Techniques: Advances in Spray-Based Methods and EI-LC-MS Applications

    Science.gov (United States)

    Medina, Isabel; Cappiello, Achille; Careri, Maria

    2018-01-01

    Mass spectrometry is the most powerful technique for the detection and identification of organic compounds. It can provide molecular weight information and a wealth of structural details that give a unique fingerprint for each analyte. Due to these characteristics, mass spectrometry-based analytical methods are attracting increasing interest in the scientific community, especially in the food safety, environmental, and forensic investigation areas, where the simultaneous detection of targeted and nontargeted compounds represents a key factor. In addition, safety risks can be identified at an early stage through online and real-time analytical methodologies. In this context, several efforts have been made to achieve analytical instrumentation able to perform real-time analysis in the native environment of samples and to generate highly informative spectra. This review article provides a survey of some instrumental innovations and their applications, with particular attention to spray-based MS methods and food analysis issues. The survey will attempt to cover the state of the art from 2012 up to 2017.

  3. Coupling impedance of an in-vacuum undulator: Measurement, simulation, and analytical estimation

    Science.gov (United States)

    Smaluk, Victor; Fielder, Richard; Blednykh, Alexei; Rehm, Guenther; Bartolini, Riccardo

    2014-07-01

    One of the important issues of the in-vacuum undulator design is the coupling impedance of the vacuum chamber, which includes tapered transitions with variable gap size. To get complete and reliable information on the impedance, analytical estimate, numerical simulations and beam-based measurements have been performed at Diamond Light Source, a forthcoming upgrade of which includes introducing additional insertion device (ID) straights. The impedance of an already existing ID vessel geometrically similar to the new one has been measured using the orbit bump method. The measurement results in comparison with analytical estimations and numerical simulations are discussed in this paper.

  4. Analytical chemistry in semiconductor manufacturing: Techniques, role of nuclear methods and need for quality control

    International Nuclear Information System (INIS)

    1989-06-01

    This report is the result of a consultants meeting held in Gaithersburg, USA, 2-3 October 1987. The meeting was hosted by the National Bureau of Standards and Technology, and it was attended by 18 participants from Denmark, Finland, India, Japan, Norway, People's Republic of China and the USA. The purpose of the meeting was to assess the present status of analytical chemistry in semiconductor manufacturing, the role of nuclear analytical methods and the need for internationally organized quality control of the chemical analysis. The report contains the three presentations in full and a summary report of the discussions. Thus, it gives an overview of the need of analytical chemistry in manufacturing of silicon based devices, the use of nuclear analytical methods, and discusses the need for quality control. Refs, figs and tabs

  5. Detection of food irradiation - two analytical methods

    International Nuclear Information System (INIS)

    1994-01-01

    This publication summarizes the activities of the Nordic countries in the field of detection of irradiated food. The National Food Agency of Denmark coordinated the project. The two analytical methods investigated were the gas-chromatographic determination of the hydrocarbon/lipid ratio in irradiated chicken meat, and a bioassay based on microelectrophoresis of DNA from single cells. A method for the determination of o-tyrosine in irradiated and non-irradiated chicken meat has also been tested. The first method, based on radiolytic changes in the fatty acids contained in chicken meat, has been tested and compared in the four Nordic countries. Four major hydrocarbons (C16:2, C16:3, C17:1 and C17:2) have been determined, and reasonable agreement was observed between the dose level and the hydrocarbon concentrations. Results of the bioassay, in which strand breaks of DNA are demonstrated by microelectrophoresis of single cells, show a correlation between the dose levels and the pattern of DNA fragment migration. The hydrocarbon method can be applied to detect other irradiated, fat-containing foods, while the DNA method can be used for some animal and some vegetable foods as well. Both methods allow the fact of food irradiation to be established beyond any doubt, making them suitable for food control analysis. The detailed determination protocols are given. (EG)

  6. Experimental setup and analytical methods for the non-invasive determination of volatile organic compounds, formaldehyde and NO{sub x} in exhaled human breath

    Energy Technology Data Exchange (ETDEWEB)

    Riess, Ulrich; Tegtbur, Uwe [Hannover Medical School, Sports Physiology and Sports Medicine, Carl-Neuberg-Str. 1, 30625 Hannover (Germany); Fauck, Christian; Fuhrmann, Frank; Markewitz, Doreen [Fraunhofer WKI, Department of Material Analysis and Indoor Chemistry, Bienroder Weg 54 E, 38108 Braunschweig (Germany); Salthammer, Tunga, E-mail: tunga.salthammer@wki.fraunhofer.de [Fraunhofer WKI, Department of Material Analysis and Indoor Chemistry, Bienroder Weg 54 E, 38108 Braunschweig (Germany)

    2010-06-11

    Different analytical devices were tested and evaluated for their suitability for breath gas analysis by examining the physiological parameters and chemical substances in the exhaled breath of ten healthy probands during light cycling, as a function of methanol-rich nutrition. The probands exercised under normal breathing conditions on a bicycle ergometer. Breath air was exhaled into a glass cylinder and collected under steady-state conditions. Non-invasively measured parameters were pulse rate, breath frequency, temperature, relative humidity, NO{sub x}, total volatile organic compounds (TVOC{sub PAS}), carbon dioxide (CO{sub 2}), formaldehyde, methanol, acetaldehyde, acetone, isoprene and volatile organic compounds (VOCs). Methanol-rich food and beverages strongly influenced the concentration of methanol and other organic substances in human breath. On the other hand, nutrition and smoking had no clear effect on the physical condition of the probands. The proton transfer reaction mass spectrometry (PTR-MS) method was found to be very suitable for the analysis of breath gas, but the m/z 31 signal, if assigned to formaldehyde, is sensitive to interferences. The time vs. concentration curves of nitric oxide showed sudden peaks up to 120 ppb in most of the measurements. In one case a strong interference of the NO{sub x} signal was observed. Time-resolved analysis of exhaled breath gas offers high potential and significance for different applications, provided that reliable analytical techniques are used. Some compounds like nitric oxide (NO), methanol and different VOCs, as well as sum parameters like TVOC{sub PAS}, are especially suitable as markers. Formaldehyde, which is rapidly metabolized in the human body, could be measured reliably as a trace component by the acetylacetone (acac) method but not by PTR-MS.

  7. The Methodical Instrumentarium for Analytical Monitoring of Markets for High-Tech Products

    Directory of Open Access Journals (Sweden)

    Mikaelian Suren G.

    2017-10-01

    Full Text Available The article aims to clarify the essential characteristics of high-tech products and to specify the features of analytical monitoring of markets for such products. Conceptual approaches to interpreting the essence of high-tech products, as the basic concept in the categorical apparatus for researching the systemic and complex processes of technological development, are clarified, and the most efficient instruments for assessing innovation processes in the high-tech sphere are systematized. On this basis, the methodical instrumentarium for analytical monitoring of markets for high-tech products is refined, and the terminology of a high-tech product is made more precise. It is determined that «high-tech products» is the original basic concept in the categorical apparatus for researching the systemic and complex processes of the high-tech market and needs to be concretized.

  8. A method of bias correction for maximal reliability with dichotomous measures.

    Science.gov (United States)

    Penev, Spiridon; Raykov, Tenko

    2010-02-01

    This paper is concerned with the reliability of weighted combinations of a given set of dichotomous measures. Maximal reliability for such measures has been discussed in the past, but the pertinent estimator exhibits a considerable bias and mean squared error for moderate sample sizes. We examine this bias, propose a procedure for bias correction, and develop a more accurate asymptotic confidence interval for the resulting estimator. In most empirically relevant cases, the bias correction and mean squared error correction can be performed simultaneously. We propose an approximate (asymptotic) confidence interval for the maximal reliability coefficient, discuss the implementation of this estimator, and investigate the mean squared error of the associated asymptotic approximation. We illustrate the proposed methods using a numerical example.

  9. Rational Selection, Criticality Assessment, and Tiering of Quality Attributes and Test Methods for Analytical Similarity Evaluation of Biosimilars.

    Science.gov (United States)

    Vandekerckhove, Kristof; Seidl, Andreas; Gutka, Hiten; Kumar, Manish; Gratzl, Gyöngyi; Keire, David; Coffey, Todd; Kuehne, Henriette

    2018-05-10

    Leading regulatory agencies recommend biosimilar assessment to proceed in a stepwise fashion, starting with a detailed analytical comparison of the structural and functional properties of the proposed biosimilar and reference product. The degree of analytical similarity determines the degree of residual uncertainty that must be addressed through downstream in vivo studies. Substantive evidence of similarity from comprehensive analytical testing may justify a targeted clinical development plan, and thus enable a shorter path to licensing. The importance of a careful design of the analytical similarity study program therefore should not be underestimated. Designing a state-of-the-art analytical similarity study meeting current regulatory requirements in regions such as the USA and EU requires a methodical approach, consisting of specific steps that far precede the work on the actual analytical study protocol. This white paper discusses scientific and methodological considerations on the process of attribute and test method selection, criticality assessment, and subsequent assignment of analytical measures to US FDA's three tiers of analytical similarity assessment. Case examples of selection of critical quality attributes and analytical methods for similarity exercises are provided to illustrate the practical implementation of the principles discussed.

  10. Analytical method validation of GC-FID for the simultaneous measurement of hydrocarbons (C2-C4) in their gas mixture

    Directory of Open Access Journals (Sweden)

    Oman Zuas

    2016-09-01

    Full Text Available An accurate gas chromatography coupled to a flame ionization detector (GC-FID) method was validated for the simultaneous analysis of light hydrocarbons (C2-C4) in their gas mixture. The validation parameters were evaluated based on the ISO/IEC 17025 definition, including method selectivity, repeatability, accuracy, linearity, limit of detection (LOD), limit of quantitation (LOQ), and ruggedness. Under the optimum analytical conditions, the analysis of the gas mixture revealed that each target component was well separated with high selectivity. The method was also found to be precise and accurate. The method linearity was high, with good correlation coefficient values (R2 ≥ 0.999) for all target components. It can be concluded that the developed GC-FID method is reliable and suitable for the determination of light C2-C4 hydrocarbons (including ethylene, propane, propylene, isobutane, and n-butane) in their gas mixture. The validated method has been successfully applied to the estimation of light C2-C4 hydrocarbons in natural gas samples, showing high repeatability, with a relative standard deviation (RSD) of less than 1.0%, and good selectivity, with no interference from other possible components observed.
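
As an illustration of the quantities reported in validation exercises like this one, the sketch below computes calibration linearity, LOD/LOQ from the calibration residuals (ICH-style 3.3·σ/slope and 10·σ/slope), and repeatability as a relative standard deviation. The data and the exact formulas are illustrative assumptions, not the authors' procedure.

```python
# Illustrative sketch of typical method-validation calculations (hypothetical data).
import numpy as np

# Hypothetical calibration data: amount fraction vs. peak area.
conc = np.array([10.0, 20.0, 50.0, 100.0, 200.0, 500.0])
area = np.array([81.0, 165.0, 402.0, 810.0, 1618.0, 4060.0])

slope, intercept = np.polyfit(conc, area, 1)      # least-squares calibration line
pred = slope * conc + intercept
ss_res = np.sum((area - pred) ** 2)
ss_tot = np.sum((area - area.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot                 # linearity check

sigma = np.sqrt(ss_res / (len(conc) - 2))         # residual standard deviation
lod = 3.3 * sigma / slope                         # limit of detection
loq = 10.0 * sigma / slope                        # limit of quantitation

# Repeatability: replicate injections of the same gas mixture (hypothetical).
replicates = np.array([402.1, 401.5, 403.0, 402.6, 401.9, 402.4])
rsd_percent = 100.0 * replicates.std(ddof=1) / replicates.mean()

print(f"R^2 = {r_squared:.4f}, LOD = {lod:.2f}, LOQ = {loq:.2f}, RSD = {rsd_percent:.2f}%")
```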

  11. Analytical study of stress and deformation of HTR fuel blocks

    International Nuclear Information System (INIS)

    Tanaka, M.

    1982-01-01

    A two-dimensional finite element computer code named HANS-GR has been developed to predict the mechanical behavior of graphite fuel blocks with realistic material properties and core environment. When graphite material is exposed to high temperature and a fast neutron flux of high density, strains arise due to thermal expansion, irradiation-induced shrinkage and creep. Stresses and distortions are thus induced in the fuel block, in which there is spatial variation of these strains. The analytical method used in the program to predict these induced stresses and distortions by the finite element method is discussed. In order to illustrate the versatility of the computer code, numerical results of two example analyses of the multi-hole type fuel elements in the VHTR reactor are given. The two example analyses concern the stresses in fuel blocks with control rod holes and the distortions of the fuel blocks at the periphery of the reactor core; it is considered that these phenomena should be carefully examined when multi-hole type fuel elements are applied to the VHTR. The predicted mechanical behavior of the graphite components is strongly dependent on the material properties used, and obtaining reliable material property data is essential for making the analytical prediction a reliable one

  12. Method for analysis and assessment of the relation between stress and reliability of knowledge-based actions in the probabilistic safety analysis

    International Nuclear Information System (INIS)

    Fassmann, Werner

    2014-06-01

    According to the current theoretical and empirical state of the art, stress has to be understood as the emotional and cognitive reaction by which humans adapt to situations that imply real or imagined danger, threat, or frustration of important personal goals or needs. The emotional reaction to such situations can be so extreme that rational coping with the situation is precluded. In less extreme cases, changes in the cognitive processes underlying human action occur, which may systematically affect the reliability of the tasks personnel has to perform in a stressful situation. Reliable task performance by the personnel of nuclear power plants and other risk technologies is also affected by such effects. The method developed in the framework of the research and development project RS1198, sponsored by the German Federal Ministry for Economic Affairs and Energy (BMWi), addresses both aspects of emotional and cognitive coping with stressful situations. The analytical and evaluation steps of the approach provide guidance to end users on how to capture and quantify the contribution of stress-related emotional and cognitive factors to the reliable performance of knowledge-based actions. For this purpose, a suitable guideline has been developed, and further research needs for clarifying open questions have been identified. A case study application illustrates how to use the method. Part of the work performed in this project was dedicated to a review addressing the question of the extent to which Swain's approach to the analysis and evaluation of stress is in line with current scientific knowledge. Suitable suggestions for updates have been developed.

  13. Methods for the calculation of uncertainty in analytical chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Suh, M. Y.; Sohn, S. C.; Park, Y. J.; Park, K. K.; Jee, K. Y.; Joe, K. S.; Kim, W. H

    2000-07-01

    This report describes the statistical rules for evaluating and expressing uncertainty in analytical chemistry. The procedures for the evaluation of uncertainty in chemical analysis are illustrated by worked examples. This report, in particular, gives guidance on how uncertainty can be estimated from various chemical analyses. This report can be also used for planning the experiments which will provide the information required to obtain an estimate of uncertainty for the method.
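
A minimal sketch of the kind of calculation such guidance covers: independent standard uncertainty components are combined by root-sum-of-squares into a combined standard uncertainty and reported as an expanded uncertainty with coverage factor k = 2. The component values are hypothetical and the GUM-style recipe is generic, not the report's own worked examples.

```python
# GUM-style combination of independent uncertainty components (illustrative values).
import math

result = 12.46                      # measured concentration, mg/L (hypothetical)
components = {                      # relative standard uncertainties (hypothetical)
    "calibration standard": 0.006,
    "volumetric operations": 0.004,
    "repeatability": 0.008,
    "recovery correction": 0.010,
}

u_rel = math.sqrt(sum(u ** 2 for u in components.values()))  # combined relative uncertainty
u_c = u_rel * result                                          # combined standard uncertainty
U = 2.0 * u_c                                                 # expanded uncertainty, k = 2

print(f"result = {result:.2f} mg/L, u_c = {u_c:.3f} mg/L, U (k=2) = {U:.3f} mg/L")
```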

  14. Analytical Radiation Transport Benchmarks for The Next Century

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    2005-01-01

    Verification of large-scale computational algorithms used in nuclear engineering and radiological applications is an essential element of reliable code performance. For this reason, the development of a suite of multidimensional semi-analytical benchmarks has been undertaken to provide independent verification of the proper operation of codes dealing with the transport of neutral particles. The benchmarks considered cover several one-dimensional, multidimensional, monoenergetic and multigroup, fixed source and critical transport scenarios. The first approach is based on the Green's function: in slab geometry, the Green's function is incorporated into a set of integral equations for the boundary fluxes, and through a numerical Fourier transform inversion and subsequent matrix inversion for the boundary fluxes, a semi-analytical benchmark emerges. Multidimensional solutions in a variety of infinite media are also based on the slab Green's function. In a second approach, a new converged SN method is developed, in which the SN solution is 'mined' to bring out hidden high-quality solutions. For this case, multigroup fixed source and criticality transport problems are considered. Remarkably accurate solutions can be obtained with this new method, called the Multigroup Converged SN (MGCSN) method, as will be demonstrated

  15. Downstream processing and chromatography based analytical methods for production of vaccines, gene therapy vectors, and bacteriophages

    Science.gov (United States)

    Kramberger, Petra; Urbas, Lidija; Štrancar, Aleš

    2015-01-01

    Downstream processing of nanoplexes (viruses, virus-like particles, bacteriophages) is characterized by complexity of the starting material, number of purification methods to choose from, regulations that are setting the frame for the final product and analytical methods for upstream and downstream monitoring. This review gives an overview on the nanoplex downstream challenges and chromatography based analytical methods for efficient monitoring of the nanoplex production. PMID:25751122

  16. A New Method of Reliability Evaluation Based on Wavelet Information Entropy for Equipment Condition Identification

    International Nuclear Information System (INIS)

    He, Z J; Zhang, X L; Chen, X F

    2012-01-01

    For the reliability evaluation of condition identification of mechanical equipment, it is necessary to analyze condition monitoring information. A new method of reliability evaluation based on wavelet information entropy extracted from the vibration signals of mechanical equipment is proposed. The method is quite different from traditional reliability evaluation models, which depend on probability statistics of large samples of data. The vibration signals of mechanical equipment were analyzed by means of the second generation wavelet package (SGWP). The relative energy in each frequency band of the decomposed signal, expressed as a fraction of the whole signal energy, is taken as a probability. A normalized information entropy (IE) is obtained from these relative energies to describe the uncertainty of the system instead of a probability. The reliability degree is then obtained by transforming the normalized wavelet information entropy. The method has been successfully applied to evaluate the assembled quality reliability of a kind of dismountable disk-drum aero-engine, and the resulting reliability degree indicates the assembled quality satisfactorily.
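
The entropy step described above can be sketched as follows: the relative energies of the wavelet-packet frequency bands are treated as a probability distribution, a normalized information entropy is computed from them, and a reliability degree is derived. The band energies are hypothetical, and mapping reliability as one minus the normalized entropy is an assumption, since the paper's exact transformation is not reproduced here.

```python
# Sketch of the wavelet-energy entropy step (hypothetical band energies; the
# reliability mapping 1 - normalized entropy is an assumption for illustration).
import numpy as np

def reliability_from_band_energies(band_energies):
    e = np.asarray(band_energies, dtype=float)
    p = e / e.sum()                              # relative band energy = "probability"
    p = p[p > 0]                                 # avoid log(0)
    entropy = -np.sum(p * np.log(p))             # information entropy
    entropy_norm = entropy / np.log(len(e))      # normalize to [0, 1]
    return 1.0 - entropy_norm                    # assumed reliability degree

healthy = [0.70, 0.12, 0.08, 0.05, 0.03, 0.02]   # energy concentrated in one band
degraded = [0.22, 0.18, 0.17, 0.16, 0.14, 0.13]  # energy spread across bands

print("healthy :", round(reliability_from_band_energies(healthy), 3))
print("degraded:", round(reliability_from_band_energies(degraded), 3))
```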

  17. No Impact of the Analytical Method Used for Determining Cystatin C on Estimating Glomerular Filtration Rate in Children.

    Science.gov (United States)

    Alberer, Martin; Hoefele, Julia; Benz, Marcus R; Bökenkamp, Arend; Weber, Lutz T

    2017-01-01

    Measurement of inulin clearance is considered to be the gold standard for determining kidney function in children, but this method is time consuming and expensive. The glomerular filtration rate (GFR) is on the other hand easier to calculate by using various creatinine- and/or cystatin C (Cys C)-based formulas. However, for the determination of serum creatinine (Scr) and Cys C, different and non-interchangeable analytical methods exist. Given the fact that different analytical methods for the determination of creatinine and Cys C were used in order to validate existing GFR formulas, clinicians should be aware of the type used in their local laboratory. In this study, we compared GFR results calculated on the basis of different GFR formulas and either used Scr and Cys C values as determined by the analytical method originally employed for validation or values obtained by an alternative analytical method to evaluate any possible effects on the performance. Cys C values determined by means of an immunoturbidimetric assay were used for calculating the GFR using equations in which this analytical method had originally been used for validation. Additionally, these same values were then used in other GFR formulas that had originally been validated using a nephelometric immunoassay for determining Cys C. The effect of using either the compatible or the possibly incompatible analytical method for determining Cys C in the calculation of GFR was assessed in comparison with the GFR measured by creatinine clearance (CrCl). Unexpectedly, using GFR equations that employed Cys C values derived from a possibly incompatible analytical method did not result in a significant difference concerning the classification of patients as having normal or reduced GFR compared to the classification obtained on the basis of CrCl. Sensitivity and specificity were adequate. On the other hand, formulas using Cys C values derived from a compatible analytical method partly showed insufficient

  18. Planning of operation & maintenance using risk and reliability based methods

    DEFF Research Database (Denmark)

    Florian, Mihai; Sørensen, John Dalsgaard

    2015-01-01

    Operation and maintenance (OM) of offshore wind turbines contributes with a substantial part of the total levelized cost of energy (LCOE). The objective of this paper is to present an application of risk- and reliability-based methods for planning of OM. The theoretical basis is presented...

  19. Fault feature analysis of cracked gear based on LOD and analytical-FE method

    Science.gov (United States)

    Wu, Jiateng; Yang, Yu; Yang, Xingkai; Cheng, Junsheng

    2018-01-01

    At present, there are two main approaches to gear fault diagnosis: model-based gear dynamic analysis and signal-based gear vibration diagnosis. In this paper, a method for the fault feature analysis of gear cracks is presented which combines the advantages of dynamic modeling and signal processing. Firstly, a new time-frequency analysis method called local oscillatory-characteristic decomposition (LOD) is proposed, which has the attractive feature of extracting fault characteristics efficiently and accurately. Secondly, an analytical-finite element (analytical-FE) method, called the assist-stress intensity factor (assist-SIF) gear contact model, is put forward to calculate the time-varying mesh stiffness (TVMS) under different crack states. Based on a dynamic model of the gear system with 6 degrees of freedom, the dynamic simulation response was obtained for different tooth crack depths. For the dynamic model, the corresponding relation between the characteristic parameters and the degree of the tooth crack is established under a specific condition. On the basis of the methods mentioned above, a novel gear tooth root crack diagnosis method combining the LOD with the analytical-FE approach is proposed. Furthermore, empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) are contrasted with the LOD using gear crack fault vibration signals. The analysis results indicate that the proposed method is effective and feasible for tooth crack stiffness calculation and gear tooth crack fault diagnosis.

  20. A Comparison of Three Methods for the Analysis of Skin Flap Viability: Reliability and Validity.

    Science.gov (United States)

    Tim, Carla Roberta; Martignago, Cintia Cristina Santi; da Silva, Viviane Ribeiro; Dos Santos, Estefany Camila Bonfim; Vieira, Fabiana Nascimento; Parizotto, Nivaldo Antonio; Liebano, Richard Eloin

    2018-05-01

    Objective: Technological advances have provided new alternatives for the analysis of skin flap viability in animal models; however, the interrater validity and reliability of these techniques have yet to be analyzed. The present study aimed to evaluate the interrater validity and reliability of three different methods: weight of paper template (WPT), paper template area (PTA), and photographic analysis. Approach: Sixteen male Wistar rats had their cranially based dorsal skin flap elevated. On the seventh postoperative day, the viable tissue area and the necrotic area of the skin flap were recorded using the paper template method and photo imaging. The evaluation of the percentage of viable tissue was performed using the three methods, simultaneously and independently, by two raters. Interrater reliability was analyzed using the intraclass correlation coefficient, and Bland-Altman plot analysis was used to visualize the presence or absence of systematic bias in the evaluations of data validity. Results: The interrater reliability for WPT, measurement of PTA, and photographic analysis was 0.995, 0.990, and 0.982, respectively. For data validity, a correlation >0.90 was observed for all comparisons made between the three methods. In addition, Bland-Altman plot analysis showed agreement between the comparisons of the methods, and no systematic bias was observed. Innovation: Digital methods are an excellent choice for assessing skin flap viability; moreover, they make data use and storage easier. Conclusion: Independently of the method used, the interrater reliability and validity proved to be excellent for the analysis of skin flap viability.
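
A hedged sketch of the Bland-Altman agreement check mentioned in the abstract: for two raters' percent-viability readings, the mean bias and the 95% limits of agreement are computed. The data are hypothetical, not the study's measurements.

```python
# Bland-Altman limits of agreement between two raters (hypothetical readings, %).
import numpy as np

rater_a = np.array([61.2, 55.4, 70.1, 66.3, 58.9, 63.5, 72.0, 60.4])
rater_b = np.array([60.8, 56.0, 69.5, 66.9, 58.1, 64.0, 71.2, 61.0])

diff = rater_a - rater_b
bias = diff.mean()                       # systematic difference between raters
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

print(f"bias = {bias:.2f} %, 95% limits of agreement: [{loa_low:.2f}, {loa_high:.2f}] %")
```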

  1. Matrix-based system reliability method and applications to bridge networks

    International Nuclear Information System (INIS)

    Kang, W.-H.; Song Junho; Gardoni, Paolo

    2008-01-01

    Using a matrix-based system reliability (MSR) method, one can estimate the probabilities of complex system events by simple matrix calculations. Unlike existing system reliability methods whose complexity depends highly on that of the system event, the MSR method describes any general system event in a simple matrix form and therefore provides a more convenient way of handling the system event and estimating its probability. Even in the case where one has incomplete information on the component probabilities and/or the statistical dependence thereof, the matrix-based framework enables us to estimate the narrowest bounds on the system failure probability by linear programming. This paper presents the MSR method and applies it to a transportation network consisting of bridge structures. The seismic failure probabilities of bridges are estimated by use of the predictive fragility curves developed by a Bayesian methodology based on experimental data and existing deterministic models of the seismic capacity and demand. Using the MSR method, the probability of disconnection between each city/county and a critical facility is estimated. The probability mass function of the number of failed bridges is computed as well. In order to quantify the relative importance of bridges, the MSR method is used to compute the conditional probabilities of bridge failures given that there is at least one city disconnected from the critical facility. The bounds on the probability of disconnection are also obtained for cases with incomplete information
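
A toy sketch of the matrix idea, assuming statistically independent components and a made-up three-bridge network (the linear-programming bounds for incomplete information are not shown): the system event is encoded as a 0/1 event vector over the component-state vectors, and its probability is the inner product with the vector of state probabilities.

```python
# Toy matrix-based system event probability (independent components assumed).
import itertools
import numpy as np

p_fail = np.array([0.10, 0.05, 0.20])        # hypothetical bridge failure probabilities

def system_fails(state):
    # Illustrative topology: the network is disconnected if bridge 0 fails,
    # or if bridges 1 and 2 fail together.
    return state[0] == 1 or (state[1] == 1 and state[2] == 1)

states = list(itertools.product([0, 1], repeat=len(p_fail)))      # 0 = survives, 1 = fails
c = np.array([1.0 if system_fails(s) else 0.0 for s in states])   # event vector
p = np.array([np.prod([p_fail[i] if s[i] else 1 - p_fail[i]
                       for i in range(len(p_fail))]) for s in states])  # state probabilities

print("P(system failure) =", round(float(c @ p), 6))
```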

  2. An off-line two-dimensional analytical procedure for determination of polycyclic aromatic hydrocarbons in smoke aerosol

    NARCIS (Netherlands)

    Claessens, H.A.; Lammerts van Bueren, L.G.D.

    1987-01-01

    Smoke aerosol from stoves consists of a wide variety of chemical substances, a number of which have toxic properties. To study the impact of aerosol emissions on health and the environment, reliable analytical procedures must be available for these samples. An off-line two-dimensional HPLC method is

  3. Development of an analytical method to quantify total isoflavones in phytotherapic capsules using high-performance liquid chromatography

    Directory of Open Access Journals (Sweden)

    Liliane C. C. Auwerter

    2012-12-01

    Full Text Available Isoflavones can be found in the grains and leaves of soybean. Currently, these are sold in pharmacies as phytotherapic capsules. Isoflavones have been recommended by doctors, especially for women, due to their ability to relieve menopause symptoms, among other benefits. However, no official method exists for the control of isoflavone content in capsules sold in the Brazilian market. This study aims to develop an appropriate analytical method to determine the total isoflavone content (daidzin, glycitin, and genistin, and their respective aglycone forms) in phytotherapic capsules purchased in pharmacies in Curitiba, Parana State, Brazil, using high-performance liquid chromatography with UV detection (UV-HPLC). The HPLC system consisted of a quaternary pump, an autosampler, and a Waters reversed-phase C18 column (5 μm × 300 mm). Analyses were carried out at 40 °C, using a flow rate of 1.0 mL/min (acetonitrile and acetic acid 0.1%), and detection was performed at 254 nm. The method was validated as required by ANVISA and proved reliable for the following parameters: linearity (r² > 0.99), selectivity (correlation between 0.99 and 1.00), precision (relative standard deviation < 1.59%), accuracy (recovery from 80% to 111.63% intraday and from 80% to 117.88% interday), and robustness.

  5. Reliability Study Regarding the Use of Histogram Similarity Methods for Damage Detection

    Directory of Open Access Journals (Sweden)

    Nicoleta Gillich

    2013-01-01

    Full Text Available The paper analyses the reliability of three dissimilarity estimators for comparing histograms, as support for a frequency-based damage detection method able to identify structural changes in beam-like structures. First, a brief presentation of the damage detection method developed by the authors is made, with focus on damage localization. It consists in comparing a histogram derived from measurement results with a large series of histograms, namely the damage location indexes for all locations along the beam, obtained by calculation. We tested dissimilarity estimators such as the Minkowski-form distances, the Kullback-Leibler divergence and the histogram intersection, and found the Minkowski distance to be the method providing the best results. It was tested for numerous locations, using real measurement results as well as results artificially degraded by noise, proving its reliability.
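
The three dissimilarity estimators compared in the paper can be sketched on normalized histograms as below; the example vectors are hypothetical damage-location-index histograms, not data from the study.

```python
# Three histogram (dis)similarity estimators on normalized histograms (hypothetical data).
import numpy as np

def minkowski(p, q, r=2):
    return np.sum(np.abs(p - q) ** r) ** (1.0 / r)

def kl_divergence(p, q, eps=1e-12):
    p, q = p + eps, q + eps                      # avoid division by, or log of, zero
    return np.sum(p * np.log(p / q))

def histogram_intersection(p, q):
    return np.sum(np.minimum(p, q))              # similarity in [0, 1] for normalized histograms

measured = np.array([0.05, 0.10, 0.30, 0.35, 0.15, 0.05])
candidate = np.array([0.06, 0.12, 0.28, 0.33, 0.16, 0.05])

print("Minkowski (r=2):", round(minkowski(measured, candidate), 4))
print("KL divergence  :", round(kl_divergence(measured, candidate), 4))
print("Intersection   :", round(histogram_intersection(measured, candidate), 4))
```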

  6. Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method

    Directory of Open Access Journals (Sweden)

    Shaoyun Ge

    2014-01-01

    Full Text Available In this paper we treat the reliability assessment of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a low-penetration simulation and a high-penetration simulation. The load shedding strategy and the simulation process are introduced in detail for each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.
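
A minimal Monte Carlo sketch in the same spirit, with hypothetical parameters rather than the paper's test system: a load point is fed by a main feeder and, when the feeder is out, by a distributed generator of limited availability and capacity; random state sampling then estimates the interruption probability and the expected curtailment.

```python
# Monte Carlo state sampling for one load point with DG backup (hypothetical parameters).
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000
feeder_unavail = 0.002          # probability the feeder is out at a random instant
dg_avail = 0.90                 # probability the DG can be started when needed
dg_capacity = 0.6               # fraction of the load the DG can carry
load_mw = 5.0

feeder_down = rng.random(n_samples) < feeder_unavail
dg_ok = rng.random(n_samples) < dg_avail

curtailed = np.where(feeder_down & ~dg_ok, load_mw,                      # total interruption
             np.where(feeder_down & dg_ok, load_mw * (1 - dg_capacity),  # partial supply by DG
                      0.0))

p_total_loss = float(np.mean(feeder_down & ~dg_ok))
eens = float(curtailed.mean())            # expected curtailment per sampled instant

print(f"P(total interruption) = {p_total_loss:.5f}")
print(f"Expected curtailment per sampled hour = {eens:.5f} MWh")
```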

  7. Reliability assessment of aging structures subjected to gradual and shock deteriorations

    International Nuclear Information System (INIS)

    Wang, Cao; Zhang, Hao; Li, Quanwang

    2017-01-01

    Civil structures and infrastructure facilities are susceptible to deterioration posed by the effects of natural hazards and aggressive environmental conditions. These factors may increase the risk of service interruption of infrastructures, and should be taken into account when assessing the structural reliability during an infrastructure's service life. Modeling the resistance deterioration process reasonably is the basis for structural reliability analysis. In this paper, a novel model is developed for describing the deterioration of aging structures. The deterioration is a combination of two stochastic processes: the gradual deterioration posed by environmental effects and the shock deterioration caused by severe load attacks. The dependency of the deterioration magnitude on the load intensity is considered. The Gaussian copula function is employed to help construct the joint distribution of correlated random variables. Semi-analytical methods are developed to assess the structural failure time and the number of significant load events (shocks) to failure. Illustrative examples are presented to demonstrate the applicability of the proposed model in structural reliability analysis. Parametric studies are performed to investigate the role of deterioration-load correlation in structural reliability. - Highlights: • A new resistance deterioration model for aging structures is proposed. • Time-dependent reliability analysis methods incorporating the proposed deterioration model are developed. • Parametric studies are performed to investigate the role of deterioration-load correlation in structural reliability.
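
The combined gradual-plus-shock idea can be sketched by simulation as below. All distributions and parameter values are assumptions for illustration (linear gradual loss, Poisson shock arrivals, Gumbel shock loads, damage increasing with load), not the paper's calibrated model or its semi-analytical solution.

```python
# Monte Carlo sketch of resistance deterioration with gradual loss plus load-correlated shocks.
import numpy as np

rng = np.random.default_rng(1)
n_sim, horizon = 20_000, 50.0          # simulations, service life in years
r0, grad_rate = 1.0, 0.004             # initial resistance, gradual loss per year
shock_rate = 0.3                       # shocks per year (Poisson arrivals)
failures = 0

for _ in range(n_sim):
    n_shocks = rng.poisson(shock_rate * horizon)
    times = np.sort(rng.uniform(0.0, horizon, n_shocks))
    shock_damage = 0.0
    failed = False
    for t in times:
        load = rng.gumbel(0.55, 0.08)                  # shock load intensity
        shock_damage += max(0.0, 0.02 * (load - 0.4))  # larger load -> larger damage increment
        resistance = r0 - grad_rate * t - shock_damage
        if load > resistance:                          # failure check at each shock
            failed = True
            break
    failures += failed

print("Estimated P(failure within 50 years) ~", round(failures / n_sim, 4))
```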

  8. Reliability testing of tendon disease using two different scanning methods in patients with rheumatoid arthritis

    DEFF Research Database (Denmark)

    Bruyn, George A W; Möller, Ingrid; Garrido, Jesus

    2012-01-01

    To assess the intra- and interobserver reliability of musculoskeletal ultrasonography (US) in detecting inflammatory and destructive tendon abnormalities in patients with RA using two different scanning methods.

  9. An Analytical Method for the Abel Inversion of Asymmetrical Gaussian Profiles

    International Nuclear Information System (INIS)

    Xu Guosheng; Wan Baonian

    2007-01-01

    An analytical algorithm for fast calculation of the Abel inversion for density profile measurement in tokamaks is developed. Based upon the assumptions that the particle source is negligibly small in the plasma core region, that density profiles can be approximated by an asymmetrical Gaussian distribution controlled by only one parameter V0/D, and that V0/D is constant along the radial direction, the analytical algorithm is presented and examined against a test profile. Its validity is confirmed by benchmarking against the standard Abel inversion method and the theoretical profile. The scope of application as well as the error analysis is also discussed in detail.
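
A small numerical check of the property such analytical inversions exploit, namely that the Abel transform of a Gaussian radial profile is again a Gaussian: the chord-integrated signal is inverted numerically through the standard inverse Abel integral and compared with the known radial profile. The symmetric profile and parameter values are illustrative and do not reproduce the paper's asymmetric-Gaussian algorithm.

```python
# Numerical inverse Abel transform of a Gaussian chord profile (illustrative check).
import numpy as np
from scipy.integrate import quad

a = 0.8                                    # Gaussian width parameter (hypothetical)

def chord(y):
    """Line-integrated (chord) signal: the Abel transform of exp(-r^2/a^2)."""
    return np.sqrt(np.pi) * a * np.exp(-(y / a) ** 2)

def d_chord(y, h=1e-5):
    """Numerical derivative dF/dy, as one would compute from measured chord data."""
    return (chord(y + h) - chord(y - h)) / (2 * h)

def abel_invert(r, y_max=6.0):
    # f(r) = -(1/pi) * int_r^{y_max} F'(y) / sqrt(y^2 - r^2) dy,
    # with the substitution y = sqrt(r^2 + u^2) removing the endpoint singularity.
    integrand = lambda u: d_chord(np.sqrt(r**2 + u**2)) / np.sqrt(r**2 + u**2)
    u_max = np.sqrt(max(y_max**2 - r**2, 0.0))
    val, _ = quad(integrand, 0.0, u_max)
    return -val / np.pi

for r in [0.0, 0.4, 0.8, 1.2]:
    print(f"r = {r:.1f}: inverted = {abel_invert(r):.4f}, exact = {np.exp(-(r / a) ** 2):.4f}")
```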

  10. Analytical methods used at IPR (Instituto de Pesquisas Radioativas - Minas Gerais, Brazil)

    International Nuclear Information System (INIS)

    Murta, C.C.

    The analytical methods available at IPR (MG-Brazil) for the routine determination of uranium are described. These methods are: gravimetric analysis, fluorescence spectroscopy, voltammetry, polarography, absorption spectroscopy, beta- and gamma-radiometric analysis, gamma spectroscopy, activation analysis, X-ray fluorescence analysis and delayed neutron analysis. Some additional methods for the study of mineral ores, such as X-ray diffractometry, emission spectroscopy, thermal analysis, etc., are also discussed [pt

  11. A Systematic Review of Statistical Methods Used to Test for Reliability of Medical Instruments Measuring Continuous Variables

    Directory of Open Access Journals (Sweden)

    Rafdzah Zaki

    2013-06-01

    Full Text Available Objective(s): Reliability measures precision, or the extent to which test results can be replicated. This is the first systematic review to identify the statistical methods used to measure the reliability of equipment measuring continuous variables. This study also aims to highlight inappropriate statistical methods used in reliability analyses and their implications for medical practice. Materials and Methods: In 2010, five electronic databases were searched for reliability studies published between 2007 and 2009. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and finally 42 fitted the inclusion criteria. Results: The intra-class correlation coefficient (ICC) is the most popular method, used in 25 (60%) studies, followed by comparison of means (8, or 19%). Of the 25 studies using the ICC, only 7 (28%) reported the confidence intervals and the type of ICC used. Most studies (71%) also tested the agreement of instruments. Conclusion: This review finds that the intra-class correlation coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue and to be able to correctly perform analyses in reliability studies.
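
For reference, the estimator the review found most common, ICC(2,1) (two-way random effects, absolute agreement, single measures), can be computed from its ANOVA decomposition as sketched below. The ratings are hypothetical, and only the point estimate is shown; the confidence interval the review asks for is omitted for brevity.

```python
# ICC(2,1) from the two-way ANOVA decomposition (Shrout & Fleiss form; hypothetical ratings).
import numpy as np

x = np.array([[10.2, 10.5, 10.1],      # rows = subjects, columns = raters
              [12.1, 12.4, 12.0],
              [ 9.8, 10.0,  9.7],
              [11.5, 11.9, 11.4],
              [10.9, 11.2, 10.8]])
n, k = x.shape

grand = x.mean()
row_means = x.mean(axis=1)
col_means = x.mean(axis=0)

msr = k * np.sum((row_means - grand) ** 2) / (n - 1)           # between-subjects mean square
msc = n * np.sum((col_means - grand) ** 2) / (k - 1)           # between-raters mean square
mse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2) \
      / ((n - 1) * (k - 1))                                    # residual mean square

icc21 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print("ICC(2,1) =", round(float(icc21), 3))
```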

  12. 3D analytical method for the external dynamics of ship collisions and investigation of the coefficient of restitution

    Directory of Open Access Journals (Sweden)

    LIU Junfeng

    2017-03-01

    Full Text Available The analytical method for predicting the dynamic responses of a ship in a collision scenario offers both speed and accuracy, and the external dynamics constitute an important part of it. A 3D simplified analytical method is implemented in MATLAB and used to calculate the energy dissipation of ship-ship collisions. The results obtained by the proposed method are then compared with those of a 2D simplified analytical method. The total dissipated energy can be obtained through the proposed analytical method, and the influence of the collision heights, angles and locations on the dissipated energy is discussed on that basis. Furthermore, the effects of restitution on the conservative coefficients and the effects of the conservative coefficients on energy dissipation are discussed. It is concluded that the proposed 3D analysis yields a smaller energy dissipation than the 2D analysis, and that the collision height has a significant influence on the dissipated energy. When using the proposed simplified method, it is not safe to simplify the conservative coefficient as zero when the collision angle is greater than 90 degrees. In future research, adopting the 3D simplified analytical method instead of the 2D method is a good way to obtain more accurate energy dissipation.

  13. A Sequential Kriging reliability analysis method with characteristics of adaptive sampling regions and parallelizability

    International Nuclear Information System (INIS)

    Wen, Zhixun; Pei, Haiqing; Liu, Hai; Yue, Zhufeng

    2016-01-01

    The sequential Kriging reliability analysis (SKRA) method has been developed in recent years for nonlinear implicit response functions that are expensive to evaluate. This type of method includes EGRA, the efficient global reliability analysis method, and AK-MCS, the active learning reliability method combining a Kriging model and Monte Carlo simulation. The purpose of this paper is to improve SKRA through adaptive sampling regions and parallelizability. The adaptive sampling regions strategy is proposed to avoid selecting samples in regions where the probability density is so low that their accuracy has negligible effects on the results. The size of the sampling regions is adapted according to the failure probability calculated in the last iteration. Two parallel strategies, aimed at selecting multiple sample points at a time, are introduced and compared. The improvement is verified through several troublesome examples. - Highlights: • The ISKRA method improves the efficiency of SKRA. • The adaptive sampling regions strategy reduces the number of needed samples. • The two parallel strategies reduce the number of needed iterations. • The accuracy of the optimal value impacts the number of samples significantly.
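
A compact AK-MCS-style sketch of the sequential Kriging idea, using a toy limit state, scikit-learn's Gaussian process as the Kriging model, the U learning function for enrichment, and a fixed sample budget instead of the usual stopping rule; it does not reproduce the paper's ISKRA adaptive-region or parallel strategies.

```python
# AK-MCS-style active learning with a Kriging (Gaussian process) surrogate (toy example).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(42)

def limit_state(x):                       # g(x) <= 0 means failure (cheap toy function)
    return 7.0 - x[:, 0] ** 2 - 2.0 * x[:, 1]

population = rng.normal(size=(20_000, 2))            # Monte Carlo population
X = rng.normal(size=(12, 2))                         # initial design of experiments
y = limit_state(X)

for _ in range(30):                                   # adaptive enrichment iterations
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-8, normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(population, return_std=True)
    u = np.abs(mu) / np.maximum(sigma, 1e-12)         # U learning function
    best = np.argmin(u)                               # sample with most uncertain sign
    X = np.vstack([X, population[best]])
    y = np.append(y, limit_state(population[best:best + 1]))

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-8, normalize_y=True).fit(X, y)
mu, _ = gp.predict(population, return_std=True)
pf = float(np.mean(mu <= 0.0))                        # failure probability from the surrogate
pf_ref = float(np.mean(limit_state(population) <= 0))  # crude MC reference (cheap here only)
print("surrogate P_f =", pf, " crude MC P_f =", pf_ref)
```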

  14. The chemical speciation and analysis of trace elements in sediment with Neutron Activation Analytical method (NAA) and atomic mass spectrometry

    International Nuclear Information System (INIS)

    Nam, Sang-Ho; Kim, Jae-Jin; Chung, Yong-Sam; Kim, Sun-Ha

    2003-01-01

    In this research, analytical methods for the determination of major elements in sediment were first developed with ICP-MS (Inductively Coupled Plasma Mass Spectrometry). The analytical results for the major elements (Al, Ca, K, Fe, Mg) with Cool ICP-MS were much better than those with normal ICP-MS. The results were compared with those of NAA (Neutron Activation Analysis); as a non-destructive trace analytical method, NAA was slightly superior to ICP-MS for the determination of major elements in sediment. Analytical methods for the determination of minor elements (Cr, Ce, U, Co, Pb, As, Se) were also developed with ICP-MS. The analytical results obtained by a standard calibration curve with ICP-MS were not accurate due to matrix interferences. The internal standard method was therefore applied, and the analytical results for the minor elements with ICP-MS were greatly improved. The results obtained by ICP-MS were compared with those obtained by NAA, showing that the two analytical methods have great capabilities for the determination of minor elements in sediments. Accordingly, NAA will play an important role in the analysis of environmental samples with complex matrices. ICP-MS will also play an important role because it has a great capability for the determination of Pb, which could not be determined by NAA

  15. A Validated Reverse Phase HPLC Analytical Method for Quantitation of Glycoalkaloids in Solanum lycocarpum and Its Extracts

    Directory of Open Access Journals (Sweden)

    Renata Fabiane Jorge Tiossi

    2012-01-01

    Full Text Available Solanum lycocarpum (Solanaceae) is native to the Brazilian Cerrado. Fruits of this species contain the glycoalkaloids solasonine (SN) and solamargine (SM), which display antiparasitic and anticancer properties. A method has been developed for the extraction and HPLC-UV analysis of SN and SM in different parts of S. lycocarpum, mainly comprising ripe and unripe fruits, leaf, and stem. This analytical method was validated and gave a good detection response, with linearity over a dynamic range of 0.77–1000.00 μg mL−1 and recovery in the range of 80.92–91.71%, allowing reliable quantitation of the target compounds. Unripe fruits displayed higher concentrations of glycoalkaloids (1.04% ± 0.01 of SN and 0.69% ± 0.00 of SM) than the ripe fruits (0.83% ± 0.02 of SN and 0.60% ± 0.01 of SM). Quantitation of the glycoalkaloids in the alkaloidic extract gave 45.09% ± 1.14 of SN and 44.37% ± 0.60 of SM, respectively.

  16. Screening, sensitivity, and uncertainty for the CREAM method of Human Reliability Analysis

    International Nuclear Information System (INIS)

    Bedford, Tim; Bayley, Clare; Revie, Matthew

    2013-01-01

    This paper reports a sensitivity analysis of the Cognitive Reliability and Error Analysis Method (CREAM) for Human Reliability Analysis. We consider three different aspects: the difference between the outputs of the Basic and Extended methods on the same HRA scenario; the variability in outputs through the choices made for common performance conditions (CPCs); and the variability in outputs through the assignment of choices for cognitive function failures (CFFs). We discuss the problem of interpreting categories when applying the method, compare its quantitative structure to that of first-generation methods, and also discuss how dependence is modelled within the approach. We show that the control mode intervals used in the Basic method are too narrow to be consistent with the Extended method. This motivates a new screening method that gives improved accuracy with respect to the Basic method, in the sense that it (on average) halves the uncertainty associated with the Basic method. We make some observations on the design of a screening method that are generally applicable in Risk Analysis. Finally, we propose a new method of combining CPC weights with nominal probabilities so that the calculated probabilities are always in range (i.e. between 0 and 1), while satisfying sensible properties that are consistent with the overall CREAM method

  17. Assessment of reliability of Greulich and Pyle (gp) method for ...

    African Journals Online (AJOL)

    Background: Greulich and Pyle standards are the most widely used age estimation standards all over the world. The applicability of the Greulich and Pyle standards to populations which differ from their reference population is often questioned. This study aimed to assess the reliability of Greulich and Pyle (GP) method for ...

  18. Proceeding of 35th domestic symposium on applications of structural reliability and risk assessment methods to nuclear power plants

    International Nuclear Information System (INIS)

    2005-06-01

    As the 35th domestic symposium of the Atomic Energy Research Committee of the Japan Welding Engineering Society, the symposium was held under the title 'Applications of structural reliability/risk assessment methods to nuclear energy'. Six speakers gave lectures titled 'Structural reliability and risk assessment methods', 'Risk-informed regulation of US nuclear energy and the role of probabilistic risk assessment', 'Reliability and risk assessment methods in chemical plants', 'Practical structural design methods based on reliability in the architectural and civil areas', 'Maintenance activities based on reliability in thermal power plants' and 'LWR maintenance strategies based on Probabilistic Fracture Mechanics'. (T. Tanaka)

  19. Contextual and Analytic Qualities of Research Methods Exemplified in Research on Teaching

    Science.gov (United States)

    Svensson, Lennart; Doumas, Kyriaki

    2013-01-01

    The aim of the present article is to discuss contextual and analytic qualities of research methods. The arguments are specified in relation to research on teaching. A specific investigation is used as an example to illustrate the general methodological approach. It is argued that research methods should be carefully grounded in an understanding of…

  20. Application of the invariant embedding method to analytically solvable transport problems

    Energy Technology Data Exchange (ETDEWEB)

    Wahlberg, Malin

    2005-05-01

    The applicability and performance of the invariant embedding method for calculating various transport quantities is investigated in this thesis. The invariant embedding method is a technique for calculating the reflected or transmitted fluxes in homogeneous half-spaces and slabs without the need to solve for the flux inside the medium. In return, the embedding equations become non-linear, and in practical cases they need to be solved by numerical methods. There are, however, fast and effective iterative methods available for this purpose. The objective of this thesis is to investigate the performance of these iterative methods in model problems for which an exact analytical solution can also be obtained. Some of these analytical solutions are new, hence their derivation constitutes a part of the thesis work. The cases investigated in the thesis all concern the calculation of reflected fluxes from half-spaces. The first problem treated was the calculation of the energy spectrum of reflected (sputtered) particles from a multiplying medium, where the multiplication arises from recoil production (i.e. like binary fission), when bombarded by a flux of monoenergetic particles of the same type. Both constant cross sections and energy-dependent cross sections with a power law dependence were used in the calculations. The second class of problems concerned the calculation of the path length distribution of reflected particles from a medium without multiplication. It is an interesting new observation that the distribution of the path length travelled in the medium before reflection can be calculated with invariant embedding methods, which actually do not solve the flux distribution in the medium. We have tested the accuracy and the convergence properties of the embedding method for this case as well. Finally, very recently a theory connecting the infinite and half-space medium solutions by embedding-like integral equations was developed and reported in the literature

  2. Stress analysis of R2 pressure vessel. Structural reliability benchmark exercise

    International Nuclear Information System (INIS)

    Vestergaard, N.

    1987-05-01

    The Structural Reliability Benchmark Exercise (SRBE) is sponsored by the EEC as part of the Reactor Safety Programme. The objectives of the SRBE are to evaluate and improve 1) inspection procedures, which use non-destructive methods to locate defects in pressure (reactor) vessels, as well as 2) analytical damage accumulation models, which predict the time to failure of vessels containing defects. In order to focus attention, an experimental pressure vessel has been inspected, subjected to fatigue loadings and subsequently analysed by several teams using methods of their choice. The present report contains the first part of the analytical damage accumulation analysis. The stress distributions in the welds of the experimental pressure vessel were determined. These stress distributions will be used to determine the driving forces of the damage accumulation models, which will be addressed in a future report. (author)

  3. Comparison of three analytical methods for the determination of trace elements in whole blood

    International Nuclear Information System (INIS)

    Ward, N.I.; Stephens, R.; Ryan, D.E.

    1979-01-01

    Three different analytical techniques were compared in a study of the role of trace elements in multiple sclerosis. Data for eight elements (Cd, Co, Cr, Cu, Mg, Mn, Pb, Zn) from neutron activation, flame atomic absorption and electrothermal atomic absorption methods were compared and evaluated statistically. No difference (probability less than 0.001) was observed in the elemental values obtained. Comparison of data between suitably different analytical methods gives increased confidence in the results obtained and is of particular value when standard reference materials are not available. (Auth.)

  4. GenoSets: visual analytic methods for comparative genomics.

    Directory of Open Access Journals (Sweden)

    Aurora A Cain

    Full Text Available Many important questions in biology are, fundamentally, comparative, and this extends to our analysis of a growing number of sequenced genomes. Existing genomic analysis tools are often organized around literal views of genomes as linear strings. Even when information is highly condensed, these views grow cumbersome as larger numbers of genomes are added. Data aggregation and summarization methods from the field of visual analytics can provide abstracted comparative views, suitable for sifting large multi-genome datasets to identify critical similarities and differences. We introduce a software system for visual analysis of comparative genomics data. The system automates the process of data integration, and provides the analysis platform to identify and explore features of interest within these large datasets. GenoSets borrows techniques from business intelligence and visual analytics to provide a rich interface of interactive visualizations supported by a multi-dimensional data warehouse. In GenoSets, visual analytic approaches are used to enable querying based on orthology, functional assignment, and taxonomic or user-defined groupings of genomes. GenoSets links this information together with coordinated, interactive visualizations for both detailed and high-level categorical analysis of summarized data. GenoSets has been designed to simplify the exploration of multiple genome datasets and to facilitate reasoning about genomic comparisons. Case examples are included showing the use of this system in the analysis of 12 Brucella genomes. GenoSets software and the case study dataset are freely available at http://genosets.uncc.edu. We demonstrate that the integration of genomic data using a coordinated multiple view approach can simplify the exploration of large comparative genomic data sets, and facilitate reasoning about comparisons and features of interest.

  5. Net Analyte Signal Standard Additions Method for Simultaneous Determination of Sulfamethoxazole and Trimethoprim in Pharmaceutical Formulations and Biological Fluids

    OpenAIRE

    Givianrad, M. H.; Mohagheghian, M.

    2012-01-01

    The applicability of a novel net analyte signal standard addition method (NASSAM) to the resolving of overlapping spectra corresponding to the sulfamethoxazole and trimethoprim was verified by UV-visible spectrophotometry. The results confirmed that the net analyte signal standard additions method with simultaneous addition of both analytes is suitable for the simultaneous determination of sulfamethoxazole and trimethoprim in aqueous media. Moreover, applying the net analyte signal standard a...

  6. Reliability of power system with open access

    International Nuclear Information System (INIS)

    Ehsani, A.; Ranjbar, A. M.; Fotuhi Firuzabad, M.; Ehsani, M.

    2003-01-01

    Recently, in many countries, the electric utility industry has been undergoing considerable changes with regard to its structure and regulation. It can be clearly seen that the thrust towards privatization and deregulation or re-regulation of the electric utility industry will introduce numerous reliability problems that will require new criteria and analytical tools recognizing the residual uncertainties in the new environment. In this paper, different risks and uncertainties in competitive electricity markets are briefly introduced; the approach of customers, operators, planners, generation bodies and network providers to the reliability of the deregulated system is studied; the impact of dispersed generation on system reliability is evaluated; and finally, the reliability cost/reliability worth issues in the new competitive environment are considered

  7. A Quantitative Analytical Method to Test for Salt Effects on Giant Unilamellar Vesicles

    DEFF Research Database (Denmark)

    Hadorn, Maik; Bönzli, Eva; Eggenberger Hotz, Peter

    2011-01-01

    preparation method with automatic haemocytometry. We found that this new quantitative screening method is highly reliable and consistent with previously reported results. Thus, this method may provide a significant methodological advance in analysis of effects on free-standing model membranes....

  8. Current status and future advancements in analytical post-factum detection of food irradiation within the EU and in global trade

    International Nuclear Information System (INIS)

    Boegl, K.W.

    1999-01-01

    The paper summarizes and explains officially approved methods for reliable analytical detection of illegal or non-labelled treatment of foods with ionizing radiation, referring inter alia to a German study published in 1997. (orig./CB) [de

  9. Analytical methods for the determination of mixtures of bisphenols and derivatives in human and environmental exposure sources and biological fluids. A review

    International Nuclear Information System (INIS)

    Caballero-Casero, N.; Lunar, L.; Rubio, S.

    2016-01-01

    Bisphenol A (BPA) is ubiquitous in humans and the environment. Its potential adverse effects through genomic and non-genomic pathways have fostered BPA replacement by bisphenol analogs that, unfortunately, exert similar adverse effects. Many of these analogs, as well as their derivatives, have already been found in humans and the environment, and major concerns have arisen over their low dose- and mixture-related effects. This review aims to discuss the characteristics of the main analytical methods reported so far for the determination of mixtures of bisphenol analogs and/or derivatives in human and environmental exposure sources and biological fluids. Approaches followed for removal of background contamination, sample preparation, and separation and detection of mixtures of bisphenols and derivatives are critically discussed. Sample treatment is matrix-dependent and common steps include analyte isolation, removal of interferences, evaporation of the extracts and solvent reconstitution. Separation and quantification have been almost exclusively carried out by liquid chromatography tandem mass spectrometry (LC-MS/MS) or gas chromatography mass spectrometry (GC–MS), in the latter case with prior derivatization, but LC-fluorescence detection has also found some applications. The main characteristics, advantages and drawbacks of these methods are comparatively discussed. Although at an early stage, some approaches for the assessment of the risk from mixtures of bisphenols, mainly based on the combination of chemical target analysis and toxicity evaluation, have already been applied and are presented here. Current knowledge gaps hindering a reliable assessment of human and environmental risk from mixtures of bisphenols and derivatives are outlined. - Highlights: • Analytical methods for the (bio)monitoring of mixtures of bisphenols are reviewed. • LC and GC coupled to MS are the preferred techniques. • Method-dependent sample treatments are required to remove matrix

  10. Analytical methods for the determination of mixtures of bisphenols and derivatives in human and environmental exposure sources and biological fluids. A review

    Energy Technology Data Exchange (ETDEWEB)

    Caballero-Casero, N.; Lunar, L.; Rubio, S., E-mail: qa1rubrs@uco.es

    2016-02-18

    Bisphenol A (BPA) is ubiquitous in humans and the environment. Its potential adverse effects through genomic and non-genomic pathways have fostered BPA replacement by bisphenol analogs that, unfortunately, exert similar adverse effects. Many of these analogs, as well as their derivatives, have already been found in humans and the environment, and major concerns have arisen over their low dose- and mixture-related effects. This review aims to discuss the characteristics of the main analytical methods reported so far for the determination of mixtures of bisphenol analogs and/or derivatives in human and environmental exposure sources and biological fluids. Approaches followed for removal of background contamination, sample preparation, and separation and detection of mixtures of bisphenols and derivatives are critically discussed. Sample treatment is matrix-dependent and common steps include analyte isolation, removal of interferences, evaporation of the extracts and solvent reconstitution. Separation and quantification have been almost exclusively carried out by liquid chromatography tandem mass spectrometry (LC-MS/MS) or gas chromatography mass spectrometry (GC–MS), in the latter case with prior derivatization, but LC-fluorescence detection has also found some applications. The main characteristics, advantages and drawbacks of these methods are comparatively discussed. Although at an early stage, some approaches for the assessment of the risk from mixtures of bisphenols, mainly based on the combination of chemical target analysis and toxicity evaluation, have already been applied and are presented here. Current knowledge gaps hindering a reliable assessment of human and environmental risk from mixtures of bisphenols and derivatives are outlined. - Highlights: • Analytical methods for the (bio)monitoring of mixtures of bisphenols are reviewed. • LC and GC coupled to MS are the preferred techniques. • Method-dependent sample treatments are required to remove matrix

  11. Application of nuclear analytical methods to heavy metal pollution studies of estuaries

    International Nuclear Information System (INIS)

    Anders, B.; Junge, W.; Knoth, J.; Michaelis, W.; Pepelnik, R.; Schwenke, H.

    1984-01-01

    Important objectives of heavy metal pollution studies of estuaries are the understanding of the transport phenomena in these complex ecosystems and the discovery of the pollution history and the geochemical background. Such studies require high precision and accuracy of the analytical methods. Moreover, pronounced spatial heterogeneities and temporal variabilities that are typical for estuaries necessitate the analysis of a great number of samples if relevant results are to be obtained. Both requirements can economically be fulfilled by a proper combination of analytical methods. Applications of energy-dispersive X-ray fluorescence analysis with total reflection of the exciting beam at the sample support and of neutron activation analysis with both thermal and fast neutrons are reported in the light of pollution studies performed in the Lower Elbe River. (orig.)

  12. Reliability analysis of idealized tunnel support system using probability-based methods with case studies

    Science.gov (United States)

    Gharouni-Nik, Morteza; Naeimi, Meysam; Ahadi, Sodayf; Alimoradi, Zahra

    2014-06-01

    In order to determine the overall safety of a tunnel support lining, a reliability-based approach is presented in this paper. Support elements in jointed rock tunnels are provided to control the ground movement caused by stress redistribution during the tunnel drive. The main support elements that contribute to the stability of the tunnel structure are identified in order to address various aspects of reliability and sustainability in the system. The selection of efficient support methods for rock tunneling is a key factor in reducing the number of problems during construction and keeping the project cost and time within the limited budget and planned schedule. This paper introduces an approach by which decision-makers are able to find the overall reliability of a tunnel support system before selecting the final scheme of the lining system. Given this research focus, engineering reliability, a branch of statistics and probability, is applied to the field, and much effort has been made to use it in tunneling by investigating the reliability of the lining support system for the tunnel structure. Therefore, reliability analysis for evaluating the tunnel support performance is the main idea used in this research. Decomposition approaches are used for producing the system block diagram and determining the failure probability of the whole system. The effectiveness of the proposed reliability model of the tunnel lining, together with the recommended approaches, is examined using several case studies, and the final value of reliability is obtained for different design scenarios. Considering the idea of a linear correlation between safety factors and reliability parameters, the values of isolated reliabilities are determined for different structural components of the tunnel support system. In order to determine individual safety factors, finite element modeling is employed for different structural subsystems and the results of numerical analyses are obtained in
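    To make the decomposition idea concrete, the following sketch combines hypothetical component reliabilities through series and parallel blocks of an idealized lining block diagram; the component values, the grouping and the independence assumption are illustrative and are not taken from the case studies in the paper.

        # Hypothetical illustration: system reliability from a block diagram that
        # reduces to series and parallel groups of independent components.
        def series(*rs):
            out = 1.0
            for r in rs:
                out *= r
            return out

        def parallel(*rs):
            out = 1.0
            for r in rs:
                out *= (1.0 - r)
            return 1.0 - out

        # hypothetical component reliabilities for lining subsystems
        r_shotcrete, r_rockbolts, r_steel_sets = 0.97, 0.95, 0.90

        # e.g. rock bolts and steel sets acting as a redundant (parallel) group,
        # in series with the shotcrete shell
        r_system = series(r_shotcrete, parallel(r_rockbolts, r_steel_sets))
        print(f"system reliability: {r_system:.4f}, failure probability: {1 - r_system:.4f}")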

  13. Evaluation and selection of in-situ leaching mining method using analytic hierarchy process

    International Nuclear Information System (INIS)

    Zhao Heyong; Tan Kaixuan; Liu Huizhen

    2007-01-01

    According to the complicated conditions and main influencing factors of in-situ leaching mining, a model and procedure of analytic hierarchy are established for the evaluation and selection of in-situ leaching mining methods based on the analytic hierarchy process. Taking a uranium mine in Xinjiang, China, as an example, the application of this model is presented. The results of the analyses and calculations indicate that acid leaching is the optimum option. (authors)
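    The following minimal sketch shows the core AHP computation the record refers to: priority weights from the principal eigenvector of a pairwise comparison matrix, plus a consistency check. The 3x3 judgment matrix is hypothetical, not the one used for the Xinjiang mine.

        # AHP sketch: priority weights from the principal eigenvector and the
        # consistency ratio of a (hypothetical) pairwise comparison matrix.
        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)                 # principal eigenvalue index
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                                # normalized priority weights

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty random index
        print("weights:", np.round(w, 3), "CR:", round(ci / ri, 3))  # CR < 0.1 is acceptable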

  14. SRC-I demonstration plant analytical laboratory methods manual. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    Klusaritz, M.L.; Tewari, K.C.; Tiedge, W.F.; Skinner, R.W.; Znaimer, S.

    1983-03-01

    This manual is a compilation of analytical procedures required for operation of a Solvent-Refined Coal (SRC-I) demonstration or commercial plant. Each method reproduced in full includes a detailed procedure, a list of equipment and reagents, safety precautions, and, where possible, a precision statement. Procedures for the laboratory's environmental and industrial hygiene modules are not included. Required American Society for Testing and Materials (ASTM) methods are cited, and ICRC's suggested modifications to these methods for handling coal-derived products are provided.

  15. Coupling impedance of an in-vacuum undulator: Measurement, simulation, and analytical estimation

    Directory of Open Access Journals (Sweden)

    Victor Smaluk

    2014-07-01

    Full Text Available One of the important issues of the in-vacuum undulator design is the coupling impedance of the vacuum chamber, which includes tapered transitions with variable gap size. To get complete and reliable information on the impedance, an analytical estimate, numerical simulations, and beam-based measurements have been performed at Diamond Light Source, a forthcoming upgrade of which includes introducing additional insertion device (ID) straights. The impedance of an already existing ID vessel geometrically similar to the new one has been measured using the orbit bump method. The measurement results in comparison with analytical estimations and numerical simulations are discussed in this paper.

  16. Quantitative Evaluation of Aged AISI 316L Stainless Steel Sensitization to Intergranular Corrosion: Comparison Between Microstructural Electrochemical and Analytical Methods

    Science.gov (United States)

    Sidhom, H.; Amadou, T.; Sahlaoui, H.; Braham, C.

    2007-06-01

    The evaluation of the degree of sensitization (DOS) to intergranular corrosion (IGC) of a commercial AISI 316L austenitic stainless steel aged at temperatures ranging from 550 °C to 800 °C for 100 to 80,000 hours was carried out using three different assessment methods. (1) The microstructural method coupled with the Strauss standard test (ASTM A262). This method establishes the kinetics of the precipitation phenomenon under different aging conditions, by transmission electron microscope (TEM) examination of thin foils and electron diffraction. The subsequent chromium-depleted zones are characterized by X-ray microanalysis using a scanning transmission electron microscope (STEM). The superimposition of microstructural time-temperature-precipitation (TTP) and ASTM A262 time-temperature-sensitization (TTS) diagrams provides the relationship between aged microstructure and IGC. Moreover, by considering the chromium-depleted zone characteristics, sensitization and desensitization criteria could be established. (2) The electrochemical method involving the double loop-electrochemical potentiokinetic reactivation (DL-EPR) test. The operating conditions of this test were initially optimized using the experimental design method on the basis of the reliability, selectivity, and reproducibility of test responses for both annealed and sensitized steels. The TTS diagram of the AISI 316L stainless steel was established using this method. This diagram offers a quantitative assessment of the DOS and a possibility to appreciate the time-temperature equivalence of the IGC sensitization and desensitization. (3) The analytical method based on the chromium diffusion models. Using the IGC sensitization and desensitization criteria established by the microstructural method, numerical solving of the chromium diffusion equations leads to a calculated AISI 316L TTS diagram. Comparison of these three methods gives a clear advantage to the nondestructive DL-EPR test when it is

  17. Experimental design and multiple response optimization. Using the desirability function in analytical methods development.

    Science.gov (United States)

    Candioti, Luciana Vera; De Zan, María M; Cámara, María S; Goicoechea, Héctor C

    2014-06-01

    A review about the application of response surface methodology (RSM) when several responses have to be simultaneously optimized in the field of analytical methods development is presented. Several critical issues like response transformation, multiple response optimization and modeling with least squares and artificial neural networks are discussed. Most recent analytical applications are presented in the context of analytical methods development, especially in multiple response optimization procedures using the desirability function. Copyright © 2014 Elsevier B.V. All rights reserved.
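    As a brief illustration of the desirability approach discussed in the review, the sketch below maps two hypothetical responses onto Derringer-type desirability scores and combines them with a geometric mean; the target ranges and response values are invented for illustration.

        # Derringer-type desirability aggregation (illustrative targets and responses).
        import numpy as np

        def d_maximize(y, low, high, s=1.0):
            """Desirability for a response to be maximized between low and high."""
            return float(np.clip((y - low) / (high - low), 0.0, 1.0) ** s)

        def d_minimize(y, low, high, s=1.0):
            """Desirability for a response to be minimized between low and high."""
            return float(np.clip((high - y) / (high - low), 0.0, 1.0) ** s)

        # e.g. maximize resolution (acceptable range 1.0-2.5) and minimize run time (5-20 min)
        d1 = d_maximize(y=2.1, low=1.0, high=2.5)
        d2 = d_minimize(y=9.0, low=5.0, high=20.0)
        D = (d1 * d2) ** 0.5          # overall desirability (geometric mean)
        print(round(d1, 3), round(d2, 3), round(D, 3))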

  18. Analytic solution to verify code predictions of two-phase flow in a boiling water reactor core channel

    International Nuclear Information System (INIS)

    Chen, K.F.; Olson, C.A.

    1983-01-01

    One reliable method that can be used to verify the solution scheme of a computer code is to compare the code prediction to a simplified problem for which an analytic solution can be derived. An analytic solution for the axial pressure drop as a function of the flow was obtained for the simplified problem of homogeneous equilibrium two-phase flow in a vertical, heated channel with a cosine axial heat flux shape. This analytic solution was then used to verify the predictions of the CONDOR computer code, which is used to evaluate the thermal-hydraulic performance of boiling water reactors. The results show excellent agreement between the analytic solution and CONDOR prediction
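    A simplified numerical counterpart of such a benchmark is sketched below: a homogeneous-equilibrium pressure-drop integration along a vertical heated channel with a chopped-cosine axial heat-flux profile. The geometry, mass flux and fluid properties are hypothetical round numbers and the constant friction factor is a simplification, so this is not the paper's analytic solution or the CONDOR benchmark itself.

        # Homogeneous-equilibrium pressure drop in a heated channel (illustrative inputs).
        import numpy as np

        def trap(y, x):
            """Trapezoidal integral of samples y over grid x."""
            return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

        L, D = 3.0, 0.012                  # heated length [m], hydraulic diameter [m]
        G = 1500.0                         # mass flux [kg/(m^2 s)]
        q_avg = 6.0e5                      # channel-average heat flux [W/m^2]
        h_in, h_f, h_fg = 1.20e6, 1.26e6, 1.50e6   # inlet/saturation enthalpy, latent heat [J/kg]
        rho_f, rho_g = 740.0, 37.0         # saturated liquid/vapour densities [kg/m^3]
        f, g = 0.02, 9.81                  # constant friction factor [-], gravity [m/s^2]

        z = np.linspace(0.0, L, 2001)
        q = q_avg * (np.pi / 2.0) * np.sin(np.pi * z / L)   # chopped-cosine profile, same average

        # energy balance -> enthalpy rise, equilibrium quality, homogeneous mixture density
        dh = (4.0 / (D * G)) * np.concatenate(([0.0], np.cumsum(0.5 * (q[1:] + q[:-1]) * np.diff(z))))
        x = np.clip((h_in + dh - h_f) / h_fg, 0.0, 1.0)
        rho_m = 1.0 / (x / rho_g + (1.0 - x) / rho_f)

        dp_fric = trap(f * G**2 / (2.0 * D * rho_m), z)     # wall friction
        dp_grav = trap(rho_m * g, z)                        # hydrostatic head
        dp_acc = G**2 * (1.0 / rho_m[-1] - 1.0 / rho_m[0])  # flow acceleration
        print(f"friction {dp_fric/1e3:.1f} kPa, gravity {dp_grav/1e3:.1f} kPa, "
              f"acceleration {dp_acc/1e3:.1f} kPa, total {(dp_fric + dp_grav + dp_acc)/1e3:.1f} kPa")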

  19. Method of analytic continuation by duality in QCD: Beyond QCD sum rules

    International Nuclear Information System (INIS)

    Kremer, M.; Nasrallah, N.F.; Papadopoulos, N.A.; Schilcher, K.

    1986-01-01

    We present the method of analytic continuation by duality which allows the approximate continuation of QCD amplitudes to small values of the momentum variables where direct perturbative calculations are not possible. This allows a substantial extension of the domain of applications of hadronic QCD phenomenology. The method is illustrated by a simple example which shows its essential features

  20. Analytical method of spectra calculations in the Bargmann representation

    International Nuclear Information System (INIS)

    Maciejewski, Andrzej J.; Przybylska, Maria; Stachowiak, Tomasz

    2014-01-01

    We formulate a universal method for solving an arbitrary quantum system which, in the Bargmann representation, is described by a system of linear equations with one independent variable, such as one- and multi-photon Rabi models, or N-level systems interacting with a single mode of the electromagnetic field and their various generalizations. We explain three types of conditions that determine the spectrum and show their usage for two deformations of the Rabi model. We prove that the spectra of both models are just zeros of transcendental functions, which in one case are given explicitly in terms of confluent Heun functions. - Highlights: • An analytical method of spectrum determination in the Bargmann representation is proposed. • Three types of conditions determining the spectrum are identified. • The method is applied to two generalizations of the Rabi system

  1. A simple reliability block diagram method for safety integrity verification

    International Nuclear Information System (INIS)

    Guo Haitao; Yang Xianhui

    2007-01-01

    IEC 61508 requires safety integrity verification of safety-related systems as a necessary procedure in the safety life cycle. The average probability of failure on demand (PFDavg) must be calculated to verify the safety integrity level (SIL). Since IEC 61508-6 does not give detailed explanations of the definitions and PFDavg calculations for its examples, it is difficult for reliability or safety engineers to follow when they use the standard as guidance in practice. A method using reliability block diagrams (RBDs) is investigated in this study in order to provide a clear and feasible way of calculating PFDavg and to help those who take IEC 61508-6 as their guidance. The method first finds the mean down times (MDTs) of both the channel and the voted group, and then PFDavg. The calculated results for various voted groups are compared with those in IEC 61508 part 6 and Ref. [Zhang T, Long W, Sato Y. Availability of systems with self-diagnostic components-applying Markov model to IEC 61508-6. Reliab Eng System Saf 2003;80(2):133-41]. An interesting outcome can be drawn from the comparison. Furthermore, although differences in the MDT of voted groups exist between IEC 61508-6 and this paper, the PFDavg values of the voted groups are comparatively close. With its detailed description, the RBD method presented can be applied to quantitative SIL verification, showing similarity to the method in IEC 61508-6
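    For orientation only, the sketch below evaluates PFDavg for a few voted groups with the simplified textbook approximations that neglect common-cause failures, diagnostic coverage and repair time; the RBD/MDT-based formulas developed in the paper and in IEC 61508-6 are more detailed. The failure rate and proof-test interval are hypothetical.

        # Simplified PFDavg approximations for common voted groups (no CCF, no repair).
        def pfd_avg(architecture, lambda_du, t_proof):
            """lambda_du: dangerous undetected failure rate [1/h]; t_proof: proof-test interval [h]."""
            x = lambda_du * t_proof
            return {
                "1oo1": x / 2.0,
                "1oo2": x**2 / 3.0,
                "2oo3": x**2,
                "1oo3": x**3 / 4.0,
            }[architecture]

        lam, t = 2.0e-6, 8760.0            # hypothetical failure rate and one-year proof test
        for arch in ("1oo1", "1oo2", "2oo3", "1oo3"):
            print(arch, f"{pfd_avg(arch, lam, t):.2e}")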

  2. Analytical and molecular dynamics studies on the impact loading of single-layered graphene sheet by fullerene

    Science.gov (United States)

    Hosseini-Hashemi, Shahrokh; Sepahi-Boroujeni, Amin; Sepahi-Boroujeni, Saeid

    2018-04-01

    Normal impact performance of a system comprising a fullerene molecule and a single-layered graphene sheet is studied in the present paper. Firstly, through a mathematical approach, a new contact law is derived to describe the overall non-bonding interaction forces of the "hollow indenter-target" system. Preliminary verifications show that the derived contact law gives a reliable picture of the force field of the system, in good agreement with the results of molecular dynamics (MD) simulations. Afterwards, the equation of transverse motion of the graphene sheet is formulated on the basis of both the nonlocal theory of elasticity and the assumptions of classical plate theory. Then, to derive the dynamic behavior of the system, a set comprising the proposed contact law and the equations of motion of both the graphene sheet and the fullerene molecule is solved numerically. In order to evaluate the outcomes of this method, the problem is also modeled by MD simulation. Despite intrinsic differences between the analytical and MD methods, as well as various errors arising from the transient nature of the problem, acceptable agreement is established between the analytical and MD outcomes. As a result, the proposed analytical method can be reliably used to address similar impact problems. Furthermore, it is found that a single-layered graphene sheet is capable of trapping fullerenes approaching with low velocities. Otherwise, in the case of rebound, the sheet effectively absorbs a predominant portion of the fullerene energy.

  3. Selection and reporting of statistical methods to assess reliability of a diagnostic test: Conformity to recommended methods in a peer-reviewed journal

    International Nuclear Information System (INIS)

    Park, Ji Eun; Sung, Yu Sub; Han, Kyung Hwa

    2017-01-01

    To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of the intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology are necessary.
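    As an illustration of one of the models whose choice and assumptions the surveyed studies often left undescribed, the sketch below computes a two-way random-effects, absolute-agreement, single-rater intraclass correlation coefficient, ICC(2,1), directly from its ANOVA mean squares; the rating matrix is illustrative.

        # ICC(2,1): two-way random effects, absolute agreement, single rater.
        import numpy as np

        y = np.array([[9.0, 2.0, 5.0, 8.0],
                      [6.0, 1.0, 3.0, 2.0],
                      [8.0, 4.0, 6.0, 8.0],
                      [7.0, 1.0, 2.0, 6.0],
                      [10.0, 5.0, 6.0, 9.0],
                      [6.0, 2.0, 4.0, 7.0]])          # 6 subjects rated by 4 readers (illustrative)
        n, k = y.shape
        m = y.mean()
        rows, cols = y.mean(axis=1), y.mean(axis=0)

        msr = k * np.sum((rows - m) ** 2) / (n - 1)    # between-subject mean square
        msc = n * np.sum((cols - m) ** 2) / (k - 1)    # between-reader mean square
        mse = np.sum((y - rows[:, None] - cols[None, :] + m) ** 2) / ((n - 1) * (k - 1))

        icc21 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
        print(f"ICC(2,1) = {icc21:.3f}")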

  4. Selection and reporting of statistical methods to assess reliability of a diagnostic test: Conformity to recommended methods in a peer-reviewed journal

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ji Eun; Sung, Yu Sub [Dept. of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul (Korea, Republic of); Han, Kyung Hwa [Dept. of Radiology, Research Institute of Radiological Science, Yonsei University College of Medicine, Seoul (Korea, Republic of); and others

    2017-11-15

    To evaluate the frequency and adequacy of statistical analyses in a general radiology journal when reporting a reliability analysis for a diagnostic test. Sixty-three studies of diagnostic test accuracy (DTA) and 36 studies reporting reliability analyses published in the Korean Journal of Radiology between 2012 and 2016 were analyzed. Studies were judged using the methodological guidelines of the Radiological Society of North America-Quantitative Imaging Biomarkers Alliance (RSNA-QIBA), and COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) initiative. DTA studies were evaluated by nine editorial board members of the journal. Reliability studies were evaluated by study reviewers experienced with reliability analysis. Thirty-one (49.2%) of the 63 DTA studies did not include a reliability analysis when deemed necessary. Among the 36 reliability studies, proper statistical methods were used in all (5/5) studies dealing with dichotomous/nominal data, 46.7% (7/15) of studies dealing with ordinal data, and 95.2% (20/21) of studies dealing with continuous data. Statistical methods were described in sufficient detail regarding weighted kappa in 28.6% (2/7) of studies and regarding the model and assumptions of the intraclass correlation coefficient in 35.3% (6/17) and 29.4% (5/17) of studies, respectively. Reliability parameters were used as if they were agreement parameters in 23.1% (3/13) of studies. Reproducibility and repeatability were used incorrectly in 20% (3/15) of studies. Greater attention to the importance of reporting reliability, thorough description of the related statistical methods, efforts not to neglect agreement parameters, and better use of relevant terminology are necessary.

  5. A new analytical method to solve the heat equation for a multi-dimensional composite slab

    International Nuclear Information System (INIS)

    Lu, X; Tervola, P; Viljanen, M

    2005-01-01

    A novel analytical approach has been developed for heat conduction in a multi-dimensional composite slab subject to time-dependent boundary changes of the first kind. Boundary temperatures are represented as Fourier series. Taking advantage of the periodic properties of boundary changes, the analytical solution is obtained and expressed explicitly. Nearly all the published works necessitate searching for associated eigenvalues in solving such a problem even for a one-dimensional composite slab. In this paper, the proposed method involves no iterative computation such as numerically searching for eigenvalues and no residue evaluation. The adopted method is simple and represents an extension of the novel analytical approach derived for the one-dimensional composite slab. Moreover, the method of 'separation of variables' employed in this paper is new. The mathematical formula for solutions is concise and straightforward. The physical parameters are clearly shown in the formula. Further comparison with numerical calculations is presented

  6. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    Science.gov (United States)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation for computerized numerical controlled (CNC) lathes is very important in industry. Traditional allocation methods only focus on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming at solving the problem of reliability allocation for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure mode and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponentially transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed function are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as an example to verify the new allocation method. Seven criteria are considered to compare the results of the new method with those of traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
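    The sketch below shows the general shape of such an FMEA-driven allocation: ratings are turned into weights and the system failure-rate budget is apportioned accordingly. The linear rating-to-weight mapping, the ratings, the budget and the mission time are placeholders; the paper's cubic transformed function is its own specific mapping and is not reproduced here.

        # Generic FMEA-weighted failure-rate apportionment (placeholder transform and data).
        import math

        lam_system = 1.0e-4          # allowed system failure rate [1/h], hypothetical
        t_mission = 5000.0           # mission time [h], hypothetical

        # hypothetical FMEA-derived ratings (1-10): higher = easier to tolerate/improve,
        # so the subsystem receives a larger share of the failure-rate budget
        ratings = {"spindle": 3.0, "feed drive": 5.0, "turret": 6.0, "tailstock": 8.0}

        total = sum(ratings.values())
        for name, w in ratings.items():
            lam_i = lam_system * w / total                 # allocated failure rate
            r_i = math.exp(-lam_i * t_mission)             # implied reliability target
            print(f"{name:10s} lambda = {lam_i:.2e} 1/h, R({t_mission:.0f} h) = {r_i:.4f}")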

  7. Study on the Analytical Method for Determination of P-32 in Human Hair

    International Nuclear Information System (INIS)

    Syarbaini; Lubis, E.; Sarwani

    1996-01-01

    Neutron doses due to a criticality accident can be estimated by measuring radionuclides of neutron activation products in human hair. In this work, the analytical method for the determination of P-32 in hair samples irradiated with neutrons in the G.A. Siwabessy reactor has been studied. The analytical method consists of dissolving the human hair sample in 10 M HNO3, followed by separation and purification of P-32 by precipitation as ammonium molybdophosphate; finally, the precipitate was measured with a low-background α/β counter. The minimum detectable activity of P-32 was 0.05 Bq at a background of 4.6 cpm and with a counting efficiency of 55% for a 30 minute counting time.
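    For context, the commonly used Currie expression reproduces a minimum detectable activity close to the reported value when evaluated with the figures quoted above (4.6 cpm background, 55% efficiency, 30 min count); the exact convention used in the study may differ.

        # Currie-type minimum detectable activity with the figures quoted in the record.
        import math

        background_cpm = 4.6
        t_min = 30.0
        efficiency = 0.55

        b_counts = background_cpm * t_min                    # background counts during the count
        ld_counts = 2.71 + 4.65 * math.sqrt(b_counts)        # Currie detection limit (counts)
        mda_bq = ld_counts / (efficiency * t_min * 60.0)     # convert to decays per second
        print(f"MDA ~ {mda_bq:.3f} Bq")                      # ~0.06 Bq, close to the reported 0.05 Bq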

  8. Analytic moment method calculations of the drift wave spectrum

    International Nuclear Information System (INIS)

    Thayer, D.R.; Molvig, K.

    1985-11-01

    A derivation and approximate solution of renormalized mode coupling equations describing the turbulent drift wave spectrum is presented. Arguments are given which indicate that a weak turbulence formulation of the spectrum equations fails for a system with negative dissipation. The inadequacy of the weak turbulence theory is circumvented by utilizing a renormalized formulation. An analytic moment method is developed to approximate the solution of the nonlinear spectrum integral equations. The solution method employs trial functions to reduce the integral equations to algebraic equations in basic parameters describing the spectrum. An approximate solution of the spectrum equations is first obtained for a mode dissipation with known solution, and second for an electron dissipation in the NSA

  9. Finite Element Reliability Analysis of Chloride Ingress into Reinforced Concrete Structures

    DEFF Research Database (Denmark)

    Frier, Christian; Sørensen, John Dalsgaard

    2007-01-01

    For many reinforced concrete structures corrosion of the reinforcement is an important problem since it can result in maintenance and repair actions. Further, a reduction of the load-bearing capacity can occur. In the present paper the Finite Element Reliability Method (FERM) is employed for obtaining ... concentration and reinforcement cover depth are modelled by stochastic fields, which are discretized using the Expansion Optimum Linear Estimation (EOLE) approach. The response gradients needed for FORM analysis are derived analytically using the Direct Differentiation Method (DDM). As an example, a bridge pier in a marine environment is considered and the results are given in terms of distributions of time for initiation of corrosion....
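    As a rough stand-in for the initiation-probability calculation, the sketch below runs a crude Monte Carlo on the error-function solution of Fick's second law with lumped random variables. The paper itself uses FERM/FORM with stochastic fields discretized by EOLE and analytical DDM gradients, and all distribution parameters below are hypothetical.

        # Crude Monte Carlo estimate of the probability of corrosion initiation.
        import numpy as np
        from math import erf

        rng = np.random.default_rng(0)
        n = 100_000
        t = 50.0 * 365.25 * 24 * 3600.0                      # 50 years in seconds

        cover = rng.normal(50.0e-3, 8.0e-3, n)               # cover depth [m]
        c_s = rng.lognormal(np.log(0.6), 0.25, n)            # surface chloride [% wt. binder]
        d_coef = rng.lognormal(np.log(1.0e-12), 0.3, n)      # diffusion coefficient [m^2/s]
        c_cr = 0.2                                           # critical chloride content

        erf_v = np.vectorize(erf)
        c_at_rebar = c_s * (1.0 - erf_v(cover / (2.0 * np.sqrt(d_coef * t))))
        p_init = np.mean(c_at_rebar > c_cr)
        print(f"P(corrosion initiated within 50 years) ~ {p_init:.3f}")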

  10. Comparison of methods for dependency determination between human failure events within human reliability analysis

    International Nuclear Information System (INIS)

    Cepis, M.

    2007-01-01

    The Human Reliability Analysis (HRA) is a highly subjective evaluation of human performance, which is an input for probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications. The objective is to identify the key features which may decrease the subjectivity of human reliability analysis. Human reliability methods are compared with a focus on dependency, comparing the Institute Jozef Stefan human reliability analysis (IJS-HRA) and the Standardized Plant Analysis Risk Human Reliability Analysis (SPAR-H). Results show large differences in the calculated human error probabilities for the same events within the same probabilistic safety assessment, which are the consequence of subjectivity. The subjectivity can be reduced by the development of more detailed guidelines for human reliability analysis with many practical examples for all steps of the process of evaluation of human performance. (author)
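    One concrete source of the differences discussed here is the dependence adjustment applied to subsequent human failure events. The sketch below lists the THERP-style conditional-HEP equations that SPAR-H uses for its dependence levels; the nominal HEP is hypothetical, and IJS-HRA handles dependence with its own scheme that is not reproduced here.

        # THERP/SPAR-H dependence adjustment of a subsequent human error probability.
        def conditional_hep(p, level):
            """p: nominal HEP of the later task; level: assessed dependence level."""
            return {
                "zero":     p,
                "low":      (1.0 + 19.0 * p) / 20.0,
                "moderate": (1.0 + 6.0 * p) / 7.0,
                "high":     (1.0 + p) / 2.0,
                "complete": 1.0,
            }[level]

        p_nominal = 1.0e-3
        for level in ("zero", "low", "moderate", "high", "complete"):
            print(f"{level:9s} -> {conditional_hep(p_nominal, level):.3e}")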

  11. Comparison of Methods for Dependency Determination between Human Failure Events within Human Reliability Analysis

    International Nuclear Information System (INIS)

    Cepin, M.

    2008-01-01

    The human reliability analysis (HRA) is a highly subjective evaluation of human performance, which is an input for probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications. The objective is to identify the key features, which may decrease subjectivity of human reliability analysis. Human reliability methods are compared with focus on dependency comparison between Institute Jozef Stefan human reliability analysis (IJS-HRA) and standardized plant analysis risk human reliability analysis (SPAR-H). Results show large differences in the calculated human error probabilities for the same events within the same probabilistic safety assessment, which are the consequence of subjectivity. The subjectivity can be reduced by development of more detailed guidelines for human reliability analysis with many practical examples for all steps of the process of evaluation of human performance

  12. DEVELOPMENT OF ANALYTICAL METHODS FOR THE QUANTIFICATION OF THE CHEMICAL FORMS OF MERCURY AND OTHER TARGET POLLUTANTS IN COAL-FIRED BOILER FLUE GAS

    Energy Technology Data Exchange (ETDEWEB)

    Terence J. McManus, Ph.D.

    1999-06-30

    . (ATS) as a secondary DOE contractor on this project, assessed the sampling and analytical plans and the emission reports of the five primary contractors to determine how successful the contractors were in satisfying their defined objectives. ATS identified difficulties and inconsistencies in a number of sampling and analytical methodologies in these studies. In particular there was uncertainty as to the validity of the sampling and analytical methods used to differentiate the chemical forms of mercury observed in coal flue gas. Considering the differences in the mercury species with regard to human toxicity, the rate of transport through the ecosystem and the design variations in possible emission control schemes, DOE sought an accurate and reliable means to identify and quantify the various mercury compounds emitted by coal-fired utility boilers. ATS, as a contractor for DOE, completed both bench- and pilot-scale studies on various mercury speciation methods. The final validation of the modified Ontario-Hydro Method, its acceptance by DOE and submission of the method for adoption by ASTM was a direct result of these studies carried out in collaboration with the University of North Dakota's Energy and Environmental Research Center (UNDEERC). This report presents the results from studies carried out at ATS in the development of analytical methods to identify and quantify various chemical species, particularly those of mercury, in coal derived flue gas. Laboratory- and pilot-scale studies, not only on mercury species, but also on other inorganics and organics present in coal combustion flue gas are reported.

  13. An analytical method for neutron thermalization calculations in heterogenous reactors

    Energy Technology Data Exchange (ETDEWEB)

    Pop-Jordanov, J [Boris Kidric Institute of Nuclear Sciences, Vinca, Belgrade (Yugoslavia)

    1965-07-01

    It is well known that the use of the diffusion approximation for studying neutron thermalization in heterogeneous reactors may result in considerable errors. On the other hand, more exact numerical methods are rather laborious and require the use of large digital computers. In this paper, the use of the diffusion approximation in absorbing media has been avoided, but the treatment remained analytical, thus simplifying practical calculations.

  14. An analytical method for neutron thermalization calculations in heterogenous reactors

    International Nuclear Information System (INIS)

    Pop-Jordanov, J.

    1965-01-01

    It is well known that the use of the diffusion approximation for studying neutron thermalization in heterogeneous reactors may result in considerable errors. On the other hand, more exact numerical methods are rather laborious and require the use of large digital computers. In this paper, the use of the diffusion approximation in absorbing media has been avoided, but the treatment remained analytical, thus simplifying practical calculations

  15. The auxiliary field method and approximate analytical solutions of the Schroedinger equation with exponential potentials

    Energy Technology Data Exchange (ETDEWEB)

    Silvestre-Brac, Bernard [LPSC Universite Joseph Fourier, Grenoble 1, CNRS/IN2P3, Institut Polytechnique de Grenoble, Avenue des Martyrs 53, F-38026 Grenoble-Cedex (France); Semay, Claude; Buisseret, Fabien [Groupe de Physique Nucleaire Theorique, Universite de Mons-Hainaut, Academie universitaire Wallonie-Bruxelles, Place du Parc 20, B-7000 Mons (Belgium)], E-mail: silvestre@lpsc.in2p3.fr, E-mail: claude.semay@umh.ac.be, E-mail: fabien.buisseret@umh.ac.be

    2009-06-19

    The auxiliary field method is a new and efficient way to compute approximate analytical eigenenergies of the Schroedinger equation. This method has already been successfully applied to the case of central potentials of power-law and logarithmic forms. In the present work, we show that the Schroedinger equation with exponential potentials of the form -αr^λ exp(-βr) can also be analytically solved by using the auxiliary field method. Closed formulae giving the critical heights and the energy levels of these potentials are presented. Special attention is drawn to the Yukawa potential and the pure exponential potential.

  16. The auxiliary field method and approximate analytical solutions of the Schroedinger equation with exponential potentials

    International Nuclear Information System (INIS)

    Silvestre-Brac, Bernard; Semay, Claude; Buisseret, Fabien

    2009-01-01

    The auxiliary field method is a new and efficient way to compute approximate analytical eigenenergies of the Schroedinger equation. This method has already been successfully applied to the case of central potentials of power-law and logarithmic forms. In the present work, we show that the Schroedinger equation with exponential potentials of the form -αr^λ exp(-βr) can also be analytically solved by using the auxiliary field method. Closed formulae giving the critical heights and the energy levels of these potentials are presented. Special attention is drawn to the Yukawa potential and the pure exponential potential

  17. Hanford environmental analytical methods: Methods as of March 1990. Volume 3, Appendix A2-I

    Energy Technology Data Exchange (ETDEWEB)

    Goheen, S.C.; McCulloch, M.; Daniel, J.L.

    1993-05-01

    This paper from the analytical laboratories at Hanford describes the method used to measure pH of single-shell tank core samples. Sludge or solid samples are mixed with deionized water. The pH electrode used combines both a sensor and reference electrode in one unit. The meter amplifies the input signal from the electrode and displays the pH visually.

  18. Phonon dispersion on Ag (100) surface: A modified analytic embedded atom method study

    International Nuclear Information System (INIS)

    Zhang Xiao-Jun; Chen Chang-Le

    2016-01-01

    Within the harmonic approximation, the analytic expression of the dynamical matrix is derived based on the modified analytic embedded atom method (MAEAM) and the dynamics theory of the surface lattice. The surface phonon dispersions along the three major symmetry directions ΓX-bar, ΓM-bar, and XM-bar are calculated for the clean Ag (100) surface by using our derived formulas. We then discuss the polarization and localization of surface modes at points X-bar and M-bar by plotting the squared polarization vectors as a function of the layer index. The phonon frequencies of the surface modes calculated by MAEAM are compared with the available experimental and other theoretical data. It is found that the present results are generally in agreement with the referenced experimental or theoretical results, with a maximum deviation of 10.4%. The agreement shows that the modified analytic embedded atom method is a reasonable many-body potential model to quickly describe the surface lattice vibration. It also lays a significant foundation for studying the surface lattice vibration in other metals. (paper)

  19. A Simple and Reliable Method of Design for Standalone Photovoltaic Systems

    Science.gov (United States)

    Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.

    2017-06-01

    Standalone photovoltaic (SAPV) systems are seen as a promising method of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple, reliable and exhibits good performance over its lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, the array size together with energy storage. After arriving at different array sizes (areas), performance curves are obtained for the optimal design of the SAPV system with a high degree of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array-to-load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data and is more reliable when compared with a conventional design using monthly average daily load and insolation.
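    A first-pass version of such a design procedure is sketched below: array area from the daily load and site insolation, storage from the required days of autonomy, and a levelized energy cost from an annualized life-cycle cost. All loads, efficiencies, unit costs and financial parameters are hypothetical and are not the empirical formulae of the paper.

        # First-pass SAPV sizing and levelized-cost estimate (hypothetical inputs).
        daily_load_kwh = 6.0
        psh = 5.0                      # peak sun hours (kWh/m^2/day) at the site
        pv_eff, system_eff = 0.18, 0.75
        autonomy_days, dod, batt_eff = 3.0, 0.6, 0.85
        v_system = 48.0

        array_area = daily_load_kwh / (psh * pv_eff * system_eff)   # m^2
        array_kwp = array_area * pv_eff                             # kWp at 1 kW/m^2 reference
        batt_kwh = daily_load_kwh * autonomy_days / (dod * batt_eff)
        batt_ah = batt_kwh * 1000.0 / v_system

        # life-cycle cost -> levelized energy cost via a capital recovery factor
        capex = array_kwp * 1200.0 + batt_kwh * 250.0 + 800.0       # $ (hypothetical unit costs)
        d_rate, years = 0.06, 20
        crf = d_rate * (1 + d_rate) ** years / ((1 + d_rate) ** years - 1)
        annual_cost = capex * crf + 0.02 * capex                    # plus ~2% of capex per year O&M
        lec = annual_cost / (daily_load_kwh * 365.0)                # $/kWh
        print(f"array ~{array_kwp:.2f} kWp ({array_area:.1f} m^2), "
              f"battery ~{batt_ah:.0f} Ah @ {v_system:.0f} V, LEC ~${lec:.2f}/kWh")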

  20. A Hierarchical Reliability Control Method for a Space Manipulator Based on the Strategy of Autonomous Decision-Making

    Directory of Open Access Journals (Sweden)

    Xin Gao

    2016-01-01

    Full Text Available In order to maintain and enhance the operational reliability of a robotic manipulator deployed in space, an operational reliability system control method is presented in this paper. First, a method to divide factors affecting the operational reliability is proposed, which divides the operational reliability factors into task-related factors and cost-related factors. Then the models describing the relationships between the two kinds of factors and control variables are established. Based on this, a multivariable and multiconstraint optimization model is constructed. Second, a hierarchical system control model which incorporates the operational reliability factors is constructed. The control process of the space manipulator is divided into three layers: task planning, path planning, and motion control. Operational reliability related performance parameters are measured and used as the system’s feedback. Taking the factors affecting the operational reliability into consideration, the system can autonomously decide which control layer of the system should be optimized and how to optimize it using a control level adjustment decision module. The operational reliability factors affect these three control levels in the form of control variable constraints. Simulation results demonstrate that the proposed method can achieve a greater probability of meeting the task accuracy requirements, while extending the expected lifetime of the space manipulator.