WorldWideScience

Sample records for reliability estimating procedures

  1. Processes and Procedures for Estimating Score Reliability and Precision

    Science.gov (United States)

    Bardhoshi, Gerta; Erford, Bradley T.

    2017-01-01

    Precision is a key facet of test development, with score reliability determined primarily according to the types of error one wants to approximate and demonstrate. This article identifies and discusses several primary forms of reliability estimation: internal consistency (i.e., split-half, KR-20, α), test-retest, alternate forms, interscorer, and…
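
    As an illustration of the internal-consistency coefficients named above, the sketch below computes KR-20 for dichotomous items. It is a minimal sketch in Python with an invented 0/1 item-response matrix, not data from the article.

      import numpy as np

      # Hypothetical 0/1 item-response matrix: rows = examinees, columns = items.
      scores = np.array([
          [1, 1, 0, 1, 1],
          [1, 0, 0, 1, 0],
          [1, 1, 1, 1, 1],
          [0, 0, 0, 1, 0],
          [1, 1, 0, 0, 1],
      ])

      k = scores.shape[1]                          # number of items
      p = scores.mean(axis=0)                      # proportion correct per item
      total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores

      # KR-20: coefficient alpha specialized to dichotomous items.
      kr20 = (k / (k - 1)) * (1.0 - (p * (1 - p)).sum() / total_var)
      print(f"KR-20 = {kr20:.3f}")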

  2. Automation of reliability evaluation procedures through CARE - The computer-aided reliability estimation program.

    Science.gov (United States)

    Mathur, F. P.

    1972-01-01

    Description of an on-line interactive computer program called CARE (Computer-Aided Reliability Estimation) which can model self-repair and fault-tolerant organizations and perform certain other functions. Essentially CARE consists of a repository of mathematical equations defining the various basic redundancy schemes. These equations, under program control, are then interrelated to generate the desired mathematical model to fit the architecture of the system under evaluation. The mathematical model is then supplied with ground instances of its variables and is then evaluated to generate values for the reliability-theoretic functions applied to the model.

  3. A Simplified Procedure for Reliability Estimation of Underground Concrete Barriers against Normal Missile Impact

    Directory of Open Access Journals (Sweden)

    N. A. Siddiqui

    2011-06-01

    Underground concrete barriers are frequently used to protect strategic structures, such as nuclear power plants (NPPs), deep under the soil against any possible high-velocity missile impact. For a given range and type of missile (or projectile), it is of paramount importance to examine the reliability of underground concrete barriers under the expected uncertainties in the missile, concrete, and soil parameters. In this paper, a simple procedure for the reliability assessment of underground concrete barriers against normal missile impact is presented using the First Order Reliability Method (FORM). The presented procedure is illustrated by applying it to a concrete barrier that lies at a certain depth in the soil. Some parametric studies are also conducted to obtain the design values that make the barrier as reliable as desired.
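
    The FORM calculation used in this record can be sketched for the simplest case. The Python fragment below runs the Hasofer-Lind-Rackwitz-Fiessler iteration for two independent normal variables with an invented resistance-demand limit state, not the paper's missile, concrete and soil model.

      import numpy as np
      from scipy.stats import norm

      # Invented independent normal variables: resistance R and demand S.
      mu = np.array([420.0, 300.0])       # means of [R, S]
      sigma = np.array([35.0, 45.0])      # standard deviations of [R, S]

      def g(x):                           # limit state: failure when g(x) < 0
          return x[0] - x[1]

      def grad_g(x):                      # gradient of g in physical space
          return np.array([1.0, -1.0])

      u = np.zeros(2)                     # start at the standard-normal origin
      for _ in range(20):                 # Hasofer-Lind-Rackwitz-Fiessler iteration
          x = mu + sigma * u              # map back to physical space
          gu = grad_g(x) * sigma          # chain rule: gradient in u-space
          u_new = ((gu @ u - g(x)) / (gu @ gu)) * gu
          if np.linalg.norm(u_new - u) < 1e-8:
              u = u_new
              break
          u = u_new

      beta = np.linalg.norm(u)            # Hasofer-Lind reliability index
      print(f"beta = {beta:.3f}, Pf = {norm.cdf(-beta):.2e}")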

  4. Development of a reliable estimation procedure of radioactivity inventory in a BWR plant due to neutron irradiation for decommissioning

    Directory of Open Access Journals (Sweden)

    Tanaka Ken-ichi

    2017-01-01

    Reliable information on the radioactivity inventory, obtained through radiological characterization, is important for planning decommissioning and is also crucial for carrying out decommissioning effectively and safely. The information is referred to in planning the decommissioning strategy and in applications to the regulator, and it can be used to optimize the decommissioning processes. In order to perform the radiological characterization reliably, we improved a procedure for evaluating neutron-activated materials in a Boiling Water Reactor (BWR). Neutron-activated materials are calculated with computer codes, and their validity should be verified with measurements. The evaluation of neutron-activated materials can be divided into two processes: a calculation of the neutron-flux distribution, and an activation calculation of the materials. The neutron-flux distribution is calculated with neutron transport codes and an appropriate cross-section library, so as to simulate the neutron transport phenomena well. Using the neutron-flux distribution, we calculate distributions of radioactivity concentration, and we also estimate the time-dependent distribution of the radioactivity classification and the radioactive-waste classification. The information obtained from the evaluation is utilized by the other preparatory tasks so as to make the decommissioning plan and its activities safe and rational.

  5. Development of a reliable estimation procedure of radioactivity inventory in a BWR plant due to neutron irradiation for decommissioning

    Science.gov (United States)

    Tanaka, Ken-ichi; Ueno, Jun

    2017-09-01

    Reliable information on the radioactivity inventory, obtained through radiological characterization, is important for planning decommissioning and is also crucial for carrying out decommissioning effectively and safely. The information is referred to in planning the decommissioning strategy and in applications to the regulator, and it can be used to optimize the decommissioning processes. In order to perform the radiological characterization reliably, we improved a procedure for evaluating neutron-activated materials in a Boiling Water Reactor (BWR). Neutron-activated materials are calculated with computer codes, and their validity should be verified with measurements. The evaluation of neutron-activated materials can be divided into two processes: a calculation of the neutron-flux distribution, and an activation calculation of the materials. The neutron-flux distribution is calculated with neutron transport codes and an appropriate cross-section library, so as to simulate the neutron transport phenomena well. Using the neutron-flux distribution, we calculate distributions of radioactivity concentration, and we also estimate the time-dependent distribution of the radioactivity classification and the radioactive-waste classification. The information obtained from the evaluation is utilized by the other preparatory tasks so as to make the decommissioning plan and its activities safe and rational.

  6. Accuracy of a Classical Test Theory-Based Procedure for Estimating the Reliability of a Multistage Test. Research Report. ETS RR-17-02

    Science.gov (United States)

    Kim, Sooyeon; Livingston, Samuel A.

    2017-01-01

    The purpose of this simulation study was to assess the accuracy of a classical test theory (CTT)-based procedure for estimating the alternate-forms reliability of scores on a multistage test (MST) having 3 stages. We generated item difficulty and discrimination parameters for 10 parallel, nonoverlapping forms of the complete 3-stage test and…

  7. Reliability of application of inspection procedures

    Energy Technology Data Exchange (ETDEWEB)

    Murgatroyd, R A

    1988-12-31

    This document deals with the reliability of the application of inspection procedures. A method is described for ensuring that the inspection of defects of concern to fracture mechanics is reliable. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector in order to estimate the likelihood of error. It appears essential that inspection procedures be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors be optimised. (TEC). 3 refs.

  8. Reliability of application of inspection procedures

    International Nuclear Information System (INIS)

    Murgatroyd, R.A.

    1988-01-01

    This document deals with the reliability of the application of inspection procedures. A method is described for ensuring that the inspection of defects of concern to fracture mechanics is reliable. The Systematic Human Error Reduction and Prediction Analysis (SHERPA) methodology is applied to every task performed by the inspector in order to estimate the likelihood of error. It appears essential that inspection procedures be sufficiently rigorous to avoid substantial errors, and that the selection procedures and the training period for inspectors be optimised. (TEC)

  9. Estimation of Bridge Reliability Distributions

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    In this paper it is shown how the so-called reliability distributions can be estimated using crude Monte Carlo simulation. The main purpose is to demonstrate the methodology. Therefore, very exact data concerning reliability and deterioration are not needed. However, it is intended in the paper to …
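
    Crude Monte Carlo estimation of a failure probability, on which the paper's reliability distributions rest, takes only a few lines. The limit state and distributions below are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 1_000_000

      # Invented limit state: failure when the load effect exceeds the
      # (deteriorated) resistance.
      resistance = rng.normal(10.0, 1.5, n)
      load = rng.gumbel(6.0, 0.8, n)

      pf = np.mean(load > resistance)     # crude Monte Carlo estimate
      se = np.sqrt(pf * (1 - pf) / n)     # binomial standard error
      print(f"Pf ~ {pf:.4f} +/- {1.96 * se:.4f} (95% confidence)")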

  10. Reliability of procedures used for scaling loudness

    DEFF Research Database (Denmark)

    Jesteadt, Walt; Joshi, Suyash Narendra

    2013-01-01

    In this study, 16 normally-hearing listeners judged the loudness of 1000-Hz sinusoids using magnitude estimation (ME), magnitude production (MP), and categorical loudness scaling (CLS). Listeners in each of four groups completed the loudness scaling tasks in a different sequence on the first visit (ME, MP, CLS; MP, ME, CLS; CLS, ME, MP; CLS, MP, ME), and the order was reversed on the second visit. This design made it possible to compare the reliability of estimates of the slope of the loudness function across procedures in the same listeners. The ME data were well fitted by an inflected … Although the CLS results were the most reproducible, they do not provide direct information about the slope of the loudness function because the numbers assigned to CLS categories are arbitrary. This problem can be corrected by using data from the other procedures to assign numbers that are proportional to loudness…

  11. Human Reliability Analysis For Computerized Procedures

    International Nuclear Information System (INIS)

    Boring, Ronald L.; Gertman, David I.; Le Blanc, Katya

    2011-01-01

    This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

  12. Interim Reliability Evaluation Program procedures guide

    International Nuclear Information System (INIS)

    Carlson, D.D.; Gallup, D.R.; Kolaczkowski, A.M.; Kolb, G.J.; Stack, D.W.; Lofgren, E.; Horton, W.H.; Lobner, P.R.

    1983-01-01

    This document presents procedures for conducting analyses of a scope similar to those performed in Phase II of the Interim Reliability Evaluation Program (IREP). It documents the current state of the art in performing the plant systems analysis portion of a probabilistic risk assessment. Insights gained into managing such an analysis are discussed. Step-by-step procedures and methodological guidance constitute the major portion of the document. While not to be viewed as a cookbook, the procedures set forth the principal steps in performing an IREP analysis. Guidance for resolving the problems encountered in previous analyses is offered. Numerous examples and representative products from previous analyses clarify the discussion

  13. An Analytical Cost Estimation Procedure

    National Research Council Canada - National Science Library

    Jayachandran, Toke

    1999-01-01

    Analytical procedures that can be used to do a sensitivity analysis of a cost estimate, and to perform tradeoffs to identify input values that can reduce the total cost of a project, are described in the report...

  14. Reliability Estimation Based Upon Test Plan Results

    National Research Council Canada - National Science Library

    Read, Robert

    1997-01-01

    The report contains a brief summary of aspects of the Maximus reliability point and interval estimation technique as it has been applied to the reliability of a device whose surveillance tests contain...

  15. Dependent systems reliability estimation by structural reliability approach

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Sørensen, John Dalsgaard

    2014-01-01

    Estimation of system reliability by classical system reliability methods generally assumes that the components are statistically independent, thus limiting its applicability in many practical situations. A method is proposed for estimation of the system reliability with dependent components, where the leading failure mechanism(s) is described by physics-of-failure model(s). The proposed method is based on structural reliability techniques and accounts for both statistical and failure-effect correlations. It is assumed that failure of any component is due to increasing damage (fatigue phenomena) … identification. Application of the proposed method can be found in many real-world systems…

  16. Accident Sequence Evaluation Program: Human reliability analysis procedure

    Energy Technology Data Exchange (ETDEWEB)

    Swain, A.D.

    1987-02-01

    This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis With Emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the "ASEP HRA Procedure," is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs.

  17. Accident Sequence Evaluation Program: Human reliability analysis procedure

    International Nuclear Information System (INIS)

    Swain, A.D.

    1987-02-01

    This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis With Emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the "ASEP HRA Procedure," is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs

  18. Basics of Bayesian reliability estimation from attribute test data

    International Nuclear Information System (INIS)

    Martz, H.F. Jr.; Waller, R.A.

    1975-10-01

    The basic notions of Bayesian reliability estimation from attribute lifetest data are presented in an introductory and expository manner. Both Bayesian point and interval estimates of the probability of surviving the lifetest, the reliability, are discussed. The necessary formulas are simply stated, and examples are given to illustrate their use. In particular, a binomial model in conjunction with a beta prior model is considered. Particular attention is given to the procedure for selecting an appropriate prior model in practice. Empirical Bayes point and interval estimates of reliability are discussed and examples are given. 7 figures, 2 tables
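
    The binomial-likelihood, beta-prior calculation described above is compact enough to show directly. This is a minimal sketch with hypothetical prior parameters and test counts, not the report's examples.

      from scipy.stats import beta

      # Hypothetical attribute lifetest: n units tested, s survive.
      n, s = 50, 48
      a0, b0 = 2.0, 1.0                       # beta prior (the analyst's choice)

      a_post, b_post = a0 + s, b0 + (n - s)   # beta posterior for reliability R
      post = beta(a_post, b_post)

      print(f"posterior mean R = {post.mean():.4f}")
      print(f"90% credible interval: ({post.ppf(0.05):.4f}, {post.ppf(0.95):.4f})")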

  19. A Method of Nuclear Software Reliability Estimation

    International Nuclear Information System (INIS)

    Park, Gee Yong; Eom, Heung Seop; Cheon, Se Woo; Jang, Seung Cheol

    2011-01-01

    A method for estimating software reliability for nuclear safety software is proposed. This method is based on the software reliability growth model (SRGM), where the behavior of software failures is assumed to follow a non-homogeneous Poisson process. Several modeling schemes are presented in order to estimate and predict more precisely the number of software defects, based on a few software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating the software test cases into the model. It is identified that this method is capable of accurately estimating the remaining number of software defects, which are of on-demand type and directly affect safety trip functions. The software reliability can be estimated from a model equation, and one method of obtaining the software reliability is proposed

  20. Reliability Estimates for Undergraduate Grade Point Average

    Science.gov (United States)

    Westrick, Paul A.

    2017-01-01

    Undergraduate grade point average (GPA) is a commonly employed measure in educational research, serving as a criterion or as a predictor depending on the research question. Over the decades, researchers have used a variety of reliability coefficients to estimate the reliability of undergraduate GPA, which suggests that there has been no consensus…

  1. Mission Reliability Estimation for Repairable Robot Teams

    Science.gov (United States)

    Trebi-Ollennu, Ashitey; Dolan, John; Stancliff, Stephen

    2010-01-01

    A mission reliability estimation method has been designed to translate mission requirements into choices of robot modules in order to configure a multi-robot team to have high reliability at minimal cost. In order to build cost-effective robot teams for long-term missions, one must be able to compare alternative design paradigms in a principled way by comparing the reliability of different robot models and robot team configurations. Core modules have been created including: a probabilistic module with reliability-cost characteristics, a method for combining the characteristics of multiple modules to determine an overall reliability-cost characteristic, and a method for the generation of legitimate module combinations based on mission specifications and the selection of the best of the resulting combinations from a cost-reliability standpoint. The developed methodology can be used to predict the probability of a mission being completed, given information about the components used to build the robots, as well as information about the mission tasks. In the research for this innovation, sample robot missions were examined and compared to the performance of robot teams with different numbers of robots and different numbers of spare components. Data that a mission designer would need was factored in, such as whether it would be better to have a spare robot versus an equivalent number of spare parts, or if mission cost can be reduced while maintaining reliability using spares. This analytical model was applied to an example robot mission, examining the cost-reliability tradeoffs among different team configurations. Particularly scrutinized were teams using either redundancy (spare robots) or repairability (spare components). Using conservative estimates of the cost-reliability relationship, results show that it is possible to significantly reduce the cost of a robotic mission by using cheaper, lower-reliability components and providing spares. This suggests that the
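
    A toy version of the redundancy-versus-repairability comparison described above can be written with a binomial survival model. The module counts, failure probabilities and team sizes below are invented and do not come from the record.

      from scipy.stats import binom

      # Invented numbers: each robot has 8 modules, each module survives the
      # mission with probability 0.97, and the mission needs 3 working robots.
      m, p_mod, k_needed = 8, 0.97, 3
      p_robot = p_mod ** m                          # robot with no spares

      def team_reliability(n_robots, p_one):
          # P(at least k_needed of n_robots survive the mission)
          return binom.sf(k_needed - 1, n_robots, p_one)

      # Option A: redundancy -- bring a fourth (spare) robot, no spare parts.
      r_redundancy = team_reliability(4, p_robot)

      # Option B: repairability -- 3 robots, each able to replace up to
      # 2 failed modules from carried spares.
      p_repairable = binom.cdf(2, m, 1 - p_mod)
      r_repair = team_reliability(3, p_repairable)

      print(f"spare robot:   R = {r_redundancy:.4f}")
      print(f"spare modules: R = {r_repair:.4f}")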

  2. Adaptive Response Surface Techniques in Reliability Estimation

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Faber, M. H.; Sørensen, John Dalsgaard

    1993-01-01

    Problems in connection with estimation of the reliability of a component modelled by a limit state function including noise or first order discontinuities are considered. A gradient free adaptive response surface algorithm is developed. The algorithm applies second order polynomial surfaces…

  3. Investigation of MLE in nonparametric estimation methods of reliability function

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

    There have been many attempts to estimate a reliability function. At the ESReDA 20th Seminar, a new nonparametric method was proposed, the major point of which is how to use censored data efficiently. Generally there are three kinds of approach for estimating a reliability function in a nonparametric way, i.e., the Reduced Sample Method, the Actuarial Method, and the Product-Limit (PL) Method. These three methods have some limits, so we suggest an advanced method that reflects censoring information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by the process of differentiation. It is well known that the three methods generally used to estimate a reliability function in a nonparametric way have maximum likelihood estimators that exist uniquely. The MLE of the new method is therefore derived in this study. The procedure to calculate the MLE is similar to that of the PL-estimator; the difference between the two is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL-estimator it does not
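
    The Product-Limit (Kaplan-Meier) estimator that the record takes as its point of departure can be sketched as follows, with invented failure and censoring times.

      import numpy as np

      # Invented data: times with an event flag (1 = failure, 0 = censored).
      times  = np.array([ 5,  8,  8, 12, 15, 20, 22, 30, 30, 34])
      events = np.array([ 1,  1,  0,  1,  0,  1,  1,  0,  1,  0])

      survival = 1.0
      print(" t    R(t)")
      for t in np.unique(times[events == 1]):       # distinct failure times
          at_risk = np.sum(times >= t)              # still under observation
          d = np.sum((times == t) & (events == 1))  # failures at time t
          survival *= 1 - d / at_risk               # product-limit update
          print(f"{t:3d}   {survival:.3f}")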

  4. Benchmark of systematic human action reliability procedure

    International Nuclear Information System (INIS)

    Spurgin, A.J.; Hannaman, G.W.; Moieni, P.

    1986-01-01

    Probabilistic risk assessment (PRA) methodology has emerged as one of the most promising tools for assessing the impact of human interactions on plant safety and understanding the importance of the man/machine interface. Human interactions were considered to be one of the key elements in the quantification of accident sequences in a PRA. The approach to quantification of human interactions in past PRAs has not been very systematic. The Electric Power Research Institute sponsored the development of SHARP to aid analysts in developing a systematic approach for the evaluation and quantification of human interactions in a PRA. The SHARP process has been extensively peer reviewed and has been adopted by the Institute of Electrical and Electronics Engineers as the basis of a draft guide for the industry. By carrying out a benchmark process, in which SHARP is an essential ingredient, however, it appears possible to assess the strengths and weaknesses of SHARP to aid human reliability analysts in carrying out human reliability analysis as part of a PRA

  5. Estimation of structural reliability under combined loads

    International Nuclear Information System (INIS)

    Shinozuka, M.; Kako, T.; Hwang, H.; Brown, P.; Reich, M.

    1983-01-01

    For the overall safety evaluation of seismic category I structures subjected to various load combinations, a quantitative measure of the structural reliability in terms of a limit state probability can be conveniently used. For this purpose, the reliability analysis method for dynamic loads, which has recently been developed by the authors, was combined with the existing standard reliability analysis procedure for static and quasi-static loads. The significant parameters that enter into the analysis are: the rate at which each load (dead load, accidental internal pressure, earthquake, etc.) will occur, its duration and intensity. All these parameters are basically random variables for most of the loads to be considered. For dynamic loads, the overall intensity is usually characterized not only by their dynamic components but also by their static components. The structure considered in the present paper is a reinforced concrete containment structure subjected to various static and dynamic loads such as dead loads, accidental pressure, earthquake acceleration, etc. Computations are performed to evaluate the limit state probabilities under each load combination separately and also under all possible combinations of such loads

  6. Estimation of structural reliability under combined loads

    International Nuclear Information System (INIS)

    Shinozuka, M.; Kako, T.; Hwang, H.; Brown, P.; Reich, M.

    1983-01-01

    For the overall safety evaluation of seismic category I structures subjected to various load combinations, a quantitative measure of the structural reliability in terms of a limit state probability can be conveniently used. For this purpose, the reliability analysis method for dynamic loads, which has recently been developed by the authors, was combined with the existing standard reliability analysis procedure for static and quasi-static loads. The significant parameters that enter into the analysis are: the rate at which each load (dead load, accidental internal pressure, earthquake, etc.) will occur, its duration and intensity. All these parameters are basically random variables for most of the loads to be considered. For dynamic loads, the overall intensity is usually characterized not only by their dynamic components but also by their static components. The structure considered in the present paper is a reinforced concrete containment structure subjected to various static and dynamic loads such as dead loads, accidental pressure, earthquake acceleration, etc. Computations are performed to evaluate the limit state probabilities under each load combination separately and also under all possible combinations of such loads. Indeed, depending on the limit state condition to be specified, these limit state probabilities can indicate which particular load combination provides the dominant contribution to the overall limit state probability. On the other hand, some of the load combinations contribute very little to the overall limit state probability. These observations provide insight into the complex problem of which load combinations must be considered for design, for which limit states and at what level of limit state probabilities. (orig.)

  7. Reliability of Estimation Pile Load Capacity Methods

    Directory of Open Access Journals (Sweden)

    Yudhi Lastiasih

    2014-04-01

    For none of the numerous existing methods for predicting pile capacity is it known how accurate it is when compared with the actual ultimate capacity of piles tested to failure. The authors of the present paper have conducted such an analysis, based on 130 data sets from field loading tests. Out of these 130 data sets, only 44 could be analysed, of which 15 were conducted until the piles actually reached failure. The pile prediction methods used were: Brinch Hansen's method (1963), Chin's method (1970), Decourt's Extrapolation Method (1999), Mazurkiewicz's method (1972), Van der Veen's method (1953), and the Quadratic Hyperbolic Method proposed by Lastiasih et al. (2012). It was found that all the above methods were sufficiently reliable when applied to data from pile loading tests loaded to failure. However, when applied to data from pile loading tests that did not reach failure, the methods that yield lower values of the correction factor N are more recommended. Finally, the empirical method of Reese and O'Neill (1988) was found to be reliable enough to be used to estimate the Qult of a pile foundation based on soil data only.
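
    Of the extrapolation methods listed, Chin's (1970) is the easiest to illustrate: settlement/load is regressed linearly on settlement near failure, and the reciprocal of the slope is taken as the ultimate capacity. The load-test data below are invented.

      import numpy as np

      # Invented pile load-test data: load Q (kN) and settlement s (mm).
      Q = np.array([250, 500, 750, 1000, 1250, 1500], dtype=float)
      s = np.array([1.2, 2.9, 5.1, 8.0, 12.1, 18.0])

      # Chin (1970): near failure, s/Q is roughly linear in s,
      # s/Q = m*s + c, and Qult is estimated as 1/m.
      m_slope, c = np.polyfit(s, s / Q, 1)
      print(f"Chin extrapolated Qult ~ {1.0 / m_slope:.0f} kN")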

  8. Methodology for uranium resource estimates and reliability

    International Nuclear Information System (INIS)

    Blanchfield, D.M.

    1980-01-01

    The NURE uranium assessment method has evolved from a small group of geologists estimating resources on a few lease blocks to a national survey involving an interdisciplinary system consisting of the following: (1) geology and geologic analogs; (2) engineering and cost modeling; (3) mathematics, probability theory, psychology, and elicitation of subjective judgments; and (4) computerized calculations, computer graphics, and data base management. The evolution has been spurred primarily by two objectives: (1) quantification of uncertainty, and (2) elimination of simplifying assumptions. This has resulted in a tremendous data-gathering effort and the involvement of hundreds of technical experts, many in uranium geology, but many from other fields as well. The rationality of the methods is still largely based on the concept of an analog and the observation that the results are reasonable. The reliability, or repeatability, of the assessments is reasonably guaranteed by the series of peer and superior technical reviews which has been formalized under the current methodology. The optimism or pessimism of individual geologists who make the initial assessments is tempered by the review process, resulting in a series of assessments which are a consistent, unbiased reflection of the facts. Despite the many improvements over past methods, several objectives for future development remain: primarily, to reduce subjectivity in utilizing factual information in the estimation of endowment, and to improve the recognition of cost uncertainties in the assessment of economic potential. The 1980 NURE assessment methodology will undoubtedly be improved, but the reader is reminded that resource estimates are, and always will be, a forecast for the future

  9. Procedure for estimating permanent total enclosure costs

    Energy Technology Data Exchange (ETDEWEB)

    Lukey, M E; Prasad, C; Toothman, D A; Kaplan, N

    1999-07-01

    Industries that use add-on control devices must adequately capture emissions before delivering them to the control device. One way to capture emissions is to use permanent total enclosures (PTEs). By definition, an enclosure which meets the US Environmental Protection Agency's five-point criteria is a PTE and has a capture efficiency of 100%. Since costs play an important role in regulatory development, in selection of control equipment, and in control technology evaluations for permitting purposes, EPA has developed a Control Cost Manual for estimating costs of various items of control equipment. EPA's Manual does not contain any methodology for estimating PTE costs. In order to assist environmental regulators and potential users of PTEs, a methodology for estimating PTE costs was developed under contract with EPA, by Pacific Environmental Services, Inc. (PES) and is the subject of this paper. The methodology for estimating PTE costs follows the approach used for other control devices in the Manual. It includes procedures for sizing various components of a PTE and for estimating capital as well as annual costs. It contains verification procedures for demonstrating compliance with EPA's five-point criteria. In addition, procedures are included to determine compliance with Occupational Safety and Health Administration (OSHA) standards. Meeting these standards is an important factor in properly designing PTEs. The methodology is encoded in Microsoft Excel spreadsheets to facilitate cost estimation and PTE verification. Examples are given throughout the methodology development and in the spreadsheets to illustrate the PTE design, verification, and cost estimation procedures.

  10. Generating human reliability estimates using expert judgment. Volume 2. Appendices

    International Nuclear Information System (INIS)

    Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.

    1984-11-01

    The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessments (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 2 provides detailed procedures for using the techniques, detailed descriptions of the analyses performed to evaluate the techniques, and HEP estimates generated as part of this project. The results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. Judgments were shown to be consistent and to provide HEP estimates with a good degree of convergent validity. Of the two techniques tested, direct numerical estimation appears to be preferable in terms of ease of application and quality of results
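
    Direct numerical estimates of human error probabilities are commonly aggregated on a log scale, since HEPs are treated as approximately log-normal. The sketch below shows one such aggregation (geometric mean with log-based uncertainty bounds) using invented judgments, not the project's elicitation data.

      import numpy as np

      # Invented direct numerical estimates of one HEP from six experts.
      heps = np.array([3e-3, 1e-2, 5e-3, 2e-3, 8e-3, 4e-3])

      logs = np.log10(heps)
      gm = 10 ** logs.mean()                      # geometric mean (median estimate)
      sd = logs.std(ddof=1)                       # spread of the judgments

      lower = 10 ** (logs.mean() - 1.645 * sd)    # ~5th percentile
      upper = 10 ** (logs.mean() + 1.645 * sd)    # ~95th percentile
      print(f"HEP ~ {gm:.1e} (90% bounds: {lower:.1e}, {upper:.1e})")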

  11. Lower bounds to the reliabilities of factor score estimators

    NARCIS (Netherlands)

    Hessen, D.J.

    2017-01-01

    Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone's factor score estimators, Bartlett's factor score estimators, and McDonald's factor score estimators are derived, and conditions are given under which these lower bounds are equal.

  12. Lower Bounds to the Reliabilities of Factor Score Estimators.

    Science.gov (United States)

    Hessen, David J

    2016-10-06

    Under the general common factor model, the reliabilities of factor score estimators might be of more interest than the reliability of the total score (the unweighted sum of item scores). In this paper, lower bounds to the reliabilities of Thurstone's factor score estimators, Bartlett's factor score estimators, and McDonald's factor score estimators are derived and conditions are given under which these lower bounds are equal. The relative performance of the derived lower bounds is studied using classic example data sets. The results show that estimates of the lower bounds to the reliabilities of Thurstone's factor score estimators are greater than or equal to the estimates of the lower bounds to the reliabilities of Bartlett's and McDonald's factor score estimators.

  13. Procedure for estimating permanent total enclosure costs

    Energy Technology Data Exchange (ETDEWEB)

    Lukey, M.E.; Prasad, C.; Toothman, D.A.; Kaplan, N.

    1999-07-01

    Industries that use add-on control devices must adequately capture emissions before delivering them to the control device. One way to capture emissions is to use permanent total enclosures (PTEs). By definition, an enclosure which meets the US Environmental Protection Agency's five-point criteria is a PTE and has a capture efficiency of 100%. Since costs play an important role in regulatory development, in selection of control equipment, and in control technology evaluations for permitting purposes, EPA has developed a Control Cost Manual for estimating costs of various items of control equipment. EPA's Manual does not contain any methodology for estimating PTE costs. In order to assist environmental regulators and potential users of PTEs, a methodology for estimating PTE costs was developed under contract with EPA, by Pacific Environmental Services, Inc. (PES) and is the subject of this paper. The methodology for estimating PTE costs follows the approach used for other control devices in the Manual. It includes procedures for sizing various components of a PTE and for estimating capital as well as annual costs. It contains verification procedures for demonstrating compliance with EPA's five-point criteria. In addition, procedures are included to determine compliance with Occupational Safety and Health Administration (OSHA) standards. Meeting these standards is an important factor in properly designing PTEs. The methodology is encoded in Microsoft Excel spreadsheets to facilitate cost estimation and PTE verification. Examples are given throughout the methodology development and in the spreadsheets to illustrate the PTE design, verification, and cost estimation procedures.

  14. Nonspecialist Raters Can Provide Reliable Assessments of Procedural Skills

    DEFF Research Database (Denmark)

    Mahmood, Oria; Dagnæs, Julia; Bube, Sarah

    2018-01-01

    …was significant (p < …), with a Pearson's correlation of 0.77 for the nonspecialists and 0.75 for the specialists. The test-retest reliability showed the biggest difference between the 2 groups: 0.59 and 0.38 for the nonspecialist raters and the specialist raters, respectively (p < …). … was chosen as it is a simple procedural skill that is crucial to master in a resident urology program. RESULTS: The internal consistency of assessments was high, Cronbach's α = 0.93 and 0.95 for nonspecialist and specialist raters, respectively (p < … for correlations). The interrater reliability…

  15. MEASUREMENT: ACCOUNTING FOR RELIABILITY IN PERFORMANCE ESTIMATES.

    Science.gov (United States)

    Waterman, Brian; Sutter, Robert; Burroughs, Thomas; Dunagan, W Claiborne

    2014-01-01

    When evaluating physician performance measures, physician leaders are faced with the quandary of determining whether departures from expected physician performance measurements represent a true signal or random error. This uncertainty impedes the physician leader's ability and confidence to take appropriate performance improvement actions based on physician performance measurements. Incorporating reliability adjustment into physician performance measurement is a valuable way of reducing the impact of random error in the measurements, such as those caused by small sample sizes. Consequently, the physician executive has more confidence that the results represent true performance and is positioned to make better physician performance improvement decisions. Applying reliability adjustment to physician-level performance data is relatively new. As others have noted previously, it's important to keep in mind that reliability adjustment adds significant complexity to the production, interpretation and utilization of results. Furthermore, the methods explored in this case study only scratch the surface of the range of available Bayesian methods that can be used for reliability adjustment; further study is needed to test and compare these methods in practice and to examine important extensions for handling specialty-specific concerns (e.g., average case volumes, which have been shown to be important in cardiac surgery outcomes). Moreover, it's important to note that the provider group average as a basis for shrinkage is one of several possible choices that could be employed in practice and deserves further exploration in future research. With these caveats, our results demonstrate that incorporating reliability adjustment into physician performance measurements is feasible and can notably reduce the incidence of "real" signals relative to what one would expect to see using more traditional approaches. A physician leader who is interested in catalyzing performance improvement
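
    The reliability adjustment described above has a standard shrinkage form: the adjusted rate is a reliability-weighted average of the observed rate and the group mean, with reliability = between-physician variance / (between-physician variance + within-physician variance / n). Below is a crude sketch with invented counts; a production version would fit the variance components with a proper hierarchical (e.g., Bayesian) model, as the article notes.

      import numpy as np

      # Invented complication counts per physician: events out of cases.
      events = np.array([ 2,  9,  4, 14,  1])
      cases  = np.array([40, 60, 55, 70, 15])

      rates = events / cases
      group_mean = events.sum() / cases.sum()

      # Crude variance components; a hierarchical model would do this properly.
      within_var = group_mean * (1 - group_mean)
      between_var = max(rates.var(ddof=1) - (within_var / cases).mean(), 1e-6)

      reliability = between_var / (between_var + within_var / cases)
      adjusted = reliability * rates + (1 - reliability) * group_mean

      for n_i, r, a in zip(cases, rates, adjusted):
          print(f"n={n_i:3d}  observed={r:.3f}  adjusted={a:.3f}")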

  16. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring

    2012-06-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  17. Validity, reliability and support for implementation of independence-scaled procedural assessment in laparoscopic surgery.

    Science.gov (United States)

    Kramp, Kelvin H; van Det, Marc J; Veeger, Nic J G M; Pierie, Jean-Pierre E N

    2016-06-01

    There is no widely used method to evaluate procedure-specific laparoscopic skills. The first aim of this study was to develop a procedure-based assessment method. The second aim was to compare its validity, reliability and feasibility with currently available global rating scales (GRSs). An independence-scaled procedural assessment was created by linking the procedural key steps of the laparoscopic cholecystectomy to an independence scale. Subtitled and blinded videos of a novice, an intermediate and an almost competent trainee, were evaluated with GRSs (OSATS and GOALS) and the independence-scaled procedural assessment by seven surgeons, three senior trainees and six scrub nurses. Participants received a short introduction to the GRSs and independence-scaled procedural assessment before assessment. The validity was estimated with the Friedman and Wilcoxon test and the reliability with the intra-class correlation coefficient (ICC). A questionnaire was used to evaluate user opinion. Independence-scaled procedural assessment and GRS scores improved significantly with surgical experience (OSATS p = 0.001, GOALS p < 0.001, independence-scaled procedural assessment p < 0.001). The ICCs of the OSATS, GOALS and independence-scaled procedural assessment were 0.78, 0.74 and 0.84, respectively, among surgeons. The ICCs increased when the ratings of scrub nurses were added to those of the surgeons. The independence-scaled procedural assessment was not considered more of an administrative burden than the GRSs (p = 0.692). A procedural assessment created by combining procedural key steps to an independence scale is a valid, reliable and acceptable assessment instrument in surgery. In contrast to the GRSs, the reliability of the independence-scaled procedural assessment exceeded the threshold of 0.8, indicating that it can also be used for summative assessment. It furthermore seems that scrub nurses can assess the operative competence of surgical trainees.

  18. Influences on and Limitations of Classical Test Theory Reliability Estimates.

    Science.gov (United States)

    Arnold, Margery E.

    It is incorrect to say "the test is reliable" because reliability is a function not only of the test itself, but of many factors. The present paper explains how different factors affect classical reliability estimates such as test-retest, interrater, internal consistency, and equivalent forms coefficients. Furthermore, the limits of classical test…

  19. Reliability Estimation for Digital Instrument/Control System

    International Nuclear Information System (INIS)

    Yang, Yaguang; Sydnor, Russell

    2011-01-01

    Digital instrumentation and controls (DI and C) systems are widely adopted in various industries because of their flexibility and ability to implement various functions that can be used to automatically monitor, analyze, and control complicated systems. It is anticipated that DI and C will replace the traditional analog instrumentation and controls (AI and C) systems in all future nuclear reactor designs. There is an increasing interest in reliability and risk analyses for safety-critical DI and C systems in regulatory organizations, such as the United States Nuclear Regulatory Commission. Developing reliability models and reliability estimation methods for digital reactor control and protection systems will involve every part of the DI and C system, such as sensors, signal conditioning and processing components, transmission lines and digital communication systems, D/A and A/D converters, the computer system, signal processing software, control and protection software, the power supply system, and actuators. Some of these components are hardware, such as sensors and actuators; their failure mechanisms are well understood, and traditional reliability models and estimation methods can be directly applied. But many of these components are firmware, which has software embedded in the hardware, and software needs special consideration because its failure mechanism is unique, and the reliability estimation method for a software system will be different from the ones used for hardware systems. In this paper, we propose a reliability estimation method for the entire DI and C system using a recently developed software reliability estimation method and a traditional hardware reliability estimation method.

  20. Reliability Estimation for Digital Instrument/Control System

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Yaguang; Sydnor, Russell [U.S. Nuclear Regulatory Commission, Washington, D.C. (United States)

    2011-08-15

    Digital instrumentation and controls (DI and C) systems are widely adopted in various industries because of their flexibility and ability to implement various functions that can be used to automatically monitor, analyze, and control complicated systems. It is anticipated that DI and C will replace the traditional analog instrumentation and controls (AI and C) systems in all future nuclear reactor designs. There is an increasing interest in reliability and risk analyses for safety-critical DI and C systems in regulatory organizations, such as the United States Nuclear Regulatory Commission. Developing reliability models and reliability estimation methods for digital reactor control and protection systems will involve every part of the DI and C system, such as sensors, signal conditioning and processing components, transmission lines and digital communication systems, D/A and A/D converters, the computer system, signal processing software, control and protection software, the power supply system, and actuators. Some of these components are hardware, such as sensors and actuators; their failure mechanisms are well understood, and traditional reliability models and estimation methods can be directly applied. But many of these components are firmware, which has software embedded in the hardware, and software needs special consideration because its failure mechanism is unique, and the reliability estimation method for a software system will be different from the ones used for hardware systems. In this paper, we propose a reliability estimation method for the entire DI and C system using a recently developed software reliability estimation method and a traditional hardware reliability estimation method.

  1. Estimation of the Reliability of Plastic Slabs

    DEFF Research Database (Denmark)

    Pirzada, G. B.

    In this thesis, work related to fundamental conditions has been extended to the non-fundamental, or general, case of probabilistic analysis. Finally, using the β-unzipping technique, a door has been opened to system reliability analysis of plastic slabs. An attempt has been made in this thesis … to give a probabilistic treatment of plastic slabs which is parallel to the deterministic and systematic treatment of plastic slabs by Nielsen (3). The fundamental reason is that in Nielsen (3) the treatment is based on a deterministic modelling of the basic material properties for the reinforced…

  2. Neglect Of Parameter Estimation Uncertainty Can Significantly Overestimate Structural Reliability

    Directory of Open Access Journals (Sweden)

    Rózsás Árpád

    2015-12-01

    Parameter estimation uncertainty is often neglected in reliability studies, i.e., point estimates of distribution parameters are used for representative fractiles and in probabilistic models. A numerical example examines the effect of this uncertainty on structural reliability using Bayesian statistics. The study reveals that neglecting parameter estimation uncertainty might lead to an order-of-magnitude underestimation of the failure probability.
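
    The effect reported above can be reproduced in miniature: for a normally distributed safety margin, the plug-in failure probability ignores parameter uncertainty, while the posterior predictive under a standard noninformative prior (a scaled Student-t) includes it. The data below are simulated, not the paper's example.

      import numpy as np
      from scipy.stats import norm, t

      rng = np.random.default_rng(7)

      # Simulated small sample of a safety margin M; failure means M < 0.
      n = 15
      data = rng.normal(2.5, 1.0, n)

      xbar, s = data.mean(), data.std(ddof=1)

      # Plug-in: point estimates treated as the true parameters.
      pf_plugin = norm.cdf((0 - xbar) / s)

      # Posterior predictive under the standard noninformative prior:
      # Student-t with n-1 dof, scale s*sqrt(1 + 1/n).
      pf_bayes = t.cdf((0 - xbar) / (s * np.sqrt(1 + 1 / n)), df=n - 1)

      print(f"plug-in  Pf = {pf_plugin:.2e}")
      print(f"Bayesian Pf = {pf_bayes:.2e}  # parameter uncertainty included")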

  3. Reliability estimation of semi-Markov systems: a case study

    International Nuclear Information System (INIS)

    Ouhbi, Brahim; Limnios, Nikolaos

    1997-01-01

    In this article, we are concerned with the estimation of the reliability and the availability of a turbo-generator rotor using a set of data observed in a real engineering situation, provided by Électricité de France (EDF). The rotor is modeled by a semi-Markov process, which is then used to estimate the rotor's reliability and availability. To do this, we present a method for estimating the semi-Markov kernel from censored data

  4. Reliabilities of genomic estimated breeding values in Danish Jersey

    DEFF Research Database (Denmark)

    Thomasen, Jørn Rind; Guldbrandtsen, Bernt; Su, Guosheng

    2012-01-01

    In order to optimize the use of genomic selection in breeding plans, it is essential to have reliable estimates of the genomic breeding values. This study investigated reliabilities of direct genomic values (DGVs) in the Jersey population estimated by three different methods. The validation methods were (i) fivefold cross-validation and (ii) validation on the most recent 3 years of bulls. The reliability of DGV was assessed using squared correlations between DGV and deregressed proofs (DRPs). In the recent 3-year validation model, estimated reliabilities were also used to assess the reliabilities of DGV. The data set consisted of 1003 Danish Jersey bulls with conventional estimated breeding values (EBVs) for 14 different traits included in the Nordic selection index. The bulls were genotyped for Single-nucleotide polymorphism (SNP) markers using the Illumina 54K chip. A Bayesian method was used...

  5. Software Estimation: Developing an Accurate, Reliable Method

    Science.gov (United States)

    2011-08-01

    …based and size-based estimates is able to accurately plan, launch, and execute on schedule.

  6. A SOFTWARE RELIABILITY ESTIMATION METHOD TO NUCLEAR SAFETY SOFTWARE

    Directory of Open Access Journals (Sweden)

    GEE-YONG PARK

    2014-02-01

    A method for estimating software reliability for nuclear safety software is proposed in this paper. This method is based on the software reliability growth model (SRGM), where the behavior of software failure is assumed to follow a non-homogeneous Poisson process. Two types of modeling schemes based on a particular underlying method are proposed in order to more precisely estimate and predict the number of software defects based on very rare software failure data. Bayesian statistical inference is employed to estimate the model parameters by incorporating software test cases as a covariate into the model. It was identified that these models are capable of reasonably estimating the remaining number of software defects, which directly affects the reactor trip functions. The software reliability might be estimated from these modeling equations, and one approach for obtaining the software reliability value is proposed in this paper.
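
    A minimal non-Bayesian version of the SRGM idea fits the Goel-Okumoto mean value function m(t) = a(1 - exp(-b t)) to cumulative failure counts by least squares. The failure data below are invented, and the paper's own Bayesian covariate scheme is not reproduced here.

      import numpy as np
      from scipy.optimize import curve_fit

      # Invented test record: week index and cumulative failures found.
      weeks = np.arange(1, 13)
      cum_failures = np.array([4, 9, 13, 16, 19, 21, 22, 24, 25, 25, 26, 26])

      def goel_okumoto(t, a, b):
          # NHPP mean value function: expected failures by time t.
          return a * (1.0 - np.exp(-b * t))

      (a, b), _ = curve_fit(goel_okumoto, weeks, cum_failures, p0=(30.0, 0.2))

      remaining = a - cum_failures[-1]        # expected residual defects
      m12, m13 = goel_okumoto(12, a, b), goel_okumoto(13, a, b)
      print(f"total defects a = {a:.1f}, detection rate b = {b:.3f}")
      print(f"expected remaining defects: {remaining:.1f}")
      print(f"P(no failure in week 13) ~ {np.exp(-(m13 - m12)):.3f}")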

  7. IRT-Estimated Reliability for Tests Containing Mixed Item Formats

    Science.gov (United States)

    Shu, Lianghua; Schwarz, Richard D.

    2014-01-01

    As a global measure of precision, item response theory (IRT) estimated reliability is derived for four coefficients (Cronbach's α, Feldt-Raju, stratified α, and marginal reliability). Models with different underlying assumptions concerning test-part similarity are discussed. A detailed computational example is presented for the targeted…

  8. A Latent Class Approach to Estimating Test-Score Reliability

    Science.gov (United States)

    van der Ark, L. Andries; van der Palm, Daniel W.; Sijtsma, Klaas

    2011-01-01

    This study presents a general framework for single-administration reliability methods, such as Cronbach's alpha, Guttman's lambda-2, and method MS. This general framework was used to derive a new approach to estimating test-score reliability by means of the unrestricted latent class model. This new approach is the latent class reliability…
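
    Of the single-administration coefficients named, Guttman's lambda-2 has a closed form on the item covariance matrix. The sketch below computes it (and Cronbach's alpha for comparison) from an invented item-score matrix, not the study's data.

      import numpy as np

      # Invented item-score matrix: rows = respondents, columns = items.
      X = np.array([
          [3, 4, 3, 5],
          [2, 2, 3, 2],
          [4, 5, 4, 4],
          [1, 2, 2, 1],
          [3, 3, 4, 4],
          [5, 4, 5, 5],
      ], dtype=float)

      k = X.shape[1]
      C = np.cov(X, rowvar=False)           # item covariance matrix
      total_var = C.sum()                   # variance of the total score
      off = C - np.diag(np.diag(C))         # off-diagonal covariances

      # Guttman's lambda-2 and, for comparison, Cronbach's alpha (lambda-3).
      lambda2 = (off.sum() + np.sqrt(k / (k - 1) * (off ** 2).sum())) / total_var
      alpha = (k / (k - 1)) * (1 - np.trace(C) / total_var)
      print(f"lambda-2 = {lambda2:.3f}, alpha = {alpha:.3f}")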

  9. User's guide to the Reliability Estimation System Testbed (REST)

    Science.gov (United States)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

  10. Estimation of effective dose during hysterosalpingography procedures

    International Nuclear Information System (INIS)

    Alzimamil, K.; Babikir, E.; Alkhorayef, M.; Sulieman, A.; Alsafi, K.; Omer, H.

    2014-08-01

    Hysterosalpingography (HSG) is the most frequently used diagnostic tool for evaluating the endometrial cavity and fallopian tubes using conventional x-ray or fluoroscopy. Determination of patient radiation dose values from x-ray examinations provides useful guidance on where best to concentrate efforts on patient dose reduction in order to optimize the protection of patients. The aims of this study were to measure patients' entrance surface air kerma (ESAK) doses and effective doses, and to compare practices between different hospitals in Sudan. ESAK was measured for each patient using calibrated thermoluminescent dosimeters (TLDs, GR-200A). Effective doses were estimated using National Radiological Protection Board (NRPB) software. This study was conducted in five radiological departments: two teaching hospitals (A and D), two private hospitals (B and C), and one university hospital (E). The mean ESAK was 20.1 mGy, 28.9 mGy, 13.6 mGy, 58.65 mGy, 35.7 mGy, 22.4 mGy and 19.6 mGy for hospitals A, B, C, D and E, respectively. The mean effective dose was 2.4 mSv, 3.5 mSv, 1.6 mSv, 7.1 mSv and 4.3 mSv, in the same order. The study showed wide variations in the ESAK values, with three of the hospitals having values above the internationally reported values. The number of x-ray images, fluoroscopy time, operator skill, x-ray machine type and the clinical complexity of the procedures were shown to be major contributors to the variations reported. The results demonstrated the need for standardization of technique throughout the hospitals and suggest that there is a need to optimize the procedures. Local DRLs were proposed for the entire set of procedures. (author)

  11. SHARP1: A revised systematic human action reliability procedure

    International Nuclear Information System (INIS)

    Wakefield, D.J.; Parry, G.W.; Hannaman, G.W.; Spurgin, A.J.

    1990-12-01

    Individual plant examinations (IPE) are being performed by utilities to evaluate plant-specific vulnerabilities to severe accidents. A major tool in performing an IPE is a probabilistic risk assessment (PRA). The importance of human interactions in determining the plant response in past PRAs is well documented. The modeling and quantification of the probabilities of human interactions have been the subjects of considerable research by the Electric Power Research Institute (EPRI). A revised framework, SHARP1, for incorporating human interactions into PRA is summarized in this report. SHARP1 emphasizes that the process stages are iterative and directed at specific goals rather than being performed sequentially in a stepwise procedure. This expanded summary provides the reader with a flavor of the full report content. Excerpts from the full report are presented, following the same outline as the full report. In the full report, the interface of the human reliability analysis with the plant logic model development in a PRA is given special attention. In addition to describing a methodology framework, the report also discusses the types of human interactions to be evaluated, and how to formulate a project team to perform the human reliability analysis. A concise description and comparative evaluation of the selected existing methods of quantification of human error are also presented. Four case studies are also provided to illustrate the SHARP1 process

  12. A single model procedure for tank calibration function estimation

    International Nuclear Information System (INIS)

    York, J.C.; Liebetrau, A.M.

    1995-01-01

    Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely-used software packages
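
    The single-model idea can be sketched as follows (synthetic data, hypothetical breakpoint, continuity constraints and run-to-run effects omitted for brevity): one design matrix spans both calibration segments, so a single least-squares solve yields the whole calibration function and a variance estimate based on all the data.

        import numpy as np

        # Synthetic level/volume data with a structural breakpoint at 1.0
        rng = np.random.default_rng(0)
        level = np.linspace(0.0, 2.0, 40)
        truth = np.where(level < 1.0,
                         50.0 * level + 5.0 * level**2,
                         55.0 + 60.0 * (level - 1.0))
        volume = truth + rng.normal(0.0, 0.5, level.size)

        # One design matrix for both segments: quadratic below the
        # breakpoint, linear above it; a single lstsq call fits everything.
        low = level < 1.0
        X = np.zeros((level.size, 5))
        X[low, 0], X[low, 1], X[low, 2] = 1.0, level[low], level[low]**2
        X[~low, 3], X[~low, 4] = 1.0, level[~low]
        beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
        resid = volume - X @ beta
        print("coefficients:", beta.round(2))
        print("residual std (all data):", resid.std(ddof=5).round(3))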

  13. Application of analytical procedure on system reliability, GO-FLOW

    International Nuclear Information System (INIS)

    Matsuoka, Takeshi; Fukuto, Junji; Mitomo, Nobuo; Miyazaki, Keiko; Matsukura, Hiroshi; Kobayashi, Michiyuki

    2000-01-01

    At the Ship Research Institute, research and development of the GO-FLOW procedure, a system reliability analysis method that forms a main part of probabilistic safety assessment (PSA), has been promoted with various advanced functions. In this study, intended as a fundamental upgrade of the GO-FLOW procedure as an important evaluation technique for executing PSA below level 3, a safety assessment system using GO-FLOW was developed, coupling a dynamic behavior analysis function with the physical behavior of the system under stochastic phenomenological changes. In the 1998 fiscal year, various functions were prepared and verified, such as adding dependences between headings, rearranging headings in time order, assigning the same heading to plural positions, and calculating occurrence frequency as a function of elapsed time. Regarding the simulation analysis function for accident sequences, it was confirmed that the analysis covers all main accident sequences in the reactor for the improved marine reactor, MRX. In addition, a function for nearly automatic generation of analysis input data was also prepared. As a result, the previous difficulty that analysis results were not always easy to understand for anyone but a PSA expert was resolved, and understanding of the accident phenomena, verification of the validity of an analysis, feedback to analysis, and feedback to design can now be carried out easily. (G.K.)

  14. Estimating the Reliability of Aggregated and Within-Person Centered Scores in Ecological Momentary Assessment

    Science.gov (United States)

    Huang, Po-Hsien; Weng, Li-Jen

    2012-01-01

    A procedure for estimating the reliability of test scores in the context of ecological momentary assessment (EMA) was proposed to take into account the characteristics of EMA measures. Two commonly used test scores in EMA were considered: the aggregated score (AGGS) and the within-person centered score (WPCS). Conceptually, AGGS and WPCS represent…

  15. Estimation of some stochastic models used in reliability engineering

    International Nuclear Information System (INIS)

    Huovinen, T.

    1989-04-01

    The work aims to study the estimation of some stochastic models used in reliability engineering. In reliability engineering continuous probability distributions have been used as models for the lifetime of technical components. We consider here the following distributions: exponential, 2-mixture exponential, conditional exponential, Weibull, lognormal and gamma. Maximum likelihood method is used to estimate distributions from observed data which may be either complete or censored. We consider models based on homogeneous Poisson processes such as gamma-poisson and lognormal-poisson models for analysis of failure intensity. We study also a beta-binomial model for analysis of failure probability. The estimators of the parameters for three models are estimated by the matching moments method and in the case of gamma-poisson and beta-binomial models also by maximum likelihood method. A great deal of mathematical or statistical problems that arise in reliability engineering can be solved by utilizing point processes. Here we consider the statistical analysis of non-homogeneous Poisson processes to describe the failing phenomena of a set of components with a Weibull intensity function. We use the method of maximum likelihood to estimate the parameters of the Weibull model. A common cause failure can seriously reduce the reliability of a system. We consider a binomial failure rate (BFR) model as an application of the marked point processes for modelling common cause failure in a system. The parameters of the binomial failure rate model are estimated with the maximum likelihood method
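
    As a concrete illustration of the maximum likelihood step for one of the listed models, the sketch below (synthetic data) fits a Weibull lifetime distribution to right-censored observations: failures contribute the density, and censored units the survival function, to the likelihood.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        t = 1000.0 * rng.weibull(1.5, 200)      # true shape 1.5, scale 1000 h
        censor = 900.0                          # test stopped at 900 h
        obs = np.minimum(t, censor)
        failed = t <= censor                    # False = right-censored

        def neg_loglik(p):
            k, lam = p                          # Weibull shape and scale
            if k <= 0.0 or lam <= 0.0:
                return np.inf
            z = obs / lam
            # failures: log f(t); censored: log S(t) = -(t/lam)**k
            ll = np.sum(np.log(k / lam) + (k - 1.0) * np.log(z[failed]))
            ll -= np.sum(z**k)
            return -ll

        fit = minimize(neg_loglik, x0=[1.0, obs.mean()], method="Nelder-Mead")
        print("MLE shape, scale:", fit.x.round(3))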

  16. Case Study: Zutphen : Estimates of levee system reliability

    NARCIS (Netherlands)

    Roscoe, K.; Kothuis, Baukje; Kok, Matthijs

    2017-01-01

    Estimates of levee system reliability can conflict with experience and intuition. For example, a very high failure probability may be computed while no evidence of failure has been observed, or a very low failure probability when signs of failure have been detected.

  17. A generic method for estimating system reliability using Bayesian networks

    International Nuclear Information System (INIS)

    Doguc, Ozge; Ramirez-Marquez, Jose Emmanuel

    2009-01-01

    This study presents a holistic method for constructing a Bayesian network (BN) model for estimating system reliability. BN is a probabilistic approach that is used to model and predict the behavior of a system based on observed stochastic events. The BN model is a directed acyclic graph (DAG) where the nodes represent system components and arcs represent relationships among them. Although recent studies on using BN for estimating system reliability have been proposed, they are based on the assumption that a pre-built BN has been designed to represent the system. In these studies, the task of building the BN is typically left to a group of specialists who are BN and domain experts. The BN experts should learn about the domain before building the BN, which is generally very time consuming and may lead to incorrect deductions. As there are no existing studies to eliminate the need for a human expert in the process of system reliability estimation, this paper introduces a method that uses historical data about the system to be modeled as a BN and provides efficient techniques for automated construction of the BN model, and hence estimation of the system reliability. In this respect K2, a data mining algorithm, is used for finding associations between system components, and thus building the BN model. This algorithm uses a heuristic to provide efficient and accurate results while searching for associations. Moreover, no human intervention is necessary during the process of BN construction and reliability estimation. The paper provides a step-by-step illustration of the method and evaluation of the approach with literature case examples
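
    As an illustration of the final computation such a model supports, the sketch below evaluates system reliability on a tiny hand-built BN by brute-force enumeration (the paper learns the BN structure from historical data with K2; the components and probabilities here are hypothetical).

        from itertools import product

        # Prior probabilities that each component is up (hypothetical)
        p_up = {"pump": 0.95, "valve": 0.98}
        # P(system works | pump state, valve state), 1 = up, 0 = down
        cpt_sys = {(1, 1): 0.999, (1, 0): 0.10, (0, 1): 0.20, (0, 0): 0.0}

        reliability = 0.0
        for pump, valve in product([0, 1], repeat=2):
            prior = (p_up["pump"] if pump else 1 - p_up["pump"]) * \
                    (p_up["valve"] if valve else 1 - p_up["valve"])
            reliability += prior * cpt_sys[(pump, valve)]
        print(f"P(system works) = {reliability:.4f}")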

  19. An integrated approach to estimate storage reliability with initial failures based on E-Bayesian estimates

    International Nuclear Information System (INIS)

    Zhang, Yongjin; Zhao, Ming; Zhang, Shitao; Wang, Jiamei; Zhang, Yanjun

    2017-01-01

    Storage reliability, which measures the ability of products in a dormant state to keep their required functions, is studied in this paper. For certain types of products, storage reliability may not be 100% at the beginning of storage; unlike operational reliability, there may exist initial failures that are normally neglected in models of storage reliability. In this paper, a new integrated technique is proposed to estimate and predict the storage reliability of products with possible initial failures: a non-parametric measure based on E-Bayesian estimates of current failure probabilities is combined with a parametric measure based on the exponential reliability function. The non-parametric method is used to estimate the number of failed products and the reliability at each testing time, and the parametric method is used to estimate the initial reliability and the failure rate of the stored product. The proposed method takes into consideration that reliability test data of storage products, including units unexamined before and during the storage process, are available, providing more accurate estimates of both the initial failure probability and the storage failure probability. When storage reliability prediction, the main concern in this field, is to be made, the non-parametric estimates of failure numbers can be used in the parametric models for the failure process in storage. For the case of exponential models, the assessment and prediction method for storage reliability is presented in this paper. Finally, a numerical example is given to illustrate the method, and a detailed comparison between the proposed and traditional methods is investigated to examine the rationality of the storage reliability assessments and predictions. The results should be useful for planning a storage environment, decision-making concerning the maximum length of storage, and identifying production quality.

  20. Reliability of Bluetooth Technology for Travel Time Estimation

    DEFF Research Database (Denmark)

    Araghi, Bahar Namaki; Olesen, Jonas Hammershøj; Krishnan, Rajesh

    2015-01-01

    . However, their corresponding impacts on accuracy and reliability of estimated travel time have not been evaluated. In this study, a controlled field experiment is conducted to collect both Bluetooth and GPS data for 1000 trips to be used as the basis for evaluation. Data obtained by GPS logger is used...... to calculate actual travel time, referred to as ground truth, and to geo-code the Bluetooth detection events. In this setting, reliability is defined as the percentage of devices captured per trip during the experiment. It is found that, on average, Bluetooth-enabled devices will be detected 80% of the time......-range antennae detect Bluetooth-enabled devices in a closer location to the sensor, thus providing a more accurate travel time estimate. However, the smaller the size of the detection zone, the lower the penetration rate, which could itself influence the accuracy of estimates. Therefore, there has to be a trade...

  1. Computer Model to Estimate Reliability Engineering for Air Conditioning Systems

    International Nuclear Information System (INIS)

    Afrah Al-Bossly, A.; El-Berry, A.; El-Berry, A.

    2012-01-01

    Reliability engineering is used to predict the performance and to optimize the design and maintenance of air conditioning systems. Air conditioning systems are exposed to a number of failures. Failures of an air conditioner, such as failure to turn on, loss of cooling capacity, reduced output temperatures, loss of cool air supply and loss of air flow entirely, can be due to a variety of problems with one or more components of an air conditioner or air conditioning system. Forecasts of system failure rates are very important for maintenance. This paper focuses on the reliability of air conditioning systems, using two statistical distributions commonly applied in reliability settings: the standard (two-parameter) Weibull and Gamma distributions. After the distribution parameters had been estimated, reliability estimations and predictions were used for evaluation. To evaluate the operating condition of a building, the reliability of the air conditioning system that supplies conditioned air to the company's several departments was assessed. This air conditioning system is divided into two parts, namely the main chilled water system and the ten air handling systems that serve the ten departments. In a chilled-water system the air conditioner cools water down to 40-45 degree F (4-7 degree C); the chilled water is distributed throughout the building in a piping system and connected to air conditioning cooling units wherever needed. Data analysis was performed with the support of computer-aided reliability software; the Weibull and Gamma distributions indicated that the reliability of the systems equals 86.012% and 77.7%, respectively. A comparison between these two important families of distribution functions was studied, and it was found that the Weibull method performed better for decision making.
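
    A sketch of the distribution-fitting step on synthetic time-between-failure data (the study's own data and software are not reproduced): Weibull parameters are estimated and the reliability at a target operating time is read from the fitted survival function.

        import numpy as np
        from scipy import stats

        # Synthetic time-between-failure data for an air handler (hours)
        rng = np.random.default_rng(2)
        tbf = stats.weibull_min.rvs(1.8, scale=400.0, size=60, random_state=rng)

        # Fit a two-parameter Weibull (location fixed at zero)
        shape, loc, scale = stats.weibull_min.fit(tbf, floc=0.0)
        t = 200.0                                 # target operating time, hours
        print(f"shape={shape:.2f}, scale={scale:.1f}")
        print(f"R({t:.0f} h) = {stats.weibull_min.sf(t, shape, loc, scale):.3f}")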

  2. Analytical procedures for determining the impacts of reliability mitigation strategies.

    Science.gov (United States)

    2013-01-01

    Reliability of transport, especially the ability to reach a destination within a certain amount of time, is a regular concern of travelers and shippers. The definition of reliability used in this research is how travel time varies over time. The vari...

  3. Reliability Estimation of the Pultrusion Process Using the First-Order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Baran, Ismet; Tutum, Cem Celal; Hattel, Jesper Henri

    2013-01-01

    In the present study the reliability estimation of the pultrusion process of a flat plate is analyzed by using the first order reliability method (FORM). The implementation of the numerical process model is validated by comparing the deterministic temperature and cure degree profiles...... with corresponding analyses in the literature. The centerline degree of cure at the exit (CDOCE) being less than a critical value and the maximum composite temperature (Tmax) during the process being greater than a critical temperature are selected as the limit state functions (LSFs) for the FORM. The cumulative...
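
    A bare-bones FORM iteration (the Hasofer-Lind/Rackwitz-Fiessler scheme) in standard-normal space, with a toy linear limit state standing in for the paper's CDOCE and Tmax limit state functions:

        import numpy as np
        from scipy.stats import norm

        def g(u):                                   # failure when g(u) <= 0
            return 3.0 - u[0] - 0.5 * u[1]

        def grad_g(u, h=1e-6):                      # finite-difference gradient
            return np.array([(g(u + h*e) - g(u - h*e)) / (2.0*h)
                             for e in np.eye(len(u))])

        u = np.zeros(2)
        for _ in range(100):                        # HL-RF fixed-point iteration
            grad = grad_g(u)
            u_new = grad * (grad @ u - g(u)) / (grad @ grad)
            if np.linalg.norm(u_new - u) < 1e-8:
                u = u_new
                break
            u = u_new

        beta = np.linalg.norm(u)                    # reliability index
        print(f"beta = {beta:.3f}, Pf = {norm.cdf(-beta):.3e}")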

  4. Modelling and estimating degradation processes with application in structural reliability

    International Nuclear Information System (INIS)

    Chiquet, J.

    2007-06-01

    The characteristic level of degradation of a given structure is modeled through a stochastic process called the degradation process. The random evolution of the degradation process is governed by a differential system with a Markovian environment. We set up the associated reliability framework by considering the failure of the structure once the degradation process reaches a critical threshold. A closed-form solution of the reliability function is obtained thanks to Markov renewal theory. Then, we build an estimation methodology for the parameters of the stochastic processes involved. The estimation methods and the theoretical results, as well as the associated numerical algorithms, are validated on simulated data sets. Our method is applied to the modelling of a real degradation mechanism, known as crack growth, for which an experimental data set is considered. (authors)

  5. Application of subset simulation in reliability estimation of underground pipelines

    International Nuclear Information System (INIS)

    Tee, Kong Fah; Khan, Lutfor Rahman; Li, Hongshuang

    2014-01-01

    This paper presents a computational framework for implementing an advanced Monte Carlo simulation method, called Subset Simulation (SS), for time-dependent reliability prediction of underground flexible pipelines. SS can provide better resolution for the low failure probability levels of rare failure events which are commonly encountered in pipeline engineering applications. Random samples of statistical variables are generated efficiently and used for computing the probabilistic reliability model. The method gains its efficiency by expressing a small probability event as a product of a sequence of intermediate events with larger conditional probabilities. The efficiency of SS has been demonstrated by numerical studies, and attention in this work is devoted to scrutinising the robustness of the SS application in pipe reliability assessment in comparison with the direct Monte Carlo simulation (MCS) method. The reliability of a buried flexible steel pipe with time-dependent failure modes, namely corrosion-induced deflection, buckling, wall thrust and bending stress, has been assessed in this study. The analysis indicates that corrosion-induced excessive deflection is the most critical failure event, whereas buckling is the least susceptible during the whole service life of the pipe. The study also shows that SS is a robust method to estimate the reliability of buried pipelines and that it is more efficient than MCS, especially in small failure probability prediction.
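
    The core SS mechanics, expressing a rare-event probability as a product of larger conditional probabilities estimated level by level, can be sketched with a toy limit state in standard-normal space (the pipe failure modes and the study's correlation structure are not reproduced):

        import numpy as np

        rng = np.random.default_rng(3)
        # Toy limit state: failure when g(x) <= 0; exact Pf = Phi(-4.5)
        g = lambda x: 4.5 - x.sum(axis=-1) / np.sqrt(x.shape[-1])

        n, p0, d = 1000, 0.1, 2
        x = rng.standard_normal((n, d))
        y = g(x)
        pf = 1.0
        for _ in range(20):                          # at most 20 levels
            thresh = np.quantile(y, p0)              # intermediate threshold
            if thresh <= 0.0:                        # failure domain reached
                pf *= np.mean(y <= 0.0)
                break
            pf *= p0
            seeds = x[y <= thresh]                   # seeds for the next level
            chains = [seeds]
            while sum(len(c) for c in chains) < n:   # Metropolis random walk
                cand = seeds + 0.8 * rng.standard_normal(seeds.shape)
                ratio = np.exp(0.5 * (seeds**2 - cand**2).sum(axis=1))
                accept = rng.random(len(seeds)) < np.minimum(1.0, ratio)
                new = np.where(accept[:, None], cand, seeds)
                new = np.where((g(new) <= thresh)[:, None], new, seeds)
                chains.append(new)
                seeds = new
            x = np.concatenate(chains)[:n]
            y = g(x)
        print(f"estimated Pf = {pf:.2e}  (exact 3.40e-06)")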

  6. Procedures for treating common cause failures in safety and reliability studies: Procedural framework and examples

    International Nuclear Information System (INIS)

    Mosleh, A.; Fleming, K.N.; Parry, G.W.; Paula, H.M.; Worledge, D.H.; Rasmuson, D.M.

    1988-01-01

    This report presents a framework for the inclusion of the impact of common cause failures in risk and reliability evaluations. Common cause failures are defined as that subset of dependent failures for which causes are not explicitly included in the logic model as basic events. The emphasis here is on providing procedures for a practical, systematic approach that can be used to perform and clearly document the analysis. The framework comprises four major stages: (1) System Logic Model Development; (2) Identification of Common Cause Component Groups; (3) Common Cause Modeling and Data Analysis; and (4) System Quantification and Interpretation of Results. The framework and the methods discussed for performing the different stages of the analysis integrate insights obtained from engineering assessments of the system and the historical evidence from multiple failure events into a systematic, reproducible, and defensible analysis. 22 figs., 34 tabs

  7. Estimating the reliability of eyewitness identifications from police lineups.

    Science.gov (United States)

    Wixted, John T; Mickes, Laura; Dunn, John C; Clark, Steven E; Wells, William

    2016-01-12

    Laboratory-based mock crime studies have often been interpreted to mean that (i) eyewitness confidence in an identification made from a lineup is a weak indicator of accuracy and (ii) sequential lineups are diagnostically superior to traditional simultaneous lineups. Largely as a result, juries are increasingly encouraged to disregard eyewitness confidence, and up to 30% of law enforcement agencies in the United States have adopted the sequential procedure. We conducted a field study of actual eyewitnesses who were assigned to simultaneous or sequential photo lineups in the Houston Police Department over a 1-y period. Identifications were made using a three-point confidence scale, and a signal detection model was used to analyze and interpret the results. Our findings suggest that (i) confidence in an eyewitness identification from a fair lineup is a highly reliable indicator of accuracy and (ii) if there is any difference in diagnostic accuracy between the two lineup formats, it likely favors the simultaneous procedure.

  8. Procedures for controlling the risks of reliability, safety, and availability of technical systems

    International Nuclear Information System (INIS)

    1987-01-01

    The reference book covers four sections. Apart from the fundamental aspects of the reliability problem, of risk and safety, and the relevant criteria with regard to reliability, the material presented explains reliability in terms of maintenance, logistics and availability, and presents procedures for reliability assessment and for the determination of factors influencing reliability, together with suggestions for systems-technical integration. The reliability assessment consists of diagnostic and prognostic analyses. The section on factors influencing reliability discusses aspects of organisational structures, programme planning and control, and critical activities. (DG) [de

  9. Structural Reliability Using Probability Density Estimation Methods Within NESSUS

    Science.gov (United States)

    Chamis, Chrisos C. (Technical Monitor); Godines, Cody Ric

    2003-01-01

    A reliability analysis studies a mathematical model of a physical system taking into account uncertainties of design variables, and common results are estimations of a response density, which also implies estimations of its parameters. Some common density parameters include the mean value, the standard deviation, and specific percentile(s) of the response, which are measures of central tendency, variation, and probability regions, respectively. Reliability analyses are important since the results can lead to different designs by calculating the probability of observing safe responses in each of the proposed designs. All of this is done at the expense of added computational time as compared to a single deterministic analysis, which will result in one value of the response out of many that make up the density of the response. Sampling methods, such as Monte Carlo (MC) and Latin hypercube sampling (LHS), can be used to perform reliability analyses and can compute nonlinear response density parameters even if the response is dependent on many random variables. Hence, both methods are very robust; however, they are computationally expensive to use in the estimation of the response density parameters. Both methods are 2 of 13 stochastic methods that are contained within the Numerical Evaluation of Stochastic Structures Under Stress (NESSUS) program. NESSUS is a probabilistic finite element analysis (FEA) program that was developed through funding from NASA Glenn Research Center (GRC). It has the additional capability of being linked to other analysis programs; therefore, probabilistic fluid dynamics, fracture mechanics, and heat transfer are only a few of what is possible with this software. The LHS method is the newest addition to the stochastic methods within NESSUS. Part of this work was to enhance NESSUS with the LHS method. The new LHS module is complete, has been successfully integrated with NESSUS, and been used to study four different test cases that have been
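
    A sketch of the LHS idea with a toy response function standing in for a NESSUS finite element model: each input dimension is stratified into n equal-probability bins, one sample is drawn per bin, and density parameters of the response are estimated from the sample.

        import numpy as np
        from scipy.stats import norm

        def response(x):                    # hypothetical response function
            return x[:, 0]**2 + 2.0 * x[:, 1]

        def lhs_normal(n, d, rng):
            # One stratum per sample and dimension, randomly permuted,
            # then mapped to standard-normal variates via the inverse CDF.
            strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
            u = (strata + rng.random((n, d))) / n
            return norm.ppf(u)

        rng = np.random.default_rng(4)
        x = lhs_normal(2000, 2, rng)
        y = response(x)
        print(f"mean={y.mean():.3f}, std={y.std(ddof=1):.3f}, "
              f"99th pct={np.percentile(y, 99):.3f}")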

  10. Procedure for Application of Software Reliability Growth Models to NPP PSA

    International Nuclear Information System (INIS)

    Son, Han Seong; Kang, Hyun Gook; Chang, Seung Cheol

    2009-01-01

    As the use of software increases at nuclear power plants (NPPs), the necessity for including software reliability and/or safety into the NPP Probabilistic Safety Assessment (PSA) rises. This work proposes an application procedure of software reliability growth models (RGMs), which are most widely used to quantify software reliability, to NPP PSA. Through the proposed procedure, it can be determined if a software reliability growth model can be applied to the NPP PSA before its real application. The procedure proposed in this work is expected to be very helpful for incorporating software into NPP PSA
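
    As a concrete example of the kind of reliability growth model the procedure would screen, the sketch below fits the Goel-Okumoto NHPP mean value function m(t) = a(1 - exp(-b t)) to hypothetical cumulative failure counts:

        import numpy as np
        from scipy.optimize import curve_fit

        t = np.arange(1.0, 13.0)                        # test weeks
        m = np.array([8, 15, 20, 25, 28, 31, 33, 35,    # cumulative failures
                      36, 37, 38, 38.5])                # (hypothetical data)

        go = lambda t, a, b: a * (1.0 - np.exp(-b * t))
        (a, b), _ = curve_fit(go, t, m, p0=[40.0, 0.2])
        print(f"a = {a:.1f} total expected faults, b = {b:.3f} per week")
        print(f"expected residual faults: {a - go(t[-1], a, b):.1f}")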

  11. A note on reliability estimation of functionally diverse systems

    International Nuclear Information System (INIS)

    Littlewood, B.; Popov, P.; Strigini, L.

    1999-01-01

    It has been argued that functional diversity might be a plausible means of claiming independence of failures between two versions of a system. We present a model of functional diversity, in the spirit of earlier models of diversity such as those of Eckhardt and Lee, and Hughes. In terms of the model, we show that the claims for independence between functionally diverse systems seem rather unrealistic. Instead, it seems likely that functionally diverse systems will exhibit positively correlated failures, and thus will be less reliable than an assumption of independence would suggest. The result does not, of course, suggest that functional diversity is not worthwhile; instead, it places upon the evaluator of such a system the onus to estimate the degree of dependence so as to evaluate the reliability of the system

  12. Probabilistic confidence for decisions based on uncertain reliability estimates

    Science.gov (United States)

    Reid, Stuart G.

    2013-05-01

    Reliability assessments are commonly carried out to provide a rational basis for risk-informed decisions concerning the design or maintenance of engineering systems and structures. However, calculated reliabilities and associated probabilities of failure often have significant uncertainties associated with the possible estimation errors relative to the 'true' failure probabilities. For uncertain probabilities of failure, a measure of 'probabilistic confidence' has been proposed to reflect the concern that uncertainty about the true probability of failure could result in a system or structure that is unsafe and could subsequently fail. The paper describes how the concept of probabilistic confidence can be applied to evaluate and appropriately limit the probabilities of failure attributable to particular uncertainties such as design errors that may critically affect the dependability of risk-acceptance decisions. This approach is illustrated with regard to the dependability of structural design processes based on prototype testing with uncertainties attributable to sampling variability.

  13. Reliability and Validity of 10 Different Standard Setting Procedures.

    Science.gov (United States)

    Halpin, Glennelle; Halpin, Gerald

    Research indicating that different cut-off points result from the use of different standard-setting techniques leaves decision makers with a disturbing dilemma: Which standard-setting method is best? This investigation of the reliability and validity of 10 different standard-setting approaches was designed to provide information that might help…

  14. Availability and Reliability of FSO Links Estimated from Visibility

    Directory of Open Access Journals (Sweden)

    M. Tatarko

    2012-06-01

    Full Text Available This paper is focused on estimating the availability and reliability of FSO systems. The abbreviation FSO stands for Free Space Optics, a system which allows optical transmission between two fixed points; it can be described as a last-mile communication system. It is an optical communication system, but the propagation medium is air. This solution for the last mile does not require expensive optical fiber, and establishing a connection is very simple. However, there are some drawbacks which have a bad influence on the quality of services and the availability of the link. A number of phenomena in the atmosphere, such as scattering, absorption and turbulence, cause large variations of received optical power and laser beam attenuation. The influence of absorption and turbulence can be significantly reduced by an appropriate design of the FSO link, but visibility has the main influence on the quality of the optical transmission channel. Thus, in a typical continental area where rain, snow or fog occurs, it is important to know their values. This article describes a device for measuring weather conditions and gives information about the estimation of availability and reliability of FSO links in Slovakia.

  15. Procedure for conducting a human-reliability analysis for nuclear power plants. Final report

    International Nuclear Information System (INIS)

    Bell, B.J.; Swain, A.D.

    1983-05-01

    This document describes in detail a procedure to be followed in conducting a human reliability analysis as part of a probabilistic risk assessment when such an analysis is performed according to the methods described in NUREG/CR-1278, Handbook for Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. An overview of the procedure describing the major elements of a human reliability analysis is presented along with a detailed description of each element and an example of an actual analysis. An appendix consists of some sample human reliability analysis problems for further study

  16. Reliability of fish size estimates obtained from multibeam imaging sonar

    Science.gov (United States)

    Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.

    2013-01-01

    Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error, [(sonar estimate − total length)/total length] × 100, varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-square mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄ = −8.34, SE = 2.39) and white perch (x̄ = 14.48, SE = 3.99) but not striped bass (x̄ = 3.71, SE = 2.58) or channel catfish (x̄ = 3.97, SE = 5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of

  17. Methods for qualification of highly reliable software - international procedure

    International Nuclear Information System (INIS)

    Kersken, M.

    1997-01-01

    Despite the advantages of computer-assisted safety technology, some uneasiness can still be observed with respect to these novel processes, resulting from the absence of a body of generally accepted and uncontentious qualification guides (regulatory provisions, standards) for the safety evaluation of the computer codes applied. Warranty of adequate protection of the population, operators or plant components is an essential aspect in this context, too - as it is in general with reliability and risk assessment of novel technology - so that, with appropriate legislation still missing, there currently is a licensing risk involved in the introduction of digital safety systems. Nevertheless, there is some extent of agreement within the international community and among utility operators about what standards and measures should be applied for the qualification of software of relevance to plant safety. In particular, the standard IEC 880 /IEC 86/, in its original version, or national documents based on this standard, are applied in all countries using or planning to install such systems. A novel supplement to this standard, document /IEC 96/, is in the process of finalization and defines the requirements to be met by modern methods of software engineering. (orig./DG) [de

  18. A modified procedure for estimating the population mean in two ...

    African Journals Online (AJOL)

    A modified procedure for estimating the population mean in two-occasion successive samplings. Housila Prasad Singh, Suryal Kant Pal. Abstract. This paper addresses the problem of estimating the current population mean in two occasion successive sampling. Utilizing the readily available information on two auxiliary ...

  19. Performance of a procedure for yield estimation in fruit orchards

    DEFF Research Database (Denmark)

    Aravena Zamora, Felipe; Potin, Camila; Wulfsohn, Dvora-Laio

    Early estimation of expected fruit tree yield is important for the market planning and for growers and exporters to plan for labour and boxes. Large variations in tree yield may be found, posing a challenge for accurate yield estimation. We evaluated a multilevel systematic sampling procedure...

  20. Assessment of the Maximal Split-Half Coefficient to Estimate Reliability

    Science.gov (United States)

    Thompson, Barry L.; Green, Samuel B.; Yang, Yanyun

    2010-01-01

    The maximal split-half coefficient is computed by calculating all possible split-half reliability estimates for a scale and then choosing the maximal value as the reliability estimate. Osburn compared the maximal split-half coefficient with 10 other internal consistency estimates of reliability and concluded that it yielded the most consistently…

  1. Reliability Estimation for Single-unit Ceramic Crown Restorations

    Science.gov (United States)

    Lekesiz, H.

    2014-01-01

    The objective of this study was to evaluate the potential of a survival prediction method for the assessment of ceramic dental restorations. For this purpose, fast-fracture and fatigue reliabilities for 2 bilayer (metal ceramic alloy core veneered with fluorapatite leucite glass-ceramic, d.Sign/d.Sign-67, by Ivoclar; glass-infiltrated alumina core veneered with feldspathic porcelain, VM7/In-Ceram Alumina, by Vita) and 3 monolithic (leucite-reinforced glass-ceramic, Empress, and ProCAD, by Ivoclar; lithium-disilicate glass-ceramic, Empress 2, by Ivoclar) single posterior crown restorations were predicted, and fatigue predictions were compared with the long-term clinical data presented in the literature. Both perfectly bonded and completely debonded cases were analyzed for evaluation of the influence of the adhesive/restoration bonding quality on estimations. Material constants and stress distributions required for predictions were calculated from biaxial tests and finite element analysis, respectively. Based on the predictions, In-Ceram Alumina presents the best fast-fracture resistance, and ProCAD presents a comparable resistance for perfect bonding; however, ProCAD shows a significant reduction of resistance in case of complete debonding. Nevertheless, it is still better than Empress and comparable with Empress 2. In-Ceram Alumina and d.Sign have the highest long-term reliability, with almost 100% survivability even after 10 years. When compared with clinical failure rates reported in the literature, predictions show a promising match with clinical data, and this indicates the soundness of the settings used in the proposed predictions. PMID:25048249

  2. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

    International Nuclear Information System (INIS)

    Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L.

    1995-10-01

    This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all of the 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed, and it found a statistically significant factor-of-two bias on the average

  3. The 'Own Children' fertility estimation procedure: a reappraisal.

    Science.gov (United States)

    Avery, Christopher; St Clair, Travis; Levin, Michael; Hill, Kenneth

    2013-07-01

    The Full Birth History has become the dominant source of estimates of fertility levels and trends for countries lacking complete birth registration. An alternative, the 'Own Children' method, derives fertility estimates from household age distributions, but is now rarely used, partly because of concerns about its accuracy. We compared the estimates from these two procedures by applying them to 56 recent Demographic and Health Surveys. On average, 'Own Children' estimates of recent total fertility rates are 3 per cent lower than birth-history estimates. Much of this difference stems from selection bias in the collection of birth histories: women with more children are more likely to be interviewed. We conclude that full birth histories overestimate total fertility, and that the 'Own Children' method gives estimates of total fertility that may better reflect overall national fertility. We recommend the routine application of the 'Own Children' method to census and household survey data to estimate fertility levels and trends.

  4. 1/f noise as a reliability estimation for solar panels

    Science.gov (United States)

    Alabedra, R.; Orsal, B.

    The purpose of this work is to study the 1/f noise from a forward-biased dark solar cell as a nondestructive reliability estimation technique for solar panels. It is shown that one cell with a given defect can be detected in a solar panel by low-frequency noise measurements in the dark. A real solar panel of 5 cells in parallel and 5 cells in series is tested by this method. The cells, intended for space application, are n(+)p monocrystalline silicon junctions with an area of 8 sq cm and a base resistivity of 10 ohm/cm. In the first part of this paper it is shown that the I-V and Rd = f(I) characteristics of one cell or of a panel are not modified when a small defect is introduced by a mechanical constraint. In the second part, theoretical results on the 1/f noise in a p-n junction under forward bias are recalled. It is shown that the noise of the cell with a defect is about 10 to 15 times higher than that of a good cell. If one good cell is replaced by a cell with a defect in the 5 x 5 panel, this leads to an increase of about 30 percent in the noise level of the panel.

  5. A rapid reliability estimation method for directed acyclic lifeline networks with statistically dependent components

    International Nuclear Information System (INIS)

    Kang, Won-Hee; Kliese, Alyce

    2014-01-01

    Lifeline networks, such as transportation, water supply, sewers, telecommunications, and electrical and gas networks, are essential elements for the economic and societal functions of urban areas, but their components are highly susceptible to natural or man-made hazards. In this context, it is essential to provide effective pre-disaster hazard mitigation strategies and prompt post-disaster risk management efforts based on rapid system reliability assessment. This paper proposes a rapid reliability estimation method for node-pair connectivity analysis of lifeline networks especially when the network components are statistically correlated. Recursive procedures are proposed to compound all network nodes until they become a single super node representing the connectivity between the origin and destination nodes. The proposed method is applied to numerical network examples and benchmark interconnected power and water networks in Memphis, Shelby County. The connectivity analysis results show the proposed method's reasonable accuracy and remarkable efficiency as compared to the Monte Carlo simulations
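
    For contrast with such recursive compounding, a direct Monte Carlo estimate of node-pair connectivity for a small directed network with independent component failures (the paper's method and its treatment of correlation are not reproduced here):

        import random

        # Directed edges with survival probabilities (hypothetical network)
        edges = {("s", "a"): 0.9, ("s", "b"): 0.9,
                 ("a", "b"): 0.99, ("a", "t"): 0.95, ("b", "t"): 0.95}

        def connected(up_edges, src="s", dst="t"):
            # Depth-first search over the surviving edges
            seen, stack = {src}, [src]
            while stack:
                u = stack.pop()
                for (i, j) in up_edges:
                    if i == u and j not in seen:
                        seen.add(j)
                        stack.append(j)
            return dst in seen

        rng = random.Random(5)
        trials = 100_000
        hits = sum(
            connected([e for e, p in edges.items() if rng.random() < p])
            for _ in range(trials))
        print(f"P(s->t connected) ~= {hits / trials:.4f}")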

  6. An automated background estimation procedure for gamma ray spectra

    International Nuclear Information System (INIS)

    Tervo, R.J.; Kennett, T.J.; Prestwich, W.V.

    1983-01-01

    An objective and simple method has been developed to estimate the background continuum in Ge gamma ray spectra. Requiring no special procedures, the method is readily automated. Based upon the inherent statistical properties of the experimental data itself, nodes which reflect background samples are located and used to produce an estimate of the continuum. A simple procedure to interpolate between nodes is reported, and a range of rather typical experimental data is presented. All information necessary to implement this technique is given, including the relevant properties of various factors involved in its development. (orig.)

  7. Generating human reliability estimates using expert judgment. Volume 1. Main report

    International Nuclear Information System (INIS)

    Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.

    1984-11-01

    The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessment (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 1 of this report provides a brief overview of the background of the project, the procedure for using psychological scaling techniques to generate HEP estimates, and conclusions from evaluation of the techniques. Results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. In addition, HEP estimates for 35 tasks related to boiling water reactors (BWRs) were obtained as part of the evaluation. These HEP estimates are also included in the report

  8. Reliability: How much is it worth? Beyond its estimation or prediction, the (net) present value of reliability

    International Nuclear Information System (INIS)

    Saleh, J.H.; Marais, K.

    2006-01-01

    In this article, we link an engineering concept, reliability, to a financial and managerial concept, net present value, by exploring the impact of a system's reliability on its revenue generation capability. The framework developed here for non-repairable systems quantitatively captures the value of reliability from a financial standpoint. We show that traditional present value calculations of engineering systems do not account for system reliability, thus overestimate a system's worth, and can therefore lead to flawed investment decisions. It is therefore important to involve reliability engineers upfront, before investment decisions are made in technical systems. In addition, the analyses developed here help designers identify the optimal level of reliability that maximizes a system's net present value: the financial value reliability provides to the system minus the cost to achieve this level of reliability. Although we recognize that there are numerous considerations driving the specification of an engineering system's reliability, we contend that the financial analysis of reliability developed here should be made available to decision-makers to support in part, or at least be factored into, the system reliability specification
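
    For the simplest non-repairable case, a constant failure rate λ and continuous discounting at rate r, the expected present value of a revenue stream c earned while the system survives reduces to c/(r + λ); the numeric sketch below (all figures hypothetical) shows how ignoring reliability inflates the valuation.

        c   = 1.2e6   # revenue rate while operating, $/year (hypothetical)
        r   = 0.08    # continuous discount rate, 1/year
        lam = 0.05    # constant failure rate, 1/year

        # E[PV] = integral of c * exp(-r*t) * R(t) dt with R(t) = exp(-lam*t)
        pv_reliability_aware = c / (r + lam)
        pv_traditional       = c / r          # implicitly assumes R(t) = 1

        print(f"PV with reliability:     ${pv_reliability_aware:,.0f}")
        print(f"PV ignoring reliability: ${pv_traditional:,.0f}")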

  9. Improving reliability of state estimation programming and computing suite based on analyzing a fault tree

    Directory of Open Access Journals (Sweden)

    Kolosok Irina

    2017-01-01

    Full Text Available Reliable information on current state parameters, obtained by processing measurements from the SCADA and WAMS data acquisition systems through methods of state estimation (SE), is a precondition for successful management of an electric power system (EPS). SCADA and WAMS systems themselves, like any technical systems, are subject to failures and faults that lead to distortion and loss of information. The SE procedure makes it possible to detect erroneous measurements and is therefore a barrier preventing distorted information from penetrating into control problems. At the same time, the programming and computing suite (PCS) implementing the SE functions may itself produce a wrong decision due to imperfections and errors in the software algorithms. In this study, we propose to use a fault tree to analyze the consequences of failures and faults in SCADA and WAMS and in the SE procedure itself. Based on the analysis of the obtained measurement information and of the SE results, we determine the fault tolerance level of the state estimation PCS, which characterizes its reliability.

  10. 40 CFR 98.415 - Procedures for estimating missing data.

    Science.gov (United States)

    2010-07-01

    40 CFR, Protection of Environment, Vol. 20 (2010-07-01): Procedures for estimating missing data. Section 98.415, ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR PROGRAMS (CONTINUED), MANDATORY GREENHOUSE GAS REPORTING, Suppliers of Industrial Greenhouse Gases § 98.415...

  11. 40 CFR 98.405 - Procedures for estimating missing data.

    Science.gov (United States)

    2010-07-01

    ... meter malfunctions), a substitute data value for the missing quantity measurement must be used in the... period for any reason, the reporter shall use either its delivering pipeline measurements or the default... § 98.405 Procedures for estimating missing data. (a) Whenever a quality-assured value of the quantity...

  12. Efficient Estimation of Extreme Non-linear Roll Motions using the First-order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    2007-01-01

    In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard...... frequency domain methods can be applied. To non-linear responses like the roll motion, standard methods like direct time domain simulations are not feasible due to the required computational time. However, the statistical distribution of non-linear ship responses can be estimated very accurately using...... the first-order reliability method (FORM), well-known from structural reliability problems. To illustrate the proposed procedure, the roll motion is modelled by a simplified non-linear procedure taking into account non-linear hydrodynamic damping, time-varying restoring and wave excitation moments...

  13. Estimation of the reliability of an interleaving PFC boost converter

    Directory of Open Access Journals (Sweden)

    Gulam Amer Sandepudi

    2010-01-01

    Full Text Available Reliability plays an important role in power supplies. For other electronic equipment, a certain failure mode, at least for a part of the total system, can often be tolerated without serious (critical) effects. For a power supply, however, no such condition can be accepted, since very high demands on its reliability must be achieved. At higher power levels, the continuous conduction mode (CCM) boost converter is the preferred topology for implementing a front end with PFC. As a result, significant efforts have been made to improve the performance of high-power boost converters. This paper is one such effort, aimed at improving the performance of the converter from the reliability point of view. In this paper, an interleaving boost power factor correction converter is simulated with a single switch in continuous conduction mode (CCM), discontinuous conduction mode (DCM) and critical conduction mode (CRM) under different output power ratings. Results for the converter are explored from the reliability point of view.

  15. A new modular procedure for industrial plant simulations and its reliable implementation

    International Nuclear Information System (INIS)

    Carcasci, C.; Marini, L.; Morini, B.; Porcelli, M.

    2016-01-01

    Modeling of industrial plants, and especially energy systems, has become increasingly important in industrial engineering, and the need for accurate information on their behavior has grown along with the complexity of the industrial processes. Consequently, accurate and flexible simulation tools became essential, yielding the development of modular codes. The aim of this work is to propose a new modular mathematical modeling for industrial plant simulation and its reliable numerical implementation. Regardless of their layout, a large class of plant configurations is modeled by a library of elementary parts; then the physical properties, compositions of the working fluid, and plant's performance are estimated. Each plant component is represented by equations modeling fundamental mechanical and thermodynamic laws and giving rise to a system of algebraic nonlinear equations; remarkably, suitable restrictions on the variables of such nonlinear equations are imposed to guarantee solutions of physical meaning. The proposed numerical procedure combines an outer iterative process which refines the plant's characteristic parameters and an inner one which solves the arising nonlinear systems and consists of a trust-region solver for bound-constrained nonlinear equalities. The new procedure has been validated by performing simulations against an existing modular tool on two compression train arrangements with both series and parallel-mounted compressors. - Highlights: • A numerical modular tool for industrial plants simulation is presented. • Mathematical modeling is thoroughly described. • Solution of the nonlinear system is performed by a trust-region Gauss–Newton solver. • A detailed explanation of the optimization solver named TRESNEI is provided. • Code flexibility and robustness are investigated through numerical simulations.
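
    The inner solve, bound-constrained nonlinear equations handled by a trust-region method, can be sketched with SciPy's trust-region reflective solver standing in for TRESNEI (the balance equations and bounds below are hypothetical stand-ins for a component model):

        import numpy as np
        from scipy.optimize import least_squares

        def residuals(x):
            mdot, t_out = x
            return [mdot * (t_out - 300.0) - 500.0,      # energy balance
                    t_out - 300.0 * (1.0 + 0.1 * mdot)]  # outlet-temperature relation

        # Bounds keep the solution physically meaningful
        sol = least_squares(residuals, x0=[1.0, 350.0],
                            bounds=([0.1, 300.0], [10.0, 600.0]),
                            method="trf")                # trust-region reflective
        print("mdot, T_out:", np.round(sol.x, 3), " residual norm:", sol.cost)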

  16. Reliability estimation for check valves and other components

    International Nuclear Information System (INIS)

    McElhaney, K.L.; Staunton, R.H.

    1996-01-01

    For years the nuclear industry has depended upon component operational reliability information compiled from reliability handbooks and other generic sources as well as private databases generated by recognized experts both within and outside the nuclear industry. Regrettably, these technical bases lacked the benefit of large-scale operational data and comprehensive data verification, and did not take into account the parameters and combinations of parameters that affect the determination of failure rates. This paper briefly examines the historic use of generic component reliability data, its sources, and its limitations. The concept of using a single failure rate for a particular component type is also examined. Particular emphasis is placed on check valves due to the information available on those components. The Appendix presents some of the results of the extensive analyses done by Oak Ridge National Laboratory (ORNL) on check valve performance

  17. Systems reliability analyses and risk analyses for the licencing procedure under atomic law

    International Nuclear Information System (INIS)

    Berning, A.; Spindler, H.

    1983-01-01

    For the licencing procedure under atomic law in accordance with Article 7 AtG, the nuclear power plant as a whole needs to be assessed, and the reliability of systems and plant components that are essential to safety is to be determined with probabilistic methods. This requirement follows from the safety criteria for nuclear power plants issued by the Federal Ministry of the Interior (BMI). Systems reliability studies and risk analyses used in licencing procedures under atomic law are identified. The emphasis is on licencing decisions, mainly for PWR-type reactors. Reactor Safety Commission (RSK) guidelines, examples of reasoning in legal proceedings and arguments put forth by objectors are also dealt with. Correlations between reliability analyses made by experts and licencing decisions are shown by means of examples. (orig./HP) [de

  18. A Survey of Software Reliability Modeling and Estimation

    Science.gov (United States)

    1983-09-01

    considered include: the Jelinski-Moranda model, the Geometric model, and Musa's model. A Monte Carlo study of the behavior of the least squares... Proceedings Number 261, 1979, pp. 34-1, 34-11. Sukert, Alan, and Goel, Amrit, "A Guidebook for Software Reliability Assessment," 1980
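
    A minimal sketch of fitting the Jelinski-Moranda model by maximum likelihood, profiling over the initial fault count N (the interfailure times are invented; for data without reliability growth the likelihood may have no finite maximum, hence the cap on N).

        import numpy as np

        def jm_fit(times, n_max=500):
            """Profile-likelihood fit of the Jelinski-Moranda model.

            times: interfailure times t_1..t_n. Returns (N_hat, phi_hat):
            estimated initial fault count and per-fault hazard. For a given N,
            the ML estimate of phi is closed-form, so only a 1-D search over
            the integer N >= n is needed.
            """
            t = np.asarray(times, dtype=float)
            n = len(t)
            best = (None, None, -np.inf)
            for N in range(n, n_max + 1):
                k = N - np.arange(n)            # N, N-1, ..., N-n+1 remaining faults
                phi = n / np.sum(k * t)         # dL/dphi = 0  =>  phi = n / sum(k_i t_i)
                loglik = n * np.log(phi) + np.sum(np.log(k)) - phi * np.sum(k * t)
                if loglik > best[2]:
                    best = (N, phi, loglik)
            return best[0], best[1]

        # Illustrative interfailure times (hours); lengthening gaps suggest growth.
        times = [7, 11, 8, 10, 15, 22, 20, 33, 41, 58]
        N_hat, phi_hat = jm_fit(times)
        print(f"Estimated initial faults N = {N_hat}, per-fault hazard phi = {phi_hat:.4f}")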

  19. Evaluation and reliability of bone histological age estimation methods

    African Journals Online (AJOL)

    Human age estimation at death plays a vital role in forensic anthropology and bioarchaeology. Researchers used morphological and histological methods to estimate human age from their skeletal remains. This paper discussed different histological methods that used human long bones and ribs to determine age ...

  20. Engineer’s estimate reliability and statistical characteristics of bids

    Directory of Open Access Journals (Sweden)

    Fariborz M. Tehrani

    2016-12-01

    Full Text Available The objective of this report is to provide a methodology for examining bids and evaluating the performance of engineer's estimates in capturing the true cost of projects. This study reviews the cost development for transportation projects, as well as two sources of uncertainty in a cost estimate: modeling errors and inherent variability. Sample projects are highway maintenance projects with a similar scope of work, size, and schedule. Statistical analysis of engineering estimates and bids examines the adaptability of statistical models for the sample projects. Further, the variation of engineering cost estimates from inception to implementation is presented and discussed for selected projects. Moreover, the applicability of extreme value theory is assessed for the available data. The results indicate that the performance of the engineer's estimate is best evaluated based on a trimmed average of bids, excluding discordant bids.
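
    A small sketch of the recommended benchmark, comparing the engineer's estimate against a trimmed average of bids so that discordant bids are excluded (all dollar figures hypothetical).

        import numpy as np
        from scipy.stats import trim_mean

        engineer_estimate = 1_000_000.0   # hypothetical engineer's estimate ($)
        bids = np.array([890_000, 940_000, 965_000, 1_020_000, 1_080_000, 1_600_000])

        # Trim 20% from each tail so a discordant (unbalanced or error-laden)
        # bid such as the 1.6M outlier does not distort the benchmark.
        benchmark = trim_mean(bids, proportiontocut=0.2)

        ratio = engineer_estimate / benchmark
        print(f"Trimmed average of bids: {benchmark:,.0f}")
        print(f"Estimate/benchmark ratio: {ratio:.3f}")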

  1. Reliability estimates for selected sensors in fusion applications

    International Nuclear Information System (INIS)

    Cadwallader, L.C.

    1996-09-01

    This report presents the results of a study to define several types of sensors in use, the qualitative reliability (failure modes) and quantitative reliability (average failure rates) for these types of process sensors. Temperature, pressure, flow, and level sensors are discussed for water coolant and for cryogenic coolants. The failure rates that have been found are useful for risk assessment and safety analysis. Repair times and calibration intervals are also given when found in the literature. All of these values can also be useful to plant operators and maintenance personnel. Designers may be able to make use of these data when planning systems. The final chapter in this report discusses failure rates for several types of personnel safety sensors, including ionizing radiation monitors, toxic and combustible gas detectors, humidity sensors, and magnetic field sensors. These data could be useful to industrial hygienists and other safety professionals when designing or auditing for personnel safety

  2. Sensitivity of Reliability Estimates in Partially Damaged RC Structures subject to Earthquakes, using Reduced Hysteretic Models

    DEFF Research Database (Denmark)

    Iwankiewicz, R.; Nielsen, Søren R. K.; Skjærbæk, P. S.

    The subject of the paper is the investigation of the sensitivity of structural reliability estimation by a reduced hysteretic model for a reinforced concrete frame under an earthquake excitation.

  3. Reliability and precision of pellet-group counts for estimating landscape-level deer density

    Science.gov (United States)

    David S. deCalesta

    2013-01-01

    This study provides hitherto unavailable methodology for reliably and precisely estimating deer density within forested landscapes, enabling quantitative rather than qualitative deer management. Reliability and precision of the deer pellet-group technique were evaluated in 1 small and 2 large forested landscapes. Density estimates, adjusted to reflect deer harvest and...

  4. Alternative Estimates of the Reliability of College Grade Point Averages. Professional File. Article 130, Spring 2013

    Science.gov (United States)

    Saupe, Joe L.; Eimers, Mardy T.

    2013-01-01

    The purpose of this paper is to explore differences in the reliabilities of cumulative college grade point averages (GPAs), estimated for unweighted and weighted, one-semester, 1-year, 2-year, and 4-year GPAs. Using cumulative GPAs for a freshman class at a major university, we estimate internal consistency (coefficient alpha) reliabilities for…
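
    A minimal sketch of the internal-consistency (coefficient alpha) computation, treating each semester GPA as an "item"; the simulated GPAs are purely illustrative.

        import numpy as np

        def cronbach_alpha(scores):
            """scores: (n_students, n_semesters) array of semester GPAs."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_vars = scores.var(axis=0, ddof=1)      # variance of each semester GPA
            total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed score
            return k / (k - 1) * (1 - item_vars.sum() / total_var)

        rng = np.random.default_rng(1)
        ability = rng.normal(3.0, 0.4, size=200)                     # latent "true" GPA
        semesters = ability[:, None] + rng.normal(0, 0.3, (200, 8))  # 8 noisy semester GPAs
        print(f"alpha = {cronbach_alpha(semesters):.3f}")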

  5. On estimation of reliability for pipe lines of heat power plants under cyclic loading

    International Nuclear Information System (INIS)

    Verezemskij, V.G.

    1986-01-01

    One of the possible methods of obtaining a quantitative reliability estimate for welded pipe lines of heat power plants under cyclic loading, due to heating-cooling and to vibration, is considered. The reliability estimate is carried out for the general case of loading by simultaneous cycles with different amplitudes and loading asymmetry. It is shown that scatter in the number of cycles to failure for the weld metal may perceptibly decrease the reliability of the welded pipe line.

  6. Reliability of single aliquot regenerative protocol (SAR) for dose estimation in quartz at different burial temperatures: A simulation study

    International Nuclear Information System (INIS)

    Koul, D.K.; Pagonis, V.; Patil, P.

    2016-01-01

    The single aliquot regenerative protocol (SAR) is a well-established technique for estimating naturally acquired radiation doses in quartz. This simulation work examines the reliability of the SAR protocol for samples which experienced different ambient temperatures in nature, in the range of −10 to 40 °C. The contribution of various experimental variables used in SAR protocols to the accuracy and precision of the method is simulated for different ambient temperatures. Specifically, the effects of paleo-dose, test dose, pre-heat temperature and cut-heat temperature on the accuracy of equivalent dose (ED) estimation are simulated using random combinations of the concentrations of traps and centers in a previously published comprehensive quartz model. The findings suggest that the ambient temperature has a significant bearing on the reliability of natural dose estimation using the SAR protocol, especially for ambient temperatures above 0 °C. The main source of these inaccuracies seems to be thermal sensitization of the quartz samples caused by the well-known thermal transfer of holes between luminescence centers in quartz. The simulations suggest that most of this inaccuracy in the dose estimation can be removed by delivering the laboratory doses in pulses (pulsed irradiation procedures). - Highlights: • Ambient temperature affects the reliability of SAR. • The protocol overestimates the dose with increasing burial temperature and burial time. • Elevated-temperature irradiation does not correct for these overestimations. • Inaccuracies in dose estimation can be removed by incorporating pulsed irradiation procedures.

  7. Design flood hydrograph estimation procedure for small and fully-ungauged basins

    Science.gov (United States)

    Grimaldi, S.; Petroselli, A.

    2013-12-01

    The Rational Formula is the most widely applied equation in practical hydrology due to its simplicity and its effective compromise between theory and data availability. Although the Rational Formula has several drawbacks, it is reliable and surprisingly accurate considering the paucity of input information. However, after more than a century, recent computational and theoretical progress, together with large-scale monitoring, makes it possible to suggest a more advanced yet still empirical procedure for estimating peak discharge in small and ungauged basins. This contribution describes an alternative empirical procedure (named EBA4SUB - Event Based Approach for Small and Ungauged Basins) based on the common modelling steps: design hyetograph, rainfall excess, and rainfall-runoff transformation. The proposed approach, specifically adapted to the fully-ungauged basin condition, provides a potentially better estimate of the peak discharge, a design hydrograph shape, and, most importantly, reduces the subjectivity of the hydrologist in its application.
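
    For reference, the Rational Formula that the paper takes as its starting point, written as a one-line function with the usual metric unit conversion (the runoff coefficient, intensity and area values are hypothetical).

        # Rational Formula: Q = C * i * A, here with i in mm/h and A in km^2,
        # so dividing by 3.6 yields Q in m^3/s.
        def rational_peak_discharge(C, i_mm_per_h, A_km2):
            return C * i_mm_per_h * A_km2 / 3.6

        # Hypothetical small ungauged basin: runoff coefficient 0.35,
        # 60 mm/h design rainfall intensity, 2.5 km^2 drainage area.
        Q_peak = rational_peak_discharge(0.35, 60.0, 2.5)
        print(f"Peak discharge: {Q_peak:.1f} m^3/s")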

  8. Averaging models: parameters estimation with the R-Average procedure

    Directory of Open Access Journals (Sweden)

    S. Noventa

    2010-01-01

    Full Text Available The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto & Vicentini, 2007) can be used to estimate the parameters of these models. By using multiple information criteria in the model selection procedure, R-Average allows for the identification of the best subset of parameters that account for the data. After a review of the general method, we present an implementation of the procedure in the framework of the R project, followed by some experiments using a Monte Carlo method.

  9. CONSIDERATIONS FOR THE TREATMENT OF COMPUTERIZED PROCEDURES IN HUMAN RELIABILITY ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; David I. Gertman

    2012-07-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room. Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  10. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES, PART TWO: APPLICABILITY OF CURRENT METHODS

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; David I. Gertman

    2012-10-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no U.S. nuclear power plant has implemented CPs in its main control room. Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  11. Approximate estimation of system reliability via fault trees

    International Nuclear Information System (INIS)

    Dutuit, Y.; Rauzy, A.

    2005-01-01

    In this article, we show how fault tree analysis, carried out by means of binary decision diagrams (BDD), is able to approximate the reliability of systems made of independent repairable components with good accuracy and efficiency. We consider four algorithms: the Murchland lower bound, the Barlow-Proschan lower bound, the Vesely full approximation and the Vesely asymptotic approximation. For each of these algorithms, we consider an implementation based on the classical minimal cut sets/rare events approach and another relying on the BDD technology. We present numerical results obtained with both approaches on various examples.
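
    A sketch of the classical minimal cut sets/rare events side of the comparison: the first-order approximation and the min-cut upper bound for system unavailability (the cut sets and component unavailabilities are invented; the BDD route computes the exact probability instead).

        from math import prod

        # Component unavailabilities (illustrative).
        q = {"A": 1e-3, "B": 2e-3, "C": 5e-4, "D": 1e-3}

        # Minimal cut sets of a hypothetical fault tree.
        cut_sets = [("A", "B"), ("A", "C"), ("B", "C", "D")]

        # Rare-event approximation: system unavailability ~ sum over minimal cut
        # sets of the product of component unavailabilities (first term of
        # inclusion-exclusion).
        Q_rare = sum(prod(q[c] for c in cs) for cs in cut_sets)

        # Min-cut upper bound, slightly tighter for larger q: 1 - prod(1 - Q_cs)
        Q_mcub = 1 - prod(1 - prod(q[c] for c in cs) for cs in cut_sets)

        print(f"Rare-event approximation: {Q_rare:.3e}")
        print(f"Min-cut upper bound:      {Q_mcub:.3e}")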

  12. Estimating anesthesia and surgical procedure times from medicare anesthesia claims.

    Science.gov (United States)

    Silber, Jeffrey H; Rosenbaum, Paul R; Zhang, Xuemei; Even-Shoshan, Orit

    2007-02-01

    Procedure times are important variables that often are included in studies of quality and efficiency. However, due to the need for costly chart review, most studies are limited to single-institution analyses. In this article, the authors describe how well the anesthesia claim from Medicare can estimate chart times. The authors abstracted information on time of induction and entrance to the recovery room ("anesthesia chart time") from the charts of 1,931 patients who underwent general and orthopedic surgical procedures in Pennsylvania. The authors then merged the associated bills from claims data supplied from Medicare (Part B data) that included a variable denoting the time in minutes for the anesthesia service. The authors also investigated the time from incision to closure ("surgical chart time") on a subset of 1,888 patients. Anesthesia claim time from Medicare was highly predictive of anesthesia chart time (Kendall's rank correlation tau = 0.85, P < 0.0001, median absolute error = 5.1 min) but somewhat less predictive of surgical chart time (Kendall's tau = 0.73, P < 0.0001, median absolute error = 13.8 min). When predicting chart time from Medicare bills, variables reflecting procedure type, comorbidities, and hospital type did not significantly improve the prediction, suggesting that errors in predicting the chart time from the anesthesia bill time are not related to these factors; however, the individual hospital did have some influence on these estimates. Anesthesia chart time can be well estimated using Medicare claims, thereby facilitating studies with vastly larger sample sizes and much lower costs of data collection.
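
    A small sketch reproducing the kind of agreement statistics reported above, Kendall's tau and the median absolute error, on synthetic chart/claim time pairs (the data generation is invented, not the Medicare data).

        import numpy as np
        from scipy.stats import kendalltau

        rng = np.random.default_rng(3)
        chart_min = rng.gamma(4.0, 15.0, size=100)            # "anesthesia chart time"
        claim_min = chart_min + rng.normal(0, 6.0, size=100)  # billed anesthesia time

        tau, p = kendalltau(claim_min, chart_min)
        mae = np.median(np.abs(claim_min - chart_min))
        print(f"Kendall tau = {tau:.2f} (p = {p:.1e}), median absolute error = {mae:.1f} min")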

  13. Simple and reliable procedure for the evaluation of short-term dynamic processes in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Popovic, D P

    1986-10-01

    An efficient approach is presented to the solution of the short-term dynamics model in power systems. It consists of an adequate algebraic treatment of the original system of nonlinear differential equations, using linearization, decomposition and Cauchy's formula. The simple difference equations obtained in this way are incorporated into a model of the electrical network, which is of a low order compared to the ones usually used. Newton's method is applied to the model formed in this way, which leads to a simple and reliable iterative procedure. The characteristics of the procedure developed are demonstrated on examples of transient stability analysis of real power systems. 12 refs.
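
    The core of such a procedure is a Newton iteration on the combined network model; a minimal generic sketch follows (the toy equations are invented, not the paper's model).

        import numpy as np

        def newton(F, J, x0, tol=1e-10, max_iter=50):
            """Basic Newton iteration for a nonlinear system F(x) = 0."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                dx = np.linalg.solve(J(x), -F(x))
                x += dx
                if np.linalg.norm(dx) < tol:
                    break
            return x

        # Toy two-bus "network" balance: quadratic flow equation plus linear constraint.
        F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]])
        J = lambda x: np.array([[2 * x[0], 1.0], [1.0, -1.0]])
        print(newton(F, J, [1.0, 1.0]))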

  14. Validity, reliability, feasibility, acceptability and educational impact of direct observation of procedural skills (DOPS).

    Science.gov (United States)

    Naeem, Naghma

    2013-01-01

    Direct observation of procedural skills (DOPS) is a new workplace assessment tool. The aim of this narrative review of literature is to summarize the available evidence about the validity, reliability, feasibility, acceptability and educational impact of DOPS. A PubMed database and Google search of the literature on DOPS published from January 2000 to January 2012 was conducted which yielded 30 articles. Thirteen articles were selected for full text reading and review. In the reviewed literature, DOPS was found to be a useful tool for assessment of procedural skills, but further research is required to prove its utility as a workplace based assessment instrument.

  15. Three procedures for estimating erosion from construction areas

    International Nuclear Information System (INIS)

    Abt, S.R.; Ruff, J.F.

    1978-01-01

    Erosion from many mining and construction sites can lead to serious environmental pollution problems. Therefore, erosion management plans must be developed so that the engineer may implement measures to control or eliminate excessive soil losses. To properly implement a management program, it is necessary to estimate potential soil losses from the time the project begins to beyond project completion. Three methodologies are presented which project the estimated soil losses due to sheet or rill erosion by water and are applicable to mining and construction areas. Furthermore, the three methods described are intended as indicators of the state of the art in water erosion prediction. The procedures herein do not account for gully erosion, snowmelt erosion, wind erosion, freeze-thaw erosion or extensive flooding.

  16. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being at par with conventional processes such as welding and casting, the primary reason being the unreliability of the process. While... of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single track formation experiments. Correlation coefficients are determined for process input parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established.

  17. Empirical Study of Travel Time Estimation and Reliability

    OpenAIRE

    Li, Ruimin; Chai, Huajun; Tang, Jin

    2013-01-01

    This paper explores the travel time distribution of different types of urban roads, the link and path average travel time, and variance estimation methods by analyzing the large-scale travel time dataset detected from automatic number plate readers installed throughout Beijing. The results show that the best-fitting travel time distribution for different road links in 15 min time intervals differs for different traffic congestion levels. The average travel time for all links on all days can b...

  18. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Science.gov (United States)

    Kanjilal, Oindrila; Manohar, C. S.

    2017-07-01

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
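
    A minimal sketch of the underlying importance-sampling idea for a linear limit state in standard normal space: sampling is recentred at the FORM design point and corrected by the likelihood ratio, which plays the role of the Girsanov weight (the limit state and beta are illustrative).

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        beta = 3.5                                # illustrative reliability index
        a = np.array([1.0, 1.0]) / np.sqrt(2.0)   # unit normal of the failure plane

        def g(u):
            # Linear limit state in standard normal space; failure when g < 0.
            return beta - u @ a

        u_star = beta * a                         # FORM design point

        n = 20_000
        u = rng.standard_normal((n, 2)) + u_star  # sample centred at the design point

        # Likelihood ratio of N(0, I) w.r.t. the shifted density N(u*, I);
        # this weight is the discrete analogue of the Girsanov correction.
        w = np.exp(-u @ u_star + 0.5 * u_star @ u_star)

        fail = g(u) < 0.0
        pf = np.mean(w * fail)
        se = np.std(w * fail, ddof=1) / np.sqrt(n)

        print(f"IS estimate : {pf:.3e} +/- {se:.1e}")
        print(f"Exact value : {norm.cdf(-beta):.3e}")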

  19. Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system

    International Nuclear Information System (INIS)

    Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo

    2000-01-01

    Based on analog Monte Carlo simulation, statistical Monte Carlo methods for the unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic element is given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced transitions Monte Carlo method, and the direct and weighted statistical estimation Monte Carlo methods are used to evaluate the unreliability of the same system. By comparison, the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest computational efficiency.

  20. Reliability assessment of a manual-based procedure towards learning curve modeling and fmea analysis

    Directory of Open Access Journals (Sweden)

    Gustavo Rech

    2013-03-01

    Full Text Available Separation procedures in drug Distribution Centers (DCs) are manual activities prone to failures such as shipping exchanged, expired or broken drugs to the customer. Two interventions seem promising for improving the reliability of the separation procedure: (i) selection and allocation of appropriate operators to the procedure, and (ii) analysis of potential failure modes incurred by the selected operators. This article integrates Learning Curves (LC) and FMEA (Failure Mode and Effect Analysis) with the aim of reducing the occurrence of failures in the manual separation of a drug DC. LC parameters enable the generation of an index to identify the operators recommended to perform the procedures. FMEA is then applied to the separation procedure carried out by the selected operators in order to identify failure modes. The traditional FMEA severity index is also split into two sub-indexes, related to financial issues and to damage to the company's image, in order to characterize the severity of failures. When applied to a drug DC, the proposed method significantly reduced the frequency and severity of failures in the separation procedure.
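
    A minimal sketch of the FMEA scoring step, ranking failure modes of the separation procedure by risk priority number RPN = S x O x D (the modes and scores are hypothetical).

        # Hypothetical failure modes for the manual drug separation procedure,
        # each scored 1-10 for severity (S), occurrence (O) and detection (D).
        failure_modes = [
            {"mode": "wrong drug picked",   "S": 9, "O": 4, "D": 5},
            {"mode": "expired drug picked", "S": 8, "O": 3, "D": 4},
            {"mode": "broken package",      "S": 5, "O": 5, "D": 3},
        ]

        for fm in failure_modes:
            fm["RPN"] = fm["S"] * fm["O"] * fm["D"]   # risk priority number

        for fm in sorted(failure_modes, key=lambda f: f["RPN"], reverse=True):
            print(f'{fm["mode"]:<22} RPN = {fm["RPN"]}')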

  1. Reliability

    OpenAIRE

    Condon, David; Revelle, William

    2017-01-01

    Separating the signal in a test from the irrelevant noise is a challenge for all measurement. Low test reliability limits test validity, attenuates important relationships, and can lead to regression artifacts. Multiple approaches to the assessment and improvement of reliability are discussed. The advantages and disadvantages of several different approaches to reliability are considered. Practical advice on how to assess reliability using open source software is provided.

  2. Estimating the Parameters of Software Reliability Growth Models Using the Grey Wolf Optimization Algorithm

    OpenAIRE

    Alaa F. Sheta; Amal Abdel-Raouf

    2016-01-01

    In this age of technology, building quality software is essential to competing in the business market. One of the major principles required of any quality business software product is reliability. Estimating software reliability early during the software development life cycle saves time and money, as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of...

  3. Fault-tolerant embedded system design and optimization considering reliability estimation uncertainty

    International Nuclear Information System (INIS)

    Wattanapongskorn, Naruemon; Coit, David W.

    2007-01-01

    In this paper, we model embedded system design and optimization, considering component redundancy and uncertainty in the component reliability estimates. The systems being studied consist of software embedded in associated hardware components. Very often, component reliability values are not known exactly. Therefore, for reliability analysis studies and system optimization, it is meaningful to consider component reliability estimates as random variables with associated estimation uncertainty. In this new research, the system design process is formulated as a multiple-objective optimization problem to maximize an estimate of system reliability, and also, to minimize the variance of the reliability estimate. The two objectives are combined by penalizing the variance for prospective solutions. The two most common fault-tolerant embedded system architectures, N-Version Programming and Recovery Block, are considered as strategies to improve system reliability by providing system redundancy. Four distinct models are presented to demonstrate the proposed optimization techniques with or without redundancy. For many design problems, multiple functionally equivalent software versions have failure correlation even if they have been independently developed. The failure correlation may result from faults in the software specification, faults from a voting algorithm, and/or related faults from any two software versions. Our approach considers this correlation in formulating practical optimization models. Genetic algorithms with a dynamic penalty function are applied in solving this optimization problem, and reasonable and interesting results are obtained and discussed

  4. Monte Carlo estimation for pediatric barium meal procedures

    International Nuclear Information System (INIS)

    Filipov, D.; Schelin, H.R.; Denyak, V.; Legnani, A.; Ledesma, J.A.; Paschuk, S.A.; Sauzen, J.; Yagui, A.; Hoff, G.; Khoury, H.J.

    2015-01-01

    Fluoroscopic barium meal (BM) series involve an X-ray examination of the esophagus, stomach, and duodenum using a contrast medium, barium sulfate (BaSO4). They are widely used to observe digestive functions or to diagnose abnormalities such as ulcers; tumors; inflammation of the esophagus, stomach, and duodenum; malrotations; vascular rings; and gastroesophageal reflux disease (a common ailment in children). However, this procedure uses long fluoroscopy times and multiple radiographs, resulting in high effective doses to pediatric patients, whose radiosensitivity and life expectancy are higher than those of adults. Based on those data, the aims of the current study are to determine the P_KA (kerma-area product) values on the patient chest area and the effective doses to 5- and 10-year-old children. Thirty-seven pediatric patients were studied and stratified into two age groups: 5 and 10 years old. For each procedure, the following data were recorded: sex, age and upper chest thickness of the patients; technical parameters of the procedure (kV, fluoroscopy time and number of radiographs); distances (focus-detector and focus-table); and field size on the examination table. Three pairs of LiF:Mg,Ti thermoluminescent dosimeters were positioned at the center of the child's sternum. The upper chest thickness was then subtracted from the focus-table distance to obtain the focus-patient distance. Using the field size on the table and applying similar-triangles concepts, the field size on the patient was obtained, which was multiplied by the mean kerma (from the dosimeters) so that P_KA could be determined. To estimate the effective dose, P_KA and the technical parameters of the procedure (kV, total filtration, focus-detector distance and field size on the patient) were entered into a Monte Carlo simulation software. The resulting P_KA values and effective doses were higher than in the studies used for comparison, which shows the importance of an

  5. Reliability/Cost Evaluation on Power System connected with Wind Power for the Reserve Estimation

    DEFF Research Database (Denmark)

    Lee, Go-Eun; Cha, Seung-Tae; Shin, Je-Seok

    2012-01-01

    Wind power is ideally a renewable energy with no fuel cost, but it carries a risk of reducing the reliability of the whole system because of the uncertainty of its output. If the reserve of the system is increased, the reliability of the system may be improved; however, the cost would be increased. Therefore, the reserve needs to be estimated considering the trade-off between reliability and economic aspects. This paper suggests a methodology to estimate the appropriate reserve when wind power is connected to the power system. As a case study, when wind power is connected to the power system of Korea, the effects...

  6. A procedure to obtain reliable pair distribution functions of non-crystalline materials from diffraction data

    International Nuclear Information System (INIS)

    Hansen, F.Y.; Carneiro, K.

    1977-01-01

    A simple numerical method, which unifies the calculation of structure factors from X-ray or neutron diffraction data with the calculation of reliable pair distribution functions, is described. The objective of the method is to eliminate systematic errors in the normalizations and corrections of the intensity data, and to provide measures for elimination of truncation errors without losing information about the structure. This is done through an iterative procedure, which is easy to program for computers. The applications to amorphous selenium and diatomic liquids are briefly reviewed. (Auth.)

  7. An adaptive neuro fuzzy model for estimating the reliability of component-based software systems

    Directory of Open Access Journals (Sweden)

    Kirti Tyagi

    2014-01-01

    Full Text Available Although many algorithms and techniques have been developed for estimating the reliability of component-based software systems (CBSSs), much more research is needed. Accurate estimation of the reliability of a CBSS is difficult because it depends on two factors: component reliability and glue code reliability. Moreover, reliability is a real-world phenomenon with many associated real-time problems. Soft computing techniques can help to solve problems whose solutions are uncertain or unpredictable. A number of soft computing approaches for estimating CBSS reliability have been proposed. These techniques learn from the past and capture existing patterns in data. The two basic elements of soft computing are neural networks and fuzzy logic. In this paper, we propose a model for estimating CBSS reliability, known as an adaptive neuro fuzzy inference system (ANFIS), that is based on these two basic elements of soft computing, and we compare its performance with that of a plain FIS (fuzzy inference system) for different data sets.

  8. Reliance on and Reliability of the Engineer’s Estimate in Heavy Civil Projects

    Directory of Open Access Journals (Sweden)

    George Okere

    2017-06-01

    Full Text Available To the contractor, the engineer's estimate is the target number to aim for, and the basis for a contractor to evaluate the accuracy of their estimate. To the owner, the engineer's estimate is the basis for funding, evaluation of bids, and prediction of project costs. As such, the engineer's estimate is the benchmark. This research sought to investigate the reliance on, and the reliability of, the engineer's estimate in heavy civil cost estimating. The research objective was to characterize the engineer's estimate and allow owners and contractors to re-evaluate or affirm their reliance on it. A literature review was conducted to understand the reliance on the engineer's estimate, and secondary data from the Washington State Department of Transportation were used to investigate the reliability of the engineer's estimate. The findings show the need for practitioners to re-evaluate their reliance on the engineer's estimate. The empirical data showed that, within various contexts, the engineer's estimate fell outside the expected accuracy range of the low bids or the cost to complete projects. The study recommends direct tracking of costs by project owners while projects are under construction, the use of a second estimate to improve the accuracy of estimates, and use of the cost estimating practices found in highly reputable construction companies.

  9. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.

  10. Reliance on and Reliability of the Engineer’s Estimate in Heavy Civil Projects

    OpenAIRE

    Okere, George

    2017-01-01

    To the contractor, the engineer’s estimate is the target number to aim for, and the basis for a contractor to evaluate the accuracy of their estimate. To the owner, the engineer’s estimate is the basis for funding, evaluation of bids, and for predicting project costs. As such the engineer’s estimate is the benchmark. This research sought to investigate the reliance on, and the reliability of the engineer’s estimate in heavy civil cost estimate. The research objective was to characterize the e...

  11. An Energy-Based Limit State Function for Estimation of Structural Reliability in Shock Environments

    Directory of Open Access Journals (Sweden)

    Michael A. Guthrie

    2013-01-01

    Full Text Available A limit state function is developed for the estimation of structural reliability in shock environments. This limit state function uses peak modal strain energies to characterize environmental severity and modal strain energies at failure to characterize structural capacity. The Hasofer-Lind reliability index is briefly reviewed and its computation for the energy-based limit state function is discussed. Applications to two-degree-of-freedom mass-spring systems and to a simple finite element model are considered. For these examples, computation of the reliability index requires little effort beyond a modal analysis, but still accounts for relevant uncertainties in both the structure and the environment. For both examples, the reliability index is observed to agree well with the results of Monte Carlo analysis. In situations where fast, qualitative comparison of several candidate designs is required, the reliability index based on the proposed limit state function provides an attractive metric which can be used to compare and control reliability.
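
    For the special case of a linear limit state g = capacity - demand with independent normal variables, the Hasofer-Lind index reduces to a closed form; a minimal sketch with invented strain-energy statistics follows.

        from math import sqrt
        from scipy.stats import norm

        # Capacity/demand in energy terms: g = E_cap - E_dem, both assumed normal.
        mu_cap, sd_cap = 120.0, 15.0   # modal strain energy at failure (illustrative units)
        mu_dem, sd_dem = 70.0, 20.0    # peak modal strain energy of the environment

        # For a linear limit state with independent normals, the Hasofer-Lind
        # index reduces to the classical Cornell form:
        beta = (mu_cap - mu_dem) / sqrt(sd_cap**2 + sd_dem**2)
        pf = norm.cdf(-beta)

        print(f"beta = {beta:.2f}, Pf = {pf:.2e}")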

  12. Root biomass in cereals, catch crops and weeds can be reliably estimated without considering aboveground biomass

    DEFF Research Database (Denmark)

    Hu, Teng; Sørensen, Peter; Wahlström, Ellen Margrethe

    2018-01-01

    and management factors may affect this allometric relationship making such estimates uncertain and biased. Therefore, we aimed to explore how root biomass for typical cereal crops, catch crops and weeds could most reliably be estimated. Published and unpublished data on aboveground and root biomass (corrected...

  13. Can dendrochronology procedures estimate historical Tree Water Footprint?

    Science.gov (United States)

    Fernandes, Tarcísio J. G.; Del Campo, Antonio D.; Molina, Antonio J.

    2013-04-01

    Whole-tree estimates of water use are becoming increasingly important in forest science, and forest scientists have long sought to develop reliable techniques for estimating tree water use. Accurately determining or estimating the quantity of water transpired by trees and forests is therefore important and can be used to determine the "green" water footprint. The use of dendrochronology is relatively common in the study of the effects and interactions of growth and climatic variables, but few studies deal with its relationship to the water footprint. The main objective of this study is to determine the historical tree water use in a planted stand by dendrochronological approaches. The study was performed in south-eastern Spain, in an area covered by 50-60 year old Pinus halepensis Mill. plantations with high tree density (ca. 1288/ha) due to low forest management. The experimental set-up consisted of two plots (30x30 m), one corresponding to a thinning treatment performed in 2008 (t10) and the other thinned in 1998 (t1), to assess the mid-term effects of thinning. One year after thinning, four representative trees were selected in each plot to measure transpiration with heat pulse sensors (sap flow velocity, vs). The accumulated daily values of transpiration (L day-1) were estimated by multiplying the values of vs by the sapwood area of each selected tree. After the transpiration measurements, two cores per tree were taken for establishing the tree-ring chronologies. The cores were prepared, their ring widths were measured and standardised as a basal area increment index (BAI-i) following usual dendrochronological methods. The dendrochronological analyses showed a general variability in ring width during the initial growth (15 years), while in the following years the ring widths were very small, conditioned by climate. The year after thinning (1999 or 2009), all trees in the treatments showed significant increases in ring width. The average vs for t1 and t10 were 3.59 cm h-1 and 1.95 cm h-1, and

  14. Parameter estimation of component reliability models in PSA model of Krsko NPP

    International Nuclear Information System (INIS)

    Jordan Cizelj, R.; Vrbanic, I.

    2001-01-01

    In the paper, the uncertainty analysis of component reliability models for independent failures is shown. The present approach to parameter estimation of component reliability models in NPP Krsko is presented. Mathematical approaches for different types of uncertainty analyses are introduced and used in accordance with predefined requirements. Results of the uncertainty analyses are shown in an example for time-related components. Bayesian estimation with numerical computation of the posterior, which can be approximated by an appropriate probability distribution (in this paper, the lognormal distribution), proved to be the most appropriate uncertainty analysis. (author)
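
    A minimal sketch of Bayesian estimation with a numerically computed posterior for a failure rate, combining a lognormal prior with a Poisson likelihood on a grid (the prior parameters and plant evidence are invented, not the Krsko values).

        import numpy as np
        from scipy.integrate import trapezoid
        from scipy.stats import lognorm, poisson

        # Lognormal prior on the failure rate: median and error factor are
        # illustrative placeholders.
        median, ef = 1e-5, 3.0
        sigma = np.log(ef) / 1.645          # EF = 95th percentile / median

        lam = np.logspace(-7, -3, 2000)     # failure-rate grid, 1/h
        prior = lognorm.pdf(lam, s=sigma, scale=median)

        # Hypothetical plant evidence: k failures observed in T operating hours.
        k, T = 2, 4.0e5
        likelihood = poisson.pmf(k, lam * T)

        posterior = prior * likelihood
        posterior /= trapezoid(posterior, lam)   # numerical normalization

        post_mean = trapezoid(lam * posterior, lam)
        print(f"Posterior mean failure rate: {post_mean:.2e} per hour")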

  15. How Many Sleep Diary Entries Are Needed to Reliably Estimate Adolescent Sleep?

    Science.gov (United States)

    Arora, Teresa; Gradisar, Michael; Taheri, Shahrad; Carskadon, Mary A.

    2017-01-01

    Abstract Study Objectives: To investigate (1) how many nights of sleep diary entries are required for reliable estimates of five sleep-related outcomes (bedtime, wake time, sleep onset latency [SOL], sleep duration, and wake after sleep onset [WASO]) and (2) the test–retest reliability of sleep diary estimates of school night sleep across 12 weeks. Methods: Data were drawn from four adolescent samples (Australia [n = 385], Qatar [n = 245], United Kingdom [n = 770], and United States [n = 366]), who provided 1766 eligible sleep diary weeks for reliability analyses. We performed reliability analyses for each cohort using complete data (7 days), one to five school nights, and one to two weekend nights. We also performed test–retest reliability analyses on 12-week sleep diary data available from a subgroup of 55 US adolescents. Results: Intraclass correlation coefficients for bedtime, SOL, and sleep duration indicated good-to-excellent reliability from five weekday nights of sleep diary entries across all adolescent cohorts. Four school nights were sufficient for wake times in the Australian and UK samples, but not the US or Qatari samples. Only Australian adolescents showed good reliability for two weekend nights of bedtime reports; estimates of SOL were adequate for UK adolescents based on two weekend nights. WASO was not reliably estimated using 1 week of sleep diaries. We observed excellent test–retest reliability across 12 weeks of sleep diary data in a subsample of US adolescents. Conclusion: We recommend that at least five weekday nights of sleep diary entries be made when studying adolescent bedtimes, SOL, and sleep duration. Adolescent sleep patterns were stable across 12 consecutive school weeks. PMID:28199718

  16. Estimating cellular parameters through optimization procedures: elementary principles and applications

    Directory of Open Access Journals (Sweden)

    Akatsuki eKimura

    2015-03-01

    Full Text Available Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize the likelihood. A (local) maximum of the likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus obtain mechanistic insights into phenomena of interest.
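
    A minimal sketch of the SSE-minimization step with a gradient-based local search, fitting a two-parameter decay model to synthetic data (the model and data are illustrative).

        import numpy as np
        from scipy.optimize import minimize

        # Synthetic "measurements" of an exponential decay y = A * exp(-k t) + noise.
        rng = np.random.default_rng(42)
        t = np.linspace(0, 10, 50)
        y_obs = 2.0 * np.exp(-0.5 * t) + rng.normal(0, 0.05, t.size)

        def sse(params):
            A, k = params
            y_model = A * np.exp(-k * t)
            return np.sum((y_obs - y_model) ** 2)   # sum of squared errors

        # Gradient-based local search from an initial guess; a stochastic restart
        # loop (not shown) would guard against local minima.
        res = minimize(sse, x0=[1.0, 1.0], method="BFGS")
        print("Estimated A, k:", res.x)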

  17. A Data-Driven Reliability Estimation Approach for Phased-Mission Systems

    Directory of Open Access Journals (Sweden)

    Hua-Feng He

    2014-01-01

    Full Text Available We attempt to address the issues associated with reliability estimation for phased-mission systems (PMS) and present a novel data-driven approach to reliability estimation for PMS using condition monitoring information and degradation data of such systems under dynamic operating scenarios. In this sense, this paper differs from existing methods, which consider only the static scenario without using real-time information and aim to estimate the reliability for a population rather than for an individual. In the presented approach, to establish a linkage between the historical data and real-time information of the individual PMS, we adopt a stochastic filtering model to model the phase duration and obtain an updated estimate of the mission time by Bayes' law at each phase. Meanwhile, the lifetime of the PMS is estimated from degradation data, which are modeled by an adaptive Brownian motion. As such, the mission reliability can be obtained in real time through the estimated distribution of the mission time in conjunction with the estimated lifetime distribution. We demonstrate the usefulness of the developed approach via a numerical example.

  18. On the q-Weibull distribution for reliability applications: An adaptive hybrid artificial bee colony algorithm for parameter estimation

    International Nuclear Information System (INIS)

    Xu, Meng; Droguett, Enrique López; Lins, Isis Didier; Chagas Moura, Márcio das

    2017-01-01

    The q-Weibull model is based on the Tsallis non-extensive entropy and is able to model various behaviors of the hazard rate function, including bathtub curves, using a single set of parameters. Despite its flexibility, the q-Weibull has not been widely used in reliability applications, partly because of its complicated parameter estimation. In this work, the parameters of the q-Weibull are estimated by the maximum likelihood (ML) method. Due to the intricate system of nonlinear equations, derivative-based optimization methods may fail to converge. Thus, the heuristic optimization method of artificial bee colony (ABC) is used instead. To deal with the slow convergence of ABC, an adaptive hybrid ABC (AHABC) algorithm is proposed that dynamically combines the Nelder-Mead simplex search method with ABC for the ML estimation of the q-Weibull parameters. Interval estimates for the q-Weibull parameters, including confidence intervals based on ML asymptotic theory and on bootstrap methods, are also developed. The AHABC is validated via numerical experiments involving the q-Weibull ML for reliability applications, and results show that it produces faster and more accurate convergence when compared to ABC and similar approaches. The estimation procedure is applied to real reliability failure data characterized by a bathtub-shaped hazard rate. - Highlights: • Development of an Adaptive Hybrid ABC (AHABC) algorithm for the q-Weibull distribution. • AHABC combines the local Nelder-Mead simplex method with ABC to enhance local search. • AHABC efficiently finds the optimal solution for the q-Weibull ML problem. • AHABC outperforms ABC and self-adaptive hybrid ABC in accuracy and convergence speed. • Useful model for reliability data with non-monotonic hazard rate.

  19. Metrological reliability of the calibration procedure in terms of air kerma using the ionization chamber NE2575

    International Nuclear Information System (INIS)

    Guimaraes, Margarete Cristina; Silva, Teogenes Augusto da; Rosado, Paulo H.G.

    2016-01-01

    Metrology laboratories are expected to provide the X-radiation beam qualities established by international standardization organizations for performing the calibration and testing of dosimeters. Reliable and traceable standard dosimeters should be used in the calibration procedure. The aim of this work was to study the reliability of the NE 2575 ionization chamber used as the standard dosimeter for the air kerma calibration procedure adopted in the CDTN Calibration Laboratory. (author)

  20. The validity and reliability of value-added and target-setting procedures with special reference to Key Stage 3

    OpenAIRE

    Moody, Ian Robin

    2003-01-01

    The validity of value-added systems of measurement is crucially dependent upon there being a demonstrably unambiguous relationship between the so-called baseline, or intake measures, and any subsequent measure of performance at a later stage. The reliability of such procedures is dependent on the relationships between these two measures being relatively stable over time. A number of questions arise with regard to both the validity and reliability of value-added procedures at any level in educ...

  1. Bayesian and Classical Estimation of Stress-Strength Reliability for Inverse Weibull Lifetime Models

    Directory of Open Access Journals (Sweden)

    Qixuan Bi

    2017-06-01

    Full Text Available In this paper, we consider the problem of estimating stress-strength reliability for inverse Weibull lifetime models having the same shape parameter but different scale parameters. We obtain the maximum likelihood estimator and its asymptotic distribution. Since the classical estimator does not have an explicit form, we propose an approximate maximum likelihood estimator. The asymptotic confidence interval and two bootstrap intervals are obtained. Using the Gibbs sampling technique, the Bayesian estimator and the corresponding credible interval are obtained. The Metropolis-Hastings algorithm is used to generate random variates. Monte Carlo simulations are conducted to compare the proposed methods. An analysis of a real dataset is performed.
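
    A small sketch of the quantity being estimated, R = P(Y < X) for inverse Weibull stress Y and strength X with a common shape, checked against the closed form that equal shapes admit (the parameter values are invented).

        import numpy as np

        rng = np.random.default_rng(7)
        beta = 2.0                # common shape parameter
        lam_x, lam_y = 3.0, 2.0   # scales of strength X and stress Y (illustrative)

        def rvs_inv_weibull(lam, beta, size, rng):
            # If U ~ Uniform(0,1), then lam * (-log U)**(-1/beta) is inverse Weibull.
            u = rng.uniform(size=size)
            return lam * (-np.log(u)) ** (-1.0 / beta)

        n = 200_000
        x = rvs_inv_weibull(lam_x, beta, n, rng)   # strength
        y = rvs_inv_weibull(lam_y, beta, n, rng)   # stress
        r_mc = np.mean(y < x)

        # Equal shapes admit the closed form R = lam_x**beta / (lam_x**beta + lam_y**beta).
        r_exact = lam_x**beta / (lam_x**beta + lam_y**beta)
        print(f"Monte Carlo R = {r_mc:.4f}, closed form R = {r_exact:.4f}")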

  2. Establishing Reliable Cognitive Change in Children with Epilepsy: The Procedures and Results for a Sample with Epilepsy

    Science.gov (United States)

    van Iterson, Loretta; Augustijn, Paul B.; de Jong, Peter F.; van der Leij, Aryan

    2013-01-01

    The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a reference sample. Then, these RCIs were applied to a…

  3. Establishing reliable cognitive change in children with epilepsy: The procedures and results for a sample with epilepsy

    NARCIS (Netherlands)

    van Iterson, L.; Augustijn, P.B.; de Jong, P.F.; van der Leij, A.

    2013-01-01

    The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a

  4. Estimating Between-Person and Within-Person Subscore Reliability with Profile Analysis.

    Science.gov (United States)

    Bulut, Okan; Davison, Mark L; Rodriguez, Michael C

    2017-01-01

    Subscores are of increasing interest in educational and psychological testing due to their diagnostic function for evaluating examinees' strengths and weaknesses within particular domains of knowledge. Previous studies about the utility of subscores have mostly focused on the overall reliability of individual subscores and ignored the fact that subscores should be distinct and have added value over the total score. This study introduces a profile reliability approach that partitions the overall subscore reliability into within-person and between-person subscore reliability. The estimation of between-person reliability and within-person reliability coefficients is demonstrated using subscores from number-correct scoring, unidimensional and multidimensional item response theory scoring, and augmented scoring approaches via a simulation study and a real data study. The effects of various testing conditions, such as subtest length, correlations among subscores, and the number of subtests, are examined. Results indicate that there is a substantial trade-off between within-person and between-person reliability of subscores. Profile reliability coefficients can be useful in determining the extent to which subscores provide distinct and reliable information under various testing conditions.

  5. Integrated Reliability Estimation of a Nuclear Maintenance Robot including a Software

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Heung Seop; Kim, Jae Hee; Jeong, Kyung Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    Conventional reliability estimation techniques such as Fault Tree Analysis (FTA), Reliability Block Diagrams (RBD), Markov models, and Event Tree Analysis (ETA) have been widely used and approved in some industries. However, there are limitations when using them for complicated robot systems that include software, such as intelligent reactor inspection robots. Therefore, an expert's judgment plays an important role in estimating the reliability of a complicated system in practice, because experts can deal with diverse evidence related to reliability and then perform an inference based on it. The method proposed in this paper combines qualitative and quantitative evidence and performs inference like an expert. Furthermore, it does this work in a formal and quantitative way, unlike human experts, through the benefits of Bayesian Nets (BNs).

  6. Reliability and validity of food portion size estimation from images using manual flexible digital virtual meshes

    Science.gov (United States)

    The eButton takes frontal images at 4 second intervals throughout the day. A three-dimensional (3D) manually administered wire mesh procedure has been developed to quantify portion sizes from the two-dimensional (2D) images. This paper reports a test of the interrater reliability and validity of use...

  7. "A Comparison of Consensus, Consistency, and Measurement Approaches to Estimating Interrater Reliability"

    OpenAIRE

    Steven E. Stemler

    2004-01-01

    This article argues that the general practice of describing interrater reliability as a single, unified concept is at best imprecise, and at worst potentially misleading. Rather than representing a single concept, different statistical methods for computing interrater reliability can be more accurately classified into one of three categories based upon the underlying goals of analysis. The three general categories introduced and described in this paper are: 1) consensus estimates, 2) cons...
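
    A minimal sketch of a consensus estimate, Cohen's kappa for two raters, which corrects raw percent agreement for chance (the ratings are invented).

        import numpy as np

        def cohens_kappa(r1, r2):
            """Consensus-type interrater reliability for two raters (nominal codes)."""
            r1, r2 = np.asarray(r1), np.asarray(r2)
            cats = np.union1d(r1, r2)
            po = np.mean(r1 == r2)                                       # observed agreement
            pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)  # chance agreement
            return (po - pe) / (1 - pe)

        rater1 = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]
        rater2 = [1, 2, 3, 3, 1, 2, 3, 2, 1, 2]
        print(f"kappa = {cohens_kappa(rater1, rater2):.3f}")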

  8. Reliability estimation for multiunit nuclear and fossil-fired industrial energy systems

    International Nuclear Information System (INIS)

    Sullivan, W.G.; Wilson, J.V.; Klepper, O.H.

    1977-01-01

    As petroleum-based fuels grow increasingly scarce and costly, nuclear energy may become an important alternative source of industrial energy. Initial applications would most likely include a mix of fossil-fired and nuclear sources of process energy. A means for determining the overall reliability of these mixed systems is a fundamental aspect of demonstrating their feasibility to potential industrial users. Reliability data from nuclear and fossil-fired plants are presented, and several methods of applying these data for calculating the reliability of reasonably complex industrial energy supply systems are given. Reliability estimates made under a number of simplifying assumptions indicate that multiple nuclear units or a combination of nuclear and fossil-fired plants could provide adequate reliability to meet industrial requirements for continuity of service

  9. Reliability estimation system: its application to the nuclear geophysical sampling of ore deposits

    International Nuclear Information System (INIS)

    Khaykovich, I.M.; Savosin, S.I.

    1992-01-01

    The reliability estimation system accepted in the Soviet Union for sampling data in nuclear geophysics is based on unique requirements in metrology and methodology. It involves estimating characteristic errors in calibration, as well as errors in measurement and interpretation. This paper describes the methods of estimating the levels of systematic and random errors at each stage of the problem. The data of nuclear geophysics sampling are considered to be reliable if there are no statistically significant, systematic differences between ore intervals determined by this method and those determined by geological control, or by other methods of sampling whose reliability has been verified, and if the difference between the random errors is statistically insignificant. The system allows one to obtain information on the parameters of ore intervals with a guaranteed random error and without systematic errors. (Author)

  10. Stochastic models and reliability parameter estimation applicable to nuclear power plant safety

    International Nuclear Information System (INIS)

    Mitra, S.P.

    1979-01-01

    A set of stochastic models and related estimation schemes for reliability parameters are developed. The models are applicable for evaluating the reliability of nuclear power plant systems. Reliability information is extracted from model parameters which are estimated from the type and nature of failure data that is generally available or could be compiled in nuclear power plants. Principally, two aspects of nuclear power plant reliability have been investigated: (1) the statistical treatment of in-plant component and system failure data; (2) the analysis and evaluation of common mode failures. The model inputs are failure data which have been classified as either the time type or the demand type of failure data. Failures of components and systems in nuclear power plants are, in general, rare events. This gives rise to sparse failure data. Estimation schemes for treating sparse data, whenever necessary, have been considered. The following five problems have been studied: 1) distribution of sparse failure rate component data; 2) failure rate inference and reliability prediction from the time type of failure data; 3) analyses of the demand type of failure data; 4) a common mode failure model applicable to the time type of failure data; and 5) estimation of common mode failures from 'near-miss' demand type failure data
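
    For the sparse-data case, a conjugate gamma-Poisson update is the textbook scheme for "time type" failure data; the prior and the observed data below are illustrative assumptions:

    ```python
    # Bayesian estimation of a failure rate from sparse time-type data:
    # n failures observed in T component-hours; prior is an assumption.
    from scipy.stats import gamma

    a0, b0 = 0.5, 1000.0      # Gamma(shape, rate) prior, e.g. a vague industry prior
    n, T = 1, 25000.0         # one failure in 25,000 h -- sparse data

    a_post, b_post = a0 + n, b0 + T          # conjugate Gamma posterior
    mean = a_post / b_post                   # posterior mean rate (per hour)
    lo, hi = gamma.ppf([0.05, 0.95], a_post, scale=1.0 / b_post)

    print(f"posterior mean rate = {mean:.2e}/h, 90% interval = ({lo:.2e}, {hi:.2e})")
    ```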

  11. The relationship between cost estimates reliability and BIM adoption: SEM analysis

    Science.gov (United States)

    Ismail, N. A. A.; Idris, N. H.; Ramli, H.; Rooshdi, R. R. Raja Muhammad; Sahamir, S. R.

    2018-02-01

    This paper presents the usage of Structural Equation Modelling (SEM) approach in analysing the effects of Building Information Modelling (BIM) technology adoption in improving the reliability of cost estimates. Based on the questionnaire survey results, SEM analysis using SPSS-AMOS application examined the relationships between BIM-improved information and cost estimates reliability factors, leading to BIM technology adoption. Six hypotheses were established prior to SEM analysis employing two types of SEM models, namely the Confirmatory Factor Analysis (CFA) model and full structural model. The SEM models were then validated through the assessment on their uni-dimensionality, validity, reliability, and fitness index, in line with the hypotheses tested. The final SEM model fit measures are: P-value=0.000, RMSEA=0.0790.90, TLI=0.956>0.90, NFI=0.935>0.90 and ChiSq/df=2.259; indicating that the overall index values achieved the required level of model fitness. The model supports all the hypotheses evaluated, confirming that all relationship exists amongst the constructs are positive and significant. Ultimately, the analysis verified that most of the respondents foresee better understanding of project input information through BIM visualization, its reliable database and coordinated data, in developing more reliable cost estimates. They also perceive to accelerate their cost estimating task through BIM adoption.

  12. Procedures for treating common cause failures in safety and reliability studies: Analytical background and techniques

    International Nuclear Information System (INIS)

    Mosleh, A.; Fleming, K.N.; Parry, G.W.; Paula, H.M.; Worledge, D.H.; Rasmuson, D.M.

    1989-01-01

    Volume I of this report presents a framework for the inclusion of the impact of common cause failures in risk and reliability evaluations. Common cause failures are defined as that subset of dependent failures for which causes are not explicitly included in the logic model as basic events. The emphasis here is on providing procedures for a practical, systematic approach that can be used to perform and clearly document the analysis. The framework and the methods discussed for performing the different stages of the analysis integrate insights obtained from engineering assessments of the system and the historical evidence from multiple failure events into a systematic, reproducible, and defensible analysis. This document, Volume 2, contains a series of appendices that provide additional background and methodological detail on several important topics discussed in Volume I
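
    One widely used way to put such dependent failures into the quantitative model is the beta-factor parameterization. The sketch below is a generic illustration with invented numbers, not a procedure taken from the report:

    ```python
    # Beta-factor model for a 1-out-of-2 redundant pair: a fraction beta of
    # each component's failure rate is assumed to fail both simultaneously.
    import math

    lam = 1e-4   # total failure rate of one component (per hour) -- assumed
    beta = 0.1   # assumed common cause fraction
    t = 1000.0   # mission time (hours)

    q_all = 1.0 - math.exp(-lam * t)                 # one component, total rate
    q_ind = 1.0 - math.exp(-(1.0 - beta) * lam * t)  # independent part only
    q_ccf = 1.0 - math.exp(-beta * lam * t)          # common cause part

    naive = q_all**2              # treats the pair as fully independent
    with_ccf = q_ind**2 + q_ccf   # beta-factor, rare-event approximation
    print(f"independence assumption: {naive:.3e}   beta-factor: {with_ccf:.3e}")
    ```

    Even a modest beta drives the system unavailability well above the pure-independence estimate, which is why the report treats these events explicitly.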

  13. Inverse sampled Bernoulli (ISB) procedure for estimating a population proportion, with nuclear material applications

    International Nuclear Information System (INIS)

    Wright, T.

    1982-01-01

    A new sampling procedure is introduced for estimating a population proportion. The procedure combines the ideas of inverse binomial sampling and Bernoulli sampling. An unbiased estimator is given with its variance. The procedure can be viewed as a generalization of inverse binomial sampling
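
    Wright's ISB procedure itself adds a Bernoulli sampling stage that is not reproduced here; the simulation below only checks the classical building block, Haldane's unbiased estimator under plain inverse binomial sampling:

    ```python
    # Inverse binomial sampling: draw Bernoulli(p) trials until the r-th
    # success; p_hat = (r-1)/(N-1) is unbiased for p (Haldane, 1945).
    import random

    def inverse_binomial_estimate(p, r, rng):
        successes, trials = 0, 0
        while successes < r:            # sample until the r-th success
            trials += 1
            if rng.random() < p:
                successes += 1
        return (r - 1) / (trials - 1)

    rng = random.Random(42)
    p_true, r, reps = 0.3, 5, 100_000
    est = sum(inverse_binomial_estimate(p_true, r, rng) for _ in range(reps)) / reps
    print(f"true p = {p_true}, mean of estimator = {est:.4f}")   # ~0.3
    ```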

  14. Reliability and validity of procedure-based assessments in otolaryngology training.

    Science.gov (United States)

    Awad, Zaid; Hayden, Lindsay; Robson, Andrew K; Muthuswamy, Keerthini; Tolley, Neil S

    2015-06-01

    To investigate the reliability and construct validity of procedure-based assessment (PBA) in assessing performance and progress in otolaryngology training. Retrospective database analysis using a national electronic database. We analyzed PBAs of otolaryngology trainees in North London from core trainees (CTs) to specialty trainees (STs). The tool contains six multi-item domains: consent, planning, preparation, exposure/closure, technique, and postoperative care, rated as "satisfactory" or "development required," in addition to an overall performance rating (pS) of 1 to 4. Individual domain score, overall calculated score (cS), and number of "development-required" items were calculated for each PBA. Receiver operating characteristic analysis helped determine sensitivity and specificity. There were 3,152 otolaryngology PBAs from 46 otolaryngology trainees analyzed. PBA reliability was high (Cronbach's α 0.899), and sensitivity approached 99%. cS correlated positively with pS and level in training (rs: +0.681 and +0.324, respectively). STs had higher cS and pS than CTs (93% ± 0.6 and 3.2 ± 0.03 vs. 71% ± 3.1 and 2.3 ± 0.08, respectively; statistically significant). PBA is reliable and valid for assessing otolaryngology trainees' performance and progress at all levels. It is highly sensitive in identifying competent trainees. The tool is used in a formative and feedback capacity. The technical domain is the best predictor and should be given close attention. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  15. Estimated Value of Service Reliability for Electric Utility Customers in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, M.J.; Mercurio, Matthew; Schellenberg, Josh

    2009-06-01

    Information on the value of reliable electricity service can be used to assess the economic efficiency of investments in generation, transmission and distribution systems, to strategically target investments to customer segments that receive the most benefit from system improvements, and to numerically quantify the risk associated with different operating, planning and investment strategies. This paper summarizes research designed to provide estimates of the value of service reliability for electricity customers in the US. These estimates were obtained by analyzing the results from 28 customer value of service reliability studies conducted by 10 major US electric utilities over the 16 year period from 1989 to 2005. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods it was possible to integrate their results into a single meta-database describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can be generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical regions within the US for industrial, commercial, and residential customers. Estimated interruption costs for different types of customers and of different duration are provided. Finally, additional research and development designed to expand the usefulness of this powerful database and analysis are suggested.

  16. Feasibility and reliability of digital imaging for estimating food selection and consumption from students' packed lunches.

    Science.gov (United States)

    Taylor, Jennifer C; Sutter, Carolyn; Ontai, Lenna L; Nishina, Adrienne; Zidenberg-Cherr, Sheri

    2018-01-01

    Although increasing attention is placed on the quality of foods in children's packed lunches, few studies have examined the capacity of observational methods to reliably determine both what is selected and consumed from these lunches. The objective of this project was to assess the feasibility and inter-rater reliability of digital imaging for determining selection and consumption from students' packed lunches, by adapting approaches previously applied to school lunches. Study 1 assessed feasibility and reliability of data collection among a sample of packed lunches (n = 155), while Study 2 further examined reliability in a larger sample of packed (n = 386) as well as school (n = 583) lunches. Based on the results from Study 1, it was feasible to collect and code most items in packed lunch images; missing data were most commonly attributed to packaging that limited visibility of contents. Across both studies, there was satisfactory reliability for determining food types selected, quantities selected, and quantities consumed in the eight food categories examined (weighted kappa coefficients 0.68-0.97 for packed lunches, 0.74-0.97 for school lunches), with lowest reliability for estimating condiments and meats/meat alternatives in packed lunches. In extending methods predominately applied to school lunches, these findings demonstrate the capacity of digital imaging for the objective estimation of selection and consumption from both school and packed lunches. Copyright © 2017 Elsevier Ltd. All rights reserved.
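
    A hedged sketch of the kind of inter-rater statistic reported above; the coder data are invented and the quadratic weighting is an assumption (the paper only states that weighted kappa coefficients were used):

    ```python
    # Weighted kappa between two coders rating selected quantity in
    # quarter-portion bins (0-4); illustrative data only.
    from sklearn.metrics import cohen_kappa_score

    coder1 = [4, 3, 2, 4, 1, 0, 3, 2, 4, 3]
    coder2 = [4, 3, 3, 4, 1, 0, 2, 2, 4, 4]

    # Quadratic weights penalize near-misses less than distant disagreements.
    kappa_w = cohen_kappa_score(coder1, coder2, weights="quadratic")
    print(f"weighted kappa = {kappa_w:.2f}")
    ```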

  17. Reliability of Semiautomated Computational Methods for Estimating Tibiofemoral Contact Stress in the Multicenter Osteoarthritis Study

    Directory of Open Access Journals (Sweden)

    Donald D. Anderson

    2012-01-01

    Recent findings suggest that contact stress is a potent predictor of subsequent symptomatic osteoarthritis development in the knee. However, much larger numbers of knees (likely on the order of hundreds, if not thousands) need to be reliably analyzed to achieve the statistical power necessary to clarify this relationship. This study assessed the reliability of new semiautomated computational methods for estimating contact stress in knees from large population-based cohorts. Ten knees of subjects from the Multicenter Osteoarthritis Study were included. Bone surfaces were manually segmented from sequential 1.0 Tesla magnetic resonance imaging slices by three individuals on two nonconsecutive days. Four individuals then registered the resulting bone surfaces to corresponding bone edges on weight-bearing radiographs, using a semiautomated algorithm. Discrete element analysis methods were used to estimate contact stress distributions for each knee. Segmentation and registration reliabilities (day-to-day and interrater) for peak and mean medial and lateral tibiofemoral contact stress were assessed with Shrout-Fleiss intraclass correlation coefficients (ICCs). The segmentation and registration steps of the modeling approach were found to have excellent day-to-day (ICC 0.93–0.99) and good inter-rater reliability (ICC 0.84–0.97). This approach for estimating compartment-specific tibiofemoral contact stress appears to be sufficiently reliable for use in large population-based cohorts.

  18. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  19. Safeprops: A Software for Fast and Reliable Estimation of Safety and Environmental Properties for Organic Compounds

    DEFF Research Database (Denmark)

    Jones, Mark Nicholas; Frutiger, Jerome; Abildskov, Jens

    We present a new software tool called SAFEPROPS which is able to estimate major safety-related and environmental properties for organic compounds. SAFEPROPS provides accurate, reliable and fast predictions using the Marrero-Gani group contribution (MG-GC) method. It is implemented using Python as the main programming language, while the necessary parameters together with their correlation matrix are obtained from a SQLite database which has been populated using off-line parameter and error estimation routines (Eq. 3-8).

  20. ESTIMATING RELIABILITY OF DISTURBANCES IN SATELLITE TIME SERIES DATA BASED ON STATISTICAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    Z.-G. Zhou

    2016-06-01

    Normally, the status of land cover is inherently dynamic and changes continuously on a temporal scale. However, disturbances or abnormal changes of land cover — caused by, for example, forest fire, flood, deforestation, and plant diseases — occur worldwide at unknown times and locations. Timely detection and characterization of these disturbances is of importance for land cover monitoring. Recently, many time-series-analysis methods have been developed for near real-time or online disturbance detection using satellite image time series. However, most of the present methods only label the detection results with "change/no change", while few methods focus on estimating the reliability (or confidence level) of the detected disturbances in image time series. To this end, this paper proposes a statistical analysis method for estimating the reliability of disturbances in newly available remote sensing image time series, through analysis of the full temporal information contained in the time series data. The method consists of three main steps: (1) segmenting and modelling historical time series data based on Breaks for Additive Seasonal and Trend (BFAST); (2) forecasting and detecting disturbances in new time series data; (3) estimating the reliability of each detected disturbance using statistical analysis based on Confidence Intervals (CI) and Confidence Levels (CL). The method was validated by estimating the reliability of disturbance regions caused by a recent severe flood that occurred around the border of Russia and China. Results demonstrated that the method can estimate the reliability of disturbances detected in satellite images with estimation error less than 5% and overall accuracy up to 90%.
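
    A minimal sketch of step (3), assuming a plain harmonic regression stands in for the BFAST model and with simulated data throughout:

    ```python
    # Score a new observation against the forecast of a seasonal-trend model:
    # the "reliability" is the confidence level at which the two-sided
    # prediction interval just excludes the observation. Data are simulated.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    t_hist = np.arange(120)                           # 10 years, monthly index
    y_hist = 0.5 + 0.2*np.sin(2*np.pi*t_hist/12) + rng.normal(0, 0.03, t_hist.size)

    def design(t):   # intercept + trend + annual harmonic
        return np.column_stack([np.ones_like(t, dtype=float), t,
                                np.sin(2*np.pi*t/12), np.cos(2*np.pi*t/12)])

    beta, *_ = np.linalg.lstsq(design(t_hist), y_hist, rcond=None)
    sigma = np.std(y_hist - design(t_hist) @ beta, ddof=4)   # residual std

    t_new, y_new = np.array([120]), 0.28                     # flood-depressed value
    z = abs(y_new - design(t_new) @ beta)[0] / sigma
    reliability = 2*norm.cdf(z) - 1
    print(f"z = {z:.1f}, disturbance reliability ≈ {reliability:.3f}")
    ```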

  1. Nonparametric Estimation of Interval Reliability for Discrete-Time Semi-Markov Systems

    DEFF Research Database (Denmark)

    Georgiadis, Stylianos; Limnios, Nikolaos

    2016-01-01

    In this article, we consider a repairable discrete-time semi-Markov system with finite state space. The measure of the interval reliability is given as the probability of the system being operational over a given finite-length time interval. A nonparametric estimator is proposed for the interval...

  2. Uncertainty in reliability estimation : when do we know everything we know?

    NARCIS (Netherlands)

    Houben, M.J.H.A.; Sonnemans, P.J.M.; Newby, M.J.; Bris, R.; Guedes Soares, C.; Martorell, S.

    2009-01-01

    In this paper we demonstrate the use of an adapted Grounded Theory approach, through interviews and their analysis, to determine explicit uncertainty (known unknowns) for reliability estimation in the early phases of product development. We have applied the adapted Grounded Theory approach in a case

  3. Estimating reliability coefficients with heterogeneous item weightings using Stata: A factor based approach

    NARCIS (Netherlands)

    Boermans, M.A.; Kattenberg, M.A.C.

    2011-01-01

    We show how to estimate a Cronbach's alpha reliability coefficient in Stata after running a principal component or factor analysis. Alpha evaluates to what extent items measure the same underlying content when the items are combined into a scale or used as a latent variable. Stata allows for testing
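
    For readers outside Stata, a factor-free version of the same alpha computation can be written directly (simulated items; Python here rather than the paper's Stata):

    ```python
    # Cronbach's alpha for a 5-item scale: alpha = k/(k-1) * (1 - sum of
    # item variances / variance of the sum score). Data are simulated.
    import numpy as np

    rng = np.random.default_rng(1)
    latent = rng.normal(size=300)
    items = np.column_stack([latent + rng.normal(0, 1.0, 300) for _ in range(5)])

    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = k / (k - 1) * (1 - item_vars / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")
    ```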

  4. Can a sample of Landsat sensor scenes reliably estimate the global extent of tropical deforestation?

    Science.gov (United States)

    R. L. Czaplewski

    2003-01-01

    Tucker and Townshend (2000) conclude that wall-to-wall coverage is needed to avoid gross errors in estimations of deforestation rates because tropical deforestation is concentrated along roads and rivers. They specifically question the reliability of the 10% sample of Landsat sensor scenes used in the global remote sensing survey conducted by the Food and...

  5. Perceptual and Acoustic Reliability Estimates for the Speech Disorders Classification System (SDCS)

    Science.gov (United States)

    Shriberg, Lawrence D.; Fourakis, Marios; Hall, Sheryl D.; Karlsson, Heather B.; Lohmeier, Heather L.; McSweeny, Jane L.; Potter, Nancy L.; Scheer-Cohen, Alison R.; Strand, Edythe A.; Tilkens, Christie M.; Wilson, David L.

    2010-01-01

    A companion paper describes three extensions to a classification system for paediatric speech sound disorders termed the Speech Disorders Classification System (SDCS). The SDCS uses perceptual and acoustic data reduction methods to obtain information on a speaker's speech, prosody, and voice. The present paper provides reliability estimates for…

  6. Integration of external estimated breeding values and associated reliabilities using correlations among traits and effects

    NARCIS (Netherlands)

    Vandenplas, J.; Colinet, F.G.; Glorieux, G.; Bertozzi, C.; Gengler, N.

    2015-01-01

    Based on a Bayesian view of linear mixed models, several studies showed the possibilities to integrate estimated breeding values (EBV) and associated reliabilities (REL) provided by genetic evaluations performed outside a given evaluation system into this genetic evaluation. Hereafter, the term

  7. Reliability of stellar inclination estimated from asteroseismology: analytical criteria, mock simulations and Kepler data analysis

    Science.gov (United States)

    Kamiaka, Shoya; Benomar, Othman; Suto, Yasushi

    2018-05-01

    Advances in asteroseismology of solar-like stars now provide a unique method to estimate the stellar inclination i⋆. This enables evaluation of the spin-orbit angle of transiting planetary systems, in a complementary fashion to the Rossiter-McLaughlin effect, a well-established method to estimate the projected spin-orbit angle λ. Although the asteroseismic method has been broadly applied to the Kepler data, its reliability has yet to be assessed intensively. In this work, we evaluate the accuracy of i⋆ from asteroseismology of solar-like stars using 3000 simulated power spectra. We find that the low signal-to-noise ratio of the power spectra induces a systematic under-estimate (over-estimate) bias for stars with high (low) inclinations. We derive analytical criteria for a reliable asteroseismic estimate, which indicate that reliable measurements are possible in the range of 20° ≲ i⋆ ≲ 80° only for stars with high signal-to-noise ratio. We also analyse and measure the stellar inclination of 94 Kepler main-sequence solar-like stars, among which 33 are planetary hosts. According to our reliability criteria, a third of them (9 with planets, 22 without) have accurate stellar inclinations. Comparison of our asteroseismic estimates of v sin i⋆ against spectroscopic measurements indicates that the latter suffer from a large uncertainty, possibly due to the modeling of macro-turbulence, especially for stars with projected rotation speed v sin i⋆ ≲ 5 km/s. This reinforces earlier claims, and stellar inclinations estimated from the combination of measurements from spectroscopy and photometric variation for slowly rotating stars need to be interpreted with caution.

  8. Estimations of parameters in Pareto reliability model in the presence of masked data

    International Nuclear Information System (INIS)

    Sarhan, Ammar M.

    2003-01-01

    Estimation of the parameters included in the individual distributions of the lifetimes of system components in a series system is considered in this paper, based on masked system life test data. We consider a series system of two independent components, each of which has a Pareto-distributed lifetime. The maximum likelihood and Bayes estimators for the parameters and for the values of the reliability of the system's components at a specific time are obtained. Symmetrical triangular prior distributions are assumed for the unknown parameters in obtaining their Bayes estimators. Large simulation studies are done in order to: (i) explain how one can utilize the theoretical results obtained; (ii) compare the maximum likelihood and Bayes estimates obtained for the underlying parameters; and (iii) study the influence of the masking level and the sample size on the accuracy of the estimates obtained

  9. Effective dose estimation to patients and staff during urethrography procedures

    International Nuclear Information System (INIS)

    Sulieman, A.; Barakat, H.; Alkhorayef, M.; Babikir, E.; Dalton, A.; Bradley, D.

    2015-10-01

    Medical-related radiation is the largest source of controllable radiation exposure to humans, accounting for more than 95% of exposure from man-made sources. Few data are available worldwide regarding patient and staff dose during urological ascending urethrography (ASU) procedures. The purposes of this study are to measure patient and staff entrance surface air kerma (ESAK) during ASU procedures and to evaluate the effective doses. A total of 243 patients and 145 staff (urologists) were examined in three hospitals in Khartoum state. ESAKs were measured for patients and staff using thermoluminescent detectors (TLDs). Effective doses (E) were calculated using published conversion factors and methods recommended by the National Radiological Protection Board (NRPB). The mean ESAK for patients and staff were 7.79±6.7 mGy and 0.161±0.30 mGy per procedure, respectively. The mean effective dose was 1.21 mSv per procedure. The radiation dose in this study is comparable with previous studies except at Hospital C. It is evident that the high patient and staff exposure is due to a lack of experience and protective equipment. Interventional procedures remain operator dependent; therefore continuous training is crucial. (Author)

  10. Effective dose estimation to patients and staff during urethrography procedures

    Energy Technology Data Exchange (ETDEWEB)

    Sulieman, A. [Prince Sattam bin Abdulaziz University, College of Applied Medical Sciences, Radiology and Medical Imaging Department, P. O- Box 422, Alkharj 11942 (Saudi Arabia); Barakat, H. [Neelain University, College of Science and Technology, Medical Physics Department, Khartoum (Sudan); Alkhorayef, M.; Babikir, E. [King Saud University, College of Applied Sciences, Radiological Sciences Department, P. O. Box 10219, Riyadh 11433 (Saudi Arabia); Dalton, A.; Bradley, D. [University of Surrey, Centre for Nuclear and Radiation Physics, Department of Physics, Surrey, GU2 7XH Guildford (United Kingdom)

    2015-10-15

    Medical-related radiation is the largest source of controllable radiation exposure to humans, accounting for more than 95% of exposure from man-made sources. Few data are available worldwide regarding patient and staff dose during urological ascending urethrography (ASU) procedures. The purposes of this study are to measure patient and staff entrance surface air kerma (ESAK) during ASU procedures and to evaluate the effective doses. A total of 243 patients and 145 staff (urologists) were examined in three hospitals in Khartoum state. ESAKs were measured for patients and staff using thermoluminescent detectors (TLDs). Effective doses (E) were calculated using published conversion factors and methods recommended by the National Radiological Protection Board (NRPB). The mean ESAK for patients and staff were 7.79±6.7 mGy and 0.161±0.30 mGy per procedure, respectively. The mean effective dose was 1.21 mSv per procedure. The radiation dose in this study is comparable with previous studies except at Hospital C. It is evident that the high patient and staff exposure is due to a lack of experience and protective equipment. Interventional procedures remain operator dependent; therefore continuous training is crucial. (Author)

  11. Information matrix estimation procedures for cognitive diagnostic models.

    Science.gov (United States)

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.

  12. Estimation of the human error probabilities in the human reliability analysis

    International Nuclear Information System (INIS)

    Liu Haibin; He Xuhong; Tong Jiejuan; Shen Shifei

    2006-01-01

    Human error data is an important issue in human reliability analysis (HRA). Bayesian parameter estimation, which can combine multiple sources of information such as the historical data of NPPs and expert judgment data to modify the human error data, can yield human error data that more truly reflect the real situation of an NPP. This paper, using a numerical computation program developed by the authors, presents some typical examples to illustrate the process of Bayesian parameter estimation in HRA and discusses the effect of different modification data on the Bayesian parameter estimation. (authors)
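
    A minimal sketch of such a Bayesian update for a demand-based human error probability, assuming a conjugate beta prior represents the expert judgment; all numbers are illustrative:

    ```python
    # Beta-binomial update of a human error probability (HEP): expert
    # judgment as a Beta prior, plant history as binomial evidence.
    from scipy.stats import beta

    a0, b0 = 1.0, 99.0        # expert prior: mean HEP = 0.01
    errors, demands = 2, 350  # plant-specific operating experience

    a1, b1 = a0 + errors, b0 + demands - errors
    print(f"prior mean HEP     = {a0/(a0+b0):.4f}")
    print(f"posterior mean HEP = {a1/(a1+b1):.4f}")
    print("90% credible interval:",
          [round(x, 4) for x in beta.ppf([0.05, 0.95], a1, b1)])
    ```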

  13. Reliability analysis based on a novel density estimation method for structures with correlations

    Directory of Open Access Journals (Sweden)

    Baoyu LI

    2017-06-01

    Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, and the failure probability can then be easily obtained by integration over the failure domain. However, efficiently estimating the PDF is still an urgent problem to be solved. The existing fractional-moment-based maximum entropy method provides a very advanced approach to PDF estimation, but its main shortcoming is that it limits the application of the reliability analysis method to structures with independent inputs. In fact, structures with correlated inputs are common in engineering. This paper therefore improves the maximum entropy method and applies the Unscented Transformation (UT) technique to compute the fractional moments of the performance function for structures with correlations; UT is a very efficient moment estimation method for models with any inputs. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Besides, the number of function evaluations required by the proposed method in reliability analysis, which is determined by UT, is very small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.
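
    A hedged sketch of the UT moment computation for a toy performance function of two correlated normal inputs (the function, the input law and the sigma-point parameter are all assumptions of the sketch):

    ```python
    # Unscented transformation: propagate 2n+1 sigma points through the
    # performance function g and weight them to approximate E|g|^q.
    import numpy as np

    mu = np.array([10.0, 5.0])
    cov = np.array([[1.0, 0.6],
                    [0.6, 1.0]])          # correlated inputs
    g = lambda x: x[0] - 0.8 * x[1]       # toy performance function

    n, kappa = len(mu), 1.0
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [mu] + [mu + L[:, i] for i in range(n)] + [mu - L[:, i] for i in range(n)]
    w = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))

    vals = np.array([g(p) for p in pts])
    for q in (0.5, 1.0, 2.0):             # fractional moments E|g|^q
        print(f"E|g|^{q} ≈ {np.dot(w, np.abs(vals)**q):.4f}")
    ```

    Note the cost: only five function evaluations here, which is the efficiency argument the abstract makes.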

  14. An automated method for estimating reliability of grid systems using Bayesian networks

    International Nuclear Information System (INIS)

    Doguc, Ozge; Emmanuel Ramirez-Marquez, Jose

    2012-01-01

    Grid computing has become relevant due to its applications to large-scale resource sharing, wide-area information transfer, and multi-institutional collaboration. In general, in grid computing a service requests the use of a set of resources, available in a grid, to complete certain tasks. Although analysis tools and techniques for these types of systems have been studied, grid reliability analysis is generally computation-intensive due to the complexity of the system. Moreover, conventional reliability models rest on some common assumptions that cannot be applied to grid systems. Therefore, new analytical methods are needed for effective and accurate assessment of grid reliability. This study presents a new method for estimating grid service reliability which, unlike previous studies, does not require prior knowledge about the grid system structure. Moreover, the proposed method does not rely on any assumptions about the link and node failure rates. The approach is based on a data-mining algorithm, K2, to discover the grid system structure from raw historical system data, which makes it possible to find minimum resource spanning trees (MRST) within the grid; it then uses Bayesian networks (BN) to model the MRST and estimate grid service reliability.

  15. Confidence Estimation of Reliability Indices of the System with Elements Duplication and Recovery

    Directory of Open Access Journals (Sweden)

    I. V. Pavlov

    2017-01-01

    The article considers the problem of estimating a confidence interval for the main reliability indices, such as availability rate, mean time between failures, and operative availability (in the stationary state), for the model of a system with duplication and independent recovery of elements. It presents a solution of the problem for a situation that often arises in practice, when the exact values of the reliability parameters of the elements are unknown, and only test data on the reliability of the system or its individual parts (elements, subsystems) are available. It should be noted that confidence estimation of reliability indices of complex systems based on the test results of their individual elements is a fairly common task in engineering practice when designing and running various engineering systems. The available papers consider this problem mainly for non-recoverable systems. The article describes a solution of this problem for the important particular case when the system elements are duplicated by reserved elements, and elements that have failed in the course of system operation are recovered (regardless of the state of other elements). An approximate solution of this problem is obtained for the case of high reliability or "fast recovery" of elements, on the assumption that the average recovery time of elements is small compared to the average time between failures.
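
    The point values underlying such interval estimates come from the standard duplicated-pair formulas; a back-of-envelope sketch with illustrative parameters:

    ```python
    # Stationary availability and approximate MTBF of a duplicated element
    # pair with independent repair; parameter values are invented.
    mtbf, mttr = 1000.0, 10.0            # per element, hours
    a = mtbf / (mtbf + mttr)             # element availability

    A_sys = 1 - (1 - a)**2               # 1-out-of-2 duplicated pair
    mtbf_sys = mtbf**2 / (2 * mttr)      # classical "fast recovery" approximation
    print(f"element A = {a:.5f}, system A = {A_sys:.7f}, system MTBF ≈ {mtbf_sys:.0f} h")
    ```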

  16. 40 CFR 98.65 - Procedures for estimating missing data.

    Science.gov (United States)

    2010-07-01

    ... factor (1.6 metric tons CO2/metric ton aluminum produced). MPp = Metal production from prebake process... produced). MPs = Metal production from Søderberg process (metric tons Al). (b) For other parameters, use the... PROGRAMS (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Aluminum Production § 98.65 Procedures for...

  17. Methods for estimating the reliability of the RBMK fuel assemblies and elements

    International Nuclear Information System (INIS)

    Klemin, A.I.; Sitkarev, A.G.

    1985-01-01

    Applied non-parametric methods for calculating point and interval estimates for the basic set of reliability factors for RBMK fuel assemblies and elements are described. The reliability factors considered are the average lifetime at a preset operating time up to unloading due to fuel burnout, as well as the average lifetime under transient reactor operation and under the steady-state fuel reloading mode of reactor operation. The formulae obtained are included in the special standardized engineering documentation

  18. Reliable and Efficient Procedure for Steady-State Analysis of Nonautonomous and Autonomous Systems

    Directory of Open Access Journals (Sweden)

    J. Dobes

    2012-04-01

    The majority of contemporary design tools still do not contain steady-state algorithms, especially for autonomous systems. This is mainly caused by the insufficient accuracy of the algorithms used for numerical integration, but also by the unreliability of the steady-state algorithms themselves. Therefore, in this paper, a very stable and efficient procedure for the numerical integration of nonlinear differential-algebraic systems is defined first. Afterwards, two improved methods are defined for finding the steady state, which use this integration algorithm in their iteration loops. The first is based on the idea of extrapolation, and the second utilizes nonstandard time-domain sensitivity analysis. The two steady-state algorithms are compared by analyses of a rectifier and a class-C amplifier, and the extrapolation algorithm is selected as the more reliable alternative. Finally, the method based on extrapolation, naturally cooperating with the algorithm for solving differential-algebraic systems, is thoroughly tested on various electronic circuits: Van der Pol and Colpitts oscillators, a fragment of a large bipolar logic circuit, feedback and distributed microwave oscillators, and a power amplifier. The results confirm that the extrapolation method is faster than classical plain numerical integration, especially for larger circuits with complicated transients.

  19. High-reliability microcontroller nerve stimulator for assistance in regional anaesthesia procedures.

    Science.gov (United States)

    Ferri, Carlos A; Quevedo, Antonio A F

    2017-07-01

    In recent decades, the use of nerve stimulators to aid in regional anaesthesia has been shown to benefit the patient, since it allows better location of the nerve plexus, leading to correct positioning of the needle through which the anaesthetic is applied. However, most of the nerve stimulators available on the market for this purpose do not have the minimum recommended features for a good stimulator, and this can pose risks to the patient. Thus, this study aims to develop a device, using embedded electronics, that meets all the characteristics required for a successful blockade. The system is made of modules for generation and overall control of the current pulse and for the patient and user interfaces. The results show that the designed system meets the required specifications for a good and reliable nerve stimulator. Linearity proved satisfactory, ensuring accuracy in electrical current amplitude for a wide range of body impedances. Field tests have proven very successful. The anaesthesiologist who used the system reported that, in all cases, plexus blocking was achieved with higher quality, faster anaesthetic diffusion and without need of an additional dose when compared with the same procedure without the use of the device.

  20. A single model procedure for estimating tank calibration equations

    International Nuclear Information System (INIS)

    Liebetrau, A.M.

    1997-10-01

    A fundamental component of any accountability system for nuclear materials is a tank calibration equation that relates the height of liquid in a tank to its volume. Tank volume calibration equations are typically determined from pairs of height and volume measurements taken in a series of calibration runs. After raw calibration data are standardized to a fixed set of reference conditions, the calibration equation is typically fit by dividing the data into several segments--corresponding to regions in the tank--and independently fitting the data for each segment. The estimates obtained for individual segments must then be combined to obtain an estimate of the entire calibration function. This process is tedious and time-consuming. Moreover, uncertainty estimates may be misleading because it is difficult to properly model run-to-run variability and between-segment correlation. In this paper, the authors describe a model whose parameters can be estimated simultaneously for all segments of the calibration data, thereby eliminating the need for segment-by-segment estimation. The essence of the proposed model is to define a suitable polynomial to fit to each segment and then extend its definition to the domain of the entire calibration function, so that it (the entire calibration function) can be expressed as the sum of these extended polynomials. The model provides defensible estimates of between-run variability and yields a proper treatment of between-segment correlations. A portable software package, called TANCS, has been developed to facilitate the acquisition, standardization, and analysis of tank calibration data. The TANCS package was used for the calculations in an example presented to illustrate the unified modeling approach described in this paper. With TANCS, a trial calibration function can be estimated and evaluated in a matter of minutes
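
    A hedged sketch of the "extended polynomial" idea on simulated data: each segment's quadratic is extended across the whole height domain via truncated terms, so a single least-squares fit yields one calibration equation (the knot location and data are invented; this is not the TANCS implementation):

    ```python
    # Single-model calibration fit with a truncated power basis: one design
    # matrix covers both tank segments, avoiding segment-by-segment fitting.
    import numpy as np

    rng = np.random.default_rng(3)
    h = np.sort(rng.uniform(0, 300, 200))                # liquid height, cm
    knot = 150.0                                         # segment boundary
    true = 2.0*h + 0.004*h**2 + 0.01*np.clip(h - knot, 0, None)**2
    v = true + rng.normal(0, 1.0, h.size)                # measured volume, litres

    X = np.column_stack([np.ones_like(h), h, h**2,
                         np.clip(h - knot, 0, None)**2]) # truncated power basis
    coef, *_ = np.linalg.lstsq(X, v, rcond=None)

    # One smooth calibration equation valid across both segments.
    print("fitted coefficients:", np.round(coef, 4))
    ```

    Because all segments share one parameter vector, between-segment correlations fall out of a single covariance matrix rather than an ad hoc combination step.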

  1. ARA and ARI imperfect repair models: Estimation, goodness-of-fit and reliability prediction

    International Nuclear Information System (INIS)

    Toledo, Maria Luíza Guerra de; Freitas, Marta A.; Colosimo, Enrico A.; Gilardoni, Gustavo L.

    2015-01-01

    An appropriate maintenance policy is essential to reduce expenses and risks related to equipment failures. A fundamental aspect to be considered when specifying such policies is the ability to predict the reliability of the systems under study, based on a well-fitted model. In this paper, the Arithmetic Reduction of Age and Arithmetic Reduction of Intensity classes of models are explored. Likelihood functions for such models are derived, and a graphical method is proposed for model selection. A real data set involving failures in trucks used by a Brazilian mining company is analyzed, considering models with different memories. Parameters, namely the shape and scale of the Power Law Process and the efficiency of repair, were estimated for the best-fitted model. Estimation of the model parameters allowed us to derive reliability estimators to predict the behavior of the failure process. These results are valuable information for the mining company and can be used to support decision making regarding preventive maintenance policy. - Highlights: • Likelihood functions for imperfect repair models are derived. • A goodness-of-fit technique is proposed as a tool for model selection. • Failures in trucks owned by a Brazilian mining company are modeled. • Estimation allowed deriving reliability predictors to forecast the future failure process of the trucks
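
    Fitting the ARA/ARI models themselves requires numerical likelihood maximization; as a hedged baseline, the closed-form ML estimates of the Power Law Process they build on can be computed directly (the failure times below are invented):

    ```python
    # ML estimates for a failure-truncated Power Law Process ("as bad as
    # old" repair), the baseline generalized by the ARA/ARI classes.
    import numpy as np

    t = np.array([210., 540., 890., 1300., 1570., 1870., 2050., 2230.])  # hours
    T = t[-1]                                   # failure-truncated at t_n
    n = t.size

    beta_hat = n / np.sum(np.log(T / t[:-1]))   # shape (>1 => worsening system)
    theta_hat = T / n**(1.0 / beta_hat)         # scale
    rocof_T = (beta_hat / theta_hat) * (T / theta_hat)**(beta_hat - 1)
    print(f"beta = {beta_hat:.2f}, theta = {theta_hat:.0f} h, "
          f"failure intensity at T = {rocof_T:.4f}/h")
    ```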

  2. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    International Nuclear Information System (INIS)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee; Lee, Minuk; Choi, Jong-su; Hong, Sup

    2015-01-01

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF

  3. Threshold Estimation of Generalized Pareto Distribution Based on Akaike Information Criterion for Accurate Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Seunghoon; Lim, Woochul; Cho, Su-gil; Park, Sanghyun; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Minuk; Choi, Jong-su; Hong, Sup [Korea Research Insitute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-02-15

    In order to perform estimations with high reliability, it is necessary to deal with the tail part of the cumulative distribution function (CDF) in greater detail compared to an overall CDF. The use of a generalized Pareto distribution (GPD) to model the tail part of a CDF is receiving more research attention with the goal of performing estimations with high reliability. Current studies on GPDs focus on ways to determine the appropriate number of sample points and their parameters. However, even if a proper estimation is made, it can be inaccurate as a result of an incorrect threshold value. Therefore, in this paper, a GPD based on the Akaike information criterion (AIC) is proposed to improve the accuracy of the tail model. The proposed method determines an accurate threshold value using the AIC with the overall samples before estimating the GPD over the threshold. To validate the accuracy of the method, its reliability is compared with that obtained using a general GPD model with an empirical CDF.
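
    A hedged sketch of the threshold search on simulated data; note that it scores each candidate threshold by the AIC of the GPD fitted to its exceedances, which is a simplification of the paper's criterion based on the overall sample:

    ```python
    # Candidate-threshold search for a GPD tail model, keeping the
    # AIC-minimizing threshold; data are simulated, not the paper's.
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(7)
    data = rng.lognormal(mean=0.0, sigma=0.8, size=2000)   # heavy-ish tail

    best = None
    for u in np.quantile(data, [0.80, 0.85, 0.90, 0.95]):  # candidate thresholds
        exc = data[data > u] - u
        c, _, scale = genpareto.fit(exc, floc=0)           # GPD on exceedances
        loglik = genpareto.logpdf(exc, c, loc=0, scale=scale).sum()
        aic = 2 * 2 - 2 * loglik                           # k = 2 free parameters
        if best is None or aic < best[0]:
            best = (aic, u, c, scale)

    aic, u, c, scale = best
    print(f"threshold = {u:.3f} (AIC = {aic:.1f}), shape = {c:.3f}, scale = {scale:.3f}")
    ```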

  4. Estimation of organ doses of patient undergoing hepatic chemoembolization procedures

    International Nuclear Information System (INIS)

    Jaramillo, G.W.; Kramer, R.; Khoury, H.J.; Barros, V.S.M.; Andrade, G.

    2015-01-01

    The aim of this study is to evaluate the organ doses of patients undergoing hepatic chemoembolization procedures performed in two hospitals in the city of Recife, Brazil. Forty-eight patients undergoing fifty hepatic chemoembolization procedures were investigated. For the 20 cases with PA projection only, organ and tissue absorbed doses as well as radiation risks were calculated. For this purpose, organ and tissue dose-to-KAP conversion coefficients were calculated using the mesh-based phantom series FASH and MASH coupled to the EGSnrc Monte Carlo code. Clinical, dosimetric and irradiation parameters were registered for all patients. The maximum organ doses found were 1.72 Gy, 0.65 Gy, 0.56 Gy and 0.33 Gy for skin, kidneys, adrenals and liver, respectively. (authors)

  5. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, 2

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1976-01-01

    The problem of obtaining numerically maximum likelihood estimates of the parameters for a mixture of normal distributions is addressed. In recent literature, a certain successive approximations procedure, based on the likelihood equations, is shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, a general iterative procedure is introduced, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. With probability 1 as the sample size grows large, it is shown that this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. The step-size which yields optimal local convergence rates for large samples is determined in a sense by the separation of the component normal densities and is bounded below by a number between 1 and 2.

  6. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1978-01-01

    This paper addresses the problem of obtaining numerically maximum-likelihood estimates of the parameters for a mixture of normal distributions. In recent literature, a certain successive-approximations procedure, based on the likelihood equations, was shown empirically to be effective in numerically approximating such maximum-likelihood estimates; however, the reliability of this procedure was not established theoretically. Here, we introduce a general iterative procedure, of the generalized steepest-ascent (deflected-gradient) type, which is just the procedure known in the literature when the step-size is taken to be 1. We show that, with probability 1 as the sample size grows large, this procedure converges locally to the strongly consistent maximum-likelihood estimate whenever the step-size lies between 0 and 2. We also show that the step-size which yields optimal local convergence rates for large samples is determined in a sense by the 'separation' of the component normal densities and is bounded below by a number between 1 and 2.
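
    A minimal sketch of this relaxed update for a two-component normal mixture, with the step size s as the free parameter (s = 1 recovers plain EM; the data and starting values are invented):

    ```python
    # Generalized successive approximations for a 2-component normal mixture:
    # theta <- theta + s * (EM_target(theta) - theta), with s in (0, 2).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    x = np.concatenate([rng.normal(-2, 1.0, 300), rng.normal(3, 1.5, 700)])

    w, mu1, mu2, s1, s2 = 0.5, -1.0, 1.0, 1.0, 1.0   # initial guesses
    step = 1.2                                       # deflected-gradient step size

    for _ in range(200):
        # E-step: posterior membership probabilities.
        p1 = w * norm.pdf(x, mu1, s1)
        p2 = (1 - w) * norm.pdf(x, mu2, s2)
        r = p1 / (p1 + p2)
        # M-step targets (the plain EM update).
        w_t = r.mean()
        mu1_t, mu2_t = np.average(x, weights=r), np.average(x, weights=1 - r)
        s1_t = np.sqrt(np.average((x - mu1_t)**2, weights=r))
        s2_t = np.sqrt(np.average((x - mu2_t)**2, weights=1 - r))
        # Relaxed update with step size s.
        w   += step * (w_t - w);    mu1 += step * (mu1_t - mu1)
        mu2 += step * (mu2_t - mu2)
        s1  += step * (s1_t - s1);  s2  += step * (s2_t - s2)

    print(f"w = {w:.2f}, mu = ({mu1:.2f}, {mu2:.2f}), sigma = ({s1:.2f}, {s2:.2f})")
    ```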

  7. Reliability Estimation of Aero-engine Based on Mixed Weibull Distribution Model

    Science.gov (United States)

    Yuan, Zhongda; Deng, Junxiang; Wang, Dawei

    2018-02-01

    Aero-engine is a complex mechanical electronic system, based on analysis of reliability of mechanical electronic system, Weibull distribution model has an irreplaceable role. Till now, only two-parameter Weibull distribution model and three-parameter Weibull distribution are widely used. Due to diversity of engine failure modes, there is a big error with single Weibull distribution model. By contrast, a variety of engine failure modes can be taken into account with mixed Weibull distribution model, so it is a good statistical analysis model. Except the concept of dynamic weight coefficient, in order to make reliability estimation result more accurately, three-parameter correlation coefficient optimization method is applied to enhance Weibull distribution model, thus precision of mixed distribution reliability model is improved greatly. All of these are advantageous to popularize Weibull distribution model in engineering applications.

  8. Reliability of piping system components. Framework for estimating failure parameters from service data

    International Nuclear Information System (INIS)

    Nyman, R.; Hegedus, D.; Tomic, B.; Lydell, B.

    1997-12-01

    This report summarizes results and insights from the final phase of a R and D project on piping reliability sponsored by the Swedish Nuclear Power Inspectorate (SKI). The technical scope includes the development of an analysis framework for estimating piping reliability parameters from service data. The R and D has produced a large database on the operating experience with piping systems in commercial nuclear power plants worldwide. It covers the period 1970 to the present. The scope of the work emphasized pipe failures (i.e., flaws/cracks, leaks and ruptures) in light water reactors (LWRs). Pipe failures are rare events. A data reduction format was developed to ensure that homogenous data sets are prepared from scarce service data. This data reduction format distinguishes between reliability attributes and reliability influence factors. The quantitative results of the analysis of service data are in the form of conditional probabilities of pipe rupture given failures (flaws/cracks, leaks or ruptures) and frequencies of pipe failures. Finally, the R and D by SKI produced an analysis framework in support of practical applications of service data in PSA. This, multi-purpose framework, termed 'PFCA'-Pipe Failure Cause and Attribute- defines minimum requirements on piping reliability analysis. The application of service data should reflect the requirements of an application. Together with raw data summaries, this analysis framework enables the development of a prior and a posterior pipe rupture probability distribution. The framework supports LOCA frequency estimation, steam line break frequency estimation, as well as the development of strategies for optimized in-service inspection strategies

  9. Reliability of piping system components. Framework for estimating failure parameters from service data

    Energy Technology Data Exchange (ETDEWEB)

    Nyman, R [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Hegedus, D; Tomic, B [ENCONET Consulting GesmbH, Vienna (Austria); Lydell, B [RSA Technologies, Vista, CA (United States)

    1997-12-01

    This report summarizes results and insights from the final phase of a R and D project on piping reliability sponsored by the Swedish Nuclear Power Inspectorate (SKI). The technical scope includes the development of an analysis framework for estimating piping reliability parameters from service data. The R and D has produced a large database on the operating experience with piping systems in commercial nuclear power plants worldwide. It covers the period 1970 to the present. The scope of the work emphasized pipe failures (i.e., flaws/cracks, leaks and ruptures) in light water reactors (LWRs). Pipe failures are rare events. A data reduction format was developed to ensure that homogenous data sets are prepared from scarce service data. This data reduction format distinguishes between reliability attributes and reliability influence factors. The quantitative results of the analysis of service data are in the form of conditional probabilities of pipe rupture given failures (flaws/cracks, leaks or ruptures) and frequencies of pipe failures. Finally, the R and D by SKI produced an analysis framework in support of practical applications of service data in PSA. This, multi-purpose framework, termed `PFCA`-Pipe Failure Cause and Attribute- defines minimum requirements on piping reliability analysis. The application of service data should reflect the requirements of an application. Together with raw data summaries, this analysis framework enables the development of a prior and a posterior pipe rupture probability distribution. The framework supports LOCA frequency estimation, steam line break frequency estimation, as well as the development of strategies for optimized in-service inspection strategies. 63 refs, 30 tabs, 22 figs.

  10. Model uncertainty and multimodel inference in reliability estimation within a longitudinal framework.

    Science.gov (United States)

    Alonso, Ariel; Laenen, Annouschka

    2013-05-01

    Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009) combined with some of these strategies may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, in so far as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated on a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the ambit of model selection uncertainty and model averaging transcends the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.
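
    A minimal sketch of the model-averaging step, assuming AIC-based Akaike weights (one common choice; the paper treats model-selection uncertainty more generally) and invented per-model reliability estimates:

    ```python
    # Model-averaged reliability: weight each candidate covariance model's
    # reliability coefficient by its Akaike weight instead of picking one.
    import numpy as np

    aic = np.array([1012.4, 1013.1, 1016.8])   # candidate mixed models
    rel = np.array([0.82, 0.78, 0.74])         # reliability estimate per model

    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    w /= w.sum()                               # Akaike weights

    rel_avg = np.dot(w, rel)
    print("weights:", np.round(w, 3), f"-> averaged reliability = {rel_avg:.3f}")
    ```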

  11. Validity and reliability of Nike + Fuelband for estimating physical activity energy expenditure.

    Science.gov (United States)

    Tucker, Wesley J; Bhammar, Dharini M; Sawyer, Brandon J; Buman, Matthew P; Gaesser, Glenn A

    2015-01-01

    The Nike + Fuelband is a commercially available, wrist-worn accelerometer used to track physical activity energy expenditure (PAEE) during exercise. However, validation studies assessing the accuracy of this device for estimating PAEE are lacking. Therefore, this study examined the validity and reliability of the Nike + Fuelband for estimating PAEE during physical activity in young adults. Secondarily, we compared PAEE estimation of the Nike + Fuelband with the previously validated SenseWear Armband (SWA). Twenty-four participants (n = 24) completed two, 60-min semi-structured routines consisting of sedentary/light-intensity, moderate-intensity, and vigorous-intensity physical activity. Participants wore a Nike + Fuelband and SWA, while oxygen uptake was measured continuously with an Oxycon Mobile (OM) metabolic measurement system (criterion). The Nike + Fuelband (ICC = 0.77) and SWA (ICC = 0.61) both demonstrated moderate to good validity. PAEE estimates provided by the Nike + Fuelband (246 ± 67 kcal) and SWA (238 ± 57 kcal) were not statistically different from OM (243 ± 67 kcal). Both devices also displayed similar mean absolute percent errors for PAEE estimates (Nike + Fuelband = 16 ± 13 %; SWA = 18 ± 18 %). Test-retest reliability for PAEE indicated good stability for Nike + Fuelband (ICC = 0.96) and SWA (ICC = 0.90). The Nike + Fuelband provided valid and reliable estimates of PAEE, similar to the previously validated SWA, during a routine that included approximately equal amounts of sedentary/light-, moderate- and vigorous-intensity physical activity.

  12. A procedure for the estimation over time of metabolic fluxes in scenarios where measurements are uncertain and/or insufficient

    Directory of Open Access Journals (Sweden)

    Picó Jesús

    2007-10-01

    Full Text Available Abstract Background An indirect approach is usually used to estimate the metabolic fluxes of an organism: couple the available measurements with known biological constraints (e.g. stoichiometry. Typically this estimation is done under a static point of view. Therefore, the fluxes so obtained are only valid while the environmental conditions and the cell state remain stable. However, estimating the evolution over time of the metabolic fluxes is valuable to investigate the dynamic behaviour of an organism and also to monitor industrial processes. Although Metabolic Flux Analysis can be successively applied with this aim, this approach has two drawbacks: i sometimes it cannot be used because there is a lack of measurable fluxes, and ii the uncertainty of experimental measurements cannot be considered. The Flux Balance Analysis could be used instead, but the assumption of optimal behaviour of the organism brings other difficulties. Results We propose a procedure to estimate the evolution of the metabolic fluxes that is structured as follows: 1 measure the concentrations of extracellular species and biomass, 2 convert this data to measured fluxes and 3 estimate the non-measured fluxes using the Flux Spectrum Approach, a variant of Metabolic Flux Analysis that overcomes the difficulties mentioned above without assuming optimal behaviour. We apply the procedure to a real problem taken from the literature: estimate the metabolic fluxes during a cultivation of CHO cells in batch mode. We show that it provides a reliable and rich estimation of the non-measured fluxes, thanks to considering measurements uncertainty and reversibility constraints. We also demonstrate that this procedure can estimate the non-measured fluxes even when there is a lack of measurable species. In addition, it offers a new method to deal with inconsistency. Conclusion This work introduces a procedure to estimate time-varying metabolic fluxes that copes with the insufficiency of

  13. Validity and reliability of central blood pressure estimated by upper arm oscillometric cuff pressure.

    Science.gov (United States)

    Climie, Rachel E D; Schultz, Martin G; Nikolic, Sonja B; Ahuja, Kiran D K; Fell, James W; Sharman, James E

    2012-04-01

    Noninvasive central blood pressure (BP) independently predicts mortality, but current methods are operator-dependent, requiring skill to obtain quality recordings. The aims of this study were first, to determine the validity of an automatic, upper arm oscillometric cuff method for estimating central BP (O(CBP)) by comparison with the noninvasive reference standard of radial tonometry (T(CBP)). Second, we determined the intratest and intertest reliability of O(CBP). To assess validity, central BP was estimated by O(CBP) (Pulsecor R6.5B monitor) and compared with T(CBP) (SphygmoCor) in 47 participants free from cardiovascular disease (aged 57 ± 9 years) in supine, seated, and standing positions. Brachial mean arterial pressure (MAP) and diastolic BP (DBP) from the O(CBP) device were used for calibration in both devices. Duplicate measures were recorded in each position on the same day to assess intratest reliability, and participants returned within 10 ± 7 days for repeat measurements to assess intertest reliability. There was a strong intraclass correlation (ICC = 0.987) and a small mean difference (1.2 ± 2.2 mm Hg) for central systolic BP (SBP) determined by O(CBP) compared with T(CBP). Ninety-six percent of all comparisons (n = 495 acceptable recordings) were within 5 mm Hg. With respect to reliability, there were strong correlations but higher limits of agreement for the intratest (ICC = 0.975, mean difference 0.6 ± 4.5 mm Hg) and intertest (ICC = 0.895, mean difference 4.3 ± 8.0 mm Hg) comparisons. Estimation of central SBP using cuff oscillometry is comparable to radial tonometry and has good reproducibility. As a noninvasive, relatively operator-independent method, O(CBP) may be as useful as T(CBP) for estimating central BP in clinical practice.

  14. Expanding Reliability Generalization Methods with KR-21 Estimates: An RG Study of the Coopersmith Self-Esteem Inventory.

    Science.gov (United States)

    Lane, Ginny G.; White, Amy E.; Henson, Robin K.

    2002-01-01

    Conducted a reliability generalization study on the Coopersmith Self-Esteem Inventory (CSEI; S. Coopersmith, 1967) to examine the variability of reliability estimates across studies and to identify study characteristics that may predict this variability. Results show that reliability for CSEI scores can vary considerably, especially at the…
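
    Since the study expands reliability generalization with KR-21 estimates, a minimal sketch of the standard KR-21 formula may be useful: it needs only the number of dichotomous items and the mean and variance of total scores, and assumes items of roughly equal difficulty. The example numbers are hypothetical.

        def kr21(k: int, mean_total: float, var_total: float) -> float:
            """KR-21 reliability from summary statistics of total scores."""
            return (k / (k - 1.0)) * (1.0 - mean_total * (k - mean_total) / (k * var_total))

        # Hypothetical test: 25 items, mean total score 18.2, variance 16.0.
        print(round(kr21(25, 18.2, 16.0), 3))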

  15. Empiric reliability of diagnostic and prognostic estimations of physical standards of children, going in for sports.

    Directory of Open Access Journals (Sweden)

    Zaporozhanov V.A.

    2012-12-01

    Full Text Available In sports-pedagogical practice, the objective estimation of the potential abilities of trainees already at the initial stages of long-term preparation is regarded as a pressing issue. Appropriate quantitative information makes it possible to individualize the preparation of trainees according to the requirements of the managed process. Research purpose: to demonstrate, logically and metrologically, the expedience of a metrical method for calculating the reliability of control measurement results used for the diagnostics of psychophysical fitness and the prognosis of growth of trainees' proficiency in the selected sport. Material and methods: the results of control measurements on four indices of psychophysical preparedness, and expert estimations of fitness, of 24 children of a gymnastics school were analysed. The results of the initial and final examinations of the gymnasts on the same control tests were processed by methods of mathematical statistics. Metrical estimates of the reliability of the measurements - stability, concordance and informativeness of the control information - were calculated for the current diagnostics and prognosis of the sporting abilities of those examined. Results: the expedience of using metrical calculations of a complex estimate of the psychophysical state of trainees for these purposes is metrologically substantiated. Conclusions: the research results confirm the expedience of calculating a complex estimate of the psychophysical characteristics of trainees for the diagnostics of fitness in the selected sport and the prognosis of proficiency at subsequent stages of preparation.

  16. Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis

    Directory of Open Access Journals (Sweden)

    Adela-Eliza Dumitrascu

    2015-01-01

    Full Text Available Due to their prolonged use, wind turbines must be characterized by high reliability. This can be achieved through a rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems will lead to identifying the critical components, increasing the operating time, minimizing the failure rate, and minimizing maintenance costs. To estimate the energy produced by the wind turbine, an evaluation approach based on a Monte Carlo simulation model is developed which enables us to estimate the probability of minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of simulation results has been focused on the interpretation of the relative frequency histograms and the cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output depending on wind speed. The experimental research consists in the estimation of the reliability and unreliability functions and the hazard rate of the helical vertical axis wind turbine designed and patented for the climatic conditions of Romanian regions. Also, the variation of the power produced for different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of experimental results indicates that this type of wind turbine is efficient at low wind speed.
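
    A minimal sketch of the Monte Carlo step described above: sample the uncertain inputs from triangular distributions and read the ogive (cumulative distribution) of daily energy output. The power model and all parameter values are hypothetical placeholders, not the turbine's data.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 100_000
        wind = rng.triangular(2.0, 5.0, 12.0, size=n)   # m/s: min, mode, max
        cp = rng.triangular(0.25, 0.35, 0.45, size=n)   # power coefficient
        rho, area, hours = 1.225, 3.0, 24.0             # air density, swept area, h/day

        power_kw = 0.5 * rho * area * cp * wind**3 / 1000.0
        energy_kwh = power_kw * hours                   # daily energy output

        # Ogive: probability of not exceeding a given daily output.
        for q in (0.05, 0.5, 0.95):
            print(f"P(E <= {np.quantile(energy_kwh, q):7.1f} kWh) = {q:.2f}")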

  17. Reliability Estimation of Parameters of Helical Wind Turbine with Vertical Axis.

    Science.gov (United States)

    Dumitrascu, Adela-Eliza; Lepadatescu, Badea; Dumitrascu, Dorin-Ion; Nedelcu, Anisor; Ciobanu, Doina Valentina

    2015-01-01

    Due to the prolonged use of wind turbines they must be characterized by high reliability. This can be achieved through a rigorous design, appropriate simulation and testing, and proper construction. The reliability prediction and analysis of these systems will lead to identifying the critical components, increasing the operating time, minimizing failure rate, and minimizing maintenance costs. To estimate the produced energy by the wind turbine, an evaluation approach based on the Monte Carlo simulation model is developed which enables us to estimate the probability of minimum and maximum parameters. In our simulation process we used triangular distributions. The analysis of simulation results has been focused on the interpretation of the relative frequency histograms and cumulative distribution curve (ogive diagram), which indicates the probability of obtaining the daily or annual energy output depending on wind speed. The experimental research consists in the estimation of the reliability and unreliability functions and the hazard rate of the helical vertical axis wind turbine designed and patented for the climatic conditions of Romanian regions. Also, the variation of power produced for different wind speeds, the Weibull distribution of wind probability, and the power generated were determined. The analysis of experimental results indicates that this type of wind turbine is efficient at low wind speed.

  18. Comparing adaptive procedures for estimating the psychometric function for an auditory gap detection task.

    Science.gov (United States)

    Shen, Yi

    2013-05-01

    A subject's sensitivity to a stimulus variation can be studied by estimating the psychometric function. Generally speaking, three parameters of the psychometric function are of interest: the performance threshold, the slope of the function, and the rate at which attention lapses occur. In the present study, three psychophysical procedures were used to estimate the three-parameter psychometric function for an auditory gap detection task. These were an up-down staircase (up-down) procedure, an entropy-based Bayesian (entropy) procedure, and an updated maximum-likelihood (UML) procedure. Data collected from four young, normal-hearing listeners showed that while all three procedures provided similar estimates of the threshold parameter, the up-down procedure performed slightly better in estimating the slope and lapse rate for 200 trials of data collection. When the lapse rate was increased by mixing in random responses for the three adaptive procedures, the larger lapse rate was especially detrimental to the efficiency of the up-down procedure, and the UML procedure provided better estimates of the threshold and slope than did the other two procedures.
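
    For intuition, a minimal 1-up/1-down staircase (a simpler relative of the up-down procedure compared in the study) is sketched below against a hypothetical logistic listener; it converges near the 50% point of the psychometric function.

        import math
        import random

        def listener(gap_ms, threshold=5.0, slope=1.5):
            # Hypothetical logistic psychometric function (no lapses).
            p = 1.0 / (1.0 + math.exp(-slope * (gap_ms - threshold)))
            return random.random() < p

        gap, step, last_dir, reversals = 12.0, 1.0, 0, []
        while len(reversals) < 8:
            direction = -1 if listener(gap) else +1   # shrink gap after "detected"
            if last_dir and direction != last_dir:    # direction change = reversal
                reversals.append(gap)
            gap = max(0.5, gap + direction * step)
            last_dir = direction

        print("threshold estimate ~", round(sum(reversals[-6:]) / 6.0, 2), "ms")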

  19. The Reliability Estimation for the Open Function of Cabin Door Affected by the Imprecise Judgment Corresponding to Distribution Hypothesis

    Science.gov (United States)

    Yu, Z. P.; Yue, Z. F.; Liu, W.

    2018-05-01

    With the development of artificial intelligence, more and more reliability experts have noticed the role of subjective information in the reliability design of complex systems. Therefore, based on a certain number of experimental data points and expert judgments, we divide reliability estimation based on a distribution hypothesis into a cognition process and a reliability calculation. As an illustration of this modification, we take information fusion based on intuitionistic fuzzy belief functions as the diagnosis model of the cognition process, and complete the reliability estimation for the open function of the cabin door affected by the imprecise judgment corresponding to the distribution hypothesis.

  20. Procedures for using expert judgment to estimate human-error probabilities in nuclear power plant operations

    International Nuclear Information System (INIS)

    Seaver, D.A.; Stillwell, W.G.

    1983-03-01

    This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use
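
    As a toy illustration of the direct numerical estimation family evaluated in the report, expert HEP judgments are often pooled with a geometric mean, which behaves sensibly for probabilities spanning orders of magnitude. This is only one plausible aggregation rule, and the estimates below are hypothetical.

        import math

        expert_heps = [3e-3, 1e-2, 5e-3, 2e-3]   # hypothetical expert judgments
        log_mean = sum(math.log(p) for p in expert_heps) / len(expert_heps)
        print(f"aggregated HEP ~ {math.exp(log_mean):.1e}")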

  1. Phoenix – A model-based Human Reliability Analysis methodology: Qualitative Analysis Procedure

    International Nuclear Information System (INIS)

    Ekanem, Nsimah J.; Mosleh, Ali; Shen, Song-Hua

    2016-01-01

    Phoenix method is an attempt to address various issues in the field of Human Reliability Analysis (HRA). Built on a cognitive human response model, Phoenix incorporates strong elements of current HRA good practices, leverages lessons learned from empirical studies, and takes advantage of the best features of existing and emerging HRA methods. Its original framework was introduced in previous publications. This paper reports on the completed methodology, summarizing the steps and techniques of its qualitative analysis phase. The methodology introduces the “Crew Response Tree” which provides a structure for capturing the context associated with Human Failure Events (HFEs), including errors of omission and commission. It also uses a team-centered version of the Information, Decision and Action cognitive model and “macro-cognitive” abstractions of crew behavior, as well as relevant findings from cognitive psychology literature and operating experience, to identify potential causes of failures and influencing factors during procedure-driven and knowledge-supported crew-plant interactions. The result is the set of identified HFEs and likely scenarios leading to each. The methodology itself is generic in the sense that it is compatible with various quantification methods, and can be adapted for use across different environments including nuclear, oil and gas, aerospace, aviation, and healthcare. - Highlights: • Produces a detailed, consistent, traceable, reproducible and properly documented HRA. • Uses “Crew Response Tree” to capture context associated with Human Failure Events. • Models dependencies between Human Failure Events and influencing factors. • Provides a human performance model for relating context to performance. • Provides a framework for relating Crew Failure Modes to its influencing factors.

  2. Estimating The Reliability of the Lawrence Livermore National Laboratory (LLNL) Flash X-ray (FXR) Machine

    International Nuclear Information System (INIS)

    Ong, M M; Kihara, R; Zentler, J M; Kreitzer, B R; DeHope, W J

    2007-01-01

    At Lawrence Livermore National Laboratory (LLNL), our flash X-ray accelerator (FXR) is used on multi-million dollar hydrodynamic experiments. Because of the importance of the radiographs, FXR must be ultra-reliable. Flash linear accelerators that can generate a 3 kA beam at 18 MeV are very complex. They have thousands, if not millions, of critical components that could prevent the machine from performing correctly. For the last five years, we have quantified and are tracking component failures. From this data, we have determined that the reliability of the high-voltage gas-switches that initiate the pulses, which drive the accelerator cells, dominates the statistics. The failure mode is a single-switch pre-fire that reduces the energy of the beam and degrades the X-ray spot-size. The unfortunate result is a lower resolution radiograph. FXR is a production machine that allows only a modest number of pulses for testing. Therefore, reliability switch testing that requires thousands of shots is performed on our test stand. Study of representative switches has produced pre-fire statistical information and probability distribution curves. This information is applied to FXR to develop test procedures and determine individual switch reliability using a minimal number of accelerator pulses
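
    A minimal sketch of how test-stand data of the kind described above could be turned into a switch pre-fire probability with a credible interval, assuming independent shots and a Jeffreys Beta prior; the counts are hypothetical, not FXR data.

        from scipy import stats

        prefires, shots = 3, 2000   # hypothetical pre-fires observed in test shots
        posterior = stats.beta(0.5 + prefires, 0.5 + shots - prefires)  # Jeffreys prior

        print(f"mean pre-fire probability: {posterior.mean():.2e}")
        print(f"95% credible interval: {posterior.ppf(0.025):.2e} .. {posterior.ppf(0.975):.2e}")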

  3. Reliability of the spent fuel identification for flask loading procedure used by COGEMA for fuel transport to La Hague

    International Nuclear Information System (INIS)

    Eid, M.; Zachar, M.; Pretesacque, P.

    1991-01-01

    The Spent Fuel Identification for Flask Loading (SFIFL) procedure designed by COGEMA is analysed and its reliability calculated. The reliability of the procedure is defined as the probability of transporting only approved fuel elements for a given number of shipments. The procedure describes a non-coherent system. A non-coherent system is one in which two successive failures could result in a success, from the system mission point of view. A technique that describes the system with the help of its maximal cuts (states) is used for calculations. A maximal cut that contains more than one failure can split into two cuts (sub-states). Cut splitting will enable us to analyse, in a systematic way, non-coherent systems with independent basic components. (author)

  4. Reliability of the spent fuel identification for flask loading procedure used by COGEMA for fuel transport to La Hague

    International Nuclear Information System (INIS)

    Eid, M.; Zachar, M.; Pretesacque, P.

    1990-01-01

    The Spent Fuel Identification for Flask Loading, SFIFL, procedure designed by COGEMA is analysed and its reliability is calculated. The reliability of the procedure is defined as the probability of transporting only approved fuel elements for a given number of shipments. The procedure describes a non-coherent system. A non-coherent system is one in which two successive failures could result in a success, from the system mission point of view. A technique that describes the system with the help of its maximal cuts (states) is used for calculations. A maximal cut that contains more than one failure can split into two cuts (sub-states). Cut splitting will enable us to analyse, in a systematic way, non-coherent systems with independent basic components. (author)

  5. Uncertainty analysis methods for estimation of reliability of passive system of VHTR

    International Nuclear Information System (INIS)

    Han, S.J.

    2012-01-01

    An estimation of the reliability of a passive system for the probabilistic safety assessment (PSA) of a very high temperature reactor (VHTR) is under development in Korea. The essential approach of this estimation is to measure the uncertainty of the system performance under a specific accident condition. The uncertainty propagation approach based on the simulation of phenomenological models (computer codes) is adopted as a typical method to estimate the uncertainty for this purpose. This presentation introduced the uncertainty propagation and discussed the related issues focusing on the propagation object and its surrogates. To achieve a sufficient level of depth of the uncertainty results, the applicability of the propagation should be carefully reviewed. For an example study, the Latin-hypercube sampling (LHS) method as a direct propagation was tested for a specific accident sequence of a VHTR. The reactor cavity cooling system (RCCS) developed by KAERI was considered for this example study. This is an air-cooled type passive system that has no active components for its operation. The accident sequence is a low pressure conduction cooling (LPCC) accident that is considered as a design basis accident for the safety design of the VHTR. This sequence is due to a large failure of the pressure boundary of the reactor system such as a guillotine break of coolant pipe lines. The presentation discussed the insights obtained (benefits and weaknesses) in applying this estimation of the reliability of a passive system
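
    A minimal sketch of direct propagation with Latin-hypercube sampling: stratified uniforms are mapped through the inverse CDFs of the input distributions and pushed through a limit-state function. The two-variable model and its distributions are hypothetical placeholders, not the RCCS analysis.

        import numpy as np
        from scipy.stats import norm, qmc

        sampler = qmc.LatinHypercube(d=2, seed=1)
        u = sampler.random(n=10_000)                          # stratified uniforms in [0,1]^2
        load = norm(loc=100.0, scale=10.0).ppf(u[:, 0])       # hypothetical thermal load
        capacity = norm(loc=130.0, scale=15.0).ppf(u[:, 1])   # hypothetical cooling capacity

        g = capacity - load                                   # limit state: failure when g < 0
        print("estimated failure probability:", np.mean(g < 0.0))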

  6. ESTIMATION OF PARAMETERS AND RELIABILITY FUNCTION OF EXPONENTIATED EXPONENTIAL DISTRIBUTION: BAYESIAN APPROACH UNDER GENERAL ENTROPY LOSS FUNCTION

    Directory of Open Access Journals (Sweden)

    Sanjay Kumar Singh

    2011-06-01

    Full Text Available In this paper we propose Bayes estimators of the parameters of the Exponentiated Exponential distribution and its reliability function under the General Entropy loss function for Type II censored samples. The proposed estimators have been compared with the corresponding Bayes estimators obtained under the Squared Error loss function, and with maximum likelihood estimators, in terms of their simulated risks (average loss over the sample space).
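
    The abstract does not print the loss function, but the usual form of the general entropy (GE) loss with shape parameter p, and the resulting Bayes estimator, are worth recalling (for p = -1 it reduces to the posterior mean, i.e., the Bayes estimator under squared error loss):

        L(\hat{\theta}, \theta) \propto \left(\frac{\hat{\theta}}{\theta}\right)^{p} - p \ln\left(\frac{\hat{\theta}}{\theta}\right) - 1,
        \qquad
        \hat{\theta}_{\mathrm{GE}} = \left[ E_{\pi}\left(\theta^{-p} \mid \text{data}\right) \right]^{-1/p}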

  7. Reliability estimation of safety-critical software-based systems using Bayesian networks

    International Nuclear Information System (INIS)

    Helminen, A.

    2001-06-01

    Due to the nature of software faults and the way they cause system failures, new methods are needed for the safety and reliability evaluation of software-based safety-critical automation systems in nuclear power plants. In the research project 'Programmable automation system safety integrity assessment (PASSI)', belonging to the Finnish Nuclear Safety Research Programme (FINNUS, 1999-2002), various safety assessment methods and tools for software-based systems are developed and evaluated. The project is financed jointly by the Radiation and Nuclear Safety Authority (STUK), the Ministry of Trade and Industry (KTM) and the Technical Research Centre of Finland (VTT). In this report the applicability of Bayesian networks to the reliability estimation of software-based systems is studied. The applicability is evaluated by building Bayesian network models for the systems of interest and performing simulations for these models. In the simulations hypothetical evidence is used for defining the parameter relations and for determining the ability to compensate disparate evidence in the models. Based on the experiences from modelling and simulations we are able to conclude that Bayesian networks provide a good method for the reliability estimation of software-based systems. (orig.)

  8. Estimating reliability of degraded system based on the probability density evolution with multi-parameter

    Directory of Open Access Journals (Sweden)

    Jiang Ge

    2017-01-01

    Full Text Available System degradation is usually caused by multiple-parameter degradation. The assessment of system reliability by the universal generating function has low accuracy when compared with Monte Carlo simulation, and the probability density function of the system output performance cannot be obtained. Therefore, a reliability assessment method based on probability density evolution with multiple parameters is presented for complex degraded systems. Firstly, the system output function is formulated according to the transitive relation between component parameters and the system output performance. Then, the probability density evolution equation based on the probability conservation principle and the system output function is established. Furthermore, the probability distribution characteristics of the system output performance are obtained by solving the differential equation. Finally, the reliability of the degraded system is estimated. This method does not need to discretize the performance parameters and can establish a continuous probability density function of the system output performance with high calculation efficiency and low cost. A numerical example shows that this method is applicable to evaluating the reliability of multi-parameter degraded systems.
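
    A hedged sketch of the equation family invoked above: the generalized probability density evolution equation in its common one-dimensional form, derived from the probability conservation principle for an output Z(Θ, t) with random parameter vector Θ (the paper's exact multi-parameter formulation may differ):

        \frac{\partial p_{Z\Theta}(z, \theta, t)}{\partial t} + \dot{Z}(\theta, t)\, \frac{\partial p_{Z\Theta}(z, \theta, t)}{\partial z} = 0,
        \qquad
        p_{Z}(z, t) = \int_{\Omega_{\Theta}} p_{Z\Theta}(z, \theta, t)\, \mathrm{d}\theta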

  9. Estimation of the reliability function for two-parameter exponentiated Rayleigh or Burr type X distribution

    Directory of Open Access Journals (Sweden)

    Anupam Pathak

    2014-11-01

    Full Text Available Abstract: Problem Statement: The two-parameter exponentiated Rayleigh distribution has been widely used, especially in the modelling of lifetime event data. It provides a statistical model which has a wide variety of applications in many areas, and its main advantage is its ability in the context of lifetime events among other distributions. The uniformly minimum variance unbiased and maximum likelihood estimation methods are the ways to estimate the parameters of the distribution. In this study we explore and compare the performance of the uniformly minimum variance unbiased and maximum likelihood estimators of the reliability function R(t) = P(X > t) and P = P(X > Y) for the two-parameter exponentiated Rayleigh distribution. Approach: A new technique of obtaining these parametric functions is introduced, in which a major role is played by the powers of the parameter(s), and the functional forms of the parametric functions to be estimated are not needed. We explore the performance of these estimators numerically under varying conditions. Through the simulation study a comparison is made on the performance of these estimators with respect to the bias, mean square error (MSE), 95% confidence length and corresponding coverage percentage. Conclusion: Based on the results of the simulation study, the UMVUEs of R(t) and P for the two-parameter exponentiated Rayleigh distribution were found to be superior to the MLEs of R(t) and P.
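
    For reference, one common parameterization of the two-parameter exponentiated Rayleigh (Burr type X) law assumed here, with shape α and scale-type parameter λ, gives the reliability function in closed form:

        F(t) = \left(1 - e^{-\lambda t^{2}}\right)^{\alpha}, \qquad
        R(t) = 1 - \left(1 - e^{-\lambda t^{2}}\right)^{\alpha}, \qquad t > 0,\; \alpha, \lambda > 0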

  10. Reliability assessment using Bayesian networks. Case study on quantitative reliability estimation of a software-based motor protection relay

    International Nuclear Information System (INIS)

    Helminen, A.; Pulkkinen, U.

    2003-06-01

    In this report a quantitative reliability assessment of the motor protection relay SPAM 150 C has been carried out. The assessment focuses on the methodological analysis of quantitative reliability assessment, using the software-based motor protection relay as a case study. The assessment method is based on Bayesian networks and tries to take full advantage of the previous work done in a project called Programmable Automation System Safety Integrity assessment (PASSI). From the results and experiences achieved during the work it is justified to claim that the assessment method presented in the work enables a flexible use of qualitative and quantitative elements of reliability-related evidence in a single reliability assessment. At the same time, the assessment method is a coherent way of reasoning about one's beliefs and evidence concerning the reliability of the system. Full advantage of the assessment method is taken when using it as a way to cultivate the information related to the reliability of software-based systems. The method can also be used as a communicational instrument in a licensing process of software-based systems. (orig.)

  11. Updated Value of Service Reliability Estimates for Electric Utility Customers in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, Michael [Nexant Inc., Burlington, MA (United States); Schellenberg, Josh [Nexant Inc., Burlington, MA (United States); Blundell, Marshall [Nexant Inc., Burlington, MA (United States)

    2015-01-01

    This report updates the 2009 meta-analysis that provides estimates of the value of service reliability for electricity customers in the United States (U.S.). The meta-dataset now includes 34 different datasets from surveys fielded by 10 different utility companies between 1989 and 2012. Because these studies used nearly identical interruption cost estimation or willingness-to-pay/accept methods, it was possible to integrate their results into a single meta-dataset describing the value of electric service reliability observed in all of them. Once the datasets from the various studies were combined, a two-part regression model was used to estimate customer damage functions that can be generally applied to calculate customer interruption costs per event by season, time of day, day of week, and geographical regions within the U.S. for industrial, commercial, and residential customers. This report focuses on the backwards stepwise selection process that was used to develop the final revised model for all customer classes. Across customer classes, the revised customer interruption cost model has improved significantly because it incorporates more data and does not include the many extraneous variables that were in the original specification from the 2009 meta-analysis. The backwards stepwise selection process led to a more parsimonious model that only included key variables, while still achieving comparable out-of-sample predictive performance. In turn, users of interruption cost estimation tools such as the Interruption Cost Estimate (ICE) Calculator will have less customer characteristics information to provide and the associated inputs page will be far less cumbersome. The upcoming new version of the ICE Calculator is anticipated to be released in 2015.

  12. Estimating the operator's performance time of emergency procedural tasks based on a task complexity measure

    International Nuclear Information System (INIS)

    Jung, Won Dae; Park, Jink Yun

    2012-01-01

    It is important to understand the amount of time required to execute an emergency procedural task in a high-stress situation for managing human performance under emergencies in a nuclear power plant. However, the time to execute an emergency procedural task is highly dependent upon expert judgment due to the lack of actual data. This paper proposes an analytical method to estimate the operator's performance time (OPT) of a procedural task, which is based on a measure of the task complexity (TACOM). The proposed method for estimating an OPT is an equation that uses the TACOM as a variable, and the OPT of a procedural task can be calculated if its relevant TACOM score is available. The validity of the proposed equation is demonstrated by comparing the estimated OPTs with the observed OPTs for emergency procedural tasks in a steam generator tube rupture scenario.

  13. Lifetime Reliability Estimate and Extreme Permanent Deformations of Randomly Excited Elasto-Plastic Structures

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1983-01-01

    A method is presented for life-time reliability estimates of randomly excited yielding systems, assuming the structure to be safe when the plastic deformations are confined below certain limits. The accumulated plastic deformations during any single significant loading history are considered, and the plastic deformation during several loadings can be modelled as a filtered Poisson process. Using the Markov property of this quantity, the considered first-passage problem as well as the related extreme distribution problems are then solved numerically, and the results are compared to simulation studies.

  14. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability.

    Science.gov (United States)

    Vermeulen, Margit I; Tromp, Fred; Zuithoff, Nicolaas P A; Pieters, Ron H M; Damoiseaux, Roger A M J; Kuyvenhoven, Marijke M

    2014-12-01

    Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the competencies that are required to complete GP training. The objective was to explore reliability and validity aspects of the instruments developed. The new selection procedure, comprising the National GP Knowledge Test (LHK), a situational judgement test (SJT), a patterned behaviour descriptive interview (PBDI) and a simulated encounter (SIM), was piloted alongside the current procedure. Forty-seven candidates volunteered in both procedures. The admission decision was based on the results of the current procedure. Study participants hardly differed from the other candidates. The mean scores of the candidates on the LHK and SJT were 21.9% (SD 8.7) and 83.8% (SD 3.1), respectively. The mean self-reported competency scores (PBDI) were higher than the observed competencies (SIM): 3.7 (SD 0.5) and 2.9 (SD 0.6), respectively. Content-related competencies showed low correlations with one another when measured with different instruments, whereas more diverse competencies measured by a single instrument showed strong to moderate correlations. Moreover, a moderate correlation between LHK and SJT was found. The internal consistencies (intraclass correlation, ICC) of LHK and SJT were poor, while the ICC of PBDI and SIM showed acceptable levels of reliability. Findings on content validity and reliability of these new instruments are promising for realizing a competency-based procedure. Further development of the instruments and research on predictive validity should be pursued.

  15. Portfolio assessment during medical internships: How to obtain a reliable and feasible assessment procedure?

    Science.gov (United States)

    Michels, Nele R M; Driessen, Erik W; Muijtjens, Arno M M; Van Gaal, Luc F; Bossaert, Leo L; De Winter, Benedicte Y

    2009-12-01

    A portfolio is used to mentor and assess students' clinical performance at the workplace. However, students and raters often perceive the portfolio as a time-consuming instrument. In this study, we investigated whether assessment during medical internship by a portfolio can combine reliability and feasibility. The domain-oriented reliability of 61 double-rated portfolios was measured, using a generalisability analysis with portfolio tasks and raters as sources of variation in measuring the performance of a student. We obtained a reliability (Phi coefficient) of 0.87 with this internship portfolio containing 15 double-rated tasks. The generalisability analysis showed that an acceptable level of reliability (Phi = 0.80) was maintained when the number of portfolio tasks was decreased to 13 or 9 using one and two raters, respectively. Our study shows that a portfolio can be a reliable method for the assessment of workplace learning. The possibility of reducing the number of tasks or raters while maintaining a sufficient level of reliability suggests an increase in the feasibility of portfolio use for both students and raters.

  16. Girsanov's transformation based variance reduced Monte Carlo simulation schemes for reliability estimation in nonlinear stochastic dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in

    2017-07-15

    The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
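
    The Girsanov controls above tilt the excitation toward the failure domain and correct the estimate with a likelihood ratio. A static analogue of that importance-sampling idea, with a FORM-like shift to the design point, is sketched below; the rare event and shift are hypothetical, not the paper's dynamical implementation.

        import numpy as np

        rng = np.random.default_rng(0)
        n, level = 100_000, 4.0      # rare event: standard normal exceeding 4
        shift = level                # FORM-like choice: sample around the design point

        x = rng.normal(loc=shift, size=n)          # draws from the shifted density
        w = np.exp(-shift * x + 0.5 * shift**2)    # likelihood ratio N(0,1)/N(shift,1)
        estimate = np.mean((x > level) * w)
        print(f"IS estimate: {estimate:.3e} (exact ~ 3.167e-05)")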

  17. Two-step estimation procedures for inhomogeneous shot-noise Cox processes

    DEFF Research Database (Denmark)

    Prokesová, Michaela; Dvorák, Jirí; Jensen, Eva B. Vedel

    In the present paper we develop several two-step estimation procedures for inhomogeneous shot-noise Cox processes. The intensity function is parametrized by the inhomogeneity parameters while the pair-correlation function is parametrized by the interaction parameters. The suggested procedures...

  18. Preliminary investigation on reliability of genomic estimated breeding values in the Danish and Swedish Holstein Population

    DEFF Research Database (Denmark)

    Su, G; Guldbrandtsen, B; Gregersen, V R

    2010-01-01

    This study investigated the reliability of genomic estimated breeding values (GEBV) in the Danish Holstein population. The data in the analysis included 3,330 bulls with both published conventional EBV and single nucleotide polymorphism (SNP) markers. After data editing, 38,134 SNP markers were available. In the analysis, all SNP were fitted simultaneously as random effects in a Bayesian variable selection model, which allows heterogeneous variances for different SNP markers. The response variables were the official EBV, and direct GEBV were calculated as the sum of individual SNP effects. The priors compared included mixture distributions allowing SNP to have large or no effects, and a single prior distribution common for all SNP. It was found that, in general, the model with a common prior distribution of scaling factors had better predictive ability than any mixture prior model. Therefore, a common prior model was used to estimate SNP effects and breeding values.

  19. Between-day reliability of a method for non-invasive estimation of muscle composition.

    Science.gov (United States)

    Simunič, Boštjan

    2012-08-01

    Tensiomyography is a method for valid and non-invasive estimation of skeletal muscle fibre type composition. The validity of selected temporal tensiomyographic measures has been well established recently; there is, however, no evidence regarding the method's between-day reliability. Therefore it is the aim of this paper to establish the between-day repeatability of tensiomyographic measures in three skeletal muscles. For three consecutive days, 10 healthy male volunteers (mean ± SD: age 24.6 ± 3.0 years; height 177.9 ± 3.9 cm; weight 72.4 ± 5.2 kg) were examined in a supine position. Four temporal measures (delay, contraction, sustain, and half-relaxation time) and maximal amplitude were extracted from the displacement-time tensiomyogram. A reliability analysis was performed with calculations of bias, random error, coefficient of variation (CV), standard error of measurement, and intra-class correlation coefficient (ICC) with a 95% confidence interval. The analysis of ICC demonstrated excellent agreement (ICCs were over 0.94 in 14 out of 15 tested parameters). However, a poorer CV was observed for half-relaxation time, presumably because of the specifics of the parameter definition itself. These data indicate that for the three muscles tested, tensiomyographic measurements were reproducible across consecutive test days. Furthermore, we identified the most likely origin of the lower reliability detected in half-relaxation time.

  20. Reliability estimate of unconfined compressive strength of black cotton soil stabilized with cement and quarry dust

    Directory of Open Access Journals (Sweden)

    Dayo Oluwatoyin AKANBI

    2017-06-01

    Full Text Available Reliability estimates of unconfined compressive strength values from laboratory results, for specimens compacted at British Standard Light (BSL) compaction energy, of quarry dust treated black cotton soil stabilized with cement as a road sub-base material, were developed by incorporating data obtained from laboratory Unconfined Compressive Strength (UCS) tests into a predictive model. The data obtained were incorporated into a FORTRAN-based first-order reliability program to obtain reliability index values. Variable factors such as water content relative to optimum (WRO), hydraulic modulus (HM), quarry dust (QD), cement (C), tri-calcium silicate (C3S), di-calcium silicate (C2S), tri-calcium aluminate (C3A), and maximum dry density (MDD) produced an acceptable safety index value of 1.0, achieved at coefficient of variation (COV) ranges of 10-100%. Observed trends indicate that WRO, C3S, C2S and MDD are greatly influenced by the COV and therefore must be strictly controlled in QD/C treated black cotton soil for use as a sub-base material in road pavements. Stochastically, British Standard Light (BSL) compaction can be used to model the 7-day unconfined compressive strength of compacted quarry dust/cement treated black cotton soil as a sub-base material for road pavement at all coefficients of variation (COV) in the range 10-100%, because the safety indices obtained are higher than the acceptable value of 1.0.

  1. Hybrid time-variant reliability estimation for active control structures under aleatory and epistemic uncertainties

    Science.gov (United States)

    Wang, Lei; Xiong, Chuang; Wang, Xiaojun; Li, Yunlong; Xu, Menghui

    2018-04-01

    Considering that multi-source uncertainties from the inherent nature of systems as well as the external environment are unavoidable and severely affect controller performance, dynamic safety assessment with high confidence is of great significance for scientists and engineers. In view of this, the uncertainty quantification analysis and time-variant reliability estimation corresponding to closed-loop control problems are conducted in this study under a mixture of random, interval, and convex uncertainties. By combining the state-space transformation and the natural set expansion, the boundary laws of controlled response histories are first confirmed with a specific implementation of the random items. For nonlinear cases, the collocation set methodology and the fourth-order Runge-Kutta algorithm are introduced as well. Enlightened by the first-passage model in random process theory as well as by static probabilistic reliability ideas, a new definition of the hybrid time-variant reliability measurement is provided for vibration control systems, and the related solution details are further expounded. Two engineering examples are finally presented to demonstrate the validity and applicability of the developed methodology.

  2. Regional inversion of CO2 ecosystem fluxes from atmospheric measurements. Reliability of the uncertainty estimates

    Energy Technology Data Exchange (ETDEWEB)

    Broquet, G.; Chevallier, F.; Breon, F.M.; Yver, C.; Ciais, P.; Ramonet, M.; Schmidt, M. [Laboratoire des Sciences du Climat et de l' Environnement, CEA-CNRS-UVSQ, UMR8212, IPSL, Gif-sur-Yvette (France); Alemanno, M. [Servizio Meteorologico dell' Aeronautica Militare Italiana, Centro Aeronautica Militare di Montagna, Monte Cimone/Sestola (Italy); Apadula, F. [Research on Energy Systems, RSE, Environment and Sustainable Development Department, Milano (Italy); Hammer, S. [Universitaet Heidelberg, Institut fuer Umweltphysik, Heidelberg (Germany); Haszpra, L. [Hungarian Meteorological Service, Budapest (Hungary); Meinhardt, F. [Federal Environmental Agency, Kirchzarten (Germany); Necki, J. [AGH University of Science and Technology, Krakow (Poland); Piacentino, S. [ENEA, Laboratory for Earth Observations and Analyses, Palermo (Italy); Thompson, R.L. [Max Planck Institute for Biogeochemistry, Jena (Germany); Vermeulen, A.T. [Energy research Centre of the Netherlands ECN, EEE-EA, Petten (Netherlands)

    2013-07-01

    The Bayesian framework of CO2 flux inversions permits estimates of the retrieved flux uncertainties. Here, the reliability of these theoretical estimates is studied through a comparison against the misfits between the inverted fluxes and independent measurements of the CO2 Net Ecosystem Exchange (NEE) made by the eddy covariance technique at local (few hectares) scale. Regional inversions at 0.5° resolution are applied for the western European domain where approximately 50 eddy covariance sites are operated. These inversions are conducted for the period 2002-2007. They use a mesoscale atmospheric transport model, a prior estimate of the NEE from a terrestrial ecosystem model and rely on the variational assimilation of in situ continuous measurements of CO2 atmospheric mole fractions. Averaged over monthly periods and over the whole domain, the misfits are in good agreement with the theoretical uncertainties for prior and inverted NEE, and pass the chi-square test for the variance at the 30% and 5% significance levels respectively, despite the scale mismatch and the independence between the prior (respectively inverted) NEE and the flux measurements. The theoretical uncertainty reduction for the monthly NEE at the measurement sites is 53% while the inversion decreases the standard deviation of the misfits by 38%. These results build confidence in the NEE estimates at the European/monthly scales and in their theoretical uncertainty from the regional inverse modelling system. However, the uncertainties at the monthly (respectively annual) scale remain larger than the amplitude of the inter-annual variability of monthly (respectively annual) fluxes, so that this study does not engender confidence in the inter-annual variations. The uncertainties at the monthly scale are significantly smaller than the seasonal variations. The seasonal cycle of the inverted fluxes is thus reliable. In particular, the CO2 sink period over the European continent likely ends later than

  3. Use of performance curves in estimating number of procedures required to achieve proficiency in coronary angiography

    DEFF Research Database (Denmark)

    Räder, Sune B E W; Jørgensen, Erik; Bech, Bo

    2011-01-01

    Background: Current guidelines in cardiology training programs recommend 100-300 coronary angiography procedures for certification. We aimed to assess the number of procedures needed to reach sufficient proficiency. Methods: Procedure time, fluoroscopy time, dose area product (DAP), and contrast media volume were recorded, and performance curves were constructed for trainees and experts. Results: The number of procedures needed for trainees to reach recommended reference levels was estimated as 226 and 353, for DAP and use of contrast media, respectively. After 300 procedures, trainees' procedure time, fluoroscopy time, DAP, and contrast media volume were significantly higher compared with experts' performance, P < .001 for all parameters. To approach the experts' level of DAP and contrast media use, trainees need 394 and 588 procedures, respectively. Performance curves showed large individual differences in the development of competence. Conclusion: On average, trainees needed 300 procedures to reach a sufficient level of proficiency.

  4. Estimating the Optimal Capacity for Reservoir Dam based on Reliability Level for Meeting Demands

    Directory of Open Access Journals (Sweden)

    Mehrdad Taghian

    2017-02-01

    Full Text Available Introduction: One of the practical and classic problems in water resource studies is the estimation of the optimal reservoir capacity to satisfy demands. However, fully supplying demands over the whole period requires a very high dam to cover severe drought conditions. That means a major part of the reservoir capacity and costs is only usable for a short period of the reservoir lifetime, which would be unjustified in an economic analysis. Thus, in the proposed method and model, full meeting of demand is only possible for a percentage of the time of the statistical period, according to a reliability constraint. In the general methods, although this concept apparently seems simple, it is necessary to add binary variables for meeting or not meeting demands in the linear programming model structures. Thus, with many binary variables, solving the problem will be time consuming and difficult. Another way to solve the problem is the application of the yield model. This model includes some simpler assumptions, but it is difficult to consider details of the water resource system. The application of evolutionary algorithms to problems that have many constraints is also very complicated. Therefore, this study pursues another solution. Materials and Methods: In this study, to develop and improve the usual methods, instead of mixed integer linear programming (MILP) and the above methods, a simulation model including network flow linear programming is used, coupled with interface code in Matlab that computes the reliability based on the output file of the simulation model. The Acres Reservoir Simulation Program (ARSP) has been utilized as the simulation model. A major advantage of the ARSP is its inherent flexibility in defining the operating policies through a penalty structure specified by the user. The ARSP utilizes network flow optimization techniques to handle a subset of general linear programming (LP) problems for individual time intervals

  5. A procedure for estimation of pipe break probabilities due to IGSCC

    International Nuclear Information System (INIS)

    Bergman, M.; Brickstad, B.; Nilsson, F.

    1998-06-01

    A procedure has been developed for estimation of the failure probability of weld joints in nuclear piping susceptible to intergranular stress corrosion cracking. The procedure aims at a robust and rapid estimate of the failure probability for a specific weld with a known stress state. Randomness is taken into account in the crack initiation rate, the initial crack length, the in-service inspection efficiency and the leak rate. A computer realization of the procedure has been developed for user-friendly applications by design engineers. Some examples are considered to investigate the sensitivity of the failure probability to different input quantities. (au)

  6. Adjoint sensitivity analysis procedure of Markov chains with applications on reliability of IFMIF accelerator-system facilities

    Energy Technology Data Exchange (ETDEWEB)

    Balan, I.

    2005-05-01

    This work presents the implementation of the Adjoint Sensitivity Analysis Procedure (ASAP) for Continuous Time, Discrete Space Markov chains (CTMC), as an alternative to other computationally expensive methods. In order to develop this procedure as an end product in reliability studies, the reliability of the physical systems is analyzed using a coupled fault-tree - Markov chain technique, i.e., the abstraction of the physical system is performed using the fault tree as the high-level interface, which is then automatically converted into a Markov chain. The resulting differential equations based on the Markov chain model are solved in order to evaluate the system reliability. Further sensitivity analyses using ASAP applied to the CTMC equations are performed to study the influence of uncertainties in input data on the reliability measures and to gain confidence in the final reliability results. The methods to generate the Markov chain and the ASAP for the Markov chain equations have been implemented into the new computer code system QUEFT/MARKOMAGS/MCADJSEN for reliability and sensitivity analysis of physical systems. The validation of this code system has been carried out by using simple problems for which analytical solutions can be obtained. Typical sensitivity results show that the numerical solution using ASAP is robust, stable and accurate. The method and the code system developed during this work can be used further as an efficient and flexible tool to evaluate the sensitivities of reliability measures for any physical system analyzed using the Markov chain. Reliability and sensitivity analyses using these methods have been performed during this work for the IFMIF Accelerator System Facilities. The reliability studies using the Markov chain have been concentrated around the availability of the main subsystems of this complex physical system for a typical mission time. The sensitivity studies for two typical responses using ASAP have been
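
    The forward problem that ASAP differentiates is the CTMC state equation dp/dt = p Q (row-generator convention). A minimal sketch of its transient solution via the matrix exponential, for a hypothetical two-state up/down system with repair (not IFMIF data), follows.

        import numpy as np
        from scipy.linalg import expm

        lam, mu = 1e-3, 1e-1                  # hypothetical failure and repair rates (1/h)
        Q = np.array([[-lam,  lam],
                      [  mu,  -mu]])          # generator matrix; rows sum to zero

        p0 = np.array([1.0, 0.0])             # start in the "up" state
        t = 1000.0                            # mission time in hours
        pt = p0 @ expm(Q * t)                 # p(t) = p(0) exp(Q t)
        print("availability at t:", pt[0])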

  7. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

    In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits the widespread use of digital systems in various nuclear applications such as plant protection systems. Even though there exist a few models which are used to estimate the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing models. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, and the boundary of the two phases is the reliabilities of the subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the Dynamic Safety System (DSS) shows that the estimated reliability of the system is quite reasonable and realistic

  8. An integrated model for reliability estimation of digital nuclear protection system based on fault tree and software control flow methodologies

    International Nuclear Information System (INIS)

    Kim, Man Cheol; Seong, Poong Hyun

    2000-01-01

    In the nuclear industry, the difficulty of proving the reliabilities of digital systems prohibits the widespread use of digital systems in various nuclear applications such as plant protection systems. Even though there exist a few models which are used to estimate the reliabilities of digital systems, we develop a new integrated model which is more realistic than the existing models. We divide the process of estimating the reliability of a digital system into two phases, a high-level phase and a low-level phase, and the boundary of the two phases is the reliabilities of the subsystems. We apply the software control flow method to the low-level phase and fault tree analysis to the high-level phase. The application of the model to the Dynamic Safety System (DSS) shows that the estimated reliability of the system is quite reasonable and realistic. (author)

  9. Outlier identification procedures for contingency tables using maximum likelihood and $L_1$ estimates

    NARCIS (Netherlands)

    Kuhnt, S.

    2004-01-01

    Observed cell counts in contingency tables are perceived as outliers if they have low probability under an anticipated loglinear Poisson model. New procedures for the identification of such outliers are derived using the classical maximum likelihood estimator and an estimator based on the L1 norm.

  10. Do strict rules and moving images increase the reliability of sequential identification procedures?.

    OpenAIRE

    Valentine, Tim; Darling, Stephen; Memon, Amina

    2007-01-01

    Live identification procedures in England and Wales have been replaced by use of video, which provides a sequential presentation of facial images. Sequential presentation of photographs provides some protection to innocent suspects from mistaken identification when used with strict instructions designed to prevent relative judgements (Lindsay, Lea & Fulford, 1991). However, the current procedure in England and Wales is incompatible with these strict instructions. The reported research investi...

  11. Estimating Ordinal Reliability for Likert-Type and Ordinal Item Response Data: A Conceptual, Empirical, and Practical Guide

    Science.gov (United States)

    Gadermann, Anne M.; Guhn, Martin; Zumbo, Bruno D.

    2012-01-01

    This paper provides a conceptual, empirical, and practical guide for estimating ordinal reliability coefficients for ordinal item response data (also referred to as Likert, Likert-type, ordered categorical, or rating scale item responses). Conventionally, reliability coefficients, such as Cronbach's alpha, are calculated using a Pearson…

  12. Dynamic control of the lumbopelvic complex; lack of reliability of established test procedures

    DEFF Research Database (Denmark)

    Henriksen, Marius; Lund, Hans; Bliddal, Henning

    2007-01-01

    used in order to account for learning effects. Intraclass correlation coefficients were low for the sitting (0.54) and supported standing positions (0.36). In the standing position, a significant difference between test and retest was observed (P = 0.003) and further reliability analysis was therefore...

  13. A Procedure to Obtain Reliable Pair Distribution Functions of Non-Crystalline Materials from Diffraction Data

    DEFF Research Database (Denmark)

    Hansen, Flemming Yssing; Carneiro, K.

    1977-01-01

    A simple numerical method, which unifies the calculation of structure factors from X-ray or neutron diffraction data with the calculation of reliable pair distribution functions, is described. The objective of the method is to eliminate systematic errors in the normalizations and corrections of t...

  14. An assessment of the BEST procedure to estimate the soil water retention curve

    Science.gov (United States)

    Castellini, Mirko; Di Prima, Simone; Iovino, Massimo

    2017-04-01

    The Beerkan Estimation of Soil Transfer parameters (BEST) procedure represents a very attractive method to accurately and quickly obtain a complete hydraulic characterization of the soil (Lassabatère et al., 2006). However, further investigations are needed to check the prediction reliability of the soil water retention curve (Castellini et al., 2016). Four soils with different physical properties (texture, bulk density, porosity and stoniness) were considered in this investigation. Sites of measurement were located at Palermo University (PAL site) and Villabate (VIL site) in Sicily, Arborea (ARB site) in Sardinia and in Foggia (FOG site), Apulia. For a given site, the BEST procedure was applied and the water retention curve was estimated using the available BEST algorithms (i.e., slope, intercept and steady), and the reference values of the infiltration constants (β = 0.6 and γ = 0.75) were considered. The water retention curves estimated by BEST were then compared with those obtained in the laboratory by the evaporation method (Wind, 1968). About ten experiments were carried out with both methods. A sensitivity analysis of the constants β and γ within their feasible ranges of variability was also carried out. The analysis showed that S tended to increase for increasing β values and decreasing γ values for all the BEST algorithms and soils. On the other hand, Ks tended to decrease for increasing β and γ values. Our results also reveal that: i) BEST-intercept and BEST-steady algorithms yield lower S and higher Ks values than BEST-slope; ii) these algorithms also yield more variable values. For the latter, a higher sensitivity of these two alternative algorithms to β than to γ was established. The decreasing sensitivity to γ may lead to a possible lack in the correction of the simplified theoretical description of the parabolic two-dimensional and one-dimensional wetting front along the soil profile (Smettem et al., 1994). This likely resulted in lower S and higher Ks values.

  15. Reliable Dual Tensor Model Estimation in Single and Crossing Fibers Based on Jeffreys Prior

    Science.gov (United States)

    Yang, Jianfei; Poot, Dirk H. J.; Caan, Matthan W. A.; Su, Tanja; Majoie, Charles B. L. M.; van Vliet, Lucas J.; Vos, Frans M.

    2016-01-01

    Purpose: This paper presents and studies a framework for reliable modeling of diffusion MRI using a data-acquisition adaptive prior. Methods: Automated relevance determination estimates the mean of the posterior distribution of a rank-2 dual tensor model exploiting Jeffreys prior (JARD). This data-acquisition prior is based on the Fisher information matrix and enables the assessment whether two tensors are mandatory to describe the data. The method is compared to Maximum Likelihood Estimation (MLE) of the dual tensor model and to FSL's ball-and-stick approach. Results: Monte Carlo experiments demonstrated that JARD's volume fractions correlated well with the ground truth for single and crossing fiber configurations. In single fiber configurations JARD automatically reduced the volume fraction of one compartment to (almost) zero. The variance in fractional anisotropy (FA) of the main tensor component was thereby reduced compared to MLE. JARD and MLE gave a comparable outcome in data simulating crossing fibers. On brain data, JARD yielded a smaller spread in FA along the corpus callosum compared to MLE. Tract-based spatial statistics demonstrated a higher sensitivity in detecting age-related white matter atrophy using JARD compared to both MLE and the ball-and-stick approach. Conclusions: The proposed framework offers accurate and precise estimation of diffusion properties in single and dual fiber regions. PMID:27760166

  16. Reliability of using nondestructive tests to estimate compressive strength of building stones and bricks

    Directory of Open Access Journals (Sweden)

    Ali Abd Elhakam Aliabdo

    2012-09-01

    This study aims to investigate the relationships between the Schmidt hardness rebound number (RN) and ultrasonic pulse velocity (UPV) versus the compressive strength (fc) of stones and bricks. Four types of rocks (marble, pink limestone, white limestone and basalt) and two types of bricks (burned bricks and lime-sand bricks) were studied. Linear and non-linear models were proposed. High correlations were found between RN and UPV versus compressive strength. Validation of the proposed models was assessed using other specimens of each material. Linear models for each material showed better correlations than non-linear models. A general model between RN and compressive strength of the tested stones and bricks showed a high correlation, with a regression coefficient R² value of 0.94. Estimation of the compressive strength of the studied stones and bricks using their rebound number and ultrasonic pulse velocity in a combined method was generally more reliable than using the rebound number or ultrasonic pulse velocity alone.
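
    The models in this record are calibration regressions of strength on RN and UPV. A minimal sketch of the single-predictor linear case with its R², on invented calibration pairs (not the paper's data), is shown below; the combined method would extend this to a two-predictor regression on RN and UPV together:

```python
import numpy as np

# Hypothetical calibration pairs: rebound number RN vs measured
# compressive strength fc (MPa) for one material
rn = np.array([28, 32, 35, 39, 42, 46, 50], dtype=float)
fc = np.array([18, 24, 27, 33, 38, 44, 51], dtype=float)

slope, intercept = np.polyfit(rn, fc, 1)        # linear model fc = a*RN + b
fc_hat = slope * rn + intercept
ss_res = ((fc - fc_hat) ** 2).sum()
ss_tot = ((fc - fc.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
print(f"fc = {slope:.2f}*RN + {intercept:.2f}, R^2 = {r2:.3f}")
```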

  17. Reliability estimation of structures under stochastic loading—A case study on nuclear piping

    International Nuclear Information System (INIS)

    Hari Prasad, M.; Rami Reddy, G.; Dubey, P.N.; Srividya, A.; Verma, A.K.

    2013-01-01

    Highlights: ► Structures are generally subjected to different types of loadings. ► One such type of loading is a random sequence, treated here as stochastic fatigue loading. ► In this methodology both the stress amplitude and the number of cycles to failure are considered as random variables. ► The methodology is demonstrated with a case study on nuclear piping. ► The failure probability of the piping is estimated as a function of time. - Abstract: Generally, structures are subjected to different types of loadings throughout their lifetime. These loads can be either discrete or continuous in nature, and either stationary or non-stationary processes. This means that structural reliability analysis considers not only random variables but also random variables that are functions of time, referred to as stochastic processes. A stochastic process can be viewed as a family of random variables. When a structure is subjected to random loading, the failure probability can be estimated from the stresses developed in the structure and the failure criteria. In practice, structures are designed with a higher factor of safety to take care of such random loads. In such cases the structure will fail only when the random loads are cyclic in nature. In traditional reliability analysis, the variation in the load is treated as a random variable, and the concept of extreme value theory is used to account for the number of occurrences of the loading. With this method, however, one neglects the damage accumulation that takes place from one loading to the next. Hence, in this paper, a new way of dealing with these types of problems is discussed using the concept of stochastic fatigue loading. The random loading has been considered as earthquake loading. The methodology is demonstrated with a case study on nuclear power plant piping.
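
    A Monte Carlo sketch of the idea, accumulating Miner's-rule damage from a stochastic sequence of stress amplitudes and tracking the failure probability as a function of time, is given below. The S-N constants, load statistics, and event rate are invented for illustration and are not the paper's piping case study:

```python
import numpy as np

rng = np.random.default_rng(42)
n_sim, years, cycles_per_year = 20_000, 40, 5   # illustrative numbers

# S-N curve N(S) = C * S**(-m): C lognormal (material scatter),
# stress amplitude S lognormal for each random-load event
m = 3.0
C = rng.lognormal(mean=np.log(1e8), sigma=0.5, size=n_sim)

damage = np.zeros(n_sim)
failed = np.zeros(n_sim, dtype=bool)
for year in range(1, years + 1):
    S = rng.lognormal(mean=np.log(80.0), sigma=0.25, size=(n_sim, cycles_per_year))
    damage += (S ** m).sum(axis=1) / C       # Miner's rule: sum of 1/N(S_i)
    failed |= damage >= 1.0                  # failure once cumulative damage hits 1
    if year % 10 == 0:
        print(f"year {year:2d}: P_f = {failed.mean():.3f}")
```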

  18. A procedure for estimating the dose modifying effect of chemotherapy on radiation response

    International Nuclear Information System (INIS)

    Hao, Y.; Keane, T.

    1994-01-01

    A procedure based on a logistic regression model was used to estimate the dose-modifying effect of chemotherapy on the response of normal tissues to radiation. The DEF in the proposed procedure is expressed as a function of logistic regression coefficients, response levels and values of covariates in the model. The proposed procedure is advantageous as it allows consideration of both the response levels and the values of covariates in calculating the DEF. A plot of the DEF against the response or a covariate describes how the DEF varies with the response levels or the covariate values. Confidence intervals of the DEF were obtained based on the normal approximation of the distribution of the estimated DEF and on a non-parametric Bootstrap method. An example is given to illustrate the proposed procedure. (Author)

  19. On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods

    Science.gov (United States)

    Gallegos, A. C.; Xie, J.; Suarez Salas, L.

    2017-12-01

    The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under a fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examination of differences in event size and frequency content of the seismograms, there can be a lack of rigorous justification of the assumption. In practice, a small event might have a finite duration, so that the retrieved RSTF is interpreted as the large event's STF with a bias. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain matrix deconvolution. We find that when the STFs of smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply a Tikhonov smoothing to obtain a single-pulse RSTF, but its duration is dependent on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017) which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix is dependent on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. Based on the results so far, we find that the
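
    The deconvolution at the core of the EGF method can be written as a linear system, with Tikhonov damping standing in for the smoothing the abstract mentions. The sketch below is a generic time-domain version on toy traces, not the authors' implementation, and it does not impose the positivity constraint used in the MCD method:

```python
import numpy as np
from scipy.linalg import toeplitz

def deconvolve_tikhonov(large, small, lam):
    """Time-domain deconvolution: solve small * rstf = large in the
    least-squares sense with Tikhonov (ridge) damping lam."""
    n = len(large)
    col = np.r_[small, np.zeros(n - len(small))]
    A = toeplitz(col, np.r_[small[0], np.zeros(n - 1)])   # convolution matrix
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ large)

# Toy example: "large event" trace is the small-event trace convolved
# with a 3-sample boxcar source pulse
small = np.array([0.0, 1.0, 0.5, 0.2, 0.0])
true_stf = np.array([1.0, 1.0, 1.0])
large = np.convolve(small, true_stf)[: len(small)]
rstf = deconvolve_tikhonov(large, small, lam=1e-3)   # expect roughly [1, 1, 1, 0, 0]
```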

  20. Development of an Estimating Procedure for the Annual PLAN Process - with Special Emphasis on the Estimating Group

    International Nuclear Information System (INIS)

    Lichtenberg, Steen

    2003-01-01

    This research study deals with the PLAN 2000 procedure. This complex annual estimating procedure is based on the Swedish law on financing, 1992:1537. It requires the Swedish Nuclear Power Inspectorate, SKI, to submit to the Government a fully supported annual proposal for the following year's unit fee for nuclear-generated electricity to be paid by the owners of the Swedish nuclear power plants. The function of this Fund, KAF, is to finance the future Swedish decommissioning programme. The underlying reason for the study is current criticism of the existing procedure, not least of the composition and working conditions of the analysis group. The purpose of the study is to improve the procedure. The aim is (1) to maximise the realism and neutrality of the necessary estimates in order to allow the KAF Fund to grow steadily at the current rate to the desired target size, allowing it to pay all relevant costs associated with this large decommissioning programme; (2) to do this with a controlled degree of safety; (3) to improve the transparency of the whole procedure in order to avoid any distrust of the procedure and its results. The scope covers all technical and statistical issues, and to some degree also the directly related organisational aspects, notably in respect of the present law and its administration. However, some details are dealt with which seem contrary to the aim of the law. Since 1996, SKI has delegated to the Swedish Nuclear Fuel and Waste Management Co., SKB, the task of performing the basic part of the necessary annual estimating procedure. SKI has then evaluated and supplemented the base estimate before the drafting of the final proposals for the Government and the Board of the Fund, KAFS. Some basic requirements are crucial to the quality of the result of the study: (1) full identification of all potential sources of major uncertainty and the subsequent correct handling of these, (2) balanced and unbiased quantitative evaluation of uncertain

  1. New analysis procedure for fast and reliable size measurement of nanoparticles from atomic force microscopy images

    International Nuclear Information System (INIS)

    Boyd, Robert D.; Cuenat, Alexandre

    2011-01-01

    Accurate size measurement during nanoparticle production is essential for the continuing innovation, quality and safety of nano-enabled products. Size measurement by analysing a number of separate particles individually has particular advantages over ensemble methods. In the latter case nanoparticles have to be well dispersed in a fluid, and changes that may occur during analysis, such as agglomeration and degradation, will not be detected, which could lead to misleading results. Atomic force microscopy (AFM) allows imaging of particles both in air and in liquid; however, the strong interactions between the probe and the particle cause broadening of the lateral dimensions in the final image. In this paper a new procedure to measure the size of spherical nanoparticles from AFM images via vertical height measurement is described. This procedure quickly analyses hundreds of particles simultaneously and reproduces the measurements obtained from electron microscopy (EM). Nanoparticle samples that were difficult, if not impossible, to analyse with EM were successfully measured using this method. The combination of this procedure with the use of a metrological AFM moves closer to true traceable measurements of nanoparticle dispersions.
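
    The essence of the procedure, measuring each particle's vertical height rather than its broadened lateral extent, can be illustrated with a few lines of image processing. The sketch below labels connected regions above a threshold and reports each region's peak height; the synthetic spherical-cap image and the flat background are assumptions for illustration:

```python
import numpy as np
from scipy import ndimage

def particle_heights(height_map, background, threshold):
    """Per-particle height from an AFM height map: label connected regions
    above a threshold and take each region's maximum height above background."""
    mask = (height_map - background) > threshold
    labels, n = ndimage.label(mask)
    peaks = ndimage.maximum(height_map - background, labels, index=np.arange(1, n + 1))
    return np.asarray(peaks)

# Synthetic image: flat background plus two spherical caps of radius 10 and 7
yy, xx = np.mgrid[0:128, 0:128]
img = np.zeros((128, 128))
for cx, cy, radius in [(40, 40, 10.0), (90, 80, 7.0)]:
    r2 = (xx - cx) ** 2 + (yy - cy) ** 2
    img = np.maximum(img, np.sqrt(np.clip(radius**2 - r2, 0, None)))
print(particle_heights(img, background=0.0, threshold=1.0))  # approx. [10.0, 7.0]
```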

  2. Reliability of different sampling densities for estimating and mapping lichen diversity in biomonitoring studies

    International Nuclear Information System (INIS)

    Ferretti, M.; Brambilla, E.; Brunialti, G.; Fornasier, F.; Mazzali, C.; Giordani, P.; Nimis, P.L.

    2004-01-01

    Sampling requirements related to lichen biomonitoring include optimal sampling density for obtaining precise and unbiased estimates of population parameters and maps of known reliability. Two available datasets on a sub-national scale in Italy were used to determine a cost-effective sampling density to be adopted in medium-to-large-scale biomonitoring studies. As expected, the relative error in the mean Lichen Biodiversity (Italian acronym: BL) values and the error associated with the interpolation of BL values for (unmeasured) grid cells increased as the sampling density decreased. However, the increase in size of the error was not linear and even a considerable reduction (up to 50%) in the original sampling effort led to a far smaller increase in errors in the mean estimates (<6%) and in mapping (<18%) as compared with the original sampling densities. A reduction in the sampling effort can result in considerable savings of resources, which can then be used for a more detailed investigation of potentially problematic areas. It is, however, necessary to decide the acceptable level of precision at the design stage of the investigation, so as to select the proper sampling density. - An acceptable level of precision must be decided before determining a sampling design

  3. Reliability of third molar development for age estimation in Gujarati population: A comparative study.

    Science.gov (United States)

    Gandhi, Neha; Jain, Sandeep; Kumar, Manish; Rupakar, Pratik; Choyal, Kanaram; Prajapati, Seema

    2015-01-01

    Age assessment may be a crucial step in postmortem profiling leading to confirmative identification. In children, Demirjian's method based on eight developmental stages was developed to determine maturity scores as a function of age, and polynomial functions to determine age as a function of score. The aim of this study was to evaluate the reliability of age estimation using Demirjian's eight-teeth method, following the French maturity scores and an Indian-specific formula, from the developmental stages of the third molar on orthopantomograms. Dental panoramic tomograms from 30 subjects each of known chronological age and sex were collected and evaluated according to Demirjian's criteria. Age calculations were performed using Demirjian's formula and the Indian formula. Statistical analysis used the Chi-square and ANOVA tests, and the P values obtained were statistically significant. There was an average underestimation of age with both the Indian and Demirjian's formulas. The mean absolute error was lower using the Indian formula; hence it can be applied for age estimation in the present Gujarati population. Females were ahead of males in achieving dental maturity; thus, dental development is completed earlier in females. Greater accuracy can be obtained if population-specific formulas considering ethnic and environmental variation are derived by performing regression analysis.

  4. Reliability analysis of road network for estimation of public evacuation time around NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Bang, Sun-Young; Lee, Gab-Bock; Chung, Yang-Geun [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2007-07-01

    The strongest protective measure in radiation emergency preparedness is evacuation of the public when a great deal of radioactivity is released to the environment. After the Three Mile Island (TMI) nuclear power plant meltdown in the United States and the Chernobyl nuclear power plant disaster in the U.S.S.R., many advanced countries, including the United States and Japan, have continued research on the estimation of public evacuation time as one of the emergency countermeasure technologies. In South Korea, the 'Framework Act on Civil Defense: Radioactive Disaster Preparedness Plan' was established in 1983, and nuclear power plants have set up radiation emergency plans and regularly carried out radiation emergency preparedness training. Nonetheless, there is still a need to improve the technology for estimating public evacuation time through precise analysis of traffic flow, so as to prepare practical and efficient ways to protect the public. In this research, the road networks around the Wolsong and Kori NPPs were constructed with the CORSIM code, and a reliability analysis of these road networks was performed.

  5. Heuristic and probabilistic wind power availability estimation procedures: Improved tools for technology and site selection

    Energy Technology Data Exchange (ETDEWEB)

    Nigim, K.A. [University of Waterloo, Waterloo, Ont. (Canada). Department of Electrical and Computer Engineering; Parker, Paul [University of Waterloo, Waterloo, Ont. (Canada). Department of Geography, Environmental Studies

    2007-04-15

    The paper describes two investigative procedures to estimate wind power from measured wind velocities. Wind velocity data are manipulated to visualize the site potential by investigating the probable wind power availability and its capacity to meet a targeted demand. The first procedure is an availability procedure that looks at the wind characteristics and its probable energy capturing profile. This profile of wind enables the probable maximum operating wind velocity profile for a selected wind turbine design to be predicted. The structured procedures allow for a consequent adjustment, sorting and grouping of the measured wind velocity data taken at different time intervals and hub heights. The second procedure is the adequacy procedure that investigates the probable degree of availability and the application consequences. Both procedures are programmed using MathCAD symbolic mathematical software. The math tool is used to generate a visual interpolation of the data as well as numerical results from extensive data sets that exceed the capacity of conventional spreadsheet tools. Two sites located in Southern Ontario, Canada are investigated using the procedures. Successful implementation of the procedures supports informed decision making where a hill site is shown to have much higher wind potential than that measured at the local airport. The process is suitable for a wide spectrum of users who are considering the energy potential for either a grid-tied or off-grid wind energy system. (author)
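
    Wind-availability procedures of this kind typically start from a fitted wind-speed distribution. As an illustrative stand-in for the MathCAD implementation described above, the sketch below fits a two-parameter Weibull distribution to synthetic hourly speeds and derives the mean wind power density; the data and air density are assumed values:

```python
import numpy as np
from scipy import stats
from scipy.special import gamma

# Hypothetical hourly wind speeds (m/s) at hub height for one year
rng = np.random.default_rng(7)
v = rng.weibull(2.0, size=8760) * 8.0

# Fit a two-parameter Weibull (location fixed at 0), the usual wind model
k, _, c = stats.weibull_min.fit(v, floc=0)

# Mean wind power density P/A = 0.5 * rho * E[v^3], with E[v^3] = c^3 * Gamma(1 + 3/k)
rho = 1.225   # air density, kg/m^3
power_density = 0.5 * rho * c**3 * gamma(1 + 3.0 / k)
print(f"k = {k:.2f}, c = {c:.2f} m/s, P/A = {power_density:.0f} W/m^2")
```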

  6. A constrained polynomial regression procedure for estimating the local False Discovery Rate

    Directory of Open Access Journals (Sweden)

    Broët Philippe

    2007-06-01

    Background: In the context of genomic association studies, for which a large number of statistical tests are performed simultaneously, the local False Discovery Rate (lFDR), which quantifies the evidence of a specific gene association with a clinical or biological variable of interest, is a relevant criterion for taking into account the multiple testing problem. The lFDR not only allows an inference to be made for each gene through its specific value, but also an estimate of Benjamini-Hochberg's False Discovery Rate (FDR) for subsets of genes. Results: In the framework of estimating procedures without any distributional assumption under the alternative hypothesis, a new and efficient procedure for estimating the lFDR is described. The results of a simulation study indicated good performances for the proposed estimator in comparison to four published ones. The five different procedures were applied to real datasets. Conclusion: A novel and efficient procedure for estimating the lFDR was developed and evaluated.

  7. Estimating effective dose to pediatric patients undergoing interventional radiology procedures using anthropomorphic phantoms and MOSFET dosimeters.

    Science.gov (United States)

    Miksys, Nelson; Gordon, Christopher L; Thomas, Karen; Connolly, Bairbre L

    2010-05-01

    The purpose of this study was to estimate the effective doses received by pediatric patients during interventional radiology procedures and to present those doses in "look-up tables" standardized according to minute of fluoroscopy and frame of digital subtraction angiography (DSA). Organ doses were measured with metal oxide semiconductor field effect transistor (MOSFET) dosimeters inserted within three anthropomorphic phantoms, representing children at ages 1, 5, and 10 years, at locations corresponding to radiosensitive organs. The phantoms were exposed to mock interventional radiology procedures of the head, chest, and abdomen using posteroanterior and lateral geometries, varying magnification, and fluoroscopy or DSA exposures. Effective doses were calculated from organ doses recorded by the MOSFET dosimeters and are presented in look-up tables according to the different age groups. The largest effective dose burden for fluoroscopy was recorded for posteroanterior and lateral abdominal procedures (0.2-1.1 mSv/min of fluoroscopy), whereas procedures of the head resulted in the lowest effective doses (0.02-0.08 mSv/min of fluoroscopy). DSA exposures of the abdomen imparted higher doses (0.02-0.07 mSv/DSA frame) than did those involving the head and chest. Patient doses during interventional procedures vary significantly depending on the type of procedure. User-friendly look-up tables may provide a helpful tool for health care providers in estimating effective doses for an individual procedure.
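
    Using such look-up tables reduces to simple arithmetic: multiply the tabulated per-minute and per-frame rates by the procedure's fluoroscopy time and DSA frame count. The rates below are illustrative values drawn from the ranges quoted in the abstract, not actual table entries:

```python
# Hypothetical abdominal procedure in a 5-year-old (all rates assumed)
fluoro_minutes = 12.0
dsa_frames = 40
rate_fluoro = 0.6    # mSv per minute of fluoroscopy (illustrative)
rate_dsa = 0.05      # mSv per DSA frame (illustrative)

effective_dose = fluoro_minutes * rate_fluoro + dsa_frames * rate_dsa
print(f"Estimated effective dose = {effective_dose:.1f} mSv")   # 9.2 mSv
```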

  8. Reliability of Nationwide Prevalence Estimates of Dementia: A Critical Appraisal Based on Brazilian Surveys.

    Directory of Open Access Journals (Sweden)

    Flávio Chaimowicz

    The nationwide dementia prevalence is usually calculated by applying the results of local surveys to countries' populations. To evaluate the reliability of such estimations in developing countries, we chose Brazil as an example. We carried out a systematic review of dementia surveys, ascertained their risk of bias, and present the best estimate of the occurrence of dementia in Brazil. We carried out an electronic search of PubMed, Latin-American databases, and a Brazilian thesis database for surveys focusing on dementia prevalence in Brazil. The systematic review was registered at PROSPERO (CRD42014008815). Among the 35 studies found, 15 analyzed population-based random samples. However, most of them utilized inadequate criteria for diagnostics. Six studies without these limitations were further analyzed to assess the risk of selection, attrition, outcome and population bias, as well as several statistical issues. All the studies presented moderate or high risk of bias in at least two domains due to the following features: high non-response, inaccurate cut-offs, and doubtful accuracy of the examiners. Two studies had limited external validity due to high rates of illiteracy or low income. The three studies with adequate generalizability and the lowest risk of bias presented a prevalence of dementia between 7.1% and 8.3% among subjects aged 65 years and older. However, after adjustment for the accuracy of screening, the best available evidence points towards a figure between 15.2% and 16.3%. The risk of bias may strongly limit the generalizability of dementia prevalence estimates in developing countries. Extrapolations that have already been made for Brazil and Latin America were based on a prevalence that should have been adjusted for screening accuracy, or not used at all due to severe bias. Similar evaluations regarding other developing countries are needed in order to verify the scope of these limitations.

  9. Human decomposition and the reliability of a 'Universal' model for post mortem interval estimations.

    Science.gov (United States)

    Cockle, Diane L; Bell, Lynne S

    2015-08-01

    Human decomposition is a complex biological process driven by an array of variables which are not clearly understood. The medico-legal community has long been searching for a reliable method to establish the post-mortem interval (PMI) for those whose deaths have either been hidden or gone unnoticed. To date, attempts to develop a PMI estimation method based on the state of the body either at the scene or at autopsy have been unsuccessful. One recent study proposed that two simple formulae, based on the level of decomposition, humidity and temperature, could be used to accurately calculate the PMI for bodies outside, on or under the surface worldwide. This study attempted to validate 'Formula I' [1] (for bodies on the surface) using 42 Canadian cases with known PMIs. The results indicated that Formula I estimations for bodies exposed to warm temperatures consistently overestimated the known PMI by a large and inconsistent margin, while for bodies exposed to cold and freezing temperatures (less than 4°C) the PMI was dramatically underestimated. The ability of 'Formula II' to estimate the PMI for buried bodies was also examined using a set of 22 known Canadian burial cases. As the cases used in this study are retrospective, some of the data needed for Formula II were not available. The 4.6 value used in Formula II to represent the standard ratio by which burial decelerates the rate of decomposition was examined. The average time taken to achieve each stage of decomposition both on and under the surface was compared for the 118 known cases. It was found that the rate of decomposition was not consistent throughout all stages of decomposition. The rates of autolysis above and below the ground were equivalent, with the buried cases staying in a state of putrefaction for a prolonged period of time. It is suggested that differences in temperature extremes and humidity levels between geographic regions may make it impractical to apply formulas developed in

  10. Synthesis of [¹²³I]IBZM: a reliable procedure for routine clinical studies

    Energy Technology Data Exchange (ETDEWEB)

    Zea-Ponce, Yolanda E-mail: yolanda@neuron.cpmc.columbia.edu; Laruelle, Marc

    1999-08-01

    The single photon emission computed tomography (SPECT) D₂/D₃ receptor radiotracer [¹²³I]IBZM is prepared by electrophilic radioiodination of the precursor BZM with high-purity sodium [¹²³I]iodide in the presence of diluted peracetic acid. However, in our hands, the most commonly used procedure for this radiosynthesis produced variable and inconsistent labeling yields, to such an extent that it became inappropriate for routine clinical studies. Our goal was to modify the labeling procedure to obtain consistently better labeling and radiochemical yields. The best conditions found for the radioiodination were as follows: 50 μg precursor in 50 μL EtOH mixed with buffer pH 2; Na[¹²³I]I in 0.1 M NaOH (<180 μL); 50 μL diluted peracetic acid solution; heating at 65 °C for 14 min. Purification was achieved by solid phase extraction (SPE) and reverse-phase high performance liquid chromatography (HPLC). Under these conditions, the average labeling yield was 76±4% (n=31); the radiochemical yield was 69±4% and the radiochemical purity was 98±1%. With larger volumes of the Na[¹²³I]I solution the yields were consistent but lower. For example, for volumes between 417 and 523 μL the labeling yield was 61±5% (n=21), the radiochemical yield was 56±5% and the radiochemical purity was 98±1%.

  11. Synthesis of [123I]IBZM: a reliable procedure for routine clinical studies

    International Nuclear Information System (INIS)

    Zea-Ponce, Yolanda; Laruelle, Marc

    1999-01-01

    The single photon emission computed tomography (SPECT) D₂/D₃ receptor radiotracer [¹²³I]IBZM is prepared by electrophilic radioiodination of the precursor BZM with high-purity sodium [¹²³I]iodide in the presence of diluted peracetic acid. However, in our hands, the most commonly used procedure for this radiosynthesis produced variable and inconsistent labeling yields, to such an extent that it became inappropriate for routine clinical studies. Our goal was to modify the labeling procedure to obtain consistently better labeling and radiochemical yields. The best conditions found for the radioiodination were as follows: 50 μg precursor in 50 μL EtOH mixed with buffer pH 2; Na[¹²³I]I in 0.1 M NaOH (<180 μL); 50 μL diluted peracetic acid solution; heating at 65 °C for 14 min. Purification was achieved by solid phase extraction (SPE) and reverse-phase high performance liquid chromatography (HPLC). Under these conditions, the average labeling yield was 76±4% (n=31); the radiochemical yield was 69±4% and the radiochemical purity was 98±1%. With larger volumes of the Na[¹²³I]I solution the yields were consistent but lower. For example, for volumes between 417 and 523 μL the labeling yield was 61±5% (n=21), the radiochemical yield was 56±5% and the radiochemical purity was 98±1%.

  12. TCS: a new multiple sequence alignment reliability measure to estimate alignment accuracy and improve phylogenetic tree reconstruction.

    Science.gov (United States)

    Chang, Jia-Ming; Di Tommaso, Paolo; Notredame, Cedric

    2014-06-01

    Multiple sequence alignment (MSA) is a key modeling procedure when analyzing biological sequences. Homology and evolutionary modeling are the most common applications of MSAs. Both are known to be sensitive to the underlying MSA accuracy. In this work, we show how this problem can be partly overcome using the transitive consistency score (TCS), an extended version of the T-Coffee scoring scheme. Using this local evaluation function, we show that one can identify the most reliable portions of an MSA, as judged from BAliBASE and PREFAB structure-based reference alignments. We also show how this measure can be used to improve phylogenetic tree reconstruction using both an established simulated data set and a novel empirical yeast data set. For this purpose, we describe a novel lossless alternative to site filtering that involves overweighting the trustworthy columns. Our approach relies on the T-Coffee framework; it uses libraries of pairwise alignments to evaluate any third party MSA. Pairwise projections can be produced using fast or slow methods, thus allowing a trade-off between speed and accuracy. We compared TCS with Heads-or-Tails, GUIDANCE, Gblocks, and trimAl and found it to lead to significantly better estimates of structural accuracy and more accurate phylogenetic trees. The software is available from www.tcoffee.org/Projects/tcs.

  13. Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR): Guide to data processing and revision: Part 2, Human error probability data entry and revision procedures

    International Nuclear Information System (INIS)

    Gilmore, W.E.; Gertman, D.I.; Gilbert, B.G.; Reece, W.J.

    1988-11-01

    The Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) is an automated data base management system for processing and storing human error probability (HEP) and hardware component failure data (HCFD). The NUCLARR system software resides on an IBM (or compatible) personal micro-computer. Users can perform data base searches to furnish HEP estimates and HCFD rates. In this manner, the NUCLARR system can be used to support a variety of risk assessment activities. This volume, Volume 3 of a 5-volume series, presents the procedures used to process HEP and HCFD for entry in NUCLARR and describes how to modify the existing NUCLARR taxonomy in order to add either equipment types or action verbs. Volume 3 also specifies the various roles of the administrative staff on assignment to the NUCLARR Clearinghouse who are tasked with maintaining the data base, dealing with user requests, and processing NUCLARR data. 5 refs., 34 figs., 3 tabs

  14. Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR): Guide to data processing and revision: Part 3, Hardware component failure data entry and revision procedures

    International Nuclear Information System (INIS)

    Gilmore, W.E.; Gertman, D.I.; Gilbert, B.G.; Reece, W.J.

    1988-11-01

    The Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) is an automated data base management system for processing and storing human error probability (HEP) and hardware component failure data (HCFD). The NUCLARR system software resides on an IBM (or compatible) personal micro-computer. Users can perform data base searches to furnish HEP estimates and HCFD rates. In this manner, the NUCLARR system can be used to support a variety of risk assessment activities. This volume, Volume 3 of a 5-volume series, presents the procedures used to process HEP and HCFD for entry in NUCLARR and describes how to modify the existing NUCLARR taxonomy in order to add either equipment types or action verbs. Volume 3 also specifies the various roles of the administrative staff on assignment to the NUCLARR Clearinghouse who are tasked with maintaining the data base, dealing with user requests, and processing NUCLARR data

  15. Estimating the Cost of Neurosurgical Procedures in a Low-Income Setting: An Observational Economic Analysis.

    Science.gov (United States)

    Abdelgadir, Jihad; Tran, Tu; Muhindo, Alex; Obiga, Doomwin; Mukasa, John; Ssenyonjo, Hussein; Muhumza, Michael; Kiryabwire, Joel; Haglund, Michael M; Sloan, Frank A

    2017-05-01

    There are no data on cost of neurosurgery in low-income and middle-income countries. The objective of this study was to estimate the cost of neurosurgical procedures in a low-resource setting to better inform resource allocation and health sector planning. In this observational economic analysis, microcosting was used to estimate the direct and indirect costs of neurosurgical procedures at Mulago National Referral Hospital (Kampala, Uganda). During the study period, October 2014 to September 2015, 1440 charts were reviewed. Of these patients, 434 had surgery, whereas the other 1006 were treated nonsurgically. Thirteen types of procedures were performed at the hospital. The estimated mean cost of a neurosurgical procedure was $542.14 (standard deviation [SD], $253.62). The mean cost of different procedures ranged from $291 (SD, $101) for burr hole evacuations to $1,221 (SD, $473) for excision of brain tumors. For most surgeries, overhead costs represented the largest proportion of the total cost (29%-41%). This is the first study using primary data to determine the cost of neurosurgery in a low-resource setting. Operating theater capacity is likely the binding constraint on operative volume, and thus, investing in operating theaters should achieve a higher level of efficiency. Findings from this study could be used by stakeholders and policy makers for resource allocation and to perform economic analyses to establish the value of neurosurgery in achieving global health goals.

  16. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    Science.gov (United States)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of the optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid-scale model displaying good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database, but over-estimates the mixing process in that case.
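
    As an illustration of the second ingredient, training an ANN on a filtered-DNS database, the sketch below fits a small multilayer perceptron to stand-in data. The input invariants, target flux, and network size are assumptions; the paper's optimal-estimator selection of inputs and bi-objective training are not reproduced:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Stand-in for a filtered-DNS training database: rows are grid points,
# columns are resolved-field inputs (3 hypothetical invariants),
# target is the exact subgrid-scale scalar flux component.
rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 3))
y = 0.8 * X[:, 0] * X[:, 1] - 0.3 * X[:, 2] + 0.05 * rng.normal(size=5000)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
)
model.fit(X, y)
print("training R^2 =", round(model.score(X, y), 3))
```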

  17. Framework for estimating response time data to conduct a seismic human reliability analysis - its feasibility

    International Nuclear Information System (INIS)

    Park, Jinkyun; Kin, Yochan; Jung, Wondea; Jang, Seung Cheol

    2014-01-01

    This is because the PSA has been used for several decades as the representative tool to evaluate the safety of NPPs. To this end, it is essential to evaluate human error probabilities (HEPs) for the important tasks considered in the PSA framework (i.e., human failure events, HFEs), which can significantly affect the safety of NPPs. In addition, it should be emphasized that the provision of realistic human performance data is an important precondition for calculating HEPs under a seismic condition. Unfortunately, it seems that the HRA methods currently used for calculating HEPs under a seismic event do not properly consider the performance variation of human operators. For this reason, in this paper, a framework to estimate response time data, which are critical for calculating HEPs, is suggested with respect to seismic intensity. This paper suggests a systematic framework for estimating response time data that are among the most critical inputs for calculating HEPs. Although an extensive review of the existing literature is indispensable for identifying the response times of human operators who have to conduct a series of tasks prescribed in procedures based on a couple of wrong indications, it is highly expected that response time data for seismic HRA can be properly secured by revisiting response time data collected from diverse situations not involving a seismic event

  18. Procedures and methods that increase reliability and reproducibility of the transplanted kidney perfusion index

    International Nuclear Information System (INIS)

    Smokvina, A.

    1994-01-01

    At different times following surgery and during various complications, 119 studies were performed on 57 patients. In many patients the studies were repeated several times. Twenty-three studies were performed in as many patients, in whom normal function of the transplanted kidney was established by other diagnostic methods and retrospective analysis. A comparison was made between the perfusion index results obtained by the Hilson et al. method from 1978 and those obtained by my own modified method, which for calculating the index also takes into account: the time difference in the appearance of the initial portions of the artery and kidney curves; the positioning of the region of interest over the distal part of the aorta; the bolus injection into the arteriovenous shunt of the forearm with high specific activity of small volumes of Tc-99m labelled agents; fast data collection in 0.5-second frames; and a standard for normalization of numerical data. The reliability of the two methods, tested by a simulated time shift of the peak of the arterial curves, shows that the percentage deviation from the mean index value in the unmodified method is 2-5 times greater than in the modified method. The normal value of the perfusion index with the modified method is 91-171. (author)

  19. Procedure for estimating nonfuel operation and maintenance costs for large steam-electric power plants

    International Nuclear Information System (INIS)

    Myers, M.L.; Fuller, L.C.

    1979-01-01

    Revised guidelines are presented for estimating annual nonfuel operation and maintenance costs for large steam-electric power plants, specifically light-water-reactor plants and coal-fired plants. Previous guidelines were published in October 1975 in ERDA 76-37, A Procedure for Estimating Nonfuel Operating and Maintenance Costs for Large Steam-Electric Power Plants. Estimates for coal-fired plants include the option of limestone slurry scrubbing for flue gas desulfurization. A computer program, OMCOST, which covers all plant options, is also presented

  20. The juvenile face as a suitable age indicator in child pornography cases: a pilot study on the reliability of automated and visual estimation approaches.

    Science.gov (United States)

    Ratnayake, M; Obertová, Z; Dose, M; Gabriel, P; Bröker, H M; Brauckmann, M; Barkus, A; Rizgeliene, R; Tutkuviene, J; Ritz-Timme, S; Marasciuolo, L; Gibelli, D; Cattaneo, C

    2014-09-01

    In cases of suspected child pornography, the age of the victim represents a crucial factor for legal prosecution. The conventional methods for age estimation provide unreliable age estimates, particularly if teenage victims are concerned. In this pilot study, the potential of age estimation for screening purposes is explored for juvenile faces. In addition to a visual approach, an automated procedure is introduced, which has the ability to rapidly scan through large numbers of suspicious image data in order to trace juvenile faces. Age estimations were performed by experts, non-experts and the Demonstrator of a developed software on frontal facial images of 50 females aged 10-19 years from Germany, Italy, and Lithuania. To test the accuracy, the mean absolute error (MAE) between the estimates and the real ages was calculated for each examiner and the Demonstrator. The Demonstrator achieved the lowest MAE (1.47 years) for the 50 test images. Decreased image quality had no significant impact on the performance and classification results. The experts delivered slightly less accurate MAE (1.63 years). Throughout the tested age range, both the manual and the automated approach led to reliable age estimates within the limits of natural biological variability. The visual analysis of the face produces reasonably accurate age estimates up to the age of 18 years, which is the legally relevant age threshold for victims in cases of pedo-pornography. This approach can be applied in conjunction with the conventional methods for a preliminary age estimation of juveniles depicted on images.

  1. Automated procedure for volumetric measurement of metastases. Estimation of tumor burden

    International Nuclear Information System (INIS)

    Fabel, M.; Bolte, H.

    2008-01-01

    Cancer is a common and increasing disease worldwide. Therapy monitoring in oncologic patient care requires accurate and reliable measurement methods for evaluation of the tumor burden. RECIST (response evaluation criteria in solid tumors) and WHO criteria are still the current standards for therapy response evaluation, with inherent disadvantages due to the considerable interobserver variation of manual diameter estimations. Volumetric analysis of, e.g., lung, liver and lymph node metastases promises to be a more accurate, precise and objective method for tumor burden estimation. (orig.)

  2. A stochastic estimation procedure for intermittently-observed semi-Markov multistate models with back transitions.

    Science.gov (United States)

    Aralis, Hilary; Brookmeyer, Ron

    2017-01-01

    Multistate models provide an important method for analyzing a wide range of life history processes including disease progression and patient recovery following medical intervention. Panel data consisting of the states occupied by an individual at a series of discrete time points are often used to estimate transition intensities of the underlying continuous-time process. When transition intensities depend on the time elapsed in the current state and back transitions between states are possible, this intermittent observation process presents difficulties in estimation due to intractability of the likelihood function. In this manuscript, we present an iterative stochastic expectation-maximization algorithm that relies on a simulation-based approximation to the likelihood function and implement this algorithm using rejection sampling. In a simulation study, we demonstrate the feasibility and performance of the proposed procedure. We then demonstrate application of the algorithm to a study of dementia, the Nun Study, consisting of intermittently-observed elderly subjects in one of four possible states corresponding to intact cognition, impaired cognition, dementia, and death. We show that the proposed stochastic expectation-maximization algorithm substantially reduces bias in model parameter estimates compared to an alternative approach used in the literature, minimal path estimation. We conclude that in estimating intermittently observed semi-Markov models, the proposed approach is a computationally feasible and accurate estimation procedure that leads to substantial improvements in back transition estimates.
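
    The computational bottleneck the authors address is imputing a latent continuous-time path that agrees with two panel observations. A minimal rejection-sampling building block for that E-step is sketched below for a hypothetical three-state semi-Markov model with Weibull sojourn times; all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state semi-Markov model: embedded transition matrix P
# (no self-transitions) and Weibull sojourn times per state
P = np.array([[0.0, 0.7, 0.3],
              [0.4, 0.0, 0.6],
              [0.5, 0.5, 0.0]])
shape = np.array([1.5, 0.8, 1.2])   # Weibull shape per state
scale = np.array([2.0, 1.0, 3.0])   # Weibull scale per state

def simulate_path(start, t_max):
    """Simulate a semi-Markov trajectory up to time t_max; return the
    sequence of (entry_time, state) and the state occupied at t_max."""
    t, s, path = 0.0, start, [(0.0, start)]
    while True:
        t += scale[s] * rng.weibull(shape[s])   # sojourn in state s
        if t >= t_max:
            return path, s
        s = rng.choice(3, p=P[s])
        path.append((t, s))

def rejection_sample(start, end, dt, max_tries=10_000):
    """E-step building block: draw a latent path consistent with a panel
    observation pair (state `start` at time 0, state `end` at time dt)."""
    for _ in range(max_tries):
        path, final = simulate_path(start, dt)
        if final == end:
            return path
    raise RuntimeError("rejection sampling failed")

print(rejection_sample(start=0, end=2, dt=4.0))
```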

  3. Estimation of equivalent dose on the ends of hemodynamic physicians during neurological procedures

    International Nuclear Information System (INIS)

    Squair, Peterson L.; Souza, Luiz C. de; Oliveira, Paulo Marcio C. de

    2005-01-01

    The estimation of doses in the hands of physicists during hemodynamic procedures is important to verify the application of radiation protection related to the optimization and limit of dose, principles required by the Portaria 453/98 of Ministry of Health/ANVISA, Brazil. It was checked the levels of exposure of the hands of doctors during the use of the equipment in hemodynamic neurological procedures through dosimetric rings with thermoluminescent dosemeters detectors of LiF: Mg, Ti (TLD-100), calibrated in personal Dose equivalent HP (0.07). The average equivalent dose in the end obtained was 41.12. μSv per scan with an expanded uncertainty of 20% for k = 2. This value is relative to the hemodynamic Neurology procedure using radiological protection procedures accessible to minimize the dose

  4. Simple and Reliable Method to Estimate the Fingertip Static Coefficient of Friction in Precision Grip.

    Science.gov (United States)

    Barrea, Allan; Bulens, David Cordova; Lefevre, Philippe; Thonnard, Jean-Louis

    2016-01-01

    The static coefficient of friction (µ_static) plays an important role in dexterous object manipulation. The minimal normal force (i.e., grip force) needed to avoid dropping an object is determined by the tangential force at the fingertip-object contact and the frictional properties of the skin-object contact. Although frequently assumed to be constant for all levels of normal force (NF, the force normal to the contact), µ_static actually varies nonlinearly with NF and increases at low NF levels. No method is currently available to measure the relationship between µ_static and NF easily. Therefore, we propose a new method allowing the simple and reliable measurement of the fingertip µ_static at different NF levels, as well as an algorithm for determining µ_static from measured forces and torques. Our method is based on active, back-and-forth movements of a subject's finger on the surface of a fixed six-axis force and torque sensor. µ_static is computed as the ratio of the tangential to the normal force at slip onset. A negative power law captures the relationship between µ_static and NF. Our method allows the continuous estimation of µ_static as a function of NF during dexterous manipulation, based on the relationship between µ_static and NF measured before manipulation.
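
    The negative power law mentioned above is conveniently fitted in log-log space. The sketch below does this on invented slip-onset measurements; the coefficients have no connection to the paper's subjects:

```python
import numpy as np

# Hypothetical slip-onset measurements: normal force NF (N) and the
# static friction coefficient mu_static = TF / NF at slip onset
nf = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
mu = np.array([1.9, 1.5, 1.2, 1.0, 0.8])

# Negative power law mu = a * NF**b, fitted as a line in log-log space
b, log_a = np.polyfit(np.log(nf), np.log(mu), 1)
a = np.exp(log_a)
print(f"mu_static = {a:.2f} * NF^{b:.2f}")   # b < 0: friction drops as NF grows
```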

  5. Validation of DWI pre-processing procedures for reliable differentiation between human brain gliomas.

    Science.gov (United States)

    Vellmer, Sebastian; Tonoyan, Aram S; Suter, Dieter; Pronin, Igor N; Maximov, Ivan I

    2018-02-01

    Diffusion magnetic resonance imaging (dMRI) is a powerful tool in clinical applications, in particular, in oncology screening. dMRI demonstrated its benefit and efficiency in the localisation and detection of different types of human brain tumours. Clinical dMRI data suffer from multiple artefacts such as motion and eddy-current distortions, contamination by noise, outliers etc. In order to increase the image quality of the derived diffusion scalar metrics and the accuracy of the subsequent data analysis, various pre-processing approaches are actively developed and used. In the present work we assess the effect of different pre-processing procedures such as a noise correction, different smoothing algorithms and spatial interpolation of raw diffusion data, with respect to the accuracy of brain glioma differentiation. As a set of sensitive biomarkers of the glioma malignancy grades we chose the derived scalar metrics from diffusion and kurtosis tensor imaging as well as the neurite orientation dispersion and density imaging (NODDI) biophysical model. Our results show that the application of noise correction, anisotropic diffusion filtering, and cubic-order spline interpolation resulted in the highest sensitivity and specificity for glioma malignancy grading. Thus, these pre-processing steps are recommended for the statistical analysis in brain tumour studies.

  6. Using operational data to estimate the reliable yields of water-supply wells

    Science.gov (United States)

    Misstear, Bruce D. R.; Beeson, Sarah

    The reliable yield of a water-supply well depends on many different factors, including the properties of the well and the aquifer; the capacities of the pumps, raw-water mains, and treatment works; the interference effects from other wells; and the constraints imposed by abstraction licences, water quality, and environmental issues. A relatively simple methodology for estimating reliable yields has been developed that takes into account all of these factors. The methodology is based mainly on an analysis of water-level and source-output data, where such data are available. Good operational data are especially important when dealing with wells in shallow, unconfined, fissure-flow aquifers, where actual well performance may vary considerably from that predicted using a more analytical approach. Key issues in the yield-assessment process are the identification of a deepest advisable pumping water level, and the collection of the appropriate well, aquifer, and operational data. Although developed for water-supply operators in the United Kingdom, this approach to estimating the reliable yields of water-supply wells using operational data should be applicable to a wide range of hydrogeological conditions elsewhere. Résumé: The yield of a well used for drinking-water supply depends on various factors, among them the properties of the well and the aquifer, the capacity of the pumps, the treatment of the raw water, the interference effects from other wells, and the constraints imposed by abstraction licences, by water quality, and by environmental conditions. A relatively simple methodology for estimating the yield that takes all of these factors into account has been developed. This methodology is based mainly on an analysis of data on the piezometric level and the abstraction rate, where such data are available. Good operational data are particularly

  7. An operational procedure for precipitable and cloud liquid water estimate in non-raining conditions over sea Study on the assessment of the nonlinear physical inversion algorithm

    CERN Document Server

    Nativi, S; Mazzetti, P

    2004-01-01

    In a previous work, an operative procedure to estimate precipitable and cloud liquid water in non-raining conditions over sea was developed and assessed. The procedure is based on a fast non-linear physical inversion scheme and a forward model; it is valid for most satellite microwave radiometers and it also estimates effective water profiles. This paper presents two improvements to the procedure: first, a refinement to provide modularity of the software components and portability across different computation system architectures; second, the adoption of the CERN MINUIT minimisation package, which addresses the problem of global minimisation but is computationally more demanding. Together with the increased computational performance that allowed stricter requirements to be imposed on the quality of fit, these refinements improved fitting precision and reliability, and allowed the requirements on the initial guesses for the model parameters to be relaxed. The re-analysis of the same data-set considered in the previous pap...

  8. Estimation of the collective dose in the Portuguese population due to medical procedures in 2010

    International Nuclear Information System (INIS)

    Teles, Pedro; Vaz, Pedro; Sousa, M. Carmen de; Paulo, Graciano; Santos, Joana; Pascoal, Ana; Cardoso, Gabriela; Santos, Ana Isabel; Lanca, Isabel; Matela, Nuno; Janeiro, Luis; Sousa, Patrick; Carvoeiras, Pedro; Parafita, Rui; Simaozinho, Paula

    2013-01-01

    In a wide range of medical fields, technological advancements have led to an increase in the average collective dose in national populations worldwide. Periodic estimation of the average collective population dose due to medical exposure is therefore of utmost importance, and is now mandatory in countries within the European Union (article 12 of EURATOM directive 97/43). Presented in this work is a report on the estimation of the collective dose in the Portuguese population due to nuclear medicine diagnostic procedures and the Top 20 diagnostic radiology examinations, which represent the 20 exams that contribute the most to the total collective dose in diagnostic radiology and interventional procedures in Europe. This work involved the collaboration of a multidisciplinary taskforce comprising representatives of all major Portuguese stakeholders (universities, research institutions, public and private health care providers, administrative services of the National Healthcare System, scientific and professional associations and private service providers). This allowed us to gather a comprehensive amount of data necessary for a robust estimation of the collective effective dose to the Portuguese population. The methodology used for data collection and dose estimation was based on European Commission recommendations, as this work was performed in the framework of the European-wide Dose Datamed II project. This is the first study estimating the collective dose for the population in Portugal, considering such a wide national coverage and range of procedures, and it constitutes important baseline reference data. The taskforce intends to continue developing periodic collective dose estimations in the future. The estimated annual average effective dose for the Portuguese population was 0.080±0.017 mSv caput⁻¹ for nuclear medicine exams and 0.96±0.68 mSv caput⁻¹ for the Top 20 diagnostic radiology exams. (authors)
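
    The per caput figures translate into a collective dose by simple multiplication with the population size. The population value below is an approximation for Portugal around 2010, not a number taken from the paper:

```python
# Per caput to collective dose (population is an assumed approximation)
population = 10.6e6
per_caput = 0.080 + 0.96          # mSv/caput: nuclear medicine + Top 20 radiology
collective_man_sv = per_caput * population / 1000.0   # convert mSv to Sv
print(f"roughly {collective_man_sv:.0f} man.Sv per year")   # about 11,000 man.Sv
```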

  9. Application of the Evidence Procedure to the Estimation of Wireless Channels

    Directory of Open Access Journals (Sweden)

    Fleury Bernard H

    2007-01-01

    We address the application of the Bayesian evidence procedure to the estimation of wireless channels. The proposed scheme is based on relevance vector machines (RVM) originally proposed by M. Tipping. RVMs allow channel parameters to be estimated, and the number of multipath components constituting the channel to be assessed, within the Bayesian framework by locally maximizing the evidence integral. We show that, in the case of channel sounding using pulse-compression techniques, it is possible to cast the channel model as a general linear model, thus allowing RVM methods to be applied. We extend the original RVM algorithm to the multiple-observation/multiple-sensor scenario by proposing a new graphical model to represent multipath components. Through the analysis of the evidence procedure we develop a thresholding algorithm that is used in estimating the number of components. We also discuss the relationship of the evidence procedure to the standard minimum description length (MDL) criterion. We show that the maximum of the evidence corresponds to the minimum of the MDL criterion. The applicability of the proposed scheme is demonstrated with synthetic as well as real-world channel measurements, and a performance increase over the conventional MDL criterion applied to maximum-likelihood estimates of the channel parameters is observed.
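
    A close off-the-shelf analogue of the RVM machinery is automatic relevance determination regression, which drives most coefficients of a general linear model to zero. The sketch below uses scikit-learn's ARDRegression on a toy pulse-compression dictionary; it illustrates the sparsity mechanism only, not the paper's multi-sensor extension or its evidence-based thresholding:

```python
import numpy as np
from sklearn.linear_model import ARDRegression

# Sparse multipath recovery as a general linear model: dictionary columns
# are delayed copies of the sounding pulse, most amplitudes are zero
rng = np.random.default_rng(5)
n, pulse = 200, np.array([1.0, 0.7, 0.3])
A = np.zeros((n, n))
for tau in range(n):
    seg = pulse[: n - tau]
    A[tau : tau + len(seg), tau] = seg

x_true = np.zeros(n)
x_true[[20, 55, 120]] = [1.0, 0.6, 0.4]        # three multipath components
y = A @ x_true + 0.01 * rng.normal(size=n)      # noisy received signal

ard = ARDRegression(fit_intercept=False).fit(A, y)
detected = np.flatnonzero(np.abs(ard.coef_) > 0.1)
print("estimated delays:", detected)            # ideally [20, 55, 120]
```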

  10. A Comparison of the Approaches of Generalizability Theory and Item Response Theory in Estimating the Reliability of Test Scores for Testlet-Composed Tests

    Science.gov (United States)

    Lee, Guemin; Park, In-Yong

    2012-01-01

    Previous assessments of the reliability of test scores for testlet-composed tests have indicated that item-based estimation methods overestimate reliability. This study was designed to address issues related to the extent to which item-based estimation methods overestimate the reliability of test scores composed of testlets and to compare several…

  11. Bayesian reliability analysis for non-periodic inspection with estimation of uncertain parameters; Bayesian shinraisei kaiseki wo tekiyoshita hiteiki kozo kensa ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Itagaki, H. [Yokohama National University, Yokohama (Japan). Faculty of Engineering; Asada, H.; Ito, S. [National Aerospace Laboratory, Tokyo (Japan); Shinozuka, M.

    1996-12-31

    Risk-assessed structural positions in a pressurized fuselage of a transport-type aircraft with damage tolerance design are taken up as the subject of discussion. A small number of data obtained from inspections of these positions was used to discuss a Bayesian reliability analysis that can also estimate a proper non-periodic inspection schedule while estimating proper values for uncertain factors. As a result, the time period of fatigue crack initiation was determined according to the procedure of detailed visual inspections. The analysis method was found capable of estimating values that are thought reasonable, and of deriving a proper inspection schedule using these values, in spite of placing the fatigue crack growth expression in a very simple form and treating both factors as uncertain. Thus, the effectiveness of the present analysis method was verified. This study has also discussed, from different viewpoints, the structural positions, the modeling of fatigue cracks generated and developing at these positions, the conditions for failure, the damage factors, and the capability of the inspection. This reliability analysis method is considered effective for other structures as well, such as offshore structures. 18 refs., 8 figs., 1 tab.

  13. Classification of Amazonian rosewood essential oil by Raman spectroscopy and PLS-DA with reliability estimation.

    Science.gov (United States)

    Almeida, Mariana R; Fidelis, Carlos H V; Barata, Lauro E S; Poppi, Ronei J

    2013-12-15

    The Amazon tree Aniba rosaeodora Ducke (rosewood) provides an essential oil valuable for the perfume industry, but after decades of predatory extraction it is at risk of extinction. Extraction of the essential oil from wood implies cutting the tree, so the study of oil extracted from the leaves is important as a sustainable alternative. The goal of this study was to test the applicability of Raman spectroscopy and Partial Least Squares Discriminant Analysis (PLS-DA) as a means to classify the essential oil extracted from different parts (wood, leaves and branches) of the Brazilian tree A. rosaeodora. For the development of classification models, the Raman spectra were split into two sets: training and test. The value of the limit that separates the classes was calculated from the distribution of the training samples, in such a manner that the classes are divided with the lowest probability of incorrect classification for future estimates. The best model presented sensitivity and specificity of 100%, and predictive accuracy and efficiency of 100%. These results give an overall vision of the behavior of the model but do not give information about individual samples; therefore, the classification confidence interval for each sample was also calculated using the bootstrap resampling technique. The methodology developed has the potential to be an alternative to standard procedures used for oil analysis, and it can be employed as a screening method, since it is fast, non-destructive and robust. © 2013 Elsevier B.V. All rights reserved.
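
    As a rough illustration of the classification-with-confidence idea, the sketch below trains a PLS-DA-style model (PLS regression on a class indicator, via scikit-learn) and bootstraps the training set to attach a confidence interval to each test-sample score. The spectra are synthetic, and the component count, class threshold and bootstrap size are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hypothetical data: rows are spectra, y encodes oil origin (1 = leaf, 0 = wood).
rng = np.random.default_rng(1)
X = rng.standard_normal((60, 200))
y = (np.arange(60) % 2).astype(float)
X[y == 1] += 0.3                                # separable classes for the toy case

train = rng.choice(60, 40, replace=False)
test = np.setdiff1d(np.arange(60), train)

# Bootstrap the training set to get a distribution of predicted scores per sample.
B, scores = 200, []
for _ in range(B):
    idx = rng.choice(train, train.size, replace=True)
    pls = PLSRegression(n_components=3).fit(X[idx], y[idx])
    scores.append(pls.predict(X[test]).ravel())
scores = np.array(scores)                       # shape (B, n_test)

lo, hi = np.percentile(scores, [2.5, 97.5], axis=0)
decision = scores.mean(axis=0) > 0.5            # class threshold at 0.5
for i, t in enumerate(test[:5]):
    print(f"sample {t}: class={int(decision[i])}, 95% CI=({lo[i]:.2f}, {hi[i]:.2f})")
```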

  14. Procedures for treating common cause failures in safety and reliability studies: Volume 2, Analytic background and techniques: Final report

    International Nuclear Information System (INIS)

    Mosleh, A.; Fleming, K.N.; Parry, G.W.; Paula, H.M.; Worledge, D.H.; Rasmuson, D.M.

    1988-12-01

    This report presents a framework for the inclusion of the impact of common cause failures in risk and reliability evaluations. Common cause failures are defined as that subset of dependent failures for which causes are not explicitly included in the logic model as basic events. The emphasis here is on providing procedures for a practical, systematic approach that can be used to perform and clearly document the analysis. The framework and the methods discussed for performing the different stages of the analysis integrate insights obtained from engineering assessments of the system and the historical evidence from multiple failure events into a systematic, reproducible, and defensible analysis. This document, Volume 2, contains a series of appendices that provide additional background and methodological detail on several important topics discussed in Volume 1.

  15. Estimating the reliability of glycemic index values and potential sources of methodological and biological variability.

    Science.gov (United States)

    Matthan, Nirupa R; Ausman, Lynne M; Meng, Huicui; Tighiouart, Hocine; Lichtenstein, Alice H

    2016-10-01

    The utility of glycemic index (GI) values for chronic disease risk management remains controversial. Although absolute GI value determinations for individual foods have been shown to vary significantly in individuals with diabetes, there is a dearth of data on the reliability of GI value determinations and potential sources of variability among healthy adults. We examined the intra- and interindividual variability in glycemic response to a single food challenge and the methodologic and biological factors that potentially mediate this response. The GI value for white bread was determined by using standardized methodology in 63 volunteers free from chronic disease and recruited to differ by sex, age (18-85 y), and body mass index [BMI (in kg/m²): 20-35]. Volunteers randomly underwent 3 sets of food challenges involving glucose (reference) and white bread (test food), both providing 50 g available carbohydrates. Serum glucose and insulin were monitored for 5 h postingestion, and GI values were calculated by using different area under the curve (AUC) methods. Biochemical variables were measured by using standard assays and body composition by dual-energy X-ray absorptiometry. The mean ± SD GI value for white bread was 62 ± 15 when calculated by using the recommended method. Mean intra- and interindividual CVs were 20% and 25%, respectively. Increasing sample size, replication of reference and test foods, and length of blood sampling, as well as AUC calculation method, did not improve the CVs. Among the biological factors assessed, insulin index and glycated hemoglobin values explained 15% and 16% of the variability in mean GI value for white bread, respectively. These data indicate that there is substantial variability in individual responses to GI value determinations, demonstrating that it is unlikely to be a good approach to guiding food choices. Additionally, even in healthy individuals, glycemic status significantly contributes to the variability in GI value
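
    The GI computation itself is straightforward: the incremental area under the postprandial glucose curve for the test food is expressed as a percentage of that for the reference glucose drink. A minimal sketch follows; the sampling times and glucose profiles are invented for illustration, and the paper compares several AUC variants rather than only the single trapezoidal version shown here.

```python
import numpy as np

def incremental_auc(t, glucose):
    """Incremental AUC above the fasting baseline (trapezoidal rule,
    negative increments truncated to zero), as commonly used for GI."""
    g = np.asarray(glucose, float)
    inc = np.clip(g - g[0], 0.0, None)
    return float(np.sum((inc[1:] + inc[:-1]) / 2.0 * np.diff(t)))

# Hypothetical serum glucose profiles (mmol/L) at sampling times in minutes.
t = np.array([0, 15, 30, 45, 60, 90, 120, 180, 240, 300], dtype=float)
glucose_ref  = [5.0, 6.8, 8.2, 7.9, 7.1, 6.2, 5.6, 5.1, 5.0, 5.0]  # 50 g glucose
glucose_test = [5.0, 6.2, 7.4, 7.2, 6.6, 5.9, 5.4, 5.1, 5.0, 5.0]  # white bread

gi = 100.0 * incremental_auc(t, glucose_test) / incremental_auc(t, glucose_ref)
print(f"GI for this subject: {gi:.0f}")  # a study averages this across subjects
```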

  16. Integration of external estimated breeding values and associated reliabilities using correlations among traits and effects.

    Science.gov (United States)

    Vandenplas, J; Colinet, F G; Glorieux, G; Bertozzi, C; Gengler, N

    2015-12-01

    Based on a Bayesian view of linear mixed models, several studies showed the possibility of integrating estimated breeding values (EBV) and associated reliabilities (REL) provided by genetic evaluations performed outside a given evaluation system into this genetic evaluation. Hereafter, the term "internal" refers to this given genetic evaluation system, and the term "external" refers to all other genetic evaluations performed outside the internal evaluation system. Bayesian approaches integrate external information (i.e., external EBV and associated REL) by altering both the mean and (co)variance of the prior distributions of the additive genetic effects based on the knowledge of this external information. Extensions of the Bayesian approaches to multivariate settings are interesting because external information expressed on other scales, measurement units, or trait definitions, or associated with different heritabilities and genetic parameters than the internal traits, could be integrated into a multivariate genetic evaluation without the need to convert external information to the internal traits. Therefore, the aim of this study was to test the integration of external EBV and associated REL, expressed on a 305-d basis and genetically correlated with a trait of interest, into a multivariate genetic evaluation using a random regression test-day model for the trait of interest. The approach we used was a multivariate Bayesian approach. Results showed that the integration of external information led to a genetic evaluation for the trait of interest that was, at least for animals associated with external information, as accurate as a bivariate evaluation including all available phenotypic information. In conclusion, the multivariate Bayesian approaches have the potential to integrate external information correlated with the internal phenotypic traits, and potentially to the different random regressions, into a multivariate genetic evaluation. This allows the use of different

  17. Reliability of the fuel identification procedure used by COGEMA during cask loading for shipment to LA HAGUE

    International Nuclear Information System (INIS)

    Pretesacque, P.; Eid, M.; Zachar, M.

    1993-01-01

    This study was carried out to demonstrate the reliability of the spent fuel identification system used by COGEMA and NTL prior to shipment to the reprocessing plant of La Hague. This was a prerequisite for the French competent authority to accept the 'burnup credit' assumption in the criticality assessment of spent fuel packages. The probability of loading a non-irradiated or non-specified fuel assembly was shown to be acceptable when our identification and irradiation status measurement procedures were used. Furthermore, the task analysis enabled us to improve the working conditions at reactor sites and the quality of the working documentation, and consequently to improve the reliability of the system. The NTL experience of transporting to La Hague, as consignor, more than 10,000 fuel assemblies since the implementation of our system in 1984, without any non-conformance on fuel identification, validated the formalism of this study as well as our assumptions on basic event probabilities. (J.P.N.)

  18. Reliability Estimation with Uncertainties Consideration for High Power IGBTs in 2.3 MW Wind Turbine Converter System

    DEFF Research Database (Denmark)

    Kostandyan, Erik; Ma, Ke

    2012-01-01

    This paper investigates the lifetime of high power IGBTs (insulated gate bipolar transistors) used in large wind turbine applications. Since the IGBTs are critical components in a wind turbine power converter, it is of great importance to assess their reliability in the design phase of the turbine.... Minimum, maximum and average junction temperature profiles for the grid side IGBTs are estimated at each wind speed input value. The selected failure mechanism is crack propagation in the solder joint under the silicon die. Based on the junction temperature profiles and a physics of failure model......, the probabilistic and deterministic damage models are presented with estimated fatigue lives. Reliability levels were assessed by means of the First Order Reliability Method, taking into account uncertainties....
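
    As a pointer to what a First Order Reliability Method computation looks like in its simplest case, the sketch below evaluates the Hasofer-Lind reliability index for a linear capacity-minus-demand limit state with normal variables. All numbers are hypothetical; the paper's damage models and uncertainty set are more elaborate.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical numbers: accumulated fatigue damage demand S vs. capacity R,
# both modeled as normal random variables (normalised damage units).
mu_R, sig_R = 1.00, 0.20   # solder-joint damage capacity
mu_S, sig_S = 0.55, 0.15   # damage accumulated over the service life

# For a linear limit state g = R - S with normal R, S, FORM is exact:
beta = (mu_R - mu_S) / sqrt(sig_R ** 2 + sig_S ** 2)  # Hasofer-Lind index
pf = NormalDist().cdf(-beta)                           # failure probability
print(f"beta = {beta:.2f}, Pf = {pf:.2e}")
```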

  19. A flexible latent class approach to estimating test-score reliability

    NARCIS (Netherlands)

    van der Palm, D.W.; van der Ark, L.A.; Sijtsma, K.

    2014-01-01

    The latent class reliability coefficient (LCRC) is improved by using the divisive latent class model instead of the unrestricted latent class model. This results in the divisive latent class reliability coefficient (DLCRC), which, unlike LCRC, avoids making subjective decisions about the best solution.

  20. On estimation of reliability of a nuclear power plant with tokamak reactor

    International Nuclear Information System (INIS)

    Klemin, A.I.; Smetannikov, V.P.; Shiverskij, E.A.

    1982-01-01

    The results of the analysis of INTOR plant reliability are presented. The first stage of the analysis consists in the calculation of the INTOR plant structural reliability factors (15 of its main systems have been considered). For each system the failure flow parameter W (1/h) and the operational readiness K_r have been determined, and for the plant as a whole, besides these factors, the technological utilization coefficient K_TU and the mean time between failures T_o. The second stage of the reliability analysis consists in investigating methods of improving the reliability factors relative to those calculated at the first stage. It is shown that the reliability of the whole plant is determined to the most essential extent by the reliability of the power supply system. The next largest influence on the INTOR plant reliability is that of the cryogenic system. Calculations of the INTOR plant reliability factors have given the following values: W = 4.5×10⁻³ 1/h, T_o = 152 h, K_r = 0.71, K_TU = 0.4.

  1. A Procedure for Structural Weight Estimation of Single Stage to Orbit Launch Vehicles (Interim User's Manual)

    Science.gov (United States)

    Martinovic, Zoran N.; Cerro, Jeffrey A.

    2002-01-01

    This is an interim user's manual for current procedures used in the Vehicle Analysis Branch at NASA Langley Research Center, Hampton, Virginia, for launch vehicle structural subsystem weight estimation based on finite element modeling and structural analysis. The process is intended to complement traditional methods of conceptual and early preliminary structural design such as the application of empirical weight estimation or application of classical engineering design equations and criteria on one dimensional "line" models. Functions of two commercially available software codes are coupled together. Vehicle modeling and analysis are done using SDRC/I-DEAS, and structural sizing is performed with the Collier Research Corp. HyperSizer program.

  2. Lifetime prediction and reliability estimation methodology for Stirling-type pulse tube refrigerators by gaseous contamination accelerated degradation testing

    Science.gov (United States)

    Wan, Fubin; Tan, Yuanyuan; Jiang, Zhenhua; Chen, Xun; Wu, Yinong; Zhao, Peng

    2017-12-01

    Lifetime and reliability are the two performance parameters of premium importance for modern space Stirling-type pulse tube refrigerators (SPTRs), which are required to operate in excess of 10 years. Demonstration of these parameters presents a significant challenge. This paper proposes a lifetime prediction and reliability estimation method that utilizes accelerated degradation testing (ADT) for SPTRs related to gaseous contamination failure. The method was experimentally validated via three groups of gaseous contamination ADT. First, the performance degradation model based on the mechanism of contamination failure and the material outgassing characteristics of SPTRs was established. Next, a preliminary test was performed to determine whether the mechanism of contamination failure of the SPTRs during ADT is consistent with normal life testing. Subsequently, the experimental program of ADT was designed for SPTRs. Then, three groups of gaseous contamination ADT were performed at elevated ambient temperatures of 40 °C, 50 °C, and 60 °C, respectively, and the estimated lifetimes of the SPTRs under normal conditions were obtained through an acceleration model (Arrhenius model). The results show good fitting of the degradation model with the experimental data. Finally, we obtained the reliability estimation of SPTRs by using the Weibull distribution. The proposed novel methodology enables us to take less than one year to estimate the reliability of SPTRs designed for more than 10 years.
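
    The Arrhenius extrapolation step described above can be sketched compactly: fit ln(lifetime) linearly in inverse absolute temperature across the three stress levels, then evaluate the fit at the use temperature. The lifetimes, temperatures and use condition below are invented for illustration, not the paper's test data.

```python
import numpy as np

# Hypothetical ADT results: pseudo-lifetimes (h) at elevated temperatures.
T_c = np.array([40.0, 50.0, 60.0])              # ambient temperature, deg C
life_h = np.array([60000.0, 30000.0, 16000.0])  # lifetime at each stress level

k_boltz = 8.617e-5                  # Boltzmann constant, eV/K
inv_T = 1.0 / (T_c + 273.15)

# Arrhenius model: life = A * exp(Ea / (k*T))  =>  ln(life) is linear in 1/T.
slope, intercept = np.polyfit(inv_T, np.log(life_h), 1)
Ea = slope * k_boltz                # activation energy, eV
life_use = np.exp(intercept + slope / (25.0 + 273.15))  # extrapolate to 25 degC

print(f"Ea ≈ {Ea:.2f} eV, predicted lifetime at 25 °C ≈ {life_use / 8760:.1f} years")
```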

  3. On systematic and statistic errors in radionuclide mass activity estimation procedure

    International Nuclear Information System (INIS)

    Smelcerovic, M.; Djuric, G.; Popovic, D.

    1989-01-01

    One of the most important requirements during nuclear accidents is the fast estimation of the mass activity of the radionuclides that suddenly and without control reach the environment. The paper points to systematic errors in the procedures of sampling, sample preparation and measurement itself, that in high degree contribute to total mass activity evaluation error. Statistic errors in gamma spectrometry as well as in total mass alpha and beta activity evaluation are also discussed. Beside, some of the possible sources of errors in the partial mass activity evaluation for some of the radionuclides are presented. The contribution of the errors in the total mass activity evaluation error is estimated and procedures that could possibly reduce it are discussed (author)

  4. Nonparametric bootstrap procedures for predictive inference based on recursive estimation schemes

    OpenAIRE

    Corradi, Valentina; Swanson, Norman R.

    2005-01-01

    Our objectives in this paper are twofold. First, we introduce block bootstrap techniques that are (first order) valid in recursive estimation frameworks. Thereafter, we present two examples where predictive accuracy tests are made operational using our new bootstrap procedures. In one application, we outline a consistent test for out-of-sample nonlinear Granger causality, and in the other we outline a test for selecting amongst multiple alternative forecasting models, all of which are possibl...
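
    For readers unfamiliar with block bootstrap methods, the sketch below implements a plain moving-block bootstrap for the mean of a dependent series, the basic ingredient that the paper adapts to recursive estimation frameworks. The AR(1) series, block length and replication count are illustrative assumptions, not the authors' procedure.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, rng):
    """Resample a series by concatenating randomly chosen overlapping blocks,
    preserving short-range dependence that an i.i.d. bootstrap would destroy."""
    n = len(x)
    blocks = np.lib.stride_tricks.sliding_window_view(x, block_len)
    n_blocks = int(np.ceil(n / block_len))
    idx = rng.integers(0, len(blocks), size=n_blocks)
    return np.concatenate(blocks[idx])[:n]

rng = np.random.default_rng(0)
# Toy AR(1) series standing in for out-of-sample forecast loss differentials.
e = np.empty(500)
e[0] = 0.0
for t in range(1, 500):
    e[t] = 0.6 * e[t - 1] + rng.standard_normal()

stats = [moving_block_bootstrap(e, 25, rng).mean() for _ in range(999)]
print("95% bootstrap interval for the mean:", np.percentile(stats, [2.5, 97.5]))
```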

  5. Best estimate procedures for fatigue evaluation in the framework of German KTA code

    Energy Technology Data Exchange (ETDEWEB)

    Seichter, Johannes [Siempelkamp Pruef- und Gutachter-Gesellschaft mbH, Dresden (Germany); Reese, Sven H.; Klucke, Dietmar [E.ON Kernkraft GmbH, Hannover (Germany)

    2013-07-01

    By decreasing the level of conservatism in fatigue analyses it is possible to considerably reduce both the fatigue usage factors calculated for EOL (end of life) and the 'actual CUF' (cumulative fatigue usage factor) of NPP components. It is the opinion of the authors that the mentioned best estimate procedures should be used in the course of fatigue assessment to fulfill, e.g., the demands of the KTA code with regard to environmentally assisted fatigue. (orig.)
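
    The cumulative usage factor at the centre of such analyses is just Miner's linear damage sum over the design transients. A minimal sketch follows, with hypothetical cycle counts and allowable cycles rather than KTA design data.

```python
# Cumulative usage factor (CUF) per Miner's rule.
# All cycle counts and allowable cycles below are hypothetical examples.
load_cases = [
    # (applied cycles n_i, allowable cycles N_i from the design fatigue curve)
    (1200, 1.0e5),   # cold start / shutdown
    (300,  2.0e4),   # thermal shock transients
    (5000, 1.0e6),   # operational pressure fluctuations
]

cuf = sum(n / N for n, N in load_cases)
print(f"CUF = {cuf:.3f} (must stay below 1.0)")
```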

  6. A new procedure for estimating the cell temperature of a high concentrator photovoltaic grid connected system based on atmospheric parameters

    International Nuclear Information System (INIS)

    Fernández, Eduardo F.; Almonacid, Florencia

    2015-01-01

    Highlights: • Concentrating grid-connected systems are working at the maximum power point. • The operating cell temperature is inherently lower than at open circuit. • Two novel methods for estimating the cell temperature are proposed. • Both predict the operating cell temperature from atmospheric parameters. • Experimental results show that both methods perform effectively. - Abstract: The working cell temperature of high concentrator photovoltaic systems is a crucial parameter when analysing their performance and reliability. At the same time, due to the special features of this technology, direct measurement of the cell temperature is very complex, and it is usually obtained by using different indirect methods. High concentrator photovoltaic modules in a system operate at maximum power since they are connected to an inverter. As a result, their cell temperature is lower than the cell temperature of a module at open-circuit voltage, since an important part of the light power density is converted into electricity. In this paper, a procedure for indirectly estimating the cell temperature of a high concentrator photovoltaic system from atmospheric parameters is addressed. This new procedure has the advantage of being valid for estimating the cell temperature of a system at any location of interest if the atmospheric parameters are available. To achieve this goal, two different methods are proposed: one based on simple mathematical relationships and another based on artificial intelligence techniques. Results show that both methods predict the cell temperature of a module connected to an inverter with a low margin of error: a normalised root mean square error lower than or equal to 3.3%, an absolute root mean square error lower than or equal to 2 °C, a mean absolute error lower than or equal to 1.5 °C, and a mean bias error and a mean relative error almost equal to 0%
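
    The first of the two proposed methods rests on a simple mathematical relationship between cell temperature and atmospheric variables. The sketch below fits one plausible form of such a relation (cell temperature as air temperature plus terms in direct irradiance and wind speed) by least squares on synthetic data; the functional form and coefficients are assumptions for illustration, not the paper's model.

```python
import numpy as np

# Hypothetical monitoring data: direct normal irradiance (W/m2),
# air temperature (degC), wind speed (m/s) and measured cell temperatures.
rng = np.random.default_rng(2)
dni = rng.uniform(400, 1000, 200)
t_air = rng.uniform(10, 35, 200)
ws = rng.uniform(0, 8, 200)
t_cell = t_air + 0.045 * dni - 1.2 * ws + rng.normal(0, 1.0, 200)  # synthetic "truth"

# Simple empirical model: T_cell = T_air + a*DNI + b*WS, fitted by least squares.
A = np.column_stack([dni, ws])
coef, *_ = np.linalg.lstsq(A, t_cell - t_air, rcond=None)
pred = t_air + A @ coef

rmse = np.sqrt(np.mean((pred - t_cell) ** 2))
print(f"a = {coef[0]:.3f} degC per W/m2, b = {coef[1]:.2f} degC per m/s, "
      f"RMSE = {rmse:.2f} degC")
```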

  7. Measurement and estimation of maximum skin dose to the patient for different interventional procedures

    International Nuclear Information System (INIS)

    Cheng Yuxi; Liu Lantao; Wei Kedao; Yu Peng; Yan Shulin; Li Tianchang

    2005-01-01

    Objective: To determine the dose distribution and maximum skin dose to the patient for four interventional procedures: coronary angiography (CA), hepatic angiography (HA), radiofrequency ablation (RF) and cerebral angiography (CAG), and to estimate the deterministic effect of radiation on the skin. Methods: Skin dose was measured using LiF:Mg,Cu,P TLD chips. A total of 9 measuring points were chosen on the back of the patient, with two TLDs placed at each point, for the CA, HA and RF procedures, whereas for the CAG procedure two TLDs were placed at one point each on the postero-anterior (PA) and lateral (LAT) sides. Results: The maximum skin dose to the patient was 1683.91 mGy for the HA procedure, with a mean value of 607.29 mGy. The maximum skin dose at the PA point was 959.3 mGy for CAG, with a mean value of 418.79 mGy, while the maximum and mean doses at the LAT point were 704 mGy and 191.52 mGy, respectively. For the RF procedure the maximum dose was 853.82 mGy and the mean was 219.67 mGy. For the CA procedure the maximum dose was 456.1 mGy and the mean was 227.63 mGy. Conclusion: The dose values reported in this study are estimates; the exact maximum could not be provided because it is difficult to measure with a very large number of TLDs. Moreover, a small area of skin exposed to a high dose could be missed, as the dose distribution is continuous. (authors)

  8. Nuclear reactor component populations, reliability data bases, and their relationship to failure rate estimation and uncertainty analysis

    International Nuclear Information System (INIS)

    Martz, H.F.; Beckman, R.J.

    1981-12-01

    Probabilistic risk analyses are used to assess the risks inherent in the operation of existing and proposed nuclear power reactors. In performing such risk analyses the failure rates of various components which are used in a variety of reactor systems must be estimated. These failure rate estimates serve as input to fault trees and event trees used in the analyses. Component failure rate estimation is often based on relevant field failure data from different reliability data sources such as LERs, NPRDS, and the In-Plant Data Program. Various statistical data analysis and estimation methods have been proposed over the years to provide the required estimates of the component failure rates. This report discusses the basis and extent to which statistical methods can be used to obtain component failure rate estimates. The report is expository in nature and focuses on the general philosophical basis for such statistical methods. Various terms and concepts are defined and illustrated by means of numerous simple examples
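
    A concrete instance of the estimation problem discussed above: for a constant failure rate estimated from pooled field data, the maximum-likelihood point estimate is the number of failures divided by the exposure time, with classical confidence bounds from the chi-square distribution. The counts below are hypothetical.

```python
from scipy.stats import chi2

# Hypothetical field data pooled from a reliability data base.
failures = 7            # observed failures of a component type
exposure_h = 3.5e5      # accumulated component operating hours

lam_hat = failures / exposure_h                       # ML failure rate estimate
# Classical two-sided 90% confidence bounds for a Poisson rate
# (time-censored data):
lo = chi2.ppf(0.05, 2 * failures) / (2 * exposure_h)
hi = chi2.ppf(0.95, 2 * (failures + 1)) / (2 * exposure_h)
print(f"lambda = {lam_hat:.2e}/h, 90% CI = ({lo:.2e}, {hi:.2e})/h")
```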

  9. Reliability of different mark-recapture methods for population size estimation tested against reference population sizes constructed from field data.

    Directory of Open Access Journals (Sweden)

    Annegret Grimm

    Full Text Available Reliable estimates of population size are fundamental in many ecological studies and biodiversity conservation. Selecting appropriate methods to estimate abundance is often very difficult, especially if data are scarce. Most studies concerning the reliability of different estimators used simulation data based on assumptions about capture variability that do not necessarily reflect conditions in natural populations. Here, we used data from an intensively studied closed population of the arboreal gecko Gehyra variegata to construct reference population sizes for assessing twelve different population size estimators in terms of bias, precision, accuracy, and their 95%-confidence intervals. Two of the reference populations reflect natural biological entities, whereas the other reference populations reflect artificial subsets of the population. Since individual heterogeneity was assumed, we tested modifications of the Lincoln-Petersen estimator, a set of models in programs MARK and CARE-2, and a truncated geometric distribution. Ranking of methods was similar across criteria. Models accounting for individual heterogeneity performed best in all assessment criteria. For populations from heterogeneous habitats without obvious covariates explaining individual heterogeneity, we recommend using the moment estimator or the interpolated jackknife estimator (both implemented in CAPTURE/MARK). If data for capture frequencies are substantial, we recommend the sample coverage or the estimating equation (both models implemented in CARE-2). Depending on the distribution of catchabilities, our proposed multiple Lincoln-Petersen and a truncated geometric distribution obtained comparably good results. The former usually resulted in a minimum population size and the latter can be recommended when there is a long tail of low capture probabilities. Models with covariates and mixture models performed poorly. Our approach identified suitable methods and extended options to
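
    As a minimal illustration of the family of estimators compared above, the sketch below computes Chapman's bias-corrected version of the Lincoln-Petersen estimator with its usual variance approximation. The capture counts are invented, and this simple form ignores the individual heterogeneity the paper focuses on.

```python
# Chapman's bias-corrected Lincoln-Petersen estimator for a closed population.
# All capture-recapture counts below are hypothetical.
M = 52   # individuals marked in the first session
C = 60   # individuals captured in the second session
R = 18   # marked individuals recaptured in the second session

N_hat = (M + 1) * (C + 1) / (R + 1) - 1
# Seber's variance approximation for the Chapman estimator:
var_N = (M + 1) * (C + 1) * (M - R) * (C - R) / ((R + 1) ** 2 * (R + 2))
se = var_N ** 0.5

print(f"N ≈ {N_hat:.0f} ± {1.96 * se:.0f} (95% CI, normal approximation)")
```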

  10. SU-F-P-44: A Direct Estimate of Peak Skin Dose for Interventional Fluoroscopy Procedures

    International Nuclear Information System (INIS)

    Weir, V; Zhang, J

    2016-01-01

    Purpose: There is an increasing demand for medical physicists to calculate peak skin dose (PSD) for interventional fluoroscopy procedures. The dose information (Dose-Area-Product and Air Kerma) displayed on the console cannot be used directly for this purpose. Our clinical experience shows that use of the existing methods may overestimate or underestimate PSD. This study attempts to develop a direct estimate of PSD from the displayed dose metrics. Methods: An anthropomorphic torso phantom was used for dose measurements for a common fluoroscopic procedure. Entrance skin doses were measured with a Piranha solid state point detector placed on the table surface below the torso phantom. An initial “reference dose rate” (RE) measurement was conducted by comparing the displayed dose rate (mGy/min) to the measured dose rate. The distance from table top to focal spot was taken as the reference distance (RD) at the RE. Table height was then adjusted. The displayed air kerma and DAP were recorded and sent to three physicists to estimate PSD. An inverse square correction was applied to correct the displayed air kerma at various table heights. The PSD estimated by the physicists and the PSD from the proposed method were then compared with the measurements. The estimated DAPs were compared to the displayed DAP readings (mGy·cm²). Results: The difference between the PSD estimated by the proposed method and direct measurements was less than 5%. For the same set of data, the PSD estimated by each of the three physicists differed from the measurements by up to ±52%. The difference between the DAP calculated by the proposed method and the displayed DAP readings was less than 20% at various table heights. Conclusion: PSD may be simply estimated from the displayed air kerma or DAP if the distance between the table top and the tube focal spot, or the x-ray beam area on the table top, is available.
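
    The core of the proposed correction is elementary: scale the displayed air kerma by the inverse square of the ratio between the reference distance and the actual focal-spot-to-skin distance. A sketch with hypothetical distances follows; note that it ignores backscatter and table attenuation, which a clinical estimate would also need to address.

```python
# Inverse-square correction of the displayed air kerma to the table-top
# (patient entrance) position. All distances and doses are hypothetical.
k_displayed = 950.0   # cumulative air kerma at the reference point, mGy
d_ref = 60.0          # focal spot to reference point (RD), cm
d_skin = 52.0         # focal spot to table top after the table was moved, cm

# Air kerma falls off as 1/distance^2 from the focal spot:
psd_estimate = k_displayed * (d_ref / d_skin) ** 2
print(f"PSD estimate ≈ {psd_estimate:.0f} mGy "
      "(ignores backscatter and table attenuation)")
```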

  12. A Quantile Regression Approach to Estimating the Distribution of Anesthetic Procedure Time during Induction.

    Directory of Open Access Journals (Sweden)

    Hsin-Lun Wu

    Full Text Available Although procedure time analyses are important for operating room management, it is not easy to extract useful information from clinical procedure time data. A novel approach was proposed to analyze procedure time during anesthetic induction. A two-step regression analysis was performed to explore influential factors of anesthetic induction time (AIT). Linear regression with stepwise model selection was used to select significant correlates of AIT, and then quantile regression was employed to illustrate the dynamic relationships between AIT and the selected variables at distinct quantiles. A total of 1,060 patients were analyzed. First- and second-year residents (R1-R2) required longer AIT than third- and fourth-year residents and attending anesthesiologists (p = 0.006). Factors prolonging AIT included American Society of Anesthesiologists physical status ≧ III; arterial, central venous and epidural catheterization; and use of bronchoscopy. Presence of the surgeon before induction decreased AIT (p < 0.001). Type of surgery also had a significant influence on AIT. Quantile regression satisfactorily estimated the extra time needed to complete induction for each influential factor at distinct quantiles. Our analysis of AIT demonstrated the benefit of quantile regression analysis in providing a more comprehensive view of the relationships between procedure time and related factors. This novel two-step regression approach has potential applications to procedure time analysis in operating room management.
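
    For readers who want to see the second step in code, the sketch below fits quantile regressions of induction time on two binary covariates with statsmodels. The data are synthetic stand-ins (right-skewed times with a seniority effect and an arterial-line effect), not the study's 1,060-patient dataset, and the stepwise selection step is omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic induction-time data: AIT in minutes, a junior-resident flag and
# an arterial-line indicator standing in for the study's covariates.
rng = np.random.default_rng(3)
n = 400
junior = rng.integers(0, 2, n)
art_line = rng.integers(0, 2, n)
ait = 10 + 3 * junior + 5 * art_line + rng.gamma(2.0, 2.0, n)  # right-skewed

df = pd.DataFrame({"ait": ait, "junior": junior, "art_line": art_line})
for q in (0.25, 0.5, 0.9):
    fit = smf.quantreg("ait ~ junior + art_line", df).fit(q=q)
    print(f"q={q}: extra minutes for junior residents = {fit.params['junior']:.1f}")
```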

  13. An application of the fault tree analysis for the power system reliability estimation

    International Nuclear Information System (INIS)

    Volkanovski, A.; Cepin, M.; Mavko, B.

    2007-01-01

    The power system is a complex system whose main function is to produce, transfer and provide consumers with electrical energy. Combinations of failures of components in the system can result in a failure of power delivery to certain load points and, in some cases, in a full blackout of the power system. Power system reliability directly affects the safe and reliable operation of nuclear power plants, because the loss of offsite power is a significant contributor to the core damage frequency in probabilistic safety assessments of nuclear power plants. A method based on the integration of fault tree analysis with the analysis of power flows in the power system was developed and implemented for power system reliability assessment. The main contributors to power system reliability are identified, both quantitatively and qualitatively. (author)
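
    The quantification step of such an integrated model reduces, once minimal cut sets are known, to the usual rare-event approximation: the top-event probability is approximately the sum over cut sets of the product of basic-event probabilities. A toy sketch for a single load point follows, with hypothetical events and probabilities.

```python
from math import prod

# Hypothetical basic-event probabilities for loss of power at one load point.
p = {"line_A": 2e-3, "line_B": 2e-3, "transformer": 5e-4, "busbar": 1e-4}

# Loss of the load point if (both lines fail) OR (transformer) OR (busbar).
cut_sets = [["line_A", "line_B"], ["transformer"], ["busbar"]]

# First-order (rare-event) approximation of the top-event probability:
p_top = sum(prod(p[e] for e in cs) for cs in cut_sets)
print(f"P(loss of load point) ≈ {p_top:.2e}")
```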

  14. Chemical oil-spill dispersants: evaluation of three laboratory procedures for estimating performance

    International Nuclear Information System (INIS)

    Clayton, J.R.; Marsden, P.

    1992-09-01

    The report presents data from studies designed to evaluate characteristics of selected bench-scale test methods for estimating the performance of chemical agents for dispersing oil from surface slicks into an underlying water column. In order to mitigate the effect of surface slicks with chemical dispersant agents, however, an on-scene coordinator must have information and an understanding of the performance characteristics of available dispersant agents. Performance of candidate dispersant agents can be estimated on the basis of laboratory testing procedures designed to evaluate the performance of different agents. The data presented in the report assist in the evaluation of candidate test methods for estimating the performance of candidate dispersant agents. Three test methods were selected for evaluating performance: the currently accepted Revised Standard EPA test, Environment Canada's Swirling Flask test, and the IFP-Dilution test.

  15. On a Bayesian estimation procedure for determining the average ore grade of a uranium deposit

    International Nuclear Information System (INIS)

    Heising, C.D.; Zamora-Reyes, J.A.

    1996-01-01

    A Bayesian procedure is applied to estimate the average ore grade of a specific uranium deposit (the Morrison formation in New Mexico). Experimental data taken from drilling tests for this formation constitute deposit-specific information, E₂. This information is combined, through a single-stage application of Bayes' theorem, with the more extensive and well-established information on all similar formations in the region, E₁. It is assumed that the best estimate for the deposit-specific case should include the relevant experimental evidence collected from other like formations, giving incomplete information on the specific deposit. This follows traditional methods for resource estimation, which presume that previous collective experience obtained from similar formations in the geological region can be used to infer the geologic characteristics of a less well characterized formation. (Author)
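
    In its simplest conjugate form, the single-stage update described above combines the regional prior (E₁) with the drilling data (E₂) by precision weighting. The sketch below assumes a normal prior and likelihood with hypothetical numbers; the paper's actual distributions may differ.

```python
# Normal-normal Bayesian update of the mean ore grade.
# Regional prior (E1) and deposit-specific drilling data (E2) are hypothetical.
mu0, sig0 = 0.12, 0.03        # prior mean grade (%) and std dev from region E1
x_bar, s, n = 0.09, 0.04, 25  # drilling sample mean, std dev, sample size (E2)

prec_prior = 1.0 / sig0 ** 2  # precision of the prior
prec_data = n / s ** 2        # precision contributed by the sample mean

mu_post = (prec_prior * mu0 + prec_data * x_bar) / (prec_prior + prec_data)
sig_post = (prec_prior + prec_data) ** -0.5
print(f"posterior mean grade = {mu_post:.3f}%, posterior sd = {sig_post:.3f}%")
```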

  16. An iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions, Addendum

    Science.gov (United States)

    Peters, B. C., Jr.; Walker, H. F.

    1975-01-01

    New results and insights concerning a previously published iterative procedure for obtaining maximum-likelihood estimates of the parameters for a mixture of normal distributions were discussed. It was shown that the procedure converges locally to the consistent maximum likelihood estimate as long as a specified parameter is bounded between two limits. Bound values were given to yield optimal local convergence.
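
    The iterative procedure in question is, in modern terms, an EM algorithm. As a self-contained illustration, the sketch below runs EM for a two-component one-dimensional normal mixture on synthetic data; the initialisation and component count are illustrative choices, not the paper's setup.

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=200):
    """EM iterations for a two-component 1-D normal mixture."""
    # Crude initialisation from the data quantiles.
    mu = np.quantile(x, [0.25, 0.75])
    sig = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point.
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) \
                 / (sig * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood updates.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sig

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])
print(em_gaussian_mixture(x))  # expect weights ~ (0.6, 0.4), means ~ (-2, 3)
```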

  17. Estimation of staff lens doses during interventional procedures. Comparing cardiology, neuroradiology and interventional radiology

    International Nuclear Information System (INIS)

    Vano, E.; Sanchez, R.M.; Fernandez, J.M.

    2015-01-01

    The purpose of this article is to estimate lens doses using over-apron active personal dosemeters in interventional catheterisation laboratories (cardiology IC, neuroradiology IN and radiology IR) and to investigate correlations between occupational lens doses and patient doses. Active electronic personal dosemeters placed over the lead apron were used on a sample of 204 IC procedures, 274 IN and 220 IR (all performed at the same university hospital). Patient dose values (kerma-area product) were also recorded to evaluate correlations with occupational doses. Operators used the ceiling-suspended screen in most cases. The median and third-quartile values of equivalent dose Hp(10) per procedure measured over the apron for IC, IN and IR were, respectively, 21/67, 19/44 and 24/54 μSv. Patient dose values (median/third quartile) were 75/128, 83/176 and 61/159 Gy·cm², respectively. The median ratios between doses measured over the apron by operators (protected by the ceiling-suspended screen) and patient doses were 0.36, 0.21 and 0.46 μSv Gy⁻¹ cm⁻², respectively. With the conservative approach used (lens doses estimated from the over-apron chest dosemeter), we came to the conclusion that more than 800 procedures per year and per operator were necessary to reach the new lens dose limit for the three interventional specialties. (authors)
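
    The workload figure quoted above can be checked with one line of arithmetic per specialty: divide an annual lens dose limit by the median per-procedure dose. The sketch below assumes the revised 20 mSv/y eye-lens limit and uses the over-apron medians from the abstract as a conservative proxy for lens dose (no lead glasses assumed).

```python
# Back-of-the-envelope workload check against the revised eye-lens limit.
lens_limit_usv = 20_000.0  # assumed 20 mSv/y annual limit, in microsieverts

# Median over-apron Hp(10) per procedure from the abstract, in microsieverts.
median_dose_per_procedure = {"IC": 21.0, "IN": 19.0, "IR": 24.0}

for lab, d in median_dose_per_procedure.items():
    print(f"{lab}: {lens_limit_usv / d:.0f} procedures/year to reach the limit")
```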

  18. Advanced RESTART method for the estimation of the probability of failure of highly reliable hybrid dynamic systems

    International Nuclear Information System (INIS)

    Turati, Pietro; Pedroni, Nicola; Zio, Enrico

    2016-01-01

    The efficient estimation of system reliability characteristics is of paramount importance for many engineering applications. Real-world system reliability modeling calls for the capability of treating systems that are: i) dynamic, ii) complex, iii) hybrid and iv) highly reliable. Advanced Monte Carlo (MC) methods offer a way to solve these types of problems, for which standard MC simulation can become infeasible owing to the potentially high computational costs. In this paper, the REpetitive Simulation Trials After Reaching Thresholds (RESTART) method is employed, extending it to hybrid systems for the first time (to the authors' knowledge). The estimation accuracy and precision of RESTART highly depend on the choice of the Importance Function (IF) indicating how close the system is to failure: in this respect, proper IFs are here originally proposed to improve the performance of RESTART for the analysis of hybrid systems. The resulting overall simulation approach is applied to estimate the probability of failure of the control system of a liquid hold-up tank and of a pump-valve subsystem subject to degradation induced by fatigue. The results are compared to those obtained by standard MC simulation and by RESTART with classical IFs available in the literature. The comparison shows the improvement in performance obtained by our approach. - Highlights: • We consider the issue of estimating small failure probabilities in dynamic systems. • We employ the RESTART method to estimate the failure probabilities. • New Importance Functions (IFs) are introduced to increase the method performance. • We adopt two dynamic, hybrid, highly reliable systems as case studies. • A comparison with literature IFs proves the effectiveness of the new IFs.
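
    To convey the flavour of RESTART-style splitting without the paper's hybrid dynamics, the sketch below estimates a rare-event probability for a drifted random walk by retrialling trajectories each time they cross an intermediate threshold of the importance function (here simply the walk level). Thresholds, splitting factor and sample sizes are arbitrary illustrative choices; this is a plain fixed-splitting variant, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

def advance(x, lo, hi):
    """Run a drifted random walk until it falls to `lo` or reaches `hi`."""
    while lo < x < hi:
        x += -0.2 + rng.standard_normal()  # negative drift keeps failure rare
    return x

# Rare event: a walk started at 1 reaches level 12 before falling to 0.
# Importance function = current walk level; intermediate thresholds at 4 and 8.
levels, split = [4.0, 8.0, 12.0], 10
particles = [1.0] * 2000
p_hat = 1.0
for next_level in levels:
    reached = [advance(x, 0.0, next_level) for x in particles]
    reached = [x for x in reached if x >= next_level]
    if not reached:
        p_hat = 0.0
        break
    p_hat *= len(reached) / len(particles)        # conditional level probability
    particles = [x for x in reached for _ in range(split)]  # retrial copies

print(f"estimated failure probability ≈ {p_hat:.2e}")
```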

  19. Application of Fault Tree Analysis for Estimating Temperature Alarm Circuit Reliability

    International Nuclear Information System (INIS)

    El-Shanshoury, A.I.; El-Shanshoury, G.I.

    2011-01-01

    Fault Tree Analysis (FTA) is one of the most widely used methods in system reliability analysis. It is a graphical technique that provides a systematic description of the combinations of possible occurrences in a system that can result in an undesirable outcome. The present paper deals with the application of the FTA method in analyzing a temperature alarm circuit. The critical failure of this circuit is failing to alarm when the temperature exceeds a certain limit. To ensure the circuit is safe, a detailed analysis of the faults causing circuit failure is performed by constructing a fault tree diagram (qualitative analysis). Calculations of quantitative circuit reliability parameters such as the Failure Rate (FR) and the Mean Time Between Failures (MTBF) are also done by using the Relex 2009 computer program. Benefits of FTA include assessing system reliability or safety during operation, improving understanding of the system, and identifying root causes of equipment failures.
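
    Under the usual constant-failure-rate assumption, the quantitative step reduces, for a series arrangement in which any single fault defeats the alarm, to summing the basic-event failure rates and inverting for MTBF. A toy sketch with hypothetical component rates, not the paper's Relex inputs:

```python
# Constant-failure-rate quantification of a series system: any single
# component fault causes the top event. Rates are hypothetical per-hour values.
rates = {
    "thermocouple": 5e-6,
    "comparator":   2e-6,
    "relay":        8e-6,
    "buzzer":       1e-6,
}

lam_sys = sum(rates.values())   # series system: failure rates add
mtbf = 1.0 / lam_sys
print(f"FR = {lam_sys:.1e}/h, MTBF = {mtbf:,.0f} h")
```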

  20. Estimating Value of Congestion and of Reliability from Observation of Route Choice Behavior of Car Drivers

    DEFF Research Database (Denmark)

    Prato, Carlo Giacomo; Rasmussen, Thomas Kjær; Nielsen, Otto Anker

    2014-01-01

    In recent years, a consensus has been reached about the relevance of calculating the value of congestion and the value of reliability for better understanding and therefore better prediction of travel behavior. The current study proposed a revealed preference approach that used a large amount...... both congestion and reliability terms. Results illustrated that the value of time and the value of congestion were significantly higher in the peak period because of possible higher penalties for drivers being late and consequently possible higher time pressure. Moreover, results showed...... that the marginal rate of substitution between travel time reliability and total travel time did not vary across periods and traffic conditions, with the obvious caveat that the absolute values were significantly higher for the peak period. Last, results showed the immense potential of exploiting the growing...

  1. Evaluation of procedures for estimating ruminal particle turnover and diet digestibility in ruminant animals

    International Nuclear Information System (INIS)

    Cochran, R.C.

    1985-01-01

    Procedures used in estimating ruminal particle turnover and diet digestibility were evaluated in a series of independent experiments. Experiments 1 and 2 evaluated the influence of sampling site, mathematical model and intraruminal mixing on estimates of ruminal particle turnover in beef steers grazing crested wheatgrass or offered ad libitum levels of prairie hay once daily, respectively. Particle turnover rate constants were estimated by intraruminal administration (via rumen cannula) of ytterbium (Yb)-labeled forage, followed by serial collection of rumen digesta or fecal samples. Rumen Yb concentrations were transformed to natural logarithms and regressed on time. The influence of sampling site (rectum versus rumen) on turnover estimates was modified by the model used to fit fecal marker excretion curves in the grazing study. In contrast, estimated turnover rate constants from rumen sampling were smaller (P < 0.05) than rectally derived rate constants, regardless of the fecal model used, when steers were fed once daily. In Experiment 3, in vitro residues subjected to acid or neutral detergent fiber extraction (IVADF and IVNDF), acid detergent fiber incubated in cellulase (ADFIC) and acid detergent lignin (ADL) were evaluated as internal markers for predicting diet digestibility. Both IVADF and IVNDF displayed variable accuracy for prediction of in vivo digestibility, whereas ADL and ADFIC inaccurately predicted digestibility of all diets.
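
    The turnover calculation described above (regressing log marker concentration on time) amounts to fitting first-order decay, with the turnover rate constant given by the negative of the slope. A minimal sketch with invented Yb concentrations:

```python
import numpy as np

# Hypothetical ruminal Yb-marker concentrations after a pulse dose.
t_h = np.array([4, 8, 12, 18, 24, 36, 48], dtype=float)        # h post-dosing
yb = np.array([310, 245, 196, 140, 102, 55, 29], dtype=float)  # mg Yb/kg DM

# First-order turnover: C(t) = C0 * exp(-k t)  =>  ln C is linear in t.
slope, intercept = np.polyfit(t_h, np.log(yb), 1)
k = -slope
print(f"particle turnover rate = {k:.3f}/h, "
      f"ruminal half-life = {np.log(2) / k:.1f} h")
```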

  2. A recommended procedure for estimating the cosmic-ray spectral parameter of a simple power law

    CERN Document Server

    Howell, L W

    2002-01-01

    A simple power law model with a single spectral index α₁ is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10¹³ eV. Two procedures for estimating α₁, the method of moments and maximum likelihood (ML), are developed and their statistical performance is compared. The ML procedure is shown to be the superior approach and is then generalized for application to real cosmic-ray data sets. Several other important results, such as the relationship between collecting power and detector energy resolution and the inclusion of a non-Gaussian detector response function, are presented. These results have many practical benefits in the design phase of a cosmic-ray detector, as they permit instrument developers to make important trade studies in design parameters as a function of one of the science objectives.
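
    The ML estimator for a pure power law has a well-known closed form, which makes the comparison above easy to reproduce: α̂ = 1 + n / Σ ln(E_i/E_min). The sketch below draws a synthetic sample by inverse-CDF sampling and applies the estimator; the sample size, E_min and true index are arbitrary choices, and the paper's detector-response refinements are omitted.

```python
import numpy as np

def ml_spectral_index(E, E_min):
    """Maximum-likelihood index for dN/dE ∝ E^(-alpha), E >= E_min."""
    E = np.asarray(E, float)
    n = E.size
    alpha_hat = 1.0 + n / np.log(E / E_min).sum()
    se = (alpha_hat - 1.0) / np.sqrt(n)   # asymptotic standard error
    return alpha_hat, se

# Synthetic GCR-like sample drawn from a pure power law with alpha = 2.7.
rng = np.random.default_rng(6)
E_min, alpha_true = 1e12, 2.7
u = rng.random(5000)
E = E_min * (1 - u) ** (-1.0 / (alpha_true - 1.0))  # inverse-CDF sampling

print(ml_spectral_index(E, E_min))  # expect roughly (2.7, 0.02)
```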

  3. Estimation of breast dose and cancer risk in chest and abdomen CT procedures

    International Nuclear Information System (INIS)

    Eltahir, Suha Abubaker Ali

    2013-05-01

    The use of CT in medical diagnosis delivers radiation doses to patients that are higher than those from other radiological procedures. A lack of optimized protocols can be an additional source of increased dose in developing countries. The aims of this study are, first, to measure patient doses during CT chest and abdomen procedures; second, to estimate the radiation dose to the breast; and third, to quantify the radiation risks during the procedures. Patient doses from two common CT examinations were obtained from four hospitals in Khartoum. The patient doses were estimated using measurement of CT dose indices (CTDI), exposure-related parameters, and the IMPACT spreadsheet based on NRPB conversion factors. A large variation of mean organ doses among hospitals was observed for similar CT examinations. These variations largely originated from the different CT scanning protocols used in different hospitals and from scanner type. The largest range was found for CT of the chest, for which the dose varied from 2.3 to 47 (average 24.7) mSv; for abdomen CT, it was 1.6 to 18.8 (average 10.2) mSv. Radiation dose to the breast ranged from 1.6 to 32.9 mSv for the chest and 1.1 to 13.2 mSv for the abdomen. The radiation risk per procedure was high. The obtained values were mostly higher than the organ doses reported from other studies. It was concluded that current clinical chest and abdomen protocols result in variable radiation doses to the breast. The magnitude of exposure may have implications for imaging strategies. (Author)

  4. R&D program benefits estimation: DOE Office of Electricity Delivery and Energy Reliability

    Energy Technology Data Exchange (ETDEWEB)

    None, None

    2006-12-04

    The overall mission of the U.S. Department of Energy’s Office of Electricity Delivery and Energy Reliability (OE) is to lead national efforts to modernize the electric grid, enhance the security and reliability of the energy infrastructure, and facilitate recovery from disruptions to the energy supply. In support of this mission, OE conducts a portfolio of research and development (R&D) activities to advance technologies to enhance electric power delivery. Multiple benefits are anticipated to result from the deployment of these technologies, including higher quality and more reliable power, energy savings, and lower cost electricity. In addition, OE engages State and local government decision-makers and the private sector to address issues related to the reliability and security of the grid, including responding to national emergencies that affect energy delivery. The OE R&D activities are comprised of four R&D lines: High Temperature Superconductivity (HTS), Visualization and Controls (V&C), Energy Storage and Power Electronics (ES&PE), and Distributed Systems Integration (DSI).

  5. Estimation of electromagnetic pumps reliability based on the results of their exploitation

    International Nuclear Information System (INIS)

    Vitkovskij, I.V.; Kirillov, I.R.; Chajka, P.Yu.; Kryuchkov, E.A.; Poplavskij, V.M.; Nosov, Yu.V.; Oshkanov, N.N.

    2007-01-01

    Main factors determining the service life of induction electromagnetic pumps (IEP) are analyzed. It is shown that IEP serviceability depends mainly on winding reliability. The main damaging factors acting on the windings are noted. Expressions for the calculation of the failure intensity of the coil and case insulations are obtained. [ru]

  6. Attenuation of the Squared Canonical Correlation Coefficient under Varying Estimates of Score Reliability

    Science.gov (United States)

    Wilson, Celia M.

    2010-01-01

    Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability.…

  7. Application of fuzzy-MOORA method: Ranking of components for reliability estimation of component-based software systems

    Directory of Open Access Journals (Sweden)

    Zeeshan Ali Siddiqui

    2016-01-01

    Full Text Available The component-based software system (CBSS) development technique is an emerging discipline that promises to take software development into a new era. As hardware systems are presently constructed from kits of parts, software systems may also be assembled from components. It is more reliable to reuse software than to create it. It is the glue code and the reliability of the individual components that contribute to the reliability of the overall system. Every component contributes to overall system reliability according to the number of times it is used, known as the usage frequency of the component; components used most often are of critical usage. The usage frequency determines the weight of each component, and each component contributes to the overall reliability of the system according to its weight. Therefore, a ranking of components may be obtained by analyzing their reliability impacts on the overall application. In this paper, we propose the application of fuzzy multi-objective optimization on the basis of ratio analysis (Fuzzy-MOORA). The method helps us find the most suitable alternative (software component) from a set of feasible alternatives. It is an accurate and easy-to-understand tool for solving multi-criteria decision-making problems that have imprecise and vague evaluation data. By the use of ratio analysis, the proposed method determines the most suitable alternative among all possible alternatives, and its dimensionless measure ranks components for estimating CBSS reliability in a non-subjective way. Finally, three case studies are shown to illustrate the use of the proposed technique.
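
    The crisp (non-fuzzy) core of MOORA ratio analysis is compact enough to show directly: vector-normalise the decision matrix, then score each alternative as the sum of normalised benefit criteria minus the sum of normalised cost criteria. The component matrix and criteria below are hypothetical, and the paper's fuzzy extension replaces the crisp entries with fuzzy numbers.

```python
import numpy as np

# Hypothetical decision matrix: rows = software components, columns = criteria.
X = np.array([
    # reliability, usage frequency, complexity (a cost criterion)
    [0.92, 120, 7],
    [0.85, 300, 4],
    [0.97,  45, 9],
    [0.88, 210, 5],
], dtype=float)
benefit = np.array([True, True, False])   # complexity is to be minimised

# MOORA: vector normalisation, then benefit criteria add and cost criteria
# subtract to give a dimensionless score per alternative.
R = X / np.sqrt((X ** 2).sum(axis=0))
scores = R[:, benefit].sum(axis=1) - R[:, ~benefit].sum(axis=1)
ranking = np.argsort(-scores)

print("component ranking (best first):", ranking)
print("scores:", scores.round(3))
```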

  8. Artificial Intelligence Procedures for Tree Taper Estimation within a Complex Vegetation Mosaic in Brazil.

    Directory of Open Access Journals (Sweden)

    Matheus Henrique Nunes

    Full Text Available Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and, subsequently, merchantable volume. Artificial intelligence approaches can be useful techniques for minimizing estimation errors within complex vegetation mosaics. We evaluated the performance of Random Forest® regression trees and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals from the diameter prediction and provided the least precise and least accurate estimates for all forest types. This study provides insights into the superiority of a neural network, which provided advantages regarding the handling of local effects.
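
    A rough sketch of the comparison, using scikit-learn stand-ins: a random forest and a small neural network are both fitted to synthetic taper data (diameter as a function of DBH, total height and relative height). The synthetic taper function, features and hyperparameters are assumptions for illustration, not the paper's dataset or architectures.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic taper data: diameter d at relative height h/H given DBH and H.
rng = np.random.default_rng(7)
n = 1500
dbh = rng.uniform(10, 60, n)
H = dbh * rng.uniform(0.9, 1.3, n)
rel_h = rng.uniform(0, 1, n)
d = dbh * (1 - rel_h) ** 1.2 + rng.normal(0, 0.8, n)  # simple synthetic "truth"

X = np.column_stack([dbh, H, rel_h])
split = 1200  # first 1200 rows for training, rest for testing
models = [
    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0)),
    ("neural network", make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0))),
]
for name, model in models:
    model.fit(X[:split], d[:split])
    rmse = np.sqrt(np.mean((model.predict(X[split:]) - d[split:]) ** 2))
    print(f"{name}: RMSE = {rmse:.2f} cm")
```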

  9. MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data.

    Science.gov (United States)

    Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela

    2013-05-01

    The present study provides a novel MATLAB-based parameter estimation procedure for individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of Gauss-Newton's and Levenberg-Marquardt's algorithms, which assures the full convergence of the process and the containment of computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was warranted by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of the CV% for the parameter worst estimated by SAAM II, while maintaining all model-parameter CV% within acceptable limits. The MATLAB-based procedure was suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
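
    The Levenberg-Marquardt step of such a procedure can be sketched with SciPy's least_squares. The two-exponential decay below is a hypothetical stand-in for the HID kinetic model (whose equations are not reproduced in the abstract), fitted to synthetic data.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(p, t, y):
    """Residuals of a two-exponential decay model (hypothetical stand-in)."""
    a1, k1, a2, k2 = p
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) - y

# Synthetic concentration-like data over 3 h of sampling.
rng = np.random.default_rng(8)
t = np.linspace(2, 180, 30)   # minutes after the glucose bolus
y = 60 * np.exp(-0.09 * t) + 15 * np.exp(-0.012 * t) + rng.normal(0, 1.0, 30)

# Levenberg-Marquardt (MINPACK) nonlinear least squares from a rough guess.
fit = least_squares(residuals, x0=[50, 0.05, 10, 0.01], args=(t, y), method="lm")
print("estimated parameters:", fit.x.round(4))
```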

  10. Estimate of ovarian dose and entrance skin dose in uterine artery embolization procedures

    International Nuclear Information System (INIS)

    Silva, Marcia C.; Nasser, Felipe; Affonso, Breno B.; Araujo Junior, Raimundo T.; Zlotnik, Eduardo; Messina, Marcos L.; Baracat, Edmund C.

    2010-01-01

    The goal of this study was to estimate the ovarian dose and entrance skin dose (ESD) of patients who underwent the uterine artery embolization (UAE) procedure. To achieve this, 49 UAE procedures were followed, and the image acquisition parameters were recorded for the calculation of the ESD from the output of the X-ray tube. The estimation of the ovarian dose was carried out by the insertion of a vaginal probe containing 3 TLDs. The obtained values were compared with the results of other authors, and higher values of ovarian dose (28.97 cGy) and ESD (403.57 cGy) were found in this work. Analysis of the results showed that this outcome was due mainly to the high number of arteriography series and the frame rates employed. Following on from these observations, the UAE protocol was altered, reducing the frame rate from 2 to 1 frames per second. Efforts to reduce the number of arteriography series also became part of subsequent procedures. (author)

  11. Procedure for estimating facility decommissioning costs for non-fuel-cycle nuclear facilities

    International Nuclear Information System (INIS)

    Short, S.M.

    1988-01-01

    The Nuclear Regulatory Commission (NRC) staff has been reappraising its regulatory position on the decommissioning of nuclear facilities over the last several years. Approximately 30 reports covering the technology, safety, and costs of decommissioning reference nuclear facilities have been published during this period in support of this effort. One of these reports, Technology, Safety, and Costs of Decommissioning Reference Non-Fuel-Cycle Nuclear Facilities (NUREG/CR-1754), was published in 1981 and was felt by the NRC staff to be outdated. The Pacific Northwest Laboratory (PNL) was asked by the NRC staff to revise the information provided in that report to reflect the latest information on decommissioning technology and costs and to publish the results as an addendum to the previous report. During the course of this study, the NRC staff also asked that PNL provide a simplified procedure for estimating the decommissioning costs of non-fuel-cycle nuclear facilities, the purpose being to provide NRC staff with the means to easily generate their own estimate of decommissioning costs for a given facility for comparison against a licensee's submittal. This report presents the procedure developed for use by NRC staff.

  12. Automatic training and reliability estimation for 3D ASM applied to cardiac MRI segmentation.

    Science.gov (United States)

    Tobon-Gomez, Catalina; Sukno, Federico M; Butakoff, Constantine; Huguet, Marina; Frangi, Alejandro F

    2012-07-07

    Training active shape models requires collecting manual ground-truth meshes in a large image database. While shape information can be reused across multiple imaging modalities, intensity information needs to be imaging modality and protocol specific. In this context, this study has two main purposes: (1) to test the potential of using intensity models learned from MRI simulated datasets and (2) to test the potential of including a measure of reliability during the matching process to increase robustness. We used a population of 400 virtual subjects (XCAT phantom), and two clinical populations of 40 and 45 subjects. Virtual subjects were used to generate simulated datasets (MRISIM simulator). Intensity models were trained both on simulated and real datasets. The trained models were used to segment the left ventricle (LV) and right ventricle (RV) from real datasets. Segmentations were also obtained with and without reliability information. Performance was evaluated with point-to-surface and volume errors. Simulated intensity models obtained average accuracy comparable to inter-observer variability for LV segmentation. The inclusion of reliability information reduced volume errors in hypertrophic patients (EF errors from 17 ± 57% to 10 ± 18%; LV MASS errors from -27 ± 22 g to -14 ± 25 g), and in heart failure patients (EF errors from -8 ± 42% to -5 ± 14%). The RV model of the simulated images needs further improvement to better resemble image intensities around the myocardial edges. Both for real and simulated models, reliability information increased segmentation robustness without penalizing accuracy.

  13. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    OpenAIRE

    Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong

    2013-01-01

    In this paper, the combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...

  14. Availability estimation of repairable systems using reliability graph with general gates (RGGG)

    International Nuclear Information System (INIS)

    Goh, Gyoung Tae

    2009-02-01

    By performing risk analysis, we may obtain sufficient information about a system to redesign it and lower the probability of the occurrence of an accident or mitigate the ensuing consequences. The concept of reliability is widely used to express the risk of systems, but reliability applies to non-repairable systems, whereas nuclear power plant systems are repairable. In repairable systems, repairable components can improve the availability of a system because faults that arise in components can be recovered. Hence, availability is the more appropriate concept for repairable systems. Reliability graph with general gates (RGGG) is one of the system reliability analysis methods and is very intuitive compared with other methods, but it has not yet been applied to repairable systems. The objective of this study is to extend the RGGG to enable the analysis of repairable systems. Determining the probability table for each node is a critical step in calculating system availability with the RGGG method; finding the proper algorithms and constructing probability tables for various situations is therefore a major part of this study. The other part is an example of applying the RGGG method to a real system. We derive the proper algorithms and probability tables for independent repairable systems, dependent series repairable systems, and k-out-of-m (k/m) redundant parallel repairable systems. Using these probability tables, the availability of a real system can be evaluated; for the purpose of this analysis, the charging pumps subsystem of the chemical and volume control system (CVCS) was selected as the example shown in the latter part of this study. The RGGG method extended for repairable systems retains the intuitiveness of the original method, and we confirm that the availability analysis results from the repairable RGGG method are exact.
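
    The record does not reproduce the probability tables. For the simplest case it covers, independent repairable components, the steady-state availability follows directly from MTBF and MTTR, and a k-out-of-m redundant group can be evaluated with a binomial sum, as in the sketch below (identical, independent components assumed; the RGGG tables additionally handle the dependent cases, which this sketch does not).

```python
# Sketch: steady-state availability of an independent repairable component and
# of a k-out-of-m redundant parallel group of identical components.
from math import comb

def component_availability(mtbf_h, mttr_h):
    return mtbf_h / (mtbf_h + mttr_h)

def k_out_of_m_availability(k, m, a):
    """System is up when at least k of the m components are up."""
    return sum(comb(m, j) * a**j * (1.0 - a)**(m - j) for j in range(k, m + 1))

a = component_availability(mtbf_h=2000.0, mttr_h=24.0)
print("component availability:", round(a, 5))
print("2-out-of-3 availability:", round(k_out_of_m_availability(2, 3, a), 6))
```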

  15. Comparison of J estimating procedures for a solid subjected to bending loads

    International Nuclear Information System (INIS)

    Smith, E.

    1982-01-01

    A. Zahoor and M.F. Kanninen have recently developed a simple procedure for estimating the magnitude of the J-integral for through-wall cracks in pipes subjected to bending loads. This paper considers their procedure and, to check its predictions against available numerical results, explores it in detail for the case of a crack in a solid deforming under plane-strain bending conditions. In this case, an implicit assumption in the procedure is that the plastic rotation depends on the ligament size and not on any other geometrical dimension. This assumption is strictly valid only for deep cracks, and this paper shows the degree of inaccuracy incurred when it is applied to shallow cracks. The assumption is also shown to correlate with the existence of a unique relation, independent of geometrical parameters, between the ligament net-section stress and the J-integral, and also with the existence of C.E. Turner's plastic η factors. 12 refs

  16. Estimation of staff lens doses during interventional procedures. Comparing cardiology, neuroradiology and interventional radiology.

    Science.gov (United States)

    Vano, E; Sanchez, R M; Fernandez, J M

    2015-07-01

    The purpose of this article is to estimate lens doses using over-apron active personal dosemeters in interventional catheterisation laboratories (cardiology IC, neuroradiology IN and radiology IR) and to investigate correlations between occupational lens doses and patient doses. Active electronic personal dosemeters placed over the lead apron were used in a sample of 204 IC, 274 IN and 220 IR procedures (all performed at the same university hospital). Patient dose values (kerma-area product) were also recorded to evaluate correlations with occupational doses. Operators used the ceiling-suspended screen in most cases. The median and third-quartile values of equivalent dose Hp(10) per procedure measured over the apron for IC, IN and IR were, respectively, 21/67, 19/44 and 24/54 µSv. Patient dose values (median/third quartile) were 75/128, 83/176 and 61/159 Gy cm², respectively. The median ratios between doses measured over the apron by operators (protected by the ceiling-suspended screen) and patient doses were 0.36, 0.21 and 0.46 µSv Gy⁻¹ cm⁻², respectively. With the conservative approach used (lens doses estimated from the over-apron chest dosemeter), we concluded that more than 800 procedures y⁻¹ per operator would be necessary to reach the new lens dose limit in any of the three interventional specialties. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  17. Evaluation of Acid Digestion Procedures to Estimate Mineral Contents in Materials from Animal Trials

    Directory of Open Access Journals (Sweden)

    M. N. N. Palma

    2015-11-01

    Full Text Available Rigorously standardized laboratory protocols are essential for meaningful comparison of data from multiple sites. Considering that interactions of minerals with organic matrices may vary depending on the nature of the material, each material may place particular demands on the digestion procedure. Acid digestion procedures were evaluated using different nitric to perchloric acid ratios and one- or two-step digestion to estimate the concentration of calcium, phosphorus, magnesium, and zinc in samples of carcass, bone, excreta, concentrate, forage, and feces. Six procedures were evaluated: ratios of nitric to perchloric acid of 2:1, 3:1, and 4:1 v/v in a one- or two-step digestion. There were no direct or interaction effects (p>0.01) of the nitric to perchloric acid ratio or the number of digestion steps on magnesium and zinc contents. Calcium and phosphorus contents showed a significant (p<0.01) effect of the acid ratio in bone samples, whereas the ratio did not affect (p>0.01) calcium or phosphorus contents in carcass, excreta, concentrate, forage, and feces. The number of digestion steps did not affect mineral content (p>0.01). Estimation of the concentrations of calcium, phosphorus, magnesium, and zinc in carcass, excreta, concentrate, forage, and feces samples can therefore be performed using a digestion solution of nitric to perchloric acid 4:1 v/v in a one-step digestion. However, bone samples demand a stronger digestion solution, represented by an increased proportion of perchloric acid; a digestion solution of nitric to perchloric acid 2:1 v/v in a one-step digestion is recommended.

  18. Estimation of loss of 40K during different cooking procedures of rice

    International Nuclear Information System (INIS)

    Aparna, K.R.; Karunakara, N.; Selvi, B.S.; Joshi, R.M.; Ravi, P.M.

    2008-01-01

    The present regulations on toxic element intake are based on the assumption that 100% of the toxin present in raw materials such as cereals, pulses and vegetables is taken up by human beings through ingestion. This is not realistic, because many toxic materials are lost during cooking processes such as washing, peeling, etc. To take into account the loss of radionuclides during cooking, some regulatory agencies use Retention Factors (Fr) and Processing Efficiencies (Pe) for impact assessment. In Karnataka, rice is the major dietary item and the cooking procedure varies from place to place. This paper presents the results of the estimation of Fr and Pe for two types of cooking procedures, for raw rice and boiled rice, commonly used in Karnataka. 40K is used as the tracer in the present study because of its natural abundance, easy detection by gamma-ray spectrometry and chemical resemblance to 137Cs. The concentration of 40K in raw and processed food was measured by gamma-ray spectrometry using an HPGe detector, and Fr and Pe were estimated. The value of Fr ranges from 0.6 to 0.85 for raw rice and from 0.41 to 0.72 for boiled rice. The values of Pe vary from 0.9 to 1 for both types of rice. In the absence of site-specific data for 137Cs, these data can be used for the calculation of 137Cs in cooked rice under accident conditions at nuclear installations. (author)

  19. Reliable Quantification of the Potential for Equations Based on Spot Urine Samples to Estimate Population Salt Intake

    DEFF Research Database (Denmark)

    Huang, Liping; Crino, Michelle; Wu, Jason Hy

    2016-01-01

    BACKGROUND: Methods based on spot urine samples (a single sample at one time-point) have been identified as a possible alternative approach to 24-hour urine samples for determining mean population salt intake. OBJECTIVE: The aim of this study is to identify a reliable method for estimating mean population salt intake from spot urine samples. This will be done by comparing the performance of existing equations against one another and against estimates derived from 24-hour urine samples. The effects of factors such as ethnicity, sex, age, body mass index, antihypertensive drug use, and health status will be examined. Data will be converted to a standard format; individual participant records will be compiled and a series of analyses will be completed to: (1) compare existing equations for estimating 24-hour salt intake from spot urine samples with 24-hour urine samples, and assess the degree of bias according to key demographic and clinical characteristics.

  20. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry--BREALD-30.

    Science.gov (United States)

    Junkes, Monica C; Fraiz, Fabian C; Sardenberg, Fernanda; Lee, Jessica Y; Paiva, Saul M; Ferreira, Fernanda M

    2015-01-01

    The aim of the present study was to translate, perform the cross-cultural adaptation of the Rapid Estimate of Adult Literacy in Dentistry to Brazilian-Portuguese language and test the reliability and validity of this version. After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. The BREALD-30 demonstrated good internal reliability. Cronbach's alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent's perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent's perception regarding his/her child's oral health remained significant in the multivariate analysis. The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil.
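
    Internal consistency in studies like this one is typically summarized by Cronbach's alpha. As a small, self-contained illustration of how that statistic is computed (on randomly generated item scores, not the BREALD-30 data), consider:

```python
# Sketch: Cronbach's alpha from a respondents-by-items score matrix.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var_sum / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(258, 1))                          # latent trait
items = (ability + rng.normal(0.0, 0.8, (258, 30)) > 0).astype(int)
print("alpha:", round(cronbach_alpha(items), 3))
```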

  1. Estimation of reliability on digital plant protection system in nuclear power plants using fault simulation with self-checking

    International Nuclear Information System (INIS)

    Lee, Jun Seok; Kim, Suk Joon; Seong, Poong Hyun

    2004-01-01

    Safety-critical digital systems in nuclear power plants require high design reliability; reliable software design and accurate prediction methods for system reliability are therefore important problems. In reliability analysis, the error detection coverage of the system is one of the crucial factors; however, it is difficult to evaluate the error detection coverage of digital instrumentation and control systems in nuclear power plants due to the complexity of the system. To evaluate the error detection coverage with high efficiency and at low cost, simulation-based fault injection with self-checking is needed for digital instrumentation and control systems in nuclear power plants. The target system is the local coincidence logic in the digital plant protection system, and a simplified software model of this target system is used in this work. A C++-based hardware description of a microcomputer simulator system is used to evaluate the error detection coverage of the system. From the simulation results, it is possible to estimate the error detection coverage of the digital plant protection system in nuclear power plants using the simulation-based fault injection method with self-checking. (author)
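
    The record's coverage estimate comes from simulation-based fault injection. The toy sketch below illustrates the estimator itself: inject random stuck-at faults, count those that actually corrupt the output, and report the fraction caught by a self-checking stage. The 2-out-of-3 vote and the deliberately partial checker are invented stand-ins, not the plant protection logic.

```python
# Sketch: Monte Carlo fault injection with a self-checking stage, estimating
# error-detection coverage = detected effective faults / effective faults.
import random

def run_logic(inputs, stuck_bit=None):
    bits = list(inputs)
    if stuck_bit is not None:
        bits[stuck_bit] = 1                       # injected stuck-at-1 fault
    out = int(sum(bits) >= 2)                     # 2-out-of-3 coincidence vote
    check_ok = bits[:2] == list(inputs[:2])       # checker monitors 2 of 3 bits
    return out, check_ok

random.seed(0)
effective = detected = 0
for _ in range(100_000):
    inputs = tuple(random.randint(0, 1) for _ in range(3))
    bit = random.randrange(3)
    good, _ = run_logic(inputs)
    faulty, check_ok = run_logic(inputs, stuck_bit=bit)
    if faulty != good:                            # fault propagated to output
        effective += 1
        detected += not check_ok                  # self-check flagged the fault
print("estimated error-detection coverage:", detected / effective)
```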

  2. Reliable methods for computer simulation error control and a posteriori estimates

    CERN Document Server

    Neittaanmäki, P

    2004-01-01

    Recent decades have seen very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, the main goal of which is to develop methods of reliable computations. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chie…

  3. Estimating the Standard Error of the Judging in a modified-Angoff Standards Setting Procedure

    Directory of Open Access Journals (Sweden)

    Robert G. MacCann

    2004-03-01

    Full Text Available For a modified Angoff standards-setting procedure, two methods of calculating the standard error of the judging were compared. The Central Limit Theorem (CLT) method is easy to calculate and uses readily available data. It estimates the variance of mean cut scores as a function of the variance of cut scores within a judging group, based on the independent judgements at Stage 1 of the process. Its theoretical drawback is that it is unable to take account of the effects of collaboration among the judges at Stages 2 and 3. The second method, an application of equipercentile (EQP) equating, relies on the selection of very large stable candidatures and the standardisation of the raw score distributions to remove effects associated with test difficulty. The standard error estimates were then empirically obtained from the mean cut score variation observed over a five-year period. For practical purposes, the two methods gave reasonable agreement, with the CLT method working well for the top band, the band that attracts most public attention. For some bands in English and Mathematics, the CLT standard error was smaller than the EQP estimate, suggesting the CLT method be used with caution as an approximate guide only.
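
    The CLT method reduces to the familiar standard error of a mean applied to the Stage 1 cut scores. A minimal sketch with hypothetical judge data:

```python
# Sketch: CLT estimate of the standard error of the judging, i.e. the standard
# error of the mean cut score over independent Stage 1 judgements.
import statistics

stage1_cuts = [54, 58, 61, 55, 60, 57, 59, 62, 56, 58]   # hypothetical judges
n = len(stage1_cuts)
se = statistics.stdev(stage1_cuts) / n ** 0.5
print(f"mean cut = {statistics.mean(stage1_cuts):.1f}, SE = {se:.2f}")
```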

  4. Estimating temperature reactivity coefficients by experimental procedures combined with isothermal temperature coefficient measurements and dynamic identification

    International Nuclear Information System (INIS)

    Tsuji, Masashi; Aoki, Yukinori; Shimazu, Yoichiro; Yamasaki, Masatoshi; Hanayama, Yasushi

    2006-01-01

    A method to evaluate the moderator temperature coefficient (MTC) and the Doppler coefficient through experimental procedures performed during reactor physics tests of PWR power plants is proposed. This method combines isothermal temperature coefficient (ITC) measurement experiments and reactor power transient experiments at low power conditions for dynamic identification. In the dynamic identification, either one of the temperature coefficients can be determined in such a way that the frequency response characteristics of the reactivity change observed by a digital reactivity meter are reproduced from measured data of neutron count rate and average coolant temperature. The other, unknown coefficient can then be determined by subtracting the coefficient obtained from the dynamic identification from the ITC. As the proposed method can directly estimate the Doppler coefficient, the applicability of conventional core design codes to predict the Doppler coefficient can be verified for new types of fuels such as mixed oxide fuels. A digital simulation study was carried out to show the feasibility of the proposed method. The numerical analysis showed that the MTC and the Doppler coefficient can be estimated accurately and that, even if there are uncertainties in the parameters of the reactor kinetics model, the accuracy of the estimated values is not seriously impaired. (author)

  5. Estimation of occupational eye lens dose in interventional procedures; Estimacion de la dosis ocupacional en el cristalino en procedimientos intervencionistas

    Energy Technology Data Exchange (ETDEWEB)

    Portas Ferradas, B. C.; Chapel Gomez, M. L.; Jimenez Alarcon, J. I.

    2011-07-01

    This paper presents the estimated doses to the eye lens of workers exposed during interventional radiology and cardiology procedures, based on measurements made with thermoluminescent dosemeters placed near the lens.

  6. A new lifetime estimation model for a quicker LED reliability prediction

    Science.gov (United States)

    Hamon, B. H.; Mendizabal, L.; Feuillet, G.; Gasse, A.; Bataillou, B.

    2014-09-01

    LED reliability and lifetime prediction is a key point for Solid State Lighting adoption. For this purpose, one hundred and fifty LEDs were aged for a reliability analysis. The LEDs were grouped under nine current-temperature stress conditions, with stress driving currents between 350 mA and 1 A and ambient temperatures between 85°C and 120°C. Using integrating sphere and I(V) measurements, a cross-study of the evolution of electrical and optical characteristics was performed. Results show two main failure mechanisms regarding lumen maintenance. The first is the typically observed lumen depreciation; the second is a much quicker depreciation related to an increase of the leakage and non-radiative currents. Models of the typical lumen depreciation and of the leakage resistance depreciation were built using the electrical and optical measurements from the aging tests. The combination of these models enables a new method for quicker LED lifetime prediction; the two models have been used for LED lifetime predictions.
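
    The record does not give the functional form of its depreciation models. A common choice for the "typical" lumen-depreciation mechanism is the exponential form L(t) = B·exp(-αt), as in TM-21-style projections; the sketch below fits that assumed form to hypothetical aging data and extrapolates an L70 lifetime.

```python
# Sketch: fit L(t) = B * exp(-alpha * t) to relative-flux aging data and
# project the L70 lifetime (time to 70% lumen maintenance). Data hypothetical.
import numpy as np

hours = np.array([0.0, 1000.0, 2000.0, 3000.0, 4000.0, 5000.0])
lumen = np.array([1.000, 0.985, 0.972, 0.958, 0.944, 0.931])  # relative flux

slope, ln_b = np.polyfit(hours, np.log(lumen), 1)  # ln L = ln B + slope * t
b, alpha = np.exp(ln_b), -slope
l70 = np.log(b / 0.70) / alpha
print(f"alpha = {alpha:.2e} /h, projected L70 = {l70:,.0f} h")
```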

  7. Evaluation of the Most Reliable Procedure of Determining Jump Height During the Loaded Countermovement Jump Exercise: Take-Off Velocity vs. Flight Time.

    Science.gov (United States)

    Pérez-Castilla, Alejandro; García-Ramos, Amador

    2018-07-01

    Pérez-Castilla, A and García-Ramos, A. Evaluation of the most reliable procedure of determining jump height during the loaded countermovement jump exercise: Take-off velocity vs. flight time. J Strength Cond Res 32(7): 2025-2030, 2018-This study aimed to compare the reliability of jump height between the 2 standard procedures of analyzing force-time data (take-off velocity [TOV] and flight time [FT]) during the loaded countermovement (CMJ) exercise performed with a free-weight barbell and in a Smith machine. The jump height of 17 men (age: 22.2 ± 2.2 years, body mass: 75.2 ± 7.1 kg, and height: 177.0 ± 6.0 cm) was tested in 4 sessions (twice for each CMJ type) against external loads of 17, 30, 45, 60, and 75 kg. Jump height reliability was comparable between the TOV (coefficient of variation [CV]: 6.42 ± 2.41%) and FT (CV: 6.53 ± 2.17%) during the free-weight CMJ, but it was higher for the FT when the CMJ was performed in a Smith machine (CV: 11.34 ± 3.73% for TOV and 5.95 ± 1.12% for FT). Bland-Altman plots revealed trivial differences (≤0.27 cm) and no heteroscedasticity of the errors (R ≤ 0.09) for the jump height obtained by the TOV and FT procedures, whereas the random error between both procedures was higher for the CMJ performed in the Smith machine (2.02 cm) compared with the free-weight barbell (1.26 cm). Based on these results, we recommend the FT procedure to determine jump height during the loaded CMJ performed in a Smith machine, whereas the TOV and FT procedures provide similar reliability during the free-weight CMJ.
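
    The two procedures compared in the study rest on two elementary projectile relations: from take-off velocity, h = v²/2g, and from flight time, h = gt²/8. A minimal sketch (the example inputs are arbitrary):

```python
# Sketch: jump height from take-off velocity (TOV) and from flight time (FT).
G = 9.81  # m/s^2

def height_from_tov(v_takeoff_ms):
    return v_takeoff_ms ** 2 / (2.0 * G)     # h = v^2 / 2g

def height_from_ft(flight_time_s):
    return G * flight_time_s ** 2 / 8.0      # h = g * t^2 / 8

print(f"TOV 2.4 m/s -> {100 * height_from_tov(2.4):.1f} cm")
print(f"FT 0.45 s   -> {100 * height_from_ft(0.45):.1f} cm")
```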

  8. Limits to the reliability of size-based fishing status estimation for data-poor stocks

    DEFF Research Database (Denmark)

    Kokkalis, Alexandros; Thygesen, Uffe Høgsbro; Nielsen, Anders

    2015-01-01

    For stocks that are considered "data-poor", no knowledge exists about growth, mortality or recruitment; the only available information comes from catches. Here we examine the ability to assess the level of exploitation of a data-poor stock based only on information about the size of individuals in catches. The model is a formulation of the classic Beverton–Holt theory in terms of size, where stock parameters describing growth, natural mortality, recruitment, etc. are determined from life-history invariants. A simulation study was used to compare the reliability of assessments performed under different … to a considerable improvement in the assessment. Overall, the simulation study demonstrates that it may be possible to classify a data-poor stock as undergoing over- or under-fishing, while the exact status, i.e., how much the fishing mortality is above or below Fmsy, can only be assessed with substantial…

  9. Bayesian Reliability Estimation for Deteriorating Systems with Limited Samples Using the Maximum Entropy Approach

    Directory of Open Access Journals (Sweden)

    Ning-Cong Xiao

    2013-12-01

    Full Text Available In this paper, the combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of the uncertain parameters more accurately, for it does not need any additional information or assumptions. Finally, two optimization models are presented which can be used to determine the lower and upper bounds of the system's probability of failure under vague environmental conditions. Two numerical examples are investigated to demonstrate the proposed method.
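
    As a small illustration of the maximum entropy idea the paper builds on, the sketch below computes the maximum-entropy distribution on a discrete support when the only available information is a mean value, using the exponential-family solution p_i ∝ exp(λx_i). The support and target mean are arbitrary, and the paper's fuzzy-set and Bayesian machinery is not reproduced here.

```python
# Sketch: maximum-entropy distribution on a discrete support given only a
# known mean, via the exponential-family form p_i ∝ exp(lam * x_i).
import numpy as np
from scipy.optimize import brentq

x = np.linspace(0.0, 1.0, 51)          # support of the uncertain parameter
target_mean = 0.3                      # the only information assumed known

def mean_for(lam):
    w = np.exp(lam * x)
    return (w / w.sum()) @ x

lam = brentq(lambda l: mean_for(l) - target_mean, -200.0, 200.0)
p = np.exp(lam * x)
p /= p.sum()
print("achieved mean:", round(float(p @ x), 4))
print("entropy (nats):", round(float(-(p * np.log(p)).sum()), 4))
```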

  10. Quick and reliable estimation of power distribution in a PHWR by ANN

    International Nuclear Information System (INIS)

    Dubey, B.P.; Jagannathan, V.; Kataria, S.K.

    1998-01-01

    Knowledge of the distribution of power in all the channels of a Pressurised Heavy Water Reactor (PHWR), as a result of a perturbation caused by one or more of the regulating devices, is very important from the operation and maintenance point of view. Theoretical design codes available for this purpose take several minutes to calculate the channel power distribution on modern PCs. Artificial Neural Networks (ANNs) have been employed to predict the channel power distribution of Indian PHWRs for any given configuration of the regulating devices of the reactor. ANNs produce the result much faster and with good accuracy. This paper describes the ANN methodology, its reliability, the validation range, and the scope for its possible on-line use in the actual reactor.

  11. A simplified procedure for mass and stiffness estimation of existing structures

    Science.gov (United States)

    Nigro, Antonella; Ditommaso, Rocco; Carlo Ponzo, Felice; Salvatore Nigro, Domenico

    2016-04-01

    This work focuses on a parametric method for mass and stiffness identification of framed structures based on the evaluation of natural frequencies. The assessment of real structures is greatly affected by the consistency of the information retrieved on materials and by the influence of both non-structural components and soil. One of the most important issues is the correct definition of the distribution, both in plan and in elevation, of mass and stiffness, which depends on concentrated and distributed loads, the presence of infill panels and the distribution of structural elements. In this study, modal identification is performed under several mass-modified conditions, and structural parameters consistent with the identified modal parameters are determined. Modal parameter identification of a structure is conducted before and after the introduction of additional masses. From the relationship between the additional masses and the modal properties before and after the mass modification, the structural parameters of a damped system, i.e. mass, stiffness and damping coefficient, are inversely estimated from these modal parameter variations. The accuracy of the method can be improved by using various mass-modified conditions. The proposed simplified procedure has been tested on both numerical and experimental models by means of linear numerical analyses and shaking table tests performed on scaled structures at the Seismic Laboratory of the University of Basilicata (SISLAB). Results confirm the effectiveness of the proposed procedure in estimating the masses and stiffness of existing real structures, with a maximum error of 10% under the worst conditions. Acknowledgements This study was partially funded by the Italian Civil Protection Department within the project DPC-RELUIS 2015 - RS4 ''Seismic observatory of structures and health monitoring''.
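
    For a single-degree-of-freedom idealization, the mass-modification idea reduces to a closed form: with f = √(k/m)/2π, measuring the fundamental frequency before and after adding a known mass Δm determines both m and k. A hedged sketch with illustrative numbers follows; the paper's method generalizes this to framed structures with several mass-modified conditions.

```python
# Sketch: SDOF mass/stiffness identification from the frequency shift caused
# by a known added mass dm: k = m*w1^2 = (m + dm)*w2^2. Values illustrative.
from math import pi

def identify_mass_stiffness(f1_hz, f2_hz, dm_kg):
    w1, w2 = 2.0 * pi * f1_hz, 2.0 * pi * f2_hz
    m = dm_kg * w2**2 / (w1**2 - w2**2)
    k = m * w1**2
    return m, k

m, k = identify_mass_stiffness(f1_hz=2.50, f2_hz=2.38, dm_kg=500.0)
print(f"m = {m:,.0f} kg, k = {k:,.0f} N/m")
```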

  12. Reliability-based fatigue life estimation of shear riveted connections considering dependency of rivet hole failures

    Directory of Open Access Journals (Sweden)

    Leonetti, Davide

    2018-01-01

    Full Text Available Standards and guidelines for the fatigue design of riveted connections make use of a stress range-endurance (S-N) curve based on the net-section stress range, regardless of the number and position of the rivets. Almost all tests on which S-N curves are based are performed with a minimum number of rivets. However, the number of rivets in a row is expected to increase the fail-safe behaviour of the connection, whereas the number of rows is supposed to decrease the theoretical stress concentration at the critical locations; these aspects are hence not considered in the S-N curves. This paper presents a numerical model predicting the fatigue life of riveted connections by performing a system reliability analysis on a double cover plated riveted butt joint. The connection is considered in three geometries, with different numbers of rivets in a row and different numbers of rows. The stress state in the connection is evaluated using a finite element model in which the friction coefficient and the clamping force in the rivets are treated deterministically. The probability of failure is evaluated for the main plate, and fatigue failure is assumed to originate at the sides of the rivet holes, the critical locations or hot-spots. The notch stress approach is applied to assess the fatigue life, considered to be a stochastic quantity. Unlike other system reliability models available in the literature, the evaluation of the probability of failure takes into account the stochastic dependence between failures at the critical locations, modelled as a parallel system; that is, it considers the change in the state of stress in the connection when a ligament between two rivets fails. A sensitivity study is performed to evaluate the effect of the pretension in the rivets and the friction coefficient on the fatigue life.

  13. Software for the estimation of organ equivalent and effective doses from diagnostic radiology procedures

    International Nuclear Information System (INIS)

    Osei, Ernest K; Barnett, Rob

    2009-01-01

    Diagnostic radiological imaging such as conventional radiography, fluoroscopy and computed tomography (CT) examinations will continue to provide tremendous benefits in modern healthcare. The benefit derived by the patient should far outweigh the risk associated with a properly conducted imaging examination. Nonetheless, it is very important to be able to quantify the risk associated with any radiological examination of patients, and effective dose has been considered a useful indicator of patient exposure. Quantification of the risks associated with radiological imaging is very important as such information will be helpful to physicians and their patients for comparing risks from various imaging examinations and for making informed decisions whenever there is a need for any radiological imaging. The determination of equivalent and effective doses in diagnostic radiology is of interest as a basis for estimates of risk from medical exposures. In this paper we describe a simple computer program OrgDose, which calculates the doses to 27 organs in the body and then calculates the organ equivalent and effective doses and the risk from various procedures in the radiology department including conventional radiography, fluoroscopy and computed tomography examinations. The program will be a useful tool for the medical and paramedical personnel who are involved with assessing organ and effective doses and risks from diagnostic radiology procedures.
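
    The record does not detail OrgDose's algorithms, but the final step of any such program is the tissue-weighted sum E = Σ_T w_T·H_T. The sketch below shows that step with a subset of the ICRP Publication 103 tissue weighting factors and invented organ doses; OrgDose itself may use a different weighting set (e.g. ICRP 60), so treat this as orientation only.

```python
# Sketch: effective dose as the tissue-weighted sum of organ equivalent doses,
# E = sum_T w_T * H_T, with a subset of ICRP 103 weights. Doses invented.
ICRP103_WEIGHTS = {"gonads": 0.08, "lung": 0.12, "stomach": 0.12,
                   "colon": 0.12, "breast": 0.12, "bladder": 0.04,
                   "liver": 0.04, "thyroid": 0.04}

def effective_dose(organ_doses_msv):
    return sum(ICRP103_WEIGHTS[t] * h for t, h in organ_doses_msv.items())

print(f"E = {effective_dose({'lung': 1.8, 'stomach': 0.9, 'liver': 0.4}):.3f} mSv")
```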

  14. Validity of anthropometric procedures to estimate body density and body fat percent in military men

    Directory of Open Access Journals (Sweden)

    Ciro Romélio Rodriguez-Añez

    1999-12-01

    Full Text Available The objective of this study was to verify the validity of Katch and McArdle's equation (1973), which uses the circumferences of the arm, forearm and abdomen to estimate body density, and of the procedure of Cohen (1986), which uses the circumferences of the neck and abdomen to estimate body fat percentage (%F), in military men. Data were collected from 50 military men, with a mean age of 20.26 ± 2.04 years, serving in Santa Maria, RS, Brazil. The circumferences were measured according to the Katch and McArdle (1973) and Cohen (1986) procedures. The body density measured (Dm) by underwater weighing was used as the criterion; its mean value was 1.0706 ± 0.0100 g/ml. The residual lung volume was estimated using Goldman's and Becklake's equation (1959). The %F was obtained with Siri's equation (1961); its mean value was 12.70 ± 4.71%. The validation criteria suggested by Lohman (1992) were followed. The analysis of the results indicated that the procedure developed by Cohen (1986) has concurrent validity to estimate %F in military men, or in other samples with similar characteristics, with a standard error of estimate of 3.45%.

  15. Estimating the subjective value of future rewards: comparison of adjusting-amount and adjusting-delay procedures.

    Science.gov (United States)

    Holt, Daniel D; Green, Leonard; Myerson, Joel

    2012-07-01

    The present study examined whether equivalent discounting of delayed rewards is observed with different experimental procedures. If the underlying decision-making process is the same, then similar patterns of results should be observed regardless of procedure, and similar estimates of the subjective value of future rewards (i.e., indifference points) should be obtained. Two experiments compared discounting on three types of procedure: adjusting-delay (AD), adjusting-immediate-amount (AIA), and adjusting-delayed-amount (ADA). For the two procedures for which discounting functions can be established (i.e., AD and AIA), a hyperboloid provided good fits to the data at both the group and individual levels, and individuals' discounting on one procedure tended to be correlated with their discounting on the other. Notably, the AIA procedure produced the more consistent estimates of the degree of discounting, and in particular, discounting on the AIA procedure was unaffected by the order in which choices were presented. Regardless of which of the three procedures was used, however, similar patterns of results were obtained: Participants systematically discounted the value of delayed rewards, and robust magnitude effects were observed. Although each procedure may have its own advantages and disadvantages, use of all three types of procedure in the present study provided converging evidence for common decision-making processes underlying the discounting of delayed rewards. Copyright © 2012 Elsevier B.V. All rights reserved.
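
    The hyperboloid discounting function referred to in the record has the form V = A/(1 + kD)^s. As an illustration of how indifference points are reduced to the parameters k and s, the sketch below fits that form to hypothetical data (amount A normalized to 1):

```python
# Sketch: fitting the hyperboloid discounting function V = A / (1 + k*D)^s
# to hypothetical indifference points (D in days, V as a fraction of A).
import numpy as np
from scipy.optimize import curve_fit

def hyperboloid(D, k, s):
    return 1.0 / (1.0 + k * D) ** s

delays = np.array([1, 7, 30, 90, 180, 365], dtype=float)
values = np.array([0.95, 0.85, 0.64, 0.45, 0.33, 0.24])  # hypothetical

(k, s), _ = curve_fit(hyperboloid, delays, values, p0=(0.01, 1.0))
print(f"k = {k:.4f} per day, s = {s:.2f}")
```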

  16. Reliability of estimating the room volume from a single room impulse response

    OpenAIRE

    Kuster, M.

    2008-01-01

    The methods investigated for the room volume estimation are based on geometrical acoustics, eigenmode, and diffuse field models, and no data other than the room impulse response are available. The measurements include several receiver positions in a total of 12 rooms of vastly different sizes and acoustic characteristics. The limitations in identifying the pivotal specular reflections of the geometrical acoustics model in measured room impulse responses are examined both theoretically and experimentally…

  17. Effective dose to staff from interventional procedures: Estimations from single and double dosimetry

    International Nuclear Information System (INIS)

    Kuipers, G.; Velders, X. L.

    2009-01-01

    The exposure of 11 physicians performing interventional procedures was measured by means of two personal dosemeters: one worn outside the lead apron and an additional one under the lead apron. The study was set up to determine the added value of a dosemeter worn under the lead apron. From the measured doses, the effective doses of the physicians were estimated using one algorithm for single dosimetry and two algorithms for double dosimetry. The effective doses calculated with the single dosimetry algorithm ranged from 0.11 to 0.85 mSv in 4 weeks. With the double dosimetry algorithms, the effective doses ranged from 0.02 to 0.47 mSv. The statistical analysis revealed no significant differences in the accuracy of the effective doses calculated with single or double dosimetry algorithms. It was concluded that the effective dose cannot be considered a more accurate estimate when two dosemeters are used instead of one. (authors)
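
    The record does not specify which single- and double-dosimetry algorithms were used. For orientation only, the sketch below contrasts a single over-apron reading scaled by an assumed apron-transmission factor with one widely cited double-dosimetry form, E ≈ 0.5·H_under + 0.025·H_over (of the kind given in NCRP Report No. 122); both weightings are assumptions here, not the study's algorithms.

```python
# Sketch: single- vs. double-dosimetry effective dose estimates.
# Weights are assumed examples, not the algorithms used in the study.
def effective_single(h_over_msv, transmission=0.1):
    # single over-apron dosemeter scaled by an assumed apron transmission
    return transmission * h_over_msv

def effective_double(h_under_msv, h_over_msv):
    # NCRP Report No. 122-style combination of the two readings
    return 0.5 * h_under_msv + 0.025 * h_over_msv

print(f"single: {effective_single(4.0):.2f} mSv")
print(f"double: {effective_double(0.3, 4.0):.2f} mSv")
```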

  18. Manure sampling procedures and nutrient estimation by the hydrometer method for gestation pigs.

    Science.gov (United States)

    Zhu, Jun; Ndegwa, Pius M; Zhang, Zhijian

    2004-05-01

    Three manure agitation procedures were examined in this study (vertical mixing, horizontal mixing, and no mixing) to determine their efficacy in producing a representative manure sample. The total solids content of manure from gestation pigs was found to be well correlated with the total nitrogen (TN) and total phosphorus (TP) concentrations in the manure, with highly significant correlation coefficients of 0.988 and 0.994, respectively. Linear correlations were also observed between the TN and TP contents and the manure specific gravity (correlation coefficients: 0.991 and 0.987, respectively). Therefore, the nutrients in pig manure can be estimated with reasonable accuracy by measuring the liquid manure specific gravity. A rapid testing method for manure nutrient contents (TN and TP) using a soil hydrometer was also evaluated. The results showed that the estimation error increased from ±10% to ±30% with decreasing TN (from 1000 to 100 ppm) and TP (from 700 to 50 ppm) concentrations in the manure. The data also showed that the hydrometer readings had to be taken within 10 s after mixing to avoid reading drift in specific gravity due to the settling of manure solids.

  19. A hybrid downscaling procedure for estimating the vertical distribution of ambient temperature in local scale

    Science.gov (United States)

    Yiannikopoulou, I.; Philippopoulos, K.; Deligiorgi, D.

    2012-04-01

    The vertical thermal structure of the atmosphere is defined by a combination of dynamic and radiative transfer processes and plays an important role in describing the meteorological conditions at local scales. The scope of this work is to develop, and quantify the predictive ability of, a hybrid dynamic-statistical downscaling procedure to estimate the vertical profile of ambient temperature at finer spatial scales. The study focuses on the warm period of the year (June–August) and the method is applied to an urban coastal site (Hellinikon), located in the eastern Mediterranean. The two-step methodology initially involves the dynamic downscaling of coarse-resolution climate data via the RegCM4.0 regional climate model and subsequently the statistical downscaling of the modeled outputs by developing and training site-specific artificial neural networks (ANNs). The 2.5°×2.5° gridded NCEP-DOE Reanalysis 2 dataset is used as initial and boundary conditions for the dynamic downscaling element of the methodology, which enhances the regional representativity of the dataset to 20 km and provides modeled fields at 18 vertical levels. The regional climate modeling results are compared against the upper-air Hellinikon radiosonde observations, and the mean absolute error (MAE) is calculated between the four grid point values nearest to the station and the ambient temperature at the standard and significant pressure levels. The statistical downscaling element of the methodology consists of an ensemble of ANN models, one for each pressure level, which are trained separately and employ the regional-scale RegCM4.0 output. The ANN models are theoretically capable of approximating any measurable input-output function to any desired degree of accuracy. In this study they are used as non-linear function approximators for identifying the relationship between a number of predictor variables and the ambient temperature at the various vertical levels. An insight into the statistically derived input…

  20. A new procedure of modal parameter estimation for high-speed digital image correlation

    Science.gov (United States)

    Huňady, Róbert; Hagara, Martin

    2017-09-01

    The paper deals with the use of 3D digital image correlation in determining the modal parameters of mechanical systems. It is a non-contact optical method which, for the measurement of full-field spatial displacements and strains of bodies, uses precise digital cameras with high image resolution. Most often this method is utilized for testing of components or determination of material properties of various specimens. When high-speed cameras are used for measurement, the correlation system is capable of capturing various dynamic behaviors, including vibration. This enables the potential use of the method in experimental modal analysis. For that purpose, the authors proposed a measuring chain for the correlation system Q-450 and developed a software application called DICMAN 3D, which allows the direct use of this system in the area of modal testing. The application provides post-processing of measured data and estimation of modal parameters. It has its own graphical user interface, in which several algorithms for the determination of natural frequencies, mode shapes and damping of particular modes of vibration are implemented. The paper describes the basic principle of the new estimation procedure, which is crucial for the post-processing. Since the FRF matrix resulting from the measurement is usually relatively large, the estimation of modal parameters directly from the FRF matrix may be time-consuming and may occupy a large part of computer memory. The procedure implemented in DICMAN 3D provides a significant reduction in memory requirements and computational time while achieving a high accuracy of modal parameters. Its computational efficiency is particularly evident when the FRF matrix consists of thousands of measurement DOFs. The functionality of the created software application is presented on a practical example in which the modal parameters of a composite plate excited by an impact hammer were determined. For the…

  1. Estimates of the burst reliability of thin-walled cylinders designed to meet the ASME Code allowables

    International Nuclear Information System (INIS)

    Stancampiano, P.A.; Zemanick, P.P.

    1976-01-01

    Pressure containment components in nuclear power plants are designed by the conventional deterministic safety factor approach to meet the requirements of the ASME Pressure Vessel Code, Section III. The inevitable variabilities and uncertainties associated with the design, manufacture, installation, and service processes suggest that a probabilistic design approach may also be pertinent. Accordingly, the burst reliabilities of two thin-walled 304 SS cylindrical vessels, such as might be employed in liquid metal plants, are estimated: a large vessel fabricated from rolled plate per ASME SA-240 and a smaller pipe-sized vessel also fabricated from rolled plate per ASME SA-358. The vessels are sized to just meet the allowable ASME Code primary membrane stresses at 800 °F (427 °C). The burst probability, i.e. the probability that the operating pressure exceeds the burst strength of the cylinders, is calculated using stress-strength interference theory by direct Monte Carlo simulation on a high-speed digital computer. A sensitivity study is employed to identify those design parameters which have the greatest effect on the reliability. The effects of pre-service quality assurance defect inspections on the reliability are also evaluated parametrically.
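
    Stress-strength interference by direct Monte Carlo amounts to sampling the two distributions and counting interference events. The sketch below uses invented distributions and parameters, not the vessel data of the study:

```python
# Sketch: burst probability by stress-strength interference, direct Monte Carlo.
# P(failure) = P(operating stress > burst strength); distributions illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 2_000_000
stress = rng.normal(loc=250.0, scale=20.0, size=n)                  # MPa
strength = rng.lognormal(mean=np.log(310.0), sigma=0.06, size=n)    # MPa
print(f"estimated burst probability: {np.mean(stress > strength):.2e}")
```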

  2. Systematic review of survival time in experimental mouse stroke with impact on reliability of infarct estimation

    DEFF Research Database (Denmark)

    Klarskov, Carina Kirstine; Klarskov, Mikkel Buster; Hasseldam, Henrik

    2016-01-01

    A number of reasons for the translational problems from mouse experimental stroke to clinical trials probably exists, including infarct size estimations around the peak time of edema formation. Furthermore, edema is a more prominent feature of stroke in mice than in humans, because of the tendency to produce larger infarcts with more substantial edema. Purpose: This paper will give an overview of previous studies of experimental mouse stroke and correlate survival time to the peak time of edema formation. Furthermore, it investigates whether the included studies corrected the infarct measurements for edema … of the investigated process. Our findings indicate a need for more research in this area, and the establishment of a common correction methodology.

  3. Estimate of the Effective Dose Equivalent to the Cypriot Population due to Diagnostic Nuclear Medicine Procedures in the Public Sector

    Energy Technology Data Exchange (ETDEWEB)

    Christofides, S [Medical Physics Department, Nicosia General Hospital (Cyprus)

    1994-12-31

    The Effective Dose Equivalent (EDE) to the Cypriot population due to diagnostic nuclear medicine procedures has been estimated from data published by the Government of Cyprus in its Health and Hospital Statistics Series for the years 1990, 1991 and 1992. The average EDE per patient was estimated to be 3.09, 3.75 and 4.01 microSievert for 1990, 1991 and 1992 respectively, while the per caput EDE was estimated to be 11.75, 15.16 and 17.09 microSieverts for 1990, 1991 and 1992 respectively, from the procedures in the public sector. (author). 11 refs, 4 tabs.

  4. Estimate of the Effective Dose Equivalent to the Cypriot Population due to Diagnostic Nuclear Medicine Procedures in the Public Sector

    International Nuclear Information System (INIS)

    Christofides, S.

    1994-01-01

    The Effective Dose Equivalent (EDE) to the Cypriot population due to diagnostic nuclear medicine procedures has been estimated from data published by the Government of Cyprus in its Health and Hospital Statistics Series for the years 1990, 1991 and 1992. The average EDE per patient was estimated to be 3.09, 3.75 and 4.01 microSievert for 1990, 1991 and 1992 respectively, while the per caput EDE was estimated to be 11.75, 15.16 and 17.09 microSieverts for 1990, 1991 and 1992 respectively, from the procedures in the public sector. (author)

  5. Formulating informative, data-based priors for failure probability estimation in reliability analysis

    International Nuclear Information System (INIS)

    Guikema, Seth D.

    2007-01-01

    Priors play an important role in the use of Bayesian methods in risk analysis, and using all available information to formulate an informative prior can lead to more accurate posterior inferences. This paper examines the practical implications of using five different methods for formulating an informative prior for a failure probability based on past data. These methods are the method of moments, maximum likelihood (ML) estimation, maximum entropy estimation, starting from a non-informative 'pre-prior', and fitting a prior based on confidence/credible interval matching. The priors resulting from the use of these different methods are compared qualitatively, and the posteriors are compared quantitatively based on a number of different scenarios of observed data used to update the priors. The results show that the amount of information assumed in the prior makes a critical difference in the accuracy of the posterior inferences. For situations in which the data used to formulate the informative prior is an accurate reflection of the data that is later observed, the ML approach yields the minimum variance posterior. However, the maximum entropy approach is more robust to differences between the data used to formulate the prior and the observed data because it maximizes the uncertainty in the prior subject to the constraints imposed by the past data
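
    Two of the five prior-formulation methods compared in the paper, the method of moments and maximum likelihood, can be illustrated in a few lines for a Beta prior on a failure probability. The past-data values and the new observation below are invented, and the maximum entropy, pre-prior, and interval-matching methods are not sketched.

```python
# Sketch: formulating a Beta(a, b) prior for a failure probability from past
# data by method of moments and by maximum likelihood, then a conjugate
# posterior update with newly observed failures. All data invented.
import numpy as np
from scipy import stats

past_p = np.array([0.02, 0.05, 0.03, 0.04, 0.06, 0.03])  # past failure fractions

# method of moments
m, v = past_p.mean(), past_p.var(ddof=1)
common = m * (1.0 - m) / v - 1.0
a_mom, b_mom = m * common, (1.0 - m) * common

# maximum likelihood (location fixed at 0, scale at 1)
a_ml, b_ml, _, _ = stats.beta.fit(past_p, floc=0, fscale=1)

# conjugate update with 2 failures observed in 40 new demands
k, n = 2, 40
for name, a, b in [("MoM", a_mom, b_mom), ("ML", a_ml, b_ml)]:
    post_mean = (a + k) / (a + b + n)
    print(f"{name}: prior Beta({a:.1f}, {b:.1f}) -> posterior mean {post_mean:.4f}")
```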

  6. Review of cause-based decision tree approach for the development of domestic standard human reliability analysis procedure in low power/shutdown operation probabilistic safety assessment

    International Nuclear Information System (INIS)

    Kang, D. I.; Jung, W. D.

    2003-01-01

    We review the Cause-Based Decision Tree (CBDT) approach to decide whether or not to incorporate it into the development of a domestic standard Human Reliability Analysis (HRA) procedure for low power/shutdown operation Probabilistic Safety Assessment (PSA). In this paper, we introduce the cause-based decision tree approach, quantify human errors using it, and identify its merits and demerits in comparison with the previously used THERP. The review results show that it is difficult to incorporate the CBDT method into the development of the domestic standard HRA procedure for low power/shutdown PSA, because the CBDT method requires the subjective judgment of the HRA analyst, as THERP does. However, it is expected that incorporating the CBDT method into the development of the domestic standard HRA procedure, if only for the comparison of quantitative HRA results, will relieve the burden of developing a detailed HRA procedure and will help maintain consistent quantitative HRA results.

  7. A fast and reliable method for simultaneous waveform, amplitude and latency estimation of single-trial EEG/MEG data.

    Directory of Open Access Journals (Sweden)

    Wouter D Weeda

    Full Text Available The amplitude and latency of single-trial EEG/MEG signals may provide valuable information concerning human brain functioning. In this article we propose a new method to reliably estimate the single-trial amplitude and latency of EEG/MEG signals. The advantages of the method are fourfold. First, no a priori specified template function is required. Second, the method allows for multiple signals that may vary independently in amplitude and/or latency. Third, the method is less sensitive to noise as it models data with a parsimonious set of basis functions. Finally, the method is very fast since it is based on an iterative linear least squares algorithm. A simulation study shows that the method yields reliable estimates under different levels of latency variation and signal-to-noise ratios. Furthermore, it shows that the existence of multiple signals can be correctly determined. An application to empirical data from a choice reaction time study indicates that the method describes these data accurately.

  8. Validity and Reliability of the Brazilian Version of the Rapid Estimate of Adult Literacy in Dentistry – BREALD-30

    Science.gov (United States)

    Junkes, Monica C.; Fraiz, Fabian C.; Sardenberg, Fernanda; Lee, Jessica Y.; Paiva, Saul M.; Ferreira, Fernanda M.

    2015-01-01

    Objective The aim of the present study was to translate, perform the cross-cultural adaptation of the Rapid Estimate of Adult Literacy in Dentistry to Brazilian-Portuguese language and test the reliability and validity of this version. Methods After translation and cross-cultural adaptation, interviews were conducted with 258 parents/caregivers of children in treatment at the pediatric dentistry clinics and health units in Curitiba, Brazil. To test the instrument's validity, the scores of Brazilian Rapid Estimate of Adult Literacy in Dentistry (BREALD-30) were compared based on occupation, monthly household income, educational attainment, general literacy, use of dental services and three dental outcomes. Results The BREALD-30 demonstrated good internal reliability. Cronbach’s alpha ranged from 0.88 to 0.89 when words were deleted individually. The analysis of test-retest reliability revealed excellent reproducibility (intraclass correlation coefficient = 0.983 and Kappa coefficient ranging from moderate to nearly perfect). In the bivariate analysis, BREALD-30 scores were significantly correlated with the level of general literacy (rs = 0.593) and income (rs = 0.327) and significantly associated with occupation, educational attainment, use of dental services, self-rated oral health and the respondent’s perception regarding his/her child's oral health. However, only the association between the BREALD-30 score and the respondent’s perception regarding his/her child's oral health remained significant in the multivariate analysis. Conclusion The BREALD-30 demonstrated satisfactory psychometric properties and is therefore applicable to adults in Brazil. PMID:26158724

  9. Reliability and validity of the Turkish version of the Rapid Estimate of Adult Literacy in Dentistry (TREALD-30).

    Science.gov (United States)

    Peker, Kadriye; Köse, Taha Emre; Güray, Beliz; Uysal, Ömer; Erdem, Tamer Lütfi

    2017-04-01

    To culturally adapt the Turkish version of the Rapid Estimate of Adult Literacy in Dentistry (TREALD-30) for Turkish-speaking adult dental patients and to evaluate its psychometric properties. After translation and cross-cultural adaptation, TREALD-30 was tested in a sample of 127 adult patients who attended a dental school clinic in Istanbul. Data were collected through clinical examinations and self-completed questionnaires, including TREALD-30, the Oral Health Impact Profile (OHIP), the Rapid Estimate of Adult Literacy in Medicine (REALM), two health literacy screening questions, and socio-behavioral characteristics. Psychometric properties were examined using Classical Test Theory (CTT) and Rasch analysis. Internal consistency (Cronbach's alpha = 0.91) and test-retest reliability (intraclass correlation coefficient = 0.99) were satisfactory for TREALD-30. It exhibited good convergent and predictive validity. Monthly family income, years of education, dental flossing, health literacy, and health literacy skills were found to be the stronger predictors of patients' oral health literacy (OHL). Confirmatory factor analysis (CFA) confirmed a two-factor model. The Rasch model explained 37.9% of the total variance in this dataset. In addition, TREALD-30 had eleven misfitting items, which indicated evidence of multidimensionality. The reliability indices provided by the Rasch analysis (person separation reliability = 0.91 and expected-a-posteriori/plausible reliability = 0.94) indicated that TREALD-30 had acceptable reliability. TREALD-30 showed satisfactory psychometric properties. It may be used to identify patients with low OHL. Socio-demographic factors, oral health behaviors and health literacy skills should be taken into account when planning future studies to assess OHL in both clinical and community settings.

  10. Intra-rater reliability of motor unit number estimation and quantitative motor unit analysis in subjects with amyotrophic lateral sclerosis.

    Science.gov (United States)

    Ives, Colleen T; Doherty, Timothy J

    2014-01-01

    To assess the intra-rater reliability of decomposition-enhanced spike-triggered averaging (DE-STA) motor unit number estimation (MUNE) and quantitative motor unit potential analysis in the upper trapezius (UT) and biceps brachii (BB) of subjects with amyotrophic lateral sclerosis (ALS) and to compare the results from the UT to control data. Patients diagnosed with clinically probable or definite ALS completed the experimental protocol twice with the same evaluator for the UT (n=10) and BB (n=9). Intra-rater reliability for the UT was good for the maximum compound muscle action potential (CMAP) (ICC=0.88), mean surface-detected motor unit potential (S-MUP) (ICC=0.87) and MUNE (ICC=0.88), and for the BB was moderate for maximum CMAP (ICC=0.61), and excellent for mean S-MUP (ICC=0.94) and MUNE (ICC=0.93). A significant difference between tests was found for UT MUNE. Comparing subjects with ALS to control subjects, UT maximum CMAP (p<0.01) and MUNE (p<0.001) values were significantly lower, and mean S-MUP values significantly greater (p<0.05) in subjects with ALS. This study has demonstrated the ability of the DE-STA MUNE technique to collect highly reliable data from two separate muscle groups and to detect the underlying pathophysiology of the disease. This was the first study to examine the reliability of this technique in subjects with ALS, and demonstrates its potential for future use as an outcome measure in ALS clinical trials and studies of ALS disease severity and natural history.

  11. Assimilation of Remotely Sensed Soil Moisture Profiles into a Crop Modeling Framework for Reliable Yield Estimations

    Science.gov (United States)

    Mishra, V.; Cruise, J.; Mecikalski, J. R.

    2017-12-01

    Much effort has been expended recently on the assimilation of remotely sensed soil moisture into operational land surface models (LSM). These efforts have normally focused on data derived from the microwave bands, and results have often shown that improvements to model simulations are limited because microwave signals only penetrate the top 2-5 cm of the soil surface. It is possible that model simulations could be further improved through the introduction of geostationary satellite thermal infrared (TIR) based root zone soil moisture in addition to the microwave-derived surface estimates. In this study, root zone soil moisture estimates from the TIR-based Atmospheric Land Exchange Inverse (ALEXI) model were merged with NASA Soil Moisture Active Passive (SMAP) based surface estimates through the application of informational entropy. Entropy can be used to characterize the movement of moisture within the vadose zone and accounts for both advection and diffusion processes. The Principle of Maximum Entropy (POME) can be used to derive complete soil moisture profiles and, fortuitously, only requires a surface boundary condition as well as the overall mean moisture content of the soil column. A lower boundary can be considered a soil parameter or obtained from the LSM itself. In this study, SMAP provided the surface boundary while ALEXI supplied the mean, and the entropy integral was used to tie the two together and produce the vertical profile. However, prior to the merging, the coarse resolution (9 km) SMAP data were downscaled to the finer resolution (4.7 km) ALEXI grid. The disaggregation scheme followed the Soil Evaporative Efficiency approach and, again, all necessary inputs were available from the TIR model. The profiles were then assimilated into a standard agricultural crop model (Decision Support System for Agrotechnology Transfer, DSSAT) via the ensemble Kalman Filter. The study was conducted over the Southeastern United States.
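
    The profile construction can be sketched compactly. Assuming, as a simplification of the approach described above and not the authors' exact formulation, a monotonic profile whose maximum-entropy distribution between the surface value and an assumed lower-boundary value is a truncated exponential, the Lagrange multiplier is solved from the column-mean constraint and the profile follows from inverting the cumulative distribution:

        import numpy as np
        from scipy.optimize import brentq

        def pome_profile(theta_surf, theta_bottom, theta_mean, z, depth):
            """Maximum-entropy (truncated-exponential) moisture profile theta(z).

            theta_surf   -- surface boundary condition (here it would come from SMAP)
            theta_bottom -- assumed lower boundary (soil parameter or from the LSM)
            theta_mean   -- column-mean moisture (here it would come from ALEXI)
            """
            a, b = theta_surf, theta_bottom

            def column_mean(lam):   # mean of the max-entropy pdf on [a, b]
                ea, eb = np.exp(-lam * a), np.exp(-lam * b)
                return 1.0 / lam + (a * ea - b * eb) / (ea - eb)

            if abs(theta_mean - 0.5 * (a + b)) < 1e-6:
                return a + (b - a) * z / depth        # degenerate: linear profile
            # the Lagrange multiplier enforces the column-mean constraint
            bracket = (1e-3, 500.0) if theta_mean < 0.5 * (a + b) else (-500.0, -1e-3)
            lam = brentq(lambda l: column_mean(l) - theta_mean, *bracket)
            ea, eb = np.exp(-lam * a), np.exp(-lam * b)
            # invert the CDF: theta(0) = theta_surf, theta(depth) = theta_bottom
            return -np.log(ea - (z / depth) * (ea - eb)) / lam

        z = np.linspace(0.0, 1.0, 11)                 # evaluation depths (m)
        print(pome_profile(0.12, 0.35, 0.28, z, depth=1.0))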

  12. Online Reliable Peak Charge/Discharge Power Estimation of Series-Connected Lithium-Ion Battery Packs

    Directory of Open Access Journals (Sweden)

    Bo Jiang

    2017-03-01

    Full Text Available The accurate peak power estimation of a battery pack is essential to the power-train control of electric vehicles (EVs). It helps to evaluate the maximum charge and discharge capability of the battery system, and thus to optimally control the power-train system to meet the requirements of acceleration, gradient climbing and regenerative braking while achieving high energy efficiency. A novel online peak power estimation method for series-connected lithium-ion battery packs is proposed, which considers the influence of cell difference on the peak power of the battery packs. A new parameter identification algorithm based on adaptive ratio vectors is designed to identify online the parameters of each individual cell in a series-connected battery pack. The ratio vectors reflecting cell difference are deduced strictly based on the analysis of battery characteristics. Based on the online parameter identification, peak power estimation considering cell difference is further developed. Validation experiments in different battery aging conditions and with different current profiles have been implemented to verify the proposed method. The results indicate that the ratio vector-based identification algorithm can achieve the same accuracy as repetitive RLS (recursive least squares)-based identification while evidently reducing the computation cost, and that the proposed peak power estimation method is more effective and reliable for series-connected battery packs due to the consideration of cell difference.
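
    The paper's adaptive ratio-vector scheme is not reproduced here, but the baseline it is compared against, recursive least squares identification of cell parameters, is compact. A minimal sketch, assuming a simple U = OCV − R0·I cell model with invented parameter values:

        import numpy as np

        def rls_update(theta, P, phi, y, lam=0.99):
            """One recursive-least-squares step with forgetting factor lam."""
            K = P @ phi / (lam + phi @ P @ phi)      # gain
            theta = theta + K * (y - phi @ theta)    # parameter update
            P = (P - np.outer(K, phi @ P)) / lam     # covariance update
            return theta, P

        # cell model U = OCV - R0 * I, parameters theta = [OCV, R0]
        rng = np.random.default_rng(1)
        ocv_true, r0_true = 3.7, 0.05
        theta, P = np.zeros(2), np.eye(2) * 1e3
        for _ in range(500):
            i = rng.uniform(-2.0, 2.0)                          # current (A)
            u = ocv_true - r0_true * i + rng.normal(0, 1e-3)    # voltage (V)
            theta, P = rls_update(theta, P, np.array([1.0, -i]), u)
        print(theta)   # converges toward [3.7, 0.05]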

  13. CO{sub 2}-recycling by plants: how reliable is the carbon isotope estimation?

    Energy Technology Data Exchange (ETDEWEB)

    Siegwolf, R T.W.; Saurer, M [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Koerner, C [Basel Univ., Basel (Switzerland)

    1997-06-01

    In the study of plant carbon relations, the contribution of respiratory losses from the soil was estimated by determining the gradient of the stable isotope {sup 13}C with increasing plant canopy height. According to the literature, 8-26% of the CO{sub 2} released in forests by soil and plant respiratory processes is reassimilated (recycled) by photosynthesis during the day. Our own measurements, however, conducted in grassland, showed diverging results, from no indication of carbon recycling to a considerable {delta}{sup 13}C gradient suggesting a high carbon recycling rate. The role of other factors, such as air humidity and irradiation, which also influence the {delta}{sup 13}C in a canopy, is discussed. (author) 3 figs., 4 refs.

  14. Preliminary study for the reliability Assurance on results and procedure of the out-pile mechanical characterization test for a fuel assembly; Lateral Vibration Test (I)

    International Nuclear Information System (INIS)

    Lee, Kang Hee; Yoon, Kyung Hee; Kim, Hyung Kyu

    2007-01-01

    The reliability assurance of the test procedure and results of the out-pile mechanical performance test for a nuclear fuel assembly is an essential task for assuring test quality and obtaining permission to load the fuel into a commercial reactor core. For the case of a vibration test, proper management and appropriate calibration of the instruments and devices used in the test, and various efforts to minimize possible error during the test and the signal acquisition process, are needed. Additionally, a deep understanding both of the theoretical assumptions and simplifications behind the signal processing/modal analysis and of the functions of the devices used in the test is highly required. In this study, the overall procedure and results of the lateral vibration test for a fuel assembly's mechanical characterization are briefly introduced. A series of measures to assure and improve the reliability of the vibration test are discussed.

  15. Preliminary study for the reliability Assurance on results and procedure of the out-pile mechanical characterization test for a fuel assembly; Lateral Vibration Test (I)

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kang Hee; Yoon, Kyung Hee; Kim, Hyung Kyu [KAERI, Daejeon (Korea, Republic of)

    2007-07-01

    The reliability assurance of the test procedure and results of the out-pile mechanical performance test for a nuclear fuel assembly is an essential task for assuring test quality and obtaining permission to load the fuel into a commercial reactor core. For the case of a vibration test, proper management and appropriate calibration of the instruments and devices used in the test, and various efforts to minimize possible error during the test and the signal acquisition process, are needed. Additionally, a deep understanding both of the theoretical assumptions and simplifications behind the signal processing/modal analysis and of the functions of the devices used in the test is highly required. In this study, the overall procedure and results of the lateral vibration test for a fuel assembly's mechanical characterization are briefly introduced. A series of measures to assure and improve the reliability of the vibration test are discussed.

  16. The estimation of radiation effective dose from diagnostic medical procedures in general population of northern Iran

    International Nuclear Information System (INIS)

    Shabestani Monfared, A.; Abdi, R.

    2006-01-01

    The risks of low-dose ionizing radiation from radiology and nuclear medicine are not clearly determined. The effective dose to the population is a very important factor in risk estimation. The study aimed to determine the effective dose from diagnostic radiation medicine in a northern province of Iran. Materials and Methods: Data on various radiologic and nuclear medicine procedures were collected from all radiology and nuclear medicine departments in Mazandaran Province (population = 2,898,031); using standard dosimetry tables, the total dose, dose per examination, and annual effective dose per capita, as well as the annual gonadal dose per capita, were estimated. Results: 655,730 radiologic examinations over a one-year period led to 1.45 mSv, 0.33 mSv and 0.31 mGy as the average effective dose per examination, annual average effective dose to a member of the public, and annual average gonadal dose per capita, respectively. The frequency of medical radiologic examinations was 2,262 examinations annually per 10,000 members of the population. The total number of nuclear medicine examinations in the same period was 7,074, with 4.37 mSv, 9.6 μSv and 9.8 μGy as the average effective dose per examination, annual average effective dose to a member of the public and annual average gonadal dose per capita, respectively. The frequency of nuclear medicine examinations was 24 examinations annually per 10,000 members of the population. Conclusion: The average effective dose per examination was similar to that in other studies. However, the average annual effective dose and annual average gonadal dose per capita were lower than the corresponding values in other reports, which could be due to the smaller number of radiation medicine examinations in the present study

  17. Reliability of estimated glomerular filtration rate in patients treated with platinum containing therapy

    DEFF Research Database (Denmark)

    Lauritsen, Jakob; Gundgaard, Maria G; Mortensen, Mette S

    2014-01-01

    (median percentage error), precision (median absolute percentage error) and accuracy (p10 and p30). The precision of carboplatin dosage based on eGFR was calculated. Data on mGFR, eGFR, and PCr were available in 390 patients, with a total of ∼1,600 measurements. Median PCr and mGFR synchronically … decreased after chemotherapy, yielding high bias and low precision of most estimates. Post-chemotherapy, bias ranged from -0.2% (MDRD after four cycles) to 33.8% (CKD-EPI after five cycles+), precision ranged from 11.6% (MDRD after four cycles) to 33.8% (CKD-EPI after five cycles+) and accuracy (p30) ranged … from 37.5% (CKD-EPI after five cycles+) to 86.9% (MDRD after four cycles). Although MDRD appeared acceptable after chemotherapy because of high accuracy, this equation underestimated GFR in all other measurements. Before and years after treatment, Cockcroft-Gault and Wright offered best results …
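
    For reference, the agreement statistics used above are simple to compute. A minimal sketch with invented values (the function and the sample arrays are illustrative, not study data):

        import numpy as np

        def agreement_metrics(egfr, mgfr):
            """Bias, precision and p10/p30 accuracy of eGFR against measured GFR."""
            pe = 100.0 * (egfr - mgfr) / mgfr            # percentage error
            return {
                "bias":      np.median(pe),              # median percentage error
                "precision": np.median(np.abs(pe)),      # median absolute % error
                "p10": 100.0 * np.mean(np.abs(pe) <= 10.0),  # % within +/-10%
                "p30": 100.0 * np.mean(np.abs(pe) <= 30.0),  # % within +/-30%
            }

        mgfr = np.array([95.0, 110.0, 80.0, 120.0, 60.0])
        egfr = np.array([90.0, 125.0, 70.0, 118.0, 75.0])
        print(agreement_metrics(egfr, mgfr))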

  18. Methods of Estimating the Reliability and Increasing the Informativeness of Laboratory Results (Analysis of a Laboratory Case of Measuring Thyroid Function Indicators)

    OpenAIRE

    N A Kovyazina; N A Alhutova; N N Zybina; N M Kalinina

    2014-01-01

    The goal of the study was to demonstrate a multilevel laboratory quality management system and to point out methods for estimating the reliability and increasing the information content of laboratory results (on the example of a laboratory case). Results. The article examines the stages of laboratory quality management that helped to estimate the reliability of the results of determining Free T3, Free T4 and TSH. The measurement results are presented with the expanded uncertainty…

  19. Reliable estimation of antimicrobial use and its evolution between 2010 and 2013 in French swine farms.

    Science.gov (United States)

    Hémonic, Anne; Chauvin, Claire; Delzescaux, Didier; Verliat, Fabien; Corrégé, Isabelle

    2018-01-01

    There has been strong commitment from both the French swine industry and the national authorities to reducing the use of antimicrobials in swine production since 2010. The annual monitoring of antimicrobial sales by the French Veterinary Medicines Agency (Anses-ANMV) provides estimates, but not detailed figures, on actual on-farm usage of antimicrobials in swine production. In order to provide detailed information on antimicrobial use in the French swine industry in 2010 and 2013, a cross-sectional retrospective study on a representative sample of at least 150 farms was chosen as the methodology. The analysis of the collected data shows a strong and significant decrease in the antimicrobial exposure of pigs between 2010 and 2013. Over three years, the average number of days of treatment decreased significantly, by 29% in suckling piglets and by 19% in weaned piglets. In fattening pigs, the drop (-29%) was not statistically significant. Only usage in sows increased over that period (+17%, non-significant), which might be associated with the transition to group-housing of pregnant sows that took place at the time. Also, over that period, the use of third- and fourth-generation cephalosporins decreased by 89% in suckling piglets and by 82% in sows, which confirms that the voluntary moratorium on these classes of antimicrobials decided at the end of 2010 has been effectively implemented. The methodology of random sampling of farms appears to be a precise and robust tool for monitoring antimicrobial use within a production animal species, able to fulfil industry and national authorities' objectives and requirements to assess the outcome of concerted efforts on antimicrobial use reduction. It demonstrates that the use of antimicrobials decreased in the French swine industry between 2010 and 2013, including the classes considered critical for human medicine.

  20. A mathematical procedure to estimate solar absorptance of shallow water ponds

    International Nuclear Information System (INIS)

    Wu Hongbo; Tang Runsheng; Li Zhimin; Zhong Hao

    2009-01-01

    In this article, a mathematical procedure is developed for estimating the solar absorptance of shallow water ponds with different pond floors, based on the fact that solar radiation trapped inside the water layer undergoes repeated reflection and absorption, and that the absorption of water is spectrally selective. The theoretical model indicates that the solar absorptance of a water pond is related to the reflectivity of the pond floor, the solar spectrum and the water depth. To validate the mathematical model, a concrete water pond measuring 3 x 3 x 0.24 m was constructed. Experimental results indicate that the solar reflectivities calculated with the mathematical model proposed in this work were in good agreement with those measured. For water ponds with a water-permeable floor, such as a concrete floor, theoretical calculations of the solar absorptance should be based on the reflectivity of the fully wet floor, whereas for water ponds with a non-water-permeable floor, calculations should account for the fact that solar reflection on the floor is neither perfect specular reflection nor perfect isotropic diffuse reflection. Results of numerical calculation show that theoretical estimates of the solar absorptance of a water pond obtained by dividing the solar spectrum into six bands agreed closely with those obtained by dividing it into 20 bands.
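
    The band-wise bookkeeping can be sketched as follows. The six band fractions and water absorption coefficients below are placeholders (not the paper's values), surface reflection is taken as a constant 4%, and internal reflection at the water-air interface is neglected:

        import numpy as np

        # illustrative 6-band split of the solar spectrum (energy fractions)
        # and water absorption coefficients (1/m) per band -- placeholder values
        band_frac = np.array([0.24, 0.19, 0.14, 0.13, 0.15, 0.15])
        mu        = np.array([0.03, 0.30, 3.0, 30.0, 300.0, 3000.0])

        def pond_absorptance(depth, floor_refl, surf_refl=0.04):
            """Fraction of incident solar energy absorbed by water + floor.

            Each band is attenuated on the way down (Beer-Lambert), partially
            reflected by the floor, attenuated again on the way up; only the
            energy escaping back through the surface is lost.
            """
            t = np.exp(-mu * depth)               # one-pass transmission per band
            escaping = (1.0 - surf_refl) * band_frac * floor_refl * t**2
            return 1.0 - surf_refl - escaping.sum()

        print(pond_absorptance(depth=0.24, floor_refl=0.35))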

  1. Estimating the impact of structural directionality: How reliable are undirected connectomes?

    Directory of Open Access Journals (Sweden)

    Penelope Kale

    2018-06-01

    Full Text Available Directionality is a fundamental feature of network connections. Most structural brain networks are intrinsically directed because of the nature of chemical synapses, which comprise most neuronal connections. Because of the limitations of noninvasive imaging techniques, the directionality of connections between structurally connected regions of the human brain cannot be confirmed. Hence, connections are represented as undirected, and it is still unknown how this lack of directionality affects brain network topology. Using six directed brain networks from different species and parcellations (cat, mouse, C. elegans, and three macaque networks), we estimate the inaccuracies in network measures (degree, betweenness, clustering coefficient, path length, global efficiency, participation index, and small-worldness) associated with the removal of the directionality of connections. We employ three different methods to render directed brain networks undirected: (a) remove unidirectional connections, (b) add reciprocal connections, and (c) combine equal numbers of removed and added unidirectional connections. We quantify the extent of inaccuracy in network measures introduced through neglecting connection directionality for individual nodes and across the network. We find that the coarse division between core and peripheral nodes remains accurate for undirected networks. However, hub nodes differ considerably when directionality is neglected. Comparing the different methods to generate undirected networks from directed ones, we generally find that the addition of reciprocal connections (false positives) causes larger errors in graph-theoretic measures than the removal of the same number of directed connections (false negatives). These findings suggest that directionality plays an essential role in shaping brain networks and highlight some limitations of undirected connectomes.
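
    The three conversions are easy to express on a binary adjacency matrix. A sketch with a toy matrix (the data and the exact "combine" rule, reciprocating a random half of the unidirectional edges and dropping the rest, are illustrative):

        import numpy as np

        def undirect(A, method):
            """Render a directed binary adjacency matrix A undirected."""
            recip = (A == 1) & (A.T == 1)          # reciprocal connections
            uni   = (A == 1) & (A.T == 0)          # unidirectional connections
            if method == "remove":                 # (a) drop unidirectional edges
                U = recip.copy()
            elif method == "add":                  # (b) add missing reciprocals
                U = (A == 1) | (A.T == 1)
            else:                                  # (c) drop half, reciprocate half
                U = recip.copy()
                idx = np.argwhere(uni)
                rng = np.random.default_rng(0)
                for i, j in idx[rng.permutation(len(idx))[: len(idx) // 2]]:
                    U[i, j] = U[j, i] = True
            return U.astype(int)

        A = np.array([[0, 1, 0],
                      [0, 0, 1],
                      [1, 1, 0]])
        for m in ("remove", "add", "combine"):
            print(m, undirect(A, m).sum() // 2, "undirected edges")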

  2. A practical approach for calculating reliable cost estimates from observational data: application to cost analyses in maternal and child health.

    Science.gov (United States)

    Salemi, Jason L; Comins, Meg M; Chandler, Kristen; Mogos, Mulubrhan F; Salihu, Hamisu M

    2013-08-01

    Comparative effectiveness research (CER) and cost-effectiveness analysis are valuable tools for informing health policy and clinical care decisions. Despite the increased availability of rich observational databases with economic measures, few researchers have the skills needed to conduct valid and reliable cost analyses for CER. The objectives of this paper are to (i) describe a practical approach for calculating cost estimates from hospital charges in discharge data using publicly available hospital cost reports, and (ii) assess the impact of using different methods for cost estimation in maternal and child health (MCH) studies by conducting economic analyses on gestational diabetes (GDM) and pre-pregnancy overweight/obesity. In Florida, we have constructed a clinically enhanced, longitudinal, encounter-level MCH database covering over 2.3 million infants (and their mothers) born alive from 1998 to 2009. Using this as a template, we describe a detailed methodology to use publicly available data to calculate hospital-wide and department-specific cost-to-charge ratios (CCRs), link them to the master database, and convert reported hospital charges to refined cost estimates. We then conduct an economic analysis as a case study on women by GDM and pre-pregnancy body mass index (BMI) status to compare the impact of using different methods on cost estimation. Over 60% of inpatient charges for birth hospitalizations came from the nursery/labor/delivery units, which have very different cost-to-charge markups (CCR = 0.70) than the commonly substituted hospital average (CCR = 0.29). Using estimated mean, per-person maternal hospitalization costs for women with GDM as an example, unadjusted charges ($US14,696) grossly overestimated actual cost, compared with hospital-wide ($US3,498) and department-level ($US4,986) CCR adjustments. However, the refined cost estimation method, although more accurate, did not alter our conclusions that infant/maternal hospitalization costs …
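
    The charge-to-cost conversion at the heart of the method is one multiplication per revenue center. A toy sketch with invented CCRs and charges (real ratios would come from the public hospital cost reports):

        # hypothetical charge records and cost-to-charge ratios (CCRs)
        dept_ccr = {"nursery": 0.70, "labor_delivery": 0.70, "pharmacy": 0.25}
        hospital_wide_ccr = 0.29

        charges = [("nursery", 8000.0), ("labor_delivery", 5200.0),
                   ("pharmacy", 1500.0)]

        dept_cost = sum(amount * dept_ccr[dept] for dept, amount in charges)
        crude_cost = sum(amount for _, amount in charges) * hospital_wide_ccr
        print(f"department-level CCR estimate: ${dept_cost:,.0f}")
        print(f"hospital-wide CCR estimate:    ${crude_cost:,.0f}")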

  3. Estimation of gingival crevicular blood glucose level for the screening of diabetes mellitus: A simple yet reliable method.

    Science.gov (United States)

    Parihar, Sarita; Tripathi, Richik; Parihar, Ajit Vikram; Samadi, Fahad M; Chandra, Akhilesh; Bhavsar, Neeta

    2016-01-01

    This study was designed to assess the reliability of blood glucose level estimation in gingival crevicular blood (GCB) for screening diabetes mellitus. 70 patients were included in the study. A randomized, double-blind clinical trial was performed. Among these, 39 patients were diabetic (including 4 patients who were diagnosed during the study) and the remaining 31 patients were non-diabetic. GCB obtained during routine periodontal examination was analyzed with a glucometer to determine the blood glucose level. The same patients underwent finger-stick blood (FSB) glucose estimation with a glucometer and venous blood (VB) glucose estimation with a standardized laboratory method as per American Diabetes Association guidelines. All three blood glucose levels were compared. Periodontal parameters were also recorded, including the gingival index (GI) and probing pocket depth (PPD). A strong positive correlation (r) was observed between the glucose levels of GCB and those of FSB and VB, with values of 0.986 and 0.972 in the diabetic group and 0.820 and 0.721 in the non-diabetic group. As well, the mean values of GI and PPD were higher in the diabetic group than in the non-diabetic group, with a statistically significant difference (p < 0.05). GCB thus appears to be a reliable source for estimating blood glucose level, as its values were closest to the glucose levels estimated by VB. The technique is safe, easy to perform and non-invasive to the patient, and can increase the frequency of diagnosing diabetes during routine periodontal therapy.

  4. Reliability of CKD-EPI predictive equation in estimating chronic kidney disease prevalence in the Croatian endemic nephropathy area.

    Science.gov (United States)

    Fuček, Mirjana; Dika, Živka; Karanović, Sandra; Vuković Brinar, Ivana; Premužić, Vedran; Kos, Jelena; Cvitković, Ante; Mišić, Maja; Samardžić, Josip; Rogić, Dunja; Jelaković, Bojan

    2018-02-15

    Chronic kidney disease (CKD) is a significant public health problem, and it is not possible to precisely predict its progression to terminal renal failure. According to current guidelines, CKD stages are classified based on the estimated glomerular filtration rate (eGFR) and albuminuria. The aims of this study were to determine the reliability of a predictive equation in estimating CKD prevalence in Croatian areas with endemic nephropathy (EN), to compare the results with non-endemic areas, and to determine whether the prevalence of CKD stages 3-5 was increased in subjects with EN. A total of 1573 inhabitants of the Croatian Posavina rural area from 6 endemic and 3 non-endemic villages were enrolled. Participants were classified according to the modified criteria of the World Health Organization for EN. Estimated GFR was calculated using the Chronic Kidney Disease Epidemiology Collaboration equation (CKD-EPI). The results showed a very high CKD prevalence in the Croatian rural area (19%). CKD prevalence was significantly higher in EN than in non-EN villages, with the lowest eGFR value in the diseased subgroup. eGFR correlated significantly with the diagnosis of EN. Kidney function assessment using the CKD-EPI predictive equation proved to be a good marker for differentiating the study subgroups, and it remains one of the diagnostic criteria for EN.
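
    For reference, the CKD-EPI (2009) creatinine equation used here is easy to encode; a minimal sketch (the sample inputs are invented):

        def ckd_epi_2009(scr_mg_dl, age, female, black):
            """CKD-EPI (2009) eGFR in mL/min/1.73 m^2 from serum creatinine."""
            kappa = 0.7 if female else 0.9
            alpha = -0.329 if female else -0.411
            ratio = scr_mg_dl / kappa
            egfr = (141.0 * min(ratio, 1.0) ** alpha
                    * max(ratio, 1.0) ** -1.209 * 0.993 ** age)
            if female:
                egfr *= 1.018
            if black:
                egfr *= 1.159
            return egfr

        print(ckd_epi_2009(1.1, 60, female=True, black=False))  # ~55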

  5. Determinants of the reliability of ultrasound tomography sound speed estimates as a surrogate for volumetric breast density

    Energy Technology Data Exchange (ETDEWEB)

    Khodr, Zeina G.; Pfeiffer, Ruth M.; Gierach, Gretchen L., E-mail: GierachG@mail.nih.gov [Department of Health and Human Services, Division of Cancer Epidemiology and Genetics, National Cancer Institute, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States); Sak, Mark A.; Bey-Knight, Lisa [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 (United States); Duric, Nebojsa; Littrup, Peter [Karmanos Cancer Institute, Wayne State University, 4100 John R, Detroit, Michigan 48201 and Delphinus Medical Technologies, 46701 Commerce Center Drive, Plymouth, Michigan 48170 (United States); Ali, Haythem; Vallieres, Patricia [Henry Ford Health System, 2799 W Grand Boulevard, Detroit, Michigan 48202 (United States); Sherman, Mark E. [Division of Cancer Prevention, National Cancer Institute, Department of Health and Human Services, 9609 Medical Center Drive MSC 9774, Bethesda, Maryland 20892 (United States)

    2015-10-15

    Purpose: High breast density, as measured by mammography, is associated with increased breast cancer risk, but standard methods of assessment have limitations including 2D representation of breast tissue, distortion due to breast compression, and use of ionizing radiation. Ultrasound tomography (UST) is a novel imaging method that averts these limitations and uses sound speed measures rather than x-ray imaging to estimate breast density. The authors evaluated the reproducibility of measures of speed of sound and changes in this parameter using UST. Methods: One experienced and five newly trained raters measured sound speed in serial UST scans for 22 women (two scans per person) to assess inter-rater reliability. Intrarater reliability was assessed for four raters. A random effects model was used to calculate the percent variation in sound speed and change in sound speed attributable to subject, scan, rater, and repeat reads. The authors estimated the intraclass correlation coefficients (ICCs) for these measures based on data from the authors’ experienced rater. Results: Median (range) time between baseline and follow-up UST scans was five (1–13) months. Contributions of factors to sound speed variance were differences between subjects (86.0%), baseline versus follow-up scans (7.5%), inter-rater evaluations (1.1%), and intrarater reproducibility (∼0%). When evaluating change in sound speed between scans, 2.7% and ∼0% of variation were attributed to inter- and intrarater variation, respectively. For the experienced rater’s repeat reads, agreement for sound speed was excellent (ICC = 93.4%) and for change in sound speed substantial (ICC = 70.4%), indicating very good reproducibility of these measures. Conclusions: UST provided highly reproducible sound speed measurements, which reflect breast density, suggesting that UST has utility in sensitively assessing change in density.

  6. Simple estimation procedures for regression analysis of interval-censored failure time data under the proportional hazards model.

    Science.gov (United States)

    Sun, Jianguo; Feng, Yanqin; Zhao, Hui

    2015-01-01

    Interval-censored failure time data occur in many fields including epidemiological and medical studies as well as financial and sociological studies, and many authors have investigated their analysis (Sun, The statistical analysis of interval-censored failure time data, 2006; Zhang, Stat Modeling 9:321-343, 2009). In particular, a number of procedures have been developed for regression analysis of interval-censored data arising from the proportional hazards model (Finkelstein, Biometrics 42:845-854, 1986; Huang, Ann Stat 24:540-568, 1996; Pan, Biometrics 56:199-203, 2000). For most of these procedures, however, one drawback is that they involve estimation of both regression parameters and baseline cumulative hazard function. In this paper, we propose two simple estimation approaches that do not need estimation of the baseline cumulative hazard function. The asymptotic properties of the resulting estimates are given, and an extensive simulation study is conducted and indicates that they work well for practical situations.

  7. Validating the absolute reliability of a fat free mass estimate equation in hemodialysis patients using near-infrared spectroscopy.

    Science.gov (United States)

    Kono, Kenichi; Nishida, Yusuke; Moriyama, Yoshihumi; Taoka, Masahiro; Sato, Takashi

    2015-06-01

    The assessment of nutritional states using fat free mass (FFM) measured with near-infrared spectroscopy (NIRS) is clinically useful. This measurement should incorporate the patient's post-dialysis weight ("dry weight"), in order to exclude the effects of any change in water mass. We therefore used NIRS to investigate the regression, independent variables, and absolute reliability of FFM at dry weight. The study included 47 outpatients from the hemodialysis unit. Body weight was measured before dialysis, and FFM was measured using NIRS before and after dialysis treatment. Multiple regression analysis was used to estimate the FFM at dry weight as the dependent variable. The measured FFM before dialysis treatment (Mw-FFM) and the difference between measured and dry weight (Mw-Dw) were independent variables. We performed Bland-Altman analysis to detect errors between the statistically estimated FFM and the measured FFM after dialysis treatment. The multiple regression equation to estimate the FFM at dry weight was: Dw-FFM = 0.038 + 0.984 × Mw-FFM - 0.571 × (Mw-Dw) (R2 = 0.99). There was no systematic bias between the estimated and measured values of FFM at dry weight. Using NIRS, FFM at dry weight can be calculated by an equation including the FFM at the measured weight and the difference between the measured weight and the dry weight.
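
    Since the abstract gives the fitted equation explicitly, applying it is trivial; a sketch (the example inputs are invented):

        def dry_weight_ffm(mw_ffm, mw_minus_dw):
            """FFM at dry weight from the study's regression (R2 = 0.99)."""
            return 0.038 + 0.984 * mw_ffm - 0.571 * mw_minus_dw

        # e.g. measured FFM 45.0 kg before dialysis, 2.1 kg above dry weight
        print(dry_weight_ffm(45.0, 2.1))   # ~43.1 kg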

  8. Lifetime Reliability Assessment of Concrete Slab Bridges

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    A procedure for lifetime assessment of the reliability of short concrete slab bridges is presented in the paper. Corrosion of the reinforcement is the deterioration mechanism used for estimating the reliability profiles for such bridges. The importance of using sensitivity measures is stressed. Finally, the procedure is illustrated on 6 existing UK bridges.

  9. Scale Reliability Evaluation with Heterogeneous Populations

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling approach for scale reliability evaluation in heterogeneous populations is discussed. The method can be used for point and interval estimation of reliability of multicomponent measuring instruments in populations representing mixtures of an unknown number of latent classes or subpopulations. The procedure is helpful also…

  10. An alternative procedure for estimating the population mean in simple random sampling

    Directory of Open Access Journals (Sweden)

    Housila P. Singh

    2012-03-01

    Full Text Available This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. First, we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties in simple random sampling. Numerically, we show that the proposed class of estimators is more efficient than various known estimators, including the Gupta and Shabbir (2008) estimator.

  11. Diagnostic procedure on brake pad assembly based on Young's modulus estimation

    International Nuclear Information System (INIS)

    Chiariotti, P; Santolini, C; Tomasini, E P; Martarelli, M

    2013-01-01

    Quality control of brake pads is an important issue, since the pad is a key component of the braking system. Typical damage in a brake pad assembly is pad–backing plate detachment, which affects and modifies the mechanical properties of the whole system. The parameter most sensitive to this damage is the effective Young's modulus, since the damage induces a decrease of the pad assembly stiffness and therefore of its effective Young's modulus: indeed, its variation could be used for diagnostic purposes. The effective Young's modulus can be estimated from the first bending resonance frequency identified from the frequency response function measured on the pad assembly. Two kinds of excitation methods, i.e. conventional impulse excitation and magnetic actuation, will be presented, and two different measurement sensors, i.e. a laser Doppler vibrometer and a microphone, analyzed. The robustness of the effective Young's modulus as a diagnostic feature will be demonstrated in comparison to the first bending resonance frequency, which is more sensitive to geometrical dimensions. Variability in the sample dimensions, in fact, will induce a variation of the resonance frequency which could be mistaken for damage. The diagnostic approach has been applied to a set of undamaged and damaged pad assemblies, showing good performance in terms of damage identification. The environmental temperature can be an important interfering input for the diagnostic procedure, since it influences the effective Young's modulus of the assembly. For that reason, a test at different temperatures in the range between 15 °C and 30 °C has been performed, evidencing that the damage identification technique is efficient at any temperature. The robustness of the Young's modulus as a diagnostic feature with respect to damping is also presented. (paper)

  12. Assessing the reliability of the borderline regression method as a standard setting procedure for objective structured clinical examination

    Directory of Open Access Journals (Sweden)

    Sara Mortaz Hejri

    2013-01-01

    Full Text Available Background: One of the methods used for standard setting is the borderline regression method (BRM). This study aims to assess the reliability of the BRM when the pass-fail standard in an objective structured clinical examination (OSCE) is calculated by averaging the BRM standards obtained for each station separately. Materials and Methods: In nine directly observed OSCE stations, the examiners gave each student a checklist score and a global score. Using a linear regression model for each station, we calculated the checklist cut-off score from the regression equation at a global scale cut-off of 2. The OSCE pass-fail standard was defined as the average of all stations' standards. To determine the reliability, the root mean square error (RMSE) was calculated. The R2 coefficient and the inter-grade discrimination were calculated to assess the quality of the OSCE. Results: The mean total test score was 60.78. The OSCE pass-fail standard and its RMSE were 47.37 and 0.55, respectively. The R2 coefficients ranged from 0.44 to 0.79. The inter-grade discrimination score varied greatly among stations. Conclusion: The RMSE of the standard was very small, indicating that the BRM is a reliable method of setting a standard for an OSCE, with the advantage of providing data for quality assurance.
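
    The per-station computation is a single regression. A minimal sketch with simulated scores (the data, the 5-point global scale, and the one-station list are invented for illustration):

        import numpy as np

        def brm_cutoff(checklist, global_rating, borderline=2.0):
            """Borderline regression: regress checklist scores on global ratings
            and read off the checklist score at the borderline global grade."""
            slope, intercept = np.polyfit(global_rating, checklist, 1)
            return intercept + slope * borderline

        rng = np.random.default_rng(2)
        glob = rng.integers(1, 6, size=120).astype(float)   # 5-point global scale
        chk = 10 + 8 * glob + rng.normal(0, 4, size=120)    # checklist scores
        station_cutoffs = [brm_cutoff(chk, glob)]           # one entry per station
        print(np.mean(station_cutoffs))                     # exam pass-fail standard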

  13. An overview of the IAEA Safety Series on procedures for evaluating the reliability of predictions made by environmental transfer models

    International Nuclear Information System (INIS)

    Hoffman, F.W.; Hofer, E.

    1987-10-01

    The International Atomic Energy Agency is preparing a Safety Series publication on practical approaches for evaluating the reliability of the predictions made by environmental radiological assessment models. This publication identifies factors that affect the reliability of these predictions and discusses methods for quantifying uncertainty. Emphasis is placed on understanding the quantity of interest specified by the assessment question and on distinguishing between stochastic variability and lack of knowledge about either the true value or the true distribution of values for the quantity of interest. Among the many approaches discussed, model testing using independent data sets (model validation) is considered the best method for evaluating the accuracy of model predictions. Analytical and numerical methods for propagating the uncertainties in model parameters are presented, and the strengths and weaknesses of model intercomparison exercises are also discussed. It is recognized that subjective judgment is employed throughout the entire modelling process, and quantitative reliability statements must be subjectively obtained when models are applied to situations different from those under which they have been tested. (6 refs.)

  14. Reliability estimation of an N-M-cold-standby redundancy system in a multicomponent stress-strength model with generalized half-logistic distribution

    Science.gov (United States)

    Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei

    2018-01-01

    In this paper, we study the estimation of the reliability of a multicomponent system, named the N-M-cold-standby redundancy system, based on a progressive Type-II censored sample. In the system, there are N subsystems consisting of M statistically independent, identically distributed strength components, and only one of these subsystems works under the impact of stresses at a time while the others remain on standby. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameters. The reliability of the system is estimated using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and the maximum likelihood estimator for the reliability of the system are derived. Under a squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed by using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed using the Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.

  15. The Point Zoro Symmetric Single-Step Procedure for Simultaneous Estimation of Polynomial Zeros

    Directory of Open Access Journals (Sweden)

    Mansor Monsi

    2012-01-01

    Full Text Available The point symmetric single-step procedure PSS1 has an R-order of convergence of at least 3. This procedure is modified by adding another single step, which becomes the third step in PSS1. The modified procedure is called the point zoro symmetric single-step procedure PZSS1. It is proven that the R-order of convergence of PZSS1 is at least 4, which is higher than the R-orders of convergence of PT1, PS1, and PSS1. Hence, computational time is reduced, since this procedure is more efficient for bounding simple zeros simultaneously.

  16. Consequences of alternative tree-level biomass estimation procedures on U.S. forest carbon stock estimates

    Science.gov (United States)

    Grant M. Domke; Christopher W. Woodall; James E. Smith; James A. Westfall; Ronald E. McRoberts

    2012-01-01

    Forest ecosystems are the largest terrestrial carbon sink on earth and their management has been recognized as a relatively cost-effective strategy for offsetting greenhouse gas emissions. Forest carbon stocks in the U.S. are estimated using data from the USDA Forest Service, Forest Inventory and Analysis (FIA) program. In an attempt to balance accuracy with...

  17. A competency based selection procedure for Dutch postgraduate GP training: a pilot study on validity and reliability

    NARCIS (Netherlands)

    Vermeulen, M.I.; Tromp, F.; Zuithoff, N.P.; Pieters, R.H.; Damoiseaux, R.A.; Kuyvenhoven, M.M.

    2014-01-01

    Background: Historically, semi-structured interviews (SSI) have been the core of the Dutch selection for postgraduate general practice (GP) training. This paper describes a pilot study on a newly designed competency-based selection procedure that assesses whether candidates have the…

  18. Assessing the impact of uncertainty on flood risk estimates with reliability analysis using 1-D and 2-D hydraulic models

    Directory of Open Access Journals (Sweden)

    L. Altarejos-García

    2012-07-01

    Full Text Available This paper addresses the use of reliability techniques such as Rosenblueth's Point-Estimate Method (PEM) as a practical alternative to more precise Monte Carlo approaches for obtaining estimates of the mean and variance of the uncertain flood parameters, water depth and velocity. These parameters define the flood severity, which is a concept used for decision-making in the context of flood risk assessment. The method proposed is particularly useful when the degree of complexity of the hydraulic models makes Monte Carlo inapplicable in terms of computing time, but a measure of the variability of these parameters is still needed. The capacity of PEM, which is a special case of numerical quadrature based on orthogonal polynomials, to evaluate the first two moments of performance functions such as the water depth and velocity is demonstrated in the case of a single river reach using a 1-D HEC-RAS model. It is shown that in some cases, using a simple variable transformation, the statistical distributions of both water depth and velocity approximate the lognormal. As this distribution is fully defined by its mean and variance, PEM can be used to define the full probability distribution function of these flood parameters, thus allowing probability estimates of flood severity. Then, an application of the method to the same river reach using a 2-D Shallow Water Equations (SWE) model is performed. Flood maps of the mean and standard deviation of water depth and velocity are obtained, and uncertainty in the extension of flooded areas with different severity levels is assessed. It is recognized, though, that whenever application of the Monte Carlo method is practically feasible, it is the preferred approach.
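
    Rosenblueth's two-point scheme evaluates the model at the 2^n corners mu_i ± sigma_i and weights the results equally (for independent, symmetrically distributed inputs). A minimal sketch with an illustrative depth function standing in for the hydraulic model (all values are invented):

        import numpy as np
        from itertools import product

        def pem_two_point(g, means, sds):
            """Rosenblueth's point-estimate method for independent inputs:
            evaluate g at the 2^n points mu_i +/- sigma_i with equal weights."""
            n = len(means)
            vals = np.array([g(np.array(means) + np.array(s) * np.array(sds))
                             for s in product((-1.0, 1.0), repeat=n)])
            return vals.mean(), vals.var()

        def depth(x):
            """Toy performance function: normal depth in a wide channel."""
            n_manning, q, slope = x          # roughness, discharge, bed slope
            width = 50.0                     # assumed channel width (m)
            return (n_manning * q / (width * np.sqrt(slope))) ** 0.6

        mean_h, var_h = pem_two_point(depth, means=[0.035, 400.0, 5e-4],
                                      sds=[0.005, 60.0, 1e-4])
        print(mean_h, np.sqrt(var_h))        # mean and std. dev. of water depth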

  19. The transverse diameter of the chest on routine radiographs reliably estimates gestational age and weight in premature infants

    Energy Technology Data Exchange (ETDEWEB)

    Dietz, Kelly R. [University of Minnesota, Department of Radiology, Minneapolis, MN (United States); Zhang, Lei [University of Minnesota, Biostatistical Design and Analysis Center, Minneapolis, MN (United States); Seidel, Frank G. [Lucile Packard Children' s Hospital, Department of Radiology, Stanford, CA (United States)

    2015-08-15

    Prior to digital radiography it was possible for a radiologist to easily estimate the size of a patient on an analog film. Because variable magnification may be applied at the time of processing an image, it is now more difficult to visually estimate an infant's size on the monitor. Since gestational age and weight significantly impact the differential diagnosis of neonatal diseases and determine the expected size of the kidneys or the appearance of the brain on MRI or US, this information is useful to a pediatric radiologist. Although this information may be present in the electronic medical record, it is frequently not readily available to the pediatric radiologist at the time of image interpretation. To determine if there was a correlation between the gestational age and weight of a premature infant and the transverse chest diameter (rib to rib) on admission chest radiographs. This retrospective study was approved by the institutional review board, which waived informed consent. The maximum transverse chest diameter, outer rib to outer rib, was measured on admission portable chest radiographs of 464 patients admitted to the neonatal intensive care unit (NICU) during the 2010 calendar year. Regression analysis was used to investigate the association between chest diameter and gestational age/birth weight. A quadratic term of chest diameter was used in the regression model. Chest diameter was statistically significantly associated with both gestational age (P < 0.0001) and birth weight (P < 0.0001). An infant's gestational age and birth weight can be reliably estimated by comparing a simple measurement of the transverse chest diameter on a digital chest radiograph with the tables and graphs in our study. (orig.)

  20. Linear Interaction Energy Based Prediction of Cytochrome P450 1A2 Binding Affinities with Reliability Estimation.

    Directory of Open Access Journals (Sweden)

    Luigi Capoferri

    Full Text Available Prediction of human Cytochrome P450 (CYP) binding affinities of small ligands, i.e., substrates and inhibitors, represents an important task for predicting drug-drug interactions. A quantitative assessment of the ligand binding affinity towards different CYPs can provide an estimate of inhibitory activity or an indication of the isoforms prone to interact with a given substrate or inhibitor. However, the accuracy of global quantitative models for CYP substrate binding or inhibition based on traditional molecular descriptors can be limited, because of the lack of information on the structure and flexibility of the catalytic site of CYPs. Here we describe the application of a method that combines protein-ligand docking, Molecular Dynamics (MD) simulations and Linear Interaction Energy (LIE) theory to allow for quantitative CYP affinity prediction. Using this combined approach, a LIE model for human CYP 1A2 was developed and evaluated, based on a structurally diverse dataset for which the estimated experimental uncertainty was 3.3 kJ mol-1. For the computed CYP 1A2 binding affinities, the model showed a root mean square error (RMSE) of 4.1 kJ mol-1 and a standard error in prediction (SDEP) in cross-validation of 4.3 kJ mol-1. A novel approach that includes information on both structural ligand description and protein-ligand interaction was developed for estimating the reliability of predictions, and was able to identify compounds from an external test set with a SDEP for the predicted affinities of 4.6 kJ mol-1 (corresponding to 0.8 pKi units).
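
    The LIE regression itself is linear in two ensemble-averaged interaction-energy differences between the bound and free ligand states. A sketch with placeholder coefficients (alpha, beta and gamma below are invented, not the fitted CYP 1A2 values):

        # LIE: dG_bind ~ alpha * d<V_vdw> + beta * d<V_el> + gamma, where the
        # d<...> terms are differences of MD ensemble averages between the
        # protein-bound and solvated (free) ligand simulations.
        alpha, beta, gamma = 0.18, 0.33, -5.2      # placeholder fit, kJ/mol scale

        def lie_dg(dv_vdw, dv_el):
            """Predicted binding free energy (kJ/mol) from LIE."""
            return alpha * dv_vdw + beta * dv_el + gamma

        print(lie_dg(dv_vdw=-45.0, dv_el=-12.0))   # illustrative inputs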

  1. Pre-procedural peripheral endothelial function is associated with increased serum creatinine following percutaneous coronary procedure in stable patients with a preserved estimated glomerular filtration rate.

    Science.gov (United States)

    Sumida, Hitoshi; Matsuzawa, Yasushi; Sugiyama, Seigo; Sugamura, Koichi; Nozaki, Toshimitsu; Akiyama, Eiichi; Ohba, Keisuke; Konishi, Masaaki; Matsubara, Junichi; Fujisue, Koichiro; Maeda, Hirofumi; Kurokawa, Hirofumi; Iwashita, Satomi; Ogawa, Hisao; Tsujita, Kenichi

    2017-11-01

    Worsening renal function, indicated by increased serum creatinine (SCr), is a common complication of percutaneous coronary procedures. Risk factors for increased SCr overlap with coronary risk factors involved in endothelial dysfunction. We hypothesized that endothelial dysfunction, measured using the reactive hyperemia peripheral arterial tonometry index (RHI), can predict periprocedure-increased SCr. RHI was assessed before elective coronary procedures in 316 consecutive stable patients with a preserved estimated glomerular filtration rate (eGFR > 60 mL/min/1.73 m2). SCr was measured before and 2 days after the procedures. There was no significant correlation between the natural logarithmic transformations of RHI (Ln-RHI) and basal Ln-eGFR. A periprocedure increase in SCr was observed in 148 (47%) patients. The increased SCr group had significantly lower Ln-RHI [0.48 (0.36, 0.62) vs. 0.59 (0.49, 0.76), p < 0.05]. Pre-procedural assessment of endothelial function by RHI is an effective strategy to assess the patient's risk conditions for worsening renal function after percutaneous coronary procedures.

  2. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation.

    Science.gov (United States)

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promises in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant
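
    The step from a precision matrix to partial correlations is a one-liner. The sketch below substitutes scikit-learn's GraphicalLasso for CLIME, which has no standard Python implementation, so it illustrates the pipeline rather than the paper's exact estimator; the time series are simulated:

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        def partial_corr_from_precision(omega):
            """Partial correlations: rho_ij = -omega_ij / sqrt(omega_ii*omega_jj)."""
            d = np.sqrt(np.diag(omega))
            pc = -omega / np.outer(d, d)
            np.fill_diagonal(pc, 1.0)
            return pc

        rng = np.random.default_rng(3)
        ts = rng.normal(size=(200, 10))              # 200 time points, 10 nodes
        model = GraphicalLasso(alpha=0.1).fit(ts)    # sparse precision estimate
        print(partial_corr_from_precision(model.precision_).round(2))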

  3. Reliability of activation cross sections for estimation of shutdown dose rate in the ITER port cell and port interspace

    Science.gov (United States)

    García, Raquel; García, Mauricio; Ogando, Francisco; Pampin, Raúl; Sanz, Javier

    2017-09-01

    This paper explores the quality of available activation cross section (XS) data for accurate Shutdown Dose Rate (SDDR) prediction in the ITER Port Cell and Port Interspace areas, where different maintenance activities are foreseen. For this purpose the EAF library (2007 and 2010 versions) has been investigated, as it is typically used by the ITER community. Based on both reports/papers on SDDR in ITER and our own calculations, the major nuclides contributing to the SDDR coming from the activation of (i) relevant materials placed in ITER and (ii) candidate materials for the bioshield plug, such as L2N and barite concretes, are identified. Then, the relevant production pathways are obtained. The EAF XS quality for all pathways is checked following the procedure used for validating and testing the successive EAF versions. Also, possible improvements from using the TENDL-2015 library are assessed by comparing EAF and TENDL XS with available differential experimental data from EXFOR. The results point out that most of the activation XS related to materials currently placed in ITER are reliable, and only a few need improvement. Also, many of the XS related to both L2N and barite concretes need further work for validation.

  4. Procedures for estimating the radiation dose in the vicinity of uranium mines and mills by direct calculation methodology

    International Nuclear Information System (INIS)

    Coelho, C.P.

    1983-01-01

    A methodology for estimating the radiation doses to members of the general public in the vicinity of uranium mines and mills is presented. The data collected in the surveys performed to characterize the neighborhood of the site, and used in this work to estimate the radiation dose, are required by the Regulatory Body for the purpose of licensing. Initially, a description is given of the main processing steps to obtain the uranium concentrate, and the critical radionuclides of the installation are identified. Next, some studies required to characterize the facility neighborhood are presented, especially those related to geography, demography, meteorology, hydrology and environmental protection. The basic programs for monitoring the facility neighborhood in the pre-operational and operational phases are also included. A procedure is then proposed to estimate inhalation, ingestion and external doses. As an example, the proposed procedure is applied to a hypothetical site. Finally, some aspects related to the applicability of this work are discussed. (Author) [pt

  5. American Society for Metabolic and Bariatric Surgery estimation of metabolic and bariatric procedures performed in the United States in 2016.

    Science.gov (United States)

    English, Wayne J; DeMaria, Eric J; Brethauer, Stacy A; Mattar, Samer G; Rosenthal, Raul J; Morton, John M

    2018-03-01

    Bariatric surgery, despite being the most successful long-lasting treatment for morbid obesity, remains underused, as only approximately 1% of all patients who qualify for surgery actually undergo it. To determine if patients in need are receiving appropriate therapy, the American Society for Metabolic and Bariatric Surgery created a Numbers Taskforce to specify the annual rate of use of obesity treatment interventions. The objective of this study was to determine metabolic and bariatric procedure trends since 2011 and to provide the best estimate of the number of procedures performed in the United States in 2016. United States. We reviewed data from the Metabolic and Bariatric Surgery Accreditation and Quality Improvement Program, National Surgical Quality Improvement Program, Bariatric Outcomes Longitudinal Database, and Nationwide Inpatient Sample. In addition, data from industry and outpatient centers were used to estimate outpatient center activity. Data from 2016 were compared with the previous 5 years of data. Compared with 2015, the total number of metabolic and bariatric procedures performed in 2016 increased from approximately 196,000 to 216,000. The sleeve gastrectomy trend is increasing, and it continues to be the most common procedure. The gastric bypass and gastric band trends continued to decrease, as seen in previous years. The percentages of revision procedures and biliopancreatic diversion with duodenal switch procedures increased slightly. Finally, intragastric balloon placement emerged as a significant contributor to the cumulative total number of procedures performed. There was increasing use of metabolic and bariatric procedures in the United States from 2011 to 2016, with a nearly 10% increase noted from 2015 to 2016.

  6. Comparison of regional index flood estimation procedures based on the extreme value type I distribution

    DEFF Research Database (Denmark)

    Kjeldsen, Thomas Rodding; Rosbjerg, Dan

    2002-01-01

    the prediction uncertainty and that the presence of intersite correlation tends to increase the uncertainty. A simulation study revealed that in regional index-flood estimation the method of probability weighted moments is preferable to method-of-moments estimation with regard to bias and RMSE. … A comparison of different methods for estimating T-year events is presented, all based on the Extreme Value Type I distribution. Series of annual maximum floods from ten gauging stations on the New Zealand South Island have been used. Different methods of predicting the 100-year event and the associated uncertainty have been applied: at-site estimation and regional index-flood estimation with and without accounting for intersite correlation, using either the method of moments or the method of probability weighted moments for parameter estimation. Furthermore, estimation at ungauged sites were …
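
    A sketch of the at-site building block: an EV1 (Gumbel) fit by probability weighted moments and the resulting T-year quantile. The annual-maximum series is simulated; in the regional index-flood variant the same fit would be applied to pooled, rescaled records to obtain a dimensionless growth curve:

        import numpy as np

        def gumbel_pwm(x):
            """Probability-weighted-moments fit of the EV1 (Gumbel) distribution."""
            x = np.sort(x)
            n = len(x)
            b0 = x.mean()
            b1 = np.sum((np.arange(n) / (n - 1)) * x) / n   # unbiased PWM beta_1
            alpha = (2.0 * b1 - b0) / np.log(2.0)           # scale
            xi = b0 - 0.5772156649 * alpha                  # location
            return xi, alpha

        def t_year_event(xi, alpha, T):
            """EV1 quantile with return period T."""
            return xi - alpha * np.log(-np.log(1.0 - 1.0 / T))

        rng = np.random.default_rng(4)
        ams = rng.gumbel(loc=300.0, scale=80.0, size=40)    # annual maxima (m3/s)
        xi, alpha = gumbel_pwm(ams)
        print(t_year_event(xi, alpha, T=100))               # at-site 100-yr flood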

  7. A simple procedure to estimate reactivity with good noise filtering characteristics

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro

    2014-01-01

    Highlights: • A new and simple on-line reactivity estimation method is proposed. • The estimator has robust noise filtering characteristics. • The noise filtering is equivalent to those of conventional reactivity meters. • The new estimator eliminates the burden of selecting optimum filter constants. • The new estimation performance is assessed without and with measurement noise. - Abstract: A new and simple on-line reactivity estimation method is proposed. The estimator has robust noise filtering characteristics without the use of complex filters. The noise filtering capability is equivalent to or better than that of a conventional estimator based on Inverse Point Kinetics (IPK). The new estimator can also eliminate the burden of selecting optimum filter time constants, such as would be required for the IPK-based estimator, or noise covariance matrices, which are needed if the extended Kalman filter (EKF) technique is used. In this paper, the new estimation method is introduced and its performance assessed without and with measurement noise
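
    The abstract does not give the new estimator's form, but the conventional inverse point kinetics (IPK) baseline it is compared against can be sketched as follows; the six-group delayed-neutron constants and generation time are typical illustrative values, not parameters from the paper.

      import numpy as np

      # Typical six-group delayed-neutron constants (illustrative assumptions)
      BETA_I = np.array([0.000215, 0.001424, 0.001274, 0.002568, 0.000748, 0.000273])
      LAMBDA_I = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])  # 1/s
      BETA = BETA_I.sum()
      GEN_TIME = 2.0e-5  # prompt neutron generation time (s), assumed

      def ipk_reactivity(n_meas, dt):
          """Estimate reactivity from a sampled neutron density (power) trace
          by inverse point kinetics, integrating the precursor equations."""
          n_meas = np.asarray(n_meas, dtype=float)
          c = BETA_I * n_meas[0] / (GEN_TIME * LAMBDA_I)  # equilibrium start
          rho = np.zeros_like(n_meas)
          decay = np.exp(-LAMBDA_I * dt)
          for k in range(1, n_meas.size):
              n = n_meas[k]
              # Exponential precursor update, holding n constant over the step
              c = c * decay + BETA_I * n / (GEN_TIME * LAMBDA_I) * (1.0 - decay)
              dndt = (n - n_meas[k - 1]) / dt
              rho[k] = BETA + GEN_TIME * dndt / n - GEN_TIME / n * np.sum(LAMBDA_I * c)
          return rho  # absolute reactivity; divide by BETA for dollars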

  8. Procedure manual for the estimation of average indoor radon-daughter concentrations using the radon grab-sampling method

    International Nuclear Information System (INIS)

    George, J.L.

    1986-04-01

    The US Department of Energy (DOE) Office of Remedial Action and Waste Technology established the Technical Measurements Center to provide standardization, calibration, comparability, verification of data, quality assurance, and cost-effectiveness for the measurement requirements of DOE remedial action programs. One of the remedial-action measurement needs is the estimation of average indoor radon-daughter concentration. One method for accomplishing such estimations in support of DOE remedial action programs is the radon grab-sampling method. This manual describes procedures for radon grab sampling, with the application specifically directed to the estimation of average indoor radon-daughter concentration (RDC) in highly ventilated structures. This particular application of the measurement method is for cases where RDC estimates derived from long-term integrated measurements under occupied conditions are below the standard and where the structure being evaluated is considered to be highly ventilated. The radon grab-sampling method requires that sampling be conducted under standard maximized conditions. Briefly, the procedure for radon grab sampling involves the following steps: selection of sampling and counting equipment; sample acquisition and processing, including data reduction; calibration of equipment, including provisions to correct for pressure effects when sampling at various elevations; and incorporation of quality-control and assurance measures. This manual describes each of the above steps in detail and presents an example of a step-by-step radon grab-sampling procedure using a scintillation cell

  9. Theoretical basis, application, reliability, and sample size estimates of a Meridian Energy Analysis Device for Traditional Chinese Medicine Research

    Directory of Open Access Journals (Sweden)

    Ming-Yen Tsai

    Full Text Available OBJECTIVES: The Meridian Energy Analysis Device is currently a popular tool in the scientific research of meridian electrophysiology. In this field, it is generally believed that measuring the electrical conductivity of meridians provides information about the balance of bioenergy or Qi-blood in the body. METHODS AND RESULTS: This review draws on the PubMed database, covering original articles from 1956 to 2014, and the author's clinical experience. In this short communication, we provide clinical examples of Meridian Energy Analysis Device application, especially in the field of traditional Chinese medicine, discuss the reliability of the measurements, and put the values obtained into context by considering items of considerable variability and by estimating sample size. CONCLUSION: The Meridian Energy Analysis Device is making a valuable contribution to the diagnosis of Qi-blood dysfunction. It can be assessed from short-term and long-term meridian bioenergy recordings. It is one of the few methods that allow outpatient traditional Chinese medicine diagnosis, monitoring the progress, therapeutic effect and evaluation of patient prognosis. The holistic approaches underlying the practice of traditional Chinese medicine and new trends in modern medicine toward the use of objective instruments require in-depth knowledge of the mechanisms of meridian energy, and the Meridian Energy Analysis Device can feasibly be used for understanding and interpreting traditional Chinese medicine theory, especially in view of its expansion in Western countries.

  10. Factor structure and reliability of the childhood trauma questionnaire and prevalence estimates of trauma for male and female street youth.

    Science.gov (United States)

    Forde, David R; Baron, Stephen W; Scher, Christine D; Stein, Murray B

    2012-01-01

    This study examines the psychometric properties of the Childhood Trauma Questionnaire short form (CTQ-SF) with street youth who have run away or been expelled from their homes (N = 397). Internal reliability coefficients for the five clinical scales ranged from .65 to .95. Confirmatory Factor Analysis (CFA) was used to test the five-factor structure of the scales yielding acceptable fit for the total sample. Additional multigroup analyses were performed to consider items by gender. Results provided only evidence of weak factorial invariance. Constrained models showed invariance in configuration, factor loadings, and factor covariances but failed for equality of intercepts. Mean trauma scores for street youth tended to fall in the moderate to severe range on all abuse/neglect clinical scales. Females reported higher levels of abuse and neglect. Prevalence of child maltreatment of individual forms was very high with 98% of street youth reporting one or more forms; 27.4% of males and 48.9% of females reported all five forms. Results of this study support the viability of the CTQ-SF for screening maltreatment in a highly vulnerable street population. Caution is recommended when comparing prevalence estimates for male and female street youth given the failure of the strong factorial multigroup model.

  11. Frontiers of reliability

    CERN Document Server

    Basu, Asit P; Basu, Sujit K

    1998-01-01

    This volume presents recent results in reliability theory by leading experts in the world. It will prove valuable for researchers and users of reliability theory. It consists of refereed invited papers on a broad spectrum of topics in reliability. The subjects covered include Bayesian reliability, Bayesian reliability modeling, confounding in a series system, DF tests, Edgeworth approximation to reliability, estimation under random censoring, fault tree reduction for reliability, inference about changes in hazard rates, information theory and reliability, mixture experiment, mixture of Weibull

  12. Improved Efficiency and Reliability of NGS Amplicon Sequencing Data Analysis for Genetic Diagnostic Procedures Using AGSA Software

    Directory of Open Access Journals (Sweden)

    Axel Poulet

    2016-01-01

    Full Text Available Screening for BRCA mutations in women with familial risk of breast or ovarian cancer is an ideal situation for high-throughput sequencing, providing large amounts of low-cost data. However, 454 (Roche) and Ion Torrent (Thermo Fisher) technologies produce homopolymer-associated indel errors, complicating their use in routine diagnostics. We developed software, named AGSA, which helps to detect false positive mutations in homopolymeric sequences. Seventy-two familial breast cancer cases were analysed in parallel by amplicon 454 pyrosequencing and Sanger dideoxy sequencing for genetic variations of the BRCA genes. All 565 variants detected by dideoxy sequencing were also detected by pyrosequencing. Furthermore, pyrosequencing detected 42 variants that were missed with the Sanger technique. Six amplicons contained homopolymer tracts in the coding sequence that were systematically misread by the software supplied by Roche. Read data plotted as histograms by AGSA software aided the analysis considerably and allowed validation of the majority of homopolymers. As an optimisation, an additional 250 patients were analysed using microfluidic amplification of regions of interest (Access Array, Fluidigm) of the BRCA genes, followed by 454 sequencing and AGSA analysis. AGSA complements a complete line of high-throughput diagnostic sequence analysis, reducing time and costs while increasing reliability, notably for homopolymer tracts.

  13. Machinery safety of lathe machine using SHARP-systemic human action reliability procedure: a pilot case study in academic laboratory

    Science.gov (United States)

    Suryoputro, M. R.; Sari, A. D.; Sugarindra, M.; Arifin, R.

    2017-12-01

    This research aimed to apply human reliability analysis using the SHARP method in a case study of a lathe machine, and to identify improvements that could be made to the existing safety system. SHARP comprises 7 stages: definition, screening, breakdown, representation, impact assessment, quantification and documentation. These steps were combined with and analysed using HIRA, FTA and FMEA. HIRA analysis of the lathe in the academic laboratory showed the highest risk level, with a score of 9 for activities involving power transmission parts and a score of 6 for activities involving moving parts, both of which require action to reduce the level of risk. The highest RPN value, 18, was obtained for power transmission activities, followed by 12 for moving parts and 8 for the point of operation. Thus, power transmission activities carry the highest risk of workplace accidents during operation. In the academic laboratory, improvements were made first through engineering controls (machine guarding) and completed with the necessary administrative controls (SOPs, work permits, training and routine cleaning) and dedicated PPE.

  14. A procedure for estimating site specific derived limits for the discharge of radioactive material to the atmosphere

    CERN Document Server

    Hallam, J; Jones, J A

    1983-01-01

    Generalised Derived Limits (GDLs) for the discharge of radioactive material to the atmosphere are evaluated using parameter values to ensure that the exposure of the critical group is unlikely to be underestimated significantly. Where the discharge is greater than about 5% of the GDL, a more rigorous estimate of the derived limit may be warranted. This report describes a procedure for estimating site specific derived limits for discharges of radioactivity to the atmosphere taking into account the conditions of the release and the location and habits of the exposed population. A worksheet is provided to assist in carrying out the required calculations.

  15. Development, test and evaluation of a computerized procedure for using Landsat data to estimate spring small grains acreage

    Science.gov (United States)

    Mohler, R. R. J.; Palmer, W. F.; Smyrski, M. M.; Baker, T. C.; Nazare, C. V.

    1982-01-01

    A number of methods which can provide information concerning crop acreages on the basis of a utilization of multispectral scanner (MSS) data require for their implementation a comparatively large amount of labor. The present investigation is concerned with a project designed to improve the efficiency of analysis through increased automation. The Caesar technique was developed to realize this objective. The processability rates of the Caesar procedure versus the historical state-of-the-art proportion estimation procedures were determined in an experiment. Attention is given to the study site, the aggregation technology, the results of the aggregation test, and questions of error characterization. It is found that the Caesar procedure, which has been developed for the spring small grains region of North America, is highly efficient and provides accurate results.

  16. Chest-wall thickness and percent thoracic fat estimation by B-mode ultrasound: system and procedure review

    International Nuclear Information System (INIS)

    Berger, C.D.; Lane, B.H.; Dunsmore, M.R.

    1983-02-01

    Accurate measurement of chest wall thickness is necessary for estimation of lung burden of transuranic elements in humans. To achieve this capability, the ORNL Whole Body Counter has acquired a B-mode ultrasonic imaging system for defining the structure within the thorax of the body. This report contains a review of the ultrasound system in use at the ORNL Whole Body Counter, including its theory of operation, and the procedure for use of the system. Future developmental plans are also presented

  17. Evaluation of procedures for estimation of the isosteric heat of adsorption in microporous materials

    NARCIS (Netherlands)

    Krishna, R.

    2014-01-01

    The major objective of this communication is to evaluate procedures for estimation of the isosteric heat of adsorption, Qst, in microporous materials such as zeolites, metal organic frameworks (MOFs), and zeolitic imidazolate frameworks (ZIFs). For this purpose we have carefully analyzed published experimental

  18. A MOSUM procedure for the estimation of multiple random change points

    OpenAIRE

    Eichinger, Birte; Kirch, Claudia

    2018-01-01

    In this work, we investigate statistical properties of change point estimators based on moving sum statistics. We extend results for testing in a classical situation with multiple deterministic change points by allowing for random exogenous change points that arise in Hidden Markov or regime switching models among others. To this end, we consider a multiple mean change model with possible time series errors and prove that the number and location of change points are estimated consistently by ...
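
    A minimal sketch of the moving-sum (MOSUM) statistic underlying such estimators is given below, assuming a mean-change model with i.i.d. noise; threshold selection and the random-change-point extension studied in the paper are omitted.

      import numpy as np

      def mosum_statistic(x, G):
          """Moving-sum change point statistic with bandwidth G: the scaled
          absolute difference between sums over adjacent windows of length G."""
          x = np.asarray(x, dtype=float)
          n = x.size
          csum = np.concatenate(([0.0], np.cumsum(x)))
          k = np.arange(G, n - G)                   # admissible window centres
          right = csum[k + G] - csum[k]
          left = csum[k] - csum[k - G]
          sigma = np.std(np.diff(x)) / np.sqrt(2.0)  # crude noise estimate
          return k, np.abs(right - left) / (sigma * np.sqrt(2.0 * G))

      # Change points are then estimated as significant local maxima of the
      # statistic above a critical threshold (not computed in this sketch).
      rng = np.random.default_rng(0)
      x = np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 200)])
      k, stat = mosum_statistic(x, G=40)
      print("candidate change point near index", k[np.argmax(stat)])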

  19. Design-related influencing factors of the computerized procedure system for inclusion into human reliability analysis of the advanced control room

    International Nuclear Information System (INIS)

    Kim, Jaewhan; Lee, Seung Jun; Jang, Seung Cheol; Ahn, Kwang-Il; Shin, Yeong Cheol

    2013-01-01

    This paper presents major design factors of the computerized procedure system (CPS) by task characteristics/requirements, with individual relative weight evaluated by the analytic hierarchy process (AHP) technique, for inclusion into human reliability analysis (HRA) of the advanced control rooms. Task characteristics/requirements of an individual procedural step are classified into four categories according to the dynamic characteristics of an emergency situation: (1) a single-static step, (2) a single-dynamic and single-checking step, (3) a single-dynamic and continuous-monitoring step, and (4) a multiple-dynamic and continuous-monitoring step. According to the importance ranking evaluation by the AHP technique, ‘clearness of the instruction for taking action’, ‘clearness of the instruction and its structure for rule interpretation’, and ‘adequate provision of requisite information’ were rated as being of higher importance for all the task classifications. Importance of ‘adequacy of the monitoring function’ and ‘adequacy of representation of the dynamic link or relationship between procedural steps’ is dependent upon task characteristics. The result of the present study gives valuable insight into which design factors of the CPS should be incorporated, with relative importance or weight between design factors, into HRA of the advanced control rooms. (author)
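
    As a hedged illustration of the AHP weighting step mentioned above, the sketch below extracts relative weights from a pairwise comparison matrix via the principal eigenvector; the 3x3 matrix is hypothetical and not the study's actual comparisons.

      import numpy as np

      def ahp_weights(pairwise):
          """Relative weights from an AHP pairwise comparison matrix: the
          normalised principal eigenvector; also returns the consistency
          index CI = (lambda_max - n) / (n - 1)."""
          A = np.asarray(pairwise, dtype=float)
          eigvals, eigvecs = np.linalg.eig(A)
          i = np.argmax(eigvals.real)
          w = np.abs(eigvecs[:, i].real)
          w /= w.sum()
          n = A.shape[0]
          ci = (eigvals[i].real - n) / (n - 1)
          return w, ci

      # Hypothetical 3x3 comparison of CPS design factors (Saaty 1-9 scale)
      A = [[1, 3, 5],
           [1 / 3, 1, 2],
           [1 / 5, 1 / 2, 1]]
      w, ci = ahp_weights(A)
      print("weights:", np.round(w, 3), "CI: %.3f" % ci)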

  20. Estimation of absorbed dose and its biological effects in subjects undergoing neuro interventional radiological procedures

    International Nuclear Information System (INIS)

    Basheerudeen, Safa Abdul Syed; Subramanian, Vinodhini; Venkatachalam, Perumal; Joseph, Santosh; Selvam, Paneer; Jose, M.T.; Annalakshmi, O.

    2016-01-01

    Radiological imaging has many applications due to its non-invasiveness, rapid diagnosis of life threatening diseases, and shorter hospital stays, which benefit patients of all age groups. However, these procedures are complicated and time consuming, and use repeated imaging views and radiation, thereby increasing the patient dose and the collective effective dose. The effects of high dose radiation are well established. However, the effects of low dose exposure remain to be determined. Therefore, investigating the effect on medically exposed individuals is an alternative source to understand the low dose effects of radiation. The ESD (Entrance Surface Dose) was recorded using lithium borate based TL dosimeters to measure the doses received by the head, neck and shoulder of the study subjects (n = 70) who underwent procedures like cerebral angiography, coiling, stenting and embolization

  1. Methods of Estimation the Reliability and Increasing the Informativeness of the Laboratory Results (Analysis of the Laboratory Case of Measurement the Indicators of Thyroid Function

    Directory of Open Access Journals (Sweden)

    N A Kovyazina

    2014-06-01

    Full Text Available The goal of the study was to demonstrate the multilevel laboratory quality management system and to point out methods of estimating the reliability and increasing the information content of laboratory results, using a laboratory case as an example. Results. The article examines the stages of laboratory quality management that helped to estimate the reliability of the results of determining Free T3, Free T4 and TSH. The measurement results are presented with the expanded uncertainty and an evaluation of the dynamics. Conclusion. Compliance with mandatory measures of the laboratory quality management system enables laboratories to obtain reliable results and to calculate parameters that can increase the information content of laboratory tests in clinical decision making.

  2. Reliability of the Core Items in the General Social Survey: Estimates from the Three-Wave Panels, 2006–2014

    Directory of Open Access Journals (Sweden)

    Michael Hout

    2016-11-01

    Full Text Available We used standard and multilevel models to assess the reliability of core items in the General Social Survey panel studies spanning 2006 to 2014. Most of the 293 core items scored well on the measure of reliability: 62 items (21 percent had reliability measures greater than 0.85; another 71 (24 percent had reliability measures between 0.70 and 0.85. Objective items, especially facts about demography and religion, were generally more reliable than subjective items. The economic recession of 2007–2009, the slow recovery afterward, and the election of Barack Obama in 2008 altered the social context in ways that may look like unreliability of items. For example, unemployment status, hours worked, and weeks worked have lower reliability than most work-related items, reflecting the consequences of the recession on the facts of people's lives. Items regarding racial and gender discrimination and racial stereotypes scored as particularly unreliable, accounting for most of the 15 items with reliability coefficients less than 0.40. Our results allow scholars to more easily take measurement reliability into consideration in their own research, while also highlighting the limitations of these approaches.

  3. A Review of Sea State Estimation Procedures Based on Measured Vessel Responses

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2016-01-01

    The operation of ships requires careful monitoring of the related costs while, at the same time, ensuring a high level of safety. A ship’s performance with respect to safety and fuel efficiency may be compromised by the encountered waves. Consequently, it is important to estimate the surrounding sea state. The paper reviews procedures for shipboard SSE using measured vessel responses, resembling the concept of traditional wave rider buoys. Moreover, newly developed ideas for shipboard sea state estimation are introduced. The presented material is all based on the author’s personal experience, developed within extensive work on the subject

  4. Using the Ridge Regression Procedures to Estimate the Multiple Linear Regression Coefficients

    Science.gov (United States)

    Gorgees, HazimMansoor; Mahdi, FatimahAssim

    2018-05-01

    This article is concerned with comparing the performance of different types of ordinary ridge regression estimators that have been proposed to estimate the regression parameters when near-exact linear relationships among the explanatory variables are present. For this situation we employ data obtained from the tagi gas filling company during the period 2008-2010. The main result is that the method based on the condition number performs better than the other stated methods, since it has the smallest mean square error (MSE).
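
    A minimal sketch of the comparison described above: ordinary ridge estimators evaluated by mean square error about known coefficients on a deliberately collinear design. The data are synthetic (the company data are not reproduced here), and the condition-number rule for choosing the ridge constant k is an illustrative stand-in, not necessarily the article's exact rule.

      import numpy as np

      def ridge_coefficients(X, y, k):
          """Ordinary ridge estimator: beta(k) = (X'X + k*I)^(-1) X'y."""
          p = X.shape[1]
          return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

      # Hypothetical collinear design: x2 is nearly a copy of x1
      rng = np.random.default_rng(1)
      n = 100
      x1 = rng.normal(size=n)
      X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=n), rng.normal(size=n)])
      beta_true = np.array([1.0, 1.0, 0.5])
      y = X @ beta_true + rng.normal(size=n)

      # Compare a condition-number-based choice of k against a fixed k by
      # MSE of the estimated coefficients about the true ones.
      cond = np.linalg.cond(X.T @ X)
      for label, k in [("k from condition number", 1.0 / np.sqrt(cond)),
                       ("fixed k = 0.1", 0.1)]:
          b = ridge_coefficients(X, y, k)
          print(label, "MSE = %.4f" % np.mean((b - beta_true) ** 2))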

  5. Procedure manual for the estimation of average indoor radon-daughter concentrations using the filtered alpha-track method

    International Nuclear Information System (INIS)

    George, J.L.

    1988-04-01

    One of the measurement needs of US Department of Energy (DOE) remedial action programs is the estimation of the annual-average indoor radon-daughter concentration (RDC) in structures. The filtered alpha-track method, using a 1-year exposure period, can be used to accomplish RDC estimations for the DOE remedial action programs. This manual describes the procedure used to obtain filtered alpha-track measurements and to derive average RDC estimates from the measurements. Appropriate quality-assurance and quality-control programs are also presented. The ''prompt'' alpha-track method of exposing monitors for 2 to 6 months during specific periods of the year is also briefly discussed in this manual. However, the prompt alpha-track method has been validated only for use in the Mesa County, Colorado, area. 3 refs., 3 figs

  6. The cost of obesity for nonbariatric inpatient operative procedures in the United States: national cost estimates obese versus nonobese patients.

    Science.gov (United States)

    Mason, Rodney J; Moroney, Jolene R; Berne, Thomas V

    2013-10-01

    To evaluate the economic impact of obesity on hospital costs associated with the commonest nonbariatric, nonobstetrical surgical procedures. Health care costs and obesity are both rising. Nonsurgical costs associated with obesity are well documented but surgical costs are not. National cost estimates were calculated from the Healthcare Cost and Utilization Project (HCUP) Nationwide Inpatient Sample (NIS) database, 2005-2009, for the highest volume nonbariatric nonobstetric procedures. Obesity was identified from the HCUP-NIS severity data file comorbidity index. Costs for obese patients were compared with those for nonobese patients. To control for medical complexity, each obese patient was matched one-to-one with a nonobese patient using age, sex, race, and 28 comorbid defined elements. Of 2,309,699 procedures, 439,812 (19%) were successfully matched into 2 medically equal groups (obese vs nonobese). Adjusted total hospital costs incurred by obese patients were 3.7% higher, with a significantly greater adjusted mean cost of $648 (95% confidence interval [CI]: $556-$736) compared with nonobese patients. Of the 2 major components of hospital costs, length of stay was significantly increased in obese patients (mean difference = 0.0253 days, 95% CI: 0.0225-0.0282) and resource utilization, determined by costs per day, was greater in obese patients due to an increased number of diagnostic and therapeutic procedures needed postoperatively (odds ratio [OR] = 0.94, 95% CI: 0.93-0.96). Postoperative complications were equivalent in both groups (OR = 0.97, 95% CI: 0.93-1.02). Annual national hospital expenditures for the largest volume surgical procedures are an estimated $160 million higher in obese than in a comparative group of nonobese patients.

  7. Estimating soil labile organic carbon and potential turnover rates using a sequential fumigation–incubation procedure.

    Science.gov (United States)

    X.M. Zou; H.H. Ruan; Y. Fu; X.D. Yang; L.Q. Sha

    2005-01-01

    Labile carbon is the fraction of soil organic carbon with most rapid turnover times and its oxidation drives the flux of CO2 between soils and atmosphere. Available chemical and physical fractionation methods for estimating soil labile organic carbon are indirect and lack a clear biological definition. We have modified the well-established Jenkinson and Powlson’s...

  8. Estimation of radiation load of patient during IGRT image radiotherapy procedures

    International Nuclear Information System (INIS)

    Nechvil, K.; Mynarik, J.; Dolezel, M.; Minarikova, I.

    2009-01-01

    In this poster, methods for the quantitative estimation of doses from the lower imaging modalities used in IGRT are described. They are applied to an actual scenario: IGRT-based therapy at the radiotherapy department of RC Multiscan Pardubice. The results are compared with values available from the literature, and the risk from these doses is evaluated

  9. Reliability Based Optimization of Structural Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    1987-01-01

    The optimization problem to design structural systems such that the reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisfies given requirements or such that the system reliability satisfies a given requirement. For these optimization problems it is described how a sensitivity analysis can be performed. Next, new optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability-based optimization problem sequentially using quasi-analytical derivatives. Finally...
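
    For the special case of a linear limit state g = R - S with independent normal variables, FORM reduces to a closed-form reliability index, as sketched below with hypothetical numbers; the paper's system-level optimization with quasi-analytical derivatives goes well beyond this textbook case.

      from math import sqrt
      from statistics import NormalDist

      # First-order reliability for g = R - S with independent normal
      # resistance R and load effect S (a case where FORM is exact).
      mu_R, sigma_R = 350.0, 35.0   # hypothetical strength (MPa)
      mu_S, sigma_S = 250.0, 40.0   # hypothetical load effect (MPa)

      beta = (mu_R - mu_S) / sqrt(sigma_R**2 + sigma_S**2)  # reliability index
      pf = NormalDist().cdf(-beta)                          # failure probability

      # FORM sensitivity factors (direction cosines of the design point)
      alpha_R = -sigma_R / sqrt(sigma_R**2 + sigma_S**2)
      alpha_S = sigma_S / sqrt(sigma_R**2 + sigma_S**2)
      print(f"beta = {beta:.2f}, Pf = {pf:.2e}, alpha = ({alpha_R:.2f}, {alpha_S:.2f})")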

  10. Sampling procedure in a willow plantation for estimation of moisture content

    DEFF Research Database (Denmark)

    Nielsen, Henrik Kofoed; Lærke, Poul Erik; Liu, Na

    2015-01-01

    Heating value and fuel quality of wood are closely connected to moisture content. In this work the variation of moisture content (MC) of short rotation coppice (SRC) willow shoots is described for five clones during one harvesting season. Subsequently, an appropriate sampling procedure minimising labour costs and sampling uncertainty is proposed, where the MC of a single stem section with a length of 10–50 cm corresponds to the mean shoot moisture content (MSMC) with a maximum bias of 11 g kg−1. This bias can be reduced by selecting the stem section according to the particular clone...

  11. System Reliability Engineering

    International Nuclear Information System (INIS)

    Lim, Tae Jin

    2005-02-01

    This book covers reliability engineering, including quality and reliability, reliability data, the importance of reliability engineering, reliability measures, the Poisson process (goodness-of-fit tests and the Poisson arrival model), reliability estimation (e.g., for the exponential distribution), reliability of systems, availability, preventive maintenance (replacement policies, minimal repair policy, shock models, spares, group maintenance and periodic inspection), analysis of common cause failures, and an analysis model of repair effects.
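
    As a small worked example of the reliability estimation topic listed above, the sketch below computes the maximum likelihood failure rate and reliability function for the exponential model from hypothetical complete failure-time data.

      import numpy as np

      # Exponential model from complete failure-time data: the MLE of the
      # failure rate is n / sum(t_i), and R(t) = exp(-lambda * t).
      times = np.array([120., 340., 95., 410., 220., 180., 560., 75.])  # hypothetical hours
      lam = times.size / times.sum()   # MLE failure rate (1/h)
      mttf = 1.0 / lam                 # mean time to failure
      t = 100.0
      reliability = np.exp(-lam * t)
      print(f"lambda = {lam:.4f}/h, MTTF = {mttf:.0f} h, R({t:.0f} h) = {reliability:.3f}")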

  12. General guidance and procedures for estimating and reporting national GHG emissions for agriculture

    International Nuclear Information System (INIS)

    Rypdal, K.

    2002-01-01

    Greenhouse gas (GHG) emissions from agriculture account for a large share of total GHG emissions in most countries. Methane from ruminants, animal manure and rice fields, and nitrous oxide from agricultural soils are among the most important sources. In general, these emission estimates are also more uncertain than most other parts of the GHG emission inventory. IPCC has developed guidelines for estimating and reporting emissions of GHG. These guidelines shall be followed to secure complete, consistent, accurate and transparent reporting of emissions. However, the recommended methodologies are tiered, and the choice of methods shall preferably reflect national circumstances, the national importance of a source, and different resources to prepare inventories. A country may also apply a national methodology given that it is well documented and not in conflict with good practice. Emission data reported under the United Nations Framework Convention on Climate Change are subject to external control, and the methodologies are reviewed by experts on agricultural inventories. (au)

  13. Heat flux estimation for neutral beam line components using inverse heat conduction procedures

    International Nuclear Information System (INIS)

    Bharathi, P.; Prahlad, V.; Quereshi, K.; Bansal, L.K.; Rambabu, S.; Sharma, S.K.; Parmar, S.; Patel, P.J.; Baruah, U.K.; Patel, Ravi

    2015-01-01

    In this work, we describe and compare analytical IHCP methods such as the semi-infinite method and the finite slab method, and a numerical method called the Stolz method, for estimating the incident heat flux from experimentally measured temperature data. In the case of the analytical methods, the finite time response of the sensor must be accounted for to obtain accurate power density estimates. The modified models corrected for the response time of the sensors are also discussed in this paper. Application of these methods using example temperature waveforms obtained on the SST1-NBI test stand is presented and discussed. For choosing the suitable method for the calorimetry on beam line components, the estimated results are also validated using ANSYS analysis done on these beam line components. In conclusion, the finite slab method corrected for the influence of the sensor response time was found to be the most suitable method for the inversion of temperature data in the case of neutral beam line components
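
    A common discretisation of the semi-infinite method mentioned above is the Cook-Felderman formula; the sketch below applies it to a hypothetical surface temperature history (it is an assumption, not confirmed by the abstract, that this matches the authors' exact implementation, and the sensor response-time correction is omitted).

      import numpy as np

      def semi_infinite_heat_flux(T, t, rho, c, k):
          """Surface heat flux from a surface temperature history via the
          1D semi-infinite solution (Cook-Felderman discretisation)."""
          coeff = 2.0 * np.sqrt(rho * c * k / np.pi)
          q = np.zeros_like(T)
          for n in range(1, len(T)):
              s = 0.0
              for i in range(1, n + 1):
                  s += (T[i] - T[i - 1]) / (np.sqrt(t[n] - t[i - 1]) + np.sqrt(t[n] - t[i]))
              q[n] = coeff * s
          return q

      # Hypothetical copper calorimeter block and a short temperature rise
      rho, c, k = 8960.0, 385.0, 400.0        # kg/m^3, J/(kg K), W/(m K)
      t = np.linspace(0.0, 1.0, 101)          # s
      T = 300.0 + 20.0 * np.sqrt(t)           # K; sqrt rise ~ constant flux
      q = semi_infinite_heat_flux(T, t, rho, c, k)
      print("estimated flux at t = 1 s: %.2e W/m^2" % q[-1])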

  14. Probe-Specific Procedure to Estimate Sensitivity and Detection Limits for 19F Magnetic Resonance Imaging.

    Directory of Open Access Journals (Sweden)

    Alexander J Taylor

    Full Text Available Due to low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility during the design stage already by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" to account for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived based on standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.

  15. Probe-Specific Procedure to Estimate Sensitivity and Detection Limits for 19F Magnetic Resonance Imaging.

    Science.gov (United States)

    Taylor, Alexander J; Granwehr, Josef; Lesbats, Clémentine; Krupa, James L; Six, Joseph S; Pavlovskaya, Galina E; Thomas, Neil R; Auer, Dorothee P; Meersmann, Thomas; Faas, Henryk M

    2016-01-01

    Due to low fluorine background signal in vivo, 19F is a good marker to study the fate of exogenous molecules by magnetic resonance imaging (MRI) using equilibrium nuclear spin polarization schemes. Since 19F MRI applications require high sensitivity, it can be important to assess experimental feasibility during the design stage already by estimating the minimum detectable fluorine concentration. Here we propose a simple method for the calibration of MRI hardware, providing sensitivity estimates for a given scanner and coil configuration. An experimental "calibration factor" to account for variations in coil configuration and hardware set-up is specified. Once it has been determined in a calibration experiment, the sensitivity of an experiment or, alternatively, the minimum number of required spins or the minimum marker concentration can be estimated without the need for a pilot experiment. The definition of this calibration factor is derived based on standard equations for the sensitivity in magnetic resonance, yet the method is not restricted by the limited validity of these equations, since additional instrument-dependent factors are implicitly included during calibration. The method is demonstrated using MR spectroscopy and imaging experiments with different 19F samples, both paramagnetically and susceptibility broadened, to approximate a range of realistic environments.

  16. Catalase measurement: A new field procedure for rapidly estimating microbial loads in fuels and water-bottoms

    Energy Technology Data Exchange (ETDEWEB)

    Passman, F.J. [Biodeterioration Control Associates, Inc., Chicago, IL (United States); Daniels, D.A. [Basic Fuel Services, Dover, NJ (United States); Chesneau, H.F.

    1995-05-01

    Low-grade microbial infections of fuel and fuel systems generally go undetected until they cause major operational problems. Three interdependent factors contribute to this: mis-diagnosis, incorrect or inadequate sampling procedures, and the perceived complexity of microbiological testing procedures. After discussing the first two issues, this paper describes a rapid field test for estimating microbial loads in fuels and associated water. The test, adapted from a procedure initially developed to measure microbial loads in metalworking fluids, takes advantage of the nearly universal presence of the enzyme catalase in the microbes that contaminate fuel systems. Samples are reacted with a peroxide-based reagent, liberating oxygen gas. The gas generates a pressure-head in a reaction tube. At fifteen minutes, a patented, electronic pressure-sensing device is used to measure that head-space pressure. The authors present both laboratory and field data from fuels and water-bottoms, demonstrating the excellent correlation between traditional viable test data (acquired after 48-72 hours of incubation) and catalase test data (acquired after 15 minutes to 4 hours). We conclude by recommending procedures for developing a failure analysis database to enhance our industry's understanding of the relationship between uncontrolled microbial contamination and fuel performance problems.

  17. An overview of coefficient alpha and a reliability matrix for estimating adequacy of internal consistency coefficients with psychological research measures.

    Science.gov (United States)

    Ponterotto, Joseph G; Ruckdeschel, Daniel E

    2007-12-01

    The present article addresses issues in reliability assessment that are often neglected in psychological research such as acceptable levels of internal consistency for research purposes, factors affecting the magnitude of coefficient alpha (alpha), and considerations for interpreting alpha within the research context. A new reliability matrix anchored in classical test theory is introduced to help researchers judge adequacy of internal consistency coefficients with research measures. Guidelines and cautions in applying the matrix are provided.
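
    For reference, coefficient alpha itself is straightforward to compute from an item-score matrix, as in the sketch below with hypothetical scores; the article's reliability matrix for judging adequacy is not reproduced here.

      import numpy as np

      def cronbach_alpha(items):
          """Coefficient alpha for an (n_respondents x k_items) score matrix:
          alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
          X = np.asarray(items, dtype=float)
          k = X.shape[1]
          item_vars = X.var(axis=0, ddof=1)
          total_var = X.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

      # Hypothetical scores: 6 respondents x 4 items
      scores = [[4, 5, 4, 4],
                [3, 3, 2, 3],
                [5, 5, 5, 4],
                [2, 2, 3, 2],
                [4, 4, 4, 5],
                [3, 2, 3, 3]]
      print("alpha = %.2f" % cronbach_alpha(scores))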

  18. A use-side procedure for estimating trade margins in input-output analysis

    Directory of Open Access Journals (Sweden)

    Marisa Asensio Pardo

    2005-01-01

    Full Text Available According to the National Accounting Systems proposed by the United Nations (1993) and Eurostat (1996), use and make (or supply) matrices should be measured before goods and services are conveyed to the markets (basic values). Actually, the make table is defined in basic values (excluding trade and transport margins and net commodity taxes) whereas the use table is in purchasers’ values (including them). In particular, this paper shows how trade margins can be removed from the use table with the purpose of constructing an input-output table. The proposed approach is based on the use-side procedure from the ESA-95 Input-Output Manual (Eurostat, 2002) and is also being applied to the forthcoming 2000 Andalusian Input-Output Framework.

  19. Procedure to approximately estimate the uncertainty of material ratio parameters due to inhomogeneity of surface roughness

    International Nuclear Information System (INIS)

    Hüser, Dorothee; Thomsen-Schmidt, Peter; Hüser, Jonathan; Rief, Sebastian; Seewig, Jörg

    2016-01-01

    Roughness parameters that characterize contacting surfaces with regard to friction and wear are commonly stated without uncertainties, or with an uncertainty only taking into account a very limited number of aspects, such as repeatability or reproducibility (homogeneity) of the specimen. This makes it difficult to discriminate between different values of single roughness parameters. Therefore uncertainty assessment methods are required that take all relevant aspects into account. In the literature this is rarely performed and examples specific for parameters used in friction and wear are not yet given. We propose a procedure to derive the uncertainty from a single profile employing a statistical method that is based on the statistical moments of the amplitude distribution and the autocorrelation length of the profile. To show the possibilities and the limitations of this method we compare the uncertainty derived from a single profile with that derived from a high statistics experiment. (paper)

  20. Application of virtual reality procedures in radiation protection and dose estimation for workers

    International Nuclear Information System (INIS)

    Blunck, C.; Becker, F.

    2010-01-01

    When people need to work in an environment where radiation fields are present, one has to think about the operating procedure with respect to radiation protection. This is valid for routine as well as for special work situations where radiation protection precautions are necessary. In order to give advice about the safest way of operation and adequate shielding measures, it is necessary to analyse the radiation field and possible dose exposures at relevant positions in the working area. Since the field can be very inhomogeneous, extensive measurements could be needed for this purpose. In addition, it is possible that the field is not present before the time of work, so that a measurement could be troublesome or not possible at all. In this case, a simulation of the specific scenario could be an efficient way to analyse the radiation fields and determine possible exposures at different places. If an adequate phantom is used, it is even possible to determine personal doses like Hp(10) or Hp(0.07). However, in most work situations, exposure is not a static scenario. The radiation field varies if the source or its surrounding objects change place. Furthermore, people or parts of their bodies are usually in motion. Hence, simulations of movements in inhomogeneous, time- and space-variant radiation fields are desirable for dose assessment. In such a ''virtual reality'', working procedures could be trained or analysed without any exposure. We present an approach to simulating hand movements in inhomogeneous beta and photon radiation fields by means of an articulated hand phantom. As an example application, the hand phantom is used to simulate the handling of a Y-90 source. (orig.)

  1. Estimation Procedure of Common Cause Failure Parameters for CAFE-PSA

    International Nuclear Information System (INIS)

    Kang, Dae Il; Hwang, M. J.; Han, S. H.

    2009-03-01

    Detailed common cause failure (CCF) analysis generally needs the data for CCF events from other nuclear power plants because the CCF events rarely occur. Since 2002, KAERI has participated in the international common cause failure data exchange (ICDE) project to get data for CCF events. The operation office of the ICDE project sent about 400 CCF event data for emergency diesel generators, motor operated valves, check valves, pumps, and breakers to KAERI in 2009. However, there was no program available to analyze the ICDE CCF event data. Therefore, we developed the CAFE-PSA (common CAuse Failure Event analysis program for PSA) to estimate CCF parameters by using the ICDE CCF event data. With CAFE-PSA, the CCF events in the ICDE database can be qualitatively and quantitatively analyzed. The qualitative analysis results of the ICDE CCF data, by using the CAFE-PSA, showed that the major root cause of CCF events, for motor operated valves, check valves, and pumps, was the fault of their internal parts, and that for emergency diesel generators and breakers was the inadequacy of design/manufacture or construction. The quantitative analysis results of the ICDE CCF data, by using the CAFE-PSA, showed that the estimated Alpha Factors of components, mentioned above, were lower than those previously used in the PSA for domestic nuclear power plants, but were higher than those in USNRC 2007 CCF data. Through performing qualitative and quantitative analysis of the ICDE CCF data, by using the CAFE-PSA, a plan for coping with CCF events for design and operation of nuclear power plants can be produced and reasonable values for CCF parameters can be estimated. In addition, it is expected that the technical adequacy of PSA can be improved
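
    A minimal sketch of the point estimation of alpha factors from CCF event counts is given below; the counts are hypothetical, and the CAFE-PSA program itself presumably adds data screening, impact-vector mapping and uncertainty treatment not shown here.

      # Alpha-factor estimation from event counts: alpha_k is the fraction of
      # failure events in which exactly k of the m components in the common
      # cause component group fail together (point estimate only).
      def alpha_factors(event_counts):
          total = sum(event_counts)
          return [n / total for n in event_counts]

      # Hypothetical counts for a group of 4 emergency diesel generators:
      # n_k events with exactly k = 1..4 components failed
      counts = [180, 12, 4, 2]
      for k, a in enumerate(alpha_factors(counts), start=1):
          print(f"alpha_{k} = {a:.4f}")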

  2. Monte Carlo based estimation of organ and effective doses to patients undergoing hysterosalpingography and retrograde urethrography fluoroscopy procedures

    Science.gov (United States)

    Ngaile, J. E.; Msaki, P. K.; Kazema, R. R.

    2018-04-01

    Contrast investigations of hysterosalpingography (HSG) and retrograde urethrography (RUG) fluoroscopy procedures remain the dominant diagnostic tools for the investigation of infertility in females and urethral strictures in males, respectively, owing to the scarcity and high cost of alternative diagnostic technologies. In light of the radiological risks associated with contrast-based investigations of the genitourinary tract, there is a need to assess the magnitude of the radiation burden imparted to patients undergoing HSG and RUG fluoroscopy procedures in Tanzania. The air kerma area product (KAP), fluoroscopy time, number of images, organ dose and effective dose to patients undergoing HSG and RUG procedures were obtained from four hospitals. The KAP was measured using a flat transmission ionization chamber, while the organ and effective doses were estimated using knowledge of the patient characteristics, patient-related exposure parameters, geometry of examination, KAP and Monte Carlo calculations (PCXMC). The median values of KAP for the HSG and RUG were 2.2 Gy cm2 and 3.3 Gy cm2, respectively. The median organ doses for the ovaries, urinary bladder and uterus in the HSG procedures were 1.0 mGy, 4.0 mGy and 1.6 mGy, respectively, while those for the urinary bladder and testes in the RUG procedures were 3.4 mGy and 5.9 mGy, respectively. The median values of effective dose for the HSG and RUG procedures were 0.65 mSv and 0.59 mSv, respectively. The median values of effective dose per hospital for the HSG and RUG procedures had a range of 1.6-2.8 mSv and 1.9-5.6 mSv, respectively, while the overall differences between individual effective doses across the four hospitals varied by factors of up to 22.0 and 46.7, respectively, for the HSG and RUG procedures. The proposed diagnostic reference levels (DRLs) for the HSG and RUG were, for KAP, 2.8 Gy cm2 and 3.9 Gy cm2; for fluoroscopy time, 0.8 min and 0.9 min; and for number of images, 5 and 4
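
    As a crude first-order cross-check of the dose quantities above, effective dose can be approximated as KAP times a field-specific conversion coefficient; the coefficient below is an assumed illustrative value, and this does not replace the patient-specific PCXMC Monte Carlo calculation the study actually used.

      # E ~ KAP * c, with c an assumed conversion coefficient for a pelvic
      # fluoroscopy field (illustrative value, not from the study).
      kap_gy_cm2 = 2.2            # median KAP for HSG from the abstract
      c_msv_per_gy_cm2 = 0.3      # assumed conversion coefficient
      effective_dose = kap_gy_cm2 * c_msv_per_gy_cm2
      print(f"E ~ {effective_dose:.2f} mSv")  # same order as the reported 0.65 mSv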

  3. Distorted estimates of implicit and explicit learning in applications of the process-dissociation procedure to the SRT task.

    Science.gov (United States)

    Stahl, Christoph; Barth, Marius; Haider, Hilde

    2015-12-01

    We investigated potential biases affecting the validity of the process-dissociation (PD) procedure when applied to sequence learning. Participants were or were not exposed to a serial reaction time task (SRTT) with two types of pseudo-random materials. Afterwards, participants worked on a free or cued generation task under inclusion and exclusion instructions. Results showed that pre-experimental response tendencies, non-associative learning of location frequencies, and the usage of cue locations introduced bias to PD estimates. These biases may lead to erroneous conclusions regarding the presence of implicit and explicit knowledge. Potential remedies for these problems are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
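
    The classic process-dissociation estimates referred to above follow from the inclusion and exclusion performance equations, as sketched below with hypothetical generation-task proportions; the paper's point is precisely that these estimates can be biased by the factors it lists.

      def process_dissociation(p_inclusion, p_exclusion):
          """Classic process-dissociation estimates from the equations
          I = C + (1 - C) * A and E = (1 - C) * A, giving
          C (controlled/explicit) = I - E and A (automatic/implicit) = E / (1 - C)."""
          c = p_inclusion - p_exclusion
          a = p_exclusion / (1.0 - c) if c < 1.0 else float("nan")
          return c, a

      # Hypothetical proportions of regular transitions generated under
      # inclusion and exclusion instructions
      C, A = process_dissociation(p_inclusion=0.62, p_exclusion=0.38)
      print(f"C = {C:.2f}, A = {A:.2f}")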

  4. Validation and reliability of the sex estimation of the human os coxae using freely available DSP2 software for bioarchaeology and forensic anthropology.

    Science.gov (United States)

    Brůžek, Jaroslav; Santos, Frédéric; Dutailly, Bruno; Murail, Pascal; Cunha, Eugenia

    2017-10-01

    A new tool for skeletal sex estimation based on measurements of the human os coxae is presented using skeletons from a metapopulation of identified adult individuals from twelve independent population samples. For reliable sex estimation, a posterior probability greater than 0.95 was considered to be the classification threshold: below this value, estimates are considered indeterminate. By providing free software, we aim to develop an even more disseminated method for sex estimation. Ten metric variables collected from 2,040 ossa coxa of adult subjects of known sex were recorded between 1986 and 2002 (reference sample). To test both the validity and reliability, a target sample consisting of two series of adult ossa coxa of known sex (n = 623) was used. The DSP2 software (Diagnose Sexuelle Probabiliste v2) is based on Linear Discriminant Analysis, and the posterior probabilities are calculated using an R script. For the reference sample, any combination of four dimensions provides a correct sex estimate in at least 99% of cases. The percentage of individuals for whom sex can be estimated depends on the number of dimensions; for all ten variables it is higher than 90%. Those results are confirmed in the target sample. Our posterior probability threshold of 0.95 for sex estimate corresponds to the traditional sectioning point used in osteological studies. DSP2 software is replacing the former version that should not be used anymore. DSP2 is a robust and reliable technique for sexing adult os coxae, and is also user friendly. © 2017 Wiley Periodicals, Inc.
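
    A skeleton of the decision rule described above (linear discriminant analysis on coxal measurements, with sex assigned only when the posterior probability exceeds 0.95 and "indeterminate" otherwise) is sketched below; the training data are synthetic and this is not the DSP2 code.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      n = 200
      # Hypothetical training data: 4 measurements per os coxae, two sexes
      X_f = rng.normal(loc=[95, 60, 148, 35], scale=4, size=(n, 4))
      X_m = rng.normal(loc=[102, 55, 160, 31], scale=4, size=(n, 4))
      X = np.vstack([X_f, X_m])
      y = np.array(["F"] * n + ["M"] * n)

      lda = LinearDiscriminantAnalysis().fit(X, y)

      def estimate_sex(measurements, threshold=0.95):
          """Return the sex only if its posterior probability clears the
          0.95 classification threshold, else 'indeterminate'."""
          post = lda.predict_proba([measurements])[0]
          best = post.argmax()
          return lda.classes_[best] if post[best] >= threshold else "indeterminate"

      print(estimate_sex([98, 58, 152, 34]))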

  5. Data Processing Procedures and Methodology for Estimating Trip Distances for the 1995 American Travel Survey (ATS)

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, H.-L.; Rollow, J.

    2000-05-01

    The 1995 American Travel Survey (ATS) collected information from approximately 80,000 U.S. households about their long distance travel (one-way trips of 100 miles or more) during the year of 1995. It is the most comprehensive survey of where, why, and how U.S. residents travel since 1977. ATS is a joint effort by the U.S. Department of Transportation (DOT) Bureau of Transportation Statistics (BTS) and the U.S. Department of Commerce Bureau of Census (Census); BTS provided the funding and supervision of the project, and Census selected the samples, conducted interviews, and processed the data. This report documents the technical support for the ATS provided by the Center for Transportation Analysis (CTA) in Oak Ridge National Laboratory (ORNL), which included the estimation of trip distances as well as data quality editing and checking of variables required for the distance calculations.

  6. A computer-assisted procedure for estimating patient exposure and fetal dose in radiographic examinations

    International Nuclear Information System (INIS)

    Glaze, S.; Schneiders, N.; Bushong, S.C.

    1982-01-01

    A computer program for calculating patient entrance exposure and fetal dose for 11 common radiographic examinations was developed. The output intensity measured at 70 kVp and a 30-inch (76-cm) source-to-skin distance was entered into the program. The change in output intensity with changing kVp was examined for 17 single-phase and 12 three-phase x-ray units. The relationships obtained from a least squares regression analysis of the data, along with the technique factors for each examination, were used to calculate patient exposure. Fetal dose was estimated using published fetal dose in mrad (10 -5 Gy) per 1,000 mR (258 μC/kg) entrance exposure values. The computations are fully automated and individualized to each radiographic unit. The information provides a ready reference in large institutions and is particularly useful at smaller facilities that do not have available physicists who can make the calculations immediately

  7. A computer-assisted procedure for estimating patient exposure and fetal dose in radiographic examinations

    International Nuclear Information System (INIS)

    Glaze, S.; Schneiders, N.; Bushong, S.C.

    1982-01-01

    A computer program for calculating patient entrance exposure and fetal dose for 11 common radiographic examinations was developed. The output intensity measured at 70 kVp and a 30-inch (76-cm) source-to-skin distance was entered into the program. The change in output intensity with changing kVp was examined for 17 single-phase and 12 three-phase x-ray units. The relationships obtained from a least squares regression analysis of the data, along with the technique factors for each examination, were used to calculate patient exposure. Fetal dose was estimated using published fetal dose in mrad (10(-5) Gy) per 1,000 mR (258 microC/kg) entrance exposure values. The computations are fully automated and individualized to each radiographic unit. The information provides a ready reference in large institutions and is particularly useful at smaller facilities that do not have available physicists who can make the calculations immediately
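
    The regression step described in the two records above can be sketched as follows: a power-law fit of output intensity versus kVp, scaled by technique factors and an inverse-square distance correction; all calibration numbers are hypothetical, and the power-law form is an assumption standing in for whatever functional form the authors fitted.

      import numpy as np

      # Fit output = a * kVp^b by least squares in log-log space from a few
      # hypothetical calibration points measured at the 76-cm distance.
      kvp = np.array([60., 70., 80., 90., 100.])
      output_mr_per_mas = np.array([2.4, 3.3, 4.4, 5.6, 7.0])  # hypothetical

      b, log_a = np.polyfit(np.log(kvp), np.log(output_mr_per_mas), 1)
      a = np.exp(log_a)

      def entrance_exposure(kvp_set, mas, ssd_cm):
          """Entrance skin exposure (mR) for given technique factors, with an
          inverse-square correction from the 76-cm calibration distance."""
          return a * kvp_set**b * mas * (76.0 / ssd_cm) ** 2

      print("AP abdomen example: %.0f mR" % entrance_exposure(80., 40., 70.))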

  8. Feasibility Study of a Simulation Driven Approach for Estimating Reliability of Wind Turbine Fluid Power Pitch Systems

    DEFF Research Database (Denmark)

    Liniger, Jesper; Pedersen, Henrik Clemmensen; N. Soltani, Mohsen

    2018-01-01

    Recent field data indicates that pitch systems account for a substantial part of a wind turbine's downtime. Reducing downtime means increasing the total amount of energy produced during its lifetime. Both electrical and fluid power pitch systems are employed, with a roughly 50/50 distribution. Fluid power pitch systems generally show higher reliability and have been favored on larger offshore wind turbines. Still, general issues such as leakage, contamination and electrical faults make current systems work sub-optimally. Current field data for wind turbines present overall pitch system reliability and the reliability of component groups (valves, accumulators, pumps etc.). However, the failure modes of the components and, more importantly, the root causes are not evident. The root causes and failure mode probabilities are central for changing current pitch system designs and operational concepts to increase...

  9. A Spatial Allocation Procedure to Downscale Regional Crop Production Estimates from an Integrated Assessment Model

    Science.gov (United States)

    Moulds, S.; Djordjevic, S.; Savic, D.

    2017-12-01

    The Global Change Assessment Model (GCAM), an integrated assessment model, provides insight into the interactions and feedbacks between physical and human systems. The land system component of GCAM, which simulates land use activities and the production of major crops, produces output at the subregional level which must be spatially downscaled in order to use with gridded impact assessment models. However, existing downscaling routines typically consider cropland as a homogeneous class and do not provide information about land use intensity or specific management practices such as irrigation and multiple cropping. This paper presents a spatial allocation procedure to downscale crop production data from GCAM to a spatial grid, producing a time series of maps which show the spatial distribution of specific crops (e.g. rice, wheat, maize) at four input levels (subsistence, low input rainfed, high input rainfed and high input irrigated). The model algorithm is constrained by available cropland at each time point and therefore implicitly balances extensification and intensification processes in order to meet global food demand. It utilises a stochastic approach such that an increase in production of a particular crop is more likely to occur in grid cells with a high biophysical suitability and neighbourhood influence, while a fall in production will occur more often in cells with lower suitability. User-supplied rules define the order in which specific crops are downscaled as well as allowable transitions. A regional case study demonstrates the ability of the model to reproduce historical trends in India by comparing the model output with district-level agricultural inventory data. Lastly, the model is used to predict the spatial distribution of crops globally under various GCAM scenarios.

  10. Microelectronics Reliability

    Science.gov (United States)

    2017-01-17

    [List-of-figures residue removed; recoverable captions: "inverters connected in a chain"; "Typical graph showing frequency versus square root of ..."] The report describes developing an experimental reliability estimating methodology that could both illuminate the lifetime reliability of advanced devices, circuits and ... or FIT of the device. In other words, an accurate estimate of the device lifetime was found and thus the reliability that can be conveniently

  11. An investigation into the minimum accelerometry wear time for reliable estimates of habitual physical activity and definition of a standard measurement day in pre-school children.

    Science.gov (United States)

    Hislop, Jane; Law, James; Rush, Robert; Grainger, Andrew; Bulley, Cathy; Reilly, John J; Mercer, Tom

    2014-11-01

    The purpose of this study was to determine the number of hours and days of accelerometry data necessary to provide a reliable estimate of habitual physical activity in pre-school children. The impact of a weekend day on reliability estimates was also determined, and standard measurement days were defined for weekend days and weekdays. Accelerometry data were collected from 112 children (60 males, 52 females, mean (SD) age 3.7 (0.7) yr) over 7 d. The Spearman-Brown prophecy formula (S-B prophecy formula) was used to predict the number of days and hours of data required to achieve an intraclass correlation coefficient (ICC) of 0.7. The impact of including a weekend day was evaluated by comparing the reliability coefficient (r) for any 4 d of data with data for 4 d including one weekend day. Our observations indicate that 3 d of accelerometry monitoring, regardless of whether it includes a weekend day, for at least 7 h d(-1), offers sufficient reliability to characterise the total physical activity and sedentary behaviour of pre-school children. These findings offer an approach that addresses the underlying tension in epidemiologic surveillance studies between the need to maintain acceptable measurement rigour and retention of a representatively meaningful sample size.
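
    The Spearman-Brown prophecy step used above can be computed directly, as sketched below; the single-day reliability is a hypothetical value chosen only to show how roughly 3 d of monitoring can reach an ICC of 0.7.

      # Spearman-Brown prophecy: number of monitoring days N needed to reach
      # a target reliability, given the single-day intraclass correlation r:
      # N = [R / (1 - R)] * [(1 - r) / r]
      def days_needed(r_single_day, target_icc=0.7):
          return (target_icc / (1.0 - target_icc)) * ((1.0 - r_single_day) / r_single_day)

      r = 0.45  # hypothetical single-day reliability for total physical activity
      print("days needed for ICC 0.7: %.1f" % days_needed(r))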

  12. Reliability of tumor volume estimation from MR images in patients with malignant glioma. Results from the American College of Radiology Imaging Network (ACRIN) 6662 Trial

    International Nuclear Information System (INIS)

    Ertl-Wagner, Birgit B.; Blume, Jeffrey D.; Herman, Benjamin; Peck, Donald; Udupa, Jayaram K.; Levering, Anthony; Schmalfuss, Ilona M.

    2009-01-01

    Reliable assessment of tumor growth in malignant glioma poses a common problem both clinically and when studying novel therapeutic agents. We aimed to evaluate two software systems in their ability to estimate volume change of tumor and/or edema on magnetic resonance (MR) images of malignant gliomas. Twenty patients with malignant glioma were included from different sites. Serial post-operative MR images were assessed with two software systems representative of the two fundamental segmentation methods, single-image fuzzy analysis (3DVIEWNIX-TV) and multi-spectral-image analysis (Eigentool), and with a manual method by 16 independent readers (eight MR-certified technologists, four neuroradiology fellows, four neuroradiologists). Enhancing tumor volume and tumor volume plus edema were assessed independently by each reader. Intraclass correlation coefficients (ICCs), variance components, and prediction intervals were estimated. There were no significant differences in the average tumor volume change over time between the software systems (p > 0.05). Both software systems were much more reliable and yielded smaller prediction intervals than manual measurements. No significant differences were observed between the volume changes determined by fellows/neuroradiologists or technologists. Semi-automated software systems are reliable tools to serve as outcome parameters in clinical studies and the basis for therapeutic decision-making for malignant gliomas, whereas manual measurements are less reliable and should not be the basis for clinical or research outcome studies. (orig.)

  13. An approach toward estimating the safety significance of normal and abnormal operating procedures in nuclear power plants

    International Nuclear Information System (INIS)

    Grant, T.F.; Harris, M.S.

    1989-01-01

    The Nuclear Regulatory Commission's TMI Action Plan calls for a long-term plan to upgrade operating procedures in nuclear power plants. The scope of Generic Issue Human Factors 4.4, which stems from this requirement, includes the recommendation of improvements in nuclear power plant normal and abnormal operating procedures (NOPs and AOPs) and the implementation of appropriate regulatory action. This paper will describe the objectives, methodologies, and results of a Battelle-conducted value impact assessment to determine the costs and benefits of having the NRC implement regulatory action that would specify requirements for the preparation of acceptable NOPs and AOPs by the Commission's nuclear power plant licensees. The results of this value impact assessment are expressed in terms of ten cost/benefit attributes that can be affected by the NRC regulatory action. Five of these attributes require the calculation of the change in public risk that could be expected to result from the action which, in this case, required determining the safety significance of NOPs and AOPs. In order to estimate this safety significance, a multi-step methodology was created that relies on an existing Probabilistic Risk Assessment (PRA) to provide a quantitative framework for modeling the role of operating procedures. The purpose of this methodology is to determine what impact the improvement of NOPs and AOPs would have on public health and safety.

  14. Development of a procedure for estimating the high cycle fatigue strength of some high temperature structural alloys

    International Nuclear Information System (INIS)

    Soo, P.; Chow, J.G.Y.

    1979-01-01

    The generation of strain-controlled fatigue data, for the standard strain rate of 4 x 10^-3 s^-1, presents a problem when the cycles to failure exceed 10^5 because of the prohibitively long test times involved. In an attempt to circumvent this difficulty, an evaluation has been made of a test procedure involving a fast cycling rate (40 Hz) and load-controlled conditions. The validity of this procedure for extending current fatigue curves from 10^5 to 10^8 cycles and beyond hinges upon the selection of an appropriate effective strain value, since the strain usually changes rapidly during the early stage of fatigue. Results from annealed 2 1/4 Cr-1 Mo, type 304 stainless steel, Incoloy 800H and Hastelloy X, tested over a wide range of temperatures, show that the strain measured at N_f/2 is a reasonable estimate, since it gives an excellent correlation between the strain- and load-controlled tests in the 10^5 cycle range where the data overlap. It seems clear that the differences in cycling rate and early stress-strain history for the two tests do not significantly affect the correlation. It may, therefore, be concluded that such load-controlled test procedures may be used as a valid fast way of extending currently available fatigue curves from 10^5 to 10^8 cycles, and beyond.

  15. Constraints on LISA Pathfinder's Self-Gravity: Design Requirements, Estimates and Testing Procedures

    Science.gov (United States)

    Armano, M.; Audley, H.; Auger, G.; Baird, J.; Binetruy, P.; Born, M.; Bortoluzzi, M.; Brandt, Nico; Bursi, Alessandro; Slutsky, J.; et al.

    2016-01-01

    The LISA Pathfinder satellite was launched on 3 December 2015 toward the Sun-Earth first Lagrangian point (L1), where the LISA Technology Package (LTP), which is the main science payload, will be tested. LTP achieves measurements of differential acceleration of free-falling test masses (TMs) with sensitivity below 3 x 10^-14 m s^-2 Hz^-1/2 within the 1-30 mHz frequency band in one dimension. The spacecraft itself is responsible for the dominant differential gravitational field acting on the two TMs. Such a force interaction could contribute a significant amount of noise and thus threaten the achievement of the targeted free-fall level. We prevented this by balancing the gravitational forces to the sub-nm s^-2 level, guided by a protocol based on measurements of the position and the mass of all parts that constitute the satellite, via finite element calculation tool estimates. In this paper, we will introduce the gravitational balance requirements and design, and then discuss our predictions for the balance that will be achieved in flight.

  16. SPSS and SAS procedures for estimating indirect effects in simple mediation models.

    Science.gov (United States)

    Preacher, Kristopher J; Hayes, Andrew F

    2004-11-01

    Researchers often conduct mediation analysis in order to indirectly assess the effect of a proposed cause on some outcome through a proposed mediator. The utility of mediation analysis stems from its ability to go beyond the merely descriptive to a more functional understanding of the relationships among variables. A necessary component of mediation is a statistically and practically significant indirect effect. Although mediation hypotheses are frequently explored in psychological research, formal significance tests of indirect effects are rarely conducted. After a brief overview of mediation, we argue the importance of directly testing the significance of indirect effects and provide SPSS and SAS macros that facilitate estimation of the indirect effect with a normal theory approach and a bootstrap approach to obtaining confidence intervals, as well as the traditional approach advocated by Baron and Kenny (1986). We hope that this discussion and the macros will enhance the frequency of formal mediation tests in the psychology literature. Electronic copies of these macros may be downloaded from the Psychonomic Society's Web archive at www.psychonomic.org/archive/.
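
    The macros themselves are for SPSS and SAS, but the bootstrap logic they implement can be sketched in Python; the variable names, the simple OLS fits and the percentile interval below are illustrative assumptions, not the published macro code.

        import numpy as np

        def bootstrap_indirect(x, m, y, n_boot=5000, seed=0):
            """Percentile bootstrap CI for the indirect effect a*b in the
            simple mediation model x -> m -> y."""
            rng = np.random.default_rng(seed)
            n = len(x)
            estimates = np.empty(n_boot)
            for i in range(n_boot):
                idx = rng.integers(0, n, n)
                xs, ms, ys = x[idx], m[idx], y[idx]
                a = np.polyfit(xs, ms, 1)[0]            # slope of m ~ x
                # b: coefficient of m in the regression y ~ 1 + x + m
                X = np.column_stack([np.ones(n), xs, ms])
                b = np.linalg.lstsq(X, ys, rcond=None)[0][2]
                estimates[i] = a * b
            return np.percentile(estimates, [2.5, 97.5])

    If the resulting interval excludes zero, the indirect effect is deemed significant, which is the formal test the authors argue should be reported more often.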

  17. Reliability calculations

    International Nuclear Information System (INIS)

    Petersen, K.E.

    1986-03-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during operation, with the purpose of improving the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially in the analysis of very complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for the implementation of importance sampling are suggested. (author)
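
    As one concrete instance of the importance-sampling idea mentioned above, here is a sketch of estimating a small structural failure probability by shifting the sampling density toward an assumed design point; the limit-state function and the shift vector are illustrative, not taken from the report.

        import numpy as np

        def failure_prob_is(g, mu_shift, n=100_000, seed=1):
            """Estimate a small failure probability P(g(X) < 0), X ~ N(0, I),
            by importance sampling from a normal density shifted toward the
            failure region (mu_shift is the assumed design point)."""
            rng = np.random.default_rng(seed)
            d = len(mu_shift)
            x = rng.standard_normal((n, d)) + mu_shift
            # Likelihood ratio phi(x) / phi(x - mu_shift) for the standard normal:
            lr = np.exp(-x @ mu_shift + 0.5 * mu_shift @ mu_shift)
            fails = g(x) < 0
            return np.mean(fails * lr)

        # Example limit state: g(x) = 4 - x1 - x2 (failure when x1 + x2 > 4).
        g = lambda x: 4.0 - x[:, 0] - x[:, 1]
        print(failure_prob_is(g, mu_shift=np.array([2.0, 2.0])))

    The variance reduction comes from concentrating samples where failures actually happen; naive Monte Carlo would need orders of magnitude more samples to see enough failure events for a probability of this size.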

  18. Cumulative Retrospective Exposure Assessment (REA) as a predictor of amphibole asbestos lung burden: validation procedures and results for industrial hygiene and pathology estimates.

    Science.gov (United States)

    Rasmuson, James O; Roggli, Victor L; Boelter, Fred W; Rasmuson, Eric J; Redinger, Charles F

    2014-01-01

    A detailed evaluation of the correlation and linearity of industrial hygiene retrospective exposure assessment (REA) for cumulative asbestos exposure with asbestos lung burden analysis (LBA) has not been previously performed, but both methods are utilized for case-control and cohort studies and other applications such as setting occupational exposure limits. (a) To correlate REA with asbestos LBA for a large number of cases from varied industries and exposure scenarios; (b) to evaluate the linearity, precision, and applicability of both industrial hygiene exposure reconstruction and LBA; and (c) to demonstrate validation methods for REA. A panel of four experienced industrial hygiene raters independently estimated the cumulative asbestos exposure for 363 cases with limited exposure details in which asbestos LBA had been independently determined. LBA for asbestos bodies was performed by a pathologist by both light microscopy and scanning electron microscopy (SEM) and free asbestos fibers by SEM. Precision, reliability, correlation and linearity were evaluated via intraclass correlation, regression analysis and analysis of covariance. Plaintiff's answers to interrogatories, work history sheets, work summaries or plaintiff's discovery depositions that were obtained in court cases involving asbestos were utilized by the pathologist to provide a summarized brief asbestos exposure and work history for each of the 363 cases. Linear relationships between REA and LBA were found when adjustment was made for asbestos fiber-type exposure differences. Significant correlation between REA and LBA was found with amphibole asbestos lung burden and mixed fiber-types, but not with chrysotile. The intraclass correlation coefficients (ICC) for the precision of the industrial hygiene rater cumulative asbestos exposure estimates and the precision of repeated laboratory analysis were found to be in the excellent range. The ICC estimates were performed independent of specific asbestos

  19. Estimating cross-price elasticity of e-cigarettes using a simulated demand procedure.

    Science.gov (United States)

    Grace, Randolph C; Kivell, Bronwyn M; Laugesen, Murray

    2015-05-01

    Our goal was to measure the cross-price elasticity of electronic cigarettes (e-cigarettes) and simulated demand for tobacco cigarettes both in the presence and absence of e-cigarette availability. A sample of New Zealand smokers (N = 210) completed a Cigarette Purchase Task to indicate their demand for tobacco at a range of prices. They sampled an e-cigarette and rated it and their own-brand tobacco for favorability, and indicated how many e-cigarettes and regular cigarettes they would purchase at 0.5×, 1×, and 2× the current market price for regular cigarettes, assuming that the price of e-cigarettes remained constant. Cross-price elasticity for e-cigarettes was estimated as 0.16, and was significantly positive, indicating that e-cigarettes were partially substitutable for regular cigarettes. Simulated demand for regular cigarettes at current market prices decreased by 42.8% when e-cigarettes were available, and e-cigarettes were rated 81% as favorably as own-brand tobacco. However when cigarettes cost 2× the current market price, significantly more smokers said they would quit (50.2%) if e-cigarettes were not available than if they were available (30.0%). Results show that e-cigarettes are potentially substitutable for regular cigarettes and their availability will reduce tobacco consumption. However, e-cigarettes may discourage smokers from quitting entirely as cigarette price increases, so policy makers should consider maintaining a constant relative price differential between e-cigarettes and tobacco cigarettes.
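
    Cross-price elasticity is conventionally the slope of log quantity demanded of one good against log price of the other; a sketch with invented purchase numbers, chosen only so the result lands near the reported magnitude:

        import numpy as np

        # Hypothetical mean e-cigarette purchases at 0.5x, 1x and 2x the
        # current market price of regular cigarettes (illustrative numbers,
        # not the study data).
        price_multiplier = np.array([0.5, 1.0, 2.0])
        ecig_purchases = np.array([8.0, 9.0, 10.0])

        # Cross-price elasticity: slope of log(quantity) on log(price).
        slope = np.polyfit(np.log(price_multiplier), np.log(ecig_purchases), 1)[0]
        print(f"cross-price elasticity ~ {slope:.2f}")  # positive -> substitute good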

  20. Online Identification with Reliability Criterion and State of Charge Estimation Based on a Fuzzy Adaptive Extended Kalman Filter for Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Zhongwei Deng

    2016-06-01

    In the field of state of charge (SOC) estimation, the Kalman filter has been widely used for many years, although its performance strongly depends on the accuracy of the battery model as well as the noise covariance. The Kalman gain determines the confidence coefficient of the battery model by adjusting the weight of open circuit voltage (OCV) correction, and has a strong correlation with the measurement noise covariance (R). In this paper, the online identification method is applied to acquire the real model parameters under different operation conditions. A criterion based on the OCV error is proposed to evaluate the reliability of online parameters. Besides, the equivalent circuit model produces an intrinsic model error which is dependent on the load current, and the property that a high battery current or a large current change induces a large model error can be observed. Based on the above prior knowledge, a fuzzy model is established to compensate for the model error through updating R. Combining the positive strategy (i.e., online identification) and the negative strategy (i.e., fuzzy model), a more reliable and robust SOC estimation algorithm is proposed. The experimental results verify the proposed reliability criterion and SOC estimation method under various conditions for LiFePO4 batteries.
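
    A heavily simplified sketch of the core idea, inflating the measurement noise covariance R when the load current is high so the filter trusts the model less, using a one-state coulomb-counting EKF with a toy linear OCV curve and a quadratic stand-in for the paper's fuzzy rule base (all parameter values are illustrative assumptions):

        import numpy as np

        def soc_ekf_step(soc, P, i_amps, v_meas, dt,
                         capacity_As=2.3 * 3600, R0=0.05, Q=1e-7):
            """One EKF step for a one-state SOC model: coulomb counting as
            the process model, OCV plus ohmic drop as the measurement model."""
            ocv = lambda s: 3.2 + 0.7 * s     # toy linear OCV(SOC) curve
            docv = 0.7                        # its slope dOCV/dSOC
            # Predict: SOC falls with discharge current (positive i_amps).
            soc_pred = soc - i_amps * dt / capacity_As
            P_pred = P + Q
            # Crude stand-in for the fuzzy rule base: inflate the measurement
            # noise R when the load current is high, so the OCV correction
            # gets less weight exactly when the model error is largest.
            R = 1e-3 * (1.0 + (abs(i_amps) / 10.0) ** 2)
            # Update against the measured terminal voltage.
            v_pred = ocv(soc_pred) - i_amps * R0
            K = P_pred * docv / (docv * P_pred * docv + R)
            soc_new = soc_pred + K * (v_meas - v_pred)
            P_new = (1 - K * docv) * P_pred
            return soc_new, P_new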

  1. Efficacy, Reliability, and Safety of Completely Autologous Fibrin Glue in Neurosurgical Procedures: Single-Center Retrospective Large-Number Case Study.

    Science.gov (United States)

    Nakayama, Noriyuki; Yano, Hirohito; Egashira, Yusuke; Enomoto, Yukiko; Ohe, Naoyuki; Kanemura, Nobuhiro; Kitagawa, Junichi; Iwama, Toru

    2018-01-01

    Commercially available fibrin glue (Com-FG), which is used commonly worldwide, is produced with pooled human plasma from multiple donors. However, it has added bovine aprotinin, which involves the risk of infection, allogenic immunity, and allergic reactions. We evaluated the efficacy, reliability, and safety of completely autologous fibrin glue (CAFG). From August 2014 to February 2016, prospective data were collected and analyzed from 153 patients. CAFG was prepared with the CryoSeal System using autologous blood and was applied during neurosurgical procedures. Using CAFG-soaked oxidized regenerated cellulose and/or polyglycolic acid sheets, we performed pinpoint hemostasis, transposed the offending vessels in microvascular decompression, and covered the dural incision to prevent cerebrospinal fluid leakage. The CryoSeal System generated a mean of 4.51 mL (range, 3.0-8.4 mL) of CAFG from 400 mL of autologous blood. Com-FG products were not used in our procedures. Only 6 patients required an additional allogeneic blood transfusion. The hemostatic effective rate was 96.1% (147 of 153 patients). Only 1 patient, who received transsphenoidal surgery for a pituitary adenoma, presented with the complication of delayed postoperative cerebrospinal fluid leakage (0.65%). No patient developed allergic reactions or systemic complications associated with the use of CAFG. CAFG provides effective hemostatic, adhesive, and safe performance. The timing and three-dimensional shape of solidification of CAFG-soaked oxidized regenerated cellulose and/or polyglycolic acid sheets can be controlled with slow fibrin formation. The cost to prepare CAFG is similar to that of Com-FG products, and it can therefore be easily used at most institutions.

  2. An iterative procedure for estimating areally averaged heat flux using planetary boundary layer mixed layer height and locally measured heat flux

    Energy Technology Data Exchange (ETDEWEB)

    Coulter, R. L.; Gao, W.; Lesht, B. M.

    2000-04-04

    Measurements at the central facility of the Southern Great Plains (SGP) Cloud and Radiation Testbed (CART) are intended to verify, improve, and develop parameterizations in radiative flux models that are subsequently used in General Circulation Models (GCMs). The reliability of this approach depends upon the representativeness of the local measurements at the central facility for the site as a whole, or on how these measurements can be interpreted so as to accurately represent increasingly large scales. The variation of surface energy budget terms over the SGP CART site is extremely large. Surface layer measurements of the sensible heat flux (H) often vary by a factor of 2 or more at the CART site (Coulter et al. 1996). The Planetary Boundary Layer (PBL) effectively integrates the local inputs across large scales; because the mixed layer height (h) is principally driven by H, it can, in principle, be used for estimates of surface heat flux over scales on the order of tens of kilometers. By combining measurements of h from radiosondes or radar wind profilers with a one-dimensional model of mixed layer height, the authors are investigating the ability to diagnose large-scale heat fluxes. They have developed a procedure using the model described by Boers et al. (1984) to investigate the effect of changes in surface sensible heat flux on the mixed layer height. The objective of the study is to invert the sense of the model.

  3. Procedures for the estimation of regional scale atmospheric emissions—An example from the North West Region of England

    Science.gov (United States)

    Lindley, S. J.; Longhurst, J. W. S.; Watson, A. F. R.; Conlan, D. E.

    This paper considers the value of applying an alternative pro rata methodology to the estimation of atmospheric emissions from a given regional or local area. Such investigations into less time- and resource-intensive means of providing estimates in comparison to traditional methods are important due to the potential role of new methods in the development of air quality management plans. A pro rata approach is used here to estimate emissions of SO2, NOx, CO, CO2, VOCs and black smoke from all sources, and Pb from transportation, for the North West region of England. This method has the advantage of using readily available data as well as being an easily repeatable procedure which provides a good indication of emissions to be expected from a particular geographical region. This can then provide the impetus for further emission studies and ultimately a regional/local air quality management plan. Results suggest that between 1987 and 1991 trends in the emissions of the pollutants considered have been less favourable in the North West region than in the nation as a whole.
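
    The pro rata step itself is a single proportional scaling of a national total by a regional surrogate statistic; a sketch with invented figures (not the North West data):

        # Minimal sketch of the pro rata scaling idea: apportion a national
        # emissions total to a region using a readily available surrogate
        # statistic. All figures below are illustrative placeholders.
        national_emissions_kt = 2500.0   # national SO2 emissions, kt/yr
        national_surrogate = 56.0e6      # e.g. national population
        regional_surrogate = 6.4e6       # e.g. regional population

        regional_emissions_kt = national_emissions_kt * (regional_surrogate / national_surrogate)
        print(f"regional estimate: {regional_emissions_kt:.0f} kt/yr")

    The choice of surrogate (population, vehicle-kilometres, fuel sales, employment by sector) is the main source of error, which is why the paper treats the result as an indication that motivates fuller emission studies rather than a final inventory.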

  4. Reliability engineering

    International Nuclear Information System (INIS)

    Lee, Chi Woo; Kim, Sun Jin; Lee, Seung Woo; Jeong, Sang Yeong

    1993-08-01

    This book starts with the question of what reliability is, covering the origin of reliability problems, the definition of reliability, and the uses of reliability. It also deals with probability and the calculation of reliability, reliability functions and failure rates, probability distributions in reliability, estimation of MTBF, processes of probability distributions, down time, maintainability and availability, breakdown maintenance and preventive maintenance, design for reliability, reliability prediction and statistics, reliability testing, reliability data, and the design and management of reliability.

  5. Evaluation of F/E·DOI method as an approximate estimate of skin dose during percutaneous coronary intervention procedure

    International Nuclear Information System (INIS)

    Nakahara, Makoto; Yoshino, Akira; Kitano, K.; Yamaguchi, M.; Morone, Takayuki; Tani, K.

    2005-01-01

    The purpose of this study was to evaluate the efficacy of the fluoroscopy time/exposure times in the direction of interest (F/E·DOI) method as an approximate estimate of skin dose during percutaneous coronary intervention (PCI) procedures. Up to March 10, 2004, fifty-seven patients (male: 46 cases, female: 11 cases, age range 38-85 years; mean age 67±11 years) had undergone PCI, and 157 directions of exposure were measured using X-ray films (KONICA MINOLTA SR-DUP) placed under the back of each patient during the procedure. The fluoroscopy time (minutes), the times of exposure in each direction during the procedure, and the thickness of the chest (cm) were recorded. The relation of the skin dose to fluoroscopy time, exposure times in the direction of interest, and F/E·DOI was assessed. The relationship between fluoroscopy time and skin dose was y=0.02x+0.22 (r=0.54, p<0.0001, m.e=0.00±0.71 Gy, e.a=-2.19 to 1.53 Gy). In addition, the relation of skin dose to exposure times in the direction of interest was y=0.07x+0.27 (r=0.77, p<0.0001, m.e=-0.00±0.53 Gy, e.a=-2.45 to 1.76 Gy). The relationship between skin dose and F/E·DOI was y=0.06x+0.30 (r=0.85, p<0.0001, m.e=-0.00±0.44 Gy, e.a=-1.28 to 1.06 Gy). Moreover, the relationship between skin dose and (F/E·DOI x 0.06+0.30) x coefficient of direction x coefficient of chest thickness was y=0.99x-0.02 (r=0.89, p<0.0001, m.e=0.00±0.38 Gy, e.a=-1.12 to 1.27 Gy). The calculated results corresponded to the skin dose during the procedure. The F/E·DOI method was simple and effective, and enabled us to easily inform the interventionalist of the skin dose during the PCI procedure. (authors)
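
    Reading the index as fluoroscopy time divided by total exposures, scaled by the exposures in the direction of interest (one plausible interpretation of F/E·DOI, which the abstract does not spell out), the reported fit can be wrapped as follows; the direction and chest-thickness coefficients are not given in the abstract, so placeholders of 1.0 are used.

        def estimate_skin_dose(fluoro_min, exposures_total, exposures_doi,
                               coef_direction=1.0, coef_chest=1.0):
            """Skin-dose estimate (Gy) from the reported fit
            dose ~ 0.06 * (F/E*DOI) + 0.30, scaled by direction and
            chest-thickness coefficients (tabulated in the paper, not the
            abstract; the defaults of 1.0 are placeholders)."""
            fe_doi = fluoro_min / exposures_total * exposures_doi
            return (0.06 * fe_doi + 0.30) * coef_direction * coef_chest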

  6. Estimation of dose in skin through the use of radiochromic and radiographic films in patients subjected to interventional procedures

    International Nuclear Information System (INIS)

    Campos Garcia, Juan Pablo

    2014-01-01

    Radiation doses to the skin of patients subjected to interventional procedures are estimated from the analysis of GAFCHROMIC® XR-RV2 radiochromic films and KODAK® X-Omat films with the aid of the ImageJ software. The distribution of the radiation fields in the films is generated to obtain the distribution of dose in skin and to find dose peaks from isodose curves using the ImageJ software. The calibration curves are produced from GAFCHROMIC® XR-RV2 radiochromic films, through the use of a densitometer and two types of scanners (a reflection scanner and a transmission scanner). The reflection scanner digitized color images at 48 bit in TIFF format; the transmission scanner digitized grayscale images at 16 bit in TIFF format. Each method determined the points with maximum skin dose. The images of the regions with maximum doses were obtained from the scanner. The doses quantified in the radiochromic films were compared with the dose range supplied by the manufacturer. The methodologies for the estimation of the doses obtained from the radiochromic films were compared with those obtained with the KODAK® X-Omat films. The procedure for obtaining the doses was validated in patients with KODAK® X-Omat films. The doses obtained covered a range from 0.1 Gy to 9 Gy. Radiographic films allowed an assessment of doses up to 900 cGy due to their saturation; the doses found in that range were consistent with the doses in the radiochromic films.

  7. Estimation of effective dose during hysterosalpingography procedures

    Energy Technology Data Exchange (ETDEWEB)

    Alzimamil, K.; Babikir, E.; Alkhorayef, M. [King Saud University, College of Applied Medical Sciences, Radiological Sciences Department, P. O. Box 10219, Riyadh 11433, (Saudi Arabia); Sulieman, A. [Salman bin Abdulaziz University, College of Applied Medical Sciences, Radiology and Medical Imaging Department, P. O. Box 422, Alkharj (Saudi Arabia); Alsafi, K. [King Abdulaziz University, Faculty of Medicine, Radiology Department, Jeddah 22254 (Saudi Arabia); Omer, H., E-mail: kalzimami@ksu.edu.sa [Dammam University, Faculty of Medicine, Dammam Khobar Coastal Rd, Khobar 31982 (Saudi Arabia)

    2014-08-15

    Hysterosalpingography (HSG) is the most frequently used diagnostic tool to evaluate the endometrial cavity and fallopian tubes by using conventional x-ray or fluoroscopy. Determination of patient radiation dose values from x-ray examinations provides useful guidance on where best to concentrate efforts on patient dose reduction in order to optimize the protection of patients. The aims of this study were to measure the patients' entrance surface air kerma (ESAK) doses and effective doses, and to compare practices between different hospitals in Sudan. ESAK was measured for patients using calibrated thermoluminescent dosimeters (TLDs, Gr-200A). Effective doses were estimated using National Radiological Protection Board (NRPB) software. This study was conducted in five radiological departments: two teaching hospitals (A and D), two private hospitals (B and C) and one university hospital (E). The mean ESD was 20.1 mGy, 28.9 mGy, 13.6 mGy, 58.65 mGy, 35.7 mGy, 22.4 mGy and 19.6 mGy, respectively. The mean effective dose was 2.4 mSv, 3.5 mSv, 1.6 mSv, 7.1 mSv and 4.3 mSv in the same order. The study showed wide variations in the ESDs, with three of the hospitals having values above the internationally reported values. The number of x-ray images, fluoroscopy time, operator skills, x-ray machine type and the clinical complexity of the procedures were shown to be major contributors to the variations reported. Results demonstrated the need for standardization of technique throughout the hospitals. The results also suggest that there is a need to optimize the procedures. Local DRLs were proposed for the entire procedures. (author)

  8. Standard error of measurement of 5 health utility indexes across the range of health for use in estimating reliability and responsiveness.

    Science.gov (United States)

    Palta, Mari; Chen, Han-Yang; Kaplan, Robert M; Feeny, David; Cherepanov, Dasha; Fryback, Dennis G

    2011-01-01

    Standard errors of measurement (SEMs) of health-related quality of life (HRQoL) indexes are not well characterized. SEM is needed to estimate responsiveness statistics, and is a component of reliability. To estimate the SEM of 5 HRQoL indexes. The National Health Measurement Study (NHMS) was a population-based survey. The Clinical Outcomes and Measurement of Health Study (COMHS) provided repeated measures. A total of 3844 randomly selected adults from the noninstitutionalized population aged 35 to 89 y in the contiguous United States and 265 cataract patients. The SF-6D (scored from the SF-36v2™), QWB-SA, EQ-5D, HUI2, and HUI3 were included. An item-response theory approach captured joint variation in indexes into a composite construct of health (theta). The authors estimated 1) the test-retest standard deviation (SEM-TR) from COMHS, 2) the structural standard deviation (SEM-S) around theta from NHMS, and 3) reliability coefficients. SEM-TR was 0.068 (SF-6D), 0.087 (QWB-SA), 0.093 (EQ-5D), 0.100 (HUI2), and 0.134 (HUI3), whereas SEM-S was 0.071, 0.094, 0.084, 0.074, and 0.117, respectively. These yield reliability coefficients 0.66 (COMHS) and 0.71 (NHMS) for SF-6D, 0.59 and 0.64 for QWB-SA, 0.61 and 0.70 for EQ-5D, 0.64 and 0.80 for HUI2, and 0.75 and 0.77 for HUI3, respectively. The SEM varied across levels of health, especially for HUI2, HUI3, and EQ-5D, and was influenced by ceiling effects. Limitations. Repeated measures were 5 mo apart, and estimated theta contained measurement error. The 2 types of SEM are similar and substantial for all the indexes and vary across the range of health.
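
    The SEM figures convert to reliability coefficients through the classical-test-theory identity SEM = SD*sqrt(1 - r); a small sketch, where the total SD is an assumed value chosen only to reproduce the reported SF-6D test-retest coefficient:

        def reliability_from_sem(sem: float, total_sd: float) -> float:
            """Classical-test-theory link: since SEM = SD * sqrt(1 - r),
            the reliability coefficient is r = 1 - (SEM / SD)**2."""
            return 1.0 - (sem / total_sd) ** 2

        # Illustrative check against the SF-6D SEM-TR of 0.068 reported above
        # (an assumed total SD of about 0.117 reproduces r ~ 0.66):
        print(round(reliability_from_sem(0.068, 0.117), 2))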

  9. Reliability of Circumplex Axes

    Directory of Open Access Journals (Sweden)

    Micha Strack

    2013-06-01

    We present a confirmatory factor analysis (CFA) procedure for computing the reliability of circumplex axes. The tau-equivalent CFA variance decomposition model estimates five variance components: general factor, axes, scale-specificity, block-specificity, and item-specificity. Only the axes variance component is used for reliability estimation. We apply the model to six circumplex types and 13 instruments assessing interpersonal and motivational constructs—Interpersonal Adjective List (IAL), Interpersonal Adjective Scales (revised; IAS-R), Inventory of Interpersonal Problems (IIP), Impact Messages Inventory (IMI), Circumplex Scales of Interpersonal Values (CSIV), Support Action Scale Circumplex (SAS-C), Interaction Problems With Animals (IPI-A), Team Role Circle (TRC), Competing Values Leadership Instrument (CV-LI), Love Styles, Organizational Culture Assessment Instrument (OCAI), Customer Orientation Circle (COC), and System for Multi-Level Observation of Groups (behavioral adjectives; SYMLOG)—in 17 German-speaking samples (29 subsamples, grouped by self-report, other-report, and metaperception assessments). The general factor accounted for a proportion ranging from 1% to 48% of the item variance, the axes component for 2% to 30%, and scale-specificity for 1% to 28%, respectively. Reliability estimates varied considerably from .13 to .92. An application of the Nunnally and Bernstein formula proposed by Markey, Markey, and Tinsley overestimated axes reliabilities in cases of large scale-specificities but otherwise works effectively. Contemporary circumplex evaluations such as Tracey's RANDALL are sensitive to the ratio of the axes and scale-specificity components. In contrast, the proposed model isolates both components.

  10. Concurrent validity and reliability of torso-worn inertial measurement unit for jump power and height estimation.

    Science.gov (United States)

    Rantalainen, Timo; Gastin, Paul B; Spangler, Rhys; Wundersitz, Daniel

    2018-09-01

    The purpose of the present study was to evaluate the concurrent validity and test-retest repeatability of torso-worn IMU-derived power and jump height in a counter-movement jump test. Twenty-seven healthy recreationally active males (age, 21.9 [SD 2.0] y; height, 1.76 [0.7] m; mass, 73.7 [10.3] kg) wore an IMU and completed three counter-movement jumps a week apart. A force platform and a 3D motion analysis system were used to concurrently measure the jumps and subsequently derive power and jump height (based on take-off velocity and flight time). The IMU significantly overestimated power (mean difference = 7.3 W/kg; P < .05). Take-off-velocity-derived jump heights exhibited poorer concurrent validity (ICC = 0.72 to 0.78) and repeatability (ICC = 0.68) than flight-time-derived jump heights, which exhibited excellent validity (ICC = 0.93 to 0.96) and reliability (ICC = 0.91). Since jump height and power are closely related, and flight-time-derived jump height exhibits excellent concurrent validity and reliability, flight-time-derived jump height could provide a more desirable measure compared to power when assessing athletic performance in a counter-movement jump with IMUs.

  11. A guide to reliability data collection, validation and storage

    International Nuclear Information System (INIS)

    Stevens, B.

    1986-01-01

    The EuReDatA Working Group produced a basic document that addressed many of the problems associated with the design of a suitable data collection scheme to achieve pre-defined objectives. The book that resulted from this work describes the need for reliability data, data sources and collection procedures, component description and classification, form design, data management, updating and checking procedures, the estimation of failure rates, availability and utilisation factors, and uncertainties in reliability parameters. (DG)

  12. CONTROL PROCEDURES OF VOLUME OF ESTIMATED AND HARVESTED WOOD IN A PLANTATION OF Pinus spp. IN PARANÁ STATE

    Directory of Open Access Journals (Sweden)

    Silvane Vatraz

    2014-06-01

    http://dx.doi.org/10.5902/1980509814585 The objective of this research was to improve the operating procedures for controlling the volume of timber estimated by the forest inventory against the volume effectively harvested, in order to reduce inconsistencies in the forest planning practiced in a plantation of Pinus spp. in Paraná state. Accordingly, we used quality tools, brainstorming and the PDCA cycle, in an exploratory case study. The study showed an initial inconsistency of -24.73% between the volume estimated by the inventory and the wood effectively harvested. This inconsistency was composed of operational failures in the activities of Forest Inventory (+13.84%), Forest Harvesting (+15.62%) and Wood Dispatch (-3.08%). The application of quality tools helped in the identification of the inconsistency, as well as in revealing the operational failures, which suggested routine monitoring and checking of each of the activities involved in operational forestry management.

  13. The influence of different error estimates in the detection of postoperative cognitive dysfunction using reliable change indices with correction for practice effects.

    Science.gov (United States)

    Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A

    2007-02-01

    The reliable change index (RCI) expresses change relative to its associated error, and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that has no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively, using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients. However, when a constant correction, removing the effects of systematic error, is deducted from the numerator of an RCI, the within-subject standard deviation (WSD), which expresses the effects of random error, is the theoretically appropriate error estimate for the denominator.
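
    A sketch of a practice-corrected RCI of the kind compared above; the square-root-of-two factor in the denominator is one common convention for error in two assessments, and all numbers are illustrative assumptions:

        def rci_practice_corrected(pre: float, post: float,
                                   practice: float, wsd: float) -> float:
            """Reliable change index with a constant practice-effect
            correction (systematic error) in the numerator and the
            within-subject standard deviation (random error) in the
            denominator; sqrt(2) reflects error in both assessments."""
            return (post - pre - practice) / (wsd * 2 ** 0.5)

        # Illustrative values: a 4-point decline despite an expected practice
        # gain of 2 points, with WSD = 3 -> RCI ~ -1.41.
        print(round(rci_practice_corrected(50, 46, 2, 3), 2))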

  14. Reliability engineering

    International Nuclear Information System (INIS)

    Nieuwhof, G.W.E.

    1976-01-01

    Failure of systems is undesirable, but also inevitable. The consequences of failure can be reduced if the failure mode can be anticipated and repair procedures planned in advance. The fault tree analysis is one method of identifying the most probable failure modes and determining system failure rates. From these rates, repair frequency can be estimated. (author)

  15. A variation of the housing unit method for estimating the age and gender distribution of small, rural areas: A case study of the local expert procedure

    International Nuclear Information System (INIS)

    Carlson, J.F.; Roe, L.K.; Williams, C.A.; Swanson, D.A.

    1993-01-01

    This paper describes the methodologies used in the development of a demographic data base established in support of the Yucca Mountain Site Characterization Project Radiological Monitoring Plan (RadMP). It also examines the suitability of a survey-based procedure for estimating population in small, rural areas. The procedure is a variation of the Housing Unit Method. It employs local experts enlisted to provide information about the demographic characteristics of households randomly selected from residential-unit sample frames developed from utility records. The procedure is nonintrusive and less costly than traditional survey data collection efforts. Because the procedure is based on random sampling, confidence intervals can be constructed around the population estimated by the technique. The results of a case study are provided in which the total population, and the age and gender of the population, are estimated for three unincorporated communities in rural, southern Nevada.

  16. Assessment of the reliability of human corneal endothelial cell-density estimates using a noncontact specular microscope.

    Science.gov (United States)

    Doughty, M J; Müller, A; Zaman, M L

    2000-03-01

    We sought to determine the variance in endothelial cell density (ECD) estimates for human corneal endothelia. Noncontact specular micrographs were obtained from white subjects without any history of contact lens wear, or major eye disease or surgery; subjects were within four age groups (children, young adults, older adults, senior citizens). The endothelial image was scanned, and the areas of >=75 cells were measured from an overlay by planimetry. The cell-area values were used to calculate the ECD repeatedly, so that the intra- and intersubject variation in an average ECD estimate could be assessed using different numbers of cells (5, 10, 15, etc.). An average ECD of 3,519 cells/mm2 (range, 2,598-5,312 cells/mm2) was obtained from counts of 75 cells per endothelium from individuals aged 6-83 years. Average ECD estimates in each age group were 4,124, 3,457, 3,360, and 3,113 cells/mm2, respectively. Analysis of intersubject variance revealed that ECD estimates would be expected to be no better than +/-10% if only 25 cells were measured per endothelium, but approach +/-2% if 75 cells are measured. In assessing the corneal endothelium by noncontact specular microscopy, the cell count should be given, and this should be >=75 per endothelium for the expected variance to be at a level close to that recommended for monitoring age-, stress-, or surgery-related changes.
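
    The dependence of ECD precision on the number of measured cells can be illustrated by resampling; the simulated cell-area distribution below is an assumption for demonstration, not the study's data:

        import numpy as np

        rng = np.random.default_rng(7)
        # Simulated individual cell areas (um^2) with an assumed ~25% CV;
        # a mean area of 285 um^2 corresponds to an ECD near 3,500 cells/mm2.
        cell_areas = rng.normal(285.0, 70.0, size=1000).clip(100, 600)

        # ECD (cells/mm^2) computed from the mean area of n sampled cells:
        for n in (5, 25, 75):
            ecds = [1e6 / rng.choice(cell_areas, n).mean() for _ in range(2000)]
            spread = 100 * np.std(ecds) / np.mean(ecds)
            print(f"n={n:3d} cells -> ECD estimate varies by ~+/-{spread:.1f}% (1 SD)")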

  17. Proposal of resolution to create an inquiry commission on the reliability of French nuclear power plants in case of earthquakes and on the safety, information and warning procedures in case of incidents

    International Nuclear Information System (INIS)

    2003-01-01

    This short paper presents the reasons for creating a 30-member parliamentary inquiry commission on the reliability of nuclear power plants in France in case of earthquakes and on the safety, information and warning procedures in case of accidents. (A.L.B.)

  18. The reliability of grazing rate estimates from dilution experiments: Have we over-estimated rates of organic carbon consumption by microzooplankton?

    Directory of Open Access Journals (Sweden)

    J. R. Dolan

    2005-01-01

    According to a recent global analysis, microzooplankton grazing is surprisingly invariant, ranging only between 59 and 74% of phytoplankton primary production across systems differing in seasonality, trophic status, latitude, or salinity. Thus an important biological process in the world ocean, the daily consumption of recently fixed carbon, appears nearly constant. We believe this conclusion is an artefact because dilution experiments are (1) prone to providing over-estimates of grazing rates and (2) unlikely to furnish evidence of low grazing rates. In our view the overall average rate of microzooplankton grazing probably does not exceed 50% of primary production and may be even lower in oligotrophic systems.
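
    For context, the standard dilution-method estimate fits the apparent phytoplankton growth rate against the dilution level, k(D) = mu - g*D, and reads the grazing rate g off the (negative) slope; the data points below are invented for illustration:

        import numpy as np

        # Apparent growth rate k (1/day) at dilution levels D (fraction of
        # whole seawater); illustrative numbers only.
        D = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
        k = np.array([0.62, 0.55, 0.44, 0.38, 0.30])

        slope, mu = np.polyfit(D, k, 1)   # k = mu - g*D
        g = -slope
        print(f"growth mu = {mu:.2f}/d, grazing g = {g:.2f}/d, "
              f"g:mu ratio ~ {100 * g / mu:.0f}% of production")

    The authors' point is that this regression tends to be biased upward and rarely yields defensibly low g, which would inflate the apparently invariant grazing fraction.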

  19. Digital photography provides a fast, reliable, and noninvasive method to estimate anthocyanin pigment concentration in reproductive and vegetative plant tissues.

    Science.gov (United States)

    Del Valle, José C; Gallardo-López, Antonio; Buide, Mª Luisa; Whittall, Justen B; Narbona, Eduardo

    2018-03-01

    Anthocyanin pigments have become a model trait for evolutionary ecology as they often provide adaptive benefits for plants. Anthocyanins have been traditionally quantified biochemically or more recently using spectral reflectance. However, both methods require destructive sampling and can be labor intensive and challenging with small samples. Recent advances in digital photography and image processing make it the method of choice for measuring color in the wild. Here, we use digital images as a quick, noninvasive method to estimate relative anthocyanin concentrations in species exhibiting color variation. Using a consumer-level digital camera and a free image processing toolbox, we extracted RGB values from digital images to generate color indices. We tested petals, stems, pedicels, and calyces of six species, which contain different types of anthocyanin pigments and exhibit different pigmentation patterns. Color indices were assessed by their correlation to biochemically determined anthocyanin concentrations. For comparison, we also calculated color indices from spectral reflectance and tested the correlation with anthocyanin concentration. Indices perform differently depending on the nature of the color variation. For both digital images and spectral reflectance, the most accurate estimates of anthocyanin concentration emerge from the anthocyanin content-chroma ratio, anthocyanin content-chroma basic, and strength-of-green indices. Color indices derived from both digital images and spectral reflectance strongly correlate with biochemically determined anthocyanin concentration; however, the estimates from digital images performed better than spectral reflectance in terms of r^2 and normalized root-mean-square error. This was particularly noticeable in a species with striped petals, but in the case of striped calyces, both methods showed a comparable relationship with anthocyanin concentration. Using digital images brings new opportunities to accurately quantify anthocyanin pigments in reproductive and vegetative plant tissues.

  20. A novel multi-wavelength procedure for blood pressure estimation using opto-physiological sensor at peripheral arteries and capillaries

    Science.gov (United States)

    Scardulla, Francesco; Hu, Sijung; D'Acquisto, Leonardo; Pasta, Salvatore; Barrett, Laura; Blanos, Panagiotis; Yan, Liangwen

    2018-02-01

    In this study, the Carelight multi-wavelength opto-electronic patch sensor (OEPS) was adopted to assess the effectiveness of a new approach for estimating systolic blood pressure (SBP) through changes in the morphology of the OEPS signal. Specifically, the SBP was estimated by changing the pressure exerted by an inflatable cuff placed around the left upper arm. Pressure acquisitions were performed with both the gold standard (an electronic sphygmomanometer) and the Carelight sensor (the experimental procedure) on subjects from a multiethnic cohort (aged 28 +/- 7 years). The OEPS sensor was applied together with a manual inflatable cuff, inflated in increments of +10 mmHg to slightly above the level of the SBP and subsequently deflated in 10 mmHg steps until reaching full deflation. The OEPS signals were captured using four wavelength illumination sources (green 525 nm, orange 595 nm, red 650 nm and IR 870 nm) at three different measuring sites, namely the forefinger, the radial artery and the wrist. The implemented algorithm identifies the instant when the SBP is reached and the signal is lost, since the vessel is completely occluded. Similarly, it detects the signal resumption when the external pressure drops below the SBP. The findings demonstrated a good correlation between the variation of the pressure and the corresponding OEPS signal, with the most accurate result achieved at the fingertip among all wavelengths, with a temporal identification error of 8.07%. Further studies will extend the clinical relevance to a cohort of patients diagnosed with hyper- or hypotension, in order to develop a wearable blood-pressure device.

  1. Teaching Confirmatory Factor Analysis to Non-Statisticians: A Case Study for Estimating Composite Reliability of Psychometric Instruments

    Science.gov (United States)

    Gajewski, Byron J.; Jiang, Yu; Yeh, Hung-Wen; Engelman, Kimberly; Teel, Cynthia; Choi, Won S.; Greiner, K. Allen; Daley, Christine Makosky

    2013-01-01

    Texts and software that we are currently using for teaching multivariate analysis to non-statisticians lack coverage of confirmatory factor analysis (CFA). The purpose of this paper is to provide educators with a complement to these resources that includes CFA and its computation. We focus on how to use CFA to estimate a “composite reliability” of a psychometric instrument. This paper provides guidance for introducing, via a case study, the non-statistician to CFA. As a complement to our instruction about the more traditional SPSS, we successfully piloted the R software for estimating CFA with nine non-statisticians. This approach can be used with healthcare graduate students taking a multivariate course, as well as modified for community stakeholders of our Center for American Indian Community Health (e.g., community advisory boards, summer interns, and research team members). The placement of CFA at the end of the class is strategic and gives us an opportunity to do some innovative teaching: (1) build ideas for understanding the case study using previous course work (such as ANOVA); (2) incorporate multidimensional scaling (which students have already learned) into the selection of a factor structure (a new concept); (3) use interactive data from the students (active learning); (4) review matrix algebra and its importance to psychometric evaluation; (5) show students how to do the calculation on their own; and (6) give students access to an actual recent research project. PMID:24772373
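
    The composite reliability computation the case study builds toward can be written compactly; the formula below is the usual one-factor CFA expression, and the loadings are illustrative values, not those of the paper's instrument:

        import numpy as np

        def composite_reliability(loadings, error_variances):
            """Composite reliability from a one-factor CFA: the squared sum
            of standardized loadings over that quantity plus the summed
            error variances."""
            lam = np.asarray(loadings)
            theta = np.asarray(error_variances)
            return lam.sum() ** 2 / (lam.sum() ** 2 + theta.sum())

        # Illustrative standardized loadings for a 4-item instrument:
        lam = [0.7, 0.8, 0.6, 0.75]
        theta = [1 - l ** 2 for l in lam]   # error variance of each item
        print(round(composite_reliability(lam, theta), 2))  # -> 0.81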

  2. The transverse diameter of the chest on routine radiographs reliably estimates gestational age and weight in premature infants.

    Science.gov (United States)

    Dietz, Kelly R; Zhang, Lei; Seidel, Frank G

    2015-08-01

    Prior to digital radiography it was possible for a radiologist to easily estimate the size of a patient on an analog film. Because variable magnification may be applied at the time of processing an image, it is now more difficult to visually estimate an infant's size on the monitor. Since gestational age and weight significantly impact the differential diagnosis of neonatal diseases and determine the expected size of kidneys or appearance of the brain by MRI or US, this information is useful to a pediatric radiologist. Although this information may be present in the electronic medical record, it is frequently not readily available to the pediatric radiologist at the time of image interpretation. Our objective was to determine if there was a correlation between the gestational age and weight of a premature infant and the transverse chest diameter (rib to rib) on admission chest radiographs. This retrospective study was approved by the institutional review board, which waived informed consent. The maximum transverse chest diameter, outer rib to outer rib, was measured on admission portable chest radiographs of 464 patients admitted to the neonatal intensive care unit (NICU) during the 2010 calendar year. Regression analysis was used to investigate the association between chest diameter and gestational age/birth weight. A quadratic term of chest diameter was used in the regression model. Chest diameter was statistically significantly associated with both gestational age and birth weight, so that both can be estimated by comparing the transverse chest diameter on a digital chest radiograph with the tables and graphs in our study.

  3. Reliable and Damage-Free Estimation of Resistivity of ZnO Thin Films for Photovoltaic Applications Using Photoluminescence Technique

    Directory of Open Access Journals (Sweden)

    N. Poornima

    2013-01-01

    This work projects photoluminescence (PL) as an alternative technique to estimate the order of resistivity of zinc oxide (ZnO) thin films. ZnO thin films, deposited using chemical spray pyrolysis (CSP) by varying deposition parameters such as solvent, spray rate and pH of the precursor, have been used for this study. Variation in the deposition conditions has a tremendous impact on the luminescence properties as well as the resistivity. Two emissions could be recorded for all samples—the near band edge emission (NBE) at 380 nm and the deep level emission (DLE) at ~500 nm—which are competing in nature. It is observed that the ratio of the intensities of DLE to NBE (I(DLE)/I(NBE)) can be reduced by controlling oxygen incorporation in the sample. Measurements indicate that restricting oxygen incorporation reduces resistivity considerably. The variation of I(DLE)/I(NBE) and of resistivity for samples prepared under different deposition conditions is similar in nature; I(DLE)/I(NBE) was always lower than the resistivity by an order of magnitude for all samples. Thus, from PL measurements alone, the order of resistivity of the samples can be estimated.

  4. Ultrasound estimates of muscle quality in older adults: reliability and comparison of Photoshop and ImageJ for the grayscale analysis of muscle echogenicity

    Directory of Open Access Journals (Sweden)

    Michael O. Harris-Love

    2016-02-01

    Background. Quantitative diagnostic ultrasound imaging has been proposed as a method of estimating muscle quality using measures of echogenicity. The Rectangular Marquee Tool (RMT) and the Free Hand Tool (FHT) are two types of editing features used in Photoshop and ImageJ for determining a region of interest (ROI) within an ultrasound image. The primary objective of this study is to determine the intrarater and interrater reliability of Photoshop and ImageJ for the estimation of muscle tissue echogenicity in older adults via grayscale histogram analysis. The secondary objective is to compare the mean grayscale values obtained using both the RMT and FHT methods across both image analysis platforms. Methods. This cross-sectional observational study features 18 community-dwelling men (age = 61.5 ± 2.32 years). Longitudinal views of the rectus femoris were captured using B-mode ultrasound. The ROI for each scan was selected by 2 examiners using the RMT and FHT methods from each software program. Their reliability is assessed using intraclass correlation coefficients (ICCs) and the standard error of the measurement (SEM). Measurement agreement for these values is depicted using Bland-Altman plots. A paired t-test is used to determine mean differences in echogenicity expressed as grayscale values using the RMT and FHT methods to select the post-image-acquisition ROI. The degree of association among ROI selection methods and image analysis platforms is analyzed using the coefficient of determination (R2). Results. The raters demonstrated excellent intrarater and interrater reliability using the RMT and FHT methods across both platforms (lower bound 95% CI ICC = .97-.99, p < .001). The mean difference between the echogenicity estimates obtained with the RMT and FHT methods was .87 grayscale levels (95% CI [.54-1.21], p < .0001) using data obtained with both programs. The SEM for Photoshop was .97 and 1.05 grayscale levels when using the RMT and FHT ROI selection methods, respectively.

  5. Estimation technique of corrective effects for forecasting of reliability of the designed and operated objects of the generating systems

    Science.gov (United States)

    Truhanov, V. N.; Sultanov, M. M.

    2017-11-01

    In the present article, statistical data on the failures and malfunctions affecting the operability of heat-power installations have been analyzed. A mathematical model of the change in the output characteristics of the turbine, as a function of the number of failures revealed in service, has been presented. The mathematical model is based on methods of mathematical statistics, probability theory and matrix calculation. The novelty of this model is that it allows the change of the output characteristic over time to be predicted, with the controlling influences represented in explicit form. As the desired dynamics of change of the output characteristic (the reliability function), the Weibull distribution is adopted, since it is universal: at various parameter values it turns into other types of distributions (for example, exponential or normal). It should be noted that the choice of the desired control law allows the necessary control parameters to be determined using the accumulated change of the output characteristic in general. The output characteristic can be changed both through the rate of change of the control parameters and through the acceleration of their change. In this article, a technique for evaluating the pseudo-inverse matrix by the least-squares method and standard Microsoft Excel functions has been stated in detail. A technique for finding the controlling influences under constraints on both the output characteristic and the control parameters has also been considered. The article states the order and sequence of finding the control parameters, and shows a concrete example of finding the controlling influences in the course of long-term operation of turbines.

  6. Reliability of a self-report Italian version of the AUDIT-C questionnaire, used to estimate alcohol consumption by pregnant women in an obstetric setting.

    Science.gov (United States)

    Bazzo, Stefania; Battistella, Giuseppe; Riscica, Patrizia; Moino, Giuliana; Dal Pozzo, Giuseppe; Bottarel, Mery; Geromel, Mariasole; Czerwinsky, Loredana

    2015-01-01

    Alcohol consumption during pregnancy can result in a range of harmful effects on the developing foetus and newborn, called Fetal Alcohol Spectrum Disorders (FASD). The identification of pregnant women who use alcohol makes it possible to provide information, support and treatment for women and surveillance of their children. The AUDIT-C (the shortened consumption version of the Alcohol Use Disorders Identification Test) is used for investigating risky drinking in different populations, and has also been applied to estimate alcohol use and risky drinking in antenatal clinics. The aim of the study was to investigate the reliability of a self-report Italian version of the AUDIT-C questionnaire to detect alcohol consumption during pregnancy, regardless of its use as a screening tool. The questionnaire was filled in by two independent consecutive series of pregnant women at the 38th-gestation-week visit in the two birth locations of the Local Health Authority of Treviso (Italy), during the years 2010 and 2011 (n=220 and n=239). Reliability analysis was performed using internal consistency, item-total score correlations, and inter-item correlations. The "discriminatory power" of the test was also evaluated. Overall, about one third of women recalled alcohol consumption at least once during the current pregnancy. The questionnaire had an internal consistency of 0.565 for the 2010 group, 0.516 for the 2011 group, and 0.542 for the overall group. The highest item-total correlation coefficient was 0.687 and the highest inter-item correlation coefficient was 0.675. As for the discriminatory power of the questionnaire, the highest Ferguson's delta coefficient was 0.623. These findings suggest that the Italian self-report version of the AUDIT-C possesses unsatisfactory reliability for estimating alcohol consumption during pregnancy when used as a self-report questionnaire in an obstetric setting.
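
    Internal consistency here is Cronbach's alpha; a self-contained sketch with invented item scores (these numbers do not reproduce the study's 0.565):

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """Internal consistency (Cronbach's alpha) for an item-score
            matrix with shape (respondents, items)."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        # Illustrative 3-item, AUDIT-C-like scores for 6 respondents:
        scores = np.array([[0, 1, 0], [1, 1, 0], [2, 2, 1],
                           [0, 0, 0], [3, 2, 2], [1, 0, 0]])
        print(round(cronbach_alpha(scores), 2))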

  7. The reliability and accuracy of estimating heart-rates from RGB video recorded on a consumer grade camera

    Science.gov (United States)

    Eaton, Adam; Vincely, Vinoin; Lloyd, Paige; Hugenberg, Kurt; Vishwanath, Karthik

    2017-03-01

    Video photoplethysmography (VPPG) is a numerical technique to process standard RGB video data of exposed human skin and extract the heart rate (HR) from the skin areas. Being a non-contact technique, VPPG has the potential to provide estimates of a subject's heart rate, respiratory rate, and even heart rate variability, with potential applications ranging from infant monitors to remote healthcare and psychological experiments, particularly given the non-contact and sensor-free nature of the technique. Though several previous studies have reported successful correlations between HR obtained using VPPG algorithms and HR measured using the gold-standard electrocardiograph, others have reported that these correlations are dependent on controlling for the duration of the video data analyzed, subject motion, and ambient lighting. Here, we investigate the ability of two commonly used VPPG algorithms to extract human heart rates under three different laboratory conditions. We compare the VPPG HR values extracted across these three sets of experiments to the gold-standard values acquired by using an electrocardiogram or a commercially available pulse oximeter. The two VPPG algorithms were applied with and without KLT facial-feature tracking and detection algorithms from the Computer Vision MATLAB® toolbox. Results indicate that VPPG-based numerical approaches have the ability to provide robust estimates of subject HR values and are relatively insensitive to the devices used to record the video data. However, they are highly sensitive to the conditions of video acquisition, including subject motion, the location, size and averaging techniques applied to regions of interest, as well as the number of video frames used for data processing.
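
    A minimal sketch of the generic VPPG pipeline step (mean green-channel trace to dominant spectral peak); this is not either of the two algorithms evaluated in the study, and the physiologic band limits are common assumptions:

        import numpy as np

        def vppg_heart_rate(green_means: np.ndarray, fps: float) -> float:
            """Estimate heart rate from the frame-by-frame mean green-channel
            intensity of a skin region of interest: remove the mean, then take
            the dominant FFT peak inside an assumed physiologic band."""
            signal = green_means - green_means.mean()
            freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
            power = np.abs(np.fft.rfft(signal)) ** 2
            band = (freqs >= 0.7) & (freqs <= 4.0)    # 42-240 beats/min
            return 60.0 * freqs[band][np.argmax(power[band])]

        # Synthetic check: a 72 beats/min (1.2 Hz) pulse at 30 fps for 10 s.
        t = np.arange(0, 10, 1 / 30)
        fake = (0.5 * np.sin(2 * np.pi * 1.2 * t)
                + np.random.default_rng(0).normal(0, 0.1, t.size))
        print(vppg_heart_rate(fake, fps=30))  # ~72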

  8. Design and validation of new genotypic tools for easy and reliable estimation of HIV tropism before using CCR5 antagonists.

    Science.gov (United States)

    Poveda, Eva; Seclén, Eduardo; González, María del Mar; García, Federico; Chueca, Natalia; Aguilera, Antonio; Rodríguez, Jose Javier; González-Lahoz, Juan; Soriano, Vincent

    2009-05-01

    Genotypic tools may allow easier and less expensive estimation of HIV tropism before prescription of CCR5 antagonists compared with the Trofile assay (Monogram Biosciences, South San Francisco, CA, USA). Paired genotypic and Trofile results were compared in plasma samples derived from the maraviroc expanded access programme (EAP) in Europe. A new genotypic approach was built to improve the sensitivity to detect X4 variants, based on an optimization of the webPSSM algorithm. The new tool was then validated in specimens from patients included in the ALLEGRO trial, a multicentre study conducted in Spain to assess the prevalence of R5 variants in treatment-experienced HIV patients. A total of 266 specimens from the maraviroc EAP were tested. Overall geno/pheno concordance was above 72%. A high specificity was generally seen for the detection of X4 variants using genotypic tools (ranging from 58% to 95%), while sensitivity was low (ranging from 31% to 76%). The PSSM score was then optimized to enhance the sensitivity to detect X4 variants by changing the original threshold for R5 categorization. The new PSSM algorithms, PSSM(X4R5-8) and PSSM(SINSI-6.4), considered as X4 all V3 scoring values above -8 or -6.4, respectively, increasing the sensitivity to detect X4 variants up to 80%. The new algorithms were then validated in 148 specimens derived from patients included in the ALLEGRO trial. The sensitivity/specificity to detect X4 variants was 93%/69% for PSSM(X4R5-8) and 93%/70% for PSSM(SINSI-6.4). PSSM(X4R5-8) and PSSM(SINSI-6.4) may confidently assist therapeutic decisions for using CCR5 antagonists in HIV patients, providing an easier and more rapid estimation of tropism in clinical samples.
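
The re-calibration described here amounts to moving a scalar decision threshold on the V3 PSSM score and re-measuring sensitivity and specificity against the phenotypic reference. A schematic illustration of that logic (the scores and labels below are invented; the actual webPSSM scoring matrices are not reproduced):

```python
import numpy as np

def classify_tropism(pssm_scores, threshold=-8.0):
    """Label a V3 sequence X4 when its PSSM score exceeds the threshold,
    mirroring the PSSM(X4R5-8) rule described above; R5 otherwise."""
    return np.where(np.asarray(pssm_scores) > threshold, "X4", "R5")

def sensitivity_specificity(predicted, reference, positive="X4"):
    predicted, reference = np.asarray(predicted), np.asarray(reference)
    tp = np.sum((predicted == positive) & (reference == positive))
    fn = np.sum((predicted != positive) & (reference == positive))
    tn = np.sum((predicted != positive) & (reference != positive))
    fp = np.sum((predicted == positive) & (reference != positive))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical PSSM scores scored against (invented) Trofile labels:
scores = [-12.3, -6.1, -9.8, -2.4]
labels = ["R5", "X4", "R5", "X4"]
print(sensitivity_specificity(classify_tropism(scores), labels))
```

Lowering the threshold trades specificity for sensitivity, which is exactly the trade-off the optimized PSSM(X4R5-8) and PSSM(SINSI-6.4) cut-offs exploit.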

  9. Standard error of measurement of five health utility indexes across the range of health for use in estimating reliability and responsiveness

    Science.gov (United States)

    Palta, Mari; Chen, Han-Yang; Kaplan, Robert M.; Feeny, David; Cherepanov, Dasha; Fryback, Dennis

    2011-01-01

    Background: Standard errors of measurement (SEMs) of health-related quality of life (HRQoL) indexes are not well characterized. The SEM is needed to estimate responsiveness statistics, provides guidance on using indexes at the individual and group level, and is a component of reliability. Purpose: To estimate the SEM of five HRQoL indexes. Design: The National Health Measurement Study (NHMS) was a population-based telephone survey. The Clinical Outcomes and Measurement of Health Study (COMHS) provided repeated measures 1 and 6 months post cataract surgery. Subjects: 3844 randomly selected adults from the non-institutionalized population 35 to 89 years old in the contiguous United States, and 265 cataract patients. Measurements: The SF-36v2™, QWB-SA, EQ-5D, HUI2 and HUI3 were included. An item-response theory (IRT) approach captured joint variation in indexes into a composite construct of health (theta). We estimated: (1) the test-retest SEM (SEM-TR) from COMHS, (2) the structural SEM (SEM-S) around the composite construct from NHMS, and (3) the corresponding reliability coefficients. Results: SEM-TR was 0.068 (SF-6D), 0.087 (QWB-SA), 0.093 (EQ-5D), 0.100 (HUI2) and 0.134 (HUI3), while SEM-S was 0.071, 0.094, 0.084, 0.074 and 0.117, respectively. These translate into reliability coefficients for SF-6D: 0.66 (COMHS) and 0.71 (NHMS); for QWB-SA: 0.59 and 0.64; for EQ-5D: 0.61 and 0.70; for HUI2: 0.64 and 0.80; and for HUI3: 0.75 and 0.77, respectively. The SEM varied considerably across levels of health, especially for HUI2, HUI3 and EQ-5D, and was strongly influenced by ceiling effects. Limitations: Repeated measures were five months apart, and the estimated theta values contain measurement error. Conclusions: The two types of SEM are similar and substantial for all the indexes, and vary across the range of health. PMID:20935280
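
The link between the two quantities is the classical test theory relation SEM = σ√(1 − r), so a reliability coefficient can be recovered from an SEM and the score standard deviation. A one-line check against the kind of values reported above (the SD below is illustrative, not taken from the study):

```python
def reliability_from_sem(sem: float, sd: float) -> float:
    """Classical test theory: SEM = sd * sqrt(1 - r)  =>  r = 1 - (SEM/sd)^2."""
    return 1.0 - (sem / sd) ** 2

# E.g. an index with score SD 0.13 and SEM 0.071 (illustrative numbers):
print(round(reliability_from_sem(0.071, 0.13), 2))  # ~0.70
```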

  10. Evaluation of structural reliability using simulation methods

    Directory of Open Access Journals (Sweden)

    Baballëku Markel

    2015-01-01

    Eurocode describes the 'index of reliability' as a measure of structural reliability, related to the 'probability of failure'. This paper focuses on the assessment of this index for a reinforced concrete bridge pier. Reliability concepts are rarely used explicitly in structural design, but they give a better understanding of structural engineering problems. The main methods for estimating the probability of failure are exact analytical integration, numerical integration, approximate analytical methods and simulation methods. Monte Carlo simulation is used in this paper because it is a powerful tool for estimating probabilities in multivariate problems: complicated probability and statistics problems are solved through computer-aided simulation of a large number of tests. The procedures of structural reliability assessment for the bridge pier, and a comparison with the partial factor method of the Eurocodes, are demonstrated in this paper.
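
As a concrete illustration of the simulation approach, a crude Monte Carlo estimate draws random realizations of the basic variables, counts limit-state violations, and converts the failure probability to the reliability index β = −Φ⁻¹(Pf). A generic sketch (the limit state and distributions below are invented for illustration, not taken from the paper's pier model):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 1_000_000

# Invented limit state g = R - E: resistance minus load effect.
R = rng.lognormal(mean=np.log(500.0), sigma=0.10, size=n)  # resistance [kNm]
E = rng.normal(loc=300.0, scale=45.0, size=n)              # load effect [kNm]

pf = np.mean(R - E < 0.0)           # probability of failure
beta = -norm.ppf(pf)                # reliability index
cov = np.sqrt((1 - pf) / (pf * n))  # coefficient of variation of the Pf estimate
print(f"Pf = {pf:.2e}, beta = {beta:.2f}, CoV = {cov:.2%}")
```

The CoV line shows the main practical limitation: for small Pf, the sample size must grow roughly as 1/Pf to keep the estimate stable.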

  11. A fast-reliable methodology to estimate the concentration of rutile or anatase phases of TiO2

    Directory of Open Access Journals (Sweden)

    A. R. Zanatta

    2017-07-01

    Titanium dioxide (TiO2) is a low-cost, chemically inert material that has become the basis of many modern applications, ranging from cosmetics to photovoltaics. TiO2 exists in three different crystal phases − Rutile, Anatase and, less commonly, Brookite − and, in most cases, the presence or relative amount of these phases is essential in deciding the final application of the TiO2 and its related efficiency. Traditionally, X-ray diffraction has been the method of choice to study TiO2, providing both phase identification and the Rutile-to-Anatase ratio. Similar information can be obtained from Raman scattering spectroscopy, which, additionally, is versatile and involves rather simple instrumentation. Motivated by these aspects, this work considered various TiO2 Rutile+Anatase powder mixtures and their corresponding Raman spectra. Essentially, the method described here is based on the fact that the Rutile and Anatase crystal phases have distinctive phonon features, and therefore the composition of the TiO2 mixtures can be readily assessed from their Raman spectra. The experimental results clearly demonstrate the suitability of Raman spectroscopy for estimating the concentration of Rutile or Anatase in TiO2 and are expected to influence the study of TiO2-related thin films, interfaces, systems with reduced dimensions, and devices like photocatalytic and solar cells.
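
In practice, such a method reduces to comparing the intensities of phase-specific phonon bands (for example the strong anatase Eg band near 144 cm⁻¹ and the rutile Eg band near 447 cm⁻¹) against a calibration built from known mixtures. A schematic sketch, with an invented calibration constant rather than the paper's actual calibration:

```python
import numpy as np

def anatase_fraction(shifts_cm1, intensities, k=1.0):
    """Estimate the anatase fraction of a Rutile+Anatase mixture from the
    ratio of characteristic Raman peak intensities. The calibration
    constant k must come from spectra of known mixtures; k=1 is a
    placeholder, not a measured value."""
    shifts = np.asarray(shifts_cm1)
    intensities = np.asarray(intensities)

    def peak(center, half_width=10.0):
        # Peak height within a small window around the expected band.
        return intensities[np.abs(shifts - center) < half_width].max()

    ratio = peak(144.0) / peak(447.0)  # anatase Eg band over rutile Eg band
    return ratio / (ratio + k)         # simple two-phase mixing rule
```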

  12. A reliable bioassay procedure to evaluate per os toxicity of Bacillus thuringiensis strains against the rice delphacid, Tagosodes orizicolus (Homoptera: Delphacidae)

    Directory of Open Access Journals (Sweden)

    Rebeca Mora

    2007-06-01

    A reliable bioassay procedure was developed to test ingested Bacillus thuringiensis (Bt) toxins on the rice delphacid Tagosodes orizicolus. Initially, several colonies were established under greenhouse conditions, using rice plants to nurture the insect. For the bioassay, an in vitro feeding system was developed for third- to fourth-instar nymphs. Insects were fed through Parafilm membranes on sugar (10% sucrose) and honey bee (1:48 vol/vol) solutions, with an observed natural mortality of 10-15% and 0-5%, respectively. Results were reproducible under controlled conditions during the assay (18±0.1 °C at night and 28±0.1 °C during the day, 80% RH and a 12:12 day:light photoperiod). In addition, natural mortality was quantified in insect colonies collected from three different geographic areas of Costa Rica, with no significant differences between colonies under controlled conditions. Finally, bioassays were performed to evaluate the toxicity of a Bt collection on T. orizicolus. A preliminary sample of twenty-seven Bt strains was evaluated in coarse bioassays using three loops of sporulated colonies in 9 ml of liquid diet; the strains that exhibited higher percentages of T. orizicolus mortality were further analyzed in bioassays using lyophilized spores and crystals (1 mg/ml). As a result, strains 26-O-to, 40-X-m, 43S-d and 23-O-to, isolated from homopteran insects, showed mortalities of 74, 96, 44 and 82% respectively, while HD-137, HD-1 and Bti showed 19, 83 and 95% mortalities. Controls showed mortalities between 0 and 10% in all bioassays. This is the first report of a reliable bioassay procedure to evaluate per os toxicity for a homopteran species using Bacillus thuringiensis strains. Rev. Biol. Trop. 55 (2): 373-383. Epub 2007 June, 29.

  13. Using Length of Stay to Control for Unobserved Heterogeneity When Estimating Treatment Effect on Hospital Costs with Observational Data: Issues of Reliability, Robustness, and Usefulness.

    Science.gov (United States)

    May, Peter; Garrido, Melissa M; Cassel, J Brian; Morrison, R Sean; Normand, Charles

    2016-10-01

    To evaluate the sensitivity of treatment effect estimates when length of stay (LOS) is used to control for unobserved heterogeneity in estimating the effect of treatment on the cost of hospital admission with observational data. We used data from a prospective cohort study on the impact of palliative care consultation teams (PCCTs) on the direct cost of hospital care. Adult patients with an advanced cancer diagnosis admitted to five large medical and cancer centers in the United States between 2007 and 2011 were eligible for this study. Costs were modeled using generalized linear models with a gamma distribution and a log link. We compared variability in estimates of PCCT impact on hospitalization costs when LOS was used as a covariate, as a sample parameter, and as an outcome denominator. We used propensity scores to account for patient characteristics associated with both PCCT use and total direct hospitalization costs. We analyzed data from hospital cost databases, medical records, and questionnaires. Our propensity-score-weighted sample included 969 patients who were discharged alive. In analyses of hospitalization costs, treatment effect estimates are highly sensitive to the methods used to control for LOS, complicating interpretation. Both the magnitude and the significance of results varied widely with the method of controlling for LOS. When we incorporated intervention timing into our analyses, results were robust to LOS controls. Treatment effect estimates using LOS controls are suboptimal not only in terms of reliability (given concerns over endogeneity and bias) and usefulness (given the need to validate the cost-effectiveness of an intervention using overall resource use for a sample defined at baseline), but also in terms of robustness (results depend on the approach taken, and there is little evidence to guide this choice). To derive results that minimize endogeneity concerns and maximize external validity, investigators should match and analyze treatment and comparison arms
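
The cost model described, a gamma GLM with a log link combined with propensity-score weights, can be set up in a few lines with statsmodels. A sketch under stated assumptions: all column names and the synthetic data below are invented for illustration, and the study's actual weighting scheme is more involved:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in data; the real analysis used admission-level cost records.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "pcct": rng.integers(0, 2, n),          # palliative consult indicator
    "los": rng.poisson(8, n) + 1,           # length of stay in days
    "ps_weight": rng.uniform(0.5, 2.0, n),  # propensity-score weight
})
mu = np.exp(8.0 + 0.04 * df["los"] - 0.2 * df["pcct"])
df["cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)  # gamma-distributed costs

# Gamma GLM with log link, weighted by the propensity score;
# here LOS enters as a covariate (one of the three strategies compared above).
model = smf.glm(
    "cost ~ pcct + los", data=df,
    family=sm.families.Gamma(link=sm.families.links.Log()),
    var_weights=df["ps_weight"],
)
print(model.fit().summary())
```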

  14. Reliability Engineering

    International Nuclear Information System (INIS)

    Lee, Sang Yong

    1992-07-01

    This book covers reliability engineering: the definition and importance of reliability; the development of reliability engineering; the failure rate and failure probability density function and their types; the constant failure rate (CFR) and the exponential distribution; the increasing failure rate (IFR) and the normal and Weibull distributions; maintainability and availability; reliability testing and reliability estimation under the exponential, normal and Weibull models; reliability sampling tests; system reliability; design for reliability; and functional failure analysis by fault tree analysis (FTA).
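
As an illustration of two of the models listed, the CFR (exponential) and Weibull cases reduce to short closed-form reliability functions; a minimal sketch:

```python
import numpy as np

def reliability_exponential(t, lam):
    """CFR model: R(t) = exp(-lambda * t); the hazard is constant at lambda."""
    return np.exp(-lam * t)

def reliability_weibull(t, beta, eta):
    """Weibull model: R(t) = exp(-(t/eta)**beta).
    beta < 1: infant mortality; beta = 1: CFR; beta > 1: wear-out (IFR)."""
    return np.exp(-((t / eta) ** beta))

# Example: probability a unit with MTBF 1000 h survives a 100 h mission.
print(reliability_exponential(100.0, 1 / 1000.0))         # ~0.905
print(reliability_weibull(100.0, beta=2.0, eta=1000.0))   # ~0.990
```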

  15. 40 CFR Appendix B to Part 76 - Procedures and Methods for Estimating Costs of Nitrogen Oxides Controls Applied to Group 1, Boilers

    Science.gov (United States)

    2010-07-01

    Title 40 (Protection of Environment), Appendix B to Part 76: Procedures and Methods for Estimating Costs of Nitrogen Oxides Controls Applied to Group 1 Boilers. Environmental Protection Agency (continued), Air Programs (continued), Acid Rain, Nitrogen Oxides...

  16. Ultrasound estimates of muscle quality in older adults: reliability and comparison of Photoshop and ImageJ for the grayscale analysis of muscle echogenicity.

    Science.gov (United States)

    Harris-Love, Michael O; Seamon, Bryant A; Teixeira, Carla; Ismail, Catheeja

    2016-01-01

    Background. Quantitative diagnostic ultrasound imaging has been proposed as a method of estimating muscle quality using measures of echogenicity. The Rectangular Marquee Tool (RMT) and the Free Hand Tool (FHT) are two editing features used in Photoshop and ImageJ for determining a region of interest (ROI) within an ultrasound image. The primary objective of this study is to determine the intrarater and interrater reliability of Photoshop and ImageJ for estimating muscle tissue echogenicity in older adults via grayscale histogram analysis. The secondary objective is to compare the mean grayscale values obtained using both the RMT and FHT methods across both image analysis platforms. Methods. This cross-sectional observational study features 18 community-dwelling men (age = 61.5 ± 2.32 years). Longitudinal views of the rectus femoris were captured using B-mode ultrasound. The ROI for each scan was selected by 2 examiners using the RMT and FHT methods from each software program. Their reliability is assessed using intraclass correlation coefficients (ICCs) and the standard error of the measurement (SEM). Measurement agreement for these values is depicted using Bland-Altman plots. A paired t-test is used to determine mean differences in echogenicity expressed as grayscale values using the RMT and FHT methods to select the post-image acquisition ROI. The degree of association among ROI selection methods and image analysis platforms is analyzed using the coefficient of determination (R²). Results. The raters demonstrated excellent intrarater and interrater reliability using the RMT and FHT methods across both platforms (lower bound 95% CI ICC = .97-.99, p …). The SEM in Photoshop was .97 and 1.05 grayscale levels when using the RMT and FHT ROI selection methods, respectively. Comparatively, the SEM values were .72 and .81 grayscale levels, respectively, when using the RMT and FHT ROI selection methods in ImageJ. Uniform coefficients of determination (R² = .96 …

  17. Reliability of hospital cost profiles in inpatient surgery.

    Science.gov (United States)

    Grenda, Tyler R; Krell, Robert W; Dimick, Justin B

    2016-02-01

    With increased policy emphasis on shifting risk from payers to providers through mechanisms such as bundled payments and accountable care organizations, hospitals are increasingly in need of metrics to understand their costs relative to peers. However, it is unclear whether Medicare payments for surgery can reliably compare hospital costs. We used national Medicare data to assess patients undergoing colectomy, pancreatectomy, and open incisional hernia repair from 2009 to 2010 (n = 339,882 patients). We first calculated risk-adjusted hospital total episode payments for each procedure. We then used hierarchical modeling techniques to estimate the reliability of total episode payments for each procedure and explored the impact of hospital caseload on payment reliability. Finally, we quantified the number of hospitals meeting published reliability benchmarks. Mean risk-adjusted total episode payments ranged from $13,262 (standard deviation [SD] $14,523) for incisional hernia repair to $25,055 (SD $22,549) for pancreatectomy. The reliability of hospital episode payments varied widely across procedures and depended on sample size. For example, mean episode payment reliability for colectomy (mean caseload, 157) was 0.80 (SD 0.18), whereas for pancreatectomy (mean caseload, 13) the mean reliability was 0.45 (SD 0.27). Many hospitals met published reliability benchmarks for each procedure. For example, 90% of hospitals met reliability benchmarks for colectomy, 40% for pancreatectomy, and 66% for incisional hernia repair. Episode payments for inpatient surgery are a reliable measure of hospital costs for commonly performed procedures, but are less reliable for lower volume operations. These findings suggest that hospital cost profiles based on Medicare claims data may be used to benchmark efficiency, especially for more common procedures. Copyright © 2016 Elsevier Inc. All rights reserved.
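
The reliability statistic used in such profiling work is typically the ratio of between-hospital ("signal") variance to total variance, with the noise term shrinking as caseload grows, which is why low-volume operations like pancreatectomy score poorly. A schematic version of the calculation (the variance components below are invented for illustration, not estimated from the Medicare data):

```python
def profile_reliability(between_var: float, within_var: float, caseload: int) -> float:
    """Reliability of a hospital's mean payment profile:
    signal variance over signal-plus-noise, noise shrinking with caseload."""
    return between_var / (between_var + within_var / caseload)

# Illustrative numbers: identical variance components, different volumes.
print(profile_reliability(4e6, 1e8, caseload=157))  # colectomy-like volume     -> ~0.86
print(profile_reliability(4e6, 1e8, caseload=13))   # pancreatectomy-like volume -> ~0.34
```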

  18. Improving machinery reliability

    CERN Document Server

    Bloch, Heinz P

    1998-01-01

    This totally revised, updated and expanded edition provides proven techniques and procedures that extend machinery life, reduce maintenance costs, and achieve optimum machinery reliability. This essential text clearly describes the reliability improvement and failure avoidance steps practiced by best-of-class process plants in the U.S. and Europe.

  19. Reliable Function Approximation and Estimation

    Science.gov (United States)

    2016-08-16

    …compressed sensing results to a wide class of infinite-dimensional problems. We discuss four key findings arising from this project, as related to uncertainty quantification, image processing, matrix …, and the key application domains for the methods developed in this project.

  20. Secondary dentine as a sole parameter for age estimation: Comparison and reliability of qualitative and quantitative methods among North Western adult Indians

    Directory of Open Access Journals (Sweden)

    Jasbir Arora

    2016-06-01

    Full Text Available The indestructible nature of teeth against most of the environmental abuses makes its use in disaster victim identification (DVI. The present study has been undertaken to examine the reliability of Gustafson’s qualitative method and Kedici’s quantitative method of measuring secondary dentine for age estimation among North Western adult Indians. 196 (M = 85; F = 111 single rooted teeth were collected from the Department of Oral Health Sciences, PGIMER, Chandigarh. Ground sections were prepared and the amount of secondary dentine formed was scored qualitatively according to Gustafson’s (0–3 scoring system (method 1 and quantitatively following Kedici’s micrometric measurement method (method 2. Out of 196 teeth 180 samples (M = 80; F = 100 were found to be suitable for measuring secondary dentine following Kedici’s method. Absolute mean error of age was calculated by both methodologies. Results clearly showed that in pooled data, method 1 gave an error of ±10.4 years whereas method 2 exhibited an error of approximately ±13 years. A statistically significant difference was noted in absolute mean error of age between two methods of measuring secondary dentine for age estimation. Further, it was also revealed that teeth extracted for periodontal reasons severely decreased the accuracy of Kedici’s method however, the disease had no effect while estimating age by Gustafson’s method. No significant gender differences were noted in the absolute mean error of age by both methods which suggest that there is no need to separate data on the basis of gender.

  1. Study of systematic errors in the determination of total Hg levels in the range -5% in inorganic and organic matrices with two reliable spectrometrical determination procedures

    International Nuclear Information System (INIS)

    Kaiser, G.; Goetz, D.; Toelg, G.; Max-Planck-Institut fuer Metallforschung, Stuttgart; Knapp, G.; Maichin, B.; Spitzy, H.

    1978-01-01

    In the determination of Hg at ng/g and pg/g levels, systematic errors are due to faults in the analytical methods, such as the intake, preparation and decomposition of a sample. The sources of these errors have been studied both with 203Hg radiotracer techniques and with two multi-stage procedures developed for the determination of trace levels. The emission spectrometric (OES-MIP) procedure includes incineration of the sample in a microwave-induced oxygen plasma (MIP), isolation and enrichment on a gold absorbent, and excitation in an argon plasma (MIP). The emitted Hg radiation (253.7 nm) is evaluated photometrically with a semiconductor element. The detection limit of the OES-MIP procedure was found to be 0.01 ng, with a coefficient of variation of 5% for 1 ng Hg. The second procedure combines a semi-automated wet digestion method (HClO3/HNO3) with reduction-aeration (ascorbic acid/SnCl2) and the flameless atomic absorption technique (253.7 nm). The detection limit of this procedure was found to be 0.5 ng, with a coefficient of variation of 5% for 5 ng Hg. (orig.)

  2. A flexible and coherent test/estimation procedure based on restricted mean survival times for censored time-to-event data in randomized clinical trials.

    Science.gov (United States)

    Horiguchi, Miki; Cronin, Angel M; Takeuchi, Masahiro; Uno, Hajime

    2018-04-22

    In randomized clinical trials where time-to-event is the primary outcome, almost routinely, the logrank test is prespecified as the primary test and the hazard ratio is used to quantify treatment effect. If the ratio of the 2 hazard functions is not constant, the logrank test is not optimal and the interpretation of the hazard ratio is not obvious. When such a nonproportional hazards case is expected at the design stage, the conventional practice is to prespecify another member of the weighted logrank tests, eg, the Peto-Prentice-Wilcoxon test. Alternatively, one may specify a robust test as the primary test, which can capture various patterns of difference between 2 event time distributions. However, most of those tests do not have companion procedures to quantify the treatment difference, and investigators have fallen back on reporting treatment effect estimates not associated with the primary test. Such incoherence in the "test/estimation" procedure may potentially mislead clinicians/patients who have to balance risks and benefits when making treatment decisions. To address this, we propose a flexible and coherent test/estimation procedure based on the restricted mean survival time, where the truncation time τ is selected data dependently. The proposed procedure is composed of a prespecified test and an estimation of the corresponding robust and interpretable quantitative treatment effect. The utility of the new procedure is demonstrated by numerical studies based on 2 randomized cancer clinical trials; the test is dramatically more powerful than the logrank and Wilcoxon tests, and than the restricted mean survival time-based test with a fixed τ, for the patterns of difference seen in these cancer clinical trials. Copyright © 2018 John Wiley & Sons, Ltd.
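
The building block here, the restricted mean survival time, is simply the area under the Kaplan-Meier curve up to τ, and the treatment effect is the difference of two such areas. A minimal sketch using the lifelines package and a fixed τ for brevity (the paper's procedure selects τ data-dependently; the two-arm data below are synthetic):

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.utils import restricted_mean_survival_time

rng = np.random.default_rng(7)

def rmst(durations, events, tau):
    """Area under the Kaplan-Meier curve up to the truncation time tau."""
    km = KaplanMeierFitter().fit(durations, events)
    return restricted_mean_survival_time(km, t=tau)

# Synthetic two-arm trial with non-proportional hazards (delayed effect).
t_ctrl = rng.exponential(12.0, 200)
t_trt = rng.exponential(12.0, 200) + rng.exponential(6.0, 200)
e_ctrl = e_trt = np.ones(200)  # no censoring, for brevity

tau = 24.0
print("RMST difference at tau=24:",
      rmst(t_trt, e_trt, tau) - rmst(t_ctrl, e_ctrl, tau))
```

Unlike a hazard ratio, the resulting difference is expressed in time units (e.g., months of survival gained before τ), which is what makes the estimate interpretable under non-proportional hazards.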

  3. 42 CFR 431.814 - Sampling plan and procedures.

    Science.gov (United States)

    2010-10-01

    … the reliability of the reduced sample. (4) The sample selection procedure. Systematic random sampling is … sampling, and yield estimates with the same or better precision than achieved in systematic random sampling … (42 CFR 431.814, Public Health: Sampling plan and procedures.)
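
Systematic random sampling, the selection procedure named in the regulation, takes every k-th record after a random start. A minimal sketch (frame size and sample size invented):

```python
import numpy as np

def systematic_sample(frame_size: int, n: int, seed=None) -> np.ndarray:
    """Select n cases from a frame of frame_size records by taking every
    k-th record after a random start, where k = frame_size / n."""
    k = frame_size / n
    start = np.random.default_rng(seed).uniform(0, k)
    return (start + k * np.arange(n)).astype(int)

# E.g. draw a 300-case review sample from 12,000 eligibility determinations.
print(systematic_sample(12_000, 300, seed=0)[:5])
```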

  4. Stochastic procedures for extreme wave induced responses in flexible ships

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher; Andersen, Ingrid Marie Vincent; Seng, Sopheak

    2014-01-01

    Different procedures for estimation of the extreme global wave hydroelastic responses in ships are discussed. Firstly, stochastic procedures for application in detailed numerical studies (CFD) are outlined. The use of the First Order Reliability Method (FORM) to generate critical wave episodes...
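
For a linear limit-state function of independent Gaussian variables, the FORM reliability index has a closed form, which is part of what makes the method attractive for steering simulations toward critical wave episodes. A toy illustration (the coefficients and statistics below are invented; a real hydroelastic limit state is far more involved and nonlinear):

```python
import numpy as np
from scipy.stats import norm

def form_beta_linear(a0, a, mu, sigma):
    """FORM index for g(X) = a0 + a.X with independent normal X:
    beta = E[g] / std[g]; the failure probability is Phi(-beta)."""
    mean_g = a0 + a @ mu
    std_g = np.sqrt(((a * sigma) ** 2).sum())
    return mean_g / std_g

# Invented two-variable example: capacity minus wave-induced load.
beta = form_beta_linear(0.0, np.array([1.0, -1.0]),
                        mu=np.array([500.0, 300.0]),
                        sigma=np.array([50.0, 45.0]))
print(beta, norm.cdf(-beta))  # ~2.97 and ~1.5e-3
```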

  5. Assessment of dietary intake of flavouring substances within the procedure for their safety evaluation: advantages and limitations of estimates obtained by means of a per capita method.

    Science.gov (United States)

    Arcella, D; Leclercq, C

    2005-01-01

    The procedure for the safety evaluation of flavourings adopted by the European Commission in order to establish a positive list of these substances is a stepwise approach which was developed by the Joint FAO/WHO Expert Committee on Food Additives (JECFA) and amended by the Scientific Committee on Food. Within this procedure, a per capita amount, based on industrial poundage data of flavourings, is calculated to estimate dietary intake by means of the maximised survey-derived daily intake (MSDI) method. This paper reviews the MSDI method in order to check whether it can provide the conservative intake estimates needed at the first steps of a stepwise procedure. Scientific papers and opinions dealing with the MSDI method were reviewed. Concentration levels reported by the industry were compared with estimates obtained with the MSDI method. It appeared that, in some cases, these estimates could be orders of magnitude (up to 5) lower than those calculated considering concentration levels provided by the industry and regular consumption of flavoured foods and beverages. A critical review was performed of two studies which had been used to support the statement that the MSDI is a conservative method for assessing exposure to flavourings among high consumers. Special attention was given to the factors that affect exposure at high percentiles, such as brand loyalty and portion sizes. It is concluded that these studies may not be suitable to validate the MSDI method used to assess intakes of flavours by European consumers, due to shortcomings in the assumptions made and in the data used. Exposure assessment is an essential component of risk assessment. The present paper suggests that the MSDI method is not sufficiently conservative. There is therefore a clear need either to use an alternative method to estimate exposure to flavourings in the procedure or to limit intakes to the levels at which safety was assessed.
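
The MSDI calculation critiqued here spreads the reported annual poundage over an assumed consumer population. A sketch of the per capita arithmetic as it is commonly cited (a 60% survey-coverage correction and averaging over the whole population and year); treat the constants as the usual convention, not a normative definition:

```python
def msdi_ug_per_day(annual_volume_kg: float, population: int,
                    survey_coverage: float = 0.6) -> float:
    """Maximised Survey-Derived Daily Intake, in micrograms/person/day.
    The reported poundage is assumed to cover only `survey_coverage` of the
    real production volume; the corrected volume is then averaged over the
    whole population and the whole year."""
    volume_ug = annual_volume_kg * 1e9 / survey_coverage
    return volume_ug / (population * 365)

# E.g. 1000 kg/year reported for a population of 320 million Europeans:
print(msdi_ug_per_day(1000.0, 320_000_000))  # ~14 ug/person/day
```

Averaging over the whole population is precisely the step the paper challenges: high consumers with strong brand loyalty can ingest far more than this mean.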

  6. Procedures of amino acid sequencing of peptides in natural proteins collection of knowledge and intelligence for construction of reliable chemical inference system

    OpenAIRE

    Kudo, Yoshihiro; Kanaya, Shigehiko

    1994-01-01

    In order to establish a reliable chemical inference system for the amino acid sequencing of natural peptides, as many kinds of relevant knowledge and intelligence as possible have been collected. Topics cover didemnins, dolastatin 3, TL-119 and/or A-3302-B, mycosubtilin, patellamide A, duramycin (and cinnamycin), bottromycin A2, A19009, galantin I, vancomycin, stenothricin, calf spleen profilin, neocarzinostatin, pancreatic spasmolytic polypeptide, Cerebratulus toxin B-IV, RNase U2, ferredoxin ...

  7. External validation of a forest inventory and analysis volume equation and comparisons with estimates from multiple stem-profile models

    Science.gov (United States)

    Christopher M. Oswalt; Adam M. Saunders

    2009-01-01

    Sound estimation procedures are a desideratum for generating credible population estimates to evaluate the status of, and trends in, resource conditions. As such, volume estimation is an integral component of the reporting of the U.S. Department of Agriculture, Forest Service, Forest Inventory and Analysis (FIA) program. In effect, reliable volume estimation procedures are...

  8. Subset simulation for structural reliability sensitivity analysis

    International Nuclear Information System (INIS)

    Song Shufang; Lu Zhenzhou; Qiao Hongwei

    2009-01-01

    Based on two procedures for efficiently generating conditional samples, i.e. Markov chain Monte Carlo (MCMC) simulation and importance sampling (IS), two reliability sensitivity (RS) algorithms are presented. Building on the reliability analysis of subset simulation (Subsim), the RS of the failure probability with respect to the distribution parameter of a basic variable is transformed into a set of RSs of conditional failure probabilities with respect to that distribution parameter. Using the conditional samples generated by MCMC simulation and IS, procedures are established to estimate the RS of the conditional failure probabilities. The formulae for the RS estimator, its variance and its coefficient of variation are derived in detail. The results of the illustrations show the high efficiency and high precision of the presented algorithms, which are suitable for highly nonlinear limit state equations and for structural systems with single and multiple failure modes.

  9. A reliable procedure for decontamination before thawing of human specimens cryostored in liquid nitrogen: three washes with sterile liquid nitrogen (SLN2).

    Science.gov (United States)

    Parmegiani, Lodovico; Accorsi, Antonio; Bernardi, Silvia; Arnone, Alessandra; Cognigni, Graciela Estela; Filicori, Marco

    2012-10-01

    To report a washing procedure, to be performed as frozen specimens are taken out of cryobanks, to minimize the risk of hypothetical culture contamination during thawing. Basic research. Private assisted reproduction center. Two batches of liquid nitrogen (LN(2)) were experimentally contaminated, one with bacteria (Pseudomonas aeruginosa, Escherichia coli, Stenotrophomonas maltophilia) and the other with fungi (Aspergillus niger). Two hundred thirty-two of the most common human gamete/embryo vitrification carriers (Cryotop, Cryoleaf, Cryopette) were immersed in the contaminated LN(2) (117 in the bacteria and 25 in the fungi-contaminated LN(2)). The carriers were tested microbiologically, one group without washing (control) and the other after three subsequent washings in certified ultraviolet sterile liquid nitrogen (SLN(2)). The carriers were randomly allocated to the "three-wash procedure" (three-wash group, 142 carriers) or "no-wash" (control group, 90 carriers) using a specific software tool. Assessment of microorganism growth. In the no-wash control group, 78.6% of the carriers were contaminated by the bacteria and 100% by the fungi. No carriers were found to be contaminated, either by bacteria or fungi, after the three-wash procedure. The three-wash procedure with SLN(2) produced an efficient decontamination of carriers in extreme experimental conditions. For this reason, this procedure could be routinely performed in IVF laboratories for safe thawing of human specimens that are cryostored in nonhermetical cryocontainers, particularly in the case of open or single-straw closed vitrification systems. Copyright © 2012 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  10. SU-G-IeP3-05: Effects of Image Receptor Technology and Dose Reduction Software On Radiation Dose Estimates for Fluoroscopically-Guided Interventional (FGI) Procedures

    Energy Technology Data Exchange (ETDEWEB)

    Merritt, Z; Dave, J; Eschelman, D; Gonsalves, C [Thomas Jefferson University, Philadelphia, PA (United States)

    2016-06-15

    Purpose: To investigate the effects of image receptor technology and dose reduction software on radiation dose estimates for the most frequently performed fluoroscopically-guided interventional (FGI) procedures at a tertiary health care center. Methods: IRB approval was obtained for retrospective analysis of FGI procedures performed in the interventional radiology suites between January 2011 and December 2015. This included procedures performed using image-intensifier (II) based systems, which were subsequently replaced; flat-panel-detector (FPD) based systems, which were later upgraded with ClarityIQ dose reduction software (Philips Healthcare); and a relatively new FPD system already equipped with ClarityIQ. Post procedure, technologists entered the system-reported cumulative air kerma (CAK) and kerma-area product (KAP; only KAP for II-based systems) in the RIS; these values were analyzed. Data pre-processing included correcting typographical errors and cross-verifying CAK and KAP. The most frequent high- and low-dose FGI procedures were identified and the corresponding CAK and KAP values were compared. Results: Out of 27,251 procedures within this time period, the most frequent high- and low-dose procedures were chemo/immuno-embolization (n=1967) and abscess drainage (n=1821). Mean KAP for embolization and abscess drainage procedures was 260,657, 310,304 and 94,908 mGy·cm², and 14,497, 15,040 and 6307 mGy·cm², using II-, FPD- and FPD-with-ClarityIQ-based systems, respectively. Statistically significant differences were observed in KAP values for embolization procedures across the different systems, but for abscess drainage procedures significant differences were noted only between FPD systems and FPD systems with ClarityIQ (p<0.05). Mean CAK was reduced significantly from 823 to 308 mGy and from 43 to 21 mGy for embolization and abscess drainage procedures, respectively, in transitioning to FPD systems with ClarityIQ (p<0.05). Conclusion: While transitioning from II- to FPD- based

  11. The European industry reliability data bank EIReDA

    International Nuclear Information System (INIS)

    Procaccia, H.; Aufort, P.; Arsenis, S.

    1997-01-01

    EIReDA and its computerized version EIReDA.PC are living databases that aim to satisfy the requirements of risk, safety and availability studies on industrial systems for documented estimates of the reliability parameters of mechanical, electrical and instrumentation components. The data updating procedure is based on Bayesian techniques implemented in a specific software tool, FIABAYES. Estimates are mostly based on the operational experience of EDF components, but an effort has been made to bring together estimates of equivalent components published in the open literature and so establish generic tables of reliability parameters. (author)
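
Bayesian updating of a component failure rate, the role played here by FIABAYES, is classically a gamma-Poisson conjugate update: a generic prior from the data bank is combined with plant-specific failure counts and operating hours. A generic sketch of that update (not the actual FIABAYES implementation; the prior values are invented):

```python
def update_failure_rate(prior_alpha: float, prior_beta_hours: float,
                        failures: int, hours: float):
    """Gamma(alpha, beta) prior on a failure rate [1/h] updated with
    Poisson evidence: the posterior is Gamma(alpha + failures, beta + hours)."""
    return prior_alpha + failures, prior_beta_hours + hours

# Invented generic prior: mean 2e-6/h with alpha=2 => beta = alpha/mean = 1e6 h.
alpha, beta = update_failure_rate(2.0, 1.0e6, failures=3, hours=0.5e6)
print("posterior mean failure rate:", alpha / beta, "per hour")  # ~3.3e-6
```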

  12. Procedure for independently estimating blanks and uncertainties for measured values of 90Sr and 137Cs concentrations in the Atlantic Ocean

    International Nuclear Information System (INIS)

    Kupferman, S.L.; Livingston, H.D.

    1979-01-01

    A procedure has been developed for independently estimating blanks and measurement uncertainties for measured values of 90Sr and 137Cs concentrations in the Atlantic Ocean. The procedure depends on the delineation of a region in the Atlantic Ocean which has never contained measurable quantities of these fission products. Such a region is defined. A simple model, with supporting data, is used to show that reported 137Cs inventories in deep ocean sediments could have accumulated without ever raising concentrations of 137Cs in this tracer-free volume above minimum detectable limits. Several examples are presented to show that the use of the procedure results in a substantial improvement in the quality of 90Sr and 137Cs data. The method is applicable to any laboratory that has determined 90Sr and 137Cs concentrations in samples collected from within the tracer-free volume. 10 refs

  13. Diagnosing alcoholism in high-risk drinking drivers: comparing different diagnostic procedures with estimated prevalence of hazardous alcohol use

    NARCIS (Netherlands)

    Korzec, A.; Bär, M.; Koeter, M. W.; de Kieviet, W.

    2001-01-01

    In several European countries, drivers under influence (DUI), suspected of an alcohol use disorder (AUD, 'alcoholism') are referred for diagnostic examination. The accuracy of diagnostic procedures used in diagnosing AUD in the DUI population is unknown. The aim of this study was to compare three

  14. Intraclass Correlation Coefficients in Hierarchical Design Studies with Discrete Response Variables: A Note on a Direct Interval Estimation Procedure

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2015-01-01

    A latent variable modeling procedure that can be used to evaluate intraclass correlation coefficients in two-level settings with discrete response variables is discussed. The approach is readily applied when the purpose is to furnish confidence intervals at prespecified confidence levels for these coefficients in setups with binary or ordinal…

  15. Reliability of thermal interface materials: A review

    International Nuclear Information System (INIS)

    Due, Jens; Robinson, Anthony J.

    2013-01-01

    Thermal interface materials (TIMs) are used extensively to improve thermal conduction across two mating parts. They are particularly crucial in electronics thermal management since excessive junction-to-ambient thermal resistances can cause elevated temperatures which can negatively influence device performance and reliability. Of particular interest to electronic package designers is the thermal resistance of the TIM layer at the end of its design life. Estimations of this allow the package to be designed to perform adequately over its entire useful life. To this end, TIM reliability studies have been performed using accelerated stress tests. This paper reviews the body of work which has been performed on TIM reliability. It focuses on the various test methodologies with commentary on the results which have been obtained for the different TIM materials. Based on the information available in the open literature, a test procedure is proposed for TIM selection based on beginning and end of life performance.

  16. Efficiency criteria for high reliability measured system structures

    International Nuclear Information System (INIS)

    Sal'nikov, N.L.

    2012-01-01

    Structural redundancy procedures are usually used to develop highly reliable measurement systems. To assess the efficiency of such structures, criteria for comparing different systems have been developed. It is thus possible to develop a more accurate system by inspecting the stochastic characteristics of the redundant system's data units in accordance with the developed criteria.

  17. An Inequality Constrained Least-Squares Approach as an Alternative Estimation Procedure for Atmospheric Parameters from VLBI Observations

    Science.gov (United States)

    Halsig, Sebastian; Artz, Thomas; Iddink, Andreas; Nothnagel, Axel

    2016-12-01

    On its way through the atmosphere, a radio signal is delayed and affected by bending and attenuation effects relative to a theoretical path in vacuum. In particular, the neutral part of the atmosphere contributes considerably to the error budget of space-geodetic observations. At the same time, space-geodetic techniques are becoming more and more important in the understanding of the Earth's atmosphere, because atmospheric parameters can be linked to the water vapor content of the atmosphere. The tropospheric delay is usually taken into account by applying an adequate model for the hydrostatic component and by additionally estimating zenith wet delays for the highly variable wet component. Sometimes the Ordinary Least Squares (OLS) approach leads to negative estimates, which would be equivalent to negative water vapor in the atmosphere and, of course, does not reflect meteorological and physical conditions in a plausible way. To cope with this phenomenon, we introduce an Inequality Constrained Least Squares (ICLS) method from the field of convex optimization and use inequality constraints to force the tropospheric parameters to be non-negative, allowing for a more realistic tropospheric parameter estimation in a meteorological sense. Because deficiencies in the a priori hydrostatic modeling are almost fully compensated by the tropospheric estimates, the ICLS approach urgently requires suitable a priori hydrostatic delays. In this paper, we briefly describe the ICLS method and validate its impact with regard to station positions.
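
Numerically, the ICLS step can be posed as a bound-constrained least-squares problem, which SciPy's lsq_linear solves directly. A toy sketch with non-negativity bounds on the wet-delay parameters (the design matrix below is invented; real VLBI observation equations involve mapping functions, clock terms and gradients):

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(3)

# Invented linear model: observed delays = A @ x + noise, x = zenith wet delays.
A = rng.uniform(1.0, 3.0, size=(50, 4))      # stand-in mapping-function partials
x_true = np.array([0.05, 0.0, 0.12, 0.02])   # meters; one parameter at the bound
y = A @ x_true + rng.normal(0.0, 0.02, 50)

x_ols = np.linalg.lstsq(A, y, rcond=None)[0]       # unconstrained; may go negative
x_icls = lsq_linear(A, y, bounds=(0.0, np.inf)).x  # non-negative solution
print("OLS: ", np.round(x_ols, 3))
print("ICLS:", np.round(x_icls, 3))
```

The contrast between the two printed solutions mirrors the paper's point: OLS can return physically meaningless negative wet delays, while the constrained solution clips them to a plausible range at the cost of requiring good a priori hydrostatic delays.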

  18. Improving wastewater-based epidemiology to estimate cannabis use : focus on the initial aspects of the analytical procedure

    NARCIS (Netherlands)

    Causanilles, A.; Baz-Lomba, J.A.; Burgard, D.A.; Emke, E.; González-Mariño, I.; Krizman-Matasic, I.; Li, A.; Löve, A.S.C.; McKall, A.K.; Montes, R.; van Nuijs, A.L.N.; Ort, C.; Quintana, J.B.; Senta, I.; Terzic, S.; Hernández, F.; de Voogt, P.; Bijlsma, L.

    2017-01-01

    Wastewater-based epidemiology is a promising and complementary tool for estimating drug use by the general population, based on the quantitative analysis of specific human metabolites of illicit drugs in urban wastewater. Cannabis is the most commonly used illicit drug and of high interest for

  19. Reliability Of A Novel Intracardiac Electrogram Method For AV And VV Delay Optimization And Comparability To Echocardiography Procedure For Determining Optimal Conduction Delays In CRT Patients

    Directory of Open Access Journals (Sweden)

    N Reinsch

    2009-03-01

    Background: Echocardiography is widely used to optimize CRT programming. A novel intracardiac electrogram method (IEGM) was recently developed as an automated programmer-based method, designed to calculate optimal atrioventricular (AV) and interventricular (VV) delays and provide optimized delay values as an alternative to standard echocardiographic assessment. Objective: This study aimed to determine the reliability of this new method and to verify the comparability of IEGM to existing echocardiographic parameters for determining optimal conduction delays. Methods: Eleven patients (age 62.9 ± 8.7 years; 81% male; 73% ischemic), previously implanted with a cardiac resynchronisation therapy defibrillator (CRT-D), underwent both echocardiographic and IEGM-based delay optimization. Results: Applying the IEGM method, concordance of three consecutively performed measurements was found in 3 (27%) patients for the AV delay and in 5 (45%) patients for the VV delay. Intra-individual variation between the three measurements, as assessed by the IEGM technique, was up to 20 ms (AV: n=6; VV: n=4). E-wave, diastolic filling time and septal-to-lateral wall motion delay differed significantly between the echo and IEGM optimization techniques (p < 0.05). The final AV delay setting was significantly different between the two methods (echo: 126.4 ± 29.4 ms, IEGM: 183.6 ± 16.3 ms; p < 0.001; correlation: R = 0.573, p = 0.066). The VV delay showed significant differences for optimized delays (echo: 46.4 ± 23.8 ms, IEGM: 10.9 ± 7.0 ms; p < 0.01; correlation: R = -0.278, p = 0.407). Conclusion: The automated programmer-based IEGM method provides a simple and safe way to perform CRT optimization. However, the reliability of this method appears to be limited, and it thus remains difficult for the examiner to determine the optimal hemodynamic settings. Additionally, as there was no correlation between the optimal AV and VV delays calculated by the IEGM method and the echo

  20. Investigation of the possibility of a calculative reactor safety estimation in the licence procedure for nuclear reactors

    International Nuclear Information System (INIS)

    Adler, B.; Kampf, T.

    1975-12-01

    Up to now it has been impossible to calculate the safety of nuclear reactors completely. The authors have therefore collected and employed a number of largely independent safety parameters for the mathematical evaluation of reactor safety. By means of computer programs, such parameters from about 400 research reactors have been analysed and the fluctuation ranges of their greatest density determined. The limits of these fluctuation ranges are quickly available and can be used as recommended values for the layout and for the safety estimation of research reactors. A comparison of the existing layout recommendations with the determined fluctuation ranges shows good agreement in most cases. In some cases, corrections and new layout recommendations have been proposed. The determined fluctuation ranges found their first practical application in the safety estimation of the Rossendorf Equipment for Critical Experiments (RAKE). (author)