WorldWideScience

Sample records for dependability assessment methods

  1. Systems dependability assessment

    CERN Document Server

    Aubry, Jean-François

    2015-01-01

    Presents recent developments in the probabilistic assessment of systems dependability based on stochastic models, including graph theory, finite state automata and language theory, for both dynamic and hybrid contexts.

  2. Atmospheric modeling to assess wind dependence in tracer dilution method measurements of landfill methane emissions.

    Science.gov (United States)

    Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T

    2018-03-01

    The short-term temporal variability of landfill methane emissions is not well understood due to uncertainty in measurement methods. Significant variability is seen over short-term measurement campaigns with the tracer dilution method (TDM), but this variability may be due in part to measurement error rather than fluctuations in the actual landfill emissions. In this study, landfill methane emissions and TDM-measured emissions are simulated over a real landfill in Delaware, USA, using the Weather Research and Forecasting model (WRF) for two emissions scenarios. In the steady emissions scenario, a constant landfill emissions rate is prescribed at each model grid point on the surface of the landfill. In the unsteady emissions scenario, emissions are calculated at each time step as a function of the local surface wind speed, resulting in variable emissions over each 1.5-h measurement period. The simulation output is used to assess the standard deviation and percent error of the TDM-measured emissions. Eight measurement periods are simulated over two different days to examine different conditions. Results show that the standard deviation of the TDM-measured emissions does not increase significantly from the steady emissions simulations to the unsteady emissions scenarios, indicating that the TDM may have inherent errors in its prediction of emissions fluctuations. Results also show that TDM error does not increase significantly from the steady to the unsteady emissions simulations, indicating that introducing variability into the landfill emissions does not increase errors in the TDM at this site. Across all simulations, TDM errors range from -15% to 43%, consistent with the range of errors seen in previous TDM studies. Simulations indicate diurnal variations of methane emissions when wind effects are significant, which may be important when developing daily and annual emissions estimates from limited field data.
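
    The TDM calculation whose error this study assesses can be sketched as follows. This is the standard tracer dilution relation, in which the methane emission rate is inferred from the ratio of the plume-integrated methane and tracer enhancements above background, scaled by the known tracer release rate; the tracer choice, variable names and numbers below are illustrative, not taken from the study.

```python
# Standard tracer dilution method (TDM) relation, sketched with
# illustrative numbers. The emission rate follows from the ratio of
# downwind plume-integrated mixing-ratio enhancements above background,
# converted from a molar to a mass ratio via the molar masses.

M_CH4 = 16.04   # g/mol, molar mass of methane
M_C2H2 = 26.04  # g/mol, molar mass of acetylene (a commonly used tracer)

def tdm_emission_rate(q_tracer_kg_h, ch4_enh_ppb, tracer_enh_ppb):
    """Methane emission rate (kg/h) from plume-integrated enhancements.

    q_tracer_kg_h  -- known tracer release rate (kg/h)
    ch4_enh_ppb    -- integrated CH4 enhancement above background (ppb)
    tracer_enh_ppb -- integrated tracer enhancement above background (ppb)
    """
    return q_tracer_kg_h * (ch4_enh_ppb / tracer_enh_ppb) * (M_CH4 / M_C2H2)

# Illustrative transect: 0.5 kg/h tracer release, CH4 enhancement
# 40 times the tracer enhancement
q = tdm_emission_rate(0.5, 2000.0, 50.0)
print(round(q, 2))  # kg/h
```

    The percent errors quoted in the abstract compare such TDM-derived rates against the emission rate prescribed in the simulation.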

  3. Assessment of endothelium-dependent vasodilation with a non-invasive method in patients with preeclampsia compared to normotensive pregnant women

    Directory of Open Access Journals (Sweden)

    Seyedeh Zahra Allameh

    2014-01-01

    Full Text Available Background: To assess endothelial function via a noninvasive method in pregnant women with preeclampsia compared to normotensive pregnant women. Materials and Methods: Brachial artery diameter was measured via ultrasound in 28 women with preeclampsia (case group) and normotensive pregnant women (control group) at rest, after inflation of a sphygmomanometer cuff to 250-300 mmHg, immediately after deflation of the cuff, 60-90 s later, and 5 min after administration of sublingual trinitroglycerin (TNG). The results of these measurements as well as the demographic characteristics of participants in both groups were recorded on special forms. Data were analyzed with the Statistical Package for the Social Sciences (SPSS) version 16, using the t-test and repeated measures analysis of variance (ANOVA). A P-value < 0.05 was considered statistically significant. The results were presented as mean ± standard deviation (SD). Results: The mean brachial artery diameter at rest in the case and control groups was 4.49 ± 0.39 and 4.08 ± 0.38 mm, respectively (P = 0.1). The brachial artery diameter immediately after deflation of the cuff was 4.84 ± 0.4 and 4.37 ± 0.30 mm in the case and control groups, respectively (P < 0.001). The mean brachial artery diameter 60-90 s after deflation of the cuff was 4.82 ± 0.41 and 4.42 ± 0.38 mm in the case and control groups, respectively (P < 0.001). The brachial artery diameter 5 min after sublingual TNG administration was 4.95 ± 0.6 and 4.40 ± 0.45 mm in the case and control groups, respectively (P < 0.001). Repeated measures ANOVA showed that the mean difference between the case and control groups was statistically significant (P < 0.001). Conclusion: The current study found no difference in endothelium-dependent vasodilation between women with preeclampsia and pregnant women with normal blood pressure.
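
    The endothelium-dependent response in studies of this kind is conventionally expressed as flow-mediated dilation (FMD), the percent change of the post-deflation diameter relative to the resting baseline. A minimal sketch using the group means quoted in the abstract; in practice FMD is computed per subject, so this is purely illustrative:

```python
def fmd_percent(baseline_mm, post_mm):
    """Flow-mediated dilation: percent change from baseline diameter."""
    return (post_mm - baseline_mm) / baseline_mm * 100.0

# Group means from the abstract: rest vs. immediately after cuff deflation
case = fmd_percent(4.49, 4.84)      # preeclampsia group
control = fmd_percent(4.08, 4.37)   # normotensive group
print(round(case, 1), round(control, 1))
```

    Computed on the group means, the two dilation percentages come out similar, consistent with the abstract's conclusion of no between-group difference in endothelium-dependent vasodilation.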

  4. Dependent Human Error Probability Assessment

    International Nuclear Information System (INIS)

    Simic, Z.; Mikulicic, V.; Vukovic, I.

    2006-01-01

    This paper presents an assessment of the dependence between dynamic operator actions modeled in a Nuclear Power Plant (NPP) PRA and estimates the associated impact on core damage frequency (CDF). The assessment was done to improve the implementation of HEP dependencies in the existing PRA. All of the dynamic operator actions modeled in the NPP PRA are included. Determining the level of HEP dependence and the associated influence on CDF are the major steps of the assessment. A decision on how to apply the results, i.e., whether permanent HEP model changes should be made, is based on the resulting relative CDF increase. A CDF increase threshold was selected based on the NPP base CDF value and the acceptance guidelines of Regulatory Guide 1.174: HEP dependences resulting in a CDF increase of > 5E-07 would be considered potential candidates for specific incorporation into the baseline model. The approach used to judge the level of dependence between operator actions is based on the dependency level categories and conditional probabilities developed in the Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications, NUREG/CR-1278. To simplify the process, NUREG/CR-1278 identifies five levels of dependence: ZD (zero dependence), LD (low dependence), MD (moderate dependence), HD (high dependence), and CD (complete dependence). NUREG/CR-1278 also identifies several qualitative factors that can be involved in determining the level of dependence. Based on this information, Time, Function, and Spatial attributes were judged to be the most important considerations when determining the level of dependence between operator actions within an accident sequence. These attributes were used to develop qualitative criteria (rules) for judging the level of dependence (CD, HD, MD, LD, ZD) between the operator actions. After the level of dependence between the various HEPs is judged, quantitative values associated with the
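
    The THERP dependence model referenced here maps each judged level to a conditional error probability. The standard NUREG/CR-1278 formulas, given the basic HEP p of the dependent action, can be sketched as:

```python
# NUREG/CR-1278 (THERP) conditional human error probabilities for a
# dependent action, given its basic HEP p and the judged dependence
# level. At zero dependence the HEP is unchanged; at complete
# dependence failure of the first action guarantees failure of the second.

def conditional_hep(p, level):
    """Conditional HEP of a dependent action under THERP dependence levels."""
    formulas = {
        "ZD": lambda p: p,                  # zero dependence
        "LD": lambda p: (1 + 19 * p) / 20,  # low dependence
        "MD": lambda p: (1 + 6 * p) / 7,    # moderate dependence
        "HD": lambda p: (1 + p) / 2,        # high dependence
        "CD": lambda p: 1.0,                # complete dependence
    }
    return formulas[level](p)

# Example: a nominally rare error (HEP = 1e-3) at moderate dependence
print(round(conditional_hep(1e-3, "MD"), 4))
```

    Note how even a small basic HEP is dominated by the dependence term: at moderate dependence the conditional probability approaches 1/7 regardless of p, which is why the dependence level chosen can drive the CDF impact discussed above.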

  5. Using Educational Data Mining Methods to Assess Field-Dependent and Field-Independent Learners' Complex Problem Solving

    Science.gov (United States)

    Angeli, Charoula; Valanides, Nicos

    2013-01-01

    The present study investigated the problem-solving performance of 101 university students and their interactions with a computer modeling tool in order to solve a complex problem. Based on their performance on the hidden figures test, students were assigned to three groups of field-dependent (FD), field-mixed (FM), and field-independent (FI)…

  6. Application of the Method of Statistical Equations of Dependences to Assess the Dynamics of Regional Human Development Index for Khmelnytsk Region

    Directory of Open Access Journals (Sweden)

    R. О.

    2017-12-01

    Full Text Available A statistical approach to assessment of the factor values required to achieve necessary (planned, predicted) levels of a resulting indicator, including for purposes of regional socio-economic programs, is developed by applying the method of statistical equations of dependences. The main problems that can be solved by the method of statistical equations of dependences are: the direct and inverse problems; computing the factors' contributions to the resulting indicator; constructing graphs of multiple relation and computing the shares of influence (the weights) of selected factors; and analysis of functional and correlation dependences. The developed approach is used to assess the dynamics of the Regional Human Development Index (RHDI) for Khmelnytsk region (Ukraine) and its constituent factors in 2011–2015. The computations show that the factors with the largest contribution to the RHDI of Khmelnytsk region are: "number of minimal food baskets that can be purchased for average per capita income in the region" (62.91%), "housing in cities (square area per person)" (20.27%), and "total birth rate" (5.33%). The contributions of factors like "planned capacity of ambulatories and policlinics per 10 thousand population" or "coverage of children in school age by secondary education" range from 5.26% to 0.14%. It is concluded that the proposed approach to the application of the method of statistical equations of dependences for modeling factor and resulting indicators contributing to human development parameters at regional level can also be used for assessments at sectoral level, with modification of the nomenclature of indicators measuring the socio-economic development and the financial and economic performance of business entities in an economic sector.

  7. Time dependent view factor methods

    International Nuclear Information System (INIS)

    Kirkpatrick, R.C.

    1998-03-01

    View factors have been used for treating radiation transport between opaque surfaces bounding a transparent medium for several decades. However, in recent years they have been applied to problems involving intense bursts of radiation in enclosed volumes such as in the laser fusion hohlraums. In these problems, several aspects require treatment of time dependence

  8. Energy consumption assessment methods

    Energy Technology Data Exchange (ETDEWEB)

    Sutherland, K S

    1975-01-01

    The why, what, and how-to aspects of energy audits for industrial plants, and the application of energy accounting methods to a chemical plant in order to assess energy conservation possibilities are discussed. (LCL)

  9. WATER CHEMISTRY ASSESSMENT METHODS

    Science.gov (United States)

    This section summarizes and evaluates the surface water column chemistry assessment methods for USEPA/EMAP-SW, USGS-NAWQA, USEPA-RBP, Ohio EPA, and MDNR-MBSS. The basic objective of surface water column chemistry assessment is to characterize surface water quality by measuring a sui...

  10. Methods for assessing geodiversity

    Science.gov (United States)

    Zwoliński, Zbigniew; Najwer, Alicja; Giardino, Marco

    2017-04-01

    The accepted systematics of geodiversity assessment methods will be presented in three categories: qualitative, quantitative and qualitative-quantitative. Qualitative methods are usually descriptive methods that are suited to nominal and ordinal data. Quantitative methods use a different set of parameters and indicators to determine the characteristics of geodiversity in the area being researched. Qualitative-quantitative methods are a good combination of the collection of quantitative data (i.e. digital) and cause-effect data (i.e. relational and explanatory). It seems that at the current stage of the development of geodiversity research methods, qualitative-quantitative methods are the most advanced and best assess the geodiversity of the study area. Their particular advantage is the integration of data from different sources and with different substantive content. Among the distinguishing features of the quantitative and qualitative-quantitative methods for assessing geodiversity is their wide use within geographic information systems, both at the stage of data collection and data integration, as well as in numerical processing and presentation. The unresolved problem for these methods, however, is the possibility of their validation. Currently, the best method of validation appears to be direct field verification. Looking to the next few years, the development of qualitative-quantitative methods connected with cognitive issues should be expected, oriented towards ontology and the Semantic Web.

  11. Guidance on Dependence Assessment in SPAR-H

    Energy Technology Data Exchange (ETDEWEB)

    April M. Whaley

    2012-06-01

    As part of the effort to develop the SPAR-H user guidance, particular attention was paid to the assessment of dependence in order to address user questions about proper application of dependence. This paper presents a discussion of dependence from a psychological perspective and provides guidance on applying this information during the qualitative analysis of dependence to ensure more realistic and appropriate dependence assessments with the SPAR-H method. While this guidance was developed with SPAR-H in mind, it may be informative to other human reliability analysis methods that also use a THERP-based dependence approach, particularly if applied at the human failure event level.

  12. Assessment methods for rehabilitation.

    Science.gov (United States)

    Biefang, S; Potthoff, P

    1995-09-01

    Diagnostics and evaluation in medical rehabilitation should be based on methods that are as objective as possible, and quantitative methods are an important precondition for this. For the German Pension Insurance Institutions (which are in charge of the medical and vocational rehabilitation of workers and employees), we conducted a survey of assessment methods for rehabilitation, including an evaluation of the American literature, with the aim of indicating procedures that can be considered for adaptation in Germany and defining further research requirements. The survey identified: (1) standardized procedures and instrumented tests for the assessment of musculoskeletal, cardiopulmonary and neurophysiological function; (2) personality, intelligence, achievement, neuropsychological and alcoholism screening tests for the assessment of mental or cognitive function; (3) rating scales and self-administered questionnaires for the assessment of Activities of Daily Living and Instrumental Activities of Daily Living (ADL/IADL scales); (4) generic profiles and indexes as well as disease-specific measures for the assessment of health-related quality of life and health status; and (5) rating scales for vocational assessment. German equivalents or German versions exist for only a part of the procedures identified. Translation and testing of Anglo-Saxon procedures should have priority over the development of new German methods. The following procedures will be taken into account: (a) instrumented tests for physical function, (b) IADL scales, (c) generic indexes of health-related quality of life, (d) specific quality of life and health status measures for disorders of the circulatory system, metabolic system, digestive organs, respiratory tract and for cancer, and (e) vocational rating scales.

  13. Assessment of Dependency, Agreeableness, and Their Relationship

    Science.gov (United States)

    Lowe, Jennifer Ruth; Edmundson, Maryanne; Widiger, Thomas A.

    2009-01-01

    Agreeableness is central to the 5-factor model conceptualization of dependency. However, 4 meta-analyses of the relationship of agreeableness with dependency have failed to identify a consistent relationship. It was the hypothesis of the current study that these findings might be due in part to an emphasis on the assessment of adaptive, rather…

  14. Time-dependent problems and difference methods

    CERN Document Server

    Gustafsson, Bertil; Oliger, Joseph

    2013-01-01

    Praise for the First Edition: ". . . fills a considerable gap in the numerical analysis literature by providing a self-contained treatment . . . this is an important work written in a clear style . . . warmly recommended to any graduate student or researcher in the field of the numerical solution of partial differential equations." -SIAM Review. Time-Dependent Problems and Difference Methods, Second Edition continues to provide guidance for the analysis of difference methods for computing approximate solutions to partial differential equations for time-dependent problems.

  15. DEPEND-HRA-A method for consideration of dependency in human reliability analysis

    International Nuclear Information System (INIS)

    Cepin, Marko

    2008-01-01

    A consideration of dependencies between human actions is an important issue within human reliability analysis. A method was developed that integrates the features of existing methods and experience from a full-scope plant simulator. The method is used in a real plant-specific human reliability analysis as a part of the probabilistic safety assessment of a nuclear power plant. The method distinguishes dependency for pre-initiator events from dependency for initiator and post-initiator events. It identifies dependencies based on scenarios, where consecutive human actions are modeled, and based on a list of minimal cut sets, which is obtained by running the minimal cut set analysis with high values of human error probabilities in the evaluation. A large example study, which consisted of a large number of human failure events, demonstrated the applicability of the method. Comparative analyses show that both the selection of the dependency method and the selection of dependency levels within the method largely impact the results of the probabilistic safety assessment. Even if the core damage frequency is not impacted much, the listings of important basic events in terms of risk increase and risk decrease factors may change considerably. More effort is needed on this subject to prepare the background for more detailed guidelines that will remove subjectivity from the evaluations as much as possible.
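
    The cut-set-based identification step described here can be sketched as follows. A hypothetical screening pass re-evaluates the minimal cut sets with all human error probabilities (HEPs) set to a conservatively high value, so that cut sets containing several human failure events surface for dependence review; the event names and probabilities below are invented for illustration.

```python
# Screening sketch: cut sets are sets of basic-event names, a cut set's
# probability is the product of its events' probabilities (rare-event
# approximation), and cut sets with two or more human failure events
# (HFEs) are flagged as candidates for dependence assessment.

base_p = {"PUMP_A": 1e-3, "VALVE_B": 5e-4,
          "HFE_1": 1e-2, "HFE_2": 3e-2, "HFE_3": 5e-3}
human_events = {"HFE_1", "HFE_2", "HFE_3"}
cut_sets = [{"PUMP_A", "HFE_1", "HFE_2"},
            {"VALVE_B", "HFE_3"},
            {"PUMP_A", "VALVE_B"}]

def cut_set_prob(cs, probs):
    """Probability of a minimal cut set as the product of event probabilities."""
    prod = 1.0
    for e in cs:
        prod *= probs[e]
    return prod

# Screening run: push all HEPs to a conservatively high value (here 0.5)
screen_p = {e: (0.5 if e in human_events else p) for e, p in base_p.items()}

# Flag cut sets with two or more human failure events for dependence review
flagged = [cs for cs in cut_sets if len(cs & human_events) >= 2]
print([sorted(cs) for cs in flagged],
      cut_set_prob(flagged[0], screen_p))
```

    In a real PSA the flagged combinations would then be assigned dependence levels and requantified with conditional HEPs instead of the independent values.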

  16. Dependability Analysis Methods For Configurable Software

    International Nuclear Information System (INIS)

    Dahll, Gustav; Pulkkinen, Urho

    1996-01-01

    Configurable software systems are systems which are built up by standard software components in the same way as a hardware system is built up by standard hardware components. Such systems are often used in the control of NPPs, also in safety related applications. A reliability analysis of such systems is therefore necessary. This report discusses what configurable software is, and what is particular with respect to reliability assessment of such software. Two very commonly used techniques in traditional reliability analysis, viz. failure mode, effect and criticality analysis (FMECA) and fault tree analysis are investigated. A real example is used to illustrate the discussed methods. Various aspects relevant to the assessment of the software reliability in such systems are discussed. Finally some models for quantitative software reliability assessment applicable on configurable software systems are described. (author)

  17. Information System Quality Assessment Methods

    OpenAIRE

    Korn, Alexandra

    2014-01-01

    This thesis explores the challenging topic of information system quality assessment, mainly process assessment. The term Information System Quality is defined, and different approaches to defining quality for different domains of information systems are outlined. The main methods of process assessment are reviewed and their relationships described. Process assessment methods are divided into two categories: ISO standards and best practices. The main objective of this w...

  18. Methods of risk assessment

    International Nuclear Information System (INIS)

    Jones, D.R.

    1981-01-01

    The subject is discussed under the headings: introduction (identification, quantification of risk); some approaches to risk evaluation (use of the 'no risk' principle; the 'acceptable risk' method; risk balancing; comparison of risks, benefits and other costs); cost benefit analysis; an alternative approach (tabulation and display; description and reduction of the data table); identification of potential decision sets consistent with the constraints. Some references are made to nuclear power. (U.K.)

  19. Analysis of dependent failures in risk assessment and reliability evaluation

    International Nuclear Information System (INIS)

    Fleming, K.N.; Mosleh, A.; Kelley, A.P. Jr. (Gas-Cooled Reactors Associates, La Jolla, CA)

    1983-01-01

    The ability to estimate the risk of potential reactor accidents is largely determined by the ability to analyze statistically dependent multiple failures. The importance of dependent failures has been indicated in recent probabilistic risk assessment (PRA) studies as well as in reports of reactor operating experiences. This article highlights the importance of several different types of dependent failures from the perspective of the risk and reliability analyst and provides references to the methods and data available for their analysis. In addition to describing the current state of the art, some recent advances, pitfalls, misconceptions, and limitations of some approaches to dependent failure analysis are addressed. A summary is included of the discourse on this subject, which is presented in the Institute of Electrical and Electronics Engineers/American Nuclear Society PRA Procedures Guide

  20. Chemical Dependency Regional Needs Assessment: Northeastern Minnesota.

    Science.gov (United States)

    Stone, Marylee

    The Minnesota Model of Chemical Dependency Treatment, which evolved from a combination of the grassroots Alcoholics Anonymous movement and the State Mental Health Services in the 1960s, has made Minnesota an international leader in chemical dependency treatment efforts. Northeastern Minnesota has shared this reputation with the state. In spite of…

  1. Methods of sperm vitality assessment.

    Science.gov (United States)

    Moskovtsev, Sergey I; Librach, Clifford L

    2013-01-01

    Sperm vitality is a reflection of the proportion of live, membrane-intact spermatozoa determined by either dye exclusion or osmoregulatory capacity under hypo-osmotic conditions. In this chapter we address the two most common methods of sperm vitality assessment: eosin-nigrosin staining and the hypo-osmotic swelling test, both utilized in clinical Andrology laboratories.

  2. Assessment of the chestnut production weather dependence

    Science.gov (United States)

    Pereira, Mário; Caramelo, Liliana; Gouveia, Célia; Gomes-Laranjo, José

    2010-05-01

    The vegetative cycle of chestnut trees is highly dependent on weather. Photosynthesis and pollen germination are mainly conditioned by air temperature, while heavy precipitation and strong wind have significant impacts during the flushing phase (Gomes-Laranjo et al., 2005, 2006). In Portugal, chestnut orchards are located in mountainous areas of the northeastern region of Trás-os-Montes, between 600 and 1000 m of altitude. Topography controls the atmospheric environment and assures adequate conditions for chestnut production. In this context, remote sensing plays an important role because of its ability to monitor and characterise vegetation dynamics. A number of studies based on remote sensing have been conducted in Europe to analyse the year-to-year variations in European vegetation greenness as a function of precipitation and temperature (Gouveia et al., 2008). A previous study focusing on the relationship between meteorological variables and chestnut productivity indicates that simulation models may benefit from the incorporation of such relationships. The aim of the present work is to provide a detailed description of recent developments, in particular of the added value that may be brought by using satellite data. We have relied on regional fields of the Normalized Difference Vegetation Index (NDVI) dataset, at 8-km resolution, provided by the Global Inventory Monitoring and Modelling System (GIMMS) group. The data are derived from the Advanced Very High Resolution Radiometers (AVHRR) and cover the period from 1982 to 2006. Additionally, we have used the chestnut productivity dataset, which includes the annual values of chestnut production and area of production provided by INE, the National Institute of Statistics of Portugal, and the meteorological dataset, which includes values of several variables from different providers (Meteorod, NCEP/NCAR, ECA&D and the national Meteorological Institute).
    Results show that
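
    The kind of NDVI-productivity relationship the abstract describes is typically quantified with a simple correlation between annual series. A sketch with invented numbers (in the study, NDVI comes from GIMMS AVHRR composites and productivity from INE statistics):

```python
# Correlating an annual NDVI-based greenness signal with annual chestnut
# productivity. Both series below are invented for illustration.

import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

ndvi = [0.52, 0.58, 0.47, 0.61, 0.55, 0.49]   # growing-season mean NDVI
yield_t_ha = [1.8, 1.9, 1.5, 2.3, 2.0, 1.6]   # chestnut yield (t/ha)
print(round(pearson_r(ndvi, yield_t_ha), 2))
```

    A strong positive correlation of this kind is what would motivate feeding the satellite greenness signal into a productivity simulation model.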

  3. Time dependent variational method in quantum mechanics

    International Nuclear Information System (INIS)

    Torres del Castillo, G.F.

    1987-01-01

    Using the fact that the solutions to the time-dependent Schrödinger equation can be obtained from a variational principle, by restricting the evolution of the state vector to some surface in the corresponding Hilbert space, approximations to the exact solutions can be obtained, which are determined by equations similar to Hamilton's equations. It is shown that, in order for the approximate evolution to be well defined on a given surface, the imaginary part of the inner product restricted to the surface must be non-singular. (author)

  4. Dependencies, human interactions and uncertainties in probabilistic safety assessment

    International Nuclear Information System (INIS)

    Hirschberg, S.

    1990-01-01

    In the context of Probabilistic Safety Assessment (PSA), three areas were investigated in a 4-year Nordic programme: dependencies, with special emphasis on common cause failures; human interactions; and uncertainty aspects. The approach was centered on comparative analyses in the form of Benchmark/Reference Studies and retrospective reviews. Weak points in available PSAs were identified, and recommendations were made aiming at improving the consistency of the PSAs. The sensitivity of PSA results to basic assumptions was demonstrated, and the sensitivity to data assignment and to the choice of methods for analysis of selected topics was investigated. (author)

  5. A computational Bayesian approach to dependency assessment in system reliability

    International Nuclear Information System (INIS)

    Yontay, Petek; Pan, Rong

    2016-01-01

    Due to the increasing complexity of engineered products, it is of great importance to develop a tool to assess reliability dependencies among components and systems under the uncertainty of system reliability structure. In this paper, a Bayesian network approach is proposed for evaluating the conditional probability of failure within a complex system, using a multilevel system configuration. Coupled with Bayesian inference, the posterior distributions of these conditional probabilities can be estimated by combining failure information and expert opinions at both system and component levels. Three data scenarios are considered in this study, and they demonstrate that, with the quantification of the stochastic relationship of reliability within a system, the dependency structure in system reliability can be gradually revealed by the data collected at different system levels. - Highlights: • A Bayesian network representation of system reliability is presented. • Bayesian inference methods for assessing dependencies in system reliability are developed. • Complete and incomplete data scenarios are discussed. • The proposed approach is able to integrate reliability information from multiple sources at multiple levels of the system.
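
    The core idea, that one component's failure probability is conditioned on the state of a related component, can be illustrated with a minimal two-component sketch. This is not the paper's multilevel model; all probabilities below are invented.

```python
# A two-node Bayesian-network-style dependency: component B's failure
# probability depends on the state of component A. The series-system
# failure probability is computed by enumerating the joint states, and
# compared against what the independence assumption (with B's marginal
# failure probability) would give.

p_a_fail = 0.05                    # marginal failure probability of A
p_b_fail_given_a = {True: 0.40,    # B far more likely to fail if A failed
                    False: 0.02}   # (positive dependence)

def series_failure_prob():
    """P(system fails) for a series system: fails if A or B fails."""
    total = 0.0
    for a_fail in (True, False):
        p_a = p_a_fail if a_fail else 1 - p_a_fail
        for b_fail in (True, False):
            p_b = (p_b_fail_given_a[a_fail] if b_fail
                   else 1 - p_b_fail_given_a[a_fail])
            if a_fail or b_fail:
                total += p_a * p_b
    return total

# Independence assumption using B's marginal failure probability
p_b_marg = (p_a_fail * p_b_fail_given_a[True]
            + (1 - p_a_fail) * p_b_fail_given_a[False])
p_indep = 1 - (1 - p_a_fail) * (1 - p_b_marg)
print(round(series_failure_prob(), 3), round(p_indep, 3))
```

    With positive dependence, failures cluster in the same scenarios, so the series-system failure probability here is lower than the independence assumption predicts; ignoring the dependency structure biases the system-level estimate.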

  6. LNG Safety Assessment Evaluation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Muna, Alice Baca [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); LaFleur, Angela Christine [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-05-01

    Sandia National Laboratories evaluated published safety assessment methods across a variety of industries including Liquefied Natural Gas (LNG), hydrogen, land and marine transportation, as well as the US Department of Defense (DOD). All the methods were evaluated for their potential applicability to the LNG railroad application. After reviewing the documents included in this report, as well as others not included because of repetition, the Department of Energy (DOE) Hydrogen Safety Plan Checklist was found to be the most suitable for adaptation to the LNG railroad application. This report was developed to survey industries related to rail transportation for methodologies and tools that can be used by the Federal Railroad Administration (FRA) to review and evaluate safety assessments submitted by the railroad industry as a part of their implementation plans for liquefied or compressed natural gas storage (on-board or tender) and engine fueling delivery systems. The main sections of this report provide an overview of the various methods found during this survey. In most cases, the reference document is quoted directly. The final section provides discussion and a recommendation for the most appropriate methodology that will allow efficient and consistent evaluations to be made. The DOE Hydrogen Safety Plan Checklist was then revised to adapt it as a methodology for the FRA's use in evaluating safety plans submitted by the railroad industry.

  7. Advanced methods of fatigue assessment

    CERN Document Server

    Radaj, Dieter

    2013-01-01

    This book presents advanced methods of brittle fracture and fatigue assessment. The Neuber concept of fictitious notch rounding is enhanced with regard to theory and application. The stress intensity factor concept for cracks is extended to pointed and rounded corner notches as well as to locally elastic-plastic material behaviour. The averaged strain energy density within a circular sector volume around the notch tip is shown to be suitable for strength assessments. Finally, the various implications of cyclic plasticity for fatigue crack growth are explained, with emphasis on the ΔJ-integral approach. This book continues the expositions of the authors' well-known German-language reference work 'Ermüdungsfestigkeit – Grundlagen für Ingenieure' (Fatigue Strength – Fundamentals for Engineers).

  8. Clinical Considerations in the Assessment of Adolescent Chemical Dependency.

    Science.gov (United States)

    Winters, Ken

    1990-01-01

    Discusses relevant research findings of clinical assessment of adolescent chemical dependency so that service providers can better address these concerns. Three major issues are discussed: the definition of adolescent chemical dependency, clinical domains of assessment (chemical use problem severity, precipitating and perpetuating risk factors,…

  9. Improved power performance assessment methods

    Energy Technology Data Exchange (ETDEWEB)

    Frandsen, S.; Antoniou, I.; Dahlberg, J.A. [and others]

    1999-03-01

    The uncertainty of presently used methods for retrospective assessment of the productive capacity of wind farms is unacceptably large. The possibilities of improving the accuracy have been investigated and are reported. A method is presented that includes an extended power curve and site calibration. In addition, blockage effects with respect to reference wind speed measurements are analysed. It is found that significant accuracy improvements are possible by introducing more input variables, such as turbulence and wind shear, in addition to mean wind speed and air density. Also, testing several or all machines in the wind farm, instead of only one or two, may provide a better estimate of the average performance. (au)

  10. Time-dependent reliability analysis and condition assessment of structures

    International Nuclear Information System (INIS)

    Ellingwood, B.R.

    1997-01-01

    Structures generally play a passive role in assurance of safety in nuclear plant operation, but are important if the plant is to withstand the effect of extreme environmental or abnormal events. Relative to mechanical and electrical components, structural systems and components would be difficult and costly to replace. While the performance of steel or reinforced concrete structures in service generally has been very good, their strengths may deteriorate during an extended service life as a result of changes brought on by an aggressive environment, excessive loading, or accidental loading. Quantitative tools for condition assessment of aging structures can be developed using time-dependent structural reliability analysis methods. Such methods provide a framework for addressing the uncertainties attendant to aging in the decision process.
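
    The core idea of time-dependent reliability analysis can be illustrated with a crude Monte Carlo sketch: resistance degrades over the service life while loads recur randomly, so the cumulative failure probability grows with time. All distributions, degradation rates, and numbers below are invented for illustration and are not from the paper:

```python
import random

def failure_probability(n_years, n_sims=20000, seed=1):
    """Crude Monte Carlo sketch of time-dependent structural reliability.

    Resistance degrades linearly with age while each year brings a random
    maximum load; a simulated structure fails in the first year the load
    exceeds the (degraded) resistance.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_sims):
        r0 = rng.gauss(100.0, 10.0)                  # initial resistance
        for year in range(n_years):
            resistance = r0 * (1.0 - 0.01 * year)    # 1 % strength loss per year
            load = rng.gauss(50.0, 10.0)             # annual maximum load
            if load > resistance:
                failures += 1
                break
    return failures / n_sims

# the cumulative failure probability grows as the safety margin erodes
p10, p40 = failure_probability(10), failure_probability(40)
```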

  11. Dependence assessment in human reliability analysis based on D numbers and AHP

    International Nuclear Information System (INIS)

    Zhou, Xinyi; Deng, Xinyang; Deng, Yong; Mahadevan, Sankaran

    2017-01-01

    Highlights: • D numbers and AHP are combined to implement dependence assessment in HRA. • A new tool, called D numbers, is used to deal with the uncertainty in HRA. • The proposed method addresses the fuzziness and subjectivity of linguistic assessment. • The proposed method is well suited to dependence assessment, which is inherently a linguistic assessment process. - Abstract: Since human errors always cause heavy losses, especially in nuclear engineering, human reliability analysis (HRA) has attracted more and more attention. Dependence assessment plays a vital role in HRA, measuring the degree of dependence between human errors. Much research has been done, but room for improvement remains. In this paper, a dependence assessment model based on D numbers and the analytic hierarchy process (AHP) is proposed. First, the factors used to measure the dependence level of two human operations are identified and, in terms of the suggested dependence levels, the anchor points for each factor are determined and quantified. Second, D numbers and AHP are adopted in the model: experts evaluate the dependence level of the human operations for each factor, the evaluation results are expressed as D numbers and fused by the D-number combination rule to obtain the dependence probability of the human operations for each factor, and the weights of the factors are determined by AHP. Third, from the per-factor dependence probabilities and their corresponding weights, the dependence probability of the two human operations, and its confidence, is obtained. The proposed method addresses the fuzziness and subjectivity of linguistic assessment and is well suited to assessing the dependence degree of human errors in HRA, which inherently involves a linguistic assessment process.
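
    As a minimal illustration of the final aggregation step described in the abstract, per-factor dependence probabilities can be combined with AHP weights as a weighted sum. The factor names and numbers here are hypothetical, and the D-number fusion that would produce the per-factor probabilities is not shown:

```python
def aggregate_dependence(prob_by_factor, weight_by_factor):
    """Weighted combination of per-factor dependence probabilities,
    using factor weights (e.g. from AHP) that sum to one."""
    assert abs(sum(weight_by_factor.values()) - 1.0) < 1e-9
    return sum(p * weight_by_factor[f] for f, p in prob_by_factor.items())

# hypothetical factors and values for two successive operator actions
probs = {"time_gap": 0.8, "similarity": 0.6, "stress": 0.5}
weights = {"time_gap": 0.5, "similarity": 0.3, "stress": 0.2}
print(round(aggregate_dependence(probs, weights), 2))  # 0.68
```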

  12. Dependence assessment in human reliability analysis based on D numbers and AHP

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Xinyi; Deng, Xinyang [School of Computer and Information Science, Southwest University, Chongqing 400715 (China); Deng, Yong, E-mail: ydeng@swu.edu.cn [School of Computer and Information Science, Southwest University, Chongqing 400715 (China); Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 610054 (China); Mahadevan, Sankaran [School of Engineering, Vanderbilt University, Nashville, TN 37235 (United States)

    2017-03-15

    Highlights: • D numbers and AHP are combined to implement dependence assessment in HRA. • A new tool, called D numbers, is used to deal with the uncertainty in HRA. • The proposed method addresses the fuzziness and subjectivity of linguistic assessment. • The proposed method is well suited to dependence assessment, which is inherently a linguistic assessment process. - Abstract: Since human errors always cause heavy losses, especially in nuclear engineering, human reliability analysis (HRA) has attracted more and more attention. Dependence assessment plays a vital role in HRA, measuring the degree of dependence between human errors. Much research has been done, but room for improvement remains. In this paper, a dependence assessment model based on D numbers and the analytic hierarchy process (AHP) is proposed. First, the factors used to measure the dependence level of two human operations are identified and, in terms of the suggested dependence levels, the anchor points for each factor are determined and quantified. Second, D numbers and AHP are adopted in the model: experts evaluate the dependence level of the human operations for each factor, the evaluation results are expressed as D numbers and fused by the D-number combination rule to obtain the dependence probability of the human operations for each factor, and the weights of the factors are determined by AHP. Third, from the per-factor dependence probabilities and their corresponding weights, the dependence probability of the two human operations, and its confidence, is obtained. The proposed method addresses the fuzziness and subjectivity of linguistic assessment and is well suited to assessing the dependence degree of human errors in HRA, which inherently involves a linguistic assessment process.

  13. Introduction to numerical methods for time dependent differential equations

    CERN Document Server

    Kreiss, Heinz-Otto

    2014-01-01

    Introduces both the fundamentals of time dependent differential equations and their numerical solutions Introduction to Numerical Methods for Time Dependent Differential Equations delves into the underlying mathematical theory needed to solve time dependent differential equations numerically. Written as a self-contained introduction, the book is divided into two parts to emphasize both ordinary differential equations (ODEs) and partial differential equations (PDEs). Beginning with ODEs and their approximations, the authors provide a crucial presentation of fundamental notions, such as the t
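
    A first example in the spirit of the book: the forward Euler method, the simplest numerical scheme for a time dependent ODE, applied to y' = -y with exact solution e^(-t). The step count is chosen arbitrarily for illustration:

```python
import math

def euler(f, y0, t_end, n_steps):
    """Forward Euler integration of y' = f(t, y) from t = 0 to t_end."""
    h = t_end / n_steps
    t, y = 0.0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

# y' = -y, y(0) = 1 has exact solution exp(-t); the error shrinks
# roughly in proportion to the step size h (first-order accuracy)
approx = euler(lambda t, y: -y, 1.0, 1.0, 1000)
error = abs(approx - math.exp(-1.0))
```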

  14. Spectral methods for time dependent partial differential equations

    Science.gov (United States)

    Gottlieb, D.; Turkel, E.

    1983-01-01

    The theory of spectral methods for time dependent partial differential equations is reviewed. When the domain is periodic Fourier methods are presented while for nonperiodic problems both Chebyshev and Legendre methods are discussed. The theory is presented for both hyperbolic and parabolic systems using both Galerkin and collocation procedures. While most of the review considers problems with constant coefficients the extension to nonlinear problems is also discussed. Some results for problems with shocks are presented.
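
    The hallmark of the Fourier methods reviewed here is spectral accuracy for smooth periodic data. The following small sketch (naive O(n²) transforms, sufficient for illustration) differentiates sin(x) on a periodic grid and recovers cos(x) to near machine precision:

```python
import cmath, math

def fourier_derivative(values):
    """Differentiate a periodic function sampled on [0, 2*pi) by a
    (naive, O(n^2)) discrete Fourier transform."""
    n = len(values)
    coeffs = [sum(values[j] * cmath.exp(-2j * math.pi * k * j / n)
                  for j in range(n)) / n for k in range(n)]

    def wavenumber(k):  # map DFT index to signed wavenumber
        return k if k <= n // 2 else k - n

    dcoeffs = [1j * wavenumber(k) * c for k, c in enumerate(coeffs)]
    if n % 2 == 0:
        dcoeffs[n // 2] = 0  # drop the unpaired Nyquist mode
    return [sum(dc * cmath.exp(2j * math.pi * k * j / n)
                for k, dc in enumerate(dcoeffs)).real for j in range(n)]

n = 16
xs = [2 * math.pi * j / n for j in range(n)]
deriv = fourier_derivative([math.sin(x) for x in xs])
# for smooth periodic data the error is at round-off level ("spectral accuracy")
max_err = max(abs(d - math.cos(x)) for d, x in zip(deriv, xs))
```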

  15. Dependent failure analysis research for the US NRC Risk Methods Integration and Evaluation Program

    International Nuclear Information System (INIS)

    Bohn, M.P.; Stack, D.W.; Campbell, D.J.; Rooney, J.J.; Rasmuson, D.M.

    1985-01-01

    The Risk Methods Integration and Evaluation Program (RMIEP), which is being performed for the Nuclear Regulatory Commission by Sandia National Laboratories, has the goals of developing new risk assessment methods and integrating the new and existing methods in a uniform procedure for performing an in-depth probabilistic risk assessment (PRA) with consistent levels of analysis for internal, external, and dependent failure scenarios. An important part of RMIEP is the recognition of the crucial importance of dependent common cause failures (CCFs) and the pressing need to develop effective methods for analyzing CCFs as part of a PRA. The NRC-sponsored Integrated Dependent Failure Methodology Program at Sandia is addressing this need. This paper presents a preliminary approach for analyzing CCFs as part of a PRA. A nine-step procedure for efficiently screening and analyzing dependent failure scenarios is presented, and each step is discussed

  16. Methodical treatment of dependent failures in risk analyses

    International Nuclear Information System (INIS)

    Hennings, W.; Mertens, J.

    1987-06-01

    In this report the state of the art regarding dependent failures is compiled and commented on. Among others, the following recommendations are inferred: The term 'common mode failures' should be restricted to failures of redundant, similar components; the generic term is 'dependent failures', with the subsets 'causal failures' and 'common cause failures'. In risk studies, dependent failures should be covered as far as possible by 'explicit methods'. Nevertheless an uncovered rest remains, which should be accounted for by sensitivity analyses using 'implicit methods'. For this the homogeneous Marshall-Olkin model is recommended. Because the available reports on operating experience only record 'common mode failures' systematically, it is recommended to additionally apply other methods, e.g. to carry out a 'precursor study'. (orig.) [de]
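
    The Marshall-Olkin model recommended above couples component lifetimes through a shared shock: each component fails at the earlier of its own exponential clock and a common-shock clock. A simulation sketch with invented rates:

```python
import random

def marshall_olkin_pairs(lam_ind, lam_shock, n, seed=1):
    """Simulate lifetime pairs under the homogeneous Marshall-Olkin model:
    T_i = min(individual exponential clock, shared shock clock)."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        shock = rng.expovariate(lam_shock)  # common-cause shock time
        pairs.append((min(rng.expovariate(lam_ind), shock),
                      min(rng.expovariate(lam_ind), shock)))
    return pairs

pairs = marshall_olkin_pairs(1e-3, 1e-4, 20000)
# the shared shock gives simultaneous failures positive probability,
# here lam_shock / (lam_shock + 2 * lam_ind), i.e. about 4.8 %
frac_simultaneous = sum(t1 == t2 for t1, t2 in pairs) / len(pairs)
```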

  17. Evidential analytic hierarchy process dependence assessment methodology in human reliability analysis

    International Nuclear Information System (INIS)

    Chen, Lu Yuan; Zhou, Xinyi; Xiao, Fuyuan; Deng, Yong; Mahadevan, Sankaran

    2017-01-01

    In human reliability analysis, dependence assessment is an important issue in risky, large, complex systems, such as the operation of a nuclear power plant. Many existing methods depend on an expert's judgment, which introduces subjectivity and limits the results. Recently, a computational method based on the Dempster-Shafer evidence theory and the analytic hierarchy process was proposed to handle dependence in human reliability analysis. The model can deal with uncertainty in an analyst's judgment and reduce the subjectivity of the evaluation process. However, the computation is somewhat heavy and complicated. More importantly, the existing method considers only the positive aspect, which may cause an underestimation of the risk. In this study, a new evidential analytic hierarchy process dependence assessment methodology, based on improvements to the existing methods, is proposed and is expected to be easier to apply and more effective
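
    The evidence-theoretic core of these methods is Dempster's rule of combination. A minimal sketch follows, with a hypothetical two-analyst example over dependence levels (the mass assignments are invented for illustration):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the frame of discernment."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # normalize away the conflicting mass
    return {a: w / (1.0 - conflict) for a, w in combined.items()}

# two analysts' evidence about the dependence level of successive operations
LOW, MOD = frozenset({"low"}), frozenset({"moderate"})
EITHER = LOW | MOD
m1 = {LOW: 0.6, EITHER: 0.4}
m2 = {LOW: 0.3, MOD: 0.5, EITHER: 0.2}
print(dempster_combine(m1, m2))
```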

  18. Evidential Analytic Hierarchy Process Dependence Assessment Methodology in Human Reliability Analysis

    Directory of Open Access Journals (Sweden)

    Luyuan Chen

    2017-02-01

    Full Text Available In human reliability analysis, dependence assessment is an important issue in risky, large, complex systems, such as the operation of a nuclear power plant. Many existing methods depend on an expert's judgment, which introduces subjectivity and limits the results. Recently, a computational method based on the Dempster–Shafer evidence theory and the analytic hierarchy process was proposed to handle dependence in human reliability analysis. The model can deal with uncertainty in an analyst's judgment and reduce the subjectivity of the evaluation process. However, the computation is somewhat heavy and complicated. More importantly, the existing method considers only the positive aspect, which may cause an underestimation of the risk. In this study, a new evidential analytic hierarchy process dependence assessment methodology, based on improvements to the existing methods, is proposed and is expected to be easier to apply and more effective.

  19. Evidential analytic hierarchy process dependence assessment methodology in human reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Lu Yuan; Zhou, Xinyi; Xiao, Fuyuan; Deng, Yong [School of Computer and Information Science, Southwest University, Chongqing (China); Mahadevan, Sankaran [School of Engineering, Vanderbilt University, Nashville (United States)

    2017-02-15

    In human reliability analysis, dependence assessment is an important issue in risky, large, complex systems, such as the operation of a nuclear power plant. Many existing methods depend on an expert's judgment, which introduces subjectivity and limits the results. Recently, a computational method based on the Dempster-Shafer evidence theory and the analytic hierarchy process was proposed to handle dependence in human reliability analysis. The model can deal with uncertainty in an analyst's judgment and reduce the subjectivity of the evaluation process. However, the computation is somewhat heavy and complicated. More importantly, the existing method considers only the positive aspect, which may cause an underestimation of the risk. In this study, a new evidential analytic hierarchy process dependence assessment methodology, based on improvements to the existing methods, is proposed and is expected to be easier to apply and more effective.

  20. ASSESSMENT METHODS OF INTERNAL AUDIT

    Directory of Open Access Journals (Sweden)

    Elena RUSE

    2015-12-01

    Full Text Available Internal audit services are more and more needed within economic entities because, on the one hand, they are directly subordinated to the general manager and, on the other hand, there is growing confidence in their recommendations, it being recognized that internal audit is more than a simple compliance check against an established referral system. Our research focuses on evaluating the impact of theory and practice in the application of the internal audit process. The added value brought by the internal audit function to the economic entity is difficult to establish and requires effective ways and criteria of measurement. In this regard, we try to present ways to analyze internal audit's activity by reference to performance indicators or other specific methods. We used as research techniques: literature review, applied research and constructive research.

  1. Energy-dependent applications of the transfer matrix method

    International Nuclear Information System (INIS)

    Oeztunali, O.I.; Aronson, R.

    1975-01-01

    The transfer matrix method is applied to energy-dependent neutron transport problems for multiplying and nonmultiplying media in one-dimensional plane geometry. Experimental cross sections are used for total, elastic, and inelastic scattering and fission. Numerical solutions are presented for the problem of a unit point isotropic source in an infinite medium of water and for the problem of the critical 235U slab with finite water reflectors. No iterations were necessary in this method. Numerical results obtained are consistent with physical considerations and compare favorably with the moments method results for the problem of the unit point isotropic source in an infinite water medium. (U.S.)
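
    As a generic illustration of the transfer matrix idea (a one-dimensional model problem, not the energy-dependent neutron-transport formulation of the paper): each spatial step is a small matrix, and the chained matrix product carries the boundary state across the slab in a single sweep, with no iteration:

```python
import math

def mat_mul(a, b):
    """2x2 matrix product."""
    return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
             a[0][0] * b[0][1] + a[0][1] * b[1][1]],
            [a[1][0] * b[0][0] + a[1][1] * b[1][0],
             a[1][0] * b[0][1] + a[1][1] * b[1][1]]]

def slab_transfer_matrix(k, length, n_steps):
    """Product of per-step transfer matrices for u'' = k*u on [0, length].

    Each step maps the state (u_i, u_{i-1}) to (u_{i+1}, u_i), so the
    chained product propagates the boundary state across the slab in
    one sweep, with no iteration required."""
    h = length / n_steps
    step = [[2.0 + k * h * h, -1.0], [1.0, 0.0]]
    total = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n_steps):
        total = mat_mul(step, total)
    return total

# u'' = u with u(0) = 1, u'(0) = 0 has the exact solution cosh(x)
n, length = 1000, 1.0
h = length / n
t = slab_transfer_matrix(1.0, length, n)
u1, u0 = math.cosh(h), 1.0            # first two grid values seed the state
u_end = t[1][0] * u1 + t[1][1] * u0   # second state component = u at x = length
```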

  2. Improved methods for dependent failure analysis in PSA

    International Nuclear Information System (INIS)

    Ballard, G.M.; Games, A.M.

    1988-01-01

    The basic design principle used in ensuring the safe operation of nuclear power plant is defence in depth. This normally takes the form of redundant equipment and systems which provide protection even if a number of equipment failures occur. Such redundancy is particularly effective in ensuring that multiple, independent equipment failures with the potential for jeopardising reactor safety will be rare events. However, the achievement of high reliability has served to highlight the potentially dominant role of multiple, dependent failures of equipment and systems. Analysis of reactor operating experience has shown that dependent failure events are the major contributors to safety system failures and reactor incidents and accidents. In parallel, PSA studies have shown that the results of a safety analysis are sensitive to assumptions made about the dependent failure (CCF) probability for safety systems. Thus a Westinghouse analysis showed that increasing system dependent failure probabilities by a factor of 5 led to a factor 4 increase in core. This paper particularly refers to the engineering concepts underlying dependent failure assessment, touching briefly on aspects of data. It is specifically not the intent of our work to develop a new mathematical model of CCF but to aid the use of existing models
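
    The kind of sensitivity quoted above is easy to reproduce with the simplest parametric CCF model, the beta-factor model, in which a fraction beta of each component's failure probability is attributed to common cause. All numbers below are illustrative, not from the Westinghouse study:

```python
def system_failure_beta(q_total, beta, n_redundant):
    """Beta-factor common-cause model for an n-fold redundant system.

    q_total:     total failure probability of a single component
    beta:        fraction of component failures that are common cause
    n_redundant: number of redundant trains
    """
    q_ccf = beta * q_total           # defeats all trains at once
    q_indep = (1.0 - beta) * q_total # must occur in every train independently
    return q_ccf + q_indep ** n_redundant

base = system_failure_beta(1e-3, 0.02, 3)
worse = system_failure_beta(1e-3, 0.10, 3)  # beta increased fivefold
# the common-cause term dominates a redundant system, so the system
# failure probability scales almost linearly with beta
print(worse / base)
```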

  3. Enhancing Institutional Assessment Efforts through Qualitative Methods

    Science.gov (United States)

    Van Note Chism, Nancy; Banta, Trudy W.

    2007-01-01

    Qualitative methods can do much to describe context and illuminate the why behind patterns encountered in institutional assessment. Alone, or in combination with quantitative methods, they should be the approach of choice for many of the most important assessment questions. (Contains 1 table.)

  4. A method to assess maritime resilience

    NARCIS (Netherlands)

    Rypkema, J.A.; Beek, F.A. van der; Schraagen, J.M.C.; Winkelman, J.W.; Wijngaarden, M. van

    2015-01-01

    The aim of this study is to develop a multi-level resilience analysis method (RAM) to assess risk and performance variability in current maritime socio-technical systems (STSs). The method integrates Hollnagel’s four resilience abilities to assess a system’s ability to effectively cope with

  5. Development on Vulnerability Assessment Methods of PPS

    Institute of Scientific and Technical Information of China (English)

    MIAO Qiang; ZHANG Wen-liang; BU Li-xin; YIN Hong-he; LI Xin-jun; FANG Xin

    2013-01-01

    Drawing on information from domestic and foreign sources, combined with domestic assessment experience, we present a set of physical protection system (PPS) vulnerability assessment methods for operating nuclear power plants and for nuclear facilities under design. The methods will help to strengthen and upgrade the security measures of the nuclear facilities, improve the effectiveness and

  6. Analysis of Feature Extraction Methods for Speaker Dependent Speech Recognition

    Directory of Open Access Journals (Sweden)

    Gurpreet Kaur

    2017-02-01

    Full Text Available Speech recognition is about what is being said, irrespective of who is saying it. Speech recognition is a growing field, and major progress is taking place in the technology of automatic speech recognition (ASR). Still, there are many barriers in this field in terms of recognition rate, background noise, speaker variability, speaking rate, accent, etc. The speech recognition rate mainly depends on the selection of features and feature extraction methods. This paper outlines the feature extraction techniques for speaker dependent speech recognition of isolated words. A brief survey of different feature extraction techniques, such as Mel-Frequency Cepstral Coefficients (MFCC), Linear Predictive Coding Coefficients (LPCC), Perceptual Linear Prediction (PLP), and Relative Spectra Perceptual Linear Predictive (RASTA-PLP) analysis, is presented and evaluated. Speech recognition has various applications from daily use to commercial use. We have built a speaker dependent system, and this system can be useful in many areas, such as controlling a patient's vehicle using simple commands.
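
    All of the surveyed front ends (MFCC, LPCC, PLP, RASTA-PLP) begin by splitting speech into short, overlapping, windowed frames. A minimal sketch of that shared first step follows; the frame and hop sizes are the common 25 ms / 10 ms choice at 16 kHz, used here purely for illustration:

```python
import math

def frame_signal(signal, frame_len=400, hop=160):
    """Split a signal into overlapping, Hamming-windowed frames: the
    common first step of MFCC/LPCC/PLP feature extraction.

    With 16 kHz audio, the defaults give 25 ms frames with a 10 ms hop.
    """
    window = [0.54 - 0.46 * math.cos(2 * math.pi * i / (frame_len - 1))
              for i in range(frame_len)]
    return [[signal[start + i] * window[i] for i in range(frame_len)]
            for start in range(0, len(signal) - frame_len + 1, hop)]

# one second of a 440 Hz tone sampled at 16 kHz
one_second = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
frames = frame_signal(one_second)
print(len(frames), len(frames[0]))  # 98 400
```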

  7. Assessment of the Tobacco Dependence Screener Among Smokeless Tobacco Users.

    Science.gov (United States)

    Mushtaq, Nasir; Beebe, Laura A

    2016-05-01

    Variants of the Fagerström Tolerance Questionnaire and Fagerström Test for Nicotine Dependence (FTND) are widely used to study dependence among smokeless tobacco (ST) users. However, there is a need for a dependence measure which is based on the clinical definition of dependence and is easy to administer. The Tobacco Dependence Screener (TDS), a self-administered 10-item scale, is based on the Diagnostic and Statistical Manual, fourth edition (DSM-IV) and ICD-10 definitions of dependence. It is commonly used as a tobacco dependence screening tool in cigarette smoking studies, but it has not been evaluated for dependence in ST users. The purpose of this study is to evaluate the TDS as a measure of tobacco dependence among ST users. Data collected from a community-based sample of exclusive ST users living in Oklahoma (n = 95) were used for this study. The TDS was adapted for ST dependence by changing the references to smoking to ST use. Concurrent validity and reliability of the TDS were evaluated. Salivary cotinine concentration was used as a criterion variable. Overall accuracy of the TDS was assessed by the receiver operating characteristic (ROC) curve, and optimal cutoff scores for dependence diagnosis were evaluated. There was no floor or ceiling effect in the TDS score (mean = 5.42, SD = 2.61). Concurrent validity of the TDS, evaluated by comparing it with the FTND-ST, was affirmative. Study findings showed a significant association between TDS and salivary cotinine concentration. The internal consistency assessed by Cronbach's alpha indicated that the TDS had acceptable reliability (α = 0.765). TDS was negatively correlated with time to first chew/dip and positively correlated with frequency (number of chews per day) and years of ST use. Results of logistic regression analysis showed that at an optimal cutoff score of TDS 5+, ST users classified as dependent had significantly higher cotinine concentration and FTND-ST scores. TDS demonstrated acceptable reliability and
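
    The reliability figure reported above (α = 0.765) is Cronbach's alpha, which can be computed directly from the item-by-respondent score matrix. A pure-Python sketch with a tiny invented data set (not the study's data):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for rows of per-respondent item scores."""
    n_items = len(item_scores[0])

    def variance(values):  # sample variance
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / (len(values) - 1)

    item_vars = [variance([row[i] for row in item_scores]) for i in range(n_items)]
    total_var = variance([sum(row) for row in item_scores])
    return n_items / (n_items - 1) * (1.0 - sum(item_vars) / total_var)

# four respondents answering three yes/no items (toy data)
scores = [[1, 1, 1], [1, 1, 0], [0, 0, 1], [0, 0, 0]]
print(round(cronbach_alpha(scores), 2))  # 0.6
```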

  8. Assessment methods for the evaluation of vitiligo.

    Science.gov (United States)

    Alghamdi, K M; Kumar, A; Taïeb, A; Ezzedine, K

    2012-12-01

    There is no standardized method for assessing vitiligo. In this article, we review the literature from 1981 to 2011 on different vitiligo assessment methods. We aim to classify the techniques available for vitiligo assessment as subjective, semi-objective or objective; microscopic or macroscopic; and as based on morphometry or colorimetry. Macroscopic morphological measurements include visual assessment, photography in natural or ultraviolet light, photography with computerized image analysis and tristimulus colorimetry or spectrophotometry. Non-invasive micromorphological methods include confocal laser microscopy (CLM). Subjective methods include clinical evaluation by a dermatologist and a vitiligo disease activity score. Semi-objective methods include the Vitiligo Area Scoring Index (VASI) and point-counting methods. Objective methods include software-based image analysis, tristimulus colorimetry, spectrophotometry and CLM. Morphometry is the measurement of the vitiliginous surface area, whereas colorimetry quantitatively analyses skin colour changes caused by erythema or pigment. Most methods involve morphometry, except for the chromameter method, which assesses colorimetry. Some image analysis software programs can assess both morphometry and colorimetry. The details of these programs (Corel Draw, Image Pro Plus, AutoCad and Photoshop) are discussed in the review. Reflectance confocal microscopy provides real-time images and has great potential for the non-invasive assessment of pigmentary lesions. In conclusion, there is no single best method for assessing vitiligo. This review revealed that VASI, the rule of nine and Wood's lamp are likely to be the best techniques available for assessing the degree of pigmentary lesions and measuring the extent and progression of vitiligo in the clinic and in clinical trials. © 2012 The Authors. Journal of the European Academy of Dermatology and Venereology © 2012 European Academy of Dermatology and Venereology.
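
    Of the semi-objective methods, VASI has a simple arithmetic core: each body region contributes its affected area, measured in hand units, multiplied by the fraction of depigmentation within that region, and the contributions are summed. A sketch with an invented patient (region values are hypothetical):

```python
def vasi(regions):
    """Vitiligo Area Scoring Index: sum over body regions of the affected
    area (in hand units) times the fraction of depigmentation there."""
    return sum(hand_units * depigmentation
               for hand_units, depigmentation in regions)

# hypothetical patient with three affected regions:
# (area in hand units, depigmentation fraction)
score = vasi([(2.0, 0.75), (1.5, 0.25), (0.5, 1.0)])
print(score)  # 2.375
```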

  9. Thevenin Equivalent Method for Dynamic Contingency Assessment

    DEFF Research Database (Denmark)

    Møller, Jakob Glarbo; Jóhannsson, Hjörtur; Østergaard, Jacob

    2015-01-01

    A method that exploits Thevenin equivalent representation for obtaining post-contingency steady-state nodal voltages is integrated with a method of detecting post-contingency aperiodic small-signal instability. The task of integrating stability assessment with contingency assessment is challenged by the cases of unstable post-contingency conditions. For unstable post-contingency conditions there exists no credible steady state which can be used as the basis of a stability assessment. This paper demonstrates how Thevenin equivalent methods can be applied in algebraic representation of such bifurcation points, which may be used in assessment of post-contingency aperiodic small-signal stability. The assessment method is introduced with a numeric example.
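
    The bifurcation point mentioned above can be seen in the simplest possible Thevenin sketch: a constant-power load fed from a Thevenin source, reduced here to a DC resistive model for illustration, has a steady-state solution only up to P_max = E²/(4R). All numbers are invented:

```python
import math

def load_voltage(e_th, r_th, p_load):
    """Voltage at a constant-power load fed from a Thevenin source (DC sketch).

    Solves V * (E - V) / R = P. Real solutions exist only up to the
    bifurcation point P_max = E^2 / (4R); beyond it no post-contingency
    steady state exists.
    """
    disc = e_th ** 2 - 4.0 * r_th * p_load
    if disc < 0:
        return None  # past the bifurcation point: no equilibrium
    return (e_th + math.sqrt(disc)) / 2.0  # high-voltage (stable) solution

print(load_voltage(1.0, 0.1, 2.0))  # ~0.724 p.u. (P_max here is 2.5)
print(load_voltage(1.0, 0.1, 3.0))  # None: the contingency leaves no steady state
```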

  10. Dependability modeling and assessment in UML-based software development.

    Science.gov (United States)

    Bernardi, Simona; Merseguer, José; Petriu, Dorina C

    2012-01-01

    Assessment of software nonfunctional properties (NFP) is an important problem in software development. In the context of model-driven development, an emerging approach for the analysis of different NFPs consists of the following steps: (a) to extend the software models with annotations describing the NFP of interest; (b) to transform automatically the annotated software model to the formalism chosen for NFP analysis; (c) to analyze the formal model using existing solvers; (d) to assess the software based on the results and give feedback to designers. Such a modeling→analysis→assessment approach can be applied to any software modeling language, be it general purpose or domain specific. In this paper, we focus on UML-based development and on the dependability NFP, which encompasses reliability, availability, safety, integrity, and maintainability. The paper presents the profile used to extend UML with dependability information, the model transformation to generate a DSPN formal model, and the assessment of the system properties based on the DSPN results.

  11. Assessment of tobacco dependence curricula in Italian dental hygiene schools.

    Science.gov (United States)

    Pizzo, Giuseppe; Davis, Joan M; Licata, Maria E; Giuliana, Giovanna

    2013-08-01

    The aim of this study was to assess the level of tobacco dependence education offered by Italian dental hygiene programs. A fifty-question survey was mailed to the thirty-one active public and private dental hygiene programs in Italy during the 2008-09 academic year. The survey assessed faculty confidence in teaching tobacco treatment, which courses contained tobacco dependence content, the number of minutes spent on specific content areas, and the level of clinical competence that dental hygiene graduates should be able to demonstrate. Surveys were returned by sixteen programs for a response rate of 52 percent. Respondents indicated tobacco dependence education was included in clinic or clinic seminar (56 percent), periodontics (44 percent), oral pathology (31 percent), and prevention (19 percent). All programs reported including the effects of tobacco on general and oral diseases in courses. However, more in-depth topics received less curriculum time; these included tobacco treatment strategies (63 percent) and discussion of cessation medications (31 percent). Interestingly, 62 percent of the respondents indicated they expected dental hygiene graduates to demonstrate a tobacco treatment competency level of a moderate intervention or higher (counseling, discussion of medications, follow-up) rather than a brief intervention in which patients are advised to quit then referred to a quitline. The results of this study indicated that Italian dental hygiene students are not currently receiving adequate instruction in tobacco treatment techniques nor are they being adequately assessed. This unique overview of Italian dental hygiene tobacco dependence education provides a basis for further discussion towards a national competency-based curriculum.

  12. Comparing DIF methods for data with dual dependency

    Directory of Open Access Journals (Sweden)

    Ying Jin

    2016-09-01

    Full Text Available Background: The current study compared four differential item functioning (DIF) methods in a simulation study to examine their performance in accounting for dual dependency (i.e., person and item clustering effects simultaneously), a question not sufficiently studied in the current DIF literature. The four methods compared are logistic regression accounting for neither person nor item clustering effects, hierarchical logistic regression accounting for the person clustering effect, the testlet model accounting for the item clustering effect, and the multilevel testlet model accounting for both person and item clustering effects. A secondary goal of the current study was to evaluate the trade-off between simple models and complex models for the accuracy of DIF detection. An empirical example analyzing the 2011 TIMSS Mathematics data was also included to demonstrate the differential performance of the four DIF methods. A number of DIF analyses have been done on the TIMSS data, and these analyses have rarely accounted for the dual dependence of the data. Results: Results indicated the complex models did not outperform simple models under certain conditions, especially when DIF parameters were considered in addition to significance tests. Conclusions: Results of the current study could provide supporting evidence for applied researchers in selecting the appropriate DIF methods under various conditions.
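
    The first of the four methods, ordinary logistic regression ignoring both clustering effects, can be sketched in pure Python: regress the item response on an ability measure and a group indicator, and read DIF off the group coefficient. The simulated data, effect size, and fitting settings below are invented for illustration:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_logistic(xs, ys, lr=1.0, iters=400):
    """Logistic regression fitted by plain batch gradient ascent."""
    b0, b = 0.0, [0.0] * len(xs[0])
    for _ in range(iters):
        g0, g = 0.0, [0.0] * len(b)
        for x, y in zip(xs, ys):
            err = y - sigmoid(b0 + sum(bj * xj for bj, xj in zip(b, x)))
            g0 += err
            for j, xj in enumerate(x):
                g[j] += err * xj
        b0 += lr * g0 / len(xs)
        b = [bj + lr * gj / len(xs) for bj, gj in zip(b, g)]
    return b0, b

# simulate an item that is one logit easier for group 1 at equal ability
rng = random.Random(7)
xs, ys = [], []
for _ in range(800):
    ability = rng.choice([-1.0, 0.0, 1.0])
    group = rng.choice([0.0, 1.0])
    xs.append([ability, group])
    ys.append(1 if rng.random() < sigmoid(ability + 1.0 * group) else 0)

_, (b_ability, b_group) = fit_logistic(xs, ys)
# a group coefficient well away from zero, after conditioning on ability, flags DIF
```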

  13. Correlation model to analyze dependent failures for probabilistic risk assessment

    International Nuclear Information System (INIS)

    Dezfuli, H.

    1985-01-01

    A methodology is formulated to study the dependent (correlated) failures of various abnormal events in nuclear power plants. This methodology uses correlation analysis as a means for predicting and quantifying dependent failures. Appropriate techniques are also developed to incorporate dependent failures in quantifying fault trees and accident sequences. The uncertainty associated with each estimation in all of the developed techniques is addressed and quantified. To identify the relative importance of the degree of dependency (correlation) among events and to incorporate these dependencies in the quantification phase of PRA, the interdependency between a pair of events is expressed with the aid of the correlation coefficient. For the purpose of demonstrating the methodology, the data base used in the Accident Sequence Precursor Study (ASP) was adopted and simulated to obtain distributions for the correlation coefficients. A computer program entitled Correlation Coefficient Generator (CCG) was developed to generate a distribution for each correlation coefficient. The bootstrap technique was employed in the CCG computer code to determine confidence limits of the estimated correlation coefficients. A second computer program designated CORRELATE was also developed to obtain probability intervals for both fault trees and accident sequences with statistically correlated failure data
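
    The bootstrap step used in the CCG code can be sketched generically: resample pairs with replacement, recompute the correlation coefficient, and take percentiles as confidence limits. The data and bootstrap settings below are invented for illustration, not drawn from the ASP data base:

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def bootstrap_ci(xs, ys, n_boot=2000, alpha=0.10, seed=1):
    """Percentile-bootstrap confidence interval for the correlation."""
    rng = random.Random(seed)
    n, stats = len(xs), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        stats.append(pearson([xs[i] for i in idx], [ys[i] for i in idx]))
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

x_data = [float(i) for i in range(20)]
y_data = [2.0 * x + (-1.0) ** i for i, x in enumerate(x_data)]
lo, hi = bootstrap_ci(x_data, y_data)
# a 90 % percentile interval for a strongly correlated pair sits near 1
```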

  14. Risk assessment theory, methods, and applications

    CERN Document Server

    Rausand, Marvin

    2011-01-01

    With its balanced coverage of theory and applications along with standards and regulations, Risk Assessment: Theory, Methods, and Applications serves as a comprehensive introduction to the topic. The book serves as a practical guide to current risk analysis and risk assessment, emphasizing the possibility of sudden, major accidents across various areas of practice from machinery and manufacturing processes to nuclear power plants and transportation systems. The author applies a uniform framework to the discussion of each method, setting forth clear objectives and descriptions, while also shedding light on applications, essential resources, and advantages and disadvantages. Following an introduction that provides an overview of risk assessment, the book is organized into two sections that outline key theory, methods, and applications. * Introduction to Risk Assessment defines key concepts and details the steps of a thorough risk assessment along with the necessary quantitative risk measures. Chapters outline...

  15. USING COPULAS TO MODEL DEPENDENCE IN SIMULATION RISK ASSESSMENT

    Energy Technology Data Exchange (ETDEWEB)

    Dana L. Kelly

    2007-11-01

    Typical engineering systems in applications with high failure consequences such as nuclear reactor plants often employ redundancy and diversity of equipment in an effort to lower the probability of failure and therefore risk. However, it has long been recognized that dependencies exist in these redundant and diverse systems. Some dependencies, such as common sources of electrical power, are typically captured in the logic structure of the risk model. Others, usually referred to as intercomponent dependencies, are treated implicitly by introducing one or more statistical parameters into the model. Such common-cause failure models have limitations in a simulation environment. In addition, substantial subjectivity is associated with parameter estimation for these models. This paper describes an approach in which system performance is simulated by drawing samples from the joint distributions of dependent variables. The approach relies on the notion of a copula distribution, a notion which has been employed by the actuarial community for ten years or more, but which has seen only limited application in technological risk assessment. The paper also illustrates how equipment failure data can be used in a Bayesian framework to estimate the parameter values in the copula model. This approach avoids much of the subjectivity required to estimate parameters in traditional common-cause failure models. Simulation examples are presented for failures in time. The open-source software package R is used to perform the simulations. The open-source software package WinBUGS is used to perform the Bayesian inference via Markov chain Monte Carlo sampling.
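
    The copula sampling idea can be sketched as follows (a Gaussian copula with an assumed correlation and exponential failure-time marginals, chosen purely for illustration; the paper's own examples use R and WinBUGS):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rho = 0.7    # assumed copula correlation between two redundant components
lam = 1e-3   # assumed exponential failure rate (per hour) of each component

# Gaussian copula: correlated standard normals -> dependent uniforms,
# then invert the exponential CDF to get dependent failure times.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=100_000)
u = stats.norm.cdf(z)
t = -np.log(1.0 - u) / lam   # inverse exponential CDF

print(t.mean(axis=0))           # both marginal means stay near 1/lam = 1000
print(np.corrcoef(t.T)[0, 1])   # the joint samples carry positive dependence
```

    The marginals remain exponential while the copula alone carries the inter-component dependence, which is exactly the separation the approach exploits.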

  16. ASSESSMENT OF THE EFFICIENCY OF DISINFECTION METHOD ...

    African Journals Online (AJOL)

    eobe

    ABSTRACT. The efficiencies of three disinfection methods namely boiling, water guard and pur purifier were assessed. ... Water is an indispensable resource for supporting life systems [2- ...... developing country context: improving decisions.

  17. Scientific method, adversarial system, and technology assessment

    Science.gov (United States)

    Mayo, L. H.

    1975-01-01

    A basic framework is provided for the consideration of the purposes and techniques of scientific method and adversarial systems. Similarities and differences in these two techniques of inquiry are considered with reference to their relevance in the performance of assessments.

  18. Assessment of procurement methods used for executing ...

    African Journals Online (AJOL)

    Assessment of procurement methods used for executing maintenance works in Lagos state. ... Ethiopian Journal of Environmental Studies and Management ... Others are risk allocation, price competition and flexibility of contract. Finally, better ...

  19. Considerations on assessment of different time depending models adequacy

    International Nuclear Information System (INIS)

    Constantinescu, C.

    2015-01-01

    The operating period of nuclear power plants can be prolonged if it can be shown that their safety has remained at a high level, and for this it is necessary to estimate how ageing systems, structures and components (SSCs) influence NPP reliability and safety. To emphasize the ageing aspects, the case study presented in this paper assesses different time-dependent models for the rate of occurrence of failures with the goal of obtaining the best-fitting model. A sensitivity analysis of the impact of burn-in failures was performed to improve the result of the goodness-of-fit test. Based on the analysis results, a conclusion about the existence or absence of an ageing trend could be drawn. A sensitivity analysis regarding the reliability parameters was performed, and the results were used to observe the impact on the time-dependent rate of occurrence of failures. (authors)

  20. Personality, Assessment Methods and Academic Performance

    Science.gov (United States)

    Furnham, Adrian; Nuygards, Sarah; Chamorro-Premuzic, Tomas

    2013-01-01

    This study examines the relationship between personality and two different academic performance (AP) assessment methods, namely exams and coursework. It aimed to examine whether the relationship between traits and AP was consistent across self-reported versus documented exam results, two different assessment techniques and across different…

  1. The STIG : A new SDI assessment method

    NARCIS (Netherlands)

    Nushi, B.; Van Loenen, B.; Crompvoets, J.

    2015-01-01

    To stimulate the Spatial Data Infrastructures (SDI) development effectively and efficiently, it is key to assess the progress and benefits of the SDI. Currently, several SDI assessment methods exist. However, these are still in an infant stage and none of these appear to meet the requirements of

  2. Formal Method of Description Supporting Portfolio Assessment

    Science.gov (United States)

    Morimoto, Yasuhiko; Ueno, Maomi; Kikukawa, Isao; Yokoyama, Setsuo; Miyadera, Youzou

    2006-01-01

    Teachers need to assess learner portfolios in the field of education. However, they need support in the process of designing and practicing what kind of portfolios are to be assessed. To solve the problem, a formal method of describing the relations between the lesson forms and portfolios that need to be collected and the relations between…

  3. Assessment of early bronchiectasis in young children with cystic fibrosis is dependent on lung volume

    NARCIS (Netherlands)

    M.P.L. Bard (Martin); K. Graniel (Karla); J. Park (Judy); N.H. de Klerk (Nicholas); P.D. Sly; C.P. Murray (Conor); H.A.W.M. Tiddens (Harm); S. Stick

    2013-01-01

    textabstractObjective: The aim of this study was to determine whether assessment of early CT scan-detected bronchiectasis in young children with cystic fibrosis (CF) depends on lung volume. Methods: This study, approved by the hospital ethics committee, included 40 young children with CF from a

  4. A particle method for history-dependent materials

    Energy Technology Data Exchange (ETDEWEB)

    Sulsky, D.; Chen, Z.; Schreyer, H.L. [New Mexico Univ., Albuquerque, NM (United States)

    1993-06-01

    A broad class of engineering problems including penetration, impact and large rotations of solid bodies causes severe numerical problems. For these problems, the constitutive equations are history dependent so material points must be followed; this is difficult to implement in an Eulerian scheme. On the other hand, purely Lagrangian methods typically result in severe mesh distortion and the consequence is ill conditioning of the element stiffness matrix leading to mesh lockup or entanglement. Remeshing prevents the lockup and tangling but then interpolation must be performed for history dependent variables, a process which can introduce errors. Proposed here is an extension of the particle-in-cell method in which particles are interpreted to be material points that are followed through the complete loading process. A fixed Eulerian grid provides the means for determining a spatial gradient. Because the grid can also be interpreted as an updated Lagrangian frame, the usual convection term in the acceleration associated with Eulerian formulations does not appear. With the use of maps between material points and the grid, the advantages of both Eulerian and Lagrangian schemes are utilized so that mesh tangling is avoided while material variables are tracked through the complete deformation history. Example solutions in two dimensions are given to illustrate the robustness of the proposed convection algorithm and to show that typical elastic behavior can be reproduced. Also, it is shown that impact with no slip is handled without any special algorithm for bodies governed by elasticity and strain hardening plasticity.

  5. Dependability validation by means of fault injection: method, implementation, application

    International Nuclear Information System (INIS)

    Arlat, Jean

    1990-01-01

    This dissertation presents theoretical and practical results concerning the use of fault injection as a means for testing fault tolerance in the framework of the experimental dependability validation of computer systems. The dissertation first presents the state of the art of published work on fault injection, encompassing both hardware (fault simulation, physical fault injection) and software (mutation testing) issues. Next, the major attributes of fault injection (faults and their activation, experimental readouts and measures) are characterized taking into account: i) the abstraction levels used to represent the system during the various phases of its development (analytical, empirical and physical models), and ii) the validation objectives (verification and evaluation). An evaluation method is subsequently proposed that combines the analytical modeling approaches (Monte Carlo simulations, closed-form expressions, Markov chains) used for the representation of the fault occurrence process with the experimental fault injection approaches (fault simulation and physical injection) characterizing the error processing and fault treatment provided by the fault tolerance mechanisms. An experimental tool, MESSALINE, is then defined and presented. This tool enables physical faults to be injected into a hardware and software prototype of the system to be validated. Finally, the application of MESSALINE to testing two fault-tolerant systems possessing very dissimilar features, and the use of the experimental results obtained both as design feedback and for the evaluation of dependability measures, illustrate the relevance of the method. (author) [fr

  6. Ground assessment methods for nuclear power plant

    International Nuclear Information System (INIS)

    1985-01-01

    It goes without saying that a nuclear power plant must be constructed on the most stable and safe ground, and a reliable assessment method is required for this purpose. The Ground Integrity Sub-committee of the Committee of Civil Engineering of Nuclear Power Plant started five working groups whose purpose is to systematize the assessment procedures, including geological survey, ground examination and construction design. The tasks of the working groups are to establish a method for assessing fault activity, standardize the rock classification method, standardize the assessment and presentation of ground properties, standardize test methods, and establish the application standard for design and construction. Flow diagrams were established for the procedures of geological survey and for the investigation of fault activity and ground properties of areas where the nuclear reactor and important outdoor equipment are scheduled to be constructed. Further flow diagrams were also established for applying the investigation results to plant design and construction and for determining the liquefaction behaviour of the ground. These systematized and standardized investigation methods are expected to yield reliable data for the assessment of nuclear power plant construction sites and to lead to safe construction and operation in the future. In addition, the execution of such systematized and detailed preliminary investigations for determining the construction site of a nuclear power plant will contribute much to obtaining nationwide understanding of and confidence in the project. (Ishimitsu, A.)

  7. A copula method for modeling directional dependence of genes

    Directory of Open Access Journals (Sweden)

    Park Changyi

    2008-05-01

    Full Text Available Abstract Background Genes interact with each other as basic building blocks of life, forming a complicated network. The relationship between groups of genes with different functions can be represented as gene networks. With the deposition of huge microarray data sets in public domains, studies of gene networking are now possible. In recent years, there has been increasing interest in the reconstruction of gene networks from gene expression data. Recent work includes linear models, Boolean network models, and Bayesian networks. Among them, Bayesian networks seem to be the most effective in constructing gene networks. A major problem with the Bayesian network approach is the excessive computational time. This problem is due to the interactive feature of the method, which requires a large search space. Since fitting a model using copulas does not require iterations, elicitation of priors, or complicated calculations of posterior distributions, the need to traverse extensive search spaces can be eliminated, leading to manageable computational efforts. The Bayesian network approach produces a discrete representation of conditional probabilities. Discreteness of the characteristics is not required in the copula approach, which uses a uniform representation of the continuous random variables. Our method is able to overcome the limitation of the Bayesian network method for gene-gene interaction, i.e. information loss due to binary transformation. Results We analyzed the gene interactions for two gene data sets (one group of eight histone genes and another group of 19 genes which include DNA polymerases, DNA helicase, type B cyclin genes, DNA primases, radiation-sensitive genes, repair-related genes, a replication protein A encoding gene, a DNA replication initiation factor, a securin gene, a nucleosome assembly factor, and a subunit of the cohesin complex) by adopting a measure of directional dependence based on a copula function. 
We have compared
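
    A rough sketch of a copula-style directional dependence measure on the rank (copula) scale is given below (a binned estimator of ρ²(U→V) = 12·Var(E[V|U]), with synthetic data; this is an illustration of the idea, not the authors' exact estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

def to_uniform(x):
    """Rank-transform a sample to pseudo-observations on (0, 1)."""
    n = len(x)
    return (np.argsort(np.argsort(x)) + 0.5) / n

def directional_dependence(x, y, bins=20):
    """Binned estimate of rho^2_{U->V} = 12 * Var(E[V | U]) on the copula scale."""
    u, v = to_uniform(x), to_uniform(y)
    which = np.clip((u * bins).astype(int), 0, bins - 1)
    cond_means = np.array([v[which == b].mean() for b in range(bins)])
    return 12.0 * np.var(cond_means)

# A nonlinear "regulator -> target" pair: the x -> y direction is strong,
# while x is barely predictable from y (two x values map to each y).
x = rng.normal(size=2000)
y = x ** 2 + rng.normal(scale=0.3, size=2000)
d_xy = directional_dependence(x, y)
d_yx = directional_dependence(y, x)
print(round(d_xy, 2), round(d_yx, 2))
```

    The asymmetry between the two directions is what lets a copula-based measure suggest an orientation for an edge in the gene network.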

  8. Personnel β-dosimetry method for reducing energy dependence

    International Nuclear Information System (INIS)

    Gesell, T.F.; Jones, D.E.; Gupta, V.P.; Kalbeitzer, F.L.; Cusimano, J.P.

    1979-03-01

    Current practices for the measurement of skin dose are reviewed and found to be inadequate. The current INEL dosimeter was examined for systematic and random error. Systematic error (i.e., variation with energy) was found to range over a factor of 10, while the random error was reasonably small for the larger β/γ ratios. Other designs with thicker windows, as is more common, would show even larger systematic errors. Various methods for improving beta dosimetry were reviewed. A new dosimeter design utilizing three chips, each having a window of different thickness, was proposed. According to the calculations, this dosimeter should markedly reduce systematic error but will introduce somewhat more random error. Preliminary measurements were carried out relating to angular dependence and charged-particle equilibrium. The proposed dosimeter design was tested with betas from a uranium slab. The average of the seven results was in excellent agreement with the known dose rate, and the standard deviation of the results was 16%.

  9. The time-dependent density matrix renormalisation group method

    Science.gov (United States)

    Ma, Haibo; Luo, Zhen; Yao, Yao

    2018-04-01

    Substantial progress of the time-dependent density matrix renormalisation group (t-DMRG) method over the past 15 years is reviewed in this paper. By integrating the time evolution with the sweep procedures in the density matrix renormalisation group (DMRG), t-DMRG provides an efficient tool for real-time simulations of the quantum dynamics of one-dimensional (1D) or quasi-1D strongly correlated systems with a large number of degrees of freedom. In the illustrative applications, the t-DMRG approach is applied to investigate nonadiabatic processes in realistic chemical systems, including exciton dissociation and triplet fission in polymers and molecular aggregates as well as internal conversion in the pyrazine molecule.

  10. Digital integrated protection system: Quantitative methods for dependability evaluation

    International Nuclear Information System (INIS)

    Krotoff, H.; Benski, C.

    1986-01-01

    The inclusion of programmed digital techniques in the SPIN system provides the user with the capability of performing sophisticated processing operations. However, it makes the quantitative evaluation of the overall failure probabilities somewhat more intricate, because: a single component may be involved in several functions; self-tests may readily be incorporated for the purpose of monitoring the dependable operation of the equipment at all times. This paper describes the methods implemented by MERLIN GERIN for the purpose of evaluating: the probabilities that the protective actions are not initiated (dangerous failures); the probabilities that such protective actions are initiated accidentally. Although the communication is focused on the programmed portion of the SPIN (UAIP), it will also deal with evaluations performed within the scope of studies that do not exclusively cover the UAIPs

  11. Personality Assessment Inventory scale characteristics and factor structure in the assessment of alcohol dependency.

    Science.gov (United States)

    Schinka, J A

    1995-02-01

    Individual scale characteristics and the inventory structure of the Personality Assessment Inventory (PAI; Morey, 1991) were examined by conducting internal consistency and factor analyses of item and scale score data from a large group (N = 301) of alcohol-dependent patients. Alpha coefficients, mean inter-item correlations, and corrected item-total scale correlations for the sample paralleled values reported by Morey for a large clinical sample. Minor differences in the scale factor structure of the inventory from Morey's clinical sample were found. Overall, the findings support the use of the PAI in the assessment of personality and psychopathology of alcohol-dependent patients.

  12. Clinical management methods for out-patients with alcohol dependence

    Directory of Open Access Journals (Sweden)

    Boulze Isabelle

    2006-02-01

    Full Text Available Abstract Background In France, outpatient centres for the care of alcoholics are healthcare establishments providing medical, psychological and social support. Although they meet the practical needs of these patients, their degree of use in each of these domains and the respective mobilisation of different skills by the care team are not well understood. Our aim was therefore to determine in detail the management involved as a function of the severity of alcohol dependence. For this purpose, all the procedures involved were compiled in a thesaurus describing each procedure's type (psychological, medical, social, reception), its scheduled or unscheduled nature, its method (face-to-face, telephone, letter) and its duration. The severity of dependence was evaluated using the Addiction Severity Index (ASI). Results 45 patients were included and followed up during 291 ± 114 days. The mean initial ASI scores (± SD) were: medical (M = 0.39 ± 0.3), working-income (ER = 0.5 ± 0.3), alcohol (A = 0.51 ± 0.2), illicit drugs (D = 0.07 ± 0.08), legal (L = 0.06 ± 0.13), familial and social environment (FS = 0.34 ± 0.26), psychological (P = 0.39 ± 0.22). The total number of procedures was 1341 (29.8 per patient), corresponding to 754.4 hours (16.7 per patient). The intensity of management peaked during the first month of treatment and then declined rapidly; the maximum incidence of abstinence was observed during the 3rd month of management. Interviews with patients, group therapy and staff meetings represented 68.7%, 9.9% and 13.9% of all procedures, respectively. In patients with severe dependence, as compared to moderate dependence, management was twice as intense in the psychological and social domains, but not in the medical domain. The ASI questionnaire was completed a second time by 24 patients, after an average of 3.2 months. The improvement was significant in the M, A, D and P domains only. Conclusion This study provided an overview of the methods employed in managing a sample of

  13. Approaches and methods of risk assessment

    International Nuclear Information System (INIS)

    Rowe, W.D.

    1983-01-01

    The classification system of risk assessment includes the categories: 1) risk comparisons, 2) cost-effectiveness of risk reduction, 3) balancing of costs, risks and benefits against one another, and 4) metasystems. An overview of methods and systems reveals that no single method can be applied to all cases and situations. The visibility of the process and the complete consideration of all aspects of judgement are, however, of foremost importance. (DG) [de

  14. Methods for assessing Phytophthora ramorum chlamydospore germination

    Science.gov (United States)

    Joyce Eberhart; Elilzabeth Stamm; Jennifer Parke

    2013-01-01

    Germination of chlamydospores is difficult to accurately assess when chlamydospores are attached to remnants of supporting hyphae. We developed two approaches for closely observing and rigorously quantifying the frequency of chlamydospore germination in vitro. The plate marking and scanning method was useful for quantifying germination of large...

  15. Morphology Dependent Assessment of Resilience for Urban Areas

    Directory of Open Access Journals (Sweden)

    Kai Fischer

    2018-05-01

    Full Text Available The formation of new threats and the increasing complexity of urban built infrastructures underline the need for more robust and sustainable systems which are able to cope with adverse events. Achieving sustainability requires the strengthening of resilience. Currently, a comprehensive approach for the quantification of the resilience of urban infrastructure is missing. Within this paper, a new generalized mathematical framework is presented. A clear definition of terms and their interaction builds the basis of this resilience assessment scheme. Classical risk-based as well as additional components are aligned along the timeline before, during and after disruptive events to quantify the susceptibility, the vulnerability and the response and recovery behavior of complex systems for multiple threat scenarios. The approach allows the evaluation of complete urban surroundings and enables a quantitative comparison with other development plans or cities. A comprehensive resilience framework should cover at least preparation, prevention, protection, response and recovery. The presented approach determines the respective indicators and provides decision support on which enhancement measures are more effective. Hence, the framework quantifies, for instance, whether it is better to avoid a hazardous event or to tolerate an event with increased robustness. An application example is given to assess different urban forms, i.e., morphologies, with consideration of multiple adverse events, like terrorist attacks or earthquakes, and multiple buildings. Each urban object includes a certain number of attributes, like the object use, the construction type, the time-dependent number of persons and the value, to derive different performance targets. The assessment results in the identification of weak spots with respect to single resilience indicators. Based on the generalized mathematical formulation and suitable combination of indicators, this approach can quantify the

  16. Clinical experimental stress studies: methods and assessment.

    Science.gov (United States)

    Bali, Anjana; Jaggi, Amteshwar Singh

    2015-01-01

    Stress is a state of threatened homeostasis during which a variety of adaptive processes are activated to produce physiological and behavioral changes. Stress induction methods are pivotal for understanding these physiological or pathophysiological changes in the body in response to stress. Furthermore, these methods are also important for the development of novel pharmacological agents for stress management. The well-described methods to induce stress in humans include the cold pressor test, Trier Social Stress Test, Montreal Imaging Stress Task, Maastricht Acute Stress Test, CO2 challenge test, Stroop test, Paced Auditory Serial Addition Task, noise stress, and Mannheim Multicomponent Stress Test. Stress assessment in humans is done by measuring biochemical markers such as cortisol, cortisol awakening response, dexamethasone suppression test, salivary α-amylase, plasma/urinary norepinephrine, norepinephrine spillover rate, and interleukins. Physiological and behavioral changes such as galvanic skin response, heart rate variability, pupil size, and muscle and/or skin sympathetic nerve activity (microneurography) and cardiovascular parameters such as heart rate, blood pressure, and self-reported anxiety are also monitored to assess stress response. The present review describes these commonly employed methods to induce stress in humans along with stress assessment methods.

  17. The dispersal of contaminants in heterogeneous aquifers: a review of methods of estimating scale dependent parameters

    International Nuclear Information System (INIS)

    Farmer, C.L.

    1986-02-01

    The design and assessment of underground waste disposal options requires modelling the dispersal of contaminants within aquifers. The logical structure of the development and application of disposal models is discussed. In particular we examine the validity and interpretation of the gradient diffusion model. The effective dispersion parameters in such a model seem to depend upon the scale on which they are measured. This phenomenon is analysed and methods for modelling scale dependent parameters are reviewed. Specific recommendations regarding the modelling of contaminant dispersal are provided. (author)
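
    A gradient-diffusion transport model with a scale-dependent dispersivity can be sketched in one dimension (all parameter values are illustrative; the linear growth α(x) = 0.1·x is one assumed parameterization of scale dependence, not taken from the review):

```python
import numpy as np

# 1D advection-dispersion with scale-dependent dispersivity alpha(x) = 0.1 * x
# (illustrative values: pore velocity 1 m/d, 200 m domain, 60 d of transport)
nx, dx, dt = 200, 1.0, 0.02
v = 1.0
x = np.arange(nx) * dx
D = v * 0.1 * np.maximum(x, dx)           # dispersion coefficient D(x) = alpha(x) * v
c = np.zeros(nx)
c[0] = 1.0                                # constant-concentration inlet

Dface = 0.5 * (D[:-1] + D[1:])            # D evaluated at the cell interfaces
for _ in range(3000):                     # 3000 * 0.02 d = 60 d
    flux = v * c[:-1] - Dface * np.diff(c) / dx   # upwind advective + dispersive flux
    c[1:-1] -= dt * np.diff(flux) / dx            # conservative finite-volume update
    c[0], c[-1] = 1.0, c[-2]              # fixed inlet, free outflow

print(c[5], c[100], c[180])               # front near x = v*t = 60 m, smeared by D(x)
```

    Because D grows with distance, the plume front spreads more the farther it travels, which is the qualitative signature of scale-dependent dispersion.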

  18. A quantitative method for assessing resilience of interdependent infrastructures

    International Nuclear Information System (INIS)

    Nan, Cen; Sansavini, Giovanni

    2017-01-01

    The importance of understanding system resilience and identifying ways to enhance it, especially for interdependent infrastructures our daily life depends on, has been recognized not only by academics, but also by the corporate and public sectors. During recent years, several methods and frameworks have been proposed and developed to explore applicable techniques to assess and analyze system resilience in a comprehensive way. However, they are often tailored to specific disruptive hazards/events, or fail to properly include all the phases such as absorption, adaptation, and recovery. In this paper, a quantitative method for the assessment of the system resilience is proposed. The method consists of two components: an integrated metric for system resilience quantification and a hybrid modeling approach for representing the failure behavior of infrastructure systems. The feasibility and applicability of the proposed method are tested using an electric power supply system as the exemplary infrastructure. Simulation results highlight that the method proves effective in designing, engineering and improving the resilience of infrastructures. Finally, system resilience is proposed as a proxy to quantify the coupling strength between interdependent infrastructures. - Highlights: • A method for quantifying resilience of interdependent infrastructures is proposed. • It combines multi-layer hybrid modeling and a time-dependent resilience metric. • The feasibility of the proposed method is tested on the electric power supply system. • The method provides insights to decision-makers for strengthening system resilience. • Resilience capabilities can be used to engineer interdependencies between subsystems.
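
    The "area under the performance curve" idea behind time-integrated resilience metrics can be sketched as follows (a common simplified formulation with stylized numbers, not the paper's integrated metric):

```python
import numpy as np

def resilience(t, performance, target=1.0):
    """Fraction of the target service actually delivered over [t0, t1]
    (trapezoidal area under the performance curve, normalized)."""
    area = np.sum(0.5 * (performance[1:] + performance[:-1]) * np.diff(t))
    return area / (target * (t[-1] - t[0]))

# Stylized disruption: full service, a drop to 40% at t = 10 h,
# then linear recovery back to 100% over the next 20 h.
t = np.linspace(0.0, 50.0, 501)
p = np.ones_like(t)
dip = (t >= 10.0) & (t < 30.0)
p[dip] = 0.4 + 0.6 * (t[dip] - 10.0) / 20.0

R = resilience(t, p)
print(round(R, 3))   # -> 0.879
```

    A deeper drop (weaker absorption) or a slower recovery both shrink the integral, so the single number summarizes absorption, adaptation, and recovery together.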

  19. Assessing the utility of frequency dependent nudging for reducing biases in biogeochemical models

    Science.gov (United States)

    Lagman, Karl B.; Fennel, Katja; Thompson, Keith R.; Bianucci, Laura

    2014-09-01

    Bias errors, resulting from inaccurate boundary and forcing conditions, incorrect model parameterization, etc. are a common problem in environmental models including biogeochemical ocean models. While it is important to correct bias errors wherever possible, it is unlikely that any environmental model will ever be entirely free of such errors. Hence, methods for bias reduction are necessary. A widely used technique for online bias reduction is nudging, where simulated fields are continuously forced toward observations or a climatology. Nudging is robust and easy to implement, but suppresses high-frequency variability and introduces artificial phase shifts. As a solution to this problem Thompson et al. (2006) introduced frequency dependent nudging where nudging occurs only in prescribed frequency bands, typically centered on the mean and the annual cycle. They showed this method to be effective for eddy resolving ocean circulation models. Here we add a stability term to the previous form of frequency dependent nudging which makes the method more robust for non-linear biological models. Then we assess the utility of frequency dependent nudging for biological models by first applying the method to a simple predator-prey model and then to a 1D ocean biogeochemical model. In both cases we only nudge in two frequency bands centered on the mean and the annual cycle, and then assess how well the variability in higher frequency bands is recovered. We evaluate the effectiveness of frequency dependent nudging in comparison to conventional nudging and find significant improvements with the former.
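
    The principle of nudging only a low-frequency band can be illustrated with a toy biased model (all parameters are assumed for illustration; an exponential running-mean mismatch stands in for the paper's prescribed frequency bands):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4000
t = np.arange(n, dtype=float)
# "Observations": mean 2, an annual-like cycle, and high-frequency noise
obs = 2.0 + np.sin(2.0 * np.pi * t / 365.0) + 0.5 * rng.normal(size=n)

def run(nudge, tau=2.0, t_mem=200.0):
    """Toy model biased toward a mean of 4. With nudge='freq' the relaxation
    acts only on the mismatch of exponential running means (the lowest
    frequency band), leaving the model's day-to-day variability untouched."""
    x = np.zeros(n)
    xbar = obsbar = 0.0
    a = np.exp(-1.0 / t_mem)              # running-mean memory weight
    for k in range(1, n):
        xbar = a * xbar + (1.0 - a) * x[k - 1]
        obsbar = a * obsbar + (1.0 - a) * obs[k - 1]
        mismatch = (obsbar - xbar) / tau if nudge == "freq" else 0.0
        x[k] = x[k - 1] + (4.0 - x[k - 1]) / 20.0 + mismatch + 0.3 * rng.normal()
    return x

free, nudged = run("none"), run("freq")
# The free run settles near the biased mean 4; the nudged run is pulled toward 2.
print(free[1000:].mean(), nudged[1000:].mean())
```

    Because the correction term never sees the instantaneous innovation, high-frequency anomalies are not relaxed away, which is the motivation for band-limited nudging.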

  20. New method for assessing risks of email

    Science.gov (United States)

    Raja, Seyyed H.; Afrooz, Farzad

    2013-03-01

    E-mail technology has become one of the essentials of human life for correspondence between individuals. Given this, the important point is that e-mail messages, servers and clients, and the correspondence exchanged between different people, must have acceptable security so that people can trust this technology. In the information age, many financial and non-financial transactions are carried out electronically and data exchange takes place via the internet, where theft and manipulation of data can impose exorbitant costs in terms of integrity and in financial, political, economic and cultural terms. E-mail correspondence is no different, and its security is very important. Our review found that no method focusing specifically on risk assessment of e-mail systems has been provided. We examine assessment methods for other systems, together with their strengths and weaknesses, and then apply Convery's method for assessing network risks to e-mail risks. At the end of the paper we offer a dedicated table for e-mail risk assessment.

  1. Modelling and assessment of dependent performance shaping factors through Analytic Network Process

    International Nuclear Information System (INIS)

    De Ambroggi, Massimiliano; Trucco, Paolo

    2011-01-01

    Despite continuous progress in research and applications, one of the major weaknesses of current HRA methods lies in their limited capability of modelling the mutual influences between performance shaping factors (PSFs). Indeed, at least two types of dependency between PSFs can be defined: (i) dependency between the states of the PSFs; (ii) dependency between the influences (impacts) of the PSFs on human performance. This paper introduces a method, based on the Analytic Network Process (ANP), for the quantification of the latter, where the overall contribution of each PSF (weight) to the human error probability (HEP) is eventually returned. The core of the method is the modelling process, articulated into two steps: first, a qualitative network of dependencies between PSFs is identified; then, the importance of each PSF is quantitatively assessed using ANP. The model makes it possible to distinguish two components of a PSF's influence: the direct influence, which the considered PSF exerts by itself, notwithstanding the presence of other PSFs, and the indirect influence, which is the incremental influence of the considered PSF through its effect on other PSFs. A case study in Air Traffic Control is presented where the proposed approach is integrated into the cognitive simulator PROCOS. The results demonstrated a significant modification of the influence of PSFs on operator performance when dependencies are taken into account, underlining the importance of considering not only the possible correlation between the states of PSFs but also their mutual dependency in affecting human performance in complex systems.
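
    The core ANP quantification step, raising a column-stochastic "supermatrix" of pairwise influences to its limiting power so that every column converges to the overall PSF weights, can be sketched with hypothetical numbers (the PSF names and influence strengths below are invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical pairwise-influence supermatrix for four PSFs
# (e.g. stress, training, interface, procedures): entry [i, j] is the
# strength with which PSF j's influence is channelled through PSF i.
W = np.array([
    [0.0, 0.3, 0.2, 0.1],
    [0.4, 0.0, 0.3, 0.2],
    [0.3, 0.3, 0.0, 0.4],
    [0.3, 0.4, 0.5, 0.3],
])
W = W / W.sum(axis=0)               # column-stochastic supermatrix

M = np.linalg.matrix_power(W, 50)   # limiting supermatrix
weights = M[:, 0]                   # every column converges to the same priorities
print(weights.round(3))
```

    The limiting weights absorb both direct and indirect influence paths, which is exactly the distinction the method draws between the two influence components.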

  2. Statistical methods for assessment of blend homogeneity

    DEFF Research Database (Denmark)

    Madsen, Camilla

    2002-01-01

    In this thesis the use of various statistical methods to address some of the problems related to assessment of the homogeneity of powder blends in tablet production is discussed. It is not straightforward to assess the homogeneity of a powder blend. The reason is partly that in bulk materials......, it is shown how to set up parametric acceptance criteria for the batch that give high confidence that future samples will, with a probability larger than a specified value, pass the USP three-class criteria. Properties and robustness of proposed changes to the USP test for content uniformity are investigated...

  3. Geophysics Methods in Electrometric Assessment of Dams

    Energy Technology Data Exchange (ETDEWEB)

    Davydov, V. A., E-mail: davydov-va@yandex.ru; Baidikov, S. V., E-mail: badikek@mail.ru; Gorshkov, V. Yu., E-mail: vitalaa@yandex.ru; Malikov, A. V., E-mail: alex.mal.1986@mail.ru [Russian Academy of Sciences, Geophysical Institute, Ural Branch (Russian Federation)

    2016-07-15

    The safety assessment of hydraulic structures is proposed to be conducted via geoelectric measurements, which are capable of assessing the health of earth dams in their natural bedding without intervention in their structure. Geoelectric measurements are shown to be capable of pinpointing hazardous parts of a dam, including areas of elevated seepage. Applications of such methods are shown for a number of mini-dams in the Sverdlovsk region. A parameter (effective longitudinal conductivity) that may be used to monitor the safety of hydraulic structures is proposed, and quantitative estimates of this parameter are given in terms of the degree of safety.

  4. Methods of geodiversity assessment and their application

    Science.gov (United States)

    Zwoliński, Zbigniew; Najwer, Alicja; Giardino, Marco

    2016-04-01

    The concept of geodiversity has rapidly gained the approval of scientists around the world (Wiedenbein 1993, Sharples 1993, Kiernan 1995, 1996, Dixon 1996, Eberhard 1997, Kostrzewski 1998, 2011, Gray 2004, 2008, 2013, Zwoliński 2004, Serrano, Ruiz-Flano 2007, Gordon et al. 2012). However, recognition of the problem is still at an early stage, and the concept is in effect not explicitly understood and defined (Najwer, Zwoliński 2014). Despite widespread use of the concept, little progress has been made in its assessment and mapping; only within the last decade have methods for geodiversity assessment and its visualisation begun to be investigated, although many have acknowledged the importance of geodiversity evaluation (Kozłowski 2004, Gray 2004, Reynard, Panizza 2005, Zouros 2007, Pereira et al. 2007, Hjort et al. 2015). Hitherto, only a few authors have taken up such methodological issues. Geodiversity maps are created for a variety of purposes, and their methods are therefore quite manifold. The literature contains examples of geodiversity maps applied for geotourism purposes, based mainly on geological diversity, to indicate the scale of an area's tourist attractiveness (Zwoliński 2010, Serrano and Gonzalez Trueba 2011, Zwoliński and Stachowiak 2012). In some studies, geodiversity maps were created and applied to investigate spatial or genetic relationships with the richness of particular components of the natural environment (Burnett et al. 1998, Silva 2004, Jačková, Romportl 2008, Hjort et al. 2012, 2015, Mazurek et al. 2015, Najwer et al. 2014). There are also a few examples of geodiversity assessment for geoconservation and for efficient management and planning of natural protected areas (Serrano and Gonzalez Trueba 2011, Pellitero et al. 2011, 2014, Jaskulska et al. 2013, Melelli 2014, Martinez-Grana et al. 2015). The most popular method of assessing the diversity of abiotic components of the natural

  5. Methods of Environmental Impact Assessment in Colombia

    Directory of Open Access Journals (Sweden)

    Javier Toro Calderón

    2013-10-01

    The Environmental Impact Assessment (EIA) in Colombia constitutes the primary tool for making decisions about projects, works and activities (PWA) with potentially significant environmental impacts. In the case of infrastructure PWA, the EIA is mandatory and determines the environmental license (EL) for construction and operation. This paper analyzes the methods used to assess the environmental impact of PWA that have applied for licenses with the Ministry of Environment and Sustainable Development. It was found that the method most frequently used is the qualitative one proposed by Conesa, with modifications that reduce the effectiveness of the EIA and favor the subjectivity and bias of the evaluator. Finally, a series of recommendations to improve the process in the country is proposed.

  6. Statistical methods in personality assessment research.

    Science.gov (United States)

    Schinka, J A; LaLone, L; Broeckel, J A

    1997-06-01

    Emerging models of personality structure and advances in the measurement of personality and psychopathology suggest that research in personality and personality assessment has entered a stage of advanced development. In this article we examine whether researchers in these areas have taken advantage of new and evolving statistical procedures. We conducted a review of articles published in the Journal of Personality Assessment during the past 5 years. Of the 449 articles that included some form of data analysis, 12.7% used only descriptive statistics, most employed only univariate statistics, and fewer than 10% used multivariate methods of data analysis. We discuss the cost of using limited statistical methods, the possible reasons for the apparent reluctance to employ advanced statistical procedures, and potential solutions to this technical shortcoming.

  7. Using Evidence Credibility Decay Model for dependence assessment in human reliability analysis

    International Nuclear Information System (INIS)

    Guo, Xingfeng; Zhou, Yanhui; Qian, Jin; Deng, Yong

    2017-01-01

    Highlights: • A new computational model is proposed for dependence assessment in HRA. • We combine three factors, “CT”, “TR” and “SP”, within Dempster–Shafer theory. • The BBA of “SP” is reconstructed by a discounting rate based on the ECDM. • Simulation experiments illustrate the efficiency of the proposed method. - Abstract: Dependence assessment among human errors plays an important role in human reliability analysis. When dependence exists between two sequential tasks in human reliability analysis and the preceding task fails, the failure probability of the following task is higher than it would be after a success. Typically, three major factors are considered: “Closeness in Time” (CT), “Task Relatedness” (TR) and “Similarity of Performers” (SP). Assuming TR is unchanged, both SP and CT influence the degree of dependence, and in this paper SP is discounted over time as the means of combining the two factors. A new computational model is proposed, based on Dempster–Shafer Evidence Theory (DSET) and the Evidence Credibility Decay Model (ECDM), to assess the dependence between tasks in human reliability analysis. First, the influencing factors among human tasks are identified and the basic belief assignments (BBAs) of each factor are constructed based on expert evaluation. Then, the BBA of SP is discounted and reconstructed using the ECDM, and the factors are integrated into a fused BBA. Finally, the dependence level is calculated from the fused BBA. Experimental results demonstrate that the proposed model not only quantitatively describes how the input factors influence the dependence level, but also shows exactly how the dependence level changes under different configurations of the input factors.
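    The fusion step described here (discount the SP evidence for elapsed time, then combine it with the other factor's BBA) can be illustrated with a minimal Dempster-Shafer sketch. The frame of dependence levels, the mass values and the discount rate are hypothetical, and plain Shafer discounting stands in for the paper's ECDM decay function.

    ```python
    from itertools import product

    # Minimal Dempster's rule on a frame of dependence levels; "Theta"
    # denotes the whole frame (total ignorance).

    def discount(bba, alpha):
        """Shafer discounting: scale masses by alpha and move the
        remainder to Theta, modelling decayed evidence credibility."""
        out = {A: alpha * m for A, m in bba.items()}
        out["Theta"] = out.get("Theta", 0.0) + (1.0 - alpha)
        return out

    def combine(m1, m2):
        """Dempster's rule restricted to singleton hypotheses plus Theta."""
        fused, conflict = {}, 0.0
        for (A, a), (B, b) in product(m1.items(), m2.items()):
            if A == B or A == "Theta" or B == "Theta":
                C = B if A == "Theta" else A
                fused[C] = fused.get(C, 0.0) + a * b
            else:
                conflict += a * b  # disjoint singletons
        return {C: m / (1.0 - conflict) for C, m in fused.items()}

    # Hypothetical BBAs: HD = high dependence, MD = moderate dependence.
    m_tr = {"HD": 0.6, "MD": 0.3, "Theta": 0.1}           # Task Relatedness
    m_sp = discount({"HD": 0.8, "MD": 0.2}, alpha=0.7)    # decayed SP evidence
    print(combine(m_tr, m_sp))
    ```

    The fused masses shift toward Theta as alpha drops, which is how the decay model weakens the contribution of stale evidence.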

  8. Method of assessing heterogeneity in images

    Science.gov (United States)

    Jacob, Richard E.; Carson, James P.

    2016-08-23

    A method of assessing heterogeneity in images is disclosed. 3D images of an object are acquired. The acquired images may be filtered and masked. Iterative decomposition is performed on the masked images to obtain image subdivisions that are relatively homogeneous. Comparative analysis, such as variogram analysis or correlogram analysis, is performed on the decomposed images to determine spatial relationships between regions of the images that are relatively homogeneous.
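    The comparative (variogram) analysis step can be illustrated with a minimal sketch on synthetic 2D data; the filtering, masking and iterative decomposition stages of the method are not reproduced. The semivariance gamma(h) stays flat for spatially uncorrelated pixels and rises with lag for correlated ones.

    ```python
    import numpy as np

    def empirical_variogram(img, max_lag):
        """Semivariance gamma(h) along the x-axis for lags 1..max_lag:
        half the mean squared difference between pixels h apart."""
        gammas = []
        for h in range(1, max_lag + 1):
            diffs = img[:, h:] - img[:, :-h]
            gammas.append(0.5 * np.mean(diffs ** 2))
        return np.array(gammas)

    rng = np.random.default_rng(0)
    img = rng.normal(size=(64, 64))                 # white noise: flat variogram
    smooth = (img + np.roll(img, 1, axis=1)) / 2    # correlated: rising variogram
    print(empirical_variogram(img, 5))
    print(empirical_variogram(smooth, 5))
    ```

    For the white-noise field gamma(h) is approximately the pixel variance at every lag, while the smoothed field starts lower and climbs, which is the signature of spatial homogeneity at short range.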

  9. Anomalies in the Fujikawa method using parameter-dependent regulators

    International Nuclear Information System (INIS)

    Urrutia, L.F.; Vergara, J.D.

    1992-01-01

    We propose an extended definition of the regularized Jacobian which allows the calculation of anomalies using parameter-dependent regulators in the Fujikawa approach. This extension incorporates the basic Green's function of the problem in the regularized Jacobian, allowing us to interpret a specific regularization procedure as a way of selecting the finite part of the Green's function, in complete analogy with what is done at the level of the effective action. In this way we are able to consider the effect of counterterms in the regularized Jacobian in order to relate different regularization procedures. We also discuss the ambiguities that arise in our prescription due to some freedom in the place where we can insert the regulator, using charge-conjugation invariance as a guiding principle. The method is applied to the case of vector and axial-vector anomalies in two- and four-dimensional quantum electrodynamics. In the first situation we recover the standard family of anomalies calculated by the point-splitting regularization prescription. We also study in detail an alternative choice in the position of the regulator and we calculate explicitly all the currents that generate the families of anomalies that we are considering. Next we extend the calculation to four dimensions, using the same prescriptions as before, and we compare the results with those obtained from the point-splitting calculation, which we also perform in the case of the vector anomaly. A discussion of the relation among the results obtained by different regularization prescriptions is given in terms of the allowed counterterms in the regularized Jacobian, which are highly constrained by the requirement of charge-conjugation invariance

  10. A classification scheme for risk assessment methods.

    Energy Technology Data Exchange (ETDEWEB)

    Stamp, Jason Edwin; Campbell, Philip LaRoche

    2004-08-01

    This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects--level of detail, and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. Each cell in the Table represent a different arrangement of strengths and weaknesses. Those arrangements shift gradually as one moves through the table, each cell optimal for a particular situation. The intention of this report is to enable informed use of the methods so that a method chosen is optimal for a situation given. This report imposes structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage.We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand: (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. Each cell in the matrix represent a different arrangement of strengths and weaknesses. Those arrangements shift gradually as one moves through the table, each cell optimal for a particular situation. The intention of this report is to enable informed use of the methods so that a method chosen is optimal for a situation given. The matrix, with type names in the cells, is introduced in Table 2 on page 13 below. Unless otherwise stated we use the word 'method' in this report to refer to a 'risk assessment method', though often times we use the full phrase. The use of the terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows. In

  11. Simulation of three-dimensional, time-dependent, incompressible flows by a finite element method

    International Nuclear Information System (INIS)

    Chan, S.T.; Gresho, P.M.; Lee, R.L.; Upson, C.D.

    1981-01-01

    A finite element model has been developed for simulating the dynamics of problems encountered in atmospheric pollution and safety assessment studies. The model is based on solving the set of three-dimensional, time-dependent, conservation equations governing incompressible flows. Spatial discretization is performed via a modified Galerkin finite element method, and time integration is carried out via the forward Euler method (pressure is computed implicitly, however). Several cost-effective techniques (including subcycling, mass lumping, and reduced Gauss-Legendre quadrature) which have been implemented are discussed. Numerical results are presented to demonstrate the applicability of the model

  12. Human performance assessment: methods and measures

    International Nuclear Information System (INIS)

    Andresen, Gisle; Droeivoldsmo, Asgeir

    2000-10-01

    The Human Error Analysis Project (HEAP) was initiated in 1994. The aim of the project was to acquire insights on how and why cognitive errors occur when operators are engaged in problem solving in advanced integrated control rooms. Since human error had not been studied in the HAlden Man-Machine LABoratory (HAMMLAB) before, it was also necessary to carry out research in methodology. In retrospect, it is clear that much of the methodological work is relevant to human-machine research in general, and not only to research on human error. The purpose of this report is, therefore, to give practitioners and researchers an overview of the methodological parts of HEAP. The scope of the report is limited to methods used throughout the data acquisition process, i.e., data-collection methods, data-refinement methods, and measurement methods. The data-collection methods include various types of verbal protocols, simulator logs, questionnaires, and interviews. Data-refinement methods involve different applications of the Eyecon system, a flexible data-refinement tool, and small computer programs used for rearranging, reformatting, and aggregating raw-data. Measurement methods involve assessment of diagnostic behaviour, erroneous actions, complexity, task/system performance, situation awareness, and workload. The report concludes that the data-collection methods are generally both reliable and efficient. The data-refinement methods, however, should be easier to use in order to facilitate explorative analyses. Although the series of experiments provided an opportunity for measurement validation, there are still uncertainties connected to several measures, due to their reliability still being unknown. (Author). 58 refs.,7 tabs

  13. Statistical methods for assessing agreement between continuous measurements

    DEFF Research Database (Denmark)

    Sokolowski, Ineta; Hansen, Rikke Pilegaard; Vedsted, Peter

    Background: Clinical research often involves the study of agreement amongst observers. Agreement can be measured in different ways, and one can obtain quite different values depending on which method one uses. Objective: We review the approaches that have been discussed to assess the agreement between...... continuous measures and discuss their strengths and weaknesses. Different methods are illustrated using actual data from the 'Delay in diagnosis of cancer in general practice' project in Aarhus, Denmark. Subjects and Methods: We use the weighted kappa statistic, the intraclass correlation coefficient (ICC......), the concordance coefficient, Bland-Altman limits of agreement and percentage of agreement to assess the agreement between patient-reported delay and doctor-reported delay in the diagnosis of cancer in general practice. Key messages: The correct statistical approach is not obvious. Many studies give the product......
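    Of the methods listed, Bland-Altman limits of agreement are the simplest to compute: the mean difference (bias) plus or minus 1.96 standard deviations of the differences. A minimal sketch on synthetic paired delays (the actual project data are not available here):

    ```python
    import numpy as np

    # Synthetic stand-ins for patient- vs doctor-reported delays, in days.
    rng = np.random.default_rng(1)
    patient = rng.gamma(shape=2.0, scale=10.0, size=100)
    doctor = patient + rng.normal(loc=1.0, scale=4.0, size=100)

    # Bland-Altman: bias and 95% limits of agreement of the differences.
    diff = patient - doctor
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    print(f"bias = {bias:.2f} days, 95% limits of agreement = "
          f"({loa[0]:.2f}, {loa[1]:.2f})")
    ```

    The ICC and weighted kappa would be computed on the same pairs; they answer related but distinct questions about agreement, which is why the choice of method matters.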

  14. On-Site Assessment Methods For Environmental Radioactivity

    International Nuclear Information System (INIS)

    Petrinec, B.; Babic, D.; Bituh, T.

    2015-01-01

    A method for the rapid determination of radioactivity in cases of release into the environment as well as in cases of nuclear/radiological accidents is described. These measurements would enable a direct risk assessment for humans and biota, without any sampling and at a considerably larger number of locations than in previous studies. Thus obtained, the substantially expanded dataset is expected to shed more light on the properties of environmental radioactivity both in the region studied and in other similar areas. Field measurements will be performed and samples of soil and biota will be collected in order to compare field results with laboratory measurements. Once the method has been validated, previously unexplored locations will be included in the study. Our measurements at numerous locations will also provide control values for comparison in cases of any unplanned or accidental radiological event. An assessment of the possible effects of radionuclide concentrations on the human food chain and biota will be performed within the appropriate models used worldwide exactly for this purpose. In this way, the project should contribute to regional, European, and global efforts towards understanding the radiological impact on ecosystems. Field measurements will also address certain issues in the environmental metrology of radioactive substances, e.g., simultaneous determination of activity concentrations and related dose rates. This will serve as a tool for rapid risk assessment in emergency situations. (author).

  15. The heritability of avoidant and dependent personality disorder assessed by personal interview and questionnaire.

    Science.gov (United States)

    Gjerde, L C; Czajkowski, N; Røysamb, E; Orstavik, R E; Knudsen, G P; Ostby, K; Torgersen, S; Myers, J; Kendler, K S; Reichborn-Kjennerud, T

    2012-12-01

    Personality disorders (PDs) have been shown to be modestly heritable. Accurate heritability estimates are, however, dependent on reliable measurement methods, as measurement error deflates heritability. The aim of this study was to estimate the heritability of DSM-IV avoidant and dependent personality disorder, by including two measures of the PDs at two time points. Data were obtained from a population-based cohort of young adult Norwegian twins, of whom 8045 had completed a self-report questionnaire assessing PD traits. 2794 of these twins subsequently underwent a structured diagnostic interview for DSM-IV PDs. Questionnaire items predicting interview results were selected by multiple regression, and measurement models of the PDs were fitted in Mx. The heritabilities of the PD factors were 0.64 for avoidant PD and 0.66 for dependent PD. No evidence of common environment, that is, environmental factors that are shared between twins and make them similar, was found. Genetic and environmental contributions to avoidant and dependent PD seemed to be the same across sexes. The combination of both a questionnaire- and an interview assessment of avoidant and dependent PD results in substantially higher heritabilities than previously found using single-occasion interviews only. © 2012 John Wiley & Sons A/S.

  16. SUBJECTIVE METHODS FOR ASSESSMENT OF DRIVER DROWSINESS

    Directory of Open Access Journals (Sweden)

    Alina Mashko

    2017-12-01

    The paper deals with the issue of fatigue and sleepiness behind the wheel, which has long been of vital importance for research on the safety of driver-car interaction. Numerous experiments on car simulators, with diverse measurements to observe human behavior, have been performed at the authors' faculty laboratories. The paper provides an analysis, overview and assessment of the subjective (self-rating and observer-rating) methods for observing driver behavior and detecting critical behavior in sleep-deprived drivers, using the developed subjective rating scales.

  17. A study on the dependency evaluation for multiple human actions in human reliability analysis of probabilistic safety assessment

    International Nuclear Information System (INIS)

    Kang, D. I.; Yang, J. E.; Jung, W. D.; Sung, T. Y.; Park, J. H.; Lee, Y. H.; Hwang, M. J.; Kim, K. Y.; Jin, Y. H.; Kim, S. C.

    1997-02-01

    This report describes the study results on the method of dependency evaluation and modeling, and on the limiting value of human error probability (HEP), for multiple human actions in accident sequences of probabilistic safety assessment (PSA). THERP and Parry's method, which have generally been used in the dependency evaluation of human reliability analysis (HRA), are introduced and their limitations discussed. A new dependency evaluation method in HRA is established to make up for the weak points of the THERP and Parry methods. The limiting value of HEP is also established based on a review of several HRA-related documents. This report describes the definition, types, evaluation method, and an evaluation example of dependency to aid the reader's understanding. It is expected that these study results will give guidance to HRA analysts in the dependency evaluation of multiple human actions and enable PSA analysts to understand HRA in detail. (author). 23 refs., 3 tabs., 2 figs
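    For context on the THERP model discussed above: THERP maps a nominal HEP to a conditional HEP through five fixed dependence levels, using the standard NUREG/CR-1278 equations. A minimal sketch:

    ```python
    # THERP conditional human error probabilities given failure of the
    # preceding task (NUREG/CR-1278 dependence equations).
    def conditional_hep(p, level):
        formulas = {
            "ZD": lambda p: p,                  # zero dependence
            "LD": lambda p: (1 + 19 * p) / 20,  # low dependence
            "MD": lambda p: (1 + 6 * p) / 7,    # moderate dependence
            "HD": lambda p: (1 + p) / 2,        # high dependence
            "CD": lambda p: 1.0,                # complete dependence
        }
        return formulas[level](p)

    # A nominal HEP of 0.01 rises sharply as the assumed dependence grows.
    for level in ("ZD", "LD", "MD", "HD", "CD"):
        print(level, conditional_hep(0.01, level))
    ```

    Note how even "low" dependence dominates a small nominal HEP, which is why the choice of dependence level, and hence the evaluation method critiqued in this report, matters so much in PSA.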

  18. An empirical method for dynamic camouflage assessment

    Science.gov (United States)

    Blitch, John G.

    2011-06-01

    As camouflage systems become increasingly sophisticated in their potential to conceal military personnel and precious cargo, evaluation methods need to evolve as well. This paper presents an overview of one such attempt to explore alternative methods for empirical evaluation of dynamic camouflage systems which aspire to keep pace with a soldier's movement through rapidly changing environments that are typical of urban terrain. Motivating factors are covered first, followed by a description of the Blitz Camouflage Assessment (BCA) process and results from an initial proof of concept experiment conducted in November 2006. The conclusion drawn from these results, related literature and the author's personal experience suggest that operational evaluation of personal camouflage needs to be expanded beyond its foundation in signal detection theory and embrace the challenges posed by high levels of cognitive processing.

  19. LCIA selection methods for assessing toxic releases

    DEFF Research Database (Denmark)

    Larsen, Henrik Fred; Birkved, Morten; Hauschild, Michael Zwicky

    2002-01-01

    the inventory that contribute significantly to the impact categories on ecotoxicity and human toxicity to focus the characterisation work. The reason why the selection methods are more important for the chemical-related impact categories than for other impact categories is the extremely high number......Characterization of toxic emissions in life cycle impact assessment (LCIA) is in many cases severely limited by the lack of characterization factors for the emissions mapped in the inventory. The number of substances assigned characterization factors for (eco)toxicity included in the dominating LCA....... The methods are evaluated against a set of pre-defined criteria (comprising consistency with characterization and data requirement) and applied to case studies and a test set of chemicals. The reported work is part of the EU-project OMNIITOX....

  20. An application of the explicit method for analysing intersystem dependencies in the evaluation of event trees

    International Nuclear Information System (INIS)

    Oliveira, L.F.S. de; Frutuoso e Melo, P.F.F.; Lima, J.E.P.; Stal, I.L.

    1985-01-01

    A computational application of the explicit method for analyzing event trees in the context of probabilistic risk assessments is discussed. A detailed analysis of the explicit method is presented, including the train level analysis (TLA) of safety systems and the impact vector method. It is shown that the penalty for not adopting TLA is that in some cases non-conservative results may be reached. The impact vector method can significantly reduce the number of sequences to be considered, and its use has inspired the definition of a dependency matrix, which enables the proper running of a computer code especially developed for analysing event trees. The code has been extensively used in the Angra 1 PRA currently underway. In its present version it gives as output the dominant sequences for each given initiator, properly classifying them in core-degradation classes as specified by the user. (Author) [pt

  1. An application of the explicit method for analysing intersystem dependencies in the evaluation of event trees

    International Nuclear Information System (INIS)

    Oliveira, L.F.S.; Frutuoso e Melo, P.F.; Lima, J.E.P.; Stal, I.L.

    1985-01-01

    We discuss in this paper a computational application of the explicit method for analyzing event trees in the context of probabilistic risk assessments. A detailed analysis of the explicit method is presented, including the train level analysis (TLA) of safety systems and the impact vector method. It is shown that the penalty for not adopting TLA is that in some cases non-conservative results may be reached. The impact vector method can significantly reduce the number of sequences to be considered, and its use has inspired the definition of a dependency matrix, which enables the proper running of a computer code especially developed for analysing event trees. This code constructs and quantifies the event trees in the fashion just discussed, by receiving as input the construction and quantification dependencies defined in the dependency matrix. The code has been extensively used in the Angra 1 PRA currently underway. In its present version it gives as output the dominant sequences for each given initiator, properly classifying them in core-degradation classes as specified by the user. This calculation is made in a pointwise fashion. Extensions of this code are being developed in order to perform uncertainty analyses on the dominant sequences and also to compute risk importance measures of the safety systems involved. (orig.)

  2. Comparison of Methods for Dependency Determination between Human Failure Events within Human Reliability Analysis

    International Nuclear Information System (INIS)

    Cepin, M.

    2008-01-01

    The human reliability analysis (HRA) is a highly subjective evaluation of human performance, which is an input for probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications. The objective is to identify the key features, which may decrease subjectivity of human reliability analysis. Human reliability methods are compared with focus on dependency comparison between Institute Jozef Stefan human reliability analysis (IJS-HRA) and standardized plant analysis risk human reliability analysis (SPAR-H). Results show large differences in the calculated human error probabilities for the same events within the same probabilistic safety assessment, which are the consequence of subjectivity. The subjectivity can be reduced by development of more detailed guidelines for human reliability analysis with many practical examples for all steps of the process of evaluation of human performance

  3. Comparison of methods for dependency determination between human failure events within human reliability analysis

    International Nuclear Information System (INIS)

    Cepin, M.

    2007-01-01

    The Human Reliability Analysis (HRA) is a highly subjective evaluation of human performance and is an input to probabilistic safety assessment, which deals with many parameters of high uncertainty. The objective of this paper is to show that subjectivism can have a large impact on human reliability results and consequently on probabilistic safety assessment results and applications. The objective is to identify the key features which may decrease the subjectivity of human reliability analysis. Human reliability methods are compared with a focus on dependency, comparing the Institute Jozef Stefan Human Reliability Analysis (IJS-HRA) and the Standardized Plant Analysis Risk Human Reliability Analysis (SPAR-H). Results show large differences in the calculated human error probabilities for the same events within the same probabilistic safety assessment, which are the consequence of subjectivity. The subjectivity can be reduced by the development of more detailed guidelines for human reliability analysis, with many practical examples for all steps of the process of evaluating human performance. (author)

  4. Assessment of importance of elements for systems that condition depends on the sequence of elements failures

    International Nuclear Information System (INIS)

    Povyakalo, A.A.

    1996-01-01

    This paper proposes new general formulas for calculating importance indices of elements for systems whose condition depends on the sequence of element failures. Such systems are called systems with memory of failures (M-systems). Existing techniques for assessing element importance are based on Boolean models of system reliability, for which it is essential to assume that at every moment the system state depends only on the combination of element states at that very moment. Such systems are called combinational systems (C-systems). The reliability of an M-system at any moment of operating time is a functional whose arguments are the distributions of element times to failure. Boolean models, and the element-importance assessment methods based on them, are not appropriate for these systems. Pereguda and Povyakalo proposed a new technique for assessing element importance for PO-SS systems, which comprise a Protection Object (PO) and a Safety System (SS). A PO-SS system is an example of an M-system. That technique is used in this paper as the basis for a more general treatment. It is shown that the proposed technique for assessing element importance in M-systems has the well-known Birnbaum method as a particular case. A system with double protection is also considered as an example.

  5. Analysis of risk assessment methods for goods trucking

    Directory of Open Access Journals (Sweden)

    Yunyazova A.O.

    2018-04-01

    The article considers risk-assessment models that can be applied to cargo transportation to forecast possible damage, in the form of financial and material costs, in order to reduce the probability of its occurrence. Risk analysis by the method «Criterion. Event. Rule» is presented. This method is based on collecting information by various means, assigning an assessment to the identified risks, ranking them, and formulating an analysis report. It can be carried out as a fully manual, mechanical method of collecting information and performing calculations, or it can be automated from data collection to delivery of the finished results (although in that case some nuances that could significantly influence the outcome of the analysis may be ignored). The expert method is of particular importance, since it relies directly on human experience; here the human factor plays a special role. The collection of information and the assessments assigned to risk groups depend on the extent to which the experts agree on the issue: the smaller the fluctuations in the experts' estimates, the more accurate and optimal the results will be.

  6. Survey and assessment of conventional software verification and validation methods

    International Nuclear Information System (INIS)

    Miller, L.A.; Groundwater, E.; Mirsky, S.M.

    1993-04-01

    By means of a literature survey, a comprehensive set of methods was identified for the verification and validation of conventional software. The 134 methods so identified were classified according to their appropriateness for various phases of a development lifecycle -- requirements, design, and implementation; the last category was subdivided into static and dynamic testing methods. The methods were then characterized in terms of eight rating factors, four concerning ease of use and four concerning the methods' power to detect defects. Based on these factors, two measures were developed to permit quantitative comparisons among methods: a Cost-Benefit metric and an Effectiveness metric. The Effectiveness metric was further refined to provide three different estimates for each method, depending on three classes of needed stringency of V&V (determined by ratings of a system's complexity and required integrity). Methods were then rank-ordered for each of the three classes in terms of their overall cost-benefit and effectiveness. Finally, the applicability of each method was assessed for the four identified components of knowledge-based and expert systems, as well as for the system as a whole.

  7. An iterated Radau method for time-dependent PDE's

    NARCIS (Netherlands)

    S. Pérez-Rodríguez; S. González-Pinto; B.P. Sommeijer (Ben)

    2008-01-01

    This paper is concerned with the time integration of semi-discretized, multi-dimensional PDEs of advection-diffusion-reaction type. To cope with the stiffness of these ODEs, an implicit method has been selected, viz., the two-stage, third-order Radau IIA method. The main topic of this

  8. ALARA ASSESSMENT OF SETTLER SLUDGE SAMPLING METHODS

    International Nuclear Information System (INIS)

    Nelsen, L.A.

    2009-01-01

    The purpose of this assessment is to compare underwater and above water settler sludge sampling methods to determine if the added cost for underwater sampling for the sole purpose of worker dose reduction is justified. Initial planning for sludge sampling included container, settler and knock-out-pot (KOP) sampling. Due to the significantly higher dose consequence of KOP sludge, a decision was made to sample KOP sludge underwater to achieve worker dose reductions. Additionally, initial plans were to utilize the underwater sampling apparatus for settler sludge. Since there are no longer plans to sample KOP sludge, the decision for underwater sampling of settler sludge needs to be revisited. The present sampling plan calls for spending an estimated $2,500,000 to design and construct a new underwater sampling system (per A21 C-PL-001 RevOE). This evaluation compares and contrasts the present above water sampling method with the underwater method planned by the Sludge Treatment Project (STP), and determines whether settler samples can be taken using the existing sampling cart (with potentially minor modifications) while maintaining worker doses As Low As Reasonably Achievable (ALARA), thereby eliminating the need for costly redesigns, testing and personnel retraining.

  10. Processing methods for temperature-dependent MCNP libraries

    International Nuclear Information System (INIS)

    Li Songyang; Wang Kan; Yu Ganglin

    2008-01-01

    In this paper, the processing method of NJOY, which converts ENDF files to ACE (A Compact ENDF) files (the point-wise cross-section files used by the MCNP program), is discussed. Temperatures that cover the range for reactor design and operation are considered. Three benchmarks are used for testing the method: the Jezebel benchmark, the 28 cm-thick slab core benchmark and the LWR benchmark with burnable absorbers. The calculation results show the precision of the neutron cross-section library and verify the correctness of the NJOY processing methods. (authors)

  11. Model-driven dependability assessment of software systems

    CERN Document Server

    Bernardi, Simona; Petriu, Dorina C

    2013-01-01

    In this book, the authors present cutting-edge model-driven techniques for modeling and analysis of software dependability. Most of them are based on the use of UML as software specification language. From the software system specification point of view, such techniques exploit the standard extension mechanisms of UML (i.e., UML profiling). UML profiles enable software engineers to add non-functional properties to the software model, in addition to the functional ones. The authors detail the state of the art on UML profile proposals for dependability specification and rigorously describe the t

  12. Diagnostic methods to assess inspiratory and expiratory muscle strength

    Directory of Open Access Journals (Sweden)

    Pedro Caruso

    2015-04-01

    Full Text Available Impairment of the inspiratory and expiratory respiratory muscles is a common clinical finding, not only in patients with neuromuscular disease but also in patients with primary disease of the lung parenchyma or airways. Although such impairment is common, its recognition is usually delayed because its signs and symptoms are nonspecific and late. This delayed recognition, or even the lack thereof, occurs because the diagnostic tests used in the assessment of respiratory muscle strength are not widely known and available. There are various methods of assessing respiratory muscle strength during the inspiratory and expiratory phases. These methods are divided into two categories: volitional tests (which require patient understanding and cooperation) and non-volitional tests. Volitional tests, such as those that measure maximal inspiratory and expiratory pressures, are the most commonly used because they are readily available. Non-volitional tests depend on magnetic stimulation of the phrenic nerve accompanied by the measurement of inspiratory mouth pressure, inspiratory esophageal pressure, or inspiratory transdiaphragmatic pressure. Another method that has come to be widely used is ultrasound imaging of the diaphragm. We believe that pulmonologists involved in the care of patients with respiratory diseases should be familiar with these tests in order to assess respiratory muscle function. Therefore, the aim of the present article is to describe the advantages, disadvantages, procedures, and clinical applicability of the main tests used in the assessment of respiratory muscle strength.

  13. Assessing wine quality using isotopic methods

    International Nuclear Information System (INIS)

    Costinel, Diana; Ionete, Roxana Elena; Vremera, Raluca; Stefanescu, Ioan

    2010-01-01

    Full text: The analytical methods used to determine the isotope ratios of deuterium, carbon-13 and oxygen-18 in wines have gained official recognition from the Office International de la Vigne et du Vin (OIV) and the National Organisation of Vine and Wine. The amounts of stable isotopes in water and carbon dioxide from plant organic materials, and their distribution in sugar and ethanol molecules, are influenced by the geo-climatic conditions of the region, the grape variety and the year of harvest. For wine characterization, to prove the botanical and geographical origin of the raw material, isotopic analysis by continuous-flow isotope ratio mass spectrometry (CF-IRMS) has made a significant contribution. This paper presents the results of a study on assessing water-adulterated wines and additions of non-grape alcohol and sugar at different concentration levels, using the CF-IRMS analytical technique. (authors)

  14. Elastic wave scattering methods: assessments and suggestions

    International Nuclear Information System (INIS)

    Gubernatis, J.E.

    1985-01-01

    The author was asked by the meeting organizers to review and assess the developments over the past ten or so years in elastic wave scattering methods and to suggest areas of future research opportunity. He highlights the developments, focusing on what he feels were distinct steps forward in our theoretical understanding of how elastic waves interact with flaws. For references and illustrative figures, he uses as his principal source the proceedings of the annual Reviews of Progress in Quantitative Nondestructive Evaluation (NDE). These meetings have been the main forum not only for presenting the results of theoretical research but also for demonstrating the relevance of that research to the design and interpretation of experiments. In his opinion, quantitative NDE is possible only if this relevance exists, and his major objective is to discuss and illustrate the degree to which such relevance has developed.

  15. Site-dependent life-cycle impact assessment of acidification

    DEFF Research Database (Denmark)

    Potting, Josepha Maria Barbara; Schöpp, W.; Blok, Kornelis

    1998-01-01

    The lack of spatial differentiation in current life-cycle impact assessment (LCIA) affects the relevance of the assessed impact. This article first describes a framework for constructing factors relating the region of emission to the acidifying impact on its deposition areas. Next, these factors are established for 44 European regions with the help of the RAINS model, an integrated assessment model that combines information on regional emission levels with information on long-range atmospheric transport to estimate patterns of deposition and concentration for comparison with critical loads and thresholds...

  16. Objective Assessment Method for RNAV STAR Adherence

    Science.gov (United States)

    Stewart, Michael; Matthews, Bryan

    2017-01-01

    Flight crews and air traffic controllers have reported many safety concerns regarding area navigation standard terminal arrival routes (RNAV STARs), specifically optimized profile descents (OPDs). However, our information sources for quantifying these issues are limited to subjective reporting and time-consuming case-by-case investigations. This work is a preliminary study of the objective performance of instrument procedures and provides a framework to track procedural concepts and assess design specifications. We created a tool and analysis methods for gauging aircraft adherence to RNAV STARs. This information is vital for a comprehensive understanding of how our air traffic behaves. In this study, we mined the performance of 24 major US airports over the preceding three years. Overlaying 4D radar track data onto RNAV STAR routes provided a comparison between aircraft flight paths and the waypoint positions and altitude restrictions. NASA Ames supercomputing resources were utilized to perform the data mining and processing. We assessed STARs by lateral transition path (full-lateral), vertical restrictions (full-lateral/full-vertical), and skipped waypoints (skips). In addition, we graphed frequencies of aircraft altitudes relative to the altitude restrictions. Full-lateral adherence was always greater than full-lateral/full-vertical adherence, as the latter is a subset, but the difference between the rates was not consistent. Full-lateral/full-vertical adherence medians of the 2016 procedures ranged from 0% at KDEN (Denver) to 21% at KMEM (Memphis). Waypoint skips ranged from 0% to nearly 100% for specific waypoints. Altitude restrictions were sometimes missed by systematic amounts in 1,000 ft increments from the restriction, creating multi-modal distributions. At other times, altitude misses appeared to be more normally distributed around the restriction. This tool may aid in providing acceptability metrics as well as risk assessment information.

  17. An Improved Image Contrast Assessment Method

    Directory of Open Access Journals (Sweden)

    Yuanyuan Fan

    2013-07-01

    Full Text Available Contrast is an important factor affecting image quality. To overcome the problems of local band-limited contrast, a novel image contrast assessment method based on properties of the human visual system (HVS) is proposed. First, a fast wavelet decomposition is performed on the low-pass-filtered image. Second, each level of band-pass-filtered image and its corresponding low-pass-filtered image are obtained by processing the wavelet coefficients. Third, the local band-limited contrast is calculated, and the local band-limited contrast entropy is computed according to the definition of entropy. Finally, the contrast entropy of the image is obtained by averaging the local band-limited contrast entropies weighted by contrast sensitivity function (CSF) coefficients. Experimental results show that the best-contrast image can be accurately identified in image sequences obtained by adjusting the exposure time and by gray-level stretching; the assessment results accord with human visual characteristics and compensate for the shortcomings of local band-limited contrast.
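
    The general idea of a contrast-entropy score can be illustrated with a deliberately simplified sketch: it computes a local (std/mean) contrast per block and the Shannon entropy of its distribution, omitting the wavelet band-limited decomposition and CSF weighting of the actual method; block size and bin count are illustrative choices.

```python
import numpy as np

def local_contrast_entropy(img, block=8, bins=32):
    """Simplified contrast-entropy score: local std/mean contrast per
    block, then Shannon entropy of the contrast distribution."""
    h, w = img.shape
    contrasts = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = img[y:y + block, x:x + block].astype(float)
            m = patch.mean()
            if m > 0:
                contrasts.append(patch.std() / m)
    hist, _ = np.histogram(contrasts, bins=bins, range=(0.0, max(contrasts) + 1e-9))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
flat = np.full((64, 64), 128.0)                    # no contrast anywhere
textured = 128 + 60 * rng.standard_normal((64, 64))
print(local_contrast_entropy(flat), local_contrast_entropy(textured))
```

    A constant image yields zero entropy, while a textured image with a spread of local contrasts scores higher, which is the behaviour the method exploits when ranking an exposure sequence.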

  18. Quantitative assessment of target dependence of pion fluctuation in ...

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, December 2012, pp. 1395–1405. The analysis reveals the erratic behaviour of the produced pions, signifying ... One of the authors (Sitaram Pal) gratefully acknowledges the financial help from the University.

  19. Dynamic model based on Bayesian method for energy security assessment

    International Nuclear Information System (INIS)

    Augutis, Juozas; Krikštolaitis, Ričardas; Pečiulytė, Sigita; Žutautaitė, Inga

    2015-01-01

    Highlights: • Methodology for dynamic indicator model construction and forecasting of indicators. • Application of the dynamic indicator model to energy system development scenarios. • Expert judgement involved using the Bayesian method. - Abstract: The methodology for constructing a dynamic indicator model and forecasting indicators for the assessment of the energy security level is presented in this article. An indicator is a special index that provides numerical values for factors important to the investigated area. In real life, models of different processes take into account various factors that are time-dependent and dependent on each other; thus, it is advisable to construct a dynamic model in order to describe these dependences. The energy security indicators are used as factors in the dynamic model. Usually, the values of indicators are obtained from statistical data. The developed dynamic model enables forecasting of indicator variation, taking into account changes in system configuration. Energy system development is usually based on the construction of a new object. Since the parameters of the changes introduced by the new system are not exactly known, information about their influence on the indicators cannot be incorporated into the model by deterministic methods. Thus, the dynamic indicator model based on historical data is adjusted by a probabilistic model of the influence of new factors on the indicators, using the Bayesian method.
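
    The Bayesian adjustment step can be sketched, under a simplifying normal-normal conjugacy assumption not stated in the abstract, as combining the historical-data forecast (prior) with expert judgement about the new system component (likelihood); the indicator values below are hypothetical.

```python
def bayes_update_normal(prior_mean, prior_var, obs_mean, obs_var):
    """Conjugate normal-normal update: precision-weighted combination of
    the historical forecast (prior) and expert judgement (observation)."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)
    return post_mean, post_var

# Hypothetical energy-security indicator: the historical model forecasts
# 72 with variance 25; experts judge the new plant shifts it to about 80
# with variance 100 (low confidence).
m, v = bayes_update_normal(72.0, 25.0, 80.0, 100.0)
print(round(m, 2), round(v, 2))  # 73.6 20.0
```

    The posterior leans toward the historical forecast because the expert judgement carries the larger variance; as expert confidence grows, the update shifts further toward their estimate.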

  20. Frequency-Dependent FDTD Algorithm Using Newmark’s Method

    Directory of Open Access Journals (Sweden)

    Bing Wei

    2014-01-01

    Full Text Available According to the characteristics of the frequency-domain polarizability of three common models of dispersive media, the relation between the polarization vector and the electric field intensity is converted into a second-order time-domain differential equation in the polarization vector by frequency-to-time-domain conversion. Newmark's β-γ difference method is employed to solve this equation. The recursion from electric field intensity to polarization is derived, and the recursion from electric flux to electric field intensity is obtained from the constitutive relation. FDTD time-domain iteration of the electric and magnetic field components in the dispersive medium is then completed. By analyzing the stability of solving the above differential equation with the central-difference method, it is shown that the present method has advantages in the selection of the time step. Theoretical analyses and numerical results demonstrate that the method is general and offers higher accuracy and stability than algorithms based on the central-difference method.
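
    A minimal sketch of Newmark's method for a generic scalar second-order equation m·x'' + c·x' + k·x = f(t), the same form as the polarization equation above; the β = 1/4, γ = 1/2 ("average acceleration") choice is unconditionally stable, which is the time-step advantage over the central-difference scheme. The coefficients here are illustrative, not medium parameters from the paper.

```python
import math

def newmark(m, c, k, f, x0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Newmark beta-gamma time stepping for m*x'' + c*x' + k*x = f(t)."""
    x, v = x0, v0
    a = (f(0.0) - c * v - k * x) / m          # initial acceleration
    for n in range(steps):
        t1 = (n + 1) * dt
        # predictors from the Newmark expansions
        xp = x + dt * v + dt * dt * (0.5 - beta) * a
        vp = v + dt * (1.0 - gamma) * a
        # enforce the equation of motion at t1 to get the new acceleration
        a1 = (f(t1) - c * vp - k * xp) / (m + c * gamma * dt + k * beta * dt * dt)
        x = xp + beta * dt * dt * a1
        v = vp + gamma * dt * a1
        a = a1
    return x, v

# Undamped oscillator x'' + x = 0, x(0) = 1, x'(0) = 0: exact solution cos(t).
x, v = newmark(1.0, 0.0, 1.0, lambda t: 0.0, 1.0, 0.0, 0.001, 1000)
print(x, math.cos(1.0))  # the two values agree closely
```

    With explicit central differences the same problem would impose a stability bound on dt; the average-acceleration Newmark scheme has none, at the cost of an (here scalar, hence trivial) implicit solve each step.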

  1. Assessment of physical protection systems: EVA method

    International Nuclear Information System (INIS)

    Bernard, J.-L.; Lamotte, C.; Jorda, A.

    2001-01-01

    CEA's missions in various sectors of activity such as nuclear, defence and industrial contracts, and the associated regulatory requirements, make it necessary to develop a strategy in the field of physical protection. In particular, firms holding nuclear materials are subject to law no. 80-572 of July 25, 1980 on the protection and control of nuclear materials. A holding permit delivered by the regulatory authority is conditioned on the operator's protection of the nuclear materials used. In France, it is the nuclear operator who must demonstrate, in the form of a security study, that potential aggressors would be neutralised before they could escape with the material. To meet these requirements, we have developed methods to assess the vulnerability of our facilities. The EVA method, the French acronym for 'Evaluation de la Vulnerabilite des Acces' (access vulnerability evaluation), addresses internal and external threats involving brutal actions. In scenarios relating to an external threat, the intruders get past the various barriers of our protection system, attempting to steal a large volume of material in one swoop and then escape. In the case of an internal threat, the goal is the same; however, as the intruder usually has access to the material in the scope of his activities, the action begins at the level of the target. Our protection system is based on defense in depth, where the intruders are detected and then delayed in their advance towards their target to allow time for intervention forces to intercept them.

  2. Methods for regional assessment of geothermal resources

    Science.gov (United States)

    Muffler, P.; Cataldi, R.

    1978-01-01

    A consistent, agreed-upon terminology is prerequisite for geothermal resource assessment. Accordingly, we propose a logical, sequential subdivision of the "geothermal resource base", accepting its definition as all the thermal energy in the earth's crust under a given area, measured from mean annual temperature. That part of the resource base which is shallow enough to be tapped by production drilling is termed the "accessible resource base", and it in turn is divided into "useful" and "residual" components. The useful component (i.e. the thermal energy that could reasonably be extracted at costs competitive with other forms of energy at some specified future time) is termed the "geothermal resource". This in turn is divided into "economic" and "subeconomic" components, based on conditions existing at the time of assessment. In the format of a McKelvey diagram, this logic defines the vertical axis (degree of economic feasibility). The horizontal axis (degree of geologic assurance) contains "identified" and "undiscovered" components. "Reserve" is then designated as the identified economic resource. All categories should be expressed in units of thermal energy, with resource and reserve figures calculated at wellhead, prior to the inevitable large losses inherent in any practical thermal use or in conversion to electricity. Methods for assessing geothermal resources can be grouped into 4 classes: (a) surface thermal flux, (b) volume, (c) planar fracture and (d) magmatic heat budget. The volume method appears to be most useful because (1) it is applicable to virtually any geologic environment, (2) the required parameters can in principle be measured or estimated, (3) the inevitable errors are in part compensated and (4) the major uncertainties (recoverability and resupply) are amenable to resolution in the foreseeable future.
The major weakness in all the methods rests in the estimation of how much of the accessible resource base can be extracted at some time in the

  3. Evaluation of methods to assess physical activity

    Science.gov (United States)

    Leenders, Nicole Y. J. M.

    Epidemiological evidence has accumulated that demonstrates that the amount of physical activity-related energy expenditure during a week reduces the incidence of cardiovascular disease, diabetes, obesity, and all-cause mortality. To further understand the amount of daily physical activity and related energy expenditure that are necessary to maintain or improve functional health status and quality of life, instruments that estimate total (TDEE) and physical activity-related energy expenditure (PAEE) under free-living conditions should be determined to be valid and reliable. Without evaluation of the various methods that estimate TDEE and PAEE against the doubly labeled water (DLW) method in females, there will be significant limitations on assessing the efficacy of physical activity interventions on health status in this population. A triaxial accelerometer (Tritrac-R3D (TT)), a uniaxial activity monitor (Computer Science and Applications Inc. (CSA)), a Yamax Digiwalker-500 stepcounter (YX-stepcounter), heart rate responses (HR method) and a 7-d Physical Activity Recall questionnaire (7-d PAR) were compared with the criterion method of DLW during a 7-d period in female adults. DLW-TDEE was underestimated on average by 9, 11 and 15% using the 7-d PAR, HR method and TT. The underestimation of DLW-PAEE by the 7-d PAR was 21%, compared with 47% and 67% for the TT and YX-stepcounter. Approximately 56% of the variance in DLW-PAEE·kg⁻¹ is explained by the registration of body movement with accelerometry. A larger proportion of the variance in DLW-PAEE·kg⁻¹ was explained by jointly incorporating information from the vertical and horizontal movement measured with the CSA and Tritrac-R3D (r² = 0.87). Although only a small amount of the variance in DLW-PAEE·kg⁻¹ is explained by the number of steps taken per day, because of its low cost and ease of use the Yamax stepcounter is useful in studies promoting daily walking. Thus, studies involving the

  4. Extension of biomass estimates to pre-assessment periods using density dependent surplus production approach.

    Directory of Open Access Journals (Sweden)

    Jan Horbowy

    Full Text Available Biomass reconstructions extending to pre-assessment periods for commercially important and exploited fish species are important tools for understanding long-term processes and fluctuations at the stock and ecosystem level. For some stocks, only fisheries statistics and fishery-dependent data are available for the period before surveys were conducted. This paper develops and tests methods for the backward extension of analytical biomass assessments to years for which only total catch volumes are available. Two of the approaches apply the concept of the surplus production rate (SPR), which is shown to be stock-density dependent if stock dynamics are governed by classical stock-production models. The third approach uses a modified form of the Schaefer production model that allows backward biomass estimation. The performance of the methods was tested on the Arctic cod and North Sea herring stocks, for which analytical biomass estimates extend back to the late 1940s. The methods were then applied to extend biomass estimates of the North-east Atlantic mackerel from the 1970s (when analytical biomass estimates become available) back to the 1950s, for which only total catch volumes were available. For comparison, a method that employs a constant SPR, estimated as an average of the observed values, was also applied. The analyses showed that the performance of the methods is stock- and data-specific; methods that work well for one stock may fail for others. The constant-SPR method is not recommended in cases where the SPR is relatively high and the catch volumes in the reconstructed period are low.
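
    The backward-extension idea for a Schaefer-type model can be sketched as inverting the forward biomass recursion for B[t], which is a quadratic in B[t]; the growth rate r, carrying capacity K and catch value below are hypothetical, not the paper's fitted parameters.

```python
import math

def schaefer_forward(b, r, K, catch):
    # B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]
    return b + r * b * (1.0 - b / K) - catch

def schaefer_backward(b_next, r, K, catch):
    """Invert the Schaefer recursion for B[t]: solve the quadratic
    (r/K)*B^2 - (1+r)*B + (B[t+1] + C[t]) = 0, taking the smaller
    (biologically relevant, below-K*(1+r)/(2r)) root."""
    a = r / K
    disc = (1.0 + r) ** 2 - 4.0 * a * (b_next + catch)
    if disc < 0:
        raise ValueError("no real solution: inputs inconsistent with the model")
    return ((1.0 + r) - math.sqrt(disc)) / (2.0 * a)

# Round-trip check with hypothetical parameters (biomass in arbitrary units):
r, K = 0.4, 1000.0
b0 = 600.0
b1 = schaefer_forward(b0, r, K, catch=80.0)
print(schaefer_backward(b1, r, K, catch=80.0))  # recovers ~600
```

    Iterating the backward step over a historical catch series reconstructs biomass for the pre-assessment years, which is the essence of the approach described above.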

  5. Microbial diversity in fecal samples depends on DNA extraction method

    DEFF Research Database (Denmark)

    Mirsepasi, Hengameh; Persson, Søren; Struve, Carsten

    2014-01-01

    BACKGROUND: There are challenges when extracting bacterial DNA from specimens for molecular diagnostics, since fecal samples also contain DNA from human cells and many different substances derived from food, cell residues and medication that can inhibit downstream PCR. The purpose of the study was to evaluate two different DNA extraction methods in order to choose the most efficient method for studying intestinal bacterial diversity using Denaturing Gradient Gel Electrophoresis (DGGE). FINDINGS: In this study, a semi-automatic DNA extraction system (easyMag®, BioMérieux, Marcy l'Etoile, France) ... by easyMag® from the same fecal samples. Furthermore, DNA extracts obtained using easyMag® seemed to contain inhibitory compounds, since in order to perform a successful PCR analysis, the sample should be diluted at least 10 times. DGGE performed on PCR from DNA extracted by QIAamp DNA Stool Mini Kit DNA...

  6. Comparison between Evapotranspiration Fluxes Assessment Methods

    Science.gov (United States)

    Casola, A.; Longobardi, A.; Villani, P.

    2009-11-01

    Knowledge of the hydrological processes acting in the water balance is essential for a rational water resources management plan. Among these, water losses as vapour, in the form of evapotranspiration, play an important role in the water balance and in the heat transfers between the land surface and the atmosphere. Mass and energy interactions between soil, atmosphere and vegetation in fact influence all hydrological processes, modifying rainfall interception, infiltration, evapotranspiration, surface runoff and groundwater recharge. A number of methods have been developed in the scientific literature for modelling evapotranspiration. They can be divided into three main groups: i) traditional meteorological models; ii) energy flux balance models, which consider the interaction between vegetation and the atmosphere; and iii) remote-sensing-based models. The present analysis first studies flux directions and evaluates energy balance closure in a typical Mediterranean short-vegetation area, using data series recorded by an eddy covariance station located in the Campania region, Southern Italy. The analysis was performed over different seasons of the year with the aim of assessing the impact of climatic forcing on the flux balance, evaluating the smaller imbalance, and highlighting influencing factors and sampling errors in balance closure. The study also addresses evapotranspiration flux assessment at the point scale. Evapotranspiration is evaluated both from empirical relationships (Penman-Monteith, Penman FAO, Priestley-Taylor) calibrated with energy fluxes measured at the experimental site, and from measured latent heat data scaled by the latent heat of vaporization. These results are compared with traditional and reliable well-known models at the plot scale (Coutagne, Turc, Thornthwaite).
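
    As an illustration of one of the empirical relationships named above, a minimal Priestley-Taylor sketch using standard FAO-style constants; the input temperature and radiation values are hypothetical, not measurements from the study site.

```python
import math

def priestley_taylor_et(t_c, rn, g=0.0, alpha=1.26, gamma=0.066):
    """Priestley-Taylor evapotranspiration estimate (mm/day).
    t_c: mean air temperature (deg C); rn, g: net radiation and soil
    heat flux (MJ m-2 day-1); gamma: psychrometric constant (kPa/degC)."""
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))   # sat. vapour pressure, kPa
    delta = 4098.0 * es / (t_c + 237.3) ** 2              # slope of the es curve, kPa/degC
    lam = 2.45                                            # latent heat of vaporization, MJ/kg
    return alpha * (delta / (delta + gamma)) * (rn - g) / lam

# A warm day at a Mediterranean short-vegetation site (hypothetical values):
print(round(priestley_taylor_et(t_c=25.0, rn=15.0), 2), "mm/day")
```

    At 25 °C and 15 MJ m⁻² day⁻¹ of net radiation this gives on the order of 5-6 mm/day, a plausible summer magnitude; calibrating alpha against eddy covariance fluxes, as the study does, adjusts this for local conditions.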

  7. Measured emittance dependence on injection method in laser plasma accelerators

    Science.gov (United States)

    Barber, Samuel; van Tilborg, Jeroen; Schroeder, Carl; Lehe, Remi; Tsai, Hai-En; Swanson, Kelly; Steinke, Sven; Nakamura, Kei; Geddes, Cameron; Benedetti, Carlo; Esarey, Eric; Leemans, Wim

    2017-10-01

    The success of many laser plasma accelerator (LPA) based applications relies on the ability to produce electron beams with excellent 6D brightness, where brightness is defined as the ratio of charge to the product of the three normalized emittances. As such, parametric studies of the emittance of LPA generated electron beams are essential. Profiting from a stable and tunable LPA setup, combined with a carefully designed single-shot transverse emittance diagnostic, we present a direct comparison of charge dependent emittance measurements of electron beams generated by two different injection mechanisms: ionization injection and shock induced density down-ramp injection. Notably, the measurements reveal that ionization injection results in significantly higher emittance. With the down-ramp injection configuration, emittances less than 1 micron at spectral charge densities up to 2 pC/MeV were measured. This work was supported by the U.S. DOE under Contract No. DE-AC02-05CH11231, by the NSF under Grant No. PHY-1415596, by the U.S. DOE NNSA, DNN R&D (NA22), and by the Gordon and Betty Moore Foundation under Grant ID GBMF4898.

  8. Nicotine Vapor Method to Induce Nicotine Dependence in Rodents.

    Science.gov (United States)

    Kallupi, Marsida; George, Olivier

    2017-07-05

    Nicotine, the main addictive component of tobacco, induces potentiation of brain stimulation reward, increases locomotor activity, and induces conditioned place preference. Nicotine cessation produces a withdrawal syndrome that can be relieved by nicotine replacement therapy. In the last decade, the market for electronic cigarettes has flourished, especially among adolescents. The nicotine vaporizer, or electronic nicotine delivery system, is a battery-operated device that allows the user to simulate the experience of tobacco smoking without inhaling smoke. The device is designed to be an alternative to conventional cigarettes that emits vaporized nicotine to be inhaled by the user. This report describes a procedure for vaporizing nicotine in air to produce blood nicotine levels in rodents that are clinically relevant to those observed in humans and that produce dependence. We also describe how to construct the apparatus to deliver nicotine vapor in a stable, reliable, and consistent manner, as well as how to analyze air for nicotine content. © 2017 by John Wiley & Sons, Inc.

  9. Assessment of chemical exposures: calculation methods for environmental professionals

    National Research Council Canada - National Science Library

    Daugherty, Jack E

    1997-01-01

    ... on by scientists, businessmen, and policymakers. Assessment of Chemical Exposures: Calculation Methods for Environmental Professionals addresses the expanding scope of exposure assessments in both the workplace and environment...

  10. Assessing age-dependent susceptibility to measles in Japan.

    Science.gov (United States)

    Kinoshita, Ryo; Nishiura, Hiroshi

    2017-06-05

    Routine vaccination against measles in Japan started in 1978. Whereas measles elimination was verified in 2015, multiple chains of measles transmission were observed in 2016. We aimed to reconstruct the age-dependent susceptibility to measles in Japan so that future vaccination strategies can be elucidated. An epidemiological model was used to quantify the age-dependent immune fraction using datasets of vaccination coverage and seroepidemiological surveys. The second dose was interpreted in two different scenarios, i.e., booster and random shots. The effective reproduction number, the average number of secondary cases generated by a single infected individual, and the age at infection were explored using the age-dependent transmission model and the next generation matrix. While the herd immunity threshold of measles likely ranges from 90% to 95%, assuming that the basic reproduction number ranges from 10 to 20, the estimated immune fraction in Japan was below those thresholds in 2016, despite the fact that the estimates were above 80% for all ages. If the second dose completely acted as the booster shot, a proportion immune above 90% was achieved only among those aged 5 years or below in 2016. Alternatively, if the second dose was randomly distributed regardless of primary vaccination status, a proportion immune over 90% was achieved among those aged below 25 years. The effective reproduction number was estimated to range from 1.50 to 3.01 and from 1.50 to 3.00, respectively, for scenarios 1 and 2 in 2016; if the current vaccination schedule were continued, the reproduction number is projected to range from 1.50 to 3.01 and 1.39 to 2.78, respectively, in 2025. Japan continues to be prone to imported cases of measles. Supplementary vaccination among adults aged 20-49 years would be effective if the chains of transmission continue to be observed in that age group. Copyright © 2017 Elsevier Ltd. All rights reserved.
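    The 90-95% threshold range quoted in this abstract follows from the standard herd immunity relation, threshold = 1 - 1/R0. A minimal sketch of that calculation (illustrative only, not the authors' age-structured model):

```python
# Herd immunity threshold implied by the basic reproduction number R0.
# Standard epidemiological relation; not the paper's full transmission model.
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to block sustained spread."""
    return 1.0 - 1.0 / r0

# R0 of 10-20 for measles reproduces the 90-95% range cited in the abstract.
for r0 in (10, 20):
    print(f"R0 = {r0}: threshold = {herd_immunity_threshold(r0):.0%}")
```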

  11. Formal methods for dependable real-time systems

    Science.gov (United States)

    Rushby, John

    1993-01-01

    The motivation for using formal methods to specify and reason about real-time properties is outlined, and approaches that have been proposed and used are sketched. The formal verifications of clock synchronization algorithms show that mechanically supported reasoning about complex real-time behavior is feasible. However, there has been a significant increase in the effectiveness of verification systems since those verifications were performed, and it is to be expected that verifications of comparable difficulty will become fairly routine. The current challenge lies in developing perspicuous and economical approaches to the formalization and specification of real-time properties.

  12. Dependence of coke properties on the method of charge preparation

    Energy Technology Data Exchange (ETDEWEB)

    Morozov, O S

    1979-04-01

    Selective crushing is essential to obtain the required coke properties: in the coarse fractions there is a considerable reduction in the middlings and dirt that are normally difficult to crush. These fractions are at the same time enriched with vitrinite, so that there is an increase in the coal substance as such, reflected in improved caking capacity in the coarse size range. Various methods of selective crushing are employed, including air-entrainment mills and fluidised-bed systems. Other advantages claimed for selective crushing are uniform pore distribution and air permeability, and also diminished breakage stress.

  13. An efficient and accurate method to obtain the energy-dependent Green function for general potentials

    International Nuclear Information System (INIS)

    Kramer, T; Heller, E J; Parrott, R E

    2008-01-01

    Time-dependent quantum mechanics provides an intuitive picture of particle propagation in external fields. Semiclassical methods link the classical trajectories of particles with their quantum mechanical propagation. Many analytical results and a variety of numerical methods have been developed to solve the time-dependent Schroedinger equation. The time-dependent methods work for nearly arbitrarily shaped potentials, including sources and sinks via complex-valued potentials. Many quantities are measured at fixed energy, which is seemingly not well suited for a time-dependent formulation. Very few methods exist to obtain the energy-dependent Green function for complicated potentials without resorting to ensemble averages or using certain lead-in arrangements. Here, we demonstrate in detail a time-dependent approach, which can accurately and effectively construct the energy-dependent Green function for very general potentials. The applications of the method are numerous, including chemical, mesoscopic, and atomic physics
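    The key relation underlying such a time-dependent construction is standard (not specific to this paper): the energy-dependent retarded Green function is obtained from the time-dependent propagator $K$ by a Fourier-Laplace transform,

```latex
G(\mathbf{r},\mathbf{r}';E) \;=\; \frac{1}{i\hbar}\int_0^{\infty} e^{iEt/\hbar}\,K(\mathbf{r},t;\mathbf{r}',0)\,\mathrm{d}t ,
```

    where convergence is ensured by giving $E$ a small positive imaginary part ($E \to E + i\varepsilon$), which is also how absorbing (complex-valued) potentials enter the formulation.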

  14. Assessing Whether Oil Dependency in Venezuela Contributes to National Instability

    Directory of Open Access Journals (Sweden)

    Adam Kott

    2012-08-01

    Full Text Available The focus of this article is on what role, if any, oil has on Venezuela's instability. When trying to explain why a resource-rich country experiences slow or negative growth, experts often point to the resource curse. The following pages explore the traditional theory behind the resource curse as well as alternative perspectives to this theory such as ownership structure and the correlation between oil prices and democracy. This article also explores the various forms of instability within Venezuela and their causes. Finally, the article looks at President Hugo Chavez's political and economic policies as well as the stagnation of the state oil company, Petroleos de Venezuela (PDVSA). This article dispels the myth that the resource curse is the source of destabilization in many resource dependent countries. Rather than a cause of instability, this phenomenon is a symptom of a much larger problem that is largely structural.

  15. Ergonomic risk assessment by REBA method

    Directory of Open Access Journals (Sweden)

    A. Hassanzadeh

    2007-09-01

    Full Text Available Background and aims   Awkward posture has been recognized as one of the important risk factors for work-related musculoskeletal disorders (WMSD). The current study aimed at determining the ergonomic risk level and the WMSD ratio, and at exploring the contribution of working postures to WMSD. During the study, working postures were phased and then scored using the REBA tool from observation of the work.   Methods   To perform the study, workers of a home appliances manufacturing factory were assessed. To collect the required data, each part of the body was scored, and work frequency, load/force, and coupling were considered to arrive at a REBA score. The Nordic Questionnaire was used to determine the WMSD ratio and its relationship with the REBA score. 231 working phases were assessed and 13761 questions of the Nordic Questionnaire were answered. The percentage of the workers in press, spot welding, grinding, cutting, assembling, and painting was 15.8, 21.6, 25.9, 34.5, 89.9%, respectively. Workers were 18-54 years old and their average work record was 52 months.   Results   The REBA score was 4-13 in the tasks under study. REBA score = 9 had the highest frequency (20%) and REBA score = 13 the lowest (1.4%). The risk level in press, cutting, and painting was high (25.5, 100, and 68.2% of cases), showing that cutting has the highest risk level. On the other hand, 38.5% of the workers had had problems in different parts of their body in the past 12 months. In total, 11.7% of the workers had problems in the neck, 19.4% in the leg, 10.7% in the foot, 82.5% in the lower back, 87.6% in the upper back, and 7.8% in the shoulders. 10.7% of the workers had a previous illness; of these, 8.7% were non-occupational and 1.9% were caused by their previous jobs. The mean REBA score and ergonomic risk level are not equal across tasks (p-value0. The action level was "necessary soon" in the others.   Conclusion   The risk level should be reduced, especially in cutting. The heavy workload and poor design of working height, awkward

  16. Demonstration and evaluation of a method for assessing mediated moderation.

    Science.gov (United States)

    Morgan-Lopez, Antonio A; MacKinnon, David P

    2006-02-01

    Mediated moderation occurs when the interaction between two variables affects a mediator, which then affects a dependent variable. In this article, we describe the mediated moderation model and evaluate it with a statistical simulation using an adaptation of product-of-coefficients methods to assess mediation. We also demonstrate the use of this method with a substantive example from the adolescent tobacco literature. In the simulation, relative bias (RB) in point estimates and standard errors did not exceed problematic levels of +/- 10% although systematic variability in RB was accounted for by parameter size, sample size, and nonzero direct effects. Power to detect mediated moderation effects appears to be severely compromised under one particular combination of conditions: when the component variables that make up the interaction terms are correlated and partial mediated moderation exists. Implications for the estimation of mediated moderation effects in experimental and nonexperimental research are discussed.
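    The product-of-coefficients approach mentioned in this abstract multiplies the path from the interaction term to the mediator (a) by the path from the mediator to the outcome (b). A hedged sketch with made-up coefficient values (the Sobel first-order standard error shown here is one common choice, not necessarily the adaptation the authors used):

```python
# Product-of-coefficients estimate of a mediated moderation effect.
# Coefficient values below are hypothetical, for illustration only.
import math

a, se_a = 0.40, 0.10   # path a: X*Z interaction -> mediator M
b, se_b = 0.25, 0.08   # path b: M -> outcome Y, controlling for X, Z, X*Z

indirect = a * b
# First-order (Sobel) standard error of the product of two coefficients.
se_indirect = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
z = indirect / se_indirect
print(f"mediated moderation effect = {indirect:.3f}, SE = {se_indirect:.3f}, z = {z:.2f}")
```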

  17. Assessing the accuracy of ancestral protein reconstruction methods.

    Directory of Open Access Journals (Sweden)

    Paul D Williams

    2006-06-01

    Full Text Available The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of "ancestral sequences" inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a "best guess" amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated.

  18. Assessing the accuracy of ancestral protein reconstruction methods.

    Science.gov (United States)

    Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A

    2006-06-23

    The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of "ancestral sequences" inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a "best guess" amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated.

  19. Dependent data in social sciences research forms, issues, and methods of analysis

    CERN Document Server

    Eye, Alexander; Wiedermann, Wolfgang

    2015-01-01

    This volume presents contributions on handling data in which the postulate of independence in the data matrix is violated. When this postulate is violated and when the methods assuming independence are still applied, the estimated parameters are likely to be biased, and statistical decisions are very likely to be incorrect. Problems associated with dependence in data have been known for a long time, and led to the development of tailored methods for the analysis of dependent data in various areas of statistical analysis. These methods include, for example, methods for the analysis of longitudinal data, corrections for dependency, and corrections for degrees of freedom. This volume contains the following five sections: growth curve modeling, directional dependence, dyadic data modeling, item response modeling (IRT), and other methods for the analysis of dependent data (e.g., approaches for modeling cross-section dependence, multidimensional scaling techniques, and mixed models). Researchers and graduate stud...

  20. Methods and Model Dependency of Extreme Event Attribution: The 2015 European Drought

    Science.gov (United States)

    Hauser, Mathias; Gudmundsson, Lukas; Orth, René; Jézéquel, Aglaé; Haustein, Karsten; Vautard, Robert; van Oldenborgh, Geert J.; Wilcox, Laura; Seneviratne, Sonia I.

    2017-10-01

    Science on the role of anthropogenic influence on extreme weather events, such as heatwaves or droughts, has evolved rapidly in the past years. The approach of "event attribution" compares the occurrence-probability of an event in the present, factual climate with its probability in a hypothetical, counterfactual climate without human-induced climate change. Several methods can be used for event attribution, based on climate model simulations and observations, and usually researchers only assess a subset of methods and data sources. Here, we explore the role of methodological choices for the attribution of the 2015 meteorological summer drought in Europe. We present contradicting conclusions on the relevance of human influence as a function of the chosen data source and event attribution methodology. Assessments using the maximum number of models and counterfactual climates with pre-industrial greenhouse gas concentrations point to an enhanced drought risk in Europe. However, other evaluations show contradictory evidence. These results highlight the need for a multi-model and multi-method framework in event attribution research, especially for events with a low signal-to-noise ratio and high model dependency such as regional droughts.

  1. Assessing digital control system dependability using the dynamic flowgraph methodology

    International Nuclear Information System (INIS)

    Garrett, C.J.; Guarro, S.B.; Apostolakis, G.E.

    1993-01-01

    Dynamic Flowgraph Methodology (DFM) is a methodological approach to modeling and analyzing the behavior of software-driven embedded systems for the purpose of reliability/safety assessment and verification. The methodology has two fundamental goals: (a) to identify how certain postulated events may occur in a system and (b) to identify an appropriate testing strategy based on an analysis of system functional behavior. To achieve these goals, the methodology employs a modeling framework in which system models are developed in terms of causal relationships between physical variables and temporal characteristics of the execution of software modules. These models are then analyzed to determine how a certain state (desirable or undesirable) can be reached. This is done by developing timed fault trees, which take the form of logical combinations of static trees relating system parameters at different points in time. The prime implicants (multistate analog of minimal cut sets) of the fault trees can be used to identify and eliminate system faults resulting from unanticipated combinations of software logic errors, hardware failures, and adverse environmental conditions and to direct testing activity to more efficiently eliminate implementation errors by focusing on the neighborhood of potential failure modes arising from these combinations of system conditions

  2. 38 CFR 3.25 - Parent's dependency and indemnity compensation (DIC)-Method of payment computation.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Parent's dependency and... Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS ADJUDICATION Pension, Compensation, and Dependency and Indemnity Compensation General § 3.25 Parent's dependency and indemnity compensation (DIC)—Method of payment...

  3. Exposure time independent summary statistics for assessment of drug dependent cell line growth inhibition.

    Science.gov (United States)

    Falgreen, Steffen; Laursen, Maria Bach; Bødker, Julie Støve; Kjeldsen, Malene Krag; Schmitz, Alexander; Nyegaard, Mette; Johnsen, Hans Erik; Dybkær, Karen; Bøgsted, Martin

    2014-06-05

    In vitro generated dose-response curves of human cancer cell lines are widely used to develop new therapeutics. The curves are summarised by simplified statistics that ignore the conventionally used dose-response curves' dependency on drug exposure time and growth kinetics. This may lead to suboptimal exploitation of data and biased conclusions on the potential of the drug in question. Therefore we set out to improve the dose-response assessments by eliminating the impact of time dependency. First, a mathematical model for drug induced cell growth inhibition was formulated and used to derive novel dose-response curves and improved summary statistics that are independent of time under the proposed model. Next, a statistical analysis workflow for estimating the improved statistics was suggested consisting of 1) nonlinear regression models for estimation of cell counts and doubling times, 2) isotonic regression for modelling the suggested dose-response curves, and 3) resampling based method for assessing variation of the novel summary statistics. We document that conventionally used summary statistics for dose-response experiments depend on time so that fast growing cell lines compared to slowly growing ones are considered overly sensitive. The adequacy of the mathematical model is tested for doxorubicin and found to fit real data to an acceptable degree. Dose-response data from the NCI60 drug screen were used to illustrate the time dependency and demonstrate an adjustment correcting for it. The applicability of the workflow was illustrated by simulation and application on a doxorubicin growth inhibition screen. The simulations show that under the proposed mathematical model the suggested statistical workflow results in unbiased estimates of the time independent summary statistics. Variance estimates of the novel summary statistics are used to conclude that the doxorubicin screen covers a significant diverse range of responses ensuring it is useful for biological
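    The time dependency criticized in this abstract can be illustrated with a toy exponential-growth model (an illustrative sketch under simplified assumptions, not the authors' exact formulation): if a drug scales the net growth rate, a fixed-time readout makes the same drug effect look stronger in fast-growing lines.

```python
# Why fixed-time dose-response readouts depend on growth kinetics:
# cells grow exponentially, and the drug reduces the growth rate by a
# fraction dose_effect (0 = no effect, 1 = growth fully stopped).
import math

def relative_cell_count(dose_effect: float, doubling_time: float, t: float) -> float:
    """Treated/untreated cell-count ratio after t hours of exposure."""
    g = math.log(2) / doubling_time          # untreated growth rate (per hour)
    return math.exp(-dose_effect * g * t)    # ratio of treated to untreated counts

# Identical drug effect, same 72 h readout: the fast-growing line
# appears far more "sensitive" than the slow-growing one.
fast = relative_cell_count(0.5, doubling_time=24, t=72)
slow = relative_cell_count(0.5, doubling_time=72, t=72)
print(f"fast line ratio: {fast:.2f}, slow line ratio: {slow:.2f}")
```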

  4. Assessment of seismic margin calculation methods

    International Nuclear Information System (INIS)

    Kennedy, R.P.; Murray, R.C.; Ravindra, M.K.; Reed, J.W.; Stevenson, J.D.

    1989-03-01

    Seismic margin review of nuclear power plants requires that the High Confidence of Low Probability of Failure (HCLPF) capacity be calculated for certain components. The candidate methods for calculating the HCLPF capacity as recommended by the Expert Panel on Quantification of Seismic Margins are the Conservative Deterministic Failure Margin (CDFM) method and the Fragility Analysis (FA) method. The present study evaluated these two methods using some representative components in order to provide further guidance in conducting seismic margin reviews. It is concluded that either of the two methods could be used for calculating HCLPF capacities. 21 refs., 9 figs., 6 tabs

  5. Travel Efficiency Assessment Method: Three Case Studies

    Science.gov (United States)

    This slide presentation summarizes three case studies EPA conducted in partnership with Boston, Kansas City, and Tucson, to assess the potential benefits of employing travel efficiency strategies in these areas.

  6. Culture-Dependent and -Independent Methods Capture Different Microbial Community Fractions in Hydrocarbon-Contaminated Soils.

    Directory of Open Access Journals (Sweden)

    Franck O P Stefani

    Full Text Available Bioremediation is a cost-effective and sustainable approach for treating polluted soils, but our ability to improve on current bioremediation strategies depends on our ability to isolate microorganisms from these soils. Although culturing is widely used in bioremediation research and applications, it is unknown whether the composition of cultured isolates closely mirrors the indigenous microbial community from contaminated soils. To assess this, we paired culture-independent (454-pyrosequencing of total soil DNA) with culture-dependent (isolation using seven different growth media) techniques to analyse the bacterial and fungal communities from hydrocarbon-contaminated soils. Although bacterial and fungal rarefaction curves were saturated for both methods, only 2.4% and 8.2% of the bacterial and fungal OTUs, respectively, were shared between datasets. Isolated taxa increased the total recovered species richness by only 2% for bacteria and 5% for fungi. Interestingly, none of the bacteria that we isolated were representative of the major bacterial OTUs recovered by 454-pyrosequencing. Isolation of fungi was moderately more effective at capturing the dominant OTUs observed by culture-independent analysis, as 3 of 31 cultured fungal strains ranked among the 20 most abundant fungal OTUs in the 454-pyrosequencing dataset. This study is one of the most comprehensive comparisons of microbial communities from hydrocarbon-contaminated soils using both isolation and high-throughput sequencing methods.

  7. Culture-Dependent and -Independent Methods Capture Different Microbial Community Fractions in Hydrocarbon-Contaminated Soils.

    Science.gov (United States)

    Stefani, Franck O P; Bell, Terrence H; Marchand, Charlotte; de la Providencia, Ivan E; El Yassimi, Abdel; St-Arnaud, Marc; Hijri, Mohamed

    2015-01-01

    Bioremediation is a cost-effective and sustainable approach for treating polluted soils, but our ability to improve on current bioremediation strategies depends on our ability to isolate microorganisms from these soils. Although culturing is widely used in bioremediation research and applications, it is unknown whether the composition of cultured isolates closely mirrors the indigenous microbial community from contaminated soils. To assess this, we paired culture-independent (454-pyrosequencing of total soil DNA) with culture-dependent (isolation using seven different growth media) techniques to analyse the bacterial and fungal communities from hydrocarbon-contaminated soils. Although bacterial and fungal rarefaction curves were saturated for both methods, only 2.4% and 8.2% of the bacterial and fungal OTUs, respectively, were shared between datasets. Isolated taxa increased the total recovered species richness by only 2% for bacteria and 5% for fungi. Interestingly, none of the bacteria that we isolated were representative of the major bacterial OTUs recovered by 454-pyrosequencing. Isolation of fungi was moderately more effective at capturing the dominant OTUs observed by culture-independent analysis, as 3 of 31 cultured fungal strains ranked among the 20 most abundant fungal OTUs in the 454-pyrosequencing dataset. This study is one of the most comprehensive comparisons of microbial communities from hydrocarbon-contaminated soils using both isolation and high-throughput sequencing methods.

  8. Mixing Methods in Assessing Coaches' Decision Making

    Science.gov (United States)

    Vergeer, Ineke; Lyle, John

    2007-01-01

    Mixing methods has recently achieved respectability as an appropriate approach to research design, offering a variety of advantages (Tashakkori & Teddlie, 2003). The purpose of this paper is to outline and evaluate a mixed methods approach within the domain of coaches' decision making. Illustrated with data from a policy-capturing study on…

  9. Radioisotope method for assessing skin blood pressure

    International Nuclear Information System (INIS)

    Tarkowska, A.; Misiunia, P.; Woytowicz, A.; Olewinski, T.

    1979-01-01

    A method of measuring the skin blood pressure (SBP) developed by Holstein and Lassen is described. The method is based on determination of the applied pressure causing blockade of Na131I clearance from the site of its intradermal injection. Using this method it was found that in the lower extremities of healthy subjects the SBP approached the diastolic pressure measured by the conventional method in the brachial artery. On the other hand, in patients with obliterative arteriosclerosis and in Buerger's disease the SBP was considerably lower than the diastolic arterial pressure. The authors conclude that the method gives a good insight into the state of blood supply to the extremities in healthy subjects and in peripheral vascular failure. (author)

  10. Can Confirmation Measures Reflect Statistically Sound Dependencies in Data? The Concordance-based Assessment

    Directory of Open Access Journals (Sweden)

    Susmaga Robert

    2018-03-01

    Full Text Available The paper considers particular interestingness measures, called confirmation measures (also known as Bayesian confirmation measures), used for the evaluation of "if evidence, then hypothesis" rules. The agreement of such measures with a statistically sound (significant) dependency between the evidence and the hypothesis in data is thoroughly investigated. The popular confirmation measures were not defined to possess such a form of agreement. However, in error-prone environments, a potential lack of agreement may lead to undesired effects, e.g. when a measure indicates either strong confirmation or strong disconfirmation while in fact there is only weak dependency between the evidence and the hypothesis. In order to detect and prevent such situations, the paper employs a coefficient for assessing the level of dependency between the evidence and the hypothesis in data, and introduces a method of quantifying the level of agreement (referred to as concordance) between this coefficient and the measure being analysed. The concordance is characterized and visualised using specialized histograms, scatter-plots, etc. Moreover, risk-related interpretations of the concordance are introduced. Using a set of 12 confirmation measures, the paper presents experiments designed to establish the actual concordance as well as other useful characteristics of the measures.
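    Confirmation measures of the kind discussed here are computed from the joint distribution of evidence E and hypothesis H. A minimal sketch of two commonly used measures, computed from 2x2 contingency counts (measure names follow common usage; the counts are hypothetical and the paper's own set of 12 measures may differ):

```python
# Two Bayesian confirmation measures from a 2x2 contingency table of
# evidence E vs hypothesis H. Counts are illustrative, not from the paper.
def confirmation(n_eh, n_e_noth, n_note_h, n_note_noth):
    """Return (D, S) for counts of (E,H), (E,not H), (not E,H), (not E,not H)."""
    n = n_eh + n_e_noth + n_note_h + n_note_noth
    p_h = (n_eh + n_note_h) / n                  # P(H)
    p_h_given_e = n_eh / (n_eh + n_e_noth)       # P(H|E)
    p_h_given_note = n_note_h / (n_note_h + n_note_noth)  # P(H|not E)
    d = p_h_given_e - p_h        # difference measure D(H, E)
    s = p_h_given_e - p_h_given_note             # measure S(H, E)
    return d, s

d, s = confirmation(40, 10, 20, 30)
print(f"D = {d:.3f}, S = {s:.3f}")
```

    Positive values indicate that E confirms H; a statistically sound dependency check would additionally test whether the underlying association in the table is significant.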

  11. Diffusion Capillary Phantom vs. Human Data: Outcomes for Reconstruction Methods Depend on Evaluation Medium

    Directory of Open Access Journals (Sweden)

    Sarah D. Lichenstein

    2016-09-01

    Full Text Available Purpose: Diffusion MRI provides a non-invasive way of estimating structural connectivity in the brain. Many studies have used diffusion phantoms as benchmarks to assess the performance of different tractography reconstruction algorithms and assumed that the results can be applied to in vivo studies. Here we examined whether quality metrics derived from a common, publicly available diffusion phantom can reliably predict tractography performance in human white matter tissue. Material and Methods: We compared estimates of fiber length and fiber crossing among a simple tensor model (diffusion tensor imaging), a more complicated model (ball-and-sticks), and model-free (diffusion spectrum imaging, generalized q-sampling imaging) reconstruction methods using a capillary phantom and in vivo human data (N=14). Results: Our analysis showed that evaluation outcomes differ depending on whether they were obtained from phantom or human data. Specifically, the diffusion phantom favored a more complicated model over a simple tensor model or model-free methods for resolving crossing fibers. On the other hand, the human studies showed the opposite pattern of results, with the model-free methods being more advantageous than model-based methods or simple tensor models. This performance difference was consistent across several metrics, including estimating fiber length and resolving fiber crossings in established white matter pathways. Conclusions: These findings indicate that the construction of current capillary diffusion phantoms tends to favor complicated reconstruction models over a simple tensor model or model-free methods, whereas the in vivo data tend to produce opposite results. This brings into question previous phantom-based evaluation approaches and suggests that a more realistic phantom or simulation is necessary to accurately predict the relative performance of different tractography reconstruction methods. Acronyms: BSM: ball-and-sticks model; d

  12. A Method for Assessing Quality of Service in Broadband Networks

    DEFF Research Database (Denmark)

    Bujlow, Tomasz; Riaz, M. Tahir; Pedersen, Jens Myrup

    2012-01-01

    Monitoring of Quality of Service (QoS) in high-speed Internet infrastructure is a challenging task. However, precise assessments must take into account the fact that the requirements for the given quality level are service-dependent. Backbone QoS monitoring and analysis requires processing of large...... taken from the description of system sockets. This paper proposes a new method for measuring the Quality of Service (QoS) level in broadband networks, based on our Volunteer-Based System for collecting the training data, Machine Learning Algorithms for generating the classification rules and application...... and provide C5.0 high-quality training data, divided into groups corresponding to different types of applications. It was found that currently existing means of collecting data (classification by ports, Deep Packet Inspection, statistical classification, public data sources) are not sufficient and they do...

  13. Comparative analysis of selected hydromorphological assessment methods

    Czech Academy of Sciences Publication Activity Database

    Šípek, Václav; Matoušková, M.; Dvořák, M.

    2010-01-01

    Roč. 169, 1-4 (2010), s. 309-319 ISSN 0167-6369 Institutional support: RVO:67985874 Keywords : Hydromorphology * Ecohydromorphological river habitat assessment: EcoRivHab * Rapid Bioassessment Protocol * LAWA Field and Overview Survey * Libechovka River * Bilina River * Czech Republic Subject RIV: DA - Hydrology ; Limnology Impact factor: 1.436, year: 2010

  14. Method of operator safety assessment for underground mobile mining equipment

    Science.gov (United States)

    Działak, Paulina; Karliński, Jacek; Rusiński, Eugeniusz

    2018-01-01

    The paper presents a method of assessing the safety of operators of mobile mining equipment (MME), which is adapted to current and future geological and mining conditions. The authors focused on underground mines, with special consideration of copper mines (KGHM). As extraction reaches into deeper layers of the deposit it can activate natural hazards, which, thus far, have been considered unusual and whose range and intensity are different depending on the field of operation. One of the main hazards that affect work safety and can become the main barrier in the exploitation of deposits at greater depths is climate threat. The authors have analysed the phenomena which may impact the safety of MME operators, with consideration of accidents that have not yet been studied and are not covered by the current safety standards for this group of miners. An attempt was made to develop a method for assessing the safety of MME operators, which takes into account the mentioned natural hazards and which is adapted to current and future environmental conditions in underground mines.

  15. Method of operator safety assessment for underground mobile mining equipment

    Directory of Open Access Journals (Sweden)

    Działak Paulina

    2018-01-01

    Full Text Available The paper presents a method of assessing the safety of operators of mobile mining equipment (MME), which is adapted to current and future geological and mining conditions. The authors focused on underground mines, with special consideration of copper mines (KGHM). As extraction reaches into deeper layers of the deposit it can activate natural hazards, which, thus far, have been considered unusual and whose range and intensity are different depending on the field of operation. One of the main hazards that affect work safety and can become the main barrier in the exploitation of deposits at greater depths is climate threat. The authors have analysed the phenomena which may impact the safety of MME operators, with consideration of accidents that have not yet been studied and are not covered by the current safety standards for this group of miners. An attempt was made to develop a method for assessing the safety of MME operators, which takes into account the mentioned natural hazards and which is adapted to current and future environmental conditions in underground mines.

  16. Extrapolation Method for System Reliability Assessment

    DEFF Research Database (Denmark)

    Qin, Jianjun; Nishijima, Kazuyoshi; Faber, Michael Havbro

    2012-01-01

    The present paper presents a new scheme for probability integral solution for system reliability analysis, which takes basis in the approaches by Naess et al. (2009) and Bucher (2009). The idea is to evaluate the probability integral by extrapolation, based on a sequence of MC approximations...... of integrals with scaled domains. The performance of this class of approximation depends on the approach applied for the scaling and the functional form utilized for the extrapolation. A scheme for this task is derived here taking basis in the theory of asymptotic solutions to multinormal probability integrals...... It is found that the proposed scheme is efficient and adds to generality for this class of approximations for probability integrals....
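
    The extrapolation idea behind this class of schemes can be sketched on a toy limit state. In the sketch below (an illustration of the general idea, not the authors' exact scheme), crude Monte Carlo estimates the failure probability of scaled-down versions of the problem, where failures are frequent, and a fit in the scaling parameter is extrapolated to the original problem:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Toy limit state g(x) = beta - x with x ~ N(0,1); exact P_f = Phi(-beta).
beta = 3.5
x = rng.standard_normal(400_000)

# Crude MC estimates of the scaled failure probability p(lam) = P(lam*beta - x <= 0),
# evaluated at scalings where failures are frequent enough to count reliably.
lams = np.array([0.4, 0.5, 0.6, 0.7, 0.8])
p_hat = np.array([(x >= lam * beta).mean() for lam in lams])

# Fit log10 p(lam) with a quadratic and extrapolate to lam = 1 (the real problem).
coef = np.polyfit(lams, np.log10(p_hat), deg=2)
p_extrap = 10 ** np.polyval(coef, 1.0)

p_exact = 0.5 * (1 - erf(beta / sqrt(2)))
print(f"extrapolated P_f = {p_extrap:.2e}, exact = {p_exact:.2e}")
```

For this Gaussian toy problem the extrapolated estimate lands close to the exact value of about 2.3e-4, illustrating how estimates at frequent-failure scalings can inform the rare-event level; the quadratic fit in log-probability is one possible choice of functional form, not the one derived in the paper.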

  17. [Methods and Applications of Psychological Stress State Assessment].

    Science.gov (United States)

    Li, Xin; Yang, Yadan; Hou, Yongjie; Chen, Zetao

    2015-08-01

    In this paper, the response of the individual's physiological system under psychological stress is discussed, providing theoretical support for research on psychological stress assessment. The two methods currently used for psychological stress assessment, i.e., questionnaire-based assessment and physiological-parameter-based assessment, are summarized. The future direction of psychological stress assessment research is then pointed out. We hope that this work provides further support and help to psychological stress assessment studies.

  18. Assessing Social Isolation: Pilot Testing Different Methods.

    Science.gov (United States)

    Taylor, Harry Owen; Herbers, Stephanie; Talisman, Samuel; Morrow-Howell, Nancy

    2016-04-01

    Social isolation is a significant public health problem among many older adults; however, most of the empirical knowledge about isolation derives from community-based samples. There has been less attention given to isolation in senior housing communities. The objectives of this pilot study were to test two methods to identify socially isolated residents in low-income senior housing and compare findings about the extent of isolation from these two methods. The first method, self-report by residents, included 47 out of 135 residents who completed in-person interviews. To determine self-report isolation, residents completed the Lubben Social Network Scale 6 (LSNS-6). The second method involved a staff member who reported the extent of isolation on all 135 residents via an online survey. Results indicated that 26% of residents who were interviewed were deemed socially isolated by the LSNS-6. Staff members rated 12% of residents as having some or a lot of isolation. In comparing the two methods, staff members rated 2% of interviewed residents as having a lot of isolation. The combination of self-report and staff report could be more informative than just self-report alone, particularly when participation rates are low. However, researchers should be aware of the potential discrepancy between these two methods.
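    The self-report arm of such a screen is straightforward to operationalize. Below is a minimal sketch of LSNS-6-style scoring: the six items, the 0-5 item range, and the below-12 isolation marker follow the scale's common usage, but the record does not state which cutoff the study applied, so treat these numbers as assumptions:

```python
# Hypothetical sketch of LSNS-6-style scoring: six items, each rated 0-5,
# summed to a 0-30 total; a total below 12 is a commonly used marker of
# social isolation (an assumption here, not a value taken from the study).
CUTOFF = 12

def lsns6_total(item_scores):
    """Sum the six item ratings after validating their range."""
    assert len(item_scores) == 6 and all(0 <= s <= 5 for s in item_scores)
    return sum(item_scores)

def is_isolated(item_scores):
    return lsns6_total(item_scores) < CUTOFF

print(is_isolated([1, 2, 1, 0, 2, 1]))  # total 7  -> True
print(is_isolated([3, 4, 2, 3, 3, 4]))  # total 19 -> False
```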

  19. Feasibility of Ecological Momentary Assessment Using Cellular Telephones in Methamphetamine Dependent Subjects

    Directory of Open Access Journals (Sweden)

    John Mendelson

    2008-01-01

    Full Text Available Background: Predictors of relapse to methamphetamine use are poorly understood. State variables may play an important role in relapse, but they have been difficult to measure at frequent intervals in outpatients. Methods: We conducted a feasibility study of the use of cellular telephones to collect state variable data from outpatients. Six subjects in treatment for methamphetamine dependence were called three times per weekday for approximately seven weeks. Seven questionnaires were administered that assessed craving, stress, affect and current type of location and social environment. Results: 395/606 (65%) of attempted calls were completed. The mean time to complete each call was 4.9 (s.d. 1.8) minutes and the mean time to complete each item was 8.4 (s.d. 4.8) seconds. Subjects rated the acceptability of the procedures as good. All six cellular phones and battery chargers were returned undamaged. Conclusion: Cellular telephones are a feasible method for collecting state data from methamphetamine-dependent outpatients.

  20. Testing the multi-configuration time-dependent Hartree-Fock method

    International Nuclear Information System (INIS)

    Zanghellini, Juergen; Kitzler, Markus; Brabec, Thomas; Scrinzi, Armin

    2004-01-01

    We test the multi-configuration time-dependent Hartree-Fock method as a new approach towards the numerical calculation of dynamical processes in multi-electron systems, using the harmonic quantum dot and one-dimensional helium in strong laser pulses as models. We find rapid convergence for quantities such as ground-state population, correlation coefficient and single ionization towards the exact results. The method converges where the time-dependent Hartree-Fock method fails qualitatively.

  1. An analytical nodal method for time-dependent one-dimensional discrete ordinates problems

    International Nuclear Information System (INIS)

    Barros, R.C. de

    1992-01-01

    In recent years, relatively little work has been done in developing time-dependent discrete ordinates (S_N) computer codes. Therefore, the topic of time integration methods certainly deserves further attention. In this paper, we describe a new coarse-mesh method for time-dependent monoenergetic S_N transport problems in slab geometry. This numerical method preserves the analytic solution of the transverse-integrated S_N nodal equations by constants, so we call our method the analytical constant nodal (ACN) method. For time-independent S_N problems in finite slab geometry and for time-dependent infinite-medium S_N problems, the ACN method generates numerical solutions that are completely free of truncation errors. Based on this positive feature, we expect the ACN method to be more accurate than conventional numerical methods for S_N transport calculations on coarse space-time grids.
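    For contrast with the truncation-error-free ACN solutions, the kind of conventional fine-mesh scheme the record alludes to can be sketched as a diamond-difference S_N sweep with source iteration for a steady, one-group, homogeneous slab. This is a baseline illustration only, not the ACN method, and all data are made up:

```python
import numpy as np

# Conventional diamond-difference S_N sweep for a steady, one-group,
# homogeneous slab with isotropic scattering, a uniform source and
# vacuum boundaries (illustrative baseline, not the ACN method).
L, I, N = 10.0, 200, 4           # slab width (mfp), cells, S_N order
sig_t, sig_s, q = 1.0, 0.5, 1.0  # total/scattering cross sections, source
h = L / I
mu, w = np.polynomial.legendre.leggauss(N)  # ordinates and weights (sum w = 2)

phi = np.zeros(I)
for _ in range(500):
    src = 0.5 * (sig_s * phi + q)           # isotropic emission density per angle
    phi_new = np.zeros(I)
    for m in range(N):
        psi_edge = 0.0                       # vacuum boundary for incoming direction
        cells = range(I) if mu[m] > 0 else range(I - 1, -1, -1)
        for i in cells:
            a = abs(mu[m]) / h
            # diamond difference: psi_cell = (S + 2a*psi_in) / (sig_t + 2a)
            psi_c = (src[i] + 2 * a * psi_edge) / (sig_t + 2 * a)
            psi_edge = 2 * psi_c - psi_edge  # outgoing edge flux
            phi_new[i] += w[m] * psi_c
    if np.max(np.abs(phi_new - phi)) < 1e-8:
        phi = phi_new
        break
    phi = phi_new

# Mid-slab flux approaches the infinite-medium value q/(sig_t - sig_s) = 2.
print(f"mid-slab scalar flux = {phi[I // 2]:.3f}")
```

The analytic infinite-medium check (q divided by the absorption cross section) gives a quick sanity test of the sweep far from the vacuum boundaries.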

  2. Evaluation of Dynamic Methods for Earthwork Assessment

    Directory of Open Access Journals (Sweden)

    Vlček Jozef

    2015-05-01

    Full Text Available Rapid development of road construction imposes demands for fast, high-quality methods of earthwork evaluation. Dynamic methods are now adopted in numerous civil engineering sectors; in particular, evaluation of earthwork quality can be sped up using dynamic equipment. This paper presents the results of parallel measurements with chosen devices for determining the level of compaction of soils. The measurements were used to develop correlations between the values obtained from the various apparatuses. The correlations show that the examined apparatuses are suitable for examining the compaction level of fine-grained soils, with consideration of the boundary conditions of the equipment used. The presented methods are quick and the results are available immediately after measurement; they are thus suitable when construction works must be performed in a short period of time.

  3. Blast casting requires fresh assessment of methods

    Energy Technology Data Exchange (ETDEWEB)

    Pilshaw, S.R.

    1987-08-01

    The article discusses the reasons why conventional blasting operations (chiefly the explosive products, drilling and initiation methods) are inefficient, and suggests new methods and materials to overcome the problems of the conventional operations. The author suggests that the use of bulk ANFO for casting, instead of high-energy, high-density explosives with high-velocity detonation, is more effective in producing heave action. Similarly, drilling smaller blast holes than is conventional allows better distribution of the explosive load in the rock mass. The author also suggests that casting would be more efficient if the shot rows were loaded differently to produce a variable-burden blasting pattern.

  4. Comparative study of environmental impact assessment methods ...

    African Journals Online (AJOL)

    This study aims to introduce and systematically investigate the environmental issues during important decision-making stages. Meanwhile, impacts of development on the environmental components will be also analyzed. This research studies various methods of predicting the environmental changes and determining the ...

  5. Microbiological methods for assessing soil quality

    NARCIS (Netherlands)

    Bloem, J.; Hopkins, D.W.; Benedetti, A.

    2006-01-01

    This book provides a selection of microbiological methods that are already applied in regional or national soil quality monitoring programs. It is split into two parts: part one gives an overview of approaches to monitoring, evaluating and managing soil quality. Part two provides a selection of

  6. A new method for spray deposit assessment

    Science.gov (United States)

    Chester M. Himel; Leland Vaughn; Raymond P. Miskus; Arthur D. Moore

    1965-01-01

    Solid fluorescent particles suspended in a spray liquid are distributed in direct proportion to the size of the spray droplets. Use of solid fluorescent particles is the basis of a new method for visual recognition of the size and number of droplets impinging on target and nontarget portions of sprayed areas.

  7. Health smart home: towards an assistant tool for automatic assessment of the dependence of elders.

    Science.gov (United States)

    Le, Xuan Hoa Binh; Di Mascolo, Maria; Gouin, Alexia; Noury, Norbert

    2007-01-01

    In order to help elders living alone to age in place independently and safely, it can be useful to have an assistant tool that can automatically assess their dependence and issue an alert if there is any loss of autonomy. Dependence can be assessed by the degree of performance, by the elders, of activities of daily living. This article presents an approach enabling activity recognition for an elder living alone in a Health Smart Home equipped with noninvasive sensors.

  8. Design and initial validation of the Raster method for telecom service availability risk assessment

    NARCIS (Netherlands)

    Vriezekolk, E.; Wieringa, R.J.; Etalle, S.; Rothkrantz, L.; Ristvej, J.; Franco, Z.

    2012-01-01

    Crisis organisations depend on telecommunication services; unavailability of these services reduces the effectiveness of crisis response. Crisis organisations should therefore be aware of availability risks, and need a suitable risk assessment method. Such a method needs to be aware of the

  9. Safety assessment of infrastructures using a new Bayesian Monte Carlo method

    NARCIS (Netherlands)

    Rajabali Nejad, Mohammadreza; Demirbilek, Z.

    2011-01-01

    A recently developed Bayesian Monte Carlo (BMC) method and its application to safety assessment of structures are described in this paper. We use a one-dimensional BMC method that was proposed in 2009 by Rajabalinejad in order to develop a weighted logical dependence between successive Monte Carlo

  10. Design and initial validation of the Raster method for telecom service availability risk assessment

    NARCIS (Netherlands)

    Vriezekolk, E.; Wieringa, Roelf J.; Etalle, Sandro; Rothkrantz, L.J.M.; Ristvej, J.; Franco, Z.

    Crisis organisations depend on telecommunication services; unavailability of these services reduces the effectiveness of crisis response. Crisis organisations should therefore be aware of availability risks, and need a suitable risk assessment method. Such a method needs to be aware of the

  11. Feasibility of applying site-dependent impact assessment of acidification in LCA

    NARCIS (Netherlands)

    Bellekom, A.A.; Potting, J; Benders, R.M.J.

    2006-01-01

    Goal, Scope and Background. Taking into account the location of emissions and its subsequent, site-dependent impacts improves the accuracy of LCIA. Opponents of site-dependent impact assessment argue that it is too time-consuming to collect the required additional inventory data. In this paper we

  12. Nursing-care dependency : Development of an assessment scale for demented and mentally handicapped patients

    NARCIS (Netherlands)

    Dijkstra, Ate; Buist, Girbe; Dassen, T

    1996-01-01

    This article describing the first phase in the development of an assessment scale of nursing-care dependency (NCD) for Dutch demented and mentally handicapped patients focuses on the background to the study and the content validation of the nursing-care dependency scale. The scale aims to

  13. Method of assessing severe accident management strategies

    International Nuclear Information System (INIS)

    Kastenberg, W.E.; Apostolakis, G.; Dhir, V.K.; Okrent, D.; Jae, M.; Lim, H.; Milici, T.; Park, H.; Swider, J.; Xing, L.; Yu, D.

    1991-01-01

    Accident management can be defined as the innovative use of existing and/or alternative resources, systems, and actions to prevent or mitigate a severe accident. A significant number of probabilistic safety assessments (PSAs) have been completed that yield the principal plant vulnerabilities. These vulnerabilities can be categorized as (1) dominant sequences with respect to core-melt frequency; (2) dominant sequences with respect to various risk measures; (3) dominant threats that challenge safety functions; and (4) dominant threats with respect to failure of safety systems. For each sequence/threat and each candidate strategy, there may be several options available to the operator. Each strategy/option involves phenomenological and operational considerations regarding uncertainty. These considerations include uncertainties in key phenomena, operator behavior, system availability and behavior, and available information. This paper presents a methodology for assessing severe accident management strategies given the key uncertainties delineated at two workshops held at the University of California, Los Angeles. Based on decision trees and influence diagrams, the methodology is currently being applied to two case studies: cavity flooding in a pressurized water reactor (PWR) to prevent vessel penetration or failure, and drywell flooding in a boiling water reactor to prevent vessel and/or containment failure.

  14. Quality assessment in radiological imaging methods

    International Nuclear Information System (INIS)

    Herstel, W.

    1985-01-01

    The equipment used in diagnostic radiology is becoming more and more complicated. In the imaging process four components are distinguished, each of which can introduce loss in essential information: the X-ray source, the human body, the imaging system and the observer. In nearly all imaging methods the X-ray quantum fluctuations are a limitation to observation. But there are also technical factors. As an illustration it is shown how in a television scanning process the resolution is restricted by the system parameters. A short review is given of test devices and the results are given of an image comparison based on regular bar patterns. Although this method has the disadvantage of measuring mainly the limiting resolution, the results of the test correlate reasonably well with the subjective appreciations of radiographs of bony structures made by a group of trained radiologists. Fluoroscopic systems should preferably be tested using moving structures under dynamic conditions. (author)

  15. A nomograph method for assessing body weight.

    Science.gov (United States)

    Thomas, A E; McKay, D A; Cutlip, M B

    1976-03-01

    The ratio of weight/height emerges from varied epidemiological studies as the most generally useful index of relative body mass in adults. The authors present a nomograph to facilitate use of this relationship in clinical situations. While showing the range of weight given as desirable in life insurance studies, the scale expresses relative weight as a continuous variable. This method encourages use of clinical judgment in interpreting "overweight" and "underweight" and in accounting for muscular and skeletal contributions to measured mass.
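    The arithmetic behind such a nomograph is simple enough to sketch. In the snippet below, relative weight is expressed as a percentage of a reference weight/height ratio; the reference value is a made-up placeholder, not a figure from the paper or from life insurance tables:

```python
# Illustrative relative-weight calculation in the spirit of the nomograph:
# weight/height treated as a continuous variable and compared with a
# reference ratio (the 'desirable' value below is a placeholder).
def weight_height_ratio(weight_kg, height_m):
    return weight_kg / height_m

def relative_weight(weight_kg, height_m, desirable_ratio=40.0):
    """Percent of a reference desirable weight/height ratio (placeholder value)."""
    return 100.0 * weight_height_ratio(weight_kg, height_m) / desirable_ratio

print(f"{relative_weight(80, 1.8):.0f}% of reference")  # 80/1.8 = 44.4 -> 111%
```

Expressing the result as a continuous percentage, rather than a binary "overweight"/"underweight" label, mirrors the clinical-judgment emphasis of the record.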

  16. Survey of Methods to Assess Workload

    Science.gov (United States)

    1979-08-01

    thesis study which had to do with the effect of binaural beats upon performance (2) found out there was a subjectively experienced quality of beats ...were forced to conclude that the neural mechanism by which binaural beats influenced performance is not open to correct subjective evaluation. In terms of...methods for developing indices of pilot workload, FAA Report (FAA-AN-77-15), July 1977. 2. ,' R. E. The effect of binaural beats on performance, J

  17. Methods to Quantify Uncertainty in Human Health Risk Assessment

    National Research Council Canada - National Science Library

    Aurelius, Lea

    1998-01-01

    ...) and other health professionals, such as the Bioenvironmental Engineer, to identify the appropriate use of probabilistic techniques for a site, and the methods by which probabilistic risk assessment...

  18. Seismic assessment of a site using the time series method

    International Nuclear Information System (INIS)

    Krutzik, N.J.; Rotaru, I.; Bobei, M.; Mingiuc, C.; Serban, V.; Androne, M.

    1997-01-01

    To increase the safety of an NPP located on a seismic site, the seismic acceleration level to which the NPP should be qualified must be as representative as possible for that site, with a conservative but not exaggerated degree of safety. Treating the seismic events affecting the site as independent events, and using statistical methods to define safety levels with a very low annual occurrence probability (10^-4), may lead to some exaggeration of the seismic safety level. The use of very high values for the seismic acceleration imposed by the seismic safety levels required by the hazard analysis may lead to very costly technical solutions that can make plant operation more difficult and increase maintenance costs. Considering seismic events as a time series with dependence among the events may lead to a more representative assessment of the seismic activity of an NPP site, and consequently to a prognosis of the seismic levels to which the NPP should be qualified throughout its life-span. That prognosis should consider the actual seismic activity (including small earthquakes in real time) of the focuses that affect the plant site. The paper proposes the application of autoregressive time series to issue a prognosis on the seismic activity of a focus, and presents an analysis by this method of the Vrancea focus, which affects the NPP Cernavoda site. The paper also presents the manner of analysing the focus activity under the new approach and assesses the maximum seismic acceleration that may affect NPP Cernavoda throughout its life-span (∼30 years). Development and application of new mathematical analysis methods, both for long and short time intervals, may lead to important contributions to the process of foretelling future seismic events. (authors)
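    The autoregressive machinery proposed here is standard time-series fitting. A minimal sketch (on simulated data, not the Vrancea catalogue) estimates AR coefficients via the Yule-Walker equations and issues a one-step-ahead prognosis:

```python
import numpy as np

# Sketch: fit an AR(p) model to a synthetic activity series via the
# Yule-Walker equations and issue a one-step-ahead prognosis.
# The series below is simulated, not real seismic data.
def yule_walker(x, p):
    x = np.asarray(x, float) - np.mean(x)
    # biased sample autocovariances r[0..p]
    r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])      # AR coefficients a_1..a_p

rng = np.random.default_rng(1)
true_a = [0.6, -0.2]
x = np.zeros(2000)
for t in range(2, len(x)):
    x[t] = true_a[0] * x[t - 1] + true_a[1] * x[t - 2] + rng.standard_normal()

a = yule_walker(x, 2)
forecast = a @ x[-1:-3:-1]                # a_1*x[T] + a_2*x[T-1]
print("estimated AR coefficients:", np.round(a, 2))
print("one-step-ahead forecast:", round(float(forecast), 2))
```

With 2000 samples the Yule-Walker estimates recover the simulated coefficients closely; applied to a real catalogue one would first choose the order p and check stationarity.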

  19. Evaluation of a dysphagia screening system based on the Mann Assessment of Swallowing Ability for use in dependent older adults.

    Science.gov (United States)

    Ohira, Mariko; Ishida, Ryo; Maki, Yoshinobu; Ohkubo, Mai; Sugiyama, Tetsuya; Sakayori, Takaharu; Sato, Toru

    2017-04-01

    Dysphagia is common in dependent older adults. Thus, a method of evaluating eating and swallowing functions that can be used to diagnose and manage dysphagia in a simple and robust manner is required. In 2002, the Mann Assessment of Swallowing Ability (MASA) was introduced to identify dysphagia in acute-stage stroke patients. As the MASA enables easy screening, it might also be applicable to dependent older adults if appropriate MASA cut-off values and the most useful assessment items could be determined. In the present study, we attempted to determine suitable MASA cut-off values, and the most useful assessment items for predicting aspiration and pharyngeal retention in dependent older adults. Using the MASA, we evaluated the eating and swallowing functions of 50 dependent older adults with dysphagia. All of the patients also underwent videoendoscopic-based swallowing evaluations to detect aspiration and pharyngeal retention. The participants' characteristics and the utility of each assessment item were compared between various groups. Using the patients' videoendoscopic findings as a reference, receiver operating characteristic curve analysis was carried out to determine appropriate cut-off values for predicting aspiration and pharyngeal retention in dependent older adults. The optimal MASA cut-off values for predicting aspiration and pharyngeal retention were 122 points and 151 points, respectively. A total of 17 of the 24 clinical items assessed by the MASA were found to be associated with aspiration in dependent older adults. The MASA is a useful screening tool for evaluating eating and swallowing functions in dependent older adults. Geriatr Gerontol Int 2017; 17: 561-567. © 2016 Japan Geriatrics Society.
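    Cutoff selection by receiver operating characteristic analysis, as used to arrive at the 122- and 151-point thresholds, can be sketched with Youden's J statistic. The record does not state which optimality criterion the authors used, so both the criterion and the data below are assumptions for illustration:

```python
# Sketch of choosing a screening cutoff by ROC analysis: scan candidate
# cutoffs and keep the one maximizing Youden's J = sensitivity + specificity - 1.
# Scores and labels below are synthetic, not the MASA study data.
def best_cutoff(scores, has_condition):
    best = None
    for c in sorted(set(scores)):
        # the test is "positive" when the score is at or below the cutoff
        tp = sum(s <= c and y for s, y in zip(scores, has_condition))
        fn = sum(s > c and y for s, y in zip(scores, has_condition))
        tn = sum(s > c and not y for s, y in zip(scores, has_condition))
        fp = sum(s <= c and not y for s, y in zip(scores, has_condition))
        j = tp / (tp + fn) + tn / (tn + fp) - 1
        if best is None or j > best[1]:
            best = (c, j)
    return best

scores        = [100, 110, 118, 125, 130, 140, 150, 160, 170, 180]
has_condition = [True, True, True, True, False, False, False, False, False, False]
print(best_cutoff(scores, has_condition))  # -> (125, 1.0)
```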

  20. Application of the multigrid amplitude function method for time-dependent transport equation using MOC

    International Nuclear Information System (INIS)

    Tsujita, K.; Endo, T.; Yamamoto, A.

    2013-01-01

    An efficient numerical method for the time-dependent transport equation, the multigrid amplitude function (MAF) method, is proposed. The method of characteristics (MOC) is being widely used for reactor analysis thanks to advances in numerical algorithms and computer hardware. However, an efficient kinetic calculation method for MOC is still desirable, since it requires significant computation time. Various efficient numerical methods for solving the space-dependent kinetic equation, e.g., the improved quasi-static (IQS) and the frequency transform methods, have been developed so far, mainly for diffusion calculations. These calculation methods are known as effective numerical methods and they offer a way for faster computation. However, to the authors' knowledge, they have not been applied to kinetic calculations using MOC. Thus, the MAF method is applied to the kinetic calculation using MOC, aiming to reduce computation time. The MAF method is a unified numerical framework of conventional kinetic calculation methods, e.g., the IQS, the frequency transform, and the theta methods. Although the MAF method was originally developed for space-dependent kinetic calculations based on diffusion theory, it is extended to transport theory in the present study. The accuracy and computational time are evaluated through the TWIGL benchmark problem. The calculation results show the effectiveness of the MAF method. (authors)
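    The theta method, one of the schemes the MAF framework unifies, is easy to illustrate on point kinetics, a far simpler setting than the space-dependent MOC transport treated in the paper. The parameters below are illustrative, not taken from the paper:

```python
import numpy as np

# Minimal theta-method integrator for one-group point kinetics with one
# delayed-precursor family -- the simplest member of the family of schemes
# (theta, frequency transform, IQS) unified by the MAF framework.
# Parameters are illustrative placeholders.
beta, lam, Lam = 0.0065, 0.08, 1e-4   # delayed fraction, decay const, generation time
rho = 0.0                              # reactivity (critical -> steady state)
theta, dt = 0.5, 0.01                  # theta = 0.5 is Crank-Nicolson

# y = [n, C]; dy/dt = A y
A = np.array([[(rho - beta) / Lam, lam],
              [beta / Lam,        -lam]])
I2 = np.eye(2)
# theta method for a linear system: (I - theta*dt*A) y_{k+1} = (I + (1-theta)*dt*A) y_k
step = np.linalg.solve(I2 - theta * dt * A, I2 + (1 - theta) * dt * A)

y = np.array([1.0, beta / (lam * Lam)])  # equilibrium n and C at rho = 0
for _ in range(1000):
    y = step @ y
print(f"power after 10 s at rho=0: {y[0]:.6f}")  # remains at the steady state
```

Holding the critical steady state exactly is a standard sanity check before introducing reactivity transients.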

  1. A comparison of radiological risk assessment methods for environmental restoration

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Peterson, J.M.

    1993-01-01

    Evaluation of risks to human health from exposure to ionizing radiation at radioactively contaminated sites is an integral part of the decision-making process for determining the need for remediation and selecting remedial actions that may be required. At sites regulated under the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), a target risk range of 10^-4 to 10^-6 incremental cancer incidence over a lifetime is specified by the US Environmental Protection Agency (EPA) as generally acceptable, based on the reasonable maximum exposure to any individual under current and future land use scenarios. Two primary methods currently being used in conducting radiological risk assessments at CERCLA sites are compared in this analysis. Under the first method, the radiation dose equivalent (i.e., Sv or rem) to the receptors of interest over the appropriate period of exposure is estimated and multiplied by a risk factor (cancer risk/Sv). Alternatively, incremental cancer risk can be estimated by combining the EPA's cancer slope factors (previously termed potency factors) for radionuclides with estimates of radionuclide intake by ingestion and inhalation, as well as radionuclide concentrations in soil that contribute to external dose. The comparison of the two methods has demonstrated that resulting estimates of lifetime incremental cancer risk under these different methods may differ significantly, even when all other exposure assumptions are held constant, with the magnitude of the discrepancy depending upon the dominant radionuclides and exposure pathways for the site. The basis for these discrepancies, the advantages and disadvantages of each method, and the significance of the discrepant results for environmental restoration decisions are presented.
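    The two calculational routes can be contrasted in a few lines of arithmetic. Every number below is a placeholder chosen for illustration, not an EPA or ICRP value for any specific nuclide:

```python
# Sketch of the two calculational routes compared in the record, with
# entirely illustrative numbers (dose, intakes and factors are placeholders).

# Method 1: lifetime dose equivalent times a single risk factor
dose_sv = 0.02                      # committed effective dose over the exposure period
risk_per_sv = 5.0e-2                # illustrative cancer risk per sievert
risk_1 = dose_sv * risk_per_sv

# Method 2: pathway intakes times nuclide-specific slope factors
intakes_pci = {"ingestion": 4.0e4, "inhalation": 1.0e3}
slope_per_pci = {"ingestion": 2.0e-11, "inhalation": 9.0e-9}  # risk per pCi intake
risk_2 = sum(intakes_pci[p] * slope_per_pci[p] for p in intakes_pci)

print(f"method 1: {risk_1:.1e}, method 2: {risk_2:.1e}")
```

Even with consistent exposure assumptions, the two routes can land on different sides of the 10^-4 to 10^-6 target range, which is exactly the kind of discrepancy the record describes.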

  2. Testing an Automated Accuracy Assessment Method on Bibliographic Data

    Directory of Open Access Journals (Sweden)

    Marlies Olensky

    2014-12-01

    Full Text Available This study investigates automated data accuracy assessment as described in the data quality literature for its suitability to assess bibliographic data. The data samples comprise the publications of two Nobel Prize winners in the field of Chemistry for a 10-year publication period, retrieved from the two bibliometric data sources, Web of Science and Scopus. The bibliographic records are assessed against the original publication (the gold standard), and an automatic assessment method is compared to a manual one. The results show that the manual assessment method yields truer accuracy scores. The automated assessment method would need to be extended by additional rules that reflect specific characteristics of bibliographic data. Both data sources had higher accuracy scores per field than accumulated per record. This study contributes to the research on finding a standardized assessment method for bibliographic data accuracy, as well as defining the impact of data accuracy on the citation matching process.
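    The distinction the study draws between per-field and per-record accuracy can be made concrete with a toy comparison against a gold-standard record. The records below are fabricated examples, and exact string equality stands in for whatever matching rules a real assessment would use:

```python
# Toy version of field-level accuracy scoring for bibliographic records:
# each retrieved record is compared field by field against the original
# publication (the gold standard). Per-field scores exceed the whole-record
# score, as the study observes. All records here are fabricated.
gold = {"title": "On the Constitution of Atoms and Molecules",
        "year": "1913", "volume": "26", "pages": "1-25"}

records = [
    {"title": "On the Constitution of Atoms and Molecules",
     "year": "1913", "volume": "26", "pages": "1-24"},   # one wrong field
    {"title": "On the constitution of atoms and molecules",
     "year": "1913", "volume": "26", "pages": "1-25"},   # title case differs
]

def field_accuracy(records, gold, field):
    return sum(r[field] == gold[field] for r in records) / len(records)

def record_accuracy(records, gold):
    return sum(all(r[f] == gold[f] for f in gold) for r in records) / len(records)

for f in gold:
    print(f, field_accuracy(records, gold, f))
print("whole records:", record_accuracy(records, gold))
```

Here every field scores at least 0.5 individually, yet no record is fully correct, which is the per-field versus per-record gap in miniature.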

  3. The application of statistical methods to assess economic assets

    Directory of Open Access Journals (Sweden)

    D. V. Dianov

    2017-01-01

    out precisely in the boundary of the typological group to which the object is identified. The rationale for the comprehensive application of statistical methods in the implementation of cost and comparative approaches to the assessment of economic assets, together with its practical component, is a primary result of the research. Existing methodological developments in assessment activities are not sufficient, under modern conditions of market development and the current scientific and technical level, for the large-scale evaluation of all available material resources of the economy and their total potential. The application of the mathematical-statistical apparatus is therefore an objective necessity for obtaining general indicators of the size of the national wealth. In conclusion, we mention the methodical approaches and the construction of model algorithms for the application of statistical methods in solving scientific and practical problems, depending on the identified type of the valued objects. It is premature to speak of a full formalization of the application of statistical methods whose results would be transformed into a certain form of reporting; this requires resolving the question of a fixed-assets census, at least at the level of the regions and subjects of the Russian Federation.

  4. Problems of method of technology assessment

    International Nuclear Information System (INIS)

    Zimmermann, V.

    1993-03-01

    The study undertakes to analyse the theoretical and methodological structure of Technology Assessment (TA). It is based on a survey of TA studies, which provided an important precondition for theoretically sound statements on methodological aspects of TA. It was established that the main basic theoretical problems of TA lie in the field of dealing with complexity. This is also apparent in the constitution of problems, the most elementary and central approach of TA. Scientifically founded constitution of problems and the corresponding construction of models call for interdisciplinary scientific work. Interdisciplinarity in the TA research process is achieved at the level of virtual networks, these networks being composed of individuals suited to teamwork. The emerging network structures have an objective-organizational and an ideational basis. The objective-organizational basis is mainly the result of team composition and the external affiliations of the team members. The ideational basis of the virtual network is represented by the team members' mode of thinking, which is individually located at a multidisciplinary level. The theoretical 'skeleton' of the TA knowledge system, which is represented by process-knowledge-based linkage structures, can be generated and also processed in connection with the knowledge on types of problems, areas of analysis and procedures to deal with complexity. Within this process, disciplinary knowledge is a necessary but not a sufficient condition. Metatheoretical and metadisciplinary knowledge, and the correspondingly processed complexity of models, are the basis for the necessary methodological awareness that allows TA to become designable as a research procedure. (orig./HP)

  5. Communication: Time-dependent optimized coupled-cluster method for multielectron dynamics

    Science.gov (United States)

    Sato, Takeshi; Pathak, Himadri; Orimo, Yuki; Ishikawa, Kenichi L.

    2018-02-01

    The time-dependent coupled-cluster method with time-varying orbital functions, called the time-dependent optimized coupled-cluster (TD-OCC) method, is formulated for multielectron dynamics in an intense laser field. We have successfully derived the equations of motion for CC amplitudes and orthonormal orbital functions based on the real action functional, and implemented the method including double excitations (TD-OCCD) and double and triple excitations (TD-OCCDT) within the optimized active orbitals. The present method is size extensive and gauge invariant, and is a polynomial-cost-scaling alternative to the time-dependent multiconfiguration self-consistent-field method. The first application of the TD-OCC method to intense-laser-driven correlated electron dynamics in the Ar atom is reported.

  6. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    International Nuclear Information System (INIS)

    Gora, D.; Bernardini, E.; Cruz Silva, A.H.

    2011-04-01

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)
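    The clustering idea can be illustrated with a much simpler stand-in: instead of the unbinned likelihood of the abstract, the sketch below ranks every candidate time window between pairs of event times by a plain Poisson background p-value. All times and rates are hypothetical, and this is not the authors' procedure, only the time-clustering intuition behind it.

```python
import math

def scan_flare_windows(times, bg_rate):
    # Brute-force time-clustering sketch: every pair of event times defines a
    # candidate flare window [t_i, t_j]; rank windows by the probability of
    # seeing that many events from background alone.
    best = None
    times = sorted(times)
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            n_obs = j - i + 1                 # events inside the window
            mu = bg_rate * (times[j] - times[i])  # expected background count
            # p-value: P(N >= n_obs) for a Poisson(mu) background
            p = 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k)
                          for k in range(n_obs))
            p = max(0.0, p)                   # guard against float round-off
            if best is None or p < best[0]:
                best = (p, times[i], times[j])
    return best  # (p-value, window start, window end)
```

As in the abstract, a short dense cluster of events wins over long windows even when the total event count elsewhere is larger.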

  7. A method for untriggered time-dependent searches for multiple flares from neutrino point sources

    Energy Technology Data Exchange (ETDEWEB)

    Gora, D. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Institute of Nuclear Physics PAN, Cracow (Poland); Bernardini, E.; Cruz Silva, A.H. [Institute of Nuclear Physics PAN, Cracow (Poland)

    2011-04-15

    A method for a time-dependent search for flaring astrophysical sources which can be potentially detected by large neutrino experiments is presented. The method uses a time-clustering algorithm combined with an unbinned likelihood procedure. By including in the likelihood function a signal term which describes the contribution of many small clusters of signal-like events, this method provides an effective way for looking for weak neutrino flares over different time-scales. The method is sensitive to an overall excess of events distributed over several flares which are not individually detectable. For standard cases (one flare) the discovery potential of the method is worse than a standard time-dependent point source analysis with unknown duration of the flare by a factor depending on the signal-to-background level. However, for flares sufficiently shorter than the total observation period, the method is more sensitive than a time-integrated analysis. (orig.)

  8. Interlaboratory Validation of the Leaching Environmental Assessment Framework (LEAF) Method 1313 and Method 1316

    Science.gov (United States)

    This document summarizes the results of an interlaboratory study conducted to generate precision estimates for two parallel batch leaching methods which are part of the Leaching Environmental Assessment Framework (LEAF). These methods are: (1) Method 1313: Liquid-Solid Partition...

  9. Methods for land use impact assessment: A review

    International Nuclear Information System (INIS)

    Perminova, Tataina; Sirina, Natalia; Laratte, Bertrand; Baranovskaya, Natalia; Rikhvanov, Leonid

    2016-01-01

    Many types of methods to assess land use impact have been developed. Nevertheless a systematic synthesis of all these approaches is necessary to highlight the most commonly used and most effective methods. Given the growing interest in this area of research, a review of the different methods of assessing land use impact (LUI) was performed using bibliometric analysis. One hundred eighty seven articles of agricultural and biological science, and environmental sciences were examined. According to our results, the most frequently used land use assessment methods are Life-Cycle Assessment, Material Flow Analysis/Input–Output Analysis, Environmental Impact Assessment and Ecological Footprint. Comparison of the methods allowed their specific features to be identified and to arrive at the conclusion that a combination of several methods is the best basis for a comprehensive analysis of land use impact assessment. - Highlights: • We identified the most frequently used methods in land use impact assessment. • A comparison of the methods based on several criteria was carried out. • Agricultural land use is by far the most common area of study within the methods. • Incentive driven methods, like LCA, arouse the most interest in this field.

  10. Methods for land use impact assessment: A review

    Energy Technology Data Exchange (ETDEWEB)

    Perminova, Tataina, E-mail: tatiana.perminova@utt.fr [Research Centre for Environmental Studies and Sustainability, University of Technology of Troyes, CNRS UMR 6281, 12 Rue Marie Curie CS 42060, F-10004 Troyes Cedex (France); Department of Geoecology and Geochemistry, Institute of Natural Resources, National Research Tomsk Polytechnic University, 30 Lenin Avenue, 634050 Tomsk (Russian Federation); Sirina, Natalia, E-mail: natalia.sirina@utt.fr [Research Centre for Environmental Studies and Sustainability, University of Technology of Troyes, CNRS UMR 6281, 12 Rue Marie Curie CS 42060, F-10004 Troyes Cedex (France); Laratte, Bertrand, E-mail: bertrand.laratte@utt.fr [Research Centre for Environmental Studies and Sustainability, University of Technology of Troyes, CNRS UMR 6281, 12 Rue Marie Curie CS 42060, F-10004 Troyes Cedex (France); Baranovskaya, Natalia, E-mail: natalya.baranovs@mail.ru [Department of Geoecology and Geochemistry, Institute of Natural Resources, National Research Tomsk Polytechnic University, 30 Lenin Avenue, 634050 Tomsk (Russian Federation); Rikhvanov, Leonid, E-mail: rikhvanov@tpu.ru [Department of Geoecology and Geochemistry, Institute of Natural Resources, National Research Tomsk Polytechnic University, 30 Lenin Avenue, 634050 Tomsk (Russian Federation)

    2016-09-15

    Many types of methods to assess land use impact have been developed. Nevertheless a systematic synthesis of all these approaches is necessary to highlight the most commonly used and most effective methods. Given the growing interest in this area of research, a review of the different methods of assessing land use impact (LUI) was performed using bibliometric analysis. One hundred eighty seven articles of agricultural and biological science, and environmental sciences were examined. According to our results, the most frequently used land use assessment methods are Life-Cycle Assessment, Material Flow Analysis/Input–Output Analysis, Environmental Impact Assessment and Ecological Footprint. Comparison of the methods allowed their specific features to be identified and to arrive at the conclusion that a combination of several methods is the best basis for a comprehensive analysis of land use impact assessment. - Highlights: • We identified the most frequently used methods in land use impact assessment. • A comparison of the methods based on several criteria was carried out. • Agricultural land use is by far the most common area of study within the methods. • Incentive driven methods, like LCA, arouse the most interest in this field.

  11. MIMIC Methods for Assessing Differential Item Functioning in Polytomous Items

    Science.gov (United States)

    Wang, Wen-Chung; Shih, Ching-Lin

    2010-01-01

    Three multiple indicators-multiple causes (MIMIC) methods, namely, the standard MIMIC method (M-ST), the MIMIC method with scale purification (M-SP), and the MIMIC method with a pure anchor (M-PA), were developed to assess differential item functioning (DIF) in polytomous items. In a series of simulations, it appeared that all three methods…

  12. Alternative method for assessing coking coal plasticity

    Energy Technology Data Exchange (ETDEWEB)

    Dzuy Nguyen; Susan Woodhouse; Merrick Mahoney [University of Adelaide (Australia). BHP Billiton Newcastle Technology Centre

    2008-07-15

    Traditional plasticity measurements for coal have a number of limitations associated with the reproducibility of the tests and their use in predicting coking behaviour. This report reviews alternative rheological methods for characterising the plastic behaviour of coking coals. It reviews the application of more fundamental rheological measurements to the coal system, as well as applications of rheology to other physical systems. These systems may act as potential models for the application of fundamental rheological measurements to cokemaking. The systems considered were polymer melts, coal ash melts, lava, bread making and ice cream. They were chosen because they exhibit processes physically equivalent to those occurring during cokemaking, e.g. the generation of bubbles within a softened system that then resolidifies. A number of recommendations were made: that the steady and oscillatory shear squeeze flow techniques be further investigated, to determine whether the measured rheological characteristics are related to transformations within the coke oven and to the characteristics of the resultant coke; and that modification of Gieseler plastometers for more fundamental rheology measurements not be attempted.

  13. Procedures and methods of benefit assessments for medicines in Germany.

    Science.gov (United States)

    Bekkering, Geertruida E; Kleijnen, Jos

    2008-11-01

    medicines and treatment forms under consideration of the additional therapeutic benefit for the patients. 3. The minimum criteria for assessing patient benefit are improvements in the state of health, shortening the duration of illness, extension of the duration of life, reduction of side effects and improvements in quality of life. EBM refers to the application of the best available evidence to answer a research question, which can inform questions about the care of patients. The optimal design, even for effectiveness questions, is not always the randomised, controlled trial (RCT) but depends on the research question and the outcomes of interest. To increase transparency for each question, the levels of evidence examined should be made explicit. There is no empirical evidence to support the use of cutoff points with respect to the number of studies before making recommendations. To get the best available evidence for the research question(s), all relevant evidence should be considered for each question, and the best available evidence should be used to answer the question. Separate levels of evidence may have to be used for each outcome.There are many ways in which bias can be introduced in systematic reviews. Some types of bias can be prevented, other types can only be reported and, for some, the influence of the bias can be investigated. Reviews must show that potential sources of bias have been dealt with adequately.Methods used by other agencies that perform benefit assessments are useful to interpret the term 'international standards' to which the institute must comply. The National Institute for Health and Clinical Excellence (NICE) is a good example in this respect. NICE shows that it is possible to have transparent procedures for benefit assessments but that this requires detailed documentation. NICE has implemented an open procedure with respect to the comments of reviewers, which makes the procedure transparent. 
Although the Institute for Quality and Efficiency

  14. [Procedures and methods of benefit assessments for medicines in Germany].

    Science.gov (United States)

    Bekkering, G E; Kleijnen, J

    2008-12-01

    comparison with other medicines and treatment forms under consideration of the additional therapeutic benefit for the patients. 3. The minimum criteria for assessing patient benefit are improvements in the state of health, shortening the duration of illness, extension of the duration of life, reduction of side effects and improvements in quality of life. EBM refers to the application of the best available evidence to answer a research question, which can inform questions about the care of patients. The optimal design, even for effectiveness questions, is not always the randomised, controlled trial (RCT) but depends on the research question and the outcomes of interest. To increase transparency for each question, the levels of evidence examined should be made explicit. There is no empirical evidence to support the use of cutoff points with respect to the number of studies before making recommendations. To get the best available evidence for the research question(s), all relevant evidence should be considered for each question, and the best available evidence should be used to answer the question. Separate levels of evidence may have to be used for each outcome. There are many ways in which bias can be introduced in systematic reviews. Some types of bias can be prevented, other types can only be reported and, for some, the influence of the bias can be investigated. Reviews must show that potential sources of bias have been dealt with adequately. Methods used by other agencies that perform benefit assessments are useful to interpret the term 'international standards' to which the institute must comply. The National Institute for Health and Clinical Excellence (NICE) is a good example in this respect. NICE shows that it is possible to have transparent procedures for benefit assessments but that this requires detailed documentation. NICE has implemented an open procedure with respect to the comments of reviewers, which makes the procedure transparent. 
Although the Institute for

  15. Risk assessment methods for life cycle costing in buildings

    Directory of Open Access Journals (Sweden)

    Oduyemi Olufolahan

    2016-01-01

    Originality/value: This paper contributes new outlooks aimed at assessing the current level of awareness, usage and advocated benefits of risk assessment methods in LCC, and adds to the limited empirical studies on risk assessment for corporate occupants and decision makers.

  16. CAPABILITY ASSESSMENT OF MEASURING EQUIPMENT USING STATISTIC METHOD

    Directory of Open Access Journals (Sweden)

    Pavel POLÁK

    2014-10-01

    Capability assessment of the measurement device is one of the methods of process quality control. Only if the measurement device itself is capable can the capability of the measurement, and consequently of the production process, be assessed. This paper deals with assessment of the capability of the measuring device using the indices Cg and Cgk.
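    As an illustration, one common convention for these indices is Cg = (0.2·T)/(6·s) and Cgk = (0.1·T − |x̄ − x_ref|)/(3·s), where T is the tolerance, s and x̄ the standard deviation and mean of repeat readings of a reference, and x_ref the reference value. This is a sketch under that assumed convention; the exact constants vary between guidelines and the abstract does not specify which is used.

```python
import statistics

def cg_cgk(measurements, reference, tol, share=0.2):
    # Gauge capability indices under one common convention:
    #   Cg  = (share * tol) / (6 * s)                    -- spread of repeats
    #   Cgk = (share/2 * tol - |mean - ref|) / (3 * s)   -- spread plus bias
    # 'share' is the fraction of the tolerance allotted to the gauge.
    s = statistics.stdev(measurements)
    xbar = statistics.mean(measurements)
    cg = (share * tol) / (6 * s)
    cgk = (share / 2 * tol - abs(xbar - reference)) / (3 * s)
    return cg, cgk
```

A gauge is typically judged capable when both indices exceed a threshold such as 1.33; Cgk falls below Cg as soon as the readings are biased relative to the reference.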

  17. Assessing Commercial and Alternative Poultry Processing Methods using Microbiome Analyses

    Science.gov (United States)

    Assessing poultry processing methods/strategies has historically used culture-based methods to assess bacterial changes or reductions, both in terms of general microbial communities (e.g. total aerobic bacteria) or zoonotic pathogens of interest (e.g. Salmonella, Campylobacter). The advent of next ...

  18. Qualitative Assessment of Inquiry-Based Teaching Methods

    Science.gov (United States)

    Briggs, Michael; Long, George; Owens, Katrina

    2011-01-01

    A new approach to teaching method assessment using student focused qualitative studies and the theoretical framework of mental models is proposed. The methodology is considered specifically for the advantages it offers when applied to the assessment of inquiry-based teaching methods. The theoretical foundation of mental models is discussed, and…

  19. Online probabilistic operational safety assessment of multi-mode engineering systems using Bayesian methods

    International Nuclear Information System (INIS)

    Lin, Yufei; Chen, Maoyin; Zhou, Donghua

    2013-01-01

    In the past decades, engineering systems have become more and more complex and generally work in different operational modes. Since an incipient fault can lead to dangerous accidents, it is crucial to develop strategies for online operational safety assessment. However, the existing online assessment methods for multi-mode engineering systems commonly assume that samples are independent, which does not hold in practical cases. This paper proposes a probabilistic framework for online operational safety assessment of multi-mode engineering systems with sample dependency. To begin with, a Gaussian mixture model (GMM) is used to characterize the multiple operating modes. Then, based on the definition of the safety index (SI), the SI for each single mode is calculated. At last, a Bayesian method is presented to calculate the posterior probabilities of belonging to each operating mode under sample dependency. The proposed assessment strategy is applied in two examples: one is an aircraft gas turbine, the other an industrial dryer. Both examples illustrate the efficiency of the proposed method.
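    The mode-posterior step can be sketched with Bayes' rule over a one-dimensional GMM: the overall safety index is the posterior-weighted average of the per-mode indices. The mixture parameters and per-mode SI values below are hypothetical, and the sample-dependency term of the paper is omitted.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

def mode_posteriors(x, weights, means, variances):
    # Bayes' rule over operating modes: P(mode k | x) ∝ w_k * N(x | mu_k, var_k)
    lik = np.array([w * gaussian_pdf(x, m, v)
                    for w, m, v in zip(weights, means, variances)])
    return lik / lik.sum()

def overall_safety_index(x, weights, means, variances, si_per_mode):
    # Overall SI = posterior-weighted average of the per-mode safety indices.
    post = mode_posteriors(x, weights, means, variances)
    return float(post @ np.asarray(si_per_mode))
```

An observation close to one mode's mean receives nearly all the posterior mass, so the overall SI approaches that mode's index.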

  20. Suggestions on the Development of Safety Culture Assessment Method

    International Nuclear Information System (INIS)

    Choi, Young Sung; Choi, Kwang Sik; Kim, Woong Sik

    2006-01-01

    Several efforts have been made to assess the safety culture of organizations that operate nuclear power plants in Korea. MOST and KINS played a major role in developing assessment methods, and KHNP applied them to its NPPs. This paper briefly explains the two methods developed by KINS and presents the insights obtained from the two different applications. It concludes with some suggestions for safety culture assessment based on these insights.

  1. Assessment of dependence and anxiety among benzodiazepine users in a provincial municipality in Rio Grande do Sul, Brazil

    Directory of Open Access Journals (Sweden)

    Janaína Barden Schallemberger

    Abstract Introduction: Benzodiazepines are among the most prescribed drugs for anxiety, are one of the most used drug classes in the world, and have a high potential for addiction. The objective of this study was to assess levels of dependence and anxiety among users of these drugs in the public health system. Methods: This was a cross-sectional, descriptive and quantitative study. Benzodiazepine users treated in the public health system were selected. Anxiety levels were assessed with the Hamilton Anxiety Scale and dependency with the Benzodiazepine Dependence Self-Report Questionnaire. Results: Benzodiazepine use was higher among women and in older age groups. Duration of benzodiazepine use was greater than 1 year for all respondents. The dependence assessment indicated that more than half of users were dependent on benzodiazepines, and most had a severe degree of anxiety. Conclusion: This study found evidence of prolonged and inappropriate use of benzodiazepines. It is necessary to educate users about the risks of these drugs and to develop strategies to rationalize their use by working with prescribers and dispensers.

  2. Assessment methods in surgical training in the United Kingdom

    Directory of Open Access Journals (Sweden)

    Evgenios Evgeniou

    2013-02-01

    A career in surgery in the United Kingdom demands a commitment to a long journey of assessment. The assessment methods used must ensure that the appropriate candidates are selected into a programme of study or a job and must guarantee public safety by regulating the progression of surgical trainees and the certification of trained surgeons. This review attempts to analyse the psychometric properties of various assessment methods used in the selection of candidates to medical school, job selection, progression in training, and certification. Validity is an indicator of how well an assessment measures what it is designed to measure. Reliability informs us whether a test is consistent in its outcome by measuring the reproducibility and discriminating ability of the test. In the long journey of assessment in surgical training, the same assessment formats are frequently being used for selection into a programme of study, job selection, progression, and certification. Although similar assessment methods are being used for different purposes in surgical training, the psychometric properties of these assessment methods have not been examined separately for each purpose. Because of the significance of these assessments for trainees and patients, their reliability and validity should be examined thoroughly in every context where the assessment method is being used.

  3. Human Health Risk Assessment Applied to Rural Populations Dependent on Unregulated Drinking Water Sources: A Scoping Review.

    Science.gov (United States)

    Ford, Lorelei; Bharadwaj, Lalita; McLeod, Lianne; Waldner, Cheryl

    2017-07-28

    Safe drinking water is a global challenge for rural populations dependent on unregulated water. A scoping review of research on human health risk assessments (HHRA) applied to this vulnerable population may be used to improve assessments applied by government and researchers. This review aims to summarize and describe the characteristics of HHRA methods, publications, and current literature gaps of HHRA studies on rural populations dependent on unregulated or unspecified drinking water. Peer-reviewed literature was systematically searched (January 2000 to May 2014) and identified at least one drinking water source as unregulated (21%) or unspecified (79%) in 100 studies. Only 7% of reviewed studies identified a rural community dependent on unregulated drinking water. Source water and hazards most frequently cited included groundwater (67%) and chemical water hazards (82%). Most HHRAs (86%) applied deterministic methods with 14% reporting probabilistic and stochastic methods. Publications increased over time with 57% set in Asia, and 47% of studies identified at least one literature gap in the areas of research, risk management, and community exposure. HHRAs applied to rural populations dependent on unregulated water are poorly represented in the literature even though almost half of the global population is rural.
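    The deterministic-versus-probabilistic split that the review counts can be illustrated with a drinking-water hazard quotient: a point estimate of chronic daily intake over a reference dose, versus a Monte Carlo variant that samples the exposure inputs. All parameter values and ranges below are hypothetical placeholders, not figures from the reviewed studies.

```python
import random

def hazard_quotient(conc, intake_rate, body_weight, rfd):
    # Deterministic HHRA step: chronic daily intake (CDI, mg/kg/day) divided
    # by the reference dose (RfD); HQ > 1 flags potential non-cancer risk.
    cdi = conc * intake_rate / body_weight
    return cdi / rfd

def probabilistic_hq(n, conc_range, ir_range, bw_range, rfd):
    # Probabilistic variant: sample exposure inputs from uniform ranges and
    # report the fraction of simulated individuals with HQ above 1.
    random.seed(0)
    hqs = [hazard_quotient(random.uniform(*conc_range),
                           random.uniform(*ir_range),
                           random.uniform(*bw_range), rfd)
           for _ in range(n)]
    return sum(h > 1 for h in hqs) / n
```

The deterministic form dominates the reviewed literature (86%); the Monte Carlo form yields a distributional answer instead of a single HQ.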

  4. Assessment of radioactivity for 24 hours urine sample depending on correction factor by using creatinine

    International Nuclear Information System (INIS)

    Kharita, M. H.; Maghrabi, M.

    2006-09-01

    Assessment of intake and internal dose requires knowing the amount of radioactivity in a 24-hour urine sample. It is sometimes difficult to obtain a true 24-hour sample because the method is inconvenient and, in most cases, workers refuse to collect this amount of urine. This work focuses on finding a correction factor that converts a urine sample of any size to an equivalent 24-hour sample, based on the amount of creatinine in the sample. The 24-hour excretion of a radionuclide is then calculated from the amounts of activity and creatinine in the sample, assuming an average creatinine excretion rate of 1.7 g per 24 hours. Several urine samples were collected from occupationally exposed workers; the amounts and ratios of creatinine and activity in these samples were determined and then normalized to the 24-hour excretion of the radionuclide. The average chemical recovery was 77%. It should be emphasized that this method should only be used when a 24-hour sample cannot be collected. (author)
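    The correction described above reduces to a simple proportional scaling; a minimal sketch (function and argument names are hypothetical):

```python
def creatinine_corrected_activity(activity_bq, creatinine_g, daily_creatinine_g=1.7):
    # Scale the activity measured in a partial urine sample to an estimated
    # 24-hour excretion, assuming the cited average creatinine excretion
    # rate of 1.7 g per 24 hours.
    return activity_bq * daily_creatinine_g / creatinine_g
```

For example, a sample containing 10 Bq of activity and 0.85 g of creatinine scales to an estimated 20 Bq per 24 hours.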

  5. A Modified Generalized Fisher Method for Combining Probabilities from Dependent Tests

    Directory of Open Access Journals (Sweden)

    Hongying (Daisy) Dai

    2014-02-01

    Rapid developments in molecular technology have yielded a large amount of high-throughput genetic data for understanding the mechanisms of complex traits. The increase in genetic variants requires hundreds of thousands of statistical tests to be performed simultaneously, which poses a challenge for controlling the overall Type I error rate. Combining p-values from multiple hypothesis tests has shown promise for aggregating effects in high-dimensional genetic data analysis. Several p-value combining methods have been developed and applied to genetic data; see [Dai, et al. 2012b] for a comprehensive review. However, there is a lack of investigation for dependent genetic data, especially for weighted p-value combining methods. Single nucleotide polymorphisms (SNPs) are often correlated due to linkage disequilibrium. Other genetic data, including variants from next-generation sequencing, gene expression levels measured by microarray, and protein and DNA methylation data, also contain complex correlation structures. Ignoring correlation structures among genetic variants may lead to severe inflation of Type I error rates in omnibus testing of p-values. In this work, we propose modifications to the Lancaster procedure that take the correlation structure among p-values into account. The weight function in the Lancaster procedure allows meaningful biological information to be incorporated into the statistical analysis, which can increase the power of the statistical test and/or remove bias from the process. Extensive empirical assessments demonstrate that the modified Lancaster procedure largely reduces the Type I error rates due to correlation among p-values and retains considerable power to detect signals among p-values. We applied our method to reassess published renal transplant data and identified a novel association between B cell pathways and allograft tolerance.
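    For the unweighted special case, the correlation adjustment can be sketched with Brown's method, which rescales Fisher's chi-square statistic using the covariance of the −2·ln(p) terms. This omits the weight function of the full Lancaster procedure, and the covariance matrix is treated as a given input (in practice it must be estimated, e.g. from linkage disequilibrium).

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(pvals):
    # Fisher's statistic X = -2 * sum(ln p_i) ~ chi2(2k) under independence.
    x = -2.0 * np.sum(np.log(pvals))
    return chi2.sf(x, df=2 * len(pvals))

def brown_combine(pvals, cov):
    # Brown's correlation-adjusted Fisher test: match the first two moments
    # of X given cov[i, j] = Cov(-2 ln p_i, -2 ln p_j), then refer the scaled
    # statistic to a chi-square with an effective number of degrees of freedom.
    k = len(pvals)
    x = -2.0 * np.sum(np.log(pvals))
    mean = 2.0 * k
    var = 4.0 * k + 2.0 * np.sum(cov[np.triu_indices(k, 1)])
    scale = var / (2.0 * mean)          # inflation factor from correlation
    df = 2.0 * mean ** 2 / var          # effective degrees of freedom
    return chi2.sf(x / scale, df=df)
```

With a zero covariance matrix the adjustment vanishes and Brown's p-value coincides with Fisher's; positive correlation inflates the scale factor and makes the combined test more conservative.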

  6. Time-dependent density-functional theory in the projector augmented-wave method

    DEFF Research Database (Denmark)

    Walter, Michael; Häkkinen, Hannu; Lehtovaara, Lauri

    2008-01-01

    We present the implementation of the time-dependent density-functional theory both in linear-response and in time-propagation formalisms using the projector augmented-wave method in real-space grids. The two technically very different methods are compared in the linear-response regime where we...

  7. Change in radiosensitivity of seeds depending on their humidity data and methods of moistening

    International Nuclear Information System (INIS)

    Savin, B.N.; Labrada, A.R.

    1980-01-01

    Investigated was the change in radiosensitivity of maize seeds depending on their humidity, the method of moistening, and the initial humidity before moistening. Maize seeds of the Krasnodarskaya 303 TV variety were irradiated with γ-rays. It was shown that seeds of the same humidity had different radiosensitivity depending on the method of moistening. When moistened in water, seeds had the highest radiostability at 20-24% humidity, but when moistened in a desiccator, this index was highest at 15% humidity. Along with the method of moistening, the initial humidity before moistening also affected radiosensitivity. The necessity of taking this factor into account during presowing irradiation was noted.

  8. Seismic assessment of a site using the time series method

    International Nuclear Information System (INIS)

    Krutzik, N.J.; Rotaru, I.; Bobei, M.; Mingiuc, C.; Serban, V.; Androne, M.

    2001-01-01

    1. To increase the safety of an NPP located on a seismic site, the seismic acceleration level to which the NPP should be qualified must be as representative as possible for that site, with a conservative but not exaggerated degree of safety. 2. Treating the seismic events affecting the site as independent events and using statistical methods to define safety levels with very low annual occurrence probabilities (10⁻⁴) may exaggerate the seismic safety level. 3. Using very high seismic acceleration values imposed by the safety levels required by the hazard analysis may lead to very expensive technical solutions that make plant operation more difficult and increase maintenance costs. 4. Treating seismic events as a time series with dependence among the events may lead to a more representative assessment of the seismic activity of an NPP site and, consequently, to a prognosis of the seismic levels to which the NPP would be exposed throughout its life-span. That prognosis should consider the actual seismic activity (including small earthquakes, in real time) of the focuses that affect the plant site. The method is useful for two purposes: a) research, i.e. homogenizing the historical data basis by generating earthquakes for periods lacking information and correlating them with the existing information, the aim being to perform the hazard analysis on a homogeneous data set in order to determine the seismic design data for a site; b) operation, i.e. producing a prognosis of the seismic activity at a given site and considering preventive measures to minimize the possible effects of an earthquake. 5. The paper proposes the application of autoregressive time series to issue a prognosis of the seismic activity of a focus, and presents an analysis of the Vrancea focus, which affects the Cernavoda NPP site, by this method. 6.
The paper also presents the
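    At its simplest, the autoregressive prognosis idea of point 5 amounts to fitting a recursion to a seismicity indicator series and iterating it forward. The sketch below fits an AR(1) model by least squares on synthetic data; the paper's actual model of the Vrancea focus is of course richer than this.

```python
import numpy as np

def fit_ar1(x):
    # Least-squares estimate of (c, phi) in x_t = c + phi * x_{t-1} + eps_t.
    design = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    coef, *_ = np.linalg.lstsq(design, x[1:], rcond=None)
    return coef  # (c, phi)

def forecast_ar1(c, phi, last, steps):
    # Iterate the fitted recursion to produce a multi-step-ahead prognosis.
    out = []
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out
```

The same fitted recursion can also generate surrogate values for periods lacking data, which is the homogenization use noted under purpose a).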

  9. Inventory of LCIA selection methods for assessing toxic releases. Methods and typology report part B

    DEFF Research Database (Denmark)

    Larsen, Henrik Fred; Birkved, Morten; Hauschild, Michael Zwicky

    This report describes an inventory of Life Cycle Impact Assessment (LCIA) selection methods for assessing toxic releases. It consists of an inventory of current selection methods and other Chemical Ranking and Scoring (CRS) methods assessed to be relevant for the development of (a) new selection method(s) in Work package 8 (WP8) of the OMNIITOX project. The selection methods and the other CRS methods are described in detail, a set of evaluation criteria is developed, and the methods are evaluated against these criteria. This report (Deliverable 11B (D11B)) gives the results from tasks 7.1d, 7.1e and 7.1f of WP 7 for selection methods. The other part of D11 (D11A) is reported separately and deals with characterisation methods. A selection method is a method for prioritising chemical emissions to be included in an LCIA characterisation of toxic releases, i.e. calculating indicator scores...

  10. Assessment of hip dysplasia and osteoarthritis: Variability of different methods

    International Nuclear Information System (INIS)

    Troelsen, Anders; Elmengaard, Brian; Soeballe, Kjeld; Roemer, Lone; Kring, Soeren

    2010-01-01

    Background: Reliable assessment of hip dysplasia and osteoarthritis is crucial in young adults who may benefit from joint-preserving surgery. Purpose: To investigate the variability of different methods for diagnostic assessment of hip dysplasia and osteoarthritis. Material and Methods: Each of four observers performed two assessments by vision and two by angle construction. For both methods, the intra- and interobserver variability of center-edge and acetabular index angle assessment was analyzed. The observers' ability to diagnose hip dysplasia and osteoarthritis was assessed. All measures were compared to those made on computed tomography scan. Results: Intra- and interobserver variability of angle assessment was lower when angles were drawn than when assessed by vision, and the observers' ability to diagnose hip dysplasia improved when angles were drawn. Assessment of osteoarthritis in general showed poor agreement with findings on computed tomography scan. Conclusion: We recommend that angles always be drawn for assessment of hip dysplasia on pelvic radiographs. Given the inherent variability of diagnostic assessment of hip dysplasia, a computed tomography scan could be considered in patients with relevant hip symptoms and a center-edge angle between 20 deg and 30 deg. Osteoarthritis should be assessed by measuring the joint space width or by classifying the Toennis grade as either 0-1 or 2-3
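    The angle-construction step can be sketched in code: the Wiberg center-edge angle is the angle between the vertical through the femoral head centre and the line from that centre to the lateral acetabular edge. The coordinates below are hypothetical image pixels (y increasing downward), not measurements from the study.

```python
import math

def center_edge_angle(head_center, lateral_edge):
    # Wiberg center-edge angle from two landmark points, each (x, y) in image
    # coordinates with y increasing downward; the lateral acetabular edge lies
    # above the femoral head centre, so it has the smaller y value.
    dx = lateral_edge[0] - head_center[0]      # lateral offset
    dy = head_center[1] - lateral_edge[1]      # vertical offset, positive upward
    return math.degrees(math.atan2(abs(dx), dy))
```

With the abstract's thresholds, a result between 20° and 30° is the borderline zone in which a confirmatory computed tomography scan could be considered.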

  11. Assessment of hip dysplasia and osteoarthritis: Variability of different methods

    Energy Technology Data Exchange (ETDEWEB)

    Troelsen, Anders; Elmengaard, Brian; Soeballe, Kjeld (Orthopedic Research Unit, Univ. Hospital of Aarhus, Aarhus (Denmark)), e-mail: a_troelsen@hotmail.com; Roemer, Lone (Dept. of Radiology, Univ. Hospital of Aarhus, Aarhus (Denmark)); Kring, Soeren (Dept. of Orthopedic Surgery, Aabenraa Hospital, Aabenraa (Denmark))

    2010-03-15

    Background: Reliable assessment of hip dysplasia and osteoarthritis is crucial in young adults who may benefit from joint-preserving surgery. Purpose: To investigate the variability of different methods for diagnostic assessment of hip dysplasia and osteoarthritis. Material and Methods: Each of four observers performed two assessments by vision and two by angle construction. For both methods, the intra- and interobserver variability of center-edge and acetabular index angle assessment was analyzed. The observers' ability to diagnose hip dysplasia and osteoarthritis was assessed. All measures were compared to those made on computed tomography scans. Results: Intra- and interobserver variability of angle assessment was lower when angles were drawn than when assessed by vision, and the observers' ability to diagnose hip dysplasia improved when angles were drawn. Assessment of osteoarthritis in general showed poor agreement with findings on computed tomography scans. Conclusion: We recommend that angles should always be drawn for assessment of hip dysplasia on pelvic radiographs. Given the inherent variability of diagnostic assessment of hip dysplasia, a computed tomography scan could be considered in patients with relevant hip symptoms and a center-edge angle between 20 deg and 30 deg. Osteoarthritis should be assessed by measuring the joint space width or by classifying the Toennis grade as either 0-1 or 2-3.

  12. Methods of Comprehensive Assessment for China’s Energy Sustainability

    Science.gov (United States)

    Xu, Zhijin; Song, Yankui

    2018-02-01

    In order to assess the sustainable development of China’s energy objectively and accurately, we need to establish a reasonable indicator system for energy sustainability and conduct a targeted comprehensive assessment with scientific methods. This paper constructs a comprehensive indicator system for energy sustainability covering five aspects—economy, society, environment, energy resources and energy technology—based on the theory of sustainable development and the theory of symbiosis. On this basis, it establishes and discusses assessment models and general assessment methods for energy sustainability with the help of fuzzy mathematics. The results provide a reference for promoting the sustainable development of China’s energy, economy and society.
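
    The abstract does not spell out the fuzzy-mathematics model, so the following is a minimal sketch of a generic fuzzy comprehensive evaluation, B = A ∘ R, with the common weighted-average operator. The five aspects come from the abstract, but all weights and membership degrees are invented for illustration.

```python
# Minimal fuzzy comprehensive evaluation sketch (hypothetical data):
# B = A . R, where A is the indicator weight vector and R holds each
# indicator's membership degrees in the rating grades.

def fuzzy_evaluate(weights, membership):
    """Weighted-average fuzzy operator: b_j = sum_i w_i * r_ij."""
    grades = len(membership[0])
    return [sum(w * row[j] for w, row in zip(weights, membership))
            for j in range(grades)]

# Five aspects from the paper (economy, society, environment,
# energy resources, energy technology); the weights are made up.
weights = [0.25, 0.15, 0.25, 0.20, 0.15]
# Membership of each aspect in three grades: good / fair / poor (made up).
membership = [
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.3, 0.4, 0.3],
    [0.4, 0.4, 0.2],
    [0.7, 0.2, 0.1],
]
scores = fuzzy_evaluate(weights, membership)
best = max(range(len(scores)), key=scores.__getitem__)
print(scores, ["good", "fair", "poor"][best])
```

    By the maximum-membership principle, the grade with the largest pooled score is taken as the overall assessment.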

  13. Solving Ratio-Dependent Predator-Prey System with Constant Effort Harvesting Using Homotopy Perturbation Method

    Directory of Open Access Journals (Sweden)

    Abdoul R. Ghotbi

    2008-01-01

    Full Text Available Due to the wide interest in using bioeconomic models to gain insight into the scientific management of renewable resources like fisheries and forestry, the homotopy perturbation method is employed to approximate the solution of the ratio-dependent predator-prey system with constant-effort prey harvesting. The results are compared with those obtained by the Adomian decomposition method, and show that the homotopy perturbation method requires fewer computations than the Adomian decomposition method.
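
    The abstract gives neither the system's equations nor the HPM expansion, so as an illustrative companion here is a plain fourth-order Runge-Kutta integration of a commonly used ratio-dependent predator-prey model with constant-effort prey harvesting. The functional form and every parameter value are assumptions for illustration, not taken from the paper.

```python
# Ratio-dependent predator-prey with constant-effort prey harvesting,
# integrated by classical RK4. All parameters are illustrative assumptions.

def derivs(x, y, a=1.0, b=1.0, d=0.5, h=0.2):
    ratio = x / (x + y) if x + y > 0 else 0.0
    dx = x * (1 - x) - a * y * ratio - h * x   # logistic prey - predation - harvest
    dy = y * (b * ratio - d)                   # predator grows on the ratio term
    return dx, dy

def rk4_step(x, y, dt):
    k1 = derivs(x, y)
    k2 = derivs(x + dt / 2 * k1[0], y + dt / 2 * k1[1])
    k3 = derivs(x + dt / 2 * k2[0], y + dt / 2 * k2[1])
    k4 = derivs(x + dt * k3[0], y + dt * k3[1])
    x += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x, y

x, y = 0.8, 0.3                 # initial prey and predator densities (made up)
for _ in range(2000):           # integrate to t = 20 with dt = 0.01
    x, y = rk4_step(x, y, 0.01)
print(round(x, 3), round(y, 3))
```

    With these parameters the interior equilibrium sits where the prey/total ratio equals d/b; a numerical trajectory like this is exactly the kind of reference solution that series methods such as HPM or Adomian decomposition are compared against.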

  14. Revisiting Individual Creativity Assessment: Triangulation in Subjective and Objective Assessment Methods

    Science.gov (United States)

    Park, Namgyoo K.; Chun, Monica Youngshin; Lee, Jinju

    2016-01-01

    Compared to the significant development of creativity studies, individual creativity research has not reached a meaningful consensus regarding the most valid and reliable method for assessing individual creativity. This study revisited 2 of the most popular methods for assessing individual creativity: subjective and objective methods. This study…

  15. Neutron Scattering in Hydrogenous Moderators, Studied by Time Dependent Reaction Rate Method

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, L G; Moeller, E; Purohit, S N

    1966-03-15

    The moderation and absorption of a neutron burst in water, poisoned with the non-1/v absorbers cadmium and gadolinium, has been followed on the time scale by multigroup calculations, using scattering kernels for the proton gas and the Nelkin model. The time dependent reaction rate curves for each absorber display clear differences for the two models, and the separation between the curves does not depend much on the absorber concentration. An experimental method for the measurement of infinite medium reaction rate curves in a limited geometry has been investigated. This method makes the measurement of the time dependent reaction rate generally useful for thermalization studies in a small geometry of a liquid hydrogenous moderator, provided that the experiment is coupled to programs for the calculation of scattering kernels and time dependent neutron spectra. Good agreement has been found between the reaction rate curve, measured with cadmium in water, and a calculated curve, where the Haywood kernel has been used.

  16. A method for solving a three-body problem with energy-dependent interactions

    International Nuclear Information System (INIS)

    Safronov, A.N.

    1994-01-01

    A method is proposed for solving a three-body problem with energy-dependent interactions. This method is based on introducing the dependence of scattering operators and state vectors on an additional external parameter. Effects caused by the energy dependence of the interaction operator are investigated by using the unitary condition for the amplitude of the 2 → 2 and 2 → 3 transitions. It is shown, in particular, that taking this dependence into account leads to a change in the relation between the asymptotic normalization factor of the wave function of the three-body bound state and the vertex constant of virtual dissociation (synthesis) of the system into two fragments. 15 refs

  17. Assessment of wear dependence parameters in complex model of cutting tool wear

    Science.gov (United States)

    Antsev, A. V.; Pasko, N. I.; Antseva, N. V.

    2018-03-01

    This paper addresses the wear dependence of the generic efficient life period of cutting tools, taken as an aggregate of the law of tool wear rate distribution and the dependence of this law's parameters on the cutting mode, factoring in random factors as exemplified by the complex model of wear. The complex model of wear takes into account the variance of cutting properties within one batch of tools, the variance in machinability within one batch of workpieces, and the stochastic nature of the wear process itself. A technique for assessing wear dependence parameters in a complex model of cutting tool wear is provided and supported by a numerical example.

  18. Assessing dependency using self-report and indirect measures: examining the significance of discrepancies.

    Science.gov (United States)

    Cogswell, Alex; Alloy, Lauren B; Karpinski, Andrew; Grant, David A

    2010-07-01

    The present study addressed convergence between self-report and indirect approaches to assessing dependency. We were moderately successful in validating an implicit measure, which was found to be reliable, orthogonal to 2 self-report instruments, and predictive of external criteria. This study also examined discrepancies between scores on self-report and implicit measures, and has implications for their significance. The possibility that discrepancies themselves are pathological was not supported, although discrepancies were associated with particular personality profiles. Finally, this study offered additional evidence for the relation between dependency and depressive symptomatology and identified implicit dependency as contributing unique variance in predicting past major depression.

  19. Assessment of medical communication skills by computer: assessment method and student experiences

    NARCIS (Netherlands)

    Hulsman, R. L.; Mollema, E. D.; Hoos, A. M.; de Haes, J. C. J. M.; Donnison-Speijer, J. D.

    2004-01-01

    BACKGROUND A computer-assisted assessment (CAA) program for communication skills designated ACT was developed using the objective structured video examination (OSVE) format. This method features assessment of cognitive scripts underlying communication behaviour, a broad range of communication

  20. Core design and operation optimization methods based on time-dependent perturbation theory

    International Nuclear Information System (INIS)

    Greenspan, E.

    1983-08-01

    A general approach for the optimization of nuclear reactor core design and operation is outlined; it is based on two cornerstones: a newly developed time-dependent (or burnup-dependent) perturbation theory for nonlinear problems and a successive iteration technique. The resulting approach is capable of handling realistic reactor models using computational methods of any degree of sophistication desired, while accounting for all the constraints imposed. Three general optimization strategies, differing in the way the constraints are handled, are formulated. (author)

  1. Methods of assessment and management of enterprise risks

    Directory of Open Access Journals (Sweden)

    I. A. Kiseleva

    2017-01-01

    Full Text Available The article is devoted to a highly topical issue of our time: the management of business risks. An integral part of professional risk management is identifying the nature of the object of management in the economic sphere. Since the domestic theory of risk management is still under development, a clear and comprehensive definition of risk is now of particular relevance. The article discusses the basic concepts of risk management; examines its components in business activities; presents the system and principles of risk management; and describes the basic types of risks in business. An organizational and economic mechanism for enterprise risk assessment is proposed, together with practical advice on risk management. Entrepreneurship without risk does not exist. With the development of the market economy, each entrepreneur chooses the methods he will work with, and all of them entail entrepreneurial risks. The level of threats on the market today exceeds the level of potential profits. It is concluded that it is impossible to increase revenue without increasing risk, or to reduce risk without reducing income. The narrower the range of the probability distribution of expected returns around its mean value, the lower the risk associated with the operation. Avoiding risk in business is almost impossible, but risk can be reduced; how much depends on how professionally and correctly the entrepreneur operates and which strategy he chooses to reduce the occurrence of risk.
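
    The point about the spread of expected returns around the mean is the standard-deviation view of risk. A toy illustration with made-up return series for two hypothetical projects with the same expected return:

```python
# Two projects with equal mean return but different spread: the one with the
# narrower distribution (smaller standard deviation) is the lower-risk choice.
from statistics import mean, pstdev

project_a = [0.08, 0.09, 0.10, 0.11, 0.12]   # tightly clustered returns
project_b = [-0.10, 0.00, 0.10, 0.20, 0.30]  # same mean, much wider spread

risk_a, risk_b = pstdev(project_a), pstdev(project_b)
print(mean(project_a), mean(project_b))  # both means are 0.10
print(risk_a < risk_b)                   # A is the lower-risk project
```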

  2. Transport methods: general. 3. An Additive Angular-Dependent Re-balance Acceleration Method for Neutron Transport Equations

    International Nuclear Information System (INIS)

    Cho, Nam Zin; Park, Chang Je

    2001-01-01

    An additive angular-dependent re-balance (AADR) factor acceleration method is described to accelerate the source iteration of discrete ordinates transport calculations. The formulation of the AADR method follows that of the angular-dependent re-balance (ADR) method in that the re-balance factor is defined only on the cell interface and in that the low-order equation is derived by integrating the transport equation (high-order equation) over angular subspaces. But the re-balance factor is applied additively. While the AADR method is similar to the boundary projection acceleration and the alpha-weighted linear acceleration, it is more general and has distinct features. The method is easily extendible to DPN and low-order SN re-balancing, and it does not require consistent discretizations between the high- and low-order equations as in diffusion synthetic acceleration. We find by Fourier analysis and numerical results that the AADR method with a chosen form of weighting functions is unconditionally stable and very effective. There also exists an optimal weighting parameter that leads to the smallest spectral radius. The AADR acceleration method described in this paper is simple to implement, unconditionally stable, and very effective. It uses a physically based weighting function with an optimal parameter, leading to the best spectral radius of ρ < 0.1865, compared to ρ < 0.2247 for DSA. The application of the AADR acceleration method with the LMB scheme to a test problem shows encouraging results.

  3. Methods for dependency estimation and system unavailability evaluation based on failure data statistics

    International Nuclear Information System (INIS)

    Azarm, M.A.; Hsu, F.; Martinez-Guridi, G.; Vesely, W.E.

    1993-07-01

    This report introduces a new perspective on the basic concept of dependent failures, where the definition of dependency is based on clustering in the failure times of similar components. This perspective has two significant implications: first, it relaxes the conventional assumption that dependent failures must be simultaneous and result from a severe shock; second, it allows the analyst to use all the failures in a time continuum to estimate the potential for multiple failures in a window of time (e.g., a test interval), therefore arriving at a more accurate value for system unavailability. In addition, the models developed here provide a method for plant-specific analysis of dependency, reflecting the plant-specific maintenance practices that reduce or increase the contribution of dependent failures to system unavailability. The proposed methodology can be used for screening analysis of failure data to estimate the fraction of dependent failures among the failures. In addition, the proposed method can evaluate the impact of the observed dependency on system unavailability and plant risk. The formulations derived in this report have undergone various levels of validation through computer simulation studies and pilot applications. The pilot applications of these methodologies showed that the contribution of dependent failures of diesel generators in one plant was negligible, while in another plant it was quite significant. They also showed that in the plant with a significant contribution of dependency to Emergency Power System (EPS) unavailability, the contribution changed with time. Similar findings were reported for the containment fan cooler breakers. Drawing such conclusions about system performance would not have been possible with any other reported dependency methodologies.
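
    The clustering idea can be sketched very simply: count how often a failure of one component has a failure of a similar component within the same time window, and take that as a screening estimate of the dependent fraction. This is only an illustration of the concept, not the report's estimator; the timestamps and window are hypothetical.

```python
# Screening sketch: fraction of component-A failures that cluster in time
# with a failure of the redundant component B (hypothetical data).

def dependent_fraction(failures_a, failures_b, window):
    """Fraction of A-failures with a B-failure within +/- window hours."""
    paired = sum(1 for ta in failures_a
                 if any(abs(ta - tb) <= window for tb in failures_b))
    return paired / len(failures_a)

# Failure times (hours) of two redundant diesel generators (made up):
gen_a = [120.0, 900.0, 2400.0, 5000.0]
gen_b = [123.5, 1800.0, 2395.0, 7200.0]

frac = dependent_fraction(gen_a, gen_b, window=24.0)
print(frac)  # 2 of the 4 A-failures cluster with a B-failure -> 0.5
```

    Note how neither clustered pair is simultaneous; a window of, say, one test interval captures them anyway, which is the report's relaxation of the classic common-cause assumption.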

  4. Withdrawal of corticosteroids in inflammatory bowel disease patients after dependency periods ranging from 2 to 45 years: a proposed method.

    LENUS (Irish Health Repository)

    Murphy, S J

    2012-02-01

    BACKGROUND: Even in the biologic era, corticosteroid dependency in IBD patients is common and causes a lot of morbidity, but methods of withdrawal are not well described. AIM: To assess the effectiveness of a corticosteroid withdrawal method. METHODS: Twelve patients (10 men, 2 women; 6 ulcerative colitis, 6 Crohn's disease), median age 53.5 years (range 29-75) were included. IBD patients with quiescent disease refractory to conventional weaning were transitioned to oral dexamethasone, educated about symptoms of the corticosteroid withdrawal syndrome (CWS) and weaned under the supervision of an endocrinologist. When patients failed to wean despite a slow weaning pace and their IBD remaining quiescent, low dose synthetic ACTH stimulation testing was performed to assess for adrenal insufficiency. Multivariate analysis was performed to assess predictors of a slow wean. RESULTS: Median durations for disease and corticosteroid dependency were 21 (range 3-45) and 14 (range 2-45) years respectively. Ten patients (83%) were successfully weaned after a median follow-up from final wean of 38 months (range 5-73). Disease flares occurred in two patients, CWS in five and ACTH testing was performed in 10. Multivariate analysis showed that longer duration of corticosteroid use appeared to be associated with a slower wean (P = 0.056). CONCLUSIONS: Corticosteroid withdrawal using this protocol had a high success rate and durable effect and was effective in patients with long-standing (up to 45 years) dependency. As symptoms of CWS mimic symptoms of IBD disease flares, gastroenterologists may have difficulty distinguishing them, which may be a contributory factor to the frequency of corticosteroid dependency in IBD patients.

  5. The large discretization step method for time-dependent partial differential equations

    Science.gov (United States)

    Haras, Zigo; Taasan, Shlomo

    1995-01-01

    A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.

  6. Comparison of deterministic and stochastic methods for time-dependent Wigner simulations

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Sihong, E-mail: sihong@math.pku.edu.cn [LMAM and School of Mathematical Sciences, Peking University, Beijing 100871 (China); Sellier, Jean Michel, E-mail: jeanmichel.sellier@parallel.bas.bg [IICT, Bulgarian Academy of Sciences, Acad. G. Bonchev str. 25A, 1113 Sofia (Bulgaria)

    2015-11-01

    Recently a Monte Carlo method based on signed particles for time-dependent simulations of the Wigner equation has been proposed. While it has been thoroughly validated against physical benchmarks, no technical study about its numerical accuracy has been performed. To this end, this paper presents the first step towards the construction of firm mathematical foundations for the signed particle Wigner Monte Carlo method. An initial investigation is performed by means of comparisons with a cell average spectral element method, which is a highly accurate deterministic method and utilized to provide reference solutions. Several different numerical tests involving the time-dependent evolution of a quantum wave-packet are performed and discussed in deep details. In particular, this allows us to depict a set of crucial criteria for the signed particle Wigner Monte Carlo method to achieve a satisfactory accuracy.

  7. Hypertext Glosses for Foreign Language Reading Comprehension and Vocabulary Acquisition: Effects of Assessment Methods

    Science.gov (United States)

    Chen, I-Jung

    2016-01-01

    This study compared how three different gloss modes affected college students' L2 reading comprehension and vocabulary acquisition. The study also compared how results on comprehension and vocabulary acquisition may differ depending on the four assessment methods used. A between-subjects design was employed with three groups of Mandarin-speaking…

  8. SPECIFIC METHOD OF RISK ASSESSMENT IN TOURISM ENTERPRISES

    Directory of Open Access Journals (Sweden)

    Andreea ARMEAN

    2014-12-01

    Full Text Available The objective of this paper is to present an innovative method of risk assessment for tourism businesses. Its contribution to the literature lies in three respects: the assessment is ante-factum rather than post-factum; risk is assessed on the basis of perception rather than results; and the method addresses risks specific to tourism enterprises rather than overall risks. The research methodology consists in generating an original risk assessment method from ideas synthesized from the literature studied. The target population is tourism enterprises in Romania. The data necessary for applying this method will be obtained by administering a questionnaire on risk perception to the top-level management of tourism enterprises. The results of this study will help identify and measure the risks specific to tourism enterprises, and their applicability lies in improving risk management in these enterprises.

  9. OPERATIONAL RISK IN INTERNATIONAL BUSINESS: TAXONOMY AND ASSESSMENT METHODS

    Directory of Open Access Journals (Sweden)

    Marinoiu Ana Maria

    2009-05-01

    Full Text Available The paper aims at presenting the classifications and the assessment methods for operational risk according to international regulations (i.e. Basel II), in the context of its importance as a managerial tool for international business. Considering the growin

  10. On the assessment of usability testing methods for children

    NARCIS (Netherlands)

    Markopoulos, P.; Bekker, M.M.

    2003-01-01

    The paper motivates the need to acquire methodological knowledge for involving children as test users in usability testing. It introduces a methodological framework for delineating comparative assessments of usability testing methods for children participants. This framework consists in three

  11. Assessment of New Calculation Method for Toxicological Sums-of-Fractions for Hanford Tank Farm Wastes

    International Nuclear Information System (INIS)

    Mahoney, Lenna A.

    2006-01-01

    The toxicological source terms used for potential accident assessment in the Hanford Tank Farms DSA are based on toxicological sums-of-fractions (SOFs) that were calculated based on the Best Basis Inventory (BBI) from May 2002, using a method that depended on thermodynamic equilibrium calculations of the compositions of liquid and solid phases. The present report describes a simplified SOF-calculation method that is to be used in future toxicological updates and assessments and compares its results (for the 2002 BBI) to those of the old method.
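
    The report does not reproduce the simplified formula, but a sum-of-fractions is conventionally computed by dividing each analyte's concentration by its toxicological limit and summing, with SOF ≥ 1 flagging a concern. A minimal sketch under that assumption; the analytes, concentrations and limits below are hypothetical, not Hanford inventory data:

```python
# Hedged sketch of a sum-of-fractions (SOF) calculation:
# SOF = sum_i (concentration_i / toxicological_limit_i).

def sum_of_fractions(concentrations, limits):
    return sum(concentrations[k] / limits[k] for k in concentrations)

conc = {"NaOH": 2.0, "NaNO2": 15.0, "Cr": 0.05}   # mg/m3, made up
limit = {"NaOH": 20.0, "NaNO2": 45.0, "Cr": 0.5}  # exposure limits, made up

sof = sum_of_fractions(conc, limit)
print(round(sof, 3))  # 0.1 + 0.333 + 0.1 -> about 0.533, below the 1.0 flag
```

    The appeal of this simplified form over the old method is visible here: it needs only inventories and limits, with no thermodynamic equilibrium calculation of phase compositions.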

  12. DREAM: a method for semi-quantitative dermal exposure assessment

    NARCIS (Netherlands)

    Wendel de Joode, B. van; Brouwer, D.H.; Kromhout, H.; Hemmen, J.J. van

    2003-01-01

    This paper describes a new method (DREAM) for structured, semi-quantitative dermal exposure assessment for chemical or biological agents that can be used in occupational hygiene or epidemiology. It is anticipated that DREAM could serve as an initial assessment of dermal exposure, amongst others,

  13. Visual art teachers and performance assessment methods in ...

    African Journals Online (AJOL)

    This paper examines the competencies of visual arts teachers in using performance assessment methods, and to ascertain the extent to which the knowledge, skills and experiences of teachers affect their competence in using assessment strategies in their classroom. The study employs a qualitative research design; ...

  14. Assessing risk of draft survey by AHP method

    Science.gov (United States)

    Xu, Guangcheng; Zhao, Kuimin; Zuo, Zhaoying; Liu, Gang; Jian, Binguo; Lin, Yan; Fan, Yukun; Wang, Fei

    2018-04-01

    The paper assesses the risks of the draft survey of a vessel floating in seawater by using the analytic hierarchy process (AHP). On this basis, the paper establishes a draft survey risk index covering draft reading, ballast water, fresh water, the calculation process and so on. The paper then demonstrates how to carry out the risk assessment on a concrete example.
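
    A minimal AHP sketch, assuming the standard geometric-mean approximation for deriving priority weights from a pairwise comparison matrix. The four risk factors come from the abstract, but the pairwise judgments on Saaty's 1-9 scale are invented for illustration.

```python
# AHP priority weights by the row geometric-mean method (an approximation
# to the principal eigenvector of the comparison matrix).

import math

def ahp_weights(M):
    gm = [math.prod(row) ** (1.0 / len(row)) for row in M]
    total = sum(gm)
    return [g / total for g in gm]

# Pairwise comparisons among: draft reading, ballast water, fresh water,
# calculation process (hypothetical judgments; M[i][j] = 1 / M[j][i]).
M = [
    [1,     3,     5,   2],
    [1 / 3, 1,     3,   1 / 2],
    [1 / 5, 1 / 3, 1,   1 / 3],
    [1 / 2, 2,     3,   1],
]
w = ahp_weights(M)
print([round(x, 3) for x in w])  # draft reading carries the largest weight
```

    A full application would also compute the consistency ratio of M before trusting the weights; the geometric-mean step shown here is only the weight-extraction core.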

  15. Minimal Residual Disease Assessment in Lymphoma: Methods and Applications.

    Science.gov (United States)

    Herrera, Alex F; Armand, Philippe

    2017-12-01

    Standard methods for disease response assessment in patients with lymphoma, including positron emission tomography and computed tomography scans, are imperfect. In other hematologic malignancies, particularly leukemias, the ability to detect minimal residual disease (MRD) is increasingly influencing treatment paradigms. However, in many subtypes of lymphoma, the application of MRD assessment techniques, like flow cytometry or polymerase chain reaction-based methods, has been challenging because of the absence of readily detected circulating disease or canonic chromosomal translocations. Newer MRD detection methods that use next-generation sequencing have yielded promising results in a number of lymphoma subtypes, fueling the hope that MRD detection may soon be applicable in clinical practice for most patients with lymphoma. MRD assessment can provide real-time information about tumor burden and response to therapy, noninvasive genomic profiling, and monitoring of clonal dynamics, allowing for many possible applications that could significantly affect the care of patients with lymphoma. Further validation of MRD assessment methods, including the incorporation of MRD assessment into clinical trials in patients with lymphoma, will be critical to determine how best to deploy MRD testing in routine practice and whether MRD assessment can ultimately bring us closer to the goal of personalized lymphoma care. In this review article, we describe the methods available for detecting MRD in patients with lymphoma and their relative advantages and disadvantages. We discuss preliminary results supporting the potential applications for MRD testing in the care of patients with lymphoma and strategies for including MRD assessment in lymphoma clinical trials.

  16. Operational Safety Assessment of Turbo Generators with Wavelet Rényi Entropy from Sensor-Dependent Vibration Signals

    Directory of Open Access Journals (Sweden)

    Xiaoli Zhang

    2015-04-01

    Full Text Available With the rapid development of sensor technology, various professional sensors are installed on modern machinery to monitor operational processes and assure operational safety, playing an important role in industry and society. In this work a new operational safety assessment approach based on wavelet Rényi entropy of sensor-dependent vibration signals is proposed. On the basis of a professional sensor and the corresponding system, sensor-dependent vibration signals are acquired and analyzed by a second-generation wavelet package, which reflects the time-varying operational characteristics of individual machinery. Derived from the sensor-dependent signals' wavelet energy distribution over the observed signal frequency range, wavelet Rényi entropy is defined to compute the operational uncertainty of a turbo generator, which is then associated with its operational safety degree. The proposed method is applied to a 50 MW turbo generator, where it proves reasonable and effective for operation and maintenance.
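
    The entropy step can be sketched in isolation: the Rényi entropy of a normalized energy distribution is H_α = (1/(1−α)) · log Σ p_i^α. The band energies below are made-up stand-ins for wavelet-packet band energies, not the paper's signals.

```python
# Rényi entropy of a normalized energy distribution (order alpha = 2).
# High entropy = energy spread over many bands (high uncertainty);
# low entropy = energy concentrated in few bands.

import math

def renyi_entropy(energies, alpha=2.0):
    total = sum(energies)
    p = [e / total for e in energies]          # normalize to a distribution
    return math.log(sum(q ** alpha for q in p)) / (1.0 - alpha)

uniform = [1.0] * 8          # energy spread evenly over 8 bands (made up)
peaked = [7.3] + [0.1] * 7   # energy concentrated in one band (made up)

h_uniform = renyi_entropy(uniform)
h_peaked = renyi_entropy(peaked)
print(round(h_uniform, 3), round(h_peaked, 3))
# The uniform case attains the maximum, log(8) ~ 2.079.
```

    In the paper's setting, such an entropy value computed from the wavelet energy distribution is then mapped to an operational safety degree; the mapping itself is not reproduced here.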

  17. Analysis of the most widely used Building Environmental Assessment methods

    International Nuclear Information System (INIS)

    Gu, Zhenhong; Wennersten, R.; Assefa, G.

    2006-01-01

    Building Environmental Assessment (BEA) is a term used for several methods for environmental assessment of the building environment. Generally, Life Cycle Assessment (LCA) is an important foundation and part of the BEA method, but current BEA methods form more comprehensive tools than LCA. Indicators and weight assignments are the two most important factors characterizing BEA. From the comparison of the three most widely used BEA methods, EcoHomes (BREEAM for residential buildings), LEED-NC and GBTool, it can be seen that BEA methods are shifting from ecological, indicator-based scientific systems to more integrated systems covering ecological, social and economic categories. Being relatively new methods, current BEA systems are far from perfect and are under continuous development. The further development of BEA methods will focus more on non-ecological indicators and how to promote implementation. Most BEA methods are developed based on regional regulations and LCA methods, but they do not attempt to replace these regulations. On the contrary, they try to extend implementation by incentive programmes. There are several ways to enhance BEA in the future: expand the studied scope from design levels to whole life-cycle levels of constructions, enhance international cooperation, accelerate legislation and standardize and develop user-oriented assessment systems

  18. Valuation methods within the framework of life cycle assessment

    Energy Technology Data Exchange (ETDEWEB)

    Finnveden, G.

    1996-05-01

    Life Cycle Assessment Valuation methods are discussed. Different approaches for valuation are discussed as well as presently available valuation methods in relation to: * the values involved in the valuation, * the LCA framework, and * different applications of LCA. Among the conclusions are: * ethical and ideological valuations are involved not only when applying valuation weighting factors, but also when choosing valuation method and also when choosing whether to perform a valuation weighting or not, * it can be questioned whether straight distance-to-target methods are valuation methods, * it is still an open question whether presently available valuation methods produce meaningful and reliable information, * further development of quantitative valuation methods could concentrate both on different types of monetarisation methods and panel methods, * in many applications of LCA, the expected result is an identification of critical areas rather than a one-dimensional score, reducing the need for valuation methods. 88 refs, 3 figs, 4 tabs

  19. Element stacking method for topology optimization with material-dependent boundary and loading conditions

    DEFF Research Database (Denmark)

    Yoon, Gil Ho; Park, Y.K.; Kim, Y.Y.

    2007-01-01

    A new topology optimization scheme, called the element stacking method, is developed to better handle design optimization involving material-dependent boundary conditions and selection of elements of different types. If these problems are solved by existing standard approaches, complicated finite...... element models or topology optimization reformulation may be necessary. The key idea of the proposed method is to stack multiple elements on the same discretization pixel and select a single or no element. In this method, stacked elements on the same pixel have the same coordinates but may have...... independent degrees of freedom. Some test problems are considered to check the effectiveness of the proposed stacking method....

  20. Time-Dependent Close-Coupling Methods for Electron-Atom/Molecule Scattering

    International Nuclear Information System (INIS)

    Colgan, James

    2014-01-01

    The time-dependent close-coupling (TDCC) method centers on an accurate representation of the interaction between two outgoing electrons moving in the presence of a Coulomb field. It has been extensively applied to many problems of electrons, photons, and ions scattering from light atomic targets. Theoretical Description: The TDCC method centers on a solution of the time-dependent Schrödinger equation for two interacting electrons. The advantages of a time-dependent approach are two-fold; one treats the electron-electron interaction essentially in an exact manner (within numerical accuracy) and a time-dependent approach avoids the difficult boundary condition encountered when two free electrons move in a Coulomb field (the classic three-body Coulomb problem). The TDCC method has been applied to many fundamental atomic collision processes, including photon-, electron- and ion-impact ionization of light atoms. For application to electron-impact ionization of atomic systems, one decomposes the two-electron wavefunction in a partial wave expansion and represents the subsequent two-electron radial wavefunctions on a numerical lattice. The number of partial waves required to converge the ionization process depends on the energy of the incoming electron wavepacket and on the ionization threshold of the target atom or ion.

  1. [Establishment of Assessment Method for Air Bacteria and Fungi Contamination].

    Science.gov (United States)

    Zhang, Hua-ling; Yao, Da-jun; Zhang, Yu; Fang, Zi-liang

    2016-03-15

    In this paper, in order to address existing problems in the assessment of air bacteria and fungi contamination, the indoor and outdoor field concentrations of airborne bacteria and fungi measured by the impact method and the settlement method were collected from existing documents and analyzed. The chi-square goodness-of-fit test was then used to check whether these concentration data obeyed a normal distribution at the significance level of α = 0.05. Combining the 3σ principle of the normal distribution with the current assessment standards, suggested ranges for air microbial concentrations were determined. The research results could provide a reference for developing future assessment standards for air bacteria and fungi contamination.
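
    The statistical procedure described in this record (a chi-square goodness-of-fit test for normality followed by the 3σ rule) can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code; the bin edges and function names are assumptions.

```python
import math

def normal_cdf(x, mu, sigma):
    # CDF of the normal distribution via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def chi_square_normality(data, edges):
    """Chi-square goodness-of-fit statistic against a normal distribution
    fitted to the sample (mean and population standard deviation).
    Compare the statistic with a chi-square critical value at alpha = 0.05."""
    n = len(data)
    mu = sum(data) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / n)
    stat = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        observed = sum(1 for x in data if lo <= x < hi)
        expected = n * (normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma))
        if expected > 0:
            stat += (observed - expected) ** 2 / expected
    return stat, mu, sigma

def three_sigma_range(mu, sigma):
    # 3-sigma rule: ~99.7% of a normal sample lies within mean +/- 3 sd
    return mu - 3.0 * sigma, mu + 3.0 * sigma
```

    Once the data pass the normality check, three_sigma_range(mu, sigma) gives a candidate concentration range to reconcile with existing assessment standards.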

  2. Assessment of a volume-dependent dynamic respiratory system compliance in ALI/ARDS by pooling breathing cycles

    International Nuclear Information System (INIS)

    Zhao, Zhanqi; Möller, Knut; Guttmann, Josef

    2012-01-01

    New methods were developed to calculate the volume-dependent dynamic respiratory system compliance (C_rs) in mechanically ventilated patients. Due to noise in respiratory signals and the different characteristics of the methods, their results can differ considerably. The aim of the study was to establish a practical procedure to validate the estimation of intratidal dynamic C_rs. A total of 28 patients with acute lung injury (ALI) or acute respiratory distress syndrome (ARDS) from intensive care units of eight German university hospitals were studied retrospectively. Dynamic volume-dependent C_rs was determined during ongoing mechanical ventilation with the SLICE method, the dynostatic algorithm and the adaptive slice method. The conventional two-point compliance C_2P was calculated for comparison. A number of consecutive breathing cycles were pooled to reduce noise in the respiratory signals. C_rs-volume curves produced with different methods converged when the number of pooled cycles increased (n ≥ 7). The mean volume-dependent C_rs of 20 breaths was highly correlated with the mean C_2P (C_2P,mean = 0.945 × C_rs,mean − 0.053, r^2 = 0.968, p < 0.0001). Bland–Altman analysis indicated that C_2P,mean was lower than C_rs,mean (−2.4 ± 6.4 ml/cmH2O, mean bias ± 2 SD), but not significantly so according to the paired t-test (p > 0.05). Methods for analyzing dynamic respiratory mechanics are sensitive to noise and converge to a unique solution as the number of pooled cycles increases. Under steady-state conditions, assessment of the volume-dependent C_rs in ALI/ARDS patients can be validated by pooling respiratory data of consecutive breaths regardless of which method is applied. Confidence in dynamic C_rs determination may be increased with the proposed pooling. (note)
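
    The pooling step at the heart of this validation procedure is simple to illustrate. The sketch below is a hypothetical illustration, not the SLICE method itself: it averages consecutive breathing cycles pointwise, assuming each cycle has already been resampled onto a common volume grid.

```python
def pool_cycles(cycles):
    """Pointwise average of consecutive breathing cycles to suppress noise.
    Each cycle is a list of compliance samples on a shared volume grid."""
    n = len(cycles)
    grid_len = len(cycles[0])
    return [sum(cycle[k] for cycle in cycles) / n for k in range(grid_len)]
```

    As in the study, compliance-volume curves from different estimation methods should converge as more cycles (n ≥ 7) are pooled.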

  3. Recombinant cells and organisms having persistent nonstandard amino acid dependence and methods of making them

    Science.gov (United States)

    Church, George M.; Mandell, Daniel J.; Lajoie, Marc J.

    2017-12-05

    Recombinant cells and recombinant organisms persistently expressing nonstandard amino acids (NSAAs) are provided. Methods of making recombinant cells and recombinant organisms dependent on persistently expressing NSAAs for survival are also provided. These methods may be used to make safe recombinant cells and recombinant organisms and/or to provide a selective pressure to maintain one or more reassigned codon functions in recombinant cells and recombinant organisms.

  4. Asymptotic iteration method solutions to the d-dimensional Schroedinger equation with position-dependent mass

    International Nuclear Information System (INIS)

    Yasuk, F.; Tekin, S.; Boztosun, I.

    2010-01-01

    In this study, the exact solutions of the d-dimensional Schroedinger equation with a position-dependent mass m(r) = 1/(1 + ζ^2 r^2) are presented for a free particle, V(r) = 0, by using the method of point canonical transformations. The energy eigenvalues and corresponding wavefunctions for the effective potential, which turns out to be a generalized Poeschl-Teller potential, are obtained within the framework of the asymptotic iteration method.

  5. A modular method to handle multiple time-dependent quantities in Monte Carlo simulations

    International Nuclear Information System (INIS)

    Shin, J; Faddegon, B A; Perl, J; Schümann, J; Paganetti, H

    2012-01-01

    A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method. (paper)
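
    The modular structure described here (a Sequence that samples time values, with each quantity then evaluated from its Time Feature) can be paraphrased in a short sketch. The code below is a simplified reading of the abstract, not TOPAS itself; a "time feature" is modeled simply as a function of time, and the function names are ours.

```python
import random

def make_sequence(t_end, n, mode="sequential", rng=None):
    """Sample n time values: sequentially at equal increments, or
    randomly from a uniform distribution over [0, t_end)."""
    if mode == "sequential":
        return [i * t_end / n for i in range(n)]
    rng = rng or random.Random(0)
    return [rng.uniform(0.0, t_end) for _ in range(n)]

def evaluate(features, times):
    """Evaluate every time-dependent quantity at every sampled time,
    so multiple independent time features advance in a single pass."""
    return [{name: f(t) for name, f in features.items()} for t in times]
```

    For example, a spinning modulator wheel could be one feature, evaluate({"wheel_angle": lambda t: (360.0 * t) % 360.0}, make_sequence(1.0, 100)), alongside any number of others (a magnetic field, a beam current) with no change to the sampling machinery.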

  6. Assessment of hatchling egg losses and two chick sexing methods ...

    African Journals Online (AJOL)

    Assessment of hatchling egg losses and two chick sexing methods in the Nigerian indigenous chicken. ... Journal of Agricultural Research and Development ... The aim of the present study is to evaluate hatchling egg loss as well as sex determination methods at day old and sexual dimorphism over 8 weeks in Nigerian ...

  7. Methods of assessing total doses integrated across pathways

    International Nuclear Information System (INIS)

    Grzechnik, M.; Camplin, W.; Clyne, F.; Allott, R.; Webbe-Wood, D.

    2006-01-01

    Calculated doses for comparison with limits resulting from discharges into the environment should be summed across all relevant pathways and food groups to ensure adequate protection. Current methodology for assessments used in the Radioactivity in Food and the Environment (RIFE) reports separates doses from pathways related to liquid discharges of radioactivity to the environment from those due to gaseous releases. Surveys of local inhabitant food consumption and occupancy rates are conducted in the vicinity of nuclear sites. Information has been recorded in an integrated way, such that the data for each individual is recorded for all pathways of interest. These can include consumption of foods, such as fish, crustaceans, molluscs, fruit and vegetables, milk and meats. Occupancy times over beach sediments and time spent in close proximity to the site are also recorded for inclusion of external and inhalation radiation dose pathways. The integrated habits survey data may be combined with monitored environmental radionuclide concentrations to calculate total dose. The criteria for successful adoption of a method for this calculation were: reproducibility (can others easily use the approach and reassess doses?); rigour and realism (how good is the match with reality?); transparency (a measure of the ease with which others can understand how the calculations are performed and what they mean); and homogeneity (is the group receiving the dose relatively homogeneous with respect to age, diet and those aspects that affect the dose received?). Five methods of total dose calculation were compared and ranked according to their suitability. Each method was labelled (A to E) and given a short, relevant name for identification. The methods are described below: A) Individual: doses to individuals are calculated and critical group selection is dependent on dose received. B) Individual Plus: as in A, but consumption and occupancy rates for high dose is used to derive rates for application in

  8. A time-dependent neutron transport method of characteristics formulation with time derivative propagation

    International Nuclear Information System (INIS)

    Hoffman, Adam J.; Lee, John C.

    2016-01-01

    A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.
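
    The BDF approximation mentioned in the abstract replaces a stored time history of the full angular flux with a short history of scalar source values. As a minimal illustration (the coefficients are the standard BDF ones; the function name is ours), a backward-differentiation estimate of dy/dt looks like:

```python
# Standard backward differentiation formula (BDF) coefficients, orders 1-3:
# dy/dt at t_n ~ (c0*y_n + c1*y_{n-1} + c2*y_{n-2} + ...) / dt
BDF_COEFFS = {
    1: [1.0, -1.0],
    2: [1.5, -2.0, 0.5],
    3: [11.0 / 6.0, -3.0, 1.5, -1.0 / 3.0],
}

def bdf_derivative(order, dt, values):
    """Approximate dy/dt from values = [y_n, y_{n-1}, ...], newest first."""
    coeffs = BDF_COEFFS[order]
    return sum(c * y for c, y in zip(coeffs, values)) / dt
```

    A BDF of order k is exact for polynomials up to degree k, which is why storing only a few past source values suffices for high-order accuracy.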

  9. A time-dependent neutron transport method of characteristics formulation with time derivative propagation

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, Adam J., E-mail: adamhoff@umich.edu; Lee, John C., E-mail: jcl@umich.edu

    2016-02-15

    A new time-dependent Method of Characteristics (MOC) formulation for nuclear reactor kinetics was developed utilizing angular flux time-derivative propagation. This method avoids the requirement of storing the angular flux at previous points in time to represent a discretized time derivative; instead, an equation for the angular flux time derivative along 1D spatial characteristics is derived and solved concurrently with the 1D transport characteristic equation. This approach allows the angular flux time derivative to be recast principally in terms of the neutron source time derivatives, which are approximated to high-order accuracy using the backward differentiation formula (BDF). This approach, called Source Derivative Propagation (SDP), drastically reduces the memory requirements of time-dependent MOC relative to methods that require storing the angular flux. An SDP method was developed for 2D and 3D applications and implemented in the computer code DeCART in 2D. DeCART was used to model two reactor transient benchmarks: a modified TWIGL problem and a C5G7 transient. The SDP method accurately and efficiently replicated the solution of the conventional time-dependent MOC method using two orders of magnitude less memory.

  10. Photodissociation of NaH using time-dependent Fourier grid method

    Indian Academy of Sciences (India)

    We have solved the time dependent Schrödinger equation by using the Chebyshev polynomial scheme and Fourier grid Hamiltonian method to calculate the dissociation cross section of NaH molecule by 1-photon absorption from the 1+ state to the 1 state. We have found that the results differ significantly from an ...

  11. THE COMBINATION METHOD FOR DEPENDENT EVIDENCE AND ITS APPLICATION FOR SIMULTANEOUS FAULTS DIAGNOSIS

    OpenAIRE

    HAI-NA JIANG; XIAO-BIN XU; CHENG-LIN WEN

    2015-01-01

    This paper provides a method based on Dezert-Smarandache Theory (DSmT) for simultaneous fault diagnosis when evidence is dependent. First, according to the characteristics of simultaneous faults, a frame of discernment is given for both single-fault and simultaneous-fault diagnosis, and the DSmT combination rule applicable to simultaneous fault diagnosis is introduced.

  12. Solving Ratio-Dependent Predatorprey System with Constant Effort Harvesting Using Variational Iteration Method

    DEFF Research Database (Denmark)

    Ghotbi, Abdoul R; Barari, Amin

    2009-01-01

    Due to the wide range of interest in the use of bio-economic models to gain insight into the scientific management of renewable resources like fisheries and forestry, the variational iteration method (VIM) is employed to approximate the solution of the ratio-dependent predator-prey system with constant effort...

  13. STEM - software test and evaluation methods. A study of failure dependency in diverse software

    International Nuclear Information System (INIS)

    Bishop, P.G.; Pullen, F.D.

    1989-02-01

    STEM is a collaborative software reliability project undertaken in partnership with Halden Reactor Project, UKAEA, and the Finnish Technical Research Centre. The objective of STEM is to evaluate a number of fault detection and fault estimation methods which can be applied to high integrity software. This Report presents a study of the observed failure dependencies between faults in diversely produced software. (author)

  14. Developing an Engineering Design Process Assessment using Mixed Methods.

    Science.gov (United States)

    Wind, Stefanie A; Alemdar, Meltem; Lingle, Jeremy A; Gale, Jessica D; Moore, Roxanne A

    Recent reforms in science education worldwide include an emphasis on engineering design as a key component of student proficiency in the Science, Technology, Engineering, and Mathematics disciplines. However, relatively little attention has been directed to the development of psychometrically sound assessments for engineering. This study demonstrates the use of mixed methods to guide the development and revision of K-12 Engineering Design Process (EDP) assessment items. Using results from a middle-school EDP assessment, this study illustrates the combination of quantitative and qualitative techniques to inform item development and revisions. Overall conclusions suggest that the combination of quantitative and qualitative evidence provides an in-depth picture of item quality that can be used to inform the revision and development of EDP assessment items. Researchers and practitioners can use the methods illustrated here to gather validity evidence to support the interpretation and use of new and existing assessments.

  15. A Comparison between Two Instruments for Assessing Dependency in Daily Activities: Agreement of the Northwick Park Dependency Score with the Functional Independence Measure

    Directory of Open Access Journals (Sweden)

    Siv Svensson

    2012-01-01

    Background. There is a need for tools to assess dependency among persons with severe impairments. Objectives. The aim was to compare the Functional Independence Measure (FIM) and the Northwick Park Dependency Score (NPDS) in a sample from in-patient rehabilitation. Material and Methods. Data from 115 persons (20 to 65 years of age) with neurological impairments were gathered. Analyses were made of sensitivity, specificity, positive predictive value, and negative predictive value. Agreement of the scales was assessed with kappa and concordance with Goodman-Kruskal's gamma. Scale structures were explored using the Rank-Transformable Pattern of Agreement (RTPA). Content validation was performed. Results. The sensitivity of the NPDS as compared to the FIM varied between 0.53 (feeding) and 1.0 (mobility), and specificity between 0.64 (mobility) and 1.0 (bladder). The positive predictive value varied from 0.62 (mobility) to 1.0 (bladder), and the negative predictive value varied from 0.48 (bowel) to 1.0 (mobility). Agreement between the scales was moderate to good (four items) or excellent (three items). Concordance was good, with a gamma of −.856, an asymptotic error (ase) of .025, and P < .001. The parallel reliability between the FIM and the NPDS showed a tendency for the NPDS to be more sensitive (having more categories) when dependency is high. Conclusion. The FIM and NPDS complement each other. The NPDS can be used as a measure for severely injured patients, being more sensitive when there is a high need of nursing time.
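
    The agreement statistics used in this comparison are easy to reproduce. The sketch below is illustrative (variable names are ours): it computes sensitivity, specificity, predictive values, and Cohen's kappa from a 2x2 table of dichotomized dependent/independent classifications on the two scales.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and predictive values from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

def cohen_kappa(tp, fp, fn, tn):
    """Chance-corrected agreement between two dichotomized scales."""
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n
    p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)
```

    Kappa corrects the raw percent agreement for the agreement expected by chance, which is why it is preferred over simple accuracy when comparing two dependency scales.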

  16. Assessment of proposed electromagnetic quantum vacuum energy extraction methods

    OpenAIRE

    Moddel, Garret

    2009-01-01

    In research articles and patents several methods have been proposed for the extraction of zero-point energy from the vacuum. None has been reliably demonstrated, but the proposals remain largely unchallenged. In this paper the feasibility of these methods is assessed in terms of underlying thermodynamics principles of equilibrium, detailed balance, and conservation laws. The methods are separated into three classes: nonlinear processing of the zero-point field, mechanical extraction using Cas...

  17. Assessing the Accuracy of Ancestral Protein Reconstruction Methods

    OpenAIRE

    Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A

    2006-01-01

    The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolu...

  18. The bootstrap and Bayesian bootstrap method in assessing bioequivalence

    International Nuclear Information System (INIS)

    Wan Jianping; Zhang Kongsheng; Chen Hui

    2009-01-01

    Parametric methods for assessing individual bioequivalence (IBE) rely on the assumption that the PK responses are normally distributed; a nonparametric alternative is the bootstrap. In 2001, the United States Food and Drug Administration (FDA) proposed a draft guidance. The purpose of this article is to evaluate the IBE between a test drug and a reference drug by the bootstrap and Bayesian bootstrap methods. We study the power of the bootstrap test procedures and of the parametric test procedures in FDA (2001), and find that the Bayesian bootstrap method performs best.
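
    A percentile bootstrap of the kind evaluated here can be sketched as follows. This is a generic illustration of bootstrap interval construction, not the FDA (2001) IBE criterion itself; the data and parameter names are assumptions.

```python
import random

def bootstrap_ci(test, reference, n_boot=2000, alpha=0.05, seed=42):
    """Percentile-bootstrap confidence interval for the difference in
    mean PK response between a test and a reference formulation."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        t = [rng.choice(test) for _ in test]          # resample with replacement
        r = [rng.choice(reference) for _ in reference]
        diffs.append(sum(t) / len(t) - sum(r) / len(r))
    diffs.sort()
    lo = diffs[int((alpha / 2) * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

    A Bayesian bootstrap replaces the equal-probability resampling with Dirichlet-distributed observation weights; the interval construction is otherwise analogous.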

  19. Comparison of three methods to assess individual skeletal maturity.

    Science.gov (United States)

    Pasciuti, Enzo; Franchi, Lorenzo; Baccetti, Tiziano; Milani, Silvano; Farronato, Giampietro

    2013-09-01

    Knowledge of facial growth and development is fundamental to determining the optimal timing of different treatment procedures in the growing patient. The aims were to analyze the reproducibility of three methods for assessing individual skeletal maturity, and to evaluate the degree of concordance among them. In all, 100 growing subjects were enrolled to test three methods: the hand-wrist (WRI), cervical vertebral maturation (CVM), and medial phalanges of the third finger (MP3) methods. Four operators determined the skeletal maturity of the subjects to evaluate the reproducibility of each method. After 30 days the operators repeated the analysis to assess the repeatability of each method. Finally, one operator examined all subjects' radiographs to detect any concordance among the three methods. The weighted kappa values for inter-operator variability were 0.94, 0.91, and 0.90 for the WRI, CVM, and MP3 methods, respectively. The weighted kappa values for intra-operator variability were 0.92, 0.91, and 0.92, respectively. The three methods revealed a high degree of repeatability and reproducibility. Complete agreement among the three methods was observed in 70% of the analyzed sample. The CVM method has the advantage of not necessitating an additional radiograph. The MP3 method is a simple and practical alternative, as it requires only a standard dental x-ray device.

  20. Culture-Dependent and -Independent Methods Capture Different Microbial Community Fractions in Hydrocarbon-Contaminated Soils

    OpenAIRE

    Stefani, Franck O. P.; Bell, Terrence H.; Marchand, Charlotte; de la Providencia, Ivan E.; El Yassimi, Abdel; St-Arnaud, Marc; Hijri, Mohamed

    2015-01-01

    Bioremediation is a cost-effective and sustainable approach for treating polluted soils, but our ability to improve on current bioremediation strategies depends on our ability to isolate microorganisms from these soils. Although culturing is widely used in bioremediation research and applications, it is unknown whether the composition of cultured isolates closely mirrors the indigenous microbial community from contaminated soils. To assess this, we paired culture-independent (454-pyrosequenci...

  1. Reporting methods of blinding in randomized trials assessing nonpharmacological treatments.

    Directory of Open Access Journals (Sweden)

    Isabelle Boutron

    2007-02-01

    BACKGROUND: Blinding is a cornerstone of treatment evaluation. Blinding is more difficult to obtain in trials assessing nonpharmacological treatment and frequently relies on "creative" (nonstandard) methods. The purpose of this study was to systematically describe the strategies used to obtain blinding in a sample of randomized controlled trials of nonpharmacological treatment. METHODS AND FINDINGS: We systematically searched Medline and the Cochrane Methodology Register for randomized controlled trials (RCTs) assessing nonpharmacological treatment with blinding, published during 2004 in high-impact-factor journals. Data were extracted using a standardized extraction form. We identified 145 articles, with the method of blinding described in 123 of the reports. Methods of blinding of participants and/or health care providers and/or other caregivers concerned mainly the use of sham procedures, such as simulation of surgical procedures, similar attention-control interventions, or a placebo with a different mode of administration for rehabilitation or psychotherapy. Trials assessing devices reported various placebo interventions, such as use of a sham prosthesis, an identical apparatus (e.g., an identical but inactivated machine, or an activated machine with a barrier to block the treatment), or simulation of using a device. Blinding participants to the study hypothesis was also an important method of blinding. The methods reported for blinding outcome assessors relied mainly on centralized assessment of paraclinical examinations, clinical examinations (i.e., use of video, audiotape, or photography), or adjudication of clinical events. CONCLUSIONS: This study classifies blinding methods and provides a detailed description of methods that could overcome some barriers to blinding in clinical trials assessing nonpharmacological treatment, and provides information for readers assessing the quality of results of such trials.

  2. Addressing dependability by applying an approach for model-based risk assessment

    International Nuclear Information System (INIS)

    Gran, Bjorn Axel; Fredriksen, Rune; Thunem, Atoosa P.-J.

    2007-01-01

    This paper describes how an approach for model-based risk assessment (MBRA) can be applied for addressing different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources their roles with regard to intentional system aspects such as system functions, component behaviours and intercommunications must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast to this, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and provided through a series of seven trials a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how methodology can be applied as a part of a trust case development

  3. Addressing dependability by applying an approach for model-based risk assessment

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjorn Axel [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: bjorn.axel.gran@hrp.no; Fredriksen, Rune [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: rune.fredriksen@hrp.no; Thunem, Atoosa P.-J. [Institutt for energiteknikk, OECD Halden Reactor Project, NO-1751 Halden (Norway)]. E-mail: atoosa.p-j.thunem@hrp.no

    2007-11-15

    This paper describes how an approach for model-based risk assessment (MBRA) can be applied for addressing different dependability factors in a critical application. Dependability factors, such as availability, reliability, safety and security, are important when assessing the dependability degree of total systems involving digital instrumentation and control (I and C) sub-systems. In order to identify risk sources their roles with regard to intentional system aspects such as system functions, component behaviours and intercommunications must be clarified. Traditional risk assessment is based on fault or risk models of the system. In contrast to this, MBRA utilizes success-oriented models describing all intended system aspects, including functional, operational and organizational aspects of the target. The EU-funded CORAS project developed a tool-supported methodology for the application of MBRA in security-critical systems. The methodology has been tried out within the telemedicine and e-commerce areas, and provided through a series of seven trials a sound basis for risk assessments. In this paper the results from the CORAS project are presented, and it is discussed how the approach for applying MBRA meets the needs of a risk-informed Man-Technology-Organization (MTO) model, and how methodology can be applied as a part of a trust case development.

  4. Ageing management by probabilistic safety assessment (PSA) methods

    International Nuclear Information System (INIS)

    Das, M.; Bhawal, R.N.; Maiti, S.C.

    1994-01-01

    The process and safety systems of a nuclear power plant must achieve their reliability/availability targets throughout the plant life, or over an extended plant life. It is therefore necessary to assess the trend of component or system ageing and to take preventive measures so that ageing effects can be counterbalanced. In this paper a mathematical model has been established to predict ageing effects and to determine time-dependent inspection or test intervals that maintain system availability. (author). 5 figs
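
    One way to realize such a model is to combine an ageing (e.g. Weibull) unavailability curve with periodic as-good-as-new testing. The sketch below is a hypothetical illustration of the idea, not the authors' model; the parameter names, the as-good-as-new assumption and the coarse search strategy are all ours.

```python
import math

def weibull_unavailability(t, beta, eta):
    """Probability the component has failed by time t since the last test
    (Weibull ageing model: shape beta, scale eta; beta > 1 means ageing)."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def mean_unavailability(tau, beta, eta, steps=1000):
    """Average unavailability over a test interval tau, assuming the
    component is restored as-good-as-new at each test (midpoint rule)."""
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * tau / steps
        total += weibull_unavailability(t, beta, eta)
    return total / steps

def max_test_interval(target, beta, eta):
    """Largest test interval (to within ~10%) keeping the mean
    unavailability below the availability-derived target."""
    tau = eta / 1000.0
    while mean_unavailability(tau, beta, eta) < target:
        tau *= 1.1
    return tau / 1.1
```

    The returned interval is coarse (10% steps), but it shows how an availability target translates into a time-dependent inspection or test schedule as the ageing parameters change.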

  5. Method and system for dynamic probabilistic risk assessment

    Science.gov (United States)

    Dugan, Joanne Bechta (Inventor); Xu, Hong (Inventor)

    2013-01-01

    The DEFT methodology, system and computer readable medium extends the applicability of the PRA (Probabilistic Risk Assessment) methodology to computer-based systems, by allowing DFT (Dynamic Fault Tree) nodes as pivot nodes in the Event Tree (ET) model. DEFT includes a mathematical model and solution algorithm, supports all common PRA analysis functions and cutsets. Additional capabilities enabled by the DFT include modularization, phased mission analysis, sequence dependencies, and imperfect coverage.

  6. Simulation of transients with space-dependent feedback by coarse mesh flux expansion method

    International Nuclear Information System (INIS)

    Langenbuch, S.; Maurer, W.; Werner, W.

    1975-01-01

    For the simulation of the time-dependent behaviour of large LWR cores, even the most efficient finite-difference (FD) methods require a prohibitive amount of computing time in order to achieve results of acceptable accuracy. Static coarse-mesh (CM) solutions computed with a mesh size corresponding to the fuel element structure (about 20 cm) are at least as accurate as FD solutions computed with about 5 cm mesh size. For 3d calculations this results in a reduction of storage requirements by a factor of 60 and of computing costs by a factor of 40, relative to FD methods. These results have been obtained for pure neutronics calculations, where feedback is not taken into account. In this paper it is demonstrated that the method retains its accuracy also in kinetic calculations, even in the presence of strong space-dependent feedback. (orig./RW)

  7. New mobile methods for dietary assessment: review of image-assisted and image-based dietary assessment methods.

    Science.gov (United States)

    Boushey, C J; Spoden, M; Zhu, F M; Delp, E J; Kerr, D A

    2017-08-01

    For nutrition practitioners and researchers, assessing dietary intake of children and adults with a high level of accuracy continues to be a challenge. Developments in mobile technologies have created a role for images in the assessment of dietary intake. The objective of this review was to examine peer-reviewed published papers covering development, evaluation and/or validation of image-assisted or image-based dietary assessment methods from December 2013 to January 2016. Images taken with handheld devices or wearable cameras have been used to assist traditional dietary assessment methods for portion size estimations made by dietitians (image-assisted methods). Image-assisted approaches can supplement either dietary records or 24-h dietary recalls. In recent years, image-based approaches integrating application technology for mobile devices have been developed (image-based methods). Image-based approaches aim at capturing all eating occasions by images as the primary record of dietary intake, and therefore follow the methodology of food records. The present paper reviews several image-assisted and image-based methods, their benefits and challenges; followed by details on an image-based mobile food record. Mobile technology offers a wide range of feasible options for dietary assessment, which are easier to incorporate into daily routines. The presented studies illustrate that image-assisted methods can improve the accuracy of conventional dietary assessment methods by adding eating occasion detail via pictures captured by an individual (dynamic images). All of the studies reduced underreporting with the help of images compared with results with traditional assessment methods. Studies with larger sample sizes are needed to better delineate attributes with regards to age of user, degree of error and cost.

  8. A Method Based on Intuitionistic Fuzzy Dependent Aggregation Operators for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Fen Wang

    2013-01-01

    Full Text Available Recently, resolving the decision-making problem of evaluating and ranking potential suppliers has become a key strategic factor for business firms. In this paper, two new intuitionistic fuzzy aggregation operators are developed: the dependent intuitionistic fuzzy ordered weighted averaging (DIFOWA) operator and the dependent intuitionistic fuzzy hybrid weighted aggregation (DIFHWA) operator. Some of their main properties are studied. A method based on the DIFHWA operator for intuitionistic fuzzy multiple attribute decision making is presented. Finally, an illustrative example concerning supplier selection is given.
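
    The dependent-aggregation idea can be sketched in a crisp (real-valued) setting: instead of fixing the weights in advance, each argument's weight is derived from how close it lies to the mean, so outlying assessments are discounted. The paper's DIFOWA/DIFHWA operators apply this idea to intuitionistic fuzzy numbers; the plain-number version below is an illustrative assumption, not the paper's operator.

```python
# Crisp sketch of a dependent OWA aggregation: each argument's weight is
# derived from its similarity to the mean, so outliers get down-weighted.
# The paper's DIFOWA/DIFHWA operators act on intuitionistic fuzzy numbers;
# this real-valued analogue is an illustrative assumption.

def dependent_owa(values):
    n = len(values)
    mean = sum(values) / n
    total_dev = sum(abs(v - mean) for v in values)
    if total_dev == 0:                      # all arguments equal
        weights = [1.0 / n] * n
    else:
        sims = [1 - abs(v - mean) / total_dev for v in values]
        s = sum(sims)
        weights = [si / s for si in sims]
    return sum(wi * vi for wi, vi in zip(weights, values)), weights

# Four expert assessments of one supplier; 0.2 is an outlying judgement.
agg, w = dependent_owa([0.7, 0.8, 0.75, 0.2])
```

    With the outlier present, its weight falls below the uniform 1/4, pulling the aggregate toward the consensus values.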

  9. An Integrated Method of Supply Chains Vulnerability Assessment

    Directory of Open Access Journals (Sweden)

    Jiaguo Liu

    2016-01-01

    Full Text Available Supply chain vulnerability identification and evaluation are extremely important for mitigating supply chain risk. We present an integrated method for assessing supply chain vulnerability. The potential failure modes of supply chain vulnerability are analyzed through the SCOR model. Combining fuzzy theory and gray theory, the correlation degree of each vulnerability indicator can be calculated and targeted improvements can be carried out. To verify the effectiveness of the proposed method, we use Kendall's tau coefficient to measure the agreement between different methods. The results show that the presented method has the highest consistency in the assessment compared with the other two methods.
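
    Kendall's tau, which the paper uses to compare the rankings produced by different assessment methods, can be computed directly from concordant and discordant pairs. A minimal pure-Python version without tie handling (the indicator rankings are hypothetical):

```python
# Kendall's tau between two rankings:
# tau = (concordant - discordant) / (n * (n - 1) / 2).
# No tie correction; rankings below are hypothetical.

def kendall_tau(x, y):
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Two methods ranking five vulnerability indicators: one swapped pair
# out of ten gives tau = (9 - 1) / 10 = 0.8.
tau = kendall_tau([1, 2, 3, 4, 5], [1, 3, 2, 4, 5])
```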

  10. Assessing and evaluating multidisciplinary translational teams: a mixed methods approach.

    Science.gov (United States)

    Wooten, Kevin C; Rose, Robert M; Ostir, Glenn V; Calhoun, William J; Ameredes, Bill T; Brasier, Allan R

    2014-03-01

    A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed-methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed-methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team-type taxonomy. Based on team maturation and scientific progress, teams were designated as (a) early in development, (b) traditional, (c) process focused, or (d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored.

  11. [The theory of dependent-care--a conceptual framework for assessing, supporting, and promoting parental competencies].

    Science.gov (United States)

    Holoch, Elisabeth

    2010-02-01

    Parental competencies influence the professional health care needs of a child and its caregivers. One reason for this is the influence of parental competencies on the healthy development of the child. This applies especially to infants and young children. In order to develop their inborn abilities to regulate themselves and their behaviour, infants and young children depend on the persons they are most closely attached to perceiving their behaviour and responding to it appropriately. The differentiation of self-regulating abilities is a precondition for healthy development. The current rise in sleeping and feeding disorders, as well as interaction problems, among infants and young children indicates that parents increasingly depend on support in the perception and development of their parental competencies. Paediatric nurses can make an important contribution here, provided that a concept of parental competencies defined by nursing professionals is available. This paper presents the Theory of Dependent-Care, and especially the concept of Dependent-Care Agency, and examines how they can provide a theoretical framework for the systematic assessment, support, and promotion of parental competencies by paediatric nurses. To conclude, issues for further investigation of parental Dependent-Care Agency and the necessity for a more detailed conceptualisation of the Theory of Dependent-Care are outlined.

  12. The Impact of Harmonics Calculation Methods on Power Quality Assessment in Wind Farms

    DEFF Research Database (Denmark)

    Kocewiak, Lukasz Hubert; Hjerrild, Jesper; Bak, Claus Leth

    2010-01-01

    Different methods of calculating harmonics in measurements obtained from offshore wind farms are shown in this paper. Appropriate data processing methods are suggested for harmonics with different origin and nature. Enhancements of discrete Fourier transform application in order to reduce measurement data processing errors are proposed and compared with classical methods. Comparison of signal processing methods for harmonic studies is presented and application dependent on harmonics origin and nature recommended. Certain aspects related to magnitude and phase calculation in stationary measurement data are analysed and described. Qualitative indices of measurement data harmonic analysis in order to assess the calculation accuracy are suggested and used.
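
    The classical baseline that such enhancements build on is plain DFT harmonic analysis. A small sketch: synthesize a 50 Hz signal with 5th and 7th harmonics and read their magnitudes from the FFT; the window spans an exact number of fundamental cycles, so each harmonic falls on an FFT bin and there is no spectral leakage. Signal parameters are illustrative, not wind farm data.

```python
import numpy as np

# Baseline DFT harmonic analysis of a synthetic 50 Hz signal containing
# a 10% 5th harmonic and a 5% 7th harmonic, sampled over exactly ten
# fundamental cycles (no leakage). All parameters are illustrative.
fs = 10_000                                 # sampling rate [Hz]
f1 = 50                                     # fundamental [Hz]
t = np.arange(2000) / fs                    # ten fundamental cycles
signal = (np.sin(2 * np.pi * f1 * t)
          + 0.10 * np.sin(2 * np.pi * 5 * f1 * t)
          + 0.05 * np.sin(2 * np.pi * 7 * f1 * t))

spectrum = np.abs(np.fft.rfft(signal)) * 2 / len(signal)   # amplitudes
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

def harmonic_magnitude(order):
    """Amplitude at the FFT bin nearest order * f1."""
    return spectrum[np.argmin(np.abs(freqs - order * f1))]
```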

  13. Assessment of modern methods of human factor reliability analysis in PSA studies

    International Nuclear Information System (INIS)

    Holy, J.

    2001-12-01

    The report is structured as follows: Classical terms and objects (Probabilistic safety assessment as a framework for human reliability assessment; Human failure within the PSA model; Basic types of operator failure modelled in a PSA study and analyzed by HRA methods; Qualitative analysis of human reliability; Quantitative analysis of human reliability; Process of analysis of nuclear reactor operator reliability in a PSA study); New terms and objects (Analysis of dependences; Errors of omission; Errors of commission; Error-forcing context); and Overview and brief assessment of human reliability analysis (Basic characteristics of the methods; Assets and drawbacks of each HRA method; History and prospects of the use of the methods). (P.A.)

  14. Companies Credit Risk Assessment Methods for Investment Decision Making

    Directory of Open Access Journals (Sweden)

    Dovilė Peškauskaitė

    2017-06-01

    Full Text Available As banks have tightened lending requirements, companies look for alternative sources of external funding. One such source is a bond issue. Unfortunately, corporate bond issues as a source of funding are rare in Lithuania. This occurs because companies face a lack of information and investors fear taking on credit risk. Credit risk is defined as a borrower's failure to meet its obligations. Investors, in order to avoid credit risk, have to assess the state of the companies. The goal of the article is to determine the most informative methods of credit risk assessment. The article summarizes corporate lending sources, analyzes the causes of corporate default, and reviews credit risk assessment methods. The study, based on a SWOT analysis, shows that before making an investment decision investors should evaluate both the business risk, using the qualitative CAMPARI method, and the financial risk, using financial ratio analysis.
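
    Financial ratio analysis of the kind recommended for assessing financial risk can be illustrated with a few standard ratios; the ratio set and the balance-sheet figures below are hypothetical, not taken from the article.

```python
# A minimal financial-ratio screen of the kind an investor might run
# before buying corporate bonds. The ratio set and all figures are
# illustrative assumptions, not the article's prescription.

def financial_ratios(current_assets, current_liabilities,
                     total_debt, equity, ebit, interest_expense):
    return {
        "current_ratio": current_assets / current_liabilities,
        "debt_to_equity": total_debt / equity,
        "interest_coverage": ebit / interest_expense,
    }

# Hypothetical issuer (figures in thousands):
r = financial_ratios(current_assets=500, current_liabilities=250,
                     total_debt=400, equity=800,
                     ebit=120, interest_expense=30)
# current_ratio 2.0, debt_to_equity 0.5, interest_coverage 4.0
```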

  15. Going beyond the Millennium Ecosystem Assessment: an index system of human dependence on ecosystem services.

    Science.gov (United States)

    Yang, Wu; Dietz, Thomas; Liu, Wei; Luo, Junyan; Liu, Jianguo

    2013-01-01

    The Millennium Ecosystem Assessment (MA) estimated that two-thirds of ecosystem services on the Earth have degraded or are in decline due to the unprecedented scale of human activities during recent decades. These changes will have tremendous consequences for human well-being, and offer both risks and opportunities for a wide range of stakeholders. Yet these risks and opportunities have not been well managed, due in part to the lack of quantitative understanding of human dependence on ecosystem services. Here, we propose an index of dependence on ecosystem services (IDES) system to quantify human dependence on ecosystem services. We demonstrate the construction of the IDES system using household survey data. We show that the overall index and sub-indices can reflect the general pattern of households' dependence on ecosystem services, and its variation across time, space, and different forms of capital (i.e., natural, human, financial, manufactured, and social capital). We support the proposition that the poor are more dependent on ecosystem services, and we generalize this proposition by arguing that disadvantaged groups who possess low levels of any form of capital except natural capital are more dependent on ecosystem services than those with greater control of capital. A higher value of the overall IDES or of a sub-index represents a higher dependence on the corresponding ecosystem services, and thus a higher vulnerability to their degradation or decline. The IDES system improves our understanding of human dependence on ecosystem services. It also provides insights into strategies for alleviating poverty, for targeting priority groups of conservation programs, and for managing risks and opportunities due to changes of ecosystem services at multiple scales.
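
    One plausible formulation of such a dependence index is the share of a household's total net benefit that comes from ecosystem services; the paper's exact IDES formula may differ, so treat this as a hedged sketch with hypothetical household data.

```python
# Hedged sketch of a household dependence index: the share of total net
# benefit derived from ecosystem services. The published IDES formula
# may differ in detail; all figures below are hypothetical.

def dependence_index(es_net_benefit, other_net_benefit):
    total = es_net_benefit + other_net_benefit
    if total <= 0:
        raise ValueError("sketch assumes positive total net benefit")
    return es_net_benefit / total

# A household earning 6000 from ES-based activities (farming, forest
# products) and 4000 from non-ES sources (wage labour):
idx = dependence_index(6000, 4000)   # -> 0.6
```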

  16. A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

    International Nuclear Information System (INIS)

    Abdikamalov, Ernazar; Ott, Christian D.; O'Connor, Evan; Burrows, Adam; Dolence, Joshua C.; Löffler, Frank; Schnetter, Erik

    2012-01-01

    Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.

  17. A NEW MONTE CARLO METHOD FOR TIME-DEPENDENT NEUTRINO RADIATION TRANSPORT

    Energy Technology Data Exchange (ETDEWEB)

    Abdikamalov, Ernazar; Ott, Christian D.; O' Connor, Evan [TAPIR, California Institute of Technology, MC 350-17, 1200 E California Blvd., Pasadena, CA 91125 (United States); Burrows, Adam; Dolence, Joshua C. [Department of Astrophysical Sciences, Princeton University, Peyton Hall, Ivy Lane, Princeton, NJ 08544 (United States); Loeffler, Frank; Schnetter, Erik, E-mail: abdik@tapir.caltech.edu [Center for Computation and Technology, Louisiana State University, 216 Johnston Hall, Baton Rouge, LA 70803 (United States)

    2012-08-20

    Monte Carlo approaches to radiation transport have several attractive properties such as simplicity of implementation, high accuracy, and good parallel scaling. Moreover, Monte Carlo methods can handle complicated geometries and are relatively easy to extend to multiple spatial dimensions, which makes them potentially interesting in modeling complex multi-dimensional astrophysical phenomena such as core-collapse supernovae. The aim of this paper is to explore Monte Carlo methods for modeling neutrino transport in core-collapse supernovae. We generalize the Implicit Monte Carlo photon transport scheme of Fleck and Cummings and gray discrete-diffusion scheme of Densmore et al. to energy-, time-, and velocity-dependent neutrino transport. Using our 1D spherically-symmetric implementation, we show that, similar to the photon transport case, the implicit scheme enables significantly larger timesteps compared with explicit time discretization, without sacrificing accuracy, while the discrete-diffusion method leads to significant speed-ups at high optical depth. Our results suggest that a combination of spectral, velocity-dependent, Implicit Monte Carlo and discrete-diffusion Monte Carlo methods represents a robust approach for use in neutrino transport calculations in core-collapse supernovae. Our velocity-dependent scheme can easily be adapted to photon transport.
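
    The implicit treatment can be illustrated with a toy gray Monte Carlo random walk in a 1D slab, where a fraction (1 - f) of physical absorption is re-labelled as effective scattering (absorption followed by prompt re-emission), in the spirit of the Fleck factor. All constants are illustrative; real IMC derives f from the material properties and the timestep.

```python
import math
import random

# Toy gray "implicit MC" random walk in a 1D slab: a fraction (1 - f)
# of physical absorption is treated as effective scattering (absorb +
# prompt re-emit), in the spirit of the Fleck factor. Constants are
# illustrative; real IMC ties f to material properties and timestep.

random.seed(1)

def walk(n_particles, slab_width=1.0, sigma_a=5.0, fleck=0.2):
    sigma_abs = fleck * sigma_a             # effective absorption
    sigma_sca = (1 - fleck) * sigma_a       # effective scattering
    sigma_t = sigma_abs + sigma_sca
    absorbed = escaped = 0
    for _ in range(n_particles):
        x, mu = 0.0, 1.0                    # born on left face, moving right
        while True:
            x += mu * (-math.log(1.0 - random.random()) / sigma_t)
            if x < 0.0 or x > slab_width:
                escaped += 1
                break
            if random.random() < sigma_abs / sigma_t:
                absorbed += 1
                break
            mu = random.uniform(-1.0, 1.0)  # isotropic effective scatter
    return absorbed, escaped

absorbed, escaped = walk(2000)
```

    Lowering the Fleck-like factor converts more true absorption into effective scattering, which is what damps the instability of explicit time discretization at large timesteps.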

  18. Assessment of the setup dependence of detector response functions for mega-voltage linear accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Christopher; Simon, Tom; Simon, Bill; Dempsey, James F.; Kahler, Darren; Palta, Jatinder R.; Liu Chihray; Yan Guanghua [Sun Nuclear Inc., 425-A Pineda Court, Melbourne, Florida 32940 and Department of Radiation Oncology, University of Florida, P.O. Box 100385, Gainesville, Florida 32610-0385 (United States); NRE, 202 Nuclear Science Building, University of Florida, P.O. Box 118300, Gainesville, Florida 32611-8300 and Sun Nuclear Inc., 425-A Pineda Court, Melbourne, Florida 32940 (United States); Sun Nuclear Inc., 425-A Pineda Court, Melbourne, Florida 32940 (United States); ViewRay Inc., 2 Thermo Fisher Way, Oakwood Village, Ohio 44146 (United States); Department of Radiation Oncology, University of Florida, P.O. Box 100385, Gainesville, Florida 32610-0385 (United States)

    2010-02-15

    Purpose: Accurate modeling of beam profiles is important for precise treatment planning dosimetry. Calculated beam profiles need to precisely replicate profiles measured during machine commissioning. Finite detector size introduces perturbations into the measured profiles, which, in turn, impact the resulting modeled profiles. The authors investigate a method for extracting the unperturbed beam profiles from those measured during linear accelerator commissioning. Methods: In-plane and cross-plane data were collected for an Elekta Synergy linac at 6 MV using ionization chambers of volume 0.01, 0.04, 0.13, and 0.65 cm³ and a diode of surface area 0.64 mm². The detectors were orientated with the stem perpendicular to the beam and pointing away from the gantry. Profiles were measured for a 10×10 cm² field at depths ranging from 0.8 to 25.0 cm and SSDs from 90 to 110 cm. Shaping parameters of a Gaussian response function were obtained relative to the Edge detector. The Gaussian function was deconvolved from the measured ionization chamber data. The Edge detector profile was taken as an approximation to the true profile, to which deconvolved data were compared. Data were also collected with CC13 and Edge detectors for additional fields and energies on an Elekta Synergy, Varian Trilogy, and Siemens Oncor linear accelerator and response functions obtained. Response functions were compared as a function of depth, SSD, and detector scan direction. Variations in the shaping parameter were introduced and the effect on the resulting deconvolution profiles assessed. Results: Up to 10% setup dependence in the Gaussian shaping parameter occurred, for each detector for a particular plane. This translated to less than a ±0.7 mm variation in the 80%-20% penumbral width. For large volume ionization chambers such as the FC65 Farmer type, where the cavity length to diameter ratio is far from 1, the scan direction produced up to a 40% difference in the shaping

  19. Assessment of the setup dependence of detector response functions for mega-voltage linear accelerators

    International Nuclear Information System (INIS)

    Fox, Christopher; Simon, Tom; Simon, Bill; Dempsey, James F.; Kahler, Darren; Palta, Jatinder R.; Liu Chihray; Yan Guanghua

    2010-01-01

    Purpose: Accurate modeling of beam profiles is important for precise treatment planning dosimetry. Calculated beam profiles need to precisely replicate profiles measured during machine commissioning. Finite detector size introduces perturbations into the measured profiles, which, in turn, impact the resulting modeled profiles. The authors investigate a method for extracting the unperturbed beam profiles from those measured during linear accelerator commissioning. Methods: In-plane and cross-plane data were collected for an Elekta Synergy linac at 6 MV using ionization chambers of volume 0.01, 0.04, 0.13, and 0.65 cm³ and a diode of surface area 0.64 mm². The detectors were orientated with the stem perpendicular to the beam and pointing away from the gantry. Profiles were measured for a 10×10 cm² field at depths ranging from 0.8 to 25.0 cm and SSDs from 90 to 110 cm. Shaping parameters of a Gaussian response function were obtained relative to the Edge detector. The Gaussian function was deconvolved from the measured ionization chamber data. The Edge detector profile was taken as an approximation to the true profile, to which deconvolved data were compared. Data were also collected with CC13 and Edge detectors for additional fields and energies on an Elekta Synergy, Varian Trilogy, and Siemens Oncor linear accelerator and response functions obtained. Response functions were compared as a function of depth, SSD, and detector scan direction. Variations in the shaping parameter were introduced and the effect on the resulting deconvolution profiles assessed. Results: Up to 10% setup dependence in the Gaussian shaping parameter occurred, for each detector for a particular plane. This translated to less than a ±0.7 mm variation in the 80%-20% penumbral width. For large volume ionization chambers such as the FC65 Farmer type, where the cavity length to diameter ratio is far from 1, the scan direction produced up to a 40% difference in the shaping parameter between in
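
    The deconvolution step can be sketched with a synthetic example: build a flat-top profile with error-function penumbrae, blur it with a Gaussian detector response, and recover it by Fourier-domain division with a small regularisation term. The profile shape, detector sigma, and regularisation constant are assumed values, not the paper's commissioning data.

```python
import numpy as np
from math import erf

# Synthetic deconvolution sketch: a flat-top profile with erf penumbrae
# is blurred by a Gaussian detector response (sigma assumed), then
# recovered by regularised Fourier division. Noise-free illustration.

x = np.linspace(-30.0, 30.0, 601)           # position [mm], 0.1 mm grid
true_profile = np.array(
    [0.5 * (erf((10.0 - xi) / 2.0) + erf((10.0 + xi) / 2.0)) for xi in x])

sigma = 2.0                                 # assumed detector sigma [mm]
kernel = np.exp(-x**2 / (2.0 * sigma**2))
kernel /= kernel.sum()

H = np.fft.fft(np.fft.ifftshift(kernel))    # response in frequency domain
measured = np.real(np.fft.ifft(np.fft.fft(true_profile) * H))
recovered = np.real(np.fft.ifft(
    np.fft.fft(measured) * np.conj(H) / (np.abs(H)**2 + 1e-6)))
```

    The blurred penumbra differs visibly from the true one, while the regularised division restores the original profile almost exactly in this noise-free case; with measured data, the regularisation constant trades noise amplification against resolution.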

  20. Diagnostic accuracy of the MMPI-2 to assess maladjustments in people with substance dependence

    Directory of Open Access Journals (Sweden)

    Pablo González-Romero

    2017-07-01

    Full Text Available Acceptance of and respect for the rules governing society and the family unit are essential pillars in the development of a therapeutic program for people with substance dependence disorders. This study pursues a double objective using the MMPI-2 scales that detect such maladjustments: to determine what information they can provide, and to determine the diagnostic accuracy of the MMPI-2 in assessing these maladjustments. As reference scales, psychopathic deviate (Pd), social introversion (Si), antisocial practices (ASP), social responsibility (Re), social discomfort (SOD), introversion/low positive emotion (PSY-INTR), family problems (FAM), and marital distress (MDS) were taken. Of the 226 participants, 113 are people with substance dependence and 113 have neither dependence nor any other pathology. Their differences and diagnostic accuracy were analysed through the ROC curve. The results showed differing contributions and diagnostic accuracy across the scales.
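
    The area under the ROC curve used here to quantify diagnostic accuracy equals, by the Mann-Whitney relation, the probability that a randomly chosen case scores higher than a randomly chosen control (ties counting one half). A minimal sketch with hypothetical scale scores, not data from the study:

```python
# AUC via the Mann-Whitney relation: the probability that a randomly
# chosen case outscores a randomly chosen control (ties count 0.5).
# Scores below are hypothetical.

def roc_auc(cases, controls):
    wins = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(cases) * len(controls))

auc = roc_auc(cases=[70, 68, 75, 62, 71], controls=[55, 60, 63, 58, 66])
# 23 of 25 case/control pairs are correctly ordered -> AUC = 0.92
```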

  1. A comprehensive environmental impact assessment method for shale gas development

    Directory of Open Access Journals (Sweden)

    Renjin Sun

    2015-03-01

    Full Text Available The great success of commercial shale gas exploitation in the US has stimulated shale gas development in China, and corresponding supporting policies were issued in the 12th Five-Year Plan. The US experience shows, however, that the resulting environmental threats are an unavoidable issue, and no uniform, standard evaluation system has yet been set up in China. The comprehensive environment refers to the combination of the natural ecological environment and the external macro-environment. In view of this, we conducted a series of studies on how to set up a comprehensive environmental impact assessment system as well as the related evaluation methodology and models. First, we made an in-depth investigation into shale gas development procedures and their possible environmental impacts, and then compared, screened and modified environmental impact assessment methods for shale gas development. We also established an evaluation system and assessment models according to the different status of the above two types of environment: the correlation matrix method was employed to assess the impacts on the natural ecological environment, and the optimization distance method was modified to evaluate the impacts on the external macro-environment. Finally, we substituted the two sub-indexes into the comprehensive environmental impact assessment model and obtained the final numerical result of the environmental impact assessment. This model can be used to evaluate whether a shale gas project has any impact on the environment, to compare the impacts before and after a shale gas development project, or to compare the impacts of different projects.

  2. Safety assessment and detection methods of genetically modified organisms.

    Science.gov (United States)

    Xu, Rong; Zheng, Zhe; Jiao, Guanglian

    2014-01-01

    Genetically modified organisms (GMOs) are gaining importance in agriculture as well as in the production of food and feed. Along with the development of GMOs, health and food safety concerns have been raised, and these concerns make it necessary to set up a strict system for the food safety assessment of GMOs. This review discusses the food safety assessment of GMOs, the current development status of safe and precise transgenic technologies, and GMO detection. Recent patents on GMOs and their detection methods are also reviewed. This review can provide an elementary introduction on how to assess and detect GMOs.

  3. Implementation of condition-dependent probabilistic risk assessment using surveillance data on passive components

    International Nuclear Information System (INIS)

    Lewandowski, Radoslaw; Denning, Richard; Aldemir, Tunc; Zhang, Jinsuo

    2016-01-01

    Highlights: • Condition-dependent probabilistic risk assessment (PRA). • Time-dependent characterization of plant-specific risk. • Containment bypass involving flow-accelerated corrosion in secondary system piping and SCC in SG tubes. - Abstract: A great deal of surveillance data are collected for a nuclear power plant that reflect the changing condition of the plant as it ages. Although surveillance data are used to determine failure probabilities of active components for the plant's probabilistic risk assessment (PRA) and to indicate the need for maintenance activities, they are not used in a structured manner to characterize the evolving risk of the plant. The present study explores the feasibility of a condition-dependent PRA framework that takes a first-principles approach to modeling the progression of degradation mechanisms to characterize evolving risk, periodically adapting the model to account for surveillance results. A case study is described involving a potential containment bypass accident sequence due to the progression of flow-accelerated corrosion in secondary system piping and stress corrosion cracking of steam generator tubes. In this sequence, a steam line break accompanied by failure to close of a main steam isolation valve results in depressurization of the steam generator and induces the rupture of one or more faulted steam generator tubes. The case study indicates that a condition-dependent PRA framework might be capable of providing early identification of degradation mechanisms important to plant risk.

  4. METHODS OF ASSESSMENT OF THE RELATIVE BIOLOGICAL EFFECTIVENESS OF NEUTRONS IN NEUTRON THERAPY

    Directory of Open Access Journals (Sweden)

    V. A. Lisin

    2017-01-01

    Full Text Available The relative biological effectiveness (RBE) of fast neutrons is an important factor influencing the quality of neutron therapy; therefore, the assessment of RBE is of great importance. Experimental and clinical studies as well as different mathematical and radiobiological models are used for assessing RBE. Research is conducted for neutron sources differing in the method of producing particles, energy and energy spectrum. Purpose: to find and analyze the dose dependence of fast-neutron RBE in neutron therapy using the U-120 cyclotron and the NG-12I generator. Material and methods: The optimal method for assessing the relative biological effectiveness of neutrons in neutron therapy was described. To analyze the dependence of the RBE on neutron dose, the multi-target model of cell survival was applied. Results: The dependence of the RBE of neutrons produced by the U-120 cyclotron and the NG-12I generator on the dose level was found for a single irradiation of biological objects. It was shown that this dependence on neutron dose was consistent with similar dependences found by other authors in experimental and clinical studies.
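
    The multi-target model gives survival S(D) = 1 - (1 - e^(-D/D0))^n, and the RBE at a neutron dose Dn is the photon dose producing the same survival divided by Dn. The sketch below inverts the model analytically; the (D0, n) parameter pairs are illustrative, not the fitted parameters of the U-120 or NG-12I beams.

```python
import math

# Multi-target model: S(D) = 1 - (1 - exp(-D/D0))**n. The RBE at a
# neutron dose Dn is the iso-effective photon dose divided by Dn.
# Parameter values are illustrative, not clinical beam parameters.

def survival(dose, d0, n):
    return 1.0 - (1.0 - math.exp(-dose / d0)) ** n

def isoeffective_dose(s, d0, n):
    # invert S = 1 - (1 - e^(-D/D0))^n for D
    return -d0 * math.log(1.0 - (1.0 - s) ** (1.0 / n))

def rbe(dn, d0_n=0.8, n_n=1.5, d0_x=1.2, n_x=8.0):
    return isoeffective_dose(survival(dn, d0_n, n_n), d0_x, n_x) / dn

doses = [1.0, 2.0, 4.0, 8.0]
rbes = [rbe(d) for d in doses]     # RBE falls as the dose rises
```

    With the large-shoulder photon curve and nearly exponential neutron curve assumed here, the computed RBE decreases monotonically with dose, the qualitative behaviour the abstract refers to.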

  5. Rapid assessment methods in eye care: An overview

    Directory of Open Access Journals (Sweden)

    Srinivas Marmamula

    2012-01-01

    Full Text Available Reliable information is required for the planning and management of eye care services. While classical research methods provide reliable estimates, they are prohibitively expensive and resource intensive. Rapid assessment (RA methods are indispensable tools in situations where data are needed quickly and where time- or cost-related factors prohibit the use of classical epidemiological surveys. These methods have been developed and field tested, and can be applied across almost the entire gamut of health care. The 1990s witnessed the emergence of RA methods in eye care for cataract, onchocerciasis, and trachoma and, more recently, the main causes of avoidable blindness and visual impairment. The important features of RA methods include the use of local resources, simplified sampling methodology, and a simple examination protocol/data collection method that can be performed by locally available personnel. The analysis is quick and easy to interpret. The entire process is inexpensive, so the survey may be repeated once every 5-10 years to assess the changing trends in disease burden. RA survey methods are typically linked with an intervention. This article provides an overview of the RA methods commonly used in eye care, and emphasizes the selection of appropriate methods based on the local need and context.

  6. Assessment of active methods for removal of LEO debris

    Science.gov (United States)

    Hakima, Houman; Emami, M. Reza

    2018-03-01

    This paper investigates the applicability of five active methods for the removal of large low Earth orbit debris. The removal methods, namely net, laser, electrodynamic tether, ion beam shepherd, and robotic arm, are selected based on a set of high-level space mission constraints. Mission-level criteria are then utilized to assess the performance of each removal method in light of the results obtained from a Monte Carlo simulation. The simulation provides insight into the removal time, performance robustness, and propellant mass criteria for the targeted debris range. The remaining attributes are quantified based on the models provided in the literature, which take into account several important parameters pertaining to each removal method. The means of assigning attributes to each assessment criterion are discussed in detail. A systematic comparison is performed using two different assessment schemes: the Analytical Hierarchy Process and a utility-based approach. A third assessment technique, namely potential-loss analysis, is utilized to highlight the effect of risks in each removal method.
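
    The Analytical Hierarchy Process step can be sketched as deriving priority weights from a pairwise comparison matrix via its principal eigenvector (here by power iteration). The 3×3 judgement matrix comparing three removal methods is hypothetical, not the paper's data.

```python
import numpy as np

# AHP priority weights from a pairwise comparison matrix via the
# principal eigenvector (power iteration). The reciprocal judgement
# matrix comparing three removal methods is hypothetical.

A = np.array([[1.0,       3.0,       5.0],
              [1.0 / 3.0, 1.0,       2.0],
              [1.0 / 5.0, 1.0 / 2.0, 1.0]])

w = np.ones(3) / 3.0
for _ in range(100):                 # power iteration
    w = A @ w
    w /= w.sum()                     # normalise to a priority vector
# w ranks the first method highest
```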

  7. A method to identify dependencies between organizational factors using statistical independence test

    International Nuclear Information System (INIS)

    Kim, Y.; Chung, C.H.; Kim, C.; Jae, M.; Jung, J.H.

    2004-01-01

    A considerable number of studies on organizational factors in nuclear power plants have been made, especially in recent years, most of which have assumed organizational factors to be independent. However, organizational factors characterize an organization in terms of safety, efficiency, etc., and some factors have close relations between them. Therefore, from whatever point of view, if we want to identify the characteristics of an organization, the dependence relationships should be considered to get an accurate result. In this study the organization of a reference nuclear power plant in Korea was analyzed for the trip cases of that plant using the 20 organizational factors that Jacobs and Haber had suggested: 1) coordination of work, 2) formalization, 3) organizational knowledge, 4) roles and responsibilities, 5) external communication, 6) inter-department communications, 7) intra-departmental communications, 8) organizational culture, 9) ownership, 10) safety culture, 11) time urgency, 12) centralization, 13) goal prioritization, 14) organizational learning, 15) problem identification, 16) resource allocation, 17) performance evaluation, 18) personnel selection, 19) technical knowledge, and 20) training. By utilizing the results of the analysis, a method to identify the dependence relationships between organizational factors is presented. The statistical independence test applied to the trip-case analysis results is adopted to reveal dependencies. This method is geared to the need to utilize the many kinds of data that have accumulated as the operating years of nuclear power plants increase, and more reliable dependence relations may be obtained by using these abundant data.
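
    A statistical independence test of this kind is typically a Pearson chi-square test on a contingency table of factor ratings. A minimal sketch with a hypothetical 2×2 table (the pairing of factors and the counts are invented for illustration):

```python
# Pearson chi-square test of independence for a contingency table, the
# kind of test used to reveal dependencies between organizational-factor
# ratings. The counts below are hypothetical.

def chi_square(table):
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / total    # expected under independence
            stat += (obs - exp) ** 2 / exp
    return stat

# e.g. ratings of one factor (good/poor) cross-tabulated against another:
stat = chi_square([[10, 20], [20, 10]])   # -> 6.67 with 1 degree of freedom
```

    Comparing the statistic against the chi-square distribution with (rows - 1) × (cols - 1) degrees of freedom then gives the significance of the dependence.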

  8. Approximate method for solving the velocity dependent transport equation in a slab lattice

    International Nuclear Information System (INIS)

    Ferrari, A.

    1966-01-01

    A method is described that is intended to provide an approximate solution of the transport equation in a medium simulating a water-moderated, plate-filled reactor core. This medium is constituted by a periodic array of water channels and absorbing plates. The velocity-dependent transport equation in slab geometry is considered. The computation is performed in a water channel; the absorbing plates are accounted for by the boundary conditions. The scattering of neutrons in water is assumed isotropic, which allows the use of a double-Pn approximation to deal with the angular dependence. This method is able to represent the discontinuity of the angular distribution at the channel boundary. The set of equations thus obtained depends only on x and v, and its coefficients are independent of x. This suggests trying solutions involving Legendre polynomials. This scheme leads to a set of equations dependent on v only. To obtain an explicit solution, a thermalization model must then be chosen. Using the secondary model of Cadilhac, a solution of this set is easily obtained. The numerical computations were performed with a particular secondary model, the well-known model of Wigner and Wilkins. (author) [fr
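
    The core idea of expanding the angular dependence in Legendre polynomials can be illustrated with a simple full-range fit (the double-Pn method of the abstract applies the same expansion on each angular half-range separately); the forward-peaked distribution below is illustrative.

```python
import numpy as np

# Full-range P3 Legendre expansion of a forward-peaked angular
# distribution over mu in [-1, 1]. The double-Pn treatment applies the
# same expansion on each half-range; the distribution is illustrative.

mu = np.linspace(-1.0, 1.0, 201)
f = np.exp(2.0 * mu)                        # forward-peaked angular flux

coeffs = np.polynomial.legendre.legfit(mu, f, deg=3)   # P0..P3 coefficients
f_p3 = np.polynomial.legendre.legval(mu, coeffs)
max_err = np.max(np.abs(f - f_p3))          # truncation error of the P3 fit
```

    Raising the expansion order shrinks the truncation error rapidly for smooth distributions, which is why low-order Pn sets often suffice away from strong discontinuities.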

  9. Spanish adaptation of the NDSS (Nicotine Dependence Syndrome Scale) and assessment of nicotine-dependent individuals at primary care health centers in Spain.

    Science.gov (United States)

    Becoña, Elisardo; López, Ana; Fernández del Río, Elena; Míguez, Ma Carmen; Castro, Josefina

    2010-11-01

    The availability of adequate instruments for the assessment of nicotine dependence is an important factor in the area of tobacco addiction. In this study, we present a Spanish validation of the Nicotine Dependence Syndrome Scale (NDSS) (Shiffman, Waters, & Hickcox, 2004). The sample was composed of patients, all daily smokers, who visited their General Practitioner (GP) at five Primary Health Care Centers in different cities of Spain (N = 637). The results indicated adequate reliability for the general factor that assesses nicotine dependence (NDSS-Total) (Cronbach's alpha = .76). Factor analysis confirmed the five factors of the original validation: Drive, Continuity, Stereotypy, Priority, and Tolerance; reliability is adequate for the first and moderate or low for the rest. The NDSS-T and its scales correlate significantly with the Fagerström Test for Nicotine Dependence (FTND), with the nicotine dependence criteria of the Diagnostic and Statistical Manual of Mental Disorders IV (DSM-IV) as assessed through the Structured Clinical Interview for DSM-IV (SCID), with carbon monoxide levels in expired air (CO), and with the number of cigarettes smoked. ROC analysis indicates that the NDSS-T has an area under the curve of .79 (.69 for the FTND); thus its prediction of nicotine dependence is adequate. We conclude that this instrument (in terms of its total score, NDSS-T) is useful for assessing nicotine dependence in Spanish smokers (in Spain), as has been found in other countries, language groups, and cultures.
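The internal-consistency statistic quoted above (Cronbach's alpha = .76) can be reproduced in form with a short routine; the item scores below are hypothetical, not the NDSS data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-item questionnaire scores (rows = respondents).
scores = np.array([[3, 4, 3, 4, 3],
                   [4, 5, 4, 5, 4],
                   [2, 2, 3, 2, 2],
                   [5, 5, 4, 5, 5],
                   [1, 2, 1, 2, 1],
                   [3, 3, 3, 4, 3]], dtype=float)

alpha = cronbach_alpha(scores)
print(f"Cronbach's alpha = {alpha:.2f}")
```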

  10. Systematic evaluation of observational methods assessing biomechanical exposures at work

    DEFF Research Database (Denmark)

    Takala, Esa-Pekka; Irmeli, Pehkonen; Forsman, Mikael

    2009-01-01

    Numerous methods have been developed to assess physical workload (biomechanical exposures) in order to identify hazards leading to musculoskeletal disorders, to monitor the effects of ergonomic changes, and for research. No individual method covers all needs; this study therefore evaluated the available observational methods by sorting them according to the several items evaluated. [A comparison table is flattened in the source record: methods including the NIOSH Lifting Equation, Arbouw, ACGIH Lifting TLV, MAC, ManTRA, the NZ Code for Manual Handling, the Washington state ergonomic rule, and BackEST are rated on correspondence with a valid reference (high to moderate, moderate, low), repeatability between observers, and potential users (observers, researchers, workers).] MAC (UK), ManTRA (Australia), and the New Zealand code are widely used for the assessment of risks in manual materials handling, but we did not find formal studies on the validity of these methods. The inter-observer repeatability of MAC and the Washington state model has been found to be moderate.

  11. The full size validation of remanent life assessment methods

    International Nuclear Information System (INIS)

    Hepworth, J.K.; Williams, J.A.

    1988-03-01

    A range of possible life assessment techniques for the remanent life appraisal of creeping structures is available in the published literature. However, due to the safety implications, the true conservatism of such methods cannot be assessed on operating plant. Consequently, the CEGB set up a four vessel programme in the Pressure Vessel Test Facility at the Marchwood Engineering Laboratories of the CEGB to underwrite and quantify the accuracy of these methods. The application of two non-destructive methods, namely strain monitoring and hardness measurement, to the data generated during about 12,000 hours of testing is examined. The current state of development of these methods is reviewed. Finally, the future CEGB programme relating to these vessels is discussed. (author)

  12. Indoor air - assessment: Methods of analysis for environmental carcinogens

    International Nuclear Information System (INIS)

    Peterson, M.R.; Naugle, D.F.; Berry, M.A.

    1990-06-01

    The monograph describes, in a general way, published sampling procedures and analytical approaches for known and suspected carcinogens. The primary focus is upon carcinogens found in indoor air, although the methods described are applicable to other media or environments. In cases where there are no published methods for a particular pollutant in indoor air, methods developed for the workplace and for ambient air are included since they should be adaptable to indoor air. Known and suspected carcinogens have been grouped into six categories for the purposes of this and related work. The categories are radon, asbestos, organic compounds, inorganic species, particles, and non-ionizing radiation. Some methods of assessing exposure that are not specific to any particular pollutant category are covered in a separate section. The report is the fifth in a series of EPA/Environmental Criteria and Assessment Office Monographs

  13. The calculation of site-dependent earthquake motions -3. The method of fast fourier transform

    International Nuclear Information System (INIS)

    Simpson, I.C.

    1976-10-01

    The method of Fast Fourier transform (FFT) is applied to the problem of the determination of site-dependent earthquake motions, which takes account of local geological effects. A program, VELAY 1, which uses the FFT method has been written and is described in this report. The assumptions of horizontally stratified, homogeneous, isotropic, linearly viscoelastic layers and a normally incident plane seismic wave are made. Several examples are given, using VELAY 1, of modified surface acceleration-time histories obtained using a selected input acceleration-time history and a representative system of soil layers. There is a discussion concerning the soil properties that need to be measured in order to use VELAY 1 (and similar programs described in previous reports) and hence generate site-dependent ground motions suitable for aseismic design of a nuclear power plant at a given site. (author)
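The frequency-domain filtering at the heart of an FFT-based site-response calculation can be sketched as follows: transform the input acceleration history, multiply by a transfer function, and invert. The one-layer, resonance-shaped transfer function here is an illustrative stand-in, not the viscoelastic layered-soil model actually implemented in VELAY 1.

```python
import numpy as np

def site_response(acc: np.ndarray, dt: float, f0: float, amp: float) -> np.ndarray:
    """Filter a bedrock acceleration history through a toy one-layer
    site transfer function peaking at resonance frequency f0 (Hz)."""
    n = len(acc)
    spectrum = np.fft.rfft(acc)
    freqs = np.fft.rfftfreq(n, d=dt)
    # Illustrative resonance-shaped transfer function (damping ratio 0.1).
    zeta = 0.1
    r = freqs / f0
    H = amp / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)
    return np.fft.irfft(spectrum * H, n=n)

dt = 0.01
t = np.arange(0, 10, dt)
bedrock = np.sin(2 * np.pi * 2.0 * t)          # 2 Hz input motion
surface = site_response(bedrock, dt, f0=2.0, amp=1.0)
ratio = surface.max() / bedrock.max()
print("peak amplification:", ratio)
```

At resonance the toy transfer function amplifies by 1/(2·zeta) = 5, which the filtered sine reproduces.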

  14. Method of determining dispersion dependence of refractive index of nanospheres building opals

    Science.gov (United States)

    Kępińska, Mirosława; Starczewska, Anna; Duka, Piotr

    2017-11-01

    The method of determining dispersion dependence of refractive index of nanospheres building opals is presented. In this method basing on angular dependences of the spectral positions of Bragg diffraction minima on transmission spectra for opal series of known spheres diameter, the spectrum of effective refractive index for opals and then refractive index for material building opal's spheres is determined. The described procedure is used for determination of neff(λ) for opals and nsph(λ) for material which spheres building investigated opals are made of. The obtained results are compared with literature data of nSiO2(λ) considered in the analysis and interpretation of extremes related to the light diffraction at (hkl) SiO2 opal planes.

  15. Testing serial dependence by Random-shuffle surrogates and the Wayland method

    Energy Technology Data Exchange (ETDEWEB)

    Hirata, Yoshito [Department of Mathematical Informatics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan); Aihara Complexity Modelling Project, ERATO, JST (Japan); Institute of Industrial Science, University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan)], E-mail: yoshito@sat.t.u-tokyo.ac.jp; Horai, Shunsuke [Aihara Complexity Modelling Project, ERATO, JST (Japan); Institute of Industrial Science, University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Suzuki, Hideyuki [Department of Mathematical Informatics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan); Institute of Industrial Science, University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan); Aihara, Kazuyuki [Department of Mathematical Informatics, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan); Aihara Complexity Modelling Project, ERATO, JST (Japan); Institute of Industrial Science, University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505 (Japan)

    2007-10-22

    Given a time series, a primary concern is the existence of serial dependence and determinism. These are often tested with Random-shuffle surrogates, which totally break serial dependence, and with the Wayland method. Since the Wayland statistic is fundamentally smaller for a more deterministic time series, for real-world data we usually expect the statistic for the original data to be smaller than or equal to those of the Random-shuffle surrogates. However, we show here an opposite result with wind data at high time resolution. We argue that this puzzling phenomenon can be produced by observational or dynamical noise, either of which may be produced by a low-dimensional deterministic system. Thus the one-sided test is dangerous.
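The surrogate side of such a test can be sketched as follows, using lag-1 autocorrelation as a simple stand-in for the Wayland translation-error statistic; the data and the statistic are illustrative only.

```python
import numpy as np

def lag1_autocorr(x: np.ndarray) -> float:
    """Lag-1 autocorrelation, a simple stand-in for the Wayland statistic."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(42)
# Serially dependent series: a noisy sine wave.
t = np.linspace(0, 8 * np.pi, 500)
data = np.sin(t) + 0.1 * rng.normal(size=t.size)

stat = lag1_autocorr(data)
# Random-shuffle surrogates destroy serial dependence but preserve the
# amplitude distribution of the data.
surrogate_stats = [lag1_autocorr(rng.permutation(data)) for _ in range(200)]

# One-sided test: serial dependence is indicated when the original
# statistic exceeds (nearly) all surrogate values.
p = np.mean([s >= stat for s in surrogate_stats])
print(f"original = {stat:.3f}, surrogate max = {max(surrogate_stats):.3f}, p = {p:.3f}")
```

The record's warning is precisely that such one-sided comparisons can mislead when noise pushes the original statistic to the "wrong" side of the surrogate distribution.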

  16. A hybrid transport-diffusion Monte Carlo method for frequency-dependent radiative-transfer simulations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.

    2012-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations in optically thick media. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many smaller Monte Carlo steps, thus improving the efficiency of the simulation. In this paper, we present an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold, as optical thickness is typically a decreasing function of frequency. Above this threshold we employ standard Monte Carlo, which results in a hybrid transport-diffusion scheme. With a set of frequency-dependent test problems, we confirm the accuracy and increased efficiency of our new DDMC method.

  17. Testing serial dependence by Random-shuffle surrogates and the Wayland method

    International Nuclear Information System (INIS)

    Hirata, Yoshito; Horai, Shunsuke; Suzuki, Hideyuki; Aihara, Kazuyuki

    2007-01-01

    Given a time series, a primary concern is the existence of serial dependence and determinism. These are often tested with Random-shuffle surrogates, which totally break serial dependence, and with the Wayland method. Since the Wayland statistic is fundamentally smaller for a more deterministic time series, for real-world data we usually expect the statistic for the original data to be smaller than or equal to those of the Random-shuffle surrogates. However, we show here an opposite result with wind data at high time resolution. We argue that this puzzling phenomenon can be produced by observational or dynamical noise, either of which may be produced by a low-dimensional deterministic system. Thus the one-sided test is dangerous.

  18. Risk and dose assessment methods in gamma knife QA

    International Nuclear Information System (INIS)

    Banks, W.W.; Jones, E.D.; Rathbun, P.

    1992-10-01

    Traditional methods used in assessing risk in nuclear power plants may be inappropriate for assessing medical radiation risks. The typical philosophy used in assessing nuclear reactor risks is machine-dominated, with only secondary attention paid to the human component, and only after critical machine failure events have been identified. In assessing the risk of a misadministered radiation dose to patients, the primary source of failures seems to stem overwhelmingly from the actions of people and only secondarily from machine failure modes. In essence, certain medical misadministrations are dominated by human events, not machine failures. Radiological medical devices such as the Leksell Gamma Knife are very simple in design, have few moving parts, and are relatively free from the risks of wear when compared with a nuclear power plant. Since there are major technical differences between a gamma knife and a nuclear power plant, one must select a risk assessment method which is sensitive to these system differences and tailored to the unique medical aspects of the phenomena under study. These differences also generate major shifts in the philosophy and assumptions which drive the risk assessment method (machine-centered vs. person-centered). These basic differences prompted us to develop a person-centered approach to risk assessment which would reflect these philosophical and technological differences, have the necessary resolution in its metrics, and be highly reliable (repeatable). The risk approach chosen by the Livermore investigative team has been called the ''Relative Risk Profile Method'' and has been described in detail by Banks and Paramore (1983).

  19. Accuracy, precision, and economic efficiency for three methods of thrips (Thysanoptera: Thripidae) population density assessment.

    Science.gov (United States)

    Sutherland, Andrew M; Parrella, Michael P

    2011-08-01

    Western flower thrips, Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae), is a major horticultural pest and an important vector of plant viruses in many parts of the world. Methods for assessing thrips population density for pest management decision support are often inaccurate or imprecise due to thrips' positive thigmotaxis, small size, and naturally aggregated populations. Two established methods, flower tapping and an alcohol wash, were compared with a novel method, plant desiccation coupled with passive trapping, using accuracy, precision and economic efficiency as comparative variables. Observed accuracy was statistically similar and low (37.8-53.6%) for all three methods. Flower tapping was the least expensive method, in terms of person-hours, whereas the alcohol wash method was the most expensive. Precision, expressed by relative variation, depended on location within the greenhouse, location on greenhouse benches, and the sampling week, but it was generally highest for the flower tapping and desiccation methods. Economic efficiency, expressed by relative net precision, was highest for the flower tapping method and lowest for the alcohol wash method. Advantages and disadvantages are discussed for all three methods used. If relative density assessment methods such as these can all be assumed to accurately estimate a constant proportion of absolute density, then high precision becomes the methodological goal in terms of measuring insect population density, decision making for pest management, and pesticide efficacy assessments.
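The precision and efficiency measures named above are commonly computed as relative variation, RV = 100·SE/mean, and relative net precision, RNP = 100/(RV·cost), with cost in person-hours per sample. Those common forms, and the counts and costs below, are illustrative assumptions rather than the paper's data.

```python
import numpy as np

def relative_variation(counts: np.ndarray) -> float:
    """RV = 100 * standard error / mean (smaller = more precise)."""
    se = counts.std(ddof=1) / np.sqrt(counts.size)
    return 100.0 * se / counts.mean()

def relative_net_precision(counts: np.ndarray, hours_per_sample: float) -> float:
    """RNP = 100 / (RV * cost); larger = more economically efficient."""
    return 100.0 / (relative_variation(counts) * hours_per_sample)

# Hypothetical thrips counts per sample for two assessment methods.
flower_tap = np.array([4, 6, 5, 7, 3, 5, 6, 4], dtype=float)
alcohol_wash = np.array([5, 9, 2, 8, 1, 7, 10, 3], dtype=float)

for name, counts, cost in [("flower tap", flower_tap, 0.02),
                           ("alcohol wash", alcohol_wash, 0.10)]:
    print(f"{name}: RV = {relative_variation(counts):.1f}%, "
          f"RNP = {relative_net_precision(counts, cost):.1f}")
```

With these invented numbers the cheaper, less variable flower-tap samples score higher on RNP, mirroring the qualitative conclusion of the record.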

  20. Development of fire risk assessment method caused by earthquake

    International Nuclear Information System (INIS)

    Mitomo, Nobuo; Matsukura, Hiroshi; Matsuoka, Takeshi; Suzuki, Kazutaka

    2000-01-01

    The purpose of this research is to establish an assessment method for the risk of multiple fires caused by an earthquake, within the framework of PSA. To establish this method, we defined four tasks and in 1999 started a five-year research project. The results will be useful not only for nuclear power plants but also for chemical plants, traffic systems, etc. (author)

  1. Method of nuclear reactor control using a variable temperature load dependent set point

    International Nuclear Information System (INIS)

    Kelly, J.J.; Rambo, G.E.

    1982-01-01

    A method and apparatus for controlling a nuclear reactor in response to a variable average reactor coolant temperature set point is disclosed. The set point is dependent upon percent of full power load demand. A manually-actuated ''droop mode'' of control is provided whereby the reactor coolant temperature is allowed to drop below the set point temperature a predetermined amount wherein the control is switched from reactor control rods exclusively to feedwater flow

  2. An introduction to the adiabatic time-dependent Hartree-Fock method

    International Nuclear Information System (INIS)

    Giannoni, M.J.

    1984-05-01

    The aim of the adiabatic time-dependent Hartree-Fock method is to investigate the microscopic foundations of the phenomenological collective models. We briefly review the general formulation, which consists in deriving a Bohr-like Hamiltonian from a mean-field theory, and discuss the limiting case where only a few collective variables participate in the motion. Some applications to soft nuclei and heavy-ion collisions are presented.

  3. First-Order Hyperbolic System Method for Time-Dependent Advection-Diffusion Problems

    Science.gov (United States)

    2014-03-01

    accuracy, with rapid convergence over each physical time step, typically fewer than five Newton iterations. ... however, we employ the Gauss-Seidel (GS) relaxation, which is also an O(N) method for the discretization arising from the hyperbolic advection-diffusion system ... advection-diffusion scheme. The linear dependency of the iterations on ... (Table 1: boundary layer problem; convergence criteria: residuals < 10^-8.)

  4. An analytical method for determining the temperature dependent moisture diffusivities of pumpkin seeds during drying process

    Energy Technology Data Exchange (ETDEWEB)

    Can, Ahmet [Department of Mechanical Engineering, University of Trakya, 22030 Edirne (Turkey)

    2007-02-15

    This paper presents an analytical method, which determines the moisture diffusion coefficients for the natural and forced convection hot air drying of pumpkin seeds and their temperature dependence. In order to obtain scientific data, the pumpkin seed drying process was investigated under both natural and forced hot air convection regimes. This paper presents the experimental results in which the drying air was heated by solar energy. (author)

  5. Representativeness of environmental impact assessment methods regarding Life Cycle Inventories.

    Science.gov (United States)

    Esnouf, Antoine; Latrille, Éric; Steyer, Jean-Philippe; Helias, Arnaud

    2018-04-15

    Life Cycle Assessment (LCA) characterises all the exchanges between human-driven activities and the environment, thus representing a powerful approach for tackling the environmental impact of a production system. However, LCA practitioners must still choose the appropriate Life Cycle Impact Assessment (LCIA) method to use and are expected to justify this choice: impacts should be relevant to the concerns of the study, and misrepresentations should be avoided. This work aids practitioners in evaluating the adequacy between the assessed environmental issues and the studied production system. Based on a geometrical standpoint on the LCA framework, Life Cycle Inventories (LCIs) and LCIA methods were localized in the vector space spanned by elementary flows. A proximity measurement, the Representativeness Index (RI), is proposed to explore the relationship between those datasets (LCIs and LCIA methods) through an angular distance. RIs highlight LCIA methods that measure issues for which the LCI can be particularly harmful. A high RI indicates a close proximity between an LCI and a LCIA method, and highlights a better representation of the elementary flows by the LCIA method. To illustrate the benefits of the proposed approach, the representativeness of LCIA methods regarding four electricity-mix production LCIs from the ecoinvent database is presented. RIs for 18 LCIA methods (accounting for a total of 232 impact categories) were calculated on these LCIs, and the relevance of the methods is discussed. RIs prove to be a criterion for distinguishing the different LCIA methods and could thus be employed by practitioners for deeper interpretations of LCIA results. Copyright © 2017 Elsevier B.V. All rights reserved.
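The angular-distance idea can be sketched as a cosine between an inventory vector and a characterization-factor vector in the space of elementary flows. The flows, factor values, and normalization below are illustrative assumptions, not the paper's exact RI definition.

```python
import numpy as np

def representativeness_index(lci: np.ndarray, cf: np.ndarray) -> float:
    """Cosine of the angle between an LCI vector and an LCIA
    characterization vector; closer to 1 = higher representativeness."""
    return float(np.dot(lci, cf) / (np.linalg.norm(lci) * np.linalg.norm(cf)))

# Hypothetical elementary flows: [CO2, CH4, SO2, NOx] (arbitrary units).
lci = np.array([120.0, 3.0, 0.4, 0.9])          # a fossil-heavy inventory
climate_cf = np.array([1.0, 28.0, 0.0, 0.0])    # GWP-like factors
acidif_cf = np.array([0.0, 0.0, 1.2, 0.5])      # acidification-like factors

ri_climate = representativeness_index(lci, climate_cf)
ri_acid = representativeness_index(lci, acidif_cf)
print(f"RI climate = {ri_climate:.3f}, RI acidification = {ri_acid:.3f}")
```

Here the fossil-heavy inventory lies at a smaller angle to the climate factors than to the acidification factors, so the climate method represents this inventory's flows better.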

  6. Significance and challenges of stereoselectivity assessing methods in drug metabolism

    Directory of Open Access Journals (Sweden)

    Zhuowei Shen

    2016-02-01

    Full Text Available Stereoselectivity in drug metabolism can not only influence the pharmacological activities, tolerability, safety, and bioavailability of drugs directly, but can also cause various kinds of drug–drug interactions. Thus, assessing stereoselectivity in drug metabolism is of great significance for pharmaceutical research and development (R&D) and for rational use in the clinic. Although various methods are available for assessing stereoselectivity in drug metabolism, many of them have shortcomings. The indirect chromatographic methods are applicable only to specific samples with functional groups that can be derivatized or that can form a complex with a chiral selector, while the direct methods achieved with chiral stationary phases (CSPs) are expensive. As a detector for chromatographic methods, mass spectrometry (MS) is highly sensitive and specific, whereas matrix interference remains a challenge to overcome. In addition, the use of nuclear magnetic resonance (NMR) and immunoassay in chiral analysis is worth noting. This review presents several typical examples of drug stereoselective metabolism and provides a literature-based evaluation of current chiral analytical techniques to show the significance and challenges of stereoselectivity-assessing methods in drug metabolism.

  7. Numerical method for solving the three-dimensional time-dependent neutron diffusion equation

    International Nuclear Information System (INIS)

    Khaled, S.M.; Szatmary, Z.

    2005-01-01

    A numerical time-implicit method has been developed for solving the coupled three-dimensional time-dependent multi-group neutron diffusion and delayed neutron precursor equations. The numerical stability of the implicit computation scheme and the convergence of the iterative associated processes have been evaluated. The computational scheme requires the solution of large linear systems at each time step. For this purpose, the point over-relaxation Gauss-Seidel method was chosen. A new scheme was introduced instead of the usual source iteration scheme. (author)
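A minimal sketch of point (successive) over-relaxation of the Gauss-Seidel type mentioned above, applied to a small diffusion-like linear system; the tridiagonal matrix is illustrative, not a reactor discretization.

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Point successive over-relaxation for Ax = b (A diagonally dominant)."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            # Use already-updated entries (Gauss-Seidel sweep) plus relaxation.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# 1-D diffusion-like tridiagonal system (illustrative).
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = sor_solve(A, b)
print("residual:", np.linalg.norm(A @ x - b))
```

In the record's scheme such a relaxation solve is repeated at every time step of the implicit diffusion calculation.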

  8. Charmonium-nucleon interactions from the time-dependent HAL QCD method

    Science.gov (United States)

    Sugiura, Takuya; Ikeda, Yoichi; Ishii, Noriyoshi

    2018-03-01

    The charmonium-nucleon effective central interactions have been computed by the time-dependent HAL QCD method. This updates a previous study based on the time-independent method, which is now known to be problematic because of the difficulty in achieving ground-state saturation. We discuss that the result is consistent with heavy-quark symmetry. No bound state is observed in the analysis of the scattering phase shift; however, this will inform a future search for hidden-charm pentaquarks that considers channel-coupling effects.

  9. The adaptive CCCG({eta}) method for efficient solution of time dependent partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Campos, F.F. [Universidade Federal de Minas Gerais, Belo Horizonte (Brazil); Birkett, N.R.C. [Oxford Univ. Computing Lab. (United Kingdom)

    1996-12-31

    The Controlled Cholesky factorisation has been shown to be a robust preconditioner for the Conjugate Gradient method. In this scheme the amount of fill-in is defined in terms of a parameter {eta}, the number of extra elements allowed per column. It is demonstrated how an optimum value of {eta} can be automatically determined when solving time dependent p.d.e.'s using an implicit time step method. A comparison between CCCG({eta}) and the standard ICCG solving parabolic problems on general grids shows CCCG({eta}) to be an efficient general purpose solver.

  10. Histological Grading of Hepatocellular Carcinomas with Intravoxel Incoherent Motion Diffusion-weighted Imaging: Inconsistent Results Depending on the Fitting Method.

    Science.gov (United States)

    Ichikawa, Shintaro; Motosugi, Utaroh; Hernando, Diego; Morisaka, Hiroyuki; Enomoto, Nobuyuki; Matsuda, Masanori; Onishi, Hiroshi

    2018-04-10

    To compare the abilities of three intravoxel incoherent motion (IVIM) imaging approximation methods to discriminate the histological grade of hepatocellular carcinomas (HCCs). Fifty-eight patients (60 HCCs) underwent IVIM imaging with 11 b-values (0-1000 s/mm2). The slow (D) and fast (D*) diffusion coefficients and the perfusion fraction (f) were calculated for the HCCs using the mean signal intensities in regions of interest drawn by two radiologists. Three approximation methods were used. First, all three parameters were obtained simultaneously using non-linear fitting (method A). Second, D was obtained using linear fitting (b = 500 and 1000), followed by non-linear fitting for D* and f (method B). Third, D was obtained by linear fitting, f was obtained using the regression-line intersection and the signal at b = 0, and non-linear fitting was used for D* (method C). A receiver operating characteristic analysis was performed to reveal the abilities of these methods to distinguish poorly-differentiated from well-to-moderately-differentiated HCCs. Inter-reader agreements were assessed using intraclass correlation coefficients (ICCs). The measurements of D, D*, and f in methods B and C (Az-value, 0.658-0.881) had better discrimination abilities than those in method A (Az-value, 0.527-0.607). The ICCs of D and f were good to excellent (0.639-0.835) with all methods. The ICCs of D* were moderate with methods B (0.580) and C (0.463) and good with method A (0.705). The IVIM parameters may vary depending on the fitting method, and therefore further technical refinement may be needed.
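The segmented approach of method B can be sketched on synthetic data: fit D linearly from the high-b log-signal, where the perfusion term has decayed away, then fit f and D* non-linearly with D fixed. The parameter values below are illustrative, not patient data.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim(b, f, d_star, d):
    """Bi-exponential IVIM signal model."""
    return f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d)

b_values = np.array([0, 10, 20, 40, 80, 150, 300, 500, 700, 850, 1000], float)
true_f, true_dstar, true_d = 0.25, 0.05, 0.0012   # illustrative values
signal = ivim(b_values, true_f, true_dstar, true_d)

# Step 1 (linear): at high b, log S ~ log(1 - f) - b * D.
high = b_values >= 500
slope, intercept = np.polyfit(b_values[high], np.log(signal[high]), 1)
d_fit = -slope

# Step 2 (non-linear): fix D, fit f and D* over the full curve.
(f_fit, dstar_fit), _ = curve_fit(
    lambda b, f, d_star: ivim(b, f, d_star, d_fit),
    b_values, signal, p0=(0.2, 0.02), bounds=([0, 0], [1, 1]))

print(f"D = {d_fit:.5f}, f = {f_fit:.3f}, D* = {dstar_fit:.4f}")
```

On noisy clinical data the simultaneous fit (method A) and the segmented fits (methods B and C) can land on different parameter values, which is the record's central point.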

  11. Analysis and Comparison of Objective Methods for Image Quality Assessment

    Directory of Open Access Journals (Sweden)

    P. S. Babkin

    2014-01-01

    Full Text Available The purpose of this work is the study and modification of reference objective methods for image quality assessment. The ultimate goal is to obtain a modification of the formal assessments that corresponds more closely to subjective expert estimates (MOS). In considering the formal reference objective methods for image quality assessment, we used the results of other authors, who offer comparative analyses of the most effective algorithms. Based on these investigations we chose the two most successful algorithms, PQS and MS-SSIM, for which a further analysis was made in MATLAB 7.8 (R2009a). The publication focuses on features of the algorithms which have great importance in practical implementation but are insufficiently covered in the publications of other authors. In the implemented modification of the PQS algorithm, the Kirsch edge detector was replaced by the Canny edge detector. Further experiments were carried out according to the method of ITU-R BT.500-13 (01/2012) using monochrome images treated with different types of filters (it should be emphasized that the PQS objective assessment of image quality is applicable only to monochrome images). The images were obtained with a thermal imaging surveillance system. The experimental results proved the effectiveness of this modification. In the specialized literature on formal image quality evaluation methods, this type of modification has not been mentioned. The method described in the publication can be applied in various practical implementations of digital image processing. The advisability and effectiveness of using the modified PQS method to assess structural differences between images are shown in the article, and this will be used in solving problems of identification and automatic control.

  12. Power Supply Interruption Costs: Models and Methods Incorporating Time Dependent Patterns

    International Nuclear Information System (INIS)

    Kjoelle, G.H.

    1996-12-01

    This doctoral thesis develops models and methods for the estimation of annual interruption costs for delivery points, emphasizing the handling of time-dependent patterns and of uncertainties in the variables determining the annual costs. It presents an analytical method for the calculation of annual expected interruption costs for delivery points in radial systems, based on a radial reliability model with time-dependent variables, and a similar method for meshed systems, based on a list of outage events, assuming that these events are found in advance from load-flow and contingency analyses. A Monte Carlo simulation model is given which handles both time variations and stochastic variations in the input variables and is based on the same list of outage events. This general procedure for radial and meshed systems provides expectation values and probability distributions for interruption costs at delivery points. There is also a procedure for handling uncertainties in input variables by a fuzzy description, giving annual interruption costs as a fuzzy membership function. The methods are developed for practical applications in radial and meshed systems, based on available data from failure statistics, load registrations, and customer surveys. Traditional reliability indices such as annual interruption time and power and energy not supplied are calculated as by-products. The methods are presented as algorithms and/or procedures which are available as prototypes. 97 refs., 114 figs., 62 tabs.
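The Monte Carlo treatment of time-dependent patterns can be sketched for a single delivery point; the seasonal failure-rate profile, repair-time distribution, daily load pattern, and unit interruption cost below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
HOURS = 8760

# Hypothetical seasonal failure-rate profile (failures per hour),
# higher in winter, lower in summer.
hour = np.arange(HOURS)
rate = 2e-4 * (1.0 + 0.5 * np.cos(2 * np.pi * hour / HOURS))

def simulate_year() -> float:
    """One Monte Carlo year: sample failures hour by hour, cost each one."""
    cost = 0.0
    failures = rng.random(HOURS) < rate               # thinning approximation
    for h in np.flatnonzero(failures):
        duration = rng.exponential(2.0)               # repair time, hours
        load = 5.0 + 2.0 * np.sin(2 * np.pi * h / 24)  # MW, daily pattern
        cost += load * duration * 8.0                 # assumed $/MWh interrupted
    return cost

annual_costs = np.array([simulate_year() for _ in range(500)])
print(f"expected annual cost ~ {annual_costs.mean():.0f}, "
      f"95th percentile ~ {np.percentile(annual_costs, 95):.0f}")
```

The empirical distribution over simulated years is what yields both the expectation value and the probability distribution of interruption costs that the thesis refers to.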

  13. Hyperspherical time-dependent method with semiclassical outgoing waves for double photoionization of helium

    International Nuclear Information System (INIS)

    Kazansky, A.K.; Selles, P.; Malegat, L.

    2003-01-01

    The hyperspherical time-dependent method with semiclassical outgoing waves for study of double photoionization of helium is presented. It is closely related to the hyperspherical R-matrix method with semiclassical outgoing waves [Phys. Rev. A 65, 032711 (2002)]: both split configuration space into two regions to solve the stationary inhomogeneous Schroedinger equation associated with the one-photon ionization problem, and both apply the same treatment to the outer region. However, the two methods differ radically in their treatments of the problem in the inner region: the most recent one applies a time-dependent approach for calculating the stationary wave function, while the previous one uses a R-matrix treatment. The excellent agreement observed between the triple differential cross sections obtained from these two basically different methods provides very strong support for both of them. Importantly, the very different numerical structures of both methods might make the most recent one a better candidate for investigating the near-threshold region

  14. Power Supply Interruption Costs: Models and Methods Incorporating Time Dependent Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Kjoelle, G.H.

    1996-12-01

    This doctoral thesis develops models and methods for the estimation of annual interruption costs for delivery points, emphasizing the handling of time-dependent patterns and of uncertainties in the variables determining the annual costs. It presents an analytical method for the calculation of annual expected interruption costs for delivery points in radial systems, based on a radial reliability model with time-dependent variables, and a similar method for meshed systems, based on a list of outage events, assuming that these events are found in advance from load-flow and contingency analyses. A Monte Carlo simulation model is given which handles both time variations and stochastic variations in the input variables and is based on the same list of outage events. This general procedure for radial and meshed systems provides expectation values and probability distributions for interruption costs at delivery points. There is also a procedure for handling uncertainties in input variables by a fuzzy description, giving annual interruption costs as a fuzzy membership function. The methods are developed for practical applications in radial and meshed systems, based on available data from failure statistics, load registrations, and customer surveys. Traditional reliability indices such as annual interruption time and power and energy not supplied are calculated as by-products. The methods are presented as algorithms and/or procedures which are available as prototypes. 97 refs., 114 figs., 62 tabs.

  15. An Identification Key for Selecting Methods for Sustainability Assessments

    Directory of Open Access Journals (Sweden)

    Michiel C. Zijp

    2015-03-01

    Full Text Available Sustainability assessments can play an important role in decision making. This role starts with selecting appropriate methods for a given situation. We observed that scientists, consultants, and decision-makers often do not systematically perform a problem analysis that guides the choice of the method, partly because systematic yet sufficiently versatile approaches to do so are lacking. Therefore, we developed and propose a new step towards method selection on the basis of question articulation: the Sustainability Assessment Identification Key. The identification key was designed to lead its user through all important choices needed for comprehensive question articulation. Subsequently, methods that fit the resulting specific questions are suggested by the key. The key consists of five domains, of which three determine method selection and two the design or use of the method. Each domain consists of four or more criteria that need specification. For example, in the domain "system boundaries", amongst others, the spatial and temporal scales are specified. The key was tested (retrospectively) on a set of thirty case studies. Using the key appeared to contribute to improved: (i) transparency in the link between the question and method selection; (ii) consistency between questions asked and answers provided; and (iii) internal consistency in methodological design. There is latitude to develop the current initial key further, not only for selecting methods pertinent to a problem definition, but also as a principle for associated opportunities such as stakeholder identification.

  16. THE RISKS’ ASSESSMENT IN INNOVATIVE PROJECTS BY THE METHOD OF VERIFIED EQUIVALENTS

    Directory of Open Access Journals (Sweden)

    Анатолій Валентинович ШАХОВ

    2017-03-01

    Full Text Available The article describes the concept of "innovation risk", identifies the causes of risk, and discusses methods of eliminating negative manifestations of risk situations in innovative projects. The advantages and disadvantages of the discount-rate correction method and the method of equivalent annuities are considered. A methodical approach to assessing the expected effect of an innovative project, based on the concept of probability-interval uncertainty, is proposed. It was established that the analyzed approaches can be used to account for the risk of innovative projects. The project manager chooses a risk assessment method individually, depending on the extent and characteristics of the project, the degree of novelty and scale of introduction of the innovative product, the number of participants, the level of requirements for justifying project efficiency, and other factors.
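
    The two classical techniques the abstract compares can be contrasted in a few lines. The sketch below discounts risky cash flows at a risk-adjusted rate and, alternatively, scales them to certainty equivalents discounted at a risk-free rate; every figure (cash flows, coefficients, both rates) is a made-up illustration, not data from the article.

```python
def npv(cash_flows, rate):
    """Net present value of cash flows indexed by year t = 0, 1, 2, ..."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

flows = [-1000.0, 500.0, 600.0, 700.0]   # hypothetical project cash flows

# Method 1: discount risky flows at a risk-adjusted rate (here 15%)
npv_radr = npv(flows, 0.15)

# Method 2: scale flows by certainty-equivalent coefficients,
# then discount at a risk-free rate (here 5%)
alphas = [1.0, 0.9, 0.8, 0.7]            # hypothetical coefficients
ce_flows = [a * cf for a, cf in zip(alphas, flows)]
npv_ce = npv(ce_flows, 0.05)
```

    Both methods fold risk into a single number; they differ in whether the adjustment sits in the discount rate or in the cash flows themselves, which is precisely the trade-off the article weighs.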

  17. Impact of mounting methods in computerized axiography on assessment of condylar inclination.

    Science.gov (United States)

    Schierz, Oliver; Wagner, Philipp; Rauch, Angelika; Reissmann, Daniel R

    2017-08-30

    Valid and reliable recording is a key requirement for accurately simulating individual jaw movements. Horizontal condylar inclination (HCI) and Bennett's angle were measured using a digital jaw tracker (Cadiax® Compact 2) in 27 young adults. Three mounting methods (a paraocclusal tray adapter, a periocclusal tray adapter, and a tray adapter with a mandibular clamp) were tested. The mean values of the HCI differed by up to 10° between the mounting methods; however, the values for Bennett's angle did not differ substantially. While the intersession reliability of the Bennett's angle assessment did not depend on the mounting method, the reliability of the HCI assessment was only fair to good for the paraocclusal mounting method and poor for both periocclusal mounting methods. For attaching the tracing bow of jaw trackers to the mandible, a paraocclusal tray adapter should be used to achieve the most reliable results.

  18. The discrete ordinates method for solving the azimuthally dependent transport equation in plane geometry

    International Nuclear Information System (INIS)

    Chalhoub, Ezzat Selim

    1997-01-01

    The method of discrete ordinates is applied to the solution of the slab albedo problem with azimuthal dependence in transport theory. A new set of quadratures appropriate to the problem is introduced. In addition to the ANISN code, modified to include the proposed formalism, two new programs, PEESNC and PEESNA, which were created on the basis of the discrete ordinates formalism, using the direct integration method and the analytic solution method respectively, are used in the generation of results for a few sample problems. Program PEESNC was created to validate the results obtained with the discrete ordinates method and the finite difference approximation (ANISN), while program PEESNA was developed in order to implement an analytical discrete ordinates formalism, which provides more accurate results. The obtained results for selected sample problems are compared with highly accurate numerical results published in the literature. Compared to ANISN and PEESNC, program PEESNA presents a greater efficiency in execution time and much more precise numerical results. (author)
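
    A minimal, hedged sketch of the discrete-ordinates idea mentioned above: a diamond-difference sweep through a purely absorbing, source-free 1-D slab, checked against the analytic attenuation. This is an illustration of the S_N family only, not the ANISN, PEESNC or PEESNA codes; the cross section, slab width and two-point quadrature are chosen for demonstration.

```python
import math

def sweep(mu, sigma_t, width, n_cells, psi_in):
    """March the angular flux for one ordinate mu > 0 across the slab
    using the diamond-difference relation."""
    dx = width / n_cells
    h = sigma_t * dx / mu
    psi = psi_in
    for _ in range(n_cells):
        # diamond difference: psi_out = (1 - h/2) / (1 + h/2) * psi_in
        psi = (1 - h / 2) / (1 + h / 2) * psi
    return psi

# Two-point Gauss-Legendre ordinates (nodes, weights) mapped to (0, 1]
ordinates = [(0.21132486540518713, 0.5), (0.7886751345948129, 0.5)]

sigma_t, width = 1.0, 1.0
# Exiting angular flux per ordinate, for a unit incoming flux
exit_flux = {mu: sweep(mu, sigma_t, width, 400, 1.0) for mu, _ in ordinates}
```

    For this absorption-only problem the exact answer is exp(-sigma_t * width / mu), so the sweep can be validated ordinate by ordinate, in the same spirit in which PEESNC validated the ANISN results.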

  19. BREEAM [Building Research Establishment Environmental Assessment Method] BRE [Building Research Establishment] assessment method for buildings

    International Nuclear Information System (INIS)

    Baldwin, R.

    1994-01-01

    Buildings account for a large share of environmental impacts in their construction, use, and demolition. In western Europe, buildings account for ca 50% of primary energy use (hence CO2 output), far outweighing the contribution of the transport and industrial sectors. Other impacts from building energy use include the use of chemicals such as chlorofluorocarbons for cooling. In the United Kingdom, the Building Research Establishment (BRE) has developed a certificate system for environmental labelling of buildings so that the performance of the building against a set of defined environmental criteria can be made visible to clients. This system thus rewards positive actions to improve the environmental performance of buildings and assists in marketing to an environmentally aware clientele. Issues included in assessments for awarding the certificate are addressed under three main headings: global issues and use of resources, local issues, and indoor issues. Global issues include ozone depletion and CO2 emissions; local issues include public health and water conservation; and indoor issues include air quality and lighting. 8 refs., 1 tab

  20. Biological methods used to assess surface water quality

    Directory of Open Access Journals (Sweden)

    Szczerbiñska Natalia

    2015-12-01

    Full Text Available In accordance with the guidelines of the Water Framework Directive 2000/60 (WFD), both ecological and chemical statuses determine the assessment of surface waters. The profile of ecological status is based on the analysis of various biological components, and physicochemical and hydromorphological indicators complement this assessment. The aim of this article is to present the biological methods used in the assessment of water status, with a special focus on bioassays, as well as to provide a review of methods of monitoring water status. Biological test methods include both biomonitoring and bioanalytics. Water biomonitoring is used to assess and forecast the status of water. These studies aim to collect data on water pollution and forecast its impact. Biomonitoring uses organisms which are characterized by particular vulnerability to contaminants. Bioindicator organisms include algae, fungi, bacteria, larval invertebrates, cyanobacteria, macroinvertebrates, and fish. Bioanalytics is based on receptors for contaminants, which can be biologically active substances. In bioanalytics, biosensors such as viruses, bacteria, antibodies, and enzymes, as well as biotests, are used to assess degrees of pollution.

  1. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. The uncertainty could be due to variations in the geometry of the part, the material properties, or a lack of knowledge about the phenomena being modeled. Deterministic design optimization does not take uncertainty into account, and worst-case scenario assumptions lead to vastly overconservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions in the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure of optimization and iterative probabilistic assessment, which results in high computational demand. This demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the Sequential Optimization and Reliability Assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment, decoupled within each cycle. This leads to quick improvement of the design from one cycle to the next and increases computational efficiency. This paper demonstrates the effectiveness of the SORA method when applied to designing a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive finite element simulations.

  2. Quantitative assessment of breast density: comparison of different methods

    International Nuclear Information System (INIS)

    Qin Naishan; Guo Li; Dang Yi; Song Luxin; Wang Xiaoying

    2011-01-01

    Objective: To compare different methods of quantitative breast density measurement. Methods: The study included sixty patients who underwent both mammography and breast MRI. Breast density was computed automatically on digital mammograms with an R2 workstation. Two experienced radiologists read the mammograms and assessed breast density with the Wolfe and ACR classifications, respectively. The fuzzy C-means clustering algorithm (FCM) was used to assess breast density on MRI. Each assessment was repeated after 2 weeks. Spearman and Pearson correlations of inter- and intrareader and intermodality density estimates were computed. Results: Inter- and intrareader correlations were 0.74 and 0.65 for the Wolfe classification, and 0.74 and 0.82 for the ACR classification, respectively. The correlation between the Wolfe and ACR classifications was 0.77. High interreader (0.98) and intrareader (0.96) correlations were observed with the MR FCM measurement, and the correlation between digital mammograms and MRI in the assessment of breast density was high (r=0.81, P<0.01). Conclusion: The high correlation between breast density estimates on digital mammograms and MRI FCM suggests the former could be used as a simple and accurate method. (authors)
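
    The fuzzy C-means algorithm named in this abstract can be sketched in one dimension. The toy below clusters synthetic "pixel intensities" into two fuzzy groups (standing in, loosely, for fatty and dense tissue); the data, initialisation and parameters are invented for illustration and are not the study's MRI pipeline.

```python
def fcm_1d(data, m=2.0, iters=50):
    """Two-cluster fuzzy C-means on scalar data; returns sorted centers."""
    # Deterministic initialisation at the data extremes
    centers = [min(data), max(data)]
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        u = []
        for x in data:
            d = [abs(x - c) + 1e-12 for c in centers]
            u.append([1.0 / sum((dk / dj) ** (2.0 / (m - 1.0)) for dj in d)
                      for dk in d])
        # Center update: mean weighted by memberships raised to power m
        centers = [
            sum(u[i][k] ** m * data[i] for i in range(len(data)))
            / sum(u[i][k] ** m for i in range(len(data)))
            for k in range(2)
        ]
    return sorted(centers)

# Synthetic low-intensity vs high-intensity pixel groups
pixels = [12.0, 14.0, 15.0, 17.0, 19.0, 78.0, 82.0, 85.0, 88.0, 91.0]
centers = fcm_1d(pixels)
```

    Unlike hard k-means, each pixel keeps a graded membership in both clusters, which is what makes FCM attractive for tissue boundaries that are not sharp.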

  3. Methods of assessment of whole body 241Am content

    International Nuclear Information System (INIS)

    Foltanova, S.; Malatova, I.; Klisak, J.

    1998-01-01

    This paper discusses the influence of different skull phantoms on the efficiency of the measurement. A description of some methods for assessing 241Am content in the human skeleton from measurements performed over the long bones of the human body is also offered. (authors)

  4. Assessment of Environmental Problems and Methods of Waste ...

    African Journals Online (AJOL)

    This study assessed the environmental problems and methods of waste management in Ado-Ekiti, Nigeria. Waste management is the collection, transportation, processing, recycling or disposal of waste materials, usually the one produced by human activities in an effort to reduce their effect on human health or on local ...

  5. ASSESSMENT OF WORK-SPACE AND WORK-METHOD DESIGNS ...

    African Journals Online (AJOL)

    related injuries among its workforce. This research assessed work-space (WsD) and work-method designs (WmD), level of compliance with recommended standards (RSs) and effects on workers' wellbeing. Clearances for services in 55 supine ...

  6. The Annemarie Roeper Method of Qualitative Assessment: My Journey

    Science.gov (United States)

    Beneventi, Anne

    2016-01-01

    The Annemarie Roeper Method of Qualitative Assessment (QA) establishes an extremely rich set of procedures for revealing students' strengths as well as opportunities for the development of bright young people. This article explores the ways in which the QA process serves as a sterling example of a holistic, authentic system for recognizing…

  7. Myths and Misconceptions about Using Qualitative Methods in Assessment

    Science.gov (United States)

    Harper, Shaun R.; Kuh, George D.

    2007-01-01

    The value of qualitative assessment approaches has been underestimated primarily because they are often juxtaposed against long-standing quantitative traditions and the widely accepted premise that the best research produces generalizable and statistically significant findings. Institutional researchers avoid qualitative methods for at least three…

  8. An assessment method for system innovation and transition (AMSIT)

    NARCIS (Netherlands)

    Bos, Marc W.; Hofman, Erwin; Kuhlmann, Stefan

    2016-01-01

    To address comprehensive system innovations that may occur in a future transition, a suitable ex ante assessment method is required. The technological innovation system approach is useful for the retrospective study of the conditions for success or failure of innovation trajectories, and the

  9. Critical temperature: A quantitative method of assessing cold tolerance

    Science.gov (United States)

    D.H. DeHayes; M.W., Jr. Williams

    1989-01-01

    Critical temperature (Tc), defined as the highest temperature at which freezing injury to plant tissues can be detected, provides a biologically meaningful and statistically defined assessment of the relative cold tolerance of plant tissues. A method is described for calculating critical temperatures in laboratory freezing studies that use...

  10. Reliability and Validity of the Research Methods Skills Assessment

    Science.gov (United States)

    Smith, Tamarah; Smith, Samantha

    2018-01-01

    The Research Methods Skills Assessment (RMSA) was created to measure psychology majors' statistics knowledge and skills. The American Psychological Association's Guidelines for the Undergraduate Major in Psychology (APA, 2007, 2013) served as a framework for development. Results from a Rasch analysis with data from n = 330 undergraduates showed…

  11. A Comparison of Assessment Methods and Raters in Product Creativity

    Science.gov (United States)

    Lu, Chia-Chen; Luh, Ding-Bang

    2012-01-01

    Although previous studies have attempted to use different experiences of raters to rate product creativity by adopting the Consensual Assessment Technique (CAT) approach, the validity of replacing CAT with another measurement tool has not been adequately tested. This study aimed to compare raters with different levels of experience (expert vs.…

  12. Assessment of reliability of Greulich and Pyle (gp) method for ...

    African Journals Online (AJOL)

    Background: Greulich and Pyle standards are the most widely used age estimation standards all over the world. The applicability of the Greulich and Pyle standards to populations which differ from their reference population is often questioned. This study aimed to assess the reliability of Greulich and Pyle (GP) method for ...

  13. The AHP method used in assessment of foundry enterprise position

    Directory of Open Access Journals (Sweden)

    J. Szymszal

    2008-10-01

    Full Text Available A complex assessment of the activity of a selected foundry enterprise based on the modern AHP (Analytic Hierarchy Process) method has been presented. Having defined the areas of analysis, which include marketing (products, distribution channels, sales organisation and client concentration), personnel (skills, managerial abilities, organisation climate, effectiveness of incentives, personnel fluctuations), production (availability of raw materials, technical level of production, effective use of production capacities), and organisation and management (foundry structure, organisation culture, management performance), the analysis was made using the weighted sum of evaluations. The second step consisted in a comparative assessment of the Foundry's position using Saaty's scale modified by Weber and the AHP method, with examination of a hierarchy structure involving the main (parent) problem and its direct evolution into sub-problems. The assessment of the Foundry's position made by AHP enables introducing changes and/or innovations which are expected to improve overall production effectiveness.
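
    The AHP machinery referred to above reduces, at its core, to deriving priority weights from a pairwise comparison matrix on Saaty's 1-9 scale and checking their consistency. The sketch below uses the common row geometric-mean approximation of the principal eigenvector; the 3x3 matrix comparing three of the abstract's areas (marketing, personnel, production) is a made-up illustration.

```python
import math

# Hypothetical pairwise comparisons on Saaty's 1-9 scale:
# marketing vs personnel vs production
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

def ahp_weights(A):
    """Priority weights via the row geometric-mean approximation."""
    gm = [math.prod(row) ** (1 / len(row)) for row in A]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(A, w, random_index=0.58):  # RI for n = 3
    """Saaty's consistency ratio CR = CI / RI; CR < 0.1 is acceptable."""
    n = len(A)
    lam = sum(sum(A[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / random_index

w = ahp_weights(A)
cr = consistency_ratio(A, w)
```

    In a full AHP study the same calculation is repeated at every level of the hierarchy and the weights are multiplied down the tree, which is how the "weighted sum of evaluations" in the abstract is obtained.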

  14. Condition Assessment for Wastewater Pipes: Method for Assessing Cracking and Surface Damage of Concrete Pipes

    OpenAIRE

    Hauge, Petter

    2013-01-01

    The objective of this Master's thesis has been to provide an improved method for condition assessment that gives a better correlation between condition class and the actual condition of concrete pipes with cracking and/or surface damage. Improving the characterization of cracking (SR) and surface (KO) damage was an additional sub-goal. Based on the findings described in my thesis and my specialization project (Hauge 2012), I recommend that the Norwegian condition assessment method based...

  15. Comparison study on qualitative and quantitative risk assessment methods for urban natural gas pipeline network.

    Science.gov (United States)

    Han, Z Y; Weng, W G

    2011-05-15

    In this paper, qualitative and quantitative risk assessment methods for urban natural gas pipeline networks are proposed. The qualitative method comprises an index system that includes a causation index, an inherent risk index, a consequence index, and their corresponding weights. The quantitative method consists of a probability assessment, a consequence analysis and a risk evaluation. The outcome of the qualitative method is a qualitative risk value; for the quantitative method, the outcomes are individual risk and social risk. In comparison with previous research, the qualitative method proposed in this paper is particularly suitable for urban natural gas pipeline networks, and the quantitative method takes different accident consequences into consideration, such as toxic gas diffusion, jet flame, fireball combustion and UVCE. Two sample urban natural gas pipeline networks are used to demonstrate the two methods. Both methods can be applied in practice; the choice between them depends on the basic data available for the gas pipelines and the precision requirements of the risk assessment. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
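
    The shape of the two outcomes described in this abstract can be sketched minimally: a weighted qualitative index and a bare-bones individual-risk product. All scores, weights, frequencies and conditional probabilities below are invented illustrations, not the paper's index system or consequence models.

```python
def qualitative_risk(causation, inherent, consequence,
                     weights=(0.3, 0.3, 0.4)):
    """Weighted sum of sub-indices, each scored on a 0-10 scale."""
    return (causation * weights[0] + inherent * weights[1]
            + consequence * weights[2])

def individual_risk(failure_freq_per_year, p_ignition, p_fatality):
    """Annual fatality risk at a fixed location: failure frequency
    times conditional probabilities of ignition and fatality."""
    return failure_freq_per_year * p_ignition * p_fatality

q = qualitative_risk(6.0, 4.0, 8.0)    # -> 6.2 on the 0-10 scale
ir = individual_risk(1e-4, 0.1, 0.5)   # -> 5e-06 per year
```

    A real quantitative assessment would sum such products over all failure scenarios and weather conditions, and aggregate them over the exposed population to obtain social risk (an F-N curve).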

  16. Applying Multi-Criteria Analysis Methods for Fire Risk Assessment

    Directory of Open Access Journals (Sweden)

    Pushkina Julia

    2015-11-01

    Full Text Available The aim of this paper is to demonstrate the application of multi-criteria analysis methods for optimising the fire risk identification and assessment process. The object of this research is fire risk and its assessment. The subject of the research is the application of the analytic hierarchy process for modelling and assessing the influence of various fire risk factors. The results of the research can be used by insurance companies to perform a detailed assessment of the fire risks at a site and to calculate a risk surcharge on the insurance premium; by state supervisory institutions to determine whether the condition of an object complies with regulatory requirements; and by real estate owners and investors to take actions that reduce fire risks and minimise possible losses.

  17. Wavelet and adaptive methods for time dependent problems and applications in aerosol dynamics

    Science.gov (United States)

    Guo, Qiang

    Time-dependent partial differential equations (PDEs) are widely used as mathematical models of environmental problems. Aerosols are now clearly identified as an important factor in many environmental aspects of climate and radiative forcing processes, as well as in the health effects of air quality. The mathematical models for aerosol dynamics with respect to size distribution are nonlinear partial differential and integral equations, which describe processes of condensation, coagulation and deposition. Simulating the general aerosol dynamic equations over time, particle size and space exhibits serious difficulties because the size dimension ranges from a few nanometers to several micrometers while the spatial dimension is usually described in kilometers. Therefore, it is an important and challenging task to develop efficient techniques for solving time-dependent dynamic equations. In this thesis, we develop and analyze efficient wavelet and adaptive methods for the time-dependent dynamic equations on particle size and further apply them to spatial aerosol dynamic systems. A wavelet Galerkin method is proposed to solve the aerosol dynamic equations on time and particle size, since the aerosol distribution changes strongly along the size direction and the wavelet technique can resolve it very efficiently. Daubechies wavelets are considered because they possess useful properties such as orthogonality, compact support, and exact representation of polynomials up to a certain degree. Another difficulty in the solution of the aerosol dynamic equations results from the hyperbolic form of the condensation growth term. We propose a new characteristic-based fully adaptive multiresolution numerical scheme for solving the aerosol dynamic equation, which combines the attractive advantages of the adaptive multiresolution technique and the method of characteristics. On the aspect of theoretical analysis, the global existence and uniqueness of
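
    The orthogonality and exact-reconstruction properties claimed for Daubechies wavelets can be demonstrated with the simplest member of that family, the Haar wavelet. The sketch below performs one level of a Haar decomposition and its inverse on a short synthetic signal; it is an illustration of the wavelet machinery only, not the thesis' Galerkin scheme.

```python
import math

def haar_step(signal):
    """One level of the Haar transform: pairwise averages (approximation)
    and pairwise differences (detail), orthonormally scaled."""
    s = 1 / math.sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact reconstruction from approximation and detail coefficients."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out.extend([s * (a + d), s * (a - d)])
    return out

signal = [4.0, 2.0, 5.0, 5.0]
approx, detail = haar_step(signal)
reconstructed = haar_inverse(approx, detail)
```

    In an adaptive multiresolution scheme, detail coefficients below a threshold are dropped, concentrating the computational effort where the size distribution varies strongly, which is exactly the advantage the thesis exploits.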

  18. Standard guide for three methods of assessing buried steel tanks

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1998-01-01

    1.1 This guide covers procedures to be implemented prior to the application of cathodic protection for evaluating the suitability of a tank for upgrading by cathodic protection alone. 1.2 Three procedures are described and identified as Methods A, B, and C. 1.2.1 Method A—Noninvasive with primary emphasis on statistical and electrochemical analysis of external site environment corrosion data. 1.2.2 Method B—Invasive ultrasonic thickness testing with external corrosion evaluation. 1.2.3 Method C—Invasive permanently recorded visual inspection and evaluation including external corrosion assessment. 1.3 This guide presents the methodology and the procedures utilizing site and tank specific data for determining a tank's condition and the suitability for such tanks to be upgraded with cathodic protection. 1.4 The tank's condition shall be assessed using Method A, B, or C. Prior to assessing the tank, a preliminary site survey shall be performed pursuant to Section 8 and the tank shall be tightness test...

  19. Total System Performance Assessment-License Application Methods and Approach

    Energy Technology Data Exchange (ETDEWEB)

    J. McNeish

    2002-09-13

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issue (KTI) agreements, the ''Yucca Mountain Review Plan'' (CNWRA 2002 [158449]), and 10 CFR Part 63. This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are utilized in this document.

  20. Total System Performance Assessment - License Application Methods and Approach

    International Nuclear Information System (INIS)

    McNeish, J.

    2003-01-01

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the ''Yucca Mountain Review Plan'' (YMRP), ''Final Report'' (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document

  1. Analytical resource assessment method for continuous (unconventional) oil and gas accumulations - The "ACCESS" Method

    Science.gov (United States)

    Crovelli, Robert A.; revised by Charpentier, Ronald R.

    2012-01-01

    The U.S. Geological Survey (USGS) periodically assesses petroleum resources of areas within the United States and the world. The purpose of this report is to explain the development of an analytic probabilistic method and spreadsheet software system called Analytic Cell-Based Continuous Energy Spreadsheet System (ACCESS). The ACCESS method is based upon mathematical equations derived from probability theory. The ACCESS spreadsheet can be used to calculate estimates of the undeveloped oil, gas, and NGL (natural gas liquids) resources in a continuous-type assessment unit. An assessment unit is a mappable volume of rock in a total petroleum system. In this report, the geologic assessment model is defined first, the analytic probabilistic method is described second, and the spreadsheet ACCESS is described third. In this revised version of Open-File Report 00-044, the text has been updated to reflect modifications that were made to the ACCESS program. Two versions of the program are added as appendixes.

  2. Development and comparison of Bayesian modularization method in uncertainty assessment of hydrological models

    Science.gov (United States)

    Li, L.; Xu, C.-Y.; Engeland, K.

    2012-04-01

    With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used methods for uncertainty assessment of hydrological models; it incorporates different sources of information into a single analysis through Bayes' theorem. However, none of these applications treats the uncertainty in the extreme flows of hydrological model simulations well. This study proposes a Bayesian modularization approach for uncertainty assessment of conceptual hydrological models that considers the extreme flows. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used with the traditional Bayesian approach: the AR(1) plus Normal, time-period-independent model (Model 1); the AR(1) plus Normal, time-period-dependent model (Model 2); and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of entire flows and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian methods. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
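
    The Metropolis-Hastings algorithm named in this abstract is compact enough to sketch. The random-walk sampler below targets a hypothetical one-dimensional Gaussian log-posterior standing in for a real hydrological likelihood; the target, step size and sample counts are invented for illustration.

```python
import math
import random

def log_post(theta):
    """Hypothetical unnormalised log-posterior: N(2, 0.5^2)."""
    return -0.5 * ((theta - 2.0) / 0.5) ** 2

def metropolis_hastings(log_target, n_samples, step=0.5, start=0.0, seed=42):
    """Random-walk Metropolis-Hastings: propose a Gaussian step and
    accept with probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    theta, lp = start, log_target(start)
    samples = []
    for _ in range(n_samples):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:   # accept
            theta, lp = prop, lp_prop
        samples.append(theta)                       # else keep current state
    return samples

samples = metropolis_hastings(log_post, 5000)
posterior = samples[1000:]   # discard burn-in
```

    In a hydrological calibration the scalar `theta` becomes the parameter vector of WASMOD and `log_post` combines the prior with one of the AR(1)-based likelihoods; the accept/reject core stays the same.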

  3. Methods of Blood Oxygen Level-Dependent Magnetic Resonance Imaging Analysis for Evaluating Renal Oxygenation

    Directory of Open Access Journals (Sweden)

    Fen Chen

    2018-03-01

    Full Text Available Blood oxygen level-dependent magnetic resonance imaging (BOLD MRI) has recently been utilized as a noninvasive tool for evaluating renal oxygenation. Several methods have been proposed for analyzing BOLD images. Regional ROI selection is the earliest and most widely used method for BOLD analysis. In the last 20 years, many investigators have used this method to evaluate cortical and medullary oxygenation in patients with ischemic nephropathy, hypertensive nephropathy, diabetic nephropathy, chronic kidney disease (CKD), acute kidney injury and renal allograft rejection. However, clinical trials of BOLD MRI using regional ROI selection revealed that it was difficult to distinguish the renal cortico-medullary zones with this method and that it was susceptible to observer variability. To overcome these deficiencies, several new methods have been proposed for analyzing BOLD images, including the compartmental approach, the fractional hypoxia method, the concentric objects (CO) method and the twelve-layer concentric objects (TLCO) method. The compartmental approach provides an algorithm to judge whether a pixel belongs to the cortex or the medulla. Fractional kidney hypoxia, measured using BOLD MRI, was negatively correlated with renal blood flow, tissue perfusion and glomerular filtration rate (GFR) in patients with atherosclerotic renal artery stenosis. The CO method divides the renal parenchyma into six or twelve layers in each coronal slice of the BOLD images and provides an R2* radial profile curve; the slope of the R2* curve was positively associated with eGFR in CKD patients. Each method invariably has advantages and disadvantages, and there is no consensus method so far. Undoubtedly, analytic approaches for BOLD MRI with better reproducibility would assist clinicians in monitoring the degree of kidney hypoxia and thus facilitate timely reversal of tissue hypoxia.

  4. Gauge-Invariant Formulation of Time-Dependent Configuration Interaction Singles Method

    Directory of Open Access Journals (Sweden)

    Takeshi Sato

    2018-03-01

    Full Text Available We propose a gauge-invariant formulation of the channel orbital-based time-dependent configuration interaction singles (TDCIS) method [Phys. Rev. A 74, 043420 (2006)], one of the powerful ab initio methods to investigate electron dynamics in atoms and molecules subject to an external laser field. In the present formulation, we derive the equations of motion (EOMs) in the velocity gauge using gauge-transformed time-dependent, not fixed, orbitals that are equivalent to the conventional EOMs in the length gauge using fixed orbitals. The new velocity-gauge EOMs avoid the use of the length-gauge dipole operator, which diverges at large distance, and allow us to exploit computational advantages of the velocity-gauge treatment over the length-gauge one, e.g., a faster convergence in simulations with intense and long-wavelength lasers, and the feasibility of exterior complex scaling as an absorbing boundary. The reformulated TDCIS method is applied to an exactly solvable model of a one-dimensional helium atom in an intense laser field to numerically demonstrate the gauge invariance. We also discuss the consistent method for evaluating the time derivative of an observable, which is relevant, e.g., in simulating high-harmonic generation.

  5. Assessment and comparison of methods for solar ultraviolet radiation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Leszczynski, K

    1995-06-01

    In this study, different methods of measuring solar ultraviolet radiation are compared: spectroradiometric, erythemally weighted broadband and multi-channel measurements. The comparison is based on a literature review and on assessments of the optical characteristics of the Optronic 742 spectroradiometer of the Finnish Centre for Radiation and Nuclear Safety (STUK) and of the erythemally weighted Robertson-Berger type broadband radiometers Solar Light models 500 and 501 of the Finnish Meteorological Institute and STUK. An introduction is given to the sources of error in solar UV measurements and to methods for the radiometric characterization of UV radiometers, together with methods for error reduction. Reviews of experience from world-wide UV monitoring efforts and instrumentation, and of the results of international UV radiometer intercomparisons, are also presented. (62 refs.).

  6. Assessment and comparison of methods for solar ultraviolet radiation measurements

    International Nuclear Information System (INIS)

    Leszczynski, K.

    1995-06-01

    In this study, different methods of measuring solar ultraviolet radiation are compared: spectroradiometric, erythemally weighted broadband and multi-channel measurements. The comparison is based on a literature review and on assessments of the optical characteristics of the Optronic 742 spectroradiometer of the Finnish Centre for Radiation and Nuclear Safety (STUK) and of the erythemally weighted Robertson-Berger type broadband radiometers Solar Light models 500 and 501 of the Finnish Meteorological Institute and STUK. An introduction is given to the sources of error in solar UV measurements and to methods for the radiometric characterization of UV radiometers, together with methods for error reduction. Reviews of experience from world-wide UV monitoring efforts and instrumentation, and of the results of international UV radiometer intercomparisons, are also presented. (62 refs.)

  7. New method of scoliosis assessment: preliminary results using computerized photogrammetry.

    Science.gov (United States)

    Aroeira, Rozilene Maria Cota; Leal, Jefferson Soares; de Melo Pertence, Antônio Eustáquio

    2011-09-01

    A new method for nonradiographic evaluation of scoliosis was independently compared with the Cobb radiographic method for the quantification of scoliotic curvature. The aims were to develop a protocol for computerized photogrammetry as a nonradiographic method of quantifying scoliosis, and to relate this proposed method mathematically to the Cobb radiographic method. Repeated exposure of children to radiation can be harmful to their health; nevertheless, no nonradiographic method proposed to date has gained acceptance as a routine evaluation method, mainly owing to poor correspondence with the Cobb radiographic method. Patients undergoing standing posteroanterior full-length spine radiographs who were willing to participate in this study underwent dorsal digital photography in the orthostatic position with surface markers over the spinous processes of vertebrae C7 to L5. The radiographic and photographic images were sent separately for independent analysis to two examiners, each trained in quantification of scoliosis for the type of image received. The scoliosis curvature angles obtained through computerized photogrammetry (the new method) were compared to those obtained through the Cobb radiographic method. Sixteen individuals were evaluated (14 female and 2 male). All presented idiopathic scoliosis and were 21.4 ± 6.1 years of age, 52.9 ± 5.8 kg in weight and 1.63 ± 0.05 m in height, with a body mass index of 19.8 ± 0.2. There was no statistically significant difference between the scoliosis angle measurements obtained by the two methods, and a mathematical relationship between them was formulated. These preliminary results demonstrate equivalence between the two methods; more studies are needed to firmly establish the potential of this new method as an adjunct tool in the routine follow-up of scoliosis treatment.
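A Cobb-style angle can be illustrated from digitized marker coordinates: take the tilt of each inter-marker segment relative to the vertical and report the spread between the two most-tilted segments. This is a toy computation under assumed coordinates, not the paper's photogrammetry protocol or its fitted mathematical relationship to the Cobb angle.

```python
import math

def cobb_style_angle(markers):
    """Angle (degrees) between the two most-tilted inter-marker segments
    of an ordered chain of back-surface markers (e.g. C7 down to L5).
    Illustrative only; not the published protocol."""
    tilts = []
    for (x0, y0), (x1, y1) in zip(markers, markers[1:]):
        # tilt of the segment relative to the vertical (y) axis
        tilts.append(math.degrees(math.atan2(x1 - x0, y1 - y0)))
    return max(tilts) - min(tilts)

straight = [(0, i) for i in range(5)]   # no lateral deviation -> 0 degrees
curved = [(0, 0), (1, 1), (0, 2)]       # 45 deg right, then 45 deg left
```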

  8. Measuring global oil trade dependencies: An application of the point-wise mutual information method

    International Nuclear Information System (INIS)

    Kharrazi, Ali; Fath, Brian D.

    2016-01-01

    Oil trade is one of the most vital networks in the global economy. In this paper, we analyze the 1998–2012 oil trade networks using the point-wise mutual information (PMI) method and determine pairwise trade preferences and dependencies. Using examples of the USA's trade partners, this research demonstrates the usefulness of the PMI method as an additional methodological tool for evaluating the outcomes of countries' decisions to engage preferred trading partners. A positive PMI value indicates trade preference, where trade is larger than would be expected; for example, in 2012 the USA imported 2,548.7 kbpd of oil from Canada against an expected 358.5 kbpd. Conversely, a negative PMI value indicates trade dis-preference, where the amount of trade is smaller than would be expected; for example, the 15-year average of the annual PMI between Saudi Arabia and the USA is −0.130, and between Russia and the USA −1.596. We suggest that discrepancies between actual trade and the neutral model can be related to three primary factors: position, price, and politics. The PMI can quantify the political success or failure of trade preferences and can more accurately account for temporal variation of interdependencies. - Highlights: • We analyzed global oil trade networks using the point-wise mutual information method. • We identified position, price, and politics as drivers of oil trade preference. • The PMI method is useful in research on complex trade networks and dependency theory. • A time-series analysis of PMI can track dependencies and evaluate policy decisions.
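The PMI of a trade pair compares the observed flow with the flow expected if exporters and importers matched in proportion to their overall trade shares. A minimal sketch follows; the flow volumes are toy numbers echoing the sign pattern in the abstract, and the paper's exact normalisation and units may differ.

```python
import math

def pmi(flows, exporter, importer):
    """Point-wise mutual information of one trade pair: log of the observed
    flow over the flow expected from overall export/import shares.
    Sketch of the PMI idea; the paper's exact normalisation may differ."""
    total = sum(flows.values())
    out_share = sum(v for (e, _), v in flows.items() if e == exporter) / total
    in_share = sum(v for (_, i), v in flows.items() if i == importer) / total
    expected = total * out_share * in_share       # neutral-model flow
    return math.log(flows[(exporter, importer)] / expected)

# Toy volumes (kbpd): CAN prefers USA, SAU dis-prefers USA
flows = {("CAN", "USA"): 90.0, ("CAN", "EU"): 10.0,
         ("SAU", "USA"): 10.0, ("SAU", "EU"): 90.0}
```

A positive value marks a preferred partner (actual above expected), a negative value a dis-preferred one, matching the Canada and Saudi Arabia examples in the abstract.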

  9. Fragility assessment method of Concrete Wall Subjected to Impact Loading

    International Nuclear Information System (INIS)

    Hahm, Daegi; Shin, Sang Shup; Choi, In-Kil

    2014-01-01

    Previous studies have aimed to verify and ensure the safety of targeted walls and structures mainly from a deterministic viewpoint. Recently, however, the regulation and assessment of the safety of nuclear power plants (NPPs) against aircraft impact have been strongly encouraged to adopt a probabilistic approach, i.e., probabilistic risk assessment of an aircraft impact. In Korea, research to develop aircraft impact risk quantification technology was initiated in 2012 by the Korea Atomic Energy Research Institute (KAERI). In this paper, as one example of the probabilistic safety assessment approach, a method to estimate the failure probability and fragility of a concrete wall subjected to impact loading caused by missiles or aircraft engine parts is introduced. The method and the corresponding results will be used in the overall technical roadmap and procedure for assessing aircraft impact risk (Fig. 1). The detailed information on the target concrete wall in the NPP and the example aircraft engine model is considered safeguards information (SGI) and is not contained in this paper
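As a sketch of the fragility-estimation step, the snippet below builds an empirical fragility curve by Monte Carlo: sample a random perforation capacity for the wall and count how often a given impact demand exceeds it. All parameters are invented placeholders, since the real wall and engine data are safeguards information.

```python
import random

def fragility_curve(velocities, n=20_000, seed=7):
    """Empirical fragility: P(perforation) versus impact velocity, assuming
    a lognormally distributed perforation capacity. Parameters are invented
    for illustration; not the SGI wall data of the paper."""
    rng = random.Random(seed)
    mu, sigma = 5.2, 0.12   # ln of capacity velocity (m/s); median ~181 m/s
    curve = {}
    for v in velocities:
        hits = sum(rng.lognormvariate(mu, sigma) < v for _ in range(n))
        curve[v] = hits / n  # fraction of sampled capacities exceeded
    return curve

curve = fragility_curve([120, 160, 200, 240])
```

A lognormal fit to such points is the usual way a fragility curve is reported in probabilistic risk assessment.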

  10. Collinear and transverse momentum dependent parton densities obtained with a parton branching method

    Energy Technology Data Exchange (ETDEWEB)

    Lelek, Aleksandra

    2017-10-15

    We present a solution of the DGLAP evolution equations, written in terms of Sudakov form factors describing the branching and no-branching probabilities, using a parton branching Monte Carlo method. We demonstrate numerically that this method reproduces the semi-analytical solutions. We show how this method can be used to determine transverse momentum dependent (TMD) parton distribution functions in addition to the usual integrated parton distribution functions, and we discuss the numerical effects of the soft-gluon resolution scale parameter on the resulting distributions. We show that a very good fit of the integrated TMDs to high-precision HERA data can be obtained over a large range in x and Q².

  11. Collinear and transverse momentum dependent parton densities obtained with a parton branching method

    International Nuclear Information System (INIS)

    Lelek, Aleksandra

    2017-10-01

    We present a solution of the DGLAP evolution equations, written in terms of Sudakov form factors describing the branching and no-branching probabilities, using a parton branching Monte Carlo method. We demonstrate numerically that this method reproduces the semi-analytical solutions. We show how this method can be used to determine transverse momentum dependent (TMD) parton distribution functions in addition to the usual integrated parton distribution functions, and we discuss the numerical effects of the soft-gluon resolution scale parameter on the resulting distributions. We show that a very good fit of the integrated TMDs to high-precision HERA data can be obtained over a large range in x and Q².
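The branching step can be caricatured in a few lines: with a constant kernel strength λ, the Sudakov (no-branching) probability between two scales is Δ(q′, q) = (q′/q)^λ, which inverts in closed form to sample the next branching scale. This toy is a cartoon of the sampling idea only; it uses none of the DGLAP splitting functions or TMD bookkeeping of the paper.

```python
import random

def evolve(x0, q0, q_min, lam=0.8, seed=1):
    """Evolve a parton from scale q0 down to q_min, sampling successive
    branching scales from a constant-kernel Sudakov factor (toy model)."""
    rng = random.Random(seed)
    x, q = x0, q0
    while True:
        # Invert Delta(q', q) = R with R uniform in (0, 1): q' = q * R**(1/lam)
        q *= rng.random() ** (1.0 / lam)
        if q <= q_min:
            return x                  # no further resolvable emission
        x *= rng.uniform(0.5, 1.0)    # toy momentum-fraction loss per emission

x_final = evolve(1.0, 100.0, 1.0)
```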

  12. Research on probabilistic assessment method based on the corroded pipeline assessment criteria

    International Nuclear Information System (INIS)

    Zhang Guangli; Luo, Jinheng; Zhao Xinwei; Zhang Hua; Zhang Liang; Zhang Yi

    2012-01-01

    Pipeline integrity assessments are usually performed using conventional deterministic approaches, even though many of the parameters in a pipeline integrity assessment are uncertain. In this paper, a probabilistic assessment method is provided for gas pipelines with corrosion defects, based on current corroded-pipe evaluation criteria; the failure probability of corroded pipelines due to uncertainties in loading, material properties and measurement accuracy is estimated using a Monte Carlo technique. Furthermore, a sensitivity analysis approach is introduced to rank the influence of the various random variables on pipeline safety, and a method to determine the critical defect size based on an acceptable failure probability is proposed. Highlights: ► The Folias factor in pipeline corrosion assessment methods was analyzed. ► A probabilistic method was applied to the corrosion assessment criteria. ► The influence of assessment variables on pipeline reliability was ranked. ► An acceptable failure probability was used to determine the critical defect size.
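A Monte Carlo failure-probability estimate of this kind can be sketched with a simplified B31G-style burst criterion, which also exhibits the Folias factor mentioned in the highlights. Every distribution and parameter below is invented for illustration; the paper's actual criteria and data are not reproduced.

```python
import math, random

def failure_probability(n=20_000, seed=42):
    """Monte Carlo failure probability of a corroded gas pipeline under a
    simplified B31G-style burst criterion (illustrative parameters only)."""
    rng = random.Random(seed)
    D, t_nom = 0.610, 0.0095                        # diameter, nominal wall (m)
    fails = 0
    for _ in range(n):
        smts = rng.gauss(455e6, 20e6)               # tensile strength (Pa)
        t = rng.gauss(t_nom, 0.0003)                # wall thickness (m)
        d = rng.gauss(0.35 * t_nom, 0.05 * t_nom)   # measured defect depth (m)
        L = rng.gauss(0.10, 0.01)                   # defect length (m)
        M = math.sqrt(1.0 + 0.8 * L * L / (D * t))  # Folias bulging factor
        p_burst = (2 * t * smts / D) * (1 - d / t) / (1 - d / (M * t))
        p_op = rng.gauss(10e6, 0.5e6)               # operating pressure (Pa)
        fails += p_op > p_burst                     # demand exceeds capacity
    return fails / n

pf = failure_probability()
```

Ranking variables by their effect on `pf` (the sensitivity step) would follow by perturbing one input distribution at a time.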

  13. Evaluation of two streamlined life cycle assessment methods

    International Nuclear Information System (INIS)

    Hochschomer, Elisabeth; Finnveden, Goeran; Johansson, Jessica

    2002-02-01

    Two different methods for streamlined life cycle assessment (LCA) are described: the MECO method and the SLCA method. Both are tested on an existing case study of cars fuelled with petrol or ethanol, and of electric cars with electricity produced from hydro power or coal. The report also contains background information on LCA and streamlined LCA, and a description of the case study used. The evaluation of the MECO and SLCA methods is based on a comparison of the results from the case study as well as on practical aspects. One conclusion is that the SLCA method has several limitations: the whole life cycle is not covered, it requires a considerable amount of information, and there is room for arbitrariness; it is not very flexible and is difficult to develop further. We therefore do not recommend the SLCA method. The MECO method, in comparison, shows several attractive features, and it is interesting to note that it produces information complementary to that of a more traditional quantitative LCA. We suggest that the MECO method needs some further development and adjustment to Swedish conditions

  14. Influence of Flow Sequencing Attributed to Climate Change and Climate Variability on the Assessment of Water-dependent Ecosystem Outcomes

    Science.gov (United States)

    Wang, J.; Nathan, R.; Horne, A.

    2017-12-01

    Traditional approaches to characterizing water-dependent ecosystem outcomes in response to flow have been based on time-averaged hydrological indicators; however, there is increasing recognition of the need to characterize ecological processes that are highly dependent on the sequencing of flow conditions (i.e. floods and droughts). This study considers how flow regimes are represented in the assessment of ecological outcomes, and in particular the need to account for the sequencing and variability of flow. We conducted two case studies - one in the largely unregulated Ovens River catchment and one in the highly regulated Murray River catchment (both located in south-eastern Australia) - to explore the importance of flow sequencing to the condition of a typical long-lived ecological asset in Australia, the River Red Gum forests. In the first, the Ovens River case study, the implications of representing climate change using different downscaling methods (annual scaling, monthly scaling, quantile mapping, and a weather generator) for the sequencing of flows and the resulting ecological outcomes were considered. In the second, the Murray River case study, sequencing within a historic drought period was considered by systematically making modest annual adjustments to the hydrological records. In both cases, the condition of River Red Gum forests was assessed using an ecological model that incorporates transitions between ecological conditions in response to sequences of required flow components. The results of both studies show the importance of considering how hydrological alterations are represented when assessing ecological outcomes. The Ovens case study showed significant variation in the predicted ecological outcomes when different downscaling techniques were applied. Similarly, the Murray case study showed that the drought as it historically occurred provided one of the best possible outcomes for River Red Gum

  15. Development and comparison in uncertainty assessment based Bayesian modularization method in hydrological modeling

    Science.gov (United States)

    Li, Lu; Xu, Chong-Yu; Engeland, Kolbjørn

    2013-04-01

    With respect to model calibration, parameter estimation and the analysis of uncertainty sources, various regression and probabilistic approaches are used in hydrological modeling. A family of Bayesian methods, which incorporate different sources of information into a single analysis through Bayes' theorem, is widely used for uncertainty assessment. However, none of these approaches treats the impact of high flows in hydrological modeling well. This study proposes a Bayesian modularization uncertainty assessment approach in which the highest streamflow observations are treated as suspect information that should not influence the inference of the main bulk of the model parameters. The study includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization method and by standard Bayesian methods using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions were used in combination with the standard Bayesian method: an AR(1) plus Normal model independent of time (Model 1), an AR(1) plus Normal model dependent on time (Model 2) and an AR(1) plus multi-normal model (Model 3). The results reveal that the Bayesian modularization method provides the most accurate streamflow estimates as measured by the Nash-Sutcliffe efficiency, and the best uncertainty estimates for low, medium and entire flows, compared to the standard Bayesian methods. The study thus provides a new approach for reducing the impact of high flows on the discharge uncertainty assessment of hydrological models via Bayesian methods.
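The standard Bayesian machinery referenced here rests on Metropolis-Hastings sampling of the parameter posterior. A generic random-walk MH sketch on a toy one-parameter rainfall-runoff model follows; all data, priors and names are invented for illustration (this is not WASMOD or its likelihood functions).

```python
import math, random

def metropolis(log_post, x0, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step), accept
    with probability min(1, post(x') / post(x))."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_iter):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp          # accept the proposal
        samples.append(x)
    return samples

# Toy posterior for a runoff coefficient a: y ~ N(a*x, 0.1), prior a ~ N(0.5, 1)
obs = [(1.0, 0.9), (2.0, 2.1), (3.0, 2.9)]   # (rainfall, runoff) pairs
def log_post(a):
    return (-sum((y - a * x) ** 2 for x, y in obs) / (2 * 0.1 ** 2)
            - (a - 0.5) ** 2 / 2.0)

chain = metropolis(log_post, x0=0.5)
a_hat = sum(chain[1000:]) / len(chain[1000:])   # posterior mean after burn-in
```

The spread of `chain` after burn-in is what yields the parameter and discharge uncertainty bands compared in the abstract.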

  16. Assessment of Methods for Estimating Risk to Birds from ...

    Science.gov (United States)

    The U.S. EPA Ecological Risk Assessment Support Center (ERASC) announced the release of the final report entitled, Assessment of Methods for Estimating Risk to Birds from Ingestion of Contaminated Grit Particles. This report evaluates approaches for estimating the probability of ingestion by birds of contaminated particles such as pesticide granules or lead particles (i.e. shot or bullet fragments). In addition, it presents an approach for using this information to estimate the risk of mortality to birds from ingestion of lead particles. Response to ERASC Request #16

  17. Social assessment methods recommendation report: Draft: Revision 1

    International Nuclear Information System (INIS)

    1986-05-01

    This report recommends an approach to the Salt Repository Project Office (SRPO) for assessing the social impacts of a high-level nuclear waste repository. The report establishes several criteria for selecting an approach, then describes and evaluates existing social assessment approaches against those criteria. Based upon these evaluations, a recommendation is made, with proposed modifications that strengthen the recommended approach by incorporating elements of other methods. Suggestions for the development of community surveys and local leader interviews are also made. 64 refs., 4 figs., 14 tabs

  18. Validating the JobFit system functional assessment method

    Energy Technology Data Exchange (ETDEWEB)

    Jenny Legge; Robin Burgess-Limerick

    2007-05-15

    Workplace injuries are costing the Australian coal mining industry and its communities $410 million a year. This ACARP study addresses the problem by developing a safe, reliable and valid pre-employment functional assessment tool. All JobFit System Pre-Employment Functional Assessments (PEFAs) consist of a musculoskeletal screen, a balance test, an aerobic fitness test, and job-specific postural tolerances and material handling tasks. The results of each component are compared to the applicant's job demands, and an overall PEFA score between 1 and 4 is given, 1 being the best score. The reliability and validity studies were conducted concurrently. The reliability study examined the test-retest, intra-tester and inter-tester reliability of the JobFit System Functional Assessment Method. Overall, good to excellent reliability was found, sufficient for comparison with injury data in determining the validity of the assessment; the overall assessment score and the material handling tasks had the greatest reliability. The validity study compared the assessment results of 336 records from a Queensland underground and open cut coal mine with their injury records. A predictive relationship was found between PEFA score and the risk of a back/trunk/shoulder injury from manual handling, and an association was found between a PEFA score of 1 and increased length of employment. Lower aerobic fitness test results had an inverse relationship with injury rates. The study found that underground workers, regardless of PEFA score, were more likely to have an injury than workers in other departments. No relationship was found between age and risk of injury. These results confirm the validity of the JobFit System Functional Assessment Method.

  19. Life cycle integrated thermoeconomic assessment method for energy conversion systems

    International Nuclear Information System (INIS)

    Kanbur, Baris Burak; Xiang, Liming; Dubey, Swapnil; Choo, Fook Hoong; Duan, Fei

    2017-01-01

    Highlights: • A new LCA-integrated thermoeconomic approach is presented. • The new unit fuel cost is found to be 4.8 times higher than with the classic method. • The newly defined parameter increased the sustainability index by 67.1%. • Case studies are performed for countries with different CO2 prices. - Abstract: Life cycle assessment (LCA) based thermoeconomic modelling has been applied to the evaluation of energy conversion systems because it provides more comprehensive and applicable assessment criteria. This study proposes an improved thermoeconomic method, named life cycle integrated thermoeconomic assessment (LCiTA), which combines LCA-based enviroeconomic parameters for the production of the system components and fuel with the conventional thermoeconomic method for energy conversion systems. A micro-cogeneration system is investigated and analyzed with the LCiTA method; the comparative studies show that the unit cost of fuel obtained with the LCiTA method is 3.8 times higher than with the conventional thermoeconomic model. It is also found that the enviroeconomic parameters during the operation of the system components do not have significant impacts on the system streams, since the exergetic parameters are dominant in the thermoeconomic calculations. Moreover, the improved sustainability index is found to be roughly 67.2% higher than the previously defined sustainability index, suggesting that the enviroeconomic and thermoeconomic parameters decrease the impact of the exergy destruction in the sustainability index definition. Different assessment strategies are presented to find feasible operating conditions for the micro-cogeneration system. Furthermore, a case study for Singapore is conducted to examine the impact of forecasted carbon dioxide prices on the thermoeconomic performance of the micro-cogeneration system.

  20. Development of environmental risk assessment framework using index method

    International Nuclear Information System (INIS)

    Ali, M.W.; Wu, Y.

    2000-01-01

    This paper presents a newly developed framework for assessing the risk from events classified as major accidents to the environment by the United Kingdom Department of the Environment (DoE). The application of the framework, which uses the newly developed index method, is demonstrated by means of a case study. The framework makes use of the Environmental Hazard Index (EHI) method of United Kingdom AEA Technology for releases to rivers, but improves it by taking account of toxic dose rather than concentration; taking account of long-term effects, including persistence and bioaccumulation, not just short-term effects; extending the method to all aspects of the environment, not just rivers; and allowing account to be taken of design changes to mitigate the risk. The development of the framework has also led to a revision of the tolerability criteria proposed earlier for use with it. The study identifies remaining weaknesses and recommends further work to improve this newly proposed environmental risk assessment framework: it is recommended that the framework be applied to a wide range of other case studies in order to improve it further, and that it be modified to maintain consistency when the DoE revises its definitions of major accidents to the environment. Ease of use of the framework (and of any other environmental framework) would be aided by the compilation of databases for environmental toxicity and river data, and of available consequence models. Further work could also suggest methods of mitigating the risk and include them as numerical factors within the method. (author)

  1. Can mixed assessment methods make biology classes more equitable?

    Science.gov (United States)

    Cotner, Sehoya; Ballen, Cissy J

    2017-01-01

    Many factors have been proposed to explain the attrition of women in science, technology, engineering and math fields, among them the lower performance of women in introductory courses resulting from deficits in incoming preparation. We focus on the impact of mixed methods of assessment, which minimize the impact of high-stakes exams and reward other methods of assessment such as group participation, low-stakes quizzes and assignments, and in-class activities. We hypothesized that these mixed methods would benefit individuals who otherwise underperform on high-stakes tests. Here, we analyze gender-based performance trends in nine large (N > 1000 students) introductory biology courses in fall 2016. Females underperformed on exams compared to their male counterparts, a difference that does not exist for the other methods of assessment that compose the course grade. Further, we analyzed three case studies of courses that transitioned their grading schemes to either de-emphasize or emphasize exams as a proportion of total course grade. We demonstrate that the shift away from an exam emphasis benefits female students, thereby closing gaps in overall performance; the exam performance gap itself is also reduced when the exams contribute less to the overall course grade. We discuss testable predictions that follow from our hypothesis, and advocate for the use of mixed methods of assessment (possibly as part of an overall shift to active learning techniques). We conclude by challenging the student deficit model, and suggest a course deficit model as explanatory of these performance gaps, whereby the microclimate of the classroom can either raise or lower barriers to success for underrepresented groups in STEM.

  2. Can mixed assessment methods make biology classes more equitable?

    Directory of Open Access Journals (Sweden)

    Sehoya Cotner

    Full Text Available Many factors have been proposed to explain the attrition of women in science, technology, engineering and math fields, among them the lower performance of women in introductory courses resulting from deficits in incoming preparation. We focus on the impact of mixed methods of assessment, which minimize the impact of high-stakes exams and reward other methods of assessment such as group participation, low-stakes quizzes and assignments, and in-class activities. We hypothesized that these mixed methods would benefit individuals who otherwise underperform on high-stakes tests. Here, we analyze gender-based performance trends in nine large (N > 1000 students) introductory biology courses in fall 2016. Females underperformed on exams compared to their male counterparts, a difference that does not exist for the other methods of assessment that compose the course grade. Further, we analyzed three case studies of courses that transitioned their grading schemes to either de-emphasize or emphasize exams as a proportion of total course grade. We demonstrate that the shift away from an exam emphasis benefits female students, thereby closing gaps in overall performance; the exam performance gap itself is also reduced when the exams contribute less to the overall course grade. We discuss testable predictions that follow from our hypothesis, and advocate for the use of mixed methods of assessment (possibly as part of an overall shift to active learning techniques). We conclude by challenging the student deficit model, and suggest a course deficit model as explanatory of these performance gaps, whereby the microclimate of the classroom can either raise or lower barriers to success for underrepresented groups in STEM.

  3. Numerical method for time-dependent localized corrosion analysis with moving boundaries by combining the finite volume method and voxel method

    International Nuclear Information System (INIS)

    Onishi, Yuki; Takiyasu, Jumpei; Amaya, Kenji; Yakuwa, Hiroshi; Hayabusa, Keisuke

    2012-01-01

    Highlights: ► A novel numerical method to analyze time-dependent localized corrosion is developed. ► It takes into account electromigration, mass diffusion, chemical reactions, and moving boundaries. ► Our method perfectly satisfies the conservation of mass and electroneutrality. ► The behavior of typical crevice corrosion is successfully simulated. ► Both verification and validation of our method are carried out. - Abstract: A novel numerical method for time-dependent localized corrosion analysis is presented. Electromigration, mass diffusion, chemical reactions, and moving boundaries are considered in the numerical simulation of localized corrosion of engineering alloys in an underwater environment. Our method combines the finite volume method (FVM) and the voxel method. The FVM is adopted in the corrosion rate calculation so that the conservation of mass is satisfied. A newly developed decoupled algorithm with a projection method is introduced into the FVM to decouple the multiphysics problem into electrostatic, mass transport, and chemical reaction analyses while maintaining electroneutrality. The polarization curves for the corroding metal are used as boundary conditions for the metal surfaces to calculate the corrosion rates. The voxel method is adopted to update the moving boundaries of cavities without remeshing and mesh-to-mesh solution mapping. Some modifications of the standard voxel method, which represents boundaries as zigzag-shaped surfaces, are introduced to generate smooth surfaces. Our method successfully reproduces the numerical and experimental results of a capillary electrophoresis problem, and the numerical results are qualitatively consistent with experimental results for several examples of crevice corrosion.

  4. Asymptotic equilibrium diffusion analysis of time-dependent Monte Carlo methods for grey radiative transfer

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2004-01-01

    The equations of nonlinear, time-dependent radiative transfer are known to yield the equilibrium diffusion equation as the leading-order solution of an asymptotic analysis when the mean-free path and mean-free time of a photon become small. We apply this same analysis to the Fleck-Cummings, Carter-Forest, and N'kaoua Monte Carlo approximations for grey (frequency-independent) radiative transfer. Although Monte Carlo simulation usually does not require the discretizations found in deterministic transport techniques, Monte Carlo methods for radiative transfer require a time discretization due to the nonlinearities of the problem. If an asymptotic analysis of the equations used by a particular Monte Carlo method yields an accurate time-discretized version of the equilibrium diffusion equation, the method should generate accurate solutions if a time discretization is chosen that resolves temperature changes, even if the time steps are much larger than the mean-free time of a photon. This analysis is of interest because in many radiative transfer problems, it is a practical necessity to use time steps that are large compared to a mean-free time. Our asymptotic analysis shows that: (i) the N'kaoua method has the equilibrium diffusion limit, (ii) the Carter-Forest method has the equilibrium diffusion limit if the material temperature change during a time step is small, and (iii) the Fleck-Cummings method does not have the equilibrium diffusion limit. We include numerical results that verify our theoretical predictions
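For reference, the leading-order asymptotic limit referred to here is the grey equilibrium diffusion equation, which in its standard form (with u_m the material energy density, a the radiation constant, σ the total opacity and c the speed of light) reads:

```latex
\frac{\partial}{\partial t}\Bigl[\,u_m(T) + aT^4\,\Bigr]
  \;=\; \nabla\cdot\Bigl(\frac{c}{3\sigma}\,\nabla\bigl(aT^4\bigr)\Bigr)
```

A Monte Carlo time discretization "has the equilibrium diffusion limit" when its asymptotic analysis reproduces an accurate time-discretized version of this equation.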

  5. Application of geosites assessment method in geopark context

    Science.gov (United States)

    Martin, Simon; Perret, Amandine; Renau, Pierre; Cartier-Moulin, Olivier; Regolini-Bissig, Géraldine

    2014-05-01

    The regional natural park of the Monts d'Ardèche (Ardèche and Haute-Loire departments, France) is a candidate for the European Geopark Network (EGN) in 2014. The area has a wide geodiversity, with rocks from the Cambrian to the Pleistocene (basalt flows), and interesting features such as phonolitic protrusions, maars and granite boulder fields. Around 115 sites were selected and documented through a geosites inventory carried out in the territory. This pre-selection was supervised by the Ardèche Geological Society and is therefore based on expert advice. In the context of the EGN candidature, these potential geosites were assessed with a simplified method that follows the spirit of the method of the University of Lausanne (Reynard et al., 2007) and its recent developments: assessment of the (central) scientific value and of a set of additional values (ecological and cultural). As this assessment aimed to offer a management tool to the future geopark's authorities, a special focus was given to management aspects: in particular, the opportunities to use each site for education (from schools to universities) and for tourism, as well as the existence of protection measures and interpretive facilities, were documented and assessed. Several interesting conclusions may be drawn from this case study: (1) expert assessment is effective when it is based on a pre-existing inventory that is well structured and documented; (2) even simplified, an assessment method is a very useful framework for expert assessment, as it focuses the discussion on the most important points and helps to balance the assessment; (3) whereas the inventory can be extensively detailed and partly academic, the assessment in the geopark context is objective-driven in order to answer management needs. The place of geosites assessment among the three key players of a geopark construction process (i.e. the territory's managers, local geoscientists and the EGN) is also discussed. This place can be defined as the point of consensus of needs

  6. Assessing Need for Medication-Assisted Treatment for Opiate-Dependent Prison Inmates

    Science.gov (United States)

    Albizu-García, Carmen E.; Caraballo, José Noel; Caraballo-Correa, Glorimar; Hernández-Viver, Adriana; Román-Badenas, Luis

    2012-01-01

    Individuals with a history of heroin dependence are overrepresented in American correctional facilities, and 75% of inmates with a drug use disorder do not receive treatment during incarceration or after release. Medication-assisted treatment (MAT) with opiate agonists, such as methadone or buprenorphine, constitutes the standard of care. To guide planning for an expansion of drug treatment services in correctional facilities, a needs assessment was conducted at the Department of Correction and Rehabilitation (DCR) of Puerto Rico (PR). We report on the research process, the findings that informed our recommendations for the DCR to expand MAT for eligible inmates, and lessons learned. PMID:22263714

  7. Assessing nicotine dependence in adolescent E-cigarette users: The 4-item Patient-Reported Outcomes Measurement Information System (PROMIS) Nicotine Dependence Item Bank for electronic cigarettes.

    Science.gov (United States)

    Morean, Meghan E; Krishnan-Sarin, Suchitra; O'Malley, Stephanie S

    2018-04-26

    Adolescent e-cigarette use (i.e., "vaping") likely confers risk for developing nicotine dependence. However, there have been no studies assessing e-cigarette nicotine dependence in youth. We evaluated the psychometric properties of the 4-item Patient-Reported Outcomes Measurement Information System Nicotine Dependence Item Bank for E-cigarettes (PROMIS-E) for assessing youth e-cigarette nicotine dependence and examined risk factors for experiencing stronger dependence symptoms. In 2017, 520 adolescent past-month e-cigarette users completed the PROMIS-E during a school-based survey (50.5% female, 84.8% White, mean age 16.22 [SD 1.19] years). Adolescents also reported on sex, grade, race, age at e-cigarette use onset, vaping frequency, nicotine e-liquid use, and past-month cigarette smoking. Analyses included conducting confirmatory factor analysis and examining the internal consistency of the PROMIS-E. Bivariate correlations and independent-samples t-tests were used to examine unadjusted relationships between e-cigarette nicotine dependence and the proposed risk factors. Regression models were run in which all potential risk factors were entered as simultaneous predictors of PROMIS-E scores. The single-factor structure of the PROMIS-E was confirmed, and the scale showed good internal consistency. Across models, higher PROMIS-E scores were associated with being in a higher grade, initiating e-cigarette use at an earlier age, vaping more frequently, using nicotine e-liquid (and higher nicotine concentrations), and smoking cigarettes. Adolescent e-cigarette users reported experiencing nicotine dependence, which was assessed using the psychometrically sound PROMIS-E. Experiencing stronger nicotine dependence symptoms was associated with characteristics that have previously been shown to confer risk for frequent vaping and tobacco cigarette dependence. Copyright © 2018 Elsevier B.V. All rights reserved.
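    The internal consistency of a short scale like the 4-item PROMIS-E is commonly summarized with Cronbach's alpha. The sketch below computes it on synthetic item responses generated from a single shared factor; the data and factor structure are purely illustrative, not the study's.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, k_items) array; returns Cronbach's alpha."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of the sum score
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))                   # shared dependence factor
    responses = latent + 0.5 * rng.normal(size=(200, 4))  # 4 correlated items
    print(round(cronbach_alpha(responses), 2))            # high alpha expected
    ```

    With strongly correlated items such as these, alpha lands well above the conventional 0.7 threshold for "good" internal consistency.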

  8. A quantitative assessment method for Ascaris eggs on hands

    DEFF Research Database (Denmark)

    Jeandron, Aurelie; Ensink, Jeroen H. J.; Thamsborg, Stig Milan

    2014-01-01

    The importance of hands in the transmission of soil transmitted helminths, especially Ascaris and Trichuris infections, is under-researched. This is partly because of the absence of a reliable method to quantify the number of eggs on hands. Therefore, the aim of this study was to develop a method...... to assess the number of Ascaris eggs on hands and determine the egg recovery rate of the method. Under laboratory conditions, hands were seeded with a known number of Ascaris eggs, air dried and washed in a plastic bag retaining the washing water, in order to determine recovery rates of eggs for four...... different detergents (cationic [benzethonium chloride 0.1% and cetylpyridinium chloride CPC 0.1%], anionic [7X 1% - quadrafos, glycol ether, and dioctyl sulfoccinate sodium salt] and non-ionic [Tween80 0.1% -polyethylene glycol sorbitan monooleate]) and two egg detection methods (McMaster technique...

  9. Impact of Different Obesity Assessment Methods after Acute Coronary Syndromes

    Directory of Open Access Journals (Sweden)

    Caroline N. M. Nunes

    2014-07-01

    Background: Abdominal obesity is an important cardiovascular risk factor. Therefore, identifying the best method for measuring waist circumference (WC) is a priority. Objective: To evaluate eight methods of measuring WC in patients with acute coronary syndrome (ACS) as predictors of cardiovascular complications during hospitalization. Methods: Prospective study of patients with ACS. The measurement of WC was performed by eight known methods: midpoint between the last rib and the iliac crest (1), point of minimum circumference (2), immediately above the iliac crest (3), umbilicus (4), one inch above the umbilicus (5), one centimeter above the umbilicus (6), smallest rib (7) and the point of greatest circumference around the waist (8). Complications included: angina, arrhythmia, heart failure, cardiogenic shock, hypotension, pericarditis and death. Logistic regression tests were used for predictive factors. Results: A total of 55 patients were evaluated. During the hospitalization period, which corresponded on average to seven days, 37 (67%) patients had complications, with the exception of death, which was not observed in any of the cases. Of these complications, the only one associated with WC was angina, and with every cm of WC increase, the risk of angina increased by 7.5% to 9.9%, depending on the measurement site. Notably, there was no difference between the different methods of measuring WC as predictors of angina. Conclusion: The eight methods of measuring WC are all predictors of recurrent angina after acute coronary syndromes.
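    The reported per-centimetre risk increase can be read through the logistic model the study used. The sketch below converts an assumed per-cm odds ratio into a change in predicted risk; the odds ratio and baseline risk are illustrative numbers, not the paper's fitted estimates.

    ```python
    import math

    # Sketch: in a logistic model, +1 cm of WC multiplies the odds of angina by
    # exp(beta). The odds ratio (~1.08) and baseline risk below are illustrative.
    def risk_after_unit_increase(baseline_risk, odds_ratio):
        """Convert a baseline probability and an odds ratio into the new risk."""
        odds = baseline_risk / (1 - baseline_risk) * odds_ratio
        return odds / (1 + odds)

    or_per_cm = 1.08   # assumed odds ratio per cm of WC (illustrative)
    p0 = 0.50          # assumed baseline probability of angina (illustrative)
    p1 = risk_after_unit_increase(p0, or_per_cm)
    print(f"risk rises {100 * (p1 - p0) / p0:.1f}% per cm of WC")
    ```

    The relative risk change per cm depends on the baseline risk, which is one reason the paper reports a range (7.5% to 9.9%) across measurement sites.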

  10. Personality assessment of substance-dependent patients in a therapeutic community.

    Science.gov (United States)

    Moffett, L A; Steinberg, S L; Rohde, P

    1996-01-01

    The design and implementation of a personality assessment system for severely substance-dependent men in a therapeutic community (TC) are described. The system was designed from a treatment utility perspective (Hayes, Nelson, & Jarrett, 1987) and uses the Personality Research Form E (Jackson, 1984) to provide each patient with feedback (a) describing his normal personality traits, (b) predicting his probable pattern of adjustment to the treatment setting, and (c) prescribing specific actions he can take to address potentially problematic behaviors. Discussing the results with the patient helps him cope with the TC. Reviewing the assessment results with the staff promotes their empathy for the patient as a person whose behavior can be understood as an interaction of his personality with the specific demands of the TC, rather than seeing the patient in exclusively pathological terms. Specific suggestions for behavior change guide both the patient and the staff and are potentially useful in various treatment settings.

  11. Error assessment in recombinant baculovirus titration: evaluation of different methods.

    Science.gov (United States)

    Roldão, António; Oliveira, Rui; Carrondo, Manuel J T; Alves, Paula M

    2009-07-01

    The success of the baculovirus/insect cells system in heterologous protein expression depends on the robustness and efficiency of the production workflow. It is essential that process parameters are controlled and include as little variability as possible. The multiplicity of infection (MOI) is the most critical factor, since irreproducible MOIs caused by inaccurate estimation of viral titers hinder batch consistency and process optimization. This lack of accuracy is related to intrinsic characteristics of the method, such as the inability to distinguish between infectious and non-infectious baculovirus. In this study, several methods for baculovirus titration were compared. The most critical issues identified were the incubation time and the cell concentration at the time of infection. These variables strongly influence the accuracy of titers and must be defined for optimal performance of the titration method. Although the standard errors of the methods varied significantly (7-36%), titers were within the same order of magnitude; thus, viral titers can be considered independent of the method of titration. A cost analysis of the baculovirus titration methods used in this study showed that the alamarBlue, real-time qPCR and plaque assays were the most expensive techniques. The remaining methods cost on average 75% less. Based on the cost, time and error analysis undertaken in this study, the end-point dilution assay, the microculture tetrazolium assay and the flow cytometric assay were found to combine these three main factors best. Nevertheless, it is always recommended to confirm the accuracy of the titration, either by comparison with a well characterized baculovirus reference stock or by titration using two different methods and verification of the variability of results.

  12. Does the determination of inorganic arsenic in rice depend on the method?

    DEFF Research Database (Denmark)

    de la Calle, Maria Beatriz; Emteborg, Håkan; Linsinger, Thomas P.J.

    2011-01-01

    , on the determination of total and inorganic arsenic (As) in rice. The main aim of this PT was to judge the state of the art of analytical capability for the determination of total and inorganic As in rice. For this reason, participation in this exercise was open to laboratories from all over the world. Some 98...... laboratories reported results for total As and 32 for inorganic As. The main conclusions of IMEP-107 were that the concentration of inorganic As determined in rice does not depend on the analytical method applied and that introduction of a maximum level for inorganic As in rice should not be postponed because...

  13. A method for reducing energy dependence of thermoluminescence dosimeter response by means of filters

    International Nuclear Information System (INIS)

    Bapat, V.N.

    1980-01-01

    This work describes the application of the method of partial surface shielding for reducing the energy dependence of the X-ray and γ-ray response of a dosimeter containing a CaSO4:Dy thermoluminescent phosphor mixed with KCl, in pellet form. Results are given of approximate computation of filter combinations that accomplish this aim, and of experimental verifications. Incorporation of the described filter combination makes it possible to use this relatively sensitive dosimeter for environmental radiation monitoring. A similar approach could be applied to any type of dosimeter in the form of a thin pellet or wafer. (author)

  14. Comparison of the surface friction model with the time-dependent Hartree-Fock method

    International Nuclear Information System (INIS)

    Froebrich, P.

    1984-01-01

    A comparison is made between the classical phenomenological surface friction model and a time-dependent Hartree-Fock study by Dhar for the system 208Pb + 74Ge at E_lab(Pb) = 1600 MeV. The general trends for energy loss, mean values for charge and mass, interaction times and energy-angle correlations turn out to be fairly similar in both methods. However, contrary to Dhar, the events close to capture are interpreted as normal deep-inelastic, i.e., not as fast fission processes.

  15. Development of a method for personal, spatiotemporal exposure assessment.

    Science.gov (United States)

    Adams, Colby; Riggs, Philip; Volckens, John

    2009-07-01

    This work describes the development and evaluation of a high-resolution, space- and time-referenced sampling method for personal exposure assessment of airborne particulate matter (PM). This method integrates continuous measures of personal PM levels with the corresponding location-activity (i.e. work/school, home, transit) of the subject. Monitoring equipment includes a small, portable global positioning system (GPS) receiver, a miniature aerosol nephelometer, and an ambient temperature monitor to estimate the location, time, and magnitude of personal exposure to particulate matter air pollution. Precision and accuracy of each component, as well as the integrated method performance, were tested in a combination of laboratory and field tests. Spatial data were apportioned into pre-determined location-activity categories (i.e. work/school, home, transit) with a simple, temporospatially based algorithm. The apportioning algorithm was extremely effective, with an overall accuracy of 99.6%. This method allows examination of an individual's estimated exposure through space and time, which may provide new insights into exposure-activity relationships not possible with traditional exposure assessment techniques (i.e., time-integrated, filter-based measurements). Furthermore, the method is applicable to any contaminant or stressor that can be measured on an individual with a direct-reading sensor.
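    The location-activity apportioning step can be sketched as a simple geofencing rule: label each GPS fix by its distance to known anchor points. The anchor coordinates and radius below are hypothetical, and the paper's actual algorithm also uses temporal information, which is omitted here.

    ```python
    import math

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points, in metres."""
        r = 6371000.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Hypothetical anchor coordinates for one subject.
    ANCHORS = {"home": (40.0000, -105.0000), "work": (40.0100, -105.0200)}

    def classify_fix(lat, lon, radius_m=100.0):
        """Label a GPS fix home/work if within radius of an anchor, else transit."""
        for label, (alat, alon) in ANCHORS.items():
            if haversine_m(lat, lon, alat, alon) <= radius_m:
                return label
        return "transit"

    print(classify_fix(40.0001, -105.0001))  # a fix a few metres from "home"
    ```

    Each labelled fix can then be joined with the concurrent nephelometer reading to apportion exposure by location-activity category.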

  16. Re-assessing copepod growth using the Moult Rate method

    DEFF Research Database (Denmark)

    Hirst, Andrew G.; Keister, J. E.; Richardson, A. J.

    2014-01-01

    Estimating growth and production rates of mesozooplankton, and copepods in particular, is important in describing flows of material and energy though pelagic systems. Over the past 30 years, the Moult Rate (MR) method has been used to estimate juvenile copepod growth rates in ∼40 papers. Yet the MR......-moulting stage, e.g. copepodite stage 5 to adult. We performed experiments with Calanus pacificus to estimate growth of stage C5 using an alternative method. We found that the error size and sign varied between mass type (i.e. DW, C and N). Recommendations for practical future assessments of growth in copepods...

  17. Methods for assessing the effects of dehydration on cognitive function.

    Science.gov (United States)

    Lieberman, Harris R

    2012-11-01

    Studying the effects of dehydration on cognitive function presents a variety of unique and difficult challenges to investigators. These challenges, which are addressed in this article, can be divided into three general categories: 1) choosing an appropriate method of generating a consistent level of dehydration; 2) determining and effectively employing appropriate and sensitive measures of cognitive state; and 3) adequately controlling the many confounding factors that interfere with assessment of cognitive function. The design and conduct of studies on the effects of dehydration on cognitive function should carefully consider various methodological issues, and investigators should carefully weigh the benefits and disadvantages of particular methods and procedures. © 2012 International Life Sciences Institute.

  18. Employment of kernel methods on wind turbine power performance assessment

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Sweeney, Christian Walsted; Marhadi, Kun S.

    2015-01-01

    A power performance assessment technique is developed for the detection of power production discrepancies in wind turbines. The method employs a widely used nonparametric pattern recognition technique, the kernel methods. The evaluation is based on the trending of an extracted feature from...... the kernel matrix, called similarity index, which is introduced by the authors for the first time. The operation of the turbine and consequently the computation of the similarity indexes is classified into five power bins offering better resolution and thus more consistent root cause analysis. The accurate...
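    The abstract does not define the similarity index, so the sketch below uses one plausible form: the mean Gaussian-kernel similarity of an operating point to a reference set of normal (wind speed, power) residuals. Both the index definition and the data are assumptions for illustration.

    ```python
    import numpy as np

    def gaussian_kernel(x, y, sigma=1.0):
        """Standard Gaussian (RBF) kernel between two feature vectors."""
        return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

    def similarity_index(sample, reference, sigma=1.0):
        """Mean kernel similarity of one operating point to a reference set.
        Assumed form; the paper's exact index is not given in the abstract."""
        return float(np.mean([gaussian_kernel(sample, r, sigma) for r in reference]))

    rng = np.random.default_rng(1)
    # Normalized (wind speed, power) residuals under healthy operation.
    reference = rng.normal(0.0, 0.1, size=(50, 2))
    normal_pt = np.array([0.0, 0.0])   # on the expected power curve
    faulty_pt = np.array([0.0, 3.0])   # power far from expectation
    print(similarity_index(normal_pt, reference),
          similarity_index(faulty_pt, reference))
    ```

    Trending such an index per power bin, as the paper describes, lets a drop in similarity flag a production discrepancy within that bin.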

  19. Methods for Developing Emissions Scenarios for Integrated Assessment Models

    Energy Technology Data Exchange (ETDEWEB)

    Prinn, Ronald [MIT]; Webster, Mort [MIT]

    2007-08-20

    The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.
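    The probabilistic scenario idea can be sketched as Monte Carlo propagation of an uncertain AEEI parameter into a relative emissions index. The distribution parameters and growth rate below are illustrative, not the project's fitted PDFs.

    ```python
    import random

    # Sketch: sample the AEEI parameter from an assumed normal PDF and propagate
    # it to a relative emissions index. All numbers here are illustrative.
    def emissions_index(aeei, years=50, growth=0.02):
        """Relative emissions after `years` of GDP growth net of efficiency gains."""
        return (1 + growth - aeei) ** years

    random.seed(42)
    samples = sorted(emissions_index(random.gauss(0.010, 0.003))
                     for _ in range(10000))
    median = samples[len(samples) // 2]
    pct90 = samples[int(0.9 * len(samples))]
    print("median:", round(median, 2), "90th percentile:", round(pct90, 2))
    ```

    A small set of scenarios spanning such percentiles is one way to "usefully span the joint uncertainty space in a small number of scenarios".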

  20. Improved GIS-based Methods for Traffic Noise Impact Assessment

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker; Bloch, Karsten Sand

    1996-01-01

    When vector-based GIS packages are used for traffic noise impact assessments, the buffer technique is usually employed: 1. for each road segment, buffer zones representing different noise intervals are generated; 2. the buffers from all road segments are smoothed together; and 3. the number of buildings within the buffers is enumerated. This technique provides an inaccurate assessment of the noise diffusion since it does not correct for building barriers and noise reflection. The paper presents the results from a research project where the traditional noise buffer technique was compared with a new method which includes these corrections. Both methods follow the Common Nordic Noise Calculation Model, although the traditional buffer technique ignores parts of the model. The basis for the work was a digital map of roads and building polygons, combined with a traffic- and road...
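    The buffer-technique steps above can be sketched with plain distance computations: assign each building to a noise band by its distance to the nearest road segment and count per band. The coordinates and band widths below are invented, and, as in the traditional technique, no barrier or reflection corrections are applied.

    ```python
    import math

    def dist_point_segment(p, a, b):
        """Shortest distance from point p to segment ab (planar metres)."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        t = 0.0 if dx == dy == 0 else max(0.0, min(1.0,
            ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

    roads = [((0, 0), (100, 0))]                          # one road segment
    buildings = [(10, 20), (50, 5), (50, 120)]            # building centroids
    bands = [(0, 30, ">65 dB"), (30, 100, "55-65 dB")]    # assumed distance bands

    counts = {label: 0 for _, _, label in bands}
    for bld in buildings:
        d = min(dist_point_segment(bld, a, b) for a, b in roads)
        for lo, hi, label in bands:
            if lo <= d < hi:
                counts[label] += 1
    print(counts)
    ```

    The paper's corrected method would, in effect, move buildings shielded by others into quieter bands, which this naive enumeration cannot do.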

  1. Possibilities and methods for biochemical assessment of radiation injury

    Energy Technology Data Exchange (ETDEWEB)

    Minkova, M [Meditsinska Akademiya, Sofia (Bulgaria). Nauchen Inst. po Rentgenologiya i Radiobiologiya

    1986-01-01

    An extensive review (77 references) is made of the application of biochemical diagnostic methods for the assessment of radiation diseases. Brief characteristics of several biochemical indicators are given: deoxycytidine, thymidine, rho-aminoisocarboxylic acid, DNA-ase, nucleic acids. The influence of factors such as age, sex and season is studied by means of functional biochemical indicators: creatine, tryptophan metabolites, 5-hydroxy-indolacetic acid, biogenic amines, serum proteins, enzymes, etc.

  2. PATHOS: a quick screening method for assessing sexual addiction.

    Science.gov (United States)

    Johnson, Pennie; Cashwell, Craig S; Cress, Jim; Barber, Tim; Dunn, Mary Clayton

    2013-01-01

    Pastors may understand that sex addiction exists and are frequently faced with people who need non-clinical and clinical services for the addiction. However, pastoral counselors have had no quick, reliable method of assessing it. The purpose of this article is to define sexual addiction and provide information about a tool called PATHOS that can be used in clinical and non-clinical settings to identify potential sex addicts.

  3. Wind resource in metropolitan France: assessment methods, variability and trends

    International Nuclear Information System (INIS)

    Jourdier, Benedicte

    2015-01-01

    France has one of the largest wind potentials in Europe, yet it is far from being fully exploited. Wind resource and energy yield assessment is a key step before building a wind farm, aiming at predicting the future electricity production. Any over-estimation in the assessment process jeopardizes the project's profitability. This has been the case in recent years, as wind farm managers have noticed that they produced less than expected. The under-production problem leads to questioning both the validity of the assessment methods and the inter-annual wind variability. This thesis tackles these two issues. The first part investigates the errors linked to the assessment methods, especially in two steps: the vertical extrapolation of wind measurements and the statistical modelling of wind-speed data by a Weibull distribution. The second part investigates the inter-annual to decadal variability of wind speeds, in order to understand how this variability may have contributed to the under-production and to ensure it is better taken into account in the future. (author)
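    The two assessment steps examined in the first part, vertical extrapolation and Weibull modelling, can be sketched as follows. The power-law shear exponent and the moment-based k-estimate are common engineering approximations, not necessarily the thesis's exact procedures.

    ```python
    import math

    def extrapolate_wind(v_ref, z_ref, z_hub, alpha=0.14):
        """Power-law profile: v(z) = v_ref * (z / z_ref)**alpha.
        alpha=0.14 is a typical open-terrain shear exponent (assumed)."""
        return v_ref * (z_hub / z_ref) ** alpha

    def weibull_moments(speeds):
        """Estimate Weibull (k, c) from sample mean and std (Justus approximation)."""
        n = len(speeds)
        mean = sum(speeds) / n
        std = (sum((v - mean) ** 2 for v in speeds) / (n - 1)) ** 0.5
        k = (std / mean) ** -1.086          # shape
        c = mean / math.gamma(1 + 1 / k)    # scale
        return k, c

    print(round(extrapolate_wind(6.0, 10, 80), 2))  # 10 m mast to 80 m hub
    k, c = weibull_moments([4.1, 6.3, 5.2, 7.8, 3.5, 6.9, 5.5, 4.8])
    print(round(k, 2), round(c, 2))
    ```

    Errors in either step, an ill-chosen shear exponent or a Weibull fit that misrepresents the speed distribution, bias the predicted energy yield, which is the thesis's point.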

  4. Performance assessment plans and methods for the Salt Repository Project

    International Nuclear Information System (INIS)

    1984-08-01

    This document presents the preliminary plans and anticipated methods of the Salt Repository Project (SRP) for assessing the postclosure and radiological aspects of preclosure performance of a nuclear waste repository in salt. This plan is intended to be revised on an annual basis. The emphasis in this preliminary effort is on the method of conceptually dividing the system into three subsystems (the very near field, the near field, and the far field) and applying models to analyze the behavior of each subsystem and its individual components. The next revision will contain more detailed plans being developed as part of Site Characterization Plan (SCP) activities. After a brief system description, this plan presents the performance targets which have been established for nuclear waste repositories by regulatory agencies (Chapter 3). The SRP approach to modeling, including sensitivity and uncertainty techniques is then presented (Chapter 4). This is followed by a discussion of scenario analysis (Chapter 5), a presentation of preliminary data needs as anticipated by the SRP (Chapter 6), and a presentation of the SRP approach to postclosure assessment of the very near field, the near field, and the far field (Chapters 7, 8, and 9, respectively). Preclosure radiological assessment is discussed in Chapter 10. Chapter 11 presents the SRP approach to code verification and validation. Finally, the Appendix lists all computer codes anticipated for use in performance assessments. The list of codes will be updated as plans are revised

  5. Dogmas in the assessment of usability evaluation methods

    DEFF Research Database (Denmark)

    Hornbæk, Kasper

    2010-01-01

    Usability evaluation methods (UEMs) are widely recognised as an essential part of systems development. Assessments of the performance of UEMs, however, have been criticised for low validity and limited reliability. The present study extends this critique by describing seven dogmas in recent work on UEMs. The dogmas include using inadequate procedures and measures for assessment, focusing on win-lose outcomes, holding simplistic models of how usability evaluators work, concentrating on evaluation rather than on design and working from the assumption that usability problems are real. We discuss research approaches that may help move beyond the dogmas. In particular, we emphasise detailed studies of evaluation processes, assessments of the impact of UEMs on design carried out in real-world systems development and analyses of how UEMs may be combined...

  6. A direct method for soil-structure interaction analysis based on frequency-dependent soil masses

    International Nuclear Information System (INIS)

    Danisch, R.; Delinic, K.; Marti, J.; Trbojevic, V.M.

    1993-01-01

    In a soil-structure interaction analysis, the soil, as a subsystem of the global vibrating system, exerts a strong influence on the response of the nuclear reactor building to the earthquake excitation. The volume of resources required for dealing with the soil has led to a number of different types of frequency-domain solutions, most of them based on the impedance function approach. These procedures require coupling the soil to the lumped-mass finite-element model of the reactor building. In most practical cases, the global vibrating system is analysed in the time domain (i.e. modal time history, or linear or non-linear direct time integration). Hence, it follows that the frequency-domain solution for the soil must be converted to an 'equivalent' soil model in the time domain. Over the past three decades, different approaches have been developed and used for earthquake analysis of nuclear power plants. In some cases, difficulties experienced in modelling the soil have affected the methods of global analysis, leading to approaches such as the substructuring technique, e.g. the 3-step method. In practical applications, the limitations of each specific method must be taken into account in order to avoid unrealistic results. The aim of this paper is to present a recent development: an equivalent SDOF system for the soil including frequency-dependent soil masses. The method is compared with the classical 3-step method. (author)

  7. A proposed impact assessment method for genetically modified plants (AS-GMP Method)

    International Nuclear Information System (INIS)

    Jesus-Hitzschky, Katia Regina Evaristo de; Silveira, Jose Maria F.J. da

    2009-01-01

    An essential step in the development of products based on biotechnology is an assessment of their potential economic impacts and safety, including an evaluation of the potential impact of transgenic crops and practices related to their cultivation on the environment and human or animal health. The purpose of this paper is to provide an assessment method to evaluate the impact of biotechnologies that uses quantifiable parameters and allows a comparative analysis between conventional technology and technologies using GMOs. This paper introduces a method to perform an impact analysis associated with the commercial release and use of genetically modified plants, the Assessment System GMP (AS-GMP) Method. The assessment is performed through indicators that are arranged according to their dimension criterion: environmental, economic, social, capability and institutional. To perform an accurate evaluation of the GMP, specific indicators related to genetic modification are grouped in common fields: genetic insert features, GM plant features, gene flow, food/feed field, introduction of the GMP, unexpected occurrences and specific indicators. The novelty is the possibility of including specific parameters for the biotechnology under assessment. In this case-by-case analysis, the moderation factors and the indexes are parameterized to perform the assessment.

  8. Frequency-dependent homogenized properties of composites using a spectral analysis method

    International Nuclear Information System (INIS)

    Ben Amor, M; Ben Ghozlen, M H; Lanceleur, P

    2010-01-01

    An inverse procedure is proposed to determine the material constants of multilayered composites using a spectral-analysis homogenization method. A recursive process gives the interfacial displacement perpendicular to the layers as a function of depth. A fast Fourier transform (FFT) procedure is used to extract the wave numbers propagating in the multilayer. The upper frequency bound of this homogenization domain is estimated. Inside the homogenization domain, at most three plane waves are susceptible to propagate in the medium. A consistent algorithm is adopted to develop an inverse procedure for the determination of the material constants of a multidirectional composite. The extracted wave numbers are used as the inputs for the procedure. The outputs are the elastic constants of the multidirectional composite. Using this method, the frequency-dependent effective elastic constants are obtained, and an example for [0/90] composites is given.
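    The FFT wavenumber-extraction step can be sketched on a synthetic displacement-versus-depth signal: transform it and read off the dominant spatial frequencies. The two wavenumbers below are invented (not a real composite's) and chosen bin-aligned so the spectral peaks are sharp.

    ```python
    import numpy as np

    # Synthetic interfacial displacement sampled through the laminate depth.
    n, dz = 1024, 1e-4                  # samples and spacing (m); illustrative
    L = n * dz
    z = np.arange(n) * dz
    k1 = 2 * np.pi * 32 / L             # two assumed propagating wavenumbers,
    k2 = 2 * np.pi * 80 / L             # rad/m, aligned to FFT bins
    u = np.cos(k1 * z) + 0.5 * np.cos(k2 * z)

    spectrum = np.abs(np.fft.rfft(u))
    k_axis = 2 * np.pi * np.fft.rfftfreq(n, d=dz)    # angular wavenumber axis
    peaks = sorted(k_axis[np.argsort(spectrum)[-2:]])  # two strongest components
    print([round(p) for p in peaks])                   # recovers k1 and k2
    ```

    In the inverse procedure, wavenumbers extracted this way serve as inputs from which the effective elastic constants are back-calculated.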

  9. A Bayesian method for assessing multiscale species-habitat relationships

    Science.gov (United States)

    Stuber, Erica F.; Gruber, Lutz F.; Fontaine, Joseph J.

    2017-01-01

    Context: Scientists face several theoretical and methodological challenges in appropriately describing fundamental wildlife-habitat relationships in models. The spatial scales of habitat relationships are often unknown, and are expected to follow a multi-scale hierarchy. Typical frequentist or information-theoretic approaches often suffer from collinearity in multi-scale studies, fail to converge when models are complex, or represent an intractable computational burden when candidate model sets are large. Objectives: Our objective was to implement an automated, Bayesian method for inference on the spatial scales of habitat variables that best predict animal abundance. Methods: We introduce Bayesian latent indicator scale selection (BLISS), a Bayesian method to select spatial scales of predictors using latent scale indicator variables that are estimated with reversible-jump Markov chain Monte Carlo sampling. BLISS does not suffer from collinearity, and substantially reduces computation time. We present a simulation study to validate our method and apply our method to a case study of land cover predictors for ring-necked pheasant (Phasianus colchicus) abundance in Nebraska, USA. Results: Our method returns accurate descriptions of the explanatory power of multiple spatial scales, and unbiased and precise parameter estimates under commonly encountered data limitations including spatial scale autocorrelation, effect size, and sample size. BLISS outperforms commonly used model selection methods including stepwise and AIC, and reduces runtime by 90%. Conclusions: Given the pervasiveness of scale-dependency in ecology, and the implications of mismatches between the scales of analyses and ecological processes, identifying the spatial scales over which species are integrating habitat information is an important step in understanding species-habitat relationships. BLISS is a widely applicable method for identifying important spatial scales, propagating scale uncertainty, and

  10. Assessing Security of Supply: Three Methods Used in Finland

    Science.gov (United States)

    Sivonen, Hannu

    Public Private Partnership (PPP) has an important role in securing supply in Finland. Three methods are used in assessing the level of security of supply. First, in national expert groups, a linear mathematical model has been used. The model is based on interdependency estimates. It ranks societal functions or their more detailed components, such as items in the food supply chain, according to the effect and risk pertinent to the interdependencies. Second, the security of supply is assessed in industrial branch committees (clusters and pools) in the form of indicators. The level of security of supply is assessed against five generic factors (dimension 1) and tens of business-branch-specific functions (dimension 2). Third, in two thousand individual critical companies, the maturity of operational continuity management is assessed using the Capability Maturity Model (CMM) in an extranet application. The pool committees and authorities obtain an anonymous summary. The assessments are used in allocating efforts for securing supply. The efforts may be new instructions, training, exercising, and in some cases, investment and regulation.
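    The first method, ranking societal functions from pairwise interdependency estimates, can be sketched as follows. The matrix values and the column-sum criticality measure are illustrative assumptions, not the actual Finnish model.

    ```python
    import numpy as np

    # Hypothetical interdependency estimates: D[i, j] is the degree (0-1) to
    # which function i depends on function j. Values are invented.
    functions = ["energy", "food supply", "transport", "ICT"]
    D = np.array([[0.0, 0.1, 0.4, 0.6],
                  [0.7, 0.0, 0.8, 0.5],
                  [0.6, 0.1, 0.0, 0.7],
                  [0.9, 0.1, 0.3, 0.0]])

    # A simple criticality measure: how heavily everything else leans on j.
    criticality = D.sum(axis=0)
    ranking = [functions[i] for i in np.argsort(-criticality)]
    print(ranking)
    ```

    A ranking like this indicates where securing-supply efforts (instructions, training, exercising, investment) would have the widest knock-on effect.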

  11. Impact of Different Obesity Assessment Methods after Acute Coronary Syndromes

    Energy Technology Data Exchange (ETDEWEB)

    Nunes, Caroline N. M.; Minicucci, Marcos F.; Farah, Elaine; Fusco, Daniéliso; Azevedo, Paula S.; Paiva, Sergio A. R.; Zornoff, Leonardo A. M., E-mail: lzornoff@cardiol.br [Faculdade de Medicina de Botucatu, Botucatu, SP (Brazil)

    2014-07-15

    Abdominal obesity is an important cardiovascular risk factor. Therefore, identifying the best method for measuring waist circumference (WC) is a priority. To evaluate eight methods of measuring WC in patients with acute coronary syndrome (ACS) as predictors of cardiovascular complications during hospitalization. Prospective study of patients with ACS. The measurement of WC was performed by eight known methods: midpoint between the last rib and the iliac crest (1); point of minimum circumference (2); immediately above the iliac crest (3); umbilicus (4); one inch above the umbilicus (5); one centimeter above the umbilicus (6); smallest rib (7); and the point of greatest circumference around the waist (8). Complications included: angina, arrhythmia, heart failure, cardiogenic shock, hypotension, pericarditis and death. Logistic regression tests were used for predictive factors. A total of 55 patients were evaluated. During the hospitalization period, which corresponded on average to seven days, 37 (67%) patients had complications, with the exception of death, which was not observed in any of the cases. Of these complications, the only one that was associated with WC was angina, and with every 1-cm increase in WC, the risk of angina increased by 7.5% to 9.9%, depending on the measurement site. It is noteworthy that there was no difference between the different methods of measuring WC as a predictor of angina. All eight methods of measuring WC are predictors of recurrent angina after acute coronary syndromes.

  12. Impact of Different Obesity Assessment Methods after Acute Coronary Syndromes

    International Nuclear Information System (INIS)

    Nunes, Caroline N. M.; Minicucci, Marcos F.; Farah, Elaine; Fusco, Daniéliso; Azevedo, Paula S.; Paiva, Sergio A. R.; Zornoff, Leonardo A. M.

    2014-01-01

    Abdominal obesity is an important cardiovascular risk factor. Therefore, identifying the best method for measuring waist circumference (WC) is a priority. To evaluate eight methods of measuring WC in patients with acute coronary syndrome (ACS) as predictors of cardiovascular complications during hospitalization. Prospective study of patients with ACS. The measurement of WC was performed by eight known methods: midpoint between the last rib and the iliac crest (1); point of minimum circumference (2); immediately above the iliac crest (3); umbilicus (4); one inch above the umbilicus (5); one centimeter above the umbilicus (6); smallest rib (7); and the point of greatest circumference around the waist (8). Complications included: angina, arrhythmia, heart failure, cardiogenic shock, hypotension, pericarditis and death. Logistic regression tests were used for predictive factors. A total of 55 patients were evaluated. During the hospitalization period, which corresponded on average to seven days, 37 (67%) patients had complications, with the exception of death, which was not observed in any of the cases. Of these complications, the only one that was associated with WC was angina, and with every 1-cm increase in WC, the risk of angina increased by 7.5% to 9.9%, depending on the measurement site. It is noteworthy that there was no difference between the different methods of measuring WC as a predictor of angina. All eight methods of measuring WC are predictors of recurrent angina after acute coronary syndromes.
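    The study's regression result can be illustrated with simulated data (no real patient data are used; the effect size, prevalence and sample size below are invented to roughly match the reported 7.5-9.9% increase in odds per cm of WC):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a cohort: in-hospital angina versus waist circumference (WC), with a
# true log-odds increase of 0.08 per cm (~8% higher odds), in the range the
# abstract reports. All numbers here are hypothetical.
n = 500
wc = rng.normal(95, 12, size=n)                    # waist circumference, cm
logit = -8.0 + 0.08 * wc
angina = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))

# Fit logistic regression by Newton-Raphson (IRLS); centering WC improves
# conditioning and leaves the per-cm odds ratio unchanged.
X = np.column_stack([np.ones(n), wc - wc.mean()])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    grad = X.T @ (angina - p)
    hess = X.T @ (X * W[:, None])
    beta += np.linalg.solve(hess, grad)

print(f"odds ratio per cm of WC: {np.exp(beta[1]):.3f}")
```

    The fitted odds ratio per cm is `exp(beta[1])`, the quantity the paper compares across the eight measurement sites.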

  13. Time-dependent crack growth in Alloy 718: An interim assessment

    International Nuclear Information System (INIS)

    James, L.A.

    1982-08-01

    Previous results on the time-dependent nature of fatigue-crack propagation (FCP) in Alloy 718 at elevated temperatures were reviewed. Additional experiments were conducted to further define certain aspects of the time-dependent crack growth behavior. It was found that loading waveform influenced FCP behavior, with tensile hold-times producing higher growth rates than continuous cycling at the same frequency. Crack growth rates under hold-time conditions tended to increase with decreasing grain size. Finally, experiments were conducted which tended to cast some doubt upon the ability of linear-elastic fracture mechanics (LEFM) techniques to characterize cracking behavior in this alloy under hold-time conditions. However, since a superior correlating parameter has not yet been proven, it is suggested that LEFM methods be used in the interim with appropriate safety factors to account for the potential errors. 34 refs., 10 figs., 4 tabs.

  14. A rapid reliability estimation method for directed acyclic lifeline networks with statistically dependent components

    International Nuclear Information System (INIS)

    Kang, Won-Hee; Kliese, Alyce

    2014-01-01

    Lifeline networks, such as transportation, water supply, sewers, telecommunications, and electrical and gas networks, are essential elements for the economic and societal functions of urban areas, but their components are highly susceptible to natural or man-made hazards. In this context, it is essential to provide effective pre-disaster hazard mitigation strategies and prompt post-disaster risk management efforts based on rapid system reliability assessment. This paper proposes a rapid reliability estimation method for node-pair connectivity analysis of lifeline networks, especially when the network components are statistically correlated. Recursive procedures are proposed to compound all network nodes until they become a single super node representing the connectivity between the origin and destination nodes. The proposed method is applied to numerical network examples and benchmark interconnected power and water networks in Memphis, Shelby County. The connectivity analysis results show the proposed method's reasonable accuracy and remarkable efficiency compared to Monte Carlo simulations.
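    The paper's recursive compounding algorithm is not spelled out in the abstract. As the point of reference it is benchmarked against, a plain Monte Carlo estimate of origin-destination connectivity with statistically dependent component failures can be sketched as follows, using a Gaussian copula as one common dependence model. The network, failure probabilities and correlation below are invented:

```python
import numpy as np
from collections import deque
from statistics import NormalDist

rng = np.random.default_rng(2)

# Hypothetical 4-node network; edge i fails with marginal probability p_fail[i],
# and failures are correlated through a Gaussian copula with common rho.
n_nodes, dest = 4, 3
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]
p_fail = np.array([0.1, 0.2, 0.1, 0.2, 0.15])
rho = 0.5

m = len(edges)
cov = rho * np.ones((m, m)) + (1 - rho) * np.eye(m)   # unit-variance, correlated
L = np.linalg.cholesky(cov)
z_th = np.array([NormalDist().inv_cdf(p) for p in p_fail])

def connected(up):
    """BFS from node 0 over surviving edges; is the destination reachable?"""
    adj = {v: [] for v in range(n_nodes)}
    for (a, b), alive in zip(edges, up):
        if alive:
            adj[a].append(b)
            adj[b].append(a)
    seen, queue = {0}, deque([0])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return dest in seen

N = 20000
Z = rng.normal(size=(N, m)) @ L.T     # correlated standard normals
failed = Z < z_th                     # preserves the marginal failure probs
est = sum(connected(~row) for row in failed) / N
print(f"P(origin-destination connected) = {est:.3f}")
```

    Monte Carlo needs many samples for rare-event accuracy, which is the computational cost the paper's recursive method avoids.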

  15. Dependence of the legume seeds vigour on their maturity and method of harvest

    Directory of Open Access Journals (Sweden)

    Stanisław Grzesiuk

    2014-01-01

    Full Text Available Several methods were used to study the vigour and viability of legume seeds (Pisum sativum L. cv. Hamil, Pisum arvense L. cv. Mazurska and Lupinus luteus L. cv. Tomik) harvested at three main stages of seed ripening (green, wax and full). The seeds were tested immediately after harvest (series A) and after two weeks of storage in pods (series B). It was found that: (1) the vigour of ripening legume seeds increases with maturation; (2) post-harvest storage in pods increases the degree of ripeness and, consequently, vigour; (3) seeds attain full vigour later than full viability; (4) the seed leachate conductivity method gives erroneous results in assessing the vigour of immature seeds; (5) the full vigour of maturing seeds of various degrees of ripeness can be determined by the simultaneous application of both biological (e.g. seedling growth analysis, VI) and biochemical (e.g. total dehydrogenase activity) methods.

  16. Efficient exact-exchange time-dependent density-functional theory methods and their relation to time-dependent Hartree-Fock.

    Science.gov (United States)

    Hesselmann, Andreas; Görling, Andreas

    2011-01-21

    A recently introduced time-dependent exact-exchange (TDEXX) method, i.e., a response method based on time-dependent density-functional theory that treats the frequency-dependent exchange kernel exactly, is reformulated. In the reformulated version of the TDEXX method electronic excitation energies can be calculated by solving a linear generalized eigenvalue problem while in the original version of the TDEXX method a laborious frequency iteration is required in the calculation of each excitation energy. The lowest eigenvalues of the new TDEXX eigenvalue equation corresponding to the lowest excitation energies can be efficiently obtained by, e.g., a version of the Davidson algorithm appropriate for generalized eigenvalue problems. Alternatively, with the help of a series expansion of the new TDEXX eigenvalue equation, standard eigensolvers for large regular eigenvalue problems, e.g., the standard Davidson algorithm, can be used to efficiently calculate the lowest excitation energies. With the help of the series expansion as well, the relation between the TDEXX method and time-dependent Hartree-Fock is analyzed. Several ways to take into account correlation in addition to the exact treatment of exchange in the TDEXX method are discussed, e.g., a scaling of the Kohn-Sham eigenvalues, the inclusion of (semi)local approximate correlation potentials, or hybrids of the exact-exchange kernel with kernels within the adiabatic local density approximation. The lowest lying excitations of the molecules ethylene, acetaldehyde, and pyridine are considered as examples.
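    The computational core described above, solving a linear generalized eigenvalue problem for the lowest excitation energies, can be illustrated generically. The sketch below uses random symmetric matrices (not an actual TDEXX kernel) and a Cholesky reduction to a standard symmetric problem; for the large matrices arising in practice one would use a Davidson-type iterative solver rather than a dense diagonalization:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy generalized eigenvalue problem A x = w B x with B symmetric positive
# definite. The matrices are random stand-ins for the TDEXX response matrices.
n = 50
M = rng.normal(size=(n, n))
A = M + M.T + n * np.eye(n)          # symmetric toy "response" matrix
Bm = rng.normal(size=(n, n))
B = Bm @ Bm.T + n * np.eye(n)        # symmetric positive definite metric

# Reduce via Cholesky: B = L L^T  =>  (L^-1 A L^-T) z = w z with z = L^T x.
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
C = Linv @ A @ Linv.T
w, V = np.linalg.eigh(C)             # eigenvalues in ascending order
x0 = np.linalg.solve(L.T, V[:, 0])   # back-transform the lowest eigenvector

print("lowest eigenvalue:", w[0])
print("residual check:", np.allclose(A @ x0, w[0] * B @ x0))
```

    `eigh` returns eigenvalues in ascending order, so the lowest few entries of `w` play the role of the lowest excitation energies in the reformulated TDEXX equations.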

  17. Microbiological study of lactic acid bacteria in kefir grains by culture-dependent and culture-independent methods.

    Science.gov (United States)

    Chen, Hsi-Chia; Wang, Sheng-Yao; Chen, Ming-Ju

    2008-05-01

    Lactic acid bacteria (LAB) in different original kefir grains were first assessed using polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) in a culture-dependent way, and were further confirmed by DNA sequencing techniques. Results indicated that a combined method of cultivation with PCR-DGGE and subsequent DNA sequencing could successfully identify four LAB strains from three kefir grains from Taiwan (named Hsinchu, Mongolia and Ilan). Lactobacillus kefiri accounted, in the three kefir grains, for at least half of the isolated colonies, while Lb. kefiranofaciens was the second most frequently isolated species. Leuconostoc mesenteroides was less frequently found but was still present in all three kefir grains, whereas Lactococcus lactis, based on culture-dependent isolation, was found in only two of the kefir grains. It was interesting to find that all three kefir grains contained similar LAB species. Furthermore, DGGE as a culture-independent method was also applied to detect the LAB strains. Results indicated that Lb. kefiranofaciens was found in all three kefir grains, whereas Lb. kefiri was only observed in the Hsinchu kefir grain and Lc. lactis was found in both the Mongolia and Ilan samples. Two additional strains, Pseudomonas spp. and E. coli, were also detected in the kefir grains.

  18. Weakly intrusive low-rank approximation method for nonlinear parameter-dependent equations

    KAUST Repository

    Giraldi, Loic; Nouy, Anthony

    2017-01-01

    This paper presents a weakly intrusive strategy for computing a low-rank approximation of the solution of a system of nonlinear parameter-dependent equations. The proposed strategy relies on a Newton-like iterative solver which only requires evaluations of the residual of the parameter-dependent equation and of a preconditioner (such as the differential of the residual) for instances of the parameters independently. The algorithm provides an approximation of the set of solutions associated with a possibly large number of instances of the parameters, with a computational complexity which can be orders of magnitude lower than when using the same Newton-like solver for all instances of the parameters. The reduction of complexity requires efficient strategies for obtaining low-rank approximations of the residual, of the preconditioner, and of the increment at each iteration of the algorithm. For the approximation of the residual and the preconditioner, weakly intrusive variants of the empirical interpolation method are introduced, which require evaluations of entries of the residual and the preconditioner. Then, an approximation of the increment is obtained by using a greedy algorithm for low-rank approximation, and a low-rank approximation of the iterate is finally obtained by using a truncated singular value decomposition. When the preconditioner is the differential of the residual, the proposed algorithm is interpreted as an inexact Newton solver for which a detailed convergence analysis is provided. Numerical examples illustrate the efficiency of the method.

  19. Weakly intrusive low-rank approximation method for nonlinear parameter-dependent equations

    KAUST Repository

    Giraldi, Loic

    2017-06-30

    This paper presents a weakly intrusive strategy for computing a low-rank approximation of the solution of a system of nonlinear parameter-dependent equations. The proposed strategy relies on a Newton-like iterative solver which only requires evaluations of the residual of the parameter-dependent equation and of a preconditioner (such as the differential of the residual) for instances of the parameters independently. The algorithm provides an approximation of the set of solutions associated with a possibly large number of instances of the parameters, with a computational complexity which can be orders of magnitude lower than when using the same Newton-like solver for all instances of the parameters. The reduction of complexity requires efficient strategies for obtaining low-rank approximations of the residual, of the preconditioner, and of the increment at each iteration of the algorithm. For the approximation of the residual and the preconditioner, weakly intrusive variants of the empirical interpolation method are introduced, which require evaluations of entries of the residual and the preconditioner. Then, an approximation of the increment is obtained by using a greedy algorithm for low-rank approximation, and a low-rank approximation of the iterate is finally obtained by using a truncated singular value decomposition. When the preconditioner is the differential of the residual, the proposed algorithm is interpreted as an inexact Newton solver for which a detailed convergence analysis is provided. Numerical examples illustrate the efficiency of the method.
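    A heavily simplified illustration of the strategy, assuming a toy nonlinear system F(u; p) = A(p)u + u³ − b and exact residual/Jacobian evaluations (the paper replaces these with weakly intrusive empirical interpolation surrogates), with the matrix of iterates truncated to low rank by SVD after each Newton sweep:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy parameter-dependent nonlinear system: F(u; p) = (A0 + p*A1) u + u**3 - b.
# We solve it for many parameter values at once and keep the iterate low-rank.
n, P = 40, 100
A0 = 3.0 * np.eye(n)
A1 = 0.05 * rng.normal(size=(n, n))     # mild parameter dependence
b = rng.normal(size=n)
params = np.linspace(0.0, 1.0, P)

def truncate(U, rank):
    """Best rank-`rank` approximation via truncated SVD."""
    Q, S, Vt = np.linalg.svd(U, full_matrices=False)
    return (Q[:, :rank] * S[:rank]) @ Vt[:rank]

U = np.zeros((n, P))                    # columns = iterates for each parameter
for _ in range(20):
    for j, p in enumerate(params):      # one Newton step per parameter instance
        A = A0 + p * A1
        r = A @ U[:, j] + U[:, j] ** 3 - b
        J = A + np.diag(3 * U[:, j] ** 2)
        U[:, j] -= np.linalg.solve(J, r)
    U = truncate(U, rank=5)             # compress the iterate after each sweep

res = max(np.linalg.norm((A0 + p * A1) @ U[:, j] + U[:, j] ** 3 - b)
          for j, p in enumerate(params))
print(f"max residual over all parameters: {res:.2e}")
```

    Because the solution map p → u(p) is smooth, a small rank captures all P solutions at once; the paper's savings come from additionally evaluating only a few entries of the residual and preconditioner per iteration.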

  20. Assessment of diagnostic methods for determining degradation of check valves

    International Nuclear Information System (INIS)

    Haynes, H.D.; Farmer, W.S.

    1992-01-01

    The Oak Ridge National Laboratory (ORNL) has carried out a comprehensive aging assessment of check valves in support of the Nuclear Plant Aging Research (NPAR) program. This paper provides a summary of the ORNL check valve aging assessment with emphasis on the identification, evaluation, and application of check valve monitoring methods and techniques. Several check valve monitoring methods are described and compared. These methods include: acoustic emission monitoring, ultrasonic inspection, magnetic flux signature analysis (MFSA), and external magnetics. These diagnostic technologies were shown to be useful in determining check valve condition (e.g., disc position, disc motion, and seat leakage), although none of the methods was, by itself, successful in monitoring all three condition indicators. The combination of acoustic emission with either ultrasonics or one of the magnetic technologies, however, yields a monitoring system that succeeds in providing the sensitivity to detect all major check valve operating conditions. Other areas covered in the paper include descriptions of relevant regulatory issues, utility group activities, and interactions ORNL has had with outside organizations for the purpose of disseminating research results.

  1. Collaborative framework for PIV uncertainty quantification: comparative assessment of methods

    International Nuclear Information System (INIS)

    Sciacchitano, Andrea; Scarano, Fulvio; Neal, Douglas R; Smith, Barton L; Warner, Scott O; Vlachos, Pavlos P; Wieneke, Bernhard

    2015-01-01

    A posteriori uncertainty quantification of particle image velocimetry (PIV) data is essential to obtain accurate estimates of the uncertainty associated with a given experiment. This is particularly relevant when measurements are used to validate computational models or in design and decision processes. In spite of the importance of the subject, the first PIV uncertainty quantification (PIV-UQ) methods have been developed only in the last three years. The present work is a comparative assessment of four approaches recently proposed in the literature: the uncertainty surface method (Timmins et al 2012), the particle disparity approach (Sciacchitano et al 2013), the peak ratio criterion (Charonko and Vlachos 2013) and the correlation statistics method (Wieneke 2015). The analysis is based upon experiments conducted for this specific purpose, where several measurement techniques are employed simultaneously. The performances of the above approaches are surveyed across different measurement conditions and flow regimes. (paper)

  2. A Systems-Level Approach to Building Sustainable Assessment Cultures: Moderation, Quality Task Design and Dependability of Judgement

    Science.gov (United States)

    Colbert, Peta; Wyatt-Smith, Claire; Klenowski, Val

    2012-01-01

    This article considers the conditions that are necessary at system and local levels for teacher assessment to be valid, reliable and rigorous. With sustainable assessment cultures as a goal, the article examines how education systems can support local-level efforts for quality learning and dependable teacher assessment. This is achieved through…

  3. Simplified methods to assess thermal fatigue due to turbulent mixing

    International Nuclear Information System (INIS)

    Hannink, M.H.C.; Timperi, A.

    2011-01-01

    Thermal fatigue is a safety relevant damage mechanism in pipework of nuclear power plants. A well-known simplified method for the assessment of thermal fatigue due to turbulent mixing is the so-called sinusoidal method. Temperature fluctuations in the fluid are described by a sinusoidally varying signal at the inner wall of the pipe. Because of limited information on the thermal loading conditions, this approach generally leads to overconservative results. In this paper, a new assessment method is presented, which has the potential of reducing the overconservatism of existing procedures. Artificial fluid temperature signals are generated by superposition of harmonic components with different amplitudes and frequencies. The amplitude-frequency spectrum of the components is modelled by a formula obtained from turbulence theory, whereas the phase differences are assumed to be randomly distributed. Lifetime predictions generated with the new simplified method are compared with lifetime predictions based on real fluid temperature signals, measured in an experimental setup of a mixing tee. Also, preliminary steady-state Computational Fluid Dynamics (CFD) calculations of the total power of the fluctuations are presented. The total power is needed as an input parameter for the spectrum formula in a real-life application. Solution of the transport equation for the total power was included in a CFD code and comparisons with experiments were made. The newly developed simplified method for generating the temperature signal is shown to be adequate for the investigated geometry and flow conditions, and demonstrates possibilities of reducing the conservatism of the sinusoidal method. CFD calculations of the total power show promising results, but further work is needed to develop the approach. (author)
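    The signal-generation step can be sketched as follows. The amplitude-frequency spectrum formula from turbulence theory is not given in the abstract, so a generic power-law decay and a target fluctuation RMS are assumed here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Build an artificial fluid temperature signal as a superposition of harmonic
# components with prescribed amplitudes and uniformly random phases.
fs, T = 100.0, 60.0                   # sampling rate [Hz], duration [s]
t = np.arange(0.0, T, 1.0 / fs)
freqs = np.arange(0.1, 10.0, 0.1)     # harmonic components [Hz]

amps = freqs ** (-5.0 / 6.0)          # placeholder decay law, NOT the paper's
amps *= 5.0 / np.sqrt(0.5 * np.sum(amps ** 2))   # scale to a target RMS of 5 K
phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)

# Each sinusoid contributes amplitude**2 / 2 to the signal variance, so the
# "total power" input parameter directly fixes the overall scaling of `amps`.
signal = np.sum(
    amps[:, None] * np.sin(2.0 * np.pi * freqs[:, None] * t + phases[:, None]),
    axis=0,
)
print(f"RMS of temperature fluctuation: {signal.std():.2f} K")
```

    The random phases are what distinguish this approach from the single-frequency sinusoidal method; each realization gives a different, but statistically equivalent, wall-temperature loading history.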

  4. Delineating species with DNA barcodes: a case of taxon dependent method performance in moths.

    Directory of Open Access Journals (Sweden)

    Mari Kekkonen

    Full Text Available The accelerating loss of biodiversity has created a need for more effective ways to discover species. Novel algorithmic approaches for analyzing sequence data combined with rapidly expanding DNA barcode libraries provide a potential solution. While several analytical methods are available for the delineation of operational taxonomic units (OTUs), few studies have compared their performance. This study compares the performance of one morphology-based and four DNA-based (BIN, parsimony networks, ABGD, GMYC) methods on two groups of gelechioid moths. It examines 92 species of Finnish Gelechiinae and 103 species of Australian Elachistinae which were delineated by traditional taxonomy. The results reveal a striking difference in performance between the two taxa with all four DNA-based methods. OTU counts in the Elachistinae showed a wider range and a relatively low (ca. 65%) OTU match with reference species, while OTU counts were more congruent and performance was higher (ca. 90%) in the Gelechiinae. Performance rose when only monophyletic species were compared, but the taxon-dependence remained. None of the DNA-based methods produced a correct match with non-monophyletic species, but singletons were handled well. A simulated test of morphospecies-grouping performed very poorly in revealing taxon diversity in these small, dull-colored moths. Despite the strong performance of analyses based on DNA barcodes, species delineated using single-locus mtDNA data are best viewed as OTUs that require validation by subsequent integrative taxonomic work.

  5. A pseudospectral matrix method for time-dependent tensor fields on a spherical shell

    International Nuclear Information System (INIS)

    Brügmann, Bernd

    2013-01-01

    We construct a pseudospectral method for the solution of time-dependent, non-linear partial differential equations on a three-dimensional spherical shell. The problem we address is the treatment of tensor fields on the sphere. As a test case we consider the evolution of a single black hole in numerical general relativity. A natural strategy would be the expansion in tensor spherical harmonics in spherical coordinates. Instead, we consider the simpler and potentially more efficient possibility of a double Fourier expansion on the sphere for tensors in Cartesian coordinates. As usual for the double Fourier method, we employ a filter to address time-step limitations and certain stability issues. We find that a tensor filter based on spin-weighted spherical harmonics is successful, while two simplified, non-spin-weighted filters do not lead to stable evolutions. The derivatives and the filter are implemented by matrix multiplication for efficiency. A key technical point is the construction of a matrix multiplication method for the spin-weighted spherical harmonic filter. As example for the efficient parallelization of the double Fourier, spin-weighted filter method we discuss an implementation on a GPU, which achieves a speed-up of up to a factor of 20 compared to a single core CPU implementation
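    The "derivatives and filter by matrix multiplication" idea reduces, in one dimension, to building dense Fourier differentiation and filter matrices. A minimal periodic-grid sketch (not the paper's spin-weighted spherical filter):

```python
import numpy as np

# Build a Fourier differentiation matrix and a low-pass filter matrix on a
# periodic grid, and apply both as dense matrix multiplications.
N = 64
x = 2.0 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)             # integer wavenumbers

F = np.fft.fft(np.eye(N), axis=0)            # DFT as a matrix
Finv = np.fft.ifft(np.eye(N), axis=0)        # inverse DFT as a matrix
D = np.real(Finv @ np.diag(1j * k) @ F)      # spectral derivative, one matmul

mask = (np.abs(k) <= N // 3).astype(float)   # 2/3-rule low-pass mask
Filt = np.real(Finv @ np.diag(mask) @ F)     # spectral filter, also one matmul

u = np.sin(3.0 * x)
print("derivative exact:", np.allclose(D @ u, 3.0 * np.cos(3.0 * x)))
print("low mode preserved by filter:", np.allclose(Filt @ u, u))
```

    Precomputing `D` and `Filt` once and reusing them as matmuls is what makes the approach attractive on GPUs, where dense matrix multiplication is highly optimized.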

  6. Concentration dependence of biotransformation in fish liver S9: Optimizing substrate concentrations to estimate hepatic clearance for bioaccumulation assessment.

    Science.gov (United States)

    Lo, Justin C; Allard, Gayatri N; Otton, S Victoria; Campbell, David A; Gobas, Frank A P C

    2015-12-01

    In vitro bioassays to estimate biotransformation rate constants of contaminants in fish are currently being investigated to improve bioaccumulation assessments of hydrophobic contaminants. The present study investigates the relationship between chemical substrate concentration and in vitro biotransformation rate of 4 environmental contaminants (9-methylanthracene, pyrene, chrysene, and benzo[a]pyrene) in rainbow trout (Oncorhynchus mykiss) liver S9 fractions and methods to determine maximum first-order biotransformation rate constants. Substrate depletion experiments using a series of initial substrate concentrations showed that in vitro biotransformation rates exhibit strong concentration dependence, consistent with a Michaelis-Menten kinetic model. The results indicate that depletion rate constants measured at initial substrate concentrations of 1 μM (a current convention) could underestimate the in vitro biotransformation potential and may cause bioconcentration factors to be overestimated if in vitro biotransformation rates are used to assess bioconcentration factors in fish. Depletion rate constants measured using thin-film sorbent dosing experiments were not statistically different from the maximum depletion rate constants derived using a series of solvent delivery-based depletion experiments for 3 of the 4 test chemicals. Multiple solvent delivery-based depletion experiments at a range of initial concentrations are recommended for determining the concentration dependence of in vitro biotransformation rates in fish liver fractions, whereas a single sorbent phase dosing experiment may be able to provide reasonable approximations of maximum depletion rates of very hydrophobic substances. © 2015 SETAC.
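    The reported concentration dependence follows directly from Michaelis-Menten kinetics: for a substrate depletion dS/dt = −Vmax·S/(Km + S), the apparent first-order depletion rate constant is approximately Vmax/(Km + S0), so it falls as the initial concentration S0 rises and approaches its maximum, Vmax/Km, as S0 → 0. With purely hypothetical parameter values:

```python
# Michaelis-Menten depletion: dS/dt = -Vmax * S / (Km + S).
# Apparent first-order rate constant near the start of the assay:
#   k_dep ~ Vmax / (Km + S0)
# Parameter values are illustrative only (not from the study).
Vmax, Km = 2.0, 0.5          # e.g. uM/h and uM (hypothetical)

for S0 in (0.05, 0.5, 1.0, 5.0):
    k_dep = Vmax / (Km + S0)
    print(f"S0 = {S0:4.2f} uM -> apparent k_dep ~ {k_dep:.2f} 1/h")

# The maximum first-order rate constant is approached as S0 -> 0:
print(f"k_max = Vmax/Km = {Vmax / Km:.2f} 1/h")
```

    This is why depletion experiments at a fixed 1 μM can sit well above Km for some chemicals and thereby underestimate the low-concentration (maximum) biotransformation potential.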

  7. Diversity of endophytic bacteria of Dendrobium officinale based on culture-dependent and culture-independent methods

    Directory of Open Access Journals (Sweden)

    Cong Pei

    2017-01-01

    Full Text Available Culture-dependent and culture-independent methods were compared and evaluated in the study of the endophytic diversity of Dendrobium officinale. Culture-independent methods consisted of polymerase chain reaction-denaturing gradient gel electrophoresis (PCR-DGGE) and metagenome methods. According to the results, differences were found between the three methods. Three phyla, namely Firmicutes, Proteobacteria, and Actinobacteria, were detected using the culture-dependent method, and two phyla, Firmicutes and Proteobacteria, were detected by the DGGE method. Using the metagenome method, four major phyla were determined, including Proteobacteria (76.54%), Actinobacteria (18.56%), Firmicutes (2.27%), and Bacteroidetes (1.56%). A distinct trend was obtained at the genus level in terms of the method and the corresponding number of genera determined. There were 449 genera and 16 genera obtained from the metagenome and DGGE methods, respectively, and only 7 genera were obtained through the culture-dependent method. By comparison, all the genera from the culture-dependent and DGGE methods were contained in the members determined using the metagenome method. Overall, culture-dependent methods are limited in 'finding' endophytic bacteria in plants. DGGE is an alternative for investigating primary diversity patterns; however, the metagenome method is still the best choice for determining the endophytic profile in plants. It is essential to use multiphasic approaches to study cultured and uncultured microbes.

  8. Geomorphometry-based method of landform assessment for geodiversity

    Science.gov (United States)

    Najwer, Alicja; Zwoliński, Zbigniew

    2015-04-01

    Climate variability primarily induces variations in the intensity and frequency of surface processes and, consequently, principal changes in the landscape. As a result, abiotic heterogeneity may be threatened and key elements of the natural diversity may even decay. The concept of geodiversity was created recently and has rapidly gained the approval of scientists around the world. However, problem recognition is still at an early stage. Moreover, little progress has been made concerning its assessment and geovisualisation. Geographical Information System (GIS) tools currently provide wide possibilities for studies of the Earth's surface. Very often, the main limitation in such analyses is the acquisition of geodata at an appropriate resolution. The main objective of this study was to develop a proceeding algorithm for landform geodiversity assessment using geomorphometric parameters. Furthermore, the final maps were compared to those resulting from the thematic layers method. The study area consists of two peculiar valleys, characterized by diverse landscape units and a complex geological setting: the Sucha Woda in the Polish part of the Tatra Mts. and the Wrzosowka in the Sudetes Mts. Both valleys are located in national park areas. The basis for the assessment is a proper selection of geomorphometric parameters with reference to the definition of geodiversity. Seven factor maps were prepared for each valley: General Curvature, Topographic Openness, Potential Incoming Solar Radiation, Topographic Position Index, Topographic Wetness Index, Convergence Index and Relative Heights. After the data integration and performing the necessary geoinformation analysis, the next step, with a certain degree of subjectivity, is score classification of the input maps using an expert system and geostatistical analysis.
The crucial point to generate the final maps of geodiversity by multi-criteria evaluation (MCE) with GIS-based Weighted Sum technique is to assign appropriate weights for each factor map by
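    The weighted-sum step can be sketched on toy rasters; the factor names follow the abstract, while the grids and weights below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(6)

# GIS-style weighted-sum overlay: normalize each geomorphometric factor map to
# 0-1, multiply by an expert weight, and sum. Grids and weights are toy values.
shape = (50, 50)
factors = {
    "general_curvature": rng.normal(size=shape),
    "topographic_openness": rng.uniform(size=shape),
    "topographic_wetness_index": rng.normal(5.0, 2.0, size=shape),
    "relative_heights": rng.uniform(0.0, 300.0, size=shape),
}
weights = {
    "general_curvature": 0.2,
    "topographic_openness": 0.2,
    "topographic_wetness_index": 0.3,
    "relative_heights": 0.3,          # weights sum to 1
}

geodiversity = np.zeros(shape)
for name, grid in factors.items():
    norm = (grid - grid.min()) / (grid.max() - grid.min())   # min-max rescale
    geodiversity += weights[name] * norm

print("geodiversity score range:",
      round(float(geodiversity.min()), 3), "-",
      round(float(geodiversity.max()), 3))
```

    Because the weights sum to one and each normalized layer lies in [0, 1], the resulting score is also bounded in [0, 1] and can be classified into geodiversity categories.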

  9. A qualitative method proposal to improve environmental impact assessment

    Energy Technology Data Exchange (ETDEWEB)

    Toro, Javier, E-mail: jjtoroca@unal.edu.co [Institute of Environmental Studies, National University of Colombia at Bogotá (Colombia); Requena, Ignacio, E-mail: requena@decsai.ugr.es [Department of Computer Science and Artificial Intelligence, University of Granada (Spain); Duarte, Oscar, E-mail: ogduartev@unal.edu.co [National University of Colombia at Bogotá, Department of Electrical Engineering and Electronics (Colombia); Zamorano, Montserrat, E-mail: zamorano@ugr.es [Department of Civil Engineering, University of Granada (Spain)

    2013-11-15

    In environmental impact assessment, qualitative methods are used because they are versatile and easy to apply. This methodology is based on the evaluation of the strength of the impact by grading a series of qualitative attributes that can be manipulated by the evaluator. The results thus obtained are not objective, and all too often impacts are eliminated that should instead be mitigated with corrective measures. However, qualitative methodology can be improved if the calculation of Impact Importance is based on the characteristics of environmental factors and project activities instead of on indicators assessed by evaluators. In this sense, this paper proposes the inclusion of the vulnerability of environmental factors and the potential environmental impact of project activities. For this purpose, the study described in this paper defined Total Impact Importance and specified a quantification procedure. The results obtained in the case study of oil drilling in Colombia reflect greater objectivity in the evaluation of impacts as well as a positive correlation between impact values, the environmental characteristics at and near the project location, and the technical characteristics of project activities. -- Highlights: • The concept of vulnerability is used to calculate impact importance. • The paper defines Total Impact Importance and specifies a quantification procedure. • The method includes the characteristics of environmental factors and project activities. • The application has shown greater objectivity in the evaluation of impacts. • A better correlation between impact values, the environment and the project has been shown.

  10. Human reliability analysis methods for probabilistic safety assessment

    International Nuclear Information System (INIS)

    Pyy, P.

    2000-11-01

    Human reliability analysis (HRA) within a probabilistic safety assessment (PSA) includes identifying human actions from a safety point of view, modelling the most important of them in PSA models, and assessing their probabilities. As manifested by many incidents and studies, human actions may have both positive and negative effects on safety and economy. Human reliability analysis is one of the areas of probabilistic safety assessment (PSA) that has direct applications outside the nuclear industry. The thesis focuses upon developments in human reliability analysis methods and data. The aim is to support PSA by extending the applicability of HRA. The thesis consists of six publications and a summary. The summary includes general considerations and a discussion about human actions in the nuclear power plant (NPP) environment. A condensed discussion about the results of the attached publications is then given, including new developments in methods and data. At the end of the summary part, the contribution of the publications to good practice in HRA is presented. In the publications, studies based on the collection of data on maintenance-related failures, simulator runs and expert judgement are presented in order to extend the human reliability analysis database. Furthermore, methodological frameworks are presented to perform a comprehensive HRA, including shutdown conditions, to study reliability of decision making, and to study the effects of wrong human actions. In the last publication, an interdisciplinary approach to analysing human decision making is presented. The publications also include practical applications of the presented methodological frameworks. (orig.)

  11. Assessment of a novel method for teaching veterinary parasitology.

    Science.gov (United States)

    Pereira, Mary Mauldin; Yvorchuk-St Jean, Kathleen E; Wallace, Charles E; Krecek, Rosina C

    2014-01-01

    A student-centered innovative method of teaching veterinary parasitology was launched and evaluated at the Ross University School of Veterinary Medicine (RUSVM) in St. Kitts, where Parasitology is a required course for second-semester veterinary students. A novel method, named Iron Parasitology, compared lecturer-centered teaching with student-centered teaching and assessed the retention of parasitology knowledge of students in their second semester and again when they reached their seventh semester. Members of five consecutive classes chose to participate in Iron Parasitology with the opportunity to earn an additional 10 points toward their final grade by demonstrating their knowledge, communication skills, clarity of message, and creativity in the Iron Parasitology exercise. The participants and nonparticipants were assessed using seven parameters. The initial short-term study parameters used to evaluate lecturer- versus student-centered teaching were age, gender, final Parasitology course grade without Iron Parasitology, RUSVM overall grade point average (GPA), RUSVM second-semester GPA, overall GPA before RUSVM, and prerequisite GPA before RUSVM. The long-term reassessment study assessed retention of parasitology knowledge in members of the seventh-semester class who had Iron Parasitology as a tool in their second semester. These students were invited to complete a parasitology final examination during their seventh semester. There were no statistically significant differences for the parameters measured in the initial study. In addition, Iron Parasitology did not have an effect on the retention scores in the reassessment study.

  12. A qualitative method proposal to improve environmental impact assessment

    International Nuclear Information System (INIS)

    Toro, Javier; Requena, Ignacio; Duarte, Oscar; Zamorano, Montserrat

    2013-01-01

    In environmental impact assessment, qualitative methods are used because they are versatile and easy to apply. This methodology is based on the evaluation of the strength of the impact by grading a series of qualitative attributes that can be manipulated by the evaluator. The results thus obtained are not objective, and all too often impacts are eliminated that should be mitigated with corrective measures. However, qualitative methodology can be improved if the calculation of Impact Importance is based on the characteristics of environmental factors and project activities instead of on indicators assessed by evaluators. In this sense, this paper proposes the inclusion of the vulnerability of environmental factors and the potential environmental impact of project activities. For this purpose, the study described in this paper defined Total Impact Importance and specified a quantification procedure. The results obtained in the case study of oil drilling in Colombia reflect greater objectivity in the evaluation of impacts as well as a positive correlation between impact values, the environmental characteristics at and near the project location, and the technical characteristics of project activities. -- Highlights: • The concept of vulnerability is used in calculating impact importance. • The paper defines Total Impact Importance and specifies a quantification procedure. • The method includes the characteristics of environmental factors and project activities. • The application has shown greater objectivity in the evaluation of impacts. • A better correlation between impact values, the environment and the project has been shown.

  13. Survey and evaluation of aging risk assessment methods and applications

    International Nuclear Information System (INIS)

    Sanzo, D.; Kvam, P.; Apostolakis, G.; Wu, J.; Milici, T.; Ghoniem, N.; Guarro, S.

    1994-11-01

    The US Nuclear Regulatory Commission initiated the nuclear power plant aging research program about 6 years ago to gather information about nuclear power plant aging. Since then, this program has collected a significant amount of information, largely qualitative, on plant aging and its potential effects on plant safety. However, this body of knowledge has not yet been integrated into formalisms that can be used effectively and systematically to assess plant risk resulting from aging, although models for assessing the effect of increasing failure rates on core damage frequency have been proposed. This report surveys the work on the aging of systems, structures, and components (SSCs) of nuclear power plants, as well as associated databases. We take a critical look at the need to revise probabilistic risk assessments (PRAs) so that they will include the contribution to risk from plant aging, the adequacy of existing methods for evaluating this contribution, and the adequacy of the data that have been used in these evaluation methods. We identify a preliminary framework for integrating the aging of SSCs into the PRA and include the identification of the data necessary for such an integration.
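
The record mentions proposed models for the effect of increasing failure rates on risk. As a minimal illustration of that idea (not the report's own model), the sketch below folds a linearly aging failure rate into a steady-state unavailability estimate; all rates and times are invented:

```python
def unavailability(t, lam0, a, tau):
    """Steady-state unavailability q(t) ~= lam(t)*tau / (1 + lam(t)*tau)
    for a repairable component whose failure rate ages linearly:
    lam(t) = lam0 + a*t (per year), with mean repair time tau (years)."""
    lam = lam0 + a * t
    return lam * tau / (1.0 + lam * tau)

# Invented numbers: aging doubles the failure rate over 20 years.
q_new = unavailability(0.0, lam0=0.01, a=0.0005, tau=0.01)
q_old = unavailability(20.0, lam0=0.01, a=0.0005, tau=0.01)
print(f"unavailability: new {q_new:.2e}, after 20 y {q_old:.2e}")
```

A time-dependent unavailability of this kind is what would replace the constant basic-event probabilities when propagating aging through a PRA model.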

  14. A New Method for Spatial Health Risk Assessment of Pollutants

    Directory of Open Access Journals (Sweden)

    Mohamad Sakizadeh

    2017-03-01

    Full Text Available Background: The area of contaminated land exposed to the health risk of environmental pollutants is a matter of debate. In this study, a new method was developed to estimate the area exposed to higher-than-normal levels of Cr, Mn, and V. Methods: Overall, 170 soil samples were collected from the upper 10 cm of soil in an arid area of Semnan Province in central Iran. Cr, Mn, and V concentrations were determined by the ICP-OES technique. A geostatistical method known as sequential Gaussian co-simulation was applied to assess the spatial risk of these toxic elements. Results: The moderate spatial dependence of Cr indicates the contribution of both intrinsic and extrinsic factors to the levels of this heavy metal in the study area, whereas Mn and V levels can be attributed to intrinsic factors (such as lithology). Agricultural practices had no significant influence on Cr values in the region. The contaminated area for manganese, produced by the risk-curve-on-surface method, was larger than that for chromium and vanadium. Conclusion: The risk curves produced in this study can be adopted in similar studies to help managers estimate the total area that requires cleanup action.
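
The study's risk surfaces come from sequential Gaussian co-simulation. As a rough sketch of the general idea only, the snippet below derives per-cell exceedance probabilities and a contaminated-area curve from a stack of equiprobable realizations; the realizations, threshold, and cell area are synthetic placeholders, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for co-simulation output: 100 equiprobable
# realizations of Mn concentration (mg/kg) on a 50x50 grid.
realizations = rng.lognormal(mean=6.0, sigma=0.5, size=(100, 50, 50))

threshold = 600.0      # illustrative regulatory limit, not from the paper
cell_area_m2 = 100.0   # illustrative grid-cell area

# Per-cell probability of exceeding the threshold across realizations.
p_exceed = (realizations > threshold).mean(axis=0)

# "Risk curve": contaminated area as a function of the probability cutoff.
for cutoff in (0.2, 0.5, 0.8):
    area = (p_exceed >= cutoff).sum() * cell_area_m2
    print(f"P >= {cutoff:.1f}: {area:.0f} m^2 flagged for cleanup")
```

Lowering the probability cutoff is more conservative: it can only enlarge the area flagged for cleanup.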

  15. A self-adapting and altitude-dependent regularization method for atmospheric profile retrievals

    Directory of Open Access Journals (Sweden)

    M. Ridolfi

    2009-03-01

    Full Text Available MIPAS is a Fourier transform spectrometer, operating on board the ENVISAT satellite since July 2002. The online retrieval algorithm produces geolocated profiles of temperature and of volume mixing ratios of six key atmospheric constituents: H2O, O3, HNO3, CH4, N2O and NO2. In the validation phase, oscillations beyond the error bars were observed in several profiles, particularly in CH4 and N2O.

    To tackle this problem, a Tikhonov regularization scheme has been implemented in the retrieval algorithm. The applied regularization is however rather weak in order to preserve the vertical resolution of the profiles.

    In this paper we present a self-adapting and altitude-dependent regularization approach that detects whether the analyzed observations contain information about small-scale profile features, and determines the strength of the regularization accordingly. The objective of the method is to smooth out artificial oscillations as much as possible, while preserving the fine detail features of the profile when related information is detected in the observations.

    The proposed method is checked for self-consistency, and its performance is tested on MIPAS observations and compared with that of some other regularization schemes available in the literature. In all the considered cases the proposed scheme achieves a good performance, thanks to its altitude dependence and to the constraints employed, which are specific to the inversion problem under consideration. The proposed method is generally applicable to iterative Gauss-Newton algorithms for the retrieval of vertical distribution profiles from atmospheric remote sounding measurements.
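
A single Tikhonov-regularized Gauss-Newton update of the kind this record builds on can be sketched as follows. The forward model, noise covariance, and regularization strengths are toy values, and the paper's self-adapting behaviour is only hinted at by allowing a per-altitude strength vector:

```python
import numpy as np

def gauss_newton_step(x, y, K, Sy_inv, L, lam):
    """One Tikhonov-regularized Gauss-Newton update for a linearized
    retrieval y ~= K @ x.  L is a first-difference operator that
    penalizes vertical oscillations; lam may be a scalar or a
    per-altitude vector, mimicking an altitude-dependent strength."""
    lam_diag = np.diag(np.atleast_1d(lam) * np.ones(L.shape[0]))
    A = K.T @ Sy_inv @ K + L.T @ lam_diag @ L
    b = K.T @ Sy_inv @ (y - K @ x)
    return x + np.linalg.solve(A, b)

n = 20
L = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)   # first differences
K = np.eye(n)                                  # toy forward model
Sy_inv = np.eye(n)                             # unit noise covariance
rng = np.random.default_rng(1)
y = np.sin(np.linspace(0, np.pi, n)) + 0.1 * rng.standard_normal(n)

x_strong = gauss_newton_step(np.zeros(n), y, K, Sy_inv, L, lam=5.0)
x_weak = gauss_newton_step(np.zeros(n), y, K, Sy_inv, L, lam=0.1)
print("oscillation norm, strong vs weak:",
      np.linalg.norm(L @ x_strong), np.linalg.norm(L @ x_weak))
```

A stronger penalty yields a smoother retrieved profile, which is exactly the trade-off the altitude-dependent scheme tunes locally.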

  16. A fully automated temperature-dependent resistance measurement setup using van der Pauw method

    Science.gov (United States)

    Pandey, Shivendra Kumar; Manivannan, Anbarasu

    2018-03-01

    The van der Pauw (VDP) method is widely used to identify the resistance of planar homogeneous samples with four contacts placed on the periphery. We have developed a fully automated thin-film resistance measurement setup using the VDP method, capable of precisely measuring a wide range of thin-film resistances from a few mΩ up to 10 GΩ at controlled temperatures from room temperature up to 600 °C. The setup utilizes a robust, custom-designed switching network board (SNB) for measuring current-voltage characteristics automatically at four different source-measure configurations based on the VDP method. Moreover, the SNB is connected with low-noise shielded coaxial cables that reduce the effects of leakage current and circuit capacitance, thereby enhancing measurement accuracy. To enable precise and accurate resistance measurement, a wide range of sourcing currents/voltages is pre-determined, with auto-tuning over ~12 orders of magnitude of resistance. Furthermore, the setup has been calibrated with standard samples and employed to investigate temperature-dependent resistance (a few Ω to 10 GΩ) for various chalcogenide-based phase-change thin films (Ge2Sb2Te5, Ag5In5Sb60Te30, and In3SbTe2). This setup should be highly helpful for measuring the temperature-dependent resistance of a wide range of materials, i.e., metals, semiconductors, and insulators, revealing structural changes with temperature as reflected in resistance changes, which are useful for numerous applications.
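
The core van der Pauw relation, exp(-πR_A/R_s) + exp(-πR_B/R_s) = 1, has no closed form for unequal resistances and is typically solved numerically. A minimal bisection sketch (not the authors' implementation):

```python
import math

def sheet_resistance(r_a, r_b, tol=1e-12):
    """Solve the van der Pauw relation
        exp(-pi*r_a/Rs) + exp(-pi*r_b/Rs) = 1
    for the sheet resistance Rs (ohms/square) by bisection."""
    f = lambda rs: math.exp(-math.pi * r_a / rs) + math.exp(-math.pi * r_b / rs) - 1.0
    lo, hi = 1e-9, 1e3 * (r_a + r_b) + 1.0
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:   # f increases with rs, so shrink from above
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# For equal measured resistances the closed form Rs = pi*R/ln(2) applies.
rs_equal = sheet_resistance(10.0, 10.0)
rs_mixed = sheet_resistance(10.0, 20.0)
print(f"Rs (equal): {rs_equal:.4f} ohm/sq, Rs (mixed): {rs_mixed:.4f} ohm/sq")
```

The bulk resistivity then follows as ρ = R_s × d for a film of known thickness d.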

  17. Blood oxygen-level dependent functional assessment of cerebrovascular reactivity: Feasibility for intraoperative 3 Tesla MRI.

    Science.gov (United States)

    Fierstra, Jorn; Burkhardt, Jan-Karl; van Niftrik, Christiaan Hendrik Bas; Piccirelli, Marco; Pangalu, Athina; Kocian, Roman; Neidert, Marian Christoph; Valavanis, Antonios; Regli, Luca; Bozinov, Oliver

    2017-02-01

    To assess the feasibility of functional blood oxygen-level dependent (BOLD) MRI for evaluating intraoperative cerebrovascular reactivity (CVR) at 3 Tesla field strength. Ten consecutive neurosurgical subjects scheduled for a clinical intraoperative MRI examination were enrolled in this study. In addition to the clinical protocol, a BOLD sequence was implemented with three cycles of 44 s apnea to calculate CVR values on a voxel-by-voxel basis throughout the brain. The CVR range was then color-coded and superimposed on an anatomical volume to create high-spatial-resolution CVR maps. Ten subjects (mean age 34.8 ± 13.4; 2 females) underwent the intraoperative BOLD protocol uneventfully, with no complications occurring. Whole-brain CVR for all subjects was (mean ± SD) 0.69 ± 0.42, whereas CVR was markedly higher for tumor subjects than for vascular subjects: 0.81 ± 0.44 versus 0.33 ± 0.10, respectively. Furthermore, the color-coded functional maps could be robustly interpreted for a whole-brain assessment of CVR. We demonstrate that intraoperative BOLD MRI is feasible for creating functional maps to assess cerebrovascular reactivity throughout the brain in subjects undergoing a neurosurgical procedure. Magn Reson Med 77:806-813, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
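
CVR from breath-hold BOLD is commonly expressed as the percent signal change between apnea and baseline blocks. The sketch below illustrates that computation on synthetic data; the block timing, volume count, and signal amplitude are invented, not the study's acquisition parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 4-D BOLD series: 8x8x4 voxels, 120 volumes.
bold = 1000.0 + rng.standard_normal((8, 8, 4, 120))
apnea = np.zeros(120, dtype=bool)
for start in (20, 60, 100):       # three breath-hold cycles
    apnea[start:start + 15] = True
bold[..., apnea] += 8.0           # simulated hypercapnic signal rise

# Voxel-wise CVR as percent BOLD signal change, apnea vs. baseline.
baseline = bold[..., ~apnea].mean(axis=-1)
cvr = 100.0 * (bold[..., apnea].mean(axis=-1) - baseline) / baseline
print("mean CVR (% BOLD signal change):", round(cvr.mean(), 3))
```

The resulting voxel map is what gets color-coded and overlaid on the anatomical volume.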

  18. Methods for assessing NPP containment pressure boundary integrity

    International Nuclear Information System (INIS)

    Naus, D.J.; Ellingwood, B.R.; Graves, H.L.

    2004-01-01

    Research is being conducted to address aging of the containment pressure boundary in light-water reactor plants. Objectives of this research are to (1) understand the significant factors relating to corrosion occurrence, efficacy of inspection, and structural capacity reduction of steel containments and of liners of concrete containments; (2) provide the U.S. Nuclear Regulatory Commission (USNRC) reviewers a means of establishing current structural capacity margins or estimating future residual structural capacity margins for steel containments and concrete containments as limited by liner integrity; and (3) provide recommendations, as appropriate, on information to be requested of licensees for guidance that could be utilized by USNRC reviewers in assessing the seriousness of reported incidents of containment degradation. Activities include development of a degradation assessment methodology; reviews of techniques and methods for inspection and repair of containment metallic pressure boundaries; evaluation of candidate techniques for inspection of inaccessible regions of containment metallic pressure boundaries; establishment of a methodology for reliability-based condition assessments of steel containments and liners; and fragility assessments of steel containments with localized corrosion.

  19. Developing the RIAM method (rapid impact assessment matrix) in the context of impact significance assessment

    International Nuclear Information System (INIS)

    Ijaes, Asko; Kuitunen, Markku T.; Jalava, Kimmo

    2010-01-01

    In this paper the applicability of the RIAM method (rapid impact assessment matrix) is evaluated in the context of impact significance assessment. The methodological issues considered in the study are: 1) to test the possibilities of enlarging the scoring system used in the method, and 2) to compare the significance classifications of RIAM and unaided decision-making to estimate the consistency between these methods. The data used consisted of projects for which funding had been applied for via the European Union's Regional Development Trust in the area of Central Finland. Cases were evaluated with respect to their environmental, social and economic impacts using an assessment panel. The results showed that the scoring framework used in RIAM could be modified according to the problem situation at hand, which enhances its application potential. However, the changes made in criteria B did not significantly affect the final ratings of the method, which indicates the high importance of criteria A1 (importance) and A2 (magnitude) to the overall results. The significance classes obtained by the two methods diverged notably. In general, the ratings given by RIAM tended to be smaller than those of intuitive judgement, implying that the RIAM method may be somewhat conservative in character.
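
In Pastakia's published RIAM scoring, the environmental score is the product of the group-A criteria times the sum of the group-B criteria, which explains why B-side changes move the result so little. A small sketch, with criterion values invented for illustration:

```python
def riam_es(a1, a2, b1, b2, b3):
    """RIAM environmental score: the product of the group-A criteria
    (importance a1, magnitude a2) times the sum of the group-B criteria
    (permanence, reversibility, cumulativeness), following Pastakia's
    original scoring scheme."""
    return (a1 * a2) * (b1 + b2 + b3)

# A one-step change in a B criterion shifts ES far less than a
# one-step change in A2 -- the sensitivity the study observed.
base = riam_es(2, -2, 2, 2, 2)
shift_b = riam_es(2, -2, 3, 2, 2) - base   # B1: 2 -> 3
shift_a = riam_es(2, -3, 2, 2, 2) - base   # A2: -2 -> -3
print(f"base ES {base}, B-shift {shift_b}, A-shift {shift_a}")
```

Because the A criteria multiply, their range dominates the score, which is consistent with the paper's finding that modifying criteria B barely changed the ratings.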

  20. Antioxidant activity of wine assessed by different in vitro methods

    Directory of Open Access Journals (Sweden)

    Di Lorenzo Chiara

    2017-01-01

    Full Text Available Epidemiological studies have suggested that a diet rich in antioxidant compounds could help counteract the effects of reactive oxygen species, reducing the risk factors for chronic diseases. Moderate consumption of wine, especially red wine, has been associated with reduced mortality from cardiovascular diseases. One possible reason for the protective effect of wine is its high content of polyphenols (mainly flavonoids), which have significant antioxidant activity. Even though several in vitro tests have been developed to measure antioxidant properties, no method has shown a satisfactory correlation with the in vivo situation. On this basis, the aim of this study was the application and comparison of different in vitro methods to assess the antioxidant activity of red, rosé and white wines. The methods were: (1) the Folin–Ciocalteu assay for quantification of total polyphenol content; (2) the DPPH (1,1-diphenyl-2-picrylhydrazyl) and Trolox Equivalent Antioxidant Capacity (TEAC) spectrophotometric assays for measuring the antioxidant activity of samples; (3) high-performance thin-layer chromatography for separation of phenolic substances and assessment of the associated antioxidant activity; and (4) electrochemical detection using a biosensor. Although all the approaches have some limitations, this battery of tests offers a more reliable body of data on the antioxidant activity of vine derivatives.
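
DPPH results are conventionally reported as percent radical scavenging relative to a blank, from absorbances read at 517 nm. A minimal sketch with invented absorbance values (not data from the paper):

```python
def dpph_inhibition(a_control, a_sample):
    """Percent DPPH radical scavenging from absorbances at 517 nm,
    using the standard (A_control - A_sample) / A_control * 100 form."""
    return 100.0 * (a_control - a_sample) / a_control

# Illustrative absorbances only: more scavenging -> lower absorbance.
control = 0.90
samples = {"red": 0.22, "rose": 0.48, "white": 0.61}
for name, a in samples.items():
    print(f"{name} wine: {dpph_inhibition(control, a):.1f} % inhibition")
```

Antioxidant capacity is then often expressed in Trolox equivalents by reading the same inhibition off a Trolox standard curve.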

  1. Electromechanical impedance method to assess dental implant stability

    International Nuclear Information System (INIS)

    Tabrizi, Aydin; Rizzo, Piervincenzo; Ochs, Mark W

    2012-01-01

    The stability of a dental implant is a prerequisite for supporting a load-bearing prosthesis and establishment of a functional bone–implant system. Reliable and noninvasive methods able to assess the bone interface of dental and orthopedic implants (osseointegration) are increasingly demanded for clinical diagnosis and direct prognosis. In this paper, we propose the electromechanical impedance method as a novel approach for the assessment of dental implant stability. Nobel Biocare® implants (4.3 mm diameter × 13 mm length) were placed inside bovine bones that were then immersed in a solution of nitric acid to allow material degradation. The degradation simulated the inverse process of bone healing. The implant–bone systems were monitored by bonding a piezoceramic transducer (PZT) to the implants’ abutment and measuring the admittance of the PZT over time. It was found that the PZT’s admittance and the statistical features associated with its analysis are sensitive to the degradation of the bones and can be correlated to the loss of calcium measured by means of the atomic absorption spectroscopy method. The present study shows promising results and may pave the road towards an innovative approach for the noninvasive monitoring of dental implant stability and integrity. (paper)
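
Statistical features of the admittance signature, such as the root-mean-square deviation (RMSD) from a healthy baseline, are a common way of tracking degradation in electromechanical impedance monitoring. The record does not name its exact features, so the sketch below uses the generic RMSD index on synthetic signatures:

```python
import numpy as np

def rmsd_index(baseline, current):
    """Root-mean-square-deviation damage metric often used with the
    electromechanical impedance method: deviation of the current PZT
    admittance signature from a healthy baseline, in percent."""
    baseline = np.asarray(baseline, dtype=float)
    current = np.asarray(current, dtype=float)
    return 100.0 * np.sqrt(((current - baseline) ** 2).sum() / (baseline ** 2).sum())

freq = np.linspace(10e3, 100e3, 500)             # Hz, illustrative sweep
healthy = 1e-3 * (1 + 0.2 * np.sin(freq / 5e3))  # synthetic admittance (S)
degraded = healthy * 1.05                        # uniform 5 % shift
print(f"RMSD index: {rmsd_index(healthy, degraded):.2f} %")
```

A growing RMSD index over repeated measurements would then be correlated with the physical degradation measure (here, calcium loss).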

  2. Total System Performance Assessment - License Application Methods and Approach

    Energy Technology Data Exchange (ETDEWEB)

    J. McNeish

    2003-12-08

    "Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach" provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the "Yucca Mountain Review Plan" (YMRP), "Final Report" (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document.

  3. The commission errors search and assessment (CESA) method

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B.; Dang, V. N

    2007-05-15

    Errors of Commission (EOCs) refer to the performance of inappropriate actions that aggravate a situation. In Probabilistic Safety Assessment (PSA) terms, they are human failure events that result from the performance of an action. This report presents the Commission Errors Search and Assessment (CESA) method and describes the method in the form of user guidance. The purpose of the method is to identify risk-significant situations with a potential for EOCs in a predictive analysis. The main idea underlying the CESA method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. The catalog of required actions provides a basis for a systematic search of context-action combinations. To focus the search towards risk-significant scenarios, the actions that are examined in the CESA search are prioritized according to the importance of the systems and functions that are affected by these actions. The existing PSA provides this importance information; the Risk Achievement Worth or Risk Increase Factor values indicate the systems/functions for which an EOC contribution would be more significant. In addition, the contexts, i.e. PSA scenarios, for which the EOC opportunities are reviewed are also prioritized according to their importance (top sequences or cut sets). The search through these context-action combinations results in a set of EOC situations to be examined in detail. CESA has been applied in a plant-specific pilot study, which showed the method to be feasible and effective in identifying plausible EOC opportunities. This experience, as well as the experience with other EOC analyses, showed that the quantification of EOCs remains an issue. The quantification difficulties and the outlook for their resolution conclude the report. (author)

  4. The commission errors search and assessment (CESA) method

    International Nuclear Information System (INIS)

    Reer, B.; Dang, V. N.

    2007-05-01

    Errors of Commission (EOCs) refer to the performance of inappropriate actions that aggravate a situation. In Probabilistic Safety Assessment (PSA) terms, they are human failure events that result from the performance of an action. This report presents the Commission Errors Search and Assessment (CESA) method and describes the method in the form of user guidance. The purpose of the method is to identify risk-significant situations with a potential for EOCs in a predictive analysis. The main idea underlying the CESA method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. The catalog of required actions provides a basis for a systematic search of context-action combinations. To focus the search towards risk-significant scenarios, the actions that are examined in the CESA search are prioritized according to the importance of the systems and functions that are affected by these actions. The existing PSA provides this importance information; the Risk Achievement Worth or Risk Increase Factor values indicate the systems/functions for which an EOC contribution would be more significant. In addition, the contexts, i.e. PSA scenarios, for which the EOC opportunities are reviewed are also prioritized according to their importance (top sequences or cut sets). The search through these context-action combinations results in a set of EOC situations to be examined in detail. CESA has been applied in a plant-specific pilot study, which showed the method to be feasible and effective in identifying plausible EOC opportunities. This experience, as well as the experience with other EOC analyses, showed that the quantification of EOCs remains an issue. The quantification difficulties and the outlook for their resolution conclude the report. (author)
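
The prioritization step of the CESA search, ranking candidate actions by the importance (e.g. Risk Achievement Worth) of the systems they affect, can be sketched as below; the systems, actions, and RAW values are invented for illustration, not taken from the pilot study:

```python
# Hypothetical RAW values taken from an existing PSA.
raw = {"HPI": 12.4, "AFW": 8.9, "CCW": 2.1, "RHR": 1.3}

# Candidate procedural actions and the system each one affects.
actions = [
    ("isolate HPI train", "HPI"),
    ("stop AFW pump", "AFW"),
    ("throttle CCW flow", "CCW"),
    ("secure RHR pump", "RHR"),
]

# Highest-RAW systems first: an erroneous action there matters most,
# so those context-action combinations are examined first.
ranked = sorted(actions, key=lambda item: raw[item[1]], reverse=True)
for act, system in ranked:
    print(f"{act:22s} (system {system}, RAW {raw[system]:.1f})")
```

The same prioritization is then applied to the contexts (top sequences or cut sets), so the detailed EOC review starts with the most risk-significant combinations.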

  5. Methods for assessing autophagy and autophagic cell death.

    Science.gov (United States)

    Tasdemir, Ezgi; Galluzzi, Lorenzo; Maiuri, M Chiara; Criollo, Alfredo; Vitale, Ilio; Hangen, Emilie; Modjtahedi, Nazanine; Kroemer, Guido

    2008-01-01

    Autophagic (or type 2) cell death is characterized by the massive accumulation of autophagic vacuoles (autophagosomes) in the cytoplasm of cells that lack signs of apoptosis (type 1 cell death). Here we detail and critically assess a series of methods to promote and inhibit autophagy via pharmacological and genetic manipulations. We also review the techniques currently available to detect autophagy, including transmission electron microscopy, half-life assessments of long-lived proteins, detection of LC3 maturation/aggregation, fluorescence microscopy, and colocalization of mitochondrion- or endoplasmic reticulum-specific markers with lysosomal proteins. Massive autophagic vacuolization may cause cellular stress and represent a frustrated attempt of adaptation. In this case, cell death occurs with (or in spite of) autophagy. When cell death occurs through autophagy, on the contrary, the inhibition of the autophagic process should prevent cellular demise. Accordingly, we describe a strategy for discriminating cell death with autophagy from cell death through autophagy.

  6. Cognitive assessment in mathematics with the least squares distance method.

    Science.gov (United States)

    Ma, Lin; Çetin, Emre; Green, Kathy E

    2012-01-01

    This study investigated the validation of comprehensive cognitive attributes of an eighth-grade mathematics test using the least squares distance method and compared performance on attributes by gender and region. A sample of 5,000 students was randomly selected from the data of the 2005 Turkish national mathematics assessment of eighth-grade students. Twenty-five math items were assessed for presence or absence of 20 cognitive attributes (content, cognitive processes, and skill). Four attributes were found to be misspecified or nonpredictive. However, results demonstrated the validity of cognitive attributes in terms of the revised set of 17 attributes. Girls performed similarly to boys on the attributes. Students from the two eastern regions significantly underperformed on most attributes.

  7. Screening-Level Ecological Risk Assessment Methods, Revision 3

    Energy Technology Data Exchange (ETDEWEB)

    Mirenda, Richard J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2012-08-16

    This document provides guidance for screening-level assessments of potential adverse impacts to ecological resources from release of environmental contaminants at the Los Alamos National Laboratory (LANL or the Laboratory). The methods presented are based on two objectives, namely: to provide a basis for reaching consensus with regulators, managers, and other interested parties on how to conduct screening-level ecological risk investigations at the Laboratory; and to provide guidance for ecological risk assessors under the Environmental Programs (EP) Directorate. This guidance promotes consistency, rigor, and defensibility in ecological screening investigations and in reporting those investigation results. The purpose of the screening assessment is to provide information to the risk managers so informed risk-management decisions can be made. This document provides examples of recommendations and possible risk-management strategies.
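
Screening-level ecological assessments typically compare site concentrations against ecological screening levels (ESLs) via hazard quotients, carrying an analyte forward when HQ ≥ 1. A minimal sketch with invented concentrations and screening levels (not LANL's values):

```python
# Hypothetical ecological screening levels (mg/kg soil).
esl = {"Cd": 0.4, "Pb": 11.0, "Zn": 46.0}
# Hypothetical maximum detected site concentrations (mg/kg soil).
site = {"Cd": 1.2, "Pb": 9.5, "Zn": 120.0}

# Hazard quotient: HQ = site concentration / screening level.
hq = {analyte: site[analyte] / esl[analyte] for analyte in site}
for analyte, q in hq.items():
    flag = "carry forward" if q >= 1.0 else "screen out"
    print(f"{analyte}: HQ = {q:.2f} -> {flag}")
```

Analytes screened out here would require no further evaluation, while those carried forward would move to a more detailed (baseline) risk assessment.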

  8. Assessing Education Needs at Tertiary Level: The Focus Group Method

    Directory of Open Access Journals (Sweden)

    Elena-Mirela Samfira

    2015-10-01

    Full Text Available The goal of the paper is to point out the advantages and disadvantages of the focus group method in assessing the education needs of teachers and students in veterinary medicine. It is the first stage of a wider research project aiming at developing problem-based teaching and learning methodologies in the field of veterinary medicine. The materials used consisted of literature documents on the focus group as a research method in the social sciences. The authors studied the literature available in the field and synthesised its main advantages and disadvantages. The paper is the first of this kind in Romania. Results show that there is no agreement yet on the advantages and disadvantages of this method. The research limitation is that there is almost no Romanian literature on the focus group as a method. The usefulness of the paper is obvious: it allows other researchers in the field of education to see the benefits of using such a research method. The originality of the paper consists in the fact that there has been no such research so far in Romanian higher education. Based on the results of the focus groups organised, the authors will design and implement a problem-based learning methodology for the students in veterinary medicine.

  9. Assessment of disease burden among army personnel and dependents in Lucknow city

    Directory of Open Access Journals (Sweden)

    Anil Ahuja

    2015-01-01

    Full Text Available Introduction: Oral health is a valuable asset for an individual. The oral cavity has a significant role to play in providing a satisfactory lifestyle, including proper mastication, phonetics, esthetics, appearance, communication abilities and overall emotional well-being. Very few studies have been carried out in the past on the disease burden of army personnel and their dependents. Materials and Methods: This study was carried out on 2160 army personnel and their dependents reporting to the Command Military Dental Center, Lucknow. The study population was screened for caries, periodontal status, prosthetic status and treatment need, oral hygiene practice and prevalence of the tobacco habit. All relevant information was recorded on a proforma. Statistical analysis was performed using SPSS version 16.0 (SPSS Inc., Chicago, IL, USA). The results are presented as percentages and means (± standard deviation). The unpaired t-test and Chi-square test were used. P < 0.05 was considered significant. Results: Oral hygiene awareness was adequate among serving personnel and dependents, and oral hygiene practices were also adequate. A higher prevalence of the tobacco habit was found among young army personnel than among older personnel. There was a significant association between smoking and periodontal disease. Leukoplakia was the most common oral mucosal lesion among smokers. Conclusion: This study will help to assess dental disease occurrence rates, evaluate treatment needs and formulate a plan for augmentation of resources. The study will also create awareness about oral hygiene practices and oral habits among army personnel and their dependents.

  10. Dynamic Assessment in Iranian EFL Classrooms: A Post-method Enquiry

    Directory of Open Access Journals (Sweden)

    Seyed Javad Es-hagi Sardrood

    2011-11-01

    Full Text Available Derived from the emerging paradigm shift in English language teaching and assessment, there has been renewed interest in dynamic assessment (DA) as an alternative to traditional static testing in language classrooms. However, to date, DA practice has been mostly limited to clinical treatments of children with learning disabilities, and it has not been widely incorporated into EFL contexts. In order to find out the reasons behind the slow uptake of DA practice, this research adopted a framework, based on post-method pedagogical principles and recommendations, to delve into the prospect of methodological realization of DA approaches in Iranian EFL classrooms. To this end, two instruments, a questionnaire and an interview, were developed to explore the practicality of DA by seeking 51 Iranian EFL teachers' perceptions of DA practice in their classrooms. The results indicated that most of the teachers were negative about the practice of DA in their classrooms and believed that a full-fledged implementation of DA in Iranian EFL classrooms is too demanding. The feasibility of DA in Iranian EFL classrooms, where teachers are deprived of DA training, guidelines, and technological resources, is seriously questioned due to factors such as the time-constrained nature of DA procedures, the large number of students in EFL classrooms, the common practice of static tests as the mainstream, and overreliance on teachers' teaching and assessment abilities. The paper suggests that the framework of inquiry in this study, which was derived from post-method pedagogy, be utilized as a blueprint for a critical appraisal of any alternative method or theory introduced into ELT contexts.

  11. Sequential sampling: a novel method in farm animal welfare assessment.

    Science.gov (United States)

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall
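    The 'basic' two-stage scheme described above can be sketched in a short simulation. Note that the herd composition, sample sizes, threshold and early-stopping margin below are illustrative assumptions for the sketch, not the paper's actual decision rules (which are derived within a diagnostic-testing framework):

```python
import random

def two_stage_assessment(herd, first_n, threshold, rng):
    """Hypothetical two-stage ('basic') sequential scheme: sample half
    the full sample size, stop early if the observed lameness prevalence
    is clearly below or above the pass/fail threshold, otherwise sample
    the same number of animals again and decide on the combined sample."""
    animals = rng.sample(herd, 2 * first_n)  # without replacement
    first = animals[:first_n]
    p1 = sum(first) / first_n
    margin = 0.1  # illustrative early-stopping margin
    if p1 < threshold - margin:
        return "pass", first_n
    if p1 > threshold + margin:
        return "fail", first_n
    p2 = sum(animals) / (2 * first_n)
    return ("fail" if p2 > threshold else "pass"), 2 * first_n

rng = random.Random(42)
herd = [1] * 20 + [0] * 180  # simulated herd of 200 cows, 10% truly lame
decision, n_sampled = two_stage_assessment(herd, first_n=30, threshold=0.20, rng=rng)
print(decision, n_sampled)
```

    Running many such simulated assessments against farms of known prevalence is how the schemes' accuracy and average sample size can be compared, as the study does with 100 000 runs per scheme.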

  12. A transaction assessment method for allocation of transmission services

    Science.gov (United States)

    Banunarayanan, Venkatasubramaniam

    The purpose of this research is to develop transaction assessment methods for allocating transmission services that are provided by an area/utility to power transactions. Transmission services are the services needed to deliver, or provide the capacity to deliver, real and reactive power from one or more supply points to one or more delivery points. As the number of transactions increases rapidly in the emerging deregulated environment, accurate quantification of the transmission services an area/utility provides to accommodate a transaction is becoming important, because appropriate pricing schemes can then be developed to compensate the parties that provide these services. The allocation methods developed are based on the "Fair Resource Allocation Principle" and determine, for each transaction, the following: the flowpath of the transaction (both real and reactive power components), the generator reactive power support from each area/utility, and the real power loss support from each area/utility. Further, allocation methods for distributing the cost of relieving congestion on transmission lines caused by transactions are also developed. The main feature of the proposed methods is representation of the actual usage of transmission services by the transactions. The proposed methods are tested extensively on a variety of systems. The allocation methods developed in this thesis are not only useful for studying the impact of transactions on a transmission system in a multi-transaction case, but are indeed necessary to meet the criteria set forth by FERC with regard to pricing based on actual usage. The "consistency" of the proposed allocation methods has also been investigated and tested.

  13. Systematic evaluation of observational methods assessing biomechanical exposures at work

    DEFF Research Database (Denmark)

    Takala, Esa-Pekka; Pehkonen, Irmeli; Forsman, Mikael

    2010-01-01

    the use of technical instruments. Generally, the observations showed moderate to good agreement with the corresponding assessments made from video recordings; agreement was the best for large-scale body postures and work actions. Postures of wrist and hand as well as trunk rotation seemed to be more...... difficult to observe correctly. Intra- and inter-observer repeatability were reported for 7 and 17 methods, respectively, and were judged mostly to be good or moderate. CONCLUSIONS: With training, observers can reach consistent results on clearly visible body postures and work activities. Many observational...

  14. A comparison of methods of assessment of scintigraphic colon transit.

    Science.gov (United States)

    Freedman, Patricia Noel; Goldberg, Paul A; Fataar, Abdul Basier; Mann, Michael M

    2006-06-01

    There is no standard method of analysis for scintigraphic colonic transit investigations. This study was designed to compare 4 techniques. Sixteen subjects (median age, 37.5 y; range, 21-61 y), who had sustained a spinal cord injury more than a year before the study, were given a pancake labeled with 10-18 MBq of (111)In bound to resin beads to eat. Anterior and posterior images were acquired with a gamma-camera 3 h after the meal and then 3 times a day for the next 4 d. Seven regions of interest, outlining the ascending colon, hepatic flexure, transverse colon, splenic flexure, descending colon, rectosigmoid, and total abdominal activity at each time point, were drawn on the anterior and posterior images. The counts were decay corrected and the geometric mean (GM) for each region at each time point was calculated. The GM was used to calculate the percentage of the initial total abdominal activity in each region at each time point. Colonic transit was assessed in 4 ways: (a) Three independent nuclear medicine physicians visually assessed transit on the analog images and classified subjects into 5 categories of colonic transit (rapid, intermediate, generalized delay, right-sided delay, or left-sided delay). (b) Parametric images were constructed from the percentage activity in each region at each time point. (c) The arrival and clearance times of the activity in the right and left colon were plotted as time-activity curves. (d) The geometric center of the distribution of the activity was calculated and plotted on a graph versus time. The results of these 4 methods were compared using an agreement matrix. Though simple to perform, the visual assessment was unreliable. The best agreement occurred between the parametric images and the arrival and clearance times of the activity in the right and left colon. The different methods of assessment do not produce uniform results. The best option for evaluating colonic transit appears to be a combination of the analog images
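    The geometric-mean and geometric-center quantities described above can be illustrated in a few lines. The counts and region fractions below are hypothetical, and the 111In half-life is an approximate value used only for this sketch:

```python
import math

IN111_HALF_LIFE_H = 67.3  # approximate 111In half-life in hours

def decay_corrected_gm(anterior, posterior, t_hours):
    """Geometric mean of conjugate (anterior/posterior) counts,
    decay-corrected back to the administration time."""
    gm = math.sqrt(anterior * posterior)
    return gm * 2 ** (t_hours / IN111_HALF_LIFE_H)

def geometric_center(region_fractions):
    """Activity-weighted mean region index (1 = ascending colon, ...),
    given the percentage of total abdominal activity in each region."""
    total = sum(region_fractions)
    return sum(i * f for i, f in enumerate(region_fractions, start=1)) / total

# Hypothetical counts from one region at t = 3 h after the meal.
gm = decay_corrected_gm(anterior=1200.0, posterior=980.0, t_hours=3.0)
# Hypothetical activity distribution over 6 colonic regions.
gc = geometric_center([40.0, 25.0, 15.0, 10.0, 7.0, 3.0])
print(round(gm, 1), round(gc, 2))
```

    A geometric center near 1 indicates activity still in the ascending colon, while larger values track progression toward the rectosigmoid over time.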

  15. Assessment of nucleonic methods and data for fusion reactors

    International Nuclear Information System (INIS)

    Dudziak, D.J.

    1976-01-01

    An assessment is provided of nucleonic methods, codes, and data necessary for a sound experimental fusion power reactor (EPR) technology base. Gaps in the base are identified and specific development recommendations are made in three areas: computational tools, nuclear data, and integral experiments. The current status of the first two areas is found to be sufficiently inadequate that viable engineering design of an EPR is precluded at this time. However, a program to provide the necessary data and computational capability is judged to be a low-risk effort

  16. Assessment of plutonium security effect using import premium method

    International Nuclear Information System (INIS)

    Ohkubo, Hiroo; Aoyagi, Tadashi; Kikuchi, Masahiro; Suzuki, Atsuyuki.

    1994-01-01

    A mathematical formulation was developed to describe the concept of the import premium method, which can quantify the security effect of reducing demand for imports by introducing an alternative before a supply disruption (or variation) may happen. Next, by using this formula, the security value of plutonium use (especially, the fast breeder reactor), defined as its contribution to reducing the possibility of disruption (or variation) of the natural uranium supply, was estimated. From these studies, it is concluded that although the formula proposed here is simplified, it may be useful for assessing energy security, provided that data concerning future movements of the supply and demand curves are prepared. (author)

  17. Internal dosimetry hazard and risk assessments: methods and applications

    International Nuclear Information System (INIS)

    Roberts, G.A.

    2006-01-01

    Routine internal dose exposures are typically (in the UK nuclear industry) less than external dose exposures: however, the costs of internal dosimetry monitoring programmes can be significantly greater than those for external dosimetry. For this reason decisions on when to apply routine monitoring programmes, and the nature of these programmes, can be more critical than for external dosimetry programmes. This paper describes various methods for performing hazard and risk assessments which are being developed by RWE NUKEM Limited Approved Dosimetry Services to provide an indication when routine internal dosimetry monitoring should be considered. (author)

  18. Improved time-dependent harmonic oscillator method for vibrationally inelastic collisions

    International Nuclear Information System (INIS)

    DePristo, A.E.

    1985-01-01

    A quantal solution to vibrationally inelastic collisions is presented based upon a linear expansion of the interaction potential around the time-dependent classical positions of all translational and vibrational degrees of freedom. The full time-dependent wave function is a product of a Gaussian translational wave packet and a multidimensional harmonic oscillator wave function, both centered around the appropriate classical position variables. The computational requirements are small since the initial vibrational coordinates are the equilibrium values in the classical trajectory (i.e., phase space sampling does not occur). Different choices of the initial width of the translational wave packet and the initial classical translational momenta are possible, and two combinations are investigated. The first involves setting the initial classical momenta equal to the quantal expectation value, and varying the width to satisfy normalization of the transition probability matrix. The second involves adjusting the initial classical momenta to ensure detailed balancing for each set of transitions, i→f and f→i, and varying the width to satisfy normalization. This choice illustrates the origin of the empirical correction of using the arithmetic average momenta as the initial classical momenta in the forced oscillator approximation. Both methods are tested for the collinear collision systems CO2-(He, Ne), and are found to be accurate except for near-resonant vibration-vibration exchange at low initial kinetic energies.

  19. A variable-order time-dependent neutron transport method for nuclear reactor kinetics using analytically-integrated space-time characteristics

    International Nuclear Information System (INIS)

    Hoffman, A. J.; Lee, J. C.

    2013-01-01

    A new time-dependent neutron transport method based on the method of characteristics (MOC) has been developed. Whereas most spatial kinetics methods treat time dependence through temporal discretization, this new method treats time dependence by defining the characteristics to span space and time. In this implementation regions are defined in space-time where the thickness of the region in time fulfills an analogous role to the time step in discretized methods. The time dependence of the local source is approximated using a truncated Taylor series expansion with high order derivatives approximated using backward differences, permitting the solution of the resulting space-time characteristic equation. To avoid a drastic increase in computational expense and memory requirements due to solving many discrete characteristics in the space-time planes, the temporal variation of the boundary source is similarly approximated. This allows the characteristics in the space-time plane to be represented analytically rather than discretely, resulting in an algorithm comparable in implementation and expense to one that arises from conventional time integration techniques. Furthermore, by defining the boundary flux time derivative in terms of the preceding local source time derivative and boundary flux time derivative, the need to store angularly-dependent data is avoided without approximating the angular dependence of the angular flux time derivative. The accuracy of this method is assessed through implementation in the neutron transport code DeCART. The method is employed with variable-order local source representation to model a TWIGL transient. The results demonstrate that this method is accurate and more efficient than the discretized method. (authors)
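    The truncated-Taylor treatment of the local source can be illustrated in isolation: with second-order backward differences supplying the derivatives, a quadratic source is extrapolated exactly. The source q(t) and step sizes below are invented for the sketch, which is not the DeCART implementation itself:

```python
def taylor_extrapolate(history, dt, tau):
    """Extrapolate a time series forward by tau using a truncated Taylor
    series whose derivatives come from backward differences of the last
    three equally spaced samples (most recent last)."""
    f0 = history[-1]
    # Second-order backward difference for the first derivative.
    d1 = (3 * history[-1] - 4 * history[-2] + history[-3]) / (2 * dt)
    # Backward difference for the second derivative.
    d2 = (history[-1] - 2 * history[-2] + history[-3]) / dt ** 2
    return f0 + tau * d1 + 0.5 * tau ** 2 * d2

# Invented quadratic source: reproduced exactly by the truncated expansion,
# since all higher derivatives vanish. Exact value: q(0.25) = 1.6875.
q = lambda t: 1 + 2 * t + 3 * t * t
hist = [q(0.0), q(0.1), q(0.2)]
val = taylor_extrapolate(hist, dt=0.1, tau=0.05)
print(round(val, 6))
```

    In the paper's scheme an analogous expansion is applied to both the local source and the boundary source, which is what lets the space-time characteristics be integrated analytically.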

  20. Method for assessing reliability of a network considering probabilistic safety assessment

    International Nuclear Information System (INIS)

    Cepin, M.

    2005-01-01

    A method for assessment of the reliability of a network is developed, which uses the features of fault tree analysis. The method is developed in such a way that growth of the network under consideration does not require a significant increase in the model. The method is applied to small example networks consisting of a small number of nodes and a small number of connections. The results give the network reliability. They identify equipment which is to be carefully maintained so that the network reliability is not reduced, and equipment which is a candidate for redundancy, as this would improve network reliability significantly. (author)
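    As a toy illustration of the fault-tree flavour of such a model, the sketch below computes exact network reliability from minimal cut sets by enumerating component states. The network, components and failure probabilities are invented for the example:

```python
from itertools import product

def network_reliability(components, cut_sets):
    """Exact reliability by state enumeration: the network fails when
    every component in at least one minimal cut set is down.
    components: {name: failure_probability}; cut_sets: list of name sets."""
    names = list(components)
    reliability = 0.0
    for states in product([True, False], repeat=len(names)):  # True = working
        up = {n for n, s in zip(names, states) if s}
        prob = 1.0
        for n, s in zip(names, states):
            prob *= (1 - components[n]) if s else components[n]
        failed = any(cs.isdisjoint(up) for cs in cut_sets)
        if not failed:
            reliability += prob
    return reliability

# Hypothetical network with redundant links A, B and a shared node C:
# it fails if C fails, or if both A and B fail.
comps = {"A": 0.1, "B": 0.1, "C": 0.01}
rel = network_reliability(comps, [{"C"}, {"A", "B"}])
print(round(rel, 4))
```

    Enumeration is exponential in the number of components; the appeal of the fault-tree formulation in the paper is precisely that the model does not grow drastically as the network does.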

  1. An implicit fast Fourier transform method for integration of the time dependent Schrodinger or diffusion equation

    International Nuclear Information System (INIS)

    Ritchie, A.B.; Riley, M.E.

    1997-06-01

    The authors have found that the conventional exponentiated split operator procedure is subject to difficulties in energy conservation when solving the time-dependent Schrodinger equation for Coulombic systems. By rearranging the kinetic and potential energy terms in the temporal propagator of the finite difference equations, one can find a propagation algorithm for three dimensions that looks much like the Crank-Nicolson and alternating direction implicit methods for one- and two-space-dimensional partial differential equations. They report comparisons of this novel implicit split operator procedure with the conventional exponentiated split operator procedure on hydrogen atom solutions. The results look promising for a purely numerical approach to certain electron quantum mechanical problems.
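    The Crank-Nicolson scheme that the implicit propagator resembles can be shown on the simplest possible case, the 1D diffusion equation with zero boundary values (this is a generic illustration, not the authors' 3D Schrodinger solver; grid size and time step are arbitrary):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main, c: super-diagonal)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(u, r):
    """One Crank-Nicolson step for u_t = D*u_xx with r = D*dt/dx^2,
    implicit on the left-hand side, explicit on the right."""
    n = len(u)
    a = [-r / 2] * n
    b = [1 + r] * n
    c = [-r / 2] * n
    d = [0.0] * n
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0   # zero Dirichlet boundary
        right = u[i + 1] if i < n - 1 else 0.0
        d[i] = (r / 2) * left + (1 - r) * u[i] + (r / 2) * right
    return thomas(a, b, c, d)

u = [0.0] * 21
u[10] = 1.0  # initial spike of unit mass at the center
for _ in range(50):
    u = crank_nicolson_step(u, r=0.5)
print(round(sum(u), 4))
```

    The symmetric spreading of the spike and the slow loss of mass through the absorbing boundaries are the expected diffusion behaviour; in the authors' three-dimensional case the analogous implicit solves are done direction by direction, ADI-style.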

  2. A hybrid method for decision making with dependence & feedback under incomplete information

    Directory of Open Access Journals (Sweden)

    Chen Weijie

    2016-01-01

    Full Text Available This paper presents a hybrid method to tackle multiple criteria decision making problems with incomplete weight information in the context of fuzzy soft sets. In order to determine the weights of criteria, we develop a comprehensive two-stage framework. Stage One: We first define the distance between two fuzzy soft numbers. Next, we establish an optimization model based on the ideal point of attribute values, by which the attribute weights can be determined. Stage Two: To get the global weights, we use fuzzy cognitive maps to depict the dependence and feedback effects among criteria. Next, we construct a fuzzy soft set to decide the desirable alternative. Finally, a case study is given to clarify the proposed approach of this paper.
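    A minimal numeric sketch of the ideal-point idea follows. The membership degrees are invented, a plain Euclidean distance stands in for the paper's fuzzy-soft-number distance, and the deviation-based weighting is a common simplification of such ideal-point optimization models, not the paper's exact formulation:

```python
import math

def fuzzy_distance(a, b):
    """Euclidean distance between two membership-degree vectors
    (a simple stand-in for the paper's fuzzy-soft-number distance)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ideal_point_weights(matrix, ideal):
    """Deviation-based weighting: criteria on which the alternatives
    deviate more from the ideal point receive larger weights."""
    devs = [sum(abs(row[j] - ideal[j]) for row in matrix)
            for j in range(len(ideal))]
    total = sum(devs)
    return [d / total for d in devs]

# Invented membership degrees: 3 alternatives x 3 criteria.
M = [[0.7, 0.4, 0.9],
     [0.5, 0.8, 0.6],
     [0.9, 0.6, 0.7]]
ideal = [max(row[j] for row in M) for j in range(3)]  # per-criterion ideal
w = ideal_point_weights(M, ideal)
dists = [fuzzy_distance(row, ideal) for row in M]
best = min(range(len(M)), key=lambda i: dists[i])  # closest to the ideal
print([round(x, 3) for x in w], best)
```

    In the paper's full method these local weights would then be adjusted by the fuzzy-cognitive-map stage to capture dependence and feedback among criteria.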

  3. Theoretical treatment of photodissociation of water by time-dependent quantum mechanical methods

    International Nuclear Information System (INIS)

    Weide, K.

    1993-01-01

    An algorithm for wavepacket propagation, based on Kosloff's method of expanding the time evolution operator in terms of Chebyshev polynomials, and some details of its implementation are described. With the programs developed, quantum-mechanical calculations for up to three independent molecular coordinates are feasible, and therefore photodissociation of non-rotating triatomic molecules can be treated exactly. The angular degree of freedom here is handled by expansion in terms of free diatomic rotor states. The time-dependent wave packet picture is compared with the more traditional view of stationary wave functions, and both are used to interpret computational results where appropriate. Two-dimensional calculations have been performed to explain several experimental observations about water photodissociation. All calculations are based on ab initio potential energy surfaces, and it is explained in each case why it is reasonable to neglect the third degree of freedom. Many experimental results are reproduced quantitatively. (orig.)

  4. Numerical solution of the time dependent neutron transport equation by the method of the characteristics

    International Nuclear Information System (INIS)

    Talamo, Alberto

    2013-01-01

    This study presents three numerical algorithms to solve the time dependent neutron transport equation by the method of the characteristics. The algorithms have been developed taking into account delayed neutrons and they have been implemented into the novel MCART code, which solves the neutron transport equation for two-dimensional geometry and an arbitrary number of energy groups. The MCART code uses regular mesh for the representation of the spatial domain, it models up-scattering, and takes advantage of OPENMP and OPENGL algorithms for parallel computing and plotting, respectively. The code has been benchmarked with the multiplication factor results of a Boiling Water Reactor, with the analytical results for a prompt jump transient in an infinite medium, and with PARTISN and TDTORT results for cross section and source transients. The numerical simulations have shown that only two numerical algorithms are stable for small time steps

  5. Numerical solution of the time dependent neutron transport equation by the method of the characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, Alberto, E-mail: alby@anl.gov [Nuclear Engineering Division, Argonne National Laboratory, 9700 South Cass Avenue, Lemont, IL 60439 (United States)

    2013-05-01

    This study presents three numerical algorithms to solve the time dependent neutron transport equation by the method of the characteristics. The algorithms have been developed taking into account delayed neutrons and they have been implemented into the novel MCART code, which solves the neutron transport equation for two-dimensional geometry and an arbitrary number of energy groups. The MCART code uses regular mesh for the representation of the spatial domain, it models up-scattering, and takes advantage of OPENMP and OPENGL algorithms for parallel computing and plotting, respectively. The code has been benchmarked with the multiplication factor results of a Boiling Water Reactor, with the analytical results for a prompt jump transient in an infinite medium, and with PARTISN and TDTORT results for cross section and source transients. The numerical simulations have shown that only two numerical algorithms are stable for small time steps.

  6. Time dependence linear transport III convergence of the discrete ordinate method

    International Nuclear Information System (INIS)

    Wilson, D.G.

    1983-01-01

    In this paper the uniform pointwise convergence of the discrete ordinate method for weak and strong solutions of the time dependent, linear transport equation posed in a multidimensional, rectangular parallelepiped with partially reflecting walls is established. The first result is that a sequence of discrete ordinate solutions converges uniformly on the quadrature points to a solution of the continuous problem provided that the corresponding sequence of truncation errors for the solution of the continuous problem converges to zero in the same manner. The second result is that continuity of the solution with respect to the velocity variables guarantees that the truncation errors in the quadrature formula go to zero and hence that the discrete ordinate approximations converge to the solution of the continuous problem as the discrete ordinates become dense. An existence theory for strong solutions of the continuous problem follows as a result.

  7. Implementation of a method for calculating temperature-dependent resistivities in the KKR formalism

    Science.gov (United States)

    Mahr, Carsten E.; Czerner, Michael; Heiliger, Christian

    2017-10-01

    We present a method to calculate the electron-phonon induced resistivity of metals in scattering-time approximation based on the nonequilibrium Green's function formalism. The general theory as well as its implementation in a density-functional theory based Korringa-Kohn-Rostoker code are described and subsequently verified by studying copper as a test system. We model the thermal expansion by fitting a Debye-Grüneisen curve to experimental data. Both the electronic and vibrational structures are discussed for different temperatures, and employing a Wannier interpolation of these quantities we evaluate the scattering time by integrating the electron linewidth on a triangulation of the Fermi surface. Based thereupon, the temperature-dependent resistivity is calculated and found to be in good agreement with experiment. We show that the effect of thermal expansion has to be considered in the whole calculation regime. Further, for low temperatures, an accurate sampling of the Fermi surface becomes important.

  8. Fitting methods for constructing energy-dependent efficiency curves and their application to ionization chamber measurements

    International Nuclear Information System (INIS)

    Svec, A.; Schrader, H.

    2002-01-01

    An ionization chamber without and with an iron liner (absorber) was calibrated using a set of radionuclide activity standards of the Physikalisch-Technische Bundesanstalt (PTB). The ionization chamber is used as a secondary standard measuring system for activity at the Slovak Institute of Metrology (SMU). Energy-dependent photon-efficiency curves were established for the ionization chamber in a defined measurement geometry without and with the liner, and radionuclide efficiencies were calculated. The fitting was performed programmatically with an analytical efficiency function and a nonlinear regression algorithm in Microsoft (MS) Excel. Efficiencies from bremsstrahlung of pure beta-particle emitters were calibrated to a 10% accuracy level. Such efficiency components are added to obtain the total radionuclide efficiency of photon emitters after beta decay. For most photon-emitting radionuclides, the method yields differences between experimental and calculated radionuclide efficiencies on the order of a few percent
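    The analytical efficiency function used at PTB/SMU is not reproduced in the abstract; a common stand-in for such energy-dependent efficiency curves is a low-order polynomial in log(energy) fitted to log(efficiency). The pure-Python least-squares sketch below illustrates this with invented calibration points:

```python
import math

def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations, solved by
    Gaussian elimination with partial pivoting. Returns coefficients
    c[0] + c[1]*x + ... + c[degree]*x^degree."""
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c2 in range(col, n):
                A[r][c2] -= f * A[col][c2]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][c2] * coeffs[c2] for c2 in range(r + 1, n))
        coeffs[r] = s / A[r][r]
    return coeffs

# Invented calibration points: photon energy (keV) vs relative efficiency.
energies = [60, 120, 360, 660, 1170, 1330]
effs = [0.30, 0.55, 0.80, 0.92, 1.05, 1.08]
c = polyfit([math.log(e) for e in energies],
            [math.log(e) for e in effs], degree=2)
# Interpolated efficiency at 500 keV from the fitted log-log curve.
x = math.log(500.0)
val = math.exp(c[0] + c[1] * x + c[2] * x * x)
print(round(val, 3))
```

    Once such a curve is fitted, a radionuclide's total efficiency is obtained by summing the curve's value at each photon line, weighted by emission probability, which mirrors the summation of efficiency components described above.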

  9. Time-dependent resilience assessment and improvement of urban infrastructure systems

    Science.gov (United States)

    Ouyang, Min; Dueñas-Osorio, Leonardo

    2012-09-01

    This paper introduces an approach to assess and improve the time-dependent resilience of urban infrastructure systems, where resilience is defined as the systems' ability to resist various possible hazards, absorb the initial damage from hazards, and recover to normal operation one or multiple times during a time period T. For different values of T and its position relative to current time, there are three forms of resilience: previous resilience, current potential resilience, and future potential resilience. This paper mainly discusses the third form that takes into account the systems' future evolving processes. Taking the power transmission grid in Harris County, Texas, USA as an example, the time-dependent features of resilience and the effectiveness of some resilience-inspired strategies, including enhancement of situational awareness, management of consumer demand, and integration of distributed generators, are all simulated and discussed. Results show a nonlinear nature of resilience as a function of T, which may exhibit a transition from an increasing function to a decreasing function at either a threshold of post-blackout improvement rate, a threshold of load profile with consumer demand management, or a threshold number of integrated distributed generators. These results are further confirmed by studying a typical benchmark system such as the IEEE RTS-96. Such common trends indicate that some resilience strategies may enhance infrastructure system resilience in the short term, but if not managed well, they may compromise practical utility system resilience in the long run.
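    The paper's exact resilience metric is not reproduced in the abstract; a common normalized formulation, used in the sketch below with an invented load-served trajectory, is the time-averaged ratio of actual to target performance over the horizon T:

```python
def resilience(performance, target, dt):
    """Time-averaged ratio of actual to target performance over the
    horizon T = len(performance) * dt (1.0 = no loss of service)."""
    T = len(performance) * dt
    return sum(min(p, target) / target * dt for p in performance) / T

# Invented hourly load served (MW) during a disruption-and-recovery
# episode, measured against a 100 MW target.
served = [100, 100, 40, 55, 70, 85, 95, 100, 100, 100]
r = resilience(served, target=100.0, dt=1.0)
print(round(r, 3))
```

    Under a definition of this shape, the nonlinear dependence on T discussed above arises naturally: lengthening the horizon dilutes a single outage but also exposes the system to further disruptions and to the evolution of demand and generation.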

  10. Temperature dependence of the shear modulus of soft tissues assessed by ultrasound

    Energy Technology Data Exchange (ETDEWEB)

    Sapin-de Brosses, E; Gennisson, J-L; Pernot, M; Fink, M; Tanter, M [Langevin Institute (CNRS UMR 7587), INSERM ERL U979, ESPCI ParisTech, 10 rue Vauquelin, 75 005 Paris (France)], E-mail: emilie.sapin@espci.fr

    2010-03-21

    Soft tissue stiffness was shown to significantly change after thermal ablation. To better understand this phenomenon, the study aims (1) to quantify and explain the temperature dependence of soft tissue stiffness for different organs, (2) to investigate the potential relationship between stiffness changes and thermal dose and (3) to study the reversibility or irreversibility of stiffness changes. Ex vivo bovine liver and muscle samples (N = 3 and N = 20, respectively) were slowly heated and cooled down in a thermally controlled saline bath. Temperatures were assessed by thermocouples. Sample stiffness (shear modulus) was provided by the quantitative supersonic shear imaging technique. Changes in liver stiffness are observed only above 45 °C. In contrast, between 25 °C and 65 °C, muscle stiffness varies in four successive steps that are consistent with the thermally induced protein denaturation reported in the literature. After a 6 h long heating and cooling process, the final muscle stiffness can be either smaller or larger than the initial one, depending on the stiffness at the end of the heating. Another important result is that stiffness changes are linked to thermal dose. Given the high sensitivity of ultrasound to protein denaturation, this study gives promising prospects for the development of ultrasound-guided HIFU systems.

  11. Temperature dependence of the shear modulus of soft tissues assessed by ultrasound

    International Nuclear Information System (INIS)

    Sapin-de Brosses, E; Gennisson, J-L; Pernot, M; Fink, M; Tanter, M

    2010-01-01

    Soft tissue stiffness was shown to significantly change after thermal ablation. To better understand this phenomenon, the study aims (1) to quantify and explain the temperature dependence of soft tissue stiffness for different organs, (2) to investigate the potential relationship between stiffness changes and thermal dose and (3) to study the reversibility or irreversibility of stiffness changes. Ex vivo bovine liver and muscle samples (N = 3 and N = 20, respectively) were slowly heated and cooled down in a thermally controlled saline bath. Temperatures were assessed by thermocouples. Sample stiffness (shear modulus) was provided by the quantitative supersonic shear imaging technique. Changes in liver stiffness are observed only above 45 °C. In contrast, between 25 °C and 65 °C, muscle stiffness varies in four successive steps that are consistent with the thermally induced protein denaturation reported in the literature. After a 6 h long heating and cooling process, the final muscle stiffness can be either smaller or larger than the initial one, depending on the stiffness at the end of the heating. Another important result is that stiffness changes are linked to thermal dose. Given the high sensitivity of ultrasound to protein denaturation, this study gives promising prospects for the development of ultrasound-guided HIFU systems.

  12. Integration of Density Dependence and Concentration Response Models Provides an Ecologically Relevant Assessment of Populations Exposed to Toxicants

    Science.gov (United States)

    The assessment of toxic exposure on wildlife populations involves the integration of organism level effects measured in toxicity tests (e.g., chronic life cycle) and population models. These modeling exercises typically ignore density dependence, primarily because information on ...

  13. Are three methods better than one? A comparative assessment of usability evaluation methods in an EHR.

    Science.gov (United States)

    Walji, Muhammad F; Kalenderian, Elsbeth; Piotrowski, Mark; Tran, Duong; Kookal, Krishna K; Tokede, Oluwabunmi; White, Joel M; Vaderhobli, Ram; Ramoni, Rachel; Stark, Paul C; Kimmes, Nicole S; Lagerweij, Maxim; Patel, Vimla L

    2014-05-01

    To comparatively evaluate the effectiveness of three different methods involving end-users for detecting usability problems in an EHR: user testing, semi-structured interviews and surveys. Data were collected at two major urban dental schools from faculty, residents and dental students to assess the usability of a dental EHR for developing a treatment plan. These included user testing (N=32), semi-structured interviews (N=36), and surveys (N=35). The three methods together identified a total of 187 usability violations: 54% via user testing, 28% via the semi-structured interview and 18% from the survey method, with modest overlap. These usability problems were classified into 24 problem themes in 3 broad categories. User testing covered the broadest range of themes (83%), followed by the interview (63%) and survey (29%) methods. Multiple evaluation methods provide a comprehensive approach to identifying EHR usability challenges and specific problems. The three methods were found to be complementary, and thus each can provide unique insights for software enhancement. Interview and survey methods were found not to be sufficient by themselves, but when used in conjunction with the user testing method, they provided a comprehensive evaluation of the EHR. We recommend using a multi-method approach when testing the usability of health information technology because it provides a more comprehensive picture of usability challenges. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Towards a Quality Assessment Method for Learning Preference Profiles in Negotiation

    Science.gov (United States)

    Hindriks, Koen V.; Tykhonov, Dmytro

    In automated negotiation, information gained about an opponent's preference profile by means of learning techniques may significantly improve an agent's negotiation performance. It therefore is useful to gain a better understanding of how various negotiation factors influence the quality of learning. The quality of learning techniques in negotiation is typically assessed indirectly by means of comparing the utility levels of agreed outcomes and other more global negotiation parameters. An evaluation of learning based on such general criteria, however, does not provide any insight into the influence of various aspects of negotiation on the quality of the learned model itself. The quality may depend on such aspects as the domain of negotiation, the structure of the preference profiles, the negotiation strategies used by the parties, and others. To gain a better understanding of the performance of proposed learning techniques in the context of negotiation, and to be able to assess the potential to improve the performance of such techniques, a more systematic assessment method is needed. In this paper we propose such a systematic method to analyse the quality of the information gained about opponent preferences by learning in single-instance negotiations. The method includes measures to assess the quality of a learned preference profile and proposes an experimental setup to analyse the influence of various negotiation aspects on the quality of learning. We apply the method to a Bayesian learning approach for learning an opponent's preference profile and discuss our findings.

  15. The “ductility exhaustion” method for static strength assessment of fusion structures

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Vaughan, E-mail: vaughan.thompson@ccfe.ac.uk; Vizvary, Zsolt

    2015-10-15

    Graphical abstract: - Highlights: • Reduced conservatism and more complex geometry. • Assessment process simplified. • Gives insight into real material behaviour – virtual proof test. • Leads onto structural failure modelling. • Ductility exhaustion and global plastic collapse structural assessment. - Abstract: The traditional method for static strength assessment of structures uses elastic stresses computed along critical ligaments, divided into categories depending on their nature, e.g. bending/membrane and primary/secondary. More recently, highly realistic plastic simulations have become possible using FE (finite element) analysis, which offers useful advantages over the traditional approach, including (a) more accurate modelling of complex geometries, (b) a more straightforward assessment process and (c) a less conservative approach. The plastic analysis must consider both global and local effects, and the paper looks in detail at the “ductility exhaustion” method for the latter. Simple test cases show how the method can be applied in both the Abaqus and ANSYS FE codes; for the case of a JET beryllium tile, the method has improved reserve factors for disruption loads considerably, to the point where the lower operating temperature limit can be safely reduced from 200 °C to 100 °C, where the low ductility of beryllium is an issue.

  16. A hierarchical network modeling method for railway tunnels safety assessment

    Science.gov (United States)

    Zhou, Jin; Xu, Weixiang; Guo, Xin; Liu, Xumin

    2017-02-01

    Using network theory to model risk-related knowledge on accidents is regarded as potentially very helpful in risk management. A large amount of defect detection data for railway tunnels is collected every autumn in China, and it is extremely important to discover the regularities hidden in this database. In this paper, based on network theories and data mining techniques, a new method is proposed for mining risk-related regularities to support risk management in railway tunnel projects. A hierarchical network (HN) model which takes into account tunnel structures, tunnel defects, potential failures and accidents is established. An improved Apriori algorithm is designed to rapidly and effectively mine correlations between tunnel structures and tunnel defects. An algorithm is then presented to mine the risk-related regularities table (RRT) from the frequent patterns. Finally, a safety assessment method is proposed that considers both actual defects and the possible risks of defects gained from the RRT. This method can not only generate quantitative risk results but also reveal the key defects and critical risks of defects. This paper further develops accident causation network modeling methods, which can provide guidance for specific maintenance measures.
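    The frequent-pattern mining step can be illustrated with a plain (unimproved) Apriori implementation; the abstract does not describe the authors' improvements, and the structure/defect item names below are hypothetical.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classic Apriori: find all itemsets whose relative support meets
    min_support. transactions is a list of sets of items (here, imagined
    tunnel-structure and defect codes). Returns {frozenset: count}."""
    n = len(transactions)
    counts = {}
    for t in transactions:                       # count 1-itemsets
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {k: v for k, v in counts.items() if v / n >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        items = set().union(*frequent)           # join step
        candidates = [frozenset(c) for c in combinations(sorted(items), k)]
        # prune: every (k-1)-subset of a candidate must itself be frequent
        candidates = [c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, k - 1))]
        counts = {c: sum(1 for t in transactions if c <= t)
                  for c in candidates}
        frequent = {c: v for c, v in counts.items() if v / n >= min_support}
        result.update(frequent)
        k += 1
    return result

# Hypothetical inspection records pairing structure types with observed defects
data = [{"lining", "crack"}, {"lining", "crack", "leakage"},
        {"portal", "leakage"}, {"lining", "crack"}]
freq = apriori(data, min_support=0.5)
```

    With a 50% support threshold, the pair {lining, crack} survives as a frequent pattern, the kind of structure-defect correlation the RRT would be built from.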

  17. A CRITICAL ASSESSMENT OF PHOTOMETRIC REDSHIFT METHODS: A CANDELS INVESTIGATION

    International Nuclear Information System (INIS)

    Dahlen, Tomas; Ferguson, Henry C.; Mobasher, Bahram; Faber, Sandra M.; Barro, Guillermo; Guo, Yicheng; Finkelstein, Steven L.; Finlator, Kristian; Fontana, Adriano; Gruetzbauch, Ruth; Johnson, Seth; Pforr, Janine; Dickinson, Mark E.; Salvato, Mara; Wuyts, Stijn; Wiklind, Tommy; Acquaviva, Viviana; Huang, Jiasheng; Huang, Kuang-Han; Newman, Jeffrey A.

    2013-01-01

    We present results from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) photometric redshift methods investigation. In this investigation, the results from 11 participants, each using a different combination of photometric redshift code, template spectral energy distributions (SEDs), and priors, are used to examine the properties of photometric redshifts applied to deep fields with broadband multi-wavelength coverage. The photometry used includes U-band through mid-infrared filters and was derived using the TFIT method. Comparing the results, we find that there is no particular code or set of template SEDs that results in significantly better photometric redshifts compared to others. However, we find that codes producing the lowest scatter and outlier fraction utilize a training sample to optimize photometric redshifts by adding zero-point offsets, adjusting templates, or adding extra smoothing errors. These results therefore stress the importance of the training procedure. We find a strong dependence of the photometric redshift accuracy on the signal-to-noise ratio of the photometry. On the other hand, we find a weak dependence of the photometric redshift scatter on redshift and galaxy color. We find that most photometric redshift codes quote redshift errors (e.g., 68% confidence intervals) that are too small compared with those expected from the spectroscopic control sample. We find that all codes show a statistically significant bias in the photometric redshifts. However, the bias is in all cases smaller than the scatter; the latter therefore dominates the errors. Finally, we find that combining results from multiple codes significantly decreases the photometric redshift scatter and outlier fraction. We discuss different ways of combining data to produce accurate photometric redshifts and error estimates.
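    Bias, scatter, and outlier fraction of this kind are commonly computed as below; these are standard photo-z metrics (NMAD scatter, a 0.15 outlier cut), not necessarily the exact definitions used in the CANDELS comparison, and the redshift values are synthetic.

```python
import numpy as np

def photoz_metrics(z_spec, z_phot, outlier_cut=0.15):
    """Common photometric-redshift quality metrics, all in units of
    dz/(1+z_spec): median bias, NMAD scatter, and outlier fraction."""
    z_spec = np.asarray(z_spec, float)
    z_phot = np.asarray(z_phot, float)
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    bias = np.median(dz)
    nmad = 1.48 * np.median(np.abs(dz - bias))   # robust scatter estimate
    fout = np.mean(np.abs(dz) > outlier_cut)     # catastrophic-outlier rate
    return bias, nmad, fout

# Synthetic spectroscopic control sample with one catastrophic outlier
bias, nmad, fout = photoz_metrics([0.5, 1.0, 2.0, 3.0],
                                  [0.52, 0.98, 2.05, 4.0])
```

    Normalizing by (1 + z) keeps the scatter comparable across redshift, which is why codes are typically ranked on these quantities rather than on raw Δz.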

  18. Measurement of Cue-Induced Craving in Human Methamphetamine-Dependent Subjects: New Methodological Hopes for Reliable Assessment of Treatment Efficacy

    Directory of Open Access Journals (Sweden)

    Zahra Alam Mehrjerdi

    2011-09-01

    Full Text Available Methamphetamine (MA) is a highly addictive psychostimulant drug with crucial impacts on individuals at various levels. Exposure to methamphetamine-associated cues in the laboratory can elicit measurable craving and autonomic reactivity in most individuals with methamphetamine dependence, and this cue reactivity can model how craving results in continued drug-seeking behaviors and relapse in real environments, but study of this notion is still limited. In this brief article, the authors review studies on cue-induced craving in human methamphetamine-dependent subjects in a laboratory-based approach. Craving for methamphetamine is elicited in the laboratory by a variety of methods, such as paraphernalia, verbal and visual cues, and imagery scripts. In this article, we review studies applying different cues as the main methods of craving induction in laboratory settings. The briefly reviewed literature provides strong evidence that craving for methamphetamine in laboratory conditions is significantly evoked by different cues. Cue-induced craving has important treatment and clinical implications for psychotherapists and clinicians, considering the role of induced craving in evoking an intense desire or urge to use methamphetamine during or after a successful craving prevention program. Elicited craving for methamphetamine in laboratory conditions is significantly influenced by methamphetamine-associated cues and results in a rapid craving response toward methamphetamine use. This notion can serve as a core for laboratory-based assessment of treatment efficacy in methamphetamine-dependent patients. In addition, laboratory settings for studying craving can bridge the gap between somewhat unreliable preclinical animal-model studies and budget-demanding randomized clinical trials.

  19. Reliability assessment of fiber optic communication lines depending on external factors and diagnostic errors

    Science.gov (United States)

    Bogachkov, I. V.; Lutchenko, S. S.

    2018-05-01

    The article deals with a method for assessing the reliability of fiber optic communication lines (FOCL), taking into account the effect of optical fiber tension, the influence of temperature, and errors of the first kind in the built-in diagnostic equipment. The reliability is assessed in terms of the availability factor, using the theory of Markov chains and probabilistic mathematical modeling. To obtain a mathematical model, the following steps are performed: the FOCL states are defined and validated; the state graph and system transitions are described; the system state transitions occurring at any given point are specified; and the real and observed times of system presence in the considered states are identified. From the permissible value of the availability factor, it is possible to determine the limiting frequency of FOCL maintenance.
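    The availability-factor idea can be sketched with the simplest possible Markov model: a two-state (up/down) chain. The paper's FOCL model has more states (to fold in fiber tension, temperature, and false-alarm diagnostic errors); the rates below are illustrative placeholders.

```python
import numpy as np

def availability(failure_rate, repair_rate):
    """Steady-state availability of a two-state (up/down) continuous-time
    Markov chain: solve pi Q = 0 with sum(pi) = 1 and return P(up).
    A toy analogue of the paper's richer FOCL state model."""
    # Generator matrix Q for states [up, down]
    Q = np.array([[-failure_rate, failure_rate],
                  [repair_rate, -repair_rate]])
    A = np.vstack([Q.T, np.ones(2)])   # stationarity + normalization
    b = np.array([0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi[0]

a = availability(failure_rate=1e-4, repair_rate=1e-2)  # hypothetical per-hour rates
```

    For the two-state case this reproduces the closed form A = mu / (lambda + mu); the linear-algebra route generalizes directly to larger state graphs like the one in the paper.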

  20. Using different methods to assess the discomfort during car driving.

    Science.gov (United States)

    Ravnik, David; Otáhal, Stanislav; Dodic Fikfak, Metoda

    2008-03-01

    This study investigated the discomfort caused by car driving. Discomfort estimates were obtained by a self-administered questionnaire, measured by different testing methods, and through goniometry of the principal angles. Data from a total of 200 non-professional drivers who completed the questionnaire were analysed. 118 subjects were analysed by goniometry and 30 drivers were assessed using the OWAS (Ovako Working Posture Analysis), RULA (Rapid Upper Limb Assessment), and CORLETT tests. The aim of this paper was to assess the appearance of discomfort and to find correlations with drivers' postures. Results suggest that different levels of discomfort are perceived in different body regions when driving cars. Differences concerning discomfort appear mostly between the genders. With the questionnaire and the different estimation techniques, it is possible to identify 'at risk' drivers and ensure urgent attention when necessary. It can be concluded that the questionnaire and the CORLETT test are good at predicting the location of discomfort. The Borg CR10 scale is a good indicator of the level of discomfort, while OWAS and RULA can appraise body posture to predict the appearance of discomfort. According to the goniometry data, the driver's posture could be one of the contributing factors in the appearance of discomfort.

  1. Cumulative Risk Assessment Toolbox: Methods and Approaches for the Practitioner

    Directory of Open Access Journals (Sweden)

    Margaret M. MacDonell

    2013-01-01

    Full Text Available The historical approach to assessing health risks of environmental chemicals has been to evaluate them one at a time. In fact, we are exposed every day to a wide variety of chemicals and are increasingly aware of potential health implications. Although considerable progress has been made in the science underlying risk assessments for real-world exposures, implementation has lagged because many practitioners are unaware of methods and tools available to support these analyses. To address this issue, the US Environmental Protection Agency developed a toolbox of cumulative risk resources for contaminated sites, as part of a resource document that was published in 2007. This paper highlights information for nearly 80 resources from the toolbox and provides selected updates, with practical notes for cumulative risk applications. Resources are organized according to the main elements of the assessment process: (1) planning, scoping, and problem formulation; (2) environmental fate and transport; (3) exposure analysis extending to human factors; (4) toxicity analysis; and (5) risk and uncertainty characterization, including presentation of results. In addition to providing online access, plans for the toolbox include addressing nonchemical stressors and applications beyond contaminated sites and further strengthening resource accessibility to support evolving analyses for cumulative risk and sustainable communities.

  2. Methods for the integral assessment of energy-related problems

    International Nuclear Information System (INIS)

    Hirschberg, S.; Suter, P.

    1995-01-01

    The present paper presents a number of methods for a comprehensive assessment of energy systems, discusses their merits and limitations, and provides some example results. The areas addressed include environmental impacts, risks and economic aspects. Three-step Life Cycle Analysis (LCA) has been used to analyse environmental impacts. Transparent and consistent inventories were developed for electricity generation (nine fuel cycles) and for heating systems. The results, which include gaseous and liquid emissions as well as non-energetic resources such as land depreciation, cover average, currently operating systems in the UCPTE network and in Switzerland. Examples of comparisons of heating systems and electricity generation systems, with respect to their contributions to such impact classes as the greenhouse effect, acidification and photosmog, are provided. Major gaps exist with respect to the assessment of the severe accident potential of the different energy systems. When analysing the objective risks due to severe accidents, two approaches are employed, i.e. direct use of past experience and application of Probabilistic Safety Assessment (PSA). Progress with respect to extended knowledge about past accidents and the use of PSA for external cost calculations is reported. Limitations of historical data and modelling issues are discussed, along with the role of risk aversion and current attempts to account for it. (author) 10 figs., 1 tab

  3. Medical Imaging Image Quality Assessment with Monte Carlo Methods

    International Nuclear Information System (INIS)

    Michail, C M; Fountos, G P; Kalyvas, N I; Valais, I G; Kandarakis, I S; Karpetas, G E; Martini, Niki; Koukou, Vaia

    2015-01-01

    The aim of the present study was to assess the image quality of PET scanners through a thin layer chromatography (TLC) plane source. The source was simulated using a previously validated Monte Carlo model. The model was developed using the GATE MC package, and reconstructed images were obtained with the STIR software for tomographic image reconstruction, with cluster computing. The PET scanner simulated in this study was the GE DiscoveryST. The plane source, consisting of a TLC plate, was simulated as a layer of silica gel on an aluminum (Al) foil substrate immersed in an 18F-FDG bath solution (1 MBq). Image quality was assessed in terms of the Modulation Transfer Function (MTF). MTF curves were estimated from transverse reconstructed images of the plane source. Images were reconstructed by the maximum likelihood estimation (MLE)-OSMAPOSL algorithm. OSMAPOSL reconstruction was assessed using various subsets (3 to 21) and iterations (1 to 20), as well as various beta (hyper) parameter values. MTF values were found to increase up to the 12th iteration, remaining almost constant thereafter. MTF improves with lower beta values. The simulated PET evaluation method based on the TLC plane source can also be useful in research for the further development of PET and SPECT scanners through GATE simulations. (paper)
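    The MTF-estimation step can be sketched in a few lines: take a profile across the imaged plane source (a line spread function, LSF), Fourier transform it, and normalize. The Gaussian LSF below is synthetic; the paper extracts the profile from STIR-reconstructed images instead.

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_size_mm):
    """Modulation Transfer Function from a line spread function via FFT.
    Normalizing the LSF to unit area makes MTF(0) = 1."""
    lsf = np.asarray(lsf, float)
    lsf = lsf / lsf.sum()
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_size_mm)  # cycles/mm
    return freqs, mtf

x = np.arange(-32, 32) * 0.5                   # 0.5 mm pixels
lsf = np.exp(-x**2 / (2 * 2.0**2))             # synthetic Gaussian LSF, sigma = 2 mm
freqs, mtf = mtf_from_lsf(lsf, 0.5)
```

    A broader LSF (worse resolution) produces an MTF that falls off at lower spatial frequencies, which is how the iteration count and beta parameter effects show up in the paper's curves.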

  4. Preventing blood transfusion failures: FMEA, an effective assessment method.

    Science.gov (United States)

    Najafpour, Zhila; Hasoumi, Mojtaba; Behzadi, Faranak; Mohamadi, Efat; Jafary, Mohamadreza; Saeedi, Morteza

    2017-06-30

    Failure Mode and Effect Analysis (FMEA) is a method used to assess the risk of failures and harms to patients during the medical process and to identify the associated clinical issues. The aim of this study was to conduct an assessment of the blood transfusion process in a teaching general hospital, using FMEA as the method. A structured FMEA was performed in 2014, and corrective actions were implemented and re-evaluated after 6 months. Sixteen 2-h sessions were held to perform FMEA on the blood transfusion process, covering five steps: establishing the context, selecting team members, analysing the processes, analysing hazards, and developing a risk reduction protocol for blood transfusion. Failure modes with the highest risk priority numbers (RPNs) were identified. The overall RPN scores ranged from 5 to 100, among which four failure modes were associated with RPNs over 75. The data analysis indicated that the failures with the highest RPNs were: labelling (RPN: 100), transfusion of blood or its components (RPN: 100), patient identification (RPN: 80) and sampling (RPN: 75). The results demonstrated that mis-transfusion of blood or a blood component is the most important error, which can lead to serious morbidity or mortality. Training personnel on blood transfusion, raising awareness of hazards and appropriate preventive measures, and developing standard safety guidelines are essential, and must be implemented during all steps of blood and blood component transfusion.
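    The RPN ranking at the heart of FMEA is simple arithmetic: severity x occurrence x detectability. The sketch below uses the common 1-10 scales (maximum RPN 1000; the paper's 5-100 range suggests its team used different scales), and the scores assigned to each failure mode are hypothetical.

```python
def risk_priority_number(severity, occurrence, detection):
    """FMEA Risk Priority Number with each factor scored 1-10."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("scores must be in 1..10")
    return severity * occurrence * detection

# Hypothetical scores for three transfusion-process failure modes
failure_modes = {
    "labelling error":        risk_priority_number(9, 4, 3),
    "patient identification": risk_priority_number(10, 2, 4),
    "sampling error":         risk_priority_number(7, 3, 3),
}
# Rank failure modes from highest to lowest risk priority
ranked = sorted(failure_modes.items(), key=lambda kv: kv[1], reverse=True)
```

    Corrective actions are then targeted at the top of the ranked list, and the FMEA is repeated after implementation (here, after 6 months) to confirm the RPNs have dropped.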

  5. Assessing dental wear in reindeer using geometric morphometrical methods

    Directory of Open Access Journals (Sweden)

    Rolf Rødven

    2009-01-01

    Full Text Available Assessing dental wear is a useful tool for monitoring the interaction between ungulates and their food resources. However, a univariate measurement of dental wear, such as the height of the first molar, may not capture the variation in dental wear that is important for dental functional morphology. Here we demonstrate a method for assessing dental wear in ungulates by applying geometric morphometric methods to 11 mandibles from nine Svalbard reindeer (Rangifer tarandus platyrhynchus). Shape measurements were obtained from a combination of fixed and sliding semi-landmarks, and dental wear was estimated using the residual variation of the landmarks. The morphometric measurements obtained showed a good fit when compared to subjective scores of dental wear. We conclude that this method may give a more integrated and robust assessment of dental wear than univariate methods, and suggest that it be used as an alternative or in addition to traditional measurements of dental wear.

  6. Spectral fitting method for the solution of time-dependent Schroedinger equations: Applications to atoms in intense laser fields

    International Nuclear Information System (INIS)

    Qiao Haoxue; Cai Qingyu; Rao Jianguo; Li Baiwen

    2002-01-01

    A spectral fitting method for solving the time-dependent Schroedinger equation has been developed and applied to atoms in intense laser fields. This method allows us to obtain a highly accurate time-dependent wave function, including contributions from high-order terms in Δt. Moreover, the time-dependent wave function is determined on a small number of discrete mesh points, making the calculations simple and accurate. The method is illustrated by computing wave functions and harmonic generation spectra of a model atom in laser fields.
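    For orientation, here is a standard split-step Fourier propagator for a 1D time-dependent Schroedinger equation. This is not the authors' spectral fitting scheme (which achieves higher-order accuracy in Δt), but it shows the common pattern of spectral time propagation on a discrete mesh; the harmonic potential and units (hbar = m = 1) are illustrative choices.

```python
import numpy as np

def split_step_tdse(psi0, x, potential, dt, steps, hbar=1.0, m=1.0):
    """Propagate a 1D wavefunction under H = p^2/2m + V(x) with the
    second-order (Strang) split-step Fourier method: half a potential
    kick, a full kinetic step in k-space, half a potential kick."""
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
    half_v = np.exp(-0.5j * potential * dt / hbar)
    kin = np.exp(-0.5j * hbar * k**2 * dt / m)
    psi = psi0.astype(complex)
    for _ in range(steps):
        psi = half_v * psi
        psi = np.fft.ifft(kin * np.fft.fft(psi))
        psi = half_v * psi
    return psi

x = np.linspace(-10, 10, 256, endpoint=False)
V = 0.5 * x**2                                   # harmonic trap
psi0 = np.pi**-0.25 * np.exp(-x**2 / 2)          # its ground state
psi = split_step_tdse(psi0, x, V, dt=0.01, steps=100)
```

    Because each factor is unitary, the norm is preserved exactly, and propagating an eigenstate leaves the probability density essentially unchanged, a convenient sanity check for any spectral propagator.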

  7. Domain decomposition method for dynamic faulting under slip-dependent friction

    International Nuclear Information System (INIS)

    Badea, Lori; Ionescu, Ioan R.; Wolf, Sylvie

    2004-01-01

    The anti-plane shearing problem on a system of finite faults under slip-dependent friction in a linear elastic domain is considered. Using a Newmark method for the time discretization of the problem, an elliptic variational inequality is obtained at each time step. An upper bound for the time step size, which is not a CFL condition, is deduced from the solution uniqueness criterion using the first eigenvalue of the tangent problem. The finite element form of the variational inequality is solved by a Schwarz method, assuming that the inner nodes of the domain lie in one subdomain and the nodes on the fault lie in other subdomains. Two decompositions of the domain are analyzed, one made up of two subdomains and another with three subdomains. Numerical experiments are performed to illustrate convergence for a single time step (convergence of the Schwarz algorithm, influence of the mesh size, influence of the time step), convergence in time (instability capturing, energy dissipation, optimal time step) and an application to a relevant physical problem (interacting parallel fault segments).
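    The Schwarz iteration itself is easy to demonstrate on a scalar stand-in. The sketch below applies overlapping alternating Schwarz to -u'' = 1 on (0,1) with two subdomains; there is no friction inequality here, only the subdomain-solve-and-exchange pattern the paper builds on, and all problem sizes are illustrative.

```python
import numpy as np

def alternating_schwarz(n=101, overlap=10, sweeps=50):
    """Overlapping alternating Schwarz for -u'' = 1 on (0,1),
    u(0) = u(1) = 0: repeatedly solve on each subdomain with Dirichlet
    data taken from the current global iterate."""
    h = 1.0 / (n - 1)
    u = np.zeros(n)
    f = np.ones(n)
    mid = n // 2
    left = slice(1, mid + overlap)          # interior nodes of subdomain 1
    right = slice(mid - overlap, n - 1)     # interior nodes of subdomain 2

    def solve_patch(idx):
        i0, i1 = idx.start, idx.stop        # boundary data from current u
        m = i1 - i0
        A = (np.diag(2 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
             - np.diag(np.ones(m - 1), -1)) / h**2
        b = f[i0:i1].copy()
        b[0] += u[i0 - 1] / h**2
        b[-1] += u[i1] / h**2
        u[idx] = np.linalg.solve(A, b)

    for _ in range(sweeps):
        solve_patch(left)
        solve_patch(right)
    return np.linspace(0, 1, n), u

x, u = alternating_schwarz()
exact = 0.5 * x * (1 - x)                   # analytic solution
```

    Convergence is geometric in the number of sweeps, with a rate that improves as the overlap grows, which is the behaviour the paper's single-time-step experiments examine for the fault/interior decomposition.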

  8. A rapid Salmonella detection method involving thermophilic helicase-dependent amplification and a lateral flow assay.

    Science.gov (United States)

    Du, Xin-Jun; Zhou, Tian-Jiao; Li, Ping; Wang, Shuo

    2017-08-01

    Salmonella is a major foodborne pathogen that is widespread in the environment and can cause serious human and animal disease. Since conventional culture methods to detect Salmonella are time-consuming and laborious, rapid and accurate techniques to detect this pathogen are critically important for food safety and diagnosing foodborne illness. In this study, we developed a rapid, simple and portable Salmonella detection strategy that combines thermophilic helicase-dependent amplification (tHDA) with a lateral flow assay to provide a detection result based on visual signals within 90 min. Performance analyses indicated that the method had detection limits for DNA and pure cultured bacteria of 73.4-80.7 fg and 35-40 CFU, respectively. Specificity analyses showed no cross reactions with Escherichia coli, Staphylococcus aureus, Listeria monocytogenes, Enterobacter aerogenes, Shigella and Campylobacter jejuni. The results for detection in real food samples showed that 1.3-1.9 CFU/g or 1.3-1.9 CFU/mL of Salmonella in contaminated chicken products and infant nutritional cereal could be detected after 2 h of enrichment. The same amount of Salmonella in contaminated milk could be detected after 4 h of enrichment. This tHDA-strip can be used for the rapid detection of Salmonella in food samples and is particularly suitable for use in areas with limited equipment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. In-Flight Calibration Methods for Temperature-Dependent Offsets in the MMS Fluxgate Magnetometers

    Science.gov (United States)

    Bromund, K. R.; Plaschke, F.; Strangeway, R. J.; Anderson, B. J.; Huang, B. G.; Magnes, W.; Fischer, D.; Nakamura, R.; Leinweber, H. K.; Russell, C. T.

    2016-01-01

    During the first dayside season of the Magnetospheric Multiscale (MMS) mission, the in-flight calibration process for the Fluxgate Magnetometers (FGM) implemented an algorithm that selected a constant offset (zero level) for each sensor on each orbit. This method was generally able to reduce the amplitude of the residual spin tone to less than 0.2 nT within the region of interest. However, there are times when the offsets show significant short-term variations. These variations are most prominent in the nighttime season (phase 1X), when eclipses are accompanied by offset changes as large as 1 nT. Eclipses are followed by a recovery period as long as 12 hours during which the offsets continue to change as temperatures stabilize. Understanding and compensating for these changes will become critical during Phase 2 of the mission in 2017, when the nightside will become the focus of MMS science. Although there is no direct correlation between offset and temperature, over the period of any given week the offsets are well characterized as a function of instrument temperature. Using this property, a new calibration method has been developed that has proven effective in compensating for temperature-dependent offsets during phase 1X of the MMS mission and also promises to further refine calibration quality during the dayside season.
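    The "offset as a function of temperature" idea can be sketched as a simple per-week curve fit. The low-order polynomial model and the synthetic data below are assumptions for illustration; the actual MMS/FGM characterization may take a different functional form.

```python
import numpy as np

def fit_offset_vs_temperature(temps, offsets, degree=3):
    """Fit a sensor zero-level as a smooth function of instrument
    temperature over one characterization window (here, a polynomial)."""
    coeffs = np.polyfit(temps, offsets, degree)
    return np.poly1d(coeffs)

# Synthetic week of data: offset drifts ~1 nT over an eclipse temperature swing
temps = np.linspace(-20, 20, 50)                       # degrees C
true_offset = 0.5 + 0.02 * temps + 0.0004 * temps**2   # nT
noisy = true_offset + np.random.default_rng(0).normal(0, 0.02, temps.size)
model = fit_offset_vs_temperature(temps, noisy)
corrected = noisy - model(temps)                       # residual offset
```

    Once fitted, the model is evaluated at the measured instrument temperature to subtract the offset sample by sample, instead of assuming one constant zero-level per orbit.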

  10. Groundwater vulnerability assessment: from overlay methods to statistical methods in the Lombardy Plain area

    Directory of Open Access Journals (Sweden)

    Stefania Stevenazzi

    2017-06-01

    Full Text Available Groundwater is among the most important freshwater resources. Worldwide, aquifers are experiencing an increasing threat of pollution from urbanization, industrial development, agricultural activities and mining enterprises. Thus, practical actions, strategies and solutions to protect groundwater from these anthropogenic sources are widely required. The most efficient tool, which helps support land use planning while protecting groundwater from contamination, is groundwater vulnerability assessment. Over the years, several methods for assessing groundwater vulnerability have been developed: overlay and index methods, statistical methods and process-based methods. All of these methods are means of synthesizing complex hydrogeological information into a single document, a groundwater vulnerability map, usable by planners, decision and policy makers, geoscientists and the public. Although it is not possible to identify an approach which is best for all situations, the final product should always be scientifically defensible, meaningful and reliable. Nevertheless, various methods may produce very different results at any given site, so the reasons for similarities and differences need to be deeply investigated. This study demonstrates the reliability and flexibility of a spatial statistical method for assessing groundwater vulnerability to contamination at a regional scale. The Lombardy Plain case study is particularly interesting for its long history of groundwater monitoring (quality and quantity), availability of hydrogeological data, and combined presence of various anthropogenic sources of contamination. Recent updates of the regional water protection plan have raised the necessity of realizing more flexible, reliable and accurate groundwater vulnerability maps. A comparison of groundwater vulnerability maps obtained through different approaches and developed in a time span of several years has demonstrated the relevance of the

  11. Method to Find Recovery Event Combinations in Probabilistic Safety Assessment

    International Nuclear Information System (INIS)

    Jung, Woo Sik; Riley, Jeff

    2016-01-01

    These research activities may develop mathematical methods, engineering analyses, and business processes. The research activities of the project covered by this scope are directed toward the specific issues of implementing the methods and strategies on a computational platform, identifying the features and enhancements to EPRI tools that would be necessary to realize significant improvements to the risk assessments performed by the end user. Fault tree analysis is extensively and successfully applied to the risk assessment of safety-critical systems such as nuclear, chemical and aerospace systems. Fault tree analysis is used together with event tree analysis in the PSA of nuclear power plants. Fault tree solvers for a PSA are mostly based on the cutset-based algorithm: they generate minimal cut sets (MCSs) from a fault tree. The most popular fault tree solver in the PSA industry is FTREX. During the course of this project, certain technical issues (see Sections 2 to 5) were identified that need to be addressed regarding how minimal cut sets are generated and quantified. The objective of this scope of work was to develop new methods or techniques to address these technical limitations. By turning on all the cutset initiators (%1, %2, %3, %), all the possible minimal cut sets can be calculated more easily than with the original fault tree. This is accomplished by the fact that the number of events in the minimal cut sets is significantly reduced by using cutset initiators instead of random failure events. And by turning on a few chosen cutset initiators and turning off the others, the minimal cut sets of the selected cutset initiator(s) can be easily calculated. As explained in the previous sections, there is no way to calculate these minimal cut sets by turning the random failure events off/on in the original fault tree.

  12. Method to Find Recovery Event Combinations in Probabilistic Safety Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Woo Sik [Sejong University, Seoul (Korea, Republic of); Riley, Jeff [Electric Power Research, Palo Alto (United States)

    2016-05-15

    These research activities may develop mathematical methods, engineering analyses, and business processes. The research activities of the project covered by this scope are directed toward the specific issues of implementing the methods and strategies on a computational platform, identifying the features and enhancements to EPRI tools that would be necessary to realize significant improvements to the risk assessments performed by the end user. Fault tree analysis is extensively and successfully applied to the risk assessment of safety-critical systems such as nuclear, chemical and aerospace systems. Fault tree analysis is used together with event tree analysis in the PSA of nuclear power plants. Fault tree solvers for a PSA are mostly based on the cutset-based algorithm: they generate minimal cut sets (MCSs) from a fault tree. The most popular fault tree solver in the PSA industry is FTREX. During the course of this project, certain technical issues (see Sections 2 to 5) were identified that need to be addressed regarding how minimal cut sets are generated and quantified. The objective of this scope of work was to develop new methods or techniques to address these technical limitations. By turning on all the cutset initiators (%1, %2, %3, %), all the possible minimal cut sets can be calculated more easily than with the original fault tree. This is accomplished by the fact that the number of events in the minimal cut sets is significantly reduced by using cutset initiators instead of random failure events. And by turning on a few chosen cutset initiators and turning off the others, the minimal cut sets of the selected cutset initiator(s) can be easily calculated. As explained in the previous sections, there is no way to calculate these minimal cut sets by turning the random failure events off/on in the original fault tree.
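    The cut-set generation that underlies both of the records above can be illustrated with a toy top-down (MOCUS-style) expansion of a small fault tree; this is not FTREX's algorithm, and the gate/event names are hypothetical.

```python
def minimal_cut_sets(gates, top):
    """Top-down expansion of a fault tree into minimal cut sets.
    gates maps gate name -> ("AND" | "OR", [children]); any name not in
    gates is a basic event. Toy illustration of the cutset-based approach."""
    def expand(node):
        if node not in gates:
            return [frozenset([node])]
        op, children = gates[node]
        child_sets = [expand(c) for c in children]
        if op == "OR":                       # OR: union of children's cut sets
            return [cs for sets in child_sets for cs in sets]
        result = [frozenset()]               # AND: cartesian product of sets
        for sets in child_sets:
            result = [a | b for a in result for b in sets]
        return result

    cutsets = expand(top)
    # minimise: drop any cut set that strictly contains another
    minimal = [c for c in cutsets if not any(o < c for o in cutsets)]
    return sorted(set(minimal), key=sorted)

# Hypothetical tree: TOP fails if both A and B fail, or if C fails
tree = {
    "TOP": ("OR", ["G1", "C"]),
    "G1":  ("AND", ["A", "B"]),
}
mcs = minimal_cut_sets(tree, "TOP")
```

    Replacing groups of basic events with cutset initiators, as the records describe, shrinks the event count inside each cut set and thus the size of this expansion.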

  13. Laboratory assessment of novel oral anticoagulants: method suitability and variability between coagulation laboratories.

    Science.gov (United States)

    Helin, Tuukka A; Pakkanen, Anja; Lassila, Riitta; Joutsi-Korhonen, Lotta

    2013-05-01

    Laboratory tests to assess novel oral anticoagulants (NOACs) are under evaluation. Routine monitoring is unnecessary, but under special circumstances bioactivity assessment becomes crucial. We analyzed the effects of NOACs on coagulation tests and the availability of specific assays at different laboratories. Plasma samples spiked with dabigatran (Dabi; 120 and 300 μg/L) or rivaroxaban (Riva; 60, 146, and 305 μg/L) were sent to 115 and 38 European laboratories, respectively. International normalized ratio (INR) and activated partial thromboplastin time (APTT) were analyzed for all samples; thrombin time (TT) was analyzed specifically for Dabi and calibrated anti-activated factor X (anti-Xa) activity for Riva. We compared the results with patient samples. Results of Dabi samples were reported by 73 laboratories (13 INR and 9 APTT reagents) and Riva samples by 22 laboratories (5 INR and 4 APTT reagents). Both NOACs increased INR values; the increase was modest, albeit larger, for Dabi, with higher CV, especially with Quick (vs Owren) methods. Both NOACs dose-dependently prolonged the APTT. Again, the prolongation and CVs were larger for Dabi. The INR and APTT results varied reagent-dependently (P laboratories, respectively. The screening tests INR and APTT are suboptimal in assessing NOACs, having high reagent dependence and low sensitivity and specificity. They may provide information, if laboratories recognize their limitations. The variation will likely increase and the sensitivity differ in clinical samples. Specific assays measure NOACs accurately; however, few laboratories applied them. © 2013 American Association for Clinical Chemistry.

  14. Vulnerability assessment of skiing-dependent businesses to the effects of climate change in Banff and Jasper National Parks, Canada

    Science.gov (United States)

    Reynolds, David Michael

    This qualitative study examines the potential positive and negative socio-economic impacts that may emerge from the long-term effects of climate change on skiing-dependent businesses in Banff and Jasper National Parks, Canada. My goal was to determine whether or not skiing-related tourism in the parks in the 2020s and 2050s is more or less socio-economically vulnerable to the effects of climate change on snow cover, temperatures and ski season length at ski resorts in the parks. My study explored the level of awareness and personal perceptions of 60 skiing-dependent business managers about how the impact of climate change on ski resorts may influence the future socio-economics of ski tourism businesses. I employed a vulnerability assessment approach and adopted some elements of grounded theory. My primary data sources are interviews with managers and the outcome of the geographical factors index (GFI). Supporting methods include: an analysis and interpretation of climate model data and an interpretation of the economic analysis of skiing in the parks. The interview data were sorted and coded to establish concepts and findings by interview questions, while the GFI model rated and ranked 24 regional ski resorts in the Canadian Cordillera. The findings answered the research questions and helped me conclude what the future socio-economic vulnerability of skiing-dependent businesses in the parks may be. The interviews revealed that managers are not informed about climate change and they have not seen any urgency to consider the effects on business. The GFI revealed that the ski resorts in the parks ranked in the top ten of 24 ski resorts in the Cordillera based on 14 common geographical factors. The economic reports suggest skiing is the foundation of the winter economy in the parks and any impact on skiing would directly impact other skiing-dependent businesses. Research indicates that the effects of climate change may have less economic impact on skiing-dependent
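
    A GFI-style rating and ranking can be sketched as a weighted scoring index. The factors, weights, and resort scores below are hypothetical placeholders; the study's actual 14 geographical factors are not reproduced here.

```python
# Hypothetical factors with weights summing to 1 (not the study's 14 factors).
FACTORS = {"base_elevation": 0.3, "mean_snowfall": 0.4, "north_aspect": 0.3}

# Normalized 0-1 scores per resort (illustrative numbers).
RESORTS = {
    "Resort A": {"base_elevation": 0.9, "mean_snowfall": 0.8, "north_aspect": 0.6},
    "Resort B": {"base_elevation": 0.5, "mean_snowfall": 0.9, "north_aspect": 0.8},
    "Resort C": {"base_elevation": 0.4, "mean_snowfall": 0.5, "north_aspect": 0.9},
}

def gfi_score(scores):
    """Weighted sum of normalized factor scores."""
    return sum(w * scores[f] for f, w in FACTORS.items())

ranking = sorted(RESORTS, key=lambda r: gfi_score(RESORTS[r]), reverse=True)
print(ranking)  # highest-scoring resort first
```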

  15. Assessment of beet quality using a refractometric method. Ruebenqualitaetsbewertung mit Hilfe einer refraktometrischen Methode

    Energy Technology Data Exchange (ETDEWEB)

    Pollach, G; Hein, W; Roesner, G; Berninger, H; Kernchen, W

    1992-04-20

    We tested on 40 beet samples how far it might be possible to assess beet quality using refractometric and densimetric methods. Regarding the parameter molasses non-sugar on beet, a combination of Aluminum defecation and refractometry proved to be almost equivalent to methods based on non-sugar components. As well as for thick juice purity and molasses non-sugar on beet, formulae are given, assuming low Ca molasses, for molasses sugar and corrected sugar, respectively, on beet. By assuming a relative loss between beet and thick juice, very simple relationships were found. Practical tests in beet laboratories have not yet been carried out. (orig.)

  16. Defining groundwater-dependent ecosystems and assessing critical water needs for their foundational plant communities

    Science.gov (United States)

    Stella, J. C.

    2017-12-01

    In many water-limited regions, human water use in conjunction with increased climate variability threatens the sustainability of groundwater-dependent ecosystems (GDEs) and their plant communities. Identifying and delineating vulnerable GDEs and determining critical functional thresholds for their foundational species has proved challenging, but recent research across several disciplines shows great promise for reducing scientific uncertainty and increasing applicability to ecosystem and groundwater management. Combining interdisciplinary approaches provides insight into metrics that may serve as early indicators of ecosystem decline, or alternatively demonstrate lags in responses depending on scale or sensitivity, or that may even decouple over time (Fig. 1). At the plant scale, miniaturization of plant sap flow sensors and tensiometers allows non-destructive, continual measurements of plant water status in response to environmental stressors. Novel applications of proven tree-ring and stable isotope methods provide multi-decadal chronologies of radial growth, physiological function (using d13C ratios) and source water use (using d18O ratios) in response to annual variation in climate and subsurface water availability to plant roots. At a landscape scale, integration of disparate geospatial data such as hyperspectral imagery and LiDAR, as well as novel spectral mixing analysis, supports the development of water stress indices such as vegetation greenness and non-photosynthetic (i.e., dead) vegetation (Fig. 2), as well as change detection using time series (Fig. 3). Furthermore, increases in resolution across numerous data types make it increasingly possible to differentiate individual plant species, including sensitive taxa that serve as early warning indicators of ecosystem impairment. Combining and cross-calibrating these approaches provides insight into the full range of GDE response to environmental change, including increased climate drought

  17. A proposed assessment method for image of regional educational institutions

    Directory of Open Access Journals (Sweden)

    Kataeva Natalya

    2017-01-01

    Full Text Available The market of educational services under current Russian economic conditions comprises a huge variety of educational institutions, and it is already experiencing significant influence from the demographic situation in Russia. This means that higher education institutions are forced into tough competition for high school graduates. Increased competition in the educational market forces universities to find new methods of non-price competition, both in attracting potential students and throughout their educational and economic activities. The commercialization of education places universities in the same plane as commercial companies, which treat a positive perception of their image and reputation as a competitive advantage; this view is equally applicable in the strategic and current activities of higher education institutions to ensure the competitiveness of educational services and of the institution as a whole. Nevertheless, due to the lack of evidence-based proposals in this area, there is a need for scientific research to justify the organizational and methodological aspects of using image as a factor in the competitiveness of a higher education institution. In theory and in practice there are different methods and ways of evaluating a company’s image. The article provides a comparative assessment of existing valuation methods for corporate image and the author’s method of estimating the image of higher education institutions based on the key influencing factors. The method has been tested on the Vyatka State Agricultural Academy (Russia). The results also indicate the strengths and weaknesses of the institution, highlight ways of improving the image, and help adjust efforts for image improvement.

  18. A method to stabilize the temperature dependent performance of G-APD arrays

    Energy Technology Data Exchange (ETDEWEB)

    Huh, Yoonsuk [Molecular Imaging Research and Education (MiRe) Laboratory, Department of Electronic Engineering, Sogang University, Seoul (Korea, Republic of); Sungkyunkwan University, School of Medicine, Seoul (Korea, Republic of); Choi, Yong; Ho Jung, Jin; Jung, Jiwoong [Molecular Imaging Research and Education (MiRe) Laboratory, Department of Electronic Engineering, Sogang University, Seoul (Korea, Republic of)

    2015-02-01

    This paper presents a compensation method to stabilize the temperature-dependent performance of Geiger-mode Avalanche Photodiode (G-APD) arrays for Positron Emission Tomography (PET). The compensation method is used to identify the bias voltage range that provides stable performance even at different temperatures using the G-APD’s characteristics, and to control the photo-peak variation as a function of temperature using the preamplifier gain within the identified bias voltage range. A pair of G-APD detectors and temperature sensors were located in the temperature chamber and the preamplifiers, which can control the gain of the detectors using the digital potentiometer, were positioned outside the chamber. The performance of the G-APD detector, especially energy resolution and coincidence timing resolution, was characterized as a function of bias voltage at different temperatures from 20 °C to 40 °C at 5 °C increments; the energy resolution, coincidence timing resolution, and photo-peak position of all channels of G-APD PET detectors before and after the preamplifier gain correction were then measured and compared. The results of this study demonstrated that the optimal bias voltage range providing good energy and coincidence timing resolution, 12.1±1.2% and 1.30±0.09 ns, respectively, could be identified over this temperature range, and that the photo-peak variation and the performance at different temperatures could be stabilized by adjusting the preamplifier gain within the identified bias voltage range. We conclude that the proposed method is reliable and useful for the development of PET systems using G-APD arrays.
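
    The gain-correction idea can be sketched as a simple proportional adjustment. The reference peak channel and measured values below are assumed for illustration and are not taken from the paper.

```python
REF_PEAK = 511.0  # target photo-peak channel at the reference temperature (assumed)

def corrected_gain(current_gain, measured_peak, ref_peak=REF_PEAK):
    """G-APD gain falls as temperature rises, shifting the photo-peak down;
    scale the preamplifier gain so the peak returns to the reference channel."""
    return current_gain * ref_peak / measured_peak

# Example: at a higher temperature the photo-peak dropped to channel 460.
new_gain = corrected_gain(1.0, 460.0)   # ≈ 1.111, i.e. raise the gain by ~11%
```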

  19. Preliminary Groundwater Assessment using Electrical Method at Quaternary Deposits Area

    Science.gov (United States)

    Hazreek, Z. A. M.; Raqib, A. G. A.; Aziman, M.; Azhar, A. T. S.; Khaidir, A. T. M.; Fairus, Y. M.; Rosli, S.; Fakhrurrazi, I. M.; Izzaty, R. A.

    2017-08-01

    Demand for alternative water sources such as groundwater has increased in recent years. In the past, proper and systematic study of groundwater potential varied due to several constraints. Conventionally, tube well points were drilled based on the subjective judgment of several parties, which may lead to uncertainty in the project's success. Hence, this study applied an electrical method to investigate groundwater potential in a quaternary deposits area, using resistivity and induced polarization techniques. The electrical survey was performed with ABEM SAS4000 equipment using a pole-dipole array and 2.5 m electrode spacing. Resistivity raw data were analyzed using RES2DINV software. Groundwater could be detected based on resistivity and chargeability values, which varied at 10 - 100 Ωm and 0 - 1 ms respectively. Moreover, a suitable tube well location could be proposed, located 80 m from the first survey electrode in the west direction. Verification of both electrical results against established references showed good agreement, supporting the reliability of the results. Hence, establishing the electrical method in preliminary groundwater assessment can assist several parties with groundwater prospecting in the study area, being efficient in terms of cost, time, data coverage and sustainability.
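
    For a pole-dipole array like the one used here, apparent resistivity follows from the measured voltage, injected current and a geometric factor. The readings below are made-up illustrations, not survey data.

```python
import math

def rho_a_pole_dipole(n, a, dV, I):
    """Apparent resistivity for a pole-dipole array:
    geometric factor k = 2*pi*n*(n+1)*a, with electrode spacing a (m) and
    dipole separation factor n; rho_a = k * dV / I (ohm-m)."""
    k = 2.0 * math.pi * n * (n + 1) * a
    return k * dV / I

# Spacing 2.5 m as in the survey; n, dV, I are hypothetical readings.
rho = rho_a_pole_dipole(n=2, a=2.5, dV=0.02, I=0.1)
# Values in the 10-100 ohm-m band would be read as potential aquifer material
# per the interpretation ranges quoted in the abstract.
```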

  20. Energy dependency of mechanical properties on polymer impregnated concrete polymerized by radiation induced method

    International Nuclear Information System (INIS)

    Ono, Hironobu

    1978-01-01

    The purpose of this paper is to study the characteristics of polymerization of polymer impregnated concrete (PIC) polymerized by various radiation sources, each with its own characteristic energy: gamma rays (60Co, 1.25 MeV; 137Cs, 0.66 MeV), X-rays (0.88 MeV), and accelerated electron beams (4.0, 2.0 and 1.2 MeV). This experimental program was carried out to investigate the effect of radiation energy, density of cement mortar, optimum irradiation conditions and other factors which influence the polymerization and strength of PIC. The test results show that the energy dependency of the accelerated electrons had a remarkable effect on the relative absorbed energy and strength of specimens (Fig. 5), and the impregnation depth from the surface of specimens in ordinary mortar MMA-PIC can be estimated at about 10 mm, 6 mm, and 3 mm for 4.0, 2.0 and 1.2 MeV respectively, under curing at 50 Mrad (Fig. 2). It is also shown that the optimum total exposure dose for the electromagnetic wave methods is estimated at about 3 MR at 60Co (1 x 10^6 R/hr), 2 MR at 137Cs (4.5 x 10^4 R/hr) and 2 MR at X-ray (5 x 10^5 R/hr) at a curing temperature of 20 °C (Fig. 9, Fig. 10). The energy dependency is noticeable only when comparing the same kind of radiation source. (author)

  1. A new perspective on human health risk assessment: Development of a time dependent methodology and the effect of varying exposure durations

    International Nuclear Information System (INIS)

    Siirila, Erica R.; Maxwell, Reed M.

    2012-01-01

    We present a new Time Dependent Risk Assessment (TDRA) that stochastically considers how joint uncertainty and inter-individual variability (JUV) associated with human health risk change as a function of time. In contrast to traditional, time independent assessments of risk, this new formulation relays information on when the risk occurs, how long the duration of risk is, and how risk changes with time. Because the true exposure duration (ED) is often uncertain in a risk assessment, we also investigate how varying the magnitude of fixed size durations (ranging between 5 and 70 years) of this parameter affects the distribution of risk in both the time independent and dependent methodologies. To illustrate this new formulation and to investigate these mechanisms for sensitivity, an example of arsenic contaminated groundwater is used in conjunction with two scenarios of different environmental concentration signals resulting from rate dependencies in geochemical reactions. Cancer risk is computed and compared using environmental concentration ensembles modeled with sorption as 1) a linear equilibrium assumption (LEA) and 2) first order kinetics (Kin). Results show that the information attained in the new time dependent methodology reveals how the uncertainty in other time-dependent processes in the risk assessment may influence the uncertainty in risk. We show that individual susceptibility also affects how risk changes in time, information that would otherwise be lost in the traditional, time independent methodology. These results are especially pertinent for forecasting risk in time, and for risk managers who are assessing the uncertainty of risk. - Highlights: ► A human health, Time Dependent Risk Assessment (TDRA) methodology is presented. ► TDRA relays information on the magnitude, duration, and fluxes of risk in time. ► Kinetic and equilibrium concentration signals show sensitivity in TDRA results. ► In the TDRA results, individual susceptibility
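
    The time-dependent formulation can be sketched by sliding a fixed exposure-duration window across a concentration time series. The decaying plume signal, slope factor and intake parameters below are illustrative stand-ins, not the paper's ensembles.

```python
import numpy as np

years = np.arange(70)
conc = 5.0 + 3.0 * np.exp(-years / 20.0)   # mg/L, decaying plume signal (made up)
CSF = 1.5                                  # cancer slope factor, (mg/kg-day)^-1 (illustrative)
IR, BW = 2.0, 70.0                         # water intake (L/day), body weight (kg)
AT = 70 * 365.0                            # averaging time (days)

def risk_at(start, ed):
    """Incremental cancer risk for an exposure of `ed` years starting at `start`."""
    window = conc[start:start + ed]
    cdi = window.mean() * IR * (ed * 365.0) / (BW * AT)  # chronic daily intake
    return CSF * cdi

# Time-dependent risk: slide a fixed 30-year exposure duration across the record.
ED = 30
risk_series = [risk_at(t, ED) for t in range(len(years) - ED)]
# Risk is highest for exposures that begin early, while concentrations are high.
```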

  2. A comparison of three time-dependent wave packet methods for calculating electron–atom elastic scattering cross sections

    International Nuclear Information System (INIS)

    Judson, R.S.; McGarrah, D.B.; Sharafeddin, O.A.; Kouri, D.J.; Hoffman, D.K.

    1991-01-01

    We compare three time-dependent wave packet methods for performing elastic scattering calculations from screened Coulomb potentials. The three methods are the time-dependent amplitude density method (TDADM), what we term a Cayley-transform method (CTM), and the Chebyshev propagation method of Tal-Ezer and Kosloff. Both the TDADM and the CTM are based on a time-dependent integral equation for the wave function. In the first, we propagate the time-dependent amplitude density, |ζ(t)⟩ = U|ψ(t)⟩, where U is the interaction potential and |ψ(t)⟩ is the usual time-dependent wave function. In the other two, the wave function is propagated. As a numerical example, we calculate phase shifts and cross sections using a screened Coulomb, Yukawa-type potential over the range 200–1000 eV. One of the major advantages of time-dependent methods such as these is that we get scattering information over this entire range of energies from one propagation. We find that in most cases, all three methods yield comparable accuracy and are about equally efficient computationally. However, for l=0, where the Coulomb well is not screened by the centrifugal potential, the TDADM requires smaller grid spacings to maintain accuracy
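
    Of the three propagators, the Cayley-transform method is the easiest to sketch: it is the Crank-Nicolson step ψ(t+Δt) = (1 + iHΔt/2)⁻¹(1 − iHΔt/2)ψ(t), which is exactly unitary for Hermitian H. The grid, potential, and wave packet below are toy choices, not the paper's screened-Coulomb calculation.

```python
import numpy as np

N, dx, dt = 400, 0.1, 0.01
x = (np.arange(N) - N / 2) * dx
V = np.exp(-np.abs(x))                      # toy short-range (Yukawa-like) potential

# Tridiagonal Hamiltonian: -(1/2) d^2/dx^2 by central differences, plus V.
main = np.full(N, 1.0 / dx**2) + V
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

A = np.eye(N) + 0.5j * dt * H
B = np.eye(N) - 0.5j * dt * H
step = np.linalg.solve(A, B)                # Cayley transform A^{-1} B (dense for brevity)

psi = np.exp(-((x + 10.0) ** 2) / 2.0 + 2.0j * x)   # Gaussian packet moving right
psi /= np.linalg.norm(psi)
for _ in range(100):
    psi = step @ psi
print(np.linalg.norm(psi))                  # stays ≈ 1: the step is unitary
```

    In practice the tridiagonal system would be solved with a sparse or banded solver each step rather than forming the dense propagator.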

  3. Evaluation of Dried Urine Spot Method to Screen Cotinine among Tobacco Dependents: An Exploratory Study.

    Science.gov (United States)

    Jain, Raka; Quraishi, Rizwana; Verma, Arpita

    2017-01-01

    Assessment of cotinine, a metabolite of nicotine in body fluids, is an important approach for validating the self-report among tobacco users. Adaptation of assays on dried urine spots (DUSs) has advantages of ease of collection, transportation, minimal invasiveness, and requirement of small volume. The aim of the present study was to develop an efficient method for testing cotinine in DUSs and evaluating its clinical applicability. This involved optimization of conditions for detection, recovery, and stability of cotinine from dried urine, spotted on filter paper. Enzyme-linked immunosorbent assay was used for screening, whereas confirmation was done by gas chromatography. For clinical applicability, urine samples of tobacco users were tested. Water was found to be a suitable extracting solvent as compared to carbonate-bicarbonate buffer (pH 9.2) and saline. Screening was achieved with two punches taken from a 20 μl (diameter 1.3 cm) spotted urine sample, and confirmation was achieved with five complete circles each of 20 μl sample volume. The recovery was found to be 97% in water. Limit of detection for the method was found to be 100 ng/ml. No signs of significant degradation were found under all storage conditions. All the urine samples of tobacco users were found to be positive by a conventional method as well as DUSs, and the method proved to be efficient. DUS samples are a useful alternative for biological monitoring of recent nicotine use, especially in developing countries where sample logistics could be an important concern.

  4. Biomass Assessment: A Question of Method and Expertise

    International Nuclear Information System (INIS)

    Thivolle-Cazat, A.; Le Net, E.; Labalette, F.; Marsac, S.

    2013-01-01

    Whereas the new stakes on lignocellulosic biomass are often demand-oriented (heat, electricity, biofuels, etc.) mainly through public policies, the new equilibrium will depend also on the supply-side. This supply has to be understood as socio-economic and environmental targets combining many topics: multi-resources (agriculture, forest, 'dedicated coppices', by-products and wastes), available/potential quantities and costs, localisation, replacement/substitution effects (activities, lands), and supply-side stakeholders' behaviours. Many initiatives have been launched to grasp those dimensions through projects (National Research Agency, French Environment and Energy Management Agency, etc.). Many figures exist on the biomass assessment aspect but they are not clear enough and not comparable due to differences in definitions, scopes, data, parameters, geographical levels, reporting units, time-scale, etc. Regarding the characterisation of biomass supply chains, evaluations are often incomplete and lack methodological references. This article aims to focus on methodological key points and barriers to overcome, in order to get a better evaluation and understanding of biomass mobilisation expected by potential users and public authorities. (authors)

  5. Grey Matter Atrophy in Multiple Sclerosis: Clinical Interpretation Depends on Choice of Analysis Method.

    Directory of Open Access Journals (Sweden)

    Veronica Popescu

    Full Text Available Studies disagree on the location of grey matter (GM atrophy in the multiple sclerosis (MS brain.To examine the consistency between FSL, FreeSurfer, SPM for GM atrophy measurement (for volumes, patient/control discrimination, and correlations with cognition.127 MS patients and 50 controls were included and cortical and deep grey matter (DGM volumetrics were performed. Consistency of volumes was assessed with Intraclass Correlation Coefficient/ICC. Consistency of patients/controls discrimination was assessed with Cohen's d, t-tests, MANOVA and a penalized double-loop logistic classifier. Consistency of association with cognition was assessed with Pearson correlation coefficient and ANOVA. Voxel-based morphometry (SPM-VBM and FSL-VBM and vertex-wise FreeSurfer were used for group-level comparisons.The highest volumetry ICC were between SPM and FreeSurfer for cortical regions, and the lowest between SPM and FreeSurfer for DGM. The caudate nucleus and temporal lobes had high consistency between all software, while amygdala had lowest volumetric consistency. Consistency of patients/controls discrimination was largest in the DGM for all software, especially for thalamus and pallidum. The penalized double-loop logistic classifier most often selected the thalamus, pallidum and amygdala for all software. FSL yielded the largest number of significant correlations. DGM yielded stronger correlations with cognition than cortical volumes. Bilateral putamen and left insula volumes correlated with cognition using all methods.GM volumes from FreeSurfer, FSL and SPM are different, especially for cortical regions. While group-level separation between MS and controls is comparable, correlations between regional GM volumes and clinical/cognitive variables in MS should be cautiously interpreted.
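
    Consistency of volumes between software packages is typically quantified with a two-way random, single-measure intraclass correlation, ICC(2,1). A minimal sketch on made-up volumes (not the study's data) is:

```python
import numpy as np

def icc_2_1(X):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    X is an n-subjects-by-k-raters matrix (here: regions by software packages)."""
    n, k = X.shape
    grand = X.mean()
    row_means = X.mean(axis=1)
    col_means = X.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-subject MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-rater MS
    resid = X - row_means[:, None] - col_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))          # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Volumes (mL) of 4 regions measured by 3 packages, nearly consistent:
X = np.array([[7.1, 7.0, 7.2],
              [1.5, 1.4, 1.6],
              [12.0, 11.8, 12.1],
              [3.3, 3.2, 3.4]])
print(round(icc_2_1(X), 3))
```

    Identical columns give ICC = 1; systematic offsets between packages lower ICC(2,1), because absolute agreement treats rater variance as error.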

  6. Estimated work ability in warm outdoor environments depends on the chosen heat stress assessment metric.

    Science.gov (United States)

    Bröde, Peter; Fiala, Dusan; Lemke, Bruno; Kjellstrom, Tord

    2018-03-01

    With a view to occupational effects of climate change, we performed a simulation study on the influence of different heat stress assessment metrics on estimated workability (WA) of labour in warm outdoor environments. Whole-day shifts with varying workloads were simulated using as input meteorological records for the hottest month from four cities with prevailing hot (Dallas, New Delhi) or warm-humid conditions (Managua, Osaka), respectively. In addition, we considered the effects of adaptive strategies like shielding against solar radiation and different work-rest schedules assuming an acclimated person wearing light work clothes (0.6 clo). We assessed WA according to Wet Bulb Globe Temperature (WBGT) by means of an empirical relation of worker performance from field studies (Hothaps), and as allowed work hours using safety threshold limits proposed by the corresponding standards. Using the physiological models Predicted Heat Strain (PHS) and Universal Thermal Climate Index (UTCI)-Fiala, we calculated WA as the percentage of working hours with body core temperature and cumulated sweat loss below standard limits (38 °C and 7.5% of body weight, respectively) recommended by ISO 7933 and below conservative (38 °C; 3%) and liberal (38.2 °C; 7.5%) limits in comparison. ANOVA results showed that the different metrics, workload, time of day and climate type determined the largest part of WA variance. WBGT-based metrics were highly correlated and indicated slightly more constrained WA for moderate workload, but were less restrictive with high workload and for afternoon work hours compared to PHS and UTCI-Fiala. Though PHS showed unrealistic dynamic responses to rest from work compared to UTCI-Fiala, differences in WA assessed by the physiological models largely depended on the applied limit criteria. In conclusion, our study showed that the choice of the heat stress assessment metric impacts notably on the estimated WA. Whereas PHS and UTCI-Fiala can account for
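
    The ISO 7243 outdoor WBGT weighting is simple to state; the workability curve below is an illustrative logistic stand-in, not the actual Hothaps function, and its 26-36 °C limits are assumed for demonstration only.

```python
import math

def wbgt_outdoor(t_nwb, t_g, t_a):
    """Outdoor WBGT with solar load (ISO 7243): 0.7*Tnwb + 0.2*Tg + 0.1*Ta (°C)."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_a

def workability(wbgt, w_full=26.0, w_zero=36.0):
    """Fraction of the hour that can be worked (illustrative logistic ramp)."""
    if wbgt <= w_full:
        return 1.0
    if wbgt >= w_zero:
        return 0.0
    mid = (w_full + w_zero) / 2.0
    return 1.0 / (1.0 + math.exp(1.2 * (wbgt - mid)))

w = wbgt_outdoor(t_nwb=25.0, t_g=40.0, t_a=30.0)   # 28.5 °C
print(workability(w))
```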

  8. Techno-experiential design assessment and media experience database: A method for emerging technology assessment

    OpenAIRE

    Schick, Dan

    2005-01-01

    This thesis evaluates the Techno-Experiential Design Assessment (TEDA) for social research on new media and emerging technology. Dr. Roman Onufrijchuk developed TEDA to address the shortcomings of current methods designed for studying existing technologies. Drawing from the ideas of Canadian media theorist Marshall McLuhan, TEDA focuses on the environmental changes introduced by a new technology into a user's life. I describe the key components of the TEDA methodology and provide examples of ...

  9. A QUALITY ASSESSMENT METHOD FOR 3D ROAD POLYGON OBJECTS

    Directory of Open Access Journals (Sweden)

    L. Gao

    2015-08-01

    Full Text Available With the development of the economy, fast and accurate extraction of city roads is significant for GIS data collection and updating, remote sensing image interpretation, mapping, and spatial database updating. 3D GIS has attracted more and more attention from academia, industry and government with increasing requirements for interoperability and integration of different sources of data. The quality of 3D geographic objects is very important for spatial analysis and decision-making. This paper presents a method for the quality assessment of 3D road polygon objects which are created by integrating 2D road polygon data with LiDAR point clouds and other height information, such as spot height data, on Hong Kong Island. The quality of the created 3D road polygon data set is evaluated by vertical accuracy, geometric and attribute accuracy, connectivity error, undulation error and completeness error, and the final results are presented.
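
    Of the listed quality measures, vertical accuracy is the simplest to make concrete: an RMSE of polygon vertex heights against reference heights (e.g. LiDAR-derived). The numbers below are assumptions for illustration.

```python
import math

def vertical_rmse(polygon_z, reference_z):
    """Root-mean-square error of model heights against reference heights (m)."""
    diffs = [p - r for p, r in zip(polygon_z, reference_z)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rmse = vertical_rmse([10.2, 11.0, 9.8], [10.0, 11.1, 10.0])  # ≈ 0.173 m
```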

  10. Performance Assessment Method for a Forged Fingerprint Detection Algorithm

    Science.gov (United States)

    Shin, Yong Nyuo; Jun, In-Kyung; Kim, Hyun; Shin, Woochang

    The threat of invasion of privacy and of the illegal appropriation of information both increase with the expansion of the biometrics service environment to open systems. However, while certificates or smart cards can easily be cancelled and reissued if found to be missing, there is no way to recover the unique biometric information of an individual following a security breach. With the recognition that this threat factor may disrupt the large-scale civil service operations approaching implementation, such as electronic ID cards and e-Government systems, many agencies and vendors around the world continue to develop forged fingerprint detection technology, but no objective performance assessment method has, to date, been reported. Therefore, in this paper, we propose a methodology designed to evaluate the objective performance of the forged fingerprint detection technology that is currently attracting a great deal of attention.

  11. Experimental Methods for UAV Aerodynamic and Propulsion Performance Assessment

    Directory of Open Access Journals (Sweden)

    Stefan ANTON

    2015-06-01

    Full Text Available This paper presents an experimental method for assessing the performance and propulsion power of a UAV at several points based on telemetry. The points at which we make the estimations are chosen based on several criteria, and the following parameters are measured: airspeed, time-to-climb, altitude and the horizontal distance. With the estimated propulsion power and knowing the shaft motor power, the propeller efficiency is determined at several speed values. The shaft motor power was measured in the lab using the propeller as a brake. Many flights, using the same UAV configuration, were performed before extracting flight data, in order to reduce the instrumental or statistical errors. This paper highlights both the methodology of processing the data and the validation of theoretical results.
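
    The power estimation from telemetry can be sketched with a steady-climb balance: air power ≈ drag·V + weight·rate-of-climb, and propeller efficiency is air power over the bench-measured shaft power. All numbers below are hypothetical.

```python
W = 30.0        # aircraft weight (N)
V = 15.0        # airspeed from telemetry (m/s)
roc = 2.0       # rate of climb from altitude and time-to-climb (m/s)
drag = 2.5      # estimated airframe drag at V (N)
P_shaft = 140.0 # shaft motor power measured on the bench (W)

P_air = drag * V + W * roc       # power delivered to the air: 37.5 + 60 = 97.5 W
eta_prop = P_air / P_shaft       # propeller efficiency ≈ 0.70
print(eta_prop)
```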

  12. Comparative Assessment of Advanced Gas Hydrate Production Methods

    Energy Technology Data Exchange (ETDEWEB)

    M. D. White; B. P. McGrail; S. K. Wurstner

    2009-06-30

    Displacing natural gas and petroleum with carbon dioxide is a proven technology for producing conventional geologic hydrocarbon reservoirs, and producing additional yields from abandoned or partially produced petroleum reservoirs. Extending this concept to natural gas hydrate production offers the potential to enhance gas hydrate recovery with concomitant permanent geologic sequestration. Numerical simulation was used to assess a suite of carbon dioxide injection techniques for producing gas hydrates from a variety of geologic deposit types. Secondary hydrate formation was found to inhibit contact of the injected CO{sub 2} regardless of injectate phase state, thus diminishing the exchange rate due to pore clogging and hydrate zone bypass of the injected fluids. Additional work is needed to develop methods of artificially introducing high-permeability pathways in gas hydrate zones if injection of CO{sub 2} in either gas, liquid, or micro-emulsion form is to be more effective in enhancing gas hydrate production rates.

  13. Discourses and Practices in Teaching Methods and Assessment

    Directory of Open Access Journals (Sweden)

    Deepak Gopinath

    2015-02-01

    Full Text Available Translating the purposes of education into practice is particularly challenging for those who are new or have recently entered academia. By reflecting on my first years of teaching in higher education, I discuss two key aspects of my teaching practice: shifts in choice of teaching methods and a critique of different forms of assessment. Through the discussion, I argue that a teacher needs to be reflective on both these aspects and that such reflection needs to be carried out so that the student develops into a “self-directing,” “self-monitoring,” and “self-correcting” individual. At the end of the discussion, the relevance of a “project-based learning” approach starts to become significant in taking my pedagogical practice forward.

  14. Assessment of South African uranium resources: methods and results

    International Nuclear Information System (INIS)

    Camisani-Calzolari, F.A.G.M.; De Klerk, W.J.; Van der Merwe, P.J.

    1985-01-01

This paper deals primarily with the methods used by the Atomic Energy Corporation of South Africa in arriving at its assessment of South African uranium resources. The Resource Evaluation Group is responsible for this task, which is carried out on a continuous basis. The evaluation is done on a property-by-property basis and relies upon data submitted to the Nuclear Development Corporation of South Africa by the various companies involved in uranium mining and prospecting in South Africa. Resources are classified into Reasonably Assured (RAR), Estimated Additional (EAR) and Speculative (SR) categories as defined by the NEA/IAEA Steering Group on Uranium Resources. Each category is further divided into three cost classes: resources exploitable at less than $80/kg uranium, at $80-130/kg uranium and at $130-260/kg uranium. Resources are reported as quantities of uranium metal that could be recovered after mining and metallurgical losses have been taken into consideration. Resources in the RAR and EAR categories exploitable at costs of less than $130/kg uranium are now estimated at 460 000 t uranium, which represents some 14 per cent of the resources of WOCA (World Outside the Centrally Planned Economies Area). The evaluation of a uranium venture is carried out in various steps, of which the most important, in order of implementation, are: geological interpretation, assessment of in situ resources using techniques ranging from manual contouring of values to geostatistics, feasibility studies, and estimation of recoverable resources. Because the choice of an evaluation method is, to some extent, dictated by statistical considerations, frequency distribution curves of the uranium grade variable are illustrated and discussed for characteristic deposits.
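The NEA/IAEA cost categories quoted above can be expressed as a simple bucketing function; this is only an illustration of the classification scheme described in the abstract, and the function name and return strings are assumptions:

```python
def cost_category(cost_per_kg_usd):
    """Bucket a recoverable-resource cost estimate into the three
    NEA/IAEA cost classes quoted in the abstract (USD per kg uranium)."""
    if cost_per_kg_usd < 80:
        return "<$80/kg U"
    elif cost_per_kg_usd <= 130:
        return "$80-130/kg U"
    elif cost_per_kg_usd <= 260:
        return "$130-260/kg U"
    return "above reported categories"

# The 460 000 t U figure cited above covers RAR + EAR below $130/kg,
# i.e. the first two buckets returned by this function.
```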

  15. Assessment of metal artifact reduction methods in pelvic CT

    Energy Technology Data Exchange (ETDEWEB)

    Abdoli, Mehrsima [Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, Amsterdam 1066 CX (Netherlands); Mehranian, Abolfazl [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva CH-1211 (Switzerland); Ailianou, Angeliki; Becker, Minerva [Division of Radiology, Geneva University Hospital, Geneva CH-1211 (Switzerland); Zaidi, Habib, E-mail: habib.zaidi@hcuge.ch [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva CH-1211 (Switzerland); Geneva Neuroscience Center, Geneva University, Geneva CH-1205 (Switzerland); Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Hanzeplein 1, Groningen 9700 RB (Netherlands)

    2016-04-15

    Purpose: Metal artifact reduction (MAR) produces images with improved quality potentially leading to confident and reliable clinical diagnosis and therapy planning. In this work, the authors evaluate the performance of five MAR techniques for the assessment of computed tomography images of patients with hip prostheses. Methods: Five MAR algorithms were evaluated using simulation and clinical studies. The algorithms included one-dimensional linear interpolation (LI) of the corrupted projection bins in the sinogram, two-dimensional interpolation (2D), a normalized metal artifact reduction (NMAR) technique, a metal deletion technique, and a maximum a posteriori completion (MAPC) approach. The algorithms were applied to ten simulated datasets as well as 30 clinical studies of patients with metallic hip implants. Qualitative evaluations were performed by two blinded experienced radiologists who ranked overall artifact severity and pelvic organ recognition for each algorithm by assigning scores from zero to five (zero indicating totally obscured organs with no structures identifiable and five indicating recognition with high confidence). Results: Simulation studies revealed that 2D, NMAR, and MAPC techniques performed almost equally well in all regions. LI falls behind the other approaches in terms of reducing dark streaking artifacts as well as preserving unaffected regions (p < 0.05). Visual assessment of clinical datasets revealed the superiority of NMAR and MAPC in the evaluated pelvic organs and in terms of overall image quality. Conclusions: Overall, all methods, except LI, performed equally well in artifact-free regions. Considering both clinical and simulation studies, 2D, NMAR, and MAPC seem to outperform the other techniques.
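As a rough illustration of the one-dimensional linear interpolation (LI) approach evaluated above, metal-corrupted sinogram bins can be replaced, row by row, by interpolating from the nearest uncorrupted neighbours. This is a minimal sketch, not the paper's implementation; the function name, array layout, and mask convention are assumptions:

```python
import numpy as np

def li_mar(sinogram, metal_mask):
    """LI-style metal artifact reduction: for each projection row,
    replace bins flagged by `metal_mask` with values linearly
    interpolated from the unflagged bins in the same row.
    Both arrays are assumed to be (n_angles, n_bins)."""
    out = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i].astype(bool)
        # Interpolate only when the row has both corrupted and clean bins.
        if bad.any() and not bad.all():
            out[i, bad] = np.interp(bins[bad], bins[~bad], out[i, ~bad])
    return out
```

The corrected sinogram would then be reconstructed with the scanner's usual reconstruction algorithm; the 2D, NMAR, and MAPC methods compared in the paper refine this basic idea.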

  16. Energy-dependency correction factors for the digital dosimeters using in NMD environment dose assessment

    International Nuclear Information System (INIS)

    Lai, Y.C.; Huang, Y. F.; Chen, Y.W.

    2008-01-01

Full text: Short-term environmental dose-rate assessments using real-time digital dosimeters within a Nuclear Medicine Department (NMD) are gaining wider use worldwide. In the past, conventional ion-chamber survey meters dominated environmental dose-rate evaluation. Although the ion chamber suffers less from gamma energy dependence, it is less sensitive than other digital dosimeters, and its bulky design can hardly be made pocket-sized. With advances in modern electronics and shrinking physical size, real-time personal dosimeters now commonly use a miniature G-M counter, a solid-state diode sensor, or even a NaI(Tl) scintillation device for ambient radiation monitoring. A radiation sensor operated in pulse mode cannot by itself determine doses or dose rates, since each digital pulse carries no energy information about the gamma ray that produced it; this is especially true for G-M counters and diode sensors. The raw count rates measured by a pulse-mode device depend heavily on the packaging of the sensor, which is designed to make it less energy-sensitive. Doses or dose rates are then calculated using a built-in conversion factor, based on Cs-137 beam-source calibration data from the various manufacturers, to convert raw counts into a nominal dose or dose-rate unit. In this study, we focus on the low-energy response of digital dosimeters of several brands currently in in-house use. Tc-99m and I-131, in point-source and water-phantom detection configurations, were deployed to simulate our NMD outpatients for environmental radiation monitoring. The energy-dependent correction factors of the digital dosimeters are evaluated using calibrated Tc-99m or I-131 standard sources, whose gamma energies are much lower than the 661 keV of the Cs-137 beam source. In the near future, we would
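The correction-factor idea described above, a ratio between the known dose rate delivered by a calibrated low-energy source and the reading a pulse-mode dosimeter reports with its built-in Cs-137 conversion factor, can be illustrated with hypothetical numbers (all values below are invented for illustration only):

```python
def correction_factor(true_dose_rate_uSv_h, indicated_dose_rate_uSv_h):
    """Energy-dependent correction factor: ratio of the known dose rate
    from a calibrated source (e.g. Tc-99m at 140 keV) to the dose rate
    the dosimeter indicates via its Cs-137-based conversion factor."""
    return true_dose_rate_uSv_h / indicated_dose_rate_uSv_h

# Hypothetical example: the dosimeter over-responds to low-energy photons,
# indicating 13.5 uSv/h where the calibrated true value is 10.0 uSv/h.
cf = correction_factor(10.0, 13.5)
corrected = 13.5 * cf  # multiplying any reading by cf recovers the true rate
```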

  17. Assessment of the reliability of ultrasonic inspection methods

    International Nuclear Information System (INIS)

    Haines, N.F.; Langston, D.B.; Green, A.J.; Wilson, R.

    1982-01-01

The reliability of NDT techniques has remained an open question for many years. A reliable technique may be defined as one that, when rigorously applied by a number of inspection teams, consistently finds and then correctly sizes all defects of concern. In this paper we report an assessment of the reliability of defect detection by manual ultrasonic methods applied to the inspection of thick-section pressure vessel weldments. Initially we consider the available data relating to the inherent physical capability of ultrasonic techniques to detect cracks in weldments, and then, independently, we assess the likely variability in team-to-team performance when several teams are asked to follow the same specified test procedure. The two aspects of 'capability' and 'variability' are brought together to provide quantitative estimates of the overall reliability of ultrasonic inspection of thick-section pressure vessel weldments based on currently existing data. The final section of the paper considers current research programmes on reliability and presents a view on how these will help to further improve NDT reliability. (author)
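One simple way to picture combining intrinsic capability with team-to-team variability is as a product of probabilities; this toy calculation assumes independence between the two factors, which the paper does not state, and all numbers are invented:

```python
# Probability the technique can physically detect a given defect
# (the 'capability' aspect; illustrative value).
p_capability = 0.95

# Per-team probability of applying the procedure correctly
# (the 'variability' aspect; illustrative values).
team_application = [0.90, 0.85, 0.95, 0.80]

# Expected probability that a randomly assigned team detects the defect,
# assuming capability and correct application are independent.
p_overall = p_capability * sum(team_application) / len(team_application)
```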

  18. Assessing semantic similarity of texts - Methods and algorithms

    Science.gov (United States)

    Rozeva, Anna; Zerkova, Silvia

    2017-12-01

Assessing the semantic similarity of texts is an important part of text-related applications such as educational systems, information retrieval, and text summarization. The task is performed by sophisticated analysis that implements text-mining techniques. Text mining involves several pre-processing steps that yield a structured, representative model of the documents in a corpus by extracting and selecting the features characterizing their content. Generally the model is vector-based and enables further analysis with knowledge-discovery approaches. Algorithms and measures are used for assessing texts at the syntactic and semantic levels. An important text-mining method and similarity measure is latent semantic analysis (LSA), which reduces the dimensionality of the document vector space and better captures the text semantics. The mathematical background of LSA for deriving the meaning of the words in a given text by exploring their co-occurrence is examined. The algorithm for obtaining the vector representation of words and their corresponding latent concepts in a reduced multidimensional space, as well as the similarity calculation, are presented.
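The core LSA steps described above, building a term-document matrix, truncating its SVD to a low-rank latent-concept space, and comparing documents by cosine similarity, can be sketched with NumPy. The toy corpus and variable names are illustrative, not from the paper:

```python
import numpy as np

# Toy corpus: two related documents and one unrelated one.
docs = ["data mining finds patterns",
        "text mining extracts patterns from text",
        "the cat sat on the mat"]

# Term-document count matrix A (terms x documents).
vocab = sorted({w for d in docs for w in d.split()})
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# Truncated SVD: A ~= U_k S_k V_k^T. The rows of (S_k V_k^T)^T are the
# document coordinates in the k-dimensional latent concept space.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The two mining-related documents should come out far more similar to
# each other than either is to the unrelated third document.
sim_01 = cosine(doc_vecs[0], doc_vecs[1])
sim_02 = cosine(doc_vecs[0], doc_vecs[2])
```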

  19. [The Confusion Assessment Method: Transcultural adaptation of a French version].

    Science.gov (United States)

    Antoine, V; Belmin, J; Blain, H; Bonin-Guillaume, S; Goldsmith, L; Guerin, O; Kergoat, M-J; Landais, P; Mahmoudi, R; Morais, J A; Rataboul, P; Saber, A; Sirvain, S; Wolfklein, G; de Wazieres, B

    2018-04-03

The Confusion Assessment Method (CAM) is a validated key tool in clinical practice and research programs for diagnosing delirium and assessing its severity. There is no validated French version of the CAM training manual and coding guide (Inouye SK). The aim of this study was to establish a consensual French version of the CAM and its manual. Cross-cultural adaptation was used to achieve equivalence between the original version and a French-adapted version of the CAM manual. A rigorous process was conducted, including control of the cultural adequacy of the tool's components, double forward and back translations, reconciliation, expert committee review (including bilingual translators of different nationalities, a linguist, highly qualified clinicians, and methodologists) and pretesting. A consensual French version of the CAM was achieved. Implementation of the French version of the CAM in daily clinical practice will enable optimal diagnosis of delirium and enhance communication between health professionals in French-speaking countries. Validity and psychometric properties are being tested in a French multicenter cohort, opening up new perspectives for improved quality of care and research programs in French-speaking countries. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  20. Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol.2

    International Nuclear Information System (INIS)

    Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.; Desrosiers, A.E.

    1983-05-01

As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM, including the methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. The radionuclides considered consist primarily of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.
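Rapid dose-assessment tools of this kind typically build on simple atmospheric dispersion formulas. The sketch below is a generic ground-level Gaussian-plume centerline concentration calculation, offered only as background; it is not IRDAM's actual model, and all parameter values are illustrative:

```python
import math

def centerline_concentration(Q, u, sigma_y, sigma_z, H):
    """Ground-level centerline air concentration (Bq/m^3) from the
    standard Gaussian plume formula: release rate Q (Bq/s), wind speed
    u (m/s), lateral/vertical dispersion parameters sigma_y, sigma_z (m),
    and effective release height H (m)."""
    return (Q / (math.pi * sigma_y * sigma_z * u)) * math.exp(-H**2 / (2 * sigma_z**2))

# Illustrative elevated release evaluated at some downwind distance
# whose dispersion parameters are sigma_y = 35 m, sigma_z = 18 m.
c = centerline_concentration(Q=1e9, u=3.0, sigma_y=35.0, sigma_z=18.0, H=30.0)
```

A dose-assessment code would multiply such concentrations by nuclide-specific dose conversion factors; IRDAM's own models and simplifications are described in the report itself.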