WorldWideScience

Sample records for points sensitivity analyses

  1. Uncertainty and Sensitivity Analyses Plan

    International Nuclear Information System (INIS)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models, where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.

  2. EPA Region 1 Environmentally Sensitive Areas (Points)

    Data.gov (United States)

    U.S. Environmental Protection Agency — This coverage represents point equivalents of environmentally sensitive areas in EPA New England. This coverage represents polygon equivalents of environmentally...

  3. Sensitivity in risk analyses with uncertain numbers.

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, W. Troy; Ferson, Scott

    2006-06-01

Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a "pinching" strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
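The "pinching" strategy described above can be sketched in a few lines: hypothetically replace the epistemic interval on an input with a point value and compare the resulting output uncertainty against the unpinched case. The toy model y = x², the interval bounds, and all numbers below are illustrative assumptions, not taken from the report.

```python
import random

random.seed(1)

def output_range(mu_interval, n=2000):
    """Width of the output interval when the input mean is only known to
    lie in mu_interval (epistemic) while the input itself is normally
    distributed around that mean (aleatory). Toy model: y = x**2."""
    lo_mu, hi_mu = mu_interval
    results = []
    for mu in (lo_mu, hi_mu):               # envelope over the epistemic interval
        samples = [random.gauss(mu, 0.5) ** 2 for _ in range(n)]
        results.append(sum(samples) / n)    # mean response at this mu
    return max(results) - min(results)

base = output_range((1.0, 3.0))     # full epistemic uncertainty on the mean
pinched = output_range((2.0, 2.0))  # epistemic interval pinched to a point
reduction = 1 - pinched / base      # fraction of output uncertainty removed
```

Ranking inputs by the `reduction` each pinching achieves identifies where reducing epistemic uncertainty would pay off most.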

  4. Sensitivity of surface meteorological analyses to observation networks

    Science.gov (United States)

    Tyndall, Daniel Paul

    A computationally efficient variational analysis system for two-dimensional meteorological fields is developed and described. This analysis approach is most efficient when the number of analysis grid points is much larger than the number of available observations, such as for large domain mesoscale analyses. The analysis system is developed using MATLAB software and can take advantage of multiple processors or processor cores. A version of the analysis system has been exported as a platform independent application (i.e., can be run on Windows, Linux, or Macintosh OS X desktop computers without a MATLAB license) with input/output operations handled by commonly available internet software combined with data archives at the University of Utah. The impact of observation networks on the meteorological analyses is assessed by utilizing a percentile ranking of individual observation sensitivity and impact, which is computed by using the adjoint of the variational surface assimilation system. This methodology is demonstrated using a case study of the analysis from 1400 UTC 27 October 2010 over the entire contiguous United States domain. The sensitivity of this approach to the dependence of the background error covariance on observation density is examined. Observation sensitivity and impact provide insight on the influence of observations from heterogeneous observing networks as well as serve as objective metrics for quality control procedures that may help to identify stations with significant siting, reporting, or representativeness issues.
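The adjoint-based observation sensitivity described above can be illustrated with a toy one-dimensional analysis. In this sketch (grid size, covariances, and observation values are all hypothetical), the analysis is x_a = x_b + K(y − Hx_b) with gain K = BHᵀ(HBHᵀ + R)⁻¹, and the sensitivity of a scalar analysis functional (here the domain mean) to each observation comes from the transpose (adjoint) of K:

```python
import numpy as np

# Toy 1D surface analysis: 5 grid points, 2 observations at points 1 and 3.
n = 5
H = np.zeros((2, n)); H[0, 1] = H[1, 3] = 1.0             # observation operator
x = np.arange(n, dtype=float)
B = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 1.5**2)  # background error cov.
R = 0.25 * np.eye(2)                                      # observation error cov.

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)              # gain matrix
xb = np.zeros(n)                                          # background field
y = np.array([1.0, -0.5])                                 # observed increments
xa = xb + K @ (y - H @ xb)                                # analysis

# Adjoint sensitivity of the domain-mean analysis to each observation:
# d(mean xa)/dy = K^T @ (1/n) ones; larger magnitude = more influential ob.
sens = K.T @ (np.ones(n) / n)
```

With both observations placed symmetrically and given equal error variance, their sensitivities come out equal; in a real network, ranking the entries of `sens` is what produces the percentile ranking of observation influence.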

  5. Analysing Music with Point-Set Compression Algorithms

    DEFF Research Database (Denmark)

    Meredith, David

    2016-01-01

    Several point-set pattern-discovery and compression algorithms designed for analysing music are reviewed and evaluated. Each algorithm takes as input a point-set representation of a score in which each note is represented as a point in pitch-time space. Each algorithm computes the maximal...... and sections in pieces of classical music. On the first task, the best-performing algorithms achieved success rates of around 84%. In the second task, the best algorithms achieved mean F1 scores of around 0.49, with scores for individual pieces rising as high as 0.71....

  6. Measuring sensitivity in pharmacoeconomic studies. Refining point sensitivity and range sensitivity by incorporating probability distributions.

    Science.gov (United States)

    Nuijten, M J

    1999-07-01

The aim of the present study is to describe a refinement of a previously presented method, based on the concept of point sensitivity, to deal with uncertainty in economic studies. The original method was refined by the incorporation of probability distributions, which allow a more accurate assessment of the level of uncertainty in the model. In addition, a bootstrap method was used to create a probability distribution for a fixed input variable based on a limited number of data points. The original method was limited in that the sensitivity measurement was based on a uniform distribution of the variables, and in that the overall sensitivity measure was based on a subjectively chosen range, which excludes the impact of values outside the range on the overall sensitivity. The concepts of the refined method were illustrated using a Markov model of depression. The application of the refined method substantially changed the ranking of the most sensitive variables compared with the original method. The response rate became the most sensitive variable instead of the 'per diem' for hospitalisation. The refinement of the original method yields sensitivity outcomes that better reflect the real uncertainty in economic studies.
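The bootstrap step mentioned above, building a probability distribution for a fixed input variable from a limited number of data points, can be sketched as follows; the six cost values are invented for illustration and are not from the study:

```python
import random

random.seed(0)

# Hypothetical: only 6 observed per-diem hospitalisation costs are available.
observed = [410.0, 385.0, 450.0, 402.0, 430.0, 395.0]

def bootstrap_means(data, n_boot=5000):
    """Resample with replacement to approximate the sampling
    distribution of the mean of the input variable."""
    n = len(data)
    return [sum(random.choice(data) for _ in range(n)) / n
            for _ in range(n_boot)]

dist = bootstrap_means(observed)
dist.sort()
ci = (dist[int(0.025 * len(dist))], dist[int(0.975 * len(dist))])  # 95% CI
```

The resulting `dist` can then stand in for the fixed point estimate in a probabilistic sensitivity analysis.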

  7. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are recommended.

  8. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
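The two uncertainty-propagation approaches contrasted above can be compared on a toy model (the model and parameter values are hypothetical): the deterministic route uses a first-order Taylor series, Var(y) ≈ (∂y/∂a)²σ_a² + (∂y/∂b)²σ_b², while the statistical route estimates the same variance by simulation:

```python
import math
import random

random.seed(2)

# Hypothetical model with two uncertain, independent inputs.
def model(a, b):
    return a * math.exp(-b)

a0, sa = 2.0, 0.2     # mean and standard deviation of a
b0, sb = 1.0, 0.1     # mean and standard deviation of b

# Deterministic: first-order Taylor-series propagation of variance.
dyda = math.exp(-b0)              # partial derivative wrt a at the means
dydb = -a0 * math.exp(-b0)        # partial derivative wrt b at the means
var_taylor = dyda**2 * sa**2 + dydb**2 * sb**2

# Statistical: Monte Carlo simulation through the model.
ys = [model(random.gauss(a0, sa), random.gauss(b0, sb)) for _ in range(20000)]
mean = sum(ys) / len(ys)
var_mc = sum((y - mean)**2 for y in ys) / (len(ys) - 1)
```

For this mildly nonlinear model the two variance estimates agree closely; strongly nonlinear models are where the simulation-based approach earns its extra cost.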

  9. Indian Point 2 steam generator tube rupture analyses

    International Nuclear Information System (INIS)

    Dayan, A.

    1985-01-01

Analyses were conducted with RETRAN-02 to study consequences of steam generator tube rupture (SGTR) events. The Indian Point, Unit 2, power plant (IP2, PWR) was modeled as two asymmetric loops consisting of 27 volumes and 37 junctions. The break section was modeled once, conservatively, as a 150% flow area opening at the wall of the steam generator cold leg plenum, and once as a 200% double-ended tube break. Results revealed 60% overprediction of breakflow rates by the traditional conservative model. Two SGTR transients were studied, one with low-pressure reactor trip and one with an earlier reactor trip via over-temperature ΔT. The former is more typical of a plant with a low reactor average temperature such as IP2. Transient analyses for a single tube break event over 500 seconds indicated continued primary subcooling and no need for steam line pressure relief. In addition, SGTR transients with reactor trip while the pressurizer still contains water were found to favorably reduce depressurization rates. Comparison of the conservative results with independent LOFTRAN predictions showed good agreement.

  10. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.

    2014-10-28

PORFLOW-related analyses supporting a Sensitivity Analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor, 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses that may be useful in determining the distribution of Tc-99 in the various SDUs over time and in determining flow balances for the SDUs.

  11. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Hudson River: SENSITIV (Sensitive Area Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data for sensitive areas along the Hudson River. Vector points in this data set represent sensitive areas. This data set...

  12. Balancing data sharing requirements for analyses with data sensitivity

    Science.gov (United States)

    Jarnevich, C.S.; Graham, J.J.; Newman, G.J.; Crall, A.W.; Stohlgren, T.J.

    2007-01-01

Data sensitivity can pose a formidable barrier to data sharing. Knowledge of species' current distributions from data sharing is critical for the creation of watch lists and an early warning/rapid response system, and for model generation for the spread of invasive species. We have created an on-line system to synthesize disparate datasets of non-native species locations that includes a mechanism to account for data sensitivity. Data contributors are able to mark their data as sensitive. These data are then 'fuzzed' to quarter-quadrangle grid cells in mapping applications and downloaded files, but the actual locations remain available for analyses. We propose that this system overcomes the hurdles to data sharing posed by sensitive data. © 2006 Springer Science+Business Media B.V.

  13. Local sensitivity of per-recruit fishing mortality reference points.

    Science.gov (United States)

    Cadigan, N G; Wang, S

    2016-12-01

We study the sensitivity of fishery management per-recruit harvest rates, which may be part of a quantitative harvest strategy designed to achieve some objective for catch or population size. We use a local influence sensitivity analysis to derive equations that describe how these reference harvest rates are affected by perturbations to productivity processes. These equations give a basic theoretical understanding of sensitivity that can be used to predict the likely impacts of future changes in productivity. Our results indicate that per-recruit reference harvest rates are more sensitive to perturbations when the equilibrium catch or population size per recruit, as functions of the harvest rate, have less curvature near the reference point. Overall, our results suggest that per-recruit reference points will, with some exceptions, usually increase if (1) growth rates increase, (2) natural mortality rates increase, or (3) fishery selectivity shifts to an older age.

  14. Tar dew point analyser as a tool in biomass gasification

    Energy Technology Data Exchange (ETDEWEB)

    Vreugdenhil, B.J.; Kuipers, J. [ECN Biomass, Coal and Environmental Research, Petten (Netherlands)

    2008-08-15

Application of the Tar Dew Point Analyser (TDA) in different biomass-based gasification systems and subsequent gas cleaning setups has been proven feasible. Such systems include BFB gasifiers, CFB gasifiers and fixed bed gasifiers, with tar crackers or different scrubbers for tar removal. Tar dew points obtained with the TDA give direct insight into the performance of the gas cleaning section and help prevent tar-related problems due to condensation. The current TDA is capable of measuring tar dew points between -20 and 200°C. This manuscript presents results from 4 different gasification setups. The range of measured tar dew points is -7 to 164°C, with comparable results from the calculated dew points based on the SPA measurements. Further detail is presented on the differences between TDA and SPA results, and explanations are given for deviations that occurred. Improvements for the TDA regarding future work are also presented.

  15. Probabilistic and Nonprobabilistic Sensitivity Analyses of Uncertain Parameters

    Directory of Open Access Journals (Sweden)

    Sheng-En Fang

    2014-01-01

Parameter sensitivity analyses have been widely applied to industrial problems for evaluating parameter significance, effects on responses, uncertainty influence, and so forth. In the interest of simple implementation and computational efficiency, this study has developed two sensitivity analysis methods corresponding to the situations with or without sufficient probability information. The probabilistic method is established with the aid of the stochastic response surface, and the mathematical derivation proves that the coefficients of the first-order items embody the parameter main effects on the response. Simultaneously, a nonprobabilistic interval-analysis-based method is brought forward for the circumstance when the parameter probability distributions are unknown. The two methods have been verified against a numerical beam example, with their accuracy compared to that of a traditional variance-based method. The analysis results demonstrate the reliability and accuracy of the developed methods, and their suitability for different situations is also discussed.
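The response-surface idea, in which first-order coefficients capture parameter main effects, can be sketched as below. The black-box model, its coefficients, and the sample size are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical black-box model in which x1 matters much more than x2.
def model(x1, x2):
    return 3.0 * x1 + 0.5 * x2 + 0.1 * x1 * x2

# Sample both parameters with unit variance so coefficients are comparable.
X = rng.normal(size=(200, 2))
y = model(X[:, 0], X[:, 1])

# Least-squares fit of a first-order response surface y ~ c0 + c1*x1 + c2*x2.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
c0, c1, c2 = coef   # c1 and c2 approximate the parameter main effects
```

Ranking |c1| against |c2| reproduces the intended sensitivity ordering; the small interaction term shows up only as residual noise in this first-order surface.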

  16. Sensitivity analyses of fast reactor systems including thorium and uranium

    International Nuclear Information System (INIS)

    Marable, J.H.; Weisbin, C.R.

    1978-01-01

The Cross Section Evaluation Working Group (CSEWG) has, in conjunction with the development of the fifth version of ENDF/B, assembled new evaluations for ²³²Th and ²³³U. It is the purpose of this paper to describe briefly some of the more important features of these evaluations relative to ENDF/B-4, to project the change in reactor performance based upon the newer evaluated files and sensitivity coefficients for interesting design problems, and to indicate preliminary results from ongoing uncertainty analyses.

  17. Sensitivity and uncertainty analyses in aging risk-based prioritizations

    International Nuclear Information System (INIS)

    Hassan, M.; Uryas'ev, S.; Vesely, W.E.

    1993-01-01

Aging risk evaluations of nuclear power plants using Probabilistic Risk Analyses (PRAs) involve assessments of the impact of aging structures, systems, and components (SSCs) on plant core damage frequency (CDF). These assessments can be used to prioritize the contributors to aging risk, reflecting the relative risk potential of the SSCs. Aging prioritizations are important for identifying the SSCs contributing most to plant risk and can provide a systematic basis on which aging risk control and management strategies for a plant can be developed. However, these prioritizations are subject to variabilities arising from uncertainties in data and/or from various modeling assumptions. The objective of this paper is to present an evaluation of the sensitivity of aging prioritizations of active components to uncertainties in aging risk quantifications. Approaches for robust prioritization of SSCs that are less susceptible to these uncertainties are also presented.

  18. Sensitivity analyses on in-vessel hydrogen generation for KNGR

    International Nuclear Information System (INIS)

    Kim, See Darl; Park, S.Y.; Park, S.H.; Park, J.H.

    2001-03-01

Sensitivity analyses for in-vessel hydrogen generation, using the MELCOR program, are described in this report for the Korean Next Generation Reactor. The typical accident sequences of a station blackout and a large LOCA scenario are selected. A lower head failure model, a Zircaloy oxidation reaction model and a B4C reaction model are considered for the sensitivity parameters. For the base case, 1273.15 K for the failure temperature of the penetrations or the lower head, the Urbanic-Heidrich correlation for the Zircaloy oxidation reaction model, and the B4C reaction model are used. Case 1 used 1650 K as the failure temperature for the penetrations, and Case 2 considered creep rupture instead of penetration failure. Case 3 used the MATPRO-EG&G correlation for the Zircaloy oxidation reaction model, and Case 4 turned off the B4C reaction model. The results of the studies are summarized below: (1) When the penetration failure temperature is higher, or the creep rupture failure model is considered, the amount of hydrogen increases for both sequences. (2) When the MATPRO-EG&G correlation for Zircaloy oxidation is considered, the amount of hydrogen is less than with the Urbanic-Heidrich correlation (base case) for both scenarios. (3) When the B4C reaction model is turned off, the amount of hydrogen decreases for both sequences.

  19. Uncertainty and sensitivity analyses of ballast life-cycle cost and payback period

    OpenAIRE

    Mcmahon, James E.

    2000-01-01

The paper introduces an innovative methodology for evaluating the relative significance of energy-efficient technologies applied to fluorescent lamp ballasts. The method involves replacing the point estimates of life cycle cost of the ballasts with uncertainty distributions reflecting the whole spectrum of possible costs, and the assessed probability associated with each value. The results of uncertainty and sensitivity analyses will help analysts reduce effort in data collection and carry on a...

  20. Point process analyses of variations in smoking rate by setting, mood, gender, and dependence

    Science.gov (United States)

    Shiffman, Saul; Rathbun, Stephen L.

    2010-01-01

    The immediate emotional and situational antecedents of ad libitum smoking are still not well understood. We re-analyzed data from Ecological Momentary Assessment using novel point-process analyses, to assess how craving, mood, and social setting influence smoking rate, as well as assessing the moderating effects of gender and nicotine dependence. 304 smokers recorded craving, mood, and social setting using electronic diaries when smoking and at random nonsmoking times over 16 days of smoking. Point-process analysis, which makes use of the known random sampling scheme for momentary variables, examined main effects of setting and interactions with gender and dependence. Increased craving was associated with higher rates of smoking, particularly among women. Negative affect was not associated with smoking rate, even in interaction with arousal, but restlessness was associated with substantially higher smoking rates. Women's smoking tended to be less affected by negative affect. Nicotine dependence had little moderating effect on situational influences. Smoking rates were higher when smokers were alone or with others smoking, and smoking restrictions reduced smoking rates. However, the presence of others smoking undermined the effects of restrictions. The more sensitive point-process analyses confirmed earlier findings, including the surprising conclusion that negative affect by itself was not related to smoking rates. Contrary to hypothesis, men's and not women's smoking was influenced by negative affect. Both smoking restrictions and the presence of others who are not smoking suppress smoking, but others’ smoking undermines the effects of restrictions. Point-process analyses of EMA data can bring out even small influences on smoking rate. PMID:21480683

  1. Sensitivity analyses of the peach bottom turbine trip 2 experiment

    International Nuclear Information System (INIS)

    Bousbia Salah, A.; D'Auria, F.

    2003-01-01

In the light of the sustained development in computer technology, the possibilities for code calculations in predicting more realistic transient scenarios in nuclear power plants have been enlarged substantially. It has therefore become feasible to perform 'best-estimate' simulations through the incorporation of three-dimensional modeling of the reactor core into system codes. This method is particularly suited to complex transients that involve strong feedback effects between thermal-hydraulics and kinetics, as well as to transients involving local asymmetric effects. The Peach Bottom turbine trip test is characterized by a prompt core power excursion followed by a self-limiting power behavior. To emphasize and understand the feedback mechanisms involved during this transient, a series of sensitivity analyses were carried out. This should allow the characterization of discrepancies between measured and calculated trends and assess the impact of the thermal-hydraulic and kinetic response of the models used. On the whole, the data comparison revealed a close dependency of the power excursion on the core feedback mechanisms. Thus, for a better best-estimate simulation of the transient, both the thermal-hydraulic and kinetic models should be made more accurate. (author)

  2. Synthesis of Trigeneration Systems: Sensitivity Analyses and Resilience

    Directory of Open Access Journals (Sweden)

    Monica Carvalho

    2013-01-01

This paper presents sensitivity and resilience analyses for a trigeneration system designed for a hospital. The following information is utilized to formulate an integer linear programming model: (1) energy service demands of the hospital, (2) technical and economical characteristics of the potential technologies for installation, (3) prices of the available utilities interchanged, and (4) financial parameters of the project. The solution of the model, minimizing the annual total cost, provides the optimal configuration of the system (technologies installed and number of pieces of equipment) and the optimal operation mode (operational load of equipment, interchange of utilities with the environment, convenience of wasting cogenerated heat, etc.) at each temporal interval defining the demand. The broad range of technical, economic, and institutional uncertainties throughout the life cycle of energy supply systems for buildings makes it necessary to delve more deeply into the fundamental properties of resilient systems: feasibility, flexibility and robustness. The resilience of the obtained solution is tested by varying, within reasonable limits, selected parameters: energy demand, amortization and maintenance factor, natural gas price, self-consumption of electricity, and time-of-delivery feed-in tariffs.

  3. Geographic Response Plan (GRP) Sensitive Site Points (Editable), Guam, 2016, US EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — This is an editable point feature data set with points over Apra Harbor in Guam. These points represent sensitive sites such as access points for public use and...

  4. Sensitivity of point scale surface runoff predictions to rainfall resolution

    Directory of Open Access Journals (Sweden)

    A. J. Hearman

    2007-01-01

This paper investigates the effects of using non-linear, high resolution rainfall, compared to time-averaged rainfall, on the triggering of hydrologic thresholds and therefore model predictions of infiltration excess and saturation excess runoff at the point scale. The bounded random cascade model, parameterized to three locations in Western Australia, was used to scale rainfall intensities at various time resolutions ranging from 1.875 min to 2 h. A one-dimensional, conceptual rainfall partitioning model was used that instantaneously partitioned water into infiltration excess, infiltration, storage, deep drainage, saturation excess and surface runoff, where the fluxes into and out of the soil store were controlled by thresholds. The results of the numerical modelling were scaled by relating soil infiltration properties to soil draining properties, and in turn, relating these to average storm intensities. For all soil types, we related maximum infiltration capacities to average storm intensities (k*) and were able to show where model predictions of infiltration excess were most sensitive to rainfall resolution (ln k* = 0.4) and where using time-averaged rainfall data can lead to an under-prediction of infiltration excess and an over-prediction of the amount of water entering the soil (ln k* > 2) for all three rainfall locations tested. For soils susceptible to both infiltration excess and saturation excess, total runoff sensitivity was scaled by relating drainage coefficients to average storm intensities (g*), and parameter ranges where predicted runoff was dominated by infiltration excess or saturation excess depending on the resolution of rainfall data were determined (ln g* < 2). Infiltration excess predicted from high resolution rainfall was short and intense, whereas saturation excess produced from low resolution rainfall was more constant and less intense. This has important implications for the accuracy of current hydrological models that use time
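The threshold-based partitioning described above can be caricatured with a simple bucket model; the capacities, drainage rate, and rainfall series below are invented, and the comparison only illustrates why bursty high-resolution rain can trigger infiltration excess that a time-averaged series with the same total depth misses:

```python
def partition_rainfall(rain, infil_cap, store_cap, drain_rate):
    """Partition a rainfall series (depth per step) into cumulative
    infiltration excess and saturation excess using threshold rules
    (a hypothetical sketch of a conceptual bucket model)."""
    store = 0.0
    infil_excess = sat_excess = 0.0
    for r in rain:
        ie = max(0.0, r - infil_cap)          # rain beyond infiltration capacity
        store += r - ie                        # the rest enters the soil store
        se = max(0.0, store - store_cap)       # store overflow -> saturation excess
        store -= se
        store = max(0.0, store - drain_rate)   # deep drainage out of the store
        infil_excess += ie
        sat_excess += se
    return infil_excess, sat_excess

# High-resolution (bursty) vs time-averaged rain with the same total depth:
bursty = [12, 0, 0, 0, 12, 0, 0, 0]
smooth = [3] * 8
ie_bursty, _ = partition_rainfall(bursty, infil_cap=5.0, store_cap=20.0, drain_rate=1.0)
ie_smooth, _ = partition_rainfall(smooth, infil_cap=5.0, store_cap=20.0, drain_rate=1.0)
# The bursts exceed the infiltration capacity; the averaged series never does.
```

Because the threshold is on intensity, not depth, only the bursty series produces infiltration excess here, which is exactly the resolution sensitivity the paper quantifies.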

  5. Uncertainty and sensitivity analyses of ballast life-cycle cost and payback period

    Energy Technology Data Exchange (ETDEWEB)

    McMahon, James E.; Liu, Xiaomin; Turiel, Ike; Hakim, Sajid; Fisher, Diane

    2000-06-01

    The paper introduces an innovative methodology for evaluating the relative significance of energy-efficient technologies applied to fluorescent lamp ballasts. The method involves replacing the point estimates of life cycle cost of the ballasts with uncertainty distributions reflecting the whole spectrum of possible costs, and the assessed probability associated with each value. The results of uncertainty and sensitivity analyses will help analysts reduce effort in data collection and carry on analysis more efficiently. These methods also enable policy makers to gain an insightful understanding of which efficient technology alternatives benefit or cost what fraction of consumers, given the explicit assumptions of the analysis.
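Replacing a point estimate of life-cycle cost with uncertainty distributions can be sketched as follows; the ballast price, energy use, and electricity-price distributions are hypothetical placeholders, not the study's data:

```python
import random

random.seed(4)

def life_cycle_cost(price, annual_kwh, elec_price, years=10):
    """Simple life-cycle cost: purchase price plus (undiscounted) energy cost."""
    return price + annual_kwh * elec_price * years

# Point estimate (hypothetical ballast figures):
lcc_point = life_cycle_cost(25.0, 35.0, 0.10)

# Uncertainty distributions in place of the point estimates:
samples = [life_cycle_cost(random.uniform(20, 30),          # purchase price, $
                           random.gauss(35, 3),             # annual energy, kWh
                           random.triangular(0.07, 0.14, 0.10))  # $/kWh
           for _ in range(10000)]
samples.sort()
median = samples[len(samples) // 2]
p5, p95 = samples[500], samples[9500]   # 90% interval on life-cycle cost
```

The interval (p5, p95) is what lets a policy maker see which fraction of consumers benefits under an efficiency standard, rather than a single representative cost.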

  6. Accelerated safety analyses - structural analyses Phase I - structural sensitivity evaluation of single- and double-shell waste storage tanks

    International Nuclear Information System (INIS)

    Becker, D.L.

    1994-11-01

Accelerated Safety Analyses - Phase I (ASA-Phase I) have been conducted to assess the appropriateness of existing tank farm operational controls and/or limits as now stipulated in the Operational Safety Requirements (OSRs) and Operating Specification Documents, and to establish a technical basis for the waste tank operating safety envelope. Structural sensitivity analyses were performed to assess the response of the different waste tank configurations to variations in loading conditions, uncertainties in loading parameters, and uncertainties in material characteristics. Extensive documentation of the sensitivity analyses conducted and the results obtained is provided in the detailed ASA-Phase I report, Structural Sensitivity Evaluation of Single- and Double-Shell Waste Tanks for Accelerated Safety Analysis - Phase I. This document provides a summary of the accelerated safety analyses sensitivity evaluations and the resulting findings.

  7. Sensitivity analyses for simulating pesticide impacts on honey bee colonies

    Science.gov (United States)

    We employ Monte Carlo simulation and sensitivity analysis techniques to describe the population dynamics of pesticide exposure to a honey bee colony using the VarroaPop + Pesticide model. Simulations are performed of hive population trajectories with and without pesti...

  8. Peer review of HEDR uncertainty and sensitivity analyses plan

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, F.O.

    1993-06-01

This report consists of a detailed documentation of the writings and deliberations of the peer review panel that met on May 24-25, 1993 in Richland, Washington to evaluate the draft report "Uncertainty/Sensitivity Analysis Plan" (PNWD-2124 HEDR). The fact that uncertainties are being considered in temporally and spatially varying parameters, through the use of alternative time histories and spatial patterns, deserves special commendation. It is important to identify early those model components and parameters that will have the most influence on the magnitude and uncertainty of the dose estimates. These are the items that should be investigated most intensively prior to committing to a final set of results.

  9. On accuracy problems for semi-analytical sensitivity analyses

    DEFF Research Database (Denmark)

    Pedersen, P.; Cheng, G.; Rasmussen, John

    1989-01-01

    The semi-analytical method of sensitivity analysis combines ease of implementation with computational efficiency. A major drawback to this method, however, is that severe accuracy problems have recently been reported. A complete error analysis for a beam problem with changing length is carried ou...... pseudo loads in order to obtain general load equilibrium with rigid body motions. Such a method would be readily applicable for any element type, whether analytical expressions for the element stiffnesses are available or not. This topic is postponed for a future study....

  10. How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?

    Science.gov (United States)

    Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J

    2004-01-01

    There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. To investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost

  11. SPES3 Facility RELAP5 Sensitivity Analyses on the Containment System for Design Review

    International Nuclear Information System (INIS)

    Achilli, A.; Congiu, C.; Ferri, R.; Bianchi, F.; Meloni, P.; Grgic, D.; Dzodzo, M.

    2012-01-01

    An Italian MSE R&D programme on Nuclear Fission is funding, through ENEA, the design and testing of the SPES3 facility at SIET for IRIS reactor simulation. IRIS is a modular, medium-size, advanced, integral PWR developed by an international consortium of utilities, industries, research centres and universities. SPES3 simulates the primary, secondary and containment systems of IRIS at 1:100 volume scale, full elevation, and prototypical thermal-hydraulic conditions. The RELAP5 code was used extensively in support of the facility design to identify criticalities and weak points in the reactor simulation. FER, at Zagreb University, performed the IRIS reactor analyses with the coupled RELAP5 and GOTHIC codes. The comparison between the IRIS and SPES3 simulation results led to a simulation-design feedback process with step-by-step modifications of the facility design, up to the final configuration. For this, a series of sensitivity cases was run to investigate specific aspects affecting the trends of the main plant parameters, such as the containment pressure and EHRS removed power, in order to limit fuel clad temperature excursions during accidental transients. This paper summarizes the sensitivity analyses on the containment system that allowed the SPES3 facility design to be reviewed and confirmed its capability to appropriately simulate the IRIS plant.

  12. SPES3 Facility RELAP5 Sensitivity Analyses on the Containment System for Design Review

    Directory of Open Access Journals (Sweden)

    Andrea Achilli

    2012-01-01

    Full Text Available An Italian MSE R&D programme on Nuclear Fission is funding, through ENEA, the design and testing of the SPES3 facility at SIET for IRIS reactor simulation. IRIS is a modular, medium-size, advanced, integral PWR developed by an international consortium of utilities, industries, research centres and universities. SPES3 simulates the primary, secondary and containment systems of IRIS at 1:100 volume scale, full elevation, and prototypical thermal-hydraulic conditions. The RELAP5 code was used extensively in support of the facility design to identify criticalities and weak points in the reactor simulation. FER, at Zagreb University, performed the IRIS reactor analyses with the coupled RELAP5 and GOTHIC codes. The comparison between the IRIS and SPES3 simulation results led to a simulation-design feedback process with step-by-step modifications of the facility design, up to the final configuration. For this, a series of sensitivity cases was run to investigate specific aspects affecting the trends of the main plant parameters, such as the containment pressure and EHRS removed power, in order to limit fuel clad temperature excursions during accidental transients. This paper summarizes the sensitivity analyses on the containment system that allowed the SPES3 facility design to be reviewed and confirmed its capability to appropriately simulate the IRIS plant.

  13. Hospital Standardized Mortality Ratios: Sensitivity Analyses on the Impact of Coding

    Science.gov (United States)

    Bottle, Alex; Jarman, Brian; Aylin, Paul

    2011-01-01

    Introduction Hospital standardized mortality ratios (HSMRs) are derived from administrative databases and cover 80 percent of in-hospital deaths with adjustment for available case mix variables. They have been criticized for being sensitive to issues such as clinical coding but on the basis of limited quantitative evidence. Methods In a set of sensitivity analyses, we compared regular HSMRs with HSMRs resulting from a variety of changes, such as a patient-based measure, not adjusting for comorbidity, not adjusting for palliative care, excluding unplanned zero-day stays ending in live discharge, and using more or fewer diagnoses. Results Overall, regular and variant HSMRs were highly correlated (ρ > 0.8), but differences of up to 10 points were common. Two hospitals were particularly affected when palliative care was excluded from the risk models. Excluding unplanned stays ending in same-day live discharge had the least impact despite their high frequency. The largest impacts were seen when capturing postdischarge deaths and using just five high-mortality diagnosis groups. Conclusions HSMRs in most hospitals changed by only small amounts from the various adjustment methods tried here, though small-to-medium changes were not uncommon. However, the position relative to funnel plot control limits could move in a significant minority even with modest changes in the HSMR. PMID:21790587
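
A minimal sketch of the HSMR computation discussed above (observed deaths divided by case-mix-adjusted expected deaths, scaled by 100); the hospital counts below are invented for illustration and are not data from the study:

```python
import numpy as np

def hsmr(observed, expected):
    """Hospital standardized mortality ratio: observed deaths divided by
    case-mix expected deaths, scaled by 100 (100 = deaths as expected)."""
    return 100.0 * np.asarray(observed, float) / np.asarray(expected, float)

# invented counts for three hypothetical hospitals
obs = np.array([320, 210, 145])
exp = np.array([300.0, 250.0, 150.0])
ratios = hsmr(obs, exp)   # 320/300 -> about 106.7, i.e. more deaths than expected
```

Sensitivity variants such as those in the paper (e.g. dropping the palliative-care adjustment) change the `expected` column, so the same function can be reused to compare regular and variant ratios.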

  14. Spatially resolved synchrotron-induced X-ray fluorescence analyses of metal point drawings and their mysterious inscriptions

    International Nuclear Information System (INIS)

    Reiche, Ina; Radtke, Martin; Berger, Achim; Goerner, Wolf; Ketelsen, Thomas; Merchel, Silke; Riederer, Josef; Riesemeier, Heinrich; Roth, Michael

    2004-01-01

    Synchrotron-induced X-ray fluorescence (Sy-XRF) analysis was used to study the chemical composition of precious Renaissance silverpoint drawings. Drawings by famous artists such as Albrecht Duerer (1471-1528) and Jan van Eyck (approximately 1395-1441) must be investigated non-destructively. Moreover, extremely sensitive synchrotron- or accelerator-based techniques are needed, since only small quantities of silver are deposited on the paper. New criteria for attributing these works to a particular artist could be established based on the analysis of the chemical composition of the metal points used. We illustrate how such analysis can yield new art-historical information by means of two case studies. Two particular drawings, one by Albrecht Duerer, showing a profile portrait of his closest friend, 'Willibald Pirckheimer' (1503), and a second attributed to Jan van Eyck, showing a 'Portrait of an elderly man', often named 'Niccolo Albergati', are the object of intense art-historical controversy. Both drawings show inscriptions next to the figures. Sy-XRF analyses revealed the same kind of silverpoint for the Pirckheimer portrait and its mysterious Greek inscription, in contrast to the drawing by Van Eyck, where at least three different metal points were applied. Two different types of silver marks were found in this portrait: gold-containing silver marks were detected in the inscriptions and over-subscriptions. This is the first evidence of the use of gold points for metal point drawings in the Middle Ages.

  15. A different point of view on the sensitivity of quartz crystal microbalance sensors

    International Nuclear Information System (INIS)

    Arnau, Antonio; Montagut, Yeison; García, José V; Jiménez, Yolanda

    2009-01-01

    In this paper, the sensitivity of a quartz crystal microbalance (QCM) sensor is analysed and discussed in terms of the phase change versus the surface mass change, instead of the classical sensitivity in terms of the resonant frequency change derived from the well-known Sauerbrey equation. The detection sensitivity derived from the Sauerbrey equation is a theoretical detection capability in terms of the frequency change versus the mass change, which increases with the square of frequency. However, when a specific application and measuring system are considered, the detection capability of the QCM sensor must be considered from a different point of view. A new equation is obtained, Δψ ≅ −Δm_c/(m_q + m_L), which quantifies the phase shift Δψ of a fixed-frequency signal, corresponding to the series resonant frequency of the sensor in a reference state, versus a change in the coating mass Δm_c. Here m_q = η_q·π/(2v_q), where η_q is the loss viscosity of the unperturbed sensor and v_q is the wave propagation speed in quartz, is a parameter that depends only on the physical parameters of the unperturbed resonator and fixes the maximum sensitivity of the sensor; m_L = ρ_L·δ_L/2, where ρ_L and δ_L are, respectively, the liquid density and the penetration depth of the wave in the liquid, is the equivalent surface mass density associated with the oscillatory movement of the sensor surface in contact with a fluid medium. This equation is an approximation around the series resonance frequency of the sensor. Simulation results for 10, 50 and 150 MHz resonance frequency QCM sensors prove its validity. A new electronic system based on the introduced equation is proposed for QCM biosensor applications.
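
As a rough numerical illustration of the phase-mass relation Δψ ≅ −Δm_c/(m_q + m_L) (not a computation from the paper), the liquid-loading term can be evaluated for a 10 MHz sensor in water; the effective quartz loss viscosity η_q below is an assumed placeholder value, not one quoted in the study:

```python
import math

def phase_shift(dm_c, m_q, m_L):
    """Approximate phase shift (rad) near series resonance for a coating
    surface-mass change dm_c (kg/m^2): dpsi = -dm_c / (m_q + m_L)."""
    return -dm_c / (m_q + m_L)

f = 10e6                          # 10 MHz fundamental resonance
omega = 2 * math.pi * f
rho_L, eta_L = 1000.0, 1.0e-3     # water: density (kg/m^3), viscosity (Pa*s)

# Newtonian penetration depth and liquid-loading term m_L = rho_L * delta_L / 2
delta_L = math.sqrt(2 * eta_L / (omega * rho_L))
m_L = rho_L * delta_L / 2

# m_q = eta_q * pi / (2 * v_q); eta_q is an illustrative assumption here,
# v_q is the shear wave propagation speed in AT-cut quartz
eta_q = 3.5e-4                    # Pa*s, assumed placeholder value
v_q = 3340.0                      # m/s
m_q = eta_q * math.pi / (2 * v_q)

dpsi = phase_shift(1.0e-6, m_q, m_L)   # 1e-6 kg/m^2 = 0.1 ug/cm^2 mass change
```

In liquid, m_L dominates m_q by orders of magnitude, which is why the achievable phase sensitivity in biosensing is set mainly by the liquid loading rather than by the resonator itself.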

  16. Sensitivity studies for 3-D rod ejection analyses on axial power shape

    Energy Technology Data Exchange (ETDEWEB)

    Park, Min-Ho; Park, Jin-Woo; Park, Guen-Tae; Ryu, Seok-Hee; Um, Kil-Sup; Lee, Jae-Il [KEPCO NF, Daejeon (Korea, Republic of)

    2015-10-15

    The current safety analysis methodology, which uses the point kinetics model combined with numerous conservative assumptions, results in unrealistic predictions of the transient behavior and wastes a large safety analysis margin, while the safety regulation criteria for the reactivity-initiated accident are becoming stricter. To deal with this, KNF is developing a 3-D rod ejection analysis methodology using the multi-dimensional code coupling system CHASER. The CHASER system couples the three-dimensional core neutron kinetics code ASTRA, the sub-channel analysis code THALES, and the fuel performance analysis code FROST using the message passing interface (MPI). A sensitivity study for 3-D rod ejection analysis on axial power shape (APS) is carried out to survey the tendency of safety parameters with respect to power distributions and to build up a realistic safety analysis methodology while maintaining conservatism. The 3-D rod ejection analysis methodology under development, based on the multi-dimensional core transient analysis code system CHASER, was shown to reasonably reflect the conservative assumptions by tuning up kinetic parameters.

  17. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Mississippi: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for gulls and terns in Mississippi. Vector points in this data set represent bird nesting sites. Species...

  18. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Florida Panhandle: REPTPT (Reptile Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for threatened and endangered reptiles/amphibians for the Florida Panhandle. Vector points in this data set...

  19. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: New Hampshire: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for nesting birds in New Hampshire. Vector points in this data set represent locations of nesting osprey...

  20. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Florida Panhandle: INVERTPT (Invertebrate Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for threatened/endangered invertebrate species for the Florida Panhandle. Vector points in this data set...

  1. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Northwest Arctic, Alaska: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for nesting birds in Northwest Arctic, Alaska. Vector points in this data set represent locations of...

  2. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Central California: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for alcids, diving birds, gulls, terns, pelagic birds, and shorebirds in Central California. Vector points...

  3. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of the colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate the colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing, partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary
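
The Morris elementary-effects screening mentioned above can be sketched in a few lines of pure numpy; the trajectory count, step size, and toy model below are illustrative assumptions, not parameters from the HYDRUS study:

```python
import numpy as np

def morris_elementary_effects(model, bounds, r=30, seed=0):
    """Morris screening: returns mu* (mean absolute elementary effect) and
    sigma (their standard deviation) per input, estimated from r
    one-at-a-time trajectories over the normalized unit hypercube."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = np.array(bounds, float).T
    delta = 0.5                                   # step in normalized coordinates
    effects = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)  # leave room for a +delta step
        y = model(lo + (hi - lo) * x)
        for i in rng.permutation(k):               # one-at-a-time moves
            x[i] += delta
            y_new = model(lo + (hi - lo) * x)
            effects[t, i] = (y_new - y) / delta
            y = y_new
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

# toy additive model with input weights 10 : 1 : 0.1; mu* recovers |coefficients|
mu_star, sigma = morris_elementary_effects(
    lambda p: 10 * p[0] + p[1] + 0.1 * p[2],
    bounds=[(0.0, 1.0)] * 3)
```

Inputs with large mu* are influential; a large sigma relative to mu* flags nonlinearity or interactions, which is what makes the method useful as a cheap pre-calibration screen.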

  4. Scenario sensitivity analyses performed on the PRESTO-EPA LLW risk assessment models

    International Nuclear Information System (INIS)

    Bandrowski, M.S.

    1988-01-01

    The US Environmental Protection Agency (EPA) is currently developing standards for the land disposal of low-level radioactive waste. As part of the standard development, EPA has performed risk assessments using the PRESTO-EPA codes. A program of sensitivity analysis was conducted on the PRESTO-EPA codes, consisting of single parameter sensitivity analysis and scenario sensitivity analysis. The results of the single parameter sensitivity analysis were discussed at the 1987 DOE LLW Management Conference. Specific scenario sensitivity analyses have been completed and evaluated. Scenario assumptions that were analyzed include: site location, disposal method, form of waste, waste volume, analysis time horizon, critical radionuclides, use of buffer zones, and global health effects

  5. Estimation of main diversification time-points of hantaviruses using phylogenetic analyses of complete genomes.

    Science.gov (United States)

    Castel, Guillaume; Tordo, Noël; Plyusnin, Alexander

    2017-04-02

    Because of the great variability of their reservoir hosts, hantaviruses are excellent models for evaluating the dynamics of virus-host co-evolution. Intriguing questions remain about the timescale of the diversification events that influenced this evolution. In this paper we attempted the first estimation of the timing of hantavirus diversification, based on thirty-five available complete genomes representing five major groups of hantaviruses and the assumption of co-speciation of hantaviruses with their respective mammal hosts. Phylogenetic analyses were used to estimate the main diversification points during hantavirus evolution in mammals, while host diversification was mostly estimated from independent calibrators taken from fossil records. Our results support an earlier developed hypothesis of co-speciation of known hantaviruses with their respective mammal hosts, and hence a common ancestor for all hantaviruses carried by placental mammals. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Safety and sensitivity analyses of a generic geologic disposal system for high-level radioactive waste

    International Nuclear Information System (INIS)

    Kimura, Hideo; Takahashi, Tomoyuki; Shima, Shigeki; Matsuzuru, Hideo

    1994-11-01

    This report describes safety and sensitivity analyses of a generic geologic disposal system for HLW, using the GSRW code and an automated sensitivity analysis methodology based on differential algebra. The exposure scenario considered here is a normal evolution scenario, which excludes events attributable to probabilistic alterations in the environment. The results of the sensitivity analyses indicate that parameters related to the homogeneous rock surrounding a disposal facility have higher sensitivities to the output analyzed here than those of the fractured zone and engineered barriers. The sensitivity analysis methodology provides technical information that could serve as a basis for optimizing the design of the disposal facility. Safety analyses were performed on the reference disposal system, which involves HLW in amounts corresponding to 16,000 MTU of spent fuel. The individual dose equivalent due to the drinking-water ingestion pathway was calculated using both conservative and realistic values of the geochemical parameters. In both cases, the committed dose equivalent evaluated here is of the order of 10^-7 Sv, and thus geologic disposal of HLW may be feasible if the disposal conditions assumed here remain unchanged throughout the periods assessed here. (author)

  7. Sensitivity analyses of biodiesel thermo-physical properties under diesel engine conditions

    DEFF Research Database (Denmark)

    Cheng, Xinwei; Ng, Hoon Kiat; Gan, Suyin

    2016-01-01

    This reported work investigates the sensitivities of spray and soot developments to the change of thermo-physical properties for coconut and soybean methyl esters, using two-dimensional computational fluid dynamics fuel spray modelling. The choice of test fuels made was due to their contrasting...... saturation-unsaturation compositions. The sensitivity analyses for non-reacting and reacting sprays were carried out against a total of 12 thermo-physical properties, at an ambient temperature of 900 K and density of 22.8 kg/m3. For the sensitivity analyses, all the thermo-physical properties were set...... as the baseline case and each property was individually replaced by that of diesel. The significance of individual thermo-physical property was determined based on the deviations found in predictions such as liquid penetration, ignition delay period and peak soot concentration when compared to those of baseline...

  8. Intraosseous blood samples for point-of-care analysis: agreement between intraosseous and arterial analyses.

    Science.gov (United States)

    Jousi, Milla; Saikko, Simo; Nurmi, Jouni

    2017-09-11

    Point-of-care (POC) testing is highly useful when treating critically ill patients. In case of difficult vascular access, the intraosseous (IO) route is commonly used, and blood is aspirated to confirm the correct position of the IO-needle. Thus, IO blood samples could be easily accessed for POC analyses in emergency situations. The aim of this study was to determine whether IO values agree sufficiently with arterial values to be used for clinical decision making. Two samples of IO blood were drawn from 31 healthy volunteers and compared with arterial samples. The samples were analysed for sodium, potassium, ionized calcium, glucose, haemoglobin, haematocrit, pH, blood gases, base excess, bicarbonate, and lactate using the i-STAT® POC device. Agreement and reliability were estimated by using the Bland-Altman method and intraclass correlation coefficient calculations. Good agreement was evident between the IO and arterial samples for pH, glucose, and lactate. Potassium levels were clearly higher in the IO samples than those from arterial blood. Base excess and bicarbonate were slightly higher, and sodium and ionised calcium values were slightly lower, in the IO samples compared with the arterial values. The blood gases in the IO samples were between arterial and venous values. Haemoglobin and haematocrit showed remarkable variation in agreement. POC diagnostics of IO blood can be a useful tool to guide treatment in critical emergency care. Seeking out the reversible causes of cardiac arrest or assessing the severity of shock are examples of situations in which obtaining vascular access and blood samples can be difficult, though information about the electrolytes, acid-base balance, and lactate could guide clinical decision making. 
The analysis of IO samples should nevertheless be limited to situations in which no other option is available, and the results should be interpreted with caution, because there is not yet enough scientific evidence regarding the agreement of IO
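
The Bland-Altman agreement estimate used in the study above can be sketched as follows; the paired values are invented for illustration and are not data from the study:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement: mean bias of the paired differences and the
    95% limits of agreement (bias +/- 1.96 * SD of the differences)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# invented paired measurements, e.g. lactate in mmol/L
io       = [1.1, 1.4, 0.9, 1.6, 1.2, 1.0, 1.3]
arterial = [1.0, 1.3, 1.0, 1.5, 1.1, 1.0, 1.2]
bias, (loa_low, loa_high) = bland_altman(io, arterial)
```

The bias tells you whether one route systematically over- or under-reads the other; the limits of agreement tell you whether individual disagreements are small enough for clinical decision making.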

  9. The MAGIC Touch: Combining MAGIC-Pointing with a Touch-Sensitive Mouse

    Science.gov (United States)

    Drewes, Heiko; Schmidt, Albrecht

    In this paper, we show how to use the combination of eye-gaze and a touch-sensitive mouse to ease pointing tasks in graphical user interfaces. A touch of the mouse positions the mouse pointer at the current gaze position of the user. Thus, the pointer is always at the position where the user expects it on the screen. This approach changes the user experience in tasks that include frequent switching between keyboard and mouse input (e.g. working with spreadsheets). In a user study, we compared the touch-sensitive mouse with a traditional mouse and observed speed improvements for pointing tasks on complex backgrounds. For pointing task on plain backgrounds, performances with both devices were similar, but users perceived the gaze-sensitive interaction of the touch-sensitive mouse as being faster and more convenient. Our results show that using a touch-sensitive mouse that positions the pointer on the user’s gaze position reduces the need for mouse movements in pointing tasks enormously.

  10. Experimental and theoretical analyses of package-on-package structure under three-point bending loading

    International Nuclear Information System (INIS)

    Jia Su; Wang Xi-Shu; Ren Huai-Hui

    2012-01-01

    High density packaging is developing toward miniaturization and integration, which causes many difficulties in designing, manufacturing, and reliability testing. Package-on-Package (PoP) is a promising three-dimensional high-density packaging method that integrates a chip scale package (CSP) in the top package and a fine-pitch ball grid array (FBGA) in the bottom package. In this paper, in-situ scanning electron microscopy (SEM) observation is carried out to detect the deformation and damage of the PoP structure under three-point bending loading. The results indicate that the cracks occur in the die of the top package, then cause the crack deflection and bridging in the die attaching layer. Furthermore, the mechanical principles are used to analyse the cracking process of the PoP structure based on the multi-layer laminating hypothesis and the theoretical analysis results are found to be in good agreement with the experimental results. (condensed matter: structural, mechanical, and thermal properties)
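
As a first-order illustration of three-point bending mechanics (the PoP stack is a multi-layer laminate, so the single homogeneous beam formula below is only a rough sketch, and the dimensions are invented):

```python
def three_point_bending_max_stress(F, L, b, h):
    """Maximum bending stress (Pa) at the outer fibre of a rectangular beam of
    width b and thickness h under a centre load F over support span L:
    sigma_max = 3*F*L / (2*b*h^2)."""
    return 3.0 * F * L / (2.0 * b * h * h)

# invented dimensions: 10 N centre load, 40 mm span, 10 mm wide, 1 mm thick
sigma_max = three_point_bending_max_stress(F=10.0, L=0.04, b=0.01, h=0.001)
```

The 1/h² dependence is why thin dies in the top package crack first: halving the effective thickness quadruples the bending stress for the same load.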

  11. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  12. Sobol method application in dimensional sensitivity analyses of different AFM cantilevers for biological particles

    Science.gov (United States)

    Korayem, M. H.; Taheri, M.; Ghahnaviyeh, S. D.

    2015-08-01

    Due to the delicate nature of biological micro/nanoparticles, it is necessary to compute the critical manipulation force. The modeling and simulation of reactions and nanomanipulator dynamics in a precise manipulation process require an exact model of cantilever stiffness, especially for dagger cantilevers, because the previous model is not applicable to this investigation. The stiffness values for V-shaped cantilevers can be obtained through several methods, one of which is the PBA method. In another approach, the cantilever is divided into two sections: a triangular head section and two slanted rectangular beams. Deformations along different directions are then computed and used to obtain the stiffness values in different directions. The stiffness formulations for the dagger cantilever are needed for these sensitivity analyses, so the formulations are derived first, before the sensitivity analyses are carried out. In examining the stiffness of the dagger-shaped cantilever, the micro-beam is divided into triangular and rectangular sections; by computing the displacements along different directions and using the existing relations, the stiffness values for the dagger cantilever are obtained. In this paper, after investigating the stiffness of common types of cantilevers, Sobol sensitivity analyses of the effects of various geometric parameters on the stiffness of these cantilever types are carried out. The effects of different cantilevers on the dynamic behavior of nanoparticles are also studied, and the dagger-shaped cantilever is deemed more suitable for the manipulation of biological particles.
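
As a rough illustration of the Sobol method named above (not the authors' actual computation), first-order indices can be estimated with a pick-freeze Monte Carlo scheme; the additive toy model, sample size, and uniform input ranges are assumptions for the sketch:

```python
import numpy as np

def sobol_first_order(model, k, n=20000, seed=1):
    """First-order Sobol indices via a pick-freeze estimator:
    S_i = Cov(Y, Y_i) / Var(Y), where Y_i is evaluated on a sample that
    shares only its i-th input column with the sample used for Y.
    Inputs are assumed independent and uniform on [0, 1]^k."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, k)), rng.random((n, k))
    yA = model(A)
    S = np.empty(k)
    for i in range(k):
        C = B.copy()
        C[:, i] = A[:, i]            # freeze input i, resample all the others
        S[i] = np.cov(yA, model(C))[0, 1] / yA.var(ddof=1)
    return S

# additive toy model: variance shares are 16 : 4 : 1, so S = (16, 4, 1) / 21
S = sobol_first_order(lambda X: 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2], k=3)
```

Each S_i is the fraction of output variance attributable to input i alone, which is exactly the ranking information needed to decide which geometric parameters dominate the stiffness.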

  13. msgbsR: An R package for analysing methylation-sensitive restriction enzyme sequencing data.

    Science.gov (United States)

    Mayne, Benjamin T; Leemaqz, Shalem Y; Buckberry, Sam; Rodriguez Lopez, Carlos M; Roberts, Claire T; Bianco-Miotto, Tina; Breen, James

    2018-02-01

    Genotyping-by-sequencing (GBS) or restriction-site associated DNA marker sequencing (RAD-seq) is a practical and cost-effective method for analysing large genomes from high-diversity species. This method of sequencing, coupled with methylation-sensitive enzymes (often referred to as methylation-sensitive restriction enzyme sequencing or MRE-seq), is an effective tool to study DNA methylation in parts of the genome that are inaccessible to other sequencing techniques or are not annotated in microarray technologies. Current software tools do not support all methylation-sensitive restriction sequencing assays for determining differences in DNA methylation between samples. To fill this computational need, we present msgbsR, an R package that contains tools for the analysis of methylation-sensitive restriction enzyme sequencing experiments. msgbsR can be used to identify and quantify read counts at methylated sites directly from alignment files (BAM files) and enables verification of restriction enzyme cut sites with the correct recognition sequence of the individual enzyme. In addition, msgbsR assesses DNA methylation based on read coverage, similar to RNA sequencing experiments, rather than on methylation proportion, and is a useful tool for analysing differential methylation in large populations. The package is fully documented and available freely online as a Bioconductor package ( https://bioconductor.org/packages/release/bioc/html/msgbsR.html ).

  14. Uncertainty and sensitivity analyses of the complete program system UFOMOD and of selected submodels

    International Nuclear Information System (INIS)

    Fischer, F.; Ehrhardt, J.; Hasemann, I.

    1990-09-01

    Uncertainty and sensitivity studies with the program system UFOMOD have been performed for several years on a submodel basis to gain deeper insight into the propagation of parameter uncertainties through the different modules and to quantify their contribution to the confidence bands of the intermediate and final results of an accident consequence assessment. In a series of investigations with the atmospheric dispersion module, the models describing early protective actions, the models calculating short-term organ doses, and the health effects model of the near-range subsystem NE of UFOMOD, a great deal of experience has been gained with methods and evaluation techniques for uncertainty and sensitivity analyses. In particular, the influence of different sampling techniques and sample sizes, parameter distributions and correlations on the results could be quantified, and the usefulness of sensitivity measures for the interpretation of results could be demonstrated. In each submodel investigation, the (5%, 95%)-confidence bounds of the complementary cumulative frequency distributions (CCFDs) of various consequence types (activity concentrations of I-131 and Cs-137, individual acute organ doses, individual risks of nonstochastic health effects, and the number of early deaths) were calculated. The corresponding sensitivity analyses for each of these endpoints led to a list of parameters contributing significantly to the variation of mean values and 99%-fractiles. The most important parameters were extracted and combined for the final overall analysis. (orig.) [de]
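
The (5%, 95%)-confidence bounds on a CCFD described above can be sketched as follows; the lognormal consequence model, parameter draws, and consequence levels are synthetic placeholders, not UFOMOD quantities:

```python
import numpy as np

def ccfd(samples, levels):
    """Empirical complementary cumulative frequency distribution:
    the fraction of sampled consequences exceeding each level."""
    s = np.asarray(samples, float)
    return np.array([(s > c).mean() for c in levels])

# Synthetic setup: each row of `runs` is one set of consequence samples
# produced by one draw of the uncertain model parameters (here, the spread
# of a lognormal consequence distribution).
rng = np.random.default_rng(42)
sigma = rng.uniform(0.5, 1.5, size=(50, 1))             # per-draw parameter
runs = np.exp(sigma * rng.standard_normal((50, 1000)))  # lognormal consequences
levels = np.array([0.5, 1.0, 2.0, 5.0])

curves = np.vstack([ccfd(r, levels) for r in runs])
lo, hi = np.percentile(curves, [5, 95], axis=0)  # (5%, 95%)-confidence bounds
```

Plotting `lo` and `hi` against `levels` gives the confidence band around the family of CCFD curves, which is the form in which the UFOMOD submodel results are reported.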

  15. Uncertainty and sensitivity analyses for age-dependent unavailability model integrating test and maintenance

    International Nuclear Information System (INIS)

    Kančev, Duško; Čepin, Marko

    2012-01-01

    Highlights: ► Application of an analytical unavailability model integrating T and M, ageing, and test strategy. ► Ageing data uncertainty propagation at system level assessed via Monte Carlo simulation. ► Uncertainty impact grows with the extension of the surveillance test interval. ► Calculated system unavailability dependence on two different sensitivity-study ageing databases. ► System unavailability sensitivity insights regarding specific groups of BEs as test intervals extend. - Abstract: The interest in operational lifetime extension of existing nuclear power plants is growing. Consequently, plant life management programs, considering safety components ageing, are being developed and employed. Ageing represents a gradual degradation of the physical properties and functional performance of different components, consequently implying their reduced availability. Analyses made in the direction of nuclear power plant lifetime extension are based upon components ageing management programs. On the other hand, the large uncertainties of the ageing parameters, as well as the uncertainties associated with most of the reliability data collections, are widely acknowledged. This paper addresses the uncertainty and sensitivity analyses conducted utilizing a previously developed age-dependent unavailability model, integrating effects of test and maintenance activities, for a selected stand-by safety system in a nuclear power plant. The most important problem is the lack of data concerning the effects of ageing, as well as the relatively high uncertainty associated with these data, which would correspond to more detailed modelling of ageing. A standard Monte Carlo simulation was coded for the purpose of this paper and utilized in the process of assessing the propagation of component ageing parameter uncertainty at system level. The obtained results from the uncertainty analysis indicate the extent to which the uncertainty of the selected
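A minimal sketch of this kind of propagation, assuming a simple time-averaged unavailability model with a linearly ageing failure rate; the formula, distribution, and numbers below are illustrative, not the paper's model or data.

```python
import random
import statistics

random.seed(2)

def mean_unavailability(T, lam0, a, rho=1e-3):
    # time-averaged unavailability over a surveillance test interval T
    # for a component with linearly ageing failure rate lam(t) = lam0 + a*t
    return rho + lam0 * T / 2 + a * T ** 2 / 3

def spread(T, n=2000):
    # propagate an uncertain (lognormal) ageing rate a through the model
    us = [mean_unavailability(T, 1e-5, random.lognormvariate(-20, 1.0))
          for _ in range(n)]
    return statistics.stdev(us)

# the uncertainty impact grows as the surveillance test interval extends
s_short, s_long = spread(2000), spread(8000)
print(s_short, s_long)
```

Because the ageing term scales with T², the output spread grows rapidly with the test interval, which mirrors the paper's highlight that the uncertainty impact grows as the interval is extended.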

  16. Sensitivity Analyses for Cross-Coupled Parameters in Automotive Powertrain Optimization

    Directory of Open Access Journals (Sweden)

    Pongpun Othaganont

    2014-06-01

    Full Text Available When vehicle manufacturers are developing new hybrid and electric vehicles, modeling and simulation are frequently used to predict the performance of the new vehicles from an early stage in the product lifecycle. Typically, models are used to predict the range, performance and energy consumption of a future planned production vehicle; they also allow the designer to optimize the vehicle’s configuration. Another use for the models is in performing sensitivity analysis, which helps us understand which parameters have the most influence on model predictions and real-world behaviors. There are various techniques for sensitivity analysis; some are numerical, but the greatest insights are obtained analytically, with sensitivity defined in terms of partial derivatives. Existing methods in the literature give us a useful, quantified measure of parameter sensitivity, a first-order effect, but they do not consider second-order effects. Second-order effects could give us additional insights: for example, a first-order analysis might tell us that a limiting factor is the efficiency of the vehicle’s prime mover; our new second-order analysis will tell us how quickly the efficiency of the powertrain will become of greater significance. In this paper, we develop a method based on formal optimization mathematics for rapid second-order sensitivity analyses and illustrate it through a case study on a C-segment electric vehicle.
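The distinction between first- and second-order sensitivities can be illustrated with finite differences on a toy range model; the model below is a hypothetical stand-in, not the paper's vehicle model, and the paper derives its sensitivities analytically rather than numerically.

```python
def ev_range(efficiency, battery_kwh=40.0):
    # hypothetical range model: fixed losses plus an efficiency-dependent term
    return battery_kwh / (0.05 + 0.1 / efficiency)

def d1(f, x, h=1e-6):
    # central-difference first derivative: first-order sensitivity
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    # central-difference second derivative: second-order sensitivity
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

s1 = d1(ev_range, 0.9)   # km gained per unit increase in efficiency
s2 = d2(ev_range, 0.9)   # how quickly that sensitivity itself changes
print(s1, s2)            # s1 > 0; s2 < 0 signals diminishing returns
```

Here the negative second-order term tells the designer that further efficiency gains pay off progressively less, the kind of insight the paper argues a first-order analysis alone cannot provide.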

  17. Sensitivity and uncertainty analyses of the HCLL mock-up experiment

    International Nuclear Information System (INIS)

    Leichtle, D.; Fischer, U.; Kodeli, I.; Perel, R.L.; Klix, A.; Batistoni, P.; Villari, R.

    2010-01-01

    Within the European Fusion Technology Programme, dedicated computational methods, tools and data have been developed and validated for sensitivity and uncertainty analyses of fusion neutronics experiments. The present paper is devoted to such analyses of the recent neutronics experiment on a mock-up of the Helium-Cooled Lithium Lead Test Blanket Module for ITER at the Frascati neutron generator. They comprise both probabilistic and deterministic methodologies for the assessment of uncertainties of nuclear responses due to nuclear data uncertainties, and of their sensitivities to the involved reaction cross-section data. We have used the MCNP and MCSEN codes in the Monte Carlo approach, and DORT and SUSD3D in the deterministic approach, for transport and sensitivity calculations, respectively. In both cases the JEFF-3.1 and FENDL-2.1 libraries were used for the transport data, and mainly the ENDF/B-VI.8 and SCALE6.0 libraries for the relevant covariance data. With a few exceptions, the two different methodological approaches were shown to provide consistent results. A total nuclear-data-related uncertainty in the range of 1-2% (1σ confidence level) was assessed for the tritium production in the HCLL mock-up experiment.
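Such assessments combine sensitivities with covariance data through the first-order "sandwich rule", var(R) = SᵀCS. A toy sketch with invented sensitivity and covariance numbers (not the actual tritium-production sensitivities or nuclear-data covariances):

```python
import math

# first-order sandwich rule: var(R) = S^T C S  (illustrative numbers)
S = [0.8, -0.3, 0.5]            # relative sensitivities dR/R per dσ/σ
C = [[4e-4, 1e-4, 0.0],         # relative covariance matrix of the
     [1e-4, 9e-4, 0.0],         # cross-section data
     [0.0,  0.0,  1e-4]]

var = sum(S[i] * C[i][j] * S[j] for i in range(3) for j in range(3))
rel_uncertainty = math.sqrt(var)   # 1-sigma relative uncertainty of R
print(f"{100 * rel_uncertainty:.2f}%")
```

With these made-up inputs the propagated uncertainty comes out near the 1-2% scale quoted in the abstract; real analyses use sensitivity profiles and covariance matrices resolved over energy groups and reactions.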

  18. Sampling and sensitivity analyses tools (SaSAT for computational modelling

    Directory of Open Access Journals (Sweden)

    Wilson David P

    2008-02-01

    Full Text Available Abstract SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel, but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling, and their usefulness in this context is demonstrated.
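Latin hypercube sampling is a typical example of the "efficient and equitable sampling of parameter space" such toolboxes offer. A minimal version in Python (not SaSAT's Matlab implementation): each parameter's [0, 1) range is split into equal-probability strata, one sample is drawn per stratum, and the strata are shuffled independently per parameter.

```python
import random

def latin_hypercube(n_samples, n_params, rng=random.Random(0)):
    """One stratified sample per equal-probability stratum, per parameter."""
    samples = [[0.0] * n_params for _ in range(n_samples)]
    for j in range(n_params):
        # one uniform draw inside each of the n_samples strata
        strata = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(strata)
        for i in range(n_samples):
            samples[i][j] = strata[i]
    return samples

lhs = latin_hypercube(10, 2)
# each parameter gets exactly one sample in each of the 10 strata
for j in range(2):
    assert sorted(int(10 * row[j]) for row in lhs) == list(range(10))
```

Mapping each uniform coordinate through a parameter's inverse CDF then yields stratified samples from arbitrary marginal distributions.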

  19. The necessity for comparative risk analyses as seen from the political point of view

    International Nuclear Information System (INIS)

    Steger, U.

    1981-01-01

    The author describes the currently insufficient utilization of risk analyses in the political decision process and investigates whether other technologies encounter the same difficulties of acceptance as in the nuclear energy field. This being likely, he attempts to determine what contribution comparative risk analyses could make to the process of democratic will-formation so that new technologies are accepted. First, the author establishes theses criticizing recent scientific efforts in the field of risk analyses and their usability for the political decision process. He then defines the criteria risk analyses have to meet in order to serve as scientific input to consultative political discussions. (orig./HP) [de

  20. Sensitivity and uncertainty analyses applied to criticality safety validation. Volume 2

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Hopper, C.M.; Parks, C.V.

    1999-01-01

    This report presents the application of sensitivity and uncertainty (S/U) analysis methodologies developed in Volume 1 to the code/data validation tasks of a criticality safety computational study. Sensitivity and uncertainty analysis methods were first developed for application to fast reactor studies in the 1970s. This work has revitalized and updated the existing S/U computational capabilities such that they can be used as prototypic modules of the SCALE code system, which contains criticality analysis tools currently in use by criticality safety practitioners. After complete development, simplified tools are expected to be released for general use. The methods for application of S/U and generalized linear-least-square methodology (GLLSM) tools to the criticality safety validation procedures were described in Volume 1 of this report. Volume 2 of this report presents the application of these procedures to the validation of criticality safety analyses supporting uranium operations where enrichments are greater than 5 wt %. Specifically, the traditional k_eff trending analyses are compared with newly developed k_eff trending procedures, utilizing the D and c_k coefficients described in Volume 1. These newly developed procedures are applied to a family of postulated systems involving U(11)O2 fuel, with H/X values ranging from 0 to 1,000. These analyses produced a series of guidance and recommendations for the general usage of these various techniques. Recommendations for future work are also detailed.
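Traditional k_eff trending amounts to fitting a trend of calculated benchmark k_eff against a system parameter such as H/X and reading off the bias over the range of applicability. A minimal least-squares sketch with hypothetical benchmark values (not the report's data):

```python
def linear_trend(xs, ys):
    # ordinary least-squares slope and intercept
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# hypothetical benchmark results: (H/X, calculated k_eff)
hx = [0, 50, 100, 300, 500, 1000]
keff = [0.996, 0.997, 0.998, 0.999, 1.000, 1.001]
slope, intercept = linear_trend(hx, keff)
print(slope, intercept)   # positive slope: bias varies with moderation
```

The D and c_k approaches replace the single trending parameter with sensitivity-based similarity measures between benchmarks and the application system.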

  1. The Extraction of Vegetation Points from LiDAR Using 3D Fractal Dimension Analyses

    Directory of Open Access Journals (Sweden)

    Haiquan Yang

    2015-08-01

    Full Text Available Light Detection and Ranging (LiDAR), a high-precision technique used for acquiring three-dimensional (3D) surface information, is widely used to study surface vegetation information. The extraction of a vegetation point set from the LiDAR point cloud is a basic starting point for vegetation information analysis and an important part of its further processing. To extract the vegetation point set completely and to describe the different spatial morphological characteristics of various features in a LiDAR point cloud, we have used 3D fractal dimensions. We discovered that every feature has its own distinctive 3D fractal dimension interval. Based on the 3D fractal dimensions of tall trees, we propose a new method for the extraction of vegetation using airborne LiDAR. According to this method, target features can be distinguished based on their morphological characteristics. The non-ground points acquired by filtering are processed by region-growing segmentation, and the morphological characteristics are evaluated using 3D fractal dimensions to determine the point set for tall trees. Avon, New York, USA was selected as the study area to test the method, and the result demonstrates the method’s efficiency and feasibility. Additionally, the method uses the 3D coordinate properties of the LiDAR point cloud and does not require additional information, such as return intensity, giving it a larger scope of application.
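Box counting is a common way to estimate such 3D fractal dimensions (the abstract does not specify the estimator, so this is a generic sketch): count the cubes occupied by points at several box sizes and fit the slope of log N against log(1/size).

```python
import math
import random

def box_count_dimension(points, sizes=(1.0, 0.5, 0.25, 0.125)):
    """Estimate fractal dimension of a 3D point set by box counting:
    slope of log(N_boxes) vs log(1/box_size), via least squares."""
    xs, ys = [], []
    for s in sizes:
        boxes = {tuple(int(c / s) for c in p) for p in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

rng = random.Random(3)
# points filling a volume approach dimension 3; points on a line, dimension 1
volume = [(rng.random(), rng.random(), rng.random()) for _ in range(20000)]
line = [(t / 5000, t / 5000, t / 5000) for t in range(5000)]
d_volume = box_count_dimension(volume)
d_line = box_count_dimension(line)
print(d_volume, d_line)
```

A tree crown, being a porous volume, would land in an intermediate interval, which is what lets the method separate tall trees from planar features such as roofs.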

  2. KM3NeT/ARCA sensitivity and discovery potential for neutrino point-like sources

    Directory of Open Access Journals (Sweden)

    Trovato A.

    2016-01-01

    Full Text Available KM3NeT is a large research infrastructure with a network of deep-sea neutrino telescopes in the abyss of the Mediterranean Sea. Of these, the KM3NeT/ARCA detector, installed in the KM3NeT-It node of the network, is optimised for studying high-energy neutrinos of cosmic origin. Sensitivities to galactic sources such as the supernova remnant RXJ1713.7-3946 and the pulsar wind nebula Vela X are presented as well as sensitivities to a generic point source with an E−2 spectrum which represents an approximation for the spectrum of extragalactic candidate neutrino sources.

  3. Applicability of low-melting-point microcrystalline wax to develop temperature-sensitive formulations.

    Science.gov (United States)

    Matsumoto, Kohei; Kimura, Shin-Ichiro; Iwao, Yasunori; Itai, Shigeru

    2017-10-30

    Low-melting-point substances are widely used to develop temperature-sensitive formulations. In this study, we focused on microcrystalline wax (MCW) as a low-melting-point substance. We evaluated the drug release behavior of wax matrix (WM) particles prepared with various MCWs under various temperature conditions. WM particles containing acetaminophen were prepared using a spray congealing technique. In the dissolution test at 37°C, WM particles containing low-melting-point MCWs whose melting starts at approximately 40°C (Hi-Mic-1045 or 1070) released the drug initially, followed by the release of only a small amount. On the other hand, in the dissolution tests at 20 and 25°C for WM particles containing Hi-Mic-1045, and at 20, 25, and 30°C for those containing Hi-Mic-1070, both WM particles showed faster drug release than at 37°C. The characteristic suppression of drug release at 37°C by WM particles containing low-melting-point MCWs was attributed to MCW melting, as evidenced by differential scanning calorimetry and powder X-ray diffraction analyses. Taken together, low-melting-point MCWs may be applicable to the development of implantable temperature-sensitive formulations in which drug release is accelerated by cooling the administration site. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Reliability of an experimental method to analyse the impact point on a golf ball during putting.

    Science.gov (United States)

    Richardson, Ashley K; Mitchell, Andrew C S; Hughes, Gerwyn

    2015-06-01

    This study aimed to examine the reliability of an experimental method identifying the location of the impact point on a golf ball during putting. Forty trials were completed using a mechanical putting robot set to reproduce a putt of 3.2 m, with four different putter-ball combinations. After locating the centre of the dimple pattern (centroid), the following variables were tested: distance of the impact point from the centroid, angle of the impact point from the centroid, and distance of the impact point from the centroid derived from the X, Y coordinates. Good to excellent reliability was demonstrated in all impact variables, reflected in very strong relative (ICC = 0.98-1.00) and absolute reliability (SEM% = 0.9-4.3%). The highest SEM% observed was 7%, for the angle of the impact point from the centroid. In conclusion, the experimental method was shown to be reliable at locating the centroid of a golf ball, therefore allowing for the identification of the point of impact with the putter head, and is suitable for use in subsequent studies.
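The absolute-reliability figure (SEM%) quoted above is conventionally derived from the ICC as SEM = SD·√(1 − ICC), expressed as a percentage of the mean. A sketch with hypothetical numbers, not the study's data:

```python
import math

def sem_percent(sd, icc, mean):
    """Absolute reliability: SEM = SD * sqrt(1 - ICC), as a % of the mean."""
    return 100.0 * sd * math.sqrt(1.0 - icc) / mean

# hypothetical between-trial SD, ICC, and mean impact distance
s = sem_percent(sd=0.40, icc=0.98, mean=2.1)
print(round(s, 1))   # ≈ 2.7
```

High ICC values thus translate directly into small measurement error relative to the measured quantity.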

  5. Normalized Point Source Sensitivity for Off-Axis Optical Performance Evaluation of the Thirty Meter Telescope

    Science.gov (United States)

    Seo, Byoung-Joon; Nissly, Carl; Troy, Mitchell; Angeli, George

    2010-01-01

    The Normalized Point Source Sensitivity (PSSN) has previously been defined and analyzed as an On-Axis seeing-limited telescope performance metric. In this paper, we expand the scope of the PSSN definition to include Off-Axis field of view (FoV) points and apply this generalized metric for performance evaluation of the Thirty Meter Telescope (TMT). We first propose various possible choices for the PSSN definition and select one as our baseline. We show that our baseline metric has useful properties including the multiplicative feature even when considering Off-Axis FoV points, which has proven to be useful for optimizing the telescope error budget. Various TMT optical errors are considered for the performance evaluation including segment alignment and phasing, segment surface figures, temperature, and gravity, whose On-Axis PSSN values have previously been published by our group.

  6. Aleatoric and epistemic uncertainties in sampling based nuclear data uncertainty and sensitivity analyses

    International Nuclear Information System (INIS)

    Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.

    2012-01-01

    Sampling-based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of the uncertainty has been minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible compared to the impact from sampling the epistemic uncertainties. Obviously, this process may incur high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with a much smaller number of Monte Carlo histories each. The method is applied, along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va, to various critical assemblies and a full-scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with computing time reduced by factors on the order of 100. (authors)
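One way to realize the two-series idea is to run the same epistemic samples twice with independent Monte Carlo noise: in the covariance of the two series the aleatoric noise averages out, leaving an estimate of the epistemic variance alone. A toy sketch with invented numbers, not the XSUSA/KENO-Va setup:

```python
import random
import statistics

rng = random.Random(7)

def noisy_solver(k_true, histories):
    # stand-in for a Monte Carlo transport run: unbiased estimate of
    # k_true with aleatoric (statistical) noise ~ 1/sqrt(histories)
    return k_true + rng.gauss(0, 0.05 / histories ** 0.5)

# epistemic samples: e.g. nuclear-data perturbations shifting the true k
epistemic = [rng.gauss(1.00, 0.005) for _ in range(500)]

# two series on the SAME epistemic samples, independent aleatoric noise
a = [noisy_solver(k, histories=100) for k in epistemic]
b = [noisy_solver(k, histories=100) for k in epistemic]

ma, mb = statistics.mean(a), statistics.mean(b)
cov_ab = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

total_std = statistics.stdev(a)           # epistemic + aleatoric combined
epistemic_std = max(cov_ab, 0.0) ** 0.5   # aleatoric part cancels in cov
print(total_std, epistemic_std)
```

With only 100 histories per run the single-series spread overstates the epistemic uncertainty, while the cross-series covariance recovers it without running long, expensive transport calculations.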

  7. Deformation analysis of the high point field Košická Nová Ves

    Directory of Open Access Journals (Sweden)

    Sedlák Vladimír

    2003-09-01

    Full Text Available From the scientific point of view, deformation measurements serve the objective determination of movements; from the technical point of view, they inform building technologies and construction procedures. Movements determined by means of geodetic terrestrial or satellite navigation technologies provide information about displacements at a concrete time, on the basis of repeated geodetic measurements in concrete time intervals (epochs). The level deformation investigation of the points of the monitoring station stabilized in the fill-slope territory of Košická Nová Ves is the main task of the presented paper. Level measurements were realized in autumn 2000 (epoch 2000.9), considered the first epoch of the deformation measurement, and in spring 2001 (epoch 2001.3), considered the second epoch of the deformation measurement.

  8. Mechanical pain sensitivity of deep tissues in children - possible development of myofascial trigger points in children

    Directory of Open Access Journals (Sweden)

    Han Ting-I

    2012-02-01

    Full Text Available Abstract Background It is still unclear when latent myofascial trigger points (MTrPs) develop during early life. This study was designed to investigate the mechanical pain sensitivity of deep tissues in children in order to determine the possible timing of the development of latent MTrPs and attachment trigger points (A-TrPs) in school children. Methods Five hundred and five healthy school children (age 4-11 years) were investigated. A pressure algometer was used to measure the pressure pain threshold (PPT) at three different sites in the brachioradialis muscle: the lateral epicondyle at the elbow (site A, assumed to be the A-TrP site), the mid-point of the muscle belly (site B, assumed to be the MTrP site), and the muscle-tendon junction as a control site (site C). Results The results showed that, for all children in this study, the mean PPT values at the assumed trigger point sites were significantly lower than at the control site. Conclusions It is concluded that a child has increased sensitivity at the tendon attachment site and the muscle belly (endplate zone) after the age of 4 years. Therefore, it is likely that a child may develop an A-TrP and a latent MTrP in the brachioradialis muscle after the age of 4 years. The changes in sensitivity, or the development of these trigger points, may not be related to the activity level of children aged 7-11 years. Further investigation is still required to identify the exact timing of the initial occurrence of A-TrPs and latent MTrPs.

  9. Scoping and sensitivity analyses for the Demonstration Tokamak Hybrid Reactor (DTHR)

    International Nuclear Information System (INIS)

    Sink, D.A.; Gibson, G.

    1979-03-01

    The results of an extensive set of parametric studies are presented which provide analytical data on the effects of various tokamak parameters on the performance and cost of the DTHR (Demonstration Tokamak Hybrid Reactor). The studies were centered on a point design which is described in detail. Variations in the device size, neutron wall loading, and plasma aspect ratio are presented, and the effects on direct hardware costs, fissile fuel production (breeding), fusion power production, electrical power consumption, and thermal power production are shown graphically. The studies considered both ignition and beam-driven operation of DTHR and yielded results based on two empirical scaling laws presently used in reactor studies. Sensitivity studies were also made for variations in the following key parameters: the plasma elongation, the minor radius, the TF coil peak field, the neutral beam injection power, and the Z_eff of the plasma.

  10. Test sensitivity is important for detecting variability in pointing comprehension in canines.

    Science.gov (United States)

    Pongrácz, Péter; Gácsi, Márta; Hegedüs, Dorottya; Péter, András; Miklósi, Adám

    2013-09-01

    Several articles have been recently published on dogs' (Canis familiaris) performance in two-way object choice experiments in which subjects had to find hidden food by utilizing human pointing. The interpretation of results has led to a vivid theoretical debate about the cognitive background of human gestural signal understanding in dogs, despite the fact that many important details of the testing method have not yet been standardized. We report three experiments that aim to reveal how some procedural differences influence adult companion dogs' performance in these tests. Utilizing a large sample in Experiment 1, we provide evidence that neither the keeping conditions (garden/house) nor the location of the testing (outdoor/indoor) affect a dog's performance. In Experiment 2, we compare dogs' performance using three different types of pointing gestures. Dogs' performance varied between momentary distal and momentary cross-pointing, but "low"- and "high"-performer dogs chose uniformly better than chance level if they responded to sustained pointing gestures with reinforcement (food reward and a clicking sound; "clicker pointing"). In Experiment 3, we show that single features of the aforementioned "clicker pointing" method can slightly improve dogs' success rate if they are added one by one to the momentary distal pointing method. These results provide evidence that although companion dogs show a robust performance at different testing locations regardless of their keeping conditions, the exact execution of the human gesture and additional reinforcement techniques have a substantial effect on the outcomes. Consequently, researchers should standardize their methodology before engaging in debates on the comparative aspects of socio-cognitive skills, because the procedures they utilize may differ in sensitivity for detecting differences.

  11. Robust artificial neural network for reliability and sensitivity analyses of complex non-linear systems.

    Science.gov (United States)

    Oparaji, Uchenna; Sheu, Rong-Jiun; Bankhead, Mark; Austin, Jonathan; Patelli, Edoardo

    2017-12-01

    Artificial Neural Networks (ANNs) are commonly used in place of expensive models to reduce the computational burden required for uncertainty quantification, reliability and sensitivity analyses. An ANN with a selected architecture is trained with the back-propagation algorithm from a few data points representative of the input/output relationship of the underlying model of interest. However, differently performing ANNs might be obtained from the same training data as a result of the random initialization of the weight parameters in each network, leading to uncertainty in selecting the best-performing ANN. On the other hand, using cross-validation to select the ANN with the highest R² value can lead to bias in the prediction, because R² cannot determine whether the prediction made by an ANN is biased. Additionally, R² does not indicate whether a model is adequate, as it is possible to have a low R² for a good model and a high R² for a bad model. Hence, in this paper, we propose an approach to improve the robustness of predictions made by ANNs. The approach is based on a systematic combination of identically trained ANNs, coupling a Bayesian framework with model averaging. Additionally, the uncertainties of the robust prediction derived from the approach are quantified in terms of confidence intervals. To demonstrate the applicability of the proposed approach, two synthetic numerical examples are presented. Finally, the proposed approach is used to perform reliability and sensitivity analyses on a process simulation model of a UK nuclear effluent treatment plant developed by the National Nuclear Laboratory (NNL), treated in this study as a black box, employing a set of training data as a test case. This model has been extensively validated against plant and experimental data and used to support the UK effluent discharge strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.
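The core idea, combining identically trained networks and reporting a confidence interval rather than a single prediction, can be sketched with plain averaging; the paper couples this with Bayesian model averaging, whereas equal weights and invented prediction values are assumed here.

```python
import statistics

# predictions of five identically trained networks that differ only in
# their random weight initialization (illustrative numbers)
ensemble_predictions = [0.82, 0.79, 0.85, 0.80, 0.83]

robust = statistics.mean(ensemble_predictions)
spread = statistics.stdev(ensemble_predictions)
# ~95% confidence interval for the ensemble mean (normal approximation)
half_width = 1.96 * spread / len(ensemble_predictions) ** 0.5
print(f"{robust:.3f} +/- {half_width:.3f}")
```

The width of the interval exposes exactly the initialization-induced uncertainty that a single cross-validated R² score hides.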

  12. Sensitivity Study of Poisson's Ratio Used in Soil Structure Interaction (SSI) Analyses

    International Nuclear Information System (INIS)

    Han, Seung-ju; You, Dong-Hyun; Jang, Jung-bum; Yun, Kwan-hee

    2016-01-01

    The preliminary review for Design Certification (DC) of APR1400 was accepted by the NRC on March 4, 2015. After the acceptance of the application for standard DC of APR1400, KHNP has responded to the Requests for Additional Information (RAIs) raised by the NRC to undertake a full design certification review. Design certification is achieved through the NRC's rulemaking process and is founded on the staff's review of the application, which addresses the various safety issues associated with the proposed nuclear power plant design, independent of a specific site. One of the RAIs issued by the USNRC pertains to Design Control Document (DCD) Ch. 3.7, 'Seismic Design': DCD Tables 3.7A-1 and 3.7A-2 show Poisson’s ratios in the S1 and S2 soil profiles used for SSI analysis as great as 0.47 and 0.48, respectively. Based on staff experience, use of Poisson's ratios approaching these values may result in numerical instability of the SSI analysis results. A sensitivity study was performed using the ACS SASSI NI model of APR1400 with the S1 and S2 soil profiles to demonstrate that the Poisson’s ratio values used in the SSI analyses of the S1 and S2 soil profile cases do not produce numerical instabilities in the SSI analysis results. No abrupt changes or spurious peaks, which tend to indicate the existence of numerical sensitivities in the SASSI solutions, appear in the computed transfer functions of the original SSI analyses that have the maximum dynamic Poisson’s ratio values of 0.47 and 0.48, or in the re-computed transfer functions that have the maximum dynamic Poisson’s ratio values limited to 0.42 and 0.45.
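The numerical concern with near-incompressible soils can be seen in the elastic relation Vp/Vs = √(2(1 − ν)/(1 − 2ν)): as Poisson's ratio ν approaches 0.5, the P-wave velocity diverges relative to the S-wave velocity, and with it the stiffness contrast the SSI solver must resolve.

```python
import math

def vp_over_vs(nu):
    # P-wave to S-wave velocity ratio for a linear elastic medium;
    # diverges as Poisson's ratio approaches the incompressible limit 0.5
    return math.sqrt(2.0 * (1.0 - nu) / (1.0 - 2.0 * nu))

for nu in (0.42, 0.45, 0.47, 0.48, 0.49):
    print(nu, round(vp_over_vs(nu), 2))
```

Between ν = 0.42 and ν = 0.49 the ratio roughly triples, which is why values of 0.47-0.48 attract scrutiny even though, as the study shows, they need not produce instabilities in practice.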

  13. Sensitivity analyses of factors influencing CMAQ performance for fine particulate nitrate.

    Science.gov (United States)

    Shimadera, Hikari; Hayami, Hiroshi; Chatani, Satoru; Morino, Yu; Mori, Yasuaki; Morikawa, Tazuko; Yamaji, Kazuyo; Ohara, Toshimasa

    2014-04-01

    Improvement of air quality models is required so that they can be utilized to design effective control strategies for fine particulate matter (PM2.5). The Community Multiscale Air Quality modeling system was applied to the Greater Tokyo Area of Japan in winter 2010 and summer 2011. The model results were compared with observed concentrations of PM2.5 sulfate (SO4(2-)), nitrate (NO3(-)) and ammonium, and gaseous nitric acid (HNO3) and ammonia (NH3). The model approximately reproduced PM2.5 SO4(2-) concentration, but clearly overestimated PM2.5 NO3(-) concentration, which was attributed to overestimation of production of ammonium nitrate (NH4NO3). This study conducted sensitivity analyses of factors associated with the model performance for PM2.5 NO3(-) concentration, including temperature and relative humidity, emission of nitrogen oxides, seasonal variation of NH3 emission, HNO3 and NH3 dry deposition velocities, and heterogeneous reaction probability of dinitrogen pentoxide. Change in NH3 emission directly affected NH3 concentration, and substantially affected NH4NO3 concentration. Higher dry deposition velocities of HNO3 and NH3 led to substantial reductions of concentrations of the gaseous species and NH4NO3. Because uncertainties in NH3 emission and dry deposition processes are probably large, these processes may be key factors for improvement of the model performance for PM2.5 NO3(-). The Community Multiscale Air Quality modeling system clearly overestimated the concentration of fine particulate nitrate in the Greater Tokyo Area of Japan, which was attributed to overestimation of production of ammonium nitrate. Sensitivity analyses were conducted for factors associated with the model performance for nitrate. Ammonia emission and dry deposition of nitric acid and ammonia may be key factors for improvement of the model performance.

  14. Hydrophilic property of 316L stainless steel after treatment by atmospheric pressure corona streamer plasma using surface-sensitive analyses

    Energy Technology Data Exchange (ETDEWEB)

    Al-Hamarneh, Ibrahim, E-mail: hamarnehibrahim@yahoo.com [Department of Physics, Faculty of Science, Al-Balqa Applied University, Salt 19117 (Jordan); Pedrow, Patrick [School of Electrical Engineering and Computer Science, Washington State University, Pullman, WA 99164 (United States); Eskhan, Asma; Abu-Lail, Nehal [Gene and Linda Voiland School of Chemical Engineering and Bioengineering, Washington State University, Pullman, WA 99164 (United States)

    2012-10-15

    Highlights: ► Surface hydrophilic property of surgical-grade 316L stainless steel was enhanced by Ar-O2 corona streamer plasma treatment. ► Hydrophilicity, surface morphology, roughness, and chemical composition before and after plasma treatment were evaluated. ► Contact angle measurements and surface-sensitive analysis techniques, including XPS and AFM, were carried out. ► Optimum plasma treatment conditions for the SS 316L surface were determined. - Abstract: Surgical-grade 316L stainless steel (SS 316L) had its surface hydrophilic property enhanced by processing in a corona streamer plasma reactor using O2 gas mixed with Ar at atmospheric pressure. Reactor excitation was 60 Hz ac high voltage (0-10 kV RMS) applied to a multi-needle-to-grounded-screen electrode configuration. The treated surface was characterized with a contact angle tester. The surface free energy (SFE) of the treated stainless steel increased measurably compared to the untreated surface. The Ar-O2 plasma was more effective in enhancing the SFE than the Ar-only plasma. Optimum conditions for the plasma treatment system used in this study were obtained. X-ray photoelectron spectroscopy (XPS) characterization of the chemical composition of the treated surfaces confirms the existence of new oxygen-containing functional groups contributing to the change in the hydrophilic nature of the surface. These new functional groups were generated by surface reactions caused by reactive oxidation of substrate species. Atomic force microscopy (AFM) images were generated to investigate morphological and roughness changes on the plasma-treated surfaces. The aging effect in air after treatment was also studied.

  15. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    functionality by statistically modelling the distribution of the synaptic vesicles in two groups of rats: a control group subjected to sham stress and a stressed group subjected to a single acute foot-shock (FS)-stress episode. We hypothesize that the synaptic vesicles have different spatial distributions...... in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions....

  16. Systematic comparative and sensitivity analyses of additive and outranking techniques for supporting impact significance assessments

    International Nuclear Information System (INIS)

    Cloquell-Ballester, Vicente-Agustin; Monterde-Diaz, Rafael; Cloquell-Ballester, Victor-Andres; Santamarina-Siurana, Maria-Cristina

    2007-01-01

    Assessing the significance of environmental impacts is one of the most important and altogether difficult processes of Environmental Impact Assessment. This is largely due to the multicriteria nature of the problem. To date, decision techniques used in the process suffer from two drawbacks, namely the problem of compensation and the problem of identifying the 'exact boundary' between sub-ranges. This article discusses these issues and proposes a methodology for determining the significance of environmental impacts based on comparative and sensitivity analyses using the Electre TRI technique. An application of the methodology to the environmental assessment of a Power Plant project within the Valencian Region (Spain) is presented, and its performance evaluated. It is concluded that, contrary to other techniques, Electre TRI automatically identifies those cases where allocation of significance categories is most difficult and, when combined with sensitivity analysis, offers the greatest robustness in the face of variation in the weights of the significance attributes. Likewise, this research demonstrates the efficacy of systematic comparison between Electre TRI and sum-based techniques in the solution of assignment problems. The proposed methodology can therefore be regarded as a successful aid to the decision-maker, who will ultimately take the final decision.

  17. Highly sensitive chemiluminescent point mutation detection by circular strand-displacement amplification reaction.

    Science.gov (United States)

    Shi, Chao; Ge, Yujie; Gu, Hongxi; Ma, Cuiping

    2011-08-15

    Single nucleotide polymorphism (SNP) genotyping is attracting extensive attention owing to its direct connection with human diseases, including cancers. Here, we have developed a highly sensitive chemiluminescence biosensor for point mutation detection at room temperature, based on circular strand-displacement amplification with separation by magnetic beads to reduce the background signal. This method takes advantage of both T4 DNA ligase, which recognizes single-base mismatches with high selectivity, and the strand-displacement reaction of polymerase to perform signal amplification. The detection limit of this method was 1.3 × 10(-16) M, better than that of most reported SNP detection methods. Additionally, the magnetic beads used as an immobilization carrier not only reduce the background signal but may also have potential applications in high-throughput screening for SNP detection in the human genome. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Sensitive skin at menopause; dew point and electrometric properties of the stratum corneum.

    Science.gov (United States)

    Paquet, F; Piérard-Franchimont, C; Fumal, I; Goffin, V; Paye, M; Piérard, G E

    1998-01-12

    A number of menopausal women experience skin sensitive to various environmental threats. Two panels of 15 menopausal women, with or without HRT, were compared. We studied the response of their stratum corneum to variations in environmental humidity, either in air or in response to an emollient. Environmental dew point and electrometric measurements on the skin were recorded to search for correlations. Data show that baseline stratum corneum hydration is influenced by the dew point. HRT improves the barrier function of the skin. The use of an emollient further extends the improvement in the functional properties of skin in menopausal women. Both HRT and an emollient can counteract in part some of the deleterious effects of cold and dry weather.

  19. Extending and Applying Spartan to Perform Temporal Sensitivity Analyses for Predicting Changes in Influential Biological Pathways in Computational Models.

    Science.gov (United States)

    Alden, Kieran; Timmis, Jon; Andrews, Paul S; Veiga-Fernandes, Henrique; Coles, Mark

    2017-01-01

    Through integrating real-time imaging, computational modelling, and statistical analysis approaches, previous work has suggested that the induction of and response to cell adhesion factors is the key initiating pathway in early lymphoid tissue development, in contrast to the previously accepted view that the process is triggered by chemokine-mediated cell recruitment. These model-derived hypotheses were developed using spartan, an open-source sensitivity analysis toolkit designed to establish and understand the relationship between a computational model and the biological system that the model captures. Here, we extend the functionality available in spartan to permit the production of statistical analyses that contrast the behavior exhibited by a computational model at various simulated time-points, enabling a temporal analysis that could suggest whether the influence of biological mechanisms changes over time. We exemplify this extended functionality by using the computational model of lymphoid tissue development as a time-lapse tool. By generating results at twelve-hour intervals, we show how the extensions to spartan have been used to suggest that lymphoid tissue development could be biphasic, and predict the time-point when a switch in the influence of biological mechanisms might occur.
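
    The time-point contrast described above can be sketched with the Vargha-Delaney A statistic, a non-parametric effect-size measure of the kind spartan uses to score differences between simulation result distributions. The data below are illustrative, not taken from the lymphoid tissue model.

    ```python
    def vargha_delaney_a(sample1, sample2):
        """Vargha-Delaney A statistic: probability that a value drawn from
        sample1 exceeds one drawn from sample2 (0.5 = no difference)."""
        greater = sum(1 for x in sample1 for y in sample2 if x > y)
        ties = sum(1 for x in sample1 for y in sample2 if x == y)
        return (greater + 0.5 * ties) / (len(sample1) * len(sample2))

    # Compare a simulated response at two time-points: if A drifts away from
    # 0.5 between baseline and a later time-point, the influence of the
    # perturbed mechanism may be changing over time (illustrative data only).
    baseline = [10.1, 9.8, 10.3, 10.0, 9.9]
    hour_12 = [12.4, 11.9, 12.8, 12.1, 12.6]
    a_score = vargha_delaney_a(hour_12, baseline)
    ```

    Repeating this comparison at each simulated interval gives the kind of time-lapse sensitivity profile the abstract describes.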

  20. Accelerator mass spectrometry analyses of environmental radionuclides: sensitivity, precision and standardisation

    Science.gov (United States)

    Hotchkis; Fink; Tuniz; Vogt

    2000-07-01

    Accelerator Mass Spectrometry (AMS) is the analytical technique of choice for the detection of long-lived radionuclides which cannot be practically analysed with decay counting or conventional mass spectrometry. AMS allows an isotopic sensitivity as low as one part in 10(15) for 14C (5.73 ka), 10Be (1.6 Ma), 26Al (720 ka), 36Cl (301 ka), 41Ca (104 ka), 129I (16 Ma) and other long-lived radionuclides occurring in nature at ultra-trace levels. These radionuclides can be used as tracers and chronometers in many disciplines: geology, archaeology, astrophysics, biomedicine and materials science. Low-level decay counting techniques have been developed in the last 40-50 years to detect the concentration of cosmogenic, radiogenic and anthropogenic radionuclides in a variety of specimens. Radioactivity measurements for long-lived radionuclides are made difficult by low counting rates and, in some cases, the need for complicated radiochemistry procedures and efficient detectors of soft beta-particles and low-energy x-rays. The sensitivity of AMS is unaffected by the half-life of the isotope being measured, since it is the atoms, not the radiations that result from their decay, that are counted directly. Hence, the efficiency of AMS in the detection of long-lived radionuclides is 10(6)-10(9) times higher than decay counting, and the size of the sample required for analysis is reduced accordingly. For example, 14C is being analysed in samples containing as little as 20 microg of carbon. There is also a world-wide effort to use AMS for the analysis of rare nuclides of heavy mass, such as actinides, with important applications in safeguards and nuclear waste disposal. Finally, AMS microprobes are being developed for the in-situ analysis of stable isotopes in geological samples, semiconductors and other materials. Unfortunately, the use of AMS is limited by the expensive accelerator technology required, but there are several attempts to develop compact AMS spectrometers at low (advances in AMS
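
    The efficiency argument above, counting atoms rather than waiting for decays, can be made concrete with a back-of-the-envelope calculation for a 20 microgram carbon sample, assuming the commonly quoted modern 14C/C abundance of roughly 1.2 × 10⁻¹² (an illustrative textbook value, not from this record):

    ```python
    def decays_per_year(n_atoms, half_life_years):
        """Expected number of decays per year from n_atoms of a radionuclide."""
        return n_atoms * (1 - 0.5 ** (1 / half_life_years))

    AVOGADRO = 6.022e23
    atoms_c = 20e-6 / 12.0 * AVOGADRO      # carbon atoms in a 20 microgram sample
    atoms_c14 = atoms_c * 1.2e-12          # roughly a million 14C atoms
    decays = decays_per_year(atoms_c14, 5730)

    # A decay counter sees only on the order of a hundred events per year
    # from this sample, while AMS can count a useful fraction of the
    # ~1e6 atoms directly -- the source of the 10^6-10^9 efficiency gain.
    ```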

  1. Compact reversed-field pinch reactors (CRFPR): sensitivity study and design-point determination

    International Nuclear Information System (INIS)

    Hagenson, R.L.; Krakowski, R.A.

    1982-07-01

    If the costing assumptions upon which the positive assessment of conventional large superconducting fusion reactors is based prove overly optimistic, approaches that promise considerably increased system power density and reduced mass utilization will be required. These more compact reactor embodiments generally must operate with reduced shield thickness and resistive magnets. Because of the unique magnetic topology associated with the Reversed-Field Pinch (RFP), the compact reactor embodiment for this approach is particularly attractive from the viewpoint of low-field resistive coils operating with Ohmic losses that can be made small relative to the fusion power. A comprehensive system model is developed and described for a steady-state, compact RFP reactor (CRFPR). This model is used to select a unique cost-optimized design point that will be used for a conceptual engineering design. The cost-optimized CRFPR design presented herein would operate with system power densities and mass utilizations that are comparable to fission power plants and are an order of magnitude more favorable than the conventional approaches to magnetic fusion power. The sensitivity of the base-case design point to changes in plasma transport, profiles, beta, blanket thickness, normal vs superconducting coils, and fuel cycle (DT vs DD) is examined. The RFP approach is found to yield a point design for a high-power-density reactor that is surprisingly resilient to changes in key, but relatively unknown, physics and systems parameters.

  2. Sensitive detection of point mutation by electrochemiluminescence and DNA ligase-based assay

    Science.gov (United States)

    Zhou, Huijuan; Wu, Baoyan

    2008-12-01

    The technology of single-base mutation detection plays an increasingly important role in the diagnosis and prognosis of genetic-based diseases. Here we report a new method for the analysis of point mutations in genomic DNA through the integration of an allele-specific oligonucleotide ligation assay (OLA) with a magnetic beads-based electrochemiluminescence (ECL) detection scheme. In this assay, the tris(bipyridine)ruthenium (TBR)-labeled probe and the biotinylated probe are designed to be perfectly complementary to the mutant target, so that a ligation between the two probes is generated by Taq DNA ligase in the presence of the mutant target. If there is an allele mismatch, the ligation does not take place. The ligation products are then captured onto streptavidin-coated paramagnetic beads and detected by measuring the ECL signal of the TBR label. Results showed that the new method had a detection limit down to 10 fmol and was successfully applied in the identification of point mutations from ASTC-α-1, PANC-1 and normal cell lines in codon 273 of the TP53 oncogene. In summary, this method provides a sensitive, cost-effective and easy-to-operate approach for point mutation detection.

  3. Sensitivity analyses of seismic behavior of spent fuel dry cask storage systems

    International Nuclear Information System (INIS)

    Luk, V.K.; Spencer, B.W.; Shaukat, S.K.; Lam, I.P.; Dameron, R.A.

    2003-01-01

    Sandia National Laboratories is conducting a research project to develop a comprehensive methodology for evaluating the seismic behavior of spent fuel dry cask storage systems (DCSS) for the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission (NRC). A typical Independent Spent Fuel Storage Installation (ISFSI) consists of arrays of free-standing storage casks resting on concrete pads. In the safety review process of these cask systems, their seismically induced horizontal displacements and angular rotations must be quantified to determine whether casks will overturn or neighboring casks will collide during a seismic event. The ABAQUS/Explicit code is used to analyze three-dimensional coupled finite element models consisting of three submodels, which are a cylindrical cask or a rectangular module, a flexible concrete pad, and an underlying soil foundation. The coupled model includes two sets of contact surfaces between the submodels with prescribed coefficients of friction. The seismic event is described by one vertical and two horizontal components of statistically independent seismic acceleration time histories. A deconvolution procedure is used to adjust the amplitudes and frequency contents of these three-component reference surface motions before applying them simultaneously at the soil foundation base. The research project focused on examining the dynamic and nonlinear seismic behavior of the coupled model of free-standing DCSS including soil-structure interaction effects. This paper presents a subset of analysis results for a series of parametric analyses. Input variables in the parametric analyses include: designs of the cask/module, time histories of the seismic accelerations, coefficients of friction at the cask/pad interface, and material properties of the soil foundation. In subsequent research, the analysis results will be compiled and presented in nomograms to highlight the sensitivity of seismic response of DCSS to

  4. Demonstrating the efficiency of the EFPC criterion by means of Sensitivity analyses

    International Nuclear Information System (INIS)

    Munier, Raymond

    2007-04-01

    Within the framework of a project to characterise large fractures, a modelling effort was initiated to evaluate the use of a pair of full perimeter criteria, FPC and EFPC, for detecting fractures that could jeopardize the integrity of the canisters in the case of a large nearby earthquake. Though some sensitivity studies were performed in the method study, these mainly targeted aspects of the Monte-Carlo simulations; the impact of uncertainties in the DFN model upon the efficiency of the FPI criteria was left unattended. The main purpose of this report is, therefore, to explore the impact of DFN variability upon the efficiency of the FPI criteria. The outcome of the present report may thus be regarded as complementary to the analyses presented in SKB-R-06-54. To appreciate the details of the present report, the reader should be acquainted with the simulation procedure described in the earlier report. The most important conclusion of this study is that the efficiency of the EFPC is high for all tested model variants. That is, compared to blind deposition, the EFPC is a very powerful tool to identify unsuitable deposition holes, and it is essentially insensitive to variations in the DFN model. If information from adjacent tunnels is used in addition to EFPC, then the probability of detecting a critical deposition hole is almost 100%.

  5. Point processes statistics of stable isotopes: analysing water uptake patterns in a mixed stand of Aleppo pine and Holm oak

    Directory of Open Access Journals (Sweden)

    Carles Comas

    2015-04-01

    Aim of study: Understanding inter- and intra-specific competition for water is crucial in drought-prone environments. However, little is known about the spatial interdependencies for water uptake among individuals in mixed stands. The aim of this work was to compare water uptake patterns during a drought episode in two common Mediterranean tree species, Quercus ilex L. and Pinus halepensis Mill., using the isotope composition of xylem water (δ18O, δ2H) as a hydrological marker. Area of study: The study was performed in a mixed stand, sampling a total of 33 oaks and 78 pines (plot area = 888 m2). We tested the hypothesis that both species take up water differentially along the soil profile, thus showing different levels of tree-to-tree interdependency, depending on whether neighbouring trees belong to one species or the other. Material and methods: We used pair-correlation functions to study intra-specific point-tree configurations and the bivariate pair-correlation function to analyse the inter-specific spatial configuration. Moreover, the isotopic composition of xylem water was analysed as a marked point pattern. Main results: Values for Q. ilex (δ18O = –5.3 ± 0.2‰, δ2H = –54.3 ± 0.7‰) were significantly lower than for P. halepensis (δ18O = –1.2 ± 0.2‰, δ2H = –25.1 ± 0.8‰), pointing to a greater contribution of deeper soil layers to water uptake by Q. ilex. Research highlights: Point-process analyses revealed spatial intra-specific dependencies among neighbouring pines, showing neither oak-oak nor oak-pine interactions. This supports niche segregation for water uptake between the two species.
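
    As an illustration of the kind of second-order summary behind pair-correlation analysis, here is a minimal Ripley's K estimator (no edge correction, for simplicity); the point pattern below is synthetic, not the oak/pine data:

    ```python
    import math

    def ripley_k(points, r, area):
        """Naive Ripley's K estimate for a point pattern in a window of the
        given area.  Values above pi*r^2 suggest clustering at scale r;
        values below suggest repulsion, as expected between competing
        neighbours such as territorial conspecifics or trees."""
        n = len(points)
        pairs = sum(
            1
            for i, (xi, yi) in enumerate(points)
            for j, (xj, yj) in enumerate(points)
            if i != j and math.hypot(xi - xj, yi - yj) <= r
        )
        return area * pairs / (n * (n - 1))

    # A regular (repulsive) pattern: K(r) stays at zero below the grid spacing.
    grid = [(x, y) for x in range(5) for y in range(5)]
    k = ripley_k(grid, 0.5, area=25.0)
    ```

    The pair-correlation function used in the study is essentially the derivative of K with respect to r, normalised so that complete spatial randomness gives a value of 1.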

  6. Derivation of the point spread function for zero-crossing-demodulated position-sensitive detectors

    International Nuclear Information System (INIS)

    Nowlin, C.H.

    1976-07-01

    This work is a mathematical derivation of a high-quality approximation to the point spread function for position-sensitive detectors (PSDs) that use pulse-shape modulation and crossover-time demodulation. The approximation is determined as a general function of the input signals to the crossover detectors so as to enable later determination of optimum position-decoding filters for PSDs. This work is precisely applicable to PSDs that use either RC or LC transmission line encoders. The effects of random variables, such as charge collection time, in the encoding process are included. In addition, this work presents a new, rigorous method for the determination of upper and lower bounds for conditional crossover-time distribution functions (closely related to first-passage-time distribution functions) for arbitrary signals and arbitrary noise covariance functions

  7. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Northwest Arctic, Alaska: M_MAMPT (Marine Mammal Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for Steller sea lions and polar bears in Northwest Arctic, Alaska. Vector points in this data set represent...

  8. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    Science.gov (United States)

    Hall, Carlton Raden

    A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection, and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(lambda,m-1), diffuse backscatter b(lambda,m-1), beam attenuation alpha(lambda,m-1), and beam to diffuse conversion c(lambda,m-1) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled on Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical, and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index (LVCI), was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle adjusted leaf

  9. Contributions to sensitivity analysis and generalized discriminant analysis; Contributions a l'analyse de sensibilite et a l'analyse discriminante generalisee

    Energy Technology Data Exchange (ETDEWEB)

    Jacques, J

    2005-12-15

    Two topics are studied in this thesis: sensitivity analysis and generalized discriminant analysis. Global sensitivity analysis of a mathematical model studies how the output variables of the model react to variations in its inputs. Variance-based methods quantify the part of the variance of the model response due to each input variable and each subset of input variables. The first subject of this thesis is the impact of model uncertainty on the results of a sensitivity analysis. Two particular forms of uncertainty are studied: that due to a change of the reference model, and that due to the use of a simplified model in place of the reference model. A second problem studied in this thesis is that of models with correlated inputs. Indeed, since classical sensitivity indices have no meaningful interpretation in the presence of correlated inputs, we propose a multidimensional approach consisting in expressing the sensitivity of the model output to groups of correlated variables. Applications in the field of nuclear engineering illustrate this work. Generalized discriminant analysis consists in classifying the individuals of a test sample into groups, using information contained in a training sample, when the two samples do not come from the same population. This work extends existing methods in a Gaussian context to the case of binary data. An application in public health illustrates the utility of the generalized discrimination models thus defined. (author)
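
    The variance-based indices mentioned above can be sketched with a brute-force Monte-Carlo estimate of a first-order Sobol index, Var(E[Y|X1])/Var(Y); the toy model and sample sizes here are illustrative, not from the thesis:

    ```python
    import random

    def variance(values):
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / len(values)

    def first_order_index(model, sample_x1, sample_x2, n_outer=200, n_inner=200):
        """Brute-force estimate of the first-order Sobol index of X1:
        Var(E[Y | X1]) / Var(Y), with X1 and X2 drawn independently.
        The thesis above addresses exactly the case where they are not."""
        cond_means, all_y = [], []
        for _ in range(n_outer):
            x1 = sample_x1()
            ys = [model(x1, sample_x2()) for _ in range(n_inner)]
            cond_means.append(sum(ys) / n_inner)
            all_y.extend(ys)
        return variance(cond_means) / variance(all_y)

    # Y = X1 + 0.1*X2: almost all output variance should be credited to X1.
    rng = random.Random(0)
    s1 = first_order_index(lambda a, b: a + 0.1 * b,
                           lambda: rng.gauss(0, 1),
                           lambda: rng.gauss(0, 1))
    ```

    With correlated inputs this decomposition loses its interpretation, which motivates the grouped, multidimensional indices proposed in the thesis.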

  11. Neuro Emotional Technique for the treatment of trigger point sensitivity in chronic neck pain sufferers: A controlled clinical trial

    OpenAIRE

    Bablis, Peter; Pollard, Henry; Bonello, Rod

    2008-01-01

    Abstract Background Trigger points have been shown to be active in many myofascial pain syndromes. Treatment of trigger point pain and dysfunction may be explained through the mechanisms of central and peripheral paradigms. This study aimed to investigate whether the mind/body treatment of Neuro Emotional Technique (NET) could significantly relieve pain sensitivity of trigger points presenting in a cohort of chronic neck pain sufferers. Methods Sixty participants presenting to a private chiro...

  12. Use of dual-point fluorodeoxyglucose imaging to enhance sensitivity and specificity.

    Science.gov (United States)

    Schillaci, Orazio

    2012-07-01

    studies, dual-time-point PET improved not only the specificity but also the sensitivity in assessing breast, pulmonary, liver, and other tumors because of increased lesion-to-background ratio, as a consequence of FDG washout from the surrounding normal tissues and increasing neoplastic uptake. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. LSHSIM: A Locality Sensitive Hashing based method for multiple-point geostatistics

    Science.gov (United States)

    Moura, Pedro; Laber, Eduardo; Lopes, Hélio; Mesejo, Daniel; Pavanelli, Lucas; Jardim, João; Thiesen, Francisco; Pujol, Gabriel

    2017-10-01

    Reservoir modeling is a very important task that permits the representation of a geological region of interest, so as to generate a considerable number of possible scenarios. Since its inception, many methodologies have been proposed and, in the last two decades, multiple-point geostatistics (MPS) has been the dominant one. This methodology is strongly based on the concept of a training image (TI) and the use of its characteristics, which are called patterns. In this paper, we propose a new MPS method that combines a technique called Locality Sensitive Hashing (LSH), which accelerates the search for patterns similar to a target one, with a Run-Length Encoding (RLE) compression technique that speeds up the calculation of the Hamming similarity. Experiments with both categorical and continuous images show that LSHSIM is computationally efficient and produces good-quality realizations. In particular, for categorical data, the results suggest that LSHSIM is faster than MS-CCSIM, one of the state-of-the-art methods.
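
    To illustrate the two ingredients named in the abstract, here is a toy bit-sampling LSH family for Hamming distance together with a run-length encoder. This is a generic textbook sketch under assumed parameters, not the actual LSHSIM implementation:

    ```python
    import random

    def rle_encode(bits):
        """Run-length encode a binary pattern as [value, length] pairs."""
        runs = []
        for b in bits:
            if runs and runs[-1][0] == b:
                runs[-1][1] += 1
            else:
                runs.append([b, 1])
        return runs

    def hamming_hash_family(dim, n_bits, n_tables=4, seed=0):
        """Bit-sampling LSH for Hamming distance: each hash is a random
        subset of coordinates, so patterns at small Hamming distance
        collide with high probability."""
        rng = random.Random(seed)
        return [rng.sample(range(dim), n_bits) for _ in range(n_tables)]

    def signatures(pattern, family):
        return [tuple(pattern[i] for i in idx) for idx in family]

    pattern_a = [1, 1, 1, 0, 0, 0, 1, 1]
    pattern_b = [1, 1, 1, 0, 0, 0, 1, 0]   # Hamming distance 1 from pattern_a
    family = hamming_hash_family(dim=8, n_bits=3)
    collisions = sum(sa == sb for sa, sb in zip(signatures(pattern_a, family),
                                                signatures(pattern_b, family)))
    ```

    In an MPS setting, hashing the TI patterns this way restricts the candidate search to one bucket, and comparing RLE-compressed patterns run-by-run avoids touching every pixel when computing Hamming similarity.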

  14. Sensitivity of the direct stop pair production analyses in phenomenological MSSM simplified models with the ATLAS detectors

    CERN Document Server

    Snyder, Ian Michael; The ATLAS collaboration

    2018-01-01

    The sensitivity of searches for the direct pair production of stops has often been evaluated in simple SUSY scenarios, where only a limited set of supersymmetric particles takes part in the stop decay. In this talk, interpretations of the analyses requiring zero, one or two leptons in the final state within simple but well-motivated MSSM scenarios will be discussed.

  15. Preliminary sensitivity analyses of corrosion models for BWIP [Basalt Waste Isolation Project] container materials

    International Nuclear Information System (INIS)

    Anantatmula, R.P.

    1984-01-01

    A preliminary sensitivity analysis was performed for the corrosion models developed for Basalt Waste Isolation Project container materials. The models describe, by separate equations, the corrosion behavior of the candidate container materials (low carbon steel and Fe9Cr1Mo) in the various environments expected in the vicinity of the waste package. The present sensitivity analysis yields an uncertainty in total uniform corrosion on the basis of assumed uncertainties in the parameters comprising the corrosion equations. Based on the sample scenario and the preliminary corrosion models, the uncertainties in total uniform corrosion of low carbon steel and Fe9Cr1Mo for the 1000-yr containment period are 20% and 15%, respectively. For containment periods ≥ 1000 yr, the uncertainty in corrosion during the post-closure aqueous periods controls the uncertainty in total uniform corrosion for both low carbon steel and Fe9Cr1Mo. The key parameters controlling the corrosion behavior of candidate container materials are temperature, radiation, groundwater species, etc. Tests are planned in the Basalt Waste Isolation Project containment materials test program to determine in detail the sensitivity of corrosion to these parameters. We also plan to expand the sensitivity analysis to include sensitivity coefficients and other parameters in future studies. 6 refs., 3 figs., 9 tabs
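
    The general idea of propagating an assumed parameter uncertainty to total uniform corrosion can be sketched by Monte-Carlo sampling through a deliberately simplified constant-rate model. The rate, uncertainty, and timescale below are hypothetical, not BWIP values:

    ```python
    import random

    def total_corrosion(rate_um_per_yr, years):
        """Uniform corrosion depth (micrometres) for a constant-rate model."""
        return rate_um_per_yr * years

    def propagate(nominal_rate, rel_uncertainty, years, n=10000, seed=1):
        """Propagate a relative parameter uncertainty to total corrosion by
        Monte-Carlo sampling; returns (mean, relative std) of the result."""
        rng = random.Random(seed)
        draws = [total_corrosion(rng.gauss(nominal_rate,
                                           rel_uncertainty * nominal_rate),
                                 years)
                 for _ in range(n)]
        mean = sum(draws) / n
        std = (sum((d - mean) ** 2 for d in draws) / n) ** 0.5
        return mean, std / mean

    # Hypothetical: 1 um/yr nominal rate with 20% uncertainty over 1000 yr.
    mean_depth, rel_std = propagate(1.0, 0.20, 1000)
    ```

    For a linear model the 20% input uncertainty passes straight through to the output; the interest in the actual analysis lies in the nonlinear, environment-dependent equations, where that is no longer true.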

  16. Evidence of Territoriality and Species Interactions from Spatial Point-Pattern Analyses of Subarctic-Nesting Geese

    Science.gov (United States)

    Reiter, Matthew E.; Andersen, David E.

    2013-01-01

    Quantifying spatial patterns of bird nests and nest fate provides insights into processes influencing a species’ distribution. At Cape Churchill, Manitoba, Canada, recent declines in breeding Eastern Prairie Population Canada geese (Branta canadensis interior) have coincided with increasing populations of nesting lesser snow geese (Chen caerulescens caerulescens) and Ross’s geese (Chen rossii). We conducted a spatial analysis of point patterns using Canada goose nest locations and nest fate, and lesser snow goose nest locations, at two study areas in northern Manitoba with different densities and temporal durations of sympatric nesting Canada and lesser snow geese. Specifically, we assessed (1) whether Canada geese exhibited territoriality and at what scale and nest density; and (2) whether spatial patterns of Canada goose nest fate were associated with the density of nesting lesser snow geese as predicted by the protective-association hypothesis. Between 2001 and 2007, our data suggest that Canada geese were territorial at the scale of nearest neighbors, but were aggregated when considering overall density of conspecifics at slightly broader spatial scales. The spatial distribution of nest fates indicated that lesser snow goose nest proximity and density likely influence Canada goose nest fate. Our analyses of spatial point patterns suggested that continued changes in the distribution and abundance of breeding lesser snow geese on the Hudson Bay Lowlands may have impacts on the reproductive performance of Canada geese, and subsequently the spatial distribution of Canada goose nests. PMID:24312520

  17. Tests of methods and software for set-valued model calibration and sensitivity analyses

    NARCIS (Netherlands)

    Janssen PHM; Sanders R; CWM

    1995-01-01

    Tests are discussed that were carried out on methods and software for calibration by means of 'rotated-random-scanning', and for sensitivity analysis based on 'dominant direction analysis' and 'generalized sensitivity analysis'. These techniques were

  18. Cross-section sensitivity analyses for a Tokamak Experimental Power Reactor

    International Nuclear Information System (INIS)

    Simmons, E.L.; Gerstl, S.A.W.; Dudziak, D.J.

    1977-09-01

    The objectives of this report were (1) to determine the sensitivity of neutronic responses in the preliminary design of the Tokamak Experimental Power Reactor by Argonne National Laboratory, and (2) to develop the use of a coupled neutron-gamma cross-section set in cross-section sensitivity analysis. Response functions such as neutron plus gamma kerma, Mylar dose, copper transmutation, copper dpa, and activation of the toroidal field coil dewar were investigated. Calculations revealed that the responses were most sensitive to the high-energy group cross sections of iron in the innermost regions containing stainless steel. For example, both the neutron heating of the toroidal field coil and the activation of the toroidal field coil dewar show an integral sensitivity of about -5 with respect to the iron total cross sections. Major contributors are the scattering cross sections of iron, with -2.7 and -4.4 for neutron heating and activation, respectively. The effects of changes in gamma cross sections were generally an order of magnitude lower.

  19. Sensitivity analyses of biodiesel thermo-physical properties under diesel engine conditions

    DEFF Research Database (Denmark)

    Cheng, Xinwei; Ng, Hoon Kiat; Gan, Suyin

    2016-01-01

    This reported work investigates the sensitivity of spray and soot development to changes in the thermo-physical properties of coconut and soybean methyl esters, using two-dimensional computational fluid dynamics fuel spray modelling. The choice of test fuels was due to their contrasting s...

  20. Sensitivity Analyses of Alternative Methods for Disposition of High-Level Salt Waste: A Position Statement

    International Nuclear Information System (INIS)

    Harris, S.P.; Tuckfield, R.C.

    1998-01-01

    This position paper provides the approach and detail pertaining to a sensitivity analysis of the effects of the Phase II weighted evaluation criteria weights and utility function values on the total utility scores for each Initial List alternative, due to uncertainty and bias in engineering judgment

  1. High-sensitivity detection of cardiac troponin I with UV LED excitation for use in point-of-care immunoassay

    OpenAIRE

    Rodenko, Olga; Eriksson, Susann; Tidemand-Lichtenberg, Peter; Troldborg, Carl Peder; Fodgaard, Henrik; van Os, Sylvana; Pedersen, Christian

    2017-01-01

    High-sensitivity cardiac troponin assay development enables determination of biological variation in healthy populations and more accurate interpretation of clinical results, and points towards earlier diagnosis and rule-out of acute myocardial infarction. In this paper, we report on preliminary tests of an immunoassay analyzer employing optimized LED excitation to measure a standard troponin I assay and a novel research high-sensitivity troponin I assay. The limit of detection is improved by fac...

  2. Genome-wide functional genomic and transcriptomic analyses for genes regulating sensitivity to vorinostat.

    Science.gov (United States)

    Falkenberg, Katrina J; Gould, Cathryn M; Johnstone, Ricky W; Simpson, Kaylene J

    2014-01-01

    Identification of mechanisms of resistance to histone deacetylase inhibitors, such as vorinostat, is important in order to utilise these anticancer compounds more efficiently in the clinic. Here, we present a dataset containing multiple tiers of stringent siRNA screening for genes that when knocked down conferred sensitivity to vorinostat-induced cell death. We also present data from a miRNA overexpression screen for miRNAs contributing to vorinostat sensitivity. Furthermore, we provide transcriptomic analysis using massively parallel sequencing upon knockdown of 14 validated vorinostat-resistance genes. These datasets are suitable for analysis of genes and miRNAs involved in cell death in the presence and absence of vorinostat as well as computational biology approaches to identify gene regulatory networks.

  3. Pre-waste-emplacement ground-water travel time sensitivity and uncertainty analyses for Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Kaplan, P.G.

    1993-01-01

    Yucca Mountain, Nevada is a potential site for a high-level radioactive-waste repository. Uncertainty and sensitivity analyses were performed to estimate critical factors in the performance of the site with respect to a criterion expressed in terms of pre-waste-emplacement ground-water travel time. The degree to which the analytical model fails to meet the criterion is sensitive to the estimate of fracture porosity in the upper welded unit of the problem domain. Fracture porosity is derived from a number of more fundamental measurements, including fracture frequency, fracture orientation, and the moisture-retention characteristic inferred for the fracture domain

  4. Response surfaces and sensitivity analyses for an environmental model of dose calculations

    Energy Technology Data Exchange (ETDEWEB)

    Iooss, Bertrand [CEA Cadarache, DEN/DER/SESI/LCFR, 13108 Saint Paul lez Durance, Cedex (France)]. E-mail: bertrand.iooss@cea.fr; Van Dorpe, Francois [CEA Cadarache, DEN/DTN/SMTM/LMTE, 13108 Saint Paul lez Durance, Cedex (France); Devictor, Nicolas [CEA Cadarache, DEN/DER/SESI/LCFR, 13108 Saint Paul lez Durance, Cedex (France)

    2006-10-15

    A parametric sensitivity analysis is carried out on GASCON, a radiological impact software package describing radionuclide transfer to man following a chronic gaseous release from a nuclear facility. An effective dose received by age group can thus be calculated for a specific radionuclide and release duration. In this study, we consider 18 output variables, each depending on approximately 50 uncertain input parameters. First, the generation of 1000 Monte-Carlo simulations allows us to calculate correlation coefficients between input parameters and output variables, which give a first overview of the important factors. Response surfaces are then constructed in polynomial form and used to predict system responses at reduced computational cost; these response surfaces are very useful for global sensitivity analysis, where thousands of runs are required. Using the response surfaces, we calculate the total sensitivity indices of Sobol by the Monte-Carlo method. We demonstrate the application of this method to one study site and one reference group near the Cadarache nuclear research centre (France), for two radionuclides: iodine-129 and uranium-238. It is thus shown that the most influential parameters are all related to the food chain of the goat's milk, in decreasing order of importance: dose coefficient 'effective ingestion', goat's milk ration of the individuals of the reference group, grass ration of the goat, dry deposition velocity and transfer factor to the goat's milk.
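As an illustration of the Sobol step described above, total-effect indices can be estimated by Monte Carlo on a cheap surrogate. Everything below — the polynomial surface, its coefficients, and the uniform input ranges — is an invented stand-in for a fitted response surface, not the GASCON model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical polynomial response surface standing in for a fitted model.
def surface(x):
    x1, x2, x3 = x[:, 0], x[:, 1], x[:, 2]
    return 4.0 * x1 + 2.0 * x2 + 0.5 * x3 + 3.0 * x1 * x2

N = 100_000
A = rng.uniform(0.0, 1.0, size=(N, 3))  # first independent sample matrix
B = rng.uniform(0.0, 1.0, size=(N, 3))  # second independent sample matrix

f_A = surface(A)
var_y = np.var(surface(np.vstack([A, B])))

total = []
for i in range(3):
    AB_i = A.copy()
    AB_i[:, i] = B[:, i]  # resample only input i
    # Jansen estimator of the total-effect Sobol index S_Ti
    total.append(np.mean((f_A - surface(AB_i)) ** 2) / (2.0 * var_y))

print([round(t, 2) for t in total])
```

For this toy surrogate the indices can be checked analytically: x1 dominates through its main effect and its interaction with x2, while x3 is nearly inert.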

  5. Sensitivity and uncertainty analyses applied to criticality safety validation, methods development. Volume 1

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Hopper, C.M.; Childs, R.L.; Parks, C.V.

    1999-01-01

    This report presents the application of sensitivity and uncertainty (S/U) analysis methodologies to the code/data validation tasks of a criticality safety computational study. Sensitivity and uncertainty analysis methods were first developed for application to fast reactor studies in the 1970s. This work has revitalized and updated the available S/U computational capabilities such that they can be used as prototypic modules of the SCALE code system, which contains criticality analysis tools currently used by criticality safety practitioners. After complete development, simplified tools are expected to be released for general use. The S/U methods that are presented in this volume are designed to provide a formal means of establishing the range (or area) of applicability for criticality safety data validation studies. The development of parameters that are analogous to the standard trending parameters forms the key to the technique. These parameters are the D parameters, which represent the differences by group of sensitivity profiles, and the ck parameters, which are the correlation coefficients for the calculational uncertainties between systems; each set of parameters gives information relative to the similarity between pairs of selected systems, e.g., a critical experiment and a specific real-world system (the application)
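The ck parameter mentioned above is, in essence, a covariance-weighted correlation between the sensitivity profiles of two systems. A minimal sketch under that interpretation follows; the group-wise sensitivities and the diagonal covariance matrix are invented toy values, not SCALE data:

```python
import numpy as np

# Hypothetical group-wise sensitivity profiles (dk/k per unit dsigma/sigma)
# for a critical experiment and an application system.
s_exp = np.array([0.10, 0.05, 0.02, 0.01])
s_app = np.array([0.09, 0.06, 0.03, 0.01])
C = np.diag([0.04, 0.02, 0.01, 0.01])  # toy relative covariance matrix

def ck(s1, s2, cov):
    """Correlation of calculational uncertainties between two systems:
    ck = s1' C s2 / sqrt((s1' C s1)(s2' C s2)). Values near 1 indicate
    the two systems share the same dominant uncertainty sources."""
    num = s1 @ cov @ s2
    return num / np.sqrt((s1 @ cov @ s1) * (s2 @ cov @ s2))

print(round(ck(s_exp, s_app, C), 3))
```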

  6. An approach of sensitivity and uncertainty analyses methods installation in a safety calculation

    International Nuclear Information System (INIS)

    Pepin, G.; Sallaberry, C.

    2003-01-01

    Simulation of migration in deep geological formations leads to solving convection-diffusion equations in porous media, associated with the computation of hydrogeologic flow. Different time scales (simulation over 1 million years), spatial scales, and contrasts of properties in the calculation domain are taken into account. This document deals more particularly with uncertainties in the input data of the model. These uncertainties are taken into account in the overall analysis through uncertainty and sensitivity analyses. ANDRA (the French national agency for the management of radioactive wastes) carries out studies on the treatment of input data uncertainties and their propagation through the safety models, in order to quantify the influence of input data uncertainties on the various selected safety indicators. ANDRA's approach initially consists of two studies undertaken in parallel: the first is an international review of the choices made by ANDRA's foreign counterparts in carrying out their uncertainty and sensitivity analyses; the second is a review of the various methods that can be used for sensitivity and uncertainty analysis in the context of ANDRA's safety calculations. These studies are then supplemented by a comparison of the principal methods on a test case which gathers all the specific constraints (physical, numerical and data-processing) of the problem studied by ANDRA

  7. Sensitivity study of micro four-point probe measurements on small samples

    DEFF Research Database (Denmark)

    Wang, Fei; Petersen, Dirch Hjorth; Hansen, Torben Mikael

    2010-01-01

    probes than near the outer ones. The sensitive area is defined for infinite film, circular, square, and rectangular test pads, and convergent sensitivities are observed for small samples. The simulations show that the Hall sheet resistance RH in micro Hall measurements with position error suppression...
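For context, the textbook relation behind collinear, equidistant four-point probe measurements on a large thin sample converts the measured V/I ratio into sheet resistance with the geometric factor π/ln 2. The snippet below is that generic conversion only, not the paper's micro-probe sensitivity model or its position-error-suppressed Hall analysis:

```python
import math

def sheet_resistance(voltage, current):
    """Sheet resistance (ohm/sq) of an infinite thin film from a collinear,
    equidistant four-point probe: current forced through the outer pins,
    voltage sensed across the inner pins; factor pi/ln 2 ~ 4.532."""
    return (math.pi / math.log(2)) * voltage / current

# Example: 1 mA forced, 10 mV sensed -> V/I = 10 ohm -> ~45.32 ohm/sq.
print(round(sheet_resistance(10e-3, 1e-3), 2))
```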

  8. Chromatographic air analyser microsystem for the selective and sensitive detection of atmospheric pollutants

    International Nuclear Information System (INIS)

    Sanchez, Jean-Baptiste; Lahlou, Houda; Mohsen, Yehya; Berger, Franck; Vilanova, Xavier; Correig, Xavier

    2011-01-01

    The development of industry and automotive traffic produces Volatile Organic Compounds (VOCs) whose toxicity can seriously affect human health and the environment. The level of these contaminants in air must be kept as low as possible. In this context, there is a need for in situ systems that can selectively monitor the concentration of these compounds. The aim of this study is to demonstrate the efficiency of a system built from a pre-concentrator, a chromatographic micro-column and a tin oxide-based gas sensor for the selective and sensitive detection of atmospheric pollutants. In particular, this study is focused on the selective detection of benzene and 1,3-butadiene.

  9. Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event

    Directory of Open Access Journals (Sweden)

    Gerhard Strydom

    2013-01-01

    The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
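Tolerance statements of the "95% coverage with 95% confidence" kind quoted above are conventionally grounded in Wilks' order-statistic formula. The sketch below derives the classical minimum run count and applies it to an invented peak-temperature sample (a normal toy distribution, not PEBBED-THERMIX output):

```python
import random

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n such that the sample maximum bounds the `coverage`
    quantile with probability `confidence` (first-order, one-sided Wilks):
    find the least n with 1 - coverage**n >= confidence."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

n_runs = wilks_sample_size()
print(n_runs)  # classic first-order one-sided result: 59 runs

# Toy upper tolerance limit from an invented peak-temperature sample.
random.seed(1)
runs = [1600.0 + random.gauss(0.0, 50.0) for _ in range(n_runs)]
upper_95_95 = max(runs)  # one-sided 95%/95% upper bound estimate
```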

  10. Total System Performance Assessment Sensitivity Analyses for Final Nuclear Regulatory Commission Regulations

    International Nuclear Information System (INIS)

    Bechtel SAIC Company

    2001-01-01

    This Letter Report presents the results of supplemental evaluations and analyses designed to assess long-term performance of the potential repository at Yucca Mountain. The evaluations were developed in the context of the Nuclear Regulatory Commission (NRC) final public regulation, or rule, 10 CFR Part 63 (66 FR 55732 [DIRS 156671]), which was issued on November 2, 2001. This Letter Report addresses the issues identified in the Department of Energy (DOE) technical direction letter dated October 2, 2001 (Adams 2001 [DIRS 156708]). The main objective of this Letter Report is to evaluate performance of the potential Yucca Mountain repository using assumptions consistent with performance-assessment-related provisions of 10 CFR Part 63. The incorporation of the final Environmental Protection Agency (EPA) standard, 40 CFR Part 197 (66 FR 32074 [DIRS 155216]), and the analysis of the effect of the 40 CFR Part 197 EPA final rule on long-term repository performance are presented in the Total System Performance Assessment--Analyses for Disposal of Commercial and DOE Waste Inventories at Yucca Mountain--Input to Final Environmental Impact Statement and Site Suitability Evaluation (BSC 2001 [DIRS 156460]), referred to hereafter as the FEIS/SSE Letter Report. The Total System Performance Assessment (TSPA) analyses conducted and documented prior to promulgation of the NRC final rule 10 CFR Part 63 (66 FR 55732 [DIRS 156671]), were based on the NRC proposed rule (64 FR 8640 [DIRS 101680]). Slight differences exist between the NRC's proposed and final rules which were not within the scope of the FEIS/SSE Letter Report (BSC 2001 [DIRS 156460]), the Preliminary Site Suitability Evaluation (PSSE) (DOE 2001 [DIRS 155743]), and supporting documents for these reports. These differences include (1) the possible treatment of ''unlikely'' features, events and processes (FEPs) in evaluation of both the groundwater protection standard and the human-intrusion scenario of the individual

  11. UAV-based detection and spatial analyses of periglacial landforms on Demay Point (King George Island, South Shetland Islands, Antarctica)

    Science.gov (United States)

    Dąbski, Maciej; Zmarz, Anna; Pabjanek, Piotr; Korczak-Abshire, Małgorzata; Karsznia, Izabela; Chwedorzewska, Katarzyna J.

    2017-08-01

    High-resolution aerial images allow detailed analyses of periglacial landforms, which is of particular importance in light of climate change and resulting changes in active layer thickness. The aim of this study is to show the possibilities of using UAV-based photography to perform spatial analysis of periglacial landforms on the Demay Point peninsula, King George Island, and hence to supplement previous geomorphological studies of the South Shetland Islands. Photogrammetric flights were performed using a PW-ZOOM fixed-wing unmanned aircraft vehicle. Digital elevation models (DEM) and maps of slope and contour lines were prepared in ESRI ArcGIS 10.3 with the Spatial Analyst extension, and three-dimensional visualizations in ESRI ArcScene 10.3 software. Careful interpretation of the orthophoto and DEM allowed us to vectorize polygons of landforms, such as (i) solifluction landforms (solifluction sheets, tongues, and lobes); (ii) scarps, taluses, and a protalus rampart; (iii) patterned ground (hummocks, sorted circles, stripes, nets and labyrinths, and nonsorted nets and stripes); (iv) coastal landforms (cliffs and beaches); (v) landslides and mud flows; and (vi) stone fields and bedrock outcrops. We conclude that geomorphological studies based on commonly accessible aerial and satellite images can underestimate the spatial extent of periglacial landforms and result in incomplete inventories. The PW-ZOOM UAV is well suited to gather detailed geomorphological data and can be used in spatial analysis of periglacial landforms in the Western Antarctic Peninsula region.

  12. Sensitivity of a soil-plant-atmosphere model to changes in air temperature, dew point temperature, and solar radiation

    Energy Technology Data Exchange (ETDEWEB)

    Luxmoore, R.J. (Oak Ridge National Lab.,TN); Stolzy, J.L.; Holdeman, J.T.

    1981-01-01

    Air temperature, dew point temperature and solar radiation were independently varied in an hourly soil-plant-atmosphere model in a sensitivity analysis of these parameters. Results suggested that evapotranspiration in eastern Tennessee is limited more by meteorological conditions that determine the vapor-pressure gradient than by the necessary energy to vaporize water within foliage. Transpiration and soil water drainage were very sensitive to changes in air and dew point temperature and to solar radiation under low atmospheric vapor-pressure deficit conditions associated with reduced air temperature. Leaf water potential and stomatal conductance were reduced under conditions having high evapotranspiration. Representative air and dew point temperature input data for a particular application are necessary for satisfactory results, whereas irradiation may be less well characterized for applications with high atmospheric vapor-pressure deficit. The effects of a general rise in atmospheric temperature on forest water budgets are discussed.
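A one-at-a-time parameter sweep like the one described can be sketched as follows. The Magnus saturation-vapour-pressure formula is standard, but the evapotranspiration proxy and its coefficients are invented for illustration and are not the authors' hourly soil-plant-atmosphere model:

```python
import math

def sat_vapor_pressure(t_c):
    """Saturation vapor pressure (kPa), Magnus approximation."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def toy_et(t_air, t_dew, solar):
    """Hypothetical evapotranspiration proxy: a vapor-pressure-deficit
    term plus a radiation term (illustrative coefficients only)."""
    vpd = sat_vapor_pressure(t_air) - sat_vapor_pressure(t_dew)
    return 0.8 * vpd + 0.002 * solar

base = dict(t_air=25.0, t_dew=18.0, solar=600.0)  # deg C, deg C, W/m^2

# One-at-a-time sensitivity: bump each input by +10%, hold the rest fixed.
for name in base:
    bumped = dict(base)
    bumped[name] *= 1.10
    rel_change = (toy_et(**bumped) - toy_et(**base)) / toy_et(**base)
    print(f"{name}: {100 * rel_change:+.1f}%")
```

Note the sign structure this exposes: raising air temperature widens the vapor-pressure deficit and increases the proxy, while raising dew point narrows it and decreases the proxy, consistent with the abstract's emphasis on the vapor-pressure gradient.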

  13. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Florida Panhandle: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for wading birds, shorebirds, raptors, diving birds, and gulls and terns in for the Florida Panhandle....

  14. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Southern California: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for nesting and roosting gulls, terns, seabirds, shorebirds, and T/E species in Southern California. Vector...

  15. ESI-HI09 Naliikakani Point, Island of Hawaii, Hawaii 2001 (Environmental Sensitivity Index Map)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Environmental Sensitivity Index (ESI) maps are an integral component in oil-spill contingency planning and assessment. They serve as a source of information in the...

  16. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Upper Coast of Texas: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for shorebirds, diving birds, raptors, waterfowl, wading birds, terns, and gulls for the Upper Coast of...

  17. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: North Carolina: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for wading birds, shorebirds, raptors, diving birds, passerine birds, and gulls and terns in North...

  18. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: South Florida: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for diving birds, gulls, terns, passerine birds, pelagic birds, raptors, shorebirds, wading birds, and...

  19. Surface-sensitive conductivity measurement using a micro multi-point probe approach

    DEFF Research Database (Denmark)

    Perkins, Edward; Barreto, Lucas; Wells, Justin

    2013-01-01

    An instrument for microscale electrical transport measurements in ultra-high vacuum is presented. The setup is constructed around collinear lithographically-created multi-point probes with a contact spacing down to 500 nm. Most commonly, twelve-point probes are used. These probes are approached...... measurements with an equidistant four-point probe for a wide range of contact spacings. In this way, it is possible to distinguish between bulk-like and surface-like conduction. The paper describes the design of the instrument and the approach to data and error analysis. Application examples are given...

  20. Job Demands, Burnout, and Teamwork in Healthcare Professionals Working in a General Hospital that Was Analysed At Two Points in Time

    Directory of Open Access Journals (Sweden)

    Dragan Mijakoski

    2018-04-01

    CONCLUSION: This longitudinal study revealed significantly higher mean values of emotional exhaustion and depersonalization in 2014, which could be explained by the significant increase in job demands between the analysed points in time.

  1. SURE: a system of computer codes for performing sensitivity/uncertainty analyses with the RELAP code

    International Nuclear Information System (INIS)

    Bjerke, M.A.

    1983-02-01

    A package of computer codes has been developed to perform a nonlinear uncertainty analysis on transient thermal-hydraulic systems which are modeled with the RELAP computer code. The package was applied to uncertainty analyses of experiments in the PWR-BDHT Separate Effects Program at Oak Ridge National Laboratory. The use of FORTRAN programs running interactively on the PDP-10 computer has made the system very easy to use and provided great flexibility in the choice of processing paths. Several experiments simulating a loss-of-coolant accident in a nuclear reactor have been successfully analyzed. It has been shown that the system can be automated easily to further simplify its use and that the conversion of the entire system to a base code other than RELAP is possible

  2. Aleutian Islands Coastal Resources Inventory and Environmental Sensitivity Maps: M_MAMPT (Marine Mammal Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains biological resource data for seals and sea lions in the Aleutian Islands, Alaska. Points in this data set represent locations of haulout and...

  3. Cape Point GAW Station Rn-222 detector: factors affecting sensitivity and accuracy

    CSIR Research Space (South Africa)

    Brunke, EG

    2002-05-01

    Full Text Available Specific factors of a baseline Rn-222 detector installed at Cape Point, South Africa, were studied with the aim of improving its performance. Direct sunlight caused air turbulence within the instrument, resulting in 13.6% variability...

  4. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Central California: ALERTS (Vulnerable Resource Location Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains vector points representing locations in Central California that should be highlighted for protection due to the presence of certain highly...

  5. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Hudson River: STAGING (Staging Site Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data for staging sites along the Hudson River. Vector points in this data set represent locations of possible staging areas...

  6. Aleutian Islands Coastal Resources Inventory and Environmental Sensitivity Maps: VOLCANOS (Volcano Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains point locations of active volcanoes as compiled by Motyka et al., 1993. Eighty-nine volcanoes with eruptive phases in the Quaternary are...

  7. Reduced plantar sole sensitivity facilitates early adaptation to a visual rotation pointing task when standing upright

    Directory of Open Access Journals (Sweden)

    Maxime Billot

    2016-09-01

    Humans are capable of pointing to a target with accuracy. However, when vision is distorted through a visual rotation or mirror-reversed vision, performance is initially degraded and thereafter improves with practice. There are suggestions that this gradual improvement results from a sensorimotor recalibration involving initial gating of the somatosensory information from the pointing hand. In the present experiment, we examined whether this process interfered with balance control by asking participants to point to targets with a visual rotation from a standing posture. This duality in processing sensory information (i.e., gating sensory signals from the hand while processing those arising from the control of balance) could generate initial interference leading to a degraded pointing performance. We hypothesized that if this is the case, the attenuation of plantar sole somatosensory information through cooling could reduce the sensorimotor interference and facilitate the early adaptation (i.e., improvement) in the pointing task. Results supported this hypothesis. These observations suggest that processing sensory information for balance control interferes with the sensorimotor recalibration process imposed by a pointing task when vision is rotated.

  8. Sensitivity analyses of woody species exposed to air pollution based on ecophysiological measurements.

    Science.gov (United States)

    Wen, Dazhi; Kuang, Yuanwen; Zhou, Guoyi

    2004-01-01

    Air pollution has been a major problem in the Pearl River Delta of south China, particularly during the last two decades. Emissions of air pollutants from industries have already led to damage in natural communities and environments across a wide range of the Delta area. Leaf parameters such as chlorophyll fluorescence, leaf area (LA), dry weight (DW) and leaf mass per area (LMA) have been used as specific indexes of environmental stress. This study aims to determine in situ whether the daily variation of chlorophyll fluorescence and other ecophysiological parameters in five seedlings of three woody species, Ilex rotunda, Ficus microcarpa and Machilus chinensis, could be used alone or in combination with other measurements as sensitivity indexes for diagnosis under air pollution stress and, hence, to choose suitable tree species for urban afforestation in the Delta area. Five seedlings of each species were transplanted into pot containers after acclimation under shading conditions. Chlorophyll fluorescence measurements were made in situ with a portable fluorometer (OS-30, Opti-Sciences, U.S.A.). Ten random samples of leaves were picked from each species for LA measurements by area-meter (CI-203, CID, Inc., U.S.A.). DW was determined after the leaf samples were dried to a constant weight at 65 degrees C. LMA was calculated as the ratio DW/LA. Leaf N content was analyzed according to the Kjeldahl method, and the extraction of pigments was carried out according to Lin et al. The daily mean Fv/Fm (Fv is the variable fluorescence and Fm is the maximum fluorescence) analysis showed that Ilex rotunda and Ficus microcarpa were more highly resistant to pollution stress, followed by Machilus chinensis, implying that the efficiency of photosystem II in I. rotunda was less affected by air pollutants than in the other two species. Little difference in the daily change of Fv/Fm in I. rotunda between the polluted and the clean site was also observed. However, a relatively large

  9. High-throughput, Highly Sensitive Analyses of Bacterial Morphogenesis Using Ultra Performance Liquid Chromatography*

    Science.gov (United States)

    Desmarais, Samantha M.; Tropini, Carolina; Miguel, Amanda; Cava, Felipe; Monds, Russell D.; de Pedro, Miguel A.; Huang, Kerwyn Casey

    2015-01-01

    The bacterial cell wall is a network of glycan strands cross-linked by short peptides (peptidoglycan); it is responsible for the mechanical integrity of the cell and shape determination. Liquid chromatography can be used to measure the abundance of the muropeptide subunits composing the cell wall. Characteristics such as the degree of cross-linking and average glycan strand length are known to vary across species. However, a systematic comparison among strains of a given species has yet to be undertaken, making it difficult to assess the origins of variability in peptidoglycan composition. We present a protocol for muropeptide analysis using ultra performance liquid chromatography (UPLC) and demonstrate that UPLC achieves resolution comparable with that of HPLC while requiring orders of magnitude less injection volume and a fraction of the elution time. We also developed a software platform to automate the identification and quantification of chromatographic peaks, which we demonstrate has improved accuracy relative to other software. This combined experimental and computational methodology revealed that peptidoglycan composition was approximately maintained across strains from three Gram-negative species despite taxonomical and morphological differences. Peptidoglycan composition and density were maintained after we systematically altered cell size in Escherichia coli using the antibiotic A22, indicating that cell shape is largely decoupled from the biochemistry of peptidoglycan synthesis. High-throughput, sensitive UPLC combined with our automated software for chromatographic analysis will accelerate the discovery of peptidoglycan composition and the molecular mechanisms of cell wall structure determination. PMID:26468288

  10. Comparison of a point-of-care analyser for the determination of HbA1c with HPLC method

    Directory of Open Access Journals (Sweden)

    D.A. Grant

    2017-08-01

    Aims: As the use of Point of Care Testing (POCT) devices for measurement of glycated haemoglobin (HbA1c) increases, it is imperative to determine how their performance compares to laboratory methods. This study compared the performance of the automated Quo-Test POCT device (EKF Diagnostics), which uses boronate fluorescence quenching technology, with a laboratory-based High Performance Liquid Chromatography (HPLC) method (Bio-Rad D10) for measurement of HbA1c. Methods: Whole blood EDTA samples from subjects (n=100) with and without diabetes were assayed using a Bio-Rad D10 and a Quo-Test analyser. Intra-assay variation was determined by measuring six HbA1c samples in triplicate, and inter-assay variation was determined by assaying four samples on 4 days. Stability was determined by assaying three samples stored at −20 °C for 14 and 28 days post collection. Results: Median (IQR) HbA1c was 60 (44.0–71.2) mmol/mol (7.6 (6.17–8.66)%) and 62 (45.0–69.0) mmol/mol (7.8 (6.27–8.46)%) for the D10 and Quo-Test, respectively, with very good agreement (R2=0.969, P<0.0001). Mean (range) intra- and inter-assay variation was 1.2% (0.0–2.7%) and 1.6% (0.0–2.7%) for the D10, and 3.5% (0.0–6.7%) and 2.7% (0.7–5.1%) for the Quo-Test. Mean change in HbA1c after 28 days storage at −20 °C was −0.7% and +0.3% for the D10 and Quo-Test respectively. Compared to the D10, the Quo-Test showed 98% agreement for diagnosis of glucose intolerance (IGT) and T2DM and 100% for diagnosis of T2DM. Conclusion: Good agreement between the D10 and Quo-Test was seen across a wide HbA1c range. The Quo-Test POCT device provided similar performance to a laboratory-based HPLC method. Keywords: Point of care testing, HbA1c measurement
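The intra-/inter-assay variation and agreement figures quoted above reduce to a coefficient of variation over replicates and a coefficient of determination over paired method results. A sketch with invented replicate and paired values (not the study's data):

```python
import statistics

def cv_percent(values):
    """Coefficient of variation (%) for replicate measurements."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def r_squared(x, y):
    """Coefficient of determination for a paired method comparison."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Invented triplicate HbA1c readings (mmol/mol) and paired method results.
replicates = [60.1, 59.4, 60.5]
d10 = [44.0, 60.0, 71.2, 85.0]
quo = [45.0, 62.0, 69.0, 86.5]

print(round(cv_percent(replicates), 2))  # intra-assay CV, %
print(round(r_squared(d10, quo), 3))     # agreement between methods
```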

  11. IASI's sensitivity to near-surface carbon monoxide (CO): Theoretical analyses and retrievals on test cases

    Science.gov (United States)

    Bauduin, Sophie; Clarisse, Lieven; Theunissen, Michael; George, Maya; Hurtmans, Daniel; Clerbaux, Cathy; Coheur, Pierre-François

    2017-03-01

    Separating concentrations of carbon monoxide (CO) in the boundary layer from the rest of the atmosphere with nadir satellite measurements is of particular importance to differentiate emission from transport. Although thermal infrared (TIR) satellite sounders are considered to have limited sensitivity to the composition of the near-surface atmosphere, previous studies show that they can provide information on CO close to the ground in case of high thermal contrast. In this work we investigate the capability of IASI (Infrared Atmospheric Sounding Interferometer) to retrieve near-surface CO concentrations, and we quantitatively assess the influence of thermal contrast on such retrievals. We present a 3-part analysis, which relies on both theoretical forward simulations and retrievals on real data, performed for a large range of negative and positive thermal contrast situations. First, we derive theoretically the IASI detection threshold of CO enhancement in the boundary layer, and we assess its dependence on thermal contrast. Then, using the optimal estimation formalism, we quantify the role of thermal contrast on the error budget and information content of near-surface CO retrievals. We demonstrate that, contrary to what is usually accepted, large negative thermal contrast values (ground cooler than air) lead to a better decorrelation between CO concentrations in the low and the high troposphere than large positive thermal contrast (ground warmer than the air). In the last part of the paper we use Mexico City and Barrow as test cases to contrast our theoretical predictions with real retrievals, and to assess the accuracy of IASI surface CO retrievals through comparisons to ground-based in-situ measurements.
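
    The information-content argument can be made concrete with a toy single-element optimal-estimation calculation (a scalar sketch for illustration, not the authors' retrieval code): the averaging kernel A approaches 1 as the measurement Jacobian k, which grows with |thermal contrast|, strengthens relative to the noise and prior uncertainties.

    ```python
    def averaging_kernel(k, noise_sd, prior_sd):
        """Scalar optimal-estimation averaging kernel:
        A = (k^2/s_e^2) / (k^2/s_e^2 + 1/s_a^2).
        For a one-element state vector, A equals the degrees of
        freedom for signal (DOFS) of the retrieval."""
        gain = (k / noise_sd) ** 2
        return gain / (gain + 1.0 / prior_sd ** 2)

    # A stronger Jacobian (e.g. larger |thermal contrast|) pulls A towards 1,
    # i.e. the retrieval is driven by the measurement rather than the prior.
    weak = averaging_kernel(k=0.2, noise_sd=1.0, prior_sd=1.0)
    strong = averaging_kernel(k=2.0, noise_sd=1.0, prior_sd=1.0)
    ```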

  12. Analyse of relationships between freezing point and selected indicators of udder health state among cow, goat and sheep milk

    Directory of Open Access Journals (Sweden)

    Oto Hanuš

    2009-01-01

    Full Text Available Milk freezing point (MFP) is an important quality indicator. The aim was to analyse the relationships of MFP to selected udder health milk indicators (MIs) by comparison between cows (reference), goats and sheep. Bulk milk samples came from 3 herds of Czech Fleckvieh (B, n 93) and 1 goat herd and sheep flock (White short-haired, W, n 60; Tsigai, C, n 60). Animal nutrition was performed under the typical country conditions. The MIs investigated were: DM, dry matter; SNF, solid non fat; L, lactose (all in %); SCC, somatic cell count (10³ ml⁻¹); EC, electrical conductivity (mS cm⁻¹); MFP (°C); Na and K (in mg kg⁻¹). W MFP was −0.5544 ± 0.0293, B −0.5221 ± 0.0043 and C −0.6048 ± 0.0691 °C. The B MFP was related to L (−0.36; P < 0.01), W was not related to L (−0.07; P > 0.05) and C was related to L (0.40; P < 0.01). These facts could be explained by the worse SCC geometric averages for the W (3,646 10³ ml⁻¹) and C (560 10³ ml⁻¹) milk as compared to B (159 10³ ml⁻¹). Only 0.5 and 10.5% of variations in MFP were explainable by variations in DM and SNF in B, 32.7 and 12.8% in W, but already 49.4 and 45.0% in C. Higher C values were caused by high MFP variability, 11.8% (C) versus 0.8% (B). It should be possible to derive more reliable MFP qualitative limits for more efficient monitoring of milk quality problems in B, W and C.
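
    The "% of variations explainable" figures above are coefficients of determination, i.e. squared Pearson correlations. A minimal sketch (the paired data below are invented for illustration, not the study's):

    ```python
    import math

    def pearson_r(x, y):
        """Pearson correlation coefficient between two equal-length series."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical paired observations: lactose (%) vs. freezing point (°C)
    lactose = [4.6, 4.7, 4.8, 4.9, 5.0]
    mfp = [-0.520, -0.523, -0.522, -0.527, -0.529]
    r = pearson_r(lactose, mfp)
    explained_pct = 100.0 * r ** 2  # share of MFP variation "explained" by lactose
    ```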

  13. Comparison of a point-of-care analyser for the determination of HbA1c with HPLC method.

    Science.gov (United States)

    Grant, D A; Dunseath, G J; Churm, R; Luzio, S D

    2017-08-01

    As the use of Point of Care Testing (POCT) devices for measurement of glycated haemoglobin (HbA1c) increases, it is imperative to determine how their performance compares to laboratory methods. This study compared the performance of the automated Quo-Test POCT device (EKF Diagnostics), which uses boronate fluorescence quenching technology, with a laboratory based High Performance Liquid Chromatography (HPLC) method (Biorad D10) for measurement of HbA1c. Whole blood EDTA samples from subjects (n=100) with and without diabetes were assayed using a BioRad D10 and a Quo-Test analyser. Intra-assay variation was determined by measuring six HbA1c samples in triplicate and inter-assay variation was determined by assaying four samples on 4 days. Stability was determined by assaying three samples stored at -20 °C for 14 and 28 days post collection. Median (IQR) HbA1c was 60 (44.0-71.2) mmol/mol (7.6 (6.17-8.66) %) and 62 (45.0-69.0) mmol/mol (7.8 (6.27-8.46) %) for D10 and Quo-Test, respectively, with very good agreement (R2=0.969, P<0.0001). Compared to the D10, Quo-Test showed 98% agreement for diagnosis of glucose intolerance (IGT and T2DM) and 100% for diagnosis of T2DM. Good agreement between the D10 and Quo-Test was seen across a wide HbA1c range. The Quo-Test POCT device provided similar performance to a laboratory based HPLC method.

  14. Point-of-Care Diagnostic Device for Traumatic Pneumothorax: Low Sensitivity of the Unblinded PneumoScan™

    Directory of Open Access Journals (Sweden)

    M. Rehfeldt

    2018-01-01

    Full Text Available Background. Traumatic pneumothorax (PTX) is a potentially life-threatening injury. It requires fast and accurate diagnosis and treatment, but diagnostic tools are limited. A new point-of-care device (PneumoScan) based on micropower impulse radar (MIR) promises to diagnose a PTX within seconds. In this study, we compare standard diagnostics with PneumoScan during shock-trauma-room management. Patients and Methods. Patients with blunt or penetrating chest trauma were consecutively included in the study. All patients were examined by clinical examination with auscultation (CE) and supine chest radiography (CXR). In addition, PneumoScan readings and a thoracic ultrasound scan (US) were performed. Computed tomography (CT) served as the gold standard. Results. CT scan revealed PTX in 11 patients. PneumoScan detected two PTX correctly but missed nine. 15 false-positive results were found by PneumoScan, leading to a sensitivity of 20% and a specificity of 80%. Six PTX were detected through CE (sensitivity: 54.5%). CXR detected four (sensitivity: 27.3%) and thoracic US two PTX correctly (sensitivity: 25%). Conclusion. The unblinded PneumoScan prototype did not confirm the promising results of previous studies. The examined standard diagnostics and thoracic US showed rather weak sensitivity as well. Until now, there is no appropriate point-of-care tool to rule out PTX.
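
    The sensitivity and specificity figures in this record follow the standard 2×2 definitions against a gold standard; a minimal sketch (the counts in the example are hypothetical, not the study's):

    ```python
    def sensitivity(true_pos, false_neg):
        """Fraction of actual positives the test detects: TP / (TP + FN)."""
        return true_pos / (true_pos + false_neg)

    def specificity(true_neg, false_pos):
        """Fraction of actual negatives the test clears: TN / (TN + FP)."""
        return true_neg / (true_neg + false_pos)

    # Hypothetical counts for a pneumothorax screen against a CT gold standard
    sens = sensitivity(true_pos=2, false_neg=8)    # 2 of 10 true cases found
    spec = specificity(true_neg=40, false_pos=10)  # 40 of 50 non-cases cleared
    ```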

  15. Comparison of apparent diffusion coefficients (ADCs) between two-point and multi-point analyses using high-B-value diffusion MR imaging

    International Nuclear Information System (INIS)

    Kubo, Hitoshi; Maeda, Masayuki; Araki, Akinobu

    2001-01-01

    We evaluated the accuracy of calculating apparent diffusion coefficients (ADCs) using high-B-value diffusion images. Echo planar diffusion-weighted MR images were obtained at 1.5 tesla in five standard locations in six subjects using gradient strengths corresponding to B values from 0 to 3000 s/mm². Estimation of ADCs was made using two methods: a nonlinear regression model using measurements from a full set of B values (multi-point method) and linear estimation using B values of 0 and the maximum only (two-point method). A high correlation between the two methods was noted (r=0.99), and the mean percentage differences were -0.53% and 0.53% in phantom and human brain, respectively. These results suggest there is little error in estimating ADCs calculated by the two-point technique using high-B-value diffusion MR images. (author)
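
    The two estimation approaches can be sketched as follows (an illustration, not the study's code; the study's multi-point fit used nonlinear regression, whereas the sketch uses a log-linear least-squares slope, which coincides with it for noiseless monoexponential decay):

    ```python
    import math

    def adc_two_point(s0, s_bmax, b_max):
        """Two-point ADC estimate from signals at b = 0 and b = b_max:
        ADC = ln(S0 / Sb) / b_max."""
        return math.log(s0 / s_bmax) / b_max

    def adc_multi_point(b_values, signals):
        """Multi-point ADC estimate: least-squares slope of -ln(S) versus b."""
        y = [-math.log(s) for s in signals]
        n = len(b_values)
        mb, my = sum(b_values) / n, sum(y) / n
        num = sum((b - mb) * (v - my) for b, v in zip(b_values, y))
        den = sum((b - mb) ** 2 for b in b_values)
        return num / den

    # Noiseless monoexponential decay: both estimators recover the true ADC
    true_adc = 0.0008  # mm^2/s, a typical brain-tissue value
    b_vals = [0, 500, 1000, 1500, 2000, 2500, 3000]
    sig = [math.exp(-b * true_adc) for b in b_vals]
    ```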

  16. Analytical performance, agreement and user-friendliness of six point-of-care testing urine analysers for urinary tract infection in general practice

    NARCIS (Netherlands)

    Schot, Marjolein J C; van Delft, Sanne; Kooijman-Buiting, Antoinette M J; de Wit, Niek J; Hopstaken, Rogier M

    2015-01-01

    OBJECTIVE: Various point-of-care testing (POCT) urine analysers are commercially available for routine urine analysis in general practice. The present study compares the analytical performance, agreement and user-friendliness of six different POCT urine analysers for diagnosing urinary tract infection.

  17. Performance Assessment and Sensitivity Analyses of Disposal of Plutonium as Can-in-Canister Ceramic

    International Nuclear Information System (INIS)

    Rainer Senger

    2001-01-01

    The purpose of this analysis is to examine whether there is a justification for using high-level waste (HLW) as a surrogate for plutonium disposal in can-in-canister ceramic in the total-system performance assessment (TSPA) model for the Site Recommendation (SR). In the TSPA-SR model, the immobilized plutonium waste form is not explicitly represented, but is implicitly represented as an equal number of canisters of HLW. There are about 50 metric tons of plutonium in the U. S. Department of Energy inventory of surplus fissile material that could be disposed. Approximately 17 tons of this material contain significant quantities of impurities and are considered unsuitable for mixed-oxide (MOX) reactor fuel. This material has been designated for direct disposal by immobilization in a ceramic waste form and encapsulating this waste form in high-level waste (HLW). The remaining plutonium is suitable for incorporation into MOX fuel assemblies for commercial reactors (Shaw 1999, Section 2). In this analysis, two cases of immobilized plutonium disposal are analyzed, the 17-ton case and the 13-ton case (Shaw et al. 2001, Section 2.2). The MOX spent-fuel disposal is not analyzed in this report. In the TSPA-VA (CRWMS M and O 1998a, Appendix B, Section B-4), the calculated dose release from immobilized plutonium waste form (can-in-canister ceramic) did not exceed that from an equivalent amount of HLW glass. This indicates that the HLW could be used as a surrogate for the plutonium can-in-canister ceramic. Representation of can-in-canister ceramic as a surrogate is necessary to reduce the number of waste forms in the TSPA model. This reduction reduces the complexity and running time of the TSPA model and makes the analyses tractable. This document was developed under a Technical Work Plan (CRWMS M and O 2000a), and is compliant with that plan. The application of the Quality Assurance (QA) program to the development of that plan (CRWMS M and O 2000a) and of this Analysis is

  18. High sensitivity point-of-care device for direct virus diagnostics

    DEFF Research Database (Denmark)

    Kiilerich-Pedersen, Katrine; Dapra, Johannes; Cherré, Solène

    2013-01-01

    Influenza infections are associated with high morbidity and mortality, carry the risk of pandemics, and pose a considerable economic burden worldwide. To improve the management of the illness, it is essential with accurate and fast point-of-care diagnostic tools for use in the field or at the pat...

  19. Reproduction of the Yucca Mountain Project TSPA-LA Uncertainty and Sensitivity Analyses and Preliminary Upgrade of Models

    Energy Technology Data Exchange (ETDEWEB)

    Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis; Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis

    2016-09-01

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  20. Chronic whiplash and central sensitization; an evaluation of the role of a myofascial trigger points in pain modulation

    Directory of Open Access Journals (Sweden)

    Freeman Michael D

    2009-04-01

    Full Text Available Abstract Objective: It has been established that chronic neck pain following whiplash is associated with the phenomenon of central sensitization, in which injured and uninjured parts of the body exhibit lowered pain thresholds due to an alteration in central pain processing. It has furthermore been hypothesized that peripheral sources of nociception in the muscles may perpetuate central sensitization in chronic whiplash. The hypothesis explored in the present study was whether myofascial trigger points serve as a modulator of central sensitization in subjects with chronic neck pain. Design: Controlled case series. Setting: Outpatient chronic pain clinic. Subjects: Seventeen patients with chronic and intractable neck pain and 10 healthy controls without complaints of neck pain. Intervention: Symptomatic subjects received anesthetic infiltration of myofascial trigger points in the upper trapezius muscles and controls received the anesthetic in the thigh. Outcome measures: Pre- and post-injection cervical range of motion and pressure pain thresholds (PPT) over the infraspinatus, wrist extensor, and tibialis anterior muscles. Sensitivity to light (photophobia) and subjects' perception of pain using a visual analog scale (VAS) were also evaluated before and after injections. Only the PPT was evaluated in the asymptomatic controls. Results: Immediate (within 1 minute) alterations in cervical range of motion and pressure pain thresholds were observed following an average of 3.8 injections with 1–2 cc of 1% lidocaine into carefully identified trigger points. Cervical range of motion increased by an average of 49% (p = 0.000) in flexion and 44% (p = 0.001) in extension, 47% (p = 0.000) and 28% (p Conclusion: The present data suggest that myofascial trigger points serve to perpetuate lowered pain thresholds in uninjured tissues. Additionally, it appears that lowered pain thresholds associated with central sensitization can be immediately reversed, even when associated

  1. Neuro Emotional Technique for the treatment of trigger point sensitivity in chronic neck pain sufferers: A controlled clinical trial

    Directory of Open Access Journals (Sweden)

    Pollard Henry

    2008-05-01

    Full Text Available Abstract Background Trigger points have been shown to be active in many myofascial pain syndromes. Treatment of trigger point pain and dysfunction may be explained through the mechanisms of central and peripheral paradigms. This study aimed to investigate whether the mind/body treatment of Neuro Emotional Technique (NET) could significantly relieve pain sensitivity of trigger points presenting in a cohort of chronic neck pain sufferers. Methods Sixty participants presenting to a private chiropractic clinic with chronic cervical pain as their primary complaint were sequentially allocated into treatment and control groups. Participants in the treatment group received a short course of Neuro Emotional Technique that consists of muscle testing, general semantics and Traditional Chinese Medicine. The control group received a sham NET protocol. Outcome measurements included pain assessment utilizing a visual analog scale and a pressure gauge algometer. Pain sensitivity was measured at four trigger point locations: suboccipital region (S); levator scapulae region (LS); sternocleidomastoid region (SCM); and temporomandibular region (TMJ). For each outcome measurement and each trigger point, we calculated the change in measurement between pre- and post-treatment. We then examined the relationships between these measurement changes and six independent variables (i.e. treatment group and the above five additional participant variables) using a forward stepwise General Linear Model. Results The visual analog scale (0 to 10) showed an improvement of 7.6 at S, 7.2 at LS, 7.5 at SCM and 7.1 at the TMJ in the treatment group, compared with no improvement at S, and an improvement of 0.04 at LS, 0.1 at SCM and 0.1 at the TMJ point in the control group (P Conclusion After a short course of NET treatment, measurements of visual analog scale and pressure algometer recordings of four trigger point locations in a cohort of chronic neck pain sufferers were significantly

  2. Neuro Emotional Technique for the treatment of trigger point sensitivity in chronic neck pain sufferers: a controlled clinical trial.

    Science.gov (United States)

    Bablis, Peter; Pollard, Henry; Bonello, Rod

    2008-05-21

    Trigger points have been shown to be active in many myofascial pain syndromes. Treatment of trigger point pain and dysfunction may be explained through the mechanisms of central and peripheral paradigms. This study aimed to investigate whether the mind/body treatment of Neuro Emotional Technique (NET) could significantly relieve pain sensitivity of trigger points presenting in a cohort of chronic neck pain sufferers. Sixty participants presenting to a private chiropractic clinic with chronic cervical pain as their primary complaint were sequentially allocated into treatment and control groups. Participants in the treatment group received a short course of Neuro Emotional Technique that consists of muscle testing, general semantics and Traditional Chinese Medicine. The control group received a sham NET protocol. Outcome measurements included pain assessment utilizing a visual analog scale and a pressure gauge algometer. Pain sensitivity was measured at four trigger point locations: suboccipital region (S); levator scapulae region (LS); sternocleidomastoid region (SCM) and temporomandibular region (TMJ). For each outcome measurement and each trigger point, we calculated the change in measurement between pre- and post-treatment. We then examined the relationships between these measurement changes and six independent variables (i.e. treatment group and the above five additional participant variables) using a forward stepwise General Linear Model. The visual analog scale (0 to 10) showed an improvement of 7.6 at S, 7.2 at LS, 7.5 at SCM and 7.1 at the TMJ in the treatment group compared with no improvement at S, and an improvement of 0.04 at LS, 0.1 at SCM and 0.1 at the TMJ point in the control group (P After a short course of NET treatment, measurements of visual analog scale and pressure algometer recordings of four trigger point locations in a cohort of chronic neck pain sufferers were significantly improved when compared to a control group which received a sham protocol of NET. 
Chronic neck pain sufferers may benefit from NET treatment in the relief

  3. Sensitivity and uncertainty analyses of unsaturated flow travel time in the CHnz unit of Yucca Mountain, Nevada

    International Nuclear Information System (INIS)

    Nichols, W.E.; Freshley, M.D.

    1991-10-01

    This report documents the results of sensitivity and uncertainty analyses conducted to improve understanding of unsaturated zone ground-water travel time distribution at Yucca Mountain, Nevada. The US Department of Energy (DOE) is currently performing detailed studies at Yucca Mountain to determine its suitability as a host for a geologic repository for the containment of high-level nuclear wastes. As part of these studies, DOE is conducting a series of Performance Assessment Calculational Exercises, referred to as the PACE problems. The work documented in this report represents a part of the PACE-90 problems that addresses the effects of natural barriers of the site that will stop or impede the long-term movement of radionuclides from the potential repository to the accessible environment. In particular, analyses described in this report were designed to investigate the sensitivity of the ground-water travel time distribution to different input parameters and the impact of uncertainty associated with those input parameters. Five input parameters were investigated in this study: recharge rate, saturated hydraulic conductivity, matrix porosity, and two curve-fitting parameters used for the van Genuchten relations to quantify the unsaturated moisture-retention and hydraulic characteristics of the matrix. 23 refs., 20 figs., 10 tabs
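
    The van Genuchten relation mentioned above ties moisture content to pressure head through two curve-fitting parameters (α and n). A minimal sketch of the retention function, with illustrative parameter values (hypothetical, not the study's calibrated ones):

    ```python
    def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
        """van Genuchten (1980) moisture retention curve:
        Se = [1 + (alpha*|h|)^n]^(-m), with m = 1 - 1/n,
        theta = theta_r + (theta_s - theta_r) * Se."""
        m = 1.0 - 1.0 / n
        se = (1.0 + (alpha * abs(h)) ** n) ** (-m)
        return theta_r + (theta_s - theta_r) * se

    # Illustrative parameters for a tuff-like matrix (hypothetical values)
    params = dict(theta_r=0.05, theta_s=0.30, alpha=0.005, n=1.8)
    theta_wet = van_genuchten_theta(0.0, **params)    # at saturation: theta_s
    theta_dry = van_genuchten_theta(1.0e4, **params)  # strongly dried
    ```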

  4. The Svalbard intertidal zone: a concept for the use of GIS in applied oil sensitivity, vulnerability and impact analyses

    International Nuclear Information System (INIS)

    Moe, K.A.; Skeie, G.M.; Brude, O.W.; Loevas, S.M.; Nedreboes, M.; Weslawski, J.M.

    2000-01-01

    Historical oil spills have shown that environmental damage on the seashore can be measured by acute mortality of single species and destabilisation of the communities. The biota, however, has the potential to recover over some period of time. Applied to the understanding of the fate of oil and of population and community dynamics, the impact can be described as a function of the following two factors: the immediate extent and the duration of damage. A simple and robust mathematical model is developed to describe this process in the Svalbard intertidal zone. Based on the integral of key biological and physical factors, i.e., community-specific sensitivity, oil accumulation and retention capacity of the substrate, ice cover and wave exposure, the model is implemented in a Geographical Information System (GIS) for characterisation of the habitat's sensitivity and vulnerability. Geomorphologic maps and georeferenced biological data are used as input. Digital maps of the intertidal zone are compiled, indicating the shoreline sensitivity and vulnerability in terms of coastal segments and grid aggregations. Selected results have been used in the national assessment programme of oil development in the Barents Sea for priorities in environmental impact assessments and risk analyses as well as oil spill contingency planning. (Author)

  5. Breed differences in dogs sensitivity to human points: a meta-analysis.

    Science.gov (United States)

    Dorey, Nicole R; Udell, Monique A R; Wynne, Clive D L

    2009-07-01

    The last decade has seen a substantial increase in research on the behavioral and cognitive abilities of pet dogs, Canis familiaris. The most commonly used experimental paradigm is the object-choice task in which a dog is given a choice of two containers and guided to the reinforced object by human pointing gestures. We review here studies of this type and attempt a meta-analysis of the available data. In the meta-analysis breeds of dogs were grouped into the eight categories of the American Kennel Club, and into four clusters identified by Parker and Ostrander [Parker, H.G., Ostrander, E.A., 2005. Canine genomics and genetics: running with the pack. PLoS Genet. 1, 507-513] on the basis of a genetic analysis. No differences in performance between breeds categorized in either fashion were identified. Rather, all dog breeds appear to be similarly and highly successful in following human points to locate desired food. We suggest this result could be due to the paucity of data available in published studies, and the restricted range of breeds tested.
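
    Success in the two-container object-choice task described here is conventionally judged against a 50% chance level; an exact binomial test makes that concrete (a sketch; the trial counts below are hypothetical, not from the reviewed studies):

    ```python
    from math import comb

    def binom_p_at_least(successes, trials, p_chance=0.5):
        """Exact one-sided binomial p-value: P(X >= successes) under chance."""
        return sum(comb(trials, k) * p_chance ** k * (1 - p_chance) ** (trials - k)
                   for k in range(successes, trials + 1))

    # e.g. a dog following the point on 9 of 10 trials
    p_val = binom_p_at_least(9, 10)  # small p-value: better than chance
    ```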

  6. Surface Observation and Pore Size Analyses of Polypropylene/Low-Melting Point Polyester Filter Materials: Influences of Heat Treatment

    Directory of Open Access Journals (Sweden)

    Lin Jia-Horng

    2016-01-01

    Full Text Available This study proposes making filter materials with polypropylene (PP) and low-melting point polyester (LPET) fibers. The influences of the temperatures and times of heat treatment on the morphology of the thermal bonding points and the average pore size of the PP/LPET filter materials are examined. The test results indicate that the morphology of the thermal bonding points is highly correlated with the average pore size. When the temperature of heat treatment is increased, the fibers are joined first by thermal bonding points, and then by large thermal bonding areas, thereby decreasing the average pore size of the PP/LPET filter materials. A heat treatment of 110 °C for 60 seconds can decrease the pore size from 39.6 μm to 12.0 μm.

  7. IceCube-Gen2 sensitivity improvement for steady neutrino point sources

    Energy Technology Data Exchange (ETDEWEB)

    Coenders, Stefan; Resconi, Elisa [TU Muenchen, Physik-Department, Excellence Cluster Universe, Boltzmannstr. 2, 85748 Garching (Germany); Collaboration: IceCube-Collaboration

    2015-07-01

    The observation of an astrophysical neutrino flux in high-energy events starting in IceCube strengthens the search for sources of astrophysical neutrinos. Identification of these sources requires good pointing accuracy at high statistics, achieved mainly using muons created by charged-current muon neutrino interactions passing through the IceCube detector. We report on preliminary studies of a possible high-energy extension, IceCube-Gen2. With a detection volume six times larger, both the effective area and the reconstruction accuracy will improve with respect to IceCube. Moreover, using (in-ice) active veto techniques will significantly improve the performance for Southern hemisphere events, where possible local candidate neutrino sources are located.

  8. Structural vascular disease in Africans: performance of ethnic-specific waist circumference cut points using logistic regression and neural network analyses: the SABPA study

    OpenAIRE

    Botha, J.; De Ridder, J.H.; Potgieter, J.C.; Steyn, H.S.; Malan, L.

    2013-01-01

    A recently proposed model for waist circumference cut points (RPWC), driven by increased blood pressure, was demonstrated in an African population. We therefore aimed to validate the RPWC by comparing the RPWC and the Joint Statement Consensus (JSC) models via Logistic Regression (LR) and Neural Networks (NN) analyses. Urban African gender groups (N=171) were stratified according to the JSC and RPWC cut point models. Ultrasound carotid intima media thickness (CIMT), blood pressure (BP) and fa...
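
    In the logistic-regression approach described above, a waist-circumference cut point can be read off a fitted model as the value where predicted risk crosses 50% (the coefficients below are invented for illustration, not the SABPA estimates):

    ```python
    import math

    def logistic_prob(x, b0, b1):
        """Predicted probability from a fitted univariate logistic model."""
        return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

    def cut_point(b0, b1):
        """Waist circumference where predicted risk = 0.5, i.e. b0 + b1*x = 0."""
        return -b0 / b1

    # Hypothetical fit: log-odds = -9.0 + 0.1 * WC(cm) -> cut point at 90 cm
    wc_cut = cut_point(-9.0, 0.1)
    risk_at_cut = logistic_prob(wc_cut, -9.0, 0.1)  # 0.5 by construction
    ```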

  9. Evaluating the effect of sample type on American alligator (Alligator mississippiensis) analyte values in a point-of-care blood analyser

    OpenAIRE

    Hamilton, Matthew T.; Finger, John W.; Winzeler, Megan E.; Tuberville, Tracey D.

    2016-01-01

    The assessment of wildlife health has been enhanced by the ability of point-of-care (POC) blood analysers to provide biochemical analyses of non-domesticated animals in the field. However, environmental limitations (e.g. temperature, atmospheric humidity and rain) and lack of reference values may inhibit researchers from using such a device with certain wildlife species. Evaluating the use of alternative sample types, such as plasma, in a POC device may afford researchers the opportunity to d...

  10. High-sensitivity detection of cardiac troponin I with UV LED excitation for use in point-of-care immunoassay.

    Science.gov (United States)

    Rodenko, Olga; Eriksson, Susann; Tidemand-Lichtenberg, Peter; Troldborg, Carl Peder; Fodgaard, Henrik; van Os, Sylvana; Pedersen, Christian

    2017-08-01

    High-sensitivity cardiac troponin assay development enables determination of biological variation in healthy populations, more accurate interpretation of clinical results and points towards earlier diagnosis and rule-out of acute myocardial infarction. In this paper, we report on preliminary tests of an immunoassay analyzer employing optimized LED excitation to measure a standard troponin I assay and a novel research high-sensitivity troponin I assay. The limit of detection is improved by a factor of 5 for standard troponin I and by a factor of 3 for the research high-sensitivity troponin I assay, compared to flash lamp excitation. The obtained limit of detection was 0.22 ng/L measured on plasma with the research high-sensitivity troponin I assay and 1.9 ng/L measured on tris-saline-azide buffer containing bovine serum albumin with the standard troponin I assay. We discuss the optimization of time-resolved detection of lanthanide fluorescence based on the time constants of the system and analyze the background and noise sources in a heterogeneous fluoroimmunoassay. We determine the limiting factors and their impact on the measurement performance. The suggested model can be generally applied to fluoroimmunoassays employing the dry-cup concept.
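
    A limit of detection such as the 0.22 ng/L quoted here is conventionally derived from the spread of blank measurements divided by the calibration slope; a common "3σ" sketch (the signal values and slope below are hypothetical):

    ```python
    import statistics

    def limit_of_detection(blank_signals, calibration_slope):
        """Common 3-sigma convention: LoD = 3 * SD(blanks) / slope,
        where the slope converts signal counts to concentration units."""
        return 3.0 * statistics.stdev(blank_signals) / calibration_slope

    # Hypothetical blank fluorescence counts and slope (counts per ng/L)
    blanks = [100.0, 102.0, 98.0]
    lod = limit_of_detection(blanks, calibration_slope=10.0)
    ```

    Lowering the blank noise (e.g. by optimized time-resolved gating) or steepening the slope (e.g. stronger excitation) both push the LoD down.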

  11. Preliminary investigation of fuel cycle in fast reactors by the correlations method and sensitivity analyses of nuclear characteristics

    International Nuclear Information System (INIS)

    Amorim, E.S. do; Castro Lobo, P.D. de.

    1980-11-01

    A reduction of computing effort was achieved as a result of the application of space - independent continuous slowing down theory in the spectrum averaged cross sections and further expressing then in a quadratic corelation whith the temperature and the composition. The decoupling between variables that express some of the important nuclear characteristics allowed to introduce a sensitivity analyses treatment for the full prediction of the behavior, over the fuel cycle, of the LMFBR considered. As a potential application of the method here in developed is to predict the nuclear characteristics of another reactor, face some reference reactor of the family considered. Excellent agreement with exact calculation is observed only when perturbations occur in nuclear data and/or fuel isotopic characteristics, but fair results are obtained whith variations in system components other than the fuel. (Author) [pt

  12. Increased sensitivity in thick-target particle induced X-ray emission analyses using dry ashing for preconcentration

    International Nuclear Information System (INIS)

    Lill, J.-O.; Harju, L.; Saarela, K.-E.; Lindroos, A.; Heselius, S.-J.

    1999-01-01

    The sensitivity in thick-target particle induced X-ray emission (PIXE) analyses of biological materials can be enhanced by dry ashing. The gain depends mainly on the mass reduction factor and the composition of the residual ash. The enhancement factor was 7 for the certified reference material Pine Needles and the limits of detection (LODs) were below 0.2 μg/g for Zn, Cu, Rb and Sr. When ashing biological materials with low ash contents such as wood of pine or spruce (0.3% of dry weight) and honey (0.1% of wet weight) the gain was far greater. The LODs for these materials were 30 ng/g for wood and below 10 ng/g for honey. In addition, the ashed samples were more homogenous and more resistant to changes during the irradiation than the original biological samples. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
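
    The gain from dry ashing can be bounded by a simple mass-balance argument: if all of an analyte survives ashing, the ideal enrichment factor is the reciprocal of the ash fraction, and the detection limit improves by the factor actually achieved. A sketch of that arithmetic (the ash fraction echoes the 0.3% quoted for wood; the LoD numbers are illustrative):

    ```python
    def ideal_enrichment(ash_fraction):
        """Upper bound on preconcentration: all analyte retained in the ash."""
        return 1.0 / ash_fraction

    def lod_after_ashing(original_lod, achieved_factor):
        """Detection limit scales inversely with the achieved enrichment."""
        return original_lod / achieved_factor

    wood_bound = ideal_enrichment(0.003)       # ~333x for 0.3% ash content
    example_lod = lod_after_ashing(1.4, 7.0)   # e.g. 1.4 ug/g improves to 0.2
    ```

    In practice the achieved factor falls below the ideal bound because the composition of the residual ash (matrix effects) also limits the PIXE sensitivity.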

  13. Frequency and Proximity Clustering Analyses for Georeferencing Toponyms and Points-of-Interest Names from a Travel Journal

    Science.gov (United States)

    McDermott, Scott D.

    2017-01-01

    This research study uses geographic information retrieval (GIR) to georeference toponyms and points-of-interest (POI) names from a travel journal. Travel journals are an ideal data source with which to conduct this study because they are significant accounts specific to the author's experience, and contain geographic instances based on the…

  14. A DNA microarray-based methylation-sensitive (MS)-AFLP hybridization method for genetic and epigenetic analyses.

    Science.gov (United States)

    Yamamoto, F; Yamamoto, M

    2004-07-01

    We previously developed a PCR-based DNA fingerprinting technique named the Methylation Sensitive (MS)-AFLP method, which permits comparative genome-wide scanning of methylation status with a manageable number of fingerprinting experiments. The technique uses the methylation-sensitive restriction enzyme NotI in the context of the existing Amplified Fragment Length Polymorphism (AFLP) method. Here we report the successful conversion of this gel electrophoresis-based DNA fingerprinting technique into a DNA microarray hybridization technique (DNA Microarray MS-AFLP). By performing a total of 30 (15 x 2 reciprocal labeling) DNA Microarray MS-AFLP hybridization experiments on genomic DNA from two breast and three prostate cancer cell lines in all pairwise combinations, and Southern hybridization experiments using more than 100 different probes, we have demonstrated that the DNA Microarray MS-AFLP is a reliable method for genetic and epigenetic analyses. No statistically significant differences were observed in the number of differences between the breast-prostate hybridization experiments and the breast-breast or prostate-prostate comparisons.

  15. Uncertainty and sensitivity analyses for gas and brine migration at the Waste Isolation Pilot Plant, May 1992

    International Nuclear Information System (INIS)

    Helton, J.C.; Bean, J.E.; Butcher, B.M.; Garner, J.W.; Vaughn, P.; Schreiber, J.D.; Swift, P.N.

    1993-08-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis, stepwise regression analysis and examination of scatterplots are used in conjunction with the BRAGFLO model to examine two-phase flow (i.e., gas and brine) at the Waste Isolation Pilot Plant (WIPP), which is being developed by the US Department of Energy as a disposal facility for transuranic waste. The analyses consider either a single waste panel or the entire repository in conjunction with the following cases: (1) fully consolidated shaft, (2) system of shaft seals with panel seals, and (3) single shaft seal without panel seals. The purpose of this analysis is to develop insights on factors that are potentially important in showing compliance with applicable regulations of the US Environmental Protection Agency (i.e., 40 CFR 191, Subpart B; 40 CFR 268). The primary topics investigated are (1) gas production due to corrosion of steel, (2) gas production due to microbial degradation of cellulosics, (3) gas migration into anhydrite marker beds in the Salado Formation, (4) gas migration through a system of shaft seals to overlying strata, and (5) gas migration through a single shaft seal to overlying strata. Important variables identified in the analyses include initial brine saturation of the waste, stoichiometric terms for corrosion of steel and microbial degradation of cellulosics, gas barrier pressure in the anhydrite marker beds, shaft seal permeability, and panel seal permeability.
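
    The sampling-plus-regression machinery named in this record can be sketched compactly. The toy response below is an invented stand-in for BRAGFLO (far beyond a snippet); the variable roles and coefficients are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, k):
    """n stratified (Latin hypercube) samples of k variables on [0, 1)."""
    u = np.empty((n, k))
    for j in range(k):
        u[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return u

# Hypothetical stand-in response: "pressure" rises with a corrosion
# stoichiometry term x0, falls with seal permeability x1, and ignores
# the inert input x2.
n = 200
x = latin_hypercube(n, 3)
y = 2.0 * x[:, 0] - 1.0 * x[:, 1] + 0.05 * rng.standard_normal(n)

# Standardized rank regression coefficients, a common companion to the
# partial correlation / stepwise regression measures named above.
rx = np.argsort(np.argsort(x, axis=0), axis=0).astype(float)
ry = np.argsort(np.argsort(y)).astype(float)
rx = (rx - rx.mean(axis=0)) / rx.std(axis=0)
ry = (ry - ry.mean()) / ry.std()
srrc, *_ = np.linalg.lstsq(rx, ry, rcond=None)
```

    The fitted coefficients rank x0 as the dominant driver, x1 as a strong negative influence, and x2 as noise, which is the kind of variable screening the WIPP analyses report.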

  16. Probabilistic reliability analyses to detect weak points in secondary-side residual heat removal systems of KWU PWR plants

    International Nuclear Information System (INIS)

    Schilling, R.

    1984-01-01

    Requirements made by Federal German licensing authorities called for the analysis of the secondary-side residual heat removal systems of new PWR plants with regard to availability, possible weak points and the balanced nature of the overall system for different incident sequences. Following a description of the generic concept and the process and safety-related systems for steam generator feed and main steam discharge, the reliability of the latter is analyzed for the small break LOCA and emergency power mode incidents, weak points in the process systems are identified, remedial measures of a system-specific and test-strategic nature are presented and their contribution to improving system availability is quantified. A comparison with the results of the German Risk Study on Nuclear Power Plants (GRS) shows a distinct reduction in core meltdown frequency. (orig.)

  17. Reliability analyses to detect weak points in secondary-side residual heat removal systems of KWU PWR plants

    International Nuclear Information System (INIS)

    Schilling, R.

    1983-01-01

    Requirements made by Federal German licensing authorities called for the analysis of the secondary-side residual heat removal systems of new PWR plants with regard to availability, possible weak points and the balanced nature of the overall system for different incident sequences. Following a description of the generic concept and the process and safety-related systems for steam generator feed and main steam discharge, the reliability of the latter is analyzed for the small break LOCA and emergency power mode incidents, weak points in the process systems are identified, remedial measures of a system-specific and test-strategic nature are presented, and their contribution to improving system availability is quantified. A comparison with the results of the German Risk Study on Nuclear Power Plants (GRS) shows a distinct reduction in core meltdown frequency. (orig.)

  18. Trend and change point analyses of annual precipitation in the Souss-Massa Region in Morocco during 1932-2010

    Science.gov (United States)

    Abahous, H.; Ronchail, J.; Sifeddine, A.; Kenny, L.; Bouchaou, L.

    2017-11-01

    In the context of an arid area such as the Souss-Massa region, the availability of time series analyses of observed local data is vital to better characterize the regional rainfall configuration. In this paper, a dataset of monthly precipitation, collected from different local meteorological stations during 1932-2010, is quality controlled and analyzed to detect trends and change points. The temporal distribution of outliers shows an annual cycle and a decrease of their number since the 1980s. The results of the standard normal homogeneity test, penalized maximal t test, and Mann-Whitney-Pettitt test show that 42% of the series are homogeneous. The analysis of annual precipitation in the Souss-Massa region during 1932-2010 shows wet conditions with a maximum between 1963 and 1965 followed by a decrease since 1973. The latter is identified as a statistically significant regional change point in the Western High Atlas and Anti Atlas Mountains, highlighting a decline in long-term average precipitation.
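
    The Mann-Whitney-Pettitt test used above is a rank-based change-point test that is straightforward to implement. A minimal sketch on a synthetic series (the data and shift location are invented, not the Souss-Massa record):

```python
import math

def pettitt(series):
    """Mann-Whitney-Pettitt change-point test.

    Returns the index of the last point before the most probable
    change and the approximate two-sided significance level.
    """
    n = len(series)

    def sign(v):
        return (v > 0) - (v < 0)

    # U_t compares everything up to t with everything after t.
    u = [sum(sign(series[j] - series[i])
             for i in range(t + 1)
             for j in range(t + 1, n))
         for t in range(n - 1)]
    k = max(abs(v) for v in u)
    t_change = max(range(n - 1), key=lambda t: abs(u[t]))
    # Approximate significance of the maximum statistic K.
    p = 2.0 * math.exp(-6.0 * k * k / (n ** 3 + n ** 2))
    return t_change, min(p, 1.0)

# Synthetic 80-"year" series with a downward shift after index 39,
# loosely mimicking the post-1973 decline described above.
data = [100.0 + (i % 7) for i in range(40)] + [70.0 + (i % 7) for i in range(40)]
idx, p_value = pettitt(data)
```

    On this artificial series the test locates the break at index 39 with a vanishingly small p-value, the same kind of statistically significant change point the study reports for 1973.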

  19. Comparison of a point-of-care analyser for the determination of HbA1c with HPLC method

    OpenAIRE

    Grant, D.A.; Dunseath, G.J.; Churm, R.; Luzio, S.D.

    2017-01-01

    Aims: As the use of Point of Care Testing (POCT) devices for measurement of glycated haemoglobin (HbA1c) increases, it is imperative to determine how their performance compares to laboratory methods. This study compared the performance of the automated Quo-Test POCT device (EKF Diagnostics), which uses boronate fluorescence quenching technology, with a laboratory-based High Performance Liquid Chromatography (HPLC) method (Biorad D10) for measurement of HbA1c. Methods: Whole blood EDTA samples...

  20. Uncertainty and sensitivity analyses of energy and visual performances of office building with external venetian blind shading in hot-dry climate

    International Nuclear Information System (INIS)

    Singh, Ramkishore; Lazarus, I.J.; Kishore, V.V.N.

    2016-01-01

    Highlights: • Various alternatives of glazing and venetian blind were simulated for office space. • Daylighting and energy performances were assessed for each alternative. • Large uncertainties were estimated in the energy consumptions and UDI values. • Glazing design parameters were prioritised by performing sensitivity analysis. • WWR, glazing type, blind orientation and slat angle were identified top in priority. - Abstract: Fenestration has become an integral part of the buildings and has a significant impact on the energy and indoor visual performances. Inappropriate design of the fenestration component may lead to low energy efficiency and visual discomfort as a result of high solar and thermal heat gains, excessive daylight and direct sunlight. External venetian blind has been identified as one of the effective shading devices for controlling the heat gains and daylight through fenestration. This study explores uncertainty and sensitivity analyses to identify and prioritize the most influencing parameters for designing glazed components that include external shading devices for office buildings. The study was performed for the hot-dry climate of Jodhpur (latitude 26°18′N, longitude 73°01′E) using EnergyPlus, a whole building energy simulation tool providing a large number of inputs for eight façade orientations. A total of 150 and 845 data points (for each orientation) for the input variables were generated using hyper-cubic sampling and the extended FAST method for the uncertainty and sensitivity analyses, respectively. Results indicated a large uncertainty in the lighting, HVAC, source energy consumptions and useful daylight illuminance (UDI). The estimated coefficients of variation were highest (up to 106%) for UDI, followed by lighting energy (up to 45%) and HVAC energy use (around 33%). The sensitivity analysis identified window to wall ratio, glazing type, blind type (orientation of slats) and slat angle as highly influencing factors for energy and

  1. Evaluation of pain sensitivity by tender point counts and myalgic score in patients with and without obstructive sleep apnea syndrome.

    Science.gov (United States)

    Terzi, Rabia; Yılmaz, Zahide

    2017-03-01

    The purpose of this study was to assess the difference between patients with and without obstructive sleep apnea syndrome (OSAS) with respect to pain sensitivity. The study was conducted on 31 women diagnosed with OSAS and 31 healthy women. All patients underwent polysomnographic testing. A pressure algometer (dolorimeter) was used to measure the pressure pain threshold. Fibromyalgia was diagnosed based on the 1990 American College of Rheumatology diagnosis criteria. The myalgic score was 73.95 ± 18.09 in patients with OSAS, while this value was 84.18 ± 24.31 in the control group. The difference between the groups was statistically significant (P = 0.041). The number of tender points was 8.19 ± 3.35 in the patient group with OSAS, while this number was 6.35 ± 2.23 in the control group. The difference between the two groups was statistically significant (P = 0.014). No statistically significant differences were found between age, body mass index, Beck depression scores, control point score and the presence of fibromyalgia, between the two groups (P > 0.05). A statistically significant positive correlation was found between the myalgic scores and mean O2 saturation (%) values of the patients (r = 0.357; P = 0.049). The differences noted between OSAS patients and the control group with respect to myalgic score and the number of tender points suggest that there might be a relation between OSAS and pain sensitivity. There might be an association between low oxygen saturation and total myalgic score. © 2015 Asia Pacific League of Associations for Rheumatology and Wiley Publishing Asia Pty Ltd.

  2. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Science.gov (United States)

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
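
    The output variance decomposition described above can be sketched with a pick-and-freeze Monte Carlo estimator of first-order variance-based sensitivity indices. The toy model is an invented stand-in for the ABM, and plain pseudo-random sampling is used here in place of the quasi-random sampling the study employs:

```python
import numpy as np

rng = np.random.default_rng(1)

def first_order_indices(model, k, n=4096):
    """Pick-and-freeze Monte Carlo estimate of first-order
    variance-based sensitivity indices (Saltelli-style estimator)."""
    a = rng.random((n, k))
    b = rng.random((n, k))
    ya, yb = model(a), model(b)
    total_var = np.var(np.concatenate([ya, yb]))
    s = np.empty(k)
    for i in range(k):
        ab = a.copy()
        ab[:, i] = b[:, i]  # freeze input i at B's values
        s[i] = np.mean(yb * (model(ab) - ya)) / total_var
    return s

# Hypothetical stand-in for the ABM: output variance dominated by x0.
def toy_model(x):
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.2 * x[:, 2]

s = first_order_indices(toy_model, 3)
```

    Ranking the indices identifies the small set of inputs that produce most of the output variance, which is exactly how the study justifies reducing the input space to build its simpler ABM versions.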

  3. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Directory of Open Access Journals (Sweden)

    Arika Ligmann-Zielinska

    Full Text Available Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.

  4. Quasi-laminar stability and sensitivity analyses for turbulent flows: Prediction of low-frequency unsteadiness and passive control

    Science.gov (United States)

    Mettot, Clément; Sipp, Denis; Bézard, Hervé

    2014-04-01

    This article presents a quasi-laminar stability approach to identify in high-Reynolds number flows the dominant low-frequencies and to design passive control means to shift these frequencies. The approach is based on a global linear stability analysis of mean-flows, which correspond to the time-average of the unsteady flows. Contrary to the previous work by Meliga et al. ["Sensitivity of 2-D turbulent flow past a D-shaped cylinder using global stability," Phys. Fluids 24, 061701 (2012)], we use the linearized Navier-Stokes equations based solely on the molecular viscosity (leaving aside any turbulence model and any eddy viscosity) to extract the least stable direct and adjoint global modes of the flow. Then, we compute the frequency sensitivity maps of these modes, so as to predict beforehand where a small control cylinder optimally shifts the frequency of the flow. In the case of the D-shaped cylinder studied by Parezanović and Cadot [J. Fluid Mech. 693, 115 (2012)], we show that the present approach well captures the frequency of the flow and accurately recovers the frequency control maps obtained experimentally. The results are close to those already obtained by Meliga et al., who used a more complex approach in which turbulence models played a central role. The present approach is simpler and may be applied to a broader range of flows since it is tractable as soon as mean-flows — which can be obtained either numerically from simulations (Direct Numerical Simulation (DNS), Large Eddy Simulation (LES), unsteady Reynolds-Averaged-Navier-Stokes (RANS), steady RANS) or from experimental measurements (Particle Image Velocimetry - PIV) — are available. We also discuss how the influence of the control cylinder on the mean-flow may be more accurately predicted by determining an eddy-viscosity from numerical simulations or experimental measurements. From a technical point of view, we finally show how an existing compressible numerical simulation code may be used in
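
    The direct/adjoint machinery behind such sensitivity maps can be illustrated on a small matrix. The operator below is an invented stand-in for a discretized linearized flow operator (not the Navier-Stokes equations); the sketch extracts the least-stable direct and adjoint modes and forms their pointwise overlap, the usual structural-sensitivity field:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical discrete stand-in for a linearized operator about a
# mean flow: damped diagonal dynamics plus non-normal coupling.
n = 50
A = -np.diag(np.linspace(0.5, 5.0, n)) + 0.3 * rng.standard_normal((n, n))

# Least-stable direct global mode (largest real part) and the matching
# adjoint mode (eigenvector of A^H for the conjugate eigenvalue).
w, v = np.linalg.eig(A)
lead = int(np.argmax(w.real))
wa, va = np.linalg.eig(A.conj().T)
lead_adj = int(np.argmin(np.abs(wa - w[lead].conj())))

direct = v[:, lead]
adjoint = va[:, lead_adj]

# Structural sensitivity: the pointwise overlap of direct and adjoint
# modes flags where a small local perturbation (e.g. a control
# cylinder) shifts the eigenvalue, and hence the frequency, the most.
sens = np.abs(direct) * np.abs(adjoint)
sens /= np.abs(adjoint.conj() @ direct)
```

    In a real computation the "map" lives on the flow domain rather than on matrix indices, but the construction is the same: large values of `sens` mark candidate control-cylinder placements.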

  5. Economic analysis of hydrogen production through a bio-ethanol steam reforming process: Sensitivity analyses and cost estimations

    International Nuclear Information System (INIS)

    Song, Hua; Ozkan, Umit S.

    2010-01-01

    In this study, the hydrogen selling price from ethanol steam reforming has been estimated for two different production scenarios in the United States, i.e. central production (150,000 kg H2/day) and distributed (forecourt) production (1500 kg H2/day), based on a process flowchart generated by Aspen Plus® including downstream purification steps and an economic analysis model template published by the U.S. Department of Energy (DOE). The effect of several processing parameters as well as catalyst properties on the hydrogen selling price has been evaluated. $2.69/kg is estimated as the selling price for a central production process of 150,000 kg H2/day and $4.27/kg for a distributed hydrogen production process at a scale of 1500 kg H2/day. Among the parameters investigated through sensitivity analyses, ethanol feedstock cost, catalyst cost, and catalytic performance are found to play a significant role in determining the final hydrogen selling price. (author)
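
    A one-at-a-time price sensitivity of the kind reported here can be sketched with a toy cost model. The cost shares and the scaling with yield below are illustrative assumptions, not figures from the study; only the $4.27/kg forecourt price comes from the abstract:

```python
# Hypothetical split of the $4.27/kg forecourt price into cost shares:
# 55 % ethanol feedstock, 10 % catalyst, 35 % everything else, with the
# whole price scaled inversely by catalytic yield.
def selling_price(feed_mult=1.0, cat_mult=1.0, yield_mult=1.0):
    base = 4.27  # $/kg H2 at nominal conditions
    return base * (0.55 * feed_mult + 0.10 * cat_mult + 0.35) / yield_mult

# One-at-a-time sensitivity: swing each factor +/-20 % about nominal
# and record the price excursion (a simple "tornado" measure).
swings = {}
for name, kw in [("feedstock", "feed_mult"),
                 ("catalyst", "cat_mult"),
                 ("yield", "yield_mult")]:
    swings[name] = abs(selling_price(**{kw: 1.2}) - selling_price(**{kw: 0.8}))
```

    Under these assumed shares, catalytic performance and feedstock cost dominate the price swing and catalyst cost matters least, mirroring the ranking of influential parameters the study reports.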

  6. Evaluation of bentonite alteration due to interactions with iron. Sensitivity analyses to identify the important factors for the bentonite alteration

    International Nuclear Information System (INIS)

    Sasamoto, Hiroshi; Wilson, James; Sato, Tsutomu

    2013-01-01

    Performance assessment of geological disposal systems for high-level radioactive waste requires a consideration of long-term systems behaviour. It is possible that the alteration of swelling clay present in bentonite buffers might have an impact on buffer functions. In the present study, iron (as a candidate overpack material)-bentonite (I-B) interactions were evaluated as the main buffer alteration scenario. Existing knowledge on alteration of bentonite during I-B interactions was first reviewed, then the evaluation methodology was developed considering modeling techniques previously used overseas. A conceptual model for smectite alteration during I-B interactions was produced. The following reactions and processes were selected: 1) release of Fe2+ due to overpack corrosion; 2) diffusion of Fe2+ in compacted bentonite; 3) sorption of Fe2+ on smectite edge and ion exchange in interlayers; 4) dissolution of primary phases and formation of alteration products. Sensitivity analyses were performed to identify the most important factors for the alteration of bentonite by I-B interactions. (author)

  7. Angiographic core laboratory reproducibility analyses: implications for planning clinical trials using coronary angiography and left ventriculography end-points.

    Science.gov (United States)

    Steigen, Terje K; Claudio, Cheryl; Abbott, David; Schulzer, Michael; Burton, Jeff; Tymchak, Wayne; Buller, Christopher E; John Mancini, G B

    2008-06-01

    To assess reproducibility of core laboratory performance and impact on sample size calculations. Little information exists about overall reproducibility of core laboratories in contradistinction to performance of individual technicians. Also, qualitative parameters are being adjudicated increasingly as either primary or secondary end-points. The comparative impact of using diverse indexes on sample sizes has not been previously reported. We compared initial and repeat assessments of five quantitative parameters [e.g., minimum lumen diameter (MLD), ejection fraction (EF), etc.] and six qualitative parameters [e.g., TIMI myocardial perfusion grade (TMPG) or thrombus grade (TTG), etc.], as performed by differing technicians and separated by a year or more. Sample sizes were calculated from these results. TMPG and TTG were also adjudicated by a second core laboratory. MLD and EF were the most reproducible, yielding the smallest sample size calculations, whereas percent diameter stenosis and centerline wall motion require substantially larger trials. Of the qualitative parameters, all except TIMI flow grade gave reproducibility characteristics yielding sample sizes of many hundreds of patients. Reproducibility of TMPG and TTG was only moderately good both within and between core laboratories, underscoring an intrinsic difficulty in assessing these. Core laboratories can be shown to provide reproducibility performance that is comparable to performance commonly ascribed to individual technicians. The differences in reproducibility yield huge differences in sample size when comparing quantitative and qualitative parameters. TMPG and TTG are intrinsically difficult to assess and conclusions based on these parameters should arise only from very large trials.
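
    The link from reproducibility to trial size follows from the standard two-sample formula: required patients scale with the square of the measurement scatter. A sketch with hypothetical numbers (not the study's data):

```python
import math
from statistics import NormalDist

def n_per_arm(sd, delta, alpha=0.05, power=0.80):
    """Patients per arm to detect a difference `delta` in a continuous
    end-point whose scatter (including core-laboratory measurement
    variability) is `sd`, via the two-sample normal approximation."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)
    z_b = z.inv_cdf(power)
    return math.ceil(2 * (sd * (z_a + z_b) / delta) ** 2)

# Hypothetical numbers: a highly reproducible end-point (MLD-like,
# sd = 0.25) versus one with three times the scatter (percent-
# stenosis-like, sd = 0.75), powered for the same 0.2-unit difference.
n_precise = n_per_arm(sd=0.25, delta=0.2)
n_noisy = n_per_arm(sd=0.75, delta=0.2)
```

    Tripling the scatter multiplies the required sample roughly ninefold (25 versus 221 patients per arm here), which is why the less reproducible parameters above demand trials of many hundreds of patients.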

  8. Evaluation of portable point-of-care CD4 counter with high sensitivity for detecting patients eligible for antiretroviral therapy.

    Directory of Open Access Journals (Sweden)

    Yukari C Manabe

    Full Text Available BACKGROUND: Accurate, inexpensive point-of-care CD4+ T cell testing technologies are needed that can deliver CD4+ T cell results at lower level health centers or community outreach voluntary counseling and testing. We sought to evaluate a point-of-care CD4+ T cell counter, the Pima CD4 Test System, a portable, battery-operated bench-top instrument that is designed to use finger stick blood samples suitable for field use in conjunction with rapid HIV testing. METHODS: Duplicate measurements were performed on both capillary and venous samples using Pima CD4 analyzers, compared to the BD FACSCalibur (reference method). The mean bias was estimated by paired Student's t-test. Bland-Altman plots were used to assess agreement. RESULTS: 206 participants were enrolled with a median CD4 count of 396 (range: 18-1500). The finger stick PIMA had a mean bias of -66.3 cells/µL (95%CI -83.4, -49.2; P<0.001); the bias was greater at CD4 counts >500 cells/µL, with a mean bias of -120.6 (95%CI -162.8, -78.4; P<0.001). The sensitivity (95%CI) of the Pima CD4 analyzer was 96.3% (79.1-99.8%) for a <250 cells/µL cut-off with a negative predictive value of 99.2% (95.1-99.9%). CONCLUSIONS: The Pima CD4 finger stick test is an easy-to-use, portable, relatively fast device to test CD4+ T cell counts in the field. Issues of negatively-biased CD4 cell counts especially at higher absolute numbers will limit its utility for longitudinal immunologic response to ART. The high sensitivity and negative predictive value of the test makes it an attractive option for field use to identify patients eligible for ART, thus potentially reducing delays in linkage to care and ART initiation.
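
    The Bland-Altman agreement analysis used in this evaluation reduces to a mean of paired differences and 1.96 standard deviations around it. A minimal sketch on invented paired counts (not the study's data), shaped to show a negative bias like the one reported:

```python
from statistics import mean, stdev

def bland_altman(reference, test_method):
    """Mean bias and 95 % limits of agreement between paired methods."""
    diffs = [t - r for r, t in zip(reference, test_method)]
    bias = mean(diffs)
    half_width = 1.96 * stdev(diffs)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical paired CD4 counts (cells/uL), with the point-of-care
# device reading low and the shortfall growing at higher counts.
facs = [250, 400, 520, 610, 700, 820]
poc = [230, 360, 450, 520, 600, 700]
bias, (low, high) = bland_altman(facs, poc)
```

    On these invented pairs the mean bias is about -73 cells/µL with wide limits of agreement; plotting the differences against the pairwise means would also reveal the count-dependent bias the study describes.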

  9. Analysing the Zenith Tropospheric Delay Estimates in On-line Precise Point Positioning (PPP) Services and PPP Software Packages.

    Science.gov (United States)

    Mendez Astudillo, Jorge; Lau, Lawrence; Tang, Yu-Ting; Moore, Terry

    2018-02-14

    As Global Navigation Satellite System (GNSS) signals travel through the troposphere, a tropospheric delay occurs due to a change in the refractive index of the medium. The Precise Point Positioning (PPP) technique can achieve centimeter/millimeter positioning accuracy with only one GNSS receiver. The Zenith Tropospheric Delay (ZTD) is estimated alongside the position unknowns in PPP. Estimated ZTD can be very useful for meteorological applications; for example, the water vapor content in the atmosphere can be estimated from the ZTD. PPP is implemented with different algorithms and models in online services and software packages. In this study, a performance assessment with analysis of ZTD estimates from three PPP online services and three software packages is presented. The main contribution of this paper is to show the accuracy of ZTD estimation achievable in PPP. The analysis also provides the GNSS users and researchers the insight of the processing algorithm dependence and impact on PPP ZTD estimation. Observation data of eight whole days from a total of nine International GNSS Service (IGS) tracking stations spread in the northern hemisphere, the equatorial region and the southern hemisphere is used in this analysis. The PPP ZTD estimates are compared with the ZTD obtained from the IGS tropospheric product of the same days. The estimates of two of the three online PPP services show good agreement (<1 cm) with the IGS ZTD values at the northern and southern hemisphere stations. The results also show that the online PPP services perform better than the selected PPP software packages at all stations.

  10. Analysing the Zenith Tropospheric Delay Estimates in On-line Precise Point Positioning (PPP Services and PPP Software Packages

    Directory of Open Access Journals (Sweden)

    Jorge Mendez Astudillo

    2018-02-01

    Full Text Available As Global Navigation Satellite System (GNSS) signals travel through the troposphere, a tropospheric delay occurs due to a change in the refractive index of the medium. The Precise Point Positioning (PPP) technique can achieve centimeter/millimeter positioning accuracy with only one GNSS receiver. The Zenith Tropospheric Delay (ZTD) is estimated alongside the position unknowns in PPP. Estimated ZTD can be very useful for meteorological applications; for example, the water vapor content in the atmosphere can be estimated from the ZTD. PPP is implemented with different algorithms and models in online services and software packages. In this study, a performance assessment with analysis of ZTD estimates from three PPP online services and three software packages is presented. The main contribution of this paper is to show the accuracy of ZTD estimation achievable in PPP. The analysis also provides the GNSS users and researchers the insight of the processing algorithm dependence and impact on PPP ZTD estimation. Observation data of eight whole days from a total of nine International GNSS Service (IGS) tracking stations spread in the northern hemisphere, the equatorial region and the southern hemisphere is used in this analysis. The PPP ZTD estimates are compared with the ZTD obtained from the IGS tropospheric product of the same days. The estimates of two of the three online PPP services show good agreement (<1 cm) with the IGS ZTD values at the northern and southern hemisphere stations. The results also show that the online PPP services perform better than the selected PPP software packages at all stations.

  11. Modeling Acequia Irrigation Systems Using System Dynamics: Model Development, Evaluation, and Sensitivity Analyses to Investigate Effects of Socio-Economic and Biophysical Feedbacks

    Directory of Open Access Journals (Sweden)

    Benjamin L. Turner

    2016-10-01

    Full Text Available Agriculture-based irrigation communities of northern New Mexico have survived for centuries despite the arid environment in which they reside. These irrigation communities are threatened by regional population growth, urbanization, a changing demographic profile, economic development, climate change, and other factors. Within this context, we investigated the extent to which community resource management practices centering on shared resources (e.g., water for agriculture in the floodplains and grazing resources in the uplands) and mutualism (i.e., shared responsibility of local residents for maintaining traditional irrigation policies and upholding cultural and spiritual observances) embedded within the community structure influence acequia function. We used a system dynamics modeling approach as an interdisciplinary platform to integrate these systems, specifically the relationship between community structure and resource management. In this paper we describe the background and context of acequia communities in northern New Mexico and the challenges they face. We formulate a Dynamic Hypothesis capturing the endogenous feedbacks driving acequia community vitality. Development of the model centered on major stock-and-flow components, including linkages for hydrology, ecology, community, and economics. Calibration metrics were used for model evaluation, including statistical correlation of observed and predicted values and Theil inequality statistics. Results indicated that the model reproduced trends exhibited by the observed system. Sensitivity analyses of socio-cultural processes identified absentee decisions, cumulative income effect on time in agriculture, land use preference due to time allocation, community demographic effect, effect of employment on participation, and farm size effect as key determinants of system behavior and response. 
Sensitivity analyses of biophysical parameters revealed that several key parameters (e.g., acres per
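
    The Theil inequality statistics used for model evaluation above decompose a simulation's mean squared error into bias, unequal-variation and unequal-covariation proportions. A minimal sketch on invented observed/simulated series:

```python
import math

def theil_proportions(observed, predicted):
    """Theil inequality proportions used in system dynamics model
    evaluation: bias (Um), unequal variation (Us) and unequal
    covariation (Uc); they sum to 1 over the mean squared error."""
    n = len(observed)
    mo = sum(observed) / n
    mp = sum(predicted) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in observed) / n)
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted) / n)
    mse = sum((p - o) ** 2 for o, p in zip(observed, predicted)) / n
    r = sum((o - mo) * (p - mp)
            for o, p in zip(observed, predicted)) / (n * so * sp)
    um = (mp - mo) ** 2 / mse
    us = (sp - so) ** 2 / mse
    uc = 2.0 * (1.0 - r) * so * sp / mse
    return um, us, uc

# Hypothetical observed vs. simulated series: a pure constant offset,
# so the error should load entirely onto the bias proportion Um.
obs = [10.0, 12.0, 14.0, 16.0, 18.0]
sim = [11.0, 13.0, 15.0, 17.0, 19.0]
um, us, uc = theil_proportions(obs, sim)
```

    A large Um flags systematic offset, a large Us a mismatch in amplitude of variation, and a large Uc phase or noise error; model trends are judged acceptable when the error concentrates in Uc.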

  12. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Puget Sound and Strait of Juan de Fuca, Washington: SOCECON (Socioeconomic Resource Points and Lines)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains points that represent the following sensitive human-use socioeconomic sites in Puget Sound and the Strait of Juan de Fuca, Washington: access...

  13. Phenotypic and genetic analyses of the varroa sensitive hygienic trait in Russian honey bee (hymenoptera: apidae) colonies.

    Science.gov (United States)

    Kirrane, Maria J; de Guzman, Lilia I; Holloway, Beth; Frake, Amanda M; Rinderer, Thomas E; Whelan, Pádraig M

    2014-01-01

    Varroa destructor continues to threaten colonies of European honey bees. General hygiene, and more specific Varroa Sensitive Hygiene (VSH), provide resistance towards the Varroa mite in a number of stocks. In this study, 32 Russian (RHB) and 14 Italian honey bee colonies were assessed for the VSH trait using two different assays. Firstly, colonies were assessed using the standard VSH behavioural assay of the change in infestation of a highly infested donor comb after a one-week exposure. Secondly, the same colonies were assessed using an "actual brood removal assay" that measured the removal of brood in a section created within the donor combs as a potential alternative measure of hygiene towards Varroa-infested brood. All colonies were then analysed for the recently discovered VSH quantitative trait locus (QTL) to determine whether the genetic mechanisms were similar across different stocks. Based on the two assays, RHB colonies were consistently more hygienic toward Varroa-infested brood than Italian honey bee colonies. The actual number of brood cells removed in the defined section was negatively correlated with the Varroa infestations of the colonies (r2 = 0.25). Only two (percentages of brood removed and reproductive foundress Varroa) out of nine phenotypic parameters showed significant associations with genotype distributions. However, the allele associated with each parameter was the opposite of that determined by VSH mapping. In this study, RHB colonies showed high levels of hygienic behaviour towards Varroa-infested brood. The genetic mechanisms are similar to those of the VSH stock, though the opposite allele associates in RHB, indicating a stable recombination event before the selection of the VSH stock. The measurement of brood removal is a simple, reliable alternative method of measuring hygienic behaviour towards Varroa mites, at least in RHB stock.

  14. Phenotypic and genetic analyses of the varroa sensitive hygienic trait in Russian honey bee (Hymenoptera: Apidae) colonies.

    Directory of Open Access Journals (Sweden)

    Maria J Kirrane

    Full Text Available Varroa destructor continues to threaten colonies of European honey bees. General hygiene, and more specific Varroa Sensitive Hygiene (VSH), provide resistance towards the Varroa mite in a number of stocks. In this study, 32 Russian (RHB) and 14 Italian honey bee colonies were assessed for the VSH trait using two different assays. Firstly, colonies were assessed using the standard VSH behavioural assay of the change in infestation of a highly infested donor comb after a one-week exposure. Secondly, the same colonies were assessed using an "actual brood removal assay" that measured the removal of brood in a section created within the donor combs as a potential alternative measure of hygiene towards Varroa-infested brood. All colonies were then analysed for the recently discovered VSH quantitative trait locus (QTL) to determine whether the genetic mechanisms were similar across different stocks. Based on the two assays, RHB colonies were consistently more hygienic toward Varroa-infested brood than Italian honey bee colonies. The actual number of brood cells removed in the defined section was negatively correlated with the Varroa infestations of the colonies (r2 = 0.25). Only two (percentages of brood removed and reproductive foundress Varroa) out of nine phenotypic parameters showed significant associations with genotype distributions. However, the allele associated with each parameter was the opposite of that determined by VSH mapping. In this study, RHB colonies showed high levels of hygienic behaviour towards Varroa-infested brood. The genetic mechanisms are similar to those of the VSH stock, though the opposite allele associates in RHB, indicating a stable recombination event before the selection of the VSH stock. The measurement of brood removal is a simple, reliable alternative method of measuring hygienic behaviour towards Varroa mites, at least in RHB stock.

  15. Job Demands, Burnout, and Teamwork in Healthcare Professionals Working in a General Hospital that Was Analysed At Two Points in Time

    Science.gov (United States)

    Mijakoski, Dragan; Karadzhinska-Bislimovska, Jovanka; Stoleski, Sasho; Minov, Jordan; Atanasovska, Aneta; Bihorac, Elida

    2018-01-01

    AIM: The purpose of the paper was to assess job demands, burnout, and teamwork in healthcare professionals (HPs) working in a general hospital that was analysed at two points in time with a time lag of three years. METHODS: Time 1 respondents (N = 325) were HPs who participated during the first wave of data collection (2011). Time 2 respondents (N = 197) were HPs from the same hospital who responded at Time 2 (2014). Job demands, burnout, and teamwork were measured with Hospital Experience Scale, Maslach Burnout Inventory, and Hospital Survey on Patient Safety Culture, respectively. RESULTS: Significantly higher scores of emotional exhaustion (21.03 vs. 15.37, t = 5.1, p Teamwork levels were similar at both points in time (Time 1 = 3.84 vs. Time 2 = 3.84, t = 0.043, p = 0.97). CONCLUSION: Actual longitudinal study revealed significantly higher mean values of emotional exhaustion and depersonalization in 2014 that could be explained by significantly increased job demands between analysed points in time. PMID:29731948

  16. Job Demands, Burnout, and Teamwork in Healthcare Professionals Working in a General Hospital that Was Analysed At Two Points in Time.

    Science.gov (United States)

    Mijakoski, Dragan; Karadzhinska-Bislimovska, Jovanka; Stoleski, Sasho; Minov, Jordan; Atanasovska, Aneta; Bihorac, Elida

    2018-04-15

    The purpose of the paper was to assess job demands, burnout, and teamwork in healthcare professionals (HPs) working in a general hospital that was analysed at two points in time with a time lag of three years. Time 1 respondents (N = 325) were HPs who participated during the first wave of data collection (2011). Time 2 respondents (N = 197) were HPs from the same hospital who responded at Time 2 (2014). Job demands, burnout, and teamwork were measured with Hospital Experience Scale, Maslach Burnout Inventory, and Hospital Survey on Patient Safety Culture, respectively. Significantly higher scores of emotional exhaustion (21.03 vs. 15.37, t = 5.1, p job demands were found at Time 2. Teamwork levels were similar at both points in time (Time 1 = 3.84 vs. Time 2 = 3.84, t = 0.043, p = 0.97). Actual longitudinal study revealed significantly higher mean values of emotional exhaustion and depersonalization in 2014 that could be explained by significantly increased job demands between analysed points in time.

  17. Individual Test Point Fluctuations of Macular Sensitivity in Healthy Eyes and Eyes With Age-Related Macular Degeneration Measured With Microperimetry.

    Science.gov (United States)

    Barboni, Mirella Telles Salgueiro; Szepessy, Zsuzsanna; Ventura, Dora Fix; Németh, János

    2018-04-01

    To establish fluctuation limits, it was considered that not only overall macular sensitivity but also fluctuations of individual test points in the macula might have clinical value. Three repeated measurements of microperimetry were performed using the Standard Expert test of Macular Integrity Assessment (MAIA) in healthy subjects (N = 12, age = 23.8 ± 1.5 years old) and in patients with age-related macular degeneration (AMD) (N = 11, age = 68.5 ± 7.4 years old). A total of 37 macular points arranged in four concentric rings and in four quadrants were analyzed individually and in groups. The data show low fluctuation of macular sensitivity of individual test points in healthy subjects (average = 1.38 ± 0.28 dB) and AMD patients (average = 2.12 ± 0.60 dB). Lower sensitivity points are more related to higher fluctuation than to the distance from the central point. Fixation stability showed no effect on the sensitivity fluctuation. The 95th percentile of the standard deviations of healthy subjects was, on average, 2.7 dB, ranging from 1.2 to 4 dB, depending on the point tested. Point analysis and regional analysis might be considered prior to evaluating macular sensitivity fluctuation in order to distinguish between normal variation and a clinical change. Statistical methods were used to compare repeated microperimetry measurements and to establish fluctuation limits of the macular sensitivity. This analysis could add information regarding the integrity of different macular areas and provide new insights into fixation points prior to the biofeedback fixation training.
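
    The fluctuation-limit computation described above (per-point standard deviation across repeated tests, then a 95th-percentile normative limit) can be sketched as follows. The array shapes mirror the study design (12 subjects, 3 repeats, 37 points), but the sensitivity values are randomly generated placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 12 subjects x 3 repeated tests x 37 macular points (dB).
sens = rng.normal(loc=28.0, scale=1.4, size=(12, 3, 37))

# Per-point fluctuation: SD over the repeated measurements (axis 1).
point_sd = sens.std(axis=1, ddof=1)            # shape (12, 37)

# Average fluctuation across subjects and points (cf. 1.38 dB in healthy eyes).
mean_fluct = point_sd.mean()

# Normative limit: 95th percentile of the SDs, computed per test point.
limit_95 = np.percentile(point_sd, 95, axis=0)  # shape (37,)

# A follow-up change exceeding the limit at a point suggests a clinical
# change rather than normal test-retest variation.
print(limit_95.shape)  # (37,)
```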

  18. Influence of Immersion Conditions on The Tensile Strength of Recycled Kevlar®/Polyester/Low-Melting-Point Polyester Nonwoven Geotextiles through Applying Statistical Analyses

    Directory of Open Access Journals (Sweden)

    Jing-Chzi Hsieh

    2016-05-01

    Full Text Available The recycled Kevlar®/polyester/low-melting-point polyester (recycled Kevlar®/PET/LPET nonwoven geotextiles are immersed in neutral, strong acid, and strong alkali solutions, respectively, at different temperatures for four months. Their tensile strength is then tested according to various immersion periods at various temperatures, in order to determine their durability to chemicals. For the purpose of analyzing the possible factors that influence mechanical properties of geotextiles under diverse environmental conditions, the experimental results and statistical analyses are incorporated in this study. Therefore, influences of the content of recycled Kevlar® fibers, implementation of thermal treatment, and immersion periods on the tensile strength of recycled Kevlar®/PET/LPET nonwoven geotextiles are examined, after which their influential levels are statistically determined by performing multiple regression analyses. According to the results, the tensile strength of nonwoven geotextiles can be enhanced by adding recycled Kevlar® fibers and thermal treatment.
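
    The multiple regression step described above (tensile strength regressed on Kevlar® content, thermal treatment, and immersion period) can be sketched with ordinary least squares. All data below are fabricated placeholders chosen only to illustrate the fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
kevlar = rng.uniform(0, 40, n)      # recycled Kevlar content (%), invented range
thermal = rng.integers(0, 2, n)     # thermal treatment applied (0/1)
months = rng.integers(0, 5, n)      # immersion period (months)

# Fabricated response: strength rises with Kevlar content and treatment,
# falls slightly with immersion time, plus measurement noise.
strength = 50 + 0.8 * kevlar + 12 * thermal - 2.5 * months + rng.normal(0, 3, n)

# Ordinary least squares via the design matrix [1, kevlar, thermal, months].
X = np.column_stack([np.ones(n), kevlar, thermal, months])
coef, *_ = np.linalg.lstsq(X, strength, rcond=None)
# coef ~ [intercept, kevlar effect, thermal effect, immersion effect]
print(coef.shape)  # (4,)
```

The signs of the fitted coefficients then indicate each factor's influence, analogous to the influential levels the authors determine statistically.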

  19. Sensitivities and Tipping Points of Power System Operations to Fluctuations Caused by Water Availability and Fuel Prices

    Science.gov (United States)

    O'Connell, M.; Macknick, J.; Voisin, N.; Fu, T.

    2017-12-01

    The western US electric grid is highly dependent upon water resources for reliable operation. Hydropower and water-cooled thermoelectric technologies represent 67% of generating capacity in the western region of the US. While water resources provide a significant amount of generation and reliability for the grid, these same resources can represent vulnerabilities during times of drought or low flow conditions. A lack of water affects water-dependent technologies and can result in more expensive generators needing to run in order to meet electric grid demand, resulting in higher electricity prices and a higher cost to operate the grid. A companion study assesses the impact of changes in water availability and air temperatures on power operations by directly derating hydro and thermo-electric generators. In this study we assess the sensitivities and tipping points of water availability compared with higher fuel prices in electricity sector operations. We evaluate the impacts of varying electricity prices by modifying fuel prices for coal and natural gas. We then analyze the difference in simulation results between changes in fuel prices in combination with water availability and air temperature variability. We simulate three fuel price scenarios for a 2010 baseline scenario along with 100 historical and future hydro-climate conditions. We use the PLEXOS electricity production cost model to optimize power system dispatch and cost decisions under each combination of fuel price and water constraint. Some of the metrics evaluated are total production cost, generation type mix, emissions, transmission congestion, and reserve procurement. These metrics give insight to how strained the system is, how much flexibility it still has, and to what extent water resource availability or fuel prices drive changes in the electricity sector operations. This work will provide insights into current electricity operations as well as future cases of increased penetration of variable

  20. Relationship between line spread function (LSF), or slice sensitivity profile (SSP), and point spread function (PSF) in CT image system

    International Nuclear Information System (INIS)

    Ohkubo, Masaki; Wada, Shinichi; Kobayashi, Teiji; Lee, Yongbum; Tsai, Du-Yih

    2004-01-01

    In the CT image system, we revealed the relationship between the line spread function (LSF), or slice sensitivity profile (SSP), and the point spread function (PSF). For this system, the following equation has been reported: I(x,y) = O(x,y) ** PSF(x,y), in which I(x,y) and O(x,y) are the CT image and the object function, respectively, and ** denotes 2-dimensional convolution. In the same way, the following 3-dimensional expression applies: I'(x,y,z) = O'(x,y,z) *** PSF'(x,y,z), in which the z-axis is the direction perpendicular to the x/y-scan plane. We defined the CT image system as separable when the above two equations could be transformed into the following equations: I(x,y) = [O(x,y) * LSF_x(x)] * LSF_y(y) and I'(x,y,z) = [O'(x,y,z) * SSP(z)] ** PSF(x,y), respectively, in which LSF_x(x) and LSF_y(y) are the LSFs in the x- and y-directions. Previous reports on the LSF and SSP are considered to assume a separable system. Under the condition of a separable system, we derived the following equations: PSF(x,y) = LSF_x(x)·LSF_y(y) and PSF'(x,y,z) = PSF(x,y)·SSP(z). They were validated by computer simulations. When studies based on the 1-dimensional LSF and SSP are extended to the 2- or 3-dimensional PSF, these derived equations are required. (author)
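
    The separability relation PSF(x,y) = LSF_x(x)·LSF_y(y) can be checked numerically: for a separable system, one 2-D convolution with the PSF must equal sequential 1-D convolutions with the two LSFs. A toy sketch with Gaussian LSFs of invented widths (not the paper's simulation):

```python
import numpy as np
from scipy.signal import convolve, convolve2d

# Toy separable system: Gaussian 1-D LSFs with hypothetical widths.
x = np.linspace(-5, 5, 101)
lsf_x = np.exp(-x**2 / (2 * 0.8**2)); lsf_x /= lsf_x.sum()
lsf_y = np.exp(-x**2 / (2 * 1.2**2)); lsf_y /= lsf_y.sum()

# PSF(x, y) = LSF_x(x) * LSF_y(y): the 2-D PSF is an outer product.
psf = np.outer(lsf_y, lsf_x)

obj = np.zeros((101, 101)); obj[50, 50] = 1.0   # point object
# One 2-D convolution with the PSF...
img_2d = convolve2d(obj, psf, mode="same")
# ...equals row-wise convolution with LSF_x, then column-wise with LSF_y.
tmp = np.apply_along_axis(lambda row: convolve(row, lsf_x, mode="same"), 1, obj)
img_sep = np.apply_along_axis(lambda col: convolve(col, lsf_y, mode="same"), 0, tmp)
print(np.allclose(img_2d, img_sep))  # True
```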

  1. Sensitivity of landscape resistance estimates based on point selection functions to scale and behavioral state: Pumas as a case study

    Science.gov (United States)

    Katherine A. Zeller; Kevin McGarigal; Paul Beier; Samuel A. Cushman; T. Winston Vickers; Walter M. Boyce

    2014-01-01

    Estimating landscape resistance to animal movement is the foundation for connectivity modeling, and resource selection functions based on point data are commonly used to empirically estimate resistance. In this study, we used GPS data points acquired at 5-min intervals from radiocollared pumas in southern California to model context-dependent point selection...

  2. Global Sensitivity and Data-Worth Analyses in iTOUGH2: User's Guide

    Energy Technology Data Exchange (ETDEWEB)

    Wainwright, Haruko Murakami [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division; Univ. of California, Berkeley, CA (United States); Finsterle, Stefan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Earth Sciences Division; Univ. of California, Berkeley, CA (United States)

    2016-07-15

    This manual explains the use of local sensitivity analysis, the global Morris OAT and Sobol’ methods, and a related data-worth analysis as implemented in iTOUGH2. In addition to input specification and output formats, it includes some examples to show how to interpret results.
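
    The Morris OAT method mentioned above screens parameters by averaging "elementary effects" from one-at-a-time perturbations. A minimal sketch of that idea on an invented three-parameter toy model (a simplified radial OAT design, not iTOUGH2's full implementation):

```python
import numpy as np

def model(p):
    # Toy response: parameter 0 strong, parameter 1 weak, parameter 2 inert.
    return 10 * p[0] + 0.5 * p[1] ** 2 + 0 * p[2]

rng = np.random.default_rng(1)
k, r, delta = 3, 50, 0.1     # number of parameters, base points, step size

effects = np.zeros((r, k))
for i in range(r):
    base = rng.uniform(0, 1, k)
    y0 = model(base)
    for j in range(k):       # one-at-a-time perturbation of parameter j
        pert = base.copy()
        pert[j] += delta
        effects[i, j] = (model(pert) - y0) / delta

# Morris mu*: mean absolute elementary effect, used to rank parameters.
mu_star = np.abs(effects).mean(axis=0)
print(mu_star.argmax())  # 0 -> parameter 0 dominates the response
```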

  3. Greenhouse gas network design using backward Lagrangian particle dispersion modelling – Part 2: Sensitivity analyses and South African test case

    CSIR Research Space (South Africa)

    Nickless, A

    2014-05-01

    Full Text Available observation of atmospheric CO2 concentrations at fixed monitoring stations. The LPDM model, which can be used to derive the sensitivity matrix used in an inversion, was run for each potential site for the months of July (representative of the Southern...

  4. Temperature-referenced high-sensitivity point-probe optical fiber chem-sensors based on cladding etched fiber Bragg gratings

    OpenAIRE

    Zhou, Kaiming; Chen, Xianfeng F.; Zhang, Lin; Bennion, Ian

    2004-01-01

    Point-probe optical fiber chem-sensors have been implemented using cladding etched fiber Bragg gratings. The sensors possess refractive index sensing capability that can be utilized to measure chemical concentrations. The Bragg wavelength shift reaches 8 nm when the index of surrounding medium changes from 1.33 to 1.44, giving maximum sensitivity more than 10 times higher than that of previously reported devices. More importantly, the dual-grating configuration of the point-probe sensors offe...

  5. Long-term gas and brine migration at the Waste Isolation Pilot Plant: Preliminary sensitivity analyses for post-closure 40 CFR 268 (RCRA), May 1992

    International Nuclear Information System (INIS)

    1992-12-01

    This report describes preliminary probabilistic sensitivity analyses of long term gas and brine migration at the Waste Isolation Pilot Plant (WIPP). Because gas and brine are potential transport media for organic compounds and heavy metals, understanding two-phase flow in the repository and the surrounding Salado Formation is essential to evaluating long-term compliance with 40 CFR 268.6, which is the portion of the Land Disposal Restrictions of the Hazardous and Solid Waste Amendments to the Resource Conservation and Recovery Act that states the conditions for disposal of specified hazardous wastes. Calculations described here are designed to provide guidance to the WIPP Project by identifying important parameters and helping to recognize processes not yet modeled that may affect compliance. Based on these analyses, performance is sensitive to shaft-seal permeabilities, parameters affecting gas generation, and the conceptual model used for the disturbed rock zone surrounding the excavation. Brine migration is less likely to affect compliance with 40 CFR 268.6 than gas migration. However, results are preliminary, and additional iterations of uncertainty and sensitivity analyses will be required to provide the confidence needed for a defensible compliance evaluation. Specifically, subsequent analyses will explicitly include effects of salt creep and, when conceptual and computational models are available, pressure-dependent fracturing of anhydrite marker beds

  6. Analysis of a defect in the ME-27 cap-tube machine for nuclear fuel elements from an electrical point of view

    International Nuclear Information System (INIS)

    Achmad Suntoro

    2009-01-01

    A defect in the ME-27 machine, which caps the tubes of nuclear fuel elements, was analysed from an electrical point of view. The machine uses a magnetic-force resistance welding technique. A short circuit occurred within the machine because the nut tightening the high-voltage cable of the welding transformer broke, allowing the cable to touch the machine body. The short circuit tripped the primary circuit breaker in the building and induced a high-voltage pulse in the electronic circuitry of the machine, damaging one of its electronic components. This case is a warning about the importance of tightening nuts to their specified torque (using a torque wrench) and about the need to install voltage-transient limiting circuits. Both precautions are necessary for any equipment drawing high currents, such as the ME-27 machine. (author)

  7. Sensitivity of LDEF foil analyses using ultra-low background germanium vs. large NaI(Tl) multidimensional spectrometers

    International Nuclear Information System (INIS)

    Reeves, J.H.; Arthur, R.J.; Brodzinski, R.L.

    1992-06-01

    Cobalt foils and stainless steel samples were analyzed for induced 60Co activity with both an ultra-low background germanium gamma-ray spectrometer and with a large NaI(Tl) multidimensional spectrometer, both of which use electronic anticoincidence shielding to reduce background counts resulting from cosmic rays. Aluminum samples were analyzed for 22Na. The results, in addition to the relative sensitivities and precisions afforded by the two methods, are presented

  8. Phenotypic and genetic analyses of the varroa sensitive hygienic trait in Russian honey bee (Hymenoptera: Apidae) colonies

    OpenAIRE

    Kirrane, Maria J.; de Guzman, Lilia I.; Holloway, Beth; Frake, Amanda M.; Rinderer, Thomas E.; Whelan, Padraig M.

    2015-01-01

    Varroa destructor continues to threaten colonies of European honey bees. General hygiene, and more specific Varroa Sensitive Hygiene (VSH), provide resistance towards the Varroa mite in a number of stocks. In this study, 32 Russian (RHB) and 14 Italian honey bee colonies were assessed for the VSH trait using two different assays. Firstly, colonies were assessed using the standard VSH behavioural assay of the change in infestation of a highly infested donor comb after a one-week exposure. Secon...

  9. Evaluating the effect of sample type on American alligator (Alligator mississippiensis) analyte values in a point-of-care blood analyser.

    Science.gov (United States)

    Hamilton, Matthew T; Finger, John W; Winzeler, Megan E; Tuberville, Tracey D

    2016-01-01

    The assessment of wildlife health has been enhanced by the ability of point-of-care (POC) blood analysers to provide biochemical analyses of non-domesticated animals in the field. However, environmental limitations (e.g. temperature, atmospheric humidity and rain) and lack of reference values may inhibit researchers from using such a device with certain wildlife species. Evaluating the use of alternative sample types, such as plasma, in a POC device may afford researchers the opportunity to delay sample analysis and the ability to use banked samples. In this study, we examined fresh whole blood, fresh plasma and frozen plasma (sample type) pH, partial pressure of carbon dioxide (PCO2), bicarbonate (HCO3-), total carbon dioxide (TCO2), base excess (BE), partial pressure of oxygen (PO2), oxygen saturation (sO2) and lactate concentrations in 23 juvenile American alligators (Alligator mississippiensis) using an i-STAT CG4+ cartridge. Our results indicate that sample type had no effect on lactate concentration values (F2,65 = 0.37, P = 0.963), suggesting that the i-STAT analyser can be used reliably to quantify lactate concentrations in fresh and frozen plasma samples. In contrast, the other seven blood parameters measured by the CG4+ cartridge were significantly affected by sample type. Lastly, we were able to collect blood samples from all alligators within 2 min of capture to establish preliminary reference ranges for juvenile alligators based on values obtained using fresh whole blood.
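
    The sample-type comparison above is a one-way F test across three groups (fresh whole blood, fresh plasma, frozen plasma). A hedged sketch with SciPy using fabricated lactate values; identical groups give F ≈ 0 (no sample-type effect, as the study found for lactate), while a shifted group gives a large F and small p:

```python
import numpy as np
from scipy.stats import f_oneway

# Fabricated lactate values (mmol/L); the three "sample types" are identical.
base = np.array([5.2, 6.1, 5.8, 6.4, 5.9, 6.0])
f_same, p_same = f_oneway(base, base, base)

# A systematically shifted group -> nonzero F, small p.
f_diff, p_diff = f_oneway(base, base + 3.0, base)
print(f_same < 1e-6, p_diff < 0.01)  # True True
```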

  10. Sensitivity and uncertainty analyses applied to one-dimensional radionuclide transport in a layered fractured rock: MULTFRAC --Analytic solutions and local sensitivities

    International Nuclear Information System (INIS)

    Gureghian, A.B.; Wu, Y.T.; Sagar, B.

    1992-12-01

    Exact analytical solutions based on the Laplace transforms are derived for describing the one-dimensional space-time-dependent, advective transport of a decaying species in a layered, saturated rock system intersected by a planar fracture of varying aperture. These solutions, which account for advection in fracture, molecular diffusion into the rock matrix, adsorption in both fracture and matrix, and radioactive decay, predict the concentrations in both fracture and rock matrix and the cumulative mass in the fracture. The solute migration domain in both fracture and rock is assumed to be semi-infinite with non-zero initial conditions. The concentration of each nuclide at the source is allowed to decay either continuously or according to some periodical fluctuations where both are subjected to either a step or band release mode. Two numerical examples related to the transport of Np-237 and Cm-245 in a five-layered system of fractured rock were used to verify these solutions with several well established evaluation methods of Laplace inversion integrals in the real and complex domain. In addition, with respect to the model parameters, a comparison of the analytically derived local sensitivities for the concentration and cumulative mass of Np-237 in the fracture with the ones obtained through a finite-difference method of approximation is also reported

  11. Analyses of single nucleotide polymorphisms in selected nutrient-sensitive genes in weight-regain prevention: the DIOGENES study.

    Science.gov (United States)

    Larsen, Lesli H; Angquist, Lars; Vimaleswaran, Karani S; Hager, Jörg; Viguerie, Nathalie; Loos, Ruth J F; Handjieva-Darlenska, Teodora; Jebb, Susan A; Kunesova, Marie; Larsen, Thomas M; Martinez, J Alfredo; Papadaki, Angeliki; Pfeiffer, Andreas F H; van Baak, Marleen A; Sørensen, Thorkild Ia; Holst, Claus; Langin, Dominique; Astrup, Arne; Saris, Wim H M

    2012-05-01

    Differences in the interindividual response to dietary intervention could be modified by genetic variation in nutrient-sensitive genes. This study examined single nucleotide polymorphisms (SNPs) in presumed nutrient-sensitive candidate genes for obesity and obesity-related diseases for main and dietary interaction effects on weight, waist circumference, and fat mass regain over 6 mo. In total, 742 participants who had lost ≥ 8% of their initial body weight were randomly assigned to follow 1 of 5 different ad libitum diets with different glycemic indexes and contents of dietary protein. The SNP main and SNP-diet interaction effects were analyzed by using linear regression models, corrected for multiple testing by using Bonferroni correction and evaluated by using quantile-quantile (Q-Q) plots. After correction for multiple testing, none of the SNPs were significantly associated with weight, waist circumference, or fat mass regain. Q-Q plots showed that ALOX5AP rs4769873 had a higher observed than predicted P value for the association with less waist circumference regain over 6 mo (-3.1 cm/allele; 95% CI: -4.6, -1.6; P/Bonferroni-corrected P = 0.000039/0.076), independently of diet. Additional associations were identified by using Q-Q plots for SNPs in ALOX5AP, TNF, and KCNJ11 for main effects; in LPL and TUB for glycemic index interaction effects on waist circumference regain; in GHRL, CCK, MLXIPL, and LEPR on weight; in PPARC1A, PCK2, ALOX5AP, PYY, and ADRB3 on waist circumference; and in PPARD, FABP1, PLAUR, and LPIN1 on fat mass regain for dietary protein interaction. The observed effects of SNP-diet interactions on weight, waist, and fat mass regain suggest that genetic variation in nutrient-sensitive genes can modify the response to diet. This trial was registered at clinicaltrials.gov as NCT00390637.
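
    The Bonferroni correction used above simply scales each raw p value by the number of tests (capped at 1). A toy sketch; the list of p values is fabricated, and the test count of 1950 is an inference from the reported pair 0.000039 → 0.076, not a figure stated in the abstract:

```python
# Bonferroni: multiply each raw p value by the number of tests m
# (capped at 1.0) and compare against the nominal alpha.
raw_p = [0.000039, 0.003, 0.02, 0.76]   # illustrative values only
m = 1950                                 # assumed number of SNP tests
adj_p = [min(p * m, 1.0) for p in raw_p]
significant = [p < 0.05 for p in adj_p]
print(round(adj_p[0], 5))  # 0.07605 -> no longer significant at alpha = 0.05
```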

  12. Stochastic methods for the quantification of sensitivities and uncertainties in criticality analyses; Stochastische Methoden zur Quantifizierung von Sensitivitaeten und Unsicherheiten in Kritikalitaetsanalysen

    Energy Technology Data Exchange (ETDEWEB)

    Behler, Matthias; Bock, Matthias; Stuke, Maik; Wagner, Markus

    2014-06-15

    This work describes statistical analyses based on Monte Carlo sampling methods for criticality safety analyses. The methods analyse a large number of calculations of a given problem with statistically varied model parameters to determine uncertainties and sensitivities of the computed results. The GRS development SUnCISTT (Sensitivities and Uncertainties in Criticality Inventory and Source Term Tool) is a modular, easily extensible abstract interface program, designed to perform such Monte Carlo sampling based uncertainty and sensitivity analyses in the field of criticality safety. It couples different criticality and depletion codes commonly used in nuclear criticality safety assessments to the well-established GRS tool SUSA for sensitivity and uncertainty analyses. For uncertainty analyses of criticality calculations, SUnCISTT couples various SCALE sequences developed at Oak Ridge National Laboratory and the general Monte Carlo N-particle transport code MCNP from Los Alamos National Laboratory to SUSA. The impact of manufacturing tolerances of a fuel assembly configuration on the neutron multiplication factor for the various sequences is shown. Uncertainties in nuclear inventories, dose rates, or decay heat can be investigated via the coupling of the GRS depletion system OREST to SUSA. Some results for a simplified irradiated Pressurized Water Reactor (PWR) UO2 fuel assembly are shown. SUnCISTT also combines the two aforementioned modules for burnup credit criticality analysis of spent nuclear fuel to ensure an uncertainty and sensitivity analysis using the variations of manufacturing tolerances in the burn-up code and criticality code simultaneously. Calculations and results for a storage cask loaded with typical irradiated PWR UO2 fuel are shown, including Monte Carlo sampled axial burn-up profiles. The application of SUnCISTT in the field of code validation, specifically, how it is applied to compare a simulation model to available benchmark

  13. ESI-VI3 East Point, St. Croix, U.S. Virgin Islands 2000 (Environmental Sensitivity Index Map)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Environmental Sensitivity Index (ESI) maps are an integral component in oil-spill contingency planning and assessment. They serve as a source of information in the...

  14. High-resolution linkage analyses to identify genes that influence Varroa sensitive hygiene behavior in honey bees.

    Science.gov (United States)

    Tsuruda, Jennifer M; Harris, Jeffrey W; Bourgeois, Lanie; Danka, Robert G; Hunt, Greg J

    2012-01-01

    Varroa mites (V. destructor) are a major threat to honey bees (Apis mellifera) and beekeeping worldwide and likely lead to colony decline if colonies are not treated. Most treatments involve chemical control of the mites; however, Varroa has evolved resistance to many of these miticides, leaving beekeepers with a limited number of alternatives. A non-chemical control method is highly desirable for numerous reasons including lack of chemical residues and decreased likelihood of resistance. Varroa sensitive hygiene behavior is one of two behaviors identified that are most important for controlling the growth of Varroa populations in bee hives. To identify genes influencing this trait, a study was conducted to map quantitative trait loci (QTL). Individual workers of a backcross family were observed and evaluated for their VSH behavior in a mite-infested observation hive. Bees that uncapped or removed pupae were identified. The genotypes for 1,340 informative single nucleotide polymorphisms were used to construct a high-resolution genetic map and interval mapping was used to analyze the association of the genotypes with the performance of Varroa sensitive hygiene. We identified one major QTL on chromosome 9 (LOD score = 3.21) and a suggestive QTL on chromosome 1 (LOD = 1.95). The QTL confidence interval on chromosome 9 contains the gene 'no receptor potential A' and a dopamine receptor. 'No receptor potential A' is involved in vision and olfaction in Drosophila, and dopamine signaling has been previously shown to be required for aversive olfactory learning in honey bees, which is probably necessary for identifying mites within brood cells. Further studies on these candidate genes may allow for breeding bees with this trait using marker-assisted selection.

  15. High-resolution linkage analyses to identify genes that influence Varroa sensitive hygiene behavior in honey bees.

    Directory of Open Access Journals (Sweden)

    Jennifer M Tsuruda

    Full Text Available Varroa mites (V. destructor) are a major threat to honey bees (Apis mellifera) and beekeeping worldwide and likely lead to colony decline if colonies are not treated. Most treatments involve chemical control of the mites; however, Varroa has evolved resistance to many of these miticides, leaving beekeepers with a limited number of alternatives. A non-chemical control method is highly desirable for numerous reasons including lack of chemical residues and decreased likelihood of resistance. Varroa sensitive hygiene behavior is one of two behaviors identified that are most important for controlling the growth of Varroa populations in bee hives. To identify genes influencing this trait, a study was conducted to map quantitative trait loci (QTL). Individual workers of a backcross family were observed and evaluated for their VSH behavior in a mite-infested observation hive. Bees that uncapped or removed pupae were identified. The genotypes for 1,340 informative single nucleotide polymorphisms were used to construct a high-resolution genetic map and interval mapping was used to analyze the association of the genotypes with the performance of Varroa sensitive hygiene. We identified one major QTL on chromosome 9 (LOD score = 3.21) and a suggestive QTL on chromosome 1 (LOD = 1.95). The QTL confidence interval on chromosome 9 contains the gene 'no receptor potential A' and a dopamine receptor. 'No receptor potential A' is involved in vision and olfaction in Drosophila, and dopamine signaling has been previously shown to be required for aversive olfactory learning in honey bees, which is probably necessary for identifying mites within brood cells. Further studies on these candidate genes may allow for breeding bees with this trait using marker-assisted selection.

  16. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Florida Panhandle: SOCECON (Socioeconomic Resource Points and Lines)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data (e.g., abandoned vessels, access points, airports, aquaculture sites, archaeological sites, artificial reefs, beaches,...

  17. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: South Florida: SOCECON (Socioeconomic Resource Points and Lines)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data for abandoned vessels, access points, airports, aquaculture sites, beaches, boat ramps, coast guard stations, ferries,...

  18. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Upper Coast of Texas: SOCECON (Socioeconomic Resource Points and Lines)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data for access points, aquaculture sites, airports, artificial reefs, boat ramps, coast guard stations, heliports,...

  19. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: North Carolina: SOCECON (Socioeconomic Resource Points and Lines)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource data for abandoned vessels, access points, airports, archaeological sites, artificial reefs, beaches, boat ramps,...

  20. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Southern California: SOCECON (Socioeconomic Resource Points and Lines)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains human-use resource point data for access sites, airports, aquaculture sites, beaches, boat ramps, marinas, coast guard facilities, oil...

  1. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Northwest Arctic, Alaska: SOCECON (Socioeconomic Resource Points and Lines)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains vector points and lines representing human-use resource data for airports, marinas, and mining sites in Northwest Arctic, Alaska....

  2. High-sensitivity detection of cardiac troponin I with UV LED excitation for use in point-of-care immunoassay

    DEFF Research Database (Denmark)

    Rodenko, Olga; Eriksson, Susann; Tidemand-Lichtenberg, Peter

    2017-01-01

    …of an immunoassay analyzer employing optimized LED excitation to measure a standard troponin I assay and a novel research high-sensitivity troponin I assay. The limit of detection is improved by a factor of 5 for the standard troponin I assay and by a factor of 3 for the research high-sensitivity troponin I assay, compared to flash lamp excitation. The obtained limit of detection was 0.22 ng/L measured in plasma with the research high-sensitivity troponin I assay and 1.9 ng/L measured in tris-saline-azide buffer containing bovine serum albumin with the standard troponin I assay. We discuss the optimization of time…

  3. Large-scale analyses of synonymous substitution rates can be sensitive to assumptions about the process of mutation.

    Science.gov (United States)

    Aris-Brosou, Stéphane; Bielawski, Joseph P

    2006-08-15

    A popular approach to examine the roles of mutation and selection in the evolution of genomes has been to consider the relationship between codon bias and synonymous rates of molecular evolution. A significant relationship between these two quantities is taken to indicate the action of weak selection on substitutions among synonymous codons. The neutral theory predicts that the rate of evolution is inversely related to the level of functional constraint. Therefore, selection against the use of non-preferred codons among those coding for the same amino acid should result in lower rates of synonymous substitution as compared with sites not subject to such selection pressures. However, reliably measuring the extent of such a relationship is problematic, as estimates of synonymous rates are sensitive to our assumptions about the process of molecular evolution. Previous studies showed the importance of accounting for unequal codon frequencies, in particular when synonymous codon usage is highly biased. Yet, unequal codon frequencies can be modeled in different ways, making different assumptions about the mutation process. Here we conduct a simulation study to evaluate two different ways of modeling uneven codon frequencies and show that both model parameterizations can have a dramatic impact on rate estimates and affect biological conclusions about genome evolution. We reanalyze three large data sets to demonstrate the relevance of our results to empirical data analysis.

  4. Quantitative analyses reveal distinct sensitivities of the capture of HIV-1 primary viruses and pseudoviruses to broadly neutralizing antibodies.

    Science.gov (United States)

    Kim, Jiae; Jobe, Ousman; Peachman, Kristina K; Michael, Nelson L; Robb, Merlin L; Rao, Mangala; Rao, Venigalla B

    2017-08-01

    Development of vaccines capable of eliciting broadly neutralizing antibodies (bNAbs) is a key goal to controlling the global AIDS epidemic. To be effective, bNAbs must block the capture of HIV-1 to prevent viral acquisition and establishment of reservoirs. However, the role of bNAbs, particularly during initial exposure of primary viruses to host cells, has not been fully examined. Using a sensitive, quantitative, and high-throughput qRT-PCR assay, we found that primary viruses were captured by host cells and converted into a trypsin-resistant form in less than five minutes. We discovered, unexpectedly, that bNAbs did not block primary virus capture, although they inhibited the capture of pseudoviruses/IMCs and production of progeny viruses at 48h. Further, viruses escaped bNAb inhibition unless the bNAbs were present in the initial minutes of exposure of virus to host cells. These findings will have important implications for HIV-1 vaccine design and determination of vaccine efficacy. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 5, Uncertainty and sensitivity analyses of gas and brine migration for undisturbed performance

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.
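Sensitivity rankings like those above are typically obtained from sampling-based Monte Carlo analyses by rank-correlating each sampled input with the output of interest. A minimal pure-Python sketch on a toy two-parameter model (the parameter names and the model are hypothetical illustrations, not the WIPP codes):

```python
import random

def ranks(xs):
    """Rank positions of xs (0 = smallest); ties are negligible for
    continuous random samples."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    m = (n - 1) / 2.0                     # mean rank
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var = sum((a - m) ** 2 for a in rx)   # rank variance (same for ry, no ties)
    return cov / var

random.seed(0)
n = 500
# hypothetical sampled inputs: 'permeability' dominates the toy output,
# 'gas-generation rate' barely matters
perm = [random.lognormvariate(0.0, 1.0) for _ in range(n)]
rate = [random.uniform(0.5, 1.5) for _ in range(n)]
migration = [p + 0.1 * r + random.gauss(0.0, 0.1)
             for p, r in zip(perm, rate)]

s_perm = spearman(perm, migration)   # near 1: dominant parameter
s_rate = spearman(rate, migration)   # near 0: unimportant parameter
```

Ranking inputs by the magnitude of such coefficients is what identifies "most important parameters" in a sampling-based assessment.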

  6. Sensitivity and uncertainty analyses applied to one-dimensional radionuclide transport in a layered fractured rock: Evaluation of the Limit State approach, Iterative Performance Assessment, Phase 2

    International Nuclear Information System (INIS)

    Wu, Y.T.; Gureghian, A.B.; Sagar, B.; Codell, R.B.

    1992-12-01

    The Limit State approach is based on partitioning the parameter space into two parts: one in which the performance measure is smaller than a chosen value (called the limit state), and the other in which it is larger. Through a Taylor expansion at a suitable point, the partitioning surface (called the limit state surface) is approximated as either a linear or quadratic function. The success and efficiency of the limit state method depends upon choosing an optimum point for the Taylor expansion. The point in the parameter space that has the highest probability of producing the value chosen as the limit state is optimal for expansion. When the parameter space is transformed into a standard Gaussian space, the optimal expansion point, known as the Most Probable Point (MPP), has the property that its location on the Limit State surface is closest to the origin. Additionally, the projections onto the parameter axes of the vector from the origin to the MPP are the sensitivity coefficients. Once the MPP is determined and the Limit State surface approximated, formulas (see Equations 4-7 and 4-8) are available for determining the probability of the performance measure being less than the limit state. By choosing a succession of limit states, the entire cumulative distribution of the performance measure can be determined. Methods for determining the MPP and also for improving the estimate of the probability are discussed in this report.
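For the simplest case, a linear limit state in standard Gaussian space, the MPP, the reliability index, the sensitivity coefficients, and the exceedance probability described above all have closed forms. A small sketch with hypothetical coefficients (not the report's implementation):

```python
import math

def mpp_linear(a, b):
    """Closed-form solution for a linear limit state
    g(u) = b - sum(a_i * u_i) in standard Gaussian space.

    Returns the Most Probable Point (closest point on g(u) = 0 to the
    origin), the reliability index beta = |MPP|, the sensitivity
    coefficients (projections of the origin-to-MPP unit vector onto the
    parameter axes), and P(g < 0) = Phi(-beta)."""
    norm = math.sqrt(sum(ai * ai for ai in a))
    beta = b / norm                          # distance from origin to surface
    u_star = [beta * ai / norm for ai in a]  # the MPP itself
    alpha = [ai / norm for ai in a]          # sensitivity coefficients
    p_exceed = 0.5 * math.erfc(beta / math.sqrt(2))  # standard normal Phi(-beta)
    return u_star, beta, alpha, p_exceed

# hypothetical two-parameter limit state: g(u) = 5 - 3*u1 - 4*u2
u_star, beta, alpha, p = mpp_linear([3.0, 4.0], 5.0)
# beta = 1.0, MPP = (0.6, 0.8), P(exceedance) = Phi(-1) ≈ 0.159
```

Sweeping the limit state value b and recomputing p at each value traces out the cumulative distribution of the performance measure, as the abstract describes.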

  7. Updated model for radionuclide transport in the near-surface till at Forsmark - Implementation of decay chains and sensitivity analyses

    International Nuclear Information System (INIS)

    Pique, Angels; Pekala, Marek; Molinero, Jorge; Duro, Lara; Trinchero, Paolo; Vries, Luis Manuel de

    2013-02-01

    The Forsmark area has been proposed for potential siting of a deep underground (geological) repository for radioactive waste in Sweden. Safety assessment of the repository requires radionuclide transport from the disposal depth to recipients at the surface to be studied quantitatively. The near-surface quaternary deposits at Forsmark are considered a pathway for potential discharge of radioactivity from the underground facility to the biosphere, thus radionuclide transport in this system has been extensively investigated over the last years. The most recent work of Pique and co-workers (reported in SKB report R-10-30) demonstrated that in case of release of radioactivity the near-surface sedimentary system at Forsmark would act as an important geochemical barrier, retarding the transport of reactive radionuclides through a combination of retention processes. In this report the conceptual model of radionuclide transport in the quaternary till at Forsmark has been updated, by considering recent revisions regarding the near-surface lithology. In addition, the impact of important conceptual assumptions made in the model has been evaluated through a series of deterministic and probabilistic (Monte Carlo) sensitivity calculations. The sensitivity study focused on the following effects: 1. Radioactive decay of 135 Cs, 59 Ni, 230 Th and 226 Ra and effects on their transport. 2. Variability in key geochemical parameters, such as the composition of the deep groundwater, availability of sorbing materials in the till, and mineral equilibria. 3. Variability in hydraulic parameters, such as the definition of hydraulic boundaries, and values of hydraulic conductivity, dispersivity and the deep groundwater inflow rate. The overarching conclusion from this study is that the current implementation of the model is robust (the model is largely insensitive to variations in the parameters within the studied ranges) and conservative (the Base Case calculations have a tendency to

  8. Two Model-Based Methods for Policy Analyses of Fine Particulate Matter Control in China: Source Apportionment and Source Sensitivity

    Science.gov (United States)

    Li, X.; Zhang, Y.; Zheng, B.; Zhang, Q.; He, K.

    2013-12-01

    Anthropogenic emissions have been controlled in recent years in China to mitigate fine particulate matter (PM2.5) pollution. Recent studies show that sulfur dioxide (SO2)-only control cannot reduce total PM2.5 levels efficiently. Other species such as nitrogen oxides, ammonia, black carbon, and organic carbon may be equally important during particular seasons. Furthermore, each species is emitted from several anthropogenic sectors (e.g., industry, power plant, transportation, residential and agriculture). On the other hand, the contribution of one emission sector to PM2.5 represents the contributions of all species in this sector. In this work, two model-based methods are used to identify the emission sectors and areas most influential to PM2.5. The first method is source apportionment (SA) based on the Particulate Source Apportionment Technology (PSAT) available in the Comprehensive Air Quality Model with extensions (CAMx) driven by meteorological predictions of the Weather Research and Forecast (WRF) model. The second method is source sensitivity (SS) based on an adjoint integration technique (AIT) available in the GEOS-Chem model. The SA method attributes simulated PM2.5 concentrations to each emission group, while the SS method calculates their sensitivity to each emission group, accounting for the non-linear relationship between PM2.5 and its precursors. Despite their differences, the complementary nature of the two methods enables a complete analysis of source-receptor relationships to support emission control policies. Our objectives are to quantify the contributions of each emission group/area to PM2.5 in the receptor areas and to intercompare results from the two methods to gain a comprehensive understanding of the role of emission sources in PM2.5 formation. The results will be compared in terms of the magnitudes and rankings of SS or SA of emitted species and emission groups/areas. GEOS-Chem with AIT is applied over East Asia at a horizontal grid

  9. Updated model for radionuclide transport in the near-surface till at Forsmark - Implementation of decay chains and sensitivity analyses

    Energy Technology Data Exchange (ETDEWEB)

    Pique, Angels; Pekala, Marek; Molinero, Jorge; Duro, Lara; Trinchero, Paolo; Vries, Luis Manuel de [Amphos 21 Consulting S.L., Barcelona (Spain)

    2013-02-15

    The Forsmark area has been proposed for potential siting of a deep underground (geological) repository for radioactive waste in Sweden. Safety assessment of the repository requires radionuclide transport from the disposal depth to recipients at the surface to be studied quantitatively. The near-surface quaternary deposits at Forsmark are considered a pathway for potential discharge of radioactivity from the underground facility to the biosphere, thus radionuclide transport in this system has been extensively investigated over the last years. The most recent work of Pique and co-workers (reported in SKB report R-10-30) demonstrated that in case of release of radioactivity the near-surface sedimentary system at Forsmark would act as an important geochemical barrier, retarding the transport of reactive radionuclides through a combination of retention processes. In this report the conceptual model of radionuclide transport in the quaternary till at Forsmark has been updated, by considering recent revisions regarding the near-surface lithology. In addition, the impact of important conceptual assumptions made in the model has been evaluated through a series of deterministic and probabilistic (Monte Carlo) sensitivity calculations. The sensitivity study focused on the following effects: 1. Radioactive decay of {sup 135}Cs, {sup 59}Ni, {sup 230}Th and {sup 226}Ra and effects on their transport. 2. Variability in key geochemical parameters, such as the composition of the deep groundwater, availability of sorbing materials in the till, and mineral equilibria. 3. Variability in hydraulic parameters, such as the definition of hydraulic boundaries, and values of hydraulic conductivity, dispersivity and the deep groundwater inflow rate. 
The overarching conclusion from this study is that the current implementation of the model is robust (the model is largely insensitive to variations in the parameters within the studied ranges) and conservative (the Base Case calculations have a

  10. Cloud point extraction-fluorimetric combined methodology for the determination of trace warfarin based on the sensitization effect of supramolecule

    Energy Technology Data Exchange (ETDEWEB)

    Chang Zheng [Department of Applied Chemistry of College of Science, Xi'an University of Technology, Xi'an 710048 (China); College of Chemistry and Materials Science, Northwest University, 229 North Taibai Road, Xi'an 710069 (China)]; Yan Hongtao, E-mail: cz610@163.com [College of Chemistry and Materials Science, Northwest University, 229 North Taibai Road, Xi'an 710069 (China)]

    2012-03-15

    Compared to the fluorescence spectra of warfarin in pure ethanol and in the presence of the nonionic surfactant Tergitol 15-S-7 after cloud point extraction (CPE), the fluorescence emission peak underwent an obvious red shift and the fluorescence intensity of warfarin was significantly increased in the presence of Tergitol 15-S-7. In order to confirm Tergitol 15-S-7-induced supramolecular effects, the fluorescence quantum yields of warfarin in the micellar medium and in pure ethanol were investigated. The experimental results showed that the supramolecular interactions between Tergitol 15-S-7 and the warfarin excimers played a key role in improving the fluorescence properties of warfarin. Based on these facts, a simple fluorimetric method combined with CPE for the determination of trace warfarin was developed for the first time. Under optimized experimental conditions, the linear concentration range for warfarin was 3.0×10⁻⁹–1.0×10⁻⁶ mol L⁻¹ and the detection limit was 3.3×10⁻¹⁰ mol L⁻¹. The proposed method proved appropriate for monitoring warfarin in actual pharmaceutical formulations and biological fluid samples by recovery test, with satisfactory results in comparison with other reported methods. - Highlights: ► A CPE fluorescence method for trace warfarin was developed for the first time. ► Supramolecular effects play a key role in improving the fluorescence property. ► The notion presents an opportunity in a so far neglected area of CPE investigation. ► Without previous treatment, urine species after CPE had no significant interference.

  11. Measurement of home-made LaCl3 : Ce scintillation detector sensitivity with different energy points in range of fission energy

    International Nuclear Information System (INIS)

    Hu Mengchun; Li Rurong; Si Fenni

    2010-01-01

    Gamma rays of different energies were obtained in the range of fission energy by Compton scattering of an intense 60Co gamma source and from the standard isotopic point sources, 0.67 MeV 137Cs and 1.25 MeV 60Co. The sensitivity of the LaCl3:Ce scintillator was measured at these gamma-ray energies by a fast-response scintillation detector with the home-made LaCl3:Ce scintillator. Results were normalized to the sensitivity at 0.67 MeV. The sensitivity of LaCl3:Ce to 1.25 MeV gamma rays is about 1.28. For the ø40 mm × 2 mm LaCl3:Ce scintillator, the largest sensitivity is 1.18 and the smallest is 0.96 for gamma rays from 0.39 to 0.78 MeV; for the ø40 mm × 10 mm LaCl3:Ce scintillator, the largest is 1.06 and the smallest is 0.98. The experimental results can provide references for theoretical study of the LaCl3:Ce scintillator and data to obtain the compounded sensitivity of the LaCl3:Ce scintillator in the range of fission energy. (authors)

  12. Sensitivity analyses of finite element method for estimating residual stress of dissimilar metal multi-pass weldment in nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Song, Tae Kwang; Bae, Hong Yeol; Kim, Yun Jae [Korea Unviersity, Seoul (Korea, Republic of); Lee, Kyoung Soo; Park, Chi Yong [Korea Electric Power Research Institute, Daejeon (Korea, Republic of)

    2008-09-15

    In nuclear power plants, ferritic low alloy steel components are connected to austenitic stainless steel piping through Alloy 82/182 butt welds. There have been recent incidents in which cracking has been observed in the dissimilar metal weld. Alloy 82/182 is susceptible to primary water stress corrosion cracking, and weld-induced residual stress is a main factor for crack growth. Therefore, exact estimation of residual stress is important for reliable operation. This paper presents residual stress computations performed for a 6'' safety and relief nozzle. Based on 2-dimensional and 3-dimensional finite element analyses, the effect of welding variables on residual stress variation is estimated for sensitivity analysis.

  13. Modification and Validation of the Triglyceride-to-HDL Cholesterol Ratio as a Surrogate of Insulin Sensitivity in White Juveniles and Adults without Diabetes Mellitus: The Single Point Insulin Sensitivity Estimator (SPISE).

    Science.gov (United States)

    Paulmichl, Katharina; Hatunic, Mensud; Højlund, Kurt; Jotic, Aleksandra; Krebs, Michael; Mitrakou, Asimina; Porcellati, Francesca; Tura, Andrea; Bergsten, Peter; Forslund, Anders; Manell, Hannes; Widhalm, Kurt; Weghuber, Daniel; Anderwald, Christian-Heinz

    2016-09-01

    The triglyceride-to-HDL cholesterol (TG/HDL-C) ratio was introduced as a tool to estimate insulin resistance, because circulating lipid measurements are available in routine settings. Insulin, C-peptide, and free fatty acids are components of other insulin-sensitivity indices but their measurement is expensive. Easier and more affordable tools are of interest for both pediatric and adult patients. Study participants from the Relationship Between Insulin Sensitivity and Cardiovascular Disease [43.9 (8.3) years, n = 1260] as well as the Beta-Cell Function in Juvenile Diabetes and Obesity study cohorts [15 (1.9) years, n = 29] underwent oral-glucose-tolerance tests and euglycemic clamp tests for estimation of whole-body insulin sensitivity and calculation of insulin sensitivity indices. To refine the TG/HDL ratio, mathematical modeling was applied including body mass index (BMI), fasting TG, and HDL cholesterol and compared to the clamp-derived M-value as an estimate of insulin sensitivity. Each modeling result was scored by identifying insulin resistance and correlation coefficient. The Single Point Insulin Sensitivity Estimator (SPISE) was compared to traditional insulin sensitivity indices using area under the ROC curve (aROC) analysis and the χ² test. The novel formula for SPISE was computed as follows: SPISE = 600 × HDL-C^0.185/(TG^0.2 × BMI^1.338), with fasting HDL-C (mg/dL), fasting TG concentrations (mg/dL), and BMI (kg/m²). A cutoff value of 6.61 corresponds to an M-value smaller than 4.7 mg·kg⁻¹·min⁻¹ (aROC, M: 0.797). SPISE showed a significantly better aROC than the TG/HDL-C ratio. The SPISE aROC was comparable to the Matsuda ISI (insulin sensitivity index) and equal to the QUICKI (quantitative insulin sensitivity check index) and HOMA-IR (homeostasis model assessment-insulin resistance) when calculated with M-values. The SPISE seems well suited to surrogate whole-body insulin sensitivity from inexpensive fasting single-point blood draw and BMI
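The published formula lends itself to a one-line implementation. In the sketch below the formula and the 6.61 cutoff are taken from the abstract, while the example subject values are hypothetical:

```python
def spise(hdl_c, tg, bmi):
    """Single Point Insulin Sensitivity Estimator:
    SPISE = 600 * HDL-C^0.185 / (TG^0.2 * BMI^1.338),
    with fasting HDL-C and TG in mg/dL and BMI in kg/m^2."""
    return 600.0 * hdl_c ** 0.185 / (tg ** 0.2 * bmi ** 1.338)

def below_cutoff(value, cutoff=6.61):
    """SPISE < 6.61 corresponds to a clamp M-value < 4.7 mg/kg/min."""
    return value < cutoff

# hypothetical subjects (illustrative values only)
lean = spise(hdl_c=50.0, tg=100.0, bmi=25.0)    # just above the cutoff
obese = spise(hdl_c=35.0, tg=200.0, bmi=35.0)   # well below the cutoff
```

Because the estimator needs only a fasting lipid panel and BMI, it can be evaluated from a single routine blood draw, which is the point of the "single point" name.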

  14. Integration of an optical CMOS sensor with a microfluidic channel allows a sensitive readout for biological assays in point-of-care tests.

    Science.gov (United States)

    Van Dorst, Bieke; Brivio, Monica; Van Der Sar, Elfried; Blom, Marko; Reuvekamp, Simon; Tanzi, Simone; Groenhuis, Roelf; Adojutelegan, Adewole; Lous, Erik-Jan; Frederix, Filip; Stuyver, Lieven J

    2016-04-15

    In this manuscript, a microfluidic detection module, which allows a sensitive readout of biological assays in point-of-care (POC) tests, is presented. The proposed detection module consists of a microfluidic flow cell with an integrated Complementary Metal-Oxide-Semiconductor (CMOS)-based single photon counting optical sensor. Due to the integrated sensor-based readout, the detection module could be implemented as the core technology in stand-alone POC tests, for use in mobile or rural settings. The performance of the detection module was demonstrated in three assays: a peptide, a protein and an antibody detection assay. The antibody detection assay with readout in the detection module proved to be 7-fold more sensitive than the traditional colorimetric plate-based ELISA. The protein and peptide assays showed a lower limit of detection (LLOD) of 200 fM and 460 fM, respectively. Results demonstrate that the sensitivity of the immunoassays is comparable with lab-based immunoassays and at least equal to or better than current mainstream POC devices. This sensitive readout holds the potential to develop POC tests, which are able to detect low concentrations of biomarkers. This will broaden the diagnostic capabilities at the clinician's office and at the patient's home, where currently only the less sensitive lateral flow and dipstick POC tests are implemented. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 4: Uncertainty and sensitivity analyses for 40 CFR 191, Subpart B

    Energy Technology Data Exchange (ETDEWEB)

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to the EPA's Environmental Protection Standards for Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Additional information about the 1992 PA is provided in other volumes. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions, the choice of parameters selected for sampling, and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect compliance with 40 CFR 191B are: drilling intensity, intrusion borehole permeability, halite and anhydrite permeabilities, radionuclide solubilities and distribution coefficients, fracture spacing in the Culebra Dolomite Member of the Rustler Formation, porosity of the Culebra, and spatial variability of Culebra transmissivity. Performance with respect to 40 CFR 191B is insensitive to uncertainty in other parameters; however, additional data are needed to confirm that reality lies within the assigned distributions.

  16. [Male identity, sport and health : Starting points for gender-sensitive support of boys and young men].

    Science.gov (United States)

    Blomberg, Christoph; Neuber, Nils

    2016-08-01

    Sport is highly relevant in the life of boys and young men. It is not only one of the most common and important leisure activities, but also helps male self-assurance through physical conflicts and competitions as well as through physical proximity and social involvement. At the same time, sport is an ambivalent area that preserves health, but can also be dangerous to it. By considering the development of male identity, the specific possibilities of sport, as well as an overview of the health situation of boys, this article develops starting points for lifestyle-oriented health promotion of boys and young men in the area of exercise, games and sport. In sports, physical practices are learned that can have long-term effects as somatic cultures on health behavior. The work with boys in sports can be health-promoting if opportunities and risks are reflected upon and considered in the didactic planning and execution.

  17. Global sensitivity analysis of thermo-mechanical models in numerical weld modelling; Analyse de sensibilite globale de modeles thermomecaniques de simulation numerique du soudage

    Energy Technology Data Exchange (ETDEWEB)

    Petelet, M

    2007-10-15

    The current approach of most welding modellers is to content themselves with available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range! This way of proceeding neglects the influence of the uncertainty of input data on the result given by the computer code. In this case, how can the credibility of the prediction be assessed? This thesis represents a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which range of temperature. Using this methodology required some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data were divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation and led to reducing the input space to only the important variables. Sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)
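Global sensitivity analysis of this kind typically apportions the variance of a model output among its inputs (first-order sensitivity indices). A brute-force sketch on a toy model, where the "material property" names and the linear model are purely illustrative, not the thesis's welding code:

```python
import random

def first_order_index(x, y, bins=20):
    """Brute-force estimate of the first-order sensitivity index
    S_i = Var(E[Y | X_i]) / Var(Y): sort the samples by one input,
    slice them into bins, and compare the variance of the per-bin
    mean of Y with the total variance of Y."""
    n = len(y)
    mean_y = sum(y) / n
    var_y = sum((v - mean_y) ** 2 for v in y) / n
    order = sorted(range(n), key=lambda i: x[i])
    size = n // bins
    var_cond = 0.0
    for b in range(bins):
        idx = order[b * size:] if b == bins - 1 else order[b * size:(b + 1) * size]
        m = sum(y[i] for i in idx) / len(idx)     # E[Y | X_i in bin b]
        var_cond += len(idx) * (m - mean_y) ** 2
    return (var_cond / n) / var_y

random.seed(1)
n = 20000
# toy 'welding' model: distortion = 4*yield_stress + conductivity,
# both inputs uniform on [0, 1]; exact indices are 16/17 and 1/17
yield_stress = [random.random() for _ in range(n)]
conductivity = [random.random() for _ in range(n)]
distortion = [4.0 * a + b for a, b in zip(yield_stress, conductivity)]

s_yield = first_order_index(yield_stress, distortion)   # ≈ 0.94
s_cond = first_order_index(conductivity, distortion)    # ≈ 0.06
```

Properties with indices near zero are the ones that, in the thesis's terms, "can simply be extrapolated or taken from a similar material"; those with large indices must be measured accurately.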

  18. Cloud point extraction-fluorimetric combined methodology for the determination of trace warfarin based on the sensitization effect of supramolecule

    International Nuclear Information System (INIS)

    Chang Zheng; Yan Hongtao

    2012-01-01

    Compared to the fluorescence spectra of warfarin in pure ethanol and in the presence of the nonionic surfactant Tergitol 15-S-7 after cloud point extraction (CPE), the fluorescence emission peak underwent an obvious red shift and the fluorescence intensity of warfarin was significantly increased in the presence of Tergitol 15-S-7. In order to confirm Tergitol 15-S-7-induced supramolecular effects, the fluorescence quantum yields of warfarin in the micellar medium and in pure ethanol were investigated. The experimental results showed that the supramolecular interactions between Tergitol 15-S-7 and the warfarin excimers played a key role in improving the fluorescence properties of warfarin. Based on these facts, a simple fluorimetric method combined with CPE for the determination of trace warfarin was developed for the first time. Under optimized experimental conditions, the linear concentration range for warfarin was 3.0×10⁻⁹–1.0×10⁻⁶ mol L⁻¹ and the detection limit was 3.3×10⁻¹⁰ mol L⁻¹. The proposed method proved appropriate for monitoring warfarin in actual pharmaceutical formulations and biological fluid samples by recovery test, with satisfactory results in comparison with other reported methods. - Highlights: ► A CPE fluorescence method for trace warfarin was developed for the first time. ► Supramolecular effects play a key role in improving the fluorescence property. ► The notion presents an opportunity in a so far neglected area of CPE investigation. ► Without previous treatment, urine species after CPE had no significant interference.

  19. NUPEC BWR Full-size Fine-mesh Bundle Test (BFBT) Benchmark. Volume II: uncertainty and sensitivity analyses of void distribution and critical power - Specification

    International Nuclear Information System (INIS)

    Aydogan, F.; Hochreiter, L.; Ivanov, K.; Martin, M.; Utsuno, H.; Sartori, E.

    2010-01-01

    experimental cases from the BFBT database for both steady-state void distribution and steady-state critical power uncertainty analyses. In order to study the basic thermal-hydraulics in a single channel, where the concern regarding the cross-flow effect modelling could be removed, an elemental task is proposed, consisting of two sub-tasks that are placed in each phase of the benchmark scope as follows: - Sub-task 1: Void fraction in elemental channel benchmark; - Sub-task 2: Critical power in elemental channel benchmark. The first task can also be utilised as an uncertainty analysis exercise for fine computational fluid dynamics (CFD) models for which the full bundle sensitivity or uncertainty analysis is more difficult. The task is added to the second volume of the specification as an optional exercise. Chapter 2 of this document provides the definition of UA/SA terms. Chapter 3 provides the selection and characterisation of the input uncertain parameters for the BFBT benchmark and the description of the elemental task. Chapter 4 describes the suggested approach for UA/SA of the BFBT benchmark. Chapter 5 provides the selection of data sets for the uncertainty analysis and the elemental task from the BFBT database. Chapter 6 specifies the requested output for void distribution and critical power uncertainty analyses (Exercises I-4 and II-3) as well as for the elemental task. Chapter 7 provides conclusions. Appendix 1 discusses the UA/SA methods. Appendix 2 presents the Phenomena Identification Ranking Tables (PIRT) developed at PSU for void distribution and critical power predictions in order to assist participants in selecting the most sensitive/uncertain code model parameters

  20. Determination of sensitivity, specificity and cut off point of visual- Motor Bender Gestalt Test in the diagnosis of traumatic brain injury

    Directory of Open Access Journals (Sweden)

    tayebeh Rezaie nasab

    2013-02-01

    Results: In this study, the cut-off point was calculated as 6.5, sensitivity as 55.8%, specificity as 81.2%, and the area under the ROC curve as 0.69. Moreover, the positive predictive value, negative predictive value and efficiency were 95.08%, 22.03%, and 59.17%, respectively. Conclusion: The results of this study revealed that the Bender Gestalt Test is relatively weak in the diagnosis of mild TBI. However, its specificity is high and it was successful in identifying healthy individuals.
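All of the accuracy figures reported in this abstract derive from a 2×2 confusion matrix. As a hedged illustration (the counts below are hypothetical, not the study's data), the standard definitions can be computed as follows:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening-test metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),        # true-positive rate
        "specificity": tn / (tn + fp),        # true-negative rate
        "ppv": tp / (tp + fp),                # positive predictive value
        "npv": tn / (tn + fn),                # negative predictive value
        "efficiency": (tp + tn) / (tp + fp + tn + fn),  # overall accuracy
    }

# Hypothetical counts, for illustration only.
m = diagnostic_metrics(tp=50, fp=5, tn=40, fn=10)
print({k: round(v, 3) for k, v in m.items()})
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of the condition in the sample, which is why a test can combine a high PPV with a low NPV as reported above.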

  1. Reprogramming the body weight set point by a reciprocal interaction of hypothalamic leptin sensitivity and Pomc gene expression reverts extreme obesity

    Directory of Open Access Journals (Sweden)

    Kavaljit H. Chhabra

    2016-10-01

    Conclusions: Pomc reactivation in previously obese, calorie-restricted ArcPomc−/− mice normalized energy homeostasis, suggesting that their body weight set point was restored to control levels. In contrast, massively obese and hyperleptinemic ArcPomc−/− mice or those weight-matched and treated with PASylated leptin to maintain extreme hyperleptinemia prior to Pomc reactivation converged to an intermediate set point relative to lean control and obese ArcPomc−/− mice. We conclude that restoration of hypothalamic leptin sensitivity and Pomc expression is necessary for obese ArcPomc−/− mice to achieve and sustain normal metabolic homeostasis; whereas deficits in either parameter set a maladaptive allostatic balance that defends increased adiposity and body weight.

  2. Multiple active myofascial trigger points and pressure pain sensitivity maps in the temporalis muscle are related in women with chronic tension type headache.

    Science.gov (United States)

    Fernández-de-las-Peñas, César; Caminero, Ana B; Madeleine, Pascal; Guillem-Mesado, Amparo; Ge, Hong-You; Arendt-Nielsen, Lars; Pareja, Juan A

    2009-01-01

    To describe the common locations of active trigger points (TrPs) in the temporalis muscle and their referred pain patterns in chronic tension type headache (CTTH), and to determine if pressure sensitivity maps of this muscle can be used to describe the spatial distribution of active TrPs. Forty women with CTTH were included. An electronic pressure algometer was used to assess pressure pain thresholds (PPT) at 9 points over each temporalis muscle: 3 each in the anterior, medial and posterior parts. Both muscles were examined for the presence of active TrPs at each of the 9 points. The referred pain pattern of each active TrP was assessed. Two-way analysis of variance detected significant differences in mean PPT levels between the measurement points (F=30.3; P<0.001), but not between sides (F=2.1; P=0.2). PPT scores decreased from the posterior to the anterior column (P<0.001). No differences were found in the number of active TrPs (F=0.3; P=0.9) between the dominant and nondominant sides. Significant differences were found in the distribution of the active TrPs (χ²=12.2; P<0.001): active TrPs were mostly found in the anterior column and in the middle of the muscle belly. The analysis of variance did not detect significant differences in the referred pain pattern between active TrPs (F=1.1, P=0.4). The topographical pressure pain sensitivity maps showed the distinct distribution of the TrPs, indicated by locations with low PPTs. Multiple active TrPs were found in the temporalis muscle, particularly in the anterior column and in the middle of the muscle belly. A bilateral posterior-to-anterior decreasing distribution of PPTs in the temporalis muscle of women with CTTH was found. The locations of active TrPs in the temporalis muscle corresponded well to the muscle areas with lower PPTs, supporting the relationship between multiple active muscle TrPs and topographical pressure pain sensitivity maps in the temporalis muscle in women with CTTH.

  3. Rapid, sensitive and reproducible method for point-of-collection screening of liquid milk for adulterants using a portable Raman spectrometer with novel optimized sample well

    Science.gov (United States)

    Nieuwoudt, Michel K.; Holroyd, Steve E.; McGoverin, Cushla M.; Simpson, M. Cather; Williams, David E.

    2017-02-01

    Point-of-care diagnostics are of interest in the medical, security and food industries, the latter particularly for screening food adulterated for economic gain. Milk adulteration continues to be a major problem worldwide, and different methods to detect fraudulent additives have been investigated for over a century. Laboratory-based methods are limited in their application to point-of-collection diagnosis and also require expensive instrumentation, chemicals and skilled technicians. This has encouraged exploration of spectroscopic methods as more rapid and inexpensive alternatives. Raman spectroscopy has excellent potential for the screening of milk because of the rich complexity inherent in its signals. Rapid advances in photonic technologies and fabrication methods are bringing increasingly sensitive portable mini-Raman systems to the market that are both affordable and feasible for point-of-care and point-of-collection applications. We have developed a powerful spectroscopic method for rapidly screening liquid milk for sucrose and four nitrogen-rich adulterants (dicyandiamide (DCD), ammonium sulphate, melamine, urea), using a combined system: a small, portable Raman spectrometer with a focusing fibre optic probe and optimized reflective focusing wells, simply fabricated in aluminium. The reliable sample presentation of this system enabled a high reproducibility of 8% RSD (relative standard deviation) within four minutes. Limits of detection for the PLS calibrations ranged from 140 to 520 ppm for the four N-rich compounds and from 0.7 to 3.6% for sucrose. The portability of the system and the reliability and reproducibility of this technique open opportunities for general, reagentless adulteration screening of biological fluids as well as milk, at the point of collection.
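Calibration-based limits of detection like those quoted above are conventionally estimated from the calibration slope and the residual scatter of the fit. A minimal univariate sketch (illustrative data; the authors' exact multivariate PLS procedure is not reproduced here) follows the ICH Q2(R1) convention LOD = 3.3·σ/slope:

```python
import numpy as np

# Hypothetical calibration: instrument response vs. adulterant level (ppm).
conc = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
signal = np.array([0.02, 0.21, 0.39, 0.62, 0.79, 1.01])

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = residuals.std(ddof=2)        # residual std dev of the linear fit

lod = 3.3 * sigma / slope            # ICH Q2(R1) convention
loq = 10.0 * sigma / slope           # limit of quantification
print(f"slope={slope:.5f}, LOD={lod:.0f} ppm, LOQ={loq:.0f} ppm")
```

In a real PLS calibration the slope and residual standard deviation would come from the predicted-versus-reference regression rather than a single response channel, but the LOD arithmetic is the same.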

  4. Sensitivity of the Hydrogen Epoch of Reionization Array and its build-out stages to one-point statistics from redshifted 21 cm observations

    Science.gov (United States)

    Kittiwisit, Piyanat; Bowman, Judd D.; Jacobs, Daniel C.; Beardsley, Adam P.; Thyagarajan, Nithyanandan

    2018-03-01

    We present a baseline sensitivity analysis of the Hydrogen Epoch of Reionization Array (HERA) and its build-out stages to one-point statistics (variance, skewness, and kurtosis) of redshifted 21 cm intensity fluctuation from the Epoch of Reionization (EoR) based on realistic mock observations. By developing a full-sky 21 cm light-cone model, taking into account the proper field of view and frequency bandwidth, utilizing a realistic measurement scheme, and assuming perfect foreground removal, we show that HERA will be able to recover statistics of the sky model with high sensitivity by averaging over measurements from multiple fields. All build-out stages will be able to detect variance, while skewness and kurtosis should be detectable for HERA128 and larger. We identify sample variance as the limiting constraint of the measurements at the end of reionization. The sensitivity can also be further improved by performing frequency windowing. In addition, we find that strong sample variance fluctuation in the kurtosis measured from an individual field of observation indicates the presence of outlying cold or hot regions in the underlying fluctuations, a feature that can potentially be used as an EoR bubble indicator.
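The one-point statistics targeted above are simply the central moments of the brightness-temperature distribution within each frequency channel. A minimal sketch using a synthetic Gaussian field as a stand-in (not the actual light-cone model) shows how they are computed; non-Gaussian reionization structure would pull skewness and kurtosis away from zero:

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic stand-in for one channel of 21 cm brightness-temperature
# fluctuations (mK); the real input would come from the light-cone model.
field = rng.normal(loc=0.0, scale=5.0, size=100_000)

centered = field - field.mean()
sigma = centered.std()

variance = np.mean(centered**2)                    # second central moment
skewness = np.mean(centered**3) / sigma**3         # third standardized moment
kurtosis = np.mean(centered**4) / sigma**4 - 3.0   # excess (Fisher) kurtosis

print(round(variance, 2), round(skewness, 3), round(kurtosis, 3))
```

For a Gaussian field, variance ≈ σ² while skewness and excess kurtosis are ≈ 0; the paper's point is that outlying hot or cold regions show up as sample-variance-driven excursions in the kurtosis of an individual field.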

  5. Short-term changes in neck pain, widespread pressure pain sensitivity, and cervical range of motion after the application of trigger point dry needling in patients with acute mechanical neck pain: a randomized clinical trial.

    Science.gov (United States)

    Mejuto-Vázquez, María J; Salom-Moreno, Jaime; Ortega-Santiago, Ricardo; Truyols-Domínguez, Sebastián; Fernández-de-Las-Peñas, César

    2014-04-01

    Randomized clinical trial. To determine the effects of trigger point dry needling (TrPDN) on neck pain, widespread pressure pain sensitivity, and cervical range of motion in patients with acute mechanical neck pain and active trigger points in the upper trapezius muscle. TrPDN seems to be effective for decreasing pain in individuals with upper-quadrant pain syndromes. Studies on the potential effects of TrPDN for decreasing pain and sensitization in individuals with acute mechanical neck pain are needed. Methods Seventeen patients (53% female) were randomly assigned to 1 of 2 groups: a single session of TrPDN or no intervention (waiting list). Pressure pain thresholds over the C5-6 zygapophyseal joint, second metacarpal, and tibialis anterior muscle; neck pain intensity; and cervical spine range-of-motion data were collected at baseline (pretreatment) and 10 minutes and 1 week after the intervention by an assessor blinded to the treatment allocation of the patient. Mixed-model analyses of variance were used to examine the effects of treatment on each outcome variable. Patients treated with 1 session of TrPDN experienced greater decreases in neck pain, greater increases in pressure pain threshold, and higher increases in cervical range of motion than those who did not receive an intervention at both 10 minutes and 1 week after the intervention (P<.001). A single session of TrPDN decreased neck pain intensity and widespread pressure pain sensitivity, and also increased active cervical range of motion, in patients with acute mechanical neck pain. Changes in pain, pressure pain threshold, and cervical range of motion surpassed their respective minimal detectable change values, supporting clinically relevant treatment effects. Level of Evidence Therapy, level 1b-.

  6. Rapid and sensitive detection of Feline immunodeficiency virus using an insulated isothermal PCR-based assay with a point-of-need PCR detection platform.

    Science.gov (United States)

    Wilkes, Rebecca Penrose; Kania, Stephen A; Tsai, Yun-Long; Lee, Pei-Yu Alison; Chang, Hsiu-Hui; Ma, Li-Juan; Chang, Hsiao-Fen Grace; Wang, Hwa-Tang Thomas

    2015-07-01

    Feline immunodeficiency virus (FIV) is an important infectious agent of cats. Clinical syndromes resulting from FIV infection include immunodeficiency, opportunistic infections, and neoplasia. In our study, a 5' long terminal repeat/gag region-based reverse transcription insulated isothermal polymerase chain reaction (RT-iiPCR) was developed to amplify all known FIV strains to facilitate point-of-need FIV diagnosis. The RT-iiPCR method was applied in a point-of-need PCR detection platform--a field-deployable device capable of generating automatically interpreted RT-iiPCR results from nucleic acids within 1 hr. The 95% limit of detection of the FIV RT-iiPCR was calculated to be 95 copies of standard in vitro transcription RNA per reaction. Endpoint dilution studies with serial dilutions of an ATCC FIV type strain showed that the sensitivity of the lyophilized FIV RT-iiPCR reagent was comparable to that of a reference nested PCR. The established reaction did not amplify any nontargeted feline pathogens, including Felid herpesvirus 1, feline coronavirus, Feline calicivirus, Feline leukemia virus, Mycoplasma haemofelis, and Chlamydophila felis. Based on analysis of 76 clinical samples (including blood and bone marrow) with the FIV RT-iiPCR, test sensitivity was 97.78% (44/45), specificity was 100.00% (31/31), and agreement was 98.65% (75/76), determined against a reference nested-PCR assay. A kappa value of 0.97 indicated excellent correlation between these 2 methods. The lyophilized FIV RT-iiPCR reagent, deployed on a user-friendly portable device, has potential utility for rapid and easy point-of-need detection of FIV in cats. © 2015 The Author(s).

  7. Optimal production lot size and reorder point of a two-stage supply chain while random demand is sensitive with sales teams' initiatives

    Science.gov (United States)

    Sankar Sana, Shib

    2016-01-01

    The paper develops a production-inventory model of a two-stage supply chain consisting of one manufacturer and one retailer to study the production lot size/order quantity, reorder point and sales teams' initiatives, where the demand of the end customers depends simultaneously on a random variable and on the sales teams' initiatives. The manufacturer produces the retailer's order quantity in one lot, in which the procurement cost per unit quantity follows a realistic convex function of the production lot size. In the chain, the cost of the sales team's initiatives/promotion efforts and the wholesale price of the manufacturer are negotiated at points such that their optimum profits come near their target profits. This study suggests that the management of firms determine the optimal order quantity/production quantity, reorder point and sales teams' initiatives/promotional effort in order to achieve their maximum profits. An analytical method is applied to determine the optimal values of the decision variables. Finally, numerical examples with graphical presentations and a sensitivity analysis of the key parameters are presented to provide more insights into the model.

  8. An insulated isothermal PCR method on a field-deployable device for rapid and sensitive detection of canine parvovirus type 2 at points of need.

    Science.gov (United States)

    Wilkes, Rebecca P; Lee, Pei-Yu A; Tsai, Yun-Long; Tsai, Chuan-Fu; Chang, Hsiu-Hui; Chang, Hsiao-Fen G; Wang, Hwa-Tang T

    2015-08-01

    Canine parvovirus type 2 (CPV-2), including subtypes 2a, 2b and 2c, causes an acute enteric disease in both domestic and wild animals. Rapid and sensitive diagnosis aids effective disease management at points of need (PON). A commercially available, field-deployable and user-friendly system, designed with insulated isothermal PCR (iiPCR) technology, displays excellent sensitivity and specificity for nucleic acid detection. An iiPCR method was developed for on-site detection of all circulating CPV-2 strains. Limit of detection was determined using plasmid DNA. CPV-2a, 2b and 2c strains, a feline panleukopenia virus (FPV) strain, and nine canine pathogens were tested to evaluate assay specificity. Reaction sensitivity and performance were compared with an in-house real-time PCR using serial dilutions of a CPV-2b strain and 100 canine fecal clinical samples collected from 2010 to 2014, respectively. The 95% limit of detection of the iiPCR method was 13 copies of standard DNA and detection limits for CPV-2b DNA were equivalent for iiPCR and real-time PCR. The iiPCR reaction detected CPV-2a, 2b and 2c and FPV. Non-targeted pathogens were not detected. Test results of real-time PCR and iiPCR from 99 fecal samples agreed with each other, while one real-time PCR-positive sample tested negative by iiPCR. Therefore, excellent agreement (k = 0.98) with sensitivity of 98.41% and specificity of 100% in detecting CPV-2 in feces was found between the two methods. In conclusion, the iiPCR system has potential to serve as a useful tool for rapid and accurate PON, molecular detection of CPV-2. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
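The agreement statistic quoted above (k = 0.98) is Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The sketch below uses a 2×2 table consistent with the reported counts (99/100 agreement, one real-time-PCR-positive/iiPCR-negative sample); the exact split into positives and negatives is inferred from the stated sensitivity and specificity, so treat it as illustrative:

```python
def cohens_kappa(table):
    """Cohen's kappa for a 2x2 agreement table.

    table[i][j] = number of samples rated category i by method 1
    and category j by method 2.
    """
    n = sum(sum(row) for row in table)
    p_observed = sum(table[i][i] for i in range(2)) / n
    row = [sum(table[i]) for i in range(2)]
    col = [sum(table[i][j] for i in range(2)) for j in range(2)]
    p_expected = sum(row[i] * col[i] for i in range(2)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Rows: reference real-time PCR (pos, neg); columns: iiPCR (pos, neg).
# 62/63 positives detected (~98.4% sensitivity), 37/37 negatives (100%).
kappa = cohens_kappa([[62, 1], [0, 37]])
print(round(kappa, 2))  # → 0.98
```

Kappa is preferred over raw percent agreement here because, with roughly balanced positives and negatives, chance alone would produce about 53% agreement.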

  9. Mechanistic study on lowering the sensitivity of positive atmospheric pressure photoionization mass spectrometric analyses: size-dependent reactivity of solvent clusters.

    Science.gov (United States)

    Ahmed, Arif; Choi, Cheol Ho; Kim, Sunghwan

    2015-11-15

    Understanding the mechanism of atmospheric pressure photoionization (APPI) is important for studies employing APPI liquid chromatography/mass spectrometry (LC/MS). In this study, the APPI mechanism for polyaromatic hydrocarbon (PAH) compounds dissolved in toluene and a methanol or water mixture was investigated by use of MS analysis and quantum mechanical simulation. In particular, four different mechanisms that could contribute to the signal reduction were considered based on a combination of MS data and quantum mechanical calculations. The APPI mechanism is clarified by combining MS data and density functional theory (DFT) calculations. To obtain MS data, a positive-mode (+) APPI Q Exactive Orbitrap mass spectrometer was used to analyze each solution. DFT calculations were performed using the general atomic and molecular electronic structure system (GAMESS). The experimental results indicated that methanol significantly reduced the signal in (+) APPI, but no significant signal reduction was observed when water was used as a co-solvent with toluene. The signal reduction was more pronounced for molecular ions than for protonated ions. Therefore, important information about the mechanism of methanol-induced signal reduction in (+) APPI-MS can be gained from its negative impact on APPI efficiency. The size-dependent reactivity of methanol clusters ((CH3OH)n, n = 1-8) is an important factor in determining the sensitivity of (+) APPI-MS analyses. Clusters can compete with toluene radical ions for electrons. The reactivity increases as the size of the methanol clusters increases, an effect that can be attributed to the size-dependent ionization energy of the solvent clusters. The resulting increase in cluster reactivity explains the flow rate- and temperature-dependent signal reduction observed in the analytes. Based on the results presented here, minimizing the size of methanol clusters can improve the sensitivity of LC/(+)-APPI-MS. Copyright © 2015 John

  10. Supplementing five-point body condition score with body fat percentage increases the sensitivity for assessing overweight status of small to medium sized dogs

    Directory of Open Access Journals (Sweden)

    Arai T

    2012-09-01

    Full Text Available Gebin Li,1 Peter Lee,1 Nobuko Mori,1 Ichiro Yamamoto,1 Koh Kawasumi,1 Hisao Tanabe,2 Toshiro Arai1 1Department of Veterinary Science, School of Veterinary Medicine, Nippon Veterinary and Life Science University, 2Komazawa Animal Hospital, Tokyo, Japan. Background and methods: Currently, five-point body condition scoring (BCS) is widely used by veterinarians and clinicians to assess adiposity in dogs in Japan. However, BCS assignment is subjective in nature, and most clinicians do not score with half points, instead preferring to round off values, thereby rendering less accurate assessments. Therefore, we sought to determine whether supplementing the five-point BCS with body fat percentage, assessed using simple morphometric measurements, can increase the sensitivity for detecting increasing adiposity in overweight small to medium sized dog breeds, via plasma metabolite validation. Results: Overall, the lean body fat percentage was determined to be 15%–22% for males (non-neutered/neutered) and 15%–25% for females (nonspayed/spayed). Dogs categorized as overweight by BCS had significantly higher levels of nonesterified fatty acids (P = 0.005), whereas animals categorized as overweight by BCS + body fat percentage were observed to have significantly higher levels of nonesterified fatty acids (P = 0.006), total cholesterol (P = 0.029), and triglycerides (P = 0.001) than lean animals. The increased sensitivity due to body fat percentage for gauging alterations in plasma metabolite levels may be due to increased correlation strength. Body fat percentage correlated positively with plasma insulin (r = 0.627, P = 0.002), nonesterified fatty acids (r = 0.674, P < 0.001), total cholesterol (r = 0.825, P < 0.0001), triglycerides (r = 0.5823, P < 0.005), blood urea nitrogen (r = 0.429, P < 0.05), creatinine (r = 0.490, P = 0.021), and total protein (r = 0.737, P < 0.0001) levels, which all tend to increase as a result of increasing adiposity

  11. iTRAQ-Based Proteomics Analyses of Sterile/Fertile Anthers from a Thermo-Sensitive Cytoplasmic Male-Sterile Wheat with Aegilops kotschyi Cytoplasm

    Directory of Open Access Journals (Sweden)

    Gaoming Zhang

    2018-05-01

    Full Text Available A “two-line hybrid system” was previously developed based on thermo-sensitive cytoplasmic male sterility in Aegilops kotschyi (K-TCMS), which can be used in wheat breeding. The K-TCMS line exhibits complete male sterility and can be used to produce hybrid wheat seeds during the normal wheat-growing season; it propagates via self-pollination at high temperatures. Isobaric tags for relative and absolute quantification-based quantitative proteome and bioinformatics analyses of the TCMS line KTM3315A were conducted under different fertility conditions to understand the mechanisms of fertility conversion during the pollen development stages. In total, 4639 proteins were identified; the differentially abundant proteins that increased/decreased in plants with differences in fertility were mainly involved in energy metabolism, starch and sucrose metabolism, phenylpropanoid biosynthesis, and protein synthesis, translation, folding, and degradation. Compared with the sterile condition, many of the proteins related to energy and phenylpropanoid metabolism increased during the anther development stage. Thus, we suggest that the energy and phenylpropanoid metabolism pathways are important for fertility conversion in K-TCMS wheat. These findings provide valuable insights into the proteins involved in anther and pollen development, thereby helping to further understand the mechanism of TCMS in wheat.

  12. Uncertainty and Sensitivity Analyses for CFD Codes: an Attempt of a State of the Art on the Basis of the CEA Experience

    International Nuclear Information System (INIS)

    Crecy, Agnes de; Bazin, Pascal

    2013-01-01

    Uncertainty and sensitivity analyses associated with best-estimate calculations have become paramount for licensing processes and are known as BEPU (Best-Estimate Plus Uncertainties) methods. A recent activity such as the BEMUSE benchmark has shown that the present methods are mature enough for system thermal-hydraulics codes, even if issues such as the quantification of the uncertainties of the input parameters, and especially of the physical models, must still be improved. But CFD codes are more and more used for fine 3-D modelling such as, for example, that necessary in dilution or stratification problems. The application of BEPU methods to CFD codes thus becomes an issue that must now be addressed. That is precisely the goal of this paper, which consists of two main parts. In chapter 2, the specificities of CFD codes with respect to BEPU methods are listed, with a focus on the possible difficulties. In chapter 3, the studies performed at CEA are described. It is important to note that CEA research in this field is only beginning and must not be viewed as a reference approach. (authors)

  13. Substantial Variability Exists in Utilities' Nuclear Decommissioning Funding Adequacy: Baseline Trends (1997-2001); and Scenario and Sensitivity Analyses (Year 2001)

    International Nuclear Information System (INIS)

    Williams, D. G.

    2003-01-01

    This paper explores the trends over 1997-2001 in my baseline simulation analysis of the sufficiency of electric utilities' funds to eventually decommission the nation's nuclear power plants. Further, for 2001, I describe the utilities' funding adequacy results obtained using scenario and sensitivity analyses, respectively. In this paper, I focus more on the wide variability observed in these adequacy measures among utilities than on the results for the ''average'' utility in the nuclear industry. Only individual utilities, not average utilities -- often used by the nuclear industry to represent its funding adequacy -- will decommission their nuclear plants. Industry-wide results tend to mask the varied results for individual utilities. This paper shows that over 1997-2001, the variability of my baseline decommissioning funding adequacy measures (in percentages) for both utility fund balances and current contributions has remained very large, reflected in the sizable ranges and frequency distributions of these percentages. The relevance of this variability for nuclear decommissioning funding adequacy is, of course, focused more on those utilities that show below ideal balances and contribution levels. Looking backward, 42 of 67 utility fund (available) balances, in 2001, were above (and 25 below) their ideal baseline levels; in 1997, 42 of 76 were above (and 34 below) ideal levels. Of these, many utility balances were far above, and many far below, such ideal levels. The problem of certain utilities continuing to show balances much below ideal persists even with increases in the adequacy of ''average'' utility balances

  14. Semi-quantitative and simulation analyses of effects of {gamma} rays on determination of calibration factors of PET scanners with point-like {sup 22}Na sources

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, Tomoyuki [School of Allied Health Sciences, Kitasato University, 1-15-1, Kitasato, Minamiku, Sagamihara, Kanagawa, 252-0373 (Japan); Sato, Yasushi [National Institute of Advanced Industrial Science and Technology, 1-1-1, Umezono, Tsukuba, Ibaraki, 305-8568 (Japan); Oda, Keiichi [Tokyo Metropolitan Institute of Gerontology, 1-1, Nakamachi, Itabashi, Tokyo, 173-0022 (Japan); Wada, Yasuhiro [RIKEN Center for Molecular Imaging Science, 6-7-3, Minamimachi, Minatoshima, Chuo, Kobe, Hyogo, 650-0047 (Japan); Murayama, Hideo [National Institute of Radiological Sciences, 4-9-1, Anagawa, Inage, Chiba, 263-8555 (Japan); Yamada, Takahiro, E-mail: hasegawa@kitasato-u.ac.jp [Japan Radioisotope Association, 2-28-45, Komagome, Bunkyo-ku, Tokyo, 113-8941 (Japan)

    2011-09-21

    The uncertainty of radioactivity concentrations measured with positron emission tomography (PET) scanners ultimately depends on the uncertainty of the calibration factors. A new practical calibration scheme using point-like {sup 22}Na radioactive sources has been developed. The purpose of this study is to theoretically investigate the effects of the associated 1.275 MeV {gamma} rays on the calibration factors. The physical processes affecting the coincidence data were categorized in order to derive approximate semi-quantitative formulae. Assuming the design parameters of some typical commercial PET scanners, the effects of the {gamma} rays as relative deviations in the calibration factors were evaluated by semi-quantitative formulae and a Monte Carlo simulation. The relative deviations in the calibration factors were less than 4%, depending on the details of the PET scanners. The event losses due to rejecting multiple coincidence events of scattered {gamma} rays had the strongest effect. The results from the semi-quantitative formulae and the Monte Carlo simulation were consistent and were useful in understanding the underlying mechanisms. The deviations are considered small enough to correct on the basis of precise Monte Carlo simulation. This study thus offers an important theoretical basis for the validity of the calibration method using point-like {sup 22}Na radioactive sources.

  15. Point-of-care heart-type fatty acid binding protein versus high-sensitivity troponin T testing in emergency patients at high risk for acute coronary syndrome.

    Science.gov (United States)

    Kellens, Sebastiaan; Verbrugge, Frederik H; Vanmechelen, Maxime; Grieten, Lars; Van Lierde, Johan; Dens, Joseph; Vrolix, Mathias; Vandervoort, Pieter

    2016-04-01

    High-sensitivity cardiac troponin testing is used to detect myocardial damage in patients with acute chest pain. Heart-type fatty acid binding protein (H-FABP) may be an alternative, available as point-of-care test. Patients (n=203) referred by general practitioners for suspected acute coronary syndrome or presenting with typical chest pain and one major cardiovascular risk factor at the emergency department were prospectively included in a single-centre cohort study. High-sensitivity cardiac troponin T (hs-TnT) and point-of-care H-FABP testing were concomitantly performed at admission and after 6h. Maximal hs-TnT levels above the 99th percentile were observed in 152 patients (75%) with 127 (63%) fulfilling criteria for myocardial infarction. Upon admission, hs-TnT and H-FABP were associated with an area under the curve (95% CI) of 0.83 (0.77-0.89) and 0.79 (0.73-0.85), respectively, to predict myocardial infarction, which increased to 0.93 (0.90-0.97) and 0.88 (0.84-0.93), respectively, after 6h. The diagnostic accuracy for non-ST-segment elevation myocardial infarction was somewhat lower with an area under the curve (95% CI) of 0.80 (0.72-0.87), 0.90 (0.84-0.96), 0.73 (0.64-0.81) and 0.77 (0.67-0.86), respectively. When assessment was performed within 3h of chest pain onset, diagnostic accuracy of H-FABP versus hs-TnT was similar. Each standard deviation increase in admission H-FABP was associated with a 68% relative risk increase of all-cause mortality (p-value=0.027) during 666 ± 155 days of follow-up. Point-of-care H-FABP testing has lower diagnostic accuracy compared with hs-TnT assessment in patients with high pre-test acute coronary syndrome probability, but might be of interest when assessment is possible early after chest pain onset. © The European Society of Cardiology 2015.
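The areas under the curve reported above can be computed without tracing an ROC curve, because the AUC equals the probability that a randomly chosen diseased patient scores higher than a randomly chosen non-diseased one (the Mann-Whitney statistic). The sketch below uses made-up marker values, not the study data:

```python
def auc_rank(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability P(pos > neg); ties count 1/2."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical admission H-FABP values (ng/mL) for MI and non-MI patients.
mi = [8.1, 12.4, 6.7, 15.0, 9.3]
no_mi = [3.2, 7.5, 4.1, 5.0]
print(auc_rank(mi, no_mi))  # → 0.95
```

An AUC of 0.5 corresponds to a useless marker and 1.0 to perfect discrimination, which is the scale on which the 0.79-0.93 values above should be read.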

  16. A three-stage hybrid model for regionalization, trends and sensitivity analyses of temperature anomalies in China from 1966 to 2015

    Science.gov (United States)

    Wu, Feifei; Yang, XiaoHua; Shen, Zhenyao

    2018-06-01

    Temperature anomalies have received increasing attention due to their potentially severe impacts on ecosystems, economy and human health. To facilitate objective regionalization and examine regional temperature anomalies, a three-stage hybrid model with stages of regionalization, trends and sensitivity analyses was developed. Annual mean and extreme temperatures were analyzed using the daily data collected from 537 stations in China from 1966 to 2015, including the annual mean, minimum and maximum temperatures (Tm, TNm and TXm) as well as the extreme minimum and maximum temperatures (TNe and TXe). The results showed the following: (1) subregions with coherent temperature changes were identified using the rotated empirical orthogonal function analysis and K-means clustering algorithm. The numbers of subregions were 6, 7, 8, 9 and 8 for Tm, TNm, TXm, TNe and TXe, respectively. (2) Significant increases in temperature were observed in most regions of China from 1966 to 2015, although warming slowed down over the last decade. This warming primarily featured a remarkable increase in its minimum temperature. For Tm and TNm, 95% of the stations showed a significant upward trend at the 99% confidence level. TNe increased the fastest, at a rate of 0.56 °C/decade, whereas 21% of the stations in TXe showed a downward trend. (3) The mean temperatures (Tm, TNm and TXm) in the high-latitude regions increased more quickly than those in the low-latitude regions. The maximum temperature increased significantly at high elevations, whereas the minimum temperature increased greatly at middle-low elevations. The most pronounced warming occurred in eastern China in TNe and northwestern China in TXe, with mean elevations of 51 m and 2098 m, respectively. A cooling trend in TXe was observed at the northwestern end of China. The warming rate in TNe varied the most among the subregions (0.63 °C/decade).
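The warming rates quoted above (e.g. 0.56 °C/decade for TNe) are linear trends fitted to annual series. A minimal sketch with a synthetic station series (assumed data, not the 537-station record) shows the computation:

```python
import numpy as np

def trend_per_decade(years, temps):
    """Least-squares linear trend of an annual series, in degC per decade."""
    slope_per_year = np.polyfit(years, temps, 1)[0]
    return 10.0 * slope_per_year

years = np.arange(1966, 2016)
rng = np.random.default_rng(1)
# Synthetic anomalies: 0.056 degC/yr warming plus interannual noise.
temps = 0.056 * (years - 1966) + rng.normal(0.0, 0.3, size=years.size)
print(f"{trend_per_decade(years, temps):.2f} degC/decade")
```

In the study, trend significance is the separate question (stations "significant at the 99% confidence level"), typically assessed with a t-test on the slope or a Mann-Kendall test rather than from the slope value alone.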

  17. Integrated proteomic and N-glycoproteomic analyses of doxorubicin sensitive and resistant ovarian cancer cells reveal glycoprotein alteration in protein abundance and glycosylation

    Science.gov (United States)

    Hou, Junjie; Zhang, Chengqian; Xue, Peng; Wang, Jifeng; Chen, Xiulan; Guo, Xiaojing; Yang, Fuquan

    2017-01-01

    Ovarian cancer is one of the most common cancers among women worldwide, and chemotherapy remains the principal treatment for patients. However, drug resistance is a major obstacle to the effective treatment of ovarian cancers, and the underlying mechanism is not clear. An increased understanding of the mechanisms that underlie the pathogenesis of drug resistance is therefore needed to develop novel therapeutics and diagnostics. Herein, we report the comparative analysis of the doxorubicin-sensitive OVCAR8 cells and their doxorubicin-resistant variant NCI/ADR-RES cells using integrated global proteomics and N-glycoproteomics. A total of 1525 unique N-glycosite-containing peptides from 740 N-glycoproteins were identified and quantified, of which 253 N-glycosite-containing peptides showed significant change in the NCI/ADR-RES cells. Meanwhile, stable isotope labeling by amino acids in cell culture (SILAC) based comparative proteomic analysis of the two ovarian cancer cell lines led to the quantification of 5509 proteins. As about 50% of the identified N-glycoproteins are low-abundance membrane proteins, only 44% of the quantified unique N-glycosite-containing peptides had corresponding protein expression ratios. The comparison and calibration of the N-glycoproteome versus the proteome classified 14 change patterns of N-glycosite-containing peptides, including 8 up-regulated N-glycosite-containing peptides with increased glycosylation site occupancy, 35 up-regulated N-glycosite-containing peptides with unchanged glycosylation site occupancy, 2 down-regulated N-glycosite-containing peptides with decreased glycosylation site occupancy, and 46 down-regulated N-glycosite-containing peptides with unchanged glycosylation site occupancy. 
Integrated proteomic and N-glycoproteomic analyses provide new insights, which can help to unravel the relationship of N-glycosylation and multidrug resistance (MDR), understand the mechanism of MDR, and discover the new diagnostic and

  18. ANÁLISIS DEL RIESGO DE INCENDIOS FORESTALES: UN ENFOQUE BASADO EN PROCESOS PUNTUALES // FOREST WILDFIRE HAZARD ANALYSES: A POINT PROCESSES APPROACH

    Directory of Open Access Journals (Sweden)

    Rafael González de Gouveia

    2017-06-01

    Full Text Available Point stochastic processes are a very useful tool for analyzing hazard factors in wildfires. In this article, the occurrence of wildfires is studied through a spatio-temporal Poisson process, whose intensity function is taken as a characterization of wildfire hazard and is estimated with both parametric and non-parametric techniques. Finally, a set of real data is considered, provided by the Ministerio del Poder Popular para el Ambiente through the Instituto Nacional de Meteorología e Hidrología (INAMEH) in Venezuela, relating to the wildfires produced on a particular day. Hazard functions are estimated from the proposed model, and wildfire hazard maps are generated that are consistent with the geographical and climatic characteristics of the country.
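The non-parametric side of this intensity-based hazard characterization is commonly a kernel estimate of the Poisson intensity from observed ignition locations. A minimal sketch, with hypothetical fire coordinates and bandwidth rather than the INAMEH data:

```python
import math

# Gaussian-kernel estimate of a spatial Poisson process intensity from
# observed ignition points: lambda_hat(x, y) = sum_i K_h(x - xi, y - yi).
# Hypothetical fire locations and bandwidth; a sketch of the approach,
# not the authors' implementation.

def kernel_intensity(x, y, points, bandwidth):
    norm = 1.0 / (2.0 * math.pi * bandwidth ** 2)
    total = 0.0
    for xi, yi in points:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        total += norm * math.exp(-d2 / (2.0 * bandwidth ** 2))
    return total

fires = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]  # hypothetical ignition points
risk_near = kernel_intensity(0.05, 0.0, fires, bandwidth=0.5)
risk_far = kernel_intensity(10.0, 10.0, fires, bandwidth=0.5)
print(risk_near > risk_far)  # True: intensity is highest near the cluster
```

Evaluating `kernel_intensity` on a grid of (x, y) points is what produces a hazard map of the kind described.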

  19. A New Method for Blood NT-proBNP Determination Based on a Near-infrared Point of Care Testing Device with High Sensitivity and Wide Scope.

    Science.gov (United States)

    Zhang, Xiao Guang; Shu, Yao Gen; Gao, Ju; Wang, Xuan; Liu, Li Peng; Wang, Meng; Cao, Yu Xi; Zeng, Yi

    2017-06-01

    To develop a rapid, highly sensitive, and quantitative method for the detection of NT-proBNP levels based on a near-infrared point-of-care testing (POCT) device with wide scope. The lateral flow assay (LFA) strip for NT-proBNP was first prepared to achieve rapid detection. Then, the antibody pairs for NT-proBNP were screened and labeled with the near-infrared fluorescent dye Dylight-800. The capture antibody was fixed on a nitrocellulose membrane by a scribing device. Serial dilutions of serum samples were prepared using NT-proBNP-free serum. The prepared test strips, combined with a near-infrared POCT device, were validated with known concentrations of clinical samples. The POCT device gave as output the ratio of the intensity of the fluorescence signal of the detection line to that of the quality control line. The relationship between the ratio value and the concentration of the specimen was plotted as a working curve. The results of 62 clinical specimens obtained with our method were compared in parallel with those obtained with the Roche E411 kit. Based on the log-log plot, the new method demonstrated a good linear relationship between the ratio value and NT-proBNP concentrations ranging from 20 pg/mL to 10 ng/mL. The results of the 62 clinical specimens measured by our method showed a good linear correlation with those measured by the Roche E411 kit. The new LFA detection method for NT-proBNP levels based on the near-infrared POCT device was rapid and highly sensitive with wide scope and was thus suitable for rapid and early clinical diagnosis of cardiac impairment. Copyright © 2017 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.
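A log-log working curve of the kind described can be sketched as a straight-line fit in log space that is then inverted to read concentrations off measured ratios. The calibration pairs below are synthetic, not the assay's:

```python
import math

# Fit log10(ratio) = intercept + slope * log10(concentration) by least
# squares, then invert the line to predict concentration from a measured
# ratio. Synthetic calibration pairs, not the assay's data.

def fit_loglog(concs, ratios):
    xs = [math.log10(c) for c in concs]
    ys = [math.log10(r) for r in ratios]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def predict_conc(ratio, slope, intercept):
    return 10 ** ((math.log10(ratio) - intercept) / slope)

concs = [20.0, 100.0, 1000.0, 10000.0]     # pg/mL across the linear range
ratios = [0.01 * c ** 0.8 for c in concs]  # synthetic power-law response
slope, intercept = fit_loglog(concs, ratios)
print(round(predict_conc(ratios[1], slope, intercept)))  # 100
```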

  20. Comparative study of reference points by dosimetric analyses for late complications after uniform external radiotherapy and high-dose-rate brachytherapy for cervical cancer

    International Nuclear Information System (INIS)

    Chen, S.-W.; Liang, J.-A.; Yeh, L.-S.; Yang, S.-N.; Shiau, A.-C.; Lin, F.-J.

    2004-01-01

    Purpose: This study aimed to correlate and compare the predictive values of rectal and bladder reference doses of uniform external beam radiotherapy without shielding and high-dose-rate intracavitary brachytherapy (HDRICB) with late sequelae in patients with uterine cervical cancer. Methods and materials: Between September 1992 and December 1998, 154 patients who survived more than 12 months after treatment were studied. Initially, they were treated with 10-MV X-rays (44 to 45 Gy/22 to 25 fractions over 4 to 5 weeks) to the whole pelvis, after which HDRICB was performed using 192Ir remote afterloading at 1-week intervals for 4 weeks. The standard prescribed dose for each HDRICB was 6.0 Gy to point A. Patient- and treatment-related factors were evaluated for late rectal complications using logistic regression modeling. Results: The probability of rectal complications showed a better dose-response correlation with increasing total ICRU (International Commission on Radiation Units and Measurements) rectal dose. Multivariate logistic regression demonstrated a high risk of late rectal sequelae in patients who developed rectal complications (p = 0.0001; relative risk, 15.06; 95% CI, 2.89-43.7) and total ICRU rectal dose greater than 16 Gy (p = 0.02; relative risk, 2.07; 95% CI, 1.13-4.55). The high-risk factors for bladder complications were seen in patients who developed rectal complications (p = 0.0001; relative risk, 15.2; 95% CI, 2.81-44.9) and total ICRU bladder dose greater than 24 Gy (p = 0.02; relative risk, 8.93; 95% CI, 1.79-33.1). Conclusion: This study demonstrated the predictive value of ICRU rectal and bladder reference doses in HDRICB for patients receiving uniform external beam radiation therapy without central shielding. Patients who had a total ICRU rectal dose greater than 16 Gy, or a total ICRU bladder dose over 24 Gy, were at risk of late sequelae.
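The logistic regression modeling used for the complication analysis can be illustrated in a simplified univariate form: fit P(complication | dose) = sigmoid(b0 + b1·dose) by gradient ascent on the log-likelihood. The doses, outcomes, and fitted coefficients below are hypothetical, not the study's data:

```python
import math

# Simplified univariate logistic dose-response: fit
# P(complication | dose) = sigmoid(b0 + b1 * dose) by gradient ascent on the
# log-likelihood. Doses, outcomes, and fitted coefficients are hypothetical.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(doses, events, lr=0.01, steps=20000):
    b0, b1 = 0.0, 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, y in zip(doses, events):
            err = y - sigmoid(b0 + b1 * x)  # gradient of the log-likelihood
            g0 += err
            g1 += err * x
        b0 += lr * g0 / len(doses)
        b1 += lr * g1 / len(doses)
    return b0, b1

doses = [10, 12, 14, 16, 18, 20, 22, 24]  # Gy, hypothetical ICRU rectal doses
events = [0, 0, 0, 0, 1, 1, 1, 1]         # late complication observed?
b0, b1 = fit_logistic(doses, events)
print(sigmoid(b0 + b1 * 24) > sigmoid(b0 + b1 * 12))  # True: risk rises with dose
```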

  1. Sensitivity of predictions in an effective model: Application to the chiral critical end point position in the Nambu-Jona-Lasinio model

    International Nuclear Information System (INIS)

    Biguet, Alexandre; Hansen, Hubert; Brugiere, Timothee; Costa, Pedro; Borgnat, Pierre

    2015-01-01

    The measurement of the position of the chiral critical end point (CEP) in the QCD phase diagram is under debate. While it is possible to predict its position by using effective models specifically built to reproduce some of the features of the underlying theory (QCD), the quality of the predictions (e.g., the CEP position) obtained by such effective models depends on whether solving the model equations constitutes a well- or ill-posed inverse problem. Considering these predictions as inverse problems provides tools to evaluate whether the problem is ill-conditioned, meaning that infinitesimal variations of the inputs of the model can cause comparatively large variations of the predictions. If it is ill-conditioned, this has major consequences because of the finite variations that could come from experimental and/or theoretical errors. In the following, we apply such reasoning to the predictions of a particular Nambu-Jona-Lasinio model within the mean field + ring approximations, with special attention to the prediction of the chiral CEP position in the (T-μ) plane. We find that the problem is ill-conditioned (i.e. very sensitive to input variations) for the T-coordinate of the CEP, whereas it is well-posed for the μ-coordinate of the CEP. As a consequence, when the chiral condensate varies in a 10 MeV range, μ_CEP varies far less. As an illustration of how problematic this could be, we show that the main consequence of taking into account finite variations of the inputs is that the existence of the CEP itself can no longer be predicted: for a deviation as low as 0.6% with respect to vacuum phenomenology (well within the estimation of the first correction to the ring approximation) the CEP may or may not exist. (orig.)

  2. Sensitivity of predictions in an effective model: Application to the chiral critical end point position in the Nambu-Jona-Lasinio model

    Energy Technology Data Exchange (ETDEWEB)

    Biguet, Alexandre; Hansen, Hubert; Brugiere, Timothee [Universite Claude Bernard de Lyon, Institut de Physique Nucleaire de Lyon, CNRS/IN2P3, Villeurbanne Cedex (France); Costa, Pedro [Universidade de Coimbra, Centro de Fisica Computacional, Departamento de Fisica, Coimbra (Portugal); Borgnat, Pierre [CNRS, l' Ecole normale superieure de Lyon, Laboratoire de Physique, Lyon Cedex 07 (France)

    2015-09-15

    The measurement of the position of the chiral critical end point (CEP) in the QCD phase diagram is under debate. While it is possible to predict its position by using effective models specifically built to reproduce some of the features of the underlying theory (QCD), the quality of the predictions (e.g., the CEP position) obtained by such effective models depends on whether solving the model equations constitutes a well- or ill-posed inverse problem. Considering these predictions as inverse problems provides tools to evaluate whether the problem is ill-conditioned, meaning that infinitesimal variations of the inputs of the model can cause comparatively large variations of the predictions. If it is ill-conditioned, this has major consequences because of the finite variations that could come from experimental and/or theoretical errors. In the following, we apply such reasoning to the predictions of a particular Nambu-Jona-Lasinio model within the mean field + ring approximations, with special attention to the prediction of the chiral CEP position in the (T-μ) plane. We find that the problem is ill-conditioned (i.e. very sensitive to input variations) for the T-coordinate of the CEP, whereas it is well-posed for the μ-coordinate of the CEP. As a consequence, when the chiral condensate varies in a 10 MeV range, μ_CEP varies far less. As an illustration of how problematic this could be, we show that the main consequence of taking into account finite variations of the inputs is that the existence of the CEP itself can no longer be predicted: for a deviation as low as 0.6% with respect to vacuum phenomenology (well within the estimation of the first correction to the ring approximation) the CEP may or may not exist. (orig.)
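The ill-conditioning argument can be made concrete with the relative condition number of a scalar input-to-prediction map, estimated by finite differences. The toy functions below merely stand in for the far more involved NJL-model equations:

```python
import math

# Relative condition number kappa = |x0 * f'(x0) / f(x0)| of a scalar
# prediction, with the derivative estimated by central finite differences.
# Large kappa: small relative input errors are amplified in the prediction.
# Toy maps only; they stand in for the NJL-model equations.

def relative_condition(f, x0, h=1e-6):
    deriv = (f(x0 + h) - f(x0 - h)) / (2.0 * h)
    return abs(x0 * deriv / f(x0))

well_conditioned = lambda x: 2.0 * x            # kappa = 1 for any x0
ill_conditioned = lambda x: math.exp(50.0 * x)  # kappa = 50 * x0

print(round(relative_condition(well_conditioned, 5.0), 3))  # 1.0
print(round(relative_condition(ill_conditioned, 1.0), 1))   # 50.0
```

For the ill-conditioned map, a 0.1% input error is amplified into a roughly 5% prediction error, which is the sense in which finite experimental errors can wash out a prediction.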

  3. Bayesian change-point analyses in ecology

    Science.gov (United States)

    Brian Beckage; Lawrence Joseph; Patrick Belisle; David B. Wolfson; William J. Platt

    2007-01-01

    Ecological and biological processes can change from one state to another once a threshold has been crossed in space or time. Threshold responses to incremental changes in underlying variables can characterize diverse processes from climate change to the desertification of arid lands from overgrazing.
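A Bayesian change-point analysis of the kind referenced can be sketched in its simplest form: one possible shift in mean, a Gaussian likelihood with known variance, a uniform prior over change locations, and plug-in segment means (a simplification of a full marginal-likelihood treatment):

```python
import math

# Minimal Bayesian change-point sketch: a series with at most one shift in
# mean, Gaussian likelihood with known sigma, uniform prior over change
# locations, and plug-in segment means (a simplification of a full
# marginal-likelihood treatment). Synthetic data, not the cited analyses.

def changepoint_posterior(data, sigma=1.0):
    n = len(data)
    log_post = []
    for k in range(1, n):  # change after the k-th observation
        ll = 0.0
        for seg in (data[:k], data[k:]):
            mu = sum(seg) / len(seg)  # plug-in segment mean
            ll += sum(-(x - mu) ** 2 / (2.0 * sigma ** 2) for x in seg)
        log_post.append(ll)
    m = max(log_post)
    w = [math.exp(l - m) for l in log_post]
    total = sum(w)
    return [v / total for v in w]  # posterior over k = 1 .. n-1

data = [0.1, -0.2, 0.0, 0.1, 3.0, 2.9, 3.1, 3.2]  # threshold crossed mid-series
post = changepoint_posterior(data)
print(post.index(max(post)) + 1)  # 4: shift detected after the 4th point
```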

  4. Surface Alpha Interactions in P-Type Point-Contact HPGe Detectors: Maximizing Sensitivity of 76Ge Neutrinoless Double-Beta Decay Searches

    Science.gov (United States)

    Gruszko, Julieta

    Though the existence of neutrino oscillations proves that neutrinos must have non-zero mass, Beyond-the-Standard-Model physics is needed to explain the origins of that mass. One intriguing possibility is that neutrinos are Majorana particles, i.e., they are their own anti-particles. Such a mechanism could naturally explain the observed smallness of the neutrino masses, and would have consequences that go far beyond neutrino physics, with implications for Grand Unification and leptogenesis. If neutrinos are Majorana particles, they could undergo neutrinoless double-beta decay (0νββ), a hypothesized rare decay in which two antineutrinos annihilate one another. This process, if it exists, would be exceedingly rare, with a half-life over 10^25 years. Therefore, searching for it requires experiments with extremely low background rates. One promising technique in the search for 0νββ is the use of P-type point-contact (P-PC) high-purity germanium (HPGe) detectors enriched in 76Ge, operated in large low-background arrays. This approach is used, with some key differences, by the MAJORANA and GERDA Collaborations. A problematic background in such large granular detector arrays is posed by alpha particles incident on the surfaces of the detectors, often caused by 222Rn contamination of parts or of the detectors themselves. In the MAJORANA DEMONSTRATOR, events have been observed that are consistent with energy-degraded alphas originating near the passivated surface of the detectors, leading to a potential background contribution in the region-of-interest for neutrinoless double-beta decay. However, it is also observed that when energy deposition occurs very close to the passivated surface, high charge trapping occurs along with subsequent slow charge re-release. This leads to both a reduced prompt signal and a measurable change in slope of the tail of a recorded pulse. Here we discuss the characteristics of these events and the development of a filter that can identify the

  5. Utility of ultrasound assisted-cloud point extraction and spectophotometry as a preconcentration and determination tool for the sensitive quantification of mercury species in fish samples

    Science.gov (United States)

    Altunay, Nail

    2018-01-01

    The current study reports, for the first time, the development of a new analytical method employing ultrasound assisted-cloud point extraction (UA-CPE) for the extraction of CH3Hg+ and Hg2+ species from fish samples. Detection and quantification of mercury species were performed at 550 nm by spectrophotometry. The analytical variables affecting complex formation and extraction efficiency were extensively evaluated and optimized by the univariate method. Because thiophene-2,5-dicarboxylic acid (H2TDC) is 14-fold more sensitive and selective toward Hg2+ ions than toward CH3Hg+ in the presence of the mixed surfactants Tween 20 and SDS at pH 5.0, the amounts of free Hg2+ and total Hg were established spectrophotometrically at 550 nm by monitoring Hg2+ in fish samples pretreated and extracted in an ultrasonic bath (to speed up extraction) using a diluted acid mixture (1:1:1, v/v, 4 mol L-1 HNO3, 4 mol L-1 HCl, and 0.5 mol L-1 H2O2), before and after pre-oxidation with permanganate in acidic media. The amount of CH3Hg+ was calculated from the difference between the total Hg and Hg2+ amounts. The UA-CPE method proved suitable for the extraction and determination of mercury species in certified reference materials. The results were in good agreement (Student's t-test at the 95% confidence limit) with the certified values, and the relative standard deviation was lower than 3.2%. The limits of detection were 0.27 and 1.20 μg L-1 for Hg2+ from aqueous calibration solutions and matrix-matched calibration solutions spiked before digestion, respectively, while it was 2.43 μg L-1 for CH3Hg+ from matrix-matched calibration solutions. No significant matrix effect was observed from a comparison of the slopes of the two calibration curves representing the sample matrix. The method was applied to fish samples for speciation analysis of Hg2+ and CH3Hg+. In terms of speciation, while total Hg is detected in range of 2.42-32.08 μg kg-1, the distribution of mercury in fish were in
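The speciation-by-difference step (CH3Hg+ = total Hg − Hg2+), gated by the reported CH3Hg+ detection limit, can be sketched as follows; the sample values are illustrative only:

```python
# Speciation by difference: CH3Hg+ is reported as total Hg minus free Hg2+,
# and only when the difference clears the method's CH3Hg+ detection limit.
# The limit below is the abstract's 2.43 ug/L figure; the sample
# concentrations are illustrative, not measured values.

LOD_CH3HG = 2.43  # detection limit for CH3Hg+, matrix-matched calibration

def methylmercury(total_hg, free_hg2):
    diff = round(total_hg - free_hg2, 2)
    return diff if diff >= LOD_CH3HG else None  # below LOD: not reportable

print(methylmercury(32.08, 20.0))  # 12.08
print(methylmercury(5.0, 4.0))     # None (difference below the LOD)
```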

  6. Global sensitivity analysis of thermomechanical models in modelling of welding; Analyse de sensibilite globale de modeles thermomecanique de simulation numerique du soudage

    Energy Technology Data Exchange (ETDEWEB)

    Petelet, M

    2008-07-01

    The current approach of most welding modellers is to content themselves with the available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range. This way of proceeding neglects the influence of the uncertainty of the input data on the result given by the computer code. In this case, how can the credibility of the prediction be assessed? This thesis is a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which range of temperature. Using this methodology required some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data were divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation, making it possible to reduce the input space to only the important variables. Sensitivity analysis has provided answers to what can be considered one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)
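Variance-based global sensitivity analysis of the kind applied in this thesis can be sketched with a first-order Sobol index estimated by the pick-freeze method. The model here is a toy linear function, not a welding simulation:

```python
import random

# First-order Sobol sensitivity index via the pick-freeze estimator:
# S1 = Cov(y, y_frozen) / Var(y), where y_frozen reuses x1 but redraws x2.
# For the toy linear model y = 4*x1 + x2 with independent uniform inputs,
# the analytic value is S1 = 16/17 ≈ 0.941. Toy model, not a welding code.

random.seed(1)

def model(x1, x2):
    return 4.0 * x1 + x2

def sobol_first_order(n=100000):
    ys, yf = [], []
    for _ in range(n):
        x1, x2, x2b = random.random(), random.random(), random.random()
        ys.append(model(x1, x2))
        yf.append(model(x1, x2b))  # same x1, fresh x2
    my = sum(ys) / n
    mf = sum(yf) / n
    cov = sum((a - my) * (b - mf) for a, b in zip(ys, yf)) / n
    var = sum((a - my) ** 2 for a in ys) / n
    return cov / var

s1 = sobol_first_order()
print(s1)  # close to the analytic 16/17 ≈ 0.941: x1 dominates the variance
```

A property with a first-order index near 1 must be measured accurately; one with an index near 0 can be extrapolated or borrowed from a similar material, which is exactly the dichotomy the thesis sets up.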

  7. Sensitivity of Coastal Environments and Wildlife to Spilled Oil: Puget Sound and Strait of Juan de Fuca, Washington: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for bald eagle, great blue heron, and seabird nesting sites in Puget Sound and Strait of Juan de Fuca,...

  8. Deterministic sensitivity analysis for the numerical simulation of contaminants transport; Analyse de sensibilite deterministe pour la simulation numerique du transfert de contaminants

    Energy Technology Data Exchange (ETDEWEB)

    Marchand, E

    2007-12-15

    The questions of safety and uncertainty are central to feasibility studies for an underground nuclear waste storage site, in particular the evaluation of uncertainties about safety indicators which are due to uncertainties concerning properties of the subsoil or of the contaminants. The global approach through probabilistic Monte Carlo methods gives good results, but it requires a large number of simulations. The deterministic method investigated here is complementary. Based on the Singular Value Decomposition of the derivative of the model, it gives only local information, but it is much less demanding in computing time. The flow model follows Darcy's law and the transport of radionuclides around the storage site follows a linear convection-diffusion equation. Manual and automatic differentiation are compared for these models using direct and adjoint modes. A comparative study of both probabilistic and deterministic approaches for the sensitivity analysis of fluxes of contaminants through outlet channels with respect to variations of input parameters is carried out with realistic data provided by ANDRA. Generic tools for sensitivity analysis and code coupling are developed in the Caml language. The user of these generic platforms has only to provide the specific part of the application in any language of his choice. We also present a study about two-phase air/water partially saturated flows in hydrogeology concerning the limitations of the Richards approximation and of the global pressure formulation used in petroleum engineering. (author)

  9. Production and repair of chromosome damage in an X-ray sensitive CHO mutant visualized and analysed in interphase using the technique of premature chromosome condensation

    International Nuclear Information System (INIS)

    Iliakis, G.E.; Pantelias, G.E.

    1990-01-01

    Production of chromosome damage per unit of absorbed radiation dose was larger in xrs-5 cells by a factor of 2.6 than in CHO cells (5.2 breaks per cell per Gy). Changes in chromatin structure associated with the radiation-sensitive phenotype of xrs-5 cells, which increase the probability of conversion of a DNA double-strand break (dsb) to a chromosome break, are invoked to explain this. Repair of chromosome breaks as measured in plateau-phase G1 cells was deficient in xrs-5 cells, and the number of residual chromosome breaks was practically identical to the number of lethal lesions calculated from survival data, suggesting that non-repaired chromosome breaks are likely to be manifestations of lethal events in the cell. The yield of ring chromosomes scored after a few hours of repair was higher by a factor of three in xrs-5 than in CHO cells. (author)

  10. Improving the Sensitivity and Functionality of Mobile Webcam-Based Fluorescence Detectors for Point-of-Care Diagnostics in Global Health

    Science.gov (United States)

    Rasooly, Reuven; Bruck, Hugh Alan; Balsam, Joshua; Prickril, Ben; Ossandon, Miguel; Rasooly, Avraham

    2016-01-01

    Resource-poor countries and regions require effective, low-cost diagnostic devices for accurate identification and diagnosis of health conditions. Optical detection technologies used for many types of biological and clinical analysis can play a significant role in addressing this need, but must be sufficiently affordable and portable for use in global health settings. Most current clinical optical imaging technologies are accurate and sensitive, but also expensive and difficult to adapt for use in these settings. These challenges can be mitigated by taking advantage of affordable consumer electronics mobile devices such as webcams, mobile phones, charge-coupled device (CCD) cameras, lasers, and LEDs. Low-cost, portable multi-wavelength fluorescence plate readers have been developed for many applications including detection of microbial toxins such as C. botulinum A neurotoxin, Shiga toxin, and S. aureus enterotoxin B (SEB), and flow cytometry has been used to detect very low cell concentrations. However, the relatively low sensitivities of these devices limit their clinical utility. We have developed several approaches to improve their sensitivity, presented here for webcam-based fluorescence detectors, including (1) image stacking to improve signal-to-noise ratios; (2) lasers to enable fluorescence excitation for flow cytometry; and (3) streak imaging to capture the trajectory of a single cell, enabling imaging sensors with high noise levels to detect rare cell events. These approaches can also help to overcome some of the limitations of other low-cost optical detection technologies such as CCD or phone-based detectors (like high noise levels or low sensitivities), and provide for their use in low-cost medical diagnostics in resource-poor settings. PMID:27196933
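The image-stacking idea, in which averaging N noisy frames improves the signal-to-noise ratio by roughly √N, can be sketched with simulated one-dimensional "frames"; the signal pattern and noise level are hypothetical:

```python
import math
import random

# Image stacking: averaging N noisy frames of the same scene reduces random
# noise by about sqrt(N). Simulated 1-D "frames"; the signal pattern, noise
# level, and frame count are hypothetical, not the device's actual data.

random.seed(0)
n_pix, n_frames, sigma = 200, 100, 2.0
signal = [10.0 if i % 2 == 0 else 0.0 for i in range(n_pix)]  # ideal image

def noisy_frame():
    return [s + random.gauss(0.0, sigma) for s in signal]

frames = [noisy_frame() for _ in range(n_frames)]
stacked = [sum(f[i] for f in frames) / n_frames for i in range(n_pix)]

def rms_error(img):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(img, signal)) / n_pix)

# Stacking 100 frames should cut RMS noise by roughly a factor of 10
print(rms_error(frames[0]) / rms_error(stacked) > 5)  # True
```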

  11. Improving the Sensitivity and Functionality of Mobile Webcam-Based Fluorescence Detectors for Point-of-Care Diagnostics in Global Health

    Directory of Open Access Journals (Sweden)

    Reuven Rasooly

    2016-05-01

    Full Text Available Resource-poor countries and regions require effective, low-cost diagnostic devices for accurate identification and diagnosis of health conditions. Optical detection technologies used for many types of biological and clinical analysis can play a significant role in addressing this need, but must be sufficiently affordable and portable for use in global health settings. Most current clinical optical imaging technologies are accurate and sensitive, but also expensive and difficult to adapt for use in these settings. These challenges can be mitigated by taking advantage of affordable consumer electronics mobile devices such as webcams, mobile phones, charge-coupled device (CCD) cameras, lasers, and LEDs. Low-cost, portable multi-wavelength fluorescence plate readers have been developed for many applications including detection of microbial toxins such as C. botulinum A neurotoxin, Shiga toxin, and S. aureus enterotoxin B (SEB), and flow cytometry has been used to detect very low cell concentrations. However, the relatively low sensitivities of these devices limit their clinical utility. We have developed several approaches to improve their sensitivity, presented here for webcam-based fluorescence detectors, including (1) image stacking to improve signal-to-noise ratios; (2) lasers to enable fluorescence excitation for flow cytometry; and (3) streak imaging to capture the trajectory of a single cell, enabling imaging sensors with high noise levels to detect rare cell events. These approaches can also help to overcome some of the limitations of other low-cost optical detection technologies such as CCD or phone-based detectors (like high noise levels or low sensitivities), and provide for their use in low-cost medical diagnostics in resource-poor settings.

  12. Point mutation of a conserved aspartate, D69, in the muscarinic M2 receptor does not modify voltage-sensitive agonist potency.

    Science.gov (United States)

    Ågren, Richard; Sahlholm, Kristoffer; Nilsson, Johanna; Århem, Peter

    2018-01-29

    The muscarinic M2 receptor (M2R) has been shown to display voltage-sensitive agonist binding, based on G protein-activated inward rectifier potassium channel (GIRK) opening and radioligand binding at different membrane voltages. A conserved aspartate in transmembrane segment (TM) II of the M2R, D69, has been proposed as the voltage sensor. While a recent paper instead presented evidence of tyrosines in TMs III, VI, and VII acting as voltage sensors, these authors were not able to record GIRK channel activation by a D69N mutant M2R. In the present study, we succeeded in recording ACh-induced GIRK channel activation by this mutant at -80 and 0 mV. The acetylcholine EC50 was about 2.5-fold higher at 0 mV, a potency shift very similar to that observed at wild-type M2R, indicating that voltage sensitivity persists at the D69N mutant. Thus, our present observations corroborate the notion that D69 is not responsible for the voltage sensitivity of the M2R. Copyright © 2018 Elsevier Inc. All rights reserved.
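The reported 2.5-fold potency shift can be visualized with a Hill-type concentration-response function; only the 2.5-fold EC50 ratio comes from the abstract, while the remaining parameter values are hypothetical:

```python
# Hill-type concentration-response sketch of the reported potency shift:
# a 2.5-fold higher EC50 at 0 mV lowers the response to a fixed ACh
# concentration. Only the 2.5-fold ratio comes from the abstract; the EC50,
# Hill slope, and test concentration below are hypothetical.

def hill(conc, ec50, n=1.0):
    return conc ** n / (conc ** n + ec50 ** n)

ec50_neg80 = 1.0             # hypothetical EC50 at -80 mV (arbitrary units)
ec50_0mv = 2.5 * ec50_neg80  # the 2.5-fold shift reported at 0 mV

ach = 1.0  # fixed test concentration
print(round(hill(ach, ec50_neg80), 3))  # 0.5
print(round(hill(ach, ec50_0mv), 3))    # 0.286
```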

  13. Area under the curve predictions of dalbavancin, a new lipoglycopeptide agent, using the end of intravenous infusion concentration data point by regression analyses such as linear, log-linear and power models.

    Science.gov (United States)

    Bhamidipati, Ravi Kanth; Syed, Muzeeb; Mullangi, Ramesh; Srinivas, Nuggehally

    2018-02-01

    1. Dalbavancin, a lipoglycopeptide, is approved for treating gram-positive bacterial infections. The area under the plasma concentration versus time curve (AUCinf) of dalbavancin is a key parameter, and the AUCinf/MIC ratio is a critical pharmacodynamic marker. 2. Using the end of intravenous infusion concentration (i.e. Cmax), the Cmax versus AUCinf relationship for dalbavancin was established by regression analyses (i.e. linear, log-log, log-linear and power models) using 21 pairs of subject data. 3. Predictions of AUCinf were performed using published Cmax data by application of the regression equations. The quotient of observed/predicted values rendered the fold difference. The mean absolute error (MAE)/root mean square error (RMSE) and correlation coefficient (r) were used in the assessment. 4. MAE and RMSE values for the various models were comparable. Cmax versus AUCinf exhibited excellent correlation (r > 0.9488). The internal data evaluation showed narrow confinement (0.84-1.14-fold difference); the models predicted AUCinf with a RMSE of 3.02-27.46%, with the fold difference largely contained within 0.64-1.48. 5. Regardless of the regression model, a single time point strategy of using Cmax (i.e. end of 30-min infusion) is amenable as a prospective tool for predicting the AUCinf of dalbavancin in patients.
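The power-model variant of the single-point prediction can be sketched as a log-space linear fit of AUC against Cmax that is then scored by fold differences (observed/predicted). The (Cmax, AUC) pairs below are synthetic, not the published subject data:

```python
import math

# Power-model sketch: fit AUC = a * Cmax**b by least squares on the
# log-transformed pairs, then score predictions by the observed/predicted
# fold difference. Synthetic (Cmax, AUC) pairs, not the published data.

def fit_power(cmax, auc):
    xs = [math.log(c) for c in cmax]
    ys = [math.log(v) for v in auc]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

cmax = [200.0, 250.0, 300.0, 350.0]   # hypothetical end-of-infusion levels
auc = [1.2 * c ** 1.1 for c in cmax]  # exact power law for the demo
a, b = fit_power(cmax, auc)
pred = [a * c ** b for c in cmax]
fold = [o / p for o, p in zip(auc, pred)]
print(all(0.99 < f < 1.01 for f in fold))  # True on noise-free data
```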

  14. Sensitivity of surface roughness parameters to changes in the density of scanning points in multi-scale AFM studies. Application to a biomaterial surface

    International Nuclear Information System (INIS)

    Mendez-Vilas, A.; Bruque, J.M.; Gonzalez-Martin, M.L.

    2007-01-01

    In the field of biomaterial surfaces, the ability of the atomic force microscope (AFM) to access the surface structure at unprecedented spatial (vertical and lateral) resolution is helping to build a better understanding of how topography affects the overall interaction of biological cells with the material surface. Since cells in a wide range of sizes are in contact with the biomaterial surface, a quantification of the surface structure over an equally wide range of dimensional scales is needed. With the advent of the AFM, this can be routinely done in the lab. In this work, we show that even though such a scale-dependent study is clearly needed, AFM maps of the biomaterial surface taken at different scanning lengths are not completely consistent when they are taken at the same scanning resolution, as is usually done: AFM images of different scanning areas have different point-to-point physical distances. We show that this effect influences the quantification of the average (Ra) and rms (Rq) roughness parameters determined at different length scales. This is the first time this inconsistency has been reported, and it should be taken into account when roughness is measured in this way. Since the differences will generally be in the range of nanometres, this is especially relevant for processes involving the interaction of the biomaterial surface with small biocolloids such as bacteria, while the effect should not pose any problem for larger animal cells.
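The two roughness parameters at issue, Ra (average) and Rq (root-mean-square), are defined relative to the mean height of the profile. A minimal sketch on a toy height profile:

```python
import math

# Average (Ra) and root-mean-square (Rq) roughness of a height profile,
# both taken relative to the mean height. Toy profile; definitions only.

def roughness(z):
    n = len(z)
    zbar = sum(z) / n
    ra = sum(abs(v - zbar) for v in z) / n
    rq = math.sqrt(sum((v - zbar) ** 2 for v in z) / n)
    return ra, rq

profile = [1.0, -1.0, 1.0, -1.0]  # nm, square-wave heights about the mean
ra, rq = roughness(profile)
print(ra, rq)  # 1.0 1.0 (Ra equals Rq only for such symmetric profiles)
```

In general Rq ≥ Ra, since the squared deviations weight outliers more heavily; both depend on the sampling density, which is the inconsistency the abstract highlights.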

  15. Climate Sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Lindzen, Richard [M.I.T.

    2011-11-09

    Warming observed thus far is entirely consistent with low climate sensitivity. However, the result is ambiguous because the sources of climate change are numerous and poorly specified. Model predictions of substantial warming are dependent on positive feedbacks associated with upper-level water vapor and clouds, but models are notably inadequate in dealing with clouds, and the impacts of clouds and water vapor are intimately intertwined. Various approaches to measuring sensitivity based on the physics of the feedbacks will be described. The results thus far point to negative feedbacks. Problems with these approaches, as well as problems with the concept of climate sensitivity itself, will be described.
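Why the sign of the net feedback matters so much can be seen from the textbook zero-dimensional feedback relation (this is generic feedback arithmetic, not Lindzen's analysis): if the no-feedback response to doubled CO2 is dT0, the equilibrium response with net feedback factor f is dT = dT0 / (1 - f).

```python
# Textbook zero-dimensional feedback arithmetic: a positive f amplifies the
# no-feedback warming, a net negative feedback (f < 0) damps it.
def equilibrium_warming(dt0, f):
    if f >= 1.0:
        raise ValueError("f must be < 1 (otherwise the response runs away)")
    return dt0 / (1.0 - f)

dt0 = 1.2  # K per CO2 doubling, a commonly quoted no-feedback value (assumption)
for f in (-0.5, 0.0, 0.5):
    print(f"f = {f:+.1f}  ->  dT = {equilibrium_warming(dt0, f):.2f} K")
```

With these illustrative numbers, f = +0.5 doubles the response to 2.4 K while f = -0.5 damps it to 0.8 K, which is why the measured sign of the cloud and water-vapor feedbacks dominates the sensitivity estimate.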

  16. To Fill or Not to Fill: Sensitivity Analysis of the Influence of Resolution and Hole Filling on Point Cloud Surface Modeling and Individual Rockfall Event Detection

    Directory of Open Access Journals (Sweden)

    Michael J. Olsen

    2015-09-01

    Full Text Available Monitoring unstable slopes with terrestrial laser scanning (TLS) has been proven effective. However, end users still struggle immensely with the efficient processing, analysis, and interpretation of the massive and complex TLS datasets. Two recent advances described in this paper now improve the ability to work with TLS data acquired on steep slopes. The first is the improved processing of TLS data to model complex topography and fill holes. This processing step results in a continuous topographic surface model that seamlessly characterizes the rock and soil surface. The second is an advance in the automated interpretation of the surface model such that a magnitude and frequency relationship of rockfall events can be quantified, which can be used to assess maintenance strategies and forecast costs. The approach is applied to unstable highway slopes in the state of Alaska, U.S.A., to evaluate its effectiveness. Further, the influence of the selected model resolution and degree of hole filling on the derived slope metrics was analyzed. In general, model resolution plays a pivotal role in the ability to detect smaller rockfall events when developing magnitude-frequency relationships. Total volume estimates are also influenced by model resolution, but were comparatively less sensitive. Hole filling had a noticeable effect on magnitude-frequency relationships, though to a lesser extent than modeling resolution, and yielded a modest increase in overall volumetric quantity estimates. Optimal analysis results occur when high modeling resolution is balanced with an appropriate level of hole filling.
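The resolution effect on the magnitude-frequency relationship can be sketched with a toy example (all volumes hypothetical): a cumulative curve counts events at or above each volume threshold, and a coarser surface model acts as a detection floor that truncates the small-magnitude end while leaving the large events, and hence most of the total volume, nearly unchanged.

```python
# Cumulative magnitude-frequency curve from detected rockfall volumes (m^3),
# and the effect of a detection floor set by model resolution.
volumes = [0.02, 0.05, 0.05, 0.1, 0.2, 0.3, 0.8, 1.5, 4.0, 12.0]

def cumulative_counts(vols, thresholds):
    """Number of detected events with volume >= each threshold."""
    return [sum(1 for v in vols if v >= t) for t in thresholds]

thresholds = [0.01, 0.1, 1.0, 10.0]
fine = cumulative_counts(volumes, thresholds)
# Suppose a coarser surface model cannot resolve events below 0.1 m^3:
coarse = cumulative_counts([v for v in volumes if v >= 0.1], thresholds)
print("fine model:  ", fine)
print("coarse model:", coarse)
```

The two curves agree for large thresholds but diverge at small magnitudes, mirroring the paper's finding that resolution mainly affects the detection of smaller events.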

  17. On-line complexation/cloud point preconcentration for the sensitive determination of dysprosium in urine by flow injection inductively coupled plasma-optical emission spectrometry

    International Nuclear Information System (INIS)

    Ortega, Claudia; Cerutti, Soledad; Silva, Maria F.; Olsina, Roberto A.; Martinez, Luis D.

    2003-01-01

    An on-line dysprosium preconcentration and determination system based on the hyphenation of cloud point extraction (CPE) with flow injection analysis (FIA) associated with ICP-OES was studied. For the preconcentration of dysprosium, a Dy(III)-2-(5-bromo-2-pyridylazo)-5-diethylaminophenol complex was formed on-line at pH 9.22 in the presence of nonionic micelles of PONPE-7.5. The micellar system containing the complex was thermostated at 30 °C in order to promote phase separation, and the surfactant-rich phase was retained in a microcolumn packed with cotton at pH 9.2. The surfactant-rich phase was eluted with 4 mol L-1 nitric acid at a flow rate of 1.5 mL min-1, directly into the nebulizer of the plasma. An enhancement factor of 50 was obtained for the preconcentration of 50 mL of sample solution. The detection limit for the preconcentration of 50 mL of aqueous Dy solution was 0.03 μg L-1. The precision for 10 replicate determinations at the 2.0 μg L-1 Dy level was 2.2% relative standard deviation (RSD), calculated from the peak heights obtained. The calibration graph using the preconcentration system for dysprosium was linear, with a correlation coefficient of 0.9994, from levels near the detection limit up to at least 100 μg L-1. The method was successfully applied to the determination of dysprosium in urine. (orig.)
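Detection limits and calibration figures of merit of this kind are typically derived from a linear calibration and a 3-sigma blank criterion. The sketch below uses invented calibration numbers (not the paper's data) chosen so the arithmetic lands near the reported 0.03 μg/L:

```python
import math, statistics

# Hypothetical linear calibration: emission intensity vs. Dy concentration.
conc = [0.0, 1.0, 5.0, 10.0, 50.0, 100.0]           # Dy, ug/L
signal = [2.0, 52.0, 255.0, 498.0, 2504.0, 5010.0]  # intensity (a.u.)

n = len(conc)
mx, my = statistics.mean(conc), statistics.mean(signal)
sxx = sum((x - mx) ** 2 for x in conc)
syy = sum((y - my) ** 2 for y in signal)
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
slope = sxy / sxx
intercept = my - slope * mx
r = sxy / math.sqrt(sxx * syy)  # correlation coefficient of the calibration

blank_sd = 0.5  # hypothetical standard deviation of the blank signal
lod = 3.0 * blank_sd / slope    # 3-sigma detection limit, concentration units
print(f"slope = {slope:.2f}, r = {r:.5f}, LOD = {lod:.3f} ug/L")
```

The on-line CPE step enters this picture through the enhancement factor: preconcentrating 50 mL into the surfactant-rich phase multiplies the effective slope, which is what pushes the detection limit down to the sub-μg/L level.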

  18. WE-DE-201-11: Sensitivity and Specificity of Verification Methods Based On Total Reference Air Kerma (TRAK) Or On User Provided Dose Points for Graphically Planned Skin HDR Brachytherapy

    International Nuclear Information System (INIS)

    Damato, A; Devlin, P; Bhagwat, M; Buzurovic, I; Hansen, J; O’Farrell, D; Cormack, R

    2016-01-01

    Purpose: To investigate the sensitivity and specificity of a novel verification methodology for image-guided skin HDR brachytherapy plans using a TRAK-based reasonableness test, compared to a typical manual verification methodology. Methods: Two methodologies were used to flag treatment plans necessitating additional review due to a potential discrepancy of 3 mm between planned dose and clinical target in the skin. Manual verification was used to calculate the discrepancy between the average dose to points positioned at time of planning, representative of the prescribed depth, and the expected prescription dose. Automatic verification was used to calculate the discrepancy between the TRAK of the clinical plan and its expected value, which was calculated using standard plans with varying curvatures, ranging from flat to cylindrically circumferential. A plan was flagged if a discrepancy >10% was observed. Sensitivity and specificity were calculated using as the criterion for a true positive that >10% of plan dwells had a distance to the prescription dose >1 mm different from the prescription depth (3 mm + size of applicator). All HDR image-based skin brachytherapy plans treated at our institution in 2013 were analyzed. Results: 108 surface applicator plans to treat skin of the face, scalp, limbs, feet, hands or abdomen were analyzed. The median number of catheters was 19 (range, 4 to 71) and the median number of dwells was 257 (range, 20 to 1100). Sensitivity/specificity were 57%/78% for manual and 70%/89% for automatic verification. Conclusion: A check based on the expected TRAK value is feasible for irregularly shaped, image-guided skin HDR brachytherapy. This test yielded higher sensitivity and specificity than a test based on the identification of representative points, and can be implemented with a dedicated calculation code or with pre-calculated lookup tables of ideally shaped, uniform surface applicators.
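The sensitivity/specificity figures above follow from the standard confusion-matrix definitions: a "positive" plan is one that is truly deficient, a "flagged" plan is one the verification check marks for review. A minimal sketch with hypothetical plan data:

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
def sensitivity_specificity(flags, truths):
    tp = sum(1 for f, t in zip(flags, truths) if f and t)
    tn = sum(1 for f, t in zip(flags, truths) if not f and not t)
    fp = sum(1 for f, t in zip(flags, truths) if f and not t)
    fn = sum(1 for f, t in zip(flags, truths) if not f and t)
    return tp / (tp + fn), tn / (tn + fp)

# 20 hypothetical plans: 10 truly deficient (7 of them flagged by the check),
# 10 acceptable (1 of them flagged).
truths = [True] * 10 + [False] * 10
flags = [True] * 7 + [False] * 3 + [True] * 1 + [False] * 9
sens, spec = sensitivity_specificity(flags, truths)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```

With these made-up counts the check detects 7 of the 10 deficient plans (70% sensitivity) and correctly passes 9 of the 10 acceptable ones (90% specificity), numbers of the same order as the automatic TRAK check reported above.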

  20. The hemispherical deflector analyser revisited

    Energy Technology Data Exchange (ETDEWEB)

    Benis, E.P. [Institute of Electronic Structure and Laser, P.O. Box 1385, 71110 Heraklion, Crete (Greece)], E-mail: benis@iesl.forth.gr; Zouros, T.J.M. [Institute of Electronic Structure and Laser, P.O. Box 1385, 71110 Heraklion, Crete (Greece); Department of Physics, University of Crete, P.O. Box 2208, 71003 Heraklion, Crete (Greece)

    2008-04-15

    Using the basic spectrometer trajectory equation for motion in an ideal 1/r potential derived in Eq. (101) of part I [T.J.M. Zouros, E.P. Benis, J. Electron Spectrosc. Relat. Phenom. 125 (2002) 221], the operational characteristics of a hemispherical deflector analyser (HDA) such as dispersion, energy resolution, energy calibration, input lens magnification and energy acceptance window are investigated from first principles. These characteristics are studied as a function of the entry point R0 and the nominal value of the potential V(R0) at entry. Electron-optics simulations and actual laboratory measurements are compared to our theoretical results for an ideal biased paracentric HDA using a four-element zoom lens and a two-dimensional position sensitive detector (2D-PSD). These results should be of particular interest to users of modern HDAs utilizing a PSD.
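The trajectory physics underlying the HDA can be illustrated with a minimal numerical sketch (not the authors' analytical treatment): a particle moving in an ideal attractive 1/r potential, V(r) = -k/r, the field form an ideal hemispherical deflector approximates, integrated with a leapfrog scheme and checked for energy conservation. All numerical values are arbitrary illustrative choices.

```python
import math

k, m, dt = 1.0, 1.0, 1e-4

# Start at r = 1 with a tangential velocity below circular speed, giving a
# bound, elliptical (Kepler-like) trajectory.
x, y = 1.0, 0.0
vx, vy = 0.0, 0.8

def accel(x, y):
    """Acceleration from the central force -dV/dr, directed toward the origin."""
    r3 = (x * x + y * y) ** 1.5
    return -k * x / (m * r3), -k * y / (m * r3)

def energy(x, y, vx, vy):
    return 0.5 * m * (vx * vx + vy * vy) - k / math.hypot(x, y)

e0 = energy(x, y, vx, vy)
ax, ay = accel(x, y)
for _ in range(50_000):  # leapfrog (velocity Verlet) integration
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    x += dt * vx; y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay

drift = abs(energy(x, y, vx, vy) - e0) / abs(e0)
print(f"relative energy drift over {50_000 * dt:.1f} time units: {drift:.2e}")
```

The negative total energy confirms a bound orbit; the tiny energy drift of the symplectic integrator makes such a sketch a reasonable sandbox for exploring how entry radius and entry potential shift the trajectory.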

  2. Utilization of the ex vivo LLNA: BrdU-ELISA to distinguish the sensitizers from irritants in respect of 3 end points: lymphocyte proliferation, ear swelling, and cytokine profiles.

    Science.gov (United States)

    Arancioglu, Seren; Ulker, Ozge Cemiloglu; Karakaya, Asuman

    2015-01-01

    Dermal exposure to chemicals may result in allergic or irritant contact dermatitis. In this study, we performed the ex vivo local lymph node assay: bromodeoxyuridine-enzyme-linked immunosorbent assay (LLNA: BrdU-ELISA) to compare the differences between the irritation and sensitization potency of some chemicals in terms of 3 end points: lymphocyte proliferation, cytokine profiles (interleukin 2 [IL-2], interferon-γ [IFN-γ], IL-4, IL-5, IL-1, and tumor necrosis factor α [TNF-α]), and ear swelling. Different concentrations of the following well-known sensitizers and irritants were applied to mice: dinitrochlorobenzene, eugenol, isoeugenol, sodium lauryl sulfate (SLS), and croton oil. According to the lymph node results, the auricular lymph node weights and lymph node cell counts increased after application of both sensitizers and irritants at high concentrations. On the other hand, according to the lymph node cell proliferation results, there was a 3-fold increase in the proliferation of lymph node cells (stimulation index) for the sensitizer chemicals and SLS at the applied concentrations, whereas there was not a 3-fold increase for croton oil and the negative control. SLS gave a false-positive response. Cytokine analysis demonstrated that 4 cytokines, IL-2, IFN-γ, IL-4, and IL-5, were released in lymph node cell cultures with a clear dose trend for sensitizers, whereas only TNF-α was released in response to irritants. Taken together, our results suggest that the ex vivo LLNA: BrdU-ELISA method can be useful for discriminating irritants and allergens. © The Author(s) 2015.
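The 3-fold stimulation index (SI) decision rule used above can be sketched as follows, with hypothetical proliferation measurements (treated vs. vehicle control); the group names and counts are illustrative only:

```python
# SI = proliferation in the treated group / proliferation in the control group;
# the LLNA convention treats SI >= 3 as a sensitizer-like response.
SI_THRESHOLD = 3.0

def stimulation_index(treated, control):
    return treated / control

def classify(si):
    return "sensitizer-like" if si >= SI_THRESHOLD else "negative"

# Hypothetical BrdU incorporation values (treated, control).
groups = {"DNCB (sensitizer)": (9000.0, 2000.0),
          "croton oil (irritant)": (4000.0, 2000.0),
          "vehicle": (2100.0, 2000.0)}
for name, (treated, control) in groups.items():
    si = stimulation_index(treated, control)
    print(f"{name}: SI = {si:.1f} -> {classify(si)}")
```

In this toy example the irritant raises proliferation (SI = 2.0) without crossing the 3-fold threshold, which is exactly the separation the abstract reports for croton oil, while the SLS false positive shows that the SI end point alone is not sufficient and the cytokine profile is needed.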

  3. Tipping Point

    Medline Plus

    Full Text Available The Tipping Point, by CPSC Blogger, September 22, 2009. Watch the video in Adobe Flash.

  4. Fixed Points

    Indian Academy of Sciences (India)

    Fixed Points - From Russia with Love - A Primer of Fixed Point Theory. A K Vijaykumar. Book Review, Resonance – Journal of Science Education, Volume 5, Issue 5, May 2000, pp 101-102.

  7. Fit of experimental points to the sum of two (or one) exponentials with background. Program for ODRA 1305 computer. Part 2: for time analysers with constant dead time after each registered pulse (AC-256 type)

    International Nuclear Information System (INIS)

    Drozdowicz, K.; Krynicka-Drozdowicz, E.

    1979-01-01

    The LAMA program (in FORTRAN 1900), which fits a set of decaying experimental values to the sum of two (or one) exponentials with background, is described. The method of calculation, its accuracy and the interpretation of the program results are given. The changes and extensions of the calculation that account for the dead-time effect in time analysers having a constant dead time after each registered pulse are described. (author)
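One simple way to perform the fit that LAMA addresses (this is a generic variable-projection sketch with synthetic data, not the program's actual algorithm) exploits the fact that for fixed decay constants the two amplitudes and the background enter linearly, so a small linear solve plus a grid search over the decay constants suffices:

```python
import math, random

# Fit y(t) = A*exp(-l1*t) + B*exp(-l2*t) + C to noisy synthetic decay data.
random.seed(1)
ts = [0.05 * i for i in range(200)]
ys = [100.0 * math.exp(-2.0 * t) + 40.0 * math.exp(-0.3 * t) + 5.0
      + random.gauss(0.0, 0.5) for t in ts]

def lstsq3(basis, y):
    """Least-squares coefficients for three basis vectors (normal equations)."""
    M = [[sum(bi * bj for bi, bj in zip(basis[i], basis[j])) for j in range(3)]
         for i in range(3)]
    v = [sum(bi * yi for bi, yi in zip(basis[i], y)) for i in range(3)]
    for c in range(3):  # Gaussian elimination with partial pivoting
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        v[c], v[p] = v[p], v[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * b for a, b in zip(M[r], M[c])]
            v[r] -= f * v[c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (v[r] - sum(M[r][j] * x[j] for j in range(r + 1, 3))) / M[r][r]
    return x

best = None
for l1 in (1.0, 1.5, 2.0, 2.5, 3.0):      # grid over the fast decay constant
    for l2 in (0.1, 0.2, 0.3, 0.4, 0.5):  # grid over the slow decay constant
        basis = ([math.exp(-l1 * t) for t in ts],
                 [math.exp(-l2 * t) for t in ts],
                 [1.0] * len(ts))
        A, B, C = lstsq3(basis, ys)
        sse = sum((A * b1 + B * b2 + C - y) ** 2
                  for b1, b2, y in zip(basis[0], basis[1], ys))
        if best is None or sse < best[0]:
            best = (sse, l1, l2, A, B, C)

_, l1, l2, A, B, C = best
print(f"l1 = {l1}, l2 = {l2}, A = {A:.1f}, B = {B:.1f}, background C = {C:.1f}")
```

The dead-time correction LAMA implements would enter before this step, by correcting the measured counts for losses after each registered pulse; the fitting machinery itself is unchanged.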

  8. Dew Point

    OpenAIRE

    Goldsmith, Shelly

    1999-01-01

    Dew Point was a solo exhibition originating at PriceWaterhouseCoopers Headquarters Gallery, London, UK and toured to the Centre de Documentacio i Museu Textil, Terrassa, Spain and Gallery Aoyama, Tokyo, Japan.

  14. Hybrid Capture-Based Comprehensive Genomic Profiling Identifies Lung Cancer Patients with Well-Characterized Sensitizing Epidermal Growth Factor Receptor Point Mutations That Were Not Detected by Standard of Care Testing.

    Science.gov (United States)

    Suh, James H; Schrock, Alexa B; Johnson, Adrienne; Lipson, Doron; Gay, Laurie M; Ramkissoon, Shakti; Vergilio, Jo-Anne; Elvin, Julia A; Shakir, Abdur; Ruehlman, Peter; Reckamp, Karen L; Ou, Sai-Hong Ignatius; Ross, Jeffrey S; Stephens, Philip J; Miller, Vincent A; Ali, Siraj M

    2018-03-14

    In our recent study, of cases positive for epidermal growth factor receptor (EGFR) exon 19 deletions using comprehensive genomic profiling (CGP), 17/77 (22%) patients with prior standard of care (SOC) EGFR testing results available were previously negative for exon 19 deletion. Our aim was to compare the detection rates of CGP versus SOC testing for well-characterized sensitizing EGFR point mutations (pm) in our 6,832-patient cohort. DNA was extracted from 40 microns of formalin-fixed paraffin-embedded sections from 6,832 consecutive cases of non-small cell lung cancer (NSCLC) of various histologies (2012-2015). CGP was performed using a hybrid capture, adaptor ligation-based next-generation sequencing assay to a mean coverage depth of 576×. Genomic alterations (pm, small indels, copy number changes and rearrangements) involving EGFR were recorded for each case and compared with prior testing results if available. Overall, there were 482 instances of EGFR exon 21 L858R (359) and L861Q (20), exon 18 G719X (73) and exon 20 S768I (30) pm, of which 103 unique cases had prior EGFR testing results that were available for review. Of these 103 cases, CGP identified 22 patients (21%) with sensitizing EGFR pm that were not detected by SOC testing, including 9/75 (12%) patients with L858R, 4/7 (57%) patients with L861Q, 8/20 (40%) patients with G719X, and 4/7 (57%) patients with S768I pm (some patients had multiple EGFR pm). In cases with available clinical data, benefit from small molecule inhibitor therapy was observed. CGP, even when applied to low tumor purity clinical-grade specimens, can detect well-known EGFR pm in NSCLC patients that would otherwise not be detected by SOC testing. Taken together with EGFR exon 19 deletions, over 20% of patients who are positive for EGFR-activating mutations using CGP are previously negative by SOC EGFR mutation testing, suggesting that thousands of such patients per year in the U.S. alone could experience improved clinical
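The detection-gap percentages quoted in the abstract can be reproduced directly from the reported counts:

```python
# Per-mutation counts from the abstract:
# (missed by SOC but found by CGP, cases with prior SOC results available).
missed = {
    "L858R": (9, 75),
    "L861Q": (4, 7),
    "G719X": (8, 20),
    "S768I": (4, 7),
}
for pm, (miss, total) in missed.items():
    print(f"{pm}: {100 * miss / total:.0f}% missed by SOC")

overall = 100 * 22 / 103
print(f"overall: {overall:.0f}% of the 103 cases with prior SOC results")
```

The rounding confirms the stated 12%, 57%, 40%, 57% per-mutation rates and the 21% overall figure.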

  15. Publication point indicators

    DEFF Research Database (Denmark)

    Elleby, Anita; Ingwersen, Peter

    2010-01-01

    The paper presents comparative analyses of two publication point systems, the Norwegian one and the in-house system from the interdisciplinary Danish Institute for International Studies (DIIS), used as the case in the study for publications published in 2006, and compares central citation-based indicators with novel publication point indicators (PPIs) that are formalized and exemplified: the Cumulated Publication Point Indicator (CPPI), which graphically illustrates the cumulated gain of obtained vs. ideal points, both seen as vectors; and the normalized Cumulated Publication Point Index (nCPPI), which represents the cumulated gain of publication success as index values, either graphically... Two diachronic citation windows are applied: 2006-07 and 2006-08. Web of Science (WoS) as well as Google Scholar (GS) are applied to observe the cite delay and citedness for the different document types published by DIIS...
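The CPPI/nCPPI idea can be sketched under an explicit assumption (the paper gives the exact formalization, which is not reproduced here): at each rank, cumulate the publication points actually obtained and the ideal (maximum attainable) points, and take their ratio as the normalized index. All point values below are hypothetical.

```python
# Hypothetical publication points, obtained vs. ideal ranking.
obtained = [5.0, 3.0, 1.0, 1.0, 0.7]
ideal = [5.0, 5.0, 3.0, 3.0, 1.0]

def cumulate(xs):
    """Running (cumulated) sums, the 'vectors' of the CPPI picture."""
    out, s = [], 0.0
    for x in xs:
        s += x
        out.append(s)
    return out

cppi_obtained, cppi_ideal = cumulate(obtained), cumulate(ideal)
# Assumed normalization: cumulated obtained points / cumulated ideal points.
ncppi = [o / i for o, i in zip(cppi_obtained, cppi_ideal)]
print([round(v, 3) for v in ncppi])
```

Under this assumed normalization the index starts at 1 when the top-ranked publication achieves the ideal score and decays as the cumulated obtained points fall behind the ideal vector.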

  16. Sensitivity and uncertainty analysis

    CERN Document Server

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c

  17. Analysis of Le web 2.0 en classe de langue – A theoretical reflection and practical activities to take stock

    Directory of Open Access Journals (Sweden)

    Maud Ciekanski

    2012-12-01

    Full Text Available 1. Scope of the work. The book Le web 2.0 en classe de langue, co-written by Christian Ollivier and Laurent Puren of the Université de La Réunion, both experts in teacher training and in the use of ICT (information and communication technologies) for language learning, sets out to take stock of what the authors call "the web 2.0 phenomenon" so as to enable language teachers to enrich their practices by discovering...

  18. Sensitive Media

    Directory of Open Access Journals (Sweden)

    Malinowska Anna

    2017-12-01

    Full Text Available The paper engages with what we refer to as "sensitive media," a concept associated with developments in the overall media environment, our relationships with media devices, and the quality of the media themselves. Those developments point to the increasing emotionality of the media world and its infrastructures. Mapping the trajectories of technological development and the impact that newer media exert on the human condition, our analysis touches upon various forms of emergent affect, emotion, and feeling in order to trace the histories and motivations of the sensitization of "the media things," as well as the redefinition of our affective and emotional experiences through technologies that themselves "feel."

  19. Hawaii ESI: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for seabird nesting colonies in coastal Hawaii. Vector points in this data set represent locations of...

  20. Virginia ESI: REPTPT (Reptile Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for sea turtles in Virginia. Vector points in this data set represent nesting sites. Species-specific...

  1. Maryland ESI: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for raptors in Maryland. Vector points in this data set represent bird nesting sites. Species-specific...

  2. Louisiana ESI: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for seabird and wading bird nesting colonies in coastal Louisiana. Vector points in this data set represent...

  3. In situ sulfur isotope (δ34S and δ33S) analyses in sulfides and elemental sulfur using high sensitivity cones combined with the addition of nitrogen by laser ablation MC-ICP-MS

    Energy Technology Data Exchange (ETDEWEB)

    Fu, Jiali [State Key Laboratory of Geological Processes and Mineral Resources, China University of Geosciences, Wuhan 430074 (China); Hu, Zhaochu, E-mail: zchu@vip.sina.com [State Key Laboratory of Geological Processes and Mineral Resources, China University of Geosciences, Wuhan 430074 (China); The Beijing SHRIMP Center, Institute of Geology, Chinese Academy of Geological Sciences, Beijing 102206 (China); Zhang, Wen [State Key Laboratory of Geological Processes and Mineral Resources, China University of Geosciences, Wuhan 430074 (China); Yang, Lu [State Key Laboratory of Geological Processes and Mineral Resources, China University of Geosciences, Wuhan 430074 (China); National Research Council Canada, 1200 Montreal Rd., Ottawa, Ontario K1A 0R6 (Canada); Liu, Yongsheng; Li, Ming; Zong, Keqing; Gao, Shan; Hu, Shenghong [State Key Laboratory of Geological Processes and Mineral Resources, China University of Geosciences, Wuhan 430074 (China)

    2016-03-10

    The sulfur isotope system is an important geochemical tracer in diverse fields of the geosciences. In this study, the effects of three different cone combinations, with the addition of N2, on the performance of in situ S isotope analyses were investigated in detail. The signal intensities of S isotopes were improved by a factor of 2.3 and 3.6 using the X skimmer cone combined with the standard sample cone or the Jet sample cone, respectively, compared with the standard arrangement (H skimmer cone combined with the standard sample cone). This signal enhancement is important for improving the precision and accuracy of in situ S isotope analysis at high spatial resolution. Different cone combinations have a significant effect on the mass bias and mass bias stability for S isotopes. Poor precision of S isotope ratios was obtained using the Jet and X cone combination at its corresponding optimum makeup gas flow when using the Ar plasma only. The addition of 4-8 ml min-1 nitrogen to the central gas flow in laser ablation MC-ICP-MS was found to significantly enlarge the mass bias stability zone at the corresponding optimum makeup gas flow for the three different cone combinations. The polyatomic interferences of OO, SH and OOH were also significantly reduced, and the interference-free plateaus of the sulfur isotopes became broader and flatter in the nitrogen mode (N2 = 4 ml min-1). However, the signal intensity of S was not increased by the addition of nitrogen in this study. The laser fluence and ablation mode had significant effects on sulfur isotope fractionation during the analysis of sulfides and elemental sulfur by laser ablation MC-ICP-MS. A matrix effect among different sulfides and elemental sulfur was observed, but could be significantly reduced by line scan ablation in preference to single spot ablation under the optimized fluence. It is recommended that the d90 values of the particles in pressed powder pellets for accurate
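The δ34S values such measurements report follow the standard delta notation (textbook usage, not specific to this paper): the per mil deviation of a measured 34S/32S ratio from the VCDT reference. The VCDT ratio below is the commonly quoted value and is an assumption here.

```python
R_VCDT = 0.0441626  # commonly quoted 34S/32S ratio of the VCDT standard (assumption)

def delta34S(r_sample, r_ref=R_VCDT):
    """delta34S in per mil for a measured 34S/32S isotope ratio."""
    return (r_sample / r_ref - 1.0) * 1000.0

print(f"{delta34S(0.0450):.2f} per mil")  # a sample slightly enriched in 34S
print(f"{delta34S(R_VCDT):.2f} per mil")  # the reference itself is 0 by definition
```

δ33S is defined analogously against the 33S/32S reference ratio; the mass-bias and interference effects discussed in the abstract matter precisely because per mil accuracy requires ratio measurements at the fourth decimal place.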

  4. Publication point indicators

    DEFF Research Database (Denmark)

    Elleby, Anita; Ingwersen, Peter

    2010-01-01

    The paper presents comparative analyses of two publication point systems, the Norwegian one and the in-house system from the interdisciplinary Danish Institute for International Studies (DIIS), used as the case in the study for publications published in 2006, and compares central citation-based indicators with novel publication point indicators (PPIs) that are formalized and exemplified. Two diachronic citation windows are applied: 2006-07 and 2006-08. Web of Science (WoS) as well as Google Scholar (GS) are applied to observe the cite delay and citedness for the different document types published by DIIS... for all document types. Statistically significant correlations were found only between WoS and GS and between the two publication point systems, respectively. The study demonstrates how the nCPPI can be applied to institutions as an evaluation tool supplementary to JCI in various combinations...

  5. Fertility chip, a point-of-care semen analyser

    NARCIS (Netherlands)

    Segerink, Loes Irene

    2011-01-01

    Before assistive reproductive treatment is started for a couple that is involuntarily childless, the cause of the fertility disorder needs to be investigated for both the man and the woman. For the man, this implies that the quality of his semen needs to be known. Currently, at the hospital

  6. Analysis of sensitivity and uncertainty on the leukemia risk attributable to the nuclear installations of North Cotentin; Analyse de sensibilite et d'incertitude sur le risque de leucemie attribuable aux installations nucleaires du Nord-Cotentin

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-07-15

    The study comprises several phases: delimitation of the scope of the study, identification of the key parameters, determination of the variation intervals of those parameters, sensitivity analysis and, finally, uncertainty analysis. (N.C.)
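The phased workflow listed above can be sketched as a minimal one-at-a-time (OAT) sensitivity analysis over parameter variation intervals; the risk model and all numbers below are hypothetical, purely to illustrate the sequence of steps (key parameters, their intervals, then the sensitivity sweep).

```python
def model(p):
    # Hypothetical risk model, standing in for the real dose/risk calculation.
    return p["dose_rate"] * p["exposure"] / p["threshold"]

baseline = {"dose_rate": 2.0, "exposure": 10.0, "threshold": 40.0}
intervals = {  # variation interval determined for each key parameter
    "dose_rate": (1.0, 4.0),
    "exposure": (8.0, 12.0),
    "threshold": (20.0, 80.0),
}

base = model(baseline)
swing = {}
for name, (lo, hi) in intervals.items():
    outs = []
    for value in (lo, hi):
        p = dict(baseline)
        p[name] = value  # vary one parameter at a time
        outs.append(model(p))
    swing[name] = max(outs) - min(outs)  # output range over the interval

ranking = sorted(swing, key=swing.get, reverse=True)
print(base, swing, ranking)
```

The resulting ranking identifies the parameters whose intervals drive the output most, which is the information the subsequent uncertainty analysis phase concentrates on.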

  7. Analyse de la sensibilité aux paramètres gazoles d'un moteur diesel d'automobile à injection directe Small Direct Injection Diesel Engine Sensitivity to the Diesel Fuel Characteristics

    Directory of Open Access Journals (Sweden)

    Montagne X.

    2006-12-01

    Total particulate emissions are rather dependent on the viscosity and the light fractions of the fuels. Noise emissions are closely linked to the cetane number. Moreover, the overall results suggest that the parameters governing the auto-ignition delay are important for this type of converter. Direct measurements of the injection spray characteristics (droplet size, spray penetration) as a function of the different fuels would nevertheless be needed in order to quantify the effect of parameters such as viscosity and density on the physical part of the auto-ignition delay. Among the technical solutions that can lead to energy converters with low pollutant emissions and low fuel consumption, diesel engines rank, by nature, in a good position. On this basis, direct injection diesel engines have been developed and are now spreading in private passenger cars because of their performance, especially in terms of fuel consumption. However, this equipment requires an efficient, electronically driven injection system, and needs EGR and an oxidation catalyst to improve pollutant emissions and the noise level. Thus, it is a major concern to be able to assess precisely the sensitivity to fuel characteristics of direct injection engines so as to take best advantage of this technology. With a set of fuels formulated to cover a large range of chemical nature, viscosity, cetane number and density, an Audi direct injection engine (1Z model) was run at the test bench. The impact of the fuel characteristics on pollutant emissions, regulated or unregulated (PAH, aldehydes), and on noise levels was assessed either under standard tuning conditions or by changing the EGR rate and the injection timing. The results obtained at the end of this program point out the main criteria that influence emissions. They also allow a comparison between direct injection engines and their homologues

  8. Reliability Analyses of Groundwater Pollutant Transport

    Energy Technology Data Exchange (ETDEWEB)

    Dimakis, Panagiotis

    1997-12-31

    This thesis develops a probabilistic finite element model for the analysis of groundwater pollution problems. Two computer codes were developed: (1) one using the finite element technique to solve the two-dimensional steady-state equations of groundwater flow and pollution transport, and (2) a first-order reliability method code that can perform a probabilistic analysis of any given analytical or numerical equation. The two codes were connected into one model, PAGAP (Probability Analysis of Groundwater And Pollution). PAGAP can be used to obtain (1) the probability that the concentration at a given point at a given time will exceed a specified value, (2) the probability that the maximum concentration at a given point will exceed a specified value and (3) the probability that the residence time at a given point will exceed a specified period. PAGAP can be used as a tool for assessment purposes and risk analyses, for instance to assess the efficiency of a proposed remediation technique or to study the effects of parameter distributions for a given problem (sensitivity study). The model has been applied to study the largest self-sustained, precipitation-controlled aquifer in northern Europe, which underlies Oslo's new major airport. 92 refs., 187 figs., 26 tabs.
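    PAGAP's first type of output, the probability that the concentration at a point exceeds a specified value, can be illustrated with a minimal Monte Carlo sketch. The toy one-dimensional plume formula and lognormal seepage velocity below are hypothetical stand-ins for PAGAP's finite element model, not the code described in the thesis:

```python
import math
import random

def exceedance_probability(threshold, x=100.0, c0=1.0, decay=0.02,
                           v_mean=1.0, v_cv=0.3, n=50_000, seed=1):
    """Monte Carlo estimate of P(C(x) > threshold) for a toy steady-state
    plume C(x) = c0 * exp(-decay * x / v), where the seepage velocity v is
    lognormal with the given mean and coefficient of variation."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + v_cv ** 2))
    mu = math.log(v_mean) - 0.5 * sigma ** 2
    hits = 0
    for _ in range(n):
        v = math.exp(rng.gauss(mu, sigma))       # sample uncertain velocity
        c = c0 * math.exp(-decay * x / v)        # concentration at point x
        if c > threshold:
            hits += 1
    return hits / n
```

    A full reliability analysis would replace the sampling loop with a first-order reliability method search for the design point, as PAGAP does; the Monte Carlo form is simply the easiest to sketch.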

  9. Multiple pathways to gender-sensitive budget support in the education sector: Analysing the effectiveness of sex-disaggregated indicators in performance assessment frameworks and gender working groups in (education) budget support to Sub-Saharan Africa countries

    OpenAIRE

    Holvoet, Nathalie; Inberg, Liesbeth

    2013-01-01

    In order to correct for the initial gender blindness of the Paris Declaration and related aid modalities as general and sector budget support, it has been proposed to integrate a gender dimension into budget support entry points. This paper studies the effectiveness of (joint) gender working groups and the integration of sex-disaggregated indicators and targets in performance assessment frameworks in the context of education sector budget support delivered to a sample of 17 Sub-Saharan Africa...

  10. ACS Zero Point Verification

    Science.gov (United States)

    Dolphin, Andrew

    2005-07-01

    The uncertainties in the photometric zero points create a fundamental limit to the accuracy of photometry. The current state of the ACS calibration is surprisingly poor, with zero point uncertainties of 0.03 magnitudes. The reason for this is that the ACS calibrations are based primarily on semi-empirical synthetic zero points and observations of fields too crowded for accurate ground-based photometry. I propose to remedy this problem by obtaining ACS images of the omega Cen standard field with all nine broadband ACS/WFC filters. This will permit the direct determination of the ACS zero points by comparison with excellent ground-based photometry, and should reduce their uncertainties to less than 0.01 magnitudes. A second benefit is that it will facilitate the comparison of the WFPC2 and ACS photometric systems, which will be important as WFPC2 is phased out and ACS becomes HST's primary imager. Finally, three of the filters will be repeated from my Cycle 12 observations, allowing for a measurement of any change in sensitivity.

  11. Severe Accident Recriticality Analyses (SARA)

    Energy Technology Data Exchange (ETDEWEB)

    Frid, W. [Swedish Nuclear Power Inspectorate, Stockholm (Sweden); Hoejerup, F. [Risoe National Lab. (Denmark); Lindholm, I.; Miettinen, J.; Puska, E.K. [VTT Energy, Helsinki (Finland); Nilsson, Lars [Studsvik Eco and Safety AB, Nykoeping (Sweden); Sjoevall, H. [Teoliisuuden Voima Oy (Finland)

    1999-11-01

    rate of 2000 kg/s. In most cases, however, the predicted energy deposition was much smaller, below the regulatory limits for fuel failure, but close to or above recently observed thresholds for fragmentation and dispersion of high burn-up fuel. The highest calculated quasi steady-state power following the initial power excursion was in most cases about 20% of the nominal reactor power, according to SIMULATE-3K and APROS. RECRIT predictions were in general different in this respect, with either oscillating power or a power increase approaching 50% of nominal power, which in both cases resulted in fuel temperatures above the melting point as a result of insufficient cooling. Long-term containment response to recriticality was assessed through MELCOR calculations for the Olkiluoto 1 plant. At a stabilised reactor power of 19% of nominal power, the containment failure due to overpressurization was predicted to occur 1.3 h after recriticality, if the accident is not mitigated. The SARA studies have clearly shown the sensitivity of recriticality phenomena to thermal-hydraulic modelling, the specifics of the accident scenario, such as the distribution of boron carbide, and the importance of multi-dimensional kinetics for the determination of the local power distribution in the core. The results of the project have pointed out the importance of adequate accident management procedures to be used by reactor operators and emergency staff during recovery actions. Recommendations in this area are given in the report.

  12. Severe accident recriticality analyses (SARA)

    Energy Technology Data Exchange (ETDEWEB)

    Frid, W. E-mail: wiktor.frid@ski.se; Hoejerup, F.; Lindholm, I.; Miettinen, J.; Nilsson, L.; Puska, E.K.; Sjoevall, H

    2001-11-01

    quasi steady-state power following the initial power excursion was in most cases approximately 20% of the nominal reactor power, according to SIMULATE-3K and APROS. However, in some RECRIT cases higher power levels, approaching 50% of the nominal power, were predicted, leading to fuel temperatures exceeding the melting point as a result of insufficient cooling of the fuel. Long-term containment response to recriticality was assessed through MELCOR calculations for the Olkiluoto 1 plant. At a stabilised reactor power of 19% of nominal power, the containment failure due to overpressurisation was predicted to occur 1.3 h after recriticality, if the accident is not mitigated. The SARA studies have clearly shown the sensitivity of recriticality phenomena to thermal-hydraulic modelling, the specifics of the accident scenario, such as the distribution of boron carbide, and the importance of multi-dimensional kinetics for the determination of the local power distribution in the core. The results of the project have pointed out the importance of adequate accident management strategies to be used by reactor operators and emergency staff during recovery actions. Recommendations in this area are given in the paper.

  13. Severe accident recriticality analyses (SARA)

    International Nuclear Information System (INIS)

    Frid, W.; Hoejerup, F.; Lindholm, I.; Miettinen, J.; Nilsson, L.; Puska, E.K.; Sjoevall, H.

    2001-01-01

    -state power following the initial power excursion was in most cases approximately 20% of the nominal reactor power, according to SIMULATE-3K and APROS. However, in some RECRIT cases higher power levels, approaching 50% of the nominal power, were predicted, leading to fuel temperatures exceeding the melting point as a result of insufficient cooling of the fuel. Long-term containment response to recriticality was assessed through MELCOR calculations for the Olkiluoto 1 plant. At a stabilised reactor power of 19% of nominal power, the containment failure due to overpressurisation was predicted to occur 1.3 h after recriticality, if the accident is not mitigated. The SARA studies have clearly shown the sensitivity of recriticality phenomena to thermal-hydraulic modelling, the specifics of the accident scenario, such as the distribution of boron carbide, and the importance of multi-dimensional kinetics for the determination of the local power distribution in the core. The results of the project have pointed out the importance of adequate accident management strategies to be used by reactor operators and emergency staff during recovery actions. Recommendations in this area are given in the paper.

  14. Severe Accident Recriticality Analyses (SARA)

    International Nuclear Information System (INIS)

    Frid, W.; Hoejerup, F.; Lindholm, I.; Miettinen, J.; Puska, E.K.; Nilsson, Lars; Sjoevall, H.

    1999-11-01

    rate of 2000 kg/s. In most cases, however, the predicted energy deposition was much smaller, below the regulatory limits for fuel failure, but close to or above recently observed thresholds for fragmentation and dispersion of high burn-up fuel. The highest calculated quasi steady-state power following the initial power excursion was in most cases about 20% of the nominal reactor power, according to SIMULATE-3K and APROS. RECRIT predictions were in general different in this respect, with either oscillating power or a power increase approaching 50% of nominal power, which in both cases resulted in fuel temperatures above the melting point as a result of insufficient cooling. Long-term containment response to recriticality was assessed through MELCOR calculations for the Olkiluoto 1 plant. At a stabilised reactor power of 19% of nominal power, the containment failure due to overpressurization was predicted to occur 1.3 h after recriticality, if the accident is not mitigated. The SARA studies have clearly shown the sensitivity of recriticality phenomena to thermal-hydraulic modelling, the specifics of the accident scenario, such as the distribution of boron carbide, and the importance of multi-dimensional kinetics for the determination of the local power distribution in the core. The results of the project have pointed out the importance of adequate accident management procedures to be used by reactor operators and emergency staff during recovery actions. Recommendations in this area are given in the report.

  15. Network class superposition analyses.

    Directory of Open Access Journals (Sweden)

    Carl A B Pearson

    Full Text Available Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
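    The ensemble matrix T described above can be sketched on a toy class of boolean networks: each class member contributes its deterministic state transitions, and T averages them. The two-node example and update rules below are illustrative assumptions, not the yeast cell cycle model or the Strong Inhibition rule used in the paper:

```python
import numpy as np

def class_matrix(update_rules, n_nodes):
    """T[i, j] is the fraction of networks in the class that map state i
    to state j - a transition-by-transition superposition of the
    deterministic dynamics of each class member."""
    n_states = 2 ** n_nodes
    T = np.zeros((n_states, n_states))
    for rule in update_rules:
        for i in range(n_states):
            state = [(i >> k) & 1 for k in range(n_nodes)]  # decode bits
            nxt = rule(state)                               # synchronous update
            j = sum(b << k for k, b in enumerate(nxt))      # encode bits
            T[i, j] += 1.0
    return T / len(update_rules)

def row_entropy(T):
    """Mean Shannon entropy (bits) of T's rows: how much the class
    members disagree about each state's successor."""
    entropies = []
    for row in T:
        p = row[row > 0]
        entropies.append(float(-(p * np.log2(p)).sum()))
    return sum(entropies) / len(entropies)
```

    With a two-member class consisting of the node-swap rule and the identity rule, the two symmetric states are fixed by both members (zero entropy) while the other two split their mass evenly (one bit each).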

  16. Development, Testing, and Sensitivity and Uncertainty Analyses of a Transport and Reaction Simulation Engine (TaRSE) for Spatially Distributed Modeling of Phosphorus in South Florida Peat Marsh Wetlands

    Science.gov (United States)

    Jawitz, James W.; Munoz-Carpena, Rafael; Muller, Stuart; Grace, Kevin A.; James, Andrew I.

    2008-01-01

    in the phosphorus cycling mechanisms were simulated in these case studies using different combinations of phosphorus reaction equations. Changes in water column phosphorus concentrations observed under the controlled conditions of laboratory incubations and mesocosm studies were reproduced with model simulations. Short-term phosphorus flux rates and changes in phosphorus storages were within the range of values reported in the literature, whereas unknown rate constants were used to calibrate the model output. In STA-1W Cell 4, the dominant mechanism for phosphorus flow and transport is overland flow. Over many life cycles of the biological components, however, soils accrue and become enriched in phosphorus. Inflow total phosphorus concentrations and flow rates for the period between 1995 and 2000 were used to simulate Cell 4 phosphorus removal, outflow concentrations, and soil phosphorus enrichment over time. This full-scale application of the model successfully incorporated parameter values derived from the literature and short-term experiments, and reproduced the observed long-term outflow phosphorus concentrations and increased soil phosphorus storage within the system. A global sensitivity and uncertainty analysis of the model was performed using modern techniques such as a qualitative screening tool (the Morris method) and the quantitative, variance-based Fourier Amplitude Sensitivity Test (FAST) method. These techniques allowed an in-depth exploration of the effect of model complexity and flow velocity on model outputs. Three increasingly complex levels of possible application to southern Florida were studied, corresponding to a simple soil pore-water and surface-water system (level 1), the addition of plankton (level 2), and of macrophytes (level 3). In the analysis for each complexity level, three surface-water velocities were considered that each correspond to residence times for the selected area (1-kilometer long) of 2, 10, and 20
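    The Morris screening step mentioned in the abstract can be sketched as a one-at-a-time elementary-effects calculation. The toy model function, number of trajectories, and grid levels below are illustrative assumptions, not the TaRSE configuration:

```python
import random

def morris_mu_star(f, n_params, n_traj=20, levels=4, seed=0):
    """Morris screening: walk one-at-a-time trajectories on a grid over
    [0,1]^k, record the elementary effect of each parameter step, and
    return mu* (mean absolute elementary effect) per parameter."""
    rng = random.Random(seed)
    delta = levels / (2.0 * (levels - 1))          # standard Morris step
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        # random base point on the grid, random parameter order
        x = [rng.randrange(levels - 1) / (levels - 1) for _ in range(n_params)]
        order = list(range(n_params))
        rng.shuffle(order)
        fx = f(x)
        for i in order:
            x2 = list(x)
            x2[i] = x[i] + delta if x[i] + delta <= 1.0 else x[i] - delta
            fx2 = f(x2)
            effects[i].append((fx2 - fx) / (x2[i] - x[i]))
            x, fx = x2, fx2                        # continue the trajectory
    return [sum(abs(e) for e in ee) / len(ee) for ee in effects]
```

    For a purely linear model every elementary effect equals the coefficient, so mu* recovers the coefficients exactly; a large spread of effects (not computed here) would flag interactions or nonlinearity.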

  17. Sensitivity analysis

    Science.gov (United States)

    Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ...

  18. The goal of ape pointing.

    Science.gov (United States)

    Halina, Marta; Liebal, Katja; Tomasello, Michael

    2018-01-01

    Captive great apes regularly use pointing gestures in their interactions with humans. However, the precise function of this gesture is unknown. One possibility is that apes use pointing primarily to direct attention (as in "please look at that"); another is that they point mainly as an action request (such as "can you give that to me?"). We investigated these two possibilities here by examining how the looking behavior of recipients affects pointing in chimpanzees (Pan troglodytes) and bonobos (Pan paniscus). Upon pointing to food, subjects were faced with a recipient who either looked at the indicated object (successful-look) or failed to look at the indicated object (failed-look). We predicted that, if apes point primarily to direct attention, subjects would spend more time pointing in the failed-look condition because the goal of their gesture had not been met. Alternatively, we expected that, if apes point primarily to request an object, subjects would not differ in their pointing behavior between the successful-look and failed-look conditions because these conditions differed only in the looking behavior of the recipient. We found that subjects did differ in their pointing behavior across the successful-look and failed-look conditions, but contrary to our prediction subjects spent more time pointing in the successful-look condition. These results suggest that apes are sensitive to the attentional states of gestural recipients, but their adjustments are aimed at multiple goals. We also found a greater number of individuals with a strong right-hand than left-hand preference for pointing.

  19. A TOUCH-SENSITIVE DEVICE

    DEFF Research Database (Denmark)

    2009-01-01

    The present invention relates to an optical touch-sensitive device and a method of determining a position and a position change of an object contacting an optical touch-sensitive device. In particular, the present invention relates to an optical touch pad and a method of determining a position and a position change of an object contacting an optical touch pad. A touch-sensitive device according to the present invention may comprise a light source, a touch-sensitive waveguide, a detector array, and a first light redirecting member, wherein at least a part of the light propagating towards a specific point of the detector array is prevented from being incident upon that specific point when an object contacts a touch-sensitive surface of the touch-sensitive waveguide at a corresponding specific contact point.

  20. The Temporal Tipping Point

    DEFF Research Database (Denmark)

    Hermann, A. K.

    2016-01-01

    “Slow journalism” is a term anthropologists and sociologists sometimes use to describe their empirical work, ethnography. To journalists and media observers, meanwhile, “slow journalism” signifies a newfound dedication to serious long-form journalism. Not surprisingly, then, “ethnographic journalism”, a genre where reporters adopt research strategies from social science, takes “slow” to the extreme. Immersing themselves in communities for weeks, months and years, ethnographic journalists seek to gain what anthropologists call “the native's point of view”. Based on in-depth interviews with practitioners and analyses of their journalistic works, this paper offers a study of ethnographic journalism suggesting that slow time operates in at least three separate registers. First, in terms of regimentation, ethnographic journalism is mostly long-form pieces that demand time-consuming research and careful writing

  1. Development of a simple, sensitive and inexpensive ion-pairing cloud point extraction approach for the determination of trace inorganic arsenic species in spring water, beverage and rice samples by UV-Vis spectrophotometry.

    Science.gov (United States)

    Gürkan, Ramazan; Kır, Ufuk; Altunay, Nail

    2015-08-01

    The determination of inorganic arsenic species in water, beverages and foods has become crucial in recent years, because arsenic species are considered carcinogenic and are found at high concentrations in these samples. This communication describes a new cloud-point extraction (CPE) method for the determination, by UV-Visible spectrophotometry (UV-Vis), of low quantities of arsenic species in samples purchased from the local market. The method is based on the selective ternary complex of As(V) with acridine orange (AOH(+)), a versatile cationic fluorescent dye, in the presence of tartaric acid and polyethylene glycol tert-octylphenyl ether (Triton X-114) at pH 5.0. Under the optimized conditions, a preconcentration factor of 65 and a detection limit (3S_blank/m) of 1.14 μg L(-1) were obtained from the calibration curve constructed in the range of 4-450 μg L(-1), with a correlation coefficient of 0.9932 for As(V). The method was validated by the analysis of certified reference materials (CRMs). Copyright © 2015 Elsevier Ltd. All rights reserved.
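    The detection limit quoted above follows the common 3S_blank/m convention: three times the standard deviation of replicate blank measurements divided by the calibration slope. A minimal sketch, with made-up calibration data in place of the paper's As(V) measurements:

```python
import statistics

def detection_limit(conc, signal, blank_signals):
    """LOD = 3 * s_blank / m, where m is the ordinary least-squares slope
    of the calibration curve and s_blank is the standard deviation of
    replicate blank measurements."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(signal) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(conc, signal))
         / sum((x - mx) ** 2 for x in conc))        # calibration slope
    s_blank = statistics.stdev(blank_signals)        # blank noise
    return 3.0 * s_blank / m
```

    For example, a calibration slope of 0.01 absorbance units per concentration unit and a blank standard deviation of 0.002 gives an LOD of 0.6 concentration units.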

  2. A point mutation (L1015F) of the voltage-sensitive sodium channel gene associated with lambda-cyhalothrin resistance in Apolygus lucorum (Meyer-Dür) population from the transgenic Bt cotton field of China.

    Science.gov (United States)

    Zhen, Congai; Gao, Xiwu

    2016-02-01

    In China, the green mirid bug, Apolygus lucorum (Meyer-Dür), has caused severe economic damage to many kinds of crops, especially cotton and jujubes. Pyrethroid insecticides have been widely used for controlling this pest in the transgenic Bt cotton field. Five populations of A. lucorum collected from cotton crops at different locations in China were evaluated for lambda-cyhalothrin resistance. The results showed that only the population collected from Shandong Province exhibited 30-fold resistance to lambda-cyhalothrin. Neither PBO nor DEF showed obvious synergism when the synergistic ratios of the susceptible (SS) strain and the resistant (RR) strain, which originated from the Shandong population, were compared. In addition, there were no statistically significant differences (p>0.05) in carboxylesterase, glutathione S-transferase, or 7-ethoxycoumarin O-deethylase activities between the Shandong population and the laboratory susceptible strain (SS). The full-length sodium channel gene, named AlVSSC and encoding 2028 amino acids, was obtained by RT-PCR and rapid amplification of cDNA ends (RACE). A single point mutation, L1015F, in AlVSSC was detected only in the Shandong population. Our results reveal that the L1015F mutation associated with pyrethroid resistance is present in A. lucorum populations in China. These results will be useful for the rational chemical control of A. lucorum in the transgenic Bt cotton field. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Parametric Sensitivity Analysis of Oscillatory Delay Systems with an Application to Gene Regulation.

    Science.gov (United States)

    Ingalls, Brian; Mincheva, Maya; Roussel, Marc R

    2017-07-01

    A parametric sensitivity analysis for periodic solutions of delay-differential equations is developed. Because phase shifts cause the sensitivity coefficients of a periodic orbit to diverge, we focus on sensitivities of the extrema, from which amplitude sensitivities are computed, and of the period. Delay-differential equations are often used to model gene expression networks. In these models, the parametric sensitivities of a particular genotype define the local geometry of the evolutionary landscape. Thus, sensitivities can be used to investigate directions of gradual evolutionary change. An oscillatory protein synthesis model whose properties are modulated by RNA interference is used as an example. This model consists of a set of coupled delay-differential equations involving three delays. Sensitivity analyses are carried out at several operating points. Comments on the evolutionary implications of the results are offered.
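    The extremum-based sensitivities described above can be sketched by finite differences on a simple delay oscillator. Hutchinson's delayed logistic equation stands in here (as an assumption) for the paper's three-delay protein synthesis model, and forward Euler with a history buffer is a deliberately simple DDE integrator:

```python
def simulate_hutchinson(r, tau=2.0, dt=0.01, t_end=200.0, x0=0.5):
    """Euler integration of Hutchinson's delayed logistic equation
    x'(t) = r * x(t) * (1 - x(t - tau)), constant history x = x0 for t <= 0.
    Sustained oscillations appear for r * tau > pi/2."""
    lag = int(round(tau / dt))
    xs = [x0] * (lag + 1)                 # history buffer for the delay term
    for _ in range(int(t_end / dt)):
        x, x_delayed = xs[-1], xs[-1 - lag]
        xs.append(x + dt * r * x * (1.0 - x_delayed))
    return xs

def amplitude(xs):
    """Peak-to-trough amplitude over the final half (transient discarded)."""
    tail = xs[len(xs) // 2:]
    return max(tail) - min(tail)

def amplitude_sensitivity(r, dr=0.01, **kw):
    """Central finite-difference sensitivity of the oscillation amplitude
    with respect to r - differentiating the extrema, as in the abstract,
    rather than the phase-shifting trajectory itself."""
    return (amplitude(simulate_hutchinson(r + dr, **kw))
            - amplitude(simulate_hutchinson(r - dr, **kw))) / (2.0 * dr)
```

    With r = 1 and tau = 2 (so r*tau = 2, past the Hopf threshold pi/2), the limit cycle amplitude grows with r, so the sensitivity comes out positive, pointing "uphill" on the local evolutionary landscape in the paper's terminology.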

  4. Columbia River ESI: NESTS (Nest Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for bird nesting sites in the Columbia River area. Vector points in this data set represent locations of...

  5. Allergic sensitization

    DEFF Research Database (Denmark)

    van Ree, Ronald; Hummelshøj, Lone; Plantinga, Maud

    2014-01-01

    Allergic sensitization is the outcome of a complex interplay between the allergen and the host in a given environmental context. The first barrier encountered by an allergen on its way to sensitization is the mucosal epithelial layer. Allergic inflammatory diseases are accompanied by increased pe...

  6. Marginal Utility of Conditional Sensitivity Analyses for Dynamic Models

    Science.gov (United States)

    Background/Question/Methods: Dynamic ecological processes may be influenced by many factors. Simulation models that mimic these processes often have complex implementations with many parameters. Sensitivity analyses are subsequently used to identify critical parameters whose uncertai...

  7. Analyses for designing objects in mechanical design

    International Nuclear Information System (INIS)

    Nakajima, Norihiro; Miyamura, Hiroko N.; Kawakami, Yoshiaki; Kawamura, Takuma

    2015-01-01

    Identifying the factors that can induce damage in mechanical structures, and analysing the points where damage may occur as a consequence of the assumed phenomena, is laborious work, but it is an important part of the design process. Taking factors caused by earthquakes as an example, the phenomena that can be assumed were simulated, the computational results were analysed with information visualization, and potential damage points were identified by analysing the results mathematically. Illustrating the analytical results in this way may give designers pointers, helping them recognise potential damage points through their own design sensibility. (author)

  8. Developing cultural sensitivity

    DEFF Research Database (Denmark)

    Ruddock, Heidi; Turner, deSalle

    2007-01-01

    Title. Developing cultural sensitivity: nursing students' experiences of a study abroad programme. Aim. This paper is a report of a study to explore whether having an international learning experience as part of a nursing education programme promoted cultural sensitivity in nursing students. Background. Many countries are becoming culturally diverse, but healthcare systems and nursing education often remain mono-cultural and focused on the norms and needs of the majority culture. To meet the needs of all members of multicultural societies, nurses need to develop cultural sensitivity and incorporate this into caregiving. Method. A Gadamerian hermeneutic phenomenological approach was adopted. Data were collected in 2004 by using in-depth conversational interviews and analysed using the Turner method. Findings. Developing cultural sensitivity involves a complex interplay between becoming

  9. Analysing and Comparing Encodability Criteria

    Directory of Open Access Journals (Sweden)

    Kirstin Peters

    2015-08-01

    Full Text Available Encodings or the proof of their absence are the main way to compare process calculi. To analyse the quality of encodings and to rule out trivial or meaningless encodings, they are augmented with quality criteria. There exist many different criteria, and different variants of those criteria, for reasoning in different settings. This leads to incomparable results. Moreover, it is not always clear whether the criteria used to obtain a result in a particular setting do indeed fit that setting. We show how to formally reason about and compare encodability criteria by mapping them onto requirements on a relation between source and target terms that is induced by the encoding function. In particular, we analyse the common criteria full abstraction, operational correspondence, divergence reflection, success sensitiveness, and respect of barbs; e.g. we analyse the exact nature of the simulation relation (coupled simulation versus bisimulation) that is induced by different variants of operational correspondence. This way we reduce the problem of analysing or comparing encodability criteria to the better understood problem of comparing relations on processes.

  10. Radioecological sensitivity

    International Nuclear Information System (INIS)

    Howard, Brenda J.; Strand, Per; Assimakopoulos, Panayotis

    2003-01-01

    After the release of radionuclide into the environment it is important to be able to readily identify major routes of radiation exposure, the most highly exposed individuals or populations and the geographical areas of most concern. Radioecological sensitivity can be broadly defined as the extent to which an ecosystem contributes to an enhanced radiation exposure to Man and biota. Radioecological sensitivity analysis integrates current knowledge on pathways, spatially attributes the underlying processes determining transfer and thereby identifies the most radioecologically sensitive areas leading to high radiation exposure. This identifies where high exposure may occur and why. A framework for the estimation of radioecological sensitivity with respect to humans is proposed and the various indicators by which it can be considered have been identified. These are (1) aggregated transfer coefficients (Tag), (2) action (and critical) loads, (3) fluxes and (4) individual exposure of humans. The importance of spatial and temporal consideration of all these outputs is emphasized. Information on the extent of radionuclide transfer and exposure to humans at different spatial scales is needed to reflect the spatial differences which can occur. Single values for large areas, such as countries, can often mask large variation within the country. Similarly, the relative importance of different pathways can change with time and therefore assessments of radiological sensitivity are needed over different time periods after contamination. Radioecological sensitivity analysis can be used in radiation protection, nuclear safety and emergency preparedness when there is a need to identify areas that have the potential of being of particular concern from a risk perspective. Prior identification of radioecologically sensitive areas and exposed individuals improve the focus of emergency preparedness and planning, and contribute to environmental impact assessment for future facilities. 
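    The first indicator listed above, the aggregated transfer coefficient Tag, is a simple ratio of the activity concentration in a product to the ground deposition that produced it (units m²/kg). A minimal sketch with illustrative numbers, not measured data:

```python
def aggregated_transfer(product_bq_per_kg, deposition_bq_per_m2):
    """Aggregated transfer coefficient Tag (m^2/kg): activity concentration
    in a foodstuff (Bq/kg) divided by the deposition density (Bq/m^2).
    Higher Tag indicates a more radioecologically sensitive pathway."""
    return product_bq_per_kg / deposition_bq_per_m2
```

    For example, a product concentration of 50 Bq/kg under a deposition of 10,000 Bq/m² gives Tag = 0.005 m²/kg; mapping such values spatially is one way to locate the radioecologically sensitive areas discussed above.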

  11. Calibration and comparison of accelerometer cut points in preschool children.

    Science.gov (United States)

    van Cauwenberghe, Eveline; Labarque, Valery; Trost, Stewart G; de Bourdeaudhuij, Ilse; Cardon, Greet

    2011-06-01

    The present study aimed to develop accelerometer cut points to classify physical activities (PA) by intensity in preschoolers and to investigate discrepancies in PA levels when applying various accelerometer cut points. To calibrate the accelerometer, 18 preschoolers (5.8 ± 0.4 years) performed eleven structured activities and one free play session while wearing a GT1M ActiGraph accelerometer using 15 s epochs. The structured activities were chosen based on the direct observation system Children's Activity Rating Scale (CARS) while the criterion measure of PA intensity during free play was provided using a second-by-second observation protocol (modified CARS). Receiver Operating Characteristic (ROC) curve analyses were used to determine the accelerometer cut points. To examine the classification differences, accelerometer data of four consecutive days from 114 preschoolers (5.5 ± 0.3 years) were classified by intensity according to previously published and the newly developed accelerometer cut points. Differences in predicted PA levels were evaluated using repeated measures ANOVA and Chi Square test. Cut points were identified at 373 counts/15 s for light (sensitivity: 86%; specificity: 91%; Area under ROC curve: 0.95), 585 counts/15 s for moderate (87%; 82%; 0.91) and 881 counts/15 s for vigorous PA (88%; 91%; 0.94). Further, applying various accelerometer cut points to the same data resulted in statistically and biologically significant differences in PA. Accelerometer cut points were developed with good discriminatory power for differentiating between PA levels in preschoolers and the choice of accelerometer cut points can result in large discrepancies.
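    The ROC-based choice of cut points above can be sketched with Youden's J statistic (sensitivity + specificity - 1), one common criterion for selecting a threshold from a ROC analysis; the abstract does not state the exact criterion used, and the counts and activity labels below are made up, not the CARS-observed data:

```python
def youden_cutpoint(counts, is_active):
    """Scan candidate accelerometer count thresholds and return the one
    maximizing Youden's J = sensitivity + specificity - 1, together with
    the J value achieved."""
    best_cut, best_j = None, -1.0
    for cut in sorted(set(counts)):
        tp = sum(1 for c, a in zip(counts, is_active) if a and c >= cut)
        fn = sum(1 for c, a in zip(counts, is_active) if a and c < cut)
        tn = sum(1 for c, a in zip(counts, is_active) if not a and c < cut)
        fp = sum(1 for c, a in zip(counts, is_active) if not a and c >= cut)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j
```

    This also illustrates why different calibration studies produce different cut points: with a different observed sample, the threshold maximizing J shifts, which is exactly the discrepancy the abstract documents.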

  12. Kamusal Karar Alma Süreçlerinde Sosyal Tercih Duyarlılığı ve Sürece İlişkin Yapısal Çözümlemeler(The Sensitivity of Social Preferences in Public Decisions Making Process and The Structural Analyses Connected with The Process

    Directory of Open Access Journals (Sweden)

    A. Niyazi ÖZKER

    2014-06-01

    Full Text Available In the study, we aim to bring up the social preferences location that have an important influence on the components of process consisting of social-economics together with the dynamics which should be considered by decision makers as the balanced component between the central government characteristics and the social preferences in public decision making process. Also, the dynamics’ negative effects as under approaches management of central governments’ politic strategies on the socially choices by taking shape publically point of view have been aimed to analyze connected with participation process. Hence, it is appear that the decisions’phenomenon directed towards to analyzes appear in very important the structural models in order to ensure the desired reality participation in decision making and also, the provided participation sensitive that lay the gramework for overtaking some paradox related to decision making process must be required to take into consideration and and check over as the primary point especially in the countries that have the structural politics matters.

  13. X-point effect on edge stability

    International Nuclear Information System (INIS)

    Saarelma, S; Kirk, A; Kwon, O J

    2011-01-01

    We study the effects of the X-point configuration on edge localized mode (ELM) triggering peeling and ballooning modes using fixed boundary equilibria and modifying the plasma shape to approach the limit of a true X-point. The current driven pure peeling modes are asymptotically stabilized by the X-point while the stabilizing effect on ballooning modes depends on the poloidal location of the X-point. The coupled peeling-ballooning modes experience the elimination of the peeling component as the X-point is introduced. This can significantly affect the edge stability diagrams used to analyse the ELM triggering mechanisms.

  14. Tactual sensitivity in hypochondriasis.

    Science.gov (United States)

    Haenen, M A; Schmidi, A J; Schoenmakers, M; van den Hout, M A

    1997-01-01

    In his article on amplification, somatization and somatoform disorders Barsky [Psychosomatics 1992; 33:28-34] pointed out the importance of studying the perception and processing of somatic and visceral symptoms. Subsequently, it was demonstrated that hypochondriacal patients are not more accurately aware of cardiac activity than a group of non-hypochondriacal patients. The authors concluded that hypochondriacal somatic complaints do not result from an unusually fine discriminative ability to detect normal physiological sensations that non-hypochondriacal patients are unable to perceive. The aim of the present study was to investigate tactual sensitivity to non-painful stimuli in hypochondriacal patients and healthy subjects. Twenty-seven outpatients with DSM-III-R hypochondriasis and 27 healthy control subjects were compared. In all subjects the two-point discrimination threshold was measured, as well as subjective sensitivity to harmless bodily sensations as measured by the Somatosensory Amplification Scale. It was found that hypochondriacal patients reported more distress and discomfort with benign bodily sensations. The two-point discrimination threshold was not significantly lower in hypochondriacal patients than in controls. Hypochondriacal subjects considered themselves more sensitive to benign bodily sensations without being better able to discriminate between two tactual bodily signals. The findings of the present study correspond quite closely to those reported earlier.

  15. Global optimization and sensitivity analysis

    International Nuclear Information System (INIS)

    Cacuci, D.G.

    1990-01-01

    A new direction for the analysis of nonlinear models of nuclear systems is suggested to overcome fundamental limitations of sensitivity analysis and optimization methods currently prevalent in nuclear engineering usage. This direction is toward a global analysis of the behavior of the respective system as its design parameters are allowed to vary over their respective design ranges. Presented is a methodology for global analysis that unifies and extends the current scopes of sensitivity analysis and optimization by identifying all the critical points (maxima, minima) and solution bifurcation points together with corresponding sensitivities at any design point of interest. The potential applicability of this methodology is illustrated with test problems involving multiple critical points and bifurcations and comprising both equality and inequality constraints

  16. Impedance analysis of acupuncture points and pathways

    International Nuclear Information System (INIS)

    Teplan, Michal; Kukucka, Marek; Ondrejkovicová, Alena

    2011-01-01

    Investigation of impedance characteristics of acupuncture points from acoustic to radio frequency range is addressed. Discernment and localization of acupuncture points in initial single subject study was unsuccessfully attempted by impedance map technique. Vector impedance analyses determined possible resonant zones in MHz region.

  17. Variation of a test's sensitivity and specificity with disease prevalence.

    Science.gov (United States)

    Leeflang, Mariska M G; Rutjes, Anne W S; Reitsma, Johannes B; Hooft, Lotty; Bossuyt, Patrick M M

    2013-08-06

    Anecdotal evidence suggests that the sensitivity and specificity of a diagnostic test may vary with disease prevalence. Our objective was to investigate the associations between disease prevalence and test sensitivity and specificity using studies of diagnostic accuracy. We used data from 23 meta-analyses, each of which included 10-39 studies (416 total). The median prevalence per review ranged from 1% to 77%. We evaluated the effects of prevalence on sensitivity and specificity using a bivariate random-effects model for each meta-analysis, with prevalence as a covariate. We estimated the overall effect of prevalence by pooling the effects using the inverse variance method. Within a given review, a change in prevalence from the lowest to highest value resulted in a corresponding change in sensitivity or specificity from 0 to 40 percentage points. This effect was statistically significant (p < 0.05). Specificity tended to vary with disease prevalence; there was no such systematic effect for sensitivity. The sensitivity and specificity of a test often vary with disease prevalence; this effect is likely to be the result of mechanisms, such as patient spectrum, that affect prevalence, sensitivity and specificity. Because it may be difficult to identify such mechanisms, clinicians should use prevalence as a guide when selecting studies that most closely match their situation.
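For reference, sensitivity, specificity and prevalence are all computed from the same 2×2 table of test results against the reference standard; a minimal sketch (the cell counts are invented for illustration, not taken from the meta-analyses):

```python
def sensitivity(tp, fn):
    """True-positive rate: diseased subjects correctly detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: non-diseased subjects correctly cleared."""
    return tn / (tn + fp)

def prevalence(tp, fn, tn, fp):
    """Proportion of diseased subjects in the study sample."""
    return (tp + fn) / (tp + fn + tn + fp)

# Illustrative 2x2 table: 90 true positives, 10 false negatives,
# 80 true negatives, 20 false positives.
sens = sensitivity(90, 10)          # 0.9
spec = specificity(80, 20)          # 0.8
prev = prevalence(90, 10, 80, 20)   # 0.5
```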

  18. The End of Points

    Science.gov (United States)

    Feldman, Jo

    2018-01-01

    Have teachers become too dependent on points? This article explores educators' dependency on their points systems, and the ways that points can distract teachers from really analyzing students' capabilities and achievements. Feldman argues that using a more subjective grading system can help illuminate crucial information about students and what…

  19. Demerit points systems.

    NARCIS (Netherlands)

    2006-01-01

    In 2012, 21 of the 27 EU Member States had some form of demerit points system. In theory, demerit points systems contribute to road safety through three mechanisms: 1) prevention of unsafe behaviour through the risk of receiving penalty points, 2) selection and suspension of the most frequent

  20. Laser Beam Focus Analyser

    DEFF Research Database (Denmark)

    Nielsen, Peter Carøe; Hansen, Hans Nørgaard; Olsen, Flemming Ove

    2007-01-01

    The quantitative and qualitative description of laser beam characteristics is important for process implementation and optimisation. In particular, a need for quantitative characterisation of beam diameter was identified when using fibre lasers for micro manufacturing, where the beam diameter limits the obtainable features in direct laser machining as well as heat affected zones in welding processes. This paper describes the development of a measuring unit capable of analysing beam shape and diameter of lasers to be used in manufacturing processes. The analyser is based on the principle of a rotating mechanical wire being swept through the laser beam at varying Z-heights. The reflected signal is analysed and the resulting beam profile determined. The development comprised the design of a flexible fixture capable of providing both rotation and Z-axis movement, and control software including data capture ...

  1. The resilience of an operating point for a fusion power plant

    Energy Technology Data Exchange (ETDEWEB)

    Ward, David, E-mail: david.ward@ccfe.ac.uk; Kemp, Richard

    2015-10-15

    Highlights: • The need to control a power plant changes our view of the optimum design. • The need for control can be reduced by finding resilient design points. • It is important to include resilience and control in selecting design points. • Including these additional constraints reduces flexibility in the choice of operating points. - Abstract: The operating point for fusion power plant design concepts is often determined by simultaneously satisfying the requirements of all of the main plant systems and finding an optimum solution, for instance the one with the lowest capital cost or cost of electricity. This static assessment takes no account of the sensitivity of that operating point to variations in key parameters and therefore includes no information about how difficult the chosen operating point may be to adjust and control. Control of the operating point is a large subject with much work still to be done, and is expected to play an increasing role in the future in choosing the optimum design point. Here we present results of two analyses: one relates to the ability to load follow, that is, to vary the power production in the light of varying demands for power from the electricity network; the other investigates in simple terms what choices we can make to improve the resilience of static operating points.

  2. Continuous integration congestion cost allocation based on sensitivity

    International Nuclear Information System (INIS)

    Wu, Z.Q.; Wang, Y.N.

    2004-01-01

    Congestion cost allocation is a very important topic in congestion management. Allocation methods based on the Aumann-Shapley value use discrete numerical integration, which requires solving the incremented OPF many times, and as such are not suitable for practical application to large-scale systems. The optimal solution and its sensitivity change tendency during congestion removal using a DC optimal power flow (OPF) process are analysed. A simple continuous integration method based on the sensitivity is proposed for congestion cost allocation. The proposed sensitivity analysis method requires less computation time than methods based on quadratic programming and interior-point iteration. The proposed congestion cost allocation method uses continuous rather than discrete numerical integration. The method does not need to solve the incremented OPF solutions, which allows its use in large-scale systems. The method can also be used for AC OPF congestion management. (author)
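The Aumann-Shapley idea underlying such allocations is to integrate each user's marginal (sensitivity) contribution along the ray from zero demand to the actual operating point. A numerical sketch with a toy quadratic cost function (all names and the cost form are illustrative assumptions, not the author's OPF formulation):

```python
def aumann_shapley(cost_grad, demand, steps=1000):
    """Allocate a total cost C(d) among users by integrating each user's
    marginal cost along the ray s*d for s in [0, 1] (midpoint rule):
    alloc_i = d_i * integral_0^1 dC/dd_i(s*d) ds."""
    n = len(demand)
    alloc = [0.0] * n
    for k in range(steps):
        s = (k + 0.5) / steps                    # midpoint of sub-interval
        g = cost_grad([s * d for d in demand])   # sensitivities at s*d
        for i in range(n):
            alloc[i] += demand[i] * g[i] / steps
    return alloc

# Toy congestion cost C(d) = (d1 + 2*d2)^2; for such a cost with C(0) = 0
# the allocations sum exactly to the total cost.
def grad(d):
    base = d[0] + 2.0 * d[1]
    return [2.0 * base, 4.0 * base]

alloc = aumann_shapley(grad, [1.0, 2.0])   # approx [5.0, 20.0], total 25.0
```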

  3. Sensitive innovation

    DEFF Research Database (Denmark)

    Søndergaard, Katia Dupret

    The present paper discusses sources of innovation as heterogenic and at times intangible processes. Arguing for heterogeneity and intangibility as sources of innovation originates from a theoretical reading in STS and ANT studies (e.g. Callon 1986, Latour 1996, Mol 2002, Pols 2005) and from field work in the area of mental health (Dupret Søndergaard 2009, 2010). The concept of sensitive innovation is developed to capture and conceptualise exactly those heterogenic and intangible processes. Sensitive innovation is therefore primarily a way to understand innovative sources that can be, but are not necessarily, recognized and acknowledged as such in the outer organisational culture or by management. The added value that qualifies these processes as "innovative" is thus argued for along different lines than in more traditional innovation studies (e.g. studies that build on the classic ...

  4. Contesting Citizenship: Comparative Analyses

    DEFF Research Database (Denmark)

    Siim, Birte; Squires, Judith

    2007-01-01

    importance of particularized experiences and multiple inequality agendas). These developments shape the way citizenship is both practiced and analysed. Mapping neat citizenship models onto distinct nation-states and evaluating these in relation to formal equality is no longer an adequate approach. Comparative citizenship analyses need to be considered in relation to multiple inequalities and their intersections and to multiple governance and trans-national organising. This, in turn, suggests that comparative citizenship analysis needs to consider new spaces in which struggles for equal citizenship occur ...

  5. Characterization and simulation of fate and transport of selected volatile organic compounds in the vicinities of the Hadnot Point Industrial Area and landfill: Chapter A Supplement 6 in Analyses and historical reconstruction of groundwater flow, contaminant fate and transport, and distribution of drinking water within the service areas of the Hadnot Point and Holcomb Boulevard Water Treatment Plants and vicinities, U.S. Marine Corps Base Camp Lejeune, North Carolina

    Science.gov (United States)

    Jones, L. Elliott; Suárez-Soto, René J.; Anderson, Barbara A.; Maslia, Morris L.

    2013-01-01

    This supplement of Chapter A (Supplement 6) describes the reconstruction (i.e. simulation) of historical concentrations of tetrachloroethylene (PCE), trichloroethylene (TCE), and benzene in production wells supplying water to the Hadnot Point water treatment plant (HPWTP), U.S. Marine Corps Base (USMCB) Camp Lejeune, North Carolina (Figure S6.1). A fate and transport model (i.e., MT3DMS [Zheng and Wang 1999]) was used to simulate contaminant migration from source locations through the groundwater system and to estimate mean contaminant concentrations in water withdrawn from water-supply wells in the vicinity of the Hadnot Point Industrial Area (HPIA) and the Hadnot Point landfill (HPLF) area. The reconstructed contaminant concentrations were subsequently input into a flow-weighted, materials mass balance (mixing) model (Masters 1998) to estimate monthly mean concentrations of the contaminant in finished water at the HPWTP (Maslia et al. 2013). The calibrated fate and transport models described herein were based on and used groundwater velocities derived from groundwater-flow models that are described in Suárez-Soto et al. (2013). Information pertinent to historical operations of water-supply wells is described in Sautner et al. (2013) and Telci et al. (2013).
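A flow-weighted mixing model of the kind cited here amounts to a discharge-weighted average of well concentrations; a minimal sketch (the well flows and TCE values are invented for illustration, not Camp Lejeune data):

```python
def mixed_concentration(flows, concs):
    """Flow-weighted mean concentration of finished water:
    sum(Q_i * C_i) / sum(Q_i)."""
    total_flow = sum(flows)
    return sum(q * c for q, c in zip(flows, concs)) / total_flow

# Two hypothetical supply wells: flows of 100 and 300 units,
# with TCE concentrations of 10 and 2 ug/L.
finished = mixed_concentration([100.0, 300.0], [10.0, 2.0])  # 4.0 ug/L
```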

  6. Sensing with Superconducting Point Contacts

    Directory of Open Access Journals (Sweden)

    Argo Nurbawono

    2012-05-01

    Full Text Available Superconducting point contacts have been used for measuring magnetic polarizations, identifying magnetic impurities, electronic structures, and even the vibrational modes of small molecules. Due to the intrinsically small energy scale in the subgap structures of the supercurrent, determined by the size of the superconducting energy gap, superconductors provide ultrahigh sensitivities for high resolution spectroscopies. The so-called Andreev reflection process between a normal metal and a superconductor carries complex and rich information which can be utilized as a powerful sensor when fully exploited. In this review, we discuss recent experimental and theoretical developments in supercurrent transport through superconducting point contacts and their relevance to sensing applications, and we highlight current issues and potentials. A true utilization of the method based on Andreev reflection analysis opens up possibilities for a new class of ultrasensitive sensors.

  7. Point specificity in acupuncture

    Directory of Open Access Journals (Sweden)

    Choi Emma M

    2012-02-01

    Full Text Available Abstract The existence of point specificity in acupuncture is controversial, because many acupuncture studies using this principle to select control points have found that sham acupoints have similar effects to those of verum acupoints. Furthermore, the results of pain-related studies based on visual analogue scales have not supported the concept of point specificity. In contrast, hemodynamic, functional magnetic resonance imaging and neurophysiological studies evaluating the responses to stimulation of multiple points on the body surface have shown that point-specific actions are present. This review article focuses on clinical and laboratory studies supporting the existence of point specificity in acupuncture and also addresses studies that do not support this concept. Further research is needed to elucidate the point-specific actions of acupuncture.

  8. Risico-analyse brandstofpontons

    NARCIS (Netherlands)

    Uijt de Haag P; Post J; LSO

    2001-01-01

    To determine the risks posed by fuel pontoons in marinas, a generic risk analysis was carried out. A reference system was defined, consisting of a concrete fuel pontoon with a relatively large capacity and throughput. It is assumed that the pontoon is located in a

  9. Fast multichannel analyser

    Energy Technology Data Exchange (ETDEWEB)

    Berry, A; Przybylski, M M; Sumner, I [Science Research Council, Daresbury (UK). Daresbury Lab.

    1982-10-01

    A fast multichannel analyser (MCA) capable of sampling at a rate of 10^7 s^-1 has been developed. The instrument is based on an 8 bit parallel encoding analogue to digital converter (ADC) reading into a fast histogramming random access memory (RAM) system, giving 256 channels of 64 k count capacity. The prototype unit is in CAMAC format.

  10. A fast multichannel analyser

    International Nuclear Information System (INIS)

    Berry, A.; Przybylski, M.M.; Sumner, I.

    1982-01-01

    A fast multichannel analyser (MCA) capable of sampling at a rate of 10^7 s^-1 has been developed. The instrument is based on an 8 bit parallel encoding analogue to digital converter (ADC) reading into a fast histogramming random access memory (RAM) system, giving 256 channels of 64 k count capacity. The prototype unit is in CAMAC format. (orig.)

  11. WHAT IF (Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Iulian N. BUJOREANU

    2011-01-01

    Full Text Available Sensitivity analysis is such a well known and deeply analyzed subject that anyone entering the field may feel unable to add anything new. Still, there are many facets to be taken into consideration. The paper introduces the reader to the various ways sensitivity analysis is implemented and the reasons for which it has to be implemented in most analyses in decision making processes. Risk analysis is of utmost importance in dealing with resource allocation and is presented at the beginning of the paper as the initial motivation for implementing sensitivity analysis. Different views and approaches are added during the discussion of sensitivity analysis so that the reader develops as thorough an opinion as possible on the use and utility of sensitivity analysis. Finally, a round-up conclusion brings us to the question of whether it is possible to generate the future and analyze it before it unfolds so that, when it happens, it brings less uncertainty.

  12. A Laboratory-Based Evaluation of Four Rapid Point-of-Care Tests for Syphilis

    Science.gov (United States)

    Causer, Louise M.; Kaldor, John M.; Fairley, Christopher K.; Donovan, Basil; Karapanagiotidis, Theo; Leslie, David E.; Robertson, Peter W.; McNulty, Anna M.; Anderson, David; Wand, Handan; Conway, Damian P.; Denham, Ian; Ryan, Claire; Guy, Rebecca J.

    2014-01-01

    Background Syphilis point-of-care tests may reduce morbidity and ongoing transmission by increasing the proportion of people rapidly treated. Syphilis stage and co-infection with HIV may influence test performance. We evaluated four commercially available syphilis point-of-care devices in a head-to-head comparison using sera from laboratories in Australia. Methods Point-of-care tests were evaluated using sera stored at Sydney and Melbourne laboratories. Sensitivity and specificity were calculated by standard methods, comparing point-of-care results to treponemal immunoassay (IA) reference test results. Additional analyses by clinical syphilis stage, HIV status, and non-treponemal antibody titre were performed. Non-overlapping 95% confidence intervals (CI) were considered statistically significant differences in estimates. Results In total 1203 specimens were tested (736 IA-reactive, 467 IA-nonreactive). Point-of-care test sensitivities were: Determine 97.3% (95%CI:95.8–98.3), Onsite 92.5% (90.3–94.3), DPP 89.8% (87.3–91.9) and Bioline 87.8% (85.1–90.0). Specificities were: Determine 96.4% (94.1–97.8), Onsite 92.5% (90.3–94.3), DPP 98.3% (96.5–99.2), and Bioline 98.5% (96.8–99.3). Sensitivity of the Determine test was 100% for primary and 100% for secondary syphilis. The three other tests had reduced sensitivity among primary (80.4–90.2%) compared to secondary syphilis (94.3–98.6%). No significant differences in sensitivity were observed by HIV status. Test sensitivities were significantly higher among high-RPR titre (RPR≥8) infections (range: 94.6–99.5%) than RPR non-reactive infections (range: 76.3–92.9%). Conclusions The Determine test had the highest sensitivity overall. All tests were most sensitive among high-RPR titre infections. Point-of-care tests have a role in syphilis control programs; however, in developed countries with established laboratory infrastructures, the lower sensitivities of some tests observed in primary syphilis suggest these would

  13. Realising point of care testing

    International Nuclear Information System (INIS)

    Braybrook, J

    2009-01-01

    Efforts to move molecular diagnostic technologies out of a centralised lab setting and closer to the patient have proved problematic. Early diagnosis of disease is often dependent upon detection of trace amounts of a molecular marker in a complex background. This challenging analytical scenario is compounded when testing is done in a rapid manner using miniaturized and portable instruments. Metrology will be fundamental to delivering high-quality and reliable clinical data with measurable sensitivity and robustness. Quality of the sample, integrity of the analyser, and ease of use, together with incorporation of appropriate QC standards and demonstration of 'fitness for purpose', will be key challenges.

  14. Beyond sensitivity

    DEFF Research Database (Denmark)

    Stott, Iain; Hodgson, David James; Townley, Stuart

    2012-01-01

    ... but limited: they ignore short-term (transient) dynamics and provide a linear approximation to nonlinear perturbation curves. 2. Population inertia measures how much larger or smaller a non-stable population becomes compared with an equivalent stable population, as a result of transient dynamics. We present formulae for the transfer function of population inertia, which describes nonlinear perturbation curves of transient population dynamics. The method fits comfortably into wider frameworks for analytical study of transient dynamics, and for perturbation analyses that use the transfer function approach. 3. We use case studies to illustrate how the transfer function of population inertia may be used in population management. These show that strategies based solely on asymptotic perturbation analyses can cause undesirable transient dynamics and/or fail to exploit desirable transient dynamics ...

  15. Geomorphic tipping points: convenient metaphor or fundamental landscape property?

    Science.gov (United States)

    Lane, Stuart

    2016-04-01

    In 2000 Malcolm Gladwell published a book that has done much to publicise Tipping Points in society but also in academia. His arguments, re-expressed in a geomorphic sense, have three core elements: (1) a "Law of the Few", where rapid change results from the effects of a relatively restricted number of critical elements, ones that are able to rapidly connect systems together, that are particularly sensitive to an external force, or that are spatially organised in a particular way; (2) a "Stickiness", where an element of the landscape is able to assimilate characteristics which make it progressively more applicable to the "Law of the Few"; and (3), given (1) and (2), a history and a geography that mean that the same force can have dramatically different effects, according to where and when it occurs. Expressed in this way, it is not clear that Tipping Points bring much to our understanding in geomorphology that existing concepts (e.g. landscape sensitivity and recovery; cusp-catastrophe theory; non-linear dynamical systems) do not already provide. It may also be all too easy to describe change in geomorphology as involving a Tipping Point: we know that geomorphic processes often involve a non-linear response above a certain critical threshold; we know that landscapes can, after Denys Brunsden, be thought of as involving long periods of boredom ("stability") interspersed with brief moments of terror ("change"); but these are not, after Gladwell, sufficient for the term Tipping Point to apply. Following from these issues, this talk will address three themes. First, it will question, through reference to specific examples, notably in high Alpine systems, the extent to which the Tipping Point analogy is truly a property of the world in which we live. Second, it will explore how 'tipping points' become assigned metaphorically, sometimes evolving to the point that they themselves gain agency, that is, shaping the way we interpret landscape rather than vice versa. 
Third, I

  16. Sensitive Ceramics

    DEFF Research Database (Denmark)

    2014-01-01

    Sensitive Ceramics is showing an interactive digital design tool for designing wall-like compositions with 3d ceramics. The experiment works on two levels. One has to do with designing compositions and patterns in a virtual 3d universe, based on a digital dynamic system that responds to ... The other has to do with realizing the modules in ceramics by 3d printing directly in porcelain with a RapMan printer that coils up the 3d shape in layers. Finally the ceramic modules are mounted in a laser cut board that reflects the captured composition of the movement of the hands.

  17. Interesting Interest Points

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Dahl, Anders Lindbjerg; Pedersen, Kim Steenstrup

    2012-01-01

    Not all interest points are equally interesting. The most valuable interest points lead to optimal performance of the computer vision method in which they are employed. But a measure of this kind will be dependent on the chosen vision application. We propose a more general performance measure based on spatial invariance of interest points under changing acquisition parameters by measuring the spatial recall rate. The scope of this paper is to investigate the performance of a number of existing well-established interest point detection methods. Automatic performance evaluation of interest points is hard ... position. The LED illumination provides the option for artificially relighting the scene from a range of light directions. This data set has given us the ability to systematically evaluate the performance of a number of interest point detectors. The highlights of the conclusions are that the fixed scale ...

  18. Possible future HERA analyses

    International Nuclear Information System (INIS)

    Geiser, Achim

    2015-12-01

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented on. The focus is placed on possible future analyses of the existing ep collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-established data and MC sets, calibrations, and analysis procedures, the manpower and expertise needed for a particular analysis is often very much smaller than that needed for an ongoing experiment. Since centrally funded manpower to carry out such analyses is no longer available, this contribution not only targets experienced self-funded experimentalists, but also theorists and master-level students who might wish to carry out such an analysis.

  19. Biomass feedstock analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wilen, C.; Moilanen, A.; Kurkela, E. [VTT Energy, Espoo (Finland). Energy Production Technologies

    1996-12-31

    The overall objectives of the project `Feasibility of electricity production from biomass by pressurized gasification systems` within the EC Research Programme JOULE II were to evaluate the potential of advanced power production systems based on biomass gasification and to study the technical and economic feasibility of these new processes with different types of biomass feedstocks. This report was prepared as part of this R and D project. The objectives of this task were to perform fuel analyses of potential woody and herbaceous biomasses with specific regard to the gasification properties of the selected feedstocks. The analyses of 15 Scandinavian and European biomass feedstocks included density, proximate and ultimate analyses, trace compounds, ash composition and fusion behaviour in oxidizing and reducing atmospheres. The wood-derived fuels, such as whole-tree chips, forest residues, bark and to some extent willow, can be expected to have good gasification properties. Difficulties caused by ash fusion and sintering in straw combustion and gasification are generally known. The ash and alkali metal contents of the European biomasses harvested in Italy resembled those of the Nordic straws, and it is expected that they behave to a great extent like straw in gasification. No direct relation between the ash fusion behaviour (determined according to the standard method) and, for instance, the alkali metal content was found in the laboratory determinations. A more profound characterisation of the fuels would require gasification experiments in a thermobalance and a PDU (Process Development Unit) rig. (orig.) (10 refs.)

  20. Kantian Turning Point in Gadamer's Philosophical Hermeneutics

    Directory of Open Access Journals (Sweden)

    Kristína Bosáková

    2016-11-01

    Full Text Available The paper treats the theme of a Kantian turning-point in the philosophical hermeneutics of H.-G. Gadamer, based on the harmonious relationship between metaphysics and science in Kantian philosophy as seen from the point of view of Gadamer's philosophical hermeneutics. The philosophical work of Kant had such an influence on Gadamer that, without exaggeration, we can talk about a Kantian turning-point in Gadamerian hermeneutics. Grondin, a former student of Gadamer, talks about a Kantian turning-point in the field of aesthetics, but in reality the Kantian turning-point means much more than a mere change in the reception of the concept of judgement. It is Gadamer's discovery of the harmonious relationship between beauty and morality, between reason and sensitivity, between the modern sciences and the metaphysical tradition in Kantian philosophy. This is what we call the Kantian turning-point in Gadamerian hermeneutics.

  1. On Recall Rate of Interest Point Detectors

    DEFF Research Database (Denmark)

    Aanæs, Henrik; Dahl, Anders Lindbjerg; Pedersen, Kim Steenstrup

    2010-01-01

    In this paper we provide a method for evaluating interest point detectors independently of image descriptors. This is possible because we have compiled a unique data set enabling us to determine if common interest points are found. The data contains 60 scenes of a wide range of object types, and for each scene we have 119 precisely located camera positions obtained from a camera mounted on an industrial robot arm. The scene surfaces have been scanned using structured light, providing precise 3D ground truth. We have investigated a number of the most popular interest point detectors. This is done in relation to the number of interest points, the recall rate as a function of camera position and light variation, and the sensitivity relative to model parameter change. The overall conclusion is that the Harris corner detector has a very high recall rate, but is sensitive to change in scale. The Hessian ...
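A recall-rate measure of this kind can be sketched very simply: count the fraction of reference-view interest points that are re-detected within some spatial tolerance in another view. The following is a hypothetical simplification (it assumes the points are already mapped into a common frame; names and the Euclidean tolerance are illustrative, not the paper's exact protocol):

```python
def recall_rate(ref_points, other_points, tol):
    """Fraction of reference-view interest points re-detected within
    distance tol in another view (points given as (x, y) tuples)."""
    hits = 0
    for (x, y) in ref_points:
        # A reference point is "recalled" if any detection in the other
        # view lies within the tolerance radius.
        if any((x - u) ** 2 + (y - v) ** 2 <= tol ** 2
               for (u, v) in other_points):
            hits += 1
    return hits / len(ref_points)

rate = recall_rate([(0.0, 0.0), (5.0, 5.0)],
                   [(0.5, 0.0), (100.0, 100.0)], tol=1.0)  # 0.5
```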

  2. Point defects in solids

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    The principal properties of point defects are studied: thermodynamics, electronic structure, interactions with extended defects, and production by irradiation. Some measuring methods are presented: atomic diffusion, spectroscopic methods, diffuse scattering of neutrons and X rays, positron annihilation, and molecular dynamics. Point defects in various materials are then investigated: ionic crystals, oxides, semiconductor materials, metals, intermetallic compounds, carbides, nitrides [fr

  3. Indexing Moving Points

    DEFF Research Database (Denmark)

    Agarwal, Pankaj K.; Arge, Lars Allan; Erickson, Jeff

    2003-01-01

    We propose three indexing schemes for storing a set S of N points in the plane, each moving along a linear trajectory, so that any query of the following form can be answered quickly: Given a rectangle R and a real value t, report all K points of S that lie inside R at time t. We first present an...

  4. Generalized zero point anomaly

    International Nuclear Information System (INIS)

    Nogueira, Jose Alexandre; Maia Junior, Adolfo

    1994-01-01

    The Zero Point Anomaly (ZPA) is defined as the difference between the Effective Potential (EP) and the Zero Point Energy (ZPE). It is shown, for a massive and interacting scalar field, that under very general conditions the renormalized ZPA vanishes, so that the renormalized EP and ZPE coincide. (author). 3 refs

  5. Poisson branching point processes

    International Nuclear Information System (INIS)

    Matsuo, K.; Teich, M.C.; Saleh, B.E.A.

    1984-01-01

    We investigate the statistical properties of a special branching point process. The initial process is assumed to be a homogeneous Poisson point process (HPP). The initiating events at each branching stage are carried forward to the following stage. In addition, each initiating event independently contributes a nonstationary Poisson point process (whose rate is a specified function) located at that point. The additional contributions from all points of a given stage constitute a doubly stochastic Poisson point process (DSPP) whose rate is a filtered version of the initiating point process at that stage. The process studied is a generalization of a Poisson branching process in which random time delays are permitted in the generation of events. Particular attention is given to the limit in which the number of branching stages is infinite while the average number of added events per event of the previous stage is infinitesimal. In the special case when the branching is instantaneous this limit of continuous branching corresponds to the well-known Yule--Furry process with an initial Poisson population. The Poisson branching point process provides a useful description for many problems in various scientific disciplines, such as the behavior of electron multipliers, neutron chain reactions, and cosmic ray showers
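
The staged construction described above can be mimicked in a short simulation. The exponential-decay rate kernel, the parameter names, and the stage structure below are illustrative assumptions for a sketch, not the authors' formulation: each stage carries its initiating events forward and lets each one contribute a delayed Poisson cluster.

```python
import math
import random

def poisson_sample(lam, rng):
    """Draw a Poisson(lam) count (Knuth's multiplication method)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def hpp(rate, T, rng):
    """Homogeneous Poisson point process on [0, T] -- the initial stage."""
    events, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t > T:
            return events
        events.append(t)

def branch_stage(events, mu, tau, T, rng):
    """One branching stage: every initiating event is carried forward, and each
    independently adds a Poisson cluster (mean mu added events, exponential
    delay with time constant tau -- an assumed, illustrative rate kernel)."""
    out = list(events)                          # events carried forward
    for t0 in events:
        for _ in range(poisson_sample(mu, rng)):
            t = t0 + rng.expovariate(1.0 / tau)  # random time delay
            if t <= T:
                out.append(t)
    return sorted(out)
```

Iterating `branch_stage` then plays the role of the cascade; with mean `mu` per event held below 1, the stage populations stay finite, mirroring the continuous-branching limit discussed in the abstract.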

  6. The Lagrangian Points

    Science.gov (United States)

    Linton, J. Oliver

    2017-01-01

    There are five unique points in a star/planet system where a satellite can be placed whose orbital period is equal to that of the planet. Simple methods for calculating the positions of these points, or at least justifying their existence, are developed.
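
A simple calculation of the collinear points can be sketched as a bisection root-find on the axial force balance in the dimensionless circular restricted three-body problem; the bracketing intervals and the Sun-Earth mass ratio below are illustrative choices, not values from the article.

```python
def axial_balance(x, mu):
    """Net axial acceleration in the rotating frame (dimensionless CR3BP):
    primaries of mass 1-mu and mu sit at x = -mu and x = 1-mu."""
    return (x
            - (1 - mu) * (x + mu) / abs(x + mu) ** 3
            - mu * (x - 1 + mu) / abs(x - 1 + mu) ** 3)

def bisect(f, a, b, steps=200):
    """Bisection for a root of f in [a, b], assuming f changes sign there."""
    fa = f(a)
    for _ in range(steps):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Sun-Earth mass ratio; L1 lies between the bodies, L2 just beyond the planet.
mu = 3.003e-6
L1 = bisect(lambda x: axial_balance(x, mu), 0.9, 1 - mu - 1e-9)
L2 = bisect(lambda x: axial_balance(x, mu), 1 - mu + 1e-9, 1.1)
# L4 and L5 need no root-finding: they sit at (1/2 - mu, +/- sqrt(3)/2).
```

For a small mass ratio both roots land close to the familiar Hill-radius estimate, one Hill radius on either side of the planet.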

  7. Multispectral Image Feature Points

    Directory of Open Access Journals (Sweden)

    Cristhian Aguilera

    2012-09-01

    Full Text Available This paper presents a novel feature point descriptor for the multispectral image case: Far-Infrared and Visible Spectrum images. It allows matching interest points on images of the same scene but acquired in different spectral bands. Initially, points of interest are detected on both images through a SIFT-like based scale space representation. Then, these points are characterized using an Edge Oriented Histogram (EOH descriptor. Finally, points of interest from multispectral images are matched by finding nearest couples using the information from the descriptor. The provided experimental results and comparisons with similar methods show both the validity of the proposed approach as well as the improvements it offers with respect to the current state-of-the-art.
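
The final step, "finding nearest couples using the information from the descriptor", can be sketched as a brute-force nearest-neighbour search over descriptor vectors. The toy vectors in the usage below are stand-ins, not actual EOH output, and the function name is an assumption for illustration.

```python
import math

def match_descriptors(desc_a, desc_b, max_dist=float("inf")):
    """Greedy nearest-neighbour matching between two lists of descriptor
    vectors (e.g. edge-orientation histograms). Returns (i, j) index pairs;
    pairs farther apart than max_dist are discarded."""
    matches = []
    for i, da in enumerate(desc_a):
        best_j, best_d = None, max_dist
        for j, db in enumerate(desc_b):
            d = math.dist(da, db)       # Euclidean distance between vectors
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j))
    return matches
```

Usage: `match_descriptors([(0, 0), (1, 1)], [(1, 1.1), (0, 0.1)])` pairs each point in the first image with its nearest descriptor in the second.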

  8. AMS analyses at ANSTO

    Energy Technology Data Exchange (ETDEWEB)

    Lawson, E.M. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia). Physics Division

    1998-03-01

    The major use of ANTARES is Accelerator Mass Spectrometry (AMS) with {sup 14}C being the most commonly analysed radioisotope - presently about 35 % of the available beam time on ANTARES is used for {sup 14}C measurements. The accelerator measurements are supported by, and dependent on, a strong sample preparation section. The ANTARES AMS facility supports a wide range of investigations into fields such as global climate change, ice cores, oceanography, dendrochronology, anthropology, and classical and Australian archaeology. Described here are some examples of the ways in which AMS has been applied to support research into the archaeology, prehistory and culture of this continent's indigenous Aboriginal peoples. (author)

  9. AMS analyses at ANSTO

    International Nuclear Information System (INIS)

    Lawson, E.M.

    1998-01-01

    The major use of ANTARES is Accelerator Mass Spectrometry (AMS) with 14 C being the most commonly analysed radioisotope - presently about 35 % of the available beam time on ANTARES is used for 14 C measurements. The accelerator measurements are supported by, and dependent on, a strong sample preparation section. The ANTARES AMS facility supports a wide range of investigations into fields such as global climate change, ice cores, oceanography, dendrochronology, anthropology, and classical and Australian archaeology. Described here are some examples of the ways in which AMS has been applied to support research into the archaeology, prehistory and culture of this continent's indigenous Aboriginal peoples. (author)

  10. Analyses of MHD instabilities

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki

    1985-01-01

    In this article, analyses of the MHD stabilities which govern the global behavior of a fusion plasma are described from the viewpoint of numerical computation. First, we describe the high-accuracy calculation of the MHD equilibrium and then the analysis of the linear MHD instability. The former is the basis of the stability analysis, and the latter is closely related to the limiting beta value, which is a very important theoretical issue in tokamak research. To attain a stable tokamak plasma with good confinement properties it is necessary to control or suppress disruptive instabilities. We next describe the nonlinear MHD instabilities that relate to the disruption phenomena. Lastly, we describe vectorization of the MHD codes. The above MHD codes for fusion plasma analyses are relatively simple though very time-consuming, and the parts of the codes that need a lot of CPU time are concentrated in a small portion of the code; moreover, the codes are usually used by their own developers, which makes it comparatively easy to attain a high performance ratio on a vector processor. (author)

  11. Sensitivity of nuclear power plant structural response to aircraft impact

    International Nuclear Information System (INIS)

    Buchhardt, F.; Magiera, G.; Matthees, W.; Weber, M.

    1984-01-01

    In this paper a sensitivity study for aircraft impact is performed concerning the excitation of internal components, with particular regard to nonlinear structural material behaviour in the impact area. The nonlinear material values are varied within the bandwidth of suitable material strength, depending on local stiffness pre-calculations. The analyses are then performed on a globally discretized three-dimensional finite element model of a nuclear power plant, using a relatively fine mesh. For specified nodal points results are evaluated by comparing their response spectra. (Author) [pt

  12. Sensitivity studies of a PCV under earthquake loading conditions

    International Nuclear Information System (INIS)

    Maraslioglu, B.; Shamshiri, I.

    1987-01-01

    The results point out the particular sensitivity of the modeling and of the time history method analyses. Due to the lack of more precise general statements in the literature and regulations that would help to find an optimal design, the analyst must make considerable effort to find realistic, conservative, yet not overestimated results. To obtain more precise data from the temporal combination in the time history method, he has to abandon the security supplied by enveloped, smoothed and broadened spectra in response spectrum method analysis (RSMA). It is therefore advisable to prefer RSMA, or to perform several calculations with variations of frequency, ground shear modulus and earthquake loading condition. (orig./HP)

  13. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    Kevin Coppersmith

    2001-01-01

    The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called ''conservative'' assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the ''reasonable assurance'' approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository

  14. Do acupuncture points exist?

    International Nuclear Information System (INIS)

    Yan Xiaohui; Zhang Xinyi; Liu Chenglin; Dang, Ruishan; Huang Yuying; He Wei; Ding Guanghong

    2009-01-01

    We used synchrotron x-ray fluorescence analysis to probe the distribution of four chemical elements in and around acupuncture points, two located in the forearm and two in the lower leg. Three of the four acupuncture points showed significantly elevated concentrations of elements Ca, Fe, Cu and Zn in relation to levels in the surrounding tissue, with similar elevation ratios for Cu and Fe. The mapped distribution of these elements implies that each acupuncture point seems to be elliptical with the long axis along the meridian. (note)

  15. Do acupuncture points exist?

    Energy Technology Data Exchange (ETDEWEB)

    Yan Xiaohui; Zhang Xinyi [Department of Physics, Surface Physics Laboratory (State Key Laboratory), and Synchrotron Radiation Research Center of Fudan University, Shanghai 200433 (China); Liu Chenglin [Physics Department of Yancheng Teachers' College, Yancheng 224002 (China); Dang, Ruishan [Second Military Medical University, Shanghai 200433 (China); Huang Yuying; He Wei [Beijing Synchrotron Radiation Facility, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100039 (China); Ding Guanghong [Shanghai Research Center of Acupuncture and Meridian, Pudong, Shanghai 201203 (China)

    2009-05-07

    We used synchrotron x-ray fluorescence analysis to probe the distribution of four chemical elements in and around acupuncture points, two located in the forearm and two in the lower leg. Three of the four acupuncture points showed significantly elevated concentrations of elements Ca, Fe, Cu and Zn in relation to levels in the surrounding tissue, with similar elevation ratios for Cu and Fe. The mapped distribution of these elements implies that each acupuncture point seems to be elliptical with the long axis along the meridian. (note)

  16. Sensory sensitivity and identification and hedonic assessment of olfactory stimuli

    Directory of Open Access Journals (Sweden)

    Borys Ruszpel

    2012-06-01

    Full Text Available The research conducted had an exploratory character. It focused on connections between temperament and olfactory functioning, in particular the identification and affective assessment of olfactory stimuli. The main research question dealt with potential correlations between sensory sensitivity (a dimension of the temperament questionnaire FCZ‑KT) and the declarative and objective ability to identify presented odours, and their assessment. Fifty-four schoolgirls from a Warsaw secondary school participated in the research; they were asked to fill in the FCZ‑KT questionnaire and to evaluate each of 16 smell samples. Analyses revealed a significant positive correlation between declared familiarity and accurate odour identification (odours that were subjectively known were recognized more accurately than unknown ones) and a positive correlation between declared familiarity and affective assessment (odours that were known were assessed as more pleasant than unknown ones). Sensory sensitivity was correlated neither with the declarative nor with the real ability to identify smells; however, sensory sensitivity was positively correlated with affective assessment (the higher the score on the sensory sensitivity dimension, the more pleasantly odours were assessed in general). Analyses also revealed a number of connections between other dimensions of the FCZ‑KT questionnaire (perseverance, liveliness, stamina) and the ability, both objective and subjective, to correctly identify the odours that were most difficult to recognize. The completed project might be perceived as a starting point for further research concerning relationships between temperament, olfactory functioning, and food preferences among patients diagnosed with eating disorders such as anorexia nervosa, bulimia nervosa, and obesity.

  17. A simple beam analyser

    International Nuclear Information System (INIS)

    Lemarchand, G.

    1977-01-01

    (ee'p) experiments allow one to measure the missing-energy distribution as well as the momentum distribution of the extracted proton in the nucleus versus the missing energy. Such experiments are presently conducted on SACLAY's A.L.S. 300 Linac. Electrons and protons are respectively analysed by two spectrometers and detected in their focal planes. Counting rates are usually low and include both time coincidences and accidentals. Since the signal-to-noise ratio depends on the physics of the experiment and the resolution of the coincidence, it is mandatory to obtain a beam current distribution as flat as possible. The use of new technologies has made it possible to monitor the behavior of the beam pulse in real time and to determine when the duty cycle can be considered good with respect to a numerical criterion

  18. EEG analyses with SOBI.

    Energy Technology Data Exchange (ETDEWEB)

    Glickman, Matthew R.; Tang, Akaysha (University of New Mexico, Albuquerque, NM)

    2009-02-01

    The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.
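
SOBI separates sources by (approximately) jointly diagonalizing several time-lagged covariance matrices of the multichannel recording. As an illustration of the underlying idea only, the sketch below implements AMUSE, a simpler single-lag relative of SOBI; it is not the pipeline used in the project, and all parameters are assumptions.

```python
import numpy as np

def amuse(X, lag=1):
    """AMUSE blind source separation: whiten the (channels, samples) data,
    then diagonalize one symmetrized time-lagged covariance matrix.
    SOBI extends this by jointly diagonalizing covariances at many lags."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening from the zero-lag covariance
    C0 = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(C0)
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    Z = W @ X
    # Symmetrized lagged covariance of the whitened data
    C1 = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
    C1 = 0.5 * (C1 + C1.T)
    _, V = np.linalg.eigh(C1)
    return V.T @ Z          # estimated sources, one per row
```

On a toy mixture of two sinusoids with different frequencies (hence different autocorrelations), the recovered rows line up with the original sources up to sign and order.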

  19. Pathway-based analyses.

    Science.gov (United States)

    Kent, Jack W

    2016-02-03

    New technologies for acquisition of genomic data, while offering unprecedented opportunities for genetic discovery, also impose severe burdens of interpretation and penalties for multiple testing. The Pathway-based Analyses Group of the Genetic Analysis Workshop 19 (GAW19) sought reduction of multiple-testing burden through various approaches to aggregation of high-dimensional data in pathways informed by prior biological knowledge. Experimental methods tested included the use of "synthetic pathways" (random sets of genes) to estimate power and false-positive error rate of methods applied to simulated data; data reduction via independent components analysis, single-nucleotide polymorphism (SNP)-SNP interaction, and use of gene sets to estimate genetic similarity; and general assessment of the efficacy of prior biological knowledge to reduce the dimensionality of complex genomic data. The work of this group explored several promising approaches to managing high-dimensional data, with the caveat that these methods are necessarily constrained by the quality of external bioinformatic annotation.

  20. Analysing Access Control Specifications

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof

    2009-01-01

    When prosecuting crimes, the main question to answer is often who had a motive and the possibility to commit the crime. When investigating cyber crimes, the question of possibility is often hard to answer, as in a networked system almost any location can be accessed from almost anywhere. The most common tool to answer this question, analysis of log files, faces the problem that the amount of logged data may be overwhelming. This problem gets even worse in the case of insider attacks, where the attacker's actions usually will be logged as permissible, standard actions, if they are logged at all. Recent events have revealed intimate knowledge of surveillance and control systems on the side of the attacker, making it often impossible to deduce the identity of an inside attacker from logged data. In this work we present an approach that analyses the access control configuration to identify the set...

  1. Attitude stability analyses for small artificial satellites

    International Nuclear Information System (INIS)

    Silva, W R; Zanardi, M C; Formiga, J K S; Cabette, R E S; Stuchi, T J

    2013-01-01

    The objective of this paper is to analyze the stability of the rotational motion of a symmetrical spacecraft in a circular orbit. The equilibrium points and regions of stability are established when components of the gravity gradient torque acting on the spacecraft are included in the equations of rotational motion, which are described by the Andoyer variables. The nonlinear stability of the equilibrium points of the rotational motion is analysed here by the Kovalev-Savchenko theorem, with which it is possible to verify whether they remain stable under the influence of the higher-order terms of the normal Hamiltonian. In this paper, numerical simulations are made for a small hypothetical artificial satellite. Several stable equilibrium points were determined, and regions around these points have been established by variations in the orbital inclination and in the spacecraft's principal moments of inertia. The present analysis can contribute directly to the maintenance of the spacecraft's attitude

  2. Marine Point Forecasts

    Science.gov (United States)

    The interactive map will link to the zone forecast and then allow further zooming to the point of interest.

  3. Critical-point nuclei

    International Nuclear Information System (INIS)

    Clark, R.M.

    2004-01-01

    It has been suggested that a change of nuclear shape may be described in terms of a phase transition and that specific nuclei may lie close to the critical point of the transition. Analytical descriptions of such critical-point nuclei have been introduced recently and they are described briefly. The results of extensive searches for possible examples of critical-point behavior are presented. Alternative pictures, such as describing bands in the candidate nuclei using simple ΔK = 0 and ΔK = 2 rotational-coupling models, are discussed, and the limitations of the different approaches highlighted. A possible critical-point description of the transition from a vibrational to rotational pairing phase is suggested

  4. National Wetlands Inventory Points

    Data.gov (United States)

    Minnesota Department of Natural Resources — Wetland point features (typically wetlands that are too small to be mapped as area features at the data scale) mapped as part of the National Wetlands Inventory (NWI). The...

  5. Allegheny County Address Points

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset contains address points which represent physical address locations assigned by the Allegheny County addressing authority. Data is updated by County...

  6. Iowa Geologic Sampling Points

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — Point locations of geologic samples/files in the IGS repository. Types of samples include well cuttings, outcrop samples, cores, drillers logs, measured sections,...

  7. Characterizing fixed points

    Directory of Open Access Journals (Sweden)

    Sanjo Zlobec

    2017-04-01

    Full Text Available A set of sufficient conditions which guarantee the existence of a point x⋆ such that f(x⋆) = x⋆ is called a "fixed point theorem". Many such theorems are named after well-known mathematicians and economists. Fixed point theorems are among the most useful in applied mathematics, especially in economics and game theory. A particularly important theorem in these areas is Kakutani's fixed point theorem, which ensures the existence of a fixed point for point-to-set mappings, e.g., [2, 3, 4]. John Nash developed and applied Kakutani's ideas to prove the existence of what became known as "Nash equilibrium" for finite games with mixed strategies for any number of players. This work earned him a Nobel Prize in Economics that he shared with two mathematicians. Nash's life was dramatized in the movie "A Beautiful Mind" in 2001. In this paper, we approach the system f(x) = x differently. Instead of studying the existence of its solutions, our objective is to determine conditions which are both necessary and sufficient for an arbitrary point x⋆ to be a fixed point, i.e., to satisfy f(x⋆) = x⋆. The existence of solutions for a continuous function f of a single variable is easy to establish using the Intermediate Value Theorem of calculus. However, characterizing fixed points x⋆, i.e., providing both necessary and sufficient conditions for an arbitrary given x⋆ to satisfy f(x⋆) = x⋆, is not simple even for functions of a single variable, and it is possible that constructive answers do not exist. Our objective is to find them. Our work may require some less familiar tools; one of these might be the "quadratic envelope characterization of a zero-derivative point" recalled in the next section. The results are taken from the author's current research project "Studying the Essence of Fixed Points". They are believed to be original. The author has received several feedbacks on the preliminary report and on parts of the project

  8. Fundamental data analyses for measurement control

    International Nuclear Information System (INIS)

    Campbell, K.; Barlich, G.L.; Fazal, B.; Strittmatter, R.B.

    1987-02-01

    A set of measurement control data analyses was selected for use by analysts responsible for maintaining the measurement quality of nuclear materials accounting instrumentation. The analyses consist of control charts for bias and precision and statistical tests used as analytic supplements to the control charts. They provide the desired detection sensitivity and yet can be interpreted locally, quickly, and easily. The control charts provide for visual inspection of data and enable an alert reviewer to spot problems, possibly before statistical tests detect them. The statistical tests are useful for automating the detection of departures from the controlled state or from the underlying assumptions (such as normality). 8 refs., 3 figs., 5 tabs
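
The control-chart part of such a scheme reduces to computing limits from in-control history and flagging excursions. The sketch below is a generic Shewhart-style individuals chart, not the implementation described in the report; the 3-sigma multiplier is the conventional default, assumed here.

```python
def control_limits(history, k=3.0):
    """Shewhart-style control limits from historical in-control measurements:
    sample mean plus/minus k sample standard deviations."""
    n = len(history)
    mean = sum(history) / n
    sd = (sum((x - mean) ** 2 for x in history) / (n - 1)) ** 0.5
    return mean - k * sd, mean + k * sd

def out_of_control(values, lo, hi):
    """Indices of new measurements falling outside the control limits."""
    return [i for i, v in enumerate(values) if not lo <= v <= hi]
```

Usage: fit limits on a qualification period, then screen each new bias or precision estimate; any flagged index triggers review of the instrument.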

  9. Photoacoustic Point Source

    International Nuclear Information System (INIS)

    Calasso, Irio G.; Craig, Walter; Diebold, Gerald J.

    2001-01-01

    We investigate the photoacoustic effect generated by heat deposition at a point in space in an inviscid fluid. Delta-function and long Gaussian optical pulses are used as sources in the wave equation for the displacement potential to determine the fluid motion. The linear sound-generation mechanism gives bipolar photoacoustic waves, whereas the nonlinear mechanism produces asymmetric tripolar waves. The salient features of the photoacoustic point source are that rapid heat deposition and nonlinear thermal expansion dominate the production of ultrasound

  10. Unconventional Quantum Critical Points

    OpenAIRE

    Xu, Cenke

    2012-01-01

    In this paper we review the theory of unconventional quantum critical points that are beyond Landau's paradigm. Three types of unconventional quantum critical points will be discussed: (1) the transition between topological order and a semiclassical spin-ordered phase; (2) the transition between topological order and a valence bond solid phase; (3) the direct second-order transition between different competing orders. We focus on the field theory and universality class of these unconventio...

  11. SharePoint governance

    OpenAIRE

    Ali, Mudassar

    2013-01-01

    Master's thesis in Information and Communication Technology, IKT590, 2013 – University of Agder, Grimstad. SharePoint is a web-based business collaboration platform from Microsoft which is very robust and dynamic in nature. The platform has been on the market for more than a decade and has been adopted by a large number of organisations in the world. The platform has become larger in scale, richer in features and is improving consistently with every new version. However, SharePoint ...

  12. Dynamic Pointing Triggers Shifts of Visual Attention in Young Infants

    Science.gov (United States)

    Rohlfing, Katharina J.; Longo, Matthew R.; Bertenthal, Bennett I.

    2012-01-01

    Pointing, like eye gaze, is a deictic gesture that can be used to orient the attention of another person towards an object or an event. Previous research suggests that infants first begin to follow a pointing gesture between 10 and 13 months of age. We investigated whether sensitivity to pointing could be seen at younger ages employing a technique…

  13. Maternal sensitivity: a concept analysis.

    Science.gov (United States)

    Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae

    2008-11-01

    The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.

  14. Seismic fragility analyses

    International Nuclear Information System (INIS)

    Kostov, Marin

    2000-01-01

    In the last two decades an increasing number of probabilistic seismic risk assessments have been performed. The basic ideas of the procedure for performing a Probabilistic Safety Analysis (PSA) of critical structures (NUREG/CR-2300, 1983) can also be used for normal industrial and residential buildings, dams or other structures. The general formulation of the risk assessment procedure applied in this investigation is presented in Franzini, et al., 1984. The probability of failure of a structure over an expected lifetime (for example 50 years) can be obtained from the annual frequency of failure β_E, determined by the relation β_E = ∫ [dβ(x)/dx] P(f|x) dx, where β(x) is the annual frequency of exceedance of load level x (for example, the variable x may be peak ground acceleration) and P(f|x) is the conditional probability of structure failure at a given seismic load level x. The problem leads to the assessment of the seismic hazard β(x) and the fragility P(f|x). The seismic hazard curves are obtained by probabilistic seismic hazard analysis. The fragility curves are obtained after the response of the structure is treated probabilistically and its capacity and the associated uncertainties are assessed. Finally, the fragility curves are combined with the seismic loading to estimate the frequency of failure for each critical scenario; the frequency of failure due to a seismic event is represented by the scenario with the highest frequency. The tools usually applied for probabilistic safety analyses of critical structures can relatively easily be adapted to ordinary structures. The key problems are the seismic hazard definition and the fragility analyses. The fragility can be derived either based on scaling procedures or on the basis of generation. Both approaches have been presented in the paper. After the seismic risk (in terms of failure probability) is assessed there are several approaches for risk reduction. Generally the methods could be classified in two groups. The
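
The annual failure frequency integral combining hazard and fragility can be evaluated numerically once both curves are chosen. In the sketch below the power-law hazard and the lognormal fragility, with all their parameter values, are illustrative assumptions, not data from the paper.

```python
import math

def hazard(x, k=1e-4, a=2.0):
    """Annual frequency of exceeding load level x (assumed power-law form)."""
    return k * x ** (-a)

def fragility(x, median=0.6, beta=0.4):
    """Conditional failure probability P(f|x) as a lognormal CDF
    (assumed median capacity and logarithmic standard deviation)."""
    return 0.5 * (1 + math.erf(math.log(x / median) / (beta * math.sqrt(2))))

def annual_failure_frequency(x_lo=0.05, x_hi=3.0, n=2000):
    """beta_E: integrate |d(hazard)/dx| * P(f|x) over the load range,
    using a simple midpoint/finite-difference rule."""
    dx = (x_hi - x_lo) / n
    total = 0.0
    for i in range(n):
        x0, x1 = x_lo + i * dx, x_lo + (i + 1) * dx
        slope = abs(hazard(x1) - hazard(x0)) / dx   # |d beta / dx|
        total += slope * fragility(0.5 * (x0 + x1)) * dx
    return total
```

By construction the result cannot exceed the hazard at the lowest load level considered, since the conditional failure probability is at most one.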

  15. Website-analyse

    DEFF Research Database (Denmark)

    Thorlacius, Lisbeth

    2009-01-01

    The website is increasingly the preferred medium for information searching, corporate presentation, e-commerce, entertainment, education and social contact. In step with this growing diversity of communication activities on the net, more focus has been placed on optimising the design and planning of the functional and content-related aspects of websites. A large number of theory and method books specialise in the technical issues of interaction and navigation, as well as in the linguistic content of websites. The Danish HCI (Human Computer Interaction... ...or dead ends when he/she visits the site. Studies in the design and analysis of the visual and aesthetic aspects of the planning and use of websites have, however, only to a limited extent received reflective treatment. That is the background for this chapter, which opens with a review of aesthetic...

  16. A channel profile analyser

    International Nuclear Information System (INIS)

    Gobbur, S.G.

    1983-01-01

    It is well understood that, due to the wide-band noise present in a nuclear analog-to-digital converter, events at the boundaries of adjacent channels are shared. It is a difficult and laborious process to determine exactly the shape of the channels at the boundaries. A simple scheme has been developed for the direct display of the channel shape of any type of ADC on a cathode ray oscilloscope display. This has been accomplished by sequentially incrementing the reference voltage of a precision pulse generator by a fraction of a channel and storing ADC data in alternate memory locations of a multichannel pulse height analyser. Alternate channels are needed because of the sharing at the boundaries of channels. In the flat region of the profile, alternate memory locations are channels with zero counts and channels with the full-scale counts. At the boundaries, all memory locations will have counts. The resulting shape is a direct display of the channel boundaries. (orig.)

  17. An evaluation of the Olympus "Quickrate" analyser.

    Science.gov (United States)

    Williams, D G; Wood, R J; Worth, H G

    1979-02-01

    The Olympus "Quickrate", a photometer built for both kinetic and end-point analysis, was evaluated in this laboratory. Aspartate transaminase, lactate dehydrogenase, hydroxybutyrate dehydrogenase, creatine kinase, alkaline phosphatase and gamma-glutamyl transpeptidase were measured in the kinetic mode, and glucose, urea, total protein, albumin, bilirubin, calcium and iron in the end-point mode. Overall, good correlation was observed with routine methodologies and the precision of the methods was acceptable. An electrical evaluation was also performed. In our hands, the instrument proved to be simple to use and gave no trouble. It should prove useful for paediatric and emergency work, and as a back-up for other analysers.

  18. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    Science.gov (United States)

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three sizes for meta-analyses (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, instead shows convergence problems. The random-effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification together with convergence robustness should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.
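The accuracy criteria reported here (bias, standard error, mean squared error) can be illustrated with a toy Monte Carlo sketch. The univariate random-effects generator and all parameter values below are illustrative stand-ins, not the study's bivariate mixed model:

```python
import numpy as np

rng = np.random.default_rng(0)
true_sens = 0.90          # true sensitivity on the probability scale
n_studies, n_reps = 30, 500

estimates = []
for _ in range(n_reps):
    # Per-study logit-sensitivities with between-study heterogeneity
    # (a univariate stand-in for the bivariate random-effects structure).
    logit = np.log(true_sens / (1 - true_sens)) + rng.normal(0, 0.3, n_studies)
    n_diseased = rng.integers(50, 200, n_studies)
    tp = rng.binomial(n_diseased, 1 / (1 + np.exp(-logit)))
    estimates.append(np.mean(tp / n_diseased))   # naive pooled estimate

estimates = np.asarray(estimates)
bias = estimates.mean() - true_sens              # systematic deviation
se = estimates.std(ddof=1)                       # spread across replications
mse = float(np.mean((estimates - true_sens) ** 2))
print(bias, se, mse)
```

Even this naive pooled mean shows a small negative bias, since averaging on the probability scale understates a sensitivity generated on the logit scale; this is the kind of deviation the simulation study quantifies per implementation.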

  19. Chapter No.4. Safety analyses

    International Nuclear Information System (INIS)

    2002-01-01

    for NPP V-1 Bohunice and on review of the impact of the modelling of selected components on the results of the calculational safety analysis (a sensitivity study for NPP Mochovce). In 2001 UJD joined a new European project, Alternative Approaches to the Safety Performance Indicators. The project is aimed at collecting information and determining approaches and recommendations for implementation of risk-oriented indicators, identification of the impact of the safety culture level and organisational culture on safety, and application of indicators to the needs of regulators and operators. In the frame of the PHARE project UJD participated in the task focused on severe accident mitigation for nuclear power plants with VVER-440/V213 units. The main results of the analyses of nuclear power plant responses to severe accidents were summarised and the state of their analytical base performed in the past was evaluated within the project. Possible severe accident mitigation and preventive measures were proposed and their applicability to nuclear power plants with VVER-440/V213 was investigated. The obtained results will be used in the assessment activities and accident management of UJD. UJD has also been involved in the EVITA project, which forms part of the 5th EC Framework Programme. The project aims at validation of the European computer code ASTEC, dedicated to severe accident modelling. In 2001 the ASTEC computer code was tested on different platforms. The results of the testing are summarised in the technical report of the EC issued in September 2001. Further activities within this project were focused on performing analyses of selected accident scenarios and comparing the obtained results with analyses realised with the help of other computer codes. The work on the project will continue in 2002. In 2001 groundwork on establishing the Centre for Nuclear Safety in Central and Eastern Europe (CENS), the seat of which is going to be in Bratislava, has continued. 
The

  20. Characterization of MIPAS elevation pointing

    Directory of Open Access Journals (Sweden)

    M. Kiefer

    2007-01-01

    Full Text Available Sufficient knowledge of the pointing is essential for analyses of limb emission measurements. The scientific retrieval processor for MIPAS on ENVISAT operated at IMK allows the retrieval of pointing information in terms of tangent altitudes along with temperature. The retrieved tangent altitudes are independent of systematic offsets in the engineering Line-Of-Sight (LOS) information delivered with the ESA Level 1b product. The difference between the pointing retrieved from the reprocessed high-resolution MIPAS spectra and the engineering pointing information was examined with respect to spatial/temporal behaviour. Among others, the following characteristics of MIPAS pointing could be identified: Generally the engineering tangent altitudes are too high by 0–1.8 km, with conspicuous variations in this range over time. Prior to December 2003 there was a drift of about 50–100 m/h, which was due to a slow change in the satellite attitude. A correction of this attitude is done twice a day, which leads to discontinuities on the order of 1–1.5 km in the tangent altitudes. Occasionally discontinuities up to 2.5 km are found, as already reported from MIPAS and SCIAMACHY observations. After an update of the orbit position software in December 2003, values of drift and jumps are much reduced. There is a systematic difference in the mispointing between the poles which amounts to 1.5–2 km, i.e. there is a conspicuous orbit-periodic feature. The analysis of the correlation between the instrument's viewing-angle azimuth and differential mispointing supports the hypothesis that a major part of this latter phenomenon can be attributed to an error in the roll angle of the satellite/instrument system of approximately 42 mdeg. One conclusion is that ESA level 2 data should be compared to other data exclusively on tangent pressure levels. Complementary to IMK data, ESA operational LOS calibration results were used to characterize MIPAS pointing. For this purpose

  1. New Novae snack point

    CERN Document Server

    2012-01-01

    Located next to the car park by the flag poles, a few metres from the Main CERN Reception (building 33), a new snack point catered by Novae will open to the public on Wednesday 8 August. More information will be available in the next issue of the Bulletin!

  2. [Clinical key points. Gonioscopy].

    Science.gov (United States)

    Hamard, P

    2007-05-01

    Gonioscopy should be performed in all patients with glaucoma or suspected of having glaucoma. Four points are systematically evaluated: the width of the angle, the degree of trabecular pigmentation, the iridotrabecular appositions or synechia, and the level of iris insertion and the shape of the peripheral iris profile.

  3. Point Lepreau generating station

    International Nuclear Information System (INIS)

    Ganong, G.H.D.; Strang, A.E.; Gunter, G.E.; Thompson, T.S.

    Point Lepreau-1 reactor is a 600 MWe generating station expected to be in service by October 1979. New Brunswick is suffering a 'catch up' phenomenon in load growth and needs to decrease dependence on foreign oil. The site is on salt water and extensive study has gone into corrosion control. Project management, financing and scheduling have unique aspects. (E.C.B.)

  4. Least fixed points revisited

    NARCIS (Netherlands)

    J.W. de Bakker (Jaco)

    1975-01-01

    textabstractParameter mechanisms for recursive procedures are investigated. Contrary to the view of Manna et al., it is argued that both call-by-value and call-by-name mechanisms yield the least fixed points of the functionals determined by the bodies of the procedures concerned. These functionals

  5. Point kinetics modeling

    International Nuclear Information System (INIS)

    Kimpland, R.H.

    1996-01-01

    A normalized form of the point kinetics equations, a prompt jump approximation, and the Nordheim-Fuchs model are used to model nuclear systems. Reactivity feedback mechanisms considered include volumetric expansion, thermal neutron temperature effect, Doppler effect and void formation. A sample problem of an excursion occurring in a plutonium solution accidentally formed in a glovebox is presented

  6. Electro-sensitivity: the physicians-laymen relationship under the test of an environmental pathology

    International Nuclear Information System (INIS)

    Oullion, Amandine

    2014-01-01

    This academic work addresses sociological and anthropological aspects of contemporary techniques, examining in particular how electro-sensitivity contributes to the re-composition of the relationship between physicians and laymen. After some methodological and theoretical considerations, the author first addresses this issue from a macro-sociological and political point of view, i.e. by discussing the role of laymen in the definition and handling of public health issues. She analyses the implication of electro-sensitive persons in the process of politicisation of their disease, and in research arrangements implemented on these issues. She then addresses the way electro-sensitivity reshapes the meeting between physician and patient. In the last part, she addresses the way electro-sensitivity questions the medical institution itself in terms of medical practices, and of power relationships within this institution

  7. Diagnosing patients at point of care

    CSIR Research Space (South Africa)

    Vilakazi, CB

    2015-10-01

    Full Text Available of pregnant women, the Cellnostics portable blood analyser and paper-based diagnostic solutions. Umbiflow is a Doppler ultrasound device that can determine at the primary point of care, such as a clinic, whether a fetus that is small for gestational age...

  8. INDIAN POINT REACTOR STARTUP AND PERFORMANCE

    Energy Technology Data Exchange (ETDEWEB)

    Deddens, J. C.; Batch, M. L.

    1963-09-15

    The testing program for the Indian Point Reactor is discussed. The thermal and hydraulic evaluation of the primary coolant system is discussed. Analyses of fuel loading and initial criticality, measurement of operating coefficients of reactivity, control rod group reactivity worths, and xenon evaluation are presented. (R.E.U.)

  9. NOAA's National Snow Analyses

    Science.gov (United States)

    Carroll, T. R.; Cline, D. W.; Olheiser, C. M.; Rost, A. A.; Nilsson, A. O.; Fall, G. M.; Li, L.; Bovitz, C. T.

    2005-12-01

    NOAA's National Operational Hydrologic Remote Sensing Center (NOHRSC) routinely ingests all of the electronically available, real-time, ground-based, snow data; airborne snow water equivalent data; satellite areal extent of snow cover information; and numerical weather prediction (NWP) model forcings for the coterminous U.S. The NWP model forcings are physically downscaled from their native 13 km2 spatial resolution to a 1 km2 resolution for the CONUS. The downscaled NWP forcings drive an energy-and-mass-balance snow accumulation and ablation model at a 1 km2 spatial resolution and at a 1 hour temporal resolution for the country. The ground-based, airborne, and satellite snow observations are assimilated into the snow model's simulated state variables using a Newtonian nudging technique. The principal advantages of the assimilation technique are: (1) approximate balance is maintained in the snow model, (2) physical processes are easily accommodated in the model, and (3) asynoptic data are incorporated at the appropriate times. The snow model is reinitialized with the assimilated snow observations to generate a variety of snow products that combine to form NOAA's NOHRSC National Snow Analyses (NSA). The NOHRSC NSA incorporate all of the information necessary and available to produce a "best estimate" of real-time snow cover conditions at 1 km2 spatial resolution and 1 hour temporal resolution for the country. The NOHRSC NSA consist of a variety of daily operational products that characterize real-time snowpack conditions including: snow water equivalent, snow depth, surface and internal snowpack temperatures, surface and blowing snow sublimation, and snowmelt for the CONUS. 
The products are generated and distributed in a variety of formats including: interactive maps, time-series, alphanumeric products (e.g., mean areal snow water equivalent on a hydrologic basin-by-basin basis), text and map discussions, map animations, and quantitative gridded products
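The Newtonian nudging assimilation described above can be sketched in miniature: the model state is relaxed toward an observation while the physical tendency keeps acting. The state, tendency, and relaxation constants below are illustrative, not NOHRSC values:

```python
# Minimal sketch of Newtonian nudging: relax a modeled snow water
# equivalent (SWE) state toward an assimilated observation.

dt = 1.0            # time step (h)
tau = 6.0           # nudging relaxation time scale (h)
swe_model = 100.0   # initial model state (mm)
swe_obs = 120.0     # observed SWE being assimilated (mm)

for _ in range(48):
    physics_tendency = -0.1                  # e.g. slow ablation (mm/h)
    nudge = (swe_obs - swe_model) / tau      # Newtonian relaxation term
    swe_model += dt * (physics_tendency + nudge)

print(round(swe_model, 1))  # → 119.4 (pulled to the observation, offset by ablation)
```

The gentle relaxation is what keeps the model approximately balanced, advantage (1) above: the state is steered toward observations over several time steps rather than overwritten at once.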

  10. Generalized tolerance sensitivity and DEA metric sensitivity

    OpenAIRE

    Neralić, Luka; E. Wendell, Richard

    2015-01-01

    This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  11. Generalized tolerance sensitivity and DEA metric sensitivity

    Directory of Open Access Journals (Sweden)

    Luka Neralić

    2015-03-01

    This paper considers the relationship between Tolerance sensitivity analysis in optimization and metric sensitivity analysis in Data Envelopment Analysis (DEA). Herein, we extend the results on the generalized Tolerance framework proposed by Wendell and Chen and show how this framework includes DEA metric sensitivity as a special case. Further, we note how recent results in Tolerance sensitivity suggest some possible extensions of the results in DEA metric sensitivity.

  12. Risk Characterization uncertainties associated description, sensitivity analysis

    International Nuclear Information System (INIS)

    Carrillo, M.; Tovar, M.; Alvarez, J.; Arraez, M.; Hordziejewicz, I.; Loreto, I.

    2013-01-01

    The PowerPoint presentation addresses risks at the estimated levels of exposure, uncertainty and variability in the analysis, sensitivity analysis, risks from exposure to multiple substances, formulation of guidelines for carcinogenic and genotoxic compounds, and risks to subpopulations

  13. Melting point of yttria

    International Nuclear Information System (INIS)

    Skaggs, S.R.

    1977-06-01

    Fourteen samples of 99.999% pure Y2O3 were melted near the focus of a 250-W CO2 laser. The average value of the observed melting point along the solid-liquid interface was 2462 ± 19 °C. Several of these same samples were then melted in ultrahigh-purity oxygen, nitrogen, helium, or argon and in water vapor. No change in the observed temperature was detected, with the exception of a 20 °C increase in temperature from air to helium gas. Post-test examination of the sample characteristics, clarity, sphericity, and density is presented, along with composition. It is suggested that yttria is superior to alumina as a secondary melting-point standard

  14. 'Saddle-point' ionization

    International Nuclear Information System (INIS)

    Gay, T.J.; Hale, E.B.; Irby, V.D.; Olson, R.E.; Missouri Univ., Rolla; Berry, H.G.

    1988-01-01

    We have studied the ionization of rare gases by protons at intermediate energies, i.e., energies at which the velocities of the proton and the target-gas valence electrons are comparable. A significant channel for electron production in the forward direction is shown to be 'saddle-point' ionization, in which electrons are stranded on or near the saddle-point of electric potential between the receding projectile and the ionized target. Such electrons yield characteristic energy spectra, and contribute significantly to forward-electron-production cross sections. Classical trajectory Monte Carlo calculations are found to provide qualitative agreement with our measurements and the earlier measurements of Rudd and coworkers, and reproduce, in detail, the features of the general ionization spectra. (orig.)

  15. Analysis of Consumers' Preferences and Price Sensitivity to Native Chickens.

    Science.gov (United States)

    Lee, Min-A; Jung, Yoojin; Jo, Cheorun; Park, Ji-Young; Nam, Ki-Chang

    2017-01-01

    This study analyzed consumers' preferences and price sensitivity to native chickens. A survey was conducted from Jan 6 to 17, 2014, and data were collected from consumers (n=500) living in Korea. Statistical analyses evaluated the consumption patterns of native chickens, preference marketing for native chicken breeds to be newly developed, and price sensitivity measurement (PSM). Of the subjects who preferred broilers, 24.3% do not purchase native chickens because of the dryness and tough texture, while those who preferred native chickens liked their chewy texture (38.2%). Of the total subjects, 38.2% preferred fried native chicken as a processed food, 38.4% preferred direct sales for native chicken distribution, 51.0% preferred native chickens to be slaughtered in specialty stores, and 32.4% wanted easy access to native chickens. Additionally, the price stress range (PSR) was 50 won, and the point of marginal cheapness (PMC) and point of marginal expensiveness (PME) were 6,980 won and 12,300 won, respectively. Evaluation of the segmented market revealed that consumers who prefer broilers to native chicken breeds are more sensitive to the chicken price. To accelerate the consumption of newly developed native chicken meat, it is necessary to develop the texture each consumer group wants, to increase the accessibility of native chickens, and to offer diverse menus and recipes as well as reasonable pricing for native chickens.
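The PMC and PME reported here come from price sensitivity measurement, which locates intersections of cumulative price-perception curves (van Westendorp style). A minimal sketch with hypothetical survey thresholds, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
prices = np.arange(5000, 15001, 50)  # candidate price grid (won)

# Hypothetical per-respondent thresholds (won) for the four PSM questions.
too_cheap = rng.normal(6500, 800, 500)    # "suspiciously cheap" below this
expensive = rng.normal(9000, 1500, 500)   # "getting expensive" above this
cheap     = rng.normal(10000, 1500, 500)  # "a bargain" below this
too_exp   = rng.normal(12500, 1200, 500)  # "too expensive" above this

def share_at_most(th, p):  # fraction whose threshold lies at or below price p
    return (th <= p).mean()

# PMC: where the "too cheap" curve crosses the "expensive" curve.
pmc_gap = [abs((too_cheap >= p).mean() - share_at_most(expensive, p)) for p in prices]
# PME: where the "cheap" curve crosses the "too expensive" curve.
pme_gap = [abs((cheap >= p).mean() - share_at_most(too_exp, p)) for p in prices]

pmc = int(prices[int(np.argmin(pmc_gap))])  # point of marginal cheapness
pme = int(prices[int(np.argmin(pme_gap))])  # point of marginal expensiveness
print(pmc, pme)
```

The band between PMC and PME is the range of acceptable prices; the study's 6,980-12,300 won interval is exactly this kind of output.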

  16. High boiling point hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Pier, M

    1929-04-29

    A process is given for the production of hydrocarbons of high boiling point, such as lubricating oils, from bituminous substances, such as varieties of coal, shale, or other solid distillable carbonaceous materials. The process consists of treating the initial materials with organic solvents and then subjecting the products extracted from the initial materials, preferably directly, to a reducing treatment in respect to temperature, pressure, and time. The reduction treatment is performed by means of hydrogen under pressure.

  17. Critical point predication device

    International Nuclear Information System (INIS)

    Matsumura, Kazuhiko; Kariyama, Koji.

    1996-01-01

    Predicting a critical point by the existing inverse multiplication method has been a complicated operation, and the effective multiplication factor could not be plotted directly, which degraded the accuracy of the prediction. The present invention comprises a detector counting memory section for memorizing the counts sent from a power detector which monitors the reactor power, an inverse multiplication factor calculation section for calculating the inverse multiplication factor based on the initial and current counts of the power detector, and a critical point prediction section for predicting criticality by the inverse multiplication method relative to effective multiplication factors corresponding to previously determined states of the reactor core. In addition, a reactor core characteristic calculation section is added for analyzing the effective multiplication factor depending on the state of the reactor core. Then, if the margin to criticality falls below a predetermined value during the critical approach, an alarm is generated to stop the operation when succeeding critical operation is predicted to generate a period beyond a predetermined value. With such procedures, the critical point can be easily forecast during the critical approach, greatly reducing the operator's burden and improving handling of the operation. (N.H.)
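The inverse (reverse) multiplication extrapolation on which such prediction rests can be sketched numerically: 1/M, the ratio of initial to current detector counts, falls linearly toward zero as criticality is approached. The count rates and the loading-to-keff relation below are illustrative assumptions:

```python
import numpy as np

# Sketch of the 1/M method: detector counts grow as C0/(1 - keff), so
# extrapolating 1/M = C0/C to zero predicts the critical configuration.

c0 = 100.0                                  # initial subcritical count rate
loadings = np.array([10, 20, 30, 40])       # fuel assemblies loaded so far
keff = 0.02 * loadings + 0.1                # hypothetical linear keff growth
counts = c0 / (1.0 - keff)                  # subcritical multiplication
inv_m = c0 / counts                         # 1/M = 1 - keff

# Linear fit of 1/M versus loading; critical when 1/M reaches zero.
slope, intercept = np.polyfit(loadings, inv_m, 1)
critical_loading = -intercept / slope
print(round(critical_loading, 1))  # → 45.0 (keff = 1 at 45 assemblies)
```

The patent's point is precisely that this indirect plot is laborious and error-prone by hand, which motivates automating the count memorization, the 1/M calculation, and the criticality prediction.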

  18. At the Tipping Point

    Energy Technology Data Exchange (ETDEWEB)

    Wiley, H. S.

    2011-02-28

    There comes a time in every field of science when things suddenly change. While it might not be immediately apparent that things are different, a tipping point has occurred. Biology is now at such a point. The reason is the introduction of high-throughput genomics-based technologies. I am not talking about the consequences of the sequencing of the human genome (and every other genome within reach). The change is due to new technologies that generate an enormous amount of data about the molecular composition of cells. These include proteomics, transcriptional profiling by sequencing, and the ability to globally measure microRNAs and post-translational modifications of proteins. These mountains of digital data can be mapped to a common frame of reference: the organism’s genome. With the new high-throughput technologies, we can generate tens of thousands of data points from each sample. Data are now measured in terabytes and the time necessary to analyze data can now require years. Obviously, we can’t wait to interpret the data fully before the next experiment. In fact, we might never be able to even look at all of it, much less understand it. This volume of data requires sophisticated computational and statistical methods for its analysis and is forcing biologists to approach data interpretation as a collaborative venture.

  19. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well-engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost-effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a highly efficient power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small-rating Remote Area Power Supply systems. The advantages are much greater for larger temperature variations and larger power-rated systems. Other advantages include optimal sizing and system monitoring and control
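Hill-climbing MPPT of the kind mentioned here is commonly realized as perturb-and-observe: step the operating point, keep the direction if power rose, reverse it if power fell. A minimal sketch against a toy panel curve; the P-V model, step size, and voltages are illustrative, not the paper's controller:

```python
# Perturb-and-observe hill-climbing sketch of Maximum Power Point Tracking.

def panel_power(v):
    # Toy P-V curve with a single maximum of 60 W near v = 17 V.
    return max(0.0, 60.0 - (v - 17.0) ** 2)

v, step = 12.0, 0.5        # initial operating voltage (V) and perturbation (V)
last_p = panel_power(v)
for _ in range(100):
    v += step              # perturb the operating point
    p = panel_power(v)
    if p < last_p:         # power dropped: reverse the perturbation direction
        step = -step
    last_p = p

print(round(v, 1))  # settles within one step of the 17 V maximum
```

The same loop works with measured current into the battery bus in place of `panel_power`, which is why a cheap microprocessor suffices; the steady-state cost is a small oscillation of one step around the maximum.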

  20. Cost-effectiveness of point-of-care testing for dehydration in the pediatric ED.

    Science.gov (United States)

    Whitney, Rachel E; Santucci, Karen; Hsiao, Allen; Chen, Lei

    2016-08-01

    Acute gastroenteritis (AGE) and subsequent dehydration account for a large proportion of pediatric emergency department (PED) visits. Point-of-care (POC) testing has been used in conjunction with clinical assessment to determine the degree of dehydration. Despite the wide acceptance of POC testing, little formal cost-effectiveness analysis of POC testing in the PED exists. We aim to examine the cost-effectiveness of using POC electrolyte testing vs traditional serum chemistry testing in the PED for children with AGE. This was a cost-effectiveness analysis using data from a randomized controlled trial of children with AGE. A decision analysis model was constructed to calculate cost savings from the points of view of the payer and the provider. We used parameters obtained from the trial, including cost of testing, admission rates, cost of admission, and length of stay. Sensitivity analyses were performed to evaluate the stability of our model. Using the data set of 225 subjects, POC testing results in a cost savings of $303.30 per patient compared with traditional serum testing from the point of view of the payer. From the point of view of the provider, POC testing results in consistent mean savings of $36.32 ($8.29-$64.35) per patient. Sensitivity analyses demonstrated the stability of the model and consistent savings. This decision analysis provides evidence that POC testing in children with gastroenteritis-related moderate dehydration results in significant cost savings from the points of view of payers and providers compared to traditional serum chemistry testing. Copyright © 2016 Elsevier Inc. All rights reserved.
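A decision-analysis comparison of this shape reduces to an expected-cost calculation per testing strategy, which can then be stressed in a sensitivity analysis by varying one input at a time. Every number below is a hypothetical placeholder, not a trial parameter:

```python
# Expected cost per patient for each testing arm (hypothetical inputs).

def expected_cost(test_cost, admit_rate, admit_cost, ed_hour_cost, ed_hours):
    # test + probability-weighted admission cost + ED length-of-stay cost
    return test_cost + admit_rate * admit_cost + ed_hour_cost * ed_hours

poc = expected_cost(test_cost=15.0, admit_rate=0.20, admit_cost=2000.0,
                    ed_hour_cost=50.0, ed_hours=4.0)
trad = expected_cost(test_cost=10.0, admit_rate=0.28, admit_cost=2000.0,
                     ed_hour_cost=50.0, ed_hours=5.0)

savings = trad - poc
print(savings)  # positive: POC saves overall despite a costlier test
```

A one-way sensitivity analysis amounts to recomputing `savings` while sweeping a single input (e.g. `admit_rate`) across a plausible range and checking whether the sign of the result is stable, which is what "stability of the model" refers to in the abstract.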

  1. Sensitivity to friction for primary explosives

    Energy Technology Data Exchange (ETDEWEB)

    Matyas, Robert, E-mail: robert.matyas@upce.cz [Institute of Energetic Materials, Faculty of Chemical Technology, University of Pardubice, Pardubice 532 10 (Czech Republic); Selesovsky, Jakub; Musil, Tomas [Institute of Energetic Materials, Faculty of Chemical Technology, University of Pardubice, Pardubice 532 10 (Czech Republic)

    2012-04-30

    Highlights: The friction sensitivity of 14 samples of primary explosives was determined. The same apparatus (small-scale BAM) and the same method (probit analysis) were used. The crystal shapes and sizes were documented with microscopy. Almost all samples are less sensitive than lead azide, which is used commercially. The organic peroxides (TATP, DADP, HMTD) are not as sensitive as often reported. - Abstract: The sensitivity to friction for a selection of primary explosives has been studied using a small BAM friction apparatus. Probit analysis was used for the construction of a sensitivity curve for each primary explosive tested. Two groups of primary explosives were chosen for measurement: (a) the most commonly used industrially produced primary explosives (e.g. lead azide, tetrazene, dinol, lead styphnate) and (b) the most produced improvised primary explosives (e.g. triacetone triperoxide, hexamethylenetriperoxide diamine, mercury fulminate, acetylides of heavy metals). A knowledge of friction sensitivity is very important for determining manipulation safety for primary explosives. All the primary explosives tested were carefully characterised (synthesis procedure, shape and size of crystals). The sensitivity curves obtained represent a unique set of data, which cannot be found anywhere else in the available literature.
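The probit construction of a sensitivity curve can be sketched as fitting a normal CDF to the initiation fraction against the logarithm of the friction load. The trial counts below are illustrative, not measured values from the paper:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

# Hypothetical go/no-go friction trials: fraction of positive initiations
# at each friction load, fitted with a lognormal (probit) sensitivity curve.
loads = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # friction load (N)
trials = np.full(5, 30)                        # trials per load level
fires = np.array([1, 5, 14, 25, 29])           # positive initiations
frac = fires / trials

def probit_curve(x, mu, sigma):
    # P(initiation | load x) under a lognormal threshold model.
    return norm.cdf((np.log(x) - mu) / sigma)

(mu, sigma), _ = curve_fit(probit_curve, loads, frac, p0=(0.5, 1.0))
load_50 = float(np.exp(mu))  # load giving 50% initiation probability
print(round(load_50, 2))
```

The fitted `load_50` plays the role of the median of the sensitivity curve; comparing such curves across samples is what supports statements like "less sensitive than lead azide".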

  2. Beyond sensitivity analysis

    DEFF Research Database (Denmark)

    Lund, Henrik; Sorknæs, Peter; Mathiesen, Brian Vad

    2018-01-01

    of electricity, which have been introduced in recent decades. These uncertainties pose a challenge to the design and assessment of future energy strategies and investments, especially in the economic assessment of renewable energy versus business-as-usual scenarios based on fossil fuels. From a methodological point of view, the typical way of handling this challenge has been to predict future prices as accurately as possible and then conduct a sensitivity analysis. This paper includes a historical analysis of such predictions, leading to the conclusion that they are almost always wrong. Not only are they wrong in their prediction of price levels, but also in the sense that they always seem to predict a smooth growth or decrease. This paper introduces a new method and reports the results of applying it on the case of energy scenarios for Denmark. The method implies the expectation of fluctuating fuel...

  3. Inhibition effect of calcium hydroxide point and chlorhexidine point on root canal bacteria of necrosis teeth

    Directory of Open Access Journals (Sweden)

    Andry Leonard Je

    2006-03-01

    Calcium hydroxide points and chlorhexidine points are new medicament forms for eliminating bacteria in the root canal. The points slowly release calcium hydroxide or chlorhexidine into the root canal in a controlled manner. The purpose of the study was to determine the effectiveness of the calcium hydroxide point (Calcium hydroxide plus point) and the chlorhexidine point in eliminating root canal bacteria of necrotic teeth. In this study 14 subjects were divided into 2 groups. The first group was treated with the calcium hydroxide point and the second with the chlorhexidine point. The bacteriological samples were measured with spectrophotometry. The paired t-test analysis (before and after) showed a significant difference in both the first and second groups. The independent t-test comparing the effectiveness of the two groups showed no significant difference. Although there was no significant difference in the statistical test, the second group eliminated more bacteria than the first. The present findings indicate that the chlorhexidine point performed better than the calcium hydroxide point over a seven-day period. The conclusion is that both the chlorhexidine point and the calcium hydroxide point, used as root canal medicaments, effectively eliminate root canal bacteria of necrotic teeth.
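The two statistical comparisons described, paired before/after within each group and independent between groups, can be sketched with simulated readings. The data and effect sizes below are hypothetical, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical bacterial-load readings (arbitrary units) for 7 teeth per
# group, before and after seven days of intracanal medication.
before_chx = rng.normal(0.80, 0.10, 7)
after_chx = before_chx - rng.normal(0.42, 0.08, 7)   # chlorhexidine point
before_cah = rng.normal(0.80, 0.10, 7)
after_cah = before_cah - rng.normal(0.38, 0.08, 7)   # calcium hydroxide point

# Within each group (before vs. after): paired t-test.
t_paired, p_paired = stats.ttest_rel(before_chx, after_chx)

# Between groups (reduction vs. reduction): independent t-test.
red_chx = before_chx - after_chx
red_cah = before_cah - after_cah
t_ind, p_ind = stats.ttest_ind(red_chx, red_cah)
print(p_paired < 0.05, p_ind)
```

With a clear within-group reduction, the paired test is significant, while a small between-group gap relative to its variability can leave the independent test non-significant, the pattern the abstract reports.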

  4. Probabilistic derivation of the interspecies assessment factor for skin sensitization.

    NARCIS (Netherlands)

    Bil, W; Schuur, A G; Ezendam, J; Bokkers, B G H

    An interspecies sensitization assessment factor (SAF) is used in the quantitative risk assessment (QRA) for skin sensitization when a murine-based NESIL (No Expected Sensitization Induction Level) is taken as point of departure. Several studies showed that, on average, the murine sensitization

  5. Hawaii ESI: M_MAMPT (Marine Mammal Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for endangered Hawaiian monk seal pupping and haul-out sites. Vector points in this data set represent...

  6. American Samoa ESI: T_MAMPT (Terrestrial Mammal Points)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains sensitive biological resource data for bats in American Samoa. Vector points in this data set represent bat roosts and caves. Species-specific...

  7. Detecting Change-Point via Saddlepoint Approximations

    Institute of Scientific and Technical Information of China (English)

    Zhaoyuan LI; Maozai TIAN

    2017-01-01

It is well known that the change-point problem is an important part of statistical model analysis. Most existing methods are not robust to the criteria used to evaluate change-point problems. In this article, we consider the "mean-shift" problem in change-point studies. A test at a single quantile is proposed based on the saddlepoint approximation method. In order to use the information at different quantiles of the sequence, we further construct a "composite quantile test" to calculate the probability that each location of the sequence is a change-point. The location of the change-point can thus be pinpointed rather than estimated within an interval. The proposed tests make no assumptions about the functional form of the sequence distribution and work sensitively on both large and small samples, on change-points in the tails, and in multiple change-point situations. The good performance of the tests is confirmed by simulations and real data analysis. The saddlepoint-approximation-based distribution of the test statistic developed in the paper may be of independent interest to readers in this research area.
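The record above scores every location of a sequence as a potential change-point. A much simpler mean-shift detector in the same spirit is the classical CUSUM scan; the sketch below is a generic illustration, not the paper's saddlepoint or composite quantile test:

```python
# A minimal mean-shift change-point sketch. This is NOT the paper's
# saddlepoint/composite-quantile test; it is a simple CUSUM-style scan
# that scores every location and picks the most likely change-point.

def cusum_changepoint(xs):
    """Return the first index of the post-shift segment, i.e. the index
    following the location that maximizes the absolute CUSUM statistic."""
    n = len(xs)
    mean = sum(xs) / n
    best_k, best_stat, s = 0, 0.0, 0.0
    for k in range(n - 1):
        s += xs[k] - mean          # running sum of deviations from the mean
        if abs(s) > best_stat:
            best_stat, best_k = abs(s), k
    return best_k + 1

series = [0.1, -0.2, 0.0, 0.3, -0.1, 5.2, 4.8, 5.1, 4.9, 5.0]
print(cusum_changepoint(series))   # → 5
```

The CUSUM statistic peaks where the cumulative deviation from the global mean is largest, which for a single mean shift is the boundary between the two regimes.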

  8. Sensitivity and specificity of waist circumference as a single screening tool for identification of overweight and obesity among Malaysian adults.

    Science.gov (United States)

    Kee, C C; Jamaiyah, H; Geeta, A; Ali, Z Ahmad; Safiza, M N Noor; Suzana, S; Khor, G L; Rahmah, R; Jamalludin, A R; Sumarni, M G; Lim, K H; Faudzi, Y Ahmad; Amal, N M

    2011-12-01

Generalised obesity and central obesity are risk factors for Type II diabetes mellitus and cardiovascular diseases. Waist circumference (WC) has been suggested as a single screening tool for identification of overweight or obese subjects in lieu of the body mass index (BMI) for weight management in public health programs. Currently, the recommended waist circumference cut-off points of ≥ 94 cm for men and ≥ 80 cm for women (waist action level 1) and ≥ 102 cm for men and ≥ 88 cm for women (waist action level 2) used for identification of overweight and obesity are based on studies in Caucasian populations. The objective of this study was to assess the sensitivity and specificity of the recommended waist action levels, and to determine optimal WC cut-off points for identification of overweight or obesity with central fat distribution based on BMI for Malaysian adults. Data from 32,773 subjects (14,982 men and 17,791 women) aged 18 and above who participated in the Third National Health and Morbidity Survey in 2006 were analysed. Sensitivity and specificity of WC at waist action level 1 were 48.3% and 97.5% for men, and 84.2% and 80.6% for women, when compared to the cut-off points based on BMI ≥ 25 kg/m². At waist action level 2, sensitivity and specificity were 52.4% and 98.0% for men, and 79.2% and 85.4% for women, when compared with the cut-off points based on BMI ≥ 30 kg/m². Receiver operating characteristic analyses showed that the appropriate screening cut-off point for WC to identify subjects with overweight (BMI ≥ 25 kg/m²) was 86.0 cm (sensitivity = 83.6%, specificity = 82.5%) for men, and 79.1 cm (sensitivity = 85.0%, specificity = 79.5%) for women. The waist circumference cut-off point to identify obese subjects (BMI ≥ 30 kg/m²) was 93.2 cm (sensitivity = 86.5%, specificity = 85.7%) for men and 85.2 cm (sensitivity = 77.9%, specificity = 78.0%) for women. Our findings demonstrated that the current recommended waist circumference cut-off points have low
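The ROC-based selection of an optimal cut-off point described above can be sketched as follows. The data here are synthetic toy values, not the survey's; maximizing Youden's J (sensitivity + specificity − 1) is one common optimality criterion, assumed here since the record does not state the study's exact rule:

```python
# Sketch of choosing an optimal screening cut-point by scanning candidate
# thresholds ROC-style, assuming synthetic (not the survey's) data and the
# Youden's J = sensitivity + specificity - 1 criterion.

def best_cutoff(values, is_positive):
    """Scan every observed value as a candidate cut-point (test positive
    when value >= cut-point); return (cutoff, sensitivity, specificity)."""
    best = None
    for c in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, is_positive) if v >= c and y)
        fn = sum(1 for v, y in zip(values, is_positive) if v < c and y)
        tn = sum(1 for v, y in zip(values, is_positive) if v < c and not y)
        fp = sum(1 for v, y in zip(values, is_positive) if v >= c and not y)
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, c, sens, spec)
    return best[1], best[2], best[3]

# toy data: waist circumference (cm) and whether BMI >= 25 (overweight)
wc = [70, 74, 78, 81, 84, 86, 88, 91, 95, 102]
ow = [False, False, False, False, True, True, True, True, True, True]
cut, sens, spec = best_cutoff(wc, ow)
print(cut, sens, spec)   # → 84 1.0 1.0
```

With real survey data the classes overlap, so the chosen cut-point trades sensitivity against specificity rather than separating the groups perfectly as in this toy example.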

  9. Procedures for Sensitive Immunoassay

    Energy Technology Data Exchange (ETDEWEB)

    Givol, D. [Department of Chemical Immunology, Weizmann Institute of Science, Rehovot (Israel)

    1970-02-15

Sensitive immunoassay methods should be applied to small molecules of biological importance which are non-immunogenic by themselves, such as small peptide hormones (e.g. bradykinin), plant hormones (e.g. indoleacetic acid), nucleotides and other small molecules. Methods of binding these small molecules, as haptens, to immunogenic carriers by various cross-linking agents are described (dicyclohexylcarbodiimide, tolylene-diisocyanate and glutaraldehyde), and the considerations involved in relation to the methods of binding and the specificity of the antibodies formed are discussed. Some uses of antibody bound to bromoacetyl cellulose as an immunoadsorbent convenient for the assay of immunoglobulins are described. Finally, the sensitive immunoassay method of chemically modified phage is described. This includes methods of binding small molecules (such as the dinitrophenyl group, penicillin, indoleacetic acid) or proteins (such as insulin, immunoglobulins) to phages. Methods of direct chemical conjugation, or of indirect binding via anti-phage Fab, are described. The phage inactivation method by direct plating and its modifications (such as the decision technique and complex inactivation) are compared with the simpler end-point titration method. The inhibition of phage inactivation has some advantages, as it does not require radioactive material or expensive radioactive counters, and avoids the need for separation between bound and unbound antigen. Hence, if developed, it could be used as an alternative to radioimmunoassay. (author)

  10. Vernal Point and Anthropocene

    Science.gov (United States)

    Chavez-Campos, Teodosio; Chavez S, Nadia; Chavez-Sumarriva, Israel

    2014-05-01

The time scale is based on the internationally recognized formal chronostratigraphical/geochronological subdivisions of time: the Phanerozoic Eonothem/Eon; the Cenozoic Erathem/Era; the Quaternary System/Period; the Pleistocene and Holocene Series/Epochs. The Quaternary is divided into: (1) the Pleistocene, characterized by cycles of glaciations (intervals between 40,000 and 100,000 years); (2) the Holocene, an interglacial period that began about 12,000 years ago. The Milankovitch cycles (eccentricity, axial tilt and the precession of the equinoxes) are believed to be responsible for the glacial and interglacial periods. Magnetostratigraphic units have been widely used for global correlations valid for the Quaternary. The gravitational influence of the sun and moon on the equatorial bulges of the mantle of the rotating earth causes the precession of the earth. The retrograde motion of the vernal point through the zodiacal band takes 26,000 years. The vernal point passes through each constellation in an average of 2,000 years, and this period of time is correlated to Bond events, North Atlantic climate fluctuations occurring every ≈1,470 ± 500 years throughout the Holocene. The vernal point retrogrades approximately one precessional degree in 72 years (Gleissberg cycle) and approximately entered the Aquarius constellation on March 20, 1940. On earth this entry was verified through: a) the stability of the magnetic equator in the south-central zone of Peru and the north zone of Bolivia; b) the greater intensity of the equatorial electrojet (EEJ) in Peru and Bolivia since 1940. With the completion of the Holocene and the beginning of the Anthropocene (widely popularized by Paul Crutzen), the date of March 20, 1940 was proposed as the beginning of the Anthropocene. The proposed date is correlated to the work presented at IUGG (Italy, 2007) entitled "Cusco base meridian for the study of geophysical data"; Cusco was

  11. Point/Counterpoint

    DEFF Research Database (Denmark)

    Ungar, David; Ernst, Erik

    2007-01-01

    Point Argument: "Dynamic Languages (in Reactive Environments) Unleash Creativity," by David Ungar. For the sake of creativity, the profession needs to concentrate more on inventing new and better dynamic languages and environments and less on improving static languages. Counterpoint Argument......: "Explicitly Declared Static Types: The Missing Links," by Erik Ernst. How do we understand software? Using it is a powerful approach, but it provides examples of properties, not general truths. Some static knowledge is needed. This department is part of a special issue on dynamically typed languages....

  12. Mise au point

    African Journals Online (AJOL)

…tomy is replaced and fixed with steel wires; Krönlein left this fragment pedicled to the fascia temporalis in order to avoid depression of the temporal fossa due to detachment of the temporalis muscle [20]; in our series, after reconstruction of the frame, the temporalis muscle is sutured at its point of insertion. For tumors ...

  13. Torsades de Pointes

    Directory of Open Access Journals (Sweden)

    Richard J Chen, MD

    2018-04-01

Full Text Available History of present illness: 70-year-old male with a history of ventricular arrhythmia, AICD (automated implantable cardioverter defibrillator), coronary artery disease and cardiac stents presented to the Emergency Department after three AICD discharges with dyspnea but no chest pain. During triage, he was found to have an irregular radial pulse and was placed on a cardiac monitor. Significant findings: The patient was found to be in a polymorphic ventricular tachycardia; he was alert, awake and asymptomatic. A rhythm strip showed a wide complex tachycardia with the QRS complex varying in amplitude around the isoelectric line, consistent with Torsades de Pointes. Discussion: Torsades de Pointes (TdP) is a specific type of polymorphic ventricular tachycardia. The arrhythmia’s characteristic morphology consists of the QRS complex “twisting” around the isoelectric line with gradual variation of the amplitude, reflecting its literal translation, “twisting of the points.”1 This arrhythmia occurs in the context of prolonged QT. The most common form of acquired QT prolongation is medication induced. Common causes include antiarrhythmics, antipsychotics, antiemetics, and antibiotics.2 Patient-specific risk factors include female sex, bradycardia, hypokalemia, hypomagnesemia, hypocalcemia, hypothermia and heart disease.3 In the setting of prolonged QT, the repolarization phase is extended. TdP is initiated when a PVC (premature ventricular contraction) occurs during this repolarization, known as an ‘R on T’ phenomenon. TdP is often asymptomatic and self-limited. The danger in TdP is its potential to deteriorate into ventricular fibrillation. A mainstay of management of TdP is prevention of risk factors when possible.4 Unstable patients should be treated with synchronized cardioversion. Magnesium sulfate should be administered in all cases of TdP.1 If a patient is not responsive to magnesium, consider isoproterenol, amiodarone, and overdrive

  14. Pointing control for LDR

    Science.gov (United States)

    Yam, Y.; Briggs, C.

    1988-01-01

    One important aspect of the LDR control problem is the possible excitations of structural modes due to random disturbances, mirror chopping, and slewing maneuvers. An analysis was performed to yield a first order estimate of the effects of such dynamic excitations. The analysis involved a study of slewing jitters, chopping jitters, disturbance responses, and pointing errors, making use of a simplified planar LDR model which describes the LDR dynamics on a plane perpendicular to the primary reflector. Briefly, the results indicate that the command slewing profile plays an important role in minimizing the resultant jitter, even to a level acceptable without any control action. An optimal profile should therefore be studied.

  15. Point of Care Ultrasound

    DEFF Research Database (Denmark)

    Dietrich, Christoph F; Goudie, Adrian; Chiorean, Liliana

    2017-01-01

    Over the last decade, the use of portable ultrasound scanners has enhanced the concept of point of care ultrasound (PoC-US), namely, "ultrasound performed at the bedside and interpreted directly by the treating clinician." PoC-US is not a replacement for comprehensive ultrasound, but rather allows...... and critical care medicine, cardiology, anesthesiology, rheumatology, obstetrics, neonatology, gynecology, gastroenterology and many other applications. In the future, PoC-US will be more diverse than ever and be included in medical student training....

  16. The point on.

    International Nuclear Information System (INIS)

    1996-01-01

In this last article, we review the state of world petroleum activity: reserves and recent discoveries, the deep offshore, and technological developments in the petroleum upstream. The petroleum situation in China is treated. The trends of world refining are described, and recent technological developments in the petroleum downstream are detailed. The prices of crude oil and refining margins are the subject of a chapter. Investments in the hydrocarbons area are given, along with world trade, LNG projects, and gas availability in Western Europe. The trends of the European automobile industry and fuels distribution are also discussed. (N.C.)

  17. Sensitivity Analysis of a Physiochemical Interaction Model ...

    African Journals Online (AJOL)

In this analysis, we study the sensitivity of the model to variations in the initial condition and the experimental time. These results, which we have not seen elsewhere, are analysed and discussed quantitatively. Keywords: Passivation Rate, Sensitivity Analysis, ODE23, ODE45 J. Appl. Sci. Environ. Manage. June, 2012, Vol.
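Sensitivity of an ODE solution to its initial condition, as studied in the record above, can be estimated with a finite-difference sketch. The decay model and all parameter values below are illustrative stand-ins, not the paper's passivation model:

```python
# Generic sketch: finite-difference sensitivity of an ODE solution to its
# initial condition. The decay model dy/dt = -k*y stands in for the paper's
# passivation model, which is not reproduced here.

def rk4(f, y0, t0, t1, n=1000):
    """Integrate dy/dt = f(t, y) from t0 to t1 with n classical RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

k = 0.5
f = lambda t, y: -k * y

y0, eps, T = 2.0, 1e-6, 3.0
base = rk4(f, y0, 0.0, T)
pert = rk4(f, y0 + eps, 0.0, T)
sens = (pert - base) / eps   # dy(T)/dy0; for this linear model it is exp(-k*T)
print(round(sens, 4))        # → 0.2231  (exp(-1.5) ≈ 0.2231)
```

For a linear model the sensitivity is independent of the perturbation size; for nonlinear models the step `eps` must be chosen small enough for the finite difference to approximate the derivative.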

  18. Layered Fixed Point Logic

    DEFF Research Database (Denmark)

    Filipiuk, Piotr; Nielson, Flemming; Nielson, Hanne Riis

    2012-01-01

    We present a logic for the specification of static analysis problems that goes beyond the logics traditionally used. Its most prominent feature is the direct support for both inductive computations of behaviors as well as co-inductive specifications of properties. Two main theoretical contributions...... are a Moore Family result and a parametrized worst case time complexity result. We show that the logic and the associated solver can be used for rapid prototyping of analyses and illustrate a wide variety of applications within Static Analysis, Constraint Satisfaction Problems and Model Checking. In all cases...

  19. Relative Critical Points

    Directory of Open Access Journals (Sweden)

    Debra Lewis

    2013-05-01

Full Text Available Relative equilibria of Lagrangian and Hamiltonian systems with symmetry are critical points of appropriate scalar functions parametrized by the Lie algebra (or its dual) of the symmetry group. Setting aside the structures – symplectic, Poisson, or variational – generating dynamical systems from such functions highlights the common features of their construction and analysis, and supports the construction of analogous functions in non-Hamiltonian settings. If the symmetry group is nonabelian, the functions are invariant only with respect to the isotropy subgroup of the given parameter value. Replacing the parametrized family of functions with a single function on the product manifold and extending the action using the (co)adjoint action on the algebra or its dual yields a fully invariant function. An invariant map can be used to reverse the usual perspective: rather than selecting a parametrized family of functions and finding their critical points, conditions under which functions will be critical on specific orbits, typically distinguished by isotropy class, can be derived. This strategy is illustrated using several well-known mechanical systems – the Lagrange top, the double spherical pendulum, the free rigid body, and the Riemann ellipsoids – and generalizations of these systems.

  20. Reassessing Function Points

    Directory of Open Access Journals (Sweden)

    G.R. Finnie

    1997-05-01

Full Text Available Accurate estimation of the size and development effort of software projects requires estimation models that can be used early enough in the development life cycle to be of practical value. Function Point Analysis (FPA) has become possibly the most widely used estimation technique in practice. However, the technique was developed in the data processing environment of the 1970s and, despite undergoing considerable reassessment and formalisation, still attracts criticism for the weighting scheme it employs and for the way in which the function point score is adjusted for specific system characteristics. This paper reviews the validity of the weighting scheme and the value of adjusting for system characteristics by studying their effect in a sample of 299 software developments. In general, the value adjustment scheme does not appear to cater for differences in productivity. The weighting scheme used to rate system components as simple, average or complex also appears suspect and should be redesigned to provide a more realistic estimate of system functionality.
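The value adjustment mechanism criticized above follows the standard IFPUG form: the unadjusted count is scaled by a factor derived from 14 general system characteristics, each rated 0-5. A minimal sketch:

```python
# Sketch of the FPA value adjustment discussed above: the unadjusted
# function point count is scaled by a value adjustment factor derived from
# 14 general system characteristics, each rated 0-5 (standard IFPUG form).

def adjusted_function_points(ufp, gsc_ratings):
    """ufp: unadjusted function points; gsc_ratings: 14 ints in 0..5."""
    assert len(gsc_ratings) == 14 and all(0 <= r <= 5 for r in gsc_ratings)
    tdi = sum(gsc_ratings)          # total degree of influence, 0..70
    vaf = 0.65 + 0.01 * tdi         # value adjustment factor, 0.65..1.35
    return ufp * vaf

ratings = [3] * 14                  # all characteristics rated "average"
print(round(adjusted_function_points(100, ratings), 2))   # → 107.0
```

The ±35% band (0.65 to 1.35) implied by this formula is precisely the adjustment range whose empirical value the paper questions.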

  1. Bright point study

    International Nuclear Information System (INIS)

    Tang, F.; Harvey, K.; Bruner, M.; Kent, B.; Antonucci, E.

    1982-01-01

Transition region and coronal observations of bright points by instruments aboard the Solar Maximum Mission, together with high-resolution photospheric magnetograph observations on September 11, 1980, are presented. A total of 31 bipolar ephemeral regions were observed from birth in the photosphere during 9.3 hours of combined magnetograph observations from three observatories. Two of the three ephemeral regions present in the field of view of the Ultraviolet Spectrometer-Polarimeter were observed in the C IV 1548 line. The unobserved ephemeral region was the shortest-lived (2.5 hr) and lowest in magnetic flux density (13 G) of the three regions. The Flat Crystal Spectrometer observed only low-level signals in the O VIII 18.969 Å line, which were not statistically significant enough to be positively identified with any of the 16 ephemeral regions detected in the photosphere. In addition, the data indicate that at any given time there was no one-to-one correspondence between observable bright points and photospheric ephemeral regions, and that more ephemeral regions were observed in the photosphere than counterparts in the transition region and the corona

  2. Referential Zero Point

    Directory of Open Access Journals (Sweden)

    Matjaž Potrč

    2016-04-01

Full Text Available Perhaps the most important controversy in which ordinary language philosophy was involved is that of definite descriptions, presenting the referential act as a community-involving, communication-intention endeavor, thereby opposing the direct acquaintance-based, logical-proper-names-inspired reference aimed at securing the truth conditions of referential expressions. The problem of reference is that of obtaining access to matters in the world. This access may come through the senses, or through descriptions. A review of how the problem of reference is handled shows, though, that one main practice is to rely on relations of acquaintance supporting logical proper names, demonstratives, indexicals and causal or historical chains. This testifies that the problem of reference involves the zero point, and with it the phenomenology of intentionality. Communication-intention is but one dimension of the rich phenomenology that constitutes an agent’s experiential space, his experiential world. The zero point is another constitutive aspect of the phenomenology involved in the referential relation. Realizing that the problem of reference is phenomenology-based opens a new perspective upon the contribution of analytical philosophy in this area, reconciling it with the continental approach, and demonstrating variations of the impossibility related to the real. Chromatic illumination from the cognitive background empowers the referential act, in the best tradition of ordinary language philosophy.

  3. Komparativ analyse - Scandinavian Airlines & Norwegian Air Shuttle

    OpenAIRE

    Kallesen, Martin Nystrup; Singh, Ravi Pal; Boesen, Nana Wiaberg

    2017-01-01

The project is based on a consideration of how a company the size of Scandinavian Airlines or Norwegian Air Shuttle uses its finances and how it views its external environment. This has led us to research the relationship between the companies and their finances as well as their external environment, and how they differ in both. To do this we have utilised a number of different methods to analyse the companies, including PESTEL, SWOT, TOWS; DCF, risk analysis, sensitivity analysis, Porter’s ...

  4. Exergetic and thermoeconomic analyses of power plants

    International Nuclear Information System (INIS)

    Kwak, H.-Y.; Kim, D.-J.; Jeon, J.-S.

    2003-01-01

Exergetic and thermoeconomic analyses were performed for a 500-MW combined cycle plant. In these analyses, mass and energy conservation laws were applied to each component of the system. Quantitative balances of the exergy and exergetic cost for each component, and for the whole system, were carefully considered. The exergoeconomic model, which represents the productive structure of the system considered, was used to visualize the cost formation process and the productive interaction between components. The computer program developed in this study can determine the production costs of power plants, such as gas- and steam-turbine plants and gas-turbine cogeneration plants. The program can also be used to study plant characteristics, namely, thermodynamic performance and sensitivity to changes in process and/or component design variables
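The per-component balances described above can be sketched with the standard exergoeconomic cost balance; the values below are illustrative, not the 500-MW plant's data:

```python
# Sketch of the per-component balances used in exergoeconomic analysis
# (generic illustration, not the 500-MW plant's data). For a component with
# fuel exergy E_F, product exergy E_P, and capital cost rate Z ($/h), the
# exergy destruction and unit product cost follow the standard balances.

def component_balance(E_F, E_P, c_F, Z):
    """E_F, E_P in kW; c_F in $/kWh (unit cost of fuel exergy); Z in $/h.
    Returns (exergy destruction in kW, exergetic efficiency, c_P in $/kWh)."""
    E_D = E_F - E_P                 # exergy destruction in the component
    eps = E_P / E_F                 # exergetic efficiency
    c_P = (c_F * E_F + Z) / E_P     # cost balance: c_P*E_P = c_F*E_F + Z
    return E_D, eps, c_P

E_D, eps, c_P = component_balance(E_F=1000.0, E_P=800.0, c_F=0.02, Z=4.0)
print(E_D, eps, round(c_P, 3))      # → 200.0 0.8 0.03
```

Chaining these balances component by component reproduces the cost formation process the abstract refers to: each component's product cost becomes the fuel cost of the next.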

  5. Point Pollution Sources Dimensioning

    Directory of Open Access Journals (Sweden)

    Georgeta CUCULEANU

    2011-06-01

Full Text Available In this paper a method for determining the main physical characteristics of point pollution sources is presented. The main physical characteristics of these sources are the top inside diameter and the physical height. The top inside diameter is calculated from the gas flow-rate. To determine the physical height of the source, one uses the relation given by the proportionality factor, defined as the ratio between the plume rise and the physical height of the source. The plume rise depends on the gas exit velocity and gas temperature. That relation is necessary for diminishing environmental pollution when the production capacity of the plant varies in comparison with the nominal one.
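The diameter calculation described above follows from continuity between the volumetric gas flow-rate and the exit velocity; the sketch below is a generic illustration with assumed values, and the proportionality factor `k` is hypothetical, not taken from the paper:

```python
import math

# Sketch of the calculations described above: from volumetric gas flow-rate
# Q and exit velocity w, continuity Q = w * (pi * d**2 / 4) gives the top
# inside diameter; the plume-rise/height proportionality factor then sizes
# the physical height. All numeric values here are illustrative.

def stack_diameter(Q, w):
    """Q: gas flow-rate (m^3/s); w: gas exit velocity (m/s). Returns d (m)."""
    return math.sqrt(4.0 * Q / (math.pi * w))

def physical_height(plume_rise, k):
    """k: proportionality factor = plume rise / physical height."""
    return plume_rise / k

d = stack_diameter(Q=20.0, w=15.0)
h = physical_height(plume_rise=60.0, k=1.5)   # k is an assumed value
print(round(d, 2), h)                         # → 1.3 40.0
```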

  6. Critical Points of Contact

    DEFF Research Database (Denmark)

    Jensen, Ole B.; Wind, Simon; Lanng, Ditte Bendix

    2012-01-01

    In this brief article, we shall illustrate the application of the analytical and interventionist concept of ‘Critical Points of Contact’ (CPC) through a number of urban design studios. The notion of CPC has been developed over a span of the last three to four years and is reported in more detail...... elsewhere (Jensen & Morelli 2011). In this article, we will only discuss the conceptual and theoretical framing superficially, since our real interest is to show and discuss the concept's application value to spatial design in a number of urban design studios. The 'data' or the projects presented are seven...... in urban design at Aalborg University, where urban design consists of both an analytical and an interventionist field of operation. Furthermore, the content of the CPC concept links to research in mobilities, the network city, and urban design. These are among the core pillars of both the masters programme...

  7. Point defects in platinum

    International Nuclear Information System (INIS)

    Piercy, G.R.

    1960-01-01

An investigation was made of the mobility and types of point defects introduced in platinum by deformation in liquid nitrogen, quenching into water from 1600°C, or reactor irradiation at 50°C. In all cases the activation energy for motion of the defect was determined from measurements of electrical resistivity. Measurements of density, hardness, and x-ray line broadening were also made where applicable. These experiments indicated that the principal defects remaining in platinum after irradiation were single vacant lattice sites, and after quenching, pairs of vacant lattice sites. Those present after deformation in liquid nitrogen were single vacant lattice sites and another type of defect, perhaps interstitial atoms. (author)

  8. Virtual turning points

    CERN Document Server

    Honda, Naofumi; Takei, Yoshitsugu

    2015-01-01

    The discovery of a virtual turning point truly is a breakthrough in WKB analysis of higher order differential equations. This monograph expounds the core part of its theory together with its application to the analysis of higher order Painlevé equations of the Noumi–Yamada type and to the analysis of non-adiabatic transition probability problems in three levels. As M.V. Fedoryuk once lamented, global asymptotic analysis of higher order differential equations had been thought to be impossible to construct. In 1982, however, H.L. Berk, W.M. Nevins, and K.V. Roberts published a remarkable paper in the Journal of Mathematical Physics indicating that the traditional Stokes geometry cannot globally describe the Stokes phenomena of solutions of higher order equations; a new Stokes curve is necessary.

  9. The Point Mass Concept

    Directory of Open Access Journals (Sweden)

    Lehnert B.

    2011-04-01

Full Text Available A point-mass concept has been elaborated from the equations of the gravitational field. One application of these deductions results in a black hole configuration of the Schwarzschild type, having no electric charge and no angular momentum. The critical mass of a gravitational collapse with respect to the nuclear binding energy is found to be in the range of 0.4 to 90 solar masses. A second application is connected with the speculation about an extended symmetric law of gravitation, based on the options of positive and negative mass for a particle at given positive energy. This would make masses of equal polarity attract each other, while masses of opposite polarity repel each other. Matter and antimatter are further proposed to be associated with the states of positive and negative mass. Under fully symmetric conditions this could provide a mechanism for the separation of antimatter from matter at an early stage of the universe.
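The Schwarzschild-type configuration mentioned above (no charge, no angular momentum) is characterized by the textbook Schwarzschild radius, a standard relation not derived in the record itself:

```latex
r_s = \frac{2GM}{c^2}
```

where $G$ is the gravitational constant, $M$ the mass, and $c$ the speed of light; a collapse in the stated 0.4 to 90 solar-mass range corresponds to $r_s$ between roughly 1 km and 270 km.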

  11. A calibration protocol for population-specific accelerometer cut-points in children.

    Science.gov (United States)

    Mackintosh, Kelly A; Fairclough, Stuart J; Stratton, Gareth; Ridgers, Nicola D

    2012-01-01

To test a field-based protocol using intermittent activities representative of children's physical activity behaviours, to generate behaviourally valid, population-specific accelerometer cut-points for sedentary behaviour, moderate, and vigorous physical activity. Twenty-eight children (46% boys) aged 10-11 years wore a hip-mounted uniaxial GT1M ActiGraph and engaged in 6 activities representative of children's play. A validated direct observation protocol was used as the criterion measure of physical activity. Receiver Operating Characteristics (ROC) curve analyses were conducted with four semi-structured activities to determine the accelerometer cut-points. To examine classification differences, cut-points were cross-validated with free-play and DVD viewing activities. Cut-points of ≤ 372, >2160 and >4806 counts·min⁻¹ representing sedentary, moderate and vigorous intensity thresholds, respectively, provided the optimal balance between the related needs for sensitivity (accurately detecting activity) and specificity (limiting misclassification of the activity). Cross-validation data demonstrated that these values yielded the best overall kappa scores (0.97; 0.71; 0.62), and a high classification agreement (98.6%; 89.0%; 87.2%), respectively. Specificity values of 96-97% showed that the developed cut-points accurately detected physical activity, and sensitivity values (89-99%) indicated that minutes of activity were seldom incorrectly classified as inactivity. The development of an inexpensive and replicable field-based protocol to generate behaviourally valid and population-specific accelerometer cut-points may improve the classification of physical activity levels in children, which could enhance subsequent intervention and observational studies.
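The kappa scores and classification agreement reported above can be computed from a 2×2 confusion table comparing accelerometer-classified minutes with direct observation; the counts below are toy values, not the study's data:

```python
# Sketch of the agreement statistics reported above: Cohen's kappa and raw
# classification agreement from a 2x2 confusion table (toy counts, not the
# study's data).

def kappa_and_agreement(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    p_obs = (tp + tn) / n                       # observed agreement
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)   # chance agreement, "active"
    p_no = ((fn + tn) / n) * ((fp + tn) / n)    # chance agreement, "inactive"
    p_exp = p_yes + p_no
    kappa = (p_obs - p_exp) / (1 - p_exp)       # agreement beyond chance
    return kappa, p_obs

kappa, agreement = kappa_and_agreement(tp=45, fp=5, fn=5, tn=45)
print(round(kappa, 2), agreement)               # → 0.8 0.9
```

Kappa discounts the agreement expected by chance alone, which is why a 90% raw agreement here corresponds to a kappa of only 0.8.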

  12. A calibration protocol for population-specific accelerometer cut-points in children.

    Directory of Open Access Journals (Sweden)

    Kelly A Mackintosh

Full Text Available To test a field-based protocol using intermittent activities representative of children's physical activity behaviours, to generate behaviourally valid, population-specific accelerometer cut-points for sedentary behaviour, moderate, and vigorous physical activity. Twenty-eight children (46% boys) aged 10-11 years wore a hip-mounted uniaxial GT1M ActiGraph and engaged in 6 activities representative of children's play. A validated direct observation protocol was used as the criterion measure of physical activity. Receiver Operating Characteristics (ROC) curve analyses were conducted with four semi-structured activities to determine the accelerometer cut-points. To examine classification differences, cut-points were cross-validated with free-play and DVD viewing activities. Cut-points of ≤ 372, >2160 and >4806 counts·min⁻¹ representing sedentary, moderate and vigorous intensity thresholds, respectively, provided the optimal balance between the related needs for sensitivity (accurately detecting activity) and specificity (limiting misclassification of the activity). Cross-validation data demonstrated that these values yielded the best overall kappa scores (0.97; 0.71; 0.62), and a high classification agreement (98.6%; 89.0%; 87.2%), respectively. Specificity values of 96-97% showed that the developed cut-points accurately detected physical activity, and sensitivity values (89-99%) indicated that minutes of activity were seldom incorrectly classified as inactivity. The development of an inexpensive and replicable field-based protocol to generate behaviourally valid and population-specific accelerometer cut-points may improve the classification of physical activity levels in children, which could enhance subsequent intervention and observational studies.

  13. Sensitivity analysis of EQ3

    International Nuclear Information System (INIS)

    Horwedel, J.E.; Wright, R.Q.; Maerker, R.E.

    1990-01-01

    A sensitivity analysis of EQ3, a computer code which has been proposed to be used as one link in the overall performance assessment of a national high-level waste repository, has been performed. EQ3 is a geochemical modeling code used to calculate the speciation of a water and its saturation state with respect to mineral phases. The model chosen for the sensitivity analysis is one which is used as a test problem in the documentation of the EQ3 code. Sensitivities are calculated using both the CHAIN and ADGEN options of the GRESS code compiled under G-float FORTRAN on the VAX/VMS and verified by perturbation runs. The analyses were performed with a preliminary Version 1.0 of GRESS which contains several new algorithms that significantly improve the application of ADGEN. Use of ADGEN automates the implementation of the well-known adjoint technique for the efficient calculation of sensitivities of a given response to all the input data. Application of ADGEN to EQ3 results in the calculation of sensitivities of a particular response to 31,000 input parameters in a run time of only 27 times that of the original model. Moreover, calculation of the sensitivities for each additional response increases this factor by only 2.5 percent. This compares very favorably with a running-time factor of 31,000 if direct perturbation runs were used instead. 6 refs., 8 tabs

  14. CADDIS Volume 4. Data Analysis: Advanced Analyses - Controlling for Natural Variability

    Science.gov (United States)

    Methods for controlling natural variability, predicting environmental conditions from biological observations method, biological trait data, species sensitivity distributions, propensity scores, Advanced Analyses of Data Analysis references.

  15. CADDIS Volume 4. Data Analysis: Advanced Analyses - Controlling for Natural Variability: SSD Plot Diagrams

    Science.gov (United States)

    Methods for controlling natural variability, predicting environmental conditions from biological observations method, biological trait data, species sensitivity distributions, propensity scores, Advanced Analyses of Data Analysis references.

  16. Cloud-point extraction and spectrophotometric determination of ...

    African Journals Online (AJOL)

    An eco-friendly, simple and very sensitive method was developed for preconcentration and determination of clonazepam (CLO) in pharmaceutical dosage forms using the cloud point extraction (CPE) technique. The method is based on cloud point extraction of the product from oxidative coupling between reduced CLO and ...

  17. Comparison of pressure perception of static and dynamic two point ...

    African Journals Online (AJOL)

    ... the right and left index finger (p<0.05). Conclusion: Age and gender did not affect the perception of static and dynamic two point discrimination, while the limb side (left or right) affected the perception of static and dynamic two point discrimination. The index finger is also more sensitive to moving rather than static sensations.

  18. May 2002 Lidar Point Data of Southern California Coastline: Dana Point to Point La Jolla

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains lidar point data from a strip of Southern California coastline (including water, beach, cliffs, and top of cliffs) from Dana Point to Point La...

  19. September 2002 Lidar Point Data of Southern California Coastline: Dana Point to Point La Jolla

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data set contains lidar point data from a strip of Southern California coastline (including water, beach, cliffs, and top of cliffs) from Dana Point to Point La...

  20. SENSIT: a cross-section and design sensitivity and uncertainty analysis code

    International Nuclear Information System (INIS)

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE
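The variance propagation that SENSIT performs can be illustrated by the first-order "sandwich rule" to which generalized perturbation theory reduces in matrix form: var(R) = SᵀCS, where S holds the sensitivities of the response to each input cross section and C is their covariance matrix. The numbers below are invented for illustration, not taken from the code's documentation.

```python
import numpy as np

def response_variance(sensitivities, covariance):
    """First-order 'sandwich rule': var(R) = S^T C S, where S holds the
    sensitivities dR/dx_i and C is the covariance matrix of the inputs x."""
    S = np.asarray(sensitivities, dtype=float)
    C = np.asarray(covariance, dtype=float)
    return float(S @ C @ S)

# Two correlated input cross sections (hypothetical values)
S = np.array([2.0, -1.0])          # sensitivities of the response
C = np.array([[0.01, 0.002],
              [0.002, 0.0025]])    # input covariance matrix
var = response_variance(S, C)
std = var ** 0.5                   # estimated standard deviation of R
```

The off-diagonal terms of C are what make full covariance data matter: ignoring them here would change the result from 0.0345 to 0.0425.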

  1. Pratique de l'analyse fonctionelle

    CERN Document Server

    Tassinari, Robert

    1997-01-01

    Developing a product or service that is perfectly matched to the customer's needs and requirements is essential for any company. To leave nothing to chance, a rigorous methodology must be followed: that of functional analysis. This book defines the method precisely, along with its fields of application. It describes the most effective approaches to product design and the pursuit of quality, and introduces the notion of internal functional analysis. A key work for optimizing product design processes within a company. -- Key ideas, by Business Digest

  2. Learning power point 2000 easily

    Energy Technology Data Exchange (ETDEWEB)

    Mon, In Su; Je, Jung Suk

    2000-05-15

    This book introduces PowerPoint 2000: what PowerPoint is, what can be done with PowerPoint 2000, and whether it can be installed on a given computer. It covers running PowerPoint and basics such as creating a new presentation, entering text, using text boxes, and changing font size, color and shape; becoming a power user; inserting WordArt; and creating new files. It also deals with figures, charts, graphs, making multimedia files, giving presentations, and PowerPoint know-how for teachers and company workers.

  3. Cost/benefit analyses of environmental impact

    International Nuclear Information System (INIS)

    Goldman, M.I.

    1974-01-01

    Various aspects of cost-benefit analyses are considered. Some topics discussed are: regulations of the National Environmental Policy Act (NEPA); statement of AEC policy and procedures for implementation of NEPA; Calvert Cliffs decision; AEC Regulatory Guide; application of risk-benefit analysis to nuclear power; application of the as low as practicable (ALAP) rule to radiation discharges; thermal discharge restrictions proposed by EPA under the 1972 Amendment to the Water Pollution Control Act; estimates of somatic and genetic insult per unit population exposure; occupational exposure; EPA Point Source Guidelines for Discharges from Steam Electric Power Plants; and costs of closed-cycle cooling using cooling towers. (U.S.)

  4. Risky decision making from childhood through adulthood: Contributions of learning and sensitivity to negative feedback.

    Science.gov (United States)

    Humphreys, Kathryn L; Telzer, Eva H; Flannery, Jessica; Goff, Bonnie; Gabard-Durnam, Laurel; Gee, Dylan G; Lee, Steve S; Tottenham, Nim

    2016-02-01

    Decision making in the context of risk is a complex and dynamic process that changes across development. Here, we assessed the influence of sensitivity to negative feedback (e.g., loss) and learning on age-related changes in risky decision making, both of which show unique developmental trajectories. In the present study, we examined risky decision making in 216 individuals, ranging in age from 3-26 years, using the balloon emotional learning task (BELT), a computerized task in which participants pump up a series of virtual balloons to earn points, but risk balloon explosion on each trial, which results in no points. It is important to note that there were 3 balloon conditions, signified by different balloon colors, ranging from quick- to slow-to-explode, and participants could learn the color-condition pairings through task experience. Overall, we found age-related increases in pumps made and points earned. However, in the quick-to-explode condition, there was a nonlinear adolescent peak for points earned. Follow-up analyses indicated that this adolescent phenotype occurred at the developmental intersection of linear age-related increases in learning and decreases in sensitivity to negative feedback. Adolescence was marked by intermediate values on both these processes. These findings show that a combination of linearly changing processes can result in nonlinear changes in risky decision making, the adolescent-specific nature of which is associated with developmental improvements in learning and reduced sensitivity to negative feedback. (c) 2016 APA, all rights reserved.

  5. ATLAS Point 1 Construction

    CERN Multimedia

    Inigo-Golfin, J

    After 3 years of work in point 1, a number of surface buildings have already been completed and handed over to CERN (the control, the gas and the cooling and ventilation buildings) and, probably more appealing to the public, 60,000 m3 of earth have already been excavated from underground. At present, the technical cavern USA15 and its access shaft are almost finished, leaving only the main cavern and the liaison galleries to be completed in the coming year and a half. The main cavern has been excavated down to the radiation limit and its walls and vault will presently be concreted (see below the picture of the section of the vault with the impressive shell of 1.2 m thickness). The excavation of the bench (27 vertical metres to go yet!) will proceed from August, when some additional civil engineering work in the LHC tunnel will be undertaken. Needless to say many different services are necessary around the detector, both for its installation and future operation for physics. To that end much of the heavy...

  6. Point 1 Updates

    CERN Multimedia

    Inigo-Golfin, J.

    The ATLAS experimental area is located in Point 1, just across the main CERN entrance, in the commune of Meyrin. There people are ever so busy to finish the different infrastructures for ATLAS. Not only has Civil Engineering finished the construction of the USA15 technical cavern, but the excavation of the main UX15 cavern has resumed below the machine tunnel, after a brief halt to allow the construction of the UJ-caverns for the power converters of the LHC machine. The excavation work should end in August 2002. The UX15 hand-over to ATLAS is expected in April 2003. On the surface civil engineering is starting to complete the last two surface buildings (SDX1 and SH1), once the services (cooling pipes, ventilation ducts and the largest item, the lift modules and its lift of course) in the shaft PX15 have been completed. But the civil engineering is not all. A lot more is under way. The site installation of the steel structures in the caverns is to begin in Autumn, along with all the cooling pipes, airconditi...

  7. Rational points on varieties

    CERN Document Server

    Poonen, Bjorn

    2017-01-01

    This book is motivated by the problem of determining the set of rational points on a variety, but its true goal is to equip readers with a broad range of tools essential for current research in algebraic geometry and number theory. The book is unconventional in that it provides concise accounts of many topics instead of a comprehensive account of just one-this is intentionally designed to bring readers up to speed rapidly. Among the topics included are Brauer groups, faithfully flat descent, algebraic groups, torsors, étale and fppf cohomology, the Weil conjectures, and the Brauer-Manin and descent obstructions. A final chapter applies all these to study the arithmetic of surfaces. The down-to-earth explanations and the over 100 exercises make the book suitable for use as a graduate-level textbook, but even experts will appreciate having a single source covering many aspects of geometry over an unrestricted ground field and containing some material that cannot be found elsewhere. The origins of arithmetic (o...

  8. "Point de suspension"

    CERN Multimedia

    2004-01-01

    CERN - Globe of Science and Innovation 20 and 21 October Acrobatics, mime, a cappella singing, projections of images, a magical setting... a host of different tools of a grandeur matching that of the Universe they relate. A camera makes a massive zoom out to reveal the multiple dimensions of Nature. Freeze the frame: half way between the infinitesimally small and the infinitesimally large, a man suspends his everyday life (hence the title "Point de Suspension", which refers to the three dots at the end of an uncompleted sentence) to take a glimpse of the place he occupies in the great history of the Universe. An unusual perspective on what it means to be a human being... This wondrous show in the Globe of Science and Innovation, specially created by the Miméscope* company for the official ceremony marking CERN's fiftieth anniversary, is a gift from the Government of the Republic and Canton of Geneva, which also wishes to share this moment of wonder with the local population. There will be three perfo...

  9. "Point de suspension"

    CERN Multimedia

    2004-01-01

    http://www.cern.ch/cern50/ CERN - Globe of Science and Innovation 20 and 21 October Acrobatics, mime, a cappella singing, projections of images, a magical setting... a host of different tools of a grandeur matching that of the Universe they relate. A camera makes a massive zoom out to reveal the multiple dimensions of Nature. Freeze the frame: half way between the infinitesimally small and the infinitesimally large, a man suspends his everyday life (hence the title "Point de Suspension", which refers to the three dots at the end of an uncompleted sentence) to take a glimpse of the place he occupies in the great history of the Universe. An unusual perspective on what it means to be a human being... This wondrous show in the Globe of Science and Innovation, specially created by the Miméscope* company for the official ceremony marking CERN's fiftieth anniversary, is a gift from the Government of the Republic and Canton of Geneva, which also wishes to share this moment of wonder with the local pop...

  10. "Point de suspension"

    CERN Multimedia

    2004-01-01

    CERN - Globe of Science and Innovation 20 and 21 October Acrobatics, mime, a cappella singing, projections of images, a magical setting... a host of different tools of a grandeur matching that of the Universe they relate. A camera makes a massive zoom out to reveal the multiple dimensions of Nature. Freeze the frame: half way between the infinitesimally small and the infinitesimally large, a man suspends his everyday life (hence the title "Point de Suspension", which refers to the three dots at the end of an uncompleted sentence) to take a glimpse of the place he occupies in the great history of the Universe. An unusual perspective on what it means to be a human being... This spectacle in the Globe of Science and Innovation, specially created by the Miméscope* company for the official ceremony marking CERN's fiftieth anniversary, is a gift from the Government of the Republic and Canton of Geneva, which also wishes to share this moment of wonder with the local population. There will be three performances for...

  11. Sensitivity analysis of Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    Directory of Open Access Journals (Sweden)

    A. P. Jacquin

    2009-01-01

    Full Text Available This paper is concerned with the sensitivity analysis of the model parameters of the Takagi-Sugeno-Kang fuzzy rainfall-runoff models previously developed by the authors. These models are classified in two types of fuzzy models, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis and Sobol's variance decomposition. The data of six catchments from different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of several measures of goodness of fit, assessing the model performance from different points of view. These measures include the Nash-Sutcliffe criteria, volumetric errors and peak errors. The results show that the sensitivity of the model parameters depends on both the catchment type and the measure used to assess the model performance.
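Sobol's variance decomposition, one of the two global methods used in the record above, attributes a share Vᵢ/V of the output variance to each input. A minimal pick-freeze Monte Carlo estimator (an illustrative sketch, not the authors' implementation) can be written as:

```python
import numpy as np

def first_order_sobol(f, d, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    S_i = V_i / V for a model f on the unit hypercube [0,1]^d."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA = f(A)
    mean, var = fA.mean(), fA.var()
    S = []
    for i in range(d):
        Ci = B.copy()
        Ci[:, i] = A[:, i]        # freeze x_i, resample all other inputs
        fCi = f(Ci)
        S.append((np.mean(fA * fCi) - mean ** 2) / var)
    return np.array(S)

# Additive test model y = 2*x1 + x2: analytic first-order indices are 0.8 and 0.2
S = first_order_sobol(lambda X: 2 * X[:, 0] + X[:, 1], d=2)
```

For a rainfall-runoff model, `f` would wrap a full simulation returning a goodness-of-fit measure such as the Nash-Sutcliffe criterion, so each index reports how much that measure's variance a single parameter explains.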

  12. Variation of a test’s sensitivity and specificity with disease prevalence

    Science.gov (United States)

    Leeflang, Mariska M.G.; Rutjes, Anne W.S.; Reitsma, Johannes B.; Hooft, Lotty; Bossuyt, Patrick M.M.

    2013-01-01

    Background: Anecdotal evidence suggests that the sensitivity and specificity of a diagnostic test may vary with disease prevalence. Our objective was to investigate the associations between disease prevalence and test sensitivity and specificity using studies of diagnostic accuracy. Methods: We used data from 23 meta-analyses, each of which included 10–39 studies (416 total). The median prevalence per review ranged from 1% to 77%. We evaluated the effects of prevalence on sensitivity and specificity using a bivariate random-effects model for each meta-analysis, with prevalence as a covariate. We estimated the overall effect of prevalence by pooling the effects using the inverse variance method. Results: Within a given review, a change in prevalence from the lowest to highest value resulted in a corresponding change in sensitivity or specificity from 0 to 40 percentage points. This effect was statistically significant in several of the meta-analyses. Overall, specificity tended to be lower at higher disease prevalence; there was no such systematic effect for sensitivity. Interpretation: The sensitivity and specificity of a test often vary with disease prevalence; this effect is likely to be the result of mechanisms, such as patient spectrum, that affect prevalence, sensitivity and specificity. Because it may be difficult to identify such mechanisms, clinicians should use prevalence as a guide when selecting studies that most closely match their situation. PMID:23798453
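For reference, the quantities compared in these meta-analyses come straight from the 2×2 table. The toy numbers below are hypothetical; they show how the same test, with identical sensitivity, can present lower specificity in a high-prevalence setting, the pattern the pooled analysis found.

```python
def diagnostic_summary(tp, fp, tn, fn):
    """Standard 2x2-table measures: sensitivity and specificity condition
    on true disease status; prevalence is the diseased fraction of the sample."""
    n = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "prevalence": (tp + fn) / n,
    }

# Hypothetical case mixes for the same test in two settings (n = 1000 each)
low_prev = diagnostic_summary(tp=18, fp=49, tn=931, fn=2)     # 2% prevalence
high_prev = diagnostic_summary(tp=360, fp=90, tn=510, fn=40)  # 40% prevalence
```

Here sensitivity is 0.90 in both settings while specificity drops from 0.95 to 0.85, consistent with a spectrum effect accompanying the shift in prevalence.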

  13. The Tipping Points of Technology Development

    Directory of Open Access Journals (Sweden)

    Tauno Kekäle

    2014-07-01

    Full Text Available The tipping point, the decisive point in time in the competition between old and new, is an interesting phenomenon in today's physics. This aspect of technology acceptance is connected to many business decisions such as technology investments, product releases, resource allocation, and sales forecasts and, ultimately, affects the profitability and even survival of a company. The tipping point itself is based on many stochastic and dynamic variables, and the process may at least partly be described as path-dependent. This paper analyses the tipping point from three aspects: (1) product performance, (2) features of the market and infrastructure (including related technologies and human network externalities), and (3) actions of the incumbents (including customer lock-in, systems lock-in, and sustaining innovation). The paper is based on the Bass s-curve idea and the technology trajectory concept proposed by Dosi. Three illustrative cases are presented to make the point of the multiple factors affecting technology acceptance and, thus, the tipping point. The paper also suggests outlines for further research in the field of computer simulation.

  14. Modeling fixation locations using spatial point processes.

    Science.gov (United States)

    Barthelmé, Simon; Trukenbrod, Hans; Engbert, Ralf; Wichmann, Felix

    2013-10-01

    Whenever eye movements are measured, a central part of the analysis has to do with where subjects fixate and why they fixated where they fixated. To a first approximation, a set of fixations can be viewed as a set of points in space; this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from point processes is a very fruitful framework for eye-movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular we show how point processes naturally express the idea that image features' predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarify their interpretation.
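As a concrete instance of the framework above, a homogeneous spatial Poisson process, the simplest null model for fixation locations, can be simulated in two steps: draw the point count from Poisson(λ · area), then place that many points uniformly in the window. A sketch with an assumed rectangular window (function name and parameters are illustrative):

```python
import numpy as np

def simulate_poisson_process(intensity, width, height, seed=0):
    """Homogeneous spatial Poisson process on a [0,w] x [0,h] window:
    the point count is Poisson(intensity * area) and, conditional on the
    count, locations are independent and uniform over the window."""
    rng = np.random.default_rng(seed)
    n = rng.poisson(intensity * width * height)
    xs = rng.uniform(0, width, n)
    ys = rng.uniform(0, height, n)
    return np.column_stack([xs, ys])

# Expected number of points: 50 * 2.0 * 1.0 = 100
pts = simulate_poisson_process(intensity=50, width=2.0, height=1.0)
```

Inhomogeneous processes, where the intensity varies with image features, are what make the framework useful for relating saliency maps to observed fixations; they can be simulated by thinning a homogeneous process.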

  15. Pilot points method for conditioning multiple-point statistical facies simulation on flow data

    Science.gov (United States)

    Ma, Wei; Jafarpour, Behnam

    2018-05-01

    We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity where away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.

  16. Point cloud processing for smart systems

    Directory of Open Access Journals (Sweden)

    Jaromír Landa

    2013-01-01

    Full Text Available High population as well as the economical tension emphasises the necessity of effective city management – from land use planning to urban green maintenance. The management effectiveness is based on precise knowledge of the city environment. Point clouds generated by mobile and terrestrial laser scanners provide precise data about objects in the scanner vicinity. From these data pieces the state of the roads, buildings, trees and other objects important for this decision-making process can be obtained. Generally, they can support the idea of “smart” or at least “smarter” cities.Unfortunately the point clouds do not provide this type of information automatically. It has to be extracted. This extraction is done by expert personnel or by object recognition software. As the point clouds can represent large areas (streets or even cities, usage of expert personnel to identify the required objects can be very time-consuming, therefore cost ineffective. Object recognition software allows us to detect and identify required objects semi-automatically or automatically.The first part of the article reviews and analyses the state of current art point cloud object recognition techniques. The following part presents common formats used for point cloud storage and frequently used software tools for point cloud processing. Further, a method for extraction of geospatial information about detected objects is proposed. Therefore, the method can be used not only to recognize the existence and shape of certain objects, but also to retrieve their geospatial properties. These objects can be later directly used in various GIS systems for further analyses.

  17. Exploring Foundation Concepts in Introductory Statistics Using Dynamic Data Points

    Science.gov (United States)

    Ekol, George

    2015-01-01

    This paper analyses introductory statistics students' verbal and gestural expressions as they interacted with a dynamic sketch (DS) designed using "Sketchpad" software. The DS involved numeric data points built on the number line whose values changed as the points were dragged along the number line. The study is framed on aggregate…

  18. Individual modulation of pain sensitivity under stress.

    Science.gov (United States)

    Reinhardt, Tatyana; Kleindienst, Nikolaus; Treede, Rolf-Detlef; Bohus, Martin; Schmahl, Christian

    2013-05-01

    Stress has a strong influence on pain sensitivity. However, the direction of this influence is unclear. Recent studies reported both decreased and increased pain sensitivities under stress, and one hypothesis is that interindividual differences account for these differences. The aim of our study was to investigate the effect of stress on individual pain sensitivity in a relatively large female sample. Eighty female participants were included. Pain thresholds and temporal summation of pain were tested before and after stress, which was induced by the Mannheim Multicomponent Stress Test. In an independent sample of 20 women, correlation coefficients between 0.45 and 0.89 indicated relatively high test-retest reliability for pain measurements. On average, there were significant differences between pain thresholds under non-stress and stress conditions, indicating an increased sensitivity to pain under stress. No significant differences between non-stress and stress conditions were found for temporal summation of pain. On an individual basis, both decreased and increased pain sensitivities under stress conditions based on Jacobson's criteria for reliable change were observed. Furthermore, we found significant negative associations between pain sensitivity under non-stress conditions and individual change of pain sensitivity under stress. Participants with relatively high pain sensitivity under non-stress conditions became less sensitive under stress and vice versa. These findings support the view that pain sensitivity under stress shows large interindividual variability, and point to a possible dichotomy of altered pain sensitivity under stress. Wiley Periodicals, Inc.
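The "reliable change" classification mentioned above (Jacobson's criteria) asks whether a pre-post difference exceeds what measurement error alone would produce, given the test-retest reliability. A sketch with invented numbers (the function and values are illustrative, not from the study):

```python
import math

def reliable_change(pre, post, sd_pre, reliability, criterion=1.96):
    """Jacobson & Truax reliable change index: RCI = (post - pre) / S_diff,
    where S_diff = sqrt(2) * SEM and SEM = sd_pre * sqrt(1 - reliability)."""
    sem = sd_pre * math.sqrt(1.0 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2.0) * sem                # SE of a difference score
    rci = (post - pre) / s_diff
    return rci, abs(rci) > criterion

# Hypothetical pain-threshold change with test-retest reliability r = 0.80
rci, changed = reliable_change(pre=45.0, post=51.0, sd_pre=6.0, reliability=0.80)
```

With these numbers the RCI is about 1.58, below the 1.96 criterion, so the six-point shift would not count as reliable change at this reliability level.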

  19. Transcriptome and selected metabolite analyses reveal points of sugar metabolism in jackfruit (Artocarpus heterophyllus Lam.).

    Science.gov (United States)

    Hu, Lisong; Wu, Gang; Hao, Chaoyun; Yu, Huan; Tan, Lehe

    2016-07-01

    Artocarpus heterophyllus Lam., commonly known as jackfruit, produces the largest tree-borne fruit known thus far. The edible part of the fruit develops from the perianths, and contains many sugar-derived compounds. However, its sugar metabolism is poorly understood. A fruit perianth transcriptome was sequenced on an Illumina HiSeq 2500 platform, producing 32,459 unigenes with an average length of 1345nt. Sugar metabolism was characterized by comparing expression patterns of genes related to sugar metabolism and evaluating correlations with enzyme activity and sugar accumulation during fruit perianth development. During early development, high expression levels of acid invertases and corresponding enzyme activities were responsible for the rapid utilization of imported sucrose for fruit growth. The differential expression of starch metabolism-related genes and corresponding enzyme activities were responsible for starch accumulated before fruit ripening but decreased during ripening. Sucrose accumulated during ripening, when the expression levels of genes for sucrose synthesis were elevated and high enzyme activity was observed. The comprehensive transcriptome analysis presents fundamental information on sugar metabolism and will be a useful reference for further research on fruit perianth development in jackfruit. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Statistical improvements in functional magnetic resonance imaging analyses produced by censoring high-motion data points.

    Science.gov (United States)

    Siegel, Joshua S; Power, Jonathan D; Dubis, Joseph W; Vogel, Alecia C; Church, Jessica A; Schlaggar, Bradley L; Petersen, Steven E

    2014-05-01

    Subject motion degrades the quality of task functional magnetic resonance imaging (fMRI) data. Here, we test two classes of methods to counteract the effects of motion in task fMRI data: (1) a variety of motion regressions and (2) motion censoring ("motion scrubbing"). In motion regression, various regressors based on realignment estimates were included as nuisance regressors in general linear model (GLM) estimation. In motion censoring, volumes in which head motion exceeded a threshold were withheld from GLM estimation. The effects of each method were explored in several task fMRI data sets and compared using indicators of data quality and signal-to-noise ratio. Motion censoring decreased variance in parameter estimates within- and across-subjects, reduced residual error in GLM estimation, and increased the magnitude of statistical effects. Motion censoring performed better than all forms of motion regression and also performed well across a variety of parameter spaces, in GLMs with assumed or unassumed response shapes. We conclude that motion censoring improves the quality of task fMRI data and can be a valuable processing step in studies involving populations with even mild amounts of head movement. Copyright © 2013 Wiley Periodicals, Inc.
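The censoring step itself reduces to a one-line mask: volumes whose framewise displacement (FD) exceeds a threshold are withheld from GLM estimation. A sketch with toy data (the 0.9 mm threshold and the arrays are illustrative):

```python
import numpy as np

def censor_high_motion(data, fd, threshold=0.9):
    """Motion censoring ('scrubbing'): drop fMRI volumes whose framewise
    displacement (FD, in mm) exceeds a threshold before GLM estimation."""
    fd = np.asarray(fd, dtype=float)
    keep = fd <= threshold
    return np.asarray(data)[keep], keep

# Toy time series of 6 volumes with FD spikes on volumes 2 and 5
data = np.arange(6).reshape(6, 1)             # shape: (volumes, voxels)
fd = np.array([0.1, 0.2, 1.5, 0.3, 0.2, 2.0])
clean, keep = censor_high_motion(data, fd, threshold=0.9)
```

In a real pipeline the `keep` mask would also be applied to the design matrix (or the censored frames modeled with spike regressors) so that the GLM sees only the retained volumes.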

  1. Selective Integration in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Lars; Andersen, Søren; Damkilde, Lars

    2009-01-01

    The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared...... to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour....

  2. Noise sensitivity and reactions to noise and other environmental conditions

    NARCIS (Netherlands)

    Miedema, H.M.E.; Vos, H.

    2003-01-01

    This article integrates findings from the literature and new results regarding noise sensitivity. The new results are based on analyses of 28 combined datasets (N=23 038), and separate analyses of a large aircraft noise study (N=10939). Three topics regarding noise sensitivity are discussed, namely,

  3. Solving discrete zero point problems

    NARCIS (Netherlands)

    van der Laan, G.; Talman, A.J.J.; Yang, Z.F.

    2004-01-01

In this paper an algorithm is proposed to find a discrete zero point of a function on the collection of integral points in the n-dimensional Euclidean space IRn. Starting with a given integral point, the algorithm generates a finite sequence of adjacent integral simplices of varying dimension and

  4. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    Science.gov (United States)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
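The complex-variable approach mentioned above can be illustrated on a toy function (the function here is illustrative; DYMORE's actual structural responses are of course far more complex):

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step approximation df/dx ~ Im(f(x + i*h)) / h.
    Unlike finite differences there is no subtractive cancellation,
    so h can be made tiny and the derivative is accurate to roughly
    machine precision."""
    return np.imag(f(x + 1j * h)) / h

# toy "structural response" as a stand-in for a solver output
f = lambda x: x**3 + np.sin(x)
dfdx = complex_step_derivative(f, 1.5)   # exact: 3*1.5**2 + cos(1.5)
```

This is why the complex-variable sensitivities in the paper can serve as a reference for verifying the adjoint formulation.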

  5. Risk factors associated with sensitization to hydroxyisohexyl 3-cyclohexene carboxaldehyde.

    Science.gov (United States)

    Uter, Wolfgang; Geier, Johannes; Schnuch, Axel; Gefeller, Olaf

    2013-08-01

    Hydroxyisohexyl 3-cyclohexene carboxaldehyde (HICC) is a synthetic fragrance chemical and an important contact allergen, at least in Europe. Despite this importance, little is known about risk factors associated with this allergen. To examine factors from the history and clinical presentation of patch tested patients associated with HICC sensitization. Contact allergy surveillance data of 95 637 patients collected by the Information Network of Departments of Dermatology (IVDK, www.ivkd.org) in 2002-2011 were analysed. Point and interval estimates of the relative risk were derived from multifactorial logistic regression modelling. The overall prevalence of HICC sensitization was 2.24%. The strongest risk factors were polysensitization and dermatitis of the axillae, followed by dermatitis at other sites. No consistent and significant time trend was observed in this analysis. As compared with the youngest patients, the odds of HICC sensitization increased approximately three-fold in the 52-67-year age group, and strongly declined with further increasing age. The risk pattern with regard to age and affected anatomical site differed from that observed with other fragrance screening allergens. Cosmetic exposure, as broadly defined here, was a stronger and more prevalent individual risk factor than occupational exposure. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  6. Compilation and network analyses of cambrian food webs.

    Directory of Open Access Journals (Sweden)

    Jennifer A Dunne

    2008-04-01

A rich body of empirically grounded theory has developed about food webs--the networks of feeding relationships among species within habitats. However, detailed food-web data and analyses are lacking for ancient ecosystems, largely because of the low resolution of taxa coupled with uncertain and incomplete information about feeding interactions. These impediments appear insurmountable for most fossil assemblages; however, a few assemblages with excellent soft-body preservation across trophic levels are candidates for food-web data compilation and topological analysis. Here we present plausible, detailed food webs for the Chengjiang and Burgess Shale assemblages from the Cambrian Period. Analyses of degree distributions and other structural network properties, including sensitivity analyses of the effects of uncertainty associated with Cambrian diet designations, suggest that these early Paleozoic communities share remarkably similar topology with modern food webs. Observed regularities reflect a systematic dependence of structure on the numbers of taxa and links in a web. Most aspects of Cambrian food-web structure are well-characterized by a simple "niche model," which was developed for modern food webs and takes into account this scale dependence. However, a few aspects of topology differ between the ancient and recent webs: longer path lengths between species and more species in feeding loops in the earlier Chengjiang web, and higher variability in the number of links per species for both Cambrian webs. Our results are relatively insensitive to the exclusion of low-certainty or random links. The many similarities between Cambrian and recent food webs point toward surprisingly strong and enduring constraints on the organization of complex feeding interactions among metazoan species. The few differences could reflect a transition to more strongly integrated and constrained trophic organization within ecosystems following the rapid

  7. Compilation and network analyses of cambrian food webs.

    Science.gov (United States)

    Dunne, Jennifer A; Williams, Richard J; Martinez, Neo D; Wood, Rachel A; Erwin, Douglas H

    2008-04-29

    A rich body of empirically grounded theory has developed about food webs--the networks of feeding relationships among species within habitats. However, detailed food-web data and analyses are lacking for ancient ecosystems, largely because of the low resolution of taxa coupled with uncertain and incomplete information about feeding interactions. These impediments appear insurmountable for most fossil assemblages; however, a few assemblages with excellent soft-body preservation across trophic levels are candidates for food-web data compilation and topological analysis. Here we present plausible, detailed food webs for the Chengjiang and Burgess Shale assemblages from the Cambrian Period. Analyses of degree distributions and other structural network properties, including sensitivity analyses of the effects of uncertainty associated with Cambrian diet designations, suggest that these early Paleozoic communities share remarkably similar topology with modern food webs. Observed regularities reflect a systematic dependence of structure on the numbers of taxa and links in a web. Most aspects of Cambrian food-web structure are well-characterized by a simple "niche model," which was developed for modern food webs and takes into account this scale dependence. However, a few aspects of topology differ between the ancient and recent webs: longer path lengths between species and more species in feeding loops in the earlier Chengjiang web, and higher variability in the number of links per species for both Cambrian webs. Our results are relatively insensitive to the exclusion of low-certainty or random links. The many similarities between Cambrian and recent food webs point toward surprisingly strong and enduring constraints on the organization of complex feeding interactions among metazoan species. 
The few differences could reflect a transition to more strongly integrated and constrained trophic organization within ecosystems following the rapid diversification of species, body

  8. TEMAC, Top Event Sensitivity Analysis

    International Nuclear Information System (INIS)

    Iman, R.L.; Shortencarier, M.J.

    1988-01-01

1 - Description of program or function: TEMAC is designed to permit the user to easily estimate risk and to perform sensitivity and uncertainty analyses with a Boolean expression such as produced by the SETS computer program. SETS produces a mathematical representation of a fault tree used to model system unavailability. In the terminology of the TEMAC program, such a mathematical representation is referred to as a top event. The analysis of risk involves the estimation of the magnitude of risk, the sensitivity of risk estimates to base event probabilities and initiating event frequencies, and the quantification of the uncertainty in the risk estimates. 2 - Method of solution: Sensitivity and uncertainty analyses associated with top events involve mathematical operations on the corresponding Boolean expression for the top event, as well as repeated evaluations of the top event in a Monte Carlo fashion. TEMAC employs a general matrix approach which provides a convenient general form for Boolean expressions, is computationally efficient, and allows large problems to be analyzed. 3 - Restrictions on the complexity of the problem - Maxima of: 4000 cut sets, 500 events, 500 values in a Monte Carlo sample, 16 characters in an event name. These restrictions are implemented through the FORTRAN 77 PARAMETER statement
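TEMAC itself uses a general matrix approach over the Boolean expression; purely to illustrate what "repeated evaluations of the top event in a Monte Carlo fashion" means (this sketch is not the TEMAC algorithm), consider a top event given as minimal cut sets:

```python
import numpy as np

def top_event_probability(cut_sets, p, n_samples=200_000, seed=1):
    """Monte Carlo estimate of P(top event) for a coherent fault tree.
    cut_sets: list of minimal cut sets, each a tuple of basic-event
    indices; p: basic-event probabilities. The top event occurs when
    every basic event of at least one cut set occurs."""
    rng = np.random.default_rng(seed)
    occurred = rng.random((n_samples, len(p))) < np.asarray(p)
    top = np.zeros(n_samples, dtype=bool)
    for cs in cut_sets:
        top |= occurred[:, list(cs)].all(axis=1)
    return top.mean()

# top event = (A and B) or C, with P(A)=0.1, P(B)=0.2, P(C)=0.05
est = top_event_probability([(0, 1), (2,)], [0.1, 0.2, 0.05])
# exact value for independent events: 1 - (1 - 0.1*0.2)*(1 - 0.05) = 0.069
```

Sampling the basic-event probabilities themselves from distributions, rather than fixing them, would turn this into the uncertainty analysis the record describes.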

  9. PowerPoint 2010 Bible

    CERN Document Server

    Wempen, Faithe

    2010-01-01

Master PowerPoint and improve your presentation skills, with one book! It's no longer enough to have slide after slide of text, bullets, and charts. It's not even enough to have good speaking skills if your PowerPoint slides bore your audience. Get the very most out of all that PowerPoint 2010 has to offer while also learning priceless tips and techniques for making good presentations in this new PowerPoint 2010 Bible. Well-known PowerPoint expert and author Faithe Wempen provides formatting tips; shows you how to work with drawings, tables, and SmartArt; introduces new collaboration tools; wa

  10. Imaging study on acupuncture points

    Science.gov (United States)

    Yan, X. H.; Zhang, X. Y.; Liu, C. L.; Dang, R. S.; Ando, M.; Sugiyama, H.; Chen, H. S.; Ding, G. H.

    2009-09-01

The topographic structures of acupuncture points were investigated using the synchrotron-radiation-based Dark Field Image (DFI) method. The following four acupuncture points were studied: Sanyinjiao, Neiguan, Zusanli and Tianshu. We found that at acupuncture point regions there is an accumulation of micro-vessels. Images taken in the surrounding tissue outside the acupuncture points do not show this kind of structure. This is the first time the specific structure of acupuncture points has been revealed directly by X-ray imaging.

  11. Imaging study on acupuncture points

    International Nuclear Information System (INIS)

    Yan, X H; Zhang, X Y; Liu, C L; Dang, R S; Ando, M; Sugiyama, H; Chen, H S; Ding, G H

    2009-01-01

The topographic structures of acupuncture points were investigated using the synchrotron-radiation-based Dark Field Image (DFI) method. The following four acupuncture points were studied: Sanyinjiao, Neiguan, Zusanli and Tianshu. We found that at acupuncture point regions there is an accumulation of micro-vessels. Images taken in the surrounding tissue outside the acupuncture points do not show this kind of structure. This is the first time the specific structure of acupuncture points has been revealed directly by X-ray imaging.

  12. Magic Pointing for Eyewear Computers

    DEFF Research Database (Denmark)

    Jalaliniya, Shahram; Mardanbegi, Diako; Pederson, Thomas

    2015-01-01

In this paper, we propose a combination of head and eye movements for touchlessly controlling the "mouse pointer" on eyewear devices, exploiting the speed of eye pointing and the accuracy of head pointing. The method is a wearable-computer-targeted variation of the original MAGIC pointing approach, which combined gaze tracking with a classical mouse device. The results of our experiment show that the combination of eye and head movements is faster than head pointing for far targets and more accurate than eye pointing.

  13. Chemical kinetic functional sensitivity analysis: Elementary sensitivities

    International Nuclear Information System (INIS)

    Demiralp, M.; Rabitz, H.

    1981-01-01

Sensitivity analysis is considered for kinetics problems defined in the space-time domain. This extends an earlier temporal Green's function method to handle calculations of elementary functional sensitivities δu_i/δα_j, where u_i is the ith species concentration and α_j is the jth system parameter. The system parameters include rate constants, diffusion coefficients, initial conditions, boundary conditions, or any other well-defined variables in the kinetic equations. These parameters are generally considered to be functions of position and/or time. Derivation of the governing equations for the sensitivities and the Green's function is presented. The physical interpretation of the Green's function and sensitivities is given, along with a discussion of the relation of this work to earlier research
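The notion of an elementary sensitivity can be made concrete on the simplest kinetic system, first-order decay du/dt = -k*u (an illustrative reduction; the paper treats general space-time kinetics). The sensitivity s = du/dk obeys its own ODE, ds/dt = -k*s - u with s(0) = 0, and can be integrated alongside the state:

```python
import numpy as np

def kinetics_with_sensitivity(k, u0=1.0, t_end=2.0, n=2000):
    """Direct (forward) sensitivity analysis for du/dt = -k*u.
    The elementary sensitivity s = du/dk obeys ds/dt = -k*s - u,
    s(0) = 0; both equations are integrated together with RK4."""
    def rhs(y):
        u, s = y
        return np.array([-k * u, -k * s - u])
    y = np.array([u0, 0.0])
    h = t_end / n
    for _ in range(n):                         # classical RK4 steps
        k1 = rhs(y)
        k2 = rhs(y + h / 2 * k1)
        k3 = rhs(y + h / 2 * k2)
        k4 = rhs(y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y                                   # (u(t_end), du/dk)

u, s = kinetics_with_sensitivity(k=0.8)
# analytic solution: u = u0*exp(-k*t), s = -u0*t*exp(-k*t)
```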

  14. Starting point anchoring effects in choice experiments

    DEFF Research Database (Denmark)

    Ladenburg, Jacob; Olsen, Søren Bøye

of preferences in Choice Experiments resembles the Dichotomous Choice format, there is reason to suspect that Choice Experiments are equally vulnerable to anchoring bias. Employing different sets of price levels in a so-called Instruction Choice Set presented prior to the actual choice sets, the present study ... subjectivity in the present study is gender dependent, pointing towards female respondents being prone to be affected by the price levels employed. Male respondents, on the other hand, are not sensitive to these price levels. Overall, this implies that female respondents, when presented with a low-priced Instruction Choice Set, tend to express lower willingness-to-pay than when higher prices are employed.

  15. Evaluating Google, Twitter, and Wikipedia as Tools for Influenza Surveillance Using Bayesian Change Point Analysis: A Comparative Analysis.

    Science.gov (United States)

    Sharpe, J Danielle; Hopkins, Richard S; Cook, Robert L; Striley, Catherine W

    2016-10-20

    Traditional influenza surveillance relies on influenza-like illness (ILI) syndrome that is reported by health care providers. It primarily captures individuals who seek medical care and misses those who do not. Recently, Web-based data sources have been studied for application to public health surveillance, as there is a growing number of people who search, post, and tweet about their illnesses before seeking medical care. Existing research has shown some promise of using data from Google, Twitter, and Wikipedia to complement traditional surveillance for ILI. However, past studies have evaluated these Web-based sources individually or dually without comparing all 3 of them, and it would be beneficial to know which of the Web-based sources performs best in order to be considered to complement traditional methods. The objective of this study is to comparatively analyze Google, Twitter, and Wikipedia by examining which best corresponds with Centers for Disease Control and Prevention (CDC) ILI data. It was hypothesized that Wikipedia will best correspond with CDC ILI data as previous research found it to be least influenced by high media coverage in comparison with Google and Twitter. Publicly available, deidentified data were collected from the CDC, Google Flu Trends, HealthTweets, and Wikipedia for the 2012-2015 influenza seasons. Bayesian change point analysis was used to detect seasonal changes, or change points, in each of the data sources. Change points in Google, Twitter, and Wikipedia that occurred during the exact week, 1 preceding week, or 1 week after the CDC's change points were compared with the CDC data as the gold standard. All analyses were conducted using the R package "bcp" version 4.0.0 in RStudio version 0.99.484 (RStudio Inc). In addition, sensitivity and positive predictive values (PPV) were calculated for Google, Twitter, and Wikipedia. During the 2012-2015 influenza seasons, a high sensitivity of 92% was found for Google, whereas the PPV for
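The study above runs its analysis with the R package "bcp"; as a rough stand-in for the idea of locating a seasonal change point in a surveillance series, here is a minimal least-squares single change point detector (illustrative only; it is not a Bayesian method and not the paper's procedure):

```python
import numpy as np

def single_change_point(series):
    """Return the split index k that minimises the pooled
    within-segment sum of squared errors for a single mean shift;
    a lightweight stand-in for full Bayesian change point analysis."""
    y = np.asarray(series, dtype=float)
    best_k, best_sse = None, np.inf
    for k in range(1, len(y)):
        left, right = y[:k], y[k:]
        sse = ((left - left.mean()) ** 2).sum() + \
              ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k

# toy ILI-like weekly counts: quiet baseline, epidemic onset at week 6
weekly = [1, 1, 2, 1, 1, 2, 9, 10, 11, 9, 10]
onset = single_change_point(weekly)
```

Comparing the detected onset week in a Web-based series against the onset in the CDC series, within a one-week window, is essentially the matching rule the study applies.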

  16. Sample preparation in foodomic analyses.

    Science.gov (United States)

    Martinović, Tamara; Šrajer Gajdošik, Martina; Josić, Djuro

    2018-04-16

Representative sampling and adequate sample preparation are key factors for successful performance of further steps in foodomic analyses, as well as for correct data interpretation. Incorrect sampling and improper sample preparation can be sources of severe bias in foodomic analyses. It is well known that both wrong sampling and wrong sample treatment cannot be corrected afterwards. These facts, frequently neglected in the past, are now taken into consideration, and the progress in sampling and sample preparation in foodomics is reviewed here. We report the use of highly sophisticated instruments for both high-performance and high-throughput analyses, as well as miniaturization and the use of laboratory robotics in metabolomics, proteomics, peptidomics and genomics. This article is protected by copyright. All rights reserved.

  17. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
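The construction of a covariance matrix from target 2-point statistics can be sketched in one dimension, assuming an exponential autocorrelation (an illustrative choice; the paper's correlation model and field dimensionality differ):

```python
import numpy as np

def correlated_field(n, dx, corr_len, sigma, n_samples=1, seed=0):
    """Draw Gaussian random fields whose 2-point statistic matches a
    target exponential autocorrelation exp(-|r|/corr_len), by Cholesky
    factorisation of the covariance matrix built from that target."""
    x = np.arange(n) * dx
    r = np.abs(x[:, None] - x[None, :])
    cov = sigma ** 2 * np.exp(-r / corr_len)
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability
    rng = np.random.default_rng(seed)
    return L @ rng.standard_normal((n, n_samples))

# 500 realisations of a 64-point field (e.g. slip along strike)
fields = correlated_field(n=64, dx=0.5, corr_len=5.0, sigma=1.0,
                          n_samples=500)
```

Cross-correlations between different source parameters would enter the same way, as off-diagonal blocks of a joint covariance matrix.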

  18. Precise Chemical Analyses of Planetary Surfaces

    Science.gov (United States)

    Kring, David; Schweitzer, Jeffrey; Meyer, Charles; Trombka, Jacob; Freund, Friedemann; Economou, Thanasis; Yen, Albert; Kim, Soon Sam; Treiman, Allan H.; Blake, David; hide

    1996-01-01

    We identify the chemical elements and element ratios that should be analyzed to address many of the issues identified by the Committee on Planetary and Lunar Exploration (COMPLEX). We determined that most of these issues require two sensitive instruments to analyze the necessary complement of elements. In addition, it is useful in many cases to use one instrument to analyze the outermost planetary surface (e.g. to determine weathering effects), while a second is used to analyze a subsurface volume of material (e.g., to determine the composition of unaltered planetary surface material). This dual approach to chemical analyses will also facilitate the calibration of orbital and/or Earth-based spectral observations of the planetary body. We determined that in many cases the scientific issues defined by COMPLEX can only be fully addressed with combined packages of instruments that would supplement the chemical data with mineralogic or visual information.

  19. Sensitivities of JEM-X

    DEFF Research Database (Denmark)

    Westergaard, Niels Jørgen Stenfeldt; Lund, Niels

    1999-01-01

The mask design for JEM-X is now finalized regarding the hole pattern and mechanical support structure. The engineering model of the detectors with collimators is under construction. The JEM-X sensitivities for source detection in various circumstances are reviewed with proper regard to the way INTEGRAL will carry out its pointed observations. An important fraction of the INTEGRAL observation time will be used for scans along the galactic plane and observations of the central region of our galaxy. The JEM-X performance for this type of observation is discussed. The software tools used to carry...

  20. Sensitivity analysis of a PWR pressurizer

    International Nuclear Information System (INIS)

    Bruel, Renata Nunes

    1997-01-01

A sensitivity analysis with respect to the parameters and modelling of the physical processes in a PWR pressurizer has been performed. The sensitivity analysis was developed by varying the key parameters and theoretical modellings, which generated a comprehensive matrix of the influence of each change analysed. The major influences observed were the flashing phenomenon and steam condensation on the spray drops. The present analysis is also applicable to several theoretical and experimental areas. (author)

  1. Descriptive Analyses of Mechanical Systems

    DEFF Research Database (Denmark)

    Andreasen, Mogens Myrup; Hansen, Claus Thorp

    2003-01-01

Foreword: Product analysis and technology analysis can be carried out with a broad socio-technical aim, in order to understand cultural, sociological, design-related, business-related and many other aspects. One sub-area of this is the systemic analysis and description of products and systems. The present compend...

  2. Analysing Children's Drawings: Applied Imagination

    Science.gov (United States)

    Bland, Derek

    2012-01-01

    This article centres on a research project in which freehand drawings provided a richly creative and colourful data source of children's imagined, ideal learning environments. Issues concerning the analysis of the visual data are discussed, in particular, how imaginative content was analysed and how the analytical process was dependent on an…

  3. Impact analyses after pipe rupture

    International Nuclear Information System (INIS)

    Chun, R.C.; Chuang, T.Y.

    1983-01-01

    Two of the French pipe whip experiments are reproduced with the computer code WIPS. The WIPS results are in good agreement with the experimental data and the French computer code TEDEL. This justifies the use of its pipe element in conjunction with its U-bar element in a simplified method of impact analyses

  4. Millifluidic droplet analyser for microbiology

    NARCIS (Netherlands)

    Baraban, L.; Bertholle, F.; Salverda, M.L.M.; Bremond, N.; Panizza, P.; Baudry, J.; Visser, de J.A.G.M.; Bibette, J.

    2011-01-01

We present a novel millifluidic droplet analyser (MDA) for precisely monitoring the dynamics of microbial populations over multiple generations in numerous (≥10³) aqueous emulsion droplets (100 nL). As a first application, we measure the growth rate of a bacterial strain and determine the minimal

  5. Analyser of sweeping electron beam

    International Nuclear Information System (INIS)

    Strasser, A.

    1993-01-01

    The electron beam analyser has an array of conductors that can be positioned in the field of the sweeping beam, an electronic signal treatment system for the analysis of the signals generated in the conductors by the incident electrons and a display for the different characteristics of the electron beam

  6. Electrochemical biosensors for hormone analyses.

    Science.gov (United States)

    Bahadır, Elif Burcu; Sezgintürk, Mustafa Kemal

    2015-06-15

Electrochemical biosensors have a unique place in the determination of hormones owing to their simplicity, sensitivity, portability and ease of operation. Unlike chromatographic techniques, electrochemical techniques do not require pre-treatment. Electrochemical biosensors are based on amperometric, potentiometric, impedimetric and conductometric principles, of which the amperometric technique is the most commonly used. Although electrochemical biosensors offer great selectivity and sensitivity for early clinical analysis, poorly reproducible results and difficult regeneration steps remain the primary challenges to the commercialization of these biosensors. This review summarizes electrochemical (amperometric, potentiometric, impedimetric and conductometric) biosensors for hormone detection for the first time in the literature. After a brief description of the hormones, the immobilization steps and analytical performance of these biosensors are summarized. Linear ranges, LODs, reproducibilities and regeneration of the developed biosensors are compared. Future outlooks in this area are also discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. Sensitivity analysis using probability bounding

    International Nuclear Information System (INIS)

    Ferson, Scott; Troy Tucker, W.

    2006-01-01

    Probability bounds analysis (PBA) provides analysts a convenient means to characterize the neighborhood of possible results that would be obtained from plausible alternative inputs in probabilistic calculations. We show the relationship between PBA and the methods of interval analysis and probabilistic uncertainty analysis from which it is jointly derived, and indicate how the method can be used to assess the quality of probabilistic models such as those developed in Monte Carlo simulations for risk analyses. We also illustrate how a sensitivity analysis can be conducted within a PBA by pinching inputs to precise distributions or real values
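The "pinching" idea can be shown with plain interval propagation instead of full probability bounds analysis (a deliberately simplified sketch): pinch one uncertain input to a point value and compare output interval widths.

```python
def output_interval(a, b, c):
    """Propagate nonnegative intervals through y = a*b + c
    (monotone in each input, so interval endpoints map to
    output endpoints). Each input is a (lo, hi) pair."""
    return (a[0] * b[0] + c[0], a[1] * b[1] + c[1])

base = output_interval((1, 2), (3, 5), (0, 1))     # all inputs uncertain
pinched = output_interval((1, 2), (4, 4), (0, 1))  # pinch b to the point 4
reduction = 1 - (pinched[1] - pinched[0]) / (base[1] - base[0])
# pinching b removes 37.5% of the output uncertainty in this toy model
```

The relative reduction in output uncertainty under pinching is the sensitivity measure: inputs whose pinching shrinks the output the most are the ones worth learning more about.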

  8. Laser Dew-Point Hygrometer

    Science.gov (United States)

    Matsumoto, Shigeaki; Toyooka, Satoru

    1995-01-01

A rough-surface-type automatic dew-point hygrometer was developed using a laser diode and an optical fiber cable. A gold plate with 0.8 µm average surface roughness was used as the surface for deposition of dew, to facilitate dew deposition and prevent supersaturation of water vapor at the dew point. It was shown experimentally that the quantity of dew deposited can be controlled to be constant at any predetermined level, independent of the dew point to be measured. Dew points were measured in the range from -15 °C to 54 °C, with the air temperature ranging from 0 °C to 60 °C. The measurement error of the dew point was ±0.5 °C, corresponding to less than ±2% relative humidity over the above dew-point range.
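The quoted relation between dew-point error and relative-humidity error can be checked with the Magnus approximation (a standard textbook formula, not necessarily the calibration used by this instrument):

```python
import math

A, B = 17.62, 243.12   # Magnus coefficients, valid roughly -45..60 degC

def dew_point_c(temp_c, rh_percent):
    """Dew point (degC) from air temperature and relative humidity."""
    gamma = math.log(rh_percent / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

def rel_humidity(temp_c, dew_c):
    """Relative humidity (%) implied by temperature and dew point:
    ratio of saturation vapour pressures at dew point and air temp."""
    return 100.0 * math.exp(A * dew_c / (B + dew_c)
                            - A * temp_c / (B + temp_c))

dp = dew_point_c(20.0, 50.0)   # about 9.3 degC
# a +0.5 degC dew-point error at 20 degC, 50% RH shifts RH by ~1.7%,
# consistent with the "below +/-2% RH" figure in the abstract
```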

  9. Fermat's point from five perspectives

    Science.gov (United States)

    Park, Jungeun; Flores, Alfinio

    2015-04-01

The Fermat point of a triangle is the point that minimizes the sum of the distances from that point to the three vertices. Five approaches to studying the Fermat point of a triangle are presented in this article. First, students use a mechanical device of masses, strings and pulleys to study the Fermat point as the one that minimizes the potential energy of the system. Second, students use soap films between parallel planes connecting three pegs; the tension in the film is minimal when the sum of distances is minimal. Third, students use an empirical approach, measuring distances in an interactive GeoGebra page. Fourth, students use Euclidean geometry arguments for two proofs based on the Torricelli configuration, and one using Viviani's theorem. And fifth, the kinematic method is used to gain additional insight into the size of the angles between the segments joining the Fermat point with the vertices.
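A further, numerical perspective (not among the article's five) is the Weiszfeld iteration, which minimises the sum of distances directly:

```python
import numpy as np

def fermat_point(vertices, n_iter=200):
    """Weiszfeld fixed-point iteration for the point minimising the
    sum of Euclidean distances to the vertices (valid when no vertex
    angle is 120 degrees or more, so the minimiser is interior)."""
    pts = np.asarray(vertices, dtype=float)
    p = pts.mean(axis=0)                       # start at the centroid
    for _ in range(n_iter):
        d = np.linalg.norm(pts - p, axis=1)
        w = 1.0 / np.maximum(d, 1e-12)         # inverse-distance weights
        p = (w[:, None] * pts).sum(axis=0) / w.sum()
    return p

# for an equilateral triangle the Fermat point is the centroid
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, np.sqrt(3) / 2)]
fp = fermat_point(tri)
```

For a general triangle the iterate converges to the Torricelli configuration, with 120-degree angles between the segments to the vertices.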

  10. Response surface use in safety analyses

    International Nuclear Information System (INIS)

    Prosek, A.

    1999-01-01

When thousands of complex computer code runs related to nuclear safety are needed for statistical analysis, a response surface is used to replace the computer code. The main purpose of the study was to develop and demonstrate a tool called the optimal statistical estimator (OSE), intended for response surface generation of complex and non-linear phenomena. The performance of the optimal statistical estimator was tested against the results of 59 different RELAP5/MOD3.2 code calculations of the small-break loss-of-coolant accident in a two-loop pressurized water reactor. The results showed that the OSE adequately predicted the response surface for the peak cladding temperature. Some good characteristics of the OSE, such as monotonic behaviour between two neighbouring points and independence of the number of output parameters, suggest that the OSE can be used for response surface generation of any safety or system parameter in thermal-hydraulic safety analyses. (author)

  11. Null point of discrimination in crustacean polarisation vision.

    Science.gov (United States)

    How, Martin J; Christy, John; Roberts, Nicholas W; Marshall, N Justin

    2014-07-15

    The polarisation of light is used by many species of cephalopods and crustaceans to discriminate objects or to communicate. Most visual systems with this ability, such as that of the fiddler crab, include receptors with photopigments that are oriented horizontally and vertically relative to the outside world. Photoreceptors in such an orthogonal array are maximally sensitive to polarised light with the same fixed e-vector orientation. Using opponent neural connections, this two-channel system may produce a single value of polarisation contrast and, consequently, it may suffer from null points of discrimination. Stomatopod crustaceans use a different system for polarisation vision, comprising at least four types of polarisation-sensitive photoreceptor arranged at 0, 45, 90 and 135 deg relative to each other, in conjunction with extensive rotational eye movements. This anatomical arrangement should not suffer from equivalent null points of discrimination. To test whether these two systems were vulnerable to null points, we presented the fiddler crab Uca heteropleura and the stomatopod Haptosquilla trispinosa with polarised looming stimuli on a modified LCD monitor. The fiddler crab was less sensitive to differences in the degree of polarised light when the e-vector was at -45 deg than when the e-vector was horizontal. In comparison, stomatopods showed no difference in sensitivity between the two stimulus types. The results suggest that fiddler crabs suffer from a null point of sensitivity, while stomatopods do not. © 2014. Published by The Company of Biologists Ltd.
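The opponent-contrast argument above can be made explicit with a toy Malus-law receptor model (illustrative, not the paper's physiological model): for light of degree of polarisation d and e-vector angle theta, a horizontal/vertical opponent contrast equals d*cos(2*theta), which vanishes at theta = 45 deg regardless of d (the null point), while adding a 45/135 deg pair recovers d at every angle.

```python
import numpy as np

def receptor(theta, phi, d):
    """Response of a polarisation-sensitive receptor oriented at phi
    to light with degree of polarisation d and e-vector angle theta:
    the unpolarised fraction passes at 1/2, the polarised fraction
    follows Malus' law (angles in radians)."""
    return (1 - d) / 2 + d * np.cos(theta - phi) ** 2

def two_channel(theta, d):
    """Opponent contrast of a horizontal/vertical pair: d*cos(2*theta),
    so the signal has a null at theta = 45 deg."""
    h, v = receptor(theta, 0.0, d), receptor(theta, np.pi / 2, d)
    return (h - v) / (h + v)

def four_channel(theta, d):
    """Combine the 0/90 and 45/135 opponent pairs: the magnitude is d,
    independent of e-vector angle, so no null point remains."""
    c1 = two_channel(theta, d)
    a = receptor(theta, np.pi / 4, d)
    b = receptor(theta, 3 * np.pi / 4, d)
    return np.hypot(c1, (a - b) / (a + b))
```

This mirrors the behavioural result: the two-channel (fiddler crab-like) system loses discrimination at the oblique e-vector, while the four-channel (stomatopod-like) system does not.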

  12. Calibrated HDRI in 3D point clouds

    DEFF Research Database (Denmark)

    Bülow, Katja; Tamke, Martin

    2017-01-01

    3D-scanning technologies and point clouds as means for spatial representation introduce a new paradigm to the measuring and mapping of physical artefacts and space. This technology also offers possibilities for the measuring and mapping of outdoor urban lighting and has the potential to meet the challenges of dynamic smart lighting planning in outdoor urban space. This paper presents findings on how 3D capturing of outdoor environments combined with HDRI establishes a new way for analysing and representing the spatial distribution of light in combination with luminance data.
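
One way to combine the two data sources, sketched below under simplifying assumptions (not the authors' pipeline), is to project each scanned 3D point into a photometrically calibrated HDR image and attach the sampled luminance to the point. The pinhole intrinsics, the tiny luminance map, and the point coordinates are all invented for illustration.

```python
# Sketch: attach luminance values from a calibrated HDR image to a 3D
# point cloud via an idealised pinhole camera. All numbers are invented;
# a real pipeline would use the scanner registration and camera calibration.

def project(point, f=100.0, cx=2.0, cy=2.0):
    """Idealised pinhole projection of a camera-frame point to pixel coords."""
    x, y, z = point
    return f * x / z + cx, f * y / z + cy

def sample_luminance(hdr, u, v):
    """Nearest-pixel lookup in a calibrated luminance map (cd/m^2)."""
    col, row = int(round(u)), int(round(v))
    if 0 <= row < len(hdr) and 0 <= col < len(hdr[0]):
        return hdr[row][col]
    return None  # point falls outside the camera's field of view

# Hypothetical 4x4 luminance map (a bright street lamp in column 2)
hdr = [[5.0, 5.0, 80.0, 5.0] for _ in range(4)]
points = [(0.0, 0.0, 10.0), (0.5, 0.0, 50.0)]  # camera-frame coordinates

lum = [sample_luminance(hdr, *project(p)) for p in points]
print(lum)
```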

  13. Nuclear data covariances and sensitivity analysis, validation of a methodology based on the perturbation theory; application to an innovative concept: the molten thorium salt fueled reactor; Analyses de sensibilite et d'incertitude de donnees nucleaires. Contribution a la validation d'une methodologie utilisant la theorie des perturbations; application a un concept innovant: reacteur a sels fondus thorium a spectre epithermique

    Energy Technology Data Exchange (ETDEWEB)

    Bidaud, A

    2005-10-15

    Neutron transport simulation of nuclear reactors is based on knowledge of the neutron-nucleus interactions (cross-sections, fission neutron yields and spectra...) for the dozens of nuclei present in the core, over a very large energy range (fractions of an eV to several MeV). To achieve the goal of sustainable development of nuclear power, future reactors must satisfy new and stricter design constraints: optimization of ore resources will require breeding (generation of fissile material from fertile material), and waste management will require transmutation. Innovative reactors that could achieve such objectives - Generation IV systems or ADS (accelerator-driven systems) - are loaded with new fuels (thorium, heavy actinides) and operate with neutron spectra for which nuclear data do not benefit from 50 years of industrial experience, and thus present particular challenges. After validation on an experimental reactor using an international benchmark, we apply classical reactor physics tools together with available nuclear data uncertainties to calculate the sensitivities and uncertainties of the criticality and of the temperature coefficient of a thorium molten salt reactor. In addition, a study based on the reaction rates important for the calculation of the cycle's equilibrium allows us to estimate the efficiency of different reprocessing strategies, the contribution of these reaction rates to the uncertainty of breeding, and hence the uncertainty in the size of the reprocessing plant. Finally, we use this work to propose an improvement of the high-priority experimental request list. (author)
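
The perturbation-theory propagation of nuclear-data covariances to an integral response such as criticality is commonly written as the "sandwich rule", var(R) = S^T C S, with S a vector of relative sensitivities and C the relative covariance matrix of the data. The two-parameter sketch below illustrates the arithmetic; the sensitivity and covariance values are invented.

```python
# Sketch of the sandwich rule var(R) = S^T C S used in perturbation-theory
# uncertainty analysis. S holds relative sensitivities of a response R
# (e.g. k-eff) to nuclear data; C is their relative covariance matrix.
# The numbers below are invented for illustration.

def sandwich(S, C):
    """Return S^T C S for sensitivity vector S and covariance matrix C."""
    n = len(S)
    return sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))

# Hypothetical sensitivities of k-eff to two cross-sections, and their
# relative covariance matrix (variances on the diagonal)
S = [0.30, -0.10]
C = [[4.0e-4, 1.0e-4],
     [1.0e-4, 9.0e-4]]

rel_var = sandwich(S, C)
print(rel_var ** 0.5)  # relative standard deviation of k-eff
```

Off-diagonal covariance terms can either inflate or (as here, with sensitivities of opposite sign) partially cancel the diagonal contributions.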

  14. Research on Nonlinear Feature of Electrical Resistance of Acupuncture Points

    Directory of Open Access Journals (Sweden)

    Jianzi Wei

    2012-01-01

    A highly sensitive volt-ampere characteristics detecting system was applied to measure the volt-ampere curves of nine acupuncture points, LU9, HT7, LI4, PC6, ST36, SP6, KI3, LR3, and SP3, and of corresponding nonacupuncture points, bilaterally, in 42 healthy volunteers. The electric current intensity was increased from 0 μA to 20 μA and then returned to 0 μA. The results showed that the volt-ampere curves of acupuncture points had nonlinear and magnetic hysteresis-like properties. At all acupuncture point spots, the volt-ampere areas of the increasing phase were significantly larger than those of the decreasing phase (P<0.01). The volt-ampere areas of ten acupuncture point spots were significantly smaller than those of the corresponding nonacupuncture point spots while the intensity was increasing (P<0.05 to P<0.001). While the intensity was decreasing, eleven acupuncture point spots showed the same property (P<0.05 to P<0.001), whereas two acupuncture point spots showed the opposite behaviour, their areas being larger than those of the corresponding nonacupuncture point spots (P<0.05 to P<0.01). These results show that the phenomenon of low skin resistance does not hold for all acupuncture points.
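
The "volt-ampere area" compared above can be computed by integrating voltage over current along each branch of the sweep. The sketch below does this with trapezoidal integration on invented readings; the difference between the up- and down-branch areas quantifies the hysteresis-like loop.

```python
# Sketch of quantifying the hysteresis-like feature: the volt-ampere
# "area" of each sweep branch via trapezoidal integration of V dI.
# The sample readings are invented.

def branch_area(currents_uA, voltages):
    """Trapezoidal integral of V dI along one branch of the sweep."""
    area = 0.0
    for i in range(1, len(currents_uA)):
        dI = currents_uA[i] - currents_uA[i - 1]
        area += 0.5 * (voltages[i] + voltages[i - 1]) * dI
    return area

# Hypothetical sweep: 0 -> 20 uA (increasing) then 20 -> 0 uA (decreasing)
i_up = [0, 5, 10, 15, 20]
v_up = [0.0, 1.2, 2.1, 2.8, 3.3]
i_down = [20, 15, 10, 5, 0]
v_down = [3.3, 2.6, 1.8, 0.9, 0.0]

area_up = branch_area(i_up, v_up)
area_down = -branch_area(i_down, v_down)  # reverse sweep: flip the sign
print(area_up, area_down, area_up - area_down)  # loop area = hysteresis
```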

  15. NPDES permits and water analyses

    International Nuclear Information System (INIS)

    Pojasek, R.B.

    1975-01-01

    Provisions of the Federal Water Pollution Control Act, as amended by P. L. 92-500, including an explanation of the National Pollutant Discharge Elimination System (NPDES), and EPA's criteria for the analysis of pollutants are discussed. The need for a revision of current restrictive variance procedures is pointed out. References for the comparison of analytical methods for water pollutants under permits, including radioactive parameters, are tabulated. (U.S.)

  16. A miniaturised, nested-cylindrical electrostatic analyser geometry for dual electron and ion, multi-energy measurements

    Energy Technology Data Exchange (ETDEWEB)

    Bedington, Robert, E-mail: r.bedington@nus.edu.sg; Kataria, Dhiren; Smith, Alan

    2015-09-01

    The CATS (Cylindrical And Tiny Spectrometer) electrostatic optics geometry features multiple nested cylindrical analysers to simultaneously measure multiple energies of electrons and multiple energies of ions, in a configuration targeted at miniaturisation and MEMS fabrication. In the prototyped model, two configurations of cylindrical analyser were used, featuring terminating side-plates that caused particle trajectories to either converge (C type) or diverge (D type) in the axial direction. Simulations show how these different electrode configurations affect particle focussing and instrument parameters: the C type provides greater throughputs but the D type provides higher resolving powers. The simulations were additionally used to investigate unexpected plate-spacing variations in the as-built model, revealing that the k-factors are most sensitive to the width of the inter-electrode spacing at its narrowest point. - Highlights: • A new nested cylindrical miniaturised electrostatic analyser geometry is described. • “Converging” (C) and “diverging” (D) type channel properties are investigated. • C channels are shown to have greater throughputs and D channels greater resolving powers. • Plate factors are shown to be sensitive to the minimum inter-electrode spacing.
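
The sensitivity of the k-factor to the narrowest spacing can be illustrated with the textbook idealisation of a cylindrical deflection analyser (a sketch, not the CATS geometry): for plate radii r1 < r2, E/q = k*V with k = 1/(2*ln(r2/r1)), which for a small gap g = r2 - r1 behaves like r/(2g), so dk/dg scales as -1/g^2. The radii below are invented.

```python
# Sketch: the k-factor of an idealised cylindrical-plate analyser is most
# sensitive to the electrode gap where that gap is narrowest. Radii are
# invented and in arbitrary units.

import math

def k_factor(r1, r2):
    """k = E/(qV) for an idealised cylindrical deflection analyser."""
    return 1.0 / (2.0 * math.log(r2 / r1))

r1 = 10.0  # inner electrode radius (hypothetical)

# the same 0.1-unit spacing error, applied at a narrow gap and a wide gap
narrow = k_factor(r1, r1 + 0.9) - k_factor(r1, r1 + 1.0)
wide = k_factor(r1, r1 + 4.9) - k_factor(r1, r1 + 5.0)
print(narrow / wide)  # the narrow-gap error dominates the k-factor change
```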

  17. Workload analyse of assembling process

    Science.gov (United States)

    Ghenghea, L. D.

    2015-11-01

    The workload is the most important indicator for managers responsible for industrial technological processes, whether these are automated, mechanized, or simply manual; in each case, machines or workers are the focus of workload measurements. The paper deals with workload analyses of a largely manual assembly technology for a roller-bearing assembling process, carried out in a large company with integrated bearing manufacturing processes. In these analyses, the delay-sampling (work-sampling) technique was used to identify and divide all bearing assemblers' activities and to obtain information about how the workers' 480-minute working day is distributed among those activities. The study shows some ways to increase process productivity without supplementary investment, and also indicates that process automation could be the solution for achieving maximum productivity.
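
In work (delay) sampling, the share of random observations in which a worker is engaged in an activity estimates the share of the working day spent on it. The sketch below converts invented observation tallies into minutes of a 480-minute day; the activity names and counts are hypothetical.

```python
# Sketch of the delay/work-sampling estimate: observed activity shares
# scaled to the 480-minute working day. Tallies are invented.

DAY_MINUTES = 480

observations = {  # hypothetical counts from random observation tours
    "assembling": 132,
    "handling": 36,
    "waiting": 22,
    "other": 10,
}

total = sum(observations.values())
minutes = {act: DAY_MINUTES * n / total for act, n in observations.items()}
for act, m in sorted(minutes.items(), key=lambda kv: -kv[1]):
    print(f"{act:10s} {m:6.1f} min/day")
```

Non-productive categories such as "waiting" are the usual targets for productivity gains that need no extra investment.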

  18. Mitogenomic analyses from ancient DNA

    DEFF Research Database (Denmark)

    Paijmans, Johanna L. A.; Gilbert, Tom; Hofreiter, Michael

    2013-01-01

    The analysis of ancient DNA is playing an increasingly important role in conservation genetic, phylogenetic and population genetic analyses, as it allows extinct species to be incorporated into DNA sequence trees and adds time depth to population genetics studies. For many years, these types of DNA analyses (whether using modern or ancient DNA) were largely restricted to short fragments of the mitochondrial genome. However, due to many technological advances during the past decade, a growing number of studies have explored the power of complete mitochondrial genome sequences, which have yielded major progress with regard to both the phylogenetic positions of extinct species and the resolution of population genetics questions in both extinct and extant species.

  19. Effect of Antenna Pointing Errors on SAR Imaging Considering the Change of the Point Target Location

    Science.gov (United States)

    Zhang, Xin; Liu, Shijie; Yu, Haifeng; Tong, Xiaohua; Huang, Guoman

    2018-04-01

    In spaceborne spotlight SAR, the antenna is steered by the SAR system with a specific regularity, so shaking of the internal mechanism is inevitable. Moreover, the external environment also affects the stability of the SAR platform. Both cause jitter of the SAR platform attitude. Platform attitude instability introduces antenna pointing errors in both the azimuth and range directions, and influences the acquisition of the SAR raw data and the ultimate imaging quality. In this paper, the relations between the antenna pointing errors and the three-axis attitude errors are derived; the relations between spaceborne spotlight SAR imaging of a point target and the antenna pointing errors are then analysed based on paired-echo theory, while also accounting for the change of the azimuth antenna gain as the spotlight SAR platform moves ahead. The simulation experiments show that the effects of antenna pointing errors on spotlight SAR imaging depend on the target location; that is, pointing errors of the antenna beam most severely affect the part of the illuminated scene far from the scene centre in the azimuth direction.
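
Paired-echo theory predicts that a small sinusoidal phase error of amplitude a (radians) on the azimuth signal produces a pair of false responses at plus/minus the modulation frequency, with relative amplitude roughly a/2. The sketch below demonstrates this with a direct DFT of an invented phase-modulated signal; the sample count, modulation frequency and error amplitude are all hypothetical.

```python
# Sketch of the paired-echo effect: a sinusoidal phase error on the
# azimuth signal creates a pair of false targets at +/- the modulation
# frequency with relative amplitude ~a/2 (small-a approximation).
# All parameters are invented.

import cmath
import math

N, f_mod, a = 64, 8, 0.1  # samples, error cycles per aperture, radians

signal = [cmath.exp(1j * a * math.sin(2 * math.pi * f_mod * n / N))
          for n in range(N)]

def dft_mag(x, k):
    """Magnitude of DFT bin k, normalised by the number of samples."""
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
                   for n in range(len(x)))) / len(x)

main = dft_mag(signal, 0)       # the true (unshifted) response
echo = dft_mag(signal, f_mod)   # paired echo at the modulation frequency
print(main, echo, echo / main)  # echo/main is close to a/2 = 0.05
```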

  20. Search for the QCD critical point at SPS energies

    CERN Document Server

    Anticic, T.; Barna, D.; Bartke, J.; Betev, L.; Bialkowska, H.; Blume, C.; Boimska, B.; Botje, M.; Bracinik, J.; Buncic, P.; Cerny, V.; Christakoglou, P.; Chung, P.; Chvala, O.; Cramer, J.G.; Csato, P.; Dinkelaker, P.; Eckardt, V.; Fodor, Z.; Foka, P.; Friese, V.; Gal, J.; Gazdzicki, M.; Genchev, V.; Gladysz, E.; Grebieszkow, K.; Hegyi, S.; Hohne, C.; Kadija, K.; Karev, A.; Kikola, D.; Kolesnikov, V.I.; Kornas, E.; Korus, R.; Kowalski, M.; Kreps, M.; Laszlo, A.; Lacey, R.; van Leeuwen, M.; Levai, P.; Litov, L.; Lungwitz, B.; Makariev, M.; Malakhov, A.I.; Mateev, M.; Melkumov, G.L.; Mischke, A.; Mitrovski, M.; Mrowczynski, St.; Palla, G.; Panagiotou, A.D.; Petridis, A.; Peryt, W.; Pikna, M.; Pluta, J.; Prindle, D.; Puhlhofer, F.; Renfordt, R.; Roland, C.; Roland, G.; Rybczynski, M.; Rybicki, A.; Sandoval, A.; Schmitz, N.; Schuster, T.; Seyboth, P.; Sikler, F.; Sitar, B.; Skrzypczak, E.; Slodkowski, M.; Stefanek, G.; Stock, R.; Strabel, C.; Strobele, H.; Susa, T.; Szentpetery, I.; Sziklai, J.; Szuba, M.; Szymanski, P.; Trubnikov, V.; Utvic, M.; Varga, D.; Vassiliou, M.; Veres, G.I.; Vesztergombi, G.; Vranic, D.; Wlodarczyk, Z.; Wojtaszek-Szwarc, A.; Yoo, I.K.; Abgrall, N.; Aduszkiewicz, A.; Andrieu, B.; Anticic, T.; Antoniou, N.; Argyriades, J.; Asryan, A.G.; Blondel, A.; Blumer, J.; Boldizsar, L.; Bravar, A.; Brzychczyk, J.; Bubak, A.; Bunyatov, S.A.; Choi, K.-U.; Chung, P.; Cleymans, J.; Derkach, D.A.; Diakonos, F.; Dominik, W.; Dumarchez, J.; Engel, R.; Ereditato, A.; Feofilov, G.A.; Ferrero, A.; Gazdzicki, M.; Golubeva, M.; Grzeszczuk, A.; Guber, F.; Hasegawa, T.; Haungs, A.; Igolkin, S.; Ivanov, A.S.; Ivashkin, A.; Katrynska, N.; Kielczewska, D.; Kisiel, J.; Kobayashi, T.; Kolev, D.; Kolevatov, R.S.; Kondratiev, V.P.; Kowalski, S.; Kurepin, A.; Lacey, R.; Lyubushkin, V.V.; Majka, Z.; Marchionni, A.; Marcinek, A.; Maris, I.; Matveev, V.; Meregaglia, A.; Messina, M.; Mijakowski, P.; Montaruli, T.; Murphy, S.; Nakadaira, T.; Naumenko, P.A.; Nikolic, V.; 
Nishikawa, K.; Palczewski, T.; Planeta, R.; Popov, B.A.; Posiadala, M.; Przewlocki, P.; Rauch, W.; Ravonel, M.; Rohrich, D.; Rondio, E.; Rossi, B.; Roth, M.; Rubbia, A.; Sadovsky, A.; Sakashita, K.; Sekiguchi, T.; Seyboth, P.; Shibata, M.; Sissakian, A.N.; Sorin, A.S.; Staszel, P.; Stepaniak, J.; Strabel, C.; Stroebele, H.; Tada, M.; Taranenko, A.; Tsenov, R.; Ulrich, R.; Unger, M.; Vechernin, V.V.; Zipper, W.

    2009-01-01

    Lattice QCD calculations locate the QCD critical point at energies accessible at the CERN Super Proton Synchrotron (SPS). We present average transverse momentum and multiplicity fluctuations, as well as baryon and anti-baryon transverse mass spectra which are expected to be sensitive to effects of the critical point. The future CP search strategy of the NA61/SHINE experiment at the SPS is also discussed.
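
One standard multiplicity-fluctuation observable used in such critical-point searches is the scaled variance of the event-by-event multiplicity distribution, omega = Var(N)/<N>, which equals 1 for a Poisson source. The sketch below computes it on invented per-event multiplicities; it is a generic illustration, not the NA49/NA61 analysis.

```python
# Sketch of a multiplicity-fluctuation measure for critical-point
# searches: the scaled variance omega = Var(N)/<N> (omega = 1 for a
# Poisson source). Event multiplicities below are invented.

def scaled_variance(multiplicities):
    n = len(multiplicities)
    mean = sum(multiplicities) / n
    var = sum((m - mean) ** 2 for m in multiplicities) / n
    return var / mean

events = [98, 105, 92, 110, 101, 97, 103, 94]  # hypothetical per-event N
print(scaled_variance(events))
```

A non-monotonic dependence of such fluctuation measures on collision energy is one of the signatures expected near the critical point.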