WorldWideScience

Sample records for sequential uncertainty fitting

  1. Better together: reduced compliance after sequential versus simultaneous bilateral hearing aids fitting.

    Science.gov (United States)

    Lavie, Limor; Banai, Karen; Attias, Joseph; Karni, Avi

    2014-03-01

    The purpose of this study was to determine the effects of sequential versus simultaneous bilateral hearing aid fitting on patient compliance. Thirty-six older adults with hearing impairment participated in this study. Twelve were fitted with bilateral hearing aids simultaneously. The remaining participants were fitted sequentially: one hearing aid (to the left or to the right ear) was used initially; 1 month later, the other ear was also fitted with a hearing aid for bilateral use. Self-reports on usefulness and compliance were elicited after the first and second months of hearing aid use. In addition, the number of hours the hearing aids were used was extracted from the data loggings of each device. Simultaneous fitting resulted in high levels of compliance and consistent usage throughout the study period. Sequential fitting resulted in an abrupt reduction in compliance and hours of use once the second hearing aid was added, both in the clinical scoring and in the data loggings. Simultaneous fitting of bilateral hearing aids results in better compliance compared with sequential fitting. The addition of a second hearing aid after a relatively short period of monaural use may lead to inconsistent use of both hearing aids.

  2. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
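
    A minimal numerical sketch of the record's starting point, in Python: a straight-line regression of I-V points near short circuit, with the intercept giving Isc and the regression covariance giving its uncertainty. The data are synthetic, and ordinary least squares stands in for the objective Bayesian regression the abstract describes; window selection by model-discrepancy evidence is not reproduced.

    ```python
    # Sketch: estimate Isc and its uncertainty from a straight-line fit of
    # synthetic I-V points near short circuit (V = 0). Ordinary least squares
    # stands in for the objective Bayesian regression in the abstract.
    import numpy as np

    rng = np.random.default_rng(0)
    v = np.linspace(0.0, 0.05, 10)                        # voltages near short circuit [V]
    i_meas = 5.0 - 0.8 * v + rng.normal(0, 2e-3, v.size)  # noisy currents [A]

    # OLS for I = a + b*V; the intercept a is the Isc estimate
    X = np.column_stack([np.ones_like(v), v])
    beta, res, *_ = np.linalg.lstsq(X, i_meas, rcond=None)
    sigma2 = res[0] / (v.size - 2)                 # residual variance estimate
    cov = sigma2 * np.linalg.inv(X.T @ X)          # parameter covariance matrix

    isc, u_isc = beta[0], np.sqrt(cov[0, 0])
    print(f"Isc = {isc:.4f} A +/- {u_isc:.4f} A")  # standard uncertainty from the fit
    ```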

  3. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-09-28

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  4. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    International Nuclear Information System (INIS)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties follow a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than ∼20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
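
    A short Python sketch of the contrast the abstract draws: fitting low-count data by maximizing the Poisson log-likelihood versus minimizing a naive χ² with σ² ≈ counts. The Gaussian-peak-plus-background model and all numbers are illustrative, not the paper's diagnostic data.

    ```python
    # Sketch: maximum-likelihood fit of Poisson-distributed counts versus a
    # naive chi-square fit. Model (Gaussian peak on flat background) and data
    # are illustrative stand-ins for low-count spectroscopic measurements.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    x = np.linspace(-5, 5, 50)

    def model(p, x):
        amp, mu, sig, bkg = p
        return amp * np.exp(-0.5 * ((x - mu) / sig) ** 2) + bkg

    counts = rng.poisson(model((8.0, 0.0, 1.0, 2.0), x))   # low-count Poisson data

    def neg_log_like(p):
        lam = model(p, x)
        if np.any(lam <= 0):
            return np.inf
        # Poisson log-likelihood, dropping the data-only log-factorial term
        return -np.sum(counts * np.log(lam) - lam)

    def chi2(p):
        lam = model(p, x)
        # naive chi-square with sigma^2 ~ counts; breaks down at low counts
        return np.sum((counts - lam) ** 2 / np.maximum(counts, 1))

    p0 = (5.0, 0.5, 1.5, 1.0)
    print("ML fit:  ", minimize(neg_log_like, p0, method="Nelder-Mead").x)
    print("chi2 fit:", minimize(chi2, p0, method="Nelder-Mead").x)
    ```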

  5. The use of sequential indicator simulation to characterize geostatistical uncertainty

    International Nuclear Information System (INIS)

    Hansen, K.M.

    1992-10-01

    Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled, can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds

  6. Markov decision processes: a tool for sequential decision making under uncertainty.

    Science.gov (United States)

    Alagoz, Oguzhan; Hsu, Heather; Schaefer, Andrew J; Roberts, Mark S

    2010-01-01

    We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), powerful analytical tools for sequential decision making under uncertainty that have been widely applied in industrial and manufacturing settings but are underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature on MDPs applied to medical decisions.
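
    A compact value-iteration sketch in Python for a finite MDP of the kind the tutorial describes. The three-state transition matrices and rewards are invented toy values ("wait" versus "treat" is only a loose nod to the transplant-timing example).

    ```python
    # Sketch: value iteration for a toy finite MDP. States, actions, transition
    # probabilities and rewards are invented for illustration.
    import numpy as np

    gamma = 0.95
    # P[a, s, s'] transition probabilities; R[s, a] immediate rewards (toy values)
    P = np.array([
        [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]],  # action 0: "wait"
        [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]],  # action 1: "treat" -> absorbing
    ])
    R = np.array([[1.0, 0.5], [0.8, 0.6], [0.0, 0.0]])

    V = np.zeros(3)
    for _ in range(1000):
        Q = R + gamma * np.einsum("ast,t->sa", P, V)  # action values
        V_new = Q.max(axis=1)                         # Bellman optimality update
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new
    print("optimal values:", V, "optimal policy:", Q.argmax(axis=1))
    ```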

  7. Uncertainty assessment of PM2.5 contamination mapping using spatiotemporal sequential indicator simulations and multi-temporal monitoring data

    Science.gov (United States)

    Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang

    2016-04-01

    Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on a number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.

  8. Uncertainty evaluation for ordinary least-square fitting with arbitrary order polynomial in joule balance method

    International Nuclear Information System (INIS)

    You, Qiang; Xu, JinXin; Wang, Gang; Zhang, Zhonghua

    2016-01-01

    Ordinary least-squares fitting with polynomials is used in both the dynamic phase of the watt balance method and the weighing phase of the joule balance method, but little research has been conducted to evaluate the uncertainty of the fitted data in these electrical balance methods. In this paper, a matrix-calculation method for evaluating the uncertainty of the polynomial fitting data is derived, and the properties of this method are studied by simulation. Building on this, two further methods are derived. One is used to find the optimal fitting order for the watt or joule balance methods; its accuracy and the factors affecting it are examined through simulations. The other is used to evaluate the uncertainty of the integral of the fitted data for the joule balance, which is demonstrated with an experiment on the NIM-1 joule balance. (paper)

  9. Sequential Test Selection by Quantifying of the Reduction in Diagnostic Uncertainty for the Diagnosis of Proximal Caries

    Directory of Open Access Journals (Sweden)

    Umut Arslan

    2013-06-01

    Background: In order to determine the presence or absence of a certain disease, multiple diagnostic tests may be necessary, and the performance of these tests can be evaluated sequentially. Aims: The aim of the study is to determine the contribution of the test at each step to reducing diagnostic uncertainty when multiple tests are used sequentially for diagnosis. Study Design: Diagnostic accuracy study. Methods: Radiographs of seventy-three patients of the Department of Dento-Maxillofacial Radiology of Hacettepe University Faculty of Dentistry were assessed. Panoramic (PAN), full mouth intraoral (FM), and bitewing (BW) radiographs were used for the diagnosis of proximal caries in the maxillary and mandibular molar regions. The diagnostic performance of radiography was evaluated sequentially using the reduction in diagnostic uncertainty. Results: FM provided the maximum diagnostic information for ruling-in potential in the maxillary and mandibular molar regions in the first step. FM provided more diagnostic information than BW radiographs for ruling in caries in the mandibular region in the second step. In the mandibular region, BW radiographs provided more diagnostic information than FM for ruling out caries in the first step. Conclusion: The method presented in this study provides clinicians with a solution for deciding on the sequential selection of diagnostic tests for the correct diagnosis of the presence or absence of a certain disease.
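
    An illustrative Python sketch of sequential test evaluation in the spirit of this record: each test's (assumed) sensitivity and specificity updates the disease probability via Bayes' rule, and the drop in Shannon entropy quantifies the reduction in diagnostic uncertainty. The numbers and the entropy measure are assumptions for illustration, not the paper's exact method.

    ```python
    # Sketch: sequential diagnostic tests update the disease probability; the
    # entropy drop at each step is one way to quantify "reduction in diagnostic
    # uncertainty". Sensitivities/specificities below are invented.
    import math

    def entropy(p):
        return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    def update(prior, sens, spec, positive=True):
        lr = sens / (1 - spec) if positive else (1 - sens) / spec  # likelihood ratio
        odds = prior / (1 - prior) * lr
        return odds / (1 + odds)

    p = 0.30  # assumed pre-test probability of proximal caries
    for name, sens, spec in [("BW", 0.70, 0.95), ("FM", 0.75, 0.90)]:
        p_new = update(p, sens, spec, positive=True)
        print(f"{name}: P = {p_new:.3f}, uncertainty reduced by "
              f"{entropy(p) - entropy(p_new):.3f} bits")
        p = p_new
    ```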

  10. Does model fit decrease the uncertainty of the data in comparison with a general non-model least squares fit?

    International Nuclear Information System (INIS)

    Pronyaev, V.G.

    2003-01-01

    The information entropy is taken as a measure of knowledge about the object, and the reduced univariate variance as a common measure of uncertainty. Covariances in model versus non-model least-squares fits are discussed

  11. Improved profile fitting and quantification of uncertainty in experimental measurements of impurity transport coefficients using Gaussian process regression

    International Nuclear Information System (INIS)

    Chilenski, M.A.; Greenwald, M.; Howard, N.T.; White, A.E.; Rice, J.E.; Walk, J.R.; Marzouk, Y.

    2015-01-01

    The need to fit smooth temperature and density profiles to discrete observations is ubiquitous in plasma physics, but the prevailing techniques for this have many shortcomings that cast doubt on the statistical validity of the results. This issue is amplified in the context of validation of gyrokinetic transport models (Holland et al 2009 Phys. Plasmas 16 052301), where the strong sensitivity of the code outputs to input gradients means that inadequacies in the profile fitting technique can easily lead to an incorrect assessment of the degree of agreement with experimental measurements. In order to rectify the shortcomings of standard approaches to profile fitting, we have applied Gaussian process regression (GPR), a powerful non-parametric regression technique, to analyse an Alcator C-Mod L-mode discharge used for past gyrokinetic validation work (Howard et al 2012 Nucl. Fusion 52 063002). We show that the GPR techniques can reproduce the previous results while delivering more statistically rigorous fits and uncertainty estimates for both the value and the gradient of plasma profiles with an improved level of automation. We also discuss how the use of GPR can allow for dramatic increases in the rate of convergence of uncertainty propagation for any code that takes experimental profiles as inputs. The new GPR techniques for profile fitting and uncertainty propagation are quite useful and general, and we describe the steps to implementation in detail in this paper. These techniques have the potential to substantially improve the quality of uncertainty estimates on profile fits and the rate of convergence of uncertainty propagation, making them of great interest for wider use in fusion experiments and modelling efforts. (paper)
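
    A minimal Gaussian process regression sketch in Python using scikit-learn: fit noisy profile samples and recover both the fitted value and its standard deviation. The kernel, data, and noise level are illustrative; the paper's implementation (gradient uncertainties, kernel selection, automation) is considerably more elaborate.

    ```python
    # Sketch: GPR fit of a noisy "profile" with pointwise uncertainty. The
    # profile shape, kernel and noise level are illustrative assumptions.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(2)
    r = np.linspace(0, 1, 25)[:, None]                 # normalized radius
    T = np.exp(-3 * r.ravel() ** 2) + rng.normal(0, 0.02, r.shape[0])  # noisy profile

    kernel = 1.0 * RBF(length_scale=0.3) + WhiteKernel(noise_level=4e-4)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(r, T)

    r_fine = np.linspace(0, 1, 200)[:, None]
    T_fit, T_std = gpr.predict(r_fine, return_std=True)  # value and 1-sigma band
    print("fit at r ~ 0.5:", T_fit[100], "+/-", T_std[100])
    ```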

  12. Do (un)certainty appraisal tendencies reverse the influence of emotions on risk taking in sequential tasks?

    Science.gov (United States)

    Bagneux, Virginie; Bollon, Thierry; Dantzer, Cécile

    2012-01-01

    According to the Appraisal-Tendency Framework (Han, Lerner, & Keltner, 2007), certainty-associated emotions increase risk taking compared with uncertainty-associated emotions. To date, this general effect has only been shown in static judgement and decision-making paradigms; therefore, the present study tested the effect of certainty on risk taking in a sequential decision-making task. We hypothesised that the effect would be reversed due to the kind of processing involved, as certainty is considered to encourage heuristic processing that takes into account the emotional cues arising from previous decisions, whereas uncertainty leads to more systematic processing. One hundred and one female participants were induced to feel one of three emotions (film clips) before performing a decision-making task involving risk (Game of Dice Task; Brand et al., 2005). As expected, the angry and happy participants (certainty-associated emotions) were more likely than the fearful participants (uncertainty-associated emotion) to make safe decisions (vs. risky decisions).

  13. Using sequential indicator simulation to assess the uncertainty of delineating heavy-metal contaminated soils

    International Nuclear Information System (INIS)

    Juang, Kai-Wei; Chen, Yue-Shin; Lee, Dar-Yuan

    2004-01-01

    Mapping the spatial distribution of soil pollutants is essential for delineating contaminated areas. Currently, geostatistical interpolation, kriging, is increasingly used to estimate pollutant concentrations in soils. The kriging-based approach, indicator kriging (IK), may be used to model the uncertainty of mapping. However, a smoothing effect is usually produced when using kriging in pollutant mapping; the detailed spatial patterns of pollutants could, therefore, be lost. The local uncertainty of mapping pollutants derived by the IK technique is referred to as the conditional cumulative distribution function (ccdf) for one specific location (i.e. single-location uncertainty). The local uncertainty information obtained by IK is not sufficient, as the uncertainty of mapping at several locations simultaneously (i.e. multi-location or spatial uncertainty) is required to assess the reliability of the delineation of contaminated areas. The simulation approach, sequential indicator simulation (SIS), which has the ability to model not only single-location but also multi-location uncertainties, was used in this study to assess the uncertainty of the delineation of heavy-metal contaminated soils. To illustrate this, a data set of Cu concentrations in soil from Taiwan was used. The results show that contour maps of Cu concentrations generated by the SIS realizations captured all the spatial patterns of Cu concentrations without the smoothing effect found when using the kriging method. Based on the SIS realizations, the local uncertainty of Cu concentrations at a specific location x' refers to the probability of the Cu concentration z(x') being higher than the defined threshold level of contamination (z_c). This can be written as Prob_SIS[z(x') > z_c], representing the probability of contamination. The probability map of Prob_SIS[z(x') > z_c] can then be used for delineating contaminated areas. In addition, the multi-location uncertainty of an area A
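
    A small Python sketch of how simulation realizations are turned into the probability map described above: Prob_SIS[z(x') > z_c] is estimated as the fraction of realizations exceeding the threshold at each location, while multi-location uncertainty needs the joint behavior of the realizations. The lognormal "realizations" and the threshold are synthetic stand-ins for actual SIS output.

    ```python
    # Sketch: probability-of-contamination maps from an ensemble of equally
    # probable realizations. Synthetic lognormal fields stand in for SIS output.
    import numpy as np

    rng = np.random.default_rng(3)
    n_real, ny, nx = 200, 50, 50
    realizations = rng.lognormal(mean=3.0, sigma=0.5, size=(n_real, ny, nx))  # Cu [mg/kg]

    zc = 35.0                                    # contamination threshold (illustrative)
    prob_map = (realizations > zc).mean(axis=0)  # single-location Prob[z(x) > zc]

    # Multi-location (spatial) uncertainty: probability that ALL cells in a
    # block exceed the threshold simultaneously -- not recoverable from the
    # single-location map alone.
    block = realizations[:, 10:15, 10:15]
    joint_prob = np.all(block > zc, axis=(1, 2)).mean()
    print(prob_map.mean(), joint_prob)
    ```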

  14. Assessment of groundwater level estimation uncertainty using sequential Gaussian simulation and Bayesian bootstrapping

    Science.gov (United States)

    Varouchakis, Emmanouil; Hristopulos, Dionissios

    2015-04-01

    Space-time geostatistical approaches can improve the reliability of dynamic groundwater level models in areas with limited spatial and temporal data. Space-time residual Kriging (STRK) is a reliable method for spatiotemporal interpolation that can incorporate auxiliary information. The method usually leads to an underestimation of the prediction uncertainty. The uncertainty of spatiotemporal models is usually estimated by determining the space-time Kriging variance or by means of cross validation analysis. For de-trended data the former is not usually applied when complex spatiotemporal trend functions are assigned. A Bayesian approach based on the bootstrap idea and sequential Gaussian simulation are employed to determine the uncertainty of the spatiotemporal model (trend and covariance) parameters. These stochastic modelling approaches produce multiple realizations, rank the prediction results on the basis of specified criteria and capture the range of the uncertainty. The correlation of the spatiotemporal residuals is modeled using a non-separable space-time variogram based on the Spartan covariance family (Hristopulos and Elogne 2007, Varouchakis and Hristopulos 2013). We apply these simulation methods to investigate the uncertainty of groundwater level variations. The available dataset consists of bi-annual (dry and wet hydrological period) groundwater level measurements in 15 monitoring locations for the time period 1981 to 2010. The space-time trend function is approximated using a physical law that governs the groundwater flow in the aquifer in the presence of pumping. The main objective of this research is to compare the performance of two simulation methods for prediction uncertainty estimation. In addition, we investigate the performance of the Spartan spatiotemporal covariance function for spatiotemporal geostatistical analysis. Hristopulos, D.T. and Elogne, S.N. 2007. Analytic properties and covariance functions for a new class of generalized Gibbs

  15. Scenario-based fitted Q-iteration for adaptive control of water reservoir systems under uncertainty

    Science.gov (United States)

    Bertoni, Federica; Giuliani, Matteo; Castelletti, Andrea

    2017-04-01

    Over recent years, mathematical models have largely been used to support planning and management of water resources systems. Yet, the increasing uncertainties in their inputs - due to increased variability in the hydrological regimes - are a major challenge to the optimal operations of these systems. Such uncertainty, boosted by projected changing climate, violates the stationarity principle generally used for describing hydro-meteorological processes, which assumes time persisting statistical characteristics of a given variable as inferred by historical data. As this principle is unlikely to be valid in the future, the probability density function used for modeling stochastic disturbances (e.g., inflows) becomes an additional uncertain parameter of the problem, which can be described in a deterministic and set-membership based fashion. This study contributes a novel method for designing optimal, adaptive policies for controlling water reservoir systems under climate-related uncertainty. The proposed method, called scenario-based Fitted Q-Iteration (sFQI), extends the original Fitted Q-Iteration algorithm by enlarging the state space to include the space of the uncertain system's parameters (i.e., the uncertain climate scenarios). As a result, sFQI embeds the set-membership uncertainty of the future inflow scenarios in the action-value function and is able to approximate, with a single learning process, the optimal control policy associated to any scenario included in the uncertainty set. The method is demonstrated on a synthetic water system, consisting of a regulated lake operated for ensuring reliable water supply to downstream users. Numerical results show that the sFQI algorithm successfully identifies adaptive solutions to operate the system under different inflow scenarios, which outperform the control policy designed under historical conditions. Moreover, the sFQI policy generalizes over inflow scenarios not directly experienced during the policy design

  16. Sensitivity Analysis in Sequential Decision Models.

    Science.gov (United States)

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
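
    A toy Python sketch of the probabilistic multivariate idea: sample the uncertain model parameters jointly, re-solve the MDP for each draw, and report how often the base-case optimal policy remains optimal. The two-state MDP and the reward perturbation are invented for illustration.

    ```python
    # Sketch: probabilistic sensitivity analysis for an MDP -- resample
    # uncertain rewards, re-solve, and estimate confidence in the base-case
    # policy. All numbers are toy values.
    import numpy as np

    def solve_mdp(P, R, gamma=0.95, iters=500):
        V = np.zeros(R.shape[0])
        for _ in range(iters):
            Q = R + gamma * np.einsum("ast,t->sa", P, V)
            V = Q.max(axis=1)
        return Q.argmax(axis=1)

    rng = np.random.default_rng(7)
    R_base = np.array([[1.0, 0.4], [0.2, 0.6]])
    P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                  [[0.5, 0.5], [0.1, 0.9]]])
    base_policy = solve_mdp(P, R_base)

    n_draws, matches = 1000, 0
    for _ in range(n_draws):
        R_s = R_base + rng.normal(0, 0.1, R_base.shape)  # joint reward uncertainty
        matches += np.array_equal(solve_mdp(P, R_s), base_policy)
    print(f"base-case policy optimal in {matches / n_draws:.1%} of parameter draws")
    ```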

  17. Anatomy of the Higgs fits: A first guide to statistical treatments of the theoretical uncertainties

    Directory of Open Access Journals (Sweden)

    Sylvain Fichet

    2016-04-01

    The studies of the Higgs boson couplings based on the recent and upcoming LHC data open up a new window on physics beyond the Standard Model. In this paper, we propose a statistical guide to the consistent treatment of the theoretical uncertainties entering the Higgs rate fits. Both the Bayesian and frequentist approaches are systematically analysed in a unified formalism. We present analytical expressions for the marginal likelihoods, useful to implement simultaneously the experimental and theoretical uncertainties. We review the various origins of the theoretical errors (QCD, EFT, PDF, production mode contamination…). All these individual uncertainties are thoroughly combined with the help of moment-based considerations. The theoretical correlations among Higgs detection channels appear to affect the location and size of the best-fit regions in the space of Higgs couplings. We discuss the recurrent question of the shape of the prior distributions for the individual theoretical errors and find that a nearly Gaussian prior arises from the error combinations. We also develop the bias approach, which is an alternative to marginalisation providing more conservative results. The statistical framework to apply the bias principle is introduced and two realisations of the bias are proposed. Finally, depending on the statistical treatment, the Standard Model prediction for the Higgs signal strengths is found to lie within either the 68% or 95% confidence level region obtained from the latest analyses of the 7 and 8 TeV LHC datasets.

  18. Anatomy of the Higgs fits: A first guide to statistical treatments of the theoretical uncertainties

    Science.gov (United States)

    Fichet, Sylvain; Moreau, Grégory

    2016-04-01

    The studies of the Higgs boson couplings based on the recent and upcoming LHC data open up a new window on physics beyond the Standard Model. In this paper, we propose a statistical guide to the consistent treatment of the theoretical uncertainties entering the Higgs rate fits. Both the Bayesian and frequentist approaches are systematically analysed in a unified formalism. We present analytical expressions for the marginal likelihoods, useful to implement simultaneously the experimental and theoretical uncertainties. We review the various origins of the theoretical errors (QCD, EFT, PDF, production mode contamination…). All these individual uncertainties are thoroughly combined with the help of moment-based considerations. The theoretical correlations among Higgs detection channels appear to affect the location and size of the best-fit regions in the space of Higgs couplings. We discuss the recurrent question of the shape of the prior distributions for the individual theoretical errors and find that a nearly Gaussian prior arises from the error combinations. We also develop the bias approach, which is an alternative to marginalisation providing more conservative results. The statistical framework to apply the bias principle is introduced and two realisations of the bias are proposed. Finally, depending on the statistical treatment, the Standard Model prediction for the Higgs signal strengths is found to lie within either the 68% or 95% confidence level region obtained from the latest analyses of the 7 and 8 TeV LHC datasets.

  19. The impact of uncertainty on optimal emission policies

    Science.gov (United States)

    Botta, Nicola; Jansson, Patrik; Ionescu, Cezar

    2018-05-01

    We apply a computational framework for specifying and solving sequential decision problems to study the impact of three kinds of uncertainties on optimal emission policies in a stylized sequential emission problem. We find that uncertainties about the implementability of decisions on emission reductions (or increases) have a greater impact on optimal policies than uncertainties about the availability of effective emission reduction technologies and uncertainties about the implications of trespassing critical cumulated emission thresholds. The results show that uncertainties about the implementability of decisions on emission reductions (or increases) call for more precautionary policies. In other words, delaying emission reductions to the point in time when effective technologies will become available is suboptimal when these uncertainties are accounted for rigorously. By contrast, uncertainties about the implications of exceeding critical cumulated emission thresholds tend to make early emission reductions less rewarding.

  20. Sequential fitting-and-separating reflectance components for analytical bidirectional reflectance distribution function estimation.

    Science.gov (United States)

    Lee, Yu; Yu, Chanki; Lee, Sang Wook

    2018-01-10

    We present a sequential fitting-and-separating algorithm for surface reflectance components that separates individual dominant reflectance components and simultaneously estimates the corresponding bidirectional reflectance distribution function (BRDF) parameters from the separated reflectance values. We tackle the estimation of a Lafortune BRDF model, which combines a non-Lambertian diffuse reflection and multiple specular reflectance components, each with a different specular lobe. Our proposed method infers the appropriate number of BRDF lobes and their parameters by separating and estimating each of the reflectance components, using an interval analysis-based branch-and-bound method in conjunction with iterative K-ordered scale estimation. The focus of this paper is the estimation of the Lafortune BRDF model; nevertheless, our proposed method can be applied to other analytical BRDF models such as the Cook-Torrance and Ward models. Experiments were carried out to validate the proposed method using isotropic materials from the Mitsubishi Electric Research Laboratories-Massachusetts Institute of Technology (MERL-MIT) BRDF database, and the results show that our method is superior to a conventional minimization algorithm.

  1. A Data-Driven Method for Selecting Optimal Models Based on Graphical Visualisation of Differences in Sequentially Fitted ROC Model Parameters

    Directory of Open Access Journals (Sweden)

    K S Mwitondi

    2013-05-01

    Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models in the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden Index. Moving differences between sequential fitted parameters are then extracted, and their respective probability density estimations are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.
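
    A brief Python sketch of the ROC/Youden building block the algorithm starts from: train a decision tree, compute the ROC curve, and locate the threshold maximizing the Youden index J = TPR - FPR. The dataset is synthetic; the paper's sequential parameter differences and density-based visualisation are not reproduced.

    ```python
    # Sketch: decision-tree classifier, ROC curve, and Youden-optimal threshold
    # on synthetic data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import roc_curve

    X, y = make_classification(n_samples=500, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

    scores = clf.predict_proba(X_te)[:, 1]
    fpr, tpr, thresholds = roc_curve(y_te, scores)
    j = tpr - fpr                                 # Youden index at each threshold
    best = j.argmax()
    print(f"Youden J = {j[best]:.3f} at threshold {thresholds[best]:.3f}")
    ```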

  2. A sequential factorial analysis approach to characterize the effects of uncertainties for supporting air quality management

    Science.gov (United States)

    Wang, S.; Huang, G. H.; Veawab, A.

    2013-03-01

    This study proposes a sequential factorial analysis (SFA) approach for supporting regional air quality management under uncertainty. SFA is capable not only of examining the interactive effects of input parameters, but also of analyzing the effects of constraints. When there are too many factors involved in practical applications, SFA has the advantage of conducting a sequence of factorial analyses for characterizing the effects of factors in a systematic manner. The factor-screening strategy employed in SFA is effective in greatly reducing the computational effort. The proposed SFA approach is applied to a regional air quality management problem for demonstrating its applicability. The results indicate that the effects of factors are evaluated quantitatively, which can help decision makers identify the key factors that have significant influence on system performance and explore the valuable information that may be veiled beneath their interrelationships.

  3. Adaptive decision making in a dynamic environment: a test of a sequential sampling model of relative judgment.

    Science.gov (United States)

    Vuckovic, Anita; Kwantes, Peter J; Neal, Andrew

    2013-09-01

    Research has identified a wide range of factors that influence performance in relative judgment tasks. However, the findings from this research have been inconsistent. Studies have varied with respect to the identification of causal variables and the perceptual and decision-making mechanisms underlying performance. Drawing on the ecological rationality approach, we present a theory of the judgment and decision-making processes involved in a relative judgment task that explains how people judge a stimulus and adapt their decision process to accommodate their own uncertainty associated with those judgments. Undergraduate participants performed a simulated air traffic control conflict detection task. Across two experiments, we systematically manipulated variables known to affect performance. In the first experiment, we manipulated the relative distances of aircraft to a common destination while holding aircraft speeds constant. In a follow-up experiment, we introduced a direct manipulation of relative speed. We then fit a sequential sampling model to the data, and used the best fitting parameters to infer the decision-making processes responsible for performance. Findings were consistent with the theory that people adapt to their own uncertainty by adjusting their criterion and the amount of time they take to collect evidence in order to make a more accurate decision. From a practical perspective, the paper demonstrates that one can use a sequential sampling model to understand performance in a dynamic environment, allowing one to make sense of and interpret complex patterns of empirical findings that would otherwise be difficult to interpret using standard statistical analyses. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  4. Recovering stellar population parameters via two full-spectrum fitting algorithms in the absence of model uncertainties

    Science.gov (United States)

    Ge, Junqiang; Yan, Renbin; Cappellari, Michele; Mao, Shude; Li, Hongyu; Lu, Youjun

    2018-05-01

    Using mock spectra based on the Vazdekis/MILES library and fitting within the wavelength region 3600-7350 Å, we analyze the bias and scatter in the resulting physical parameters induced by the choice of fitting algorithm and by observational uncertainties, while avoiding the effects of model uncertainties. We consider two full-spectrum fitting codes, pPXF and STARLIGHT, in fitting for stellar population age, metallicity, mass-to-light ratio, and dust extinction. With pPXF we find that both the bias μ in the population parameters and the scatter σ in the recovered logarithmic values follow the expected trend μ ∝ σ ∝ 1/(S/N). The bias increases for younger ages and systematically makes recovered ages older, M*/Lr larger and metallicities lower than the true values. For reference, at S/N = 30, and for the worst case (t = 10^8 yr), the bias is 0.06 dex in M*/Lr and 0.03 dex in both age and [M/H]. There is no significant dependence on either E(B-V) or the shape of the error spectrum. Moreover, the results are consistent for both our 1-SSP and 2-SSP tests. With the STARLIGHT algorithm, we find trends similar to pPXF, but, depending on the input E(B-V) values, with significantly underestimated dust extinction and [M/H], and larger ages and M*/Lr. Results degrade when moving from our 1-SSP to the 2-SSP tests. The STARLIGHT convergence to the true values can be improved by increasing the number of Markov chains and annealing loops in the "slow mode". For the same input spectrum, pPXF is about two orders of magnitude faster than STARLIGHT's "default mode" and about three orders of magnitude faster than STARLIGHT's "slow mode".

  5. Sequential planning of flood protection infrastructure under limited historic flood record and climate change uncertainty

    Science.gov (United States)

    Dittes, Beatrice; Špačková, Olga; Straub, Daniel

    2017-04-01

    Flood protection is often designed to safeguard people and property following regulations and standards, which specify a target design flood protection level, such as the 100-year flood level prescribed in Germany (DWA, 2011). In practice, the magnitude of such an event is only known within a range of uncertainty, which is caused by limited historic records and uncertain climate change impacts, among other factors (Hall & Solomatine, 2008). As more observations and improved climate projections become available in the future, the design flood estimate changes and the capacity of the flood protection may be deemed insufficient at a future point in time. This problem can be mitigated by the implementation of flexible flood protection systems (that can easily be adjusted in the future) and/or by adding an additional reserve to the flood protection, i.e. by applying a safety factor to the design. But how high should such a safety factor be? And how much should the decision maker be willing to pay to make the system flexible, i.e. what is the Value of Flexibility (Špačková & Straub, 2017)? We propose a decision model that identifies cost-optimal decisions on flood protection capacity in the face of uncertainty (Dittes et al. 2017). It considers sequential adjustments of the protection system during its lifetime, taking into account its flexibility. The proposed framework is based on pre-posterior Bayesian decision analysis, using Decision Trees and Markov Decision Processes, and is fully quantitative. It can include a wide range of uncertainty components such as uncertainty associated with limited historic record or uncertain climate or socio-economic change. It is shown that since flexible systems are less costly to adjust when flood estimates are changing, they justify initially lower safety factors. Investigation on the Value of Flexibility (VoF) demonstrates that VoF depends on the type and degree of uncertainty, on the learning effect (i.e. kind and quality of

  6. Optimisation of beryllium-7 gamma analysis following BCR sequential extraction

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, A. [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Blake, W.H., E-mail: wblake@plymouth.ac.uk [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Keith-Roach, M.J. [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Kemakta Konsult, Stockholm (Sweden)

    2012-03-30

    Graphical abstract: Shows the decrease in analytical uncertainty using the optimal (combined preconcentrated sample extract) method; nv (no value) where extract activities were too low to quantify. Highlights: ► Sequential extraction with natural 7Be returns high analytical uncertainties. ► Preconcentrating extracts from a large sample mass improved analytical uncertainty. ► This optimised method can be readily employed in studies using low-activity samples. - Abstract: The application of cosmogenic 7Be as a sediment tracer at the catchment scale requires an understanding of its geochemical associations in soil to underpin the assumption of irreversible adsorption. Sequential extractions offer a readily accessible means of determining the associations of 7Be with operationally defined soil phases. However, the subdivision of the low activity concentrations of fallout 7Be in soils into geochemical fractions can introduce high gamma counting uncertainties. Extending analysis time significantly is not always an option for batches of samples, owing to the on-going decay of 7Be (t1/2 = 53.3 days). Here, three different methods of preparing and quantifying 7Be extracted using the optimised BCR three-step scheme have been evaluated and compared, with a focus on reducing analytical uncertainties. The optimal method involved carrying out the BCR extraction in triplicate, sub-sampling each set of triplicates for stable Be analysis before combining each set and coprecipitating the 7Be with metal oxyhydroxides to produce a thin source for gamma analysis. This method was applied to BCR extractions of natural 7Be in four agricultural soils. The approach gave good counting statistics from a 24 h analysis period (~10% (2σ) where extract activity >40% of total activity).

  7. Optimisation of beryllium-7 gamma analysis following BCR sequential extraction

    International Nuclear Information System (INIS)

    Taylor, A.; Blake, W.H.; Keith-Roach, M.J.

    2012-01-01

    Graphical abstract: Shows the decrease in analytical uncertainty using the optimal (combined preconcentrated sample extract) method; nv (no value) where extract activities were too low to quantify. Highlights: ► An understanding of 7Be geochemical behaviour is required to support tracer studies. ► Sequential extraction with natural 7Be returns high analytical uncertainties. ► Preconcentrating extracts from a large sample mass improved analytical uncertainty. ► This optimised method can be readily employed in studies using low-activity samples. - Abstract: The application of cosmogenic 7Be as a sediment tracer at the catchment scale requires an understanding of its geochemical associations in soil to underpin the assumption of irreversible adsorption. Sequential extractions offer a readily accessible means of determining the associations of 7Be with operationally defined soil phases. However, the subdivision of the low activity concentrations of fallout 7Be in soils into geochemical fractions can introduce high gamma counting uncertainties. Extending analysis time significantly is not always an option for batches of samples, owing to the on-going decay of 7Be (t1/2 = 53.3 days). Here, three different methods of preparing and quantifying 7Be extracted using the optimised BCR three-step scheme have been evaluated and compared, with a focus on reducing analytical uncertainties. The optimal method involved carrying out the BCR extraction in triplicate, sub-sampling each set of triplicates for stable Be analysis before combining each set and coprecipitating the 7Be with metal oxyhydroxides to produce a thin source for gamma analysis. This method was applied to BCR extractions of natural 7Be in four agricultural soils. The approach gave good counting statistics from a 24 h analysis period (~10% (2σ) where extract activity >40% of total activity) and generated statistically useful sequential extraction profiles. Total recoveries of 7Be fell between 84 and 112%. The stable Be data demonstrated that the

  8. Uncertainty analysis of hydrological modeling in a tropical area using different algorithms

    Science.gov (United States)

    Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh

    2018-01-01

    Hydrological modeling outputs are subject to uncertainty resulting from different sources of error (e.g., errors in input data, model structure, and model parameters), making quantification of uncertainty in hydrological modeling imperative in order to improve the reliability of modeling results. Uncertainty analysis must overcome the difficulties in calibrating hydrological models, which increase further in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in the center of Vietnam. The sensitivity of parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted based on four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), the Parameter Solution method (ParaSol) and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, the coefficient of determination (R²), the Nash-Sutcliffe coefficient of efficiency (NSE) and Percent Bias (PBIAS). The results showed the high performance of SUFI and PSO, with P-factor > 0.83, R-factor < 0.91, NSE > 0.89, and PBIAS near 0.18. Uncertainty analysis must be accounted for when the outcomes of the model are used for policy or management decisions.
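
    A minimal GLUE-style sketch in Python (GLUE being one of the four algorithms compared): sample a parameter, score each run by NSE, keep "behavioral" sets above a cutoff, and compute the P-factor as the fraction of observations inside the resulting bounds. The one-parameter recession model is a toy stand-in for SWAT.

    ```python
    # Sketch: Generalized Likelihood Uncertainty Estimation (GLUE) on a toy
    # one-parameter model. Data, model and thresholds are illustrative.
    import numpy as np

    rng = np.random.default_rng(4)
    t = np.arange(100)
    obs = np.exp(-t / 20.0) + rng.normal(0, 0.02, t.size)  # synthetic "observed" series

    def model(k):
        return np.exp(-t / k)                    # one-parameter toy recession model

    def nse(sim, obs):
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    ks = rng.uniform(5.0, 60.0, 2000)            # Monte Carlo parameter sample
    scores = np.array([nse(model(k), obs) for k in ks])
    behavioral = scores > 0.7                    # behavioral threshold (user's choice)

    # 95% prediction bounds from the behavioral ensemble (GLUE usually weights
    # these quantiles by likelihood; plain quantiles keep the sketch short)
    sims = np.array([model(k) for k in ks[behavioral]])
    lo, hi = np.quantile(sims, [0.025, 0.975], axis=0)
    p_factor = np.mean((obs >= lo) & (obs <= hi))  # fraction of obs bracketed
    print(f"{behavioral.sum()} behavioral sets, P-factor = {p_factor:.2f}")
    ```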

  9. Resolving uncertainty in chemical speciation determinations

    Science.gov (United States)

    Smith, D. Scott; Adams, Nicholas W. H.; Kramer, James R.

    1999-10-01

    Speciation determinations involve uncertainty in system definition and experimentation. Identification of appropriate metals and ligands from basic chemical principles, analytical window considerations, types of species, and checking for consistency in equilibrium calculations are considered under system definition uncertainty. A systematic approach to system definition limits uncertainty in speciation investigations. Experimental uncertainty is discussed with an example of proton interactions with Suwannee River fulvic acid (SRFA). A Monte Carlo approach was used to estimate uncertainty in the experimental data, resulting from the propagation of uncertainties in electrode calibration parameters and experimental data points. Monte Carlo simulations revealed large uncertainties at high (>9-10) and low pH. The data were fit with discrete monoprotic ligands: least-squares fit the data with 21 sites, whereas linear programming fit the data equally well with 9 sites. Multiresponse fitting, involving simultaneous fluorescence and pH measurements, improved model discrimination. Deconvolution of the excitation versus emission fluorescence surface for SRFA establishes a minimum of five sites. Diprotic sites are also required for the five fluorescent sites, and one non-fluorescent monoprotic site was added to accommodate the pH data. Consistent with its greater complexity, the multiresponse method had broader confidence limits than the uniresponse methods, but corresponded better with the accepted total carboxylic content for SRFA. Overall there was a 40% standard deviation in total carboxylic content for the multiresponse fitting, versus 10% and 1% for least-squares and linear programming, respectively.

  10. Evaluating prediction uncertainty

    International Nuclear Information System (INIS)

    McKay, M.D.

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
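
    A small Python sketch of the sampling idea: draw inputs with a Latin hypercube and form a crude variance-ratio importance indicator by comparing total prediction variance with the average conditional variance when one input is binned. The two-input model is a toy; the paper's replicated LHS and validation step are not reproduced.

    ```python
    # Sketch: Latin hypercube sampling plus a crude variance-ratio importance
    # indicator. The model function and bin count are illustrative choices.
    import numpy as np
    from scipy.stats import qmc

    def model(x):                                 # toy model: input 0 dominates
        return 3.0 * x[:, 0] + 0.5 * x[:, 1] ** 2

    sampler = qmc.LatinHypercube(d=2, seed=0)
    x = sampler.random(n=4096)
    y = model(x)
    var_total = y.var()

    # conditional variance: bin on input j, average the within-bin variances;
    # 1 - E[Var(y | x_j)] / Var(y) is a first-order importance indicator
    for j in range(2):
        bins = np.digitize(x[:, j], np.linspace(0, 1, 17)[1:-1])
        var_cond = np.mean([y[bins == b].var() for b in range(16)])
        print(f"input {j}: importance ~ {1 - var_cond / var_total:.2f}")
    ```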

  11. Measurement Uncertainty

    Science.gov (United States)

    Koch, Michael

    Measurement uncertainty is one of the key issues in quality assurance. It has become increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for deciding whether a measurement result is fit for purpose. It also helps in deciding whether a specification limit is exceeded. Estimating measurement uncertainty is often not trivial; several strategies have been developed for this purpose and are briefly described in this chapter. In addition, the different possibilities for taking uncertainty into account in compliance assessment are explained.

  12. dftools: Distribution function fitting

    Science.gov (United States)

    Obreschkow, Danail

    2018-05-01

    dftools, written in R, finds the most likely P parameters of a D-dimensional distribution function (DF) generating N objects, where each object is specified by D observables with measurement uncertainties. For instance, if the objects are galaxies, it can fit a mass function (D=1), a mass-size distribution (D=2) or the mass-spin-morphology distribution (D=3). Unlike most common fitting approaches, this method accurately accounts for measurement uncertainties and complex selection functions.

  13. In pursuit of a fit-for-purpose uncertainty guide

    Science.gov (United States)

    White, D. R.

    2016-08-01

    Measurement uncertainty is a measure of the quality of a measurement; it enables users of measurements to manage the risks and costs associated with decisions influenced by measurements, and it supports metrological traceability by quantifying the proximity of measurement results to true SI values. The Guide to the Expression of Uncertainty in Measurement (GUM) ensures uncertainty statements meet these purposes and encourages the world-wide harmony of measurement uncertainty practice. Although the GUM is an extraordinarily successful document, it has flaws, and a revision has been proposed. Like the already-published supplements to the GUM, the proposed revision employs objective Bayesian statistics instead of frequentist statistics. This paper argues that the move away from a frequentist treatment of measurement error to a Bayesian treatment of states of knowledge is misguided. The move entails changes in measurement philosophy, a change in the meaning of probability, and a change in the object of uncertainty analysis, all leading to different numerical results, increased costs, increased confusion, a loss of trust, and, most significantly, a loss of harmony with current practice. Recommendations are given for a revision in harmony with the current GUM and allowing all forms of statistical inference.

  14. Asynchronous Operators of Sequential Logic Venjunction & Sequention

    CERN Document Server

    Vasyukevich, Vadim

    2011-01-01

    This book is dedicated to new mathematical instruments for the logical modeling of the memory of digital devices: the logic-dynamical operation named venjunction and the venjunctive function, as well as the sequention and the sequentional function. Venjunction and sequention operate within the framework of sequential logic. In the form of the corresponding equations, they fit organically into the analytical expressions of Boolean algebra. Thus a sort of symbiosis is formed, using elements of asynchronous sequential logic on the one hand and combinational logic on the other hand. So, asynchronous

  15. Uncertainty Propagation in OMFIT

    Science.gov (United States)

    Smith, Sterling; Meneghini, Orso; Sung, Choongki

    2017-10-01

    A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
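
    A minimal Python sketch of the propagation pattern described here, using the real python uncertainties package: fit a modified-tanh-like profile, convert the fit covariance into correlated parameters, and propagate to a derived quantity. The profile function and data are illustrative assumptions, not OMFIT's actual fitting code.

    ```python
    # Sketch: covariant uncertainty propagation with the "uncertainties"
    # package. The tanh-like profile form and synthetic data are assumptions.
    import numpy as np
    from scipy.optimize import curve_fit
    from uncertainties import correlated_values, umath

    def tanh_profile(x, height, width, center):
        return 0.5 * height * (1 - np.tanh((x - center) / width))

    rng = np.random.default_rng(5)
    x = np.linspace(0.8, 1.1, 40)
    y = tanh_profile(x, 1.0, 0.05, 0.98) + rng.normal(0, 0.02, x.size)

    popt, pcov = curve_fit(tanh_profile, x, y, p0=[1.0, 0.05, 1.0])
    height, width, center = correlated_values(popt, pcov)  # carries covariances

    # propagate the correlated parameters to the profile value at one location
    x0 = 0.98
    value = 0.5 * height * (1 - umath.tanh((x0 - center) / width))
    print(value)  # prints nominal value +/- propagated standard deviation
    ```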

  16. Competitive Capacity Investment under Uncertainty

    NARCIS (Netherlands)

    X. Li (Xishu); R.A. Zuidwijk (Rob); M.B.M. de Koster (René); R. Dekker (Rommert)

    2016-01-01

    We consider a long-term capacity investment problem in a competitive market under demand uncertainty. Two firms move sequentially in the competition, and a firm's capacity decision interacts with the other firm's current and future capacity. Throughout the investment race, a firm can

  17. Sequential bayes estimation algorithm with cubic splines on uniform meshes

    International Nuclear Information System (INIS)

    Hossfeld, F.; Mika, K.; Plesser-Walk, E.

    1975-11-01

    After outlining the principles of some recent developments in parameter estimation, a sequential numerical algorithm for generalized curve-fitting applications is presented, combining results from statistical estimation concepts and spline analysis. Due to its recursive nature, the algorithm can be used most efficiently in online experimentation. Using computer-simulated and experimental data, the efficiency and flexibility of this sequential estimation procedure are extensively demonstrated. (orig.) [de]

  18. Getting CSR communication fit

    DEFF Research Database (Denmark)

    Schmeltz, Line

    2017-01-01

    Companies experience increasing legal and societal pressure to communicate about their corporate social responsibility (CSR) engagements to a number of different publics. One very important group is that of young consumers, who are predicted to be the most important and influential consumer group in the near future. From a value-theoretical base, this article empirically explores the role and applicability of 'fit' in strategic CSR communication targeted at young consumers. The point of departure is the well-known strategic fit (a logical link between a company's CSR commitment and its core values), which is further developed by introducing two additional fits, the CSR-Consumer fit and the CSR-Consumer-Company fit (Triple Fit). Through a sequential design, the three fits are empirically tested and their potential for meeting young consumers' expectations for corporate CSR messaging

  19. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. The uncertainty could be due to variations in the geometry of the part or in material properties, or due to a lack of knowledge about the phenomena being modeled. Deterministic design optimization does not take uncertainty into account, and worst-case assumptions lead to vastly over-conservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions in the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure of optimization and iterative probabilistic assessment, which results in high computational demand. This demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the Sequential Optimization and Reliability Assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment; the two are decoupled in each cycle, which leads to quick improvement of the design from one cycle to the next and increases computational efficiency. This paper demonstrates the effectiveness of the SORA method when applied to designing a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive Finite Element simulations

  20. Justification for recommended uncertainties

    International Nuclear Information System (INIS)

    Pronyaev, V.G.; Badikov, S.A.; Carlson, A.D.

    2007-01-01

    The uncertainties obtained in an earlier standards evaluation were considered to be unrealistically low by experts of the US Cross Section Evaluation Working Group (CSEWG). Therefore, the CSEWG Standards Subcommittee replaced the covariance matrices of evaluated uncertainties by expanded percentage errors that were assigned to the data over wide energy groups. There are a number of reasons that might lead to low uncertainties of the evaluated data: underestimation of the correlations existing between the results of different measurements; the presence of unrecognized systematic uncertainties in the experimental data, which can lead to biases in the evaluated data as well as to underestimation of the resulting uncertainties; and the fact that uncertainties for correlated data cannot be characterized only by percentage uncertainties or variances. Covariances between the evaluated value at 0.2 MeV and other points, obtained in model (RAC R-matrix and PADE2 analytical expansion) and non-model (GMA) fits of the ⁶Li(n,t) TEST1 data, and the correlation coefficients are presented. Covariances between the evaluated value at 0.045 MeV and other points (along the line or column of the matrix), as obtained in EDA and RAC R-matrix fits of the data available for reactions that pass through the formation of the ⁷Li system, are discussed. The GMA fit with the GMA database is shown for comparison. The following diagrams are discussed: percentage uncertainties of the evaluated cross section for the ⁶Li(n,t) reaction and for the ²³⁵U(n,f) reaction; the estimation given by CSEWG experts; the GMA result with the full GMA database, including experimental data for the ⁶Li(n,t), ⁶Li(n,n) and ⁶Li(n,total) reactions; uncertainties in the GMA combined fit for the standards; and EDA and RAC R-matrix results, respectively. Uncertainties of absolute and ²⁵²Cf fission spectrum averaged cross section measurements, and deviations between measured and evaluated values for ²³⁵U(n,f) cross sections in the neutron energy range 1

  1. Sequential neural models with stochastic layers

    DEFF Research Database (Denmark)

    Fraccaro, Marco; Sønderby, Søren Kaae; Paquet, Ulrich

    2016-01-01

    How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks? This paper introduces stochastic recurrent neural networks which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural… generative model. The clear separation of deterministic and stochastic layers allows a structured variational inference network to track the factorization of the model's posterior distribution. By retaining both the nonlinear recursive structure of a recurrent neural network and averaging over…

  2. Predictive uncertainty in auditory sequence processing

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.; Pearce, Marcus T

    2014-01-01

    in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models...

  3. Uncertainty Quantification in High Throughput Screening ...

    Science.gov (United States)

    Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
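
    As an illustration of the bootstrap idea on a single curve (synthetic data and a generic Hill-type model, not ToxCast's actual fitting pipeline), resampling the residuals and refitting yields confidence intervals on the potency and efficacy parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50):
    # simple Hill-type concentration-response model (hypothetical form)
    return top * conc / (ac50 + conc)

rng = np.random.default_rng(1)
conc = np.logspace(-2, 2, 8)
resp = hill(conc, top=80.0, ac50=1.0) + rng.normal(scale=5.0, size=conc.size)

popt, _ = curve_fit(hill, conc, resp, p0=[100.0, 1.0])
resid = resp - hill(conc, *popt)

boot = []
for _ in range(1000):
    sample = hill(conc, *popt) + rng.choice(resid, size=resid.size)
    try:
        p, _ = curve_fit(hill, conc, sample, p0=popt, maxfev=2000)
        boot.append(p)
    except RuntimeError:        # occasional non-convergence on noisy resamples
        continue
boot = np.array(boot)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print(f"top 95% CI: [{lo[0]:.1f}, {hi[0]:.1f}], AC50 95% CI: [{lo[1]:.2f}, {hi[1]:.2f}]")
```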

  4. How to Read the Tractatus Sequentially

    Directory of Open Access Journals (Sweden)

    Tim Kraft

    2016-11-01

    One of the unconventional features of Wittgenstein’s Tractatus Logico-Philosophicus is its use of an elaborate and detailed numbering system. Recently, Bazzocchi, Hacker and Kuusela have argued that the numbering system means that the Tractatus must be read and interpreted not as a sequentially ordered book, but as a text with a two-dimensional, tree-like structure. Apart from being able to explain how the Tractatus was composed, the tree reading allegedly solves exegetical issues both on the local level (e.g. how 4.02 fits into the series of remarks surrounding it) and the global level (e.g. the relation between ontology and picture theory, solipsism and the eye analogy, resolute and irresolute readings). This paper defends the sequential reading against the tree reading. After presenting the challenges generated by the numbering system and the two accounts as attempts to solve them, it is argued that Wittgenstein’s own explanation of the numbering system, anaphoric references within the Tractatus, and the exegetical issues mentioned above do not favour the tree reading, but a version of the sequential reading. This reading maintains that the remarks of the Tractatus form a sequential chain: the role of the numbers is to indicate how remarks on different levels are interconnected to form a concise, surveyable and unified whole.

  5. Sensitivity and Uncertainty Analysis for Streamflow Prediction Using Different Objective Functions and Optimization Algorithms: San Joaquin California

    Science.gov (United States)

    Paul, M.; Negahban-Azar, M.

    2017-12-01

    Hydrologic models usually need to be calibrated against observed streamflow at the outlet of a particular drainage area. However, a large number of model parameters must be fitted because field measurements of them are unavailable, which makes calibration difficult when there are many potentially uncertain parameters. This becomes even more challenging for a large watershed with multiple land uses and varied geophysical characteristics. Sensitivity analysis (SA) can be used as a tool to identify the most sensitive parameters affecting calibrated model performance. By incorporating sensitive parameters in streamflow simulation, the effect of a suitable algorithm on improving model performance can be demonstrated with Soil and Water Assessment Tool (SWAT) modeling. In this study, SWAT was applied to the San Joaquin Watershed in California, covering 19,704 km², to calibrate daily streamflow. Severe water stress has recently been escalating in this watershed due to intensified climate variability, prolonged drought, and groundwater depletion for agricultural irrigation. It is therefore important to perform a proper uncertainty analysis, given the uncertainties inherent in hydrologic modeling, to predict the spatial and temporal variation of the hydrologic processes and to evaluate the impacts of different hydrologic variables. The purpose of this study was to evaluate the sensitivity and uncertainty of the calibrated parameters for predicting streamflow. Three different optimization algorithms (Sequential Uncertainty Fitting, SUFI-2; Generalized Likelihood Uncertainty Estimation, GLUE; and Parameter Solution, ParaSol) were used with four different objective functions (coefficient of determination

  6. Uncertainty Estimate in Resources Assessment: A Geostatistical Contribution

    International Nuclear Information System (INIS)

    Souza, Luis Eduardo de; Costa, Joao Felipe C. L.; Koppe, Jair C.

    2004-01-01

    For many decades the mining industry regarded resource/reserve estimation and classification as a mere calculation requiring basic mathematical and geological knowledge. Most methods were based on geometrical procedures and spatial data distribution. Therefore, uncertainty associated with tonnages and grades was either ignored or mishandled, although various mining codes require a measure of confidence in the values reported. Traditional methods fail to report the level of confidence in the quantities and grades. Conversely, kriging is known to provide the best estimate and its associated variance. Among kriging methods, ordinary kriging (OK) is probably the most widely used for mineral resource/reserve estimation, mainly because of its robustness and the ease of uncertainty assessment via the kriging variance. It is also known that the OK variance is unable to recognize local data variability, an important issue when heterogeneous mineral deposits with higher- and poorer-grade zones are being evaluated. Alternatively, stochastic simulation is used to build local or global uncertainty about a geological attribute while respecting its statistical moments. This study investigates methods capable of incorporating uncertainty into the estimates of resources and reserves via OK and sequential Gaussian and sequential indicator simulation. The results showed that, for the type of mineralization studied, all methods classified the tonnages similarly. The methods are illustrated using an exploration drill hole data set from a large Brazilian coal deposit.
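
    The sequential Gaussian simulation mentioned above can be illustrated in one dimension. This is a bare-bones sketch under simplifying assumptions (standard-normal variable, unit-sill exponential covariance, simple kriging with zero mean), not the study's implementation:

```python
import numpy as np

rng = np.random.default_rng(5)
xs = np.arange(50, dtype=float)                 # grid nodes to simulate
cov = lambda h: np.exp(-np.abs(h) / 10.0)       # unit-sill exponential model

vals = np.full(xs.size, np.nan)
for i in rng.permutation(xs.size):              # visit nodes in random order
    known = ~np.isnan(vals)
    if not known.any():
        vals[i] = rng.normal()                  # first node: unconditional draw
        continue
    xk, zk = xs[known], vals[known]
    K = cov(xk[:, None] - xk[None, :])          # covariances among known nodes
    k = cov(xk - xs[i])                         # covariances to the new node
    w = np.linalg.solve(K, k)                   # simple-kriging weights
    mean, var = w @ zk, 1.0 - w @ k             # conditional mean and variance
    vals[i] = rng.normal(mean, np.sqrt(max(var, 1e-9)))
print(vals[:5])
```

    Each realization honours the covariance model, and an ensemble of realizations gives the local uncertainty that the kriging variance alone cannot.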

  7. A-Track: Detecting Moving Objects in FITS images

    Science.gov (United States)

    Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.

    2017-04-01

    A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.

  8. A Bayesian Optimal Design for Sequential Accelerated Degradation Testing

    Directory of Open Access Journals (Sweden)

    Xiaoyang Li

    2017-07-01

    When optimizing an accelerated degradation testing (ADT) plan, the initial values of unknown model parameters must be pre-specified. However, it is usually difficult to obtain the exact values, since many uncertainties are embedded in these parameters. Bayesian ADT optimal design has been presented to address this problem by using prior distributions to capture these uncertainties. Nevertheless, when the difference between a prior distribution and the actual situation is large, the existing Bayesian optimal design might cause over-testing or under-testing issues: the implemented ADT following the optimal plan consumes too many testing resources, or few accelerated degradation data are obtained during the ADT. To overcome these obstacles, a Bayesian sequential step-down-stress ADT design is proposed in this article. During the sequential ADT, the test under the highest stress level is conducted first, based on the initial prior information, to quickly generate degradation data. Then, the data collected under higher stress levels are employed to construct the prior distributions for the test design under lower stress levels by using Bayesian inference. In the optimization, the inverse Gaussian (IG) process is assumed to describe the degradation paths, and Bayesian D-optimality is selected as the objective. A case study on an electrical connector’s ADT plan illustrates the application of the proposed method. Compared with the results from a typical static Bayesian ADT plan, the proposed design guarantees more stable and precise estimations of different reliability measures.

  9. Resolving overlapping peaks in ARXPS data: The effect of noise and fitting method

    International Nuclear Information System (INIS)

    Muñoz-Flores, Jaime; Herrera-Gomez, Alberto

    2012-01-01

    Highlights: ► Noise is an important factor affecting the fitting of overlapping peaks in XPS data. ► The combined information in ARXPS data can be used to improve fitting reliability. ► The error on the estimation of the peak parameters depends on the peak-fitting method. ► The simultaneous fitting method is much more robust against noise than sequential fitting. ► The estimation of the error range is better done with ARXPS data than with XPS data. - Abstract: Peak-fitting of X-ray photoelectron spectroscopy (XPS) data can be very sensitive to noise when the difference in binding energy among the peaks is smaller than the width of the peaks. This sensitivity depends on the fitting algorithm. Angle-resolved XPS (ARXPS) analysis offers the opportunity of employing the combined information contained in the data at the various angles to reduce the sensitivity to noise. The assumption of shared peak parameters (center and width) among the spectra for the different angles, and how it is introduced into the analysis, plays a basic role. Sequential fitting is the usual practice in ARXPS data peak-fitting. It consists of first estimating the center and width of the peaks from the data acquired at one of the angles, and then using those parameters as a starting approximation for fitting the data for each of the remaining angles. An improvement of this method consists of averaging the centers and widths of the peaks obtained at the different angles, and then employing these values to assess the areas of the peaks for each angle. Another strategy for using the combined information is to assess the peak parameters from the sum of the experimental data. The complete use of the combined information contained in the data set is achieved by the simultaneous fitting method, which assesses the center and width of the peaks by fitting the data at all the angles simultaneously. Computer-generated data were employed to compare the sensitivity with respect
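
    The contrast between sequential and simultaneous fitting is easy to demonstrate on synthetic data. In the sketch below (a toy Gaussian doublet, not real XPS line shapes), the two peak centers and widths are shared across all angles while the areas are free per angle, which is the essence of the simultaneous method:

```python
import numpy as np
from scipy.optimize import least_squares

def gauss(x, area, c, w):
    return area * np.exp(-0.5 * ((x - c) / w) ** 2) / (w * np.sqrt(2 * np.pi))

x = np.linspace(0, 10, 200)
rng = np.random.default_rng(2)
areas = [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0)]    # per-angle peak areas
spectra = [gauss(x, a1, 4.0, 0.8) + gauss(x, a2, 5.0, 0.8)
           + rng.normal(scale=0.02, size=x.size) for a1, a2 in areas]

def residuals(p):
    c1, c2, w1, w2 = p[:4]                      # shared centers and widths
    res = []
    for i, spec in enumerate(spectra):
        a1, a2 = p[4 + 2 * i], p[5 + 2 * i]     # areas free for each angle
        res.append(spec - gauss(x, a1, c1, w1) - gauss(x, a2, c2, w2))
    return np.concatenate(res)

p0 = np.array([3.8, 5.2, 1.0, 1.0] + [1.0] * (2 * len(spectra)))
fit = least_squares(residuals, p0)
print("centers:", fit.x[:2], "widths:", fit.x[2:4])
```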

  10. A new three-dimensional track fit with multiple scattering

    International Nuclear Information System (INIS)

    Berger, Niklaus; Kozlinskiy, Alexandr; Kiehn, Moritz; Schöning, André

    2017-01-01

    Modern semiconductor detectors allow for charged particle tracking with ever increasing position resolution. Due to the reduction of the spatial hit uncertainties, multiple Coulomb scattering in the detector layers becomes the dominant source for tracking uncertainties. In this case long distance effects can be ignored for the momentum measurement, and the track fit can consequently be formulated as a sum of independent fits to hit triplets. In this paper we present an analytical solution for a three-dimensional triplet(s) fit in a homogeneous magnetic field based on a multiple scattering model. Track fitting of hit triplets is performed using a linearization ansatz. The momentum resolution is discussed for a typical spectrometer setup. Furthermore the track fit is compared with other track fits for two different pixel detector geometries, namely the Mu3e experiment at PSI and a typical high-energy collider experiment. For a large momentum range the triplets fit provides a significantly better performance than a single helix fit. The triplets fit is fast and can easily be parallelized, which makes it ideal for the implementation on parallel computing architectures.

  12. Weighted curve-fitting program for the HP 67/97 calculator

    International Nuclear Information System (INIS)

    Stockli, M.P.

    1983-01-01

    The HP 67/97 calculator provides in its standard equipment a curve-fit program for linear, logarithmic, exponential and power functions that is quite useful and popular. However, in more sophisticated applications, proper weights for data are often essential. For this purpose a program package was created which is very similar to the standard curve-fit program but which includes the weights of the data for proper statistical analysis. This allows accurate calculation of the uncertainties of the fitted curve parameters as well as the uncertainties of interpolations or extrapolations, or optionally the uncertainties can be normalized with chi-square. The program is very versatile and allows one to perform quite difficult data analysis in a convenient way with the pocket calculator HP 67/97
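
    The statistics behind such a program are compact enough to state directly. Below is a minimal numpy analogue illustrating the idea, not the HP 67/97 package itself: the weights enter the normal equations, and the parameter uncertainties are read off the inverse of the weighted normal matrix.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
sigma = np.array([0.1, 0.1, 0.3, 0.1, 0.2])    # per-point uncertainties

w = 1.0 / sigma**2                             # weights from uncertainties
A = np.vstack([np.ones_like(x), x]).T          # design matrix for y = a + b*x
cov = np.linalg.inv(A.T @ (w[:, None] * A))    # parameter covariance matrix
a, b = cov @ (A.T @ (w * y))                   # weighted least-squares solution
da, db = np.sqrt(np.diag(cov))                 # parameter uncertainties
# optional chi-square normalization, as the abstract mentions:
# chi2 = np.sum(w * (y - A @ np.array([a, b]))**2); cov *= chi2 / (len(x) - 2)
print(f"y = ({a:.3f} ± {da:.3f}) + ({b:.3f} ± {db:.3f})·x")
```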

  13. Reducing uncertainty based on model fitness: Application to a ...

    African Journals Online (AJOL)

    A weakness of global sensitivity and uncertainty analysis methodologies is the often subjective definition of prior parameter probability distributions, especially ... The reservoir representing the central part of the wetland, where flood waters separate into several independent distributaries, is a keystone area within the model.

  14. The neural network approach to parton fitting

    International Nuclear Information System (INIS)

    Rojo, Joan; Latorre, Jose I.; Del Debbio, Luigi; Forte, Stefano; Piccione, Andrea

    2005-01-01

    We introduce the neural network approach to global fits of parton distribution functions. First we review previous work on unbiased parametrizations of deep-inelastic structure functions with faithful estimation of their uncertainties, and then we summarize the current status of neural network parton distribution fits

  15. An analysis of the uncertainty in temperature and density estimates from fitting model spectra to data. 1998 summer research program for high school juniors at the University of Rochester's Laboratory for Laser Energetics. Student research reports

    International Nuclear Information System (INIS)

    Schubmehl, M.

    1999-03-01

    Temperature and density histories of direct-drive laser fusion implosions are important to an understanding of the reaction's progress. Such measurements also document phenomena such as preheating of the core and improper compression that can interfere with the thermonuclear reaction. Model x-ray spectra from the non-LTE (local thermodynamic equilibrium) radiation transport post-processor for LILAC have recently been fitted to OMEGA data. The spectrum fitting code reads in a grid of model spectra and uses an iterative weighted least-squares algorithm to perform a fit to experimental data, based on user-input parameter estimates. The purpose of this research was to upgrade the fitting code to compute formal uncertainties on fitted quantities, and to provide temperature and density estimates with error bars. A standard error-analysis process was modified to compute these formal uncertainties from information about the random measurement error in the data. Preliminary tests of the code indicate that the variances it returns are both reasonable and useful

  16. Defining distinct negative beliefs about uncertainty: validating the factor structure of the Intolerance of Uncertainty Scale.

    Science.gov (United States)

    Sexton, Kathryn A; Dugas, Michel J

    2009-06-01

    This study examined the factor structure of the English version of the Intolerance of Uncertainty Scale (IUS; French version: M. H. Freeston, J. Rhéaume, H. Letarte, M. J. Dugas, & R. Ladouceur, 1994; English version: K. Buhr & M. J. Dugas, 2002) using a substantially larger sample than has been used in previous studies. Nonclinical undergraduate students and adults from the community (M age = 23.74 years, SD = 6.36; 73.0% female and 27.0% male) who participated in 16 studies in the Anxiety Disorders Laboratory at Concordia University in Montreal, Canada were randomly assigned to 2 datasets. Exploratory factor analysis with the 1st sample (n = 1,230) identified 2 factors: the beliefs that "uncertainty has negative behavioral and self-referent implications" and that "uncertainty is unfair and spoils everything." This 2-factor structure provided a good fit to the data (Bentler-Bonett normed fit index = .96, comparative fit index = .97, standardized root-mean residual = .05, root-mean-square error of approximation = .07) upon confirmatory factor analysis with the 2nd sample (n = 1,221). Both factors showed similarly high correlations with pathological worry, and Factor 1 showed stronger correlations with generalized anxiety disorder analogue status, trait anxiety, somatic anxiety, and depressive symptomatology.

  17. Representing uncertainty in objective functions: extension to include the influence of serial correlation

    Science.gov (United States)

    Croke, B. F.

    2008-12-01

    The role of performance indicators is to give an accurate indication of the fit between a model and the system being modelled. As all measurements have an associated uncertainty (determining the significance that should be given to the measurement), performance indicators should take into account uncertainties in the observed quantities being modelled as well as in the model predictions (due to uncertainties in inputs, model parameters and model structure). In the presence of significant uncertainty in the observed and modelled output of a system, failure to adequately account for variations in the uncertainties means that the objective function only gives a measure of how well the model fits the observations, not how well the model fits the system being modelled. Since in most cases the interest lies in fitting the system response, it is vital that the objective function(s) be designed to account for these uncertainties. Most objective functions (e.g. those based on the sum of squared residuals) assume homoscedastic uncertainties. If the model contribution to the variations in residuals can be ignored, then transformations (e.g. Box-Cox) can be used to remove (or at least significantly reduce) heteroscedasticity. An alternative that is more generally applicable is to explicitly represent the uncertainties in the observed and modelled values in the objective function. Previous work on this topic addressed the modifications to standard objective functions (Nash-Sutcliffe efficiency, RMSE, chi-squared, coefficient of determination) using the optimal weighted averaging approach. This paper extends that work by addressing the issue of serial correlation. A form for an objective function that includes serial correlation will be presented, and the impact on model fit discussed.
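
    The heteroscedastic part of this idea, scaling each residual by the combined variance of the observed and modelled values, can be written down directly. The sketch below is a generic illustration, not the author's exact formulation, and it omits the serial-correlation term that the abstract announces:

```python
import numpy as np

def weighted_nse(obs, sim, var_obs, var_sim):
    """Nash-Sutcliffe-style efficiency with each residual scaled by the
    combined observation and prediction variance, so poorly known points
    contribute less to the objective."""
    var = var_obs + var_sim
    num = np.sum((obs - sim) ** 2 / var)
    mean = np.average(obs, weights=1.0 / var)
    den = np.sum((obs - mean) ** 2 / var)
    return 1.0 - num / den

obs = np.array([5.0, 7.0, 12.0, 30.0, 8.0])       # observed flows
sim = np.array([6.0, 6.5, 11.0, 26.0, 9.0])       # modelled flows
var_obs = (0.10 * obs) ** 2                       # e.g. 10% rating-curve error
var_sim = np.full_like(obs, 1.0)                  # model output variance
print(weighted_nse(obs, sim, var_obs, var_sim))
```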

  18. Uncertainty quantification using evidence theory in multidisciplinary design optimization

    International Nuclear Information System (INIS)

    Agarwal, Harish; Renaud, John E.; Preston, Evan L.; Padmanabhan, Dhanesh

    2004-01-01

    Advances in computational performance have led to the development of large-scale simulation tools for design. Systems generated using such simulation tools can fail in service if the uncertainty of the simulation tool's performance predictions is not accounted for. In this research an investigation of how uncertainty can be quantified in multidisciplinary systems analysis subject to epistemic uncertainty associated with the disciplinary design tools and input parameters is undertaken. Evidence theory is used to quantify uncertainty in terms of the uncertain measures of belief and plausibility. To illustrate the methodology, multidisciplinary analysis problems are introduced as an extension to the epistemic uncertainty challenge problems identified by Sandia National Laboratories. After uncertainty has been characterized mathematically the designer seeks the optimum design under uncertainty. The measures of uncertainty provided by evidence theory are discontinuous functions. Such non-smooth functions cannot be used in traditional gradient-based optimizers because the sensitivities of the uncertain measures are not properly defined. In this research surrogate models are used to represent the uncertain measures as continuous functions. A sequential approximate optimization approach is used to drive the optimization process. The methodology is illustrated in application to multidisciplinary example problems

  19. A tool for efficient, model-independent management optimization under uncertainty

    Science.gov (United States)

    White, Jeremy; Fienen, Michael N.; Barlow, Paul M.; Welter, Dave E.

    2018-01-01

    To fill a need for risk-based environmental management optimization, we have developed PESTPP-OPT, a model-independent tool for resource management optimization under uncertainty. PESTPP-OPT solves a sequential linear programming (SLP) problem and also implements (optional) efficient, “on-the-fly” (without user intervention) first-order, second-moment (FOSM) uncertainty techniques to estimate model-derived constraint uncertainty. Combined with a user-specified risk value, the constraint uncertainty estimates are used to form chance-constraints for the SLP solution process, so that any optimal solution includes contributions from model input and observation uncertainty. In this way, a “single answer” that includes uncertainty is yielded from the modeling analysis. PESTPP-OPT uses the familiar PEST/PEST++ model interface protocols, which makes it widely applicable to many modeling analyses. The use of PESTPP-OPT is demonstrated with a synthetic, integrated surface-water/groundwater model. The function and implications of chance constraints for this synthetic model are discussed.
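
    The chance-constraint mechanism can be shown with a toy linear program (schematic numbers, not PESTPP-OPT's interface or files): a FOSM-style standard deviation on the model-simulated constraint tightens its bound by z·σ, so the optimum holds at the requested risk level.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import linprog

risk = 0.95                          # required probability of compliance
z = norm.ppf(risk)

c = np.array([-1.0, -1.0])           # maximize total pumping = minimize -sum(q)
A_ub = np.array([[0.4, 0.6]])        # drawdown response to the two wells
b_nominal = np.array([10.0])         # allowable drawdown
sigma_con = np.array([1.5])          # FOSM std. dev. of simulated drawdown

b_chance = b_nominal - z * sigma_con # tightened, risk-aware bound
res = linprog(c, A_ub=A_ub, b_ub=b_chance, bounds=[(0, 20)] * 2)
print("nominal bound:", b_nominal[0], "chance bound:", b_chance[0])
print("optimal pumping rates:", res.x)
```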

  20. Sequential multi-nuclide emission rate estimation method based on gamma dose rate measurement for nuclear emergency management

    International Nuclear Information System (INIS)

    Zhang, Xiaole; Raskob, Wolfgang; Landman, Claudia; Trybushnyi, Dmytro; Li, Yu

    2017-01-01

    Highlights: • Sequentially reconstruct multi-nuclide emission using gamma dose rate measurements. • Incorporate a priori ratio of nuclides into the background error covariance matrix. • Sequentially augment and update the estimation and the background error covariance. • Suppress the generation of negative estimations for the sequential method. • Evaluate the new method with twin experiments based on the JRODOS system. - Abstract: In case of a nuclear accident, the source term is typically not known but extremely important for the assessment of the consequences to the affected population. Therefore the assessment of the potential source term is of uppermost importance for emergency response. A fully sequential method, derived from a regularized weighted least square problem, is proposed to reconstruct the emission and composition of a multiple-nuclide release using gamma dose rate measurement. The a priori nuclide ratios are incorporated into the background error covariance (BEC) matrix, which is dynamically augmented and sequentially updated. The negative estimations in the mathematical algorithm are suppressed by utilizing artificial zero-observations (with large uncertainties) to simultaneously update the state vector and BEC. The method is evaluated by twin experiments based on the JRodos system. The results indicate that the new method successfully reconstructs the emission and its uncertainties. Accurate a priori ratio accelerates the analysis process, which obtains satisfactory results with only limited number of measurements, otherwise it needs more measurements to generate reasonable estimations. The suppression of negative estimation effectively improves the performance, especially for the situation with poor a priori information, where it is more prone to the generation of negative values.

  2. Mining of high utility-probability sequential patterns from uncertain databases.

    Directory of Open Access Journals (Sweden)

    Binbin Zhang

    High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations, such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor, as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as can the execution time for mining HUPSPs. Substantial experiments on both real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds.

  3. Optimization of FRAP uncertainty analysis option

    International Nuclear Information System (INIS)

    Peck, S.O.

    1979-10-01

    The automated uncertainty analysis option that has been incorporated in the FRAP codes (FRAP-T5 and FRAPCON-2) provides the user with a means of obtaining uncertainty bands on code-predicted variables at user-selected times during a fuel pin analysis. These uncertainty bands are obtained by multiple single fuel pin analyses to generate data which can then be analyzed by second-order statistical error propagation techniques. In this process, a considerable amount of data is generated and stored on tape. The user has certain choices to make regarding which independent variables are to be used in the analysis and what order of error propagation equation should be used in modeling the output response. To aid the user in these decisions, a computer program, ANALYZ, has been written and added to the uncertainty analysis option package. A variety of considerations involved in fitting response surface equations, and certain pitfalls of which the user should be aware, are discussed. An equation is derived expressing a residual as a function of a fitted model and an assumed true model. A variety of experimental design choices are discussed, including the advantages and disadvantages of each approach. Finally, a description of the subcodes which constitute program ANALYZ is provided.

  4. Sequential Sampling Plan of Anthonomus grandis (Coleoptera: Curculionidae) in Cotton Plants.

    Science.gov (United States)

    Grigolli, J F J; Souza, L A; Mota, T A; Fernandes, M G; Busoli, A C

    2017-04-01

    The boll weevil, Anthonomus grandis grandis Boheman (Coleoptera: Curculionidae), is one of the most important pests of cotton production worldwide. The objective of this work was to develop a sequential sampling plan for the boll weevil. The studies were conducted in Maracaju, MS, Brazil, in two seasons with cotton cultivar FM 993. A 10,000-m² area of cotton was subdivided into 100 plots of 10 by 10 m, and five plants per plot were evaluated weekly, recording the number of squares with feeding + oviposition punctures of A. grandis on each plant. A sequential sampling plan based on the maximum likelihood ratio test was developed, using a 10% threshold level of attacked squares and a 5% security level. The type I and type II error rates were both set to 0.05, as recommended for studies with insects. The fitting of frequency distributions was divided into two phases: the negative binomial distribution fit the data best up to 85 DAE (Phase I), and the Poisson distribution fit best thereafter (Phase II). The equations that define the decision lines for Phase I are S0 = -5.1743 + 0.5730N and S1 = 5.1743 + 0.5730N, and for Phase II are S0 = -4.2479 + 0.5771N and S1 = 4.2479 + 0.5771N. The sequential sampling plan developed indicates that the maximum number of sample units expected for decision-making is ∼39 and 31 samples for Phases I and II, respectively.
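
    The decision lines quoted above translate directly into code; the counts in the usage line are hypothetical:

```python
def decide(cum_attacked, n, phase=1):
    """Classify the cumulative count of attacked squares after n sample units
    against the lower (S0) and upper (S1) decision lines from the abstract."""
    a, b = (5.1743, 0.5730) if phase == 1 else (4.2479, 0.5771)
    s0, s1 = -a + b * n, a + b * n
    if cum_attacked <= s0:
        return "stop sampling: below threshold, no control needed"
    if cum_attacked >= s1:
        return "stop sampling: above threshold, apply control"
    return "continue sampling"

# e.g. 14 attacked squares accumulated after 12 plants in Phase I
print(decide(14, 12, phase=1))   # S1 = 5.1743 + 0.5730*12 ≈ 12.05 -> control
```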

  5. Uncertainty analysis of the magnetic field measurement by the translating coil method in axisymmetric magnets

    International Nuclear Information System (INIS)

    Arpaia, Pasquale; De Vito, Luca; Kazazi, Mario

    2016-01-01

    In the uncertainty assessment of magnetic flux measurements in axially symmetric magnets by the translating coil method, the Guide to the Expression of Uncertainty in Measurement (GUM) and its supplement cannot be applied: the voltage variation at the coil terminals, which is the actual measured quantity, affects the flux estimate and its uncertainty. In this paper, a particle filter, implementing a sequential Monte Carlo method based on Bayesian inference, is applied. To this end, the main uncertainty sources are analyzed and a model of the measurement process is defined. The results of the experimental validation point to the transport system and the acquisition system as the main contributions to the uncertainty budget. (authors)
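
    The particle-filter machinery itself is generic. Below is a minimal bootstrap particle filter on a toy one-dimensional state-space model, a sketch of the sequential Monte Carlo scheme rather than the authors' flux-measurement model: predict the particles with the process model, weight them by the measurement likelihood, and resample.

```python
import numpy as np

rng = np.random.default_rng(3)
n_part, n_steps = 1000, 50
q, r = 0.1, 0.5                       # process / measurement noise std. dev.

truth, meas = 0.0, []                 # simulate the system to get data
for _ in range(n_steps):
    truth = 0.95 * truth + rng.normal(scale=q)
    meas.append(truth + rng.normal(scale=r))

particles = rng.normal(size=n_part)   # prior ensemble
for z in meas:
    particles = 0.95 * particles + rng.normal(scale=q, size=n_part)  # predict
    w = np.exp(-0.5 * ((z - particles) / r) ** 2)                    # weight
    w /= w.sum()
    particles = particles[rng.choice(n_part, size=n_part, p=w)]      # resample
print("posterior mean:", particles.mean(), "true state:", truth)
```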

  6. Assessing student understanding of measurement and uncertainty

    Science.gov (United States)

    Jirungnimitsakul, S.; Wattanakasiwich, P.

    2017-09-01

    The objectives of this study were to develop and assess student understanding of measurement and uncertainty. A test was adapted and translated from the Laboratory Data Analysis Instrument (LDAI); it consists of 25 questions focused on three topics: measures of central tendency, experimental errors and uncertainties, and fitting regression lines. The content validity of the test was evaluated by three physics experts in teaching physics laboratory. In the pilot study, the Thai LDAI was administered to 93 freshmen enrolled in a fundamental physics laboratory course. The final draft of the test was administered to three groups: 45 freshmen taking fundamental physics laboratory, 16 sophomores taking intermediate physics laboratory, and 21 juniors taking advanced physics laboratory at Chiang Mai University. As a result, we found that the freshmen had difficulties with experimental errors and uncertainties, and most students had problems with fitting regression lines. These results will be used to improve the teaching and learning of physics laboratory courses for physics students in the department.

  7. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.

    Science.gov (United States)

    Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L

    2016-03-01

    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge.

  8. Model structural uncertainty quantification and hydrogeophysical data integration using airborne electromagnetic data (Invited)

    DEFF Research Database (Denmark)

    Minsley, Burke; Christensen, Nikolaj Kruse; Christensen, Steen

    of airborne electromagnetic (AEM) data to estimate large-scale model structural geometry, i.e. the spatial distribution of different lithological units based on assumed or estimated resistivity-lithology relationships, and the uncertainty in those structures given imperfect measurements. Geophysically derived… estimates of model structural uncertainty are then combined with hydrologic observations to assess the impact of model structural error on hydrologic calibration and prediction errors. Using a synthetic numerical model, we describe a sequential hydrogeophysical approach that: (1) uses Bayesian Markov chain… Monte Carlo (McMC) methods to produce a robust estimate of uncertainty in electrical resistivity parameter values, (2) combines geophysical parameter uncertainty estimates with borehole observations of lithology to produce probabilistic estimates of model structural uncertainty over the entire AEM…

  9. Sequential-Injection Analysis: Principles, Instrument Construction, and Demonstration by a Simple Experiment

    Science.gov (United States)

    Economou, A.; Tzanavaras, P. D.; Themelis, D. G.

    2005-01-01

    Sequential-injection analysis (SIA) is an approach to sample handling that enables the automation of manual wet-chemistry procedures in a rapid, precise and efficient manner. Experiments using SIA fit well in the course of Instrumental Chemical Analysis, especially in the section on automatic methods of analysis provided by chemistry…

  10. Quantifying and Reducing Uncertainty in Correlated Multi-Area Short-Term Load Forecasting

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yannan; Hou, Zhangshuan; Meng, Da; Samaan, Nader A.; Makarov, Yuri V.; Huang, Zhenyu

    2016-07-17

    In this study, we represent and reduce the uncertainties in short-term electric load forecasting by integrating time series analysis tools including ARIMA modeling, sequential Gaussian simulation, and principal component analysis. The approaches focus mainly on maintaining the inter-dependency between multiple geographically related areas and are applied to cross-correlated load time series as well as their forecast errors. Multiple short-term prediction realizations are then generated from the reduced uncertainty ranges, which are useful for power system risk analyses.

  11. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    Science.gov (United States)

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

    Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging, in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. fMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.
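
    The optimal policy for a task of this shape can be computed by backward induction over the five decisions. The sketch below uses invented energy dynamics and gamble outcomes, not the authors' task parameters; only surviving with positive energy at the end of the block is rewarded:

```python
import numpy as np

n_trials, max_e = 5, 12
options = {"safe": [(1.0, -1)],                 # certain small loss
           "risky": [(0.5, +2), (0.5, -3)]}     # gamble (prob, energy change)

V = np.zeros((n_trials + 1, max_e + 1))
V[n_trials, 1:] = 1.0                           # reward: alive at block end
policy = np.empty((n_trials, max_e + 1), dtype=object)

def ev(t, e, opt):                              # expected value of an option
    return sum(p * V[t + 1, int(np.clip(e + d, 0, max_e))]
               for p, d in options[opt])

for t in range(n_trials - 1, -1, -1):
    for e in range(1, max_e + 1):               # e = 0 is absorbing starvation
        best = max(options, key=lambda o: ev(t, e, o))
        policy[t, e], V[t, e] = best, ev(t, e, best)

print(policy[0, 5], V[0, 5])   # optimal first choice and survival probability
```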

  12. Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods

    Directory of Open Access Journals (Sweden)

    aboalhasan fathabadi

    2017-02-01

    Introduction: In order to implement watershed practices that decrease soil erosion effects, it is necessary to estimate the sediment output of the watershed. The sediment rating curve is the most conventional tool for estimating sediment. Owing to sampling errors and short records, there are uncertainties in estimating sediment using rating curves. In this research, bootstrap and Generalized Likelihood Uncertainty Estimation (GLUE) resampling techniques were used to calculate suspended sediment loads from sediment rating curves. Materials and Methods: The total drainage area of the Sefidrood watershed is about 560000 km². In this study, uncertainty in suspended sediment rating curves was estimated at four stations, Motorkhane, Miyane Tonel Shomare 7, Stor and Glinak, constructed on the Ayghdamosh, Ghrangho, GhezelOzan and Shahrod rivers, respectively. Data were randomly divided into a training data set (80 percent) and a test set (20 percent) by Latin hypercube random sampling. Different suspended sediment rating curve equations were fitted to log-transformed values of sediment concentration and discharge, and the best-fit models were selected based on the lowest root mean square error (RMSE) and the highest coefficient of determination (R²). In the GLUE methodology, different parameter sets were sampled randomly from a prior probability distribution. For each station, using the sampled parameter sets and the selected rating curve equation, suspended sediment concentration values were estimated many times (100,000 to 400,000 times). With respect to the likelihood function and a subjective threshold, parameter sets were divided into behavioral and non-behavioral sets. Finally, using the behavioral parameter sets, the 95% confidence intervals for suspended sediment concentration due to parameter uncertainty were estimated. In the bootstrap methodology, observed suspended sediment and discharge vectors were resampled with replacement B (set to
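
    The GLUE procedure summarized here is short enough to sketch. The following uses a synthetic rating curve C = a·Q^b with invented priors, likelihood and threshold, not the Sefidrood station data:

```python
import numpy as np

rng = np.random.default_rng(4)
Q = rng.uniform(5, 200, 60)                               # observed discharge
C = 0.05 * Q**1.6 * np.exp(rng.normal(scale=0.3, size=Q.size))  # sediment conc.

n = 20000                                                 # sampled parameter sets
a = rng.uniform(0.001, 0.5, n)
b = rng.uniform(1.0, 2.5, n)
sim = a[:, None] * Q[None, :] ** b[:, None]               # all simulations
sse = np.sum((np.log(sim) - np.log(C)) ** 2, axis=1)
like = np.exp(-sse / sse.min())                           # informal likelihood
behavioral = like > 0.05 * like.max()                     # subjective threshold

lo, hi = np.percentile(sim[behavioral], [2.5, 97.5], axis=0)  # 95% bounds
print(f"{behavioral.sum()} behavioral sets; "
      f"bounds at Q={Q[0]:.0f}: {lo[0]:.1f}-{hi[0]:.1f}")
```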

  13. Impacts of Spatial Climatic Representation on Hydrological Model Calibration and Prediction Uncertainty: A Mountainous Catchment of Three Gorges Reservoir Region, China

    Directory of Open Access Journals (Sweden)

    Yan Li

    2016-02-01

    Sparse climatic observations represent a major challenge for hydrological modeling of mountain catchments, with implications for decision-making in water resources management. Employing elevation bands in the Soil and Water Assessment Tool-Sequential Uncertainty Fitting (SWAT2012-SUFI2) model enabled representation of precipitation and temperature variation with altitude in the Daning river catchment (Three Gorges Reservoir Region, China), where meteorological inputs are limited in spatial extent and are derived from observations at relatively low-lying locations. Inclusion of elevation bands produced better model performance for 1987–1993, with the Nash–Sutcliffe efficiency (NSE) increasing by at least 0.11 prior to calibration. During calibration, prediction uncertainty was greatly reduced. With similar R-factors from the earlier calibration iterations, a further 11% of observations were included within the 95% prediction uncertainty (95PPU) compared to the model without elevation bands. For behavioral simulations defined in SWAT calibration using an NSE threshold of 0.3, an additional 3.9% of observations fell within the 95PPU while the uncertainty was reduced by 7.6% in the model with elevation bands. The calibrated model with elevation bands reproduced observed river discharges, with the performance in the calibration period changing to “very good” from “poor” without elevation bands. The output uncertainty of the calibrated model with elevation bands was satisfactory, with 85% of flow observations included within the 95PPU. These results clearly demonstrate the requirement to account for orographic effects on precipitation and temperature in hydrological models of mountainous catchments.

  14. A person fit test for IRT models for polytomous items

    NARCIS (Netherlands)

    Glas, Cornelis A.W.; Dagohoy, A.V.

    2007-01-01

    A person fit test based on the Lagrange multiplier test is presented for three item response theory models for polytomous items: the generalized partial credit model, the sequential model, and the graded response model. The test can also be used in the framework of multidimensional ability

  15. Sensitivity analysis of respiratory parameter uncertainties: impact of criterion function form and constraints.

    Science.gov (United States)

    Lutchen, K R

    1990-08-01

    A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, the use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz, reducing the required breath-holding period from 16 s to 5.33-8 s. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.

  16. Investment and upgrade in distributed generation under uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Siddiqui, Afzal S. [Department of Statistical Science, University College London, London WC1E 6BT (United Kingdom); Maribu, Karl [Centre d' Economie Industrielle, Ecole Nationale Superieure des Mines de Paris, Paris 75272 (France)

    2009-01-15

    The ongoing deregulation of electricity industries worldwide is providing incentives for microgrids to use small-scale distributed generation (DG) and combined heat and power (CHP) applications via heat exchangers (HXs) to meet local energy loads. Although the electric-only efficiency of DG is lower than that of central-station production, relatively high tariff rates and the potential for CHP applications increase the attraction of on-site generation. Nevertheless, a microgrid contemplating the installation of gas-fired DG has to be aware of the uncertainty in the natural gas price. Treatment of uncertainty via real options increases the value of the investment opportunity, which then delays the adoption decision as the opportunity cost of exercising the investment option increases as well. In this paper, we take the perspective of a microgrid that can proceed in a sequential manner with DG capacity and HX investment in order to reduce its exposure to risk from natural gas price volatility. In particular, with the availability of the HX, the microgrid faces a tradeoff between reducing its exposure to the natural gas price and maximising its cost savings. By varying the volatility parameter, we find that the microgrid prefers a direct investment strategy for low levels of volatility and a sequential one for higher levels of volatility. (author)

  19. Refinement of the concept of uncertainty.

    Science.gov (United States)

    Penrod, J

    2001-04-01

    To analyse the conceptual maturity of uncertainty; to develop an expanded theoretical definition of uncertainty; to advance the concept using methods of concept refinement; and to analyse congruency with the conceptualization of uncertainty presented in the theory of hope, enduring, and suffering. Uncertainty is of concern in nursing as people experience complex life events surrounding health. In an earlier nursing study that linked the concepts of hope, enduring, and suffering into a single theoretical scheme, a state best described as 'uncertainty' arose. This study was undertaken to explore how this conceptualization fit with the scientific literature on uncertainty and to refine the concept. Initially, a concept analysis using advanced methods described by Morse, Hupcey, Mitcham and colleagues was completed. The concept was determined to be partially mature. A theoretical definition was derived and techniques of concept refinement using the literature as data were applied. The refined concept was found to be congruent with the concept of uncertainty that had emerged in the model of hope, enduring and suffering. Further investigation is needed to explore the extent of probabilistic reasoning and the effects of confidence and control on feelings of uncertainty and certainty.

  20. A new moving strategy for the sequential Monte Carlo approach in optimizing the hydrological model parameters

    Science.gov (United States)

    Zhu, Gaofeng; Li, Xin; Ma, Jinzhu; Wang, Yunquan; Liu, Shaomin; Huang, Chunlin; Zhang, Kun; Hu, Xiaoli

    2018-04-01

    Sequential Monte Carlo (SMC) samplers have become increasingly popular for estimating posterior parameter distributions with the non-linear dependency structures and multiple modes often present in hydrological models. However, the explorative capabilities and efficiency of the sampler depend strongly on the efficiency of the move step of the SMC sampler. In this paper we present a new SMC sampler, the Particle Evolution Metropolis Sequential Monte Carlo (PEM-SMC) algorithm, which is well suited to handling the unknown static parameters of hydrologic models. The PEM-SMC sampler is inspired by the work of Liang and Wong (2001) and operates by incorporating the strengths of the genetic algorithm, the differential evolution algorithm and the Metropolis-Hastings algorithm into the SMC framework. We also prove that the sampler admits the target distribution as a stationary distribution. Two case studies, a multi-dimensional bimodal normal distribution and a conceptual rainfall-runoff hydrologic model treated first with parameter uncertainty only and then with parameter and input uncertainty simultaneously, show that the PEM-SMC sampler is generally superior to other popular SMC algorithms in handling high-dimensional problems. The study also indicates that it may be important to account for model structural uncertainty by using multiple different hydrological models in the SMC framework in future studies.
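
    The central ingredient named in this abstract is a population move step mixing differential evolution with a Metropolis acceptance test. A minimal Python sketch of such a move (not the authors' code; the step size gamma and jitter eps are illustrative defaults):

```python
import numpy as np

def de_metropolis_move(particles, log_post, gamma=0.8, eps=1e-6, rng=None):
    """One differential-evolution Metropolis move over a particle population.

    Each particle is perturbed along the difference of two other randomly
    chosen particles (differential evolution) plus a small jitter, and the
    proposal is accepted with a Metropolis test, which leaves the target
    distribution invariant, the property the PEM-SMC paper proves for its
    full move step.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = particles.shape
    new = particles.copy()
    for i in range(n):
        r1, r2 = rng.choice([j for j in range(n) if j != i], size=2, replace=False)
        proposal = new[i] + gamma * (particles[r1] - particles[r2]) \
                   + eps * rng.standard_normal(d)
        # symmetric proposal, so the plain Metropolis ratio suffices
        if np.log(rng.uniform()) < log_post(proposal) - log_post(new[i]):
            new[i] = proposal
    return new
```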

  1. Price Uncertainty in Linear Production Situations

    NARCIS (Netherlands)

    Suijs, J.P.M.

    1999-01-01

    This paper analyzes linear production situations with price uncertainty, and shows that the corrresponding stochastic linear production games are totally balanced. It also shows that investment funds, where investors pool their individual capital for joint investments in financial assets, fit into

  2. Event based uncertainty assessment in urban drainage modelling, applying the GLUE methodology

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Beven, K.J.; Jensen, Jacob Birk

    2008-01-01

    In the present paper an uncertainty analysis of an application of the commercial urban drainage model MOUSE is conducted. Applying the Generalized Likelihood Uncertainty Estimation (GLUE) methodology, the model is conditioned on observation time series from two flow gauges as well as the occurrence of combined sewer overflow. The GLUE methodology is used to test different conceptual setups in order to determine if one model setup gives a better goodness of fit conditional on the observations than the other. Moreover, different methodological investigations of GLUE are conducted in order to test if the uncertainty analysis is unambiguous. It is shown that the GLUE methodology is very applicable in uncertainty analysis of this application of an urban drainage model, although it was shown to be quite difficult to get good fits of the whole time series.
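
    A minimal sketch of the GLUE recipe this record applies, assuming a Nash-Sutcliffe efficiency as the informal likelihood and a threshold of 0.5 to separate behavioural from non-behavioural parameter sets (both choices are illustrative, not taken from the paper):

```python
import numpy as np

def glue_bounds(param_sets, simulate, observed, threshold=0.5, q=(0.05, 0.95)):
    """Keep Monte Carlo parameter sets whose Nash-Sutcliffe efficiency (NSE)
    exceeds a threshold, then form likelihood-weighted uncertainty bounds
    from the retained ('behavioural') simulations."""
    observed = np.asarray(observed, dtype=float)
    sims, weights = [], []
    for theta in param_sets:
        sim = np.asarray(simulate(theta), dtype=float)   # one model run
        nse = 1.0 - np.sum((observed - sim) ** 2) / np.sum((observed - observed.mean()) ** 2)
        if nse > threshold:                              # behavioural set
            sims.append(sim)
            weights.append(nse)
    sims = np.array(sims)
    w = np.array(weights) / np.sum(weights)              # normalised weights
    lower, upper = [], []
    for t in range(sims.shape[1]):                       # weighted quantiles per step
        order = np.argsort(sims[:, t])
        cdf = np.cumsum(w[order])
        lower.append(sims[order, t][np.searchsorted(cdf, q[0])])
        upper.append(sims[order, t][np.searchsorted(cdf, q[1])])
    return np.array(lower), np.array(upper)
```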

  3. Sequential Power-Dependence Theory

    NARCIS (Netherlands)

    Buskens, Vincent; Rijt, Arnout van de

    2008-01-01

    Existing methods for predicting resource divisions in laboratory exchange networks do not take into account the sequential nature of the experimental setting. We extend network exchange theory by considering sequential exchange. We prove that Sequential Power-Dependence Theory—unlike

  4. A real-time uncertainty-knowledge and training database

    International Nuclear Information System (INIS)

    Joergensen, H.E.; Santabarbara, J.M.; Mikkelsen, T.

    1993-01-01

    The paper describes an experimentally obtained database for training of uncertainties and data interpretation in connection with local scale accidental atmospheric dispersion scenarios. Based on remote measurement techniques using lidars, sequential 'snapshots', or movies, of the fluctuating concentration profiles during several full scale diffusion experiments have been obtained. The aim has been to establish data sets suitable for comparison and training with the real-time atmospheric dispersion models in decision support systems, such as the RODOS system under development within the CEC. (author)

  5. A real-time uncertainty-knowledge and training database

    DEFF Research Database (Denmark)

    Ejsing Jørgensen, Hans; Santabarbara, J.M.; Mikkelsen, T.

    1993-01-01

    The paper describes an experimentally obtained database for training of uncertainties and data interpretation in connection with local scale accidental atmospheric dispersion scenarios. Based on remote measurement techniques using lidars, sequential 'snapshots', or movies, of the fluctuating concentration profiles during several full scale diffusion experiments have been obtained. The aim has been to establish data sets suitable for comparison and training with the real-time atmospheric dispersion models in decision support systems, such as the RODOS system under development within the CEC.

  6. Plurality of Type A evaluations of uncertainty

    Science.gov (United States)

    Possolo, Antonio; Pintar, Adam L.

    2017-10-01

    The evaluations of measurement uncertainty involving the application of statistical methods to measurement data (Type A evaluations as specified in the Guide to the Expression of Uncertainty in Measurement, GUM) comprise the following three main steps: (i) developing a statistical model that captures the pattern of dispersion or variability in the experimental data, and that relates the data either to the measurand directly or to some intermediate quantity (input quantity) that the measurand depends on; (ii) selecting a procedure for data reduction that is consistent with this model and that is fit for the purpose that the results are intended to serve; (iii) producing estimates of the model parameters, or predictions based on the fitted model, and evaluations of uncertainty that qualify either those estimates or these predictions, and that are suitable for use in subsequent uncertainty propagation exercises. We illustrate these steps in uncertainty evaluations related to the measurement of the mass fraction of vanadium in a bituminous coal reference material, including the assessment of the homogeneity of the material, and to the calibration and measurement of the amount-of-substance fraction of a hydrochlorofluorocarbon in air, and of the age of a meteorite. Our goal is to expose the plurality of choices that can reasonably be made when taking each of the three steps outlined above, and to show that different choices typically lead to different estimates of the quantities of interest, and to different evaluations of the associated uncertainty. In all the examples, the several alternatives considered represent choices that comparably competent statisticians might make, but who differ in the assumptions that they are prepared to rely on, and in their selection of approach to statistical inference. They represent also alternative treatments that the same statistician might give to the same data when the results are intended for different purposes.
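
    For the simplest of the modelling choices surveyed above (independent, identically distributed readings), the Type A evaluation reduces to the mean and the experimental standard deviation of the mean. A sketch with hypothetical replicate values; other model choices, e.g. a random-effects model for an inhomogeneous material, would give different and typically larger uncertainties, which is precisely the plurality the paper documents:

```python
import numpy as np

def type_a_evaluation(readings):
    """GUM Type A evaluation under an i.i.d. model: the estimate is the
    arithmetic mean and the standard uncertainty is the experimental
    standard deviation of the mean, s / sqrt(n)."""
    x = np.asarray(readings, dtype=float)
    return x.mean(), x.std(ddof=1) / np.sqrt(x.size)

# hypothetical replicate mass-fraction readings (mg/kg)
estimate, u = type_a_evaluation([0.512, 0.509, 0.515, 0.511, 0.513])
print(f"estimate = {estimate:.4f}, standard uncertainty = {u:.4f}")
```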

  7. A SEQUENTIAL MODEL OF INNOVATION STRATEGY—COMPANY NON-FINANCIAL PERFORMANCE LINKS

    Directory of Open Access Journals (Sweden)

    Wakhid Slamet Ciptono

    2006-05-01

    Full Text Available This study extends the prior research (Zahra and Das 1993) by examining the association between a company’s innovation strategy and its non-financial performance in the upstream and downstream strategic business units (SBUs) of oil and gas companies. The sequential model suggests a causal sequence among six dimensions of innovation strategy (leadership orientation, process innovation, product/service innovation, external innovation source, internal innovation source, and investment) that may lead to higher company non-financial performance (productivity and operational reliability). The study distributed a questionnaire (by mail, e-mailed web system, and focus group discussion) to three levels of managers (top, middle, and first-line) of 49 oil and gas companies with 140 SBUs in Indonesia. These qualified samples fell into 47 upstream (supply-chain) companies with 132 SBUs and 2 downstream (demand-chain) companies with 8 SBUs. A total of 1,332 individual usable questionnaires were returned and thus qualified for analysis, representing an effective response rate of 50.19 percent. The researcher conducts structural equation modeling (SEM) and hierarchical multiple regression analysis to assess the goodness-of-fit between the research models and the sample data and to test whether innovation strategy mediates the impact of leadership orientation on company non-financial performance. SEM reveals that the models have met goodness-of-fit criteria, thus the interpretation of the sequential models fits with the data. The results of SEM and hierarchical multiple regression: (1) support the importance of innovation strategy as a determinant of company non-financial performance, (2) suggest that the sequential model is appropriate for examining the relationships between six dimensions of innovation strategy and company non-financial performance, and (3) show that the sequential model provides additional insights into the indirect contribution of the individual

  8. Perceptions of technology uncertainty and the consequences for performance in buyer-supplier relationships

    NARCIS (Netherlands)

    Oosterhuis, M.; van der Vaart, T.; Molleman, E.

    2011-01-01

    In this paper, we investigate how buyers' and suppliers' distinct perceptions of technology uncertainty affect the relationship between communication frequency and supplier performance. Information processing theory suggests that a fit is desirable between perceived environmental uncertainty and the

  9. Adaptive Management Fitness of Watersheds

    Directory of Open Access Journals (Sweden)

    Ignacio Porzecanski

    2012-09-01

    Full Text Available Adaptive management (AM) promises to improve our ability to cope with the inherent uncertainties of managing complex dynamic systems such as watersheds. However, despite the increasing adherence and attempts at implementation, the AM approach is rarely successful in practice. A one-size-fits-all AM strategy fails because some watersheds are better positioned at the outset to succeed at AM than others. We introduce a diagnostic tool called the Index of Management Condition (IMC) and apply it to twelve diverse watersheds in order to determine their AM "fitness"; that is, the degree to which favorable adaptive management conditions are in place in a watershed.

  10. Decision making under uncertainty

    International Nuclear Information System (INIS)

    Cyert, R.M.

    1989-01-01

    This paper reports on ways of improving the reliability of products and systems in this country if we are to survive as a first-rate industrial power. The use of statistical techniques has, since the 1920s, been viewed as one of the methods for testing quality and estimating the level of quality in a universe of output. Statistical quality control is not relevant, generally, to improving systems in an industry like yours, but certainly the use of probability concepts is of significance. In addition, when it is recognized that part of the problem involves making decisions under uncertainty, it becomes clear that techniques such as sequential decision making and Bayesian analysis become major methodological approaches that must be utilized.

  11. Model structural uncertainty quantification and hydrologic parameter and prediction error analysis using airborne electromagnetic data

    DEFF Research Database (Denmark)

    Minsley, B. J.; Christensen, Nikolaj Kruse; Christensen, Steen

    Model structure, or the spatial arrangement of subsurface lithological units, is fundamental to the hydrological behavior of Earth systems. Knowledge of geological model structure is critically important in order to make informed hydrological predictions and management decisions. Model structure is never perfectly known, however, and incorrect assumptions can be a significant source of error when making model predictions. We describe a systematic approach for quantifying model structural uncertainty that is based on the integration of sparse borehole observations and large-scale airborne electromagnetic (AEM) data. Our estimates of model structural uncertainty follow a Bayesian framework that accounts for both the uncertainties in geophysical parameter estimates given AEM data, and the uncertainties in the relationship between lithology and geophysical parameters. Using geostatistical sequential...

  12. Managing project risks and uncertainties

    Directory of Open Access Journals (Sweden)

    Mike Mentis

    2015-01-01

    Full Text Available This article considers threats to a project slipping on budget, schedule and fit-for-purpose. Threat is used here as the collective for risks (quantifiable bad things that can happen) and uncertainties (poorly or not quantifiable bad possible events). Based on experience with projects in developing countries, this review considers that (a) project slippage is due to uncertainties rather than risks, (b) while eventuation of some bad things is beyond control, managed execution and oversight are still the primary means to keeping within budget, on time and fit-for-purpose, (c) improving project delivery is less about bigger and more complex and more about coordinated focus, effectiveness and developing thought-out heuristics, and (d) projects take longer and cost more partly because threat identification is inaccurate, the scope of identified threats is too narrow, and the threat assessment product is not integrated into overall project decision-making and execution. Almost by definition, what is poorly known is likely to cause problems. Yet it is not just the unquantifiability and intangibility of uncertainties causing project slippage, but that they are insufficiently taken into account in project planning and execution that cause budget and time overruns. Improving project performance requires purpose-driven and managed deployment of scarce seasoned professionals. This can be aided with independent oversight by deeply experienced panelists who contribute technical insights and can potentially show that diligence is seen to be done.

  13. Reliability Evaluation of Distribution System Considering Sequential Characteristics of Distributed Generation

    Directory of Open Access Journals (Sweden)

    Sheng Wanxing

    2016-01-01

    Full Text Available In view of the randomness of the output power of distributed generation (DG), a reliability evaluation model based on sequential Monte Carlo simulation (SMCS) for distribution systems with DG is proposed. Operating states of the distribution system can be sampled by SMCS in chronological order, so that the corresponding output power of DG can be generated. The proposed method has been tested on feeder F4 of IEEE-RBTS Bus 6. The results show that reliability evaluation of a distribution system considering the uncertainty of the output power of DG can be effectively implemented by SMCS.
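
    The chronological sampling that SMCS performs can be illustrated at the level of a single repairable component (the paper works at the feeder level); a sketch with exponential up and down durations, all rates hypothetical:

```python
import numpy as np

def smcs_unavailability(failure_rate, repair_rate, horizon_h=8760.0,
                        n_years=2000, rng=None):
    """Sequential Monte Carlo simulation for one repairable component:
    up and down durations are drawn in chronological order, which is also
    where a time-varying DG output profile would be overlaid. Rates are
    per hour; returns the estimated unavailability."""
    rng = np.random.default_rng() if rng is None else rng
    down_total = 0.0
    for _ in range(n_years):
        t = 0.0
        while t < horizon_h:
            t += rng.exponential(1.0 / failure_rate)   # time to failure
            if t >= horizon_h:
                break
            r = rng.exponential(1.0 / repair_rate)     # repair duration
            down_total += min(r, horizon_h - t)
            t += r
    return down_total / (n_years * horizon_h)

print(smcs_unavailability(failure_rate=0.001, repair_rate=0.1))
```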

  14. Uncertainty in eddy covariance measurements and its application to physiological models

    Science.gov (United States)

    D.Y. Hollinger; A.D. Richardson; A.D. Richardson

    2005-01-01

    Flux data are noisy, and this uncertainty is largely due to random measurement error. Knowledge of uncertainty is essential for the statistical evaluation of modeled and measured fluxes, for comparison of parameters derived by fitting models to measured fluxes and in formal data-assimilation efforts. We used the difference between simultaneous measurements from two...

  15. Uncertainties in forces extracted from non-contact atomic force microscopy measurements by fitting of long-range background forces

    Directory of Open Access Journals (Sweden)

    Adam Sweetman

    2014-04-01

    Full Text Available In principle, non-contact atomic force microscopy (NC-AFM) now readily allows for the measurement of forces with sub-nanonewton precision on the atomic scale. In practice, however, the extraction of the often desired ‘short-range’ force from the experimental observable (frequency shift) is often far from trivial. In most cases there is a significant contribution to the total tip–sample force due to non-site-specific van der Waals and electrostatic forces. Typically, the contribution from these forces must be removed before the results of the experiment can be successfully interpreted, often by comparison to density functional theory calculations. In this paper we compare the ‘on-minus-off’ method for extracting site-specific forces to a commonly used extrapolation method modelling the long-range forces using a simple power law. By examining the behaviour of the fitting method in the case of two radically different interaction potentials we show that significant uncertainties in the final extracted forces may result from use of the extrapolation method.
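
    The extrapolation method under scrutiny here fits a power law to the long-range tail and subtracts it; a minimal sketch (the power-law form, fit range and initial guesses are illustrative assumptions, and z and force are expected as NumPy arrays):

```python
import numpy as np
from scipy.optimize import curve_fit

def subtract_long_range(z, force, fit_from=1.0):
    """Fit a simple power law to the total tip-sample force at large
    distances (z >= fit_from, nm), where the short-range interaction is
    negligible, then extrapolate and subtract it to estimate the
    short-range force. The choice of fit range and exponent is exactly
    where the paper locates the extraction uncertainty."""
    def power_law(z, a, n):
        return -a / z ** n          # attractive long-range background

    mask = z >= fit_from
    (a, n), _ = curve_fit(power_law, z[mask], force[mask], p0=(1.0, 2.0))
    return force - power_law(z, a, n)
```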

  16. Fast uncertainty reduction strategies relying on Gaussian process models

    International Nuclear Information System (INIS)

    Chevalier, Clement

    2013-01-01

    This work deals with sequential and batch-sequential evaluation strategies of real-valued functions under a limited evaluation budget, using Gaussian process models. Optimal Stepwise Uncertainty Reduction (SUR) strategies are investigated for two different problems, motivated by real test cases in nuclear safety. First we consider the problem of identifying the excursion set above a given threshold T of a real-valued function f. Then we study the question of finding the set of 'safe controlled configurations', i.e. the set of controlled inputs where the function remains below T, whatever the value of some other non-controlled inputs. New SUR strategies are presented, together with efficient procedures and formulas to compute and use them in real world applications. The use of fast formulas to recalculate quickly the posterior mean or covariance function of a Gaussian process (referred to as the 'kriging update formulas') does not only provide substantial computational savings. It is also one of the key tools to derive closed form formulas enabling a practical use of computationally-intensive sampling strategies. A contribution in batch-sequential optimization (with the multi-points Expected Improvement) is also presented. (author)

  17. Diagnostic uncertainty, guilt, mood, and disability in back pain.

    Science.gov (United States)

    Serbic, Danijela; Pincus, Tamar; Fife-Schaw, Chris; Dawson, Helen

    2016-01-01

    In the majority of patients a definitive cause for low back pain (LBP) cannot be established, and many patients report feeling uncertain about their diagnosis, accompanied by guilt. The relationship between diagnostic uncertainty, guilt, mood, and disability is currently unknown. This study tested 3 theoretical models to explore possible pathways between these factors. In Model 1, diagnostic uncertainty was hypothesized to correlate with pain-related guilt, which in turn would positively correlate with depression, anxiety and disability. Two alternative models were tested: (a) a path from depression and anxiety to guilt, from guilt to diagnostic uncertainty, and finally to disability; (b) a model in which depression and anxiety, and independently, diagnostic uncertainty, were associated with guilt, which in turn was associated with disability. Structural equation modeling was employed on data from 413 participants with chronic LBP. All 3 models showed a reasonable-to-good fit with the data, with the 2 alternative models providing marginally better fit indices. Guilt, and especially social guilt, was associated with disability in all 3 models. Diagnostic uncertainty was associated with guilt, but only moderately. Low mood was also associated with guilt. Two newly defined factors, pain-related guilt and diagnostic uncertainty, appear to be linked to disability and mood in people with LBP. The causal path of these links cannot be established in this cross-sectional study. However, pain-related guilt especially appears to be important, and future research should examine whether interventions directly targeting guilt improve outcomes. (c) 2015 APA, all rights reserved.

  18. Uncertainties in constraining low-energy constants from {sup 3}H β decay

    Energy Technology Data Exchange (ETDEWEB)

    Klos, P.; Carbone, A.; Hebeler, K. [Technische Universitaet Darmstadt, Institut fuer Kernphysik, Darmstadt (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, ExtreMe Matter Institute EMMI, Darmstadt (Germany); Menendez, J. [University of Tokyo, Department of Physics, Tokyo (Japan); Schwenk, A. [Technische Universitaet Darmstadt, Institut fuer Kernphysik, Darmstadt (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, ExtreMe Matter Institute EMMI, Darmstadt (Germany); Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)

    2017-08-15

    We discuss the uncertainties in constraining low-energy constants of chiral effective field theory from {sup 3}H β decay. The half-life is very precisely known, so that the Gamow-Teller matrix element has been used to fit the coupling c{sub D} of the axial-vector current to a short-range two-nucleon pair. Because the same coupling also describes the leading one-pion-exchange three-nucleon force, this in principle provides a very constraining fit, uncorrelated with the {sup 3}H binding energy fit used to constrain another low-energy coupling in three-nucleon forces. However, so far such {sup 3}H half-life fits have only been performed at a fixed cutoff value. We show that the cutoff dependence due to the regulators in the axial-vector two-body current can significantly affect the Gamow-Teller matrix elements and consequently also the extracted values for the c{sub D} coupling constant. The degree of the cutoff dependence is correlated with the softness of the employed NN interaction. As a result, present three-nucleon forces based on a fit to {sup 3}H β decay underestimate the uncertainty in c{sub D}. We explore a range of c{sub D} values that is compatible within cutoff variation with the experimental {sup 3}H half-life and estimate the resulting uncertainties for many-body systems by performing calculations of symmetric nuclear matter. (orig.)

  19. Scientific uncertainty in media content: Introduction to this special issue.

    Science.gov (United States)

    Peters, Hans Peter; Dunwoody, Sharon

    2016-11-01

    This introduction sets the stage for the special issue on the public communication of scientific uncertainty that follows by sketching the wider landscape of issues related to the communication of uncertainty and showing how the individual contributions fit into that landscape. The first part of the introduction discusses the creation of media content as a process involving journalists, scientific sources, stakeholders, and the responsive audience. The second part then provides an overview of the perception of scientific uncertainty presented by the media and the consequences for the recipients' own assessments of uncertainty. Finally, we briefly describe the six research articles included in this special issue. © The Author(s) 2016.

  20. Difficulties in fitting the thermal response of atomic force microscope cantilevers for stiffness calibration

    International Nuclear Information System (INIS)

    Cole, D G

    2008-01-01

    This paper discusses the difficulties of calibrating atomic force microscope (AFM) cantilevers, in particular the effect that calibrating under light fluid-loading (in air) and under heavy fluid-loading (in water) has on the ability to use the thermal motion response to fit the model parameters that are used to determine cantilever stiffness. For the light fluid-loading case, the resonant frequency and quality factor can easily be used to determine stiffness. The extension of this approach to the heavy fluid-loading case is troublesome due to the low quality factor (high damping) caused by fluid-loading. Simple calibration formulae are difficult to realize, and the best approach is often to curve-fit the thermal response, using the parameters of natural frequency and mass ratio, so that the curve-fit's response is within some acceptable tolerance of the actual thermal response. The parameters can then be used to calculate the cantilever stiffness. However, the process of curve-fitting can lead to erroneous results unless suitable care is taken. A feedback model of the fluid–structure interaction between the unloaded cantilever and the hydrodynamic drag provides a framework for fitting a modeled thermal response to a measured response and for evaluating the parametric uncertainty of the fit. The cases of uncertainty in the natural frequency, the mass ratio, and combined uncertainty are presented and the implications for system identification and stiffness calibration using curve-fitting techniques are discussed. Finally, considerations and recommendations for the calibration of AFM cantilevers are given in light of the results of this paper.
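
    A sketch of the kind of thermal-response curve fit discussed here, assuming the common damped simple-harmonic-oscillator model of the thermal spectrum plus a white noise floor (the paper's own feedback model is different); the returned covariance gives one handle on the parametric uncertainty that dominates at low quality factor:

```python
import numpy as np
from scipy.optimize import curve_fit

def sho_psd(f, A, f0, Q, white):
    """Thermal power spectral density of a damped simple harmonic
    oscillator with resonant frequency f0 and quality factor Q,
    plus a white measurement-noise floor."""
    return A * f0**4 / ((f**2 - f0**2) ** 2 + (f0 * f / Q) ** 2) + white

def fit_thermal_response(freq, psd, p0):
    """Fit the measured thermal spectrum; under heavy fluid loading Q is
    low, the peak is broad, and the fitted parameters become strongly
    correlated, which is visible in the returned 1-sigma estimates."""
    popt, pcov = curve_fit(sho_psd, freq, psd, p0=p0, maxfev=20000)
    return popt, np.sqrt(np.diag(pcov))
```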

  1. Incorporating uncertainty in predictive species distribution modelling.

    Science.gov (United States)

    Beale, Colin M; Lennon, Jack J

    2012-01-19

    Motivated by the need to solve ecological problems (climate change, habitat fragmentation and biological invasions), there has been increasing interest in species distribution models (SDMs). Predictions from these models inform conservation policy, invasive species management and disease-control measures. However, predictions are subject to uncertainty, the degree and source of which is often unrecognized. Here, we review the SDM literature in the context of uncertainty, focusing on three main classes of SDM: niche-based models, demographic models and process-based models. We identify sources of uncertainty for each class and discuss how uncertainty can be minimized or included in the modelling process to give realistic measures of confidence around predictions. Because this has typically not been performed, we conclude that uncertainty in SDMs has often been underestimated and a false precision assigned to predictions of geographical distribution. We identify areas where development of new statistical tools will improve predictions from distribution models, notably the development of hierarchical models that link different types of distribution model and their attendant uncertainties across spatial scales. Finally, we discuss the need to develop more defensible methods for assessing predictive performance, quantifying model goodness-of-fit and for assessing the significance of model covariates.

  2. CHARACTERIZING AND PROPAGATING MODELING UNCERTAINTIES IN PHOTOMETRICALLY DERIVED REDSHIFT DISTRIBUTIONS

    International Nuclear Information System (INIS)

    Abrahamse, Augusta; Knox, Lloyd; Schmidt, Samuel; Thorman, Paul; Anthony Tyson, J.; Zhan Hu

    2011-01-01

    The uncertainty in the redshift distributions of galaxies has a significant potential impact on the cosmological parameter values inferred from multi-band imaging surveys. The accuracy of the photometric redshifts measured in these surveys depends not only on the quality of the flux data, but also on a number of modeling assumptions that enter into both the training set and spectral energy distribution (SED) fitting methods of photometric redshift estimation. In this work we focus on the latter, considering two types of modeling uncertainties: uncertainties in the SED template set and uncertainties in the magnitude and type priors used in a Bayesian photometric redshift estimation method. We find that SED template selection effects dominate over magnitude prior errors. We introduce a method for parameterizing the resulting ignorance of the redshift distributions, and for propagating these uncertainties to uncertainties in cosmological parameters.

  3. Uncertainty, joint uncertainty, and the quantum uncertainty principle

    International Nuclear Information System (INIS)

    Narasimhachar, Varun; Poostindouz, Alireza; Gour, Gilad

    2016-01-01

    Historically, the element of uncertainty in quantum mechanics has been expressed through mathematical identities called uncertainty relations, a great many of which continue to be discovered. These relations use diverse measures to quantify uncertainty (and joint uncertainty). In this paper we use operational information-theoretic principles to identify the common essence of all such measures, thereby defining measure-independent notions of uncertainty and joint uncertainty. We find that most existing entropic uncertainty relations use measures of joint uncertainty that yield themselves to a small class of operational interpretations. Our notion relaxes this restriction, revealing previously unexplored joint uncertainty measures. To illustrate the utility of our formalism, we derive an uncertainty relation based on one such new measure. We also use our formalism to gain insight into the conditions under which measure-independent uncertainty relations can be found. (paper)

  4. Some sources of the underestimation of evaluated cross section uncertainties

    International Nuclear Information System (INIS)

    Badikov, S.A.; Gai, E.V.

    2003-01-01

    The problem of the underestimation of evaluated cross-section uncertainties is addressed. Two basic sources of the underestimation of evaluated cross-section uncertainties - a) inconsistency between declared and observable experimental uncertainties and b) inadequacy between applied statistical models and processed experimental data - are considered. Both sources of underestimation are mainly a consequence of the existence of uncertainties unrecognized by experimenters. A 'constant shift' model is proposed for taking unrecognized experimental uncertainties into account. The model is applied to the statistical analysis of the 238U(n,f)/235U(n,f) reaction cross-section ratio measurements. It is demonstrated that multiplication by sqrt(χ2) as an instrument for correcting underestimated evaluated cross-section uncertainties fails in the case of correlated measurements. It is shown that arbitrary assignment of uncertainties and correlations in a simple least-squares fit of two correlated measurements of an unknown mean leads to physically incorrect evaluated results. (author)
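
    The two-measurement example can be reproduced in a few lines of generalised least squares; a sketch with hypothetical numbers showing how, for strongly positive assumed correlation, the evaluated mean falls below both measured values, the kind of physically questionable result the paper warns about:

```python
import numpy as np

def gls_mean(x1, x2, u1, u2, rho):
    """Generalised least-squares estimate of a common mean from two
    correlated measurements with standard uncertainties u1, u2 and
    correlation coefficient rho. For large positive rho the GLS weights
    can turn negative, pushing the mean outside [x1, x2]."""
    V = np.array([[u1**2, rho * u1 * u2],
                  [rho * u1 * u2, u2**2]])
    ones = np.ones(2)
    Vinv = np.linalg.inv(V)
    w = Vinv @ ones / (ones @ Vinv @ ones)   # weights sum to 1
    mean = w @ np.array([x1, x2])
    u_mean = 1.0 / np.sqrt(ones @ Vinv @ ones)
    return mean, u_mean

print(gls_mean(1.0, 1.5, 0.1, 0.3, rho=0.9))   # mean ~0.82, below both inputs
```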

  5. Parameter Identification and Uncertainty Analysis for Visual MODFLOW based Groundwater Flow Model in a Small River Basin, Eastern India

    Science.gov (United States)

    Jena, S.

    2015-12-01

    The overexploitation of groundwater resulted in abandoning many shallow tube wells in the river basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is essential for the efficient planning and management of the water resources. The main intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW package and successfully calibrate and validate it using 17 years of observed data. The sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to the river bank seepage, recharge from rainfall and agriculture practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (MCMC) techniques were implemented. Results from the two techniques were compared and the advantages and disadvantages were analysed. Nash-Sutcliffe coefficient (NSE) and coefficient of determination (R2) were adopted as two criteria during calibration and validation of the developed model. NSE and R2 values of the groundwater flow model for the calibration and validation periods were in the acceptable range. Also, the MCMC technique was able to provide more reasonable results than SUFI-2. The calibrated and validated model will be useful to identify the aquifer properties, analyse the groundwater flow dynamics and the change in groundwater levels in future forecasts.

  6. Predictive uncertainty in auditory sequence processing

    Directory of Open Access Journals (Sweden)

Niels Chr. Hansen

    2014-09-01

    Full Text Available Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners’ prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners’ perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.

  7. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
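
    A toy sketch of the entropy measure both versions of this record rely on, substituting a fixed-order Markov model for the papers' variable-order model (the pitch sequence is hypothetical):

```python
import numpy as np
from collections import defaultdict

def predictive_entropies(sequence, order=1):
    """Train a fixed-order Markov model on an event sequence, then return
    the Shannon entropy (bits) of the predictive distribution at each
    position, a simple proxy for the listener's prospective uncertainty."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(order, len(sequence)):
        counts[tuple(sequence[i - order:i])][sequence[i]] += 1
    entropies = []
    for i in range(order, len(sequence)):
        dist = counts[tuple(sequence[i - order:i])]
        p = np.array(list(dist.values()), dtype=float)
        p /= p.sum()
        entropies.append(float(-np.sum(p * np.log2(p))))
    return entropies

# toy melody as MIDI pitches
print(predictive_entropies([60, 62, 64, 62, 60, 62, 64, 65, 64, 62]))
```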

  8. [Evaluation of uncertainty for determination of tin and its compounds in air of workplace by flame atomic absorption spectrometry].

    Science.gov (United States)

    Wei, Qiuning; Wei, Yuan; Liu, Fangfang; Ding, Yalei

    2015-10-01

    To investigate the method for uncertainty evaluation of the determination of tin and its compounds in the air of the workplace by flame atomic absorption spectrometry. The national occupational health standards, GBZ/T160.28-2004 and JJF1059-1999, were used to build a mathematical model of the determination of tin and its compounds in the air of the workplace and to calculate the components of uncertainty. In the determination of tin and its compounds in the air of the workplace using flame atomic absorption spectrometry, the uncertainty for the concentration of the standard solution, the atomic absorption spectrophotometer, sample digestion, parallel determination, least-squares fitting of the calibration curve, and sample collection was 0.436%, 0.13%, 1.07%, 1.65%, 3.05%, and 2.89%, respectively. The combined uncertainty was 9.3%. The concentration of tin in the test sample was 0.132 mg/m³, and the expanded uncertainty for the measurement was 0.012 mg/m³ (K=2). The dominant uncertainty for the determination of tin and its compounds in the air of the workplace comes from least-squares fitting of the calibration curve and sample collection. Quality control should be improved in the process of calibration curve fitting and sample collection.

  9. Estimating the measurement uncertainty in forensic blood alcohol analysis.

    Science.gov (United States)

    Gullberg, Rod G

    2012-04-01

    For many reasons, forensic toxicologists are being asked to determine and report their measurement uncertainty in blood alcohol analysis. While understood conceptually, the elements and computations involved in determining measurement uncertainty are generally foreign to most forensic toxicologists. Several established and well-documented methods are available to determine and report the uncertainty in blood alcohol measurement. A straightforward bottom-up approach is presented that includes: (1) specifying the measurand, (2) identifying the major components of uncertainty, (3) quantifying the components, (4) statistically combining the components and (5) reporting the results. A hypothetical example is presented that employs reasonable estimates for forensic blood alcohol analysis assuming headspace gas chromatography. These computations are easily employed in spreadsheet programs as well. Determining and reporting measurement uncertainty is an important element in establishing fitness-for-purpose. Indeed, the demand for such computations and information from the forensic toxicologist will continue to increase.
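
    A sketch of the combination and reporting steps of the bottom-up approach described above, assuming independent components expressed as relative standard uncertainties (the labels and values are hypothetical, not the paper's):

```python
import numpy as np

def expanded_uncertainty(value, rel_components, k=2.0):
    """Combine independent relative uncertainty components in quadrature
    (root sum of squares) and report both the combined standard
    uncertainty and the expanded uncertainty with coverage factor k."""
    u_rel = float(np.sqrt(np.sum(np.square(list(rel_components.values())))))
    return value * u_rel, value * u_rel * k

components = {           # hypothetical relative standard uncertainties
    "calibrator": 0.010,
    "dilution": 0.008,
    "calibration_curve": 0.015,
    "repeatability": 0.012,
}
u, U = expanded_uncertainty(0.082, components)   # g/100 mL, hypothetical value
print(f"u = {u:.4f} g/100 mL, U(k=2) = {U:.4f} g/100 mL")
```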

  10. Calculation of uncertainties associated to environmental radioactivity measurements and their functions. Practical Procedure

    International Nuclear Information System (INIS)

    Gasco Leonarte, C.; Anton Mateos, M.P.

    1995-12-01

    This report summarizes the procedure used to calculate the uncertainties associated to environmental radioactivity measurements, focusing on those obtained by radiochemical separation in which tracers have been added. Uncertainties linked to activity concentration calculations, isotopic ratios, inventories, sequential leaching data, chronology dating by using the C.R.S. model and duplicate analysis are described in detail. The objective of this article is to serve as a guide to people not familiarized with this kind of calculations, showing clear practical examples. The input of the formulas and all the data needed to achieve these calculations into Lotus 1,2,3 WIN is outlined as well. (Author)

  11. Calculation of uncertainties associated to environmental radioactivity measurements and their functions. Practical Procedure

    International Nuclear Information System (INIS)

    Gasco Leonarte, C; Anton Mateos, M. P.

    1995-01-01

    This report summarizes the procedure used to calculate the uncertainties associated to environmental radioactivity measurements, focusing on those obtained by radiochemical separation in which tracers have been added. Uncertainties linked to activity concentration calculations, isotopic ratios, inventories, sequential leaching data, chronology dating by using the C.R.S. model and duplicate analysis are described in detail. The objective of this article is to serve as a guide to people not familiarized with this kind of calculations, showing clear practical examples. The input of the formulas and all the data needed to achieve these calculations into Lotus 1,2,3 WIN is outlined as well. (Author) 13 refs

  12. Uncertainty quantification for proton–proton fusion in chiral effective field theory

    Energy Technology Data Exchange (ETDEWEB)

    Acharya, B. [Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996 (United States); Carlsson, B.D. [Department of Physics, Chalmers University of Technology, SE-412 96 Göteborg (Sweden); Ekström, A. [Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996 (United States); Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Department of Physics, Chalmers University of Technology, SE-412 96 Göteborg (Sweden); Forssén, C. [Department of Physics, Chalmers University of Technology, SE-412 96 Göteborg (Sweden); Platter, L., E-mail: lplatter@utk.edu [Department of Physics and Astronomy, University of Tennessee, Knoxville, TN 37996 (United States); Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)

    2016-09-10

    We compute the S-factor of the proton–proton (pp) fusion reaction using chiral effective field theory (χEFT) up to next-to-next-to-leading order (NNLO) and perform a rigorous uncertainty analysis of the results. We quantify the uncertainties due to (i) the computational method used to compute the pp cross section in momentum space, (ii) the statistical uncertainties in the low-energy coupling constants of χEFT, (iii) the systematic uncertainty due to the χEFT cutoff, and (iv) systematic variations in the database used to calibrate the nucleon–nucleon interaction. We also examine the robustness of the polynomial extrapolation procedure, which is commonly used to extract the threshold S-factor and its energy-derivatives. By performing a statistical analysis of the polynomial fit of the energy-dependent S-factor at several different energy intervals, we eliminate a systematic uncertainty that can arise from the choice of the fit interval in our calculations. In addition, we explore the statistical correlations between the S-factor and few-nucleon observables such as the binding energies and point-proton radii of {sup 2,3}H and {sup 3}He as well as the D-state probability and quadrupole moment of {sup 2}H, and the β-decay of {sup 3}H. We find that, with the state-of-the-art optimization of the nuclear Hamiltonian, the statistical uncertainty in the threshold S-factor cannot be reduced beyond 0.7%.
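
    The robustness check on the polynomial extrapolation described above can be sketched in a few lines: fit the energy-dependent S-factor over several degrees and energy intervals and inspect the spread of the extrapolated threshold values (the degrees and interval endpoints are illustrative, and E, S are NumPy arrays):

```python
import numpy as np

def threshold_s_factor(E, S, degrees=(2, 3), e_max_values=(0.5, 1.0, 2.0)):
    """Fit polynomials of several degrees over several energy intervals
    [0, e_max] and return the extrapolated S(0) for each combination;
    the spread across combinations exposes the systematic uncertainty
    tied to the choice of fit interval."""
    s0 = {}
    for deg in degrees:
        for e_max in e_max_values:
            mask = E <= e_max
            coeffs = np.polyfit(E[mask], S[mask], deg)
            s0[(deg, e_max)] = np.polyval(coeffs, 0.0)
    return s0
```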

  13. Uncertainty Model for Total Solar Irradiance Estimation on Australian Rooftops

    Science.gov (United States)

    Al-Saadi, Hassan; Zivanovic, Rastko; Al-Sarawi, Said

    2017-11-01

    The installation of solar panels on Australian rooftops has been on the rise for the last few years, especially in urban areas. This motivates academic researchers, distribution network operators and engineers to accurately address the level of uncertainty resulting from grid-connected solar panels. The main source of uncertainty is the intermittent nature of radiation; therefore, this paper presents a new model to estimate the total radiation incident on a tilted solar panel. The model is driven by the clearness index, for which a probability distribution is fitted, with special attention paid to Australian conditions through the use of a best-fit correlation for the diffuse fraction. The validity of the model is assessed with four goodness-of-fit techniques. In addition, the Quasi Monte Carlo and sparse grid methods are used as sampling and uncertainty computation tools, respectively. High-resolution solar irradiance data for the city of Adelaide were used for this assessment, with the outcome indicating a satisfactory agreement between the actual data variation and the model.
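
    A minimal sketch of the distribution-fitting and goodness-of-fit steps described above, assuming (purely for illustration) a beta distribution for the clearness index and a Kolmogorov-Smirnov test as one of the goodness-of-fit techniques; the synthetic sample stands in for measured hourly clearness indices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
kt = rng.beta(4.0, 2.0, size=5000)        # synthetic clearness-index sample

# fit a candidate distribution on the unit interval
params = stats.beta.fit(kt, floc=0.0, fscale=1.0)

# one goodness-of-fit technique: the Kolmogorov-Smirnov test
# (the p-value is optimistic because parameters were fitted on the same data)
ks = stats.kstest(kt, "beta", args=params)
print(f"shape parameters: {params[:2]}, KS statistic: {ks.statistic:.4f}")
```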

  14. Modelling sequentially scored item responses

    NARCIS (Netherlands)

    Akkermans, W.

    2000-01-01

    The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is

  15. Sequential decisions: a computational comparison of observational and reinforcement accounts.

    Directory of Open Access Journals (Sweden)

    Nazanin Mohammadi Sepahvand

    Full Text Available Right brain damaged patients show impairments in sequential decision making tasks on which healthy people show no difficulty. We hypothesized that this difficulty could be due to the failure of right brain damaged patients to develop well-matched models of the world. Our motivation is the idea that to navigate uncertainty, humans use models of the world to direct the decisions they make when interacting with their environment. The better the model is, the better their decisions are. To explore the model building and updating process in humans and the basis for impairment after brain injury, we used a computational model of non-stationary sequence learning. RELPH (Reinforcement and Entropy Learned Pruned Hypothesis space) was able to qualitatively and quantitatively reproduce the results of left and right brain damaged patient groups and healthy controls playing a sequential version of Rock, Paper, Scissors. Our results suggest that, in general, humans employ a sub-optimal reinforcement-based learning method rather than an objectively better statistical learning approach, and that differences between right brain damaged and healthy control groups can be explained by different exploration policies, rather than qualitatively different learning mechanisms.

  16. Sequential experimental design based generalised ANOVA

    Energy Technology Data Exchange (ETDEWEB)

    Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in

    2016-07-15

    Over the last decade, the surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure in three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.

  17. Uncertainty in a monthly water balance model using the generalized likelihood uncertainty estimation methodology

    Science.gov (United States)

    Rivera, Diego; Rivas, Yessica; Godoy, Alex

    2015-02-01

    Hydrological models are simplified representations of natural processes and subject to errors. Uncertainty bounds are a commonly used way to assess the impact of an input or model architecture uncertainty in model outputs. Different sets of parameters could have equally robust goodness-of-fit indicators, which is known as equifinality. We assessed the outputs from a lumped conceptual hydrological model applied to an agricultural watershed in central Chile under strong interannual variability (coefficient of variability of 25%) by using the equifinality concept and uncertainty bounds. The simulation period ran from January 1999 to December 2006. Equifinality and uncertainty bounds from the GLUE methodology (Generalized Likelihood Uncertainty Estimation) were used to identify parameter sets as potential representations of the system. The aim of this paper is to exploit the use of uncertainty bounds to differentiate behavioural parameter sets in a simple hydrological model. Then, we analyze the presence of equifinality in order to improve the identification of relevant hydrological processes. The water balance model for the Chillan River exhibits, at a first stage, equifinality. However, it was possible to narrow the range for the parameters and eventually identify a set of parameters representing the behaviour of the watershed (a behavioural model) in agreement with observational and soft data (calculation of areal precipitation over the watershed using an isohyetal map). The mean width of the uncertainty bound around the predicted runoff for the simulation period decreased from 50 to 20 m³/s after fixing the parameter controlling the areal precipitation over the watershed. This decrement is equivalent to decreasing the ratio between simulated and observed discharge from 5.2 to 2.5. Despite the criticisms of the GLUE methodology, such as its lack of statistical formality, it is identified as a useful tool assisting the modeller with the identification of critical parameters.

  18. Automated cleaning and uncertainty attribution of archival bathymetry based on a priori knowledge

    Science.gov (United States)

    Ladner, Rodney Wade; Elmore, Paul; Perkins, A. Louise; Bourgeois, Brian; Avera, Will

    2017-09-01

    Hydrographic offices hold large, valuable historic bathymetric data sets, many of which were collected using older generation survey systems that contain little or no metadata and/or uncertainty estimates. These bathymetric data sets generally contain large outlier (errant) data points to clean, yet standard practice does not include rigorous automated procedures for systematic cleaning of these historical data sets and their subsequent conversion into reusable data formats. In this paper, we propose an automated method for this task. We utilize statistically diverse threshold tests, including a robust least trimmed squares method, to clean the data. We use LOESS weighted regression residuals together with a Student-t distribution to attribute uncertainty to each retained sounding; the resulting uncertainty values compare favorably with native estimates of uncertainty from co-located data sets, which we use to estimate a point-wise goodness-of-fit measure. Storing a cleansed, validated data set augmented with uncertainty in a reusable format provides the details of this analysis for subsequent users. Our test results indicate that the method significantly improves the quality of the data set while concurrently providing confidence interval estimates and point-wise goodness-of-fit estimates as referenced to current hydrographic practices.
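
    A very small sketch in the spirit of that pipeline: fit a smooth trend robustly, standardise the residuals by a robust spread estimate, and flag soundings with large residuals. For brevity, a rolling median and the median absolute deviation stand in for the paper's LOESS and least-trimmed-squares machinery:

```python
import numpy as np

def clean_soundings(x, z, frac=0.1, n_sigma=3.0):
    """Flag depth outliers along a survey line: a rolling-median trend is
    removed, residual spread is estimated with the MAD (scaled to be
    consistent with a normal sigma), and points beyond n_sigma are
    dropped. The robust sigma doubles as a crude per-point uncertainty."""
    x, z = np.asarray(x, dtype=float), np.asarray(z, dtype=float)
    order = np.argsort(x)
    x, z = x[order], z[order]
    k = max(3, int(frac * len(z)) | 1)                    # odd window length
    pad = k // 2
    zp = np.pad(z, pad, mode="edge")
    trend = np.array([np.median(zp[i:i + k]) for i in range(len(z))])
    resid = z - trend
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
    keep = np.abs(resid) < n_sigma * sigma
    return x[keep], z[keep], sigma
```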

  19. Sequential bidding in day-ahead auctions for spot energy and power systems reserve

    International Nuclear Information System (INIS)

    Swider, Derk J.

    2005-01-01

    In this paper a novel approach for sequential bidding on day-ahead auction markets for spot energy and power systems reserve is presented. For the spot market a relatively simple method is considered, as a competitive market is assumed. For the reserve market one bidder is assumed to behave strategically, and the behavior of the competitors is summarized in a probability distribution of the market price. This results in a method for sequential bidding, where the bidding prices and capacities on the spot and reserve markets are calculated by maximizing a stochastic non-linear objective function of expected profit. An exemplary application shows that the trading sequence leads to bidding capacities and prices that increase in the reverse rank order of the markets. Hence, the consideration of a defined trading sequence greatly influences the mathematical representation of the optimal bidding behavior under price uncertainty in day-ahead auctions for spot energy and power systems reserve. (Author)

  20. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.

    2014-12-15

    This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
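
    The single-agent building block that the multi-agent framework above extends is Wald's sequential probability ratio test; a minimal sketch using the standard threshold approximations for error probabilities alpha and beta:

```python
import numpy as np

def sprt(log_lr_stream, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test: accumulate log-likelihood
    ratios until the evidence crosses one of two thresholds, minimising
    the expected number of observations for given error probabilities."""
    a = np.log(beta / (1.0 - alpha))        # lower threshold -> accept H0
    b = np.log((1.0 - beta) / alpha)        # upper threshold -> accept H1
    llr, n = 0.0, 0
    for x in log_lr_stream:
        n += 1
        llr += x
        if llr >= b:
            return "accept H1", n
        if llr <= a:
            return "accept H0", n
    return "undecided", n
```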

  1. Validation and uncertainty analysis of a pre-treatment 2D dose prediction model

    Science.gov (United States)

    Baeza, Jose A.; Wolfs, Cecile J. A.; Nijsten, Sebastiaan M. J. J. G.; Verhaegen, Frank

    2018-02-01

    Independent verification of complex treatment delivery with megavolt photon beam radiotherapy (RT) has been effectively used to detect and prevent errors. This work presents the validation and uncertainty analysis of a model that predicts 2D portal dose images (PDIs) without a patient or phantom in the beam. The prediction model is based on an exponential point dose model with separable primary and secondary photon fluence components. The model includes a scatter kernel, off-axis ratio map, transmission values and penumbra kernels for beam-delimiting components. These parameters were derived through a model fitting procedure supplied with point dose and dose profile measurements of radiation fields. The model was validated against a treatment planning system (TPS; Eclipse) and radiochromic film measurements for complex clinical scenarios, including volumetric modulated arc therapy (VMAT). Confidence limits on fitted model parameters were calculated based on simulated measurements. A sensitivity analysis was performed to evaluate the effect of the parameter uncertainties on the model output. For the maximum uncertainty, the maximum deviating measurement sets were propagated through the fitting procedure and the model. The overall uncertainty was assessed using all simulated measurements. The validation of the prediction model against the TPS and the film showed a good agreement, with on average 90.8% and 90.5% of pixels passing a (2%, 2 mm) global gamma analysis respectively, with a low dose threshold of 10%. The maximum and overall uncertainty of the model is dependent on the type of clinical plan used as input. The results can be used to study the robustness of the model. A model for predicting accurate 2D pre-treatment PDIs in complex RT scenarios can be used clinically and its uncertainties can be taken into account.

  2. Uncertainty governance: an integrated framework for managing and communicating uncertainties

    International Nuclear Information System (INIS)

    Umeki, H.; Naito, M.; Takase, H.

    2004-01-01

    Treatment of uncertainty, that is, reasoning with imperfect information, is widely recognised as being of great importance within performance assessment (PA) of geological disposal, mainly because of the time scales of interest and the spatial heterogeneity that the geological environment exhibits. A wide range of formal methods has been proposed for the optimal processing of incomplete information. Many of these rely on numerical information, in particular the frequency-based concept of probability, to handle the imperfections. However, taking quantitative information as the basis for models that handle imperfect information merely creates another problem: how to provide that quantitative information. In many situations this second problem proves more resistant to solution, and in recent years several authors have explored ingenious approaches grounded in well-founded formalisms such as Bayesian probability theory, possibility theory, and the Dempster-Shafer theory of evidence. These methods, while drawing inspiration from quantitative methods, do not require the kind of complete numerical information that quantitative methods demand. Instead they provide information that, though less precise, is often, if not sufficient, the best that can be achieved. Rather than searching for a single best method for handling all imperfect information, our strategy for uncertainty management, that is, recognition and evaluation of uncertainties associated with PA followed by planning and implementation of measures to reduce them, is to use whichever method best fits the problem at hand. Such an eclectic position leads naturally to integration of the different formalisms. While uncertainty management based on the combination of semi-quantitative methods forms an important part of our framework for uncertainty governance, it only solves half of the problem

  3. Sequential charged particle reaction

    International Nuclear Information System (INIS)

    Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo

    2004-01-01

    The effective cross sections for producing sequential reaction products in F82H, pure vanadium, and LiF under 14.9-MeV neutron irradiation were obtained and compared with estimated values. Since sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections depend on both the target nuclei and the material composition. The effective cross sections were also estimated using the EAF libraries and compared with the experimental ones; large discrepancies were found between estimated and experimental values. Additionally, we show the contribution of sequential reactions to the induced activity and dose rate in the boundary region with water. The present study clarifies that sequential reactions are of great importance for evaluating the dose rates around the surfaces of cooling pipes and the activated corrosion products. (author)

  4. Effects of average uncertainty and trial-type frequency on choice response time: A hierarchical extension of Hick/Hyman Law.

    Science.gov (United States)

    Mordkoff, J Toby

    2017-12-01

    Hick/Hyman Law is the linear relationship between average uncertainty and mean response time across entire blocks of trials. While unequal trial-type frequencies within blocks can be used to manipulate average uncertainty, the current version of the law does not apply to or account for the differences in mean response time across the different trial types contained in a block. Other simple predictors of the effects of trial-type frequency also fail to produce satisfactory fits. In an attempt to resolve this limitation, the present work takes a hierarchical approach, first fitting the block-level data using average uncertainty (i.e., Hick/Hyman Law is given priority), then fitting the remaining trial-level differences using various versions of trial-type frequency. The model that employed the relative probability of occurrence as the second-layer predictor produced very strong fits, thereby extending Hick/Hyman Law to the level of trial types within blocks. The advantages and implications of this hierarchical model are briefly discussed.
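    The block-level layer of such a fit is straightforward to reproduce. The sketch below computes average uncertainty as the entropy of the trial-type frequencies and fits the linear Hick/Hyman relation on synthetic data; the intercept, slope, and the frequency-based second-layer predictor are illustrative stand-ins, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Trial-type probabilities defining three blocks with unequal frequencies.
blocks = [np.array([0.5, 0.5]),
          np.array([0.7, 0.2, 0.1]),
          np.array([0.25, 0.25, 0.25, 0.25])]

def avg_uncertainty(p):
    return -(p * np.log2(p)).sum()     # bits per trial, averaged over the block

H = np.array([avg_uncertainty(p) for p in blocks])

# Synthetic block-level mean RTs following RT = a + b*H (values hypothetical).
a_true, b_true = 300.0, 150.0          # ms intercept and ms/bit slope
rt_block = a_true + b_true * H + rng.normal(0, 5, H.size)

# Layer 1: Hick/Hyman Law fitted at the block level.
b_fit, a_fit = np.polyfit(H, rt_block, 1)
print(a_fit, b_fit)

# Layer 2 (sketch): remaining trial-type differences within a block would be
# regressed on a frequency-based predictor such as the relative probability
# of occurrence; here we only construct the predictor.
relative_prob = [p / p.max() for p in blocks]
```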

  5. SPATIAL UNCERTAINTY OF NUTRIENT LOSS BY EROSION IN SUGARCANE HARVESTING SCENARIOS

    Directory of Open Access Journals (Sweden)

    Patrícia Gabarra Mendonça

    2015-08-01

    The assessment of spatial uncertainty in the prediction of nutrient losses by erosion associated with landscape models is an important tool for soil conservation planning. The purpose of this study was to evaluate the spatial and local uncertainty in predicting depletion rates of soil nutrients (P, K, Ca, and Mg) by soil erosion from green and burnt sugarcane harvesting scenarios, using sequential Gaussian simulation (SGS). A regular grid with equidistant intervals of 50 m (626 points) was established in the 200-ha study area, in Tabapuã, São Paulo, Brazil. The rate of soil depletion (SD) was calculated from the relation between the nutrient concentration in the sediments and the chemical properties of the original soil for all grid points. The data were subjected to descriptive statistical and geostatistical analysis. The mean SD rate for all nutrients was higher in the slash-and-burn than in the green cane harvest scenario (Student's t-test, p < 0.05), in the order Ca > Mg > K > P. The SD rate was highest in areas with greater slope. Lower uncertainties were associated with the areas with higher SD and steeper slopes. Spatial uncertainties were highest for areas of transition between concave and convex landforms.

  6. Eyewitness confidence in simultaneous and sequential lineups: a criterion shift account for sequential mistaken identification overconfidence.

    Science.gov (United States)

    Dobolyi, David G; Dodson, Chad S

    2013-12-01

    Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  7. Quantification of uncertainty in first-principles predicted mechanical properties of solids: Application to solid ion conductors

    Science.gov (United States)

    Ahmad, Zeeshan; Viswanathan, Venkatasubramanian

    2016-08-01

    Computationally-guided material discovery is being increasingly employed using a descriptor-based screening through the calculation of a few properties of interest. A precise understanding of the uncertainty associated with first-principles density functional theory calculated property values is important for the success of descriptor-based screening. The Bayesian error estimation approach has been built into several recently developed exchange-correlation functionals, which allows an estimate of the uncertainty associated with properties related to the ground state energy, for example, adsorption energies. Here, we propose a robust and computationally efficient method for quantifying uncertainty in mechanical properties, which depend on the derivatives of the energy. The procedure involves calculating energies around the equilibrium cell volume with different strains and fitting the obtained energies to the corresponding energy-strain relationship. At each strain, we use, instead of a single energy, an ensemble of energies, giving us an ensemble of fits and thereby an ensemble of mechanical properties, whose spread can be used to quantify the uncertainty. The generation of the ensemble of energies is only a post-processing step involving a perturbation of parameters of the exchange-correlation functional and solving for the energy non-self-consistently. The proposed method is computationally efficient and provides a more robust uncertainty estimate compared to the approach of self-consistent calculations employing several different exchange-correlation functionals. We demonstrate the method by calculating the uncertainty bounds for several materials belonging to different classes and having different structures. We show that the calculated uncertainty bounds the property values obtained using three different GGA functionals: PBE, PBEsol, and RPBE. Finally, we apply the approach to calculate the uncertainty
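    The ensemble-of-fits procedure can be mimicked with synthetic energies. In the sketch below, the perturbed non-self-consistent energies are replaced by random draws around a hypothetical energy-strain curve; each ensemble member gets its own quadratic fit, and the spread of the fitted stiffnesses plays the role of the uncertainty estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
strains = np.linspace(-0.02, 0.02, 9)

# Stand-in for the non-self-consistent ensemble energies; in practice each row
# would come from perturbing the exchange-correlation parameters.
k_true = 150.0                                   # hypothetical stiffness, eV/strain^2
members = k_true + rng.normal(0.0, 10.0, 500)    # ensemble spread in stiffness
energies = 0.5 * np.outer(members, strains**2) \
           + rng.normal(0.0, 1e-4, (500, strains.size))

# One energy-strain fit per ensemble member gives an ensemble of stiffnesses.
ks = np.array([2.0 * np.polyfit(strains, e, 2)[0] for e in energies])
print(ks.mean(), ks.std())   # central value and uncertainty estimate
```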

  8. Assessment and visualization of uncertainty for countrywide soil organic matter map of Hungary using local entropy

    Science.gov (United States)

    Szatmári, Gábor; Pásztor, László

    2016-04-01

    Uncertainty is a general term expressing our imperfect, but acknowledged, knowledge in describing an environmental process (Bárdossy and Fodor, 2004). Sampling, laboratory measurements, models, and so on are all subject to uncertainty. Effective quantification and visualization of uncertainty is indispensable to stakeholders (e.g. policy makers, society). Soil-related features and their spatial models should be a particular target of uncertainty assessment because their inferences are further used in modelling and decision-making processes. The aim of our present study was to assess and effectively visualize the local uncertainty of the countrywide soil organic matter (SOM) spatial distribution model of Hungary using geostatistical tools and concepts. The Hungarian Soil Information and Monitoring System's SOM data (approximately 1,200 observations) and environmentally related, spatially exhaustive secondary information (i.e. digital elevation model, climatic maps, MODIS satellite images, and a geological map) were used to model the countrywide SOM spatial distribution by regression kriging. It would be common to use the calculated estimation (or kriging) variance as a measure of uncertainty; however, the normality and homoscedasticity hypotheses had to be rejected according to our preliminary analysis of the data. Therefore, a normal score transformation and a sequential stochastic simulation approach were introduced to model and assess the local uncertainty. Five hundred equally probable realizations (i.e. stochastic images) were generated. This number of stochastic images is sufficient to provide a model of uncertainty at each location, which is a complete description of uncertainty in geostatistics (Deutsch and Journel, 1998). Furthermore, these models can be applied, for example, to contour the probability of any event; such maps can be regarded as goal-oriented digital soil maps and are of interest for agricultural management and decision making as well.
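    Given a stack of equally probable realizations, a local (normalized) entropy can be computed per grid node as a measure of uncertainty. The sketch below uses random numbers in place of actual sequential-simulation output, and quartile classes chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in for 500 simulated realizations at 626 grid points.
realizations = rng.normal(size=(500, 626))

# Discretize values into classes (here: quartiles of the global distribution).
edges = np.quantile(realizations, [0.25, 0.5, 0.75])
classes = np.digitize(realizations, edges)        # class labels 0..3

def local_entropy(col, n_classes=4):
    """Normalized entropy of the class frequencies at one location."""
    p = np.bincount(col, minlength=n_classes) / col.size
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(n_classes)   # in [0, 1]

H = np.array([local_entropy(classes[:, j]) for j in range(classes.shape[1])])
print(H.min(), H.max())   # map H over the grid to visualize local uncertainty
```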

  9. A sequential decision framework for increasing college students' support for organ donation and organ donor registration.

    Science.gov (United States)

    Peltier, James W; D'Alessandro, Anthony M; Dahl, Andrew J; Feeley, Thomas Hugh

    2012-09-01

    Despite the fact that college students support social causes, this age group has underparticipated in organ donor registration. Little research attention has been given to understanding deeper, higher-order relationships between the antecedent attitudes toward and perceptions of organ donation and registration behavior. To test a process model useful for understanding the sequential ordering of information necessary for moving college students along a hierarchical decision-making continuum from awareness to support to organ donor registration. The University of Wisconsin organ procurement organization collaborated with the Collegiate American Marketing Association on a 2-year grant funded by the US Health Resources and Services Administration. A total of 981 association members responded to an online questionnaire. The 5 antecedent measures were awareness of organ donation, need acknowledgment, benefits of organ donation, social support, and concerns about organ donation. The 2 consequence variables were support for organ donation and organ donation registration. Structural equation modeling indicated that 5 of 10 direct antecedent pathways led significantly into organ donation support and registration. The impact of the nonsignificant variables was captured via indirect effects through other decision variables. Model fit statistics were good: the goodness of fit index was .998, the adjusted goodness of fit index was .992, and the root mean square error of approximation was .001. This sequential decision-making model provides insight into the need to enhance the acceptance of organ donation and organ donor registration through a series of communications to move people from awareness to behavior.

  10. Monkeys and humans take local uncertainty into account when localizing a change.

    Science.gov (United States)

    Devkar, Deepna; Wright, Anthony A; Ma, Wei Ji

    2017-09-01

    Since sensory measurements are noisy, an observer is rarely certain about the identity of a stimulus. In visual perception tasks, observers generally take their uncertainty about a stimulus into account when doing so helps task performance. Whether the same holds in visual working memory tasks is largely unknown. Ten human and two monkey subjects localized a single change in orientation between a sample display containing three ellipses and a test display containing two ellipses. To manipulate uncertainty, we varied the reliability of orientation information by making each ellipse more or less elongated (two levels); reliability was independent across the stimuli. In both species, a variable-precision encoding model equipped with an "uncertainty-indifferent" decision rule, which uses only the noisy memories, fitted the data poorly. In both species, a much better fit was provided by a model in which the observer also takes the levels of reliability-driven uncertainty associated with the memories into account. In particular, a measured change in a low-reliability stimulus was given lower weight than the same change in a high-reliability stimulus. We did not find strong evidence that observers took reliability-independent variations in uncertainty into account. Our results illustrate the importance of studying the decision stage in comparison tasks and provide further evidence for evolutionary continuity of working memory systems between monkeys and humans.

  11. SAMMY, Multilevel R-Matrix Fits to Neutron and Charged-Particle Cross-Section Data Using Bayes' Equations

    International Nuclear Information System (INIS)

    Larson, Nancy M.

    2007-01-01

    Please see the home page http://www.ornl.gov/sci/nuclear_science_technology/nuclear_data/ for the ORNL Nuclear Data Group and links from there to the SAMMY home page (errata are listed at http://www.ornl.gov/sci/nuclear_science_technology/nuclear_data/sammy/ErrataDetail.html; see References in Section 10 below). 2 - Method of solution: Bayes' Theorem (generalized least squares) is used to find the 'best fit' values of parameters and the associated parameter covariance matrix. In the RRR, different data sets, or different energy ranges of the same data set, may be analyzed either simultaneously (though the implementation is somewhat awkward) or sequentially, with results effectively equivalent to those of a simultaneous analysis, provided the output parameter values and covariance matrix from the first analysis are used as input to the second. Also included are expeditious methods (the 'propagated uncertainty parameter' and 'implicit data covariance' procedures) of including the correct data covariance matrix within the fitting procedure. In the RRR, sequential analysis is the default mode, though analyses can also be performed simultaneously. In the URR, the default mode is simultaneous analysis, though capability for sequential analyses is also available. 3 - Restrictions on the complexity of the problem: None noted
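    The equivalence of sequential and simultaneous analyses that SAMMY exploits is easy to demonstrate for a linear-Gaussian model (SAMMY's actual R-matrix machinery is far more involved). In the sketch below, two synthetic data sets are analyzed sequentially, with the first posterior used as the second prior, and also simultaneously; the two results coincide.

```python
import numpy as np

def bayes_gls_update(x0, P0, G, V, y):
    """One generalized-least-squares (Bayes) update of parameters x0 with
    covariance P0, given data y with sensitivity G and covariance V."""
    P1 = np.linalg.inv(np.linalg.inv(P0) + G.T @ np.linalg.inv(V) @ G)
    x1 = x0 + P1 @ G.T @ np.linalg.inv(V) @ (y - G @ x0)
    return x1, P1

rng = np.random.default_rng(3)
x_true = np.array([1.0, -2.0])
G1, G2 = rng.normal(size=(5, 2)), rng.normal(size=(4, 2))
V1, V2 = 0.01 * np.eye(5), 0.01 * np.eye(4)
y1 = G1 @ x_true + rng.multivariate_normal(np.zeros(5), V1)
y2 = G2 @ x_true + rng.multivariate_normal(np.zeros(4), V2)
x0, P0 = np.zeros(2), 10.0 * np.eye(2)

# Sequential: analyze data set 1, then feed its output into data set 2.
x_seq, P_seq = bayes_gls_update(*bayes_gls_update(x0, P0, G1, V1, y1), G2, V2, y2)
# Simultaneous: stack both data sets into a single analysis.
G, y = np.vstack([G1, G2]), np.concatenate([y1, y2])
V = 0.01 * np.eye(9)          # block diagonal of V1 and V2
x_sim, P_sim = bayes_gls_update(x0, P0, G, V, y)
assert np.allclose(x_seq, x_sim) and np.allclose(P_seq, P_sim)
```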

  12. Bayesian MARS for uncertainty quantification in stochastic transport problems

    International Nuclear Information System (INIS)

    Stripling, Hayes F.; McClarren, Ryan G.

    2011-01-01

    We present a method for estimating solutions to partial differential equations with uncertain parameters using a modification of the Bayesian Multivariate Adaptive Regression Splines (BMARS) emulator. The BMARS algorithm uses Markov chain Monte Carlo (MCMC) to construct a basis function composed of polynomial spline functions, for which derivatives and integrals are straightforward to compute. We use these calculations and a modification of the curve-fitting BMARS algorithm to search for a basis function (response surface) which, in combination with its derivatives/integrals, satisfies a governing differential equation and specified boundary condition. We further show that this fit can be improved by enforcing a conservation or other physics-based constraint. Our results indicate that estimates to solutions of simple first order partial differential equations (without uncertainty) can be efficiently computed with very little regression error. We then extend the method to estimate uncertainties in the solution to a pure absorber transport problem in a medium with uncertain cross-section. We describe and compare two strategies for propagating the uncertain cross-section through the BMARS algorithm; the results from each method are in close comparison with analytic results. We discuss the scalability of the algorithm to parallel architectures and the applicability of the two strategies to larger problems with more degrees of uncertainty. (author)

  13. A Procedure for the Sequential Determination of Radionuclides in Phosphogypsum: Liquid Scintillation Counting and Alpha Spectrometry for 210Po, 210Pb, 226Ra, Th and U Radioisotopes

    International Nuclear Information System (INIS)

    2014-01-01

    Since 2004, the Environment Programme of the IAEA has included activities aimed at the development of a set of procedures for the determination of radionuclides in terrestrial environmental samples. Reliable, comparable and 'fit for purpose' results are essential requirements for any decision based on analytical measurements. For the analyst, tested and validated analytical procedures are extremely important tools for the production of such analytical data. For maximum utility, such procedures should be comprehensive, clearly formulated, and readily available to both the analyst and the customer for reference. In this publication, a combined procedure for the sequential determination of 210Po, 210Pb, 226Ra, Th and U radioisotopes in phosphogypsum is described. The method is based on the dissolution of small amounts of phosphogypsum by microwave digestion, followed by sequential separation of 210Po, 210Pb, Th and U radioisotopes by selective extraction chromatography using Sr, TEVA and UTEVA resins. Radium-226 is separated from interfering elements using Ba(Ra)SO4 co-precipitation. Lead-210 is determined by liquid scintillation counting. The alpha source of 210Po is prepared by autodeposition on a silver plate. The alpha sources of Th and U are prepared by electrodeposition on a stainless steel plate. A comprehensive methodology for the calculation of results, including the quantification of measurement uncertainty, was also developed. The procedure is introduced as a recommended procedure and validated in terms of trueness, repeatability and reproducibility in accordance with ISO guidelines

  14. Sequential decision making in computational sustainability via adaptive submodularity

    Science.gov (United States)

    Krause, Andreas; Golovin, Daniel; Converse, Sarah J.

    2015-01-01

    Many problems in computational sustainability require making a sequence of decisions in complex, uncertain environments. Such problems are generally notoriously difficult. In this article, we review the recently discovered notion of adaptive submodularity, an intuitive diminishing returns condition that generalizes the classical notion of submodular set functions to sequential decision problems. Problems exhibiting the adaptive submodularity property can be efficiently and provably near-optimally solved using simple myopic policies. We illustrate this concept in several case studies of interest in computational sustainability: First, we demonstrate how it can be used to efficiently plan for resolving uncertainty in adaptive management scenarios. Secondly, we show how it applies to dynamic conservation planning for protecting endangered species, a case study carried out in collaboration with the US Geological Survey and the US Fish and Wildlife Service.
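    The myopic greedy policy at the heart of adaptive submodularity can be sketched on a toy monitoring problem (all names and numbers invented): each surveyed site reveals its hidden state, and the policy repeatedly picks the site with the largest expected marginal gain. For the separable utility used here the gain equals the prior probability; in genuinely adaptive problems it would be recomputed conditional on the observations made so far.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy adaptive-monitoring problem: surveying a site reveals its hidden state
# (species present or not); utility is the number of occupied sites found.
N_SITES, BUDGET = 20, 5
presence_prob = rng.uniform(0.1, 0.9, N_SITES)    # prior belief per site
true_state = rng.random(N_SITES) < presence_prob  # hidden ground truth

surveyed, found = set(), 0
for _ in range(BUDGET):
    # Myopic greedy rule: pick the unsurveyed site with the largest expected
    # marginal gain (here simply the prior presence probability).
    best = max((i for i in range(N_SITES) if i not in surveyed),
               key=lambda i: presence_prob[i])
    surveyed.add(best)
    found += int(true_state[best])    # observing the state is the adaptive step
print(sorted(surveyed), found)
```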

  15. Unbiased determination of polarized parton distributions and their uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Ball, Richard D. [Tait Institute, University of Edinburgh, JCMB, KB, Mayfield Rd, Edinburgh EH9 3JZ, Scotland (United Kingdom); Forte, Stefano, E-mail: forte@mi.infn.it [Dipartimento di Fisica, Università di Milano and INFN, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Guffanti, Alberto [The Niels Bohr International Academy and Discovery Center, The Niels Bohr Institute, Blegdamsvej 17, DK-2100 Copenhagen (Denmark); Nocera, Emanuele R. [Dipartimento di Fisica, Università di Milano and INFN, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ridolfi, Giovanni [Dipartimento di Fisica, Università di Genova and INFN, Sezione di Genova, Genova (Italy); Rojo, Juan [PH Department, TH Unit, CERN, CH-1211 Geneva 23 (Switzerland)

    2013-09-01

    We present a determination of a set of polarized parton distributions (PDFs) of the nucleon, at next-to-leading order, from a global set of longitudinally polarized deep-inelastic scattering data: NNPDFpol1.0. The determination is based on the NNPDF methodology: a Monte Carlo approach, with neural networks used as unbiased interpolants, previously applied to the determination of unpolarized parton distributions, and designed to provide a faithful and statistically sound representation of PDF uncertainties. We present our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the polarized evolution equations and its benchmarking, and the method used to compute physical observables. We review the NNPDF methodology for parametrization and fitting of neural networks, the algorithm used to determine the optimal fit, and its adaptation to the polarized case. We finally present our set of polarized parton distributions. We discuss its statistical properties, test for its stability upon various modifications of the fitting procedure, and compare it to other recent polarized parton sets, and in particular obtain predictions for polarized first moments of PDFs based on it. We find that the uncertainties on the gluon, and to a lesser extent the strange PDF, were substantially underestimated in previous determinations.

  16. Unbiased determination of polarized parton distributions and their uncertainties

    International Nuclear Information System (INIS)

    Ball, Richard D.; Forte, Stefano; Guffanti, Alberto; Nocera, Emanuele R.; Ridolfi, Giovanni; Rojo, Juan

    2013-01-01

    We present a determination of a set of polarized parton distributions (PDFs) of the nucleon, at next-to-leading order, from a global set of longitudinally polarized deep-inelastic scattering data: NNPDFpol1.0. The determination is based on the NNPDF methodology: a Monte Carlo approach, with neural networks used as unbiased interpolants, previously applied to the determination of unpolarized parton distributions, and designed to provide a faithful and statistically sound representation of PDF uncertainties. We present our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the polarized evolution equations and its benchmarking, and the method used to compute physical observables. We review the NNPDF methodology for parametrization and fitting of neural networks, the algorithm used to determine the optimal fit, and its adaptation to the polarized case. We finally present our set of polarized parton distributions. We discuss its statistical properties, test for its stability upon various modifications of the fitting procedure, and compare it to other recent polarized parton sets, and in particular obtain predictions for polarized first moments of PDFs based on it. We find that the uncertainties on the gluon, and to a lesser extent the strange PDF, were substantially underestimated in previous determinations

  17. Remarks on sequential designs in risk assessment

    International Nuclear Information System (INIS)

    Seidenfeld, T.

    1982-01-01

    The special merits of sequential designs are reviewed in light of particular challenges that attend risk assessment for human populations. The kinds of 'statistical inference' are distinguished, and the design problem pursued is the clash between the Neyman-Pearson and Bayesian programs of sequential design. The value of sequential designs is discussed, and Neyman-Pearson versus Bayesian sequential designs are probed in particular. Finally, caveats associated with sequential designs are considered, especially in relation to utilitarianism

  18. Parametric uncertainty modeling for robust control

    DEFF Research Database (Denmark)

    Rasmussen, K.H.; Jørgensen, Sten Bay

    1999-01-01

    The dynamic behaviour of a non-linear process can often be approximated with a time-varying linear model. In the presented methodology the dynamics are modeled non-conservatively as parametric uncertainty in linear time-invariant models. The obtained uncertainty description makes it possible to perform robustness analysis on a control system using the structured singular value. The idea behind the proposed method is to fit a rational function to the parameter variation; the parameter variation can then be expressed as a linear fractional transformation (LFT). It is also discussed how the proposed method applies when the operating point changes. It is shown that a diagonal PI control structure provides robust performance towards variations in feed flow rate or feed concentrations. However, when both liquid and vapor flow delays are included, robust performance specifications cannot be satisfied with this simple diagonal control structure

  19. SURVEY DESIGN FOR SPECTRAL ENERGY DISTRIBUTION FITTING: A FISHER MATRIX APPROACH

    International Nuclear Information System (INIS)

    Acquaviva, Viviana; Gawiser, Eric; Bickerton, Steven J.; Grogin, Norman A.; Guo Yicheng; Lee, Seong-Kook

    2012-01-01

    The spectral energy distribution (SED) of a galaxy contains information on the galaxy's physical properties, and multi-wavelength observations are needed in order to measure these properties via SED fitting. In planning these surveys, optimization of the resources is essential. The Fisher Matrix (FM) formalism can be used to quickly determine the best possible experimental setup to achieve the desired constraints on the SED-fitting parameters. However, because it relies on the assumption of a Gaussian likelihood function, it is in general less accurate than other slower techniques that reconstruct the probability distribution function (PDF) from the direct comparison between models and data. We compare the uncertainties on SED-fitting parameters predicted by the FM to the ones obtained using the more thorough PDF-fitting techniques. We use both simulated spectra and real data, and consider a large variety of target galaxies differing in redshift, mass, age, star formation history, dust content, and wavelength coverage. We find that the uncertainties reported by the two methods agree within a factor of two in the vast majority (∼90%) of cases. If the age determination is uncertain, the top-hat prior in age used in PDF fitting to prevent each galaxy from being older than the universe needs to be incorporated in the FM, at least approximately, before the two methods can be properly compared. We conclude that the FM is a useful tool for astronomical survey design.
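    For a Gaussian likelihood, the FM forecast reduces to a Jacobian and a matrix inverse. The sketch below forecasts parameter uncertainties for a hypothetical two-parameter power-law SED observed in five bands; the bands, fiducial values, and photometric errors are invented, and real SED-fitting involves many more parameters and priors.

```python
import numpy as np

# Toy "SED": flux in a few bands modelled as f(lam) = A * (lam/0.55)^beta.
wavelengths = np.array([0.35, 0.55, 0.80, 1.25, 2.2])  # microns, hypothetical bands
A, beta = 1.0, -1.5                                    # fiducial parameters
sigma = 0.05 * np.ones_like(wavelengths)               # assumed photometric errors

def model(A, beta):
    return A * (wavelengths / 0.55) ** beta

# Numerical Jacobian of the model w.r.t. (A, beta) at the fiducial point.
eps = 1e-6
J = np.column_stack([(model(A + eps, beta) - model(A, beta)) / eps,
                     (model(A, beta + eps) - model(A, beta)) / eps])

F = J.T @ np.diag(1.0 / sigma**2) @ J      # Fisher matrix (Gaussian likelihood)
forecast_sigmas = np.sqrt(np.diag(np.linalg.inv(F)))
print(forecast_sigmas)                     # predicted 1-sigma errors on A and beta
```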

  20. Sequential lineup laps and eyewitness accuracy.

    Science.gov (United States)

    Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A

    2011-08-01

    Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from first to second lap, when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single versus double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.

  1. REML/BLUP and sequential path analysis in estimating genotypic values and interrelationships among simple maize grain yield-related traits.

    Science.gov (United States)

    Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q

    2017-03-22

    Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions on genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the cause-and-effect relationships among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and the genotype × environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which makes direct selection for this trait difficult. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with excellent fit (R² > 0.9) and small residual effects. Sequential path analysis is thus effective in the evaluation of maize-breeding trials.

  2. Robustness of the Sequential Lineup Advantage

    Science.gov (United States)

    Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.

    2009-01-01

    A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…

  3. [Method for optimal sensor placement in water distribution systems with nodal demand uncertainties].

    Science.gov (United States)

    Liu, Shu-Ming; Wu, Xue; Ouyang, Le-Yan

    2013-08-01

    The notion of identification fitness was proposed for optimizing sensor placement in water distribution systems. Nondominated Sorting Genetic Algorithm II was used to find the Pareto front between the minimum overlap of possible detection times of two events and the best probability of detection, taking nodal demand uncertainties into account. The methodology was applied to an example network. The solutions show that the probability of detection and the number of possible locations are not remarkably affected by nodal demand uncertainties, but the source identification accuracy declines when nodal demand uncertainties are present.

  4. An integrated, probabilistic model for improved seasonal forecasting of agricultural crop yield under environmental uncertainty

    Directory of Open Access Journals (Sweden)

    Nathaniel K. Newlands

    2014-06-01

    We present a novel forecasting method for generating agricultural crop yield forecasts at the seasonal and regional scale, integrating agroclimate variables and remotely sensed indices. The method devises a multivariate statistical model to compute bias and uncertainty in forecasted yield at the Census of Agricultural Region (CAR) scale across the Canadian Prairies. It uses robust variable selection to choose the best predictors within spatial subregions. Markov chain Monte Carlo (MCMC) simulation and random forest machine learning techniques are then integrated to generate sequential forecasts through the growing season. The model was cross-validated by hindcasting/backcasting and comparing its forecasts against available historical data (1987-2011) for spring wheat (Triticum aestivum L.). It was also validated for the 2012 growing season by comparing its forecast skill at the CAR, provincial, and Canadian Prairie region scales against available statistical survey data. Forecasted wheat yields departed from observations by 1-4% (under-estimation) in mid-season and by 1% (over-estimation) at the end of the growing season. This integrated methodology offers a consistent, generalizable approach for sequentially forecasting crop yield at the regional scale. It provides a statistically robust yet flexible way to adjust concurrently to data-rich and data-sparse situations, to adaptively select different predictors of yield under changing levels of environmental uncertainty, and to update forecasts sequentially as new data become available. It also provides additional statistical support for assessing the accuracy and reliability of model-based crop yield forecasts in time and space.

  5. Exploration and extension of an improved Riemann track fitting algorithm

    Science.gov (United States)

    Strandlie, A.; Frühwirth, R.

    2017-09-01

    Recently, a new Riemann track fit which operates on translated and scaled measurements has been proposed. This study shows that the new Riemann fit is virtually as precise as popular approaches such as the Kalman filter or an iterative non-linear track fitting procedure, and significantly more precise than other, non-iterative circular track fitting approaches over a large range of measurement uncertainties. The fit is then extended in two directions: first, the measurements are allowed to lie on plane sensors of arbitrary orientation; second, the full error propagation from the measurements to the estimated circle parameters is computed. The covariance matrix of the estimated track parameters can therefore be computed without recourse to asymptotic properties, and is consequently valid for any number of observations. It does, however, assume normally distributed measurement errors. The calculations are validated on a simulated track sample and show excellent agreement with the theoretical expectations.
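    The basic idea behind any Riemann fit, which the paper refines with translated and scaled measurements and full error propagation, is the mapping of circle points onto a plane on a paraboloid. A bare-bones, unweighted version is sketched below on synthetic hits; the circle parameters and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(11)

# Noisy hits on a circle (centre (2, -1), radius 3), stand-ins for measurements.
x0, y0, r = 2.0, -1.0, 3.0
phi = rng.uniform(0.0, np.pi, 30)
x = x0 + r * np.cos(phi) + rng.normal(0.0, 0.01, phi.size)
y = y0 + r * np.sin(phi) + rng.normal(0.0, 0.01, phi.size)

# Riemann-fit idea: lift the points onto the paraboloid z = x^2 + y^2.
# Points on a circle then lie on the plane
#   z = 2*x0*x + 2*y0*y + (r^2 - x0^2 - y0^2),
# so a linear least-squares plane fit recovers the circle parameters.
z = x**2 + y**2
A = np.column_stack([x, y, np.ones_like(x)])
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
xc, yc = coef[0] / 2.0, coef[1] / 2.0
radius = np.sqrt(coef[2] + xc**2 + yc**2)
print(xc, yc, radius)
```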

  6. Multi-agent sequential hypothesis testing

    KAUST Repository

    Kim, Kwang-Ki K.; Shamma, Jeff S.

    2014-01-01

    The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to the belief states or the information states, depending on whether the private noisy measurements are conditionally independent or correlated.

  7. Sequential stochastic optimization

    CERN Document Server

    Cairoli, Renzo

    1996-01-01

    Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-parameter martingales.

  8. AMS-02 fits dark matter

    Science.gov (United States)

    Balázs, Csaba; Li, Tong

    2016-05-01

    In this work we perform a comprehensive statistical analysis of the AMS-02 electron, positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra. To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model.

  9. AMS-02 fits dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Balázs, Csaba; Li, Tong [ARC Centre of Excellence for Particle Physics at the Tera-scale,School of Physics and Astronomy, Monash University, Melbourne, Victoria 3800 (Australia)

    2016-05-05

    In this work we perform a comprehensive statistical analysis of the AMS-02 electron, positron fluxes and the antiproton-to-proton ratio in the context of a simplified dark matter model. We include known, standard astrophysical sources and a dark matter component in the cosmic ray injection spectra. To predict the AMS-02 observables we use propagation parameters extracted from observed fluxes of heavier nuclei and the low energy part of the AMS-02 data. We assume that the dark matter particle is a Majorana fermion coupling to third generation fermions via a spin-0 mediator, and annihilating to multiple channels at once. The simultaneous presence of various annihilation channels provides the dark matter model with additional flexibility, and this enables us to simultaneously fit all cosmic ray spectra using a simple particle physics model and coherent astrophysical assumptions. Our results indicate that AMS-02 observations are not only consistent with the dark matter hypothesis within the uncertainties, but adding a dark matter contribution improves the fit to the data. Assuming, however, that dark matter is solely responsible for this improvement of the fit, it is difficult to evade the latest CMB limits in this model.

  10. CURVE LSFIT, Gamma Spectrometer Calibration by Interactive Fitting Method

    International Nuclear Information System (INIS)

    Olson, D.G.

    1992-01-01

    1 - Description of program or function: CURVE and LSFIT are interactive programs designed to obtain the best data fit to an arbitrary curve. CURVE finds the type of fitting routine which produces the best curve. The types of fitting routines available are linear regression, exponential, logarithmic, power, least squares polynomial, and spline. LSFIT produces a reliable calibration curve for gamma ray spectrometry by using the uncertainty value associated with each data point. LSFIT is intended for use where an entire efficiency curve is to be made, starting at 30 keV and continuing to 1836 keV. It creates calibration curves using up to three least squares polynomial fits to produce the best curve for photon energies above 120 keV, and a spline function to combine these fitted points with a best fit for points below 120 keV. 2 - Method of solution: The quality of fit is tested by comparing the measured y-value to the y-value calculated from the fitted curve. The fractional difference between these two values is printed for the evaluation of the quality of the fit. 3 - Restrictions on the complexity of the problem - Maxima of: 2000 data points in the calibration curve output (LSFIT); 30 input data points; 3 least squares polynomial fits (LSFIT). The least squares polynomial fit requires that the number of data points used exceed the degree of fit by at least two

  11. Exploring the sequential lineup advantage using WITNESS.

    Science.gov (United States)

    Goodsell, Charles A; Gronlund, Scott D; Carlson, Curt A

    2010-12-01

    Advocates claim that the sequential lineup is an improvement over simultaneous lineup procedures, but no formal (quantitatively specified) explanation exists for why it is better. The computational model WITNESS (Clark, Appl Cogn Psychol 17:629-654, 2003) was used to develop theoretical explanations for the sequential lineup advantage. In its current form, WITNESS produced a sequential advantage only by pairing conservative sequential choosing with liberal simultaneous choosing. However, this combination failed to approximate four extant experiments that exhibited large sequential advantages. Two of these experiments became the focus of our efforts because the data were uncontaminated by likely suspect position effects. Decision-based and memory-based modifications to WITNESS approximated the data and produced a sequential advantage. The next step is to evaluate the proposed explanations and modify public policy recommendations accordingly.

  12. Bayesian analysis for uncertainty estimation of a canopy transpiration model

    Science.gov (United States)

    Samanta, S.; Mackay, D. S.; Clayton, M. K.; Kruger, E. L.; Ewers, B. E.

    2007-04-01

    A Bayesian approach was used to fit a conceptual transpiration model to half-hourly transpiration rates for a sugar maple (Acer saccharum) stand collected over a 5-month period and probabilistically estimate its parameter and prediction uncertainties. The model used the Penman-Monteith equation with the Jarvis model for canopy conductance. This deterministic model was extended by adding a normally distributed error term. This extension enabled using Markov chain Monte Carlo simulations to sample the posterior parameter distributions. The residuals revealed approximate conformance to the assumption of normally distributed errors. However, minor systematic structures in the residuals at fine timescales suggested model changes that would potentially improve the modeling of transpiration. Results also indicated considerable uncertainties in the parameter and transpiration estimates. This simple methodology of uncertainty analysis would facilitate the deductive step during the development cycle of deterministic conceptual models by accounting for these uncertainties while drawing inferences from data.
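    The extension of a deterministic model with a normal error term and MCMC sampling can be sketched generically. Below, a random-walk Metropolis sampler explores the posterior of a toy conductance-versus-vapour-pressure-deficit model; the model form, flat priors, and proposal scales are illustrative, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(21)

# Toy stand-in for the canopy model: conductance declining with vapour
# pressure deficit D, g(D) = gmax * exp(-k*D), plus Gaussian error.
D = np.linspace(0.2, 3.0, 100)
gmax_true, k_true, sigma_true = 200.0, 0.6, 5.0
obs = gmax_true * np.exp(-k_true * D) + rng.normal(0, sigma_true, D.size)

def log_post(theta):
    gmax, k, sigma = theta
    if gmax <= 0 or k <= 0 or sigma <= 0:       # flat priors on positive values
        return -np.inf
    resid = obs - gmax * np.exp(-k * D)
    return -obs.size * np.log(sigma) - 0.5 * np.sum(resid**2) / sigma**2

theta = np.array([150.0, 0.5, 10.0])
lp = log_post(theta)
samples = []
for _ in range(20000):                           # random-walk Metropolis
    prop = theta + rng.normal(0, [2.0, 0.02, 0.2])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5000:])               # discard burn-in
print(samples.mean(axis=0), samples.std(axis=0)) # posterior means and spreads
```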

  13. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example

    Science.gov (United States)

    Helgesson, P.; Sjöstrand, H.

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
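    Using a prior distribution within a least-squares fit amounts to appending whitened prior residuals to the data residuals, after which a standard Levenberg-Marquardt solver can be applied. The sketch below does this for a single synthetic Gaussian peak using scipy's least_squares; the prior mean and covariance are invented, and the paper's Gaussian-process treatment of model defects is not included.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)

# Synthetic data: a single Gaussian peak plus noise.
x = np.linspace(-5, 5, 200)
amp, mu, wid = 100.0, 0.5, 1.2
y = amp * np.exp(-0.5 * ((x - mu) / wid) ** 2) + rng.normal(0, 2.0, x.size)
sigma_y = 2.0 * np.ones_like(x)

theta_prior = np.array([90.0, 0.0, 1.0])         # prior mean for (amp, mu, wid)
prior_cov = np.diag([20.0**2, 0.5**2, 0.3**2])   # assumed prior covariance
L = np.linalg.cholesky(np.linalg.inv(prior_cov))

def residuals(theta):
    model = theta[0] * np.exp(-0.5 * ((x - theta[1]) / theta[2]) ** 2)
    data_part = (y - model) / sigma_y
    prior_part = L.T @ (theta - theta_prior)     # prior as extra "measurements"
    return np.concatenate([data_part, prior_part])

fit = least_squares(residuals, theta_prior, method="lm")
# Approximate posterior covariance from the Jacobian at the optimum.
cov = np.linalg.inv(fit.jac.T @ fit.jac)
print(fit.x, np.sqrt(np.diag(cov)))
```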

  14. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example.

    Science.gov (United States)

    Helgesson, P; Sjöstrand, H

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.

  15. Sequential and simultaneous choices: testing the diet selection and sequential choice models.

    Science.gov (United States)

    Freidin, Esteban; Aw, Justine; Kacelnik, Alex

    2009-03-01

    We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.

  16. Sequential memory: Binding dynamics

    Science.gov (United States)

    Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail

    2015-10-01

    Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories—episodic, semantic, working, etc. We study a robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection staying in a small neighborhood of it. We show also that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.

  17. ALFA: an automated line fitting algorithm

    Science.gov (United States)

    Wesson, R.

    2016-03-01

    I present the automated line fitting algorithm, ALFA, a new code which can fit emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. In contrast to traditional emission line fitting methods which require the identification of spectral features suspected to be emission lines, ALFA instead uses a list of lines which are expected to be present to construct a synthetic spectrum. The parameters used to construct the synthetic spectrum are optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. I show that the results are in excellent agreement with those measured manually for a number of spectra. Where discrepancies exist, the manually measured fluxes are found to be less accurate than those returned by ALFA. Together with the code NEAT, ALFA provides a powerful way to rapidly extract physical information from observations, an increasingly vital function in the era of highly multiplexed spectroscopy. The two codes can deliver a reliable and comprehensive analysis of very large data sets in a few hours with little or no user interaction.

  18. Sequential reduction of external networks for the security- and short circuit monitor in power system control centers

    Energy Technology Data Exchange (ETDEWEB)

    Dietze, P [Siemens A.G., Erlangen (Germany, F.R.). Abt. ESTE

    1978-01-01

    For evaluating the effects of switching operations or simulating line, transformer, and generator outages, the influence of interconnected neighboring networks is modelled by network equivalents in the process computer. The basic passive conductivity model is produced by sequential reduction and adapted to fit the active network behavior. The reduction routine uses the admittance matrix, sparse-matrix techniques, and optimal ordering; it is applicable to process computer applications.
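    A passive equivalent of this kind can be obtained by sequentially eliminating external buses from the nodal admittance matrix (Kron reduction). The sketch below reduces an invented four-bus network to its two boundary buses; the sparsity handling and optimal ordering mentioned in the abstract are omitted.

```python
import numpy as np

def kron_reduce(Y, n_keep):
    """Sequentially eliminate the buses beyond the first n_keep from the
    nodal admittance matrix Y (standard Kron reduction)."""
    Y = Y.astype(complex).copy()
    while Y.shape[0] > n_keep:
        b = Y.shape[0] - 1                     # eliminate the last bus
        o = np.arange(b)
        # Kron formula: Y' = Y_oo - Y_ob * Y_bo / Y_bb
        Y = Y[np.ix_(o, o)] - np.outer(Y[o, b], Y[b, o]) / Y[b, b]
    return Y

# Toy 4-bus network; buses 0-1 are the boundary, 2-3 are external.
y = -1j * np.array([[0, 5, 2, 0],
                    [5, 0, 0, 4],
                    [2, 0, 0, 3],
                    [0, 4, 3, 0]])             # branch admittances (symmetric)
Y = np.diag(y.sum(axis=1)) - y                 # build nodal admittance matrix
print(kron_reduce(Y, 2))                       # passive 2-bus equivalent
```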

  19. Type Ia Supernova Intrinsic Magnitude Dispersion and the Fitting of Cosmological Parameters

    Science.gov (United States)

    Kim, A. G.

    2011-02-01

    I present an analysis for fitting cosmological parameters from a Hubble diagram of a standard candle with unknown intrinsic magnitude dispersion. The dispersion is determined from the data, simultaneously with the cosmological parameters. This contrasts with the strategies used to date. The advantages of the presented analysis are that it is done in a single fit (it is not iterative), it provides a statistically founded and unbiased estimate of the intrinsic dispersion, and its cosmological-parameter uncertainties account for the intrinsic-dispersion uncertainty. Applied to Type Ia supernovae, my strategy provides a statistical measure to test for subtypes and assess the significance of any magnitude corrections applied to the calibrated candle. Parameter bias and differences between likelihood distributions produced by the presented and currently used fitters are negligibly small for existing and projected supernova data sets.
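    The single-fit strategy amounts to keeping the log-determinant term in the likelihood so that the intrinsic dispersion is a genuine free parameter rather than something iterated to chi-squared per degree of freedom of one. The sketch below fits an offset and sigma_int simultaneously to synthetic Hubble-diagram residuals; the cosmology is reduced to a single offset for brevity, and all numbers are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)

# Toy Hubble-diagram residuals: per-object measurement errors sigma_i plus an
# unknown intrinsic dispersion sigma_int, around a single offset M.
n = 200
sigma_i = rng.uniform(0.05, 0.25, n)
M_true, sig_int_true = 0.0, 0.12
resid = rng.normal(M_true, np.sqrt(sigma_i**2 + sig_int_true**2))

def neg2lnL(theta):
    M, s_int = theta
    var = sigma_i**2 + s_int**2
    # The log(var) term is what allows sigma_int to be fitted from the data.
    return np.sum((resid - M) ** 2 / var + np.log(var))

fit = minimize(neg2lnL, x0=[0.1, 0.05], bounds=[(None, None), (1e-4, 1.0)])
print(fit.x)   # simultaneous estimates of M and sigma_int
```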

  20. Sequential Probability Ratio Tests: Conservative and Robust

    NARCIS (Netherlands)

    Kleijnen, J.P.C.; Shi, Wen

    2017-01-01

    In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output
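    A classic Wald SPRT for the mean of sequentially generated Gaussian output is easy to state (the paper's conservative and robust variants are more refined). In the sketch below the hypothesized means, error rates, and output distribution are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Wald's SPRT for the mean of Gaussian simulation output:
# H0: mu = 0 vs H1: mu = 1, known sigma = 1 (illustrative values).
alpha, beta = 0.05, 0.05
A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))  # Wald thresholds
mu0, mu1, sigma = 0.0, 1.0, 1.0

def sprt(true_mu, max_n=10_000):
    llr, n = 0.0, 0
    while B < llr < A and n < max_n:
        y = rng.normal(true_mu, sigma)       # one more sequential output
        # Log-likelihood ratio increment log[f1(y)/f0(y)].
        llr += ((y - mu0) ** 2 - (y - mu1) ** 2) / (2 * sigma**2)
        n += 1
    return ("H1" if llr >= A else "H0"), n

print(sprt(0.0), sprt(1.0))   # decision and sample size under each truth
```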

  1. Muon g-2 Estimates. Can One Trust Effective Lagrangians and Global Fits?

    Energy Technology Data Exchange (ETDEWEB)

    Benayoun, M.; DelBuono, L. [Paris VI et Paris VII Univ. (France). LPNHE; David, P. [Paris VI et Paris VII Univ. (France). LPNHE; Paris-Diderot Univ./CNRS UMR 8236 (France). LIED; Jegerlehner, F. [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2015-07-15

    Previous studies have shown that the Hidden Local Symmetry (HLS) Model, supplied with appropriate symmetry breaking mechanisms, provides an Effective Lagrangian (BHLS) which encompasses a large number of processes within a unified framework; a global fit procedure allows for a simultaneous description of the e{sup +}e{sup -} annihilation into the 6 final states - π{sup +}π{sup -}, π{sup 0}γ, ηγ, π{sup +}π{sup -}π{sup 0}, K{sup +}K{sup -}, K{sub L}K{sub S} - and includes the dipion spectrum in the τ decay and some more light meson decay partial widths. The contribution to the muon anomalous magnetic moment a{sup th}{sub μ} of these annihilation channels over the range of validity of the HLS model (up to 1.05 GeV) is found much improved compared to its partner derived from integrating the measured spectra directly. However, most spectra for the process e{sup +}e{sup -} → π{sup +}π{sup -} undergo overall scale uncertainties which dominate the other sources, and one may suspect some bias in the dipion contribution to a{sup th}{sub μ}. To deal with this, an iterated fit algorithm, shown to lead to unbiased results by a Monte Carlo study, is defined and applied successfully to the e{sup +}e{sup -} → π{sup +}π{sup -} data samples from CMD2, SND, KLOE (including the latest sample) and BaBar. The iterated fit solution is shown to be further improved and leads to a value for a{sub μ} different from the experimental value a{sup exp}{sub μ} at above the 4σ level. The contribution of the π{sup +}π{sup -} intermediate state up to 1.05 GeV to a{sub μ} derived from the iterated fit benefits from an uncertainty about 3 times smaller than the corresponding usual estimate. Therefore, global fit techniques are shown to work and lead to improved unbiased results. The main issue raised in this study and the kind of solution proposed may be of concern for other data driven methods when the data samples are dominated by global normalization uncertainties.

  2. Muon g-2 Estimates. Can One Trust Effective Lagrangians and Global Fits?

    International Nuclear Information System (INIS)

    Benayoun, M.; DelBuono, L.

    2015-07-01

    Previous studies have shown that the Hidden Local Symmetry (HLS) Model, supplied with appropriate symmetry breaking mechanisms, provides an Effective Lagrangian (BHLS) which encompasses a large number of processes within a unified framework; a global fit procedure allows for a simultaneous description of the e+e- annihilation into the 6 final states - π+π-, π0γ, ηγ, π+π-π0, K+K-, KLKS - and includes the dipion spectrum in the τ decay and some more light meson decay partial widths. The contribution to the muon anomalous magnetic moment a_μ^th of these annihilation channels over the range of validity of the HLS model (up to 1.05 GeV) is found to be much improved compared to its partner derived from integrating the measured spectra directly. However, most spectra for the process e+e- → π+π- undergo overall scale uncertainties which dominate the other sources, and one may suspect some bias in the dipion contribution to a_μ^th. However, an iterated fit algorithm, shown to lead to unbiased results by a Monte Carlo study, is defined and applied successfully to the e+e- → π+π- data samples from CMD2, SND, KLOE (including the latest sample) and BaBar. The iterated fit solution is shown to be further improved and leads to a value for a_μ that differs from the experimental value a_μ^exp by more than 4σ. The contribution of the π+π- intermediate state up to 1.05 GeV to a_μ derived from the iterated fit benefits from an uncertainty about 3 times smaller than the corresponding usual estimate. Therefore, global fit techniques are shown to work and lead to improved unbiased results. The main issue raised in this study and the kind of solution proposed may be of concern for other data-driven methods when the data samples are dominated by global normalization uncertainties.

  3. Device-independent two-party cryptography secure against sequential attacks

    International Nuclear Information System (INIS)

    Kaniewski, Jędrzej; Wehner, Stephanie

    2016-01-01

    The goal of two-party cryptography is to enable two parties, Alice and Bob, to solve common tasks without the need for mutual trust. Examples of such tasks are private access to a database, and secure identification. Quantum communication enables security for all of these problems in the noisy-storage model by sending more signals than the adversary can store in a certain time frame. Here, we initiate the study of device-independent (DI) protocols for two-party cryptography in the noisy-storage model. Specifically, we present a relatively easy to implement protocol for a cryptographic building block known as weak string erasure and prove its security even if the devices used in the protocol are prepared by the dishonest party. DI two-party cryptography is made challenging by the fact that Alice and Bob do not trust each other, which requires new techniques to establish security. We fully analyse the case of memoryless devices (for which sequential attacks are optimal) and the case of sequential attacks for arbitrary devices. The key ingredient of the proof, which might be of independent interest, is an explicit (and tight) relation between the violation of the Clauser–Horne–Shimony–Holt inequality observed by Alice and Bob and uncertainty generated by Alice against Bob who is forced to measure his system before finding out Alice’s setting (guessing with postmeasurement information). In particular, we show that security is possible for arbitrarily small violation. (paper)

  4. Device-independent two-party cryptography secure against sequential attacks

    Science.gov (United States)

    Kaniewski, Jędrzej; Wehner, Stephanie

    2016-05-01

    The goal of two-party cryptography is to enable two parties, Alice and Bob, to solve common tasks without the need for mutual trust. Examples of such tasks are private access to a database, and secure identification. Quantum communication enables security for all of these problems in the noisy-storage model by sending more signals than the adversary can store in a certain time frame. Here, we initiate the study of device-independent (DI) protocols for two-party cryptography in the noisy-storage model. Specifically, we present a relatively easy to implement protocol for a cryptographic building block known as weak string erasure and prove its security even if the devices used in the protocol are prepared by the dishonest party. DI two-party cryptography is made challenging by the fact that Alice and Bob do not trust each other, which requires new techniques to establish security. We fully analyse the case of memoryless devices (for which sequential attacks are optimal) and the case of sequential attacks for arbitrary devices. The key ingredient of the proof, which might be of independent interest, is an explicit (and tight) relation between the violation of the Clauser-Horne-Shimony-Holt inequality observed by Alice and Bob and uncertainty generated by Alice against Bob who is forced to measure his system before finding out Alice’s setting (guessing with postmeasurement information). In particular, we show that security is possible for arbitrarily small violation.

  5. Sequential lineup presentation: Patterns and policy

    OpenAIRE

    Lindsay, R C L; Mansour, Jamal K; Beaudry, J L; Leach, A-M; Bertrand, M I

    2009-01-01

    Sequential lineups were offered as an alternative to the traditional simultaneous lineup. Sequential lineups reduce incorrect lineup selections; however, the accompanying loss of correct identifications has resulted in controversy regarding adoption of the technique. We discuss the procedure and research relevant to (1) the pattern of results found using sequential versus simultaneous lineups; (2) reasons (theory) for differences in witness responses; (3) two methodological issues; and (4) im...

  6. Sequential Product of Quantum Effects: An Overview

    Science.gov (United States)

    Gudder, Stan

    2010-12-01

    This article presents an overview for the theory of sequential products of quantum effects. We first summarize some of the highlights of this relatively recent field of investigation and then provide some new results. We begin by discussing sequential effect algebras which are effect algebras endowed with a sequential product satisfying certain basic conditions. We then consider sequential products of (discrete) quantum measurements. We next treat transition effect matrices (TEMs) and their associated sequential product. A TEM is a matrix whose entries are effects and whose rows form quantum measurements. We show that TEMs can be employed for the study of quantum Markov chains. Finally, we prove some new results concerning TEMs and vector densities.

  7. Sequential Path Model for Grain Yield in Soybean

    Directory of Open Access Journals (Sweden)

    Mohammad SEDGHI

    2010-09-01

    Full Text Available This study was performed to determine some physiological traits that affect soybean's grain yield via sequential path analysis. In a factorial experiment, two cultivars (Harcor and Williams) were sown under four levels of nitrogen and two levels of weed management at the research station of Tabriz University, Iran, during 2004 and 2005. Grain yield, some yield components and physiological traits were measured. Correlation coefficient analysis showed that grain yield had significant positive and negative associations with the measured traits. A sequential path analysis was done in order to evaluate associations among grain yield and related traits by ordering the various variables in first-, second- and third-order paths on the basis of their maximum direct effects and minimal collinearity. Two first-order variables, namely number of pods per plant and pre-flowering net photosynthesis, revealed the highest direct effects on total grain yield and explained 49, 44 and 47% of the variation in grain yield based on the 2004, 2005, and combined datasets, respectively. Four traits, i.e. post-flowering net photosynthesis, plant height, leaf area index and intercepted radiation at the bottom layer of the canopy, were found to fit as second-order variables. Pre- and post-flowering chlorophyll content, main root length and intercepted radiation at the middle layer of the canopy were placed in the third-order path. From the results it was concluded that number of pods per plant and pre-flowering net photosynthesis are the best selection criteria for grain yield in soybean.

  8. Optimal Sequential Rules for Computer-Based Instruction.

    Science.gov (United States)

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  9. Parametric fitting of corneal height data to a biconic surface.

    Science.gov (United States)

    Janunts, Edgar; Kannengießer, Marc; Langenbucher, Achim

    2015-03-01

    As the average corneal shape can effectively be approximated by a conic section, a determination of the corneal shape by biconic parameters is desirable. The purpose of the paper is to introduce a straightforward mathematical approach for extracting clinically relevant parameters of the corneal surface, such as the radii of curvature and conic constants for the principal meridians and the astigmatism. A general description for modeling the ocular surfaces in a biconic form is given, based on which an implicit parametric surface fitting algorithm is introduced. The solution of the biconic fitting is obtained by a two-step sequential least-squares optimization approach with constraints. The data input can be raw information from any corneal topographer, not necessarily with a uniform data distribution. Various simulated and clinical data are studied, including surfaces with rotationally symmetric and non-symmetric geometries. The clinical data were obtained from the Pentacam (Oculus) for a patient having undergone refractive surgery. A sub-micrometer fitting accuracy was obtained for all simulated surfaces: 0.08 μm RMS fitting error at most for rotationally symmetric and 0.125 μm for non-symmetric surfaces. The astigmatism was recovered at sub-minute resolution. The equality for rotationally symmetric surfaces, and the superiority for non-symmetric surfaces, of the presented model over the widely used quadric fitting model is shown. The introduced biconic surface fitting algorithm is able to recover the apical radii of curvature and conic constants in the principal meridians. This methodology could be a platform for advanced IOL calculations and enhanced contact lens fitting. Copyright © 2014. Published by Elsevier GmbH.
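
    A simplified illustration of biconic fitting is sketched below using the explicit biconic sag equation and a generic least-squares solver; the paper's own algorithm is an implicit, constrained two-step fit, so this is only a rough analogue, and all radii, conic constants and data points are invented.

        import numpy as np
        from scipy.optimize import least_squares

        def biconic_sag(params, x, y):
            # Explicit biconic sag z(x, y) with apical radii Rx, Ry and conic constants kx, ky.
            Rx, Ry, kx, ky = params
            cx, cy = 1.0 / Rx, 1.0 / Ry
            num = cx * x**2 + cy * y**2
            den = 1.0 + np.sqrt(1.0 - (1.0 + kx) * cx**2 * x**2 - (1.0 + ky) * cy**2 * y**2)
            return num / den

        # Toy "measured" surface: 7.8 mm / 7.6 mm apical radii with mild prolate asphericity.
        rng = np.random.default_rng(0)
        x, y = rng.uniform(-3, 3, 500), rng.uniform(-3, 3, 500)
        z = biconic_sag([7.8, 7.6, -0.2, -0.1], x, y) + rng.normal(0, 1e-4, 500)

        fit = least_squares(lambda p: biconic_sag(p, x, y) - z, x0=[8.0, 8.0, 0.0, 0.0])
        print(fit.x)   # recovered Rx, Ry, kx, ky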

  10. Evaluation and uncertainty estimates of Charpy-impact data

    International Nuclear Information System (INIS)

    Stallman, F.W.

    1982-01-01

    Shifts in transition temperature and upper-shelf energy from Charpy tests are used to determine the extent of radiation embrittlement in steels. In order to determine these parameters reliably and to obtain uncertainty estimates, curve fitting procedures need to be used. The hyperbolic tangent or similar models have been proposed to fit the temperature-impact-energy curve. These models are not based on the actual fracture mechanics and are indeed poorly suited in many applications. The results may be falsified by forcing an inflexible curve through too many data points. The nonlinearity of the fit poses additional problems. In this paper, a simple linear fit is proposed. By eliminating data which are irrelevant for the determination of a given parameter, better reliability and accuracy can be achieved. Additional input parameters like fluence and irradiation temperature can be included. This is important if there is a large variation of fluence and temperature in different test specimens. The method has been tested with Charpy specimens from the NRC-HSST experiments
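
    For reference, the hyperbolic-tangent model that the paper argues against can be fitted with a generic nonlinear least-squares routine as sketched below; the temperatures and impact energies are made up for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def charpy_tanh(T, lower_shelf, upper_shelf, T0, C):
            # Impact energy vs. temperature: transition centred at T0 with width parameter C.
            return lower_shelf + 0.5 * (upper_shelf - lower_shelf) * (1 + np.tanh((T - T0) / C))

        T = np.array([-80, -60, -40, -20, 0, 20, 40, 60, 80, 100], dtype=float)  # deg C
        E = np.array([5, 7, 12, 25, 55, 90, 115, 125, 128, 130], dtype=float)    # J

        popt, pcov = curve_fit(charpy_tanh, T, E, p0=[5, 130, 10, 30])
        perr = np.sqrt(np.diag(pcov))   # 1-sigma uncertainties of the fitted parameters
        print(popt, perr)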

  11. Quantifying the uncertainty of wave energy conversion device cost for policy appraisal: An Irish case study

    International Nuclear Information System (INIS)

    Farrell, Niall; Donoghue, Cathal O’; Morrissey, Karyn

    2015-01-01

    Wave Energy Conversion (WEC) devices are at a pre-commercial stage of development with feasibility studies sensitive to uncertainties surrounding assumed input costs. This may affect decision making. This paper analyses the impact these uncertainties may have on investor, developer and policymaker decisions using an Irish case study. Calibrated to data present in the literature, a probabilistic methodology is shown to be an effective means to carry this out. Value at Risk (VaR) and Conditional Value at Risk (CVaR) metrics are used to quantify the certainty of achieving a given cost or return on investment. We analyse the certainty of financial return provided by the proposed Irish Feed-in Tariff (FiT) policy. The influence of cost reduction through bulk discount is also discussed, with cost reduction targets for developers identified. Uncertainty is found to have a greater impact on the profitability of smaller installations and those subject to lower rates of cost reduction. This paper emphasises that a premium is required to account for cost uncertainty when setting FiT rates. By quantifying uncertainty, a means to specify an efficient premium is presented. - Highlights: • Probabilistic model quantifies uncertainty for wave energy feasibility analyses. • Methodology presented and applied to an Irish case study. • A feed-in tariff premium of 3–4 c/kWh required to account for cost uncertainty. • Sensitivity of uncertainty and cost to rates of technological change analysed. • Use of probabilistic model for investors and developers also demonstrated
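
    A minimal sketch of the VaR and CVaR metrics mentioned above is given below, computed from a made-up Monte Carlo sample of levelised device costs; the lognormal assumption and all figures are purely illustrative.

        import numpy as np

        rng = np.random.default_rng(42)
        cost = rng.lognormal(mean=np.log(0.30), sigma=0.25, size=100_000)  # cost draws (e.g. EUR/kWh)

        def var_cvar(samples, confidence=0.95):
            var = np.quantile(samples, confidence)    # cost not exceeded with 95% certainty
            cvar = samples[samples >= var].mean()     # expected cost in the worst 5% of cases
            return var, cvar

        print(var_cvar(cost))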

  12. Viral fitness does not correlate with three genotype displacement events involving infectious hematopoietic necrosis virus

    Science.gov (United States)

    Kell, Alison M.; Wargo, Andrew R.; Kurath, Gael

    2014-01-01

    Viral genotype displacement events are characterized by the replacement of a previously dominant virus genotype by a novel genotype of the same virus species in a given geographic region. We examine here the fitness of three pairs of infectious hematopoietic necrosis virus (IHNV) genotypes involved in three major genotype displacement events in Washington state over the last 30 years to determine whether increased virus fitness correlates with displacement. Fitness was assessed using in vivo assays to measure viral replication in single infection, simultaneous co-infection, and sequential superinfection in the natural host, steelhead trout. In addition, virion stability of each genotype was measured in freshwater and seawater environments at various temperatures. By these methods, we found no correlation between increased viral fitness and displacement in the field. These results suggest that other pressures likely exist in the field with important consequences for IHNV evolution.

  13. Fitting phase shifts to electron-ion elastic scattering measurements

    International Nuclear Information System (INIS)

    Per, M.C.; Dickinson, A.S.

    2000-01-01

    We have derived non-Coulomb phase shifts from measured differential cross sections for electron scattering by the ions Na+, Cs+, N3+, Ar8+ and Xe6+ at energies below the inelastic threshold. Values of the scaled squared deviation between the observed and fitted differential cross sections, χ², for the best-fit phase shifts were typically in the range 3-6 per degree of freedom. Generally good agreement with experiment is obtained, except for wide-angle scattering by Ar8+ and Xe6+. Current measurements do not define phase shifts to better than approx. 0.1 rad even in the most favourable circumstances and uncertainties can be much larger. (author)

  14. Quantum Inequalities and Sequential Measurements

    International Nuclear Information System (INIS)

    Candelpergher, B.; Grandouz, T.; Rubinx, J.L.

    2011-01-01

    In this article, the peculiar context of sequential measurements is chosen in order to analyze the quantum specificity in the two most famous examples of Heisenberg and Bell inequalities: Results are found at some interesting variance with customary textbook materials, where the context of initial state re-initialization is described. A key-point of the analysis is the possibility of defining Joint Probability Distributions for sequential random variables associated to quantum operators. Within the sequential context, it is shown that Joint Probability Distributions can be defined in situations where not all of the quantum operators (corresponding to random variables) do commute two by two. (authors)

  15. A New Multidisciplinary Design Optimization Method Accounting for Discrete and Continuous Variables under Aleatory and Epistemic Uncertainties

    Directory of Open Access Journals (Sweden)

    Hong-Zhong Huang

    2012-02-01

    Full Text Available Various uncertainties are inevitable in complex engineered systems and must be carefully treated in design activities. Reliability-Based Multidisciplinary Design Optimization (RBMDO) has been receiving increasing attention in the past decades as a way not only to design fully coupled systems but also to achieve a desired reliability under uncertainty. In this paper, a new formulation of multidisciplinary design optimization, namely Random/Fuzzy/Continuous/Discrete Variables Multidisciplinary Design Optimization (RFCDV-MDO), is developed within the framework of Sequential Optimization and Reliability Assessment (SORA) to deal with multidisciplinary design problems in which both aleatory and epistemic uncertainties are present. In addition, a hybrid discrete-continuous algorithm is put forth to efficiently solve problems where both discrete and continuous design variables exist. The effectiveness and computational efficiency of the proposed method are demonstrated via a mathematical problem and a pressure vessel design problem.

  16. On the use of the covariance matrix to fit correlated data

    Science.gov (United States)

    D'Agostini, G.

    1994-07-01

    Best fits to data which are affected by systematic uncertainties on the normalization factor have the tendency to produce curves lower than expected if the covariance matrix of the data points is used in the definition of the χ². This paper shows that the effect is a direct consequence of the hypothesis used to estimate the empirical covariance matrix, namely the linearization on which the usual error propagation relies. The bias can become unacceptable if the normalization error is large, or a large number of data points are fitted.
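
    The effect described above can be reproduced numerically with a few lines, as sketched below with invented numbers: once a common normalisation error is folded into the covariance matrix, the χ² best fit of a constant tends to fall below the naive average of the data points.

        import numpy as np

        y = np.array([8.0, 8.5])          # two measurements of the same quantity
        stat = 0.02 * y                   # 2% independent (statistical) errors
        norm = 0.10                       # 10% common normalisation uncertainty

        V = np.diag(stat**2) + norm**2 * np.outer(y, y)   # linearised covariance matrix
        Vinv = np.linalg.inv(V)
        ones = np.ones_like(y)

        c_hat = ones @ Vinv @ y / (ones @ Vinv @ ones)    # chi-square estimate of the constant
        print(c_hat, y.mean())            # c_hat comes out below the plain average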

  17. Uncertainty assessment in gamma spectrometric measurements of plutonium isotope ratios and age

    Energy Technology Data Exchange (ETDEWEB)

    Ramebaeck, H., E-mail: henrik.ramebeck@foi.se [Swedish Defence Research Agency, FOI, Division of CBRN Defence and Security, SE-901 82 Umea (Sweden); Chalmers University of Technology, Department of Chemical and Biological Engineering, Nuclear Chemistry, SE-412 96 Goeteborg (Sweden); Nygren, U.; Tovedal, A. [Swedish Defence Research Agency, FOI, Division of CBRN Defence and Security, SE-901 82 Umea (Sweden); Ekberg, C.; Skarnemark, G. [Chalmers University of Technology, Department of Chemical and Biological Engineering, Nuclear Chemistry, SE-412 96 Goeteborg (Sweden)

    2012-09-15

    A method for the assessment of the combined uncertainty in gamma spectrometric measurements of plutonium composition and age was evaluated. Two materials were measured. Isotope dilution inductively coupled plasma sector field mass spectrometry (ID-ICP-SFMS) was used as a reference method for comparing the results obtained with the gamma spectrometric method for one of the materials. For this material (weapons-grade plutonium) the measurement results were in agreement between the two methods for all measurands. Moreover, the combined uncertainties of all isotope ratios considered for this material (R(Pu-238/Pu-239), R(Pu-240/Pu-239), R(Pu-241/Pu-239), and R(Am-241/Pu-241) for age determination) were limited by counting statistics. However, the combined uncertainty for the other material (fuel-grade plutonium) was limited by the response fit, which shows that the uncertainty in the response function is important to include in the combined measurement uncertainty of gamma spectrometric measurements of plutonium.

  18. Speciation fingerprints of binary mixtures by the optimized sequential two-phase separation

    International Nuclear Information System (INIS)

    Macasek, F.

    1995-01-01

    The analysis of separation methods suitable for chemical speciation of radionuclides and metals, and the advantages of the sequential (double) distribution technique, were discussed. The equilibria are relatively easy to control and the method makes it possible to minimize adjustment of the matrix composition, and therefore it also minimizes the disturbance of the original (native) state of the elements. The technique may consist of repeated solvent extraction of the sample, or replicate equilibration with a sorbent. The common condition of applicability is a linear separation isotherm of the species, which is mostly a reasonable condition in the case of trace concentrations. The equations used for simultaneous fitting were written in general form. 1 tab., 1 fig., 2 refs

  19. Joint analysis of input and parametric uncertainties in watershed water quality modeling: A formal Bayesian approach

    Science.gov (United States)

    Han, Feng; Zheng, Yi

    2018-06-01

    Significant input uncertainty is a major source of error in watershed water quality (WWQ) modeling. It remains challenging to address the input uncertainty in a rigorous Bayesian framework. This study develops the Bayesian Analysis of Input and Parametric Uncertainties (BAIPU), an approach for the joint analysis of input and parametric uncertainties through a tight coupling of Markov Chain Monte Carlo (MCMC) analysis and Bayesian Model Averaging (BMA). The formal likelihood function for this approach is derived considering a lag-1 autocorrelated, heteroscedastic, and Skew Exponential Power (SEP) distributed error model. A series of numerical experiments were performed based on a synthetic nitrate pollution case and on a real study case in the Newport Bay Watershed, California. The Soil and Water Assessment Tool (SWAT) and Differential Evolution Adaptive Metropolis (DREAM(ZS)) were used as the representative WWQ model and MCMC algorithm, respectively. The major findings include the following: (1) the BAIPU can be implemented and used to appropriately identify the uncertain parameters and characterize the predictive uncertainty; (2) the compensation effect between the input and parametric uncertainties can seriously mislead modeling-based management decisions if the input uncertainty is not explicitly accounted for; (3) the BAIPU accounts for the interaction between the input and parametric uncertainties and therefore provides more accurate calibration and uncertainty results than a sequential analysis of the uncertainties; and (4) the BAIPU quantifies the credibility of different input assumptions on a statistical basis and can be implemented as an effective inverse modeling approach to the joint inference of parameters and inputs.

  20. Sequential Ensembles Tolerant to Synthetic Aperture Radar (SAR Soil Moisture Retrieval Errors

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Lee

    2016-04-01

    Full Text Available Due to complicated and undefined systematic errors in satellite observation, data assimilation integrating model states with satellite observations is more complicated than field-measurement-based data assimilation at a local scale. In the case of Synthetic Aperture Radar (SAR) soil moisture, the systematic errors arising from uncertainties in roughness conditions are significant and unavoidable, but current satellite bias correction methods do not resolve these problems very well. Thus, apart from the bias correction process of satellite observation, it is important to assess the inherent capability of satellite data assimilation under such sub-optimal but more realistic observational error conditions. To this end, the time-evolving sequential ensembles of the Ensemble Kalman Filter (EnKF) are compared with the stationary ensemble of the Ensemble Optimal Interpolation (EnOI) scheme, which does not evolve the ensembles over time. As the sensitivity analysis demonstrated that the SAR retrievals are more sensitive to surface roughness than to measurement errors, it is within the scope of this study to monitor how data assimilation alters the effects of roughness on SAR soil moisture retrievals. In the results, both data assimilation schemes provided intermediate values between the SAR overestimation and the model underestimation. However, under the same SAR observational error conditions, the sequential ensembles approached a calibrated model showing the lowest Root Mean Square Error (RMSE), while the stationary ensemble converged towards the SAR observations, exhibiting the highest RMSE. Compared to stationary ensembles, sequential ensembles have a better tolerance to SAR retrieval errors. This inherent property of the EnKF suggests an operational merit as a satellite data assimilation system, given the limitations of the bias correction methods currently available.
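
    A schematic stochastic EnKF analysis step (with perturbed observations) for a scalar soil moisture state is sketched below; the EnOI analogue would reuse a fixed, pre-computed ensemble instead of evolving it in time. All numbers are illustrative and not taken from the study.

        import numpy as np

        rng = np.random.default_rng(0)
        ensemble = rng.normal(0.20, 0.03, size=50)   # model soil moisture ensemble (m3/m3)
        obs, obs_err = 0.27, 0.04                    # SAR retrieval and its assumed error

        def enkf_update(ens, y, r):
            perturbed_y = y + rng.normal(0.0, r, size=ens.size)  # perturbed observations
            p = ens.var(ddof=1)                                  # forecast error variance
            k = p / (p + r**2)                                   # Kalman gain
            return ens + k * (perturbed_y - ens)

        analysis = enkf_update(ensemble, obs, obs_err)
        print(ensemble.mean(), analysis.mean())      # analysis sits between model and observation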

  1. Application of a virtual coordinate measuring machine for measurement uncertainty estimation of aspherical lens parameters

    International Nuclear Information System (INIS)

    Küng, Alain; Meli, Felix; Nicolet, Anaïs; Thalmann, Rudolf

    2014-01-01

    Tactile ultra-precise coordinate measuring machines (CMMs) are very attractive for accurately measuring optical components with high slopes, such as aspheres. The METAS µ-CMM, which exhibits a single point measurement repeatability of a few nanometres, is routinely used for measurement services of microparts, including optical lenses. However, estimating the measurement uncertainty is very demanding. Because of the many combined influencing factors, an analytic determination of the uncertainty of parameters that are obtained by numerical fitting of the measured surface points is almost impossible. The application of numerical simulation (Monte Carlo methods) using a parametric fitting algorithm coupled with a virtual CMM based on a realistic model of the machine errors offers an ideal solution to this complex problem: to each measurement data point, a simulated measurement variation calculated from the numerical model of the METAS µ-CMM is added. Repeated several hundred times, these virtual measurements deliver the statistical data for calculating the probability density function, and thus the measurement uncertainty for each parameter. Additionally, any cross-correlation between parameters can be analyzed. This method can be applied for the calibration and uncertainty estimation of any parameter of the equation representing a geometric element. In this article, we present the numerical simulation model of the METAS µ-CMM and the application of a Monte Carlo method for the uncertainty estimation of measured asphere parameters. (paper)
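
    The general idea can be sketched generically as follows: perturb every measured point with simulated machine errors, refit the geometric element each time, and take the spread of the fitted parameters as their uncertainty. Here a simple circle fit stands in for an asphere, and the error model is a plain Gaussian rather than the METAS machine model; everything is illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
        pts = np.column_stack([5.0 * np.cos(theta) + 1.0, 5.0 * np.sin(theta) - 2.0])  # "measured" circle

        def fit_circle(p):
            # Algebraic (Kasa) least-squares circle fit: x^2 + y^2 = 2a x + 2b y + c.
            A = np.column_stack([2 * p[:, 0], 2 * p[:, 1], np.ones(len(p))])
            a, b, c = np.linalg.lstsq(A, (p**2).sum(axis=1), rcond=None)[0]
            return a, b, np.sqrt(c + a**2 + b**2)   # centre x, centre y, radius

        draws = [fit_circle(pts + rng.normal(0, 0.5e-3, pts.shape)) for _ in range(500)]
        print(np.std(draws, axis=0))   # Monte Carlo standard uncertainty of each fitted parameter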

  2. Drought prediction using co-active neuro-fuzzy inference system, validation, and uncertainty analysis (case study: Birjand, Iran)

    Science.gov (United States)

    Memarian, Hadi; Pourreza Bilondi, Mohsen; Rezaei, Majid

    2016-08-01

    This work aims to assess the capability of the co-active neuro-fuzzy inference system (CANFIS) for drought forecasting in Birjand, Iran, through the combination of global climatic signals with rainfall and lagged values of the Standardized Precipitation Index (SPI). Using stepwise regression and correlation analyses, the signals NINO 1+2, NINO 3, the Multivariate ENSO Index, the Tropical Southern Atlantic index, the Atlantic Multi-decadal Oscillation index, and NINO 3.4 were identified as the signals effectively associated with drought events in Birjand. Based on the results of the stepwise regression analysis, and considering processor limitations, eight models were extracted for further processing by CANFIS. The metrics P-factor and D-factor were utilized for uncertainty analysis, based on the sequential uncertainty fitting algorithm. Sensitivity analysis showed that, for all models, the NINO indices and the rainfall variable had the largest impact on network performance. In model 4 (the model with the lowest error during the training and testing processes), NINO 1+2(t-5), with an average sensitivity of 0.7, showed the highest impact on network performance. Next, the variables rainfall, NINO 1+2(t), and NINO 3(t-6), with average sensitivities of 0.59, 0.28, and 0.28, respectively, had the next largest effects on network performance. The findings based on the network performance metrics indicated that the global indices with a time lag showed a better correlation with the El Niño Southern Oscillation (ENSO). Uncertainty analysis of model 4 demonstrated that 68% of the observed data were bracketed by the 95PPU and the D-factor value (0.79) was also within a reasonable range. Therefore, the fourth model, with a combination of the input variables NINO 1+2 (with 5 months of lag and without any lag), monthly rainfall, and NINO 3 (with 6 months of lag), and a correlation coefficient of 0.903 between observed and simulated SPI, was selected as the most accurate model for drought forecasting using CANFIS.
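
    The P-factor and D-factor statistics quoted above are commonly defined as the fraction of observations bracketed by the 95% prediction uncertainty (95PPU) band and the mean band width normalised by the standard deviation of the observations. A small sketch with invented data, computed from a matrix of ensemble simulations (rows = parameter draws), is shown below.

        import numpy as np

        rng = np.random.default_rng(3)
        obs = rng.normal(0.0, 1.0, size=120)                       # observed SPI series (toy)
        sims = obs + rng.normal(0.0, 0.8, size=(500, obs.size))    # ensemble of simulated series

        lower = np.percentile(sims, 2.5, axis=0)                   # 95PPU band limits
        upper = np.percentile(sims, 97.5, axis=0)

        p_factor = np.mean((obs >= lower) & (obs <= upper))        # share of data inside the band
        d_factor = np.mean(upper - lower) / obs.std(ddof=1)        # band width relative to data spread
        print(p_factor, d_factor)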

  3. On uncertainty quantification in hydrogeology and hydrogeophysics

    Science.gov (United States)

    Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud

    2017-12-01

    Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate and this review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.

  4. Sequential Generalized Transforms on Function Space

    Directory of Open Access Journals (Sweden)

    Jae Gil Choi

    2013-01-01

    Full Text Available We define two sequential transforms on a function space C_{a,b}[0,T] induced by a generalized Brownian motion process. We then establish the existence of the sequential transforms for functionals in a Banach algebra of functionals on C_{a,b}[0,T]. We also establish that any one of these transforms acts like an inverse transform of the other transform. Finally, we give some remarks about certain relations between our sequential transforms and other well-known transforms on C_{a,b}[0,T].

  5. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1998-01-01

    We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block-interleaved outer Reed-Solomon (RS) codes with nonuniform profile. With this scheme, decoding with good performance is possible as low as Eb/N0 = 0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported

  6. Sequential Bayesian geoacoustic inversion for mobile and compact source-receiver configuration.

    Science.gov (United States)

    Carrière, Olivier; Hermand, Jean-Pierre

    2012-04-01

    Geoacoustic characterization of wide areas through inversion requires easily deployable configurations including free-drifting platforms, underwater gliders and autonomous vehicles, typically performing repeated transmissions during their course. In this paper, the inverse problem is formulated as sequential Bayesian filtering to take advantage of repeated transmission measurements. Nonlinear Kalman filters implement a random-walk model for geometry and environment and an acoustic propagation code in the measurement model. Data from MREA/BP07 sea trials are tested consisting of multitone and frequency-modulated signals (bands: 0.25-0.8 and 0.8-1.6 kHz) received on a shallow vertical array of four hydrophones 5-m spaced drifting over 0.7-1.6 km range. Space- and time-coherent processing are applied to the respective signal types. Kalman filter outputs are compared to a sequence of global optimizations performed independently on each received signal. For both signal types, the sequential approach is more accurate but also more efficient. Due to frequency diversity, the processing of modulated signals produces a more stable tracking. Although an extended Kalman filter provides comparable estimates of the tracked parameters, the ensemble Kalman filter is necessary to properly assess uncertainty. In spite of mild range dependence and simplified bottom model, all tracked geoacoustic parameters are consistent with high-resolution seismic profiling, core logging P-wave velocity, and previous inversion results with fixed geometries.

  7. Sequential probability ratio controllers for safeguards radiation monitors

    International Nuclear Information System (INIS)

    Fehlau, P.E.; Coop, K.L.; Nixon, K.V.

    1984-01-01

    Sequential hypothesis tests applied to nuclear safeguards accounting methods make the methods more sensitive to detecting diversion. The sequential tests also improve transient signal detection in safeguards radiation monitors. This paper describes three microprocessor control units with sequential probability-ratio tests for detecting transient increases in radiation intensity. The control units are designed for three specific applications: low-intensity monitoring with Poisson probability ratios, higher intensity gamma-ray monitoring where fixed counting intervals are shortened by sequential testing, and monitoring moving traffic where the sequential technique responds to variable-duration signals. The fixed-interval controller shortens a customary 50-s monitoring time to an average of 18 s, making the monitoring delay less bothersome. The controller for monitoring moving vehicles benefits from the sequential technique by maintaining more than half its sensitivity when the normal passage speed doubles
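
    A rough sketch of a sequential probability ratio test on Poisson counts, the kind of decision rule described above for low-intensity monitors, is shown below; the rates, thresholds and error probabilities are illustrative and are not the instrument settings from the paper.

        import math

        def poisson_sprt(counts, bkg_rate, sig_rate, alpha=0.001, beta=0.05):
            # H0: background only (mean bkg_rate per interval); H1: background plus source (mean sig_rate).
            lower = math.log(beta / (1 - alpha))
            upper = math.log((1 - beta) / alpha)
            llr = 0.0
            for n, c in enumerate(counts, start=1):
                llr += c * math.log(sig_rate / bkg_rate) - (sig_rate - bkg_rate)
                if llr >= upper:
                    return "alarm", n          # transient increase detected
                if llr <= lower:
                    return "pass", n           # decide background, release the monitor
            return "undecided", len(counts)

        print(poisson_sprt([3, 5, 2, 7, 9, 8], bkg_rate=3.0, sig_rate=6.0))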

  8. The uncertainties in estimating measurement uncertainties

    International Nuclear Information System (INIS)

    Clark, J.P.; Shull, A.H.

    1994-01-01

    All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper will discuss the concepts of measurement, measurement errors (accuracy or bias and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus an uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample having a matrix that is significantly different from the calibration standards' matrix). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement itself. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements will be reviewed in this report and recommendations made for improving measurement uncertainties.

  9. Assessing performance of flaw characterization methods through uncertainty propagation

    Science.gov (United States)

    Miorelli, R.; Le Bourdais, F.; Artusi, X.

    2018-04-01

    In this work, we assess the inversion performance in terms of crack characterization and localization based on synthetic signals associated with ultrasonic and eddy current physics. More precisely, two different standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the tested data) and simulations. Furthermore, in order to speed up the computation and avoid the computational burden often associated with iterative inversion algorithms, we replace the standard forward solver by a suitable metamodel fitted on a database built offline. In a second step, we assess the inversion performance by adding uncertainties to a subset of the database parameters and then, through the metamodel, propagating these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficiently evaluating the impact of the lack of knowledge of some parameters employed to describe the inspection scenarios, which is a situation commonly encountered in the industrial NDE context.

  10. Uncertainties of predictions from parton distributions 1, experimental errors

    CERN Document Server

    Martin, A D; Stirling, William James; Thorne, R S; CERN. Geneva

    2003-01-01

    We determine the uncertainties on observables arising from the errors on the experimental data that are fitted in the global MRST2001 parton analysis. By diagonalizing the error matrix we produce sets of partons suitable for use within the framework of linear propagation of errors, which is the most convenient method for calculating the uncertainties. Despite the potential limitations of this approach we find that it can be made to work well in practice. This is confirmed by our alternative approach of using the more rigorous Lagrange multiplier method to determine the errors on physical quantities directly. As particular examples we determine the uncertainties on the predictions of the charged-current deep-inelastic structure functions, on the cross-sections for W production and for Higgs boson production via gluon--gluon fusion at the Tevatron and the LHC, on the ratio of W-minus to W-plus production at the LHC and on the moments of the non-singlet quark distributions. We discuss the corresponding uncertain...

  11. Biased lineups: sequential presentation reduces the problem.

    Science.gov (United States)

    Lindsay, R C; Lea, J A; Nosworthy, G J; Fulford, J A; Hector, J; LeVan, V; Seabrook, C

    1991-12-01

    Biased lineups have been shown to increase significantly false, but not correct, identification rates (Lindsay, Wallbridge, & Drennan, 1987; Lindsay & Wells, 1980; Malpass & Devine, 1981). Lindsay and Wells (1985) found that sequential lineup presentation reduced false identification rates, presumably by reducing reliance on relative judgment processes. Five staged-crime experiments were conducted to examine the effect of lineup biases and sequential presentation on eyewitness recognition accuracy. Sequential lineup presentation significantly reduced false identification rates from fair lineups as well as from lineups biased with regard to foil similarity, instructions, or witness attire, and from lineups biased in all of these ways. The results support recommendations that police present lineups sequentially.

  12. BOOK REVIEW: Evaluating the Measurement Uncertainty: Fundamentals and practical guidance

    Science.gov (United States)

    Lira, Ignacio

    2003-08-01

    on to treat evaluation of expanded uncertainty, joint treatment of several measurands, least-squares adjustment, curve fitting and more. Chapter 6 is devoted to Bayesian inference. Perhaps one can say that Evaluating the Measurement Uncertainty caters to a wider reader-base than the GUM; however, a mathematical or statistical background is still advantageous. Also, this is not a book with a library of worked overall uncertainty evaluations for various measurements; the feel of the book is rather theoretical. The novice will still have some work to do—but this is a good place to start. I think this book is a fitting companion to the GUM because the text complements the GUM, from fundamental principles to more sophisticated measurement situations, and moreover includes intelligent discussion regarding intent and interpretation. Evaluating the Measurement Uncertainty is detailed, and I think most metrologists will really enjoy the detail and care put into this book. Jennifer Decker

  13. Track fitting and resolution with digital detectors

    International Nuclear Information System (INIS)

    Duerdoth, I.

    1982-01-01

    The analysis of data from detectors which give digitised measurements, such as MWPCs, is considered. These measurements are necessarily correlated and it is shown that the uncertainty in the combination of N measurements may fall faster than the canonical 1/√N. A new method of track fitting is described which exploits the digital aspects and which takes the correlations into account. It divides the parameter space into cells and the centroid of a cell is taken as the best estimate. The method is shown to have some advantages over the standard least-squares analysis. If the least-squares method is used for digital detectors the goodness-of-fit may not be a reliable estimate of the accuracy. The cell method is particularly suitable for implementation on microcomputers which lack floating point and divide facilities. (orig.)

  14. Lineup composition, suspect position, and the sequential lineup advantage.

    Science.gov (United States)

    Carlson, Curt A; Gronlund, Scott D; Clark, Steven E

    2008-06-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate in the simultaneous lineup, and no sequential lineup advantage was found. This led the authors to hypothesize that protection from a sequential lineup might emerge only when an innocent suspect stands out from the other lineup members. In Experiment 2, participants viewed a simultaneous or sequential lineup with either the guilty suspect or 1 of 3 innocent suspects. Lineup fairness was varied to influence the degree to which a suspect stood out. A sequential lineup advantage was found only for the unfair lineups. Additional analyses of suspect position in the sequential lineups showed an increase in the diagnosticity of suspect identifications as the suspect was placed later in the sequential lineup. These results suggest that the sequential lineup advantage is dependent on lineup composition and suspect position. (c) 2008 APA, all rights reserved

  15. Error estimation and global fitting in transverse-relaxation dispersion experiments to determine chemical-exchange parameters

    International Nuclear Information System (INIS)

    Ishima, Rieko; Torchia, Dennis A.

    2005-01-01

    Off-resonance effects can introduce significant systematic errors in R2 measurements in constant-time Carr-Purcell-Meiboom-Gill (CPMG) transverse relaxation dispersion experiments. For an off-resonance chemical shift of 500 Hz, 15N relaxation dispersion profiles obtained from experiment and computer simulation indicated a systematic error of ca. 3%. This error is three- to five-fold larger than the random error in R2 caused by noise. Good estimates of the total R2 uncertainty are critical in order to obtain accurate estimates of the optimized chemical exchange parameters and their uncertainties derived from χ² minimization of a target function. Here, we present a simple empirical approach that provides a good estimate of the total error (systematic + random) in 15N R2 values measured for the HIV protease. The advantage of this empirical error estimate is that it is applicable even when some of the factors that contribute to the off-resonance error are not known. These errors are incorporated into a χ² minimization protocol, in which the Carver-Richards equation is used to fit the observed R2 dispersion profiles, that yields optimized chemical exchange parameters and their confidence limits. Optimized parameters are also derived, using the same protein sample and data-fitting protocol, from 1H R2 measurements in which systematic errors are negligible. Although the 1H and 15N relaxation profiles of individual residues were well fit, the optimized exchange parameters had large uncertainties (confidence limits). In contrast, when a single pair of exchange parameters (the exchange lifetime, τ_ex, and the fractional population, p_a) were constrained to globally fit all R2 profiles for residues in the dimer interface of the protein, confidence limits were less than 8% for all optimized exchange parameters. In addition, F-tests showed that the quality of the fits obtained using τ_ex, p_a as global parameters was not improved when these parameters were free to fit the R
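
    As a simplified illustration only, the sketch below fits a fast-exchange (Luz-Meiboom) dispersion expression rather than the full Carver-Richards equation used in the paper, with synthetic data and with the per-point errors entering through the sigma argument so that the parameter uncertainties reflect the total (systematic plus random) error; all parameter values are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def r2eff_fast_exchange(nu_cpmg, r20, phi_ex, k_ex):
            # Luz-Meiboom expression for R2eff as a function of the CPMG field strength.
            return r20 + (phi_ex / k_ex) * (1 - (4 * nu_cpmg / k_ex) * np.tanh(k_ex / (4 * nu_cpmg)))

        nu = np.array([50, 100, 200, 300, 400, 600, 800, 1000], dtype=float)  # CPMG frequencies (Hz)
        rng = np.random.default_rng(7)
        r2_obs = r2eff_fast_exchange(nu, 15.0, 18000.0, 2000.0) + rng.normal(0, 0.3, nu.size)
        r2_err = np.full_like(r2_obs, 0.3)   # stands in for the estimated total error per point

        popt, pcov = curve_fit(r2eff_fast_exchange, nu, r2_obs, p0=[15.0, 15000.0, 1800.0],
                               sigma=r2_err, absolute_sigma=True)
        print(popt, np.sqrt(np.diag(pcov)))  # R2(0), phi_ex, k_ex and their uncertainties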

  16. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    Directory of Open Access Journals (Sweden)

    Villemereuil Pierre de

    2012-06-01

    Full Text Available Abstract Background Uncertainty in comparative analyses can come from at least two sources: (a) phylogenetic uncertainty in the tree topology or branch lengths, and (b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible

  17. Bayesian models for comparative analysis integrating phylogenetic uncertainty

    Science.gov (United States)

    2012-01-01

    Background Uncertainty in comparative analyses can come from at least two sources: a) phylogenetic uncertainty in the tree topology or branch lengths, and b) uncertainty due to intraspecific variation in trait values, either due to measurement error or natural individual variation. Most phylogenetic comparative methods do not account for such uncertainties. Not accounting for these sources of uncertainty leads to false perceptions of precision (confidence intervals will be too narrow) and inflated significance in hypothesis testing (e.g. p-values will be too small). Although there is some application-specific software for fitting Bayesian models accounting for phylogenetic error, more general and flexible software is desirable. Methods We developed models to directly incorporate phylogenetic uncertainty into a range of analyses that biologists commonly perform, using a Bayesian framework and Markov Chain Monte Carlo analyses. Results We demonstrate applications in linear regression, quantification of phylogenetic signal, and measurement error models. Phylogenetic uncertainty was incorporated by applying a prior distribution for the phylogeny, where this distribution consisted of the posterior tree sets from Bayesian phylogenetic tree estimation programs. The models were analysed using simulated data sets, and applied to a real data set on plant traits, from rainforest plant species in Northern Australia. Analyses were performed using the free and open source software OpenBUGS and JAGS. Conclusions Incorporating phylogenetic uncertainty through an empirical prior distribution of trees leads to more precise estimation of regression model parameters than using a single consensus tree and enables a more realistic estimation of confidence intervals. In addition, models incorporating measurement errors and/or individual variation, in one or both variables, are easily formulated in the Bayesian framework. We show that BUGS is a useful, flexible general purpose tool for

  18. Revisiting the Global Electroweak Fit of the Standard Model and Beyond with Gfitter

    CERN Document Server

    Flächer, Henning; Haller, J; Höcker, A; Mönig, K; Stelzer, J

    2009-01-01

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter projec...

  19. Parameter Optimisation and Uncertainty Analysis in Visual MODFLOW based Flow Model for predicting the groundwater head in an Eastern Indian Aquifer

    Science.gov (United States)

    Mohanty, B.; Jena, S.; Panda, R. K.

    2016-12-01

    The overexploitation of groundwater has resulted in the abandonment of several shallow tube wells in the study basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is indispensable for the effective planning and management of the water resources. The basic intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW Flex 2014.2 package and to successfully calibrate and validate the model using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting algorithm (SUFI-2) and Markov chain Monte Carlo (McMC) techniques were implemented. Results from the two techniques were compared and their advantages and disadvantages were analysed. The Nash-Sutcliffe coefficient (NSE), Coefficient of Determination (R2), Mean Absolute Error (MAE), Mean Percent Deviation (Dv) and Root Mean Squared Error (RMSE) were adopted as criteria of model evaluation during calibration and validation of the developed model. The NSE, R2, MAE, Dv and RMSE values for the groundwater flow model during calibration and validation were in the acceptable range. Also, the McMC technique was able to provide more reasonable results than SUFI-2. The calibrated and validated model will be useful to identify the aquifer properties, analyse the groundwater flow dynamics and forecast future changes in groundwater levels.
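
    A small hypothetical helper reproducing the goodness-of-fit statistics listed above for a pair of observed and simulated groundwater head series is sketched below; the head values are placeholders and the Dv expression is one common formulation of the mean percent deviation, which may differ from the one used in the study.

        import numpy as np

        def evaluate(obs, sim):
            resid = obs - sim
            nse = 1 - np.sum(resid**2) / np.sum((obs - obs.mean())**2)   # Nash-Sutcliffe efficiency
            r2 = np.corrcoef(obs, sim)[0, 1]**2                          # coefficient of determination
            mae = np.mean(np.abs(resid))                                 # mean absolute error
            dv = 100 * resid.sum() / obs.sum()                           # mean percent deviation (one variant)
            rmse = np.sqrt(np.mean(resid**2))                            # root mean squared error
            return nse, r2, mae, dv, rmse

        obs = np.array([12.3, 12.1, 11.8, 11.9, 12.4, 12.6])   # observed heads (m), placeholder values
        sim = np.array([12.2, 12.0, 11.9, 12.0, 12.3, 12.5])   # simulated heads (m), placeholder values
        print(evaluate(obs, sim))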

  20. Tradable permit allocations and sequential choice

    Energy Technology Data Exchange (ETDEWEB)

    MacKenzie, Ian A. [Centre for Economic Research, ETH Zuerich, Zurichbergstrasse 18, 8092 Zuerich (Switzerland)

    2011-01-15

    This paper investigates initial allocation choices in an international tradable pollution permit market. For two sovereign governments, we compare allocation choices that are either simultaneously or sequentially announced. We show sequential allocation announcements result in higher (lower) aggregate emissions when announcements are strategic substitutes (complements). Whether allocation announcements are strategic substitutes or complements depends on the relationship between the follower's damage function and governments' abatement costs. When the marginal damage function is relatively steep (flat), allocation announcements are strategic substitutes (complements). For quadratic abatement costs and damages, sequential announcements provide a higher level of aggregate emissions. (author)

  1. Sequential versus "sandwich" sequencing of adjuvant chemoradiation for the treatment of stage III uterine endometrioid adenocarcinoma.

    Science.gov (United States)

    Lu, Sharon M; Chang-Halpenny, Christine; Hwang-Graziano, Julie

    2015-04-01

    To compare the efficacy and tolerance of adjuvant chemotherapy and radiotherapy delivered in sequential (chemotherapy followed by radiation) versus "sandwich" fashion (chemotherapy, interval radiation, and remaining chemotherapy) after surgery in patients with FIGO stage III uterine endometrioid adenocarcinoma. From 2004 to 2011, we identified 51 patients treated at our institution fitting the above criteria. All patients received surgical staging followed by adjuvant chemoradiation (external-beam radiation therapy (EBRT) with or without high-dose rate (HDR) vaginal brachytherapy (VB)). Of these, 73% and 27% of patients received their adjuvant therapy in sequential and sandwich fashion, respectively. There were no significant differences in clinical or pathologic factors between patients treated with either regimen. Thirty-nine (76%) patients had stage IIIC disease. The majority of patients received 6 cycles of paclitaxel with carboplatin or cisplatin. Median EBRT dose was 45 Gy and 54% of patients received HDR VB boost (median dose 21 Gy). There were no significant differences in the estimated 5-year overall survival, local progression-free survival, and distant metastasis-free survival between the sequential and sandwich groups: 87% vs. 77% (p=0.37), 89% vs. 100% (p=0.21), and 78% vs. 85% (p=0.79), respectively. No grade 3-4 genitourinary or gastrointestinal toxicities were reported in either group. There was a trend towards higher incidence of grade 3-4 hematologic toxicity in the sandwich group. Adjuvant chemoradiation for FIGO stage III endometrioid uterine cancer given in either sequential or sandwich fashion appears to offer equally excellent early clinical outcomes and acceptably low toxicity. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Uncertainty in biology a computational modeling approach

    CERN Document Server

    Gomez-Cabrero, David

    2016-01-01

    Computational modeling of biomedical processes is gaining more and more weight in current research into the etiology of biomedical problems and potential treatment strategies. Computational modeling allows one to reduce, refine and replace animal experimentation as well as to translate findings obtained in these experiments to the human background. However, these biomedical problems are inherently complex, with a myriad of influencing factors, which strongly complicates the model building and validation process. This book addresses four main issues related to the building and validation of computational models of biomedical processes: model establishment under uncertainty; model selection and parameter fitting; sensitivity analysis and model adaptation; and model predictions under uncertainty. In each of these areas, the book discusses a number of key techniques by means of a general theoretical description followed by one or more practical examples. This book is intended for graduate students...

  3. Sunk costs equal sunk boats? The effect of entry costs in a transboundary sequential fishery

    DEFF Research Database (Denmark)

    Punt, M. J.

    2017-01-01

    ...that for other fisheries substantial sunk investments are needed. In this paper I investigate the effect of such sunk entry costs in a sequential fishery. I model the uncertainty as a shock to the stock-dependent fishing costs in a two-player game, where one of the players faces sunk entry costs. I find that, depending on parameters, sunk costs can (i) increase the competitive pressure on the fish stock compared to a game where entry is free, (ii) act as a deterrence mechanism, and (iii) act as a commitment device. I conclude that entry costs can play a crucial role because they can change the outcome of the game...

  4. Learning-induced uncertainty reduction in perceptual decisions is task-dependent

    Directory of Open Access Journals (Sweden)

    Feitong eYang

    2014-05-01

    Perceptual decision making, in which decisions are reached primarily by extracting and evaluating sensory information, requires close interactions between the sensory system and decision-related networks in the brain. Uncertainty pervades every aspect of this process and can be considered as related to either the stimulus signal or the decision criterion. Here, we investigated the learning-induced reduction of both signal and criterion uncertainty in two perceptual decision tasks based on two Glass pattern stimulus sets. This was achieved by manipulating the spiral angle and signal level of radial and concentric Glass patterns. The behavioral results showed that participants trained on a task based on criterion comparison improved their categorization accuracy on both tasks, whereas participants trained on a task based on signal detection improved their categorization accuracy only on their trained task. We fitted the behavioral data with a computational model that can dissociate the contributions of signal and criterion uncertainty. The modeling results indicated that participants trained on the criterion comparison task reduced both criterion and signal uncertainty. By contrast, participants trained on the signal detection task reduced only their signal uncertainty after training. Our results suggest that signal uncertainty can be resolved by training participants to extract signals from noisy environments and to discriminate between clear signals, as evidenced by reduced perceptual variance after both training procedures. Conversely, criterion uncertainty can only be resolved by training on fine discrimination. These findings demonstrate that uncertainty in perceptual decision making can be reduced with training, but that the reduction of different types of uncertainty is task-dependent.

  5. Applying the minimax principle to sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    2002-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master, a nonmaster, or to continue sampling and administering another random item. The framework of minimax sequential decision theory (minimum
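
    The paper derives minimax rules; as a simpler, related sequential rule, Wald's SPRT for master/nonmaster classification is sketched below. The success probabilities and error rates are hypothetical, and this is not the minimax rule of the paper.

```python
import math

def sprt_mastery(responses, p_master=0.8, p_nonmaster=0.5, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test on Bernoulli item responses
    (1 = correct): classify as master, nonmaster, or continue sampling."""
    lo, hi = math.log(beta / (1 - alpha)), math.log((1 - beta) / alpha)
    llr = 0.0
    for n, x in enumerate(responses, 1):
        p1 = p_master if x else 1 - p_master
        p0 = p_nonmaster if x else 1 - p_nonmaster
        llr += math.log(p1 / p0)
        if llr >= hi:
            return ("master", n)
        if llr <= lo:
            return ("nonmaster", n)
    return ("continue sampling", len(responses))

print(sprt_mastery([1, 1, 0, 1, 1, 1, 1]))
```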

  6. Uncertainty management in stratigraphic well correlation and stratigraphic architectures: A training-based method

    Science.gov (United States)

    Edwards, Jonathan; Lallier, Florent; Caumon, Guillaume; Carpentier, Cédric

    2018-02-01

    We discuss the sampling and the volumetric impact of stratigraphic correlation uncertainties in basins and reservoirs. From an input set of wells, we evaluate the probability for two stratigraphic units to be associated using an analog stratigraphic model. In the presence of multiple wells, this method sequentially updates a stratigraphic column defining the stratigraphic layering for each possible set of realizations. The resulting correlations are then used to create stratigraphic grids in three dimensions. We apply this method on a set of synthetic wells sampling a forward stratigraphic model built with Dionisos. To perform cross-validation of the method, we introduce a distance comparing the relative geological time of two models for each geographic position, and we compare the models in terms of volumes. Results show the ability of the method to automatically generate stratigraphic correlation scenarios, and also highlight some challenges when sampling stratigraphic uncertainties from multiple wells.

  7. Classical and sequential limit analysis revisited

    Science.gov (United States)

    Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

    2018-04-01

    Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.

  8. Simultaneous versus sequential penetrating keratoplasty and cataract surgery.

    Science.gov (United States)

    Hayashi, Ken; Hayashi, Hideyuki

    2006-10-01

    To compare the surgical outcomes of simultaneous penetrating keratoplasty and cataract surgery with those of sequential surgery. Thirty-nine eyes of 39 patients scheduled for simultaneous keratoplasty and cataract surgery and 23 eyes of 23 patients scheduled for sequential keratoplasty and secondary phacoemulsification surgery were recruited. Refractive error, regular and irregular corneal astigmatism determined by Fourier analysis, and endothelial cell loss were studied at 1 week and 3, 6, and 12 months after combined surgery in the simultaneous surgery group or after subsequent phacoemulsification surgery in the sequential surgery group. At 3 or more months after surgery, mean refractive error was significantly greater in the simultaneous surgery group than in the sequential surgery group, although no difference was seen at 1 week. The refractive error at 12 months was within 2 D of target in 15 eyes (39%) in the simultaneous surgery group and in 16 eyes (70%) in the sequential surgery group; the incidence was significantly greater in the sequential group (P = 0.0344). Regular and irregular astigmatism did not differ significantly between the groups at 3 or more months after surgery. There was also no significant difference in the percentage of endothelial cell loss between the groups. Although corneal astigmatism and endothelial cell loss were not different, refractive error from target refraction was greater after simultaneous keratoplasty and cataract surgery than after sequential surgery, indicating a better outcome after sequential surgery than after simultaneous surgery.

  9. DUST SPECTRAL ENERGY DISTRIBUTIONS IN THE ERA OF HERSCHEL AND PLANCK: A HIERARCHICAL BAYESIAN-FITTING TECHNIQUE

    International Nuclear Information System (INIS)

    Kelly, Brandon C.; Goodman, Alyssa A.; Shetty, Rahul; Stutz, Amelia M.; Launhardt, Ralf; Kauffmann, Jens

    2012-01-01

    We present a hierarchical Bayesian method for fitting infrared spectral energy distributions (SEDs) of dust emission to observed fluxes. Under the standard assumption of optically thin single temperature (T) sources, the dust SED as represented by a power-law-modified blackbody is subject to a strong degeneracy between T and the spectral index β. The traditional non-hierarchical approaches, typically based on χ² minimization, are severely limited by this degeneracy, as it produces an artificial anti-correlation between T and β even with modest levels of observational noise. The hierarchical Bayesian method rigorously and self-consistently treats measurement uncertainties, including calibration and noise, resulting in more precise SED fits. As a result, the Bayesian fits do not produce any spurious anti-correlations between the SED parameters due to measurement uncertainty. We demonstrate that the Bayesian method is substantially more accurate than the χ² fit in recovering the SED parameters, as well as the correlations between them. As an illustration, we apply our method to Herschel and submillimeter ground-based observations of the star-forming Bok globule CB244. This source is a small, nearby molecular cloud containing a single low-mass protostar and a starless core. We find that T and β are weakly positively correlated, in contradiction with the χ² fits, which indicate a T-β anti-correlation from the same data set. Additionally, in comparison to the χ² fits the Bayesian SED parameter estimates exhibit a reduced range in values.
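
    A toy χ² fit of the power-law-modified blackbody illustrates the T-β anti-correlation that motivates the hierarchical treatment. The band set, noise level, amplitude parametrization, and all parameter values below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

h, kB, c = 6.626e-34, 1.381e-23, 2.998e8  # SI constants

def greybody(nu, logA, T, beta):
    """Optically thin modified blackbody: S_nu = A * (nu/1e12 Hz)^beta * B_nu(T)."""
    bnu = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (kB * T))
    return 10**logA * (nu / 1e12)**beta * bnu

# Synthetic fluxes at Herschel-like wavelengths with 10% noise; the fitted
# covariance exhibits the strong T-beta anti-correlation described above.
wl_um = np.array([70.0, 160.0, 250.0, 350.0, 500.0])
nu = c / (wl_um * 1e-6)
rng = np.random.default_rng(1)
flux = greybody(nu, 16.0, 15.0, 1.8) * (1 + 0.1 * rng.standard_normal(nu.size))
popt, pcov = curve_fit(greybody, nu, flux, p0=(15.5, 20.0, 1.5), sigma=0.1 * flux)
rho = pcov[1, 2] / np.sqrt(pcov[1, 1] * pcov[2, 2])
print(popt, rho)   # rho is typically close to -1 for sparse, noisy SEDs
```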

  10. Steering Evolution with Sequential Therapy to Prevent the Emergence of Bacterial Antibiotic Resistance.

    Directory of Open Access Journals (Sweden)

    Daniel Nichol

    2015-09-01

    The increasing rate of antibiotic resistance and slowing discovery of novel antibiotic treatments present a growing threat to public health. Here, we consider a simple model of evolution in asexually reproducing populations that treats adaptation as a biased random walk on a fitness landscape. This model associates the global properties of the fitness landscape with the algebraic properties of a Markov chain transition matrix and allows us to derive general results on the non-commutativity and irreversibility of natural selection as well as antibiotic cycling strategies. Using this formalism, we analyze 15 empirical fitness landscapes of E. coli under selection by different β-lactam antibiotics and demonstrate that the emergence of resistance to a given antibiotic can be either hindered or promoted by different sequences of drug application. Specifically, we demonstrate that the majority, approximately 70%, of sequential drug treatments with 2-4 drugs promote resistance to the final antibiotic. Further, we derive optimal drug application sequences with which we can probabilistically 'steer' the population through genotype space to avoid the emergence of resistance. This suggests a new strategy in the war against antibiotic-resistant organisms: drug sequencing to shepherd evolution through genotype space to states from which resistance cannot emerge and to maximize the chance of successful therapy.
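
    A compact sketch of adaptation as a biased random walk on a bit-string fitness landscape, with transition probabilities proportional to fitness gains and local optima absorbing. The two 2-locus landscapes are invented; applying them in opposite orders lands the population on different genotypes, illustrating the non-commutativity result.

```python
import numpy as np

def transition_matrix(fitness):
    """Biased random walk: from each genotype, fitter one-mutant neighbours
    are chosen with probability proportional to the fitness gain; genotypes
    with no fitter neighbour (local optima) are absorbing."""
    n = len(fitness)
    L = int(np.log2(n))
    P = np.zeros((n, n))
    for g in range(n):
        nbrs = [g ^ (1 << i) for i in range(L)]
        gains = np.array([max(fitness[m] - fitness[g], 0.0) for m in nbrs])
        if gains.sum() == 0:
            P[g, g] = 1.0          # local optimum: absorbing state
        else:
            for m, w in zip(nbrs, gains):
                P[g, m] = w / gains.sum()
    return P

def evolve(p0, P, steps=200):
    """Push the genotype distribution to (near) absorption under drug P."""
    return p0 @ np.linalg.matrix_power(P, steps)

# Two hypothetical 2-locus landscapes (drug A, drug B), genotypes 00,01,10,11:
fA = np.array([1.0, 1.6, 1.2, 1.4])
fB = np.array([1.0, 1.1, 1.7, 1.3])
p0 = np.array([1.0, 0.0, 0.0, 0.0])
print(evolve(evolve(p0, transition_matrix(fA)), transition_matrix(fB)))  # A then B
print(evolve(evolve(p0, transition_matrix(fB)), transition_matrix(fA)))  # B then A
```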

  11. Transition from positive to neutral in mutation fixation along with continuing rising fitness in thermal adaptive evolution.

    Science.gov (United States)

    Kishimoto, Toshihiko; Iijima, Leo; Tatsumi, Makoto; Ono, Naoaki; Oyake, Ayana; Hashimoto, Tomomi; Matsuo, Moe; Okubo, Masato; Suzuki, Shingo; Mori, Kotaro; Kashiwagi, Akiko; Furusawa, Chikara; Ying, Bei-Wen; Yomo, Tetsuya

    2010-10-21

    It remains to be determined experimentally whether increasing fitness is related to positive selection, while stationary fitness is related to neutral evolution. Long-term laboratory evolution in Escherichia coli was performed under conditions of thermal stress under defined laboratory conditions. The complete cell growth data showed a common continuous fitness recovery after every 2°C or 4°C stepwise temperature upshift, finally resulting in an evolved E. coli strain with an improved upper temperature limit as high as 45.9°C after 523 days of serial transfer, equivalent to 7,560 generations, in minimal medium. Two-phase fitness dynamics, a rapid growth recovery phase followed by a gradually increasing growth phase, was clearly observed at diverse temperatures throughout the entire evolutionary process. Whole-genome sequence analysis revealed a transition from positive to neutral mutation fixation, accompanied by a considerable escalation of the spontaneous substitution rate in the late fitness recovery phase. This suggests that continually increasing fitness did not always result in a reduction of genetic diversity due to sequential takeovers by fit mutants, but instead caused the accumulation of a considerable number of mutations that facilitated neutral evolution.

  12. Combined fit of spectrum and composition data as measured by the Pierre Auger Observatory

    Energy Technology Data Exchange (ETDEWEB)

    Aab, A. [Institute for Mathematics, Astrophysics and Particle Physics (IMAPP), Radboud Universiteit, Nijmegen (Netherlands); Abreu, P.; Andringa, S. [Laboratório de Instrumentação e Física Experimental de Partículas—LIP and Instituto Superior Técnico—IST, Universidade de Lisboa—UL (Portugal); Aglietta, M. [Osservatorio Astrofisico di Torino (INAF), Torino (Italy); Samarai, I. Al [Laboratoire de Physique Nucléaire et de Hautes Energies (LPNHE), Universités Paris 6 et Paris 7, CNRS-IN2P3 (France); Albuquerque, I.F.M. [Universidade de São Paulo, Inst. de Física, São Paulo (Brazil); Allekotte, I. [Centro Atómico Bariloche and Instituto Balseiro (CNEA-UNCuyo-CONICET) (Argentina); Almela, A.; Andrada, B. [Instituto de Tecnologías en Detección y Astropartículas (CNEA, CONICET, UNSAM), Centro Atómico Constituyentes, Comisión Nacional de Energía Atómica (Argentina); Castillo, J. Alvarez [Universidad Nacional Autónoma de México, México (Mexico); Alvarez-Muñiz, J. [Universidad de Santiago de Compostela (Spain); Anastasi, G.A. [Gran Sasso Science Institute (INFN), L' Aquila (Italy); Anchordoqui, L., E-mail: auger_spokespersons@fnal.gov [Department of Physics and Astronomy, Lehman College, City University of New York (United States); and others

    2017-04-01

    We present a combined fit of a simple astrophysical model of UHECR sources to both the energy spectrum and mass composition data measured by the Pierre Auger Observatory. The fit has been performed for energies above 5 × 10^18 eV, i.e. the region of the all-particle spectrum above the so-called 'ankle' feature. The astrophysical model we adopted consists of identical sources uniformly distributed in a comoving volume, where nuclei are accelerated through a rigidity-dependent mechanism. The fit results suggest sources characterized by relatively low maximum injection energies, hard spectra and heavy chemical composition. We also show that uncertainties about physical quantities relevant to UHECR propagation and shower development have a non-negligible impact on the fit results.

  13. arXiv A method and tool for combining differential or inclusive measurements obtained with simultaneously constrained uncertainties

    CERN Document Server

    Kieseler, Jan

    2017-11-22

    A method is discussed that allows combining sets of differential or inclusive measurements. It is assumed that at least one measurement was obtained with simultaneously fitting a set of nuisance parameters, representing sources of systematic uncertainties. As a result of beneficial constraints from the data all such fitted parameters are correlated among each other. The best approach for a combination of these measurements would be the maximization of a combined likelihood, for which the full fit model of each measurement and the original data are required. However, only in rare cases this information is publicly available. In absence of this information most commonly used combination methods are not able to account for these correlations between uncertainties, which can lead to severe biases as shown in this article. The method discussed here provides a solution for this problem. It relies on the public result and its covariance or Hessian, only, and is validated against the combined-likelihood approach. A d...

  14. Aerosol-type retrieval and uncertainty quantification from OMI data

    Science.gov (United States)

    Kauppi, Anu; Kolmonen, Pekka; Laine, Marko; Tamminen, Johanna

    2017-11-01

    We discuss uncertainty quantification for aerosol-type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses precalculated aerosol microphysical models stored in look-up tables (LUTs) and top-of-atmosphere (TOA) spectral reflectance measurements to solve the aerosol characteristics. The forward model approximations cause systematic differences between the modelled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists the selection of the most appropriate LUTs for each individual retrieval. This paper focuses on the aerosol microphysical model selection and characterisation of uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach, in which all uncertainties are described as a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine AOD posterior probability densities of the best-fitting models to obtain an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a certain main aerosol type in order to quantify how plausible it is that it represents the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multiwavelength approach for retrieving the aerosol type and AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites. We found that the uncertainty of AOD expressed by posterior probability distribution reflects the difficulty in model...
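
    The Bayesian model averaging step can be sketched as an evidence-weighted mixture of per-model AOD posteriors. The evidences and the posterior moments below are invented, and equal model priors are assumed.

```python
import numpy as np

def bma_aod(evidences, aod_means, aod_vars):
    """Bayesian model averaging over candidate aerosol microphysical models:
    posterior model probabilities are proportional to the evidences, and the
    averaged AOD posterior is the probability-weighted mixture."""
    w = np.asarray(evidences, float)
    w = w / w.sum()                              # posterior model probabilities
    mean = np.sum(w * aod_means)
    # mixture variance = within-model variance + between-model spread
    var = np.sum(w * (aod_vars + (aod_means - mean) ** 2))
    return mean, np.sqrt(var), w

# Hypothetical evidences and per-model AOD posteriors for three LUT models:
mean, sd, w = bma_aod([0.02, 0.5, 0.3],
                      np.array([0.41, 0.45, 0.52]),
                      np.array([0.003, 0.002, 0.004]))
print(mean, sd, w)
```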

  15. Aerosol-type retrieval and uncertainty quantification from OMI data

    Directory of Open Access Journals (Sweden)

    A. Kauppi

    2017-11-01

    We discuss uncertainty quantification for aerosol-type selection in satellite-based atmospheric aerosol retrieval. The retrieval procedure uses precalculated aerosol microphysical models stored in look-up tables (LUTs) and top-of-atmosphere (TOA) spectral reflectance measurements to solve the aerosol characteristics. The forward model approximations cause systematic differences between the modelled and observed reflectance. Acknowledging this model discrepancy as a source of uncertainty allows us to produce more realistic uncertainty estimates and assists the selection of the most appropriate LUTs for each individual retrieval. This paper focuses on the aerosol microphysical model selection and characterisation of uncertainty in the retrieved aerosol type and aerosol optical depth (AOD). The concept of model evidence is used as a tool for model comparison. The method is based on a Bayesian inference approach, in which all uncertainties are described as a posterior probability distribution. When there is no single best-matching aerosol microphysical model, we use a statistical technique based on Bayesian model averaging to combine AOD posterior probability densities of the best-fitting models to obtain an averaged AOD estimate. We also determine the shared evidence of the best-matching models of a certain main aerosol type in order to quantify how plausible it is that it represents the underlying atmospheric aerosol conditions. The developed method is applied to Ozone Monitoring Instrument (OMI) measurements using a multiwavelength approach for retrieving the aerosol type and AOD estimate with uncertainty quantification for cloud-free over-land pixels. Several larger pixel set areas were studied in order to investigate the robustness of the developed method. We evaluated the retrieved AOD by comparison with ground-based measurements at example sites. We found that the uncertainty of AOD expressed by posterior probability distribution reflects the...

  16. Trial Sequential Methods for Meta-Analysis

    Science.gov (United States)

    Kulinskaya, Elena; Wood, John

    2014-01-01

    Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…

  17. A Global Moving Hotspot Reference Frame: How well it fits?

    Science.gov (United States)

    Doubrovine, P. V.; Steinberger, B.; Torsvik, T. H.

    2010-12-01

    Since the early 1970s, when Jason Morgan proposed that hotspot tracks record the motion of lithosphere over deep-seated mantle plumes, the concept of fixed hotspots has dominated the way we think about absolute plate reconstructions. In the last decade, with compelling evidence for southward drift of the Hawaiian hotspot from paleomagnetic studies, and for relative motion between the Pacific and Indo-Atlantic hotspots from refined plate circuit reconstructions, the perception changed and a global moving hotspot reference frame (GMHRF) was introduced, in which numerical models of mantle convection and advection of plume conduits in the mantle flow were used to estimate hotspot motion. This reference frame showed qualitatively better performance in fitting hotspot tracks globally, but the error analysis and formal estimates of the goodness of fit of the rotations were lacking in this model. Here we present a new generation of the GMHRF, in which updated plate circuit reconstructions and radiometric age data from the hotspot tracks were combined with numerical models of plume motion, and uncertainties of absolute plate rotations were estimated through spherical regression analysis. The overall quality of fit was evaluated using a formal statistical test, by comparing misfits produced by the model with uncertainties assigned to the data. Alternative plate circuit models linking the Pacific plate to the plates of the Indo-Atlantic hemisphere were tested and compared to fixed hotspot models with identical error budgets. Our results show that, with an appropriate choice of the Pacific plate circuit, it is possible to reconcile relative plate motions and modeled motions of mantle plumes globally back to Late Cretaceous time (80 Ma). In contrast, all fixed hotspot models failed to produce acceptable fits for Paleogene to Late Cretaceous time (30-80 Ma), highlighting the significance of relative motion between the Pacific and Indo-Atlantic hotspots during this interval. The...

  18. Sequentially pulsed traveling wave accelerator

    Science.gov (United States)

    Caporaso, George J [Livermore, CA; Nelson, Scott D [Patterson, CA; Poole, Brian R [Tracy, CA

    2009-08-18

    A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.

  19. A simplified model of choice behavior under uncertainty

    Directory of Open Access Journals (Sweden)

    Ching-Hung Lin

    2016-08-01

    The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated the prospect utility (PU) models (Ahn et al., 2008) to be more effective than the EU models in the IGT. Nevertheless, after some preliminary tests, we propose that the Ahn et al. (2008) PU model is not optimal due to some incompatible results between our behavioral and modeling data. This study aims to modify the Ahn et al. (2008) PU model into a simplified model; we collected 145 subjects' IGT performance as benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was found mostly as α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the parameters α, λ, and A have a hierarchical order of influence on the goodness-of-fit of the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted a gain-stay/loss-shift strategy rather than foreseeing the long-term outcome. However, there are still other behavioral variables that are not well revealed under these dynamic uncertainty situations. Therefore, the optimal behavioral models may not yet have been found. In short, the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated.
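
    A minimal sketch of the PU-style valuation and a delta-rule expectancy update of the kind used in such IGT models. The parameter names follow the α, λ, A convention above, but the functional details and the payoffs are illustrative assumptions, not the authors' exact model.

```python
import numpy as np

def pu_utility(x, alpha=0.1, lam=1.5):
    """Prospect-utility valuation: u = x^alpha for gains, -lam*|x|^alpha for
    losses. As alpha -> 0 the curve flattens, the regime where the abstract
    reports the best goodness-of-fit."""
    x = np.asarray(x, float)
    return np.where(x >= 0, np.abs(x) ** alpha, -lam * np.abs(x) ** alpha)

def deck_expectancy(payoffs, alpha=0.1, lam=1.5, A=0.3):
    """Delta-rule learning of a deck expectancy with recency parameter A."""
    E = 0.0
    for u in pu_utility(payoffs, alpha, lam):
        E += A * (u - E)
    return E

# Net payoffs of a 'bad' IGT deck (large gains, occasional heavy loss):
print(deck_expectancy([100, 100, -1150, 100, 100]))
```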

  20. Study of Monte Carlo approach to experimental uncertainty propagation with MSTW 2008 PDFs

    CERN Document Server

    Watt, G.

    2012-01-01

    We investigate the Monte Carlo approach to propagation of experimental uncertainties within the context of the established 'MSTW 2008' global analysis of parton distribution functions (PDFs) of the proton at next-to-leading order in the strong coupling. We show that the Monte Carlo approach using replicas of the original data gives PDF uncertainties in good agreement with the usual Hessian approach using the standard Delta(chi^2) = 1 criterion, then we explore potential parameterisation bias by increasing the number of free parameters, concluding that any parameterisation bias is likely to be small, with the exception of the valence-quark distributions at low momentum fractions x. We motivate the need for a larger tolerance, Delta(chi^2) > 1, by making fits to restricted data sets and idealised consistent or inconsistent pseudodata. Instead of using data replicas, we alternatively produce PDF sets randomly distributed according to the covariance matrix of fit parameters including appropriate tolerance values,...
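
    The replica-versus-Hessian comparison can be illustrated on a toy linear fit, where the two uncertainty estimates agree by construction for Gaussian noise. This is a schematic of the idea only, not of the MSTW machinery or its tolerance criteria.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
sigma = 0.1
y = 1.0 + 2.0 * x + rng.normal(0, sigma, x.size)
X = np.vstack([np.ones_like(x), x]).T

# Hessian-style uncertainty from the least-squares covariance matrix:
cov = sigma**2 * np.linalg.inv(X.T @ X)
hessian_err = np.sqrt(np.diag(cov))

# Monte Carlo approach: refit replicas of the data fluctuated by their errors.
reps = np.array([np.linalg.lstsq(X, y + rng.normal(0, sigma, y.size),
                                 rcond=None)[0] for _ in range(2000)])
mc_err = reps.std(axis=0)
print(hessian_err, mc_err)   # the two agree for this Gaussian, linear toy fit
```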

  1. General methods for analysis of sequential "n-step" kinetic mechanisms: application to single turnover kinetics of helicase-catalyzed DNA unwinding.

    Science.gov (United States)

    Lucius, Aaron L; Maluf, Nasib K; Fischer, Christopher J; Lohman, Timothy M

    2003-10-01

    Helicase-catalyzed DNA unwinding is often studied using "all or none" assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using "n-step" sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the "kinetic step size", m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using "n-step" sequential mechanisms has previously been limited by an inability to float the number of "unwinding steps", n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, f(ss)(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain f(ss)(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation.
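
    The analytic form mentioned above is the regularized incomplete gamma function: for n identical rate-limiting steps of rate k, the fraction of fully unwound duplex is f_ss(t) = γ(n, kt)/Γ(n). A minimal sketch that fits a synthetic single-turnover time course with n = L/m floating continuously; all rate constants, amplitudes, and the duplex length are invented.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gammainc   # regularized lower incomplete gamma

def f_ss(t, n, k, A):
    """'n-step' sequential mechanism with identical rate constants k:
    the Erlang CDF, with the step number n allowed to float continuously."""
    return A * gammainc(n, k * t)

# Synthetic single-turnover time course with noise:
t = np.linspace(0, 30, 60)
rng = np.random.default_rng(2)
data = f_ss(t, 4.0, 0.5, 0.9) + 0.02 * rng.standard_normal(t.size)
(n, k, A), _ = curve_fit(f_ss, t, data, p0=(2.0, 0.3, 1.0))
L_duplex = 24   # basepairs in the duplex (hypothetical)
print(n, k, A, "kinetic step size m ~", L_duplex / n)
```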

  2. An Efficient System Based On Closed Sequential Patterns for Web Recommendations

    OpenAIRE

    Utpala Niranjan; R.B.V. Subramanyam; V-Khana

    2010-01-01

    Sequential pattern mining has, since its introduction, received considerable attention among researchers, with broad applications. Sequential pattern algorithms generally face problems when mining long sequential patterns or when using very low support thresholds. One possible solution to such problems is mining closed sequential patterns, a condensed representation of sequential patterns. Recently, several researchers have utilized sequential pattern discovery for d...

  3. [Using sequential indicator simulation method to define risk areas of soil heavy metals in farmland].

    Science.gov (United States)

    Yang, Hao; Song, Ying Qiang; Hu, Yue Ming; Chen, Fei Xiang; Zhang, Rui

    2018-05-01

    Heavy metals in soil have serious impacts on safety, the ecological environment, and human health due to their toxicity and accumulation. It is necessary to efficiently identify risk areas of heavy metals in farmland soil, which is of great significance for environmental protection, pollution warning, and farmland risk control. We collected 204 samples and analyzed the contents of seven heavy metals (Cu, Zn, Pb, Cd, Cr, As, Hg) in Zengcheng District of Guangzhou, China. To overcome problems with the data, including abnormal values and skewed distributions, as well as the smoothing effect of traditional kriging methods, we used the sequential indicator simulation method (SISIM) to map the spatial distribution of heavy metals, combined with the Hakanson index method to identify potential ecological risk areas of heavy metals in farmland. The results showed that: (1) At similar spatial prediction accuracy for soil heavy metals, the SISIM reproduced local detail better than ordinary kriging at small scales. Compared with indicator kriging, the SISIM had a lower error rate (4.9%-17.1%) in the uncertainty evaluation of heavy-metal risk identification. The SISIM had less smoothing effect and was more suitable for simulating the spatial uncertainty of soil heavy metals and for risk identification. (2) There was no pollution in Zengcheng's farmland. Moderate potential ecological risk was found in the southern part of the study area due to industrial production, human activities, and river sediments. This study combined sequential indicator simulation with the Hakanson risk index method and effectively overcame the outlier information loss and smoothing effect of traditional kriging. It provides a new way to identify heavy-metal risk areas of farmland under uneven sampling.
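
    The Hakanson index used above is straightforward once concentrations and background values are in hand: contamination factor Cf = C/C_ref, per-metal risk Er = Tr * Cf, overall RI = sum of Er. The toxic-response factors Tr below are the commonly tabulated Hakanson values, and the concentrations are invented for illustration.

```python
# Commonly used Hakanson toxic-response factors:
TR = {"Hg": 40, "Cd": 30, "As": 10, "Pb": 5, "Cu": 5, "Cr": 2, "Zn": 1}

def hakanson_ri(conc, background):
    """Per-metal potential ecological risk Er and overall risk index RI."""
    er = {m: TR[m] * conc[m] / background[m] for m in conc}
    return er, sum(er.values())

conc = {"Cu": 25, "Zn": 80, "Pb": 40, "Cd": 0.2, "Cr": 60, "As": 9, "Hg": 0.1}
bkg  = {"Cu": 17, "Zn": 47, "Pb": 36, "Cd": 0.06, "Cr": 64, "As": 9, "Hg": 0.08}
er, ri = hakanson_ri(conc, bkg)
print(er, ri)   # RI below about 150 is usually read as low ecological risk
```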

  4. Evaluation of Uncertainties in the Determination of Phosphorus by RNAA

    International Nuclear Information System (INIS)

    Rick L. Paul

    2000-01-01

    A radiochemical neutron activation analysis (RNAA) procedure for the determination of phosphorus in metals and other materials has been developed and critically evaluated. Uncertainties evaluated as type A include those arising from measurement replication, yield determination, neutron self-shielding, irradiation geometry, measurement of the quantity for concentration normalization (sample mass, area, etc.), and analysis of standards. Uncertainties evaluated as type B include those arising from beta contamination corrections, beta decay curve fitting, and beta self-absorption corrections. The evaluation of uncertainties in the determination of phosphorus is illustrated for three different materials in Table I. The metal standard reference materials (SRMs) 2175 and 861 were analyzed for value assignment of phosphorus; implanted silicon was analyzed to evaluate the technique for certification of phosphorus. The most significant difference in the error evaluation of the three materials lies in the type B uncertainties. The relatively uncomplicated matrix of the high-purity silicon allows virtually complete purification of phosphorus from other beta emitters; hence, minimal contamination correction is needed. Furthermore, because the chemistry is less rigorous, the carrier yield is more reproducible, and self-absorption corrections are less significant. Improvements in the chemical purification procedures for phosphorus in complex matrices will decrease the type B uncertainties for all samples. Uncertainties in the determination of carrier yield, the most significant type A error in the analysis of the silicon, also need to be evaluated more rigorously and minimized in the future
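
    The type A and type B components discussed above are typically combined in quadrature following the GUM; a minimal sketch with invented relative component values:

```python
import math

def combined_standard_uncertainty(type_a, type_b):
    """GUM-style combination: independent type A and type B standard
    uncertainty components add in quadrature; k = 2 gives the expanded
    uncertainty at roughly 95% coverage."""
    u_c = math.sqrt(sum(u**2 for u in type_a + type_b))
    return u_c, 2 * u_c

# Illustrative relative components (%) for an RNAA phosphorus result:
type_a = [0.8, 0.4, 0.3]   # replication, yield determination, standards
type_b = [0.6, 0.5]        # beta contamination correction, self-absorption
print(combined_standard_uncertainty(type_a, type_b))
```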

  5. Effect of Baseflow Separation on Uncertainty of Hydrological Modeling in the Xinanjiang Model

    Directory of Open Access Journals (Sweden)

    Kairong Lin

    2014-01-01

    Based on the idea that supplying more of the available, useful information for evaluation yields less uncertainty, this study focuses on how much uncertainty can be reduced by incorporating baseflow estimates obtained from the smoothed minima method (SMM). The Xinanjiang model and the generalized likelihood uncertainty estimation (GLUE) method with the shuffled complex evolution Metropolis (SCEM-UA) sampling algorithm were used for hydrological modeling and uncertainty analysis, respectively. The Jiangkou basin, located in the upper reaches of the Hanjiang River, was selected as the case study. It was found that the number and standard deviation of behavioral parameter sets both decreased when the threshold value for the baseflow efficiency index increased, and that high Nash-Sutcliffe efficiency coefficients corresponded well with high baseflow efficiency coefficients. The results also showed that the uncertainty interval width decreased significantly, while the containing ratio did not decrease by much and the runoff simulated with the behavioral parameter sets fit the observed runoff better, when a threshold for the baseflow efficiency index was taken into consideration. This implies that using baseflow estimation information can reduce the uncertainty in hydrological modeling to some degree and yield more reasonable prediction bounds.
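
    A schematic of GLUE filtering with the extra baseflow criterion described above. `run_model` is a placeholder for the Xinanjiang model (returning total runoff and baseflow series), and the NSE thresholds are illustrative, not the study's values.

```python
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue_with_baseflow(param_sets, run_model, q_obs, qb_obs,
                       thr_q=0.7, thr_qb=0.6):
    """A parameter set is behavioural only if it exceeds NSE thresholds for
    BOTH total runoff and the SMM-separated baseflow; GLUE 95% prediction
    bounds come from the behavioural ensemble."""
    sims = [run_model(p) for p in param_sets]          # each -> (runoff, baseflow)
    keep = [s for s in sims
            if nse(q_obs, s[0]) > thr_q and nse(qb_obs, s[1]) > thr_qb]
    q_beh = np.array([s[0] for s in keep])
    lower, upper = np.percentile(q_beh, [2.5, 97.5], axis=0)
    return len(keep), lower, upper
```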

  6. Inside the fitness for work consultation: a qualitative study.

    Science.gov (United States)

    Cohen, D A; Aylward, M; Rollnick, S

    2009-08-01

    Evidence now suggests that work is generally good for physical and mental health and well-being. Worklessness, for whatever reason, can lead to poorer physical and mental health. The role of the general practitioner (GP) in the management of fitness for work is pivotal. To understand the interaction between GP and patient in the fitness for work consultation. This study forms part of a larger research project to develop a learning programme for GPs around the fitness for work consultation based on behaviour change methodology. A qualitative study set in South Wales. Structured discussion groups with seven GPs. Two sessions, each lasting 3 h, were conducted to explore the GP and patient interaction around the fitness for work consultation. Multiple methods were used to enhance engagement. Thematic analysis was used to analyse the data. Four major themes emerged from the meetings: role legitimacy, negotiation, managing the patient and managing the systems. Within these, subthemes around role legitimacy emerged: 'It's not my job', 'It's not what I trained for' and the 'shifting agenda'. Negotiation was likened to 'a polite tug of war', and subthemes around decision making, managing the agenda and dealing with uncertainty emerged. This study starts to unravel the complexity of the fitness for work consultation. It illustrates how GPs struggle with the 'importance' of their role and their 'confidence' in managing the fitness for work consultation. It addresses the skillful negotiation that is required to manage the consultation effectively.

  7. Best-estimate reactor core monitor using state feedback strategies to resolve uncertainties

    International Nuclear Information System (INIS)

    Martin, R.P.

    1997-01-01

    The development and demonstration of a new algorithm for quantifying uncertainty in best-estimate simulation codes has been investigated. Demonstration is given by way of a prototype reactor core monitor. The architecture of this monitor integrates a distributed parameter estimation technique and the infrastructure required to support this control theory-based algorithm into a production-grade best-estimate simulation code. The Kalman filter with the sequential least-squares parameter estimation algorithm has been extended for application into the computational environment of a best-estimate simulation code, i.e., RELAP5/DOE. In control system terminology this configuration can be thought of as a best-estimate observer
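
    The sequential least-squares estimator referred to above can be sketched as a recursive least-squares update in Kalman-filter form. The regressors and true parameters below are synthetic; the production coupling into RELAP5/DOE is of course far more involved.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One sequential least-squares (Kalman-filter form) update of the
    parameter estimate theta given regressor phi and new observation y;
    lam is an optional forgetting factor."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)            # gain
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam                    # covariance update
    return theta, P

# Recover y = 2*x1 - 0.5*x2 from streaming noisy data:
rng = np.random.default_rng(3)
theta, P = np.zeros(2), np.eye(2) * 100.0
for _ in range(200):
    phi = rng.standard_normal(2)
    y = phi @ np.array([2.0, -0.5]) + 0.05 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta)
```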

  8. Discrimination between sequential and simultaneous virtual channels with electrical hearing.

    Science.gov (United States)

    Landsberger, David; Galvin, John J

    2011-09-01

    In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation modes. For sequential VCs, the interpulse interval (IPI) varied between 0.0 and 1.8 ms. All stimuli were presented at comfortably loud, loudness-balanced levels at a 250 pulse per second per electrode (ppse) stimulation rate. On average, CI subjects were able to reliably discriminate between sequential and simultaneous VCs. While there was no significant effect of IPI or stimulation mode on VC discrimination, some subjects exhibited better VC discrimination with BP + 1 stimulation. Subjects' discrimination between sequential and simultaneous VCs was correlated with electrode discrimination, suggesting that spatial selectivity may influence perception of sequential VCs. To maintain equal loudness, sequential VC amplitudes were nearly double those of simultaneous VCs, presumably resulting in a broader spread of excitation. These results suggest that perceptual differences between simultaneous and sequential VCs might be explained by differences in the spread of excitation. © 2011 Acoustical Society of America

  9. Fitness ranking of individual mutants drives patterns of epistatic interactions in HIV-1.

    Directory of Open Access Journals (Sweden)

    Javier P Martínez

    Fitness interactions between mutations, referred to as epistasis, can strongly impact evolution. For RNA viruses and retroviruses, with their high mutation rates, epistasis may be particularly important for overcoming fitness losses due to the accumulation of deleterious mutations and could thus influence the frequency of mutants in a viral population. As human immunodeficiency virus type 1 (HIV-1) resistance to azidothymidine (AZT) requires selection of sequential mutations, it is a good system for studying the impact of epistasis. Here we present a thorough analysis of a classical AZT-resistance pathway (the 41-215 cluster) of HIV-1 variants by fitness measurements in single-round infection assays covering physiological drug concentrations ex vivo. The sign and value of epistasis varied and did not predict the epistatic effect on the mutant frequency. This complex behavior is explained by the fitness ranking of the variants, which strongly depends on environmental factors, i.e., the presence or absence of drugs and the host cells used. Although some interactions compensate fitness losses, the observed small effect on the relative mutant frequencies suggests that epistasis might be inefficient as a buffering mechanism for fitness losses in vivo. While the use of epistasis-based hypotheses to make general assumptions about the evolutionary dynamics of viral populations is appealing, our data caution against their interpretation without further knowledge of the characteristics of the viral mutant spectrum under different environmental conditions.
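
    For fitness interactions like those discussed above, a common summary is multiplicative epistasis computed directly from measured fitness values. The relative fitness numbers below are hypothetical placeholders for single and double mutants in the 41/215 cluster.

```python
import math

def epistasis(w00, w10, w01, w11):
    """Multiplicative epistasis between two mutations from measured fitness:
    eps = log(w11 * w00 / (w10 * w01)); positive (negative) sign means the
    double mutant is fitter (less fit) than the null expectation."""
    return math.log((w11 * w00) / (w10 * w01))

# Hypothetical relative fitness of wild type, two single mutants, and the
# double mutant in the presence of drug:
print(epistasis(w00=1.0, w10=1.3, w01=1.2, w11=2.0))  # > 0: positive epistasis
```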

  10. Sequential versus simultaneous market delineation

    DEFF Research Database (Denmark)

    Haldrup, Niels; Møllgaard, Peter; Kastberg Nielsen, Claus

    2005-01-01

    Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential. First the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension ... and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that compared to a sequential approach conclusions will be reversed. JEL: C3, K21, L41, Q22. Keywords: Relevant market, econometric delineation

  11. On-orbit servicing system assessment and optimization methods based on lifecycle simulation under mixed aleatory and epistemic uncertainties

    Science.gov (United States)

    Yao, Wen; Chen, Xiaoqian; Huang, Yiyong; van Tooren, Michel

    2013-06-01

    To assess the on-orbit servicing (OOS) paradigm and optimize its utilities by taking advantage of its inherent flexibility and responsiveness, the OOS system assessment and optimization methods based on lifecycle simulation under uncertainties are studied. The uncertainty sources considered in this paper include both the aleatory (random launch/OOS operation failure and on-orbit component failure) and the epistemic (the unknown trend of the end-used market price) types. Firstly, the lifecycle simulation under uncertainties is discussed. The chronological flowchart is presented. The cost and benefit models are established, and the uncertainties thereof are modeled. The dynamic programming method to make optimal decision in face of the uncertain events is introduced. Secondly, the method to analyze the propagation effects of the uncertainties on the OOS utilities is studied. With combined probability and evidence theory, a Monte Carlo lifecycle Simulation based Unified Uncertainty Analysis (MCS-UUA) approach is proposed, based on which the OOS utility assessment tool under mixed uncertainties is developed. Thirdly, to further optimize the OOS system under mixed uncertainties, the reliability-based optimization (RBO) method is studied. To alleviate the computational burden of the traditional RBO method which involves nested optimum search and uncertainty analysis, the framework of Sequential Optimization and Mixed Uncertainty Analysis (SOMUA) is employed to integrate MCS-UUA, and the RBO algorithm SOMUA-MCS is developed. Fourthly, a case study on the OOS system for a hypothetical GEO commercial communication satellite is investigated with the proposed assessment tool. Furthermore, the OOS system is optimized with SOMUA-MCS. Lastly, some conclusions are given and future research prospects are highlighted.

  12. A method and tool for combining differential or inclusive measurements obtained with simultaneously constrained uncertainties

    Science.gov (United States)

    Kieseler, Jan

    2017-11-01

    A method is discussed that allows combining sets of differential or inclusive measurements. It is assumed that at least one measurement was obtained with simultaneously fitting a set of nuisance parameters, representing sources of systematic uncertainties. As a result of beneficial constraints from the data all such fitted parameters are correlated among each other. The best approach for a combination of these measurements would be the maximization of a combined likelihood, for which the full fit model of each measurement and the original data are required. However, only in rare cases this information is publicly available. In absence of this information most commonly used combination methods are not able to account for these correlations between uncertainties, which can lead to severe biases as shown in this article. The method discussed here provides a solution for this problem. It relies on the public result and its covariance or Hessian, only, and is validated against the combined-likelihood approach. A dedicated software package implementing this method is also presented. It provides a text-based user interface alongside a C++ interface. The latter also interfaces to ROOT classes for simple combination of binned measurements such as differential cross sections.
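
    As a simplified illustration of combining correlated public results, the sketch below applies a BLUE-style generalized-least-squares average given only the central values and their covariance. It does not reproduce the paper's treatment of simultaneously fitted nuisance parameters, and the numbers are invented.

```python
import numpy as np

def combine(y, C):
    """Best linear unbiased combination of measurements y of one quantity,
    given their full covariance C (statistical plus correlated systematic)."""
    Cinv = np.linalg.inv(C)
    ones = np.ones(len(y))
    w = Cinv @ ones / (ones @ Cinv @ ones)   # combination weights
    xhat = w @ y
    var = 1.0 / (ones @ Cinv @ ones)
    return xhat, np.sqrt(var)

# Two measurements with a correlated systematic uncertainty (rho = 0.8):
y = np.array([172.5, 173.3])
stat = np.array([0.5, 0.7])
syst = np.array([0.6, 0.6])
rho = 0.8
C = np.diag(stat**2) + np.outer(syst, syst) * np.array([[1, rho], [rho, 1]])
print(combine(y, C))
```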

  13. A method and tool for combining differential or inclusive measurements obtained with simultaneously constrained uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Kieseler, Jan [CERN, Geneva (Switzerland)

    2017-11-15

    A method is discussed that allows combining sets of differential or inclusive measurements. It is assumed that at least one measurement was obtained with simultaneously fitting a set of nuisance parameters, representing sources of systematic uncertainties. As a result of beneficial constraints from the data all such fitted parameters are correlated among each other. The best approach for a combination of these measurements would be the maximization of a combined likelihood, for which the full fit model of each measurement and the original data are required. However, only in rare cases this information is publicly available. In absence of this information most commonly used combination methods are not able to account for these correlations between uncertainties, which can lead to severe biases as shown in this article. The method discussed here provides a solution for this problem. It relies on the public result and its covariance or Hessian, only, and is validated against the combined-likelihood approach. A dedicated software package implementing this method is also presented. It provides a text-based user interface alongside a C++ interface. The latter also interfaces to ROOT classes for simple combination of binned measurements such as differential cross sections. (orig.)

  14. Sequential sampling and biorational chemistries for management of lepidopteran pests of vegetable amaranth in the Caribbean.

    Science.gov (United States)

    Clarke-Harris, Dionne; Fleischer, Shelby J

    2003-06-01

    Although the production and economic importance of vegetable amaranth, Amaranthus viridis L. and A. dubius Mart. ex Thell., are increasing on diversified peri-urban farms in Jamaica, lepidopteran herbivory is common even during weekly pyrethroid applications. We developed and validated a sampling plan, and investigated insecticides with new modes of action, for a complex of five species (Pyralidae: Spoladea recurvalis (F.), Herpetogramma bipunctalis (F.); Noctuidae: Spodoptera exigua (Hubner), S. frugiperda (J. E. Smith), and S. eridania Stoll). Significant within-plant variation occurred with H. bipunctalis, and a six-leaf sample unit including leaves from the inner and outer whorl was selected to sample all species. Larval counts best fit a negative binomial distribution. We developed a sequential sampling plan using a threshold of one larva per sample unit and the fitted distribution with a k(c) of 0.645. When compared with a fixed plan of 25 plants, sequential sampling recommended the same management decision on 87.5%, additional samples on 9.4%, and gave inaccurate recommendations on 3.1% of 32 farms, while reducing sample size by 46%. Insecticide application frequency was reduced 33-60% when management decisions were based on sampled data compared with grower standards, with no effect on crop damage. Damage remained high or variable (10-46%) with pyrethroid applications. Lepidopteran control was dramatically improved with ecdysone agonists (tebufenozide) or microbial metabolites (spinosyns and emamectin benzoate). This work facilitates resistance management efforts concurrent with the introduction of newer modes of action for lepidopteran control in leafy vegetable production in the Caribbean.
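
    As a rough illustration of such a sequential plan, the sketch below runs a Wald SPRT on per-sample larval counts modelled as negative binomial with the common clumping parameter k = 0.645 quoted above. The hypothesized means, error rates, and counts are hypothetical; the paper's exact stop lines may differ.

```python
import numpy as np
from scipy.stats import nbinom

def sprt_nb(counts, m0=0.5, m1=1.0, k=0.645, alpha=0.1, beta=0.1):
    """Wald SPRT for pest counts following a negative binomial with common
    clumping parameter k, testing mean m0 (below threshold) against m1
    (at the action threshold of one larva per sample unit)."""
    # scipy's nbinom uses (n, p) with mean = n*(1-p)/p; here n = k, p = k/(k+m)
    def logpmf(x, m):
        return nbinom.logpmf(x, k, k / (k + m))
    lo, hi = np.log(beta / (1 - alpha)), np.log((1 - beta) / alpha)
    llr = 0.0
    for n, x in enumerate(counts, 1):
        llr += logpmf(x, m1) - logpmf(x, m0)
        if llr >= hi:
            return ("treat", n)
        if llr <= lo:
            return ("no treatment", n)
    return ("keep sampling", len(counts))

print(sprt_nb([0, 1, 0, 2, 1, 1, 0, 1, 3, 1]))
```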

  15. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT mea...

  16. Sequential logic analysis and synthesis

    CERN Document Server

    Cavanagh, Joseph

    2007-01-01

    Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinatorial logic and Boolean algebra...

  17. Risk-based flood protection planning under climate change and modeling uncertainty: a pre-alpine case study

    Directory of Open Access Journals (Sweden)

    B. Dittes

    2018-05-01

    Planning authorities are faced with a range of questions when planning flood protection measures: is the existing protection adequate for current and future demands or should it be extended? How will flood patterns change in the future? How should the uncertainty pertaining to this influence the planning decision, e.g., for delaying planning or including a safety margin? Is it sufficient to follow a protection criterion (e.g., to protect from the 100-year flood) or should the planning be conducted in a risk-based way? How important is it for flood protection planning to accurately estimate flood frequency (changes), costs and damage? These are questions that we address for a medium-sized pre-alpine catchment in southern Germany, using a sequential Bayesian decision making framework that quantitatively addresses the full spectrum of uncertainty. We evaluate different flood protection systems considered by local agencies in a test study catchment. Despite large uncertainties in damage, cost and climate, the recommendation is robust for the most conservative approach. This demonstrates the feasibility of making robust decisions under large uncertainty. Furthermore, by comparison to a previous study, it highlights the benefits of risk-based planning over the planning of flood protection to a prescribed return period.

  18. Risk-based flood protection planning under climate change and modeling uncertainty: a pre-alpine case study

    Science.gov (United States)

    Dittes, Beatrice; Kaiser, Maria; Špačková, Olga; Rieger, Wolfgang; Disse, Markus; Straub, Daniel

    2018-05-01

    Planning authorities are faced with a range of questions when planning flood protection measures: is the existing protection adequate for current and future demands or should it be extended? How will flood patterns change in the future? How should the uncertainty pertaining to this influence the planning decision, e.g., for delaying planning or including a safety margin? Is it sufficient to follow a protection criterion (e.g., to protect from the 100-year flood) or should the planning be conducted in a risk-based way? How important is it for flood protection planning to accurately estimate flood frequency (changes), costs and damage? These are questions that we address for a medium-sized pre-alpine catchment in southern Germany, using a sequential Bayesian decision making framework that quantitatively addresses the full spectrum of uncertainty. We evaluate different flood protection systems considered by local agencies in a test study catchment. Despite large uncertainties in damage, cost and climate, the recommendation is robust for the most conservative approach. This demonstrates the feasibility of making robust decisions under large uncertainty. Furthermore, by comparison to a previous study, it highlights the benefits of risk-based planning over the planning of flood protection to a prescribed return period.
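
    To make the risk-based logic concrete, here is a toy expected-cost comparison of protection options under posterior uncertainty in a flood-frequency parameter. The exponential tail model, the gamma posterior, and all numbers are invented for illustration and are not from the study.

```python
import numpy as np

def expected_total_cost(invest_cost, protection_level, damage, lam_samples,
                        horizon=50, rate=0.02):
    """Investment cost plus discounted expected flood damage: the annual
    failure probability is the chance that the yearly peak exceeds the
    protection level, averaged over posterior samples of the frequency
    parameter lam (epistemic uncertainty)."""
    years = np.arange(1, horizon + 1)
    disc = (1 + rate) ** -years
    p_fail = np.exp(-protection_level / lam_samples)[:, None]  # toy exponential tail
    risk = (p_fail * damage * disc).sum(axis=1)                # per posterior sample
    return invest_cost + risk.mean()

rng = np.random.default_rng(4)
lam = rng.gamma(20, 5, 2000)   # posterior samples of the scale parameter
for name, cost, level in [("status quo", 0.0, 300), ("dike raise", 30.0, 450)]:
    print(name, expected_total_cost(cost, level, damage=500.0, lam_samples=lam))
```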

  19. Sequential Design of Experiments to Maximize Learning from Carbon Capture Pilot Plant Testing

    Energy Technology Data Exchange (ETDEWEB)

    Soepyan, Frits B.; Morgan, Joshua C.; Omell, Benjamin P.; Zamarripa-Perez, Miguel A.; Matuszewski, Michael S.; Miller, David C.

    2018-02-06

    Pilot plant test campaigns can be expensive and time-consuming. Therefore, it is of interest to maximize the amount of learning and the efficiency of the test campaign given the limited number of experiments that can be conducted. This work investigates the use of sequential design of experiments (SDOE) to overcome these challenges by demonstrating its usefulness for a recent solvent-based CO2 capture plant test campaign. Unlike traditional design of experiments methods, SDOE regularly uses information from ongoing experiments to determine the optimum locations in the design space for subsequent runs within the same experiment. However, there are challenges that need to be addressed, including reducing the high computational burden to efficiently update the model, and the need to incorporate the methodology into a computational tool. We address these challenges by applying SDOE in combination with a software tool, the Framework for Optimization, Quantification of Uncertainty and Surrogates (FOQUS) (Miller et al., 2014a, 2016, 2017). The results of applying SDOE on a pilot plant test campaign for CO2 capture suggests that relative to traditional design of experiments methods, SDOE can more effectively reduce the uncertainty of the model, thus decreasing technical risk. Future work includes integrating SDOE into FOQUS and using SDOE to support additional large-scale pilot plant test campaigns.
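
    A minimal sketch of one common SDOE criterion, picking the next run where a Gaussian-process surrogate is most uncertain. This is not the specific FOQUS implementation; the toy response surface, the scaled inputs, and the kernel settings are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_run(X_done, y_done, candidates):
    """One SDOE step: refit a Gaussian-process surrogate to the runs so far
    and pick the candidate condition with the largest predictive standard
    deviation, i.e. where the model is most uncertain."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3)
    gp.fit(X_done, y_done)
    _, sd = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(sd)], sd.max()

# Toy capture response over (solvent flow, reboiler duty), scaled to [0, 1]:
rng = np.random.default_rng(5)
f = lambda X: np.sin(3 * X[:, 0]) + X[:, 1] ** 2
X = rng.random((6, 2))
y = f(X)
candidates = rng.random((200, 2))
print(next_run(X, y, candidates))
```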

  20. A rigorous methodology for development and uncertainty analysis of group contribution based property models

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan

    ... weighted-least-squares regression. 3) Initialization of the estimation using linear algebra to provide a first guess. 4) Sequential and simultaneous GC parameter estimation using 4 different minimization algorithms. 5) Thorough uncertainty analysis: (a) based on an asymptotic approximation of the parameter covariance matrix; (b) based on the bootstrap method; providing 95%-confidence intervals of parameters and predicted properties. 6) Performance statistics analysis and model application. The application of the methodology is shown for a new GC model built to predict the lower flammability limit (LFL) of refrigerants... their credibility and robustness in wider industrial and scientific applications.
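
    A small sketch of the bootstrap part of step 5: resample data pairs, refit a linear(ised) GC-type model, and take percentile confidence intervals of the parameters. The synthetic group counts and property values are illustrative only.

```python
import numpy as np

def bootstrap_ci(X, y, n_boot=2000, level=95, seed=0):
    """Percentile bootstrap confidence intervals for the parameters of a
    linear(ised) group-contribution model y = X @ theta."""
    rng = np.random.default_rng(seed)
    n = len(y)
    thetas = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)   # resample data pairs with replacement
        thetas.append(np.linalg.lstsq(X[idx], y[idx], rcond=None)[0])
    lo, hi = np.percentile(thetas, [(100 - level) / 2, 100 - (100 - level) / 2],
                           axis=0)
    return lo, hi

# Synthetic "group counts" and property values:
rng = np.random.default_rng(6)
X = rng.integers(0, 4, (40, 3)).astype(float)
y = X @ np.array([2.0, -1.0, 0.5]) + 0.2 * rng.standard_normal(40)
print(bootstrap_ci(X, y))
```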

  1. Track benchmarking method for uncertainty quantification of particle tracking velocimetry interpolations

    International Nuclear Information System (INIS)

    Schneiders, Jan F G; Sciacchitano, Andrea

    2017-01-01

    The track benchmarking method (TBM) is proposed for uncertainty quantification of particle tracking velocimetry (PTV) data mapped onto a regular grid. The method provides statistical uncertainty for a velocity time-series and can, at increased computational cost, also be used to obtain instantaneous uncertainty. Interpolation techniques are typically used to map velocity data from scattered PTV (e.g. tomographic PTV and Shake-the-Box) measurements onto a Cartesian grid. Recent examples of these techniques are the FlowFit and VIC+ methods. The TBM approach estimates the random uncertainty in dense velocity fields by performing the velocity interpolation using a subset of typically 95% of the particle tracks and by considering the remaining tracks as an independent benchmarking reference. In addition, a bias introduced by the interpolation technique is identified. The numerical assessment shows that the approach is accurate when particle trajectories are measured over an extended number of snapshots, typically on the order of 10. When only short particle tracks are available, the TBM estimate overestimates the measurement error; a correction to TBM is proposed and assessed to compensate for this overestimation. The experimental assessment considers the case of a jet flow, processed both by tomographic PIV and by VIC+. The uncertainty obtained by TBM provides a quantitative evaluation of the measurement accuracy and precision and highlights the regions of high error by means of bias and random uncertainty maps. In this way, it is possible to quantify the uncertainty reduction achieved by advanced interpolation algorithms with respect to standard correlation-based tomographic PIV. The use of TBM for uncertainty quantification and comparison of different processing techniques is demonstrated. (paper)
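
    A minimal sketch of the hold-out idea: interpolate 95% of the scattered track samples and score the withheld 5% to estimate bias and random uncertainty. The synthetic flow field and the off-the-shelf linear interpolator are stand-ins for FlowFit/VIC+.

```python
# Track benchmarking: fit on 95% of scattered samples, score the held-out 5%.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(3)
pts = rng.uniform(0, 1, size=(2000, 2))                  # particle positions
u = np.sin(2 * np.pi * pts[:, 0]) + 0.02 * rng.standard_normal(2000)

mask = rng.uniform(size=2000) < 0.95                     # 95% fitting subset
u_hat = griddata(pts[mask], u[mask], pts[~mask], method='linear')

err = u_hat - u[~mask]
err = err[~np.isnan(err)]                                # drop points outside hull
print(f"bias = {err.mean():+.4f}, random uncertainty (std) = {err.std():.4f}")
```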

  2. Statistical method for determining ages of globular clusters by fitting isochrones

    International Nuclear Information System (INIS)

    Flannery, B.P.; Johnson, B.C.

    1982-01-01

    We describe a statistical procedure to compare models of stellar evolution and atmospheres with color-magnitude diagrams of globular clusters. The isochrone depends on five parameters: m-M, age, [Fe/H], Y, and α, but in practice we can only determine m-M and age for an assumed composition. The technique allows us to determine the parameters of the model and their uncertainty, and to assess goodness of fit. We test the method and evaluate the effect of assumptions on an extensive set of Monte Carlo simulations. We apply the method to extensive observations of NGC 6752 and M5, and to smaller data sets for the clusters M3, M5, M15, and M92. We determine age and m-M for two assumed values of helium, Y = (0.2, 0.3), and three values of metallicity with a spread in [Fe/H] of ±0.3 dex. These result in a spread in age of 5-8 Gyr (1 Gyr = 10^9 yr) and a spread in m-M of 0.5 mag. The mean age is generally younger by 2-3 Gyr than previous estimates. The likely uncertainty associated with an individual fit can be as small as 0.4 Gyr. Most importantly, we find that two uncalibratable sources of systematic error make the results suspect: uncertainty in the stellar temperatures induced by the choice of mixing length, and known errors in stellar atmospheres. These effects could reduce age estimates by an additional 5 Gyr. We conclude that observations do not preclude ages as young as 10 Gyr for globular clusters.

  3. Structural Consistency, Consistency, and Sequential Rationality.

    OpenAIRE

    Kreps, David M; Ramey, Garey

    1987-01-01

    Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of stru...

  4. Uncertainty Propagation Analysis for the Monte Carlo Time-Dependent Simulations

    International Nuclear Information System (INIS)

    Shaukata, Nadeem; Shim, Hyung Jin

    2015-01-01

    In this paper, a conventional method to control the neutron population for super-critical systems is implemented. Instead of considering cycles, the simulation is divided into time intervals. At the end of each time interval, neutron population control is applied to the banked neutrons: randomly selected neutrons are discarded until the size of the neutron population matches the number of neutron histories at the beginning of the time simulation. A time-dependent simulation mode has also been implemented in the development version of the SERPENT 2 Monte Carlo code. In this mode, a sequential population control mechanism has been proposed for modeling prompt super-critical systems. A Monte Carlo method has been properly used in the TART code for dynamic criticality calculations. For super-critical systems, the neutron population is allowed to grow over a period of time and is then uniformly combed to return it to the population size at the beginning of the time boundary. In this study, a conventional time-dependent Monte Carlo (TDMC) algorithm is implemented. There is an exponential growth of the neutron population in the estimation of the neutron density tally for super-critical systems, and the number of neutrons being tracked can exceed the memory of the computer. In order to control this exponential growth, a conventional time cut-off population control strategy is applied at the end of each time boundary in TDMC. A scale factor is introduced to tally the desired neutron density at the end of each time boundary. The main purpose of this paper is the quantification of uncertainty propagation in the neutron densities at the end of each time boundary for super-critical systems. This uncertainty is caused by the uncertainty resulting from the introduction of the scale factor. The effectiveness of TDMC is examined for a one-group infinite homogeneous problem (the rod model) and a two-group infinite homogeneous problem. The desired neutron density is tallied by the introduction of...

  5. Uncertainty Propagation Analysis for the Monte Carlo Time-Dependent Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Shaukata, Nadeem; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of)

    2015-10-15

    In this paper, a conventional method to control the neutron population for super-critical systems is implemented. Instead of considering cycles, the simulation is divided into time intervals. At the end of each time interval, neutron population control is applied to the banked neutrons: randomly selected neutrons are discarded until the size of the neutron population matches the number of neutron histories at the beginning of the time simulation. A time-dependent simulation mode has also been implemented in the development version of the SERPENT 2 Monte Carlo code. In this mode, a sequential population control mechanism has been proposed for modeling prompt super-critical systems. A Monte Carlo method has been properly used in the TART code for dynamic criticality calculations. For super-critical systems, the neutron population is allowed to grow over a period of time and is then uniformly combed to return it to the population size at the beginning of the time boundary. In this study, a conventional time-dependent Monte Carlo (TDMC) algorithm is implemented. There is an exponential growth of the neutron population in the estimation of the neutron density tally for super-critical systems, and the number of neutrons being tracked can exceed the memory of the computer. In order to control this exponential growth, a conventional time cut-off population control strategy is applied at the end of each time boundary in TDMC. A scale factor is introduced to tally the desired neutron density at the end of each time boundary. The main purpose of this paper is the quantification of uncertainty propagation in the neutron densities at the end of each time boundary for super-critical systems. This uncertainty is caused by the uncertainty resulting from the introduction of the scale factor. The effectiveness of TDMC is examined for a one-group infinite homogeneous problem (the rod model) and a two-group infinite homogeneous problem. The desired neutron density is tallied by the introduction of...
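
    A minimal sketch of the combing-and-scale-factor bookkeeping described in the two records above: at each time boundary the grown bank is combed back to the starting size, and the accumulated scale factor carries the true (exponentially growing) density into the tally. Growth rate and bank sizes are hypothetical.

```python
# Time-boundary population control with a scale factor (toy illustration).
import numpy as np

rng = np.random.default_rng(4)
N0 = 10_000                 # histories at the start of every time interval
k = 1.25                    # mean population growth per interval (hypothetical)
cumulative_scale = 1.0

bank = N0
for boundary in range(1, 6):
    bank = rng.poisson(k * bank)      # stochastic growth during the interval
    scale = bank / N0                 # scale factor recorded for the tally
    cumulative_scale *= scale
    bank = N0                         # comb: uniform subset back to N0 neutrons
    print(f"boundary {boundary}: density tally factor = {cumulative_scale:.3f}")
```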

  6. A comparison of approaches in fitting continuum SEDs

    International Nuclear Information System (INIS)

    Liu Yao; Wang Hong-Chi; Madlener David; Wolf Sebastian

    2013-01-01

    We present a detailed comparison of two approaches, the use of a pre-calculated database and simulated annealing (SA), for fitting the continuum spectral energy distribution (SED) of astrophysical objects whose appearance is dominated by surrounding dust. While pre-calculated databases are commonly used to model SED data, only a few studies to date have employed SA, due to its unclear accuracy and convergence time for this specific problem. From a methodological point of view, different approaches lead to different fitting quality, demands on computational resources, and calculation time. We compare the fitting quality and computational costs of these two approaches for the task of SED fitting, to provide the practitioner with a guide to finding a compromise between desired accuracy and available resources. To reduce uncertainties inherent to real datasets, we introduce a reference model resembling a typical circumstellar system with 10 free parameters. We derive the SED of the reference model with our code MC3D at 78 logarithmically distributed wavelengths in the range [0.3 μm, 1.3 mm] and use this setup to simulate SEDs for the database and SA. Our result directly demonstrates the applicability of SA in the field of SED modeling, since the algorithm regularly finds better solutions to the optimization problem than a pre-calculated database. As both methods have advantages and shortcomings, a hybrid approach is preferable: while the database provides an approximate fit and overall probability distributions for all parameters, deduced using Bayesian analysis, SA can be used to improve upon the results returned by the model grid.
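
    A minimal sketch of the comparison on a toy two-parameter "SED": a coarse pre-calculated grid and simulated annealing minimize the same chi-square, and the annealer is free to land between grid nodes. The model, noise level, and bounds are hypothetical, and scipy's dual_annealing stands in for the paper's SA implementation.

```python
# Grid database vs. simulated annealing on the same toy chi-square objective.
import numpy as np
from scipy.optimize import dual_annealing

wave = np.logspace(-0.5, 3.1, 78)                       # wavelengths (a.u.)

def model(p, w):                                        # hypothetical 2-par model
    T, s = p
    return s * w**-3 / (np.exp(5.0 / (w * T)) - 1.0)

truth = (1.7, 3.0)
rng = np.random.default_rng(5)
data = model(truth, wave) * (1 + 0.05 * rng.standard_normal(wave.size))
chi2 = lambda p: np.sum((model(p, wave) - data) ** 2 / (0.05 * data) ** 2)

# Pre-calculated database: coarse grid over the parameter space.
grid = [(T, s) for T in np.linspace(0.5, 3.0, 20) for s in np.linspace(1.0, 5.0, 20)]
best_db = min(grid, key=chi2)

# Simulated annealing on the same objective, free of the grid spacing.
res = dual_annealing(chi2, bounds=[(0.5, 3.0), (1.0, 5.0)], seed=5)
print(f"database:  {best_db}, chi2 = {chi2(best_db):.1f}")
print(f"annealing: {tuple(res.x.round(3))}, chi2 = {res.fun:.1f}")
```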

  7. Convolution based profile fitting

    International Nuclear Information System (INIS)

    Kern, A.; Coelho, A.A.; Cheary, R.W.

    2002-01-01

    …diffractometers (e.g. BM16 at ESRF and Station 2.3 at Daresbury). In the literature, convolution based profile fitting is normally associated with microstructure analysis, where the sample contribution needs to be separated from the instrument contribution in an observed profile. This is no longer the case: convolution based profile fitting can also be performed on a fully empirical basis, to provide better fits to data and a greater variety of profile shapes. With convolution based profile fitting, virtually any peak shape and its angular dependence can be modelled. The approach may be based on a physical model (FPA) or performed empirically. The quality of fit by convolution is normally better than with other methods, so the uncertainty in derived parameters is reduced. The number of parameters required to describe a pattern is normally smaller than in the 'analytical function approach', and parameter correlation is therefore reduced significantly; increasing profile complexity does not necessarily require an increasing number of parameters. Copyright (2002) Australian X-ray Analytical Association Inc
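
    A minimal sketch of the convolution idea: an observed peak is modelled as a fixed Gaussian instrument kernel numerically convolved with a refinable Lorentzian sample profile, and the convolved shape is fitted directly to the data. Kernel width, profile form, and noise are hypothetical.

```python
# Convolution-based profile fitting of a single peak (toy illustration).
import numpy as np
from scipy.optimize import curve_fit

x = np.linspace(-5, 5, 801)
dx = x[1] - x[0]
instrument = np.exp(-0.5 * (x / 0.3) ** 2)          # fixed instrument kernel
instrument /= instrument.sum() * dx                 # normalize to unit area

def profile(x, x0, gamma, area):
    # Refinable Lorentzian sample profile convolved with the instrument kernel.
    lorentz = (gamma / np.pi) / ((x - x0) ** 2 + gamma**2)
    return area * np.convolve(lorentz, instrument, mode='same') * dx

rng = np.random.default_rng(6)
y = profile(x, 0.4, 0.25, 100.0) + rng.normal(0, 0.3, x.size)
popt, pcov = curve_fit(profile, x, y, p0=[0.0, 0.5, 50.0])
print("x0, gamma, area =", popt.round(3), "+/-", np.sqrt(np.diag(pcov)).round(3))
```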

  8. Bayesian uncertainty quantification in linear models for diffusion MRI.

    Science.gov (United States)

    Sjölund, Jens; Eklund, Anders; Özarslan, Evren; Herberthson, Magnus; Bånkestad, Maria; Knutsson, Hans

    2018-03-29

    Diffusion MRI (dMRI) is a valuable tool in the assessment of tissue microstructure. By fitting a model to the dMRI signal it is possible to derive various quantitative features. Several of the most popular dMRI signal models are expansions in an appropriately chosen basis, where the coefficients are determined using some variation of least-squares. However, such approaches lack any notion of uncertainty, which could be valuable in e.g. group analyses. In this work, we use a probabilistic interpretation of linear least-squares methods to recast popular dMRI models as Bayesian ones. This makes it possible to quantify the uncertainty of any derived quantity. In particular, for quantities that are affine functions of the coefficients, the posterior distribution can be expressed in closed-form. We simulated measurements from single- and double-tensor models where the correct values of several quantities are known, to validate that the theoretically derived quantiles agree with those observed empirically. We included results from residual bootstrap for comparison and found good agreement. The validation employed several different models: Diffusion Tensor Imaging (DTI), Mean Apparent Propagator MRI (MAP-MRI) and Constrained Spherical Deconvolution (CSD). We also used in vivo data to visualize maps of quantitative features and corresponding uncertainties, and to show how our approach can be used in a group analysis to downweight subjects with high uncertainty. In summary, we convert successful linear models for dMRI signal estimation to probabilistic models, capable of accurate uncertainty quantification. Copyright © 2018 Elsevier Inc. All rights reserved.
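
    A minimal sketch of the closed-form machinery the paper relies on: with a Gaussian prior on the basis coefficients and Gaussian noise, the posterior is Gaussian, so any affine-in-coefficients quantity carries an analytic mean and standard deviation. The design matrix and derived quantity below are generic stand-ins, not a specific dMRI basis.

```python
# Bayesian linear regression: Gaussian posterior and an affine derived quantity.
import numpy as np

rng = np.random.default_rng(7)
n, p, sigma = 50, 6, 0.1
X = rng.standard_normal((n, p))                 # design matrix (basis evaluations)
w_true = rng.standard_normal(p)
y = X @ w_true + sigma * rng.standard_normal(n)

tau = 1.0                                       # prior std of coefficients
A = X.T @ X / sigma**2 + np.eye(p) / tau**2     # posterior precision
cov = np.linalg.inv(A)
mean = cov @ X.T @ y / sigma**2                 # posterior mean

a = rng.standard_normal(p)                      # an affine derived quantity a.w
q_mean = a @ mean
q_std = np.sqrt(a @ cov @ a)
print(f"derived quantity: {q_mean:.3f} +/- {q_std:.3f} (posterior mean, std)")
```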

  9. Generalized infimum and sequential product of quantum effects

    International Nuclear Information System (INIS)

    Li Yuan; Sun Xiuhong; Chen Zhengli

    2007-01-01

    The quantum effects for a physical system can be described by the set E(H) of positive operators on a complex Hilbert space H that are bounded above by the identity operator I. For A, B ∈ E(H), the operation of sequential product A∘B = A^(1/2)BA^(1/2) was proposed as a model for sequential quantum measurements. A nice investigation of the properties of the sequential product has been carried out [Gudder, S. and Nagy, G., 'Sequential quantum measurements', J. Math. Phys. 42, 5212 (2001)]. In this note, we extend some results of this reference. In particular, a gap in the proof of Theorem 3.2 in this reference is overcome. In addition, some properties of the generalized infimum A ⊓ B are studied.
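
    A minimal numerical illustration of the definition: for two randomly generated effects, the sequential product A∘B = A^(1/2)BA^(1/2) is again an effect, i.e., its eigenvalues lie in [0, 1]. The dimension and the random construction are arbitrary.

```python
# Sequential product of two random effects on a finite-dimensional space.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(8)

def random_effect(d):
    M = rng.standard_normal((d, d))
    P = M @ M.T                                      # positive semidefinite
    return P / (np.linalg.eigvalsh(P).max() + 1e-9)  # scale so 0 <= P <= I

A, B = random_effect(3), random_effect(3)
seq = sqrtm(A) @ B @ sqrtm(A)                        # A o B = A^(1/2) B A^(1/2)

eigs = np.linalg.eigvalsh(np.real(seq))
print("eigenvalues of A o B:", eigs.round(4))        # again lie in [0, 1]
```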

  10. New fit of thermal neutron constants (TNC) for 233,235U, 239,241Pu and 252Cf(sf): Microscopic vs. Maxwellian data

    Directory of Open Access Journals (Sweden)

    Pronyaev Vladimir G.

    2017-01-01

    An IAEA project to update the Neutron Standards is near completion. Traditionally, the Thermal Neutron Constants (TNC) evaluated by Axton (thermal-neutron scattering, capture and fission on four fissile nuclei, and the total nu-bar of 252Cf(sf)) are used as input in the combined least-square fit with the neutron cross section standards. The evaluation by Axton (1986) was based on a least-square fit of both thermal-spectrum averaged cross sections (Maxwellian data) and microscopic cross sections at 2200 m/s. There is a second Axton evaluation based exclusively on measured microscopic cross sections at 2200 m/s (excluding Maxwellian data). Both evaluations disagree within quoted uncertainties for the fission and capture cross sections and the total multiplicities of the uranium isotopes. Two factors may lead to such a difference: the Westcott g-factors, with estimated 0.2% uncertainties, used in Axton's fit, and the deviation of the thermal spectra from the Maxwellian shape. To exclude or mitigate the impact of these factors, a new combined GMA fit of the standards was undertaken with Axton's TNC evaluation based on 2200 m/s data used as a prior. New microscopic data at the thermal point, available since 1986, were added to the combined fit. Additionally, an independent evaluation of the TNC was undertaken using the CONRAD code. The GMA and CONRAD results are consistent within quoted uncertainties. The new evaluation shows a small increase of the fission and capture thermal cross sections, and a corresponding decrease in the evaluated thermal nu-bar for the uranium isotopes and 239Pu.

  11. Sustainable Multi-Product Seafood Production Planning Under Uncertainty

    International Nuclear Information System (INIS)

    Simanjuntak, Ruth; Mawengkang, Herman; Sembiring, Monalisa; Sinaga, Rani; Pakpahan, Endang J

    2013-01-01

    Multi-product fish production planning produces multiple fish products simultaneously from several classes of raw resources. The goal in sustainable production planning is to meet customer demand over a fixed time horizon, divided into planning periods, by optimizing the tradeoff between economic objectives such as production cost, waste processing cost, and customer satisfaction level. The major decisions are the production and inventory levels for each product and the size of the workforce in each planning period. In this paper we consider the management of a small-scale traditional business in North Sumatera Province that processes fish into several local seafood products. The inherent uncertainty of the data (e.g. demand, fish availability), together with the sequential evolution of the data over time, leads to a nonlinear mixed-integer stochastic programming model for the sustainable production planning problem. We use a scenario generation based approach and feasible neighborhood search for solving the model.

  12. Effects of Input Data Content on the Uncertainty of Simulating Water Resources

    Directory of Open Access Journals (Sweden)

    Carla Camargos

    2018-05-01

    The widely used, partly deterministic Soil and Water Assessment Tool (SWAT) requires a large amount of spatial input data, such as a digital elevation model (DEM), land use, and soil maps. Modelers make an effort to apply the most specific data possible for the study area to reflect the heterogeneous characteristics of landscapes. Regional data, especially with fine resolution, are often preferred. However, such data are not always available and can be computationally demanding. Despite being coarser, global data are usually free and available to the public. Previous studies revealed the importance of different input maps for single investigations. However, it remains unknown whether higher-resolution data lead to more reliable results. This study investigates how global and regional input datasets affect parameter uncertainty when estimating river discharges. We analyze eight different setups of the SWAT model for a catchment in Luxembourg, combining different land-use, elevation, and soil input data. The Metropolis-Hastings Markov chain Monte Carlo (MCMC) algorithm is used to infer posterior model parameter uncertainty. We conclude that our higher-resolution DEM improves the general model performance in reproducing low flows by 10%. The less detailed soil map improved the fit of low flows by 25%. In addition, more detailed land-use maps reduce the bias of the model discharge simulations by 50%. Also, despite presenting similar parameter uncertainty (P-factor ranging from 0.34 to 0.41 and R-factor from 0.41 to 0.45) for all setups, the results show disparate parameter posterior distributions. This indicates that the missing simultaneous assessment of all sources of uncertainty is compensated by the fitted parameter values. We conclude that our results can give some guidance for future SWAT applications in the selection of the degree of detail of the input data.
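
    A minimal sketch of the Metropolis-Hastings machinery used for the posterior inference, reduced to one parameter and a Gaussian likelihood; the "discharge" data, proposal width, and chain length are hypothetical stand-ins for the SWAT calibration setup.

```python
# Metropolis-Hastings sampling of a one-parameter posterior (toy illustration).
import numpy as np

rng = np.random.default_rng(9)
obs = rng.normal(2.0, 0.5, size=100)            # stand-in "discharge" data

def log_post(theta):                            # flat prior + Gaussian likelihood
    return -0.5 * np.sum((obs - theta) ** 2 / 0.5**2)

chain, theta = [], 0.0
lp = log_post(theta)
for _ in range(20_000):
    prop = theta + 0.1 * rng.standard_normal()  # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)

post = np.array(chain[5000:])                   # discard burn-in
print(f"posterior mean = {post.mean():.3f}, "
      f"95% CI = {np.percentile(post, [2.5, 97.5]).round(3)}")
```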

  13. On the evaluation of uncertainties for state estimation with the Kalman filter

    International Nuclear Information System (INIS)

    Eichstädt, S; Makarava, N; Elster, C

    2016-01-01

    The Kalman filter is an established tool for the analysis of dynamic systems with normally distributed noise, and it has been successfully applied in numerous areas. It provides sequentially calculated estimates of the system states along with a corresponding covariance matrix. For nonlinear systems, the extended Kalman filter is often used. This is derived from the Kalman filter by linearization around the current estimate. A key issue in metrology is the evaluation of the uncertainty associated with the Kalman filter state estimates. The ‘Guide to the Expression of Uncertainty in Measurement’ (GUM) and its supplements serve as the de facto standard for uncertainty evaluation in metrology. We explore the relationship between the covariance matrix produced by the Kalman filter and a GUM-compliant uncertainty analysis. In addition, the results of a Bayesian analysis are considered. For the case of linear systems with known system matrices, we show that all three approaches are compatible. When the system matrices are not precisely known, however, or when the system is nonlinear, this equivalence breaks down and different results can then be reached. For precisely known nonlinear systems, though, the result of the extended Kalman filter still corresponds to the linearized uncertainty propagation of the GUM. The extended Kalman filter can suffer from linearization and convergence errors. These disadvantages can be avoided to some extent by applying Monte Carlo procedures, and we propose such a method which is GUM-compliant and can also be applied online during the estimation. We illustrate all procedures in terms of a 2D dynamic system and compare the results with those obtained by particle filtering, which has been proposed for the approximate calculation of a Bayesian solution. Finally, we give some recommendations based on our findings. (paper)
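
    A minimal sketch of the object under discussion: a linear Kalman filter whose update returns both the state estimate and the covariance matrix whose uncertainty interpretation the paper examines. The 1-D constant-velocity system and noise levels are hypothetical.

```python
# Linear Kalman filter: state estimate plus covariance (toy 1-D tracking).
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

rng = np.random.default_rng(10)
x_true = np.array([0.0, 1.0])
x, P = np.zeros(2), np.eye(2)            # initial estimate and covariance

for t in range(20):
    x_true = F @ x_true + rng.multivariate_normal([0, 0], Q)
    z = H @ x_true + rng.normal(0, 0.5, size=1)
    # Predict
    x, P = F @ x, F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print("state estimate:", x.round(3))
print("std. uncertainties:", np.sqrt(np.diag(P)).round(3))
```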

  14. NNPDF2.1: Including heavy quark mass effects in NNPDF fits

    International Nuclear Information System (INIS)

    Guffanti, A.

    2011-01-01

    In this contribution we present the NNPDF2.1 parton distribution function (PDF) set. The NNPDF2.1 set is extracted from a global fit to Deep-Inelastic Scattering (DIS), fixed-target Drell-Yan (DY), electroweak vector boson, and inclusive jet cross-section data from colliders. It is performed using the NNPDF methodology, which relies on Monte Carlo techniques for the determination of uncertainties and on neural networks as unbiased interpolants.

  15. Test and intercomparisons of data fitting with general least squares code GMA versus Bayesian code GLUCS

    International Nuclear Information System (INIS)

    Pronyaev, V.G.

    2003-01-01

    Data fitting with GMA and GLUCS gives consistent results. Differences in the evaluated central values obtained with different formalisms can be related to the general accuracy with which fits can be done in the different formalisms. This difference is stochastic in nature and should be accounted for in the final results of the data evaluation as a small SERC uncertainty. Some shift in the central values of data evaluated with GLUCS and GMA, relative to the central values evaluated with the R-matrix model code RAC, is observed for cases of fitting strongly varying data and is related to the PPP. An evaluation procedure free from PPP should be elaborated. (author)

  16. The global electroweak fit at NNLO and prospects for the LHC and ILC

    International Nuclear Information System (INIS)

    Baak, M.; Hoecker, A.; Cuth, J.; Schott, M.; Haller, J.; Kogler, R.; Moenig, K.; Stelzer, J.

    2014-01-01

    For a long time, global fits of the electroweak sector of the standard model (SM) have been used to exploit measurements of electroweak precision observables at lepton colliders (LEP, SLC), together with measurements at hadron colliders (Tevatron, LHC) and accurate theoretical predictions at multi-loop level, to constrain free parameters of the SM, such as the Higgs and top masses. Today, all fundamental SM parameters entering these fits are experimentally determined, including information on the Higgs couplings, and the global fits are used as powerful tools to assess the validity of the theory and to constrain scenarios for new physics. Future measurements at the Large Hadron Collider (LHC) and the International Linear Collider (ILC) promise to improve the experimental precision of key observables used in the fits. This paper presents updated electroweak fit results using the latest NNLO theoretical predictions and prospects for the LHC and ILC. The impact of experimental and theoretical uncertainties is analysed in detail. We compare constraints from the electroweak fit on the Higgs couplings with direct LHC measurements, and we examine present and future prospects of these constraints using a model with modified couplings of the Higgs boson to fermions and bosons. (orig.)

  17. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2%-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the "Chi-Squared Matrix" or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s. 5 figures
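
    A minimal sketch of the same idea using orthogonal distance regression, which, like the VA02A-based program, treats the standards' masses as uncertain measured quantities rather than exact values. The calibration curve form, error magnitudes, and data are hypothetical stand-ins, not the original code.

```python
# Errors-in-variables calibration fit: mass errors and system errors both weighted.
import numpy as np
from scipy import odr

rng = np.random.default_rng(14)
mass_true = np.linspace(0.1, 1.0, 10)                        # mg, standards
resp_true = 50.0 * mass_true / (1.0 + 0.2 * mass_true)      # toy calibration curve

mass = mass_true * (1 + 0.002 * rng.standard_normal(10))    # 0.2% mass error
resp = resp_true + 0.1 * rng.standard_normal(10)            # system error

model = odr.Model(lambda b, x: b[0] * x / (1.0 + b[1] * x))
data = odr.RealData(mass, resp, sx=0.002 * mass, sy=0.1)    # both error sources
out = odr.ODR(data, model, beta0=[40.0, 0.1]).run()
print("parameters:", out.beta.round(3), "+/-", out.sd_beta.round(3))
```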

  18. Uncertainties in model-independent extractions of amplitudes from complete experiments

    International Nuclear Information System (INIS)

    Hoblit, S.; Sandorfi, A.M.; Kamano, H.; Lee, T.-S.H.

    2012-01-01

    A new generation of over-complete experiments is underway, with the goal of performing a high precision extraction of pseudoscalar meson photo-production amplitudes. Such experimentally determined amplitudes can be used both as a test to validate models and as a starting point for an analytic continuation in the complex plane to search for poles. Of crucial importance for both is the level of uncertainty in the extracted multipoles. We have probed these uncertainties by analyses of pseudo-data for KΛ photoproduction, first for the set of 8 observables that have been published for the K+Λ channel and then for pseudo-data on a complete set of 16 observables with the uncertainties expected from analyses of ongoing CLAS experiments. In fitting multipoles, we have used a combined Monte Carlo sampling of the amplitude space, with gradient minimization, and have found a shallow χ2 valley pitted with a large number of local minima. This results in bands of solutions that are experimentally indistinguishable. All ongoing experiments will measure observables with limited statistics. We have found a dependence on the particular random choice of values of Gaussian-distributed pseudo-data, due to the presence of multiple local minima. This results in actual uncertainties for reconstructed multipoles that are often considerably larger than those returned by gradient minimization routines such as Minuit, which find a single local minimum. As intuitively expected, this additional level of uncertainty decreases as larger numbers of observables are included.

  19. Fitness

    Science.gov (United States)

    From girlshealth.gov (http://www.girlshealth.gov/): Want to look and feel your best? ... What is physical fitness? Physical fitness means you can do everyday ...

  20. Understanding uncertainty

    CERN Document Server

    Lindley, Dennis V

    2013-01-01

    Praise for the First Edition: "...a reference for everyone who is interested in knowing and handling uncertainty." (Journal of Applied Statistics) The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.

  1. Sequential analysis in neonatal research-systematic review.

    Science.gov (United States)

    Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne

    2018-05-01

    As more new drugs are discovered, traditional designs reach their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review, on the US National Library of Medicine and Excerpta Medica databases, of sequential trials involving newborns. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), and the median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose-finding. In the 16 studies reporting sufficient data, the sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 patients (31%) (IQR -4.75 to 136.5, p = 0.0674) with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 patients (57%) (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in neonatology. They can potentially reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs reach their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns have been used infrequently, and only a few (n = 21) are available for analysis. • The sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 patients (31%) (IQR -4.75 to 136.5, p = 0.0674).

  2. Group-sequential analysis may allow for early trial termination

    DEFF Research Database (Denmark)

    Gerke, Oke; Vilstrup, Mie H; Halekoh, Ulrich

    2017-01-01

    BACKGROUND: Group-sequential testing is widely used in pivotal therapeutic, but rarely in diagnostic, research, although it may save studies, time, and costs. The purpose of this paper was to demonstrate a group-sequential analysis strategy in an intra-observer study on quantitative FDG-PET/CT measurements… The differences were assumed to be normally distributed, and sequential one-sided hypothesis tests on the population standard deviation of the differences against a hypothesised value of 1.5 were performed, employing an alpha spending function. The fixed-sample analysis (N = 45) was compared with group-sequential analysis strategies comprising one (at N = 23), two (at N = 15, 30), or three interim analyses (at N = 11, 23, 34), respectively, which were defined post hoc. RESULTS: When performing interim analyses with one third and two thirds of the patients, sufficient agreement could be concluded after the first interim analysis…
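
    A minimal simulation sketch of why interim looks need adjusted critical values, using the textbook Pocock constant for three equally spaced looks (about 2.289 for two-sided alpha = 0.05) rather than the paper's alpha spending function; the look sizes and trial counts are arbitrary.

```python
# Type I error with three interim looks: naive z = 1.96 vs. Pocock boundary.
import numpy as np

rng = np.random.default_rng(11)
n_trials, looks = 100_000, [15, 30, 45]
naive = pocock = 0
for _ in range(n_trials):
    data = rng.standard_normal(45)               # H0 true: mean 0, sd 1
    z = [abs(data[:n].mean()) * np.sqrt(n) for n in looks]
    naive += any(zi > 1.96 for zi in z)          # unadjusted, inflated alpha
    pocock += any(zi > 2.289 for zi in z)        # Pocock-adjusted, ~0.05
print(f"type I error, naive: {naive / n_trials:.3f}, "
      f"Pocock: {pocock / n_trials:.3f}")
```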

  3. Comparison of ablation centration after bilateral sequential versus simultaneous LASIK.

    Science.gov (United States)

    Lin, Jane-Ming; Tsai, Yi-Yu

    2005-01-01

    To compare ablation centration after bilateral sequential and simultaneous myopic LASIK. A retrospective randomized case series was performed of 670 eyes of 335 consecutive patients who had undergone either bilateral sequential (group 1) or simultaneous (group 2) myopic LASIK between July 2000 and July 2001 at the China Medical University Hospital, Taichung, Taiwan. The ablation centrations of the first and second eyes in the two groups were compared 3 months postoperatively. Of 670 eyes, 274 eyes (137 patients) comprised the sequential group and 396 eyes (198 patients) comprised the simultaneous group. Three months postoperatively, 220 eyes of 110 patients (80%) in the sequential group and 236 eyes of 118 patients (60%) in the simultaneous group provided topographic data for centration analysis. For the first eyes, mean decentration was 0.39 +/- 0.26 mm in the sequential group and 0.41 +/- 0.19 mm in the simultaneous group (P = .30). For the second eyes, mean decentration was 0.28 +/- 0.23 mm in the sequential group and 0.30 +/- 0.21 mm in the simultaneous group (P = .36). Decentration in the second eyes significantly improved in both groups (group 1, P = .02; group 2, P …); mean decentration was … in the sequential group and 0.32 +/- 0.18 mm in the simultaneous group (P = .33). The difference of ablation center angles between the first and second eyes was 43.2 … in the sequential group and 45.1 +/- 50.8 degrees in the simultaneous group (P = .42). Simultaneous bilateral LASIK is comparable to sequential surgery in ablation centration.

  4. A Survey of Multi-Objective Sequential Decision-Making

    NARCIS (Netherlands)

    Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

    2013-01-01

    Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives.

  5. Hamiltonian inclusive fitness: a fitter fitness concept.

    Science.gov (United States)

    Costa, James T

    2013-01-01

    In 1963-1964 W. D. Hamilton introduced the concept of inclusive fitness, the only significant elaboration of Darwinian fitness since the nineteenth century. I discuss the origin of the modern fitness concept, providing context for Hamilton's discovery of inclusive fitness in relation to the puzzle of altruism. While fitness conceptually originates with Darwin, the term itself stems from Spencer and crystallized quantitatively in the early twentieth century. Hamiltonian inclusive fitness, with Price's reformulation, provided the solution to Darwin's 'special difficulty'-the evolution of caste polymorphism and sterility in social insects. Hamilton further explored the roles of inclusive fitness and reciprocation to tackle Darwin's other difficulty, the evolution of human altruism. The heuristically powerful inclusive fitness concept ramified over the past 50 years: the number and diversity of 'offspring ideas' that it has engendered render it a fitter fitness concept, one that Darwin would have appreciated.

  6. Sequential lineups: shift in criterion or decision strategy?

    Science.gov (United States)

    Gronlund, Scott D

    2004-04-01

    R. C. L. Lindsay and G. L. Wells (1985) argued that a sequential lineup enhanced discriminability because it elicited use of an absolute decision strategy. E. B. Ebbesen and H. D. Flowe (2002) argued that a sequential lineup led witnesses to adopt a more conservative response criterion, thereby affecting bias, not discriminability. Height was encoded as absolute (e.g., 6 ft [1.83 m] tall) or relative (e.g., taller than). If a sequential lineup elicited an absolute decision strategy, the principle of transfer-appropriate processing predicted that performance should be best when height was encoded absolutely. Conversely, if a simultaneous lineup elicited a relative decision strategy, performance should be best when height was encoded relatively. The predicted interaction was observed, providing direct evidence for the decision strategies explanation of what happens when witnesses view a sequential lineup.

  7. Muon g-2 estimates: can one trust effective Lagrangians and global fits?

    Energy Technology Data Exchange (ETDEWEB)

    Benayoun, M., E-mail: benayoun@in2p3.fr [LPNHE des Universités Paris VI et Paris VII IN2P3/CNRS, 75252, Paris (France); David, P. [LPNHE des Universités Paris VI et Paris VII IN2P3/CNRS, 75252, Paris (France); LIED, Université Paris-Diderot/CNRS UMR 8236, 75013, Paris (France); DelBuono, L. [LPNHE des Universités Paris VI et Paris VII IN2P3/CNRS, 75252, Paris (France); Jegerlehner, F. [Institut für Physik, Humboldt-Universität zu Berlin, Newtonstrasse 15, 12489, Berlin (Germany); Deutsches Elektronen-Synchrotron (DESY), Platanenallee 6, 15738, Zeuthen (Germany)

    2015-12-26

    Previous studies have shown that the Hidden Local Symmetry (HLS) model, supplied with appropriate symmetry breaking mechanisms, provides an effective Lagrangian (Broken Hidden Local Symmetry, BHLS) which encompasses a large number of processes within a unified framework. Based on it, a global fit procedure allows for a simultaneous description of the e{sup +}e{sup -} annihilation into six final states—π{sup +}π{sup -}, π{sup 0}γ, ηγ, π{sup +}π{sup -}π{sup 0}, K{sup +}K{sup -}, K{sub L}K{sub S}—and includes the dipion spectrum in the τ decay and some more light meson decay partial widths. The contribution to the muon anomalous magnetic moment a{sub μ}{sup th} of these annihilation channels over the range of validity of the HLS model (up to 1.05 GeV) is found much improved in comparison to the standard approach of integrating the measured spectra directly. However, because most spectra for the annihilation process e{sup +}e{sup -}→π{sup +}π{sup -} undergo overall scale uncertainties which dominate the other sources, one may suspect some bias in the dipion contribution to a{sub μ}{sup th}, which could question the reliability of the global fit method. However, an iterated global fit algorithm, shown to lead to unbiased results by a Monte Carlo study, is defined and applied successfully to the e{sup +}e{sup -}→π{sup +}π{sup -} data samples from CMD2, SND, KLOE, BaBar, and BESSIII. The iterated fit solution is shown to further improve the prediction for a{sub μ}, which we find to deviate from its experimental value above the 4σ level. The contribution to a{sub μ} of the π{sup +}π{sup -} intermediate state up to 1.05 GeV has an uncertainty about 3 times smaller than the corresponding usual estimate. Therefore, global fit techniques are shown to work and lead to improved unbiased results.

  8. Muon g - 2 estimates. Can one trust effective Lagrangians and global fits?

    Energy Technology Data Exchange (ETDEWEB)

    Benayoun, M.; DelBuono, L. [LPNHE des Universites Paris VI et Paris VII IN2P3/CNRS, Paris (France); David, P. [LPNHE des Universites Paris VI et Paris VII IN2P3/CNRS, Paris (France); LIED, Universite Paris-Diderot/CNRS UMR 8236, Paris (France); Jegerlehner, F. [Humboldt-Universitaet zu Berlin, Institut fuer Physik, Berlin (Germany); Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2015-12-15

    Previous studies have shown that the Hidden Local Symmetry (HLS) model, supplied with appropriate symmetry breaking mechanisms, provides an effective Lagrangian (Broken Hidden Local Symmetry, BHLS) which encompasses a large number of processes within a unified framework. Based on it, a global fit procedure allows for a simultaneous description of the e{sup +}e{sup -} annihilation into six final states - π{sup +}π{sup -}, π{sup 0}γ, ηγ, π{sup +}π{sup -}π{sup 0}, K{sup +}K{sup -}, K{sub L}K{sub S} - and includes the dipion spectrum in the τ decay and some more light meson decay partial widths. The contribution to the muon anomalous magnetic moment a{sub μ}{sup th} of these annihilation channels over the range of validity of the HLS model (up to 1.05 GeV) is found much improved in comparison to the standard approach of integrating the measured spectra directly. However, because most spectra for the annihilation process e{sup +}e{sup -} → π{sup +}π{sup -} undergo overall scale uncertainties which dominate the other sources, one may suspect some bias in the dipion contribution to a{sub μ}{sup th}, which could question the reliability of the global fit method. However, an iterated global fit algorithm, shown to lead to unbiased results by a Monte Carlo study, is defined and applied successfully to the e{sup +}e{sup -} → π{sup +}π{sup -} data samples from CMD2, SND, KLOE, BaBar, and BESSIII. The iterated fit solution is shown to further improve the prediction for a{sub μ}, which we find to deviate from its experimental value above the 4σ level. The contribution to a{sub μ} of the π{sup +}π{sup -} intermediate state up to 1.05 GeV has an uncertainty about 3 times smaller than the corresponding usual estimate. Therefore, global fit techniques are shown to work and lead to improved unbiased results. (orig.)

  9. The Uncertainty of Biomass Estimates from Modeled ICESat-2 Returns Across a Boreal Forest Gradient

    Science.gov (United States)

    Montesano, P. M.; Rosette, J.; Sun, G.; North, P.; Nelson, R. F.; Dubayah, R. O.; Ranson, K. J.; Kharuk, V.

    2014-01-01

    The Forest Light (FLIGHT) radiative transfer model was used to examine the uncertainty of vegetation structure measurements from NASA's planned ICESat-2 photon-counting light detection and ranging (LiDAR) instrument across a synthetic Larix forest gradient in the taiga-tundra ecotone. The simulations demonstrate how measurements from the planned spaceborne mission, which differ from those of previous LiDAR systems, may perform across a boreal forest to non-forest structure gradient in a globally important ecological region of northern Siberia. We used a modified version of FLIGHT to simulate the acquisition parameters of ICESat-2. Modeled returns were analyzed from collections of sequential footprints along LiDAR tracks (link-scales) of lengths ranging from 20 m to 90 m. These link-scales traversed synthetic forest stands that were initialized with parameters drawn from field surveys in Siberian Larix forests. LiDAR returns from vegetation were compiled for 100 simulated LiDAR collections for each 10 Mg ha^-1 interval in the 0-100 Mg ha^-1 above-ground biomass density (AGB) forest gradient. Canopy height metrics were computed and AGB was inferred from empirical models. The root mean square error (RMSE) and RMSE uncertainty associated with the distribution of inferred AGB within each AGB interval across the gradient was examined. Simulation results for the bright-daylight, low-vegetation-reflectivity conditions of photon-counting LiDAR collection with no topographic relief show that 1-2 photons are returned for 79%-88% of LiDAR shots. Signal photons account for approximately 67% of all LiDAR returns, while approximately 50% of shots result in 1 signal photon returned. The proportions of these signal photon returns do not differ significantly (p > 0.05) for AGB intervals greater than 20 Mg ha^-1. The 50 m link-scale approximates the finest horizontal resolution (length) at which photon-counting LiDAR collection provides strong model...

  10. Risk newsboy: approach for addressing uncertainty in developing action levels and cleanup limits

    International Nuclear Information System (INIS)

    Cooke, Roger; MacDonell, Margaret

    2007-01-01

    Site cleanup decisions involve developing action levels and residual limits for key contaminants, to assure health protection during the cleanup period and into the long term. Uncertainty is inherent in the toxicity information used to define these levels, based on incomplete scientific knowledge regarding dose-response relationships across various hazards and exposures at environmentally relevant levels. This problem can be addressed by applying principles used to manage uncertainty in operations research, as illustrated by the newsboy dilemma. Each day a newsboy must balance the risk of buying more papers than he can sell against the risk of not buying enough. Setting action levels and cleanup limits involves a similar concept of balancing and distributing risks and benefits in the face of uncertainty. The newsboy approach can be applied to develop health-based target concentrations for both radiological and chemical contaminants, with stakeholder input being crucial to assessing 'regret' levels. Associated tools include structured expert judgment elicitation to quantify uncertainty in the dose-response relationship, and mathematical techniques such as probabilistic inversion and iterative proportional fitting. (authors)
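
    A minimal sketch of the newsboy balance invoked above: with underage cost cu and overage cost co, the optimal stocking (or cleanup) level sits at the critical fractile cu/(cu + co) of the uncertain quantity's distribution. The costs and the normal demand model are illustrative.

```python
# Newsvendor critical-fractile solution (toy numbers).
from scipy.stats import norm

cu, co = 1.0, 0.5                              # under- vs over-shooting costs
critical_fractile = cu / (cu + co)             # optimal service level
q_star = norm(100, 20).ppf(critical_fractile)  # demand ~ N(100, 20)
print(f"critical fractile = {critical_fractile:.3f}, optimal order = {q_star:.1f}")
```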

  11. An uncertainty inventory demonstration - a primary step in uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2009-01-01

    Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for the assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with the words 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.

  12. Uncertainties of Predictions from Parton Distribution Functions 1, the Lagrange Multiplier Method

    CERN Document Server

    Stump, D R; Brock, R; Casey, D; Huston, J; Kalk, J; Lai, H L; Tung, W K

    2002-01-01

    We apply the Lagrange Multiplier method to study the uncertainties of physical predictions due to the uncertainties of parton distribution functions (PDFs), using the cross section for W production at a hadron collider as an archetypal example. An effective chi-squared function based on the CTEQ global QCD analysis is used to generate a series of PDFs, each of which represents the best fit to the global data for some specified value of the cross section. By analyzing the likelihood of these "alternative hypotheses", using available information on errors from the individual experiments, we estimate that the fractional uncertainty of the cross section due to current experimental input to the PDF analysis is approximately 4% at the Tevatron and 10% at the LHC. We give sets of PDFs corresponding to these up and down variations of the cross section. We also present similar results on Z production at the colliders. Our method can be applied to any combination of physical variables in precision QCD phenomenology, an...
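
    A minimal sketch of the Lagrange multiplier scan on a toy two-parameter chi-square: minimizing chi2 + lam*sigma for a range of multipliers traces out the best attainable chi2 for each value of the derived observable sigma, which is how the uncertainty band is read off. The quadratic chi2 and the observable are hypothetical, not the CTEQ analysis.

```python
# Lagrange multiplier scan: constrained minima of chi2 along a derived observable.
import numpy as np
from scipy.optimize import minimize

chi2 = lambda p: (p[0] - 1) ** 2 + 4 * (p[1] - 2) ** 2 + p[0] * p[1]
sigma = lambda p: p[0] + p[1]                   # derived "cross section"

for lam in np.linspace(-2, 2, 9):
    # Each lam yields the best fit for one attainable value of sigma.
    res = minimize(lambda p: chi2(p) + lam * sigma(p), x0=[1.0, 2.0])
    print(f"lam={lam:+.1f}: sigma={sigma(res.x):.3f}, chi2={chi2(res.x):.3f}")
```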

  13. An exploratory sequential design to validate measures of moral emotions.

    Science.gov (United States)

    Márquez, Margarita G; Delgado, Ana R

    2017-05-01

    This paper presents an exploratory and sequential mixed-methods approach to validating measures of knowledge of the moral emotions of contempt, anger and disgust. The sample comprised 60 participants in the qualitative phase, when a measurement instrument was designed. Item stems, response options and correction keys were planned following the results obtained in a descriptive phenomenological analysis of the interviews. In the quantitative phase, the scale was used with a sample of 102 Spanish participants, and the results were analysed with the Rasch model. In the qualitative phase, salient themes included reasons, objects and action tendencies. In the quantitative phase, good psychometric properties were obtained. The model fit was adequate. However, some changes had to be made to the scale in order to improve the proportion of variance explained. Substantive and methodological implications of this mixed-methods study are discussed. Had the study used a single research method in isolation, aspects of the global understanding of contempt, anger and disgust would have been lost.

  14. A minimax procedure in the context of sequential mastery testing

    NARCIS (Netherlands)

    Vos, Hendrik J.

    1999-01-01

    The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master or a nonmaster, or to continue sampling and administering another random test item. The framework of minimax sequential decision theory

  15. Automatic fitting of Gaussian peaks using abductive machine learning

    Science.gov (United States)

    Abdel-Aal, R. E.

    1998-02-01

    Analytical techniques have been used for many years for fitting Gaussian peaks in nuclear spectroscopy. However, the complexity of the approach warrants looking for machine-learning alternatives where intensive computations are required only once (during training), while actual analysis on individual spectra is greatly simplified and quickened. This should allow the use of simple portable systems for fast and automated analysis of large numbers of spectra, particularly in situations where accuracy may be traded for speed and simplicity. This paper proposes the use of abductive networks machine learning for this purpose. The Abductory Induction Mechanism (AIM) tool was used to build models for analyzing both single and double Gaussian peaks in the presence of noise depicting statistical uncertainties in collected spectra. AIM networks were synthesized by training on 1000 representative simulated spectra and evaluated on 500 new spectra. A classifier network determines the multiplicity of single/double peaks with an accuracy of 98%. With statistical uncertainties corresponding to a peak count of 100, average percentage absolute errors for the height, position, and width of single peaks are 4.9, 2.9, and 4.2%, respectively. For double peaks, these average errors are within 7.0, 3.1, and 5.9%, respectively. Models have been developed which account for the effect of a linear background on a single peak. Performance is compared with a neural network application and with an analytical curve-fitting routine, and the new technique is applied to actual data of an alpha spectrum.

  16. Automatic fitting of Gaussian peaks using abductive machine learning

    International Nuclear Information System (INIS)

    Abdel-Aal, R.E.

    1998-01-01

    Analytical techniques have been used for many years for fitting Gaussian peaks in nuclear spectroscopy. However, the complexity of the approach warrants looking for machine-learning alternatives where intensive computations are required only once (during training), while actual analysis on individual spectra is greatly simplified and quickened. This should allow the use of simple portable systems for fast and automated analysis of large numbers of spectra, particularly in situations where accuracy may be traded for speed and simplicity. This paper proposes the use of abductive networks machine learning for this purpose. The Abductory Induction Mechanism (AIM) tool was used to build models for analyzing both single and double Gaussian peaks in the presence of noise depicting statistical uncertainties in collected spectra. AIM networks were synthesized by training on 1,000 representative simulated spectra and evaluated on 500 new spectra. A classifier network determines the multiplicity of single/double peaks with an accuracy of 98%. With statistical uncertainties corresponding to a peak count of 100, average percentage absolute errors for the height, position, and width of single peaks are 4.9, 2.9, and 4.2%, respectively. For double peaks, these average errors are within 7.0, 3.1, and 5.9%, respectively. Models have been developed which account for the effect of a linear background on a single peak. Performance is compared with a neural network application and with an analytical curve-fitting routine, and the new technique is applied to actual data of an alpha spectrum
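
    A minimal sketch of the conventional curve-fitting baseline that the abductive models are compared against: least-squares fitting of a single Gaussian peak plus linear background to Poisson-noised counts. Peak parameters and statistics are hypothetical.

```python
# Least-squares fit of a Gaussian peak on a linear background (toy spectrum).
import numpy as np
from scipy.optimize import curve_fit

def peak(x, height, pos, width, a, b):
    return height * np.exp(-0.5 * ((x - pos) / width) ** 2) + a * x + b

x = np.arange(0, 100.0)
rng = np.random.default_rng(12)
truth = (100.0, 48.0, 5.0, 0.05, 2.0)
y = rng.poisson(peak(x, *truth)).astype(float)   # counting statistics

popt, pcov = curve_fit(peak, x, y, p0=[80, 50, 4, 0, 1], sigma=np.sqrt(y + 1))
for name, v, e in zip(("height", "pos", "width"), popt, np.sqrt(np.diag(pcov))):
    print(f"{name}: {v:.2f} +/- {e:.2f}")
```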

  17. Recognizing and responding to uncertainty: a grounded theory of nurses' uncertainty.

    Science.gov (United States)

    Cranley, Lisa A; Doran, Diane M; Tourangeau, Ann E; Kushniruk, Andre; Nagle, Lynn

    2012-08-01

    There has been little research to date exploring nurses' uncertainty in their practice. Understanding nurses' uncertainty is important because it has potential implications for how care is delivered. The purpose of this study is to develop a substantive theory to explain how staff nurses experience and respond to uncertainty in their practice. Between 2006 and 2008, a grounded theory study was conducted that included in-depth semi-structured interviews. Fourteen staff nurses working in adult medical-surgical intensive care units at two teaching hospitals in Ontario, Canada, participated in the study. The theory recognizing and responding to uncertainty characterizes the processes through which nurses' uncertainty manifested and how it was managed. Recognizing uncertainty involved the processes of assessing, reflecting, questioning, and/or being unable to predict aspects of the patient situation. Nurses' responses to uncertainty highlighted the cognitive-affective strategies used to manage uncertainty. Study findings highlight the importance of acknowledging uncertainty and having collegial support to manage uncertainty. The theory adds to our understanding the processes involved in recognizing uncertainty, strategies and outcomes of managing uncertainty, and influencing factors. Tailored nursing education programs should be developed to assist nurses in developing skills in articulating and managing their uncertainty. Further research is needed to extend, test and refine the theory of recognizing and responding to uncertainty to develop strategies for managing uncertainty. This theory advances the nursing perspective of uncertainty in clinical practice. The theory is relevant to nurses who are faced with uncertainty and complex clinical decisions, to managers who support nurses in their clinical decision-making, and to researchers who investigate ways to improve decision-making and care delivery. ©2012 Sigma Theta Tau International.

  18. General Methods for Analysis of Sequential “n-step” Kinetic Mechanisms: Application to Single Turnover Kinetics of Helicase-Catalyzed DNA Unwinding

    Science.gov (United States)

    Lucius, Aaron L.; Maluf, Nasib K.; Fischer, Christopher J.; Lohman, Timothy M.

    2003-01-01

    Helicase-catalyzed DNA unwinding is often studied using “all or none” assays that detect only the final product of fully unwound DNA. Even using these assays, quantitative analysis of DNA unwinding time courses for DNA duplexes of different lengths, L, using “n-step” sequential mechanisms, can reveal information about the number of intermediates in the unwinding reaction and the “kinetic step size”, m, defined as the average number of basepairs unwound between two successive rate limiting steps in the unwinding cycle. Simultaneous nonlinear least-squares analysis using “n-step” sequential mechanisms has previously been limited by an inability to float the number of “unwinding steps”, n, and m, in the fitting algorithm. Here we discuss the behavior of single turnover DNA unwinding time courses and describe novel methods for nonlinear least-squares analysis that overcome these problems. Analytic expressions for the time courses, fss(t), when obtainable, can be written using gamma and incomplete gamma functions. When analytic expressions are not obtainable, the numerical solution of the inverse Laplace transform can be used to obtain fss(t). Both methods allow n and m to be continuous fitting parameters. These approaches are generally applicable to enzymes that translocate along a lattice or require repetition of a series of steps before product formation. PMID:14507688
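
    A minimal sketch of the gamma-function form of fss(t): for n identical sequential steps of rate k, the all-or-none signal is the regularized lower incomplete gamma function P(n, kt), in which n (and hence the kinetic step size m = L/n) can float as a continuous fit parameter. The duplex length, rates, and noise below are hypothetical.

```python
# n-step sequential kinetics: fit n and k with n as a continuous parameter.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gammainc

# Regularized lower incomplete gamma P(n, k t) = fraction fully unwound.
fss = lambda t, n, k: gammainc(n, k * t)

t = np.linspace(0, 30, 60)
rng = np.random.default_rng(13)
L, m_true = 24, 4.0                             # duplex length (bp), bp per step
data = fss(t, L / m_true, 0.8) + 0.02 * rng.standard_normal(t.size)

(n_fit, k_fit), _ = curve_fit(fss, t, data, p0=[4.0, 1.0],
                              bounds=([0.1, 0.01], [100.0, 10.0]))
print(f"n = {n_fit:.2f} steps, k = {k_fit:.2f}/s, "
      f"kinetic step size m = {L / n_fit:.2f} bp")
```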

  19. Using cost-benefit concepts in design floods improves communication of uncertainty

    Science.gov (United States)

    Ganora, Daniele; Botto, Anna; Laio, Francesco; Claps, Pierluigi

    2017-04-01

    Flood frequency analysis, i.e. the study of the relationship between the magnitude and the rarity of high flows in a river, is the usual procedure adopted to assess flood hazard, preliminary to the planning/design of flood protection measures. It is based on fitting a probability distribution to the peak discharge values recorded at gauging stations; the final estimates over a region are thus affected by uncertainty, due to limited sample availability and to the possible alternatives in terms of the probabilistic model and the parameter estimation methods used. In the last decade, the scientific community dealt with this issue by developing a number of methods to quantify such uncertainty components. Usually, uncertainty is visually represented through confidence bands, which are easy to understand but have not yet been demonstrated to be useful for design purposes: they tend to disorient decision makers, as the design flood is no longer univocally defined, leaving the decision process undetermined. These considerations motivated the development of the uncertainty-compliant design flood estimator (UNCODE) procedure (Botto et al., 2014), which allows one to select meaningful flood design values accounting for the associated uncertainty by considering additional constraints based on cost-benefit criteria. The method provides an explicit multiplication factor that corrects the traditional design flood estimate (computed without uncertainty) to incorporate the effects of uncertainty at the same safety level. Even though the UNCODE method was developed for design purposes, it can also serve as a powerful and robust tool to help clarify the effects of uncertainty in statistical estimation. As the process produces increased design flood estimates, this outcome demonstrates how uncertainty leads to more expensive flood protection measures, or reveals the insufficiency of current defenses. Moreover, the UNCODE approach can be used to assess the "value" of data, as the costs
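
    The baseline step described above, fitting a distribution to peak discharges and reading off a design quantile with its sampling uncertainty, can be sketched as follows. A Gumbel distribution and a nonparametric bootstrap are illustrative choices; the UNCODE cost-benefit multiplication factor itself is defined in Botto et al. (2014) and is not reproduced here.

```python
# Sketch: T-year design flood from an annual-peak record, with a bootstrap CI.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(1)
peaks = gumbel_r.rvs(loc=500.0, scale=150.0, size=40, random_state=rng)  # synthetic record (m^3/s)

T = 100                                   # return period (years)
p = 1.0 - 1.0 / T                         # non-exceedance probability

loc, scale = gumbel_r.fit(peaks)
design_flood = gumbel_r.ppf(p, loc, scale)

# Nonparametric bootstrap of the quantile estimate
boot = []
for _ in range(2000):
    resample = rng.choice(peaks, size=peaks.size, replace=True)
    l, s = gumbel_r.fit(resample)
    boot.append(gumbel_r.ppf(p, l, s))
lo, hi = np.percentile(boot, [5, 95])
print(f"Q100 = {design_flood:.0f} m^3/s, 90% CI [{lo:.0f}, {hi:.0f}]")
```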

  20. Technical player profiles related to the physical fitness of young female volleyball players predict team performance.

    Science.gov (United States)

    Dávila-Romero, C; Hernández-Mocholí, M A; García-Hermoso, A

    2015-03-01

    This study is divided into three sequential stages: identification of fitness and game performance profiles (individual player performance), an assessment of the relationship between these profiles, and an assessment of the relationship between individual player profiles and team performance during play (championship performance). The overall study sample comprised 525 female volleyball players (19 teams) aged 12-16 years; a subsample (N=43) selected from the overall sample was used to examine the first two study aims. Anthropometric, fitness and individual player performance (actual game) data were collected in the subsample. These data were analyzed through clustering methods, ANOVA and the chi-square test of independence. We then investigated whether the proportion of players with the highest individual player performance profile might predict a team's results in the championship. Cluster analysis identified three volleyball fitness profiles (high, medium, and low) and two individual player performance profiles (high and low). The results showed a relationship between the two types of profile (fitness and individual player performance). Linear regression then revealed a moderate relationship between the number of players with a high volleyball fitness profile and a team's results in the championship (R2=0.23). The current study findings may enable coaches and trainers to manage training programs more efficiently, obtain tailor-made training, identify volleyball-specific physical fitness training requirements and reach better results during competitions.

  1. Multichannel, sequential or combined X-ray spectrometry

    International Nuclear Information System (INIS)

    Florestan, J.

    1979-01-01

    The strengths and weaknesses of sequential and multichannel X-ray spectrometers are evaluated. The multichannel X-ray spectrometer has the advantage of time coherency and its results can be more reproducible; on the other hand, some spatial incoherency limits low-percentage and trace applications, especially when backgrounds are highly variable. In that case, the sequential X-ray spectrometer regains great usefulness. [fr]

  2. Induction of simultaneous and sequential malolactic fermentation in durian wine.

    Science.gov (United States)

    Taniasuri, Fransisca; Lee, Pin-Rou; Liu, Shao-Quan

    2016-08-02

    This study reports, for the first time, the impact of malolactic fermentation (MLF) induced by Oenococcus oeni and its inoculation strategies (simultaneous vs. sequential) on the fermentation performance as well as the aroma compound profile of durian wine. There was no negative impact of simultaneous inoculation of O. oeni and Saccharomyces cerevisiae on the growth and fermentation kinetics of S. cerevisiae as compared to sequential fermentation. Simultaneous MLF did not lead to an excessive increase in volatile acidity as compared to sequential MLF. The kinetic changes of organic acids (i.e. malic, lactic, succinic, acetic and α-ketoglutaric acids) varied between simultaneous and sequential MLF relative to yeast alone. MLF, regardless of inoculation mode, resulted in higher production of fermentation-derived volatiles as compared to the control (alcoholic fermentation only), including esters, volatile fatty acids, and terpenes, except for higher alcohols. Most indigenous volatile sulphur compounds in durian were decreased to trace levels with little difference among the control, simultaneous and sequential MLF. Among the different wines, the wine with simultaneous MLF had higher concentrations of terpenes and acetate esters, while sequential MLF increased concentrations of medium- and long-chain ethyl esters. Relative to alcoholic fermentation only, both simultaneous and sequential MLF reduced acetaldehyde substantially, with sequential MLF being more effective. These findings illustrate that MLF is an effective and novel way of modulating the volatile and aroma compound profile of durian wine. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Sequential Banking.

    OpenAIRE

    Bizer, David S; DeMarzo, Peter M

    1992-01-01

    The authors study environments in which agents may borrow sequentially from more than one lender. Although debt is prioritized, additional lending imposes an externality on prior debt because, with moral hazard, the probability of repayment of prior loans decreases. Equilibrium interest rates are higher than they would be if borrowers could commit to borrow from at most one bank. Even though the loan terms are less favorable than they would be under commitment, the indebtedness of borrowers i...

  4. Equivalence between quantum simultaneous games and quantum sequential games

    OpenAIRE

    Kobayashi, Naoki

    2007-01-01

    A framework for discussing relationships between different types of games is proposed. Within the framework, quantum simultaneous games, finite quantum simultaneous games, quantum sequential games, and finite quantum sequential games are defined. In addition, a notion of equivalence between two games is defined. Finally, the following three theorems are shown: (1) For any quantum simultaneous game G, there exists a quantum sequential game equivalent to G. (2) For any finite quantum simultaneo...

  5. Accounting for Heterogeneous Returns in Sequential Schooling Decisions

    NARCIS (Netherlands)

    Zamarro, G.

    2006-01-01

    This paper presents a method for estimating returns to schooling that takes into account that returns may be heterogeneous among agents and that educational decisions are made sequentially. A sequential decision model is interesting because it explicitly considers that the level of education of each

  6. Simultaneous Versus Sequential Ptosis and Strabismus Surgery in Children.

    Science.gov (United States)

    Revere, Karen E; Binenbaum, Gil; Li, Jonathan; Mills, Monte D; Katowitz, William R; Katowitz, James A

    The authors sought to compare the clinical outcomes of simultaneous versus sequential ptosis and strabismus surgery in children. Retrospective, single-center cohort study of children requiring both ptosis and strabismus surgery on the same eye. Simultaneous surgeries were performed during a single anesthetic event; sequential surgeries were performed at least 7 weeks apart. Outcomes were ptosis surgery success (margin reflex distance 1 ≥ 2 mm, good eyelid contour, and good eyelid crease); strabismus surgery success (ocular alignment within 10 prism diopters of orthophoria and/or improved head position); surgical complications; and reoperations. Fifty-six children were studied: 38 had simultaneous surgery and 18 sequential. Strabismus surgery was performed first in 38/38 simultaneous and 6/18 sequential cases. Mean age at first surgery was 64 months, with a mean follow-up of 27 months. A total of 75% of children had congenital ptosis; 64% had comitant strabismus. A majority of ptosis surgeries were frontalis sling (59%) or Fasanella-Servat (30%) procedures. There were no significant differences between the simultaneous and sequential groups with regard to surgical success rates, complications, or reoperations (all p > 0.28). In this first comparative study of simultaneous versus sequential ptosis and strabismus surgery, no advantage of sequential surgery was seen. Despite a theoretical risk of postoperative eyelid malposition or complications when surgeries were performed in a combined manner, the rate of such outcomes was not increased with simultaneous surgeries. Performing ptosis and strabismus surgery together appears to be clinically effective and safe, and reduces anesthesia exposure during childhood.

  7. Improvements in Spectrum's fit to program data tool.

    Science.gov (United States)

    Mahiane, Severin G; Marsh, Kimberly; Grantham, Kelsey; Crichlow, Shawna; Caceres, Karen; Stover, John

    2017-04-01

    The Joint United Nations Program on HIV/AIDS-supported Spectrum software package (Glastonbury, Connecticut, USA) is used by most countries worldwide to monitor the HIV epidemic. In Spectrum, HIV incidence trends among adults (aged 15-49 years) are derived by either fitting to seroprevalence surveillance and survey data or generating curves consistent with program and vital registration data, such as historical trends in the number of newly diagnosed infections or people living with HIV and AIDS related deaths. This article describes development and application of the fit to program data (FPD) tool in Joint United Nations Program on HIV/AIDS' 2016 estimates round. In the FPD tool, HIV incidence trends are described as a simple or double logistic function. Function parameters are estimated from historical program data on newly reported HIV cases, people living with HIV or AIDS-related deaths. Inputs can be adjusted for proportions undiagnosed or misclassified deaths. Maximum likelihood estimation or minimum chi-squared distance methods are used to identify the best fitting curve. Asymptotic properties of the estimators from these fits are used to estimate uncertainty. The FPD tool was used to fit incidence for 62 countries in 2016. Maximum likelihood and minimum chi-squared distance methods gave similar results. A double logistic curve adequately described observed trends in all but four countries where a simple logistic curve performed better. Robust HIV-related program and vital registration data are routinely available in many middle-income and high-income countries, whereas HIV seroprevalence surveillance and survey data may be scarce. In these countries, the FPD tool offers a simpler, improved approach to estimating HIV incidence trends.
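
    A minimal sketch of the curve-fitting step described above: a simple logistic incidence trend fitted to yearly counts of newly reported cases by Poisson maximum likelihood. The double-logistic option, the minimum chi-squared alternative, and the adjustments for undiagnosed or misclassified cases are omitted; all data and names are illustrative.

```python
# Sketch: fit a simple logistic incidence trend to yearly case counts.
import numpy as np
from scipy.optimize import minimize

def logistic(t, a, t0, b):
    """Simple logistic incidence trend."""
    return a / (1.0 + np.exp(-(t - t0) / b))

years = np.arange(1995.0, 2016.0)
rng = np.random.default_rng(0)
cases = rng.poisson(logistic(years, 1200.0, 2004.0, 2.5))  # synthetic yearly counts

def neg_log_lik(theta):
    lam = logistic(years, *theta)
    return -np.sum(cases * np.log(lam) - lam)  # Poisson log-likelihood up to a constant

res = minimize(neg_log_lik, x0=[cases.max(), years.mean(), 3.0],
               method="L-BFGS-B",
               bounds=[(1.0, None), (years.min(), years.max()), (0.1, 50.0)])
print("fitted (a, t0, b):", res.x)
```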

  8. Forced Sequence Sequential Decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis

    In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme, decoding with good performance is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters.

  9. Impact of the heavy quark matching scales in PDF fits

    Energy Technology Data Exchange (ETDEWEB)

    Bertone, V. [VU Univ., Amsterdam (Netherlands). Dept. of Physics and Astronomy; Nikhef Theory Goup, Amsterdam (Netherlands); Britzger, D. [DESY, Hamburg (Germany); Camarda, S. [CERN, Geneva (Switzerland); Collaboration: The xFitter Developers' Team; and others

    2017-07-15

    We investigate the impact of displaced heavy quark matching scales in a global fit. The heavy quark matching scale μ{sub m} determines at which energy scale μ the QCD theory transitions from N{sub F} to N{sub F}+1 in the Variable Flavor Number Scheme (VFNS) for the evolution of the Parton Distribution Functions (PDFs) and strong coupling α{sub S}(μ). We study the variation of the matching scales, and their impact on a global PDF fit of the combined HERA data. As the choice of the matching scale μ{sub m} effectively is a choice of scheme, this represents a theoretical uncertainty; ideally, we would like to see minimal dependence on this parameter. For the transition across the charm quark (from N{sub F}=3 to 4), we find a large μ{sub m}=μ{sub c} dependence of the global fit χ{sup 2} at NLO, but this is significantly reduced at NNLO. For the transition across the bottom quark (from N{sub F}=4 to 5), we have a reduced μ{sub m}=μ{sub b} dependence of the χ{sup 2} at both NLO and NNLO as compared to the charm. This feature is now implemented in xFitter 2.0.0, an open source QCD fit framework.

  10. Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers

    International Nuclear Information System (INIS)

    Cardoso, Vanderlei

    2002-01-01

    The present work describes several methodologies developed for fitting efficiency curves obtained by means of a HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting has been performed using a segmented polynomial function and applying the Gauss-Marquardt method. To obtain the peak areas, different methodologies were developed for estimating the background area under the peak; this information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source was included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, which is an essential procedure for giving a complete description of the partial uncertainties involved. (author)
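
    In the same spirit as the simple polynomial fits described above, the sketch below fits ln(efficiency) vs. ln(energy) with a low-order polynomial and keeps the coefficient covariance matrix, so interpolated efficiencies carry propagated uncertainties. The calibration energies and efficiencies are illustrative, and this is only one of the variants the work describes.

```python
# Sketch: log-log polynomial efficiency fit with covariance-based uncertainty.
import numpy as np

E = np.array([59.5, 121.8, 344.3, 661.7, 1173.2, 1332.5])   # keV
eff = np.array([0.062, 0.081, 0.041, 0.024, 0.015, 0.013])  # peak efficiencies
sigma_eff = 0.03 * eff                                      # ~3% uncertainties

x, y = np.log(E), np.log(eff)
w = eff / sigma_eff                       # 1/sigma_y, with sigma_y = sigma_eff/eff
coef, cov = np.polyfit(x, y, deg=2, w=w, cov=True)

def efficiency(E0):
    """Interpolated efficiency and its standard uncertainty at energy E0 (keV)."""
    x0 = np.log(E0)
    X = np.array([x0**2, x0, 1.0])        # Jacobian w.r.t. the polynomial coefficients
    y0 = X @ coef
    var_y0 = X @ cov @ X                  # propagate the coefficient covariance
    e0 = np.exp(y0)
    return e0, e0 * np.sqrt(var_y0)       # d(exp y)/dy = exp y

print(efficiency(900.0))
```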

  11. Unbiased determination of polarized parton distributions and their uncertainties

    CERN Document Server

    Ball, Richard D.; Guffanti, Alberto; Nocera, Emanuele R.; Ridolfi, Giovanni; Rojo, Juan

    2013-01-01

    We present a determination of a set of polarized parton distributions (PDFs) of the nucleon, at next-to-leading order, from a global set of longitudinally polarized deep-inelastic scattering data: NNPDFpol1.0. The determination is based on the NNPDF methodology: a Monte Carlo approach, with neural networks used as unbiased interpolants, previously applied to the determination of unpolarized parton distributions, and designed to provide a faithful and statistically sound representation of PDF uncertainties. We present our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the polarized evolution equations and its benchmarking, and the method used to compute physical observables. We review the NNPDF methodology for parametrization and fitting of neural networks, the algorithm used to determine the optimal fit, and its adaptation to the polarized case. We finally present our set of polarized parton distributions. We discuss its statistical properties, ...

  12. A determination of parton distributions with faithful uncertainty estimation

    International Nuclear Information System (INIS)

    Ball, Richard D.; Del Debbio, Luigi; Forte, Stefano; Guffanti, Alberto; Latorre, Jose I.; Piccione, Andrea; Rojo, Juan; Ubiali, Maria

    2009-01-01

    We present the determination of a set of parton distributions of the nucleon, at next-to-leading order, from a global set of deep-inelastic scattering data: NNPDF1.0. The determination is based on a Monte Carlo approach, with neural networks used as unbiased interpolants. This method, previously discussed by us and applied to a determination of the nonsinglet quark distribution, is designed to provide a faithful and statistically sound representation of the uncertainty on parton distributions. We discuss our dataset, its statistical features, and its Monte Carlo representation. We summarize the technique used to solve the evolution equations and its benchmarking, and the method used to compute physical observables. We discuss the parametrization and fitting of neural networks, and the algorithm used to determine the optimal fit. We finally present our set of parton distributions. We discuss its statistical properties, test for its stability upon various modifications of the fitting procedure, and compare it to other recent parton sets. We use it to compute the benchmark W and Z cross sections at the LHC. We discuss issues of delivery and interfacing to commonly used packages such as LHAPDF

  13. Unquestioned answers or unanswered questions: beliefs about science guide responses to uncertainty in climate change risk communication.

    Science.gov (United States)

    Rabinovich, Anna; Morton, Thomas A

    2012-06-01

    In two experimental studies we investigated the effect of beliefs about the nature and purpose of science (classical vs. Kuhnian models of science) on responses to uncertainty in scientific messages about climate change risk. The results revealed a significant interaction between both measured (Study 1) and manipulated (Study 2) beliefs about science and the level of communicated uncertainty on willingness to act in line with the message. Specifically, messages that communicated high uncertainty were more persuasive for participants who shared an understanding of science as debate than for those who believed that science is a search for absolute truth. In addition, participants who had a concept of science as debate were more motivated by higher (rather than lower) uncertainty in climate change messages. The results suggest that achieving alignment between the general public's beliefs about science and the style of the scientific messages is crucial for successful risk communication in science. Accordingly, rather than uncertainty always undermining the effectiveness of science communication, uncertainty can enhance message effects when it fits the audience's understanding of what science is. © 2012 Society for Risk Analysis.

  14. Reading Remediation Based on Sequential and Simultaneous Processing.

    Science.gov (United States)

    Gunnison, Judy; And Others

    1982-01-01

    The theory postulating a dichotomy between sequential and simultaneous processing is reviewed and its implications for remediating reading problems are reviewed. Research is cited on sequential-simultaneous processing for early and advanced reading. A list of remedial strategies based on the processing dichotomy addresses decoding and lexical…

  15. Effect of precipitation spatial distribution uncertainty on the uncertainty bounds of a snowmelt runoff model output

    Science.gov (United States)

    Jacquin, A. P.

    2012-04-01

    goodness of fit of the model realizations. GLUE-type uncertainty bounds during the verification period are derived at the probability levels p=85%, 90% and 95%. Results indicate that, as expected, prediction uncertainty bounds indeed change if precipitation factors FPi are estimated a priori rather than being allowed to vary, but that this change is not dramatic. Firstly, the width of the uncertainty bounds at the same probability level only slightly reduces compared to the case where precipitation factors are allowed to vary. Secondly, the ability to enclose the observations improves, but the decrease in the fraction of outliers is not significant. These results are probably due to the narrow range of variability allowed to the precipitation factors FPi in the first experiment, which implies that although they indicate the shape of the functional relationship between precipitation and height, the magnitude of precipitation estimates were mainly determined by the magnitude of the observations at the available raingauge. It is probable that the situation where no prior information is available on the realistic ranges of variation of the precipitation factors, and the inclusion of precipitation data uncertainty, would have led to a different conclusion. Acknowledgements: This research was funded by FONDECYT, Research Project 1110279.
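
    A generic sketch of the GLUE procedure used above, with a toy one-parameter simulator standing in for the snowmelt runoff model: sample parameters, score each realization with a goodness-of-fit likelihood, retain the behavioral sets, and extract likelihood-weighted percentile bounds. The behavioral threshold, the Nash-Sutcliffe likelihood measure and all numbers are illustrative choices.

```python
# Sketch: GLUE-type uncertainty bounds at the p = 90% level.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)
obs = np.sin(2 * np.pi * t) + 1.5 + rng.normal(0.0, 0.1, t.size)   # "observed" series

def model(theta):
    """Toy stand-in for the snowmelt runoff model."""
    return np.sin(2 * np.pi * t) + theta

# 1. Monte Carlo sampling of the uncertain parameter
thetas = rng.uniform(0.0, 3.0, 5000)
sims = np.array([model(th) for th in thetas])

# 2. Goodness-of-fit likelihood (Nash-Sutcliffe) with a behavioral cutoff
nse = 1.0 - np.sum((sims - obs) ** 2, axis=1) / np.sum((obs - obs.mean()) ** 2)
keep = nse > 0.5
weights = nse[keep] / nse[keep].sum()
behavioral = sims[keep]

# 3. Likelihood-weighted 5%-95% bounds at each time step
bounds = np.empty((2, t.size))
for j in range(t.size):
    order = np.argsort(behavioral[:, j])
    s, cdf = behavioral[order, j], np.cumsum(weights[order])
    bounds[0, j] = np.interp(0.05, cdf, s)
    bounds[1, j] = np.interp(0.95, cdf, s)
print(bounds[:, :3])
```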

  16. Measurement uncertainties in regression analysis with scarcity of data

    International Nuclear Information System (INIS)

    Sousa, J A; Ribeiro, A S; Cox, M G; Harris, P M; Sousa, J F V

    2010-01-01

    The evaluation of measurement uncertainty, in certain fields of science, faces the problem of scarcity of data. This is certainly the case in the testing of geological soils in civil engineering, where tests can take several days or weeks and where the same sample is not available for further testing, being destroyed during the experiment. In this particular study attention will be paid to triaxial compression tests used to typify particular soils. The purpose of the testing is to determine two parameters that characterize the soil, namely, cohesion and friction angle. These parameters are defined in terms of the intercept and slope of a straight line fitted to a small number of points (usually three) derived from experimental data. The use of ordinary least squares to obtain uncertainties associated with estimates of the two parameters would be unreliable if there were only three points (and no replicates) and hence only one degree of freedom.
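
    The fragility of the three-point fit can be made concrete: with three points and two parameters there is a single degree of freedom, so the standard errors (and the corresponding t-based intervals, with a 95% multiplier of about 12.7) are very wide. The stress values below are illustrative.

```python
# Sketch: cohesion and friction angle from an OLS line through three points.
import numpy as np
from scipy.stats import linregress

sigma_n = np.array([100.0, 200.0, 300.0])   # normal stresses (kPa), one per test
tau = np.array([75.0, 131.0, 190.0])        # corresponding shear strengths (kPa)

res = linregress(sigma_n, tau)              # OLS with dof = 3 - 2 = 1
cohesion = res.intercept
phi = np.degrees(np.arctan(res.slope))      # friction angle from the slope

print(f"cohesion = {cohesion:.1f} kPa (se {res.intercept_stderr:.1f})")
print(f"friction angle = {phi:.1f} deg (slope se {res.stderr:.3f})")
```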

  17. Uncertainty analyses of unsaturated zone travel time at Yucca Mountain

    International Nuclear Information System (INIS)

    Nichols, W.E.; Freshley, M.D.

    1993-01-01

    Uncertainty analysis methods can be applied to numerical models of ground-water flow to estimate the relative importance of physical and hydrologic input variables with respect to ground-water travel time. Monte Carlo numerical simulations of unsaturated flow in the Calico Hills nonwelded zeolitic (CHnz) layer at Yucca Mountain, Nevada, indicate that variability in recharge, and to a lesser extent in matrix porosity, explains most of the variability in predictions of water travel time through the unsaturated zone. Variations in saturated hydraulic conductivity and unsaturated curve-fitting parameters were not statistically significant in explaining variability in water travel time through the unsaturated CHnz unit. The results of this study suggest that the large uncertainty associated with recharge rate estimates for the Yucca Mountain site is of concern because the performance of the potential repository would be more sensitive to uncertainty in recharge than to any other parameter evaluated. These results are not exhaustive because of the limited site characterization data available and because of the preliminary nature of this study, which is limited to a single stratigraphic unit, one dimension, and does not account for fracture flow or other potential fast pathways at Yucca Mountain

  18. Quantification of margins and uncertainties: Alternative representations of epistemic uncertainty

    International Nuclear Information System (INIS)

    Helton, Jon C.; Johnson, Jay D.

    2011-01-01

    In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. A previous presentation, 'Quantification of Margins and Uncertainties: Conceptual and Computational Basis,' describes the basic ideas that underlie QMU and illustrates these ideas with two notional examples that employ probability for the representation of aleatory and epistemic uncertainty. The current presentation introduces and illustrates the use of interval analysis, possibility theory and evidence theory as alternatives to the use of probability theory for the representation of epistemic uncertainty in QMU-type analyses. The following topics are considered: the mathematical structure of alternative representations of uncertainty, alternative representations of epistemic uncertainty in QMU analyses involving only epistemic uncertainty, and alternative representations of epistemic uncertainty in QMU analyses involving a separation of aleatory and epistemic uncertainty. Analyses involving interval analysis, possibility theory and evidence theory are illustrated with the same two notional examples used in the presentation indicated above to illustrate the use of probability to represent aleatory and epistemic uncertainty in QMU analyses.

  19. Uncertainty analysis guide

    International Nuclear Information System (INIS)

    Andres, T.H.

    2002-05-01

    This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)
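
    As a minimal illustration of the Monte Carlo representation named in the guide, the sketch below propagates the stated standard uncertainties of two inputs through a placeholder model and reports the resulting combined uncertainty of the output. The model and input distributions are illustrative.

```python
# Sketch: Monte Carlo propagation of input uncertainties through a model.
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

# Random inputs with stated standard uncertainties
k = rng.normal(2.5, 0.1, N)       # e.g. a rate coefficient
T = rng.normal(300.0, 5.0, N)     # e.g. a temperature (K)

def model(k, T):
    return k * np.sqrt(T)         # placeholder computer model

y = model(k, T)
print(f"y = {y.mean():.2f} with combined standard uncertainty {y.std(ddof=1):.2f}")
```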


  1. C-quence: a tool for analyzing qualitative sequential data.

    Science.gov (United States)

    Duncan, Starkey; Collier, Nicholson T

    2002-02-01

    C-quence is a software application that matches sequential patterns of qualitative data specified by the user and calculates the rate of occurrence of these patterns in a data set. Although it was designed to facilitate analyses of face-to-face interaction, it is applicable to any data set involving categorical data and sequential information. C-quence queries are constructed using a graphical user interface. The program does not limit the complexity of the sequential patterns specified by the user.
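
    A minimal analogue of the computation C-quence performs might look like the following: count the (possibly overlapping) occurrences of a user-specified pattern of categorical codes in a coded record and report its rate. The event codes and the pattern are illustrative; the actual program adds a graphical query interface on top of this kind of matching.

```python
# Sketch: rate of occurrence of a sequential pattern of categorical codes.
from typing import Sequence

def pattern_rate(events: Sequence[str], pattern: Sequence[str]) -> float:
    """Rate of (possibly overlapping) occurrences of `pattern` per event."""
    n = sum(
        1
        for i in range(len(events) - len(pattern) + 1)
        if list(events[i : i + len(pattern)]) == list(pattern)
    )
    return n / len(events) if events else 0.0

events = ["gaze", "smile", "speak", "gaze", "smile", "nod", "gaze", "smile", "speak"]
print(pattern_rate(events, ["gaze", "smile", "speak"]))  # 2 matches / 9 events
```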

  2. Gfitter - Revisiting the global electroweak fit of the Standard Model and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Flaecher, H.; Hoecker, A. [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Goebel, M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)]|[Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Haller, J. [Hamburg Univ. (Germany). Inst. fuer Experimentalphysik; Moenig, K.; Stelzer, J. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)]|[Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2008-11-15

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package, Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model, and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and Tevatron are given. Perspectives for future colliders are analysed and discussed. Including the direct Higgs searches, we find M{sub H}=116.4{sup +18.3}{sub -1.3} GeV, and the 2{sigma} and 3{sigma} allowed regions [114,145] GeV and [[113,168] and [180,225

  3. How to sell renewable electricity. Interactions of the intraday and day-ahead market under uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Knaut, Andreas; Obermueller, Frank

    2016-04-15

    Uncertainty about renewable production increases the importance of sequential short-term trading in electricity markets. We consider a two-stage market where conventional and renewable producers compete in order to satisfy the demand of consumers. The trading in the first stage takes place under uncertainty about production levels of renewable producers, which can be associated with trading in the day-ahead market. In the second stage, which we consider as the intraday market, uncertainty about the production levels is resolved. Our model is able to capture different levels of flexibility for conventional producers as well as different levels of competition for renewable producers. We find that it is optimal for renewable producers to sell less than the expected production in the day-ahead market. In situations with high renewable production it is even profitable for renewable producers to withhold quantities in the intraday market. However, for an increasing number of renewable producers, the optimal quantity tends towards the expected production level. More competition as well as a more flexible power plant fleet lead to an increase in overall welfare, which can even be further increased by delaying the gate-closure of the day-ahead market or by improving the quality of renewable production forecasts.
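
    A toy two-stage version of this setting can reproduce the withholding result under market power. In the sketch below the day-ahead price falls linearly with the producer's offer, the intraday price falls with realized total output, and a grid search over offers shows the optimum lying below expected production. The linear inverse-demand form and all parameters are illustrative assumptions, not the paper's model.

```python
# Sketch: optimal day-ahead offer q for a renewable producer with market power.
import numpy as np

rng = np.random.default_rng(4)
W = rng.uniform(0.0, 1.0, 10_000)               # uncertain production scenarios

def expected_profit(q):
    p_da = 50.0 - 20.0 * q                      # day-ahead price falls with the offer
    p_id = 50.0 - 20.0 * W                      # intraday price falls with total output
    return np.mean(p_da * q + p_id * (W - q))   # imbalance W - q settled intraday

qs = np.linspace(0.0, 1.0, 201)
q_star = qs[np.argmax([expected_profit(q) for q in qs])]
print(f"optimal offer q* = {q_star:.2f} vs. expected production {W.mean():.2f}")
```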

  4. Calibration Uncertainties in the Droplet Measurement Technologies Cloud Condensation Nuclei Counter

    Science.gov (United States)

    Hibert, Kurt James

    Cloud condensation nuclei (CCN) serve as the nucleation sites for the condensation of water vapor in Earth's atmosphere and are important for their effect on climate and weather. The influence of CCN on cloud radiative properties (aerosol indirect effect) is the most uncertain of quantified radiative forcing changes that have occurred since pre-industrial times. CCN influence the weather because intrinsic and extrinsic aerosol properties affect cloud formation and precipitation development. To quantify these effects, it is necessary to accurately measure CCN, which requires accurate calibrations using a consistent methodology. Furthermore, the calibration uncertainties are required to compare measurements from different field projects. CCN uncertainties also aid the integration of CCN measurements with atmospheric models. The commercially available Droplet Measurement Technologies (DMT) CCN Counter is used by many research groups, so it is important to quantify its calibration uncertainty. Uncertainties in the calibration of the DMT CCN counter exist in the flow rate and supersaturation values. The concentration depends on the accuracy of the flow rate calibration, which does not have a large (4.3 %) uncertainty. The supersaturation depends on chamber pressure, temperature, and flow rate. The supersaturation calibration is a complex process since the chamber's supersaturation must be inferred from a temperature difference measurement. Additionally, calibration errors can result from the Kohler theory assumptions, fitting methods utilized, the influence of multiply-charged particles, and calibration points used. In order to determine the calibration uncertainties and the pressure dependence of the supersaturation calibration, three calibrations are done at each pressure level: 700, 840, and 980 hPa. Typically 700 hPa is the pressure used for aircraft measurements in the boundary layer, 840 hPa is the calibration pressure at DMT in Boulder, CO, and 980 hPa is the

  5. Impact of Uncertainties in the Cosmological Parameters on the Measurement of Primordial non-Gaussianity

    CERN Document Server

    Liguori, M

    2008-01-01

    We study the impact of cosmological parameters' uncertainties on estimates of the primordial NG parameter f_NL in local and equilateral models of non-Gaussianity. We show that propagating these errors increases the f_NL relative uncertainty by 16% for WMAP and 5% for Planck in the local case, whereas for equilateral configurations the correction terms are 14% and 4%, respectively. If we assume for local f_NL a central value of order 60, according to recent WMAP 5-year estimates, we obtain for Planck a final correction Δf_NL = 3. Although not dramatic, this correction is at the level of the expected estimator uncertainty for Planck, and should then be taken into account when quoting the significance of an eventual future detection. In current estimates of f_NL the cosmological parameters are held fixed at their best-fit values. We finally note that the impact of uncertainties in the cosmological parameters on the final f_NL error bar would become totally negligible if the parameters were allowed to vary...

  6. Top-down attention affects sequential regularity representation in the human visual system.

    Science.gov (United States)

    Kimura, Motohiro; Widmann, Andreas; Schröger, Erich

    2010-08-01

    Recent neuroscience studies using visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the visual sensory system, have shown that although sequential regularities embedded in successive visual stimuli can be automatically represented in the visual sensory system, the existence of a sequential regularity does not by itself guarantee that the regularity will be automatically represented. In the present study, we investigated the effects of top-down attention on sequential regularity representation in the visual sensory system. Our results showed that a sequential regularity (SSSSD) embedded in a modified oddball sequence, in which infrequent deviant (D) and frequent standard stimuli (S) differing in luminance were regularly presented (SSSSDSSSSDSSSSD...), was represented in the visual sensory system only when participants attended the sequential regularity in luminance, but not when participants ignored the stimuli or simply attended the dimension of luminance per se. This suggests that top-down attention affects sequential regularity representation in the visual sensory system and that top-down attention is a prerequisite for particular sequential regularities to be represented. Copyright 2010 Elsevier B.V. All rights reserved.

  7. SU-G-BRB-14: Uncertainty of Radiochromic Film Based Relative Dose Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Devic, S; Tomic, N; DeBlois, F; Seuntjens, J [McGill University, Montreal, QC (Canada); Lewis, D [RCF Consulting, LLC, Monroe, CT (United States); Aldelaijan, S [King Faisal Specialist Hospital & Research Center, Riyadh (Saudi Arabia)

    2016-06-15

    Purpose: Due to its inherently non-linear dose response, measurement of a relative dose distribution with radiochromic film requires measurement of absolute dose using a calibration curve following a previously established reference dosimetry protocol. On the other hand, a functional form that converts the inherently non-linear dose response curve of the radiochromic film dosimetry system into a linear one has been proposed recently [Devic et al, Med. Phys. 39 4850–4857 (2012)]. The question remains, however, what the uncertainty of a relative dose measured in this way would be. Methods: If the relative dose distribution is determined by going through the reference dosimetry system (conversion of the response into absolute dose by using the calibration curve), the total uncertainty of the relative dose is calculated by summing in quadrature the total uncertainties of the doses measured at a given point and at the reference point. On the other hand, if the relative dose is determined using the linearization method, the new response variable is calculated as ζ = a(netOD)^n/ln(netOD). In this case, the total uncertainty in relative dose is calculated by summing in quadrature the uncertainties of the new response function (σζ) at a given point and at the reference point. Results: Except at very low doses, where the measurement uncertainty dominates, the total relative dose uncertainty is less than 1% for the linear response method, as compared to an almost 2% uncertainty level for the reference dosimetry method. The result is not surprising, having in mind that the total uncertainty of the reference dose method is dominated by the fitting uncertainty, which is mitigated in the case of the linearization method. Conclusion: Linearization of the radiochromic film dose response provides a convenient and more precise method for relative dose measurements, as it does not require reference dosimetry and creation of a calibration curve. However, the linearity of the newly introduced function must be verified.
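
    A sketch of the linearized-response calculation described above: compute ζ = a(netOD)^n/ln(netOD) at the measurement and reference points and combine their relative uncertainties in quadrature. The parameter values a and n, and the netOD readings, are illustrative.

```python
# Sketch: relative dose from the linearized film response, with quadrature
# combination of the uncertainties at the measured and reference points.
import numpy as np

a, n = 1.0, 1.4                                  # illustrative fit parameters

def zeta(netOD):
    return a * netOD**n / np.log(netOD)

def relative_dose(netOD, sig_zeta, netOD_ref, sig_zeta_ref):
    """Relative dose and its standard uncertainty (quadrature combination)."""
    rel = zeta(netOD) / zeta(netOD_ref)
    u = abs(rel) * np.sqrt(
        (sig_zeta / zeta(netOD)) ** 2 + (sig_zeta_ref / zeta(netOD_ref)) ** 2
    )
    return rel, u

print(relative_dose(0.35, 0.004, 0.50, 0.004))
```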

  8. Estimating uncertainty in multivariate responses to selection.

    Science.gov (United States)

    Stinchcombe, John R; Simonsen, Anna K; Blows, Mark W

    2014-04-01

    Predicting the responses to natural selection is one of the key goals of evolutionary biology. Two of the challenges in fulfilling this goal have been the realization that many estimates of natural selection might be highly biased by environmentally induced covariances between traits and fitness, and that many estimated responses to selection do not incorporate or report uncertainty in the estimates. Here we describe the application of a framework that blends the merits of the Robertson-Price Identity approach and the multivariate breeder's equation to address these challenges. The approach allows genetic covariance matrices, selection differentials, selection gradients, and responses to selection to be estimated without environmentally induced bias, direct and indirect selection and responses to selection to be distinguished, and if implemented in a Bayesian-MCMC framework, statistically robust estimates of uncertainty on all of these parameters to be made. We illustrate our approach with a worked example of previously published data. More generally, we suggest that applying both the Robertson-Price Identity and the multivariate breeder's equation will facilitate hypothesis testing about natural selection, genetic constraints, and evolutionary responses. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
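
    For the multivariate-breeder's-equation side of this framework, the predicted response is Δz̄ = Gβ. The sketch below evaluates it for an illustrative genetic covariance matrix G and selection gradient β, with a crude Monte Carlo stand-in for the posterior sampling that a full Bayesian-MCMC implementation would provide.

```python
# Sketch: multivariate breeder's equation, delta_zbar = G @ beta.
import numpy as np

rng = np.random.default_rng(5)

G = np.array([[1.0, 0.4],        # genetic (co)variances for two traits
              [0.4, 0.5]])
beta = np.array([0.30, -0.10])   # directional selection gradients

response = G @ beta
print("predicted response:", response)

# Crude uncertainty: perturb beta as if drawn from its sampling distribution
draws = rng.multivariate_normal(beta, 0.01 * np.eye(2), size=5000)
resp_draws = draws @ G.T         # G symmetric, so each row is G @ draw
print("95% intervals:", np.percentile(resp_draws, [2.5, 97.5], axis=0))
```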

  9. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  10. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods. Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  11. Computing sequential equilibria for two-player games

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro

    2006-01-01

    Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Their algorithm has been used by AI researchers for constructing prescriptive strategies for concrete, often fairly large games. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for the practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial...

  12. Computing Sequential Equilibria for Two-Player Games

    DEFF Research Database (Denmark)

    Miltersen, Peter Bro; Sørensen, Troels Bjerre

    2006-01-01

    Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for the practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial time. In addition, the equilibrium we find is normal-form perfect. Our technique generalizes to general-sum games...

  13. The fitness of apps: a theory-based examination of mobile fitness app usage over 5 months

    Science.gov (United States)

    Kim, Jinsook

    2017-01-01

    Background: There are thousands of fitness-related smartphone applications ("apps") available for free and purchase, but there is uncertainty whether these apps help individuals achieve and maintain personal fitness. Technology usage attrition is also a concern among research studies on health technologies. Methods: Usage of three fitness apps was examined over 5 months to assess adherence and effectiveness. Initially, 64 participants downloaded three free apps available on Android and iOS, and 47 remained in the study until posttest. With a one-group pre-posttest design and checkpoints at months 1, 3, and 5, exercise and exercise with fitness apps were examined in the framework of the Theory of Planned Behavior (TPB) using a validated survey. Apps were selected based on their function from the Functional Triad. Perceived fitness was also measured. T-tests, sign tests, Fisher's exact tests, and linear and logistic regression were used to compare pre to posttests and users to non-users of the apps. Results: Forty-seven participants completed both pre and posttests. Individual item scores indicated no significant change pre to posttest except for decreases observed in the usefulness of using apps for exercise (attitude) (−0.78), peer influence regarding exercising using apps (subjective norm) (−1.02), perceived control over exercising using apps (perceived behavioral control) (−1.29), and intention to exercise using apps over the next 2 weeks (behavioral intention), along with two further app-related items (−1.72 and −2.56). Comparing app users (n=32) to non-users (n=15), there was only a significant difference in subscale total scores at posttest for attitude toward exercising using apps, which was significantly more favorable among users than non-users (32.3 vs. 27.6). Conclusions: App usage and effectiveness appear to have a connection to usefulness (attitude) and to perceived difficulties of exercising using apps (perceived behavioral control). Exercise and exercise using apps are not influenced by peer influence (subjective norm). Intention to exercise using these particular apps decreased (behavioral intention). Those who

  14. Measurement Uncertainty in Racial and Ethnic Identification among Adolescents of Mixed Ancestry: A Latent Variable Approach

    Science.gov (United States)

    Tracy, Allison J.; Erkut, Sumru; Porche, Michelle V.; Kim, Jo; Charmaraman, Linda; Grossman, Jennifer M.; Ceder, Ineke; Garcia, Heidie Vazquez

    2010-01-01

    In this article, we operationalize identification of mixed racial and ethnic ancestry among adolescents as a latent variable to (a) account for measurement uncertainty, and (b) compare alternative wording formats for racial and ethnic self-categorization in surveys. Two latent variable models were fit to multiple mixed-ancestry indicator data from…

  15. LikelihoodLib - Fitting, Function Maximization, and Numerical Analysis

    CERN Document Server

    Smirnov, I B

    2001-01-01

    A new class library is designed for function maximization, minimization, solution of equations, and other problems related to the mathematical analysis of multi-parameter functions by numerical iterative methods. When searching for the maximum or another special point of a function, we may change and fit all parameters simultaneously, sequentially, recursively, or by any combination of these methods. The discussion is focused on the first and most complicated method, although the others are also supported by the library. For this method we apply: control of precision by interval computations; the calculation of derivatives either by differential arithmetic or by the method of finite differences, with step lengths that suppress the influence of numerical noise; possible synchronization of the objective function calls with minimization of the number of iterations; and competitive application of various methods for step calculation, converging to the solution by many trajectories.
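
    The noise-suppressing step-length idea mentioned above can be illustrated with the standard central-difference heuristic: round-off error grows like eps/h while truncation error grows like h^2, so a step near cbrt(eps) times the variable's scale roughly balances the two. This is a textbook rule of thumb, not LikelihoodLib's actual implementation.

```python
# Sketch: central difference with a step chosen to suppress round-off noise.
import numpy as np

def central_diff(f, x, scale=1.0):
    h = np.cbrt(np.finfo(float).eps) * max(scale, abs(x))   # noise-balancing step
    return (f(x + h) - f(x - h)) / (2.0 * h)

print(central_diff(np.sin, 1.0), np.cos(1.0))   # agree to ~1e-11
```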

  16. The sequential structure of brain activation predicts skill.

    Science.gov (United States)

    Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa

    2016-01-29

    In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (right dorsal striatum), their relative skill was better diagnosed by considering the sequential structure of whole-brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states that are involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features of information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.
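
    The kind of sequential-structure summary used in the study can be sketched as a transition-probability matrix over the recognized cognitive states, whose entries then serve as features (e.g., regressors) for predicting skill. The state labels and sequence below are illustrative.

```python
# Sketch: transition-probability matrix from a sequence of decoded states.
import numpy as np

def transition_matrix(states, n_states):
    """Row-normalized count matrix of state-to-state transitions."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

seq = [0, 1, 1, 2, 0, 1, 2, 2, 0, 1]      # states decoded from successive scans
P = transition_matrix(seq, 3)
features = P.flatten()                     # candidate regressors for skill
print(P)
```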

  17. A one-sided sequential test

    Energy Technology Data Exchange (ETDEWEB)

    Racz, A.; Lux, I. [Hungarian Academy of Sciences, Budapest (Hungary). Atomic Energy Research Inst.

    1996-04-16

    The applicability of the classical sequential probability ratio test (SPRT) to early failure detection problems is limited by the fact that there is an extra time delay between the occurrence of the failure and its first recognition. Chien and Adams developed a method to minimize this delay for the case when the problem can be formulated as testing the mean value of a Gaussian signal. In our paper we propose a procedure that can be applied to both mean and variance testing and that minimizes the time delay. The method is based on a special parametrization of the classical SPRT. The one-sided sequential tests (OSST) can reproduce the results of the Chien-Adams test when applied to mean values. (author).
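
    For reference, the classical Wald SPRT for a shift in the mean of a Gaussian signal, the baseline that the one-sided test reparametrizes, can be sketched as follows. The thresholds follow from the target error probabilities α and β; all numbers are illustrative.

```python
# Sketch: Wald's SPRT for H0: mean = mu0 vs. H1: mean = mu1, known sigma.
import numpy as np

def sprt_gaussian_mean(xs, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Return ('H0'|'H1', n) when a boundary is crossed, else ('undecided', n)."""
    upper = np.log((1 - beta) / alpha)
    lower = np.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2.0) / sigma**2   # Gaussian LLR increment
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(xs)

rng = np.random.default_rng(6)
data = rng.normal(0.5, 1.0, 200)          # a failure shifts the mean from 0 to 0.5
print(sprt_gaussian_mean(data, mu0=0.0, mu1=0.5, sigma=1.0))
```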

  18. The fit between national culture, organizing and managing

    DEFF Research Database (Denmark)

    Søndergaard, Mikael

    2006-01-01

    We hypothesize a fit between the national cultural environment of the organization and contingency variables subject to managerial discretion. Such a hypothesis implies that national culture is a contextual variable in contingency theory, and uses empirically derived culture contingency theory to argue that national culture characteristics affect management's choices as to how to organize and manage people. A tightly matched population of 4400 city managers from 14 Western countries constitutes strong material for the analysis, as cultural and behavioral variables were directly analyzed. Findings suggest ... are negatively correlated with uncertainty avoidance. We derive a number of important implications for organization design theory and practice.

  19. Estimates of Uncertainties in Analysis of Positron Lifetime Spectra for Metals

    DEFF Research Database (Denmark)

    Eldrup, Morten Mostgaard; Huang, Y. M.; McKee, B. T. A.

    1978-01-01

    The effects of uncertainties and errors in various constraints used in the analysis of multi-component life-time spectra of positrons annihilating in metals containing defects have been investigated in detail using computer-simulated decay spectra and subsequent analysis. It is found that the errors in the fitted values of the main components' lifetimes and intensities introduced from incorrect values of the instrumental resolution function and of the source-surface components can easily exceed the statistical uncertainties. The effect of an incorrect resolution function may be reduced by excluding the peak regions of the spectra from the analysis. The influence of using incorrect source-surface components in the analysis may, on the other hand, be reduced by including the peak regions of the spectra. A main conclusion of the work is that extreme caution should be exercised to avoid...

  20. Mining Emerging Sequential Patterns for Activity Recognition in Body Sensor Networks

    DEFF Research Database (Denmark)

    Gu, Tao; Wang, Liang; Chen, Hanhua

    2010-01-01

    Body Sensor Networks offer many applications in healthcare, well-being and entertainment. One of the emerging applications is recognizing activities of daily living. In this paper, we introduce a novel knowledge pattern named Emerging Sequential Pattern (ESP), a sequential pattern that discovers significant class differences, to recognize both simple (i.e., sequential) and complex (i.e., interleaved and concurrent) activities. Based on ESPs, we build our complex activity models directly upon the sequential model to recognize both activity types. We conduct comprehensive empirical studies to evaluate...

  1. The uncertainty of reference standards--a guide to understanding factors impacting uncertainty, uncertainty calculations, and vendor certifications.

    Science.gov (United States)

    Gates, Kevin; Chang, Ning; Dilek, Isil; Jian, Huahua; Pogue, Sherri; Sreenivasan, Uma

    2009-10-01

    Certified solution standards are widely used in forensic toxicological, clinical/diagnostic, and environmental testing. Typically, these standards are purchased as ampouled solutions with a certified concentration. Vendors present concentration and uncertainty differently on their Certificates of Analysis. Understanding the factors that impact uncertainty, and which of these factors have been considered in the vendor's assignment of uncertainty, is critical to understanding the accuracy of the standard and the impact on testing results. Understanding these variables is also important for laboratories seeking to comply with ISO/IEC 17025 requirements and for those preparing reference solutions from neat materials at the bench. The impact of uncertainty associated with the neat material purity (including residual water, residual solvent, and inorganic content), mass measurement (weighing techniques), and solvent addition (solution density) on the overall uncertainty of the certified concentration is described, along with uncertainty calculations.
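
    A minimal sketch of how the listed contributions combine: assuming independence, the relative uncertainties of purity, mass, and delivered volume add in quadrature to give the relative combined standard uncertainty of the concentration, which is then expanded with a coverage factor. All values are illustrative.

```python
# Sketch: combined and expanded uncertainty of a solution standard's concentration.
import numpy as np

purity, u_purity = 0.998, 0.002      # neat material purity and its std. uncertainty
mass_mg, u_mass = 10.00, 0.02        # weighed mass of neat material (mg)
vol_mL, u_vol = 10.000, 0.015        # delivered solvent volume (mL, incl. density effects)

conc = purity * mass_mg / vol_mL     # mg/mL

u_rel = np.sqrt((u_purity / purity) ** 2 + (u_mass / mass_mg) ** 2 + (u_vol / vol_mL) ** 2)
U = 2 * u_rel * conc                 # expanded uncertainty, coverage factor k = 2
print(f"c = {conc:.4f} mg/mL, U(k=2) = {U:.4f} mg/mL ({100 * u_rel:.2f}% relative, k=1)")
```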

  2. Heisenberg's principle of uncertainty and the uncertainty relations

    International Nuclear Information System (INIS)

    Redei, Miklos

    1987-01-01

    The usual verbal form of the Heisenberg uncertainty principle and the usual mathematical formulation (the so-called uncertainty theorem) are not equivalent. The meaning of the concept 'uncertainty' is not unambiguous and different interpretations are used in the literature. Recently, renewed interest has appeared in reinterpreting and reformulating the precise meaning of Heisenberg's principle and in finding an adequate mathematical form. The suggested new theorems are surveyed and critically analyzed. (D.Gy.) 20 refs

  3. Discrimination between sequential and simultaneous virtual channels with electrical hearing

    OpenAIRE

    Landsberger, David; Galvin, John J.

    2011-01-01

    In cochlear implants (CIs), simultaneous or sequential stimulation of adjacent electrodes can produce intermediate pitch percepts between those of the component electrodes. However, it is unclear whether simultaneous and sequential virtual channels (VCs) can be discriminated. In this study, CI users were asked to discriminate simultaneous and sequential VCs; discrimination was measured for monopolar (MP) and bipolar + 1 stimulation (BP + 1), i.e., relatively broad and focused stimulation mode...

  4. Model Uncertainties for Valencia RPA Effect for MINERvA

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Richard [Univ. of Minnesota, Duluth, MN (United States)

    2017-05-08

    This technical note describes the application of the Valencia RPA multi-nucleon effect and its uncertainty to QE reactions from the GENIE neutrino event generator. The analysis of MINERvA neutrino data in Rodrigues et al., PRL 116, 071802 (2016) makes clear the need for an RPA suppression, especially at very low momentum and energy transfer. That published analysis does not constrain the magnitude of the effect; it only tests models with and without the effect against the data. Other MINERvA analyses need an expression of the model uncertainty in the RPA effect. A well-described uncertainty can be used for systematics for unfolding, for model errors in the analysis of non-QE samples, and as input for fitting exercises for model testing or constraining backgrounds. This prescription takes uncertainties on the parameters in the Valencia RPA model and adds a (not-as-tight) constraint from muon capture data. For MINERvA we apply it as a 2D ($q_0$,$q_3$) weight to GENIE events, in lieu of generating a full set of beyond-Fermi-gas quasielastic events. Because it is a weight, it can be applied to the generated and fully Geant4-simulated events used in analysis without a special GENIE sample. For some limited uses, it could be cast as a 1D $Q^2$ weight without much trouble. This procedure is a suitable starting point for NOvA and DUNE, where the energy dependence is modest, but probably not adequate for T2K or MicroBooNE.
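
    As an illustration of the reweighting approach described above, the sketch below applies a 2D (q0, q3) weight to generated events via a histogram lookup. The grid, the flat placeholder weights, and the event dictionaries are all hypothetical; the actual weight tables would come from the Valencia model and its parameter variations.

        import numpy as np

        # Hypothetical weight table on a (q0, q3) grid [GeV]; real tables would
        # come from the Valencia calculation and its parameter variations, not
        # the flat placeholder used here.
        q0_edges = np.linspace(0.0, 1.2, 25)
        q3_edges = np.linspace(0.0, 1.2, 25)
        weight_table = np.ones((q0_edges.size - 1, q3_edges.size - 1))

        def rpa_weight(q0, q3):
            """Look up the RPA suppression weight for one QE event."""
            i = np.clip(np.searchsorted(q0_edges, q0) - 1, 0,
                        weight_table.shape[0] - 1)
            j = np.clip(np.searchsorted(q3_edges, q3) - 1, 0,
                        weight_table.shape[1] - 1)
            return weight_table[i, j]

        # Reweight already-generated events instead of regenerating them.
        events = [{"q0": 0.15, "q3": 0.30, "weight": 1.0},
                  {"q0": 0.60, "q3": 0.80, "weight": 1.0}]
        for ev in events:
            ev["weight"] *= rpa_weight(ev["q0"], ev["q3"])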

  5. A systematic framework for effective uncertainty assessment of severe accident calculations; Hybrid qualitative and quantitative methodology

    International Nuclear Information System (INIS)

    Hoseyni, Seyed Mohsen; Pourgol-Mohammad, Mohammad; Tehranifard, Ali Abbaspour; Yousefpour, Faramarz

    2014-01-01

    This paper describes a systematic framework for characterizing important phenomena and quantifying the degree of contribution of each parameter to the output in severe accident uncertainty assessment. The proposed methodology comprises qualitative as well as quantitative phases. The qualitative part, the so-called Modified PIRT, a more rigorous PIRT process for more precise quantification of uncertainties, is a two-step process for identifying and ranking severe accident phenomena based on uncertainty importance. In this process, identified severe accident phenomena are ranked according to their effect on the figure of merit and their level of knowledge. The Analytic Hierarchy Process (AHP) serves here as a systematic approach for severe accident phenomena ranking. A formal uncertainty importance technique is used to estimate the degree of credibility of the severe accident model(s) used to represent the important phenomena. The methodology uses subjective justification for this step, evaluating available information and data from experiments and code predictions. The quantitative part utilizes uncertainty importance measures to quantify the effect of each input parameter on the output uncertainty. A response surface fitting approach is proposed for estimating the associated uncertainties at lower computational cost. The quantitative results are used to plan the reduction of epistemic uncertainty in the output variable(s). The application of the proposed methodology is demonstrated for the ACRR MP-2 severe accident test facility. - Highlights: • A two-stage framework for severe accident uncertainty analysis is proposed. • Modified PIRT qualitatively identifies and ranks uncertainty sources more precisely. • An uncertainty importance measure quantitatively calculates the effect of each uncertainty source. • The methodology is applied successfully to the ACRR MP-2 severe accident test facility
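
    A minimal sketch of the response-surface idea referenced above: fit a cheap quadratic surrogate to a handful of runs of an expensive code, then use the surrogate to rank input importance by output variance. The toy code function, sample sizes, and nominal values are assumptions, not the ACRR MP-2 application.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical stand-in for an expensive severe-accident code output.
        def expensive_code(x1, x2):
            return 2.0 * x1 + 0.5 * x2 ** 2 + 0.1 * rng.normal()

        # Fit a quadratic response surface to a small design of code runs.
        X = rng.uniform(-1, 1, size=(50, 2))
        y = np.array([expensive_code(a, b) for a, b in X])
        A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                             X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        # The cheap surrogate replaces the code for uncertainty propagation.
        def surrogate(x1, x2):
            return (coef[0] + coef[1] * x1 + coef[2] * x2
                    + coef[3] * x1 ** 2 + coef[4] * x2 ** 2
                    + coef[5] * x1 * x2)

        # Crude importance measure: output variance when one input varies alone.
        s = rng.uniform(-1, 1, size=(100_000, 2))
        for k in range(2):
            solo = s.copy()
            solo[:, 1 - k] = 0.0  # freeze the other input at its nominal value
            print(f"importance of x{k + 1}:",
                  np.var(surrogate(solo[:, 0], solo[:, 1])))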

  6. Uncertainty analysis on probabilistic fracture mechanics assessment methodology

    International Nuclear Information System (INIS)

    Rastogi, Rohit; Vinod, Gopika; Chandra, Vikas; Bhasin, Vivek; Babar, A.K.; Rao, V.V.S.S.; Vaze, K.K.; Kushwaha, H.S.; Venkat-Raj, V.

    1999-01-01

    Fracture mechanics has found profound usage in the design of components and in assessing fitness for purpose and residual life of operating components. Since defect size and material properties are statistically distributed, various probabilistic approaches have been employed for the computation of fracture probability. Monte Carlo simulation is one such procedure for the analysis of fracture probability. This paper deals with uncertainty analysis using Monte Carlo simulation methods. These methods were developed based on the R6 failure assessment procedure, which has been widely used in analysing the integrity of structures. The application of this method is illustrated with a case study. (author)
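
    A minimal Monte Carlo sketch of the fracture-probability computation described above, under assumed (purely illustrative) distributions for crack depth and fracture toughness and a textbook stress-intensity relation K_I = Y·σ·√(πa); the R6 assessment procedure used in the paper is more elaborate.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 200_000

        # Illustrative input distributions: crack depth a [m] and fracture
        # toughness K_Ic [MPa*sqrt(m)]; real analyses would use measured data.
        a = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=n) * 1e-3
        k_ic = rng.normal(loc=30.0, scale=4.0, size=n)
        stress = 250.0  # applied stress [MPa], assumed constant here

        # Textbook edge-crack stress intensity: K_I = Y * sigma * sqrt(pi * a).
        k_i = 1.12 * stress * np.sqrt(np.pi * a)

        p_fail = np.mean(k_i > k_ic)
        se = np.sqrt(p_fail * (1.0 - p_fail) / n)  # binomial standard error
        print(f"fracture probability = {p_fail:.2e} +/- {se:.1e}")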

  7. Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design

    Science.gov (United States)

    Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff

    2016-01-01

    Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strengths and weaknesses in recent large-scale implementations, there is no simple answer to the question of which design is better because different…

  8. Sequential dependencies in magnitude scaling of loudness

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Jesteadt, Walt

    2013-01-01

    Ten normally hearing listeners used a programmable sone-potentiometer knob to adjust the level of a 1000-Hz sinusoid to match the loudness of numbers presented to them in a magnitude production task. Three different power-law exponents (0.15, 0.30, and 0.60) and a log-law with equal steps in dB were used to program the sone-potentiometer. The knob settings systematically influenced the form of the loudness function. Time series analysis was used to assess the sequential dependencies in the data, which increased with increasing exponent and were greatest for the log-law. It would be possible, therefore, to choose knob properties that minimized these dependencies. When the sequential dependencies were removed from the data, the slope of the loudness functions did not change, but the variability decreased. Sequential dependencies were only present when the level of the tone on the previous trial...

  9. Visual short-term memory for sequential arrays.

    Science.gov (United States)

    Kumar, Arjun; Jiang, Yuhong

    2005-04-01

    The capacity of visual short-term memory (VSTM) for a single visual display has been investigated in past research, but VSTM for multiple sequential arrays has been explored only recently. In this study, we investigate the capacity of VSTM across two sequential arrays separated by a variable stimulus onset asynchrony (SOA). VSTM for spatial locations (Experiment 1), colors (Experiments 2-4), orientations (Experiments 3 and 4), and conjunction of color and orientation (Experiment 4) were tested, with the SOA across the two sequential arrays varying from 100 to 1,500 msec. We find that VSTM for the trailing array is much better than VSTM for the leading array, but when averaged across the two arrays VSTM has a constant capacity independent of the SOA. We suggest that multiple displays compete for retention in VSTM and that separating information into two temporally discrete groups does not enhance the overall capacity of VSTM.

  10. Tensor-guided fitting of subduction slab depths

    Science.gov (United States)

    Bazargani, Farhad; Hayes, Gavin P.

    2013-01-01

    Geophysical measurements are often acquired at scattered locations in space. Therefore, interpolating or fitting the sparsely sampled data as a uniform function of space (a procedure commonly known as gridding) is a ubiquitous problem in geophysics. Most gridding methods require a model of spatial correlation for data. This spatial correlation model can often be inferred from some sort of secondary information, which may also be sparsely sampled in space. In this paper, we present a new method to model the geometry of a subducting slab in which we use a data‐fitting approach to address the problem. Earthquakes and active‐source seismic surveys provide estimates of depths of subducting slabs but only at scattered locations. In addition to estimates of depths from earthquake locations, focal mechanisms of subduction zone earthquakes also provide estimates of the strikes of the subducting slab on which they occur. We use these spatially sparse strike samples and the Earth’s curved surface geometry to infer a model for spatial correlation that guides a blended neighbor interpolation of slab depths. We then modify the interpolation method to account for the uncertainties associated with the depth estimates.

  11. Impact of the heavy-quark matching scales in PDF fits

    Energy Technology Data Exchange (ETDEWEB)

    Bertone, V. [VU University, Department of Physics and Astronomy, Amsterdam (Netherlands); Nikhef Theory Group Science Park 105, Amsterdam (Netherlands); Britzger, D.; Geiser, A.; Glazov, A.; Zenaiev, O. [DESY, Hamburg (Germany); Camarda, S. [CERN, Geneva (Switzerland); Cooper-Sarkar, A.; Giuli, F. [University of Oxford (United Kingdom); Godat, E.; Lyonnet, F.; Olness, F. [SMU Physics, Dallas, TX (United States); Kusina, A. [Universite Grenoble Alpes, CNRS/IN2P3, Laboratoire de Physique Subatomique et de Cosmologie, Grenoble (France); Polish Academy of Sciences, Institute of Nuclear Physics, Krakow (Poland); Luszczak, A. [T. Kosciuszko Cracow University of Technology, Krakow (Poland); Placakyte, R. [Universitaet Hamburg, Institut fuer Theoretische Physik, Hamburg (Germany); Radescu, V. [DESY, Hamburg (Germany); CERN, Geneva (Switzerland); Schienbein, I. [Universite Grenoble Alpes, CNRS/IN2P3, Laboratoire de Physique Subatomique et de Cosmologie, Grenoble (France); Collaboration: The xFitter Developers' Team

    2017-12-15

    We investigate the impact of displaced heavy-quark matching scales in a global fit. The heavy-quark matching scale μ_m determines at which energy scale μ the QCD theory transitions from N_F to N_F + 1 in the variable flavor number scheme (VFNS) for the evolution of the parton distribution functions (PDFs) and strong coupling α_S(μ). We study the variation of the matching scales, and their impact on a global PDF fit of the combined HERA data. As the choice of the matching scale μ_m effectively is a choice of scheme, this represents a theoretical uncertainty; ideally, we would like to see minimal dependence on this parameter. For the transition across the charm quark (from N_F = 3 to 4), we find a large μ_m = μ_c dependence of the global fit χ² at NLO, but this is significantly reduced at NNLO. For the transition across the bottom quark (from N_F = 4 to 5), we have a reduced μ_m = μ_b dependence of the χ² at both NLO and NNLO as compared to the charm. This feature is now implemented in xFitter 2.0.0, an open source QCD fit framework. (orig.)

  12. Uncertainties in model-based outcome predictions for treatment planning

    International Nuclear Information System (INIS)

    Deasy, Joseph O.; Chao, K.S. Clifford; Markman, Jerry

    2001-01-01

    Purpose: Model-based treatment-plan-specific outcome predictions (such as normal tissue complication probability [NTCP] or the relative reduction in salivary function) are typically presented without reference to underlying uncertainties. We provide a method to assess the reliability of treatment-plan-specific dose-volume outcome model predictions. Methods and Materials: A practical method is proposed for evaluating model prediction based on the original input data together with bootstrap-based estimates of parameter uncertainties. The general framework is applicable to continuous variable predictions (e.g., prediction of long-term salivary function) and dichotomous variable predictions (e.g., tumor control probability [TCP] or NTCP). Using bootstrap resampling, a histogram of the likelihood of alternative parameter values is generated. For a given patient and treatment plan we generate a histogram of alternative model results by computing the model predicted outcome for each parameter set in the bootstrap list. Residual uncertainty ('noise') is accounted for by adding a random component to the computed outcome values. The residual noise distribution is estimated from the original fit between model predictions and patient data. Results: The method is demonstrated using a continuous-endpoint model to predict long-term salivary function for head-and-neck cancer patients. Histograms represent the probabilities for the level of posttreatment salivary function based on the input clinical data, the salivary function model, and the three-dimensional dose distribution. For some patients there is significant uncertainty in the prediction of xerostomia, whereas for other patients the predictions are expected to be more reliable. In contrast, TCP and NTCP endpoints are dichotomous, and parameter uncertainties should be folded directly into the estimated probabilities, thereby improving the accuracy of the estimates. Using bootstrap parameter estimates, competing treatment
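
    A hedged sketch of the proposed bootstrap procedure for a continuous endpoint: resample the data, refit the model parameter to build a histogram of alternative parameter values, then evaluate the model for one plan under each parameter set and add residual noise estimated from the original fit. The exponential dose-response form, the single parameter d50, and all numbers are stand-ins for the paper's salivary-function model.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy stand-in for the outcome model: relative salivary function as a
        # function of mean dose d, with a single hypothetical parameter d50.
        def model(d, d50):
            return np.exp(-np.log(2.0) * d / d50)

        # Hypothetical (dose, observed outcome) data used for the original fit.
        doses = rng.uniform(5.0, 60.0, size=40)
        obs = model(doses, d50=30.0) + rng.normal(0.0, 0.08, size=40)

        def fit_d50(d, y):
            grid = np.linspace(10.0, 60.0, 501)
            sse = [np.sum((y - model(d, g)) ** 2) for g in grid]
            return grid[int(np.argmin(sse))]

        # Bootstrap: refit on resampled data to get alternative parameter values.
        boot = [fit_d50(doses[idx], obs[idx])
                for idx in (rng.integers(0, len(doses), len(doses))
                            for _ in range(500))]

        # Plan-specific histogram: evaluate the model for one patient's dose
        # under each bootstrap parameter, adding residual noise ("noise"
        # component) estimated from the original fit.
        resid_sd = np.std(obs - model(doses, fit_d50(doses, obs)))
        pred = np.array([model(35.0, g) for g in boot])
        pred = pred + rng.normal(0.0, resid_sd, size=pred.size)
        print("median:", np.median(pred), "90% interval:",
              np.percentile(pred, [5, 95]))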

  13. The target-to-foils shift in simultaneous and sequential lineups.

    Science.gov (United States)

    Clark, Steven E; Davey, Sherrie L

    2005-04-01

    A theoretical cornerstone in eyewitness identification research is the proposition that witnesses, in making decisions from standard simultaneous lineups, make relative judgments. The present research considers two sources of support for this proposal. An experiment by G. L. Wells (1993) showed that if the target is removed from a lineup, witnesses shift their responses to pick foils, rather than rejecting the lineups, a result we will term a target-to-foils shift. Additional empirical support is provided by results from sequential lineups which typically show higher accuracy than simultaneous lineups, presumably because of a decrease in the use of relative judgments in making identification decisions. The combination of these two lines of research suggests that the target-to-foils shift should be reduced in sequential lineups relative to simultaneous lineups. Results of two experiments showed an overall advantage for sequential lineups, but also showed a target-to-foils shift equal in size for simultaneous and sequential lineups. Additional analyses indicated that the target-to-foils shift in sequential lineups was moderated in part by an order effect and was produced with (Experiment 2) or without (Experiment 1) a shift in decision criterion. This complex pattern of results suggests that more work is needed to understand the processes which underlie decisions in simultaneous and sequential lineups.

  14. Detailed modeling of the statistical uncertainty of Thomson scattering measurements

    International Nuclear Information System (INIS)

    Morton, L A; Parke, E; Hartog, D J Den

    2013-01-01

    The uncertainty of electron density and temperature fluctuation measurements is determined by statistical uncertainty introduced by multiple noise sources. In order to quantify these uncertainties precisely, a simple but comprehensive model was made of the noise sources in the MST Thomson scattering system and of the resulting variance in the integrated scattered signals. The model agrees well with experimental and simulated results. The signal uncertainties are then used by our existing Bayesian analysis routine to find the most likely electron temperature and density, with confidence intervals. In the model, photonic noise from scattered light and plasma background light is multiplied by the noise enhancement factor (F) of the avalanche photodiode (APD). Electronic noise from the amplifier and digitizer is added. The amplifier response function shapes the signal and induces correlation in the noise. The data analysis routine fits a characteristic pulse to the digitized signals from the amplifier, giving the integrated scattered signals. A finite digitization rate loses information and can cause numerical integration error. We find a formula for the variance of the scattered signals in terms of the background and pulse amplitudes, and three calibration constants. The constants are measured easily under operating conditions, resulting in accurate estimation of the scattered signals' uncertainty. We measure F ≈ 3 for our APDs, in agreement with other measurements for similar APDs. This value is wavelength-independent, simplifying analysis. The correlated noise we observe is reproduced well using a Gaussian response function. Numerical integration error can be made negligible by using an interpolated characteristic pulse, allowing digitization rates as low as the detector bandwidth. The effect of background noise is also determined
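
    The paper's variance formula itself is not quoted in this record; the toy function below only illustrates the general shape of such a model, under an assumed linear form with three calibration constants, and may differ from the published MST expression.

        # Assumed form: Var(S) = c1*S + c2*B + c3, where S is the integrated
        # pulse amplitude and B the background amplitude. The constants would
        # absorb the APD noise-enhancement factor F, gain, and electronic
        # noise; the actual MST formula may differ.
        def signal_variance(S, B, c1, c2, c3):
            return c1 * S + c2 * B + c3

        S, B = 1200.0, 300.0  # illustrative amplitudes
        print("sigma(S) =", signal_variance(S, B, c1=3.0, c2=3.0, c3=50.0) ** 0.5)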

  15. Quantifying the Contribution of Post-Processing in Computed Tomography Measurement Uncertainty

    DEFF Research Database (Denmark)

    Stolfi, Alessandro; Thompson, Mary Kathryn; Carli, Lorenzo

    2016-01-01

    The contribution of post-processing to computed tomography measurement uncertainty was quantified by calculating the standard deviation of 10 repeated measurement evaluations on the same data set. The evaluations were performed on an industrial assembly. Each evaluation includes several dimensional and geometrical measurands that were expected to have different responses to the various post-processing settings. It was found that the definition of the datum system had the largest impact on the uncertainty, with a standard deviation of a few microns. The surface determination and data fitting had smaller contributions, with sub-micron repeatability.

  16. Adaptive Motion Planning in Bin-Picking with Object Uncertainties

    DEFF Research Database (Denmark)

    Iversen, Thomas Fridolin; Ellekilde, Lars-Peter; Miró, Jaime Valls

    2017-01-01

    Motion planning for bin-picking with object uncertainties requires either a re-grasp of picked objects or an online sensor system. Using the latter is advantageous in terms of computational time, as no time is wasted on an extra pick-and-place action. It does, however, put extra requirements on the motion planner, as the target position may change on-the-fly. This paper solves that problem by using a state-adjusting Partially Observable Markov Decision Process (POMDP), where the state space is modified between runs to better fit earlier solved problems. The approach relies on a set...

  17. Dynamics-based sequential memory: Winnerless competition of patterns

    International Nuclear Information System (INIS)

    Seliger, Philip; Tsimring, Lev S.; Rabinovich, Mikhail I.

    2003-01-01

    We introduce a biologically motivated dynamical principle of sequential memory which is based on winnerless competition (WLC) of event images. This mechanism is implemented in a two-layer neural model of sequential spatial memory. We present the learning dynamics which leads to the formation of a WLC network. After learning, the system is capable of associative retrieval of prerecorded sequences of patterns

  18. Sequential, progressive, equal-power, reflective beam-splitter arrays

    Science.gov (United States)

    Manhart, Paul K.

    2017-11-01

    The equations to calculate the equal-power reflectivity of a sequential series of beam splitters are presented. Non-sequential optical design examples are offered for uniform illumination using diode lasers. Objects created using Boolean operators and swept surfaces can reflect light into predefined elevation and azimuth angles. Analysis of the illumination patterns for the array is also presented.
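
    A short reconstruction of the equal-power condition (a derivation under the assumption of lossless splitters and N reflected outputs, not text taken from the paper): each splitter must divert 1/N of the original input power, and the power reaching splitter k is 1 - (k-1)/N, so

        R_k \left( 1 - \frac{k-1}{N} \right) = \frac{1}{N}
        \quad\Longrightarrow\quad
        R_k = \frac{1}{N - k + 1}, \qquad k = 1, \dots, N,

    with R_N = 1, i.e., the final element acts as a plain mirror.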

  19. Persistence of transmitted HIV-1 drug resistance mutations associated with fitness costs and viral genetic backgrounds.

    Directory of Open Access Journals (Sweden)

    Wan-Lin Yang

    2015-03-01

    Full Text Available Transmission of drug-resistant pathogens presents an almost-universal challenge for fighting infectious diseases. Transmitted drug resistance mutations (TDRM) can persist in the absence of drugs for considerable time. It is generally believed that differential TDRM-persistence is caused, at least partially, by variations in TDRM-fitness-costs. However, in vivo epidemiological evidence for the impact of fitness costs on TDRM-persistence is rare. Here, we studied the persistence of TDRM in HIV-1 using longitudinally-sampled nucleotide sequences from the Swiss-HIV-Cohort-Study (SHCS). All treatment-naïve individuals with TDRM at baseline were included. Persistence of TDRM was quantified via reversion rates (RR) determined with interval-censored survival models. Fitness costs of TDRM were estimated in the genetic background in which they occurred using a previously published and validated machine-learning algorithm (based on in vitro replicative capacities) and were included in the survival models as explanatory variables. In 857 sequential samples from 168 treatment-naïve patients, 17 TDRM were analyzed. RR varied substantially and ranged from 174.0/100-person-years; CI=[51.4, 588.8] (for 184V) to 2.7/100-person-years; [0.7, 10.9] (for 215D). RR increased significantly with fitness cost (increase by 1.6 [1.3, 2.0] per standard deviation of fitness costs). When subdividing fitness costs into the average fitness cost of a given mutation and the deviation from the average fitness cost of a mutation in a given genetic background, we found that both components were significantly associated with reversion rates. Our results show that the substantial variations of TDRM persistence in the absence of drugs are associated with fitness-cost differences both among mutations and among different genetic backgrounds for the same mutation.

  20. A Best-Estimate Reactor Core Monitor Using State Feedback Strategies to Reduce Uncertainties

    International Nuclear Information System (INIS)

    Martin, Robert P.; Edwards, Robert M.

    2000-01-01

    The development and demonstration of a new algorithm to reduce modeling and state-estimation uncertainty in best-estimate simulation codes has been investigated. Demonstration is given by way of a prototype reactor core monitor. The architecture of this monitor integrates a control-theory-based, distributed-parameter estimation technique into a production-grade best-estimate simulation code. The Kalman Filter-Sequential Least-Squares (KFSLS) parameter estimation algorithm has been extended for application into the computational environment of the best-estimate simulation code RELAP5-3D. In control system terminology, this configuration can be thought of as a 'best-estimate' observer. The application to a distributed-parameter reactor system involves a unique modal model that approximates physical components, such as the reactor, by describing both states and parameters by an orthogonal expansion. The basic KFSLS parameter estimation is used to dynamically refine a spatially varying (distributed) parameter. The application of the distributed-parameter estimator is expected to complement a traditional nonlinear best-estimate simulation code by providing a mechanism for reducing both code input (modeling) and output (state-estimation) uncertainty in complex, distributed-parameter systems

  1. Basal ganglia and cortical networks for sequential ordering and rhythm of complex movements

    Directory of Open Access Journals (Sweden)

    Jeffery G. Bednark

    2015-07-01

    Full Text Available Voluntary actions require the concurrent engagement and coordinated control of complex temporal (e.g., rhythm) and ordinal motor processes. Using high-resolution functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis (MVPA), we sought to determine the degree to which these complex motor processes are dissociable in basal ganglia and cortical networks. We employed three different finger-tapping tasks that differed in the demand on the sequential temporal rhythm or sequential ordering of submovements. Our results demonstrate that sequential rhythm and sequential order tasks were partially dissociable based on activation differences. The sequential rhythm task activated a widespread network centered around the SMA and basal ganglia regions including the dorsomedial putamen and caudate nucleus, while the sequential order task preferentially activated a fronto-parietal network. There was also extensive overlap between sequential rhythm and sequential order tasks, with both tasks commonly activating bilateral premotor, supplementary motor, and superior/inferior parietal cortical regions, as well as regions of the caudate/putamen of the basal ganglia and the ventro-lateral thalamus. Importantly, within the cortical regions that were active for both complex movements, MVPA could accurately classify different patterns of activation for the sequential rhythm and sequential order tasks. In the basal ganglia, however, overlapping activation for the sequential rhythm and sequential order tasks, which was found in classic motor circuits of the putamen and ventro-lateral thalamus, could not be accurately differentiated by MVPA. Overall, our results highlight the convergent architecture of the motor system, where complex motor information that is spatially distributed in the cortex converges into a more compact representation in the basal ganglia.

  2. The sequential price of anarchy for atomic congestion games

    NARCIS (Netherlands)

    de Jong, Jasper; Uetz, Marc Jochen; Liu, Tie-Yan; Qi, Qi; Ye, Yinyu

    2014-01-01

    In situations without central coordination, the price of anarchy relates the quality of any Nash equilibrium to the quality of a global optimum. Instead of assuming that all players choose their actions simultaneously, we consider games where players choose their actions sequentially. The sequential

  3. Risk, unexpected uncertainty, and estimation uncertainty: Bayesian learning in unstable settings.

    Directory of Open Access Journals (Sweden)

    Elise Payzan-LeNestour

    Full Text Available Recently, evidence has emerged that humans approach learning using Bayesian updating rather than (model-free) reinforcement algorithms in a six-arm restless bandit problem. Here, we investigate what this implies for human appreciation of uncertainty. In our task, a Bayesian learner distinguishes three equally salient levels of uncertainty. First, the Bayesian perceives irreducible uncertainty or risk: even knowing the payoff probabilities of a given arm, the outcome remains uncertain. Second, there is (parameter) estimation uncertainty or ambiguity: payoff probabilities are unknown and need to be estimated. Third, the outcome probabilities of the arms change: the sudden jumps are referred to as unexpected uncertainty. We document how the three levels of uncertainty evolved during the course of our experiment and how they affected the learning rate. We then zoom in on estimation uncertainty, which has been suggested to be a driving force in exploration, in spite of evidence of widespread aversion to ambiguity. Our data corroborate the latter. We discuss neural evidence that foreshadowed the ability of humans to distinguish between the three levels of uncertainty. Finally, we investigate the boundaries of human capacity to implement Bayesian learning. We repeat the experiment with different instructions, reflecting varying levels of structural uncertainty. Under this fourth notion of uncertainty, choices were no better explained by Bayesian updating than by (model-free) reinforcement learning. Exit questionnaires revealed that participants remained unaware of the presence of unexpected uncertainty and failed to acquire the right model with which to implement Bayesian updating.

  4. Native Frames: Disentangling Sequential from Concerted Three-Body Fragmentation

    Science.gov (United States)

    Rajput, Jyoti; Severt, T.; Berry, Ben; Jochim, Bethany; Feizollah, Peyman; Kaderiya, Balram; Zohrabi, M.; Ablikim, U.; Ziaee, Farzaneh; Raju P., Kanaka; Rolles, D.; Rudenko, A.; Carnes, K. D.; Esry, B. D.; Ben-Itzhak, I.

    2018-03-01

    A key question concerning the three-body fragmentation of polyatomic molecules is the distinction of sequential and concerted mechanisms, i.e., the stepwise or simultaneous cleavage of bonds. Using laser-driven fragmentation of OCS into O⁺ + C⁺ + S⁺ and employing coincidence momentum imaging, we demonstrate a novel method that enables the clear separation of sequential and concerted breakup. The separation is accomplished by analyzing the three-body fragmentation in the native frame associated with each step and taking advantage of the rotation of the intermediate molecular fragment, CO²⁺ or CS²⁺, before its unimolecular dissociation. This native-frame method works for any projectile (electrons, ions, or photons), provides details on each step of the sequential breakup, and enables the retrieval of the relevant spectra for sequential and concerted breakup separately. Specifically, this allows the determination of the branching ratio of all these processes in OCS³⁺ breakup. Moreover, we find that the first step of sequential breakup is tightly aligned along the laser polarization and identify the likely electronic states of the intermediate dication that undergo unimolecular dissociation in the second step. Finally, the separated concerted breakup spectra show clearly that the central carbon atom is preferentially ejected perpendicular to the laser field.

  5. Characterization of XR-RV3 GafChromic® films in standard laboratory and in clinical conditions and means to evaluate uncertainties and reduce errors

    Energy Technology Data Exchange (ETDEWEB)

    Farah, J., E-mail: jad.farah@irsn.fr; Clairand, I.; Huet, C. [External Dosimetry Department, Institut de Radioprotection et de Sûreté Nucléaire (IRSN), BP-17, 92260 Fontenay-aux-Roses (France); Trianni, A. [Medical Physics Department, Udine University Hospital S. Maria della Misericordia (AOUD), p.le S. Maria della Misericordia, 15, 33100 Udine (Italy); Ciraj-Bjelac, O. [Vinca Institute of Nuclear Sciences (VINCA), P.O. Box 522, 11001 Belgrade (Serbia); De Angelis, C. [Department of Technology and Health, Istituto Superiore di Sanità (ISS), Viale Regina Elena 299, 00161 Rome (Italy); Delle Canne, S. [Fatebenefratelli San Giovanni Calibita Hospital (FBF), UOC Medical Physics - Isola Tiberina, 00186 Rome (Italy); Hadid, L.; Waryn, M. J. [Radiology Department, Hôpital Jean Verdier (HJV), Avenue du 14 Juillet, 93140 Bondy Cedex (France); Jarvinen, H.; Siiskonen, T. [Radiation and Nuclear Safety Authority (STUK), P.O. Box 14, 00881 Helsinki (Finland); Negri, A. [Veneto Institute of Oncology (IOV), Via Gattamelata 64, 35124 Padova (Italy); Novák, L. [National Radiation Protection Institute (NRPI), Bartoškova 28, 140 00 Prague 4 (Czech Republic); Pinto, M. [Istituto Nazionale di Metrologia delle Radiazioni Ionizzanti (ENEA-INMRI), C.R. Casaccia, Via Anguillarese 301, I-00123 Santa Maria di Galeria (RM) (Italy); Knežević, Ž. [Ruđer Bošković Institute (RBI), Bijenička c. 54, 10000 Zagreb (Croatia)

    2015-07-15

    Purpose: To investigate the optimal use of XR-RV3 GafChromic® films to assess patient skin dose in interventional radiology while addressing the means to reduce uncertainties in dose assessment. Methods: XR-Type R GafChromic films have been shown to represent the most efficient and suitable solution to determine patient skin dose in interventional procedures. As film dosimetry can be associated with high uncertainty, this paper presents the EURADOS WG 12 initiative to carry out a comprehensive study of film characteristics with a multisite approach. The considered sources of uncertainties include scanner-, film-, and fitting-related errors. The work focused on studying film behavior with clinical high-dose-rate pulsed beams (previously unavailable in the literature) together with reference standard laboratory beams. Results: First, the performance analysis of six different scanner models has shown that scan uniformity perpendicular to the lamp motion axis and long-term stability are the main sources of scanner-related uncertainties. These could induce errors of up to 7% on the film readings unless regularly checked and corrected. Typically, scan uniformity correction matrices and reading normalization to the scanner-specific and daily background reading should be applied. In addition, the analysis of multiple film batches has shown that XR-RV3 films have generally good uniformity within one batch (<1.5%), require 24 h to stabilize after irradiation, and respond roughly independently of dose rate (<5%). However, XR-RV3 films showed large variations (up to 15%) with radiation quality both in standard laboratory and in clinical conditions. As such, and prior to conducting patient skin dose measurements, it is mandatory to choose the appropriate calibration beam quality depending on the characteristics of the x-ray systems that will be used clinically. In addition, yellow-side film irradiations should preferentially be used since they showed a lower...

  6. [Influence of Uncertainty and Uncertainty Appraisal on Self-management in Hemodialysis Patients].

    Science.gov (United States)

    Jang, Hyung Suk; Lee, Chang Suk; Yang, Young Hee

    2015-04-01

    This study was done to examine the relation of uncertainty, uncertainty appraisal, and self-management in patients undergoing hemodialysis, and to identify factors influencing self-management. A convenience sample of 92 patients receiving hemodialysis was selected. Data were collected using a structured questionnaire and medical records. The collected data were analyzed using descriptive statistics, t-test, ANOVA, Pearson correlations and multiple regression analysis with the SPSS/WIN 20.0 program. The participants showed a moderate level of uncertainty with the highest score being for ambiguity among the four uncertainty subdomains. Scores for uncertainty danger or opportunity appraisals were under the mid points. The participants were found to perform a high level of self-management such as diet control, management of arteriovenous fistula, exercise, medication, physical management, measurements of body weight and blood pressure, and social activity. The self-management of participants undergoing hemodialysis showed a significant relationship with uncertainty and uncertainty appraisal. The significant factors influencing self-management were uncertainty, uncertainty opportunity appraisal, hemodialysis duration, and having a spouse. These variables explained 32.8% of the variance in self-management. The results suggest that intervention programs to reduce the level of uncertainty and to increase the level of uncertainty opportunity appraisal among patients would improve the self-management of hemodialysis patients.

  7. Campbell and moment measures for finite sequential spatial processes

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette)

    2006-01-01

    We define moment and Campbell measures for sequential spatial processes, prove a Campbell-Mecke theorem, and relate the results to their counterparts in the theory of point processes. In particular, we show that any finite sequential spatial process model can be derived as the vector...

  8. Particle Swarm Optimization and Uncertainty Assessment in Inverse Problems

    Directory of Open Access Journals (Sweden)

    José L. G. Pallero

    2018-01-01

    Full Text Available Most inverse problems in industry (and particularly in geophysical exploration) are highly underdetermined: the number of model parameters needed to achieve accurate data predictions is too high, the sampling of the data space is scarce and incomplete, and the data are always affected by different kinds of noise. Additionally, the physics of the forward problem is a simplification of reality. All these facts mean that the inverse problem solution is not unique; that is, there are different inverse solutions (called equivalent) that are compatible with the prior information and fit the observed data within similar error bounds. In the case of nonlinear inverse problems, these equivalent models are located in disconnected flat curvilinear valleys of the cost-function topography. The uncertainty analysis consists of obtaining a representation of this complex topography via different sampling methodologies. In this paper, we focus on the use of a particle swarm optimization (PSO) algorithm to sample the region of equivalence in nonlinear inverse problems. Although this methodology has a general purpose, we show its application for the uncertainty assessment of the solution of a geophysical problem concerning gravity inversion in sedimentary basins, showing that it is possible to efficiently perform this task in a sampling-while-optimizing mode. Particularly, we explain how to use and analyze the geophysical models sampled by exploratory PSO family members to infer different descriptors of nonlinear uncertainty.
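
    A compact sketch of the sampling-while-optimizing idea: a standard PSO run on a toy misfit surface in which every evaluated model falling below a misfit threshold is kept as a member of the equivalence region. The cost function, swarm constants, and threshold are illustrative, not those of the gravity-inversion application.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy misfit surface standing in for a geophysical cost function.
        def misfit(m):
            return (m[0] ** 2 - m[1]) ** 2 + 0.01 * (m[0] - 1.0) ** 2

        n_part, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
        x = rng.uniform(-2, 2, size=(n_part, 2))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([misfit(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)]
        equivalent = []  # all sampled models inside the equivalence region
        tol = 0.05       # misfit threshold defining "equivalent" models

        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_part, 2))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            f = np.array([misfit(p) for p in x])
            equivalent.extend(x[f < tol])  # sampling while optimizing
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            gbest = pbest[np.argmin(pbest_f)]

        print("best model:", gbest,
              "| equivalent models kept:", len(equivalent))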

  9. Sequential Dependencies in Driving

    Science.gov (United States)

    Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.

    2012-01-01

    The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…

  10. Research on parallel algorithm for sequential pattern mining

    Science.gov (United States)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover the laws of customer purchasing over a time period by finding frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced science fields such as DNA analysis. The data of sequential pattern mining has the following characteristics: massive data volume and distributed storage. Most existing sequential pattern mining algorithms have not considered these characteristics together. According to the traits mentioned above, and combining parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes the divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets applying the frequency concept and search space partition theory, and the second task is to build frequent sequences using depth-first search at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and the different information structures designed, this paper simulates the SPP algorithm in a concrete parallel environment and implements the AprioriAll algorithm. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.
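
    SPP itself is not reproduced in this record, but the divide-and-conquer principle it relies on can be illustrated with a minimal prefix-projection miner: the search space is partitioned by frequent items, and each partition is mined depth-first and independently, so partitions could be handed to different processors. The toy database and support threshold are illustrative.

        from collections import defaultdict

        def mine(projected, min_support, prefix, results):
            # Count items occurring after the current prefix in each sequence.
            counts = defaultdict(int)
            for seq, start in projected:
                for item in set(seq[start:]):
                    counts[item] += 1
            for item, cnt in counts.items():
                if cnt < min_support:
                    continue
                pattern = prefix + [item]
                results.append((pattern, cnt))
                # Project each sequence past the first occurrence of `item`;
                # each such projection is an independent subproblem.
                new_proj = []
                for seq, start in projected:
                    try:
                        new_proj.append((seq, seq.index(item, start) + 1))
                    except ValueError:
                        pass
                mine(new_proj, min_support, pattern, results)

        db = [list("abcd"), list("acd"), list("abd"), list("bcd")]
        results = []
        mine([(s, 0) for s in db], min_support=3, prefix=[], results=results)
        print(results)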

  11. CABAS: A freely available PC program for fitting calibration curves in chromosome aberration dosimetry

    International Nuclear Information System (INIS)

    Deperas, J.; Szluiska, M.; Deperas-Kaminska, M.; Edwards, A.; Lloyd, D.; Lindholm, C.; Romm, H.; Roy, L.; Moss, R.; Morand, J.; Wojcik, A.

    2007-01-01

    The aim of biological dosimetry is to estimate the dose to which an accident victim was exposed, together with the associated uncertainty. This process requires the use of the maximum-likelihood method for fitting a calibration curve, a procedure that is not implemented in most statistical computer programs. Several laboratories have produced their own programs, but these are frequently not user-friendly and not available to outside users. We developed software for fitting a linear-quadratic dose-response relationship by the method of maximum likelihood and for estimating a dose from the number of aberrations observed. The program, called CABAS, consists of the main curve-fitting and dose-estimating module and modules for calculating the dose in cases of partial-body exposure, for estimating the minimum number of cells necessary to detect a given dose of radiation, and for calculating the dose in the case of a protracted exposure. (authors)
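
    A minimal sketch of the kind of maximum-likelihood fit such a program performs, assuming Poisson-distributed aberration counts and a linear-quadratic yield curve y(D) = c + αD + βD²; the data, starting values, and the closed-form dose inversion are illustrative, not CABAS internals.

        import numpy as np
        from scipy.optimize import minimize

        # Hypothetical calibration data: dose [Gy], cells scored, dicentrics seen.
        dose = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
        cells = np.array([5000, 5000, 3000, 2000, 1000, 800, 500])
        dics = np.array([5, 15, 25, 60, 140, 210, 250])

        # Poisson negative log-likelihood for yields y(D) = c + a*D + b*D^2.
        def nll(params):
            c, a, b = params
            mu = cells * (c + a * dose + b * dose ** 2)
            if np.any(mu <= 0.0):
                return np.inf
            return float(np.sum(mu - dics * np.log(mu)))

        fit = minimize(nll, x0=[0.001, 0.02, 0.05], method="Nelder-Mead")
        c, a, b = fit.x
        print("c, alpha, beta =", fit.x)

        # Dose for an observed yield y_obs: solve b*D^2 + a*D + (c - y_obs) = 0.
        y_obs = 0.12
        D = (-a + np.sqrt(a ** 2 + 4.0 * b * (y_obs - c))) / (2.0 * b)
        print("estimated dose [Gy]:", D)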

  12. Prediction of Global Damage and Reliability Based Upon Sequential Identification and Updating of RC Structures Subject to Earthquakes

    DEFF Research Database (Denmark)

    Nielsen, Søren R.K.; Skjærbæk, P. S.; Köylüoglu, H. U.

    The paper deals with the prediction of global damage and future structural reliability with special emphasis on the sensitivity, bias and uncertainty of these predictions dependent on the statistically equivalent realizations of the future earthquake. The predictions are based on a modified Clough-Johnston single-degree-of-freedom (SDOF) oscillator with three parameters which are calibrated to fit the displacement response and the damage development in the past earthquake.

  13. Instrument uncertainty predictions

    International Nuclear Information System (INIS)

    Coutts, D.A.

    1991-07-01

    The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and phenomena understanding deficiencies. This report provides several methodologies to estimate an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pretest and post-test uncertainty

  14. Uncertainty in hydrological signatures

    Science.gov (United States)

    McMillan, Hilary; Westerberg, Ida

    2015-04-01

    Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information types derived as an index value from observed data are known as hydrological signatures, and can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), the flow variability, flow duration curve, and runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was made for two data rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty

  15. Learning of state-space models with highly informative observations: A tempered sequential Monte Carlo solution

    Science.gov (United States)

    Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik

    2018-05-01

    Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e. when there is very little or no measurement noise present, relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC2 method. Another natural link is also made to the ideas underlying approximate Bayesian computation (ABC). We illustrate it with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
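
    A heavily simplified sketch of the noise-annealing idea for a static toy problem: parameter particles are drawn from the prior, reweighted through a geometric ladder of decreasing artificial noise levels, and resampled when the effective sample size degenerates. The jitter step is a crude stand-in for the proper SMC move (MCMC) kernel, and the linear model, schedule, and constants are all assumptions, not the paper's algorithm.

        import numpy as np

        rng = np.random.default_rng(7)

        # "Highly informative" toy data: y = theta * x with almost no noise.
        theta_true = 1.3
        x = rng.normal(size=50)
        y = theta_true * x + 1e-4 * rng.normal(size=50)

        def loglik(theta, sigma):
            r = y - theta * x
            return -0.5 * np.sum(r ** 2) / sigma ** 2 - len(y) * np.log(sigma)

        n = 2000
        particles = rng.normal(0.0, 2.0, size=n)  # draws from the prior
        logw = np.zeros(n)
        sigmas = np.geomspace(1.0, 1e-3, 15)      # decreasing artificial noise

        prev = sigmas[0]
        for sigma in sigmas:
            logw += np.array([loglik(t, sigma) - loglik(t, prev)
                              for t in particles])
            prev = sigma
            w = np.exp(logw - logw.max()); w /= w.sum()
            if 1.0 / np.sum(w ** 2) < n / 2:      # resample on low ESS
                idx = rng.choice(n, size=n, p=w)
                particles, logw = particles[idx], np.zeros(n)
                # Jitter: a crude stand-in for a proper MCMC move step.
                particles = particles + 0.01 * rng.normal(size=n)

        w = np.exp(logw - logw.max()); w /= w.sum()
        print("posterior mean of theta:", np.sum(w * particles))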

  16. New method to incorporate Type B uncertainty into least-squares procedures in radionuclide metrology

    International Nuclear Information System (INIS)

    Han, Jubong; Lee, K.B.; Lee, Jong-Man; Park, Tae Soon; Oh, J.S.; Oh, Pil-Jei

    2016-01-01

    We discuss a new method to incorporate Type B uncertainty into least-squares procedures. The new method is based on an extension of the likelihood function from which a conventional least-squares function is derived. The extended likelihood function is the product of the original likelihood function with additional PDFs (Probability Density Functions) that characterize the Type B uncertainties. The PDFs are considered to describe one's incomplete knowledge of correction factors, referred to as nuisance parameters. We use the extended likelihood function to make point and interval estimations of parameters in basically the same way as with the least-squares function used in the conventional least-squares method. Since the nuisance parameters are not of interest and should be prevented from appearing in the final result, we eliminate them by using the profile likelihood. As an example, we present a case study for a linear regression analysis with a common component of Type B uncertainty. In this example we compare the analysis results obtained from our procedure with those from conventional methods. - Highlights: • A new method is proposed to incorporate Type B uncertainty into the least-squares method. • The method is constructed from the likelihood function and PDFs of Type B uncertainty. • A case study was performed to compare results from the new and the conventional method. • Fitted parameters are consistent but with larger uncertainties in the new method.
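
    In notation suggested by the description above (a reconstruction, not the paper's own formulas), the construction can be sketched as

        L_{\mathrm{ext}}(\theta, \nu) \;=\; L(\theta, \nu \,;\, \mathrm{data}) \,\prod_j \pi_j(\nu_j),
        \qquad
        L_p(\theta) \;=\; \max_{\nu}\, L_{\mathrm{ext}}(\theta, \nu),

    where θ are the parameters of interest, ν the nuisance correction factors with Type B PDFs π_j, and the profile likelihood L_p eliminates the nuisance parameters before point and interval estimation.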

  17. A probabilistic approach for representation of interval uncertainty

    International Nuclear Information System (INIS)

    Zaman, Kais; Rangavajhala, Sirisha; McDonald, Mark P.; Mahadevan, Sankaran

    2011-01-01

    In this paper, we propose a probabilistic approach to represent interval data for input variables in reliability and uncertainty analysis problems, using flexible families of continuous Johnson distributions. Such a probabilistic representation of interval data facilitates a unified framework for handling aleatory and epistemic uncertainty. For fitting probability distributions, methods such as moment matching are commonly used in the literature. However, unlike point data where single estimates for the moments of data can be calculated, moments of interval data can only be computed in terms of upper and lower bounds. Finding bounds on the moments of interval data has been generally considered an NP-hard problem because it includes a search among the combinations of multiple values of the variables, including interval endpoints. In this paper, we present efficient algorithms based on continuous optimization to find the bounds on second and higher moments of interval data. With numerical examples, we show that the proposed bounding algorithms are scalable in polynomial time with respect to increasing number of intervals. Using the bounds on moments computed using the proposed approach, we fit a family of Johnson distributions to interval data. Furthermore, using an optimization approach based on percentiles, we find the bounding envelopes of the family of distributions, termed as a Johnson p-box. The idea of bounding envelopes for the family of Johnson distributions is analogous to the notion of empirical p-box in the literature. Several sets of interval data with different numbers of intervals and type of overlap are presented to demonstrate the proposed methods. In contrast to the computationally expensive nested analysis that is typically required in the presence of interval variables, the proposed probabilistic representation enables inexpensive optimization-based strategies to estimate bounds on an output quantity of interest.
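
    A small sketch of moment bounding for interval data: bounds on the mean are attained at the interval endpoints, while bounds on the variance require optimization over all point configurations inside the box. The local optimizer used here is only a stand-in for the paper's polynomial-time bounding algorithms, so the variance bounds it returns are approximate.

        import numpy as np
        from scipy.optimize import minimize

        # Interval data: each observation known only as [lo_i, hi_i].
        lo = np.array([1.0, 2.5, 3.0, 4.2])
        hi = np.array([2.0, 3.5, 4.5, 5.0])

        # Bounds on the mean are attained at the interval endpoints.
        mean_lo, mean_hi = lo.mean(), hi.mean()

        # Bounds on the (population) variance require optimizing over all point
        # configurations x_i in [lo_i, hi_i]; a local optimizer is used here,
        # so the maximum in particular is only approximate.
        x0 = 0.5 * (lo + hi)
        bnds = list(zip(lo, hi))
        vmin = minimize(np.var, x0, bounds=bnds, method="L-BFGS-B").fun
        vmax = -minimize(lambda v: -np.var(v), x0,
                         bounds=bnds, method="L-BFGS-B").fun
        print("mean in [%.3f, %.3f], variance in [%.4f, %.4f]"
              % (mean_lo, mean_hi, vmin, vmax))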

  18. Nuclear collective flow from gaussian fits to triple differential distributions

    International Nuclear Information System (INIS)

    Gosset, J.; Babinet, R.; Cavata, C.; Marco, M. de; Demoulins, M.; Fanet, H.; Fodor, Z.; L'Hote, D.; Lucas, B.

    1990-01-01

    A simple characterization of triple differential cross sections is needed for a systematic study of the nuclear matter collective flow in relativistic nucleus-nucleus collisions. Our analysis is based upon a fitting procedure, so that the triple differential distributions need not be measured in the whole momentum space. If the detector acceptance eliminates most spectator particles or if it is artificially restricted for doing so, this method leads to a flow characterization of the participant nuclear matter. The center-of-mass triple-differential momentum distributions are fitted to a simple analytical shape, namely an anisotropic Gaussian distribution. The adjusted parameters (flow angle and aspect ratios) are corrected for uncertainty in the event-by-event determination of the reaction plane azimuth (finite-number effects). Results are presented for neon-nucleus and argon-nucleus collisions at incident energy between 400 and 800 MeV per nucleon. Flow is already significant for light systems, and depends clearly upon the impact parameter
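
    For a zero-mean anisotropic Gaussian, fitting the momentum distribution is equivalent to estimating the momentum (sphericity) tensor, whose eigen-decomposition yields a flow angle and aspect ratios; the sketch below demonstrates this on synthetic momenta. All numbers are illustrative, and the paper's fit additionally corrects for reaction-plane dispersion (finite-number effects), which is not attempted here.

        import numpy as np

        rng = np.random.default_rng(5)

        # Synthetic event: CM momenta [GeV/c] from an anisotropic Gaussian whose
        # long axis is tilted 20 degrees from the beam (z) axis in the x-z
        # reaction plane.
        theta = np.radians(20.0)
        Ry = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                       [0.0, 1.0, 0.0],
                       [-np.sin(theta), 0.0, np.cos(theta)]])
        p = rng.normal(size=(500, 3)) * np.array([0.15, 0.15, 0.40]) @ Ry.T

        # For a zero-mean Gaussian, the ML covariance estimate is the sample
        # momentum tensor; its eigenvectors give the principal flow axes.
        T = p.T @ p / len(p)
        evals, evecs = np.linalg.eigh(T)  # eigenvalues in ascending order
        major = evecs[:, -1]
        flow_angle = np.degrees(np.arccos(abs(major[2])))  # w.r.t. beam axis
        aspect = np.sqrt(evals[-1] / evals[:2])            # major/minor ratios
        print(f"flow angle ~ {flow_angle:.1f} deg, aspect ratios ~ {aspect}")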

  19. A new software routine that automates the fitting of protein X-ray crystallographic electron-density maps.

    Science.gov (United States)

    Levitt, D G

    2001-07-01

    The classical approach to building the amino-acid residues into the initial electron-density map requires days to weeks of a skilled investigator's time. Automating this procedure should not only save time, but has the potential to provide a more accurate starting model for input to refinement programs. The new software routine MAID builds the protein structure into the electron-density map in a series of sequential steps. The first step is the fitting of the secondary alpha-helix and beta-sheet structures. These 'fits' are then used to determine the local amino-acid sequence assignment. These assigned fits are then extended through the loop regions and fused with the neighboring sheet or helix. The program was tested on the unaveraged 2.5 Å selenomethionine multiple-wavelength anomalous dispersion (SMAD) electron-density map that was originally used to solve the structure of the 291-residue protein human heart short-chain L-3-hydroxyacyl-CoA dehydrogenase (SHAD). Inputting just the map density and the amino-acid sequence, MAID fitted 80% of the residues with an r.m.s.d. error of 0.43 Å for the main-chain atoms and 1.0 Å for all atoms without any user intervention. When tested on a higher quality 1.9 Å SMAD map, MAID correctly fitted 100% (418) of the residues. A major advantage of the MAID fitting procedure is that it maintains ideal bond lengths and angles and constrains phi/psi angles to the appropriate Ramachandran regions. Recycling the output of this new routine through a partial structure-refinement program may have the potential to completely automate the fitting of electron-density maps.

  20. Fitting PAC spectra with stochastic models: PolyPacFit

    Energy Technology Data Exchange (ETDEWEB)

    Zacate, M. O., E-mail: zacatem1@nku.edu [Northern Kentucky University, Department of Physics and Geology (United States); Evenson, W. E. [Utah Valley University, College of Science and Health (United States); Newhouse, R.; Collins, G. S. [Washington State University, Department of Physics and Astronomy (United States)

    2010-04-15

    PolyPacFit is an advanced fitting program for time-differential perturbed angular correlation (PAC) spectroscopy. It incorporates stochastic models and provides robust options for customization of fits. Notable features of the program include platform independence and support for (1) fits to stochastic models of hyperfine interactions, (2) user-defined constraints among model parameters, (3) fits to multiple spectra simultaneously, and (4) nuclear probes of any spin.

  1. Framework for sequential approximate optimization

    NARCIS (Netherlands)

    Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.

    2004-01-01

    An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python...

  2. A Survey of Multi-Objective Sequential Decision-Making

    OpenAIRE

    Roijers, D.M.; Vamplew, P.; Whiteson, S.; Dazeley, R.

    2013-01-01

    Sequential decision-making problems with multiple objectives arise naturally in practice and pose unique challenges for research in decision-theoretic planning and learning, which has largely focused on single-objective settings. This article surveys algorithms designed for sequential decision-making problems with multiple objectives. Though there is a growing body of literature on this subject, little of it makes explicit under what circumstances special methods are needed to solve multi-obj...

  3. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    Science.gov (United States)

    Qin, Fangjun; Jiang, Sai; Zha, Feng

    2018-01-01

    In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538
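
    The abstract contrasts sequential with batch processing of vector observations. As a point of reference, below is a minimal numpy sketch of the standard sequential Kalman measurement update that the paper builds on: observations are folded in one at a time, which for a linear model with uncorrelated noise is algebraically equivalent to the batch update while only ever inverting a scalar. The state, measurement rows, and noise values are invented for illustration; the SMEKF's specific twist (relinearizing the attitude between observations but deferring the covariance reset to the end) is not reproduced here.

```python
import numpy as np

def sequential_update(x, P, zs, hs, rs):
    """Kalman measurement update, processing scalar observations one at a time.

    For a linear model with uncorrelated measurement noise this is
    algebraically equivalent to the batch update, but each step only
    inverts a scalar innovation variance.
    x  : (n,) state estimate      P  : (n, n) covariance
    zs : scalar measurements      hs : (n,) measurement rows   rs : variances
    """
    for z, h, r in zip(zs, hs, rs):
        s = h @ P @ h + r              # innovation variance (scalar)
        k = P @ h / s                  # Kalman gain (n,)
        x = x + k * (z - h @ x)        # state update
        P = P - np.outer(k, h @ P)     # covariance update
    return x, P

# toy example: estimate a 2-state vector from three scalar observations
x0, P0 = np.zeros(2), np.eye(2)
zs = [1.0, 0.9, 2.1]
hs = [np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
rs = [0.5, 0.5, 0.5]
x, P = sequential_update(x0, P0, zs, hs, rs)
print(x, np.diag(P))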

  4. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    Directory of Open Access Journals (Sweden)

    Fangjun Qin

    2018-05-01

    Full Text Available In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms.

  5. Uncertainty and measurement

    International Nuclear Information System (INIS)

    Landsberg, P.T.

    1990-01-01

    This paper explores how the quantum mechanics uncertainty relation can be considered to result from measurements. A distinction is drawn between the uncertainties obtained by scrutinising experiments and the standard deviation type of uncertainty definition used in quantum formalism. (UK)

  6. Optimal dynamic capacity allocation of HVDC interconnections for cross-border exchange of balancing services in presence of uncertainty

    DEFF Research Database (Denmark)

    Delikaraoglou, Stefanos; Pinson, Pierre; Eriksson, Robert

    2015-01-01

    The deployment of large shares of stochastic renewable energy, e.g., wind power, may bring important economic and environmental benefits to the power system. Nonetheless, their efficient integration depends on the ability of the power system to cope with their inherent variability and the uncertainty arising from their partial predictability. Considering that the existing setup of the European electricity markets promotes the spatial coordination of neighbouring power systems only on the day-ahead market stage, regional system operators have to rely mainly on their internal balancing resources. ... Nevertheless, enforcing a tighter coordination between the reserves and energy trading floors may improve considerably the expected system cost compared to a sequential market design. Aiming to provide some insights for improvement of the sequential market-clearing, we analyse the effect of explicit

  7. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential regularities.

  8. Impact of Diagrams on Recalling Sequential Elements in Expository Texts.

    Science.gov (United States)

    Guri-Rozenblit, Sarah

    1988-01-01

    Examines the instructional effectiveness of abstract diagrams on recall of sequential relations in social science textbooks. Concludes that diagrams assist significantly the recall of sequential relations in a text and decrease significantly the rate of order mistakes. (RS)

  9. Source Data Impacts on Epistemic Uncertainty for Launch Vehicle Fault Tree Models

    Science.gov (United States)

    Al Hassan, Mohammad; Novack, Steven; Ring, Robert

    2016-01-01

    Launch vehicle systems are designed and developed using both heritage and new hardware. Design modifications to the heritage hardware to fit new functional system requirements can impact the applicability of heritage reliability data. Risk estimates for newly designed systems must be developed from generic data sources such as commercially available reliability databases using reliability prediction methodologies, such as those addressed in MIL-HDBK-217F. Failure estimates must be converted from the generic environment to the specific operating environment of the system in which it is used. In addition, some qualification of applicability for the data source to the current system should be made. Characterizing data applicability under these circumstances is crucial to developing model estimations that support confident decisions on design changes and trade studies. This paper will demonstrate a data-source applicability classification method for suggesting epistemic component uncertainty to a target vehicle based on the source and operating environment of the originating data. The source applicability is determined using heuristic guidelines while translation of operating environments is accomplished by applying statistical methods to MIL-HDBK-217F tables. The paper will provide one example for assigning environmental factors uncertainty when translating between operating environments for the microelectronic part-type components. The heuristic guidelines will be followed by uncertainty-importance routines to assess the need for more applicable data to reduce model uncertainty.

  10. IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING

    Energy Technology Data Exchange (ETDEWEB)

    Erwin, Peter [Max-Planck-Institut für Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München (Germany)

    2015-02-01

    I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithms include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ² statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ² minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.
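
    The bias described in the final sentence is easy to reproduce. The sketch below (not IMFIT itself; the model and numbers are invented) fits a constant flux to simulated low-count Poisson pixels, once by minimizing χ² with per-pixel Gaussian σ² estimated from the data and once by minimizing the Poisson-based Cash statistic; the χ² fit comes out biased low because downward-fluctuating pixels receive the largest weights.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
true_flux = 5.0                               # mean counts per pixel (low-count regime)
data = rng.poisson(true_flux, size=2000)      # simulated low-S/N "image"

def chi2(a):
    # per-pixel Gaussian errors estimated from the data: sigma^2 = max(data, 1)
    return np.sum((data - a) ** 2 / np.maximum(data, 1.0))

def cash(a):
    # Poisson maximum-likelihood (Cash) statistic, up to a model-independent constant
    return 2.0 * np.sum(a - data * np.log(a))

fit_chi2 = minimize_scalar(chi2, bounds=(0.1, 20), method="bounded").x
fit_cash = minimize_scalar(cash, bounds=(0.1, 20), method="bounded").x
print(f"chi^2 fit: {fit_chi2:.2f}  (biased low)")
print(f"Cash fit : {fit_cash:.2f}  (close to the true value {true_flux})")
```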

  11. Quantum Probability Zero-One Law for Sequential Terminal Events

    Science.gov (United States)

    Rehder, Wulf

    1980-07-01

    On the basis of the Jauch-Piron quantum probability calculus a zero-one law for sequential terminal events is proven, and the significance of certain crucial axioms in the quantum probability calculus is discussed. The result shows that the Jauch-Piron set of axioms is appropriate for the non-Boolean algebra of sequential events.

  12. A path-level exact parallelization strategy for sequential simulation

    Science.gov (United States)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential Simulation is a well known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelize SIS and SGS methods is presented. A first stage of re-arrangement of the simulation path is performed, followed by a second stage of parallel simulation for non-conflicting nodes. A key advantage of the proposed parallelization method is to generate identical realizations as with the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedup results in the best scenarios using 16 threads of execution in a single machine.

  13. Concatenated coding system with iterated sequential inner decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1995-01-01

    We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder.

  14. Automatic reconstruction of fault networks from seismicity catalogs including location uncertainty

    International Nuclear Information System (INIS)

    Wang, Y.

    2013-01-01

    Within the framework of plate tectonics, the deformation that arises from the relative movement of two plates occurs across discontinuities in the earth's crust, known as fault zones. Active fault zones are the causal locations of most earthquakes, which suddenly release tectonic stresses within a very short time. In return, fault zones slowly grow by accumulating slip due to such earthquakes by cumulated damage at their tips, and by branching or linking between pre-existing faults of various sizes. Over the last decades, a large amount of knowledge has been acquired concerning the overall phenomenology and mechanics of individual faults and earthquakes: A deep physical and mechanical understanding of the links and interactions between and among them is still missing, however. One of the main issues lies in our failure to always succeed in assigning an earthquake to its causative fault. Using approaches based in pattern-recognition theory, more insight into the relationship between earthquakes and fault structure can be gained by developing an automatic fault network reconstruction approach using high resolution earthquake data sets at largely different scales and by considering individual event uncertainties. This thesis introduces the Anisotropic Clustering of Location Uncertainty Distributions (ACLUD) method to reconstruct active fault networks on the basis of both earthquake locations and their estimated individual uncertainties. This method consists in fitting a given set of hypocenters with an increasing amount of finite planes until the residuals of the fit compare with location uncertainties. After a massive search through the large solution space of possible reconstructed fault networks, six different validation procedures are applied in order to select the corresponding best fault network. Two of the validation steps (cross-validation and Bayesian Information Criterion (BIC)) process the fit residuals, while the four others look for solutions that

  15. Automatic reconstruction of fault networks from seismicity catalogs including location uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y.

    2013-07-01

    Within the framework of plate tectonics, the deformation that arises from the relative movement of two plates occurs across discontinuities in the earth's crust, known as fault zones. Active fault zones are the causal locations of most earthquakes, which suddenly release tectonic stresses within a very short time. In return, fault zones slowly grow by accumulating slip due to such earthquakes by cumulated damage at their tips, and by branching or linking between pre-existing faults of various sizes. Over the last decades, a large amount of knowledge has been acquired concerning the overall phenomenology and mechanics of individual faults and earthquakes: A deep physical and mechanical understanding of the links and interactions between and among them is still missing, however. One of the main issues lies in our failure to always succeed in assigning an earthquake to its causative fault. Using approaches based in pattern-recognition theory, more insight into the relationship between earthquakes and fault structure can be gained by developing an automatic fault network reconstruction approach using high resolution earthquake data sets at largely different scales and by considering individual event uncertainties. This thesis introduces the Anisotropic Clustering of Location Uncertainty Distributions (ACLUD) method to reconstruct active fault networks on the basis of both earthquake locations and their estimated individual uncertainties. This method consists in fitting a given set of hypocenters with an increasing amount of finite planes until the residuals of the fit compare with location uncertainties. After a massive search through the large solution space of possible reconstructed fault networks, six different validation procedures are applied in order to select the corresponding best fault network. Two of the validation steps (cross-validation and Bayesian Information Criterion (BIC)) process the fit residuals, while the four others look for solutions that

  16. Fitness club

    CERN Multimedia

    Fitness club

    2011-01-01

    General fitness Classes Enrolments are open for general fitness classes at CERN taking place on Monday, Wednesday, and Friday lunchtimes in the Pump Hall (building 216). There are shower facilities for both men and women. It is possible to pay for 1, 2 or 3 classes per week for a minimum of 1 month and up to 6 months. Check out our rates and enrol at: http://cern.ch/club-fitness Hope to see you among us! CERN Fitness Club fitness.club@cern.ch  

  17. Uncertainty as Knowledge: Constraints on Policy Choices Provided by Analysis of Uncertainty

    Science.gov (United States)

    Lewandowsky, S.; Risbey, J.; Smithson, M.; Newell, B. R.

    2012-12-01

    Uncertainty forms an integral part of climate science, and it is often cited in connection with arguments against mitigative action. We argue that an analysis of uncertainty must consider existing knowledge as well as uncertainty, and the two must be evaluated with respect to the outcomes and risks associated with possible policy options. Although risk judgments are inherently subjective, an analysis of the role of uncertainty within the climate system yields two constraints that are robust to a broad range of assumptions. Those constraints are that (a) greater uncertainty about the climate system is necessarily associated with greater expected damages from warming, and (b) greater uncertainty translates into a greater risk of the failure of mitigation efforts. These ordinal constraints are unaffected by subjective or cultural risk-perception factors, they are independent of the discount rate, and they are independent of the magnitude of the estimate for climate sensitivity. The constraints mean that any appeal to uncertainty must imply a stronger, rather than weaker, need to cut greenhouse gas emissions than in the absence of uncertainty.

  18. Kernel-density estimation and approximate Bayesian computation for flexible epidemiological model fitting in Python.

    Science.gov (United States)

    Irvine, Michael A; Hollingsworth, T Déirdre

    2018-05-26

    Fitting complex models to epidemiological data is a challenging problem: methodologies can be inaccessible to all but specialists, there may be challenges in adequately describing uncertainty in model fitting, the complex models may take a long time to run, and it can be difficult to fully capture the heterogeneity in the data. We develop an adaptive approximate Bayesian computation scheme to fit a variety of epidemiologically relevant data with minimal hyper-parameter tuning by using an adaptive tolerance scheme. We implement a novel kernel density estimation scheme to capture both dispersed and multi-dimensional data, and directly compare this technique to standard Bayesian approaches. We then apply the procedure to a complex individual-based simulation of lymphatic filariasis, a human parasitic disease. The procedure and examples are released alongside this article as an open access library, with examples to aid researchers to rapidly fit models to data. This demonstrates that an adaptive ABC scheme with a general summary and distance metric is capable of performing model fitting for a variety of epidemiological data. It also does not require significant theoretical background to use and can be made accessible to the diverse epidemiological research community. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
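
    A minimal version of the ingredients described above — rejection ABC with summary statistics, a distance metric, and an adaptive tolerance that shrinks each generation — can be sketched in a few lines. This is an illustrative toy (Poisson model, invented data), not the released library:

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.poisson(4.0, size=200)                 # stand-in for field data
obs_summary = np.array([observed.mean(), observed.var()])

def simulate(theta, n=200):
    return rng.poisson(theta, size=n)

def distance(x):
    # distance between simulated and observed summary statistics
    s = np.array([x.mean(), x.var()])
    return np.linalg.norm(s - obs_summary)

# ABC rejection with an adaptive tolerance: each generation keeps the best
# quantile of draws, so epsilon shrinks as the population concentrates
thetas = rng.uniform(0.0, 10.0, size=2000)            # draws from a flat prior
keep_q = 0.2
for generation in range(4):
    dists = np.array([distance(simulate(t)) for t in thetas])
    eps = np.quantile(dists, keep_q)
    accepted = thetas[dists <= eps]
    # resample accepted particles with a small perturbation kernel
    thetas = rng.choice(accepted, size=2000) + rng.normal(0, 0.1, size=2000)
    thetas = np.clip(thetas, 1e-3, 10.0)

print(f"posterior mean ~ {accepted.mean():.2f}, final eps = {eps:.3f}")
```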

  19. Lineup Composition, Suspect Position, and the Sequential Lineup Advantage

    Science.gov (United States)

    Carlson, Curt A.; Gronlund, Scott D.; Clark, Steven E.

    2008-01-01

    N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate…

  20. Trial Sequential Analysis in systematic reviews with meta-analysis

    Directory of Open Access Journals (Sweden)

    Jørn Wetterslev

    2017-03-01

    Full Text Available Abstract Background Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). Methods We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. Results The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentistic approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in
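
    The diversity-adjusted required information size at the heart of the method can be illustrated with the standard sample-size formula for a continuous outcome, inflated by 1/(1 − D²). A hedged sketch, with illustrative numbers rather than any particular meta-analysis:

```python
from scipy.stats import norm

def required_information_size(delta, sd, alpha=0.05, beta=0.10, diversity=0.0):
    """Approximate required meta-analytic information size (total number of
    randomised participants) for a continuous outcome, inflated by the
    diversity (D^2) between trials as in Trial Sequential Analysis.

    delta : minimally relevant difference in means
    sd    : common standard deviation of the outcome
    """
    z_a = norm.ppf(1 - alpha / 2)          # two-sided significance level
    z_b = norm.ppf(1 - beta)               # desired power = 1 - beta
    n_fixed = 4 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2
    return n_fixed / (1 - diversity)       # heterogeneity adjustment

# e.g. a minimal relevant difference of 5 units, SD 20, 90% power, D^2 = 0.25
print(round(required_information_size(delta=5, sd=20, diversity=0.25)))
```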

  1. Expert system for identification of simultaneous and sequential reactor fuel failures with gas tagging

    Science.gov (United States)

    Gross, Kenny C.

    1994-01-01

    Failure of a fuel element in a nuclear reactor core is determined by a gas tagging failure detection system and method. Failures are catalogued and characterized after the event so that samples of the reactor's cover gas are taken at regular intervals and analyzed by mass spectroscopy. Employing a first set of systematic heuristic rules which are applied in a transformed node space allows the number of node combinations which must be processed within a barycentric algorithm to be substantially reduced. A second set of heuristic rules treats the tag nodes of the most recent one or two leakers as "background" gases, further reducing the number of trial node combinations. Lastly, a "fuzzy" set theory formalism minimizes experimental uncertainties in the identification of the most likely volumes of tag gases. This approach allows for the identification of virtually any number of sequential leaks and up to five simultaneous gas leaks from fuel elements.

  2. Expert system for identification of simultaneous and sequential reactor fuel failures with gas tagging

    International Nuclear Information System (INIS)

    Gross, K.C.

    1994-01-01

    Failure of a fuel element in a nuclear reactor core is determined by a gas tagging failure detection system and method. Failures are catalogued and characterized after the event so that samples of the reactor's cover gas are taken at regular intervals and analyzed by mass spectroscopy. Employing a first set of systematic heuristic rules which are applied in a transformed node space allows the number of node combinations which must be processed within a barycentric algorithm to be substantially reduced. A second set of heuristic rules treats the tag nodes of the most recent one or two leakers as ''background'' gases, further reducing the number of trial node combinations. Lastly, a ''fuzzy'' set theory formalism minimizes experimental uncertainties in the identification of the most likely volumes of tag gases. This approach allows for the identification of virtually any number of sequential leaks and up to five simultaneous gas leaks from fuel elements. 14 figs

  3. Assessment of Uncertainty in the Determination of Activation Energy for Polymeric Materials

    Science.gov (United States)

    Darby, Stephania P.; Landrum, D. Brian; Coleman, Hugh W.

    1998-01-01

    An assessment of the experimental uncertainty in obtaining the kinetic activation energy from thermogravimetric analysis (TGA) data is presented. A neat phenolic resin, Borden SC1008, was heated at three heating rates to obtain weight loss vs temperature data. Activation energy was calculated by two methods: the traditional Flynn and Wall method based on the slope of log(q) versus 1/T, and a modification of this method where the ordinate and abscissa are reversed in the linear regression. The modified method produced a more accurate curve fit of the data, was more sensitive to data nonlinearity, and gave a value of activation energy 75 percent greater than the original method. An uncertainty analysis using the modified method yielded a 60 percent uncertainty in the average activation energy. Based on this result, the activation energy for a carbon-phenolic material was doubled and used to calculate the ablation rate in a typical solid rocket environment. Doubling the activation energy increased surface recession by 3 percent. Current TGA data reduction techniques that use the traditional Flynn and Wall approach to calculate activation energy should be changed to the modified method.
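
    The two regression variants compared in the abstract differ only in which variable is treated as the response. A sketch of both, using the Flynn–Wall–Ozawa relation E ≈ −(R/0.457)·d(log₁₀ q)/d(1/T) and invented iso-conversional data; with noisy measurements the two slopes (and hence the two activation energies) diverge, which is the effect the authors report:

```python
import numpy as np

R, B = 8.314, 0.457        # gas constant (J/mol/K); Doyle approximation constant

# iso-conversional data: heating rates q (K/min) and the temperatures T (K)
# at which a fixed conversion was reached -- illustrative numbers only
q = np.array([5.0, 10.0, 20.0])
T = np.array([610.0, 625.0, 641.0])

x, y = 1.0 / T, np.log10(q)

# traditional fit: regress log10(q) on 1/T
slope_yx = np.polyfit(x, y, 1)[0]
# "modified" fit: regress 1/T on log10(q), then invert the slope
slope_xy = 1.0 / np.polyfit(y, x, 1)[0]

for name, s in [("log(q) vs 1/T", slope_yx), ("reversed regression", slope_xy)]:
    Ea = -R * s / B / 1000.0               # activation energy in kJ/mol
    print(f"{name:>20s}: Ea = {Ea:.1f} kJ/mol")
```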

  4. Interlaboratory study of a liquid chromatography method for erythromycin: determination of uncertainty.

    Science.gov (United States)

    Dehouck, P; Vander Heyden, Y; Smeyers-Verbeke, J; Massart, D L; Marini, R D; Chiap, P; Hubert, Ph; Crommen, J; Van de Wauw, W; De Beer, J; Cox, R; Mathieu, G; Reepmeyer, J C; Voigt, B; Estevenon, O; Nicolas, A; Van Schepdael, A; Adams, E; Hoogmartens, J

    2003-08-22

    Erythromycin is a mixture of macrolide antibiotics produced by Saccharopolyspora erythraea during fermentation. A new method for the analysis of erythromycin by liquid chromatography has previously been developed. It makes use of an Astec C18 polymeric column. After validation in one laboratory, the method was now validated in an interlaboratory study. Validation studies are commonly used to test the fitness of the analytical method prior to its use for routine quality testing. The data derived in the interlaboratory study can be used to make an uncertainty statement as well. The relationship between validation and uncertainty statement is not clear for many analysts and there is a need to show how the existing data, derived during validation, can be used in practice. Eight laboratories participated in this interlaboratory study. The set-up allowed the determination of the repeatability variance, s²r, and the between-laboratory variance, s²L. Combination of s²r and s²L results in the reproducibility variance s²R. It has been shown how these data can be used in future by a single laboratory that wants to make an uncertainty statement concerning the same analysis.
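
    The variance components named in the abstract follow from a one-way ANOVA over the laboratory results, as in ISO 5725. A sketch with invented replicate data (the real study used eight laboratories):

```python
import numpy as np

# illustrative interlaboratory results: rows = laboratories,
# columns = replicate determinations of erythromycin content (%)
data = np.array([
    [78.1, 78.4, 77.9],
    [79.0, 78.8, 79.3],
    [77.5, 77.8, 77.6],
    [78.6, 78.2, 78.5],
])
p, n = data.shape                       # p labs, n replicates each

lab_means = data.mean(axis=1)
grand_mean = data.mean()

ms_within = ((data - lab_means[:, None]) ** 2).sum() / (p * (n - 1))
ms_between = n * ((lab_means - grand_mean) ** 2).sum() / (p - 1)

s2_r = ms_within                                  # repeatability variance
s2_L = max((ms_between - ms_within) / n, 0.0)     # between-laboratory variance
s2_R = s2_r + s2_L                                # reproducibility variance

# expanded uncertainty (k = 2) for a future single determination
print(f"s_r = {s2_r**0.5:.3f}, s_L = {s2_L**0.5:.3f}, U = {2 * s2_R**0.5:.3f}")
```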

  5. Uncertainties of predictions from parton distribution functions. I. The Lagrange multiplier method

    International Nuclear Information System (INIS)

    Stump, D.; Pumplin, J.; Brock, R.; Casey, D.; Huston, J.; Kalk, J.; Lai, H. L.; Tung, W. K.

    2002-01-01

    We apply the Lagrange multiplier method to study the uncertainties of physical predictions due to the uncertainties of parton distribution functions (PDF's), using the cross section σ_W for W production at a hadron collider as an archetypal example. An effective χ² function based on the CTEQ global QCD analysis is used to generate a series of PDF's, each of which represents the best fit to the global data for some specified value of σ_W. By analyzing the likelihood of these 'alternative hypotheses', using available information on errors from the individual experiments, we estimate that the fractional uncertainty of σ_W due to current experimental input to the PDF analysis is approximately ±4% at the Fermilab Tevatron, and ±8-10% at the CERN Large Hadron Collider. We give sets of PDF's corresponding to these up and down variations of σ_W. We also present similar results on Z production at the colliders. Our method can be applied to any combination of physical variables in precision QCD phenomenology, and it can be used to generate benchmarks for testing the accuracy of approximate methods based on the error matrix
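
    The method itself is compact: for each value of the multiplier λ, minimize χ²(θ) + λ·σ_W(θ) over the fit parameters, which traces out the best attainable χ² as a function of the prediction. A toy sketch, with an invented two-parameter "global fit" standing in for the CTEQ analysis:

```python
import numpy as np
from scipy.optimize import minimize

# toy stand-ins: chi2 plays the "global fit" and f the physical prediction
def chi2(theta):
    return np.sum((theta - np.array([1.0, 2.0])) ** 2 / np.array([0.04, 0.09]))

def f(theta):                        # e.g. a cross section sigma_W(theta)
    return 3.0 * theta[0] + 0.5 * theta[1] ** 2

# scan the Lagrange multiplier: each lambda gives the best fit subject to a
# pulled value of f, tracing chi2 as a function of f
curve = []
for lam in np.linspace(-5, 5, 41):
    res = minimize(lambda th: chi2(th) + lam * f(th), x0=[1.0, 2.0])
    curve.append((f(res.x), chi2(res.x)))

curve = np.array(curve)
tol = 1.0                                       # delta-chi2 tolerance
allowed = curve[curve[:, 1] - curve[:, 1].min() <= tol, 0]
print(f"f in [{allowed.min():.2f}, {allowed.max():.2f}] at delta chi2 <= {tol}")
```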

  6. Heat accumulation during sequential cortical bone drilling.

    Science.gov (United States)

    Palmisano, Andrew C; Tai, Bruce L; Belmont, Barry; Irwin, Todd A; Shih, Albert; Holmes, James R

    2016-03-01

    Significant research exists regarding heat production during single-hole bone drilling. No published data exist regarding repetitive sequential drilling. This study elucidates the phenomenon of heat accumulation for sequential drilling with both Kirschner wires (K wires) and standard two-flute twist drills. It was hypothesized that cumulative heat would result in a higher temperature with each subsequent drill pass. Nine holes in a 3 × 3 array were drilled sequentially on moistened cadaveric tibia bone kept at body temperature (about 37 °C). Four thermocouples were placed at the center of four adjacent holes and 2 mm below the surface. A battery-driven hand drill guided by a servo-controlled motion system was used. Six samples were drilled with each tool (2.0 mm K wire and 2.0 and 2.5 mm standard drills). K wire drilling increased the temperature elevation from 5 °C at the first hole to 20 °C at holes 6 through 9. A similar trend was found in standard drills with less significant increments. The maximum temperatures of both tools increased with successive holes; the difference between drill sizes was found to be insignificant (P > 0.05). In conclusion, heat accumulated during sequential drilling, with size difference being insignificant. K wire produced more heat than its twist-drill counterparts. This study has demonstrated the heat accumulation phenomenon and its significant effect on temperature. Maximizing the drilling field and reducing the number of drill passes may decrease bone injury. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  7. Uncertainty in social dilemmas

    OpenAIRE

    Kwaadsteniet, Erik Willem de

    2007-01-01

    This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size uncertainty). Several researchers have therefore asked themselves the question as to how such uncertainty influences people’s choice behavior. These researchers have repeatedly concluded that uncertainty...

  8. Uncertainty theory

    CERN Document Server

    Liu, Baoding

    2015-01-01

    When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...

  9. Cost-effectiveness of simultaneous versus sequential surgery in head and neck reconstruction.

    Science.gov (United States)

    Wong, Kevin K; Enepekides, Danny J; Higgins, Kevin M

    2011-02-01

    To determine whether simultaneous (ablation and reconstruction overlap by two teams) head and neck reconstruction is cost effective compared to sequentially (ablation followed by reconstruction) performed surgery. Case-controlled study. Tertiary care hospital. Oncology patients undergoing free flap reconstruction of the head and neck. A matched-pair comparison study was performed with a retrospective chart review examining the total time of surgery for sequential and simultaneous surgery. Nine patients were selected for both the sequential and simultaneous groups. Sequential head and neck reconstruction patients were pair matched with patients who had undergone similar oncologic ablative or reconstructive procedures performed in a simultaneous fashion. A detailed cost analysis using the microcosting method was then undertaken looking at the direct costs of the surgeons, anesthesiologist, operating room, and nursing. On average, simultaneous surgery required 3 hours 15 minutes less operating time, leading to a cost savings of approximately $1200/case when compared to sequential surgery. This represents approximately a 15% reduction in the cost of the entire operation. Simultaneous head and neck reconstruction is more cost effective when compared to sequential surgery.

  10. Estimates of uncertainties in analysis of positron lifetime spectra for metals

    International Nuclear Information System (INIS)

    Eldrup, M.; Huang, Y.M.; McKee, B.T.A.

    1978-01-01

    The effects of uncertainties and errors in various constraints used in the analysis of multi-component life-time spectra of positrons annihilating in metals containing defects have been investigated in detail using computer simulated decay spectra and subsequent analysis. It is found that the errors in the fitted values of the main component lifetimes and intensities introduced from incorrect values of the instrumental resolution function and of the source-surface components can easily exceed the statistical uncertainties. The effect of an incorrect resolution function may be reduced by excluding the peak regions of the spectra from the analysis. The influence of using incorrect source-surface components in the analysis may on the other hand be reduced by including the peak regions of the spectra. A main conclusion of the work is that extreme caution should be exercised to avoid introducing large errors through the constraints used in the analysis of experimental lifetime data. (orig.) [de]

  11. Dihydroazulene photoswitch operating in sequential tunneling regime

    DEFF Research Database (Denmark)

    Broman, Søren Lindbæk; Lara-Avila, Samuel; Thisted, Christine Lindbjerg

    2012-01-01

    to electrodes so that the electron transport goes by sequential tunneling. To assure weak coupling, the DHA switching kernel is modified by incorporating p-MeSC6H4 end-groups. Molecules are prepared by Suzuki cross-couplings on suitable halogenated derivatives of DHA. The synthesis presents an expansion of our......, incorporating a p-MeSC6H4 anchoring group in one end, has been placed in a silver nanogap. Conductance measurements justify that transport through both DHA (high resistivity) and VHF (low resistivity) forms goes by sequential tunneling. The switching is fairly reversible and reenterable; after more than 20 ON...

  12. A Trust-region-based Sequential Quadratic Programming Algorithm

    DEFF Research Database (Denmark)

    Henriksen, Lars Christian; Poulsen, Niels Kjølstad

    This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.
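
    The problem class the note describes maps directly onto off-the-shelf SQP solvers. A sketch using SciPy's SLSQP (itself a sequential quadratic programming method, though not the authors' trust-region variant); the cost and constraints are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# convex nonlinear cost
cost = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 0.5) ** 2 + np.exp(0.1 * x[0])

constraints = [
    # linear inequality: x0 + x1 <= 1  (SLSQP expects g(x) >= 0)
    {"type": "ineq", "fun": lambda x: 1.0 - x[0] - x[1]},
    # nonlinear equality: x0^2 + x1^2 = 1
    {"type": "eq", "fun": lambda x: x[0] ** 2 + x[1] ** 2 - 1.0},
]

res = minimize(cost, x0=np.array([0.5, 0.5]), method="SLSQP",
               constraints=constraints)
print(res.x, res.fun, res.success)
```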

  13. Sequential search leads to faster, more efficient fragment-based de novo protein structure prediction.

    Science.gov (United States)

    de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M

    2018-04-01

    Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.

  14. Subpixel edge localization with reduced uncertainty by violating the Nyquist criterion

    Science.gov (United States)

    Heidingsfelder, Philipp; Gao, Jun; Wang, Kun; Ott, Peter

    2014-12-01

    In this contribution, the extent to which the Nyquist criterion can be violated in optical imaging systems with a digital sensor, e.g., a digital microscope, is investigated. In detail, we analyze the subpixel uncertainty of the detected position of a step edge, the edge of a stripe with a varying width, and that of a periodic rectangular pattern for varying pixel pitches of the sensor, thus also in aliased conditions. The analysis includes the investigation of different algorithms of edge localization based on direct fitting or based on the derivative of the edge profile, such as the common centroid method. In addition to the systematic error of these algorithms, the influence of the photon noise (PN) is included in the investigation. A simplified closed form solution for the uncertainty of the edge position caused by the PN is derived. The presented results show that, in the vast majority of cases, the pixel pitch can exceed the Nyquist sampling distance by about 50% without an increase of the uncertainty of edge localization. This allows one to increase the field-of-view without increasing the resolution of the sensor and to decrease the size of the setup by reducing the magnification. Experimental results confirm the simulation results.
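
    Of the localization algorithms mentioned, the centroid method is the simplest to sketch: the edge position is estimated as the centroid of the derivative of the intensity profile, landing between pixel centers. The edge model and noise level below are illustrative only:

```python
import numpy as np

def subpixel_edge(profile):
    """Locate a step edge with subpixel precision via the centroid of
    the discrete derivative of the intensity profile."""
    d = np.abs(np.diff(profile))
    pos = np.arange(d.size) + 0.5     # derivative samples sit between pixels
    return np.sum(pos * d) / np.sum(d)

# synthetic step edge at 10.3 px, blurred and sampled on the pixel grid
x = np.arange(32)
true_edge = 10.3
profile = 0.5 * (1 + np.tanh((x - true_edge) / 1.2))          # smooth edge model
profile += np.random.default_rng(2).normal(0, 0.01, x.size)   # photon-like noise

print(f"estimated edge: {subpixel_edge(profile):.3f} px (true {true_edge})")
```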

  15. The EURACOS activation experiments: preliminary uncertainty analysis

    International Nuclear Information System (INIS)

    Yeivin, Y.

    1982-01-01

    A sequence of counting rates of an irradiated sulphur pellet, r(t_i), measured at different times after the end of the irradiation, is fitted to r(t) = A·exp(−λt) + B. A standard adjustment procedure is applied to determine the parameters A and B, their standard deviations and correlation, and chi square. It is demonstrated that if the counting-rate uncertainties are entirely due to the counting statistics, the experimental data are totally inconsistent with the ''theoretical'' model. However, assuming an additional systematic error of approximately 1%, and eliminating a few ''bad'' data, produces a data set quite consistent with the model. The dependence of chi square on the assumed systematic error and the data elimination procedure are discussed in great detail. A review of the adjustment procedure is appended to the report
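
    The adjustment procedure is a weighted linear least-squares fit, since r(t) is linear in A and B once λ is fixed. The sketch below (synthetic counts; the 14.3 d half-life of ³²P from sulphur activation is used only as an example) shows the mechanics, including the step of adding a ~1% systematic error in quadrature and watching χ²/dof respond:

```python
import numpy as np

rng = np.random.default_rng(3)
lam = np.log(2) / 14.3                 # decay constant, e.g. 14.3 d half-life of 32P
t = np.linspace(0, 60, 30)             # counting times (days) after irradiation
A_true, B_true = 1.0e4, 50.0
r = rng.poisson(A_true * np.exp(-lam * t) + B_true).astype(float)

def fit(sigma):
    """Weighted linear least squares for r(t) = A exp(-lambda t) + B."""
    X = np.column_stack([np.exp(-lam * t), np.ones_like(t)])
    W = np.diag(1.0 / sigma ** 2)
    cov = np.linalg.inv(X.T @ W @ X)
    params = cov @ X.T @ W @ r
    resid = r - X @ params
    return params, (resid @ W @ resid) / (t.size - 2)

# counting statistics only, then with an extra 1% systematic added in quadrature
for label, sig in [("statistical only", np.sqrt(r)),
                   ("+1% systematic", np.sqrt(r + (0.01 * r) ** 2))]:
    (A, B), red_chi2 = fit(sig)
    print(f"{label:>16s}: A = {A:.0f}, B = {B:.1f}, chi2/dof = {red_chi2:.2f}")
```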

  16. Moving Beyond 2% Uncertainty: A New Framework for Quantifying Lidar Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Newman, Jennifer F.; Clifton, Andrew

    2017-03-08

    Remote sensing of wind using lidar is revolutionizing wind energy. However, current generations of wind lidar are ascribed a climatic value of uncertainty, which is based on a poor description of lidar sensitivity to external conditions. In this presentation, we show how it is important to consider the complete lidar measurement process to define the measurement uncertainty, which in turn offers the ability to define a much more granular and dynamic measurement uncertainty. This approach is a progression from the 'white box' lidar uncertainty method.

  17. Synthetic Aperture Sequential Beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke

    2008-01-01

    A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB) suitable for 2D and 3D imaging is presented. The technique differs from prior art of SAF in the sense that SAF is performed on pre-beamformed data contrary to channel data. The objective is to improve and obtain a more range independent lateral resolution compared to conventional dynamic receive focusing (DRF) without compromising frame rate. SASB is a two-stage procedure using two separate beamformers. First a set of B-mode image lines using a single focal point in both transmit and receive is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF

  18. Uncertainty quantification and sensitivity analysis of an arterial wall mechanics model for evaluation of vascular drug therapies.

    Science.gov (United States)

    Heusinkveld, Maarten H G; Quicken, Sjeng; Holtackers, Robert J; Huberts, Wouter; Reesink, Koen D; Delhaas, Tammo; Spronck, Bart

    2018-02-01

    Quantification of the uncertainty in constitutive model predictions describing arterial wall mechanics is vital towards non-invasive assessment of vascular drug therapies. Therefore, we perform uncertainty quantification to determine uncertainty in mechanical characteristics describing the vessel wall response upon loading. Furthermore, a global variance-based sensitivity analysis is performed to pinpoint measurements that are most rewarding to be measured more precisely. We used previously published carotid diameter-pressure and intima-media thickness (IMT) data (measured in triplicate), and Holzapfel-Gasser-Ogden models. A virtual data set containing 5000 diastolic and systolic diameter-pressure points, and IMT values was generated by adding measurement error to the average of the measured data. The model was fitted to single-exponential curves calculated from the data, obtaining distributions of constitutive parameters and constituent load bearing parameters. Additionally, we (1) simulated vascular drug treatment to assess the relevance of model uncertainty and (2) evaluated how increasing the number of measurement repetitions influences model uncertainty. We found substantial uncertainty in constitutive parameters. Simulating vascular drug treatment predicted a 6% point reduction in collagen load bearing ([Formula: see text]), approximately 50% of its uncertainty. Sensitivity analysis indicated that the uncertainty in [Formula: see text] was primarily caused by noise in distension and IMT measurements. Spread in [Formula: see text] could be decreased by 50% when increasing the number of measurement repetitions from 3 to 10. Model uncertainty, notably that in [Formula: see text], could conceal effects of vascular drug therapy. However, this uncertainty could be reduced by increasing the number of measurement repetitions of distension and wall thickness measurements used for model parameterisation.
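
    The global variance-based sensitivity analysis referred to above can be sketched with the standard Saltelli pick-freeze estimator for first-order Sobol indices. The toy model below stands in for the Holzapfel-Gasser-Ogden fit; every name and number is illustrative:

```python
import numpy as np

def sobol_first_order(model, sampler, n=20000, d=3, seed=4):
    """Monte Carlo estimate of first-order Sobol indices (Saltelli scheme)."""
    rng = np.random.default_rng(seed)
    A, B = sampler(rng, n, d), sampler(rng, n, d)   # two independent sample blocks
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                          # freeze column i from B
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# toy "wall model": output depends strongly on x0, weakly on x2
model = lambda X: X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * X[:, 2]
sampler = lambda rng, n, d: rng.uniform(0, 1, size=(n, d))
print(sobol_first_order(model, sampler))   # ~ [0.80, 0.19, 0.01]
```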

  19. Property Uncertainty Analysis and Methods for Optimal Working Fluids of Thermodynamic Cycles

    DEFF Research Database (Denmark)

    Frutiger, Jerome

    There is an increasing interest in recovering industrial waste heat at low temperatures (70-250°C). Thermodynamic cycles, such as heat pumps or organic Rankine cycles, can recover this heat and transfer it to other process streams or convert it into electricity. The working fluid, circulating ... in the context of an industrial organic Rankine cycle, used for the recovery of waste heat from an engine of a marine container ship. The study illustrates that the model structure is vital for the uncertainties of equations of state and suggests that uncertainty becomes a criterion (along with e.g. goodness-of-fit or ease of use) for the selection of an equation of state for a specific application. Furthermore, two studies on the identification of suitable working fluids for thermodynamic cycles are presented. The first one selects and assesses working fluid candidates for an organic Rankine cycle system to recover ...

  20. Evaluation Using Sequential Trials Methods.

    Science.gov (United States)

    Cohen, Mark E.; Ralls, Stephen A.

    1986-01-01

    Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)

  1. Attack Trees with Sequential Conjunction

    NARCIS (Netherlands)

    Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando

    2015-01-01

    We provide the first formal foundation of SAND attack trees which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of

  2. Quantification of variability and uncertainty in lawn and garden equipment NOx and total hydrocarbon emission factors.

    Science.gov (United States)

    Frey, H Christopher; Bammi, Sachin

    2002-04-01

    Variability refers to real differences in emissions among multiple emission sources at any given time or over time for any individual emission source. Variability in emissions can be attributed to variation in fuel or feedstock composition, ambient temperature, design, maintenance, or operation. Uncertainty refers to lack of knowledge regarding the true value of emissions. Sources of uncertainty include small sample sizes, bias or imprecision in measurements, nonrepresentativeness, or lack of data. Quantitative methods for characterizing both variability and uncertainty are demonstrated and applied to case studies of emission factors for lawn and garden (L&G) equipment engines. Variability was quantified using empirical and parametric distributions. Bootstrap simulation was used to characterize confidence intervals for the fitted distributions. The 95% confidence intervals for the mean grams per brake horsepower/hour (g/hp-hr) emission factors for two-stroke engine total hydrocarbon (THC) and NOx emissions were from -30 to +41% and from -45 to +75%, respectively. The confidence intervals for four-stroke engines were from -33 to +46% for THCs and from -27 to +35% for NOx. These quantitative measures of uncertainty convey information regarding the quality of the emission factors and serve as a basis for calculation of uncertainty in emission inventories (EIs).
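
    Bootstrap simulation of the mean, as used in the paper to attach confidence intervals to the fitted distributions, reduces to resampling the engine-level emission factors with replacement. A sketch with invented values:

```python
import numpy as np

rng = np.random.default_rng(5)
# illustrative THC emission factors (g/hp-hr) for a handful of tested engines
ef = np.array([8.1, 12.4, 6.7, 15.2, 9.9, 7.3, 11.0, 18.6, 5.9, 10.5])

# bootstrap the sampling distribution of the mean (uncertainty); the spread
# of the raw values themselves reflects inter-engine variability
boot_means = np.array([rng.choice(ef, size=ef.size).mean()
                       for _ in range(10000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
m = ef.mean()
print(f"mean = {m:.1f} g/hp-hr, "
      f"95% CI {100 * (lo - m) / m:+.0f}% to {100 * (hi - m) / m:+.0f}%")
```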

  3. Development of Property Models with Uncertainty Estimate for Process Design under Uncertainty

    DEFF Research Database (Denmark)

    Hukkerikar, Amol; Sarup, Bent; Abildskov, Jens

    more reliable predictions with a new and improved set of model parameters for GC (group contribution) based and CI (atom connectivity index) based models and to quantify the uncertainties in the estimated property values from a process design point-of-view. This includes: (i) parameter estimation using ... The comparison of model prediction uncertainties with reported range of measurement uncertainties is presented for the properties with related available data. The application of the developed methodology to quantify the effect of these uncertainties on the design of different unit operations (distillation column ..., the developed methodology can be used to quantify the sensitivity of process design to uncertainties in property estimates; obtain rationally the risk/safety factors in process design; and identify additional experimentation needs in order to reduce most critical uncertainties.

  4. A sequential Monte Carlo model of the combined GB gas and electricity network

    International Nuclear Information System (INIS)

    Chaudry, Modassar; Wu, Jianzhong; Jenkins, Nick

    2013-01-01

    A Monte Carlo model of the combined GB gas and electricity network was developed to determine the reliability of the energy infrastructure. The model integrates the gas and electricity network into a single sequential Monte Carlo simulation. The model minimises the combined costs of the gas and electricity network, these include gas supplies, gas storage operation and electricity generation. The Monte Carlo model calculates reliability indices such as loss of load probability and expected energy unserved for the combined gas and electricity network. The intention of this tool is to facilitate reliability analysis of integrated energy systems. Applications of this tool are demonstrated through a case study that quantifies the impact on the reliability of the GB gas and electricity network given uncertainties such as wind variability, gas supply availability and outages to energy infrastructure assets. Analysis is performed over a typical midwinter week on a hypothesised GB gas and electricity network in 2020 that meets European renewable energy targets. The efficacy of doubling GB gas storage capacity on the reliability of the energy system is assessed. The results highlight the value of greater gas storage facilities in enhancing the reliability of the GB energy system given various energy uncertainties. -- Highlights: •A Monte Carlo model of the combined GB gas and electricity network was developed. •Reliability indices are calculated for the combined GB gas and electricity system. •The efficacy of doubling GB gas storage capacity on reliability of the energy system is assessed. •Integrated reliability indices could be used to assess the impact of investment in energy assets
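
    A sequential Monte Carlo reliability assessment of this kind reduces to sampling chronological supply and demand traces and accumulating loss-of-load statistics. The stylised sketch below (invented capacities and outage rates, a single midwinter week, and hourly outages drawn independently rather than with proper up/down-time sequences) computes the two indices named in the abstract:

```python
import numpy as np

rng = np.random.default_rng(6)
HOURS, YEARS = 168, 2000             # one midwinter week, sampled many times

demand = 55.0 + 5.0 * np.sin(np.linspace(0, 14 * np.pi, HOURS))  # GW, stylised
lole_hours, eens = 0, 0.0
for _ in range(YEARS):
    wind = np.clip(rng.normal(8.0, 4.0, HOURS), 0.0, 16.0)  # variable wind (GW)
    units = rng.random((20, HOURS)) > 0.05       # 20 units, 5% forced-outage rate
    supply = 3.0 * units.sum(axis=0) + wind      # 3 GW per conventional unit
    shortfall = np.maximum(demand - supply, 0.0)
    lole_hours += int(np.count_nonzero(shortfall))
    eens += shortfall.sum()

print(f"loss-of-load probability = {lole_hours / (HOURS * YEARS):.4f}")
print(f"expected energy unserved = {eens / YEARS:.2f} GWh/week")
```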

  5. The impact of eyewitness identifications from simultaneous and sequential lineups.

    Science.gov (United States)

    Wright, Daniel B

    2007-10-01

    Recent guidelines in the US allow either simultaneous or sequential lineups to be used for eyewitness identification. This paper investigates how potential jurors weight the probative value of the different outcomes from both of these types of lineups. Participants (n=340) were given a description of a case that included some exonerating and some incriminating evidence. There was either a simultaneous or a sequential lineup. Depending on the condition, an eyewitness chose the suspect, chose a filler, or made no identification. The participant had to judge the guilt of the suspect and decide whether to render a guilty verdict. For both simultaneous and sequential lineups an identification had a large effect,increasing the probability of a guilty verdict. There were no reliable effects detected between making no identification and identifying a filler. The effect sizes were similar for simultaneous and sequential lineups. These findings are important for judges and other legal professionals to know for trials involving lineup identifications.

  6. Properties of simultaneous and sequential two-nucleon transfer

    International Nuclear Information System (INIS)

    Pinkston, W.T.; Satchler, G.R.

    1982-01-01

    Approximate forms of the first- and second-order distorted-wave Born amplitudes are used to study the overall structure, particularly the selection rules, of the amplitudes for simultaneous and sequential transfer of two nucleons. The role of the spin-state assumed for the intermediate deuterons in sequential (t, p) reactions is stressed. The similarity of one-step and two-step amplitudes for (α, d) reactions is exhibited, and the consequent absence of any obvious J-dependence in their interference is noted. (orig.)

  7. Sequential contrast-enhanced MR imaging of the penis.

    Science.gov (United States)

    Kaneko, K; De Mouy, E H; Lee, B E

    1994-04-01

    To determine the enhancement patterns of the penis at magnetic resonance (MR) imaging. Sequential contrast material-enhanced MR images of the penis in a flaccid state were obtained in 16 volunteers (12 with normal penile function and four with erectile dysfunction). Subjects with normal erectile function showed gradual and centrifugal enhancement of the corpora cavernosa, while those with erectile dysfunction showed poor enhancement with abnormal progression. Sequential contrast-enhanced MR imaging provides additional morphologic information for the evaluation of erectile dysfunction.

  8. Fitting and benchmarking of Monte Carlo output parameters for iridium-192 high dose rate brachytherapy source

    International Nuclear Information System (INIS)

    Acquah, F.G.

    2011-01-01

    Brachytherapy, the use of radioactive sources for the treatment of tumours, is an important tool in radiation oncology. Accurate calculation of the dose delivered to malignant and normal tissues is a main responsibility of the Medical Physics staff. With the use of Treatment Planning System (TPS) computers now standard practice in Radiation Oncology Departments, independent calculations to verify the results of these commercial TPSs are an important part of a good quality management system for brachytherapy implants. There are inherent errors in the dose distributions produced by these TPSs due to their failure to account for heterogeneity in the calculation algorithms, and the Monte Carlo (MC) method seems to be the panacea for these corrections. In this study, a functional fit to MC output parameters was performed to reduce dose calculation uncertainty, using the Matlab curve-fitting tools. This includes the modification of the AAPM TG-43 parameters to accommodate the new developments for a rapid brachytherapy dose rate calculation. Analytical computations were performed to hybridize the anisotropy function, F(r,θ), and radial dose function, g(r), into a single new function f(r,θ) for the Nucletron microSelectron High Dose Rate 'new or v2' (mHDRv2) ¹⁹²Ir brachytherapy source. In order to minimize computation time and to improve the accuracy of manual calculations, the dosimetry function f(r,θ) used fewer parameters and formulas for the fit. Using MC outputs as the standard, the percentage errors for the fits were calculated and used to evaluate the average and maximum uncertainties. Dose rate deviations between the MC data and the fit were also quantified as errors (E), which were minimal. These results showed that the dosimetry parameters from this study were in good agreement with the MC output parameters and better than results reported in the literature. The work shows considerable promise for building robust
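
    The hybridisation described above amounts to replacing the separate g(r) and F(r,θ) lookups of the TG-43 formalism with one interpolated table f(r,θ) = g(r)·F(r,θ) inside the dose-rate equation Ḋ(r,θ) = S_K·Λ·[G(r,θ)/G(r₀,θ₀)]·g(r)·F(r,θ). A sketch with a point-source geometry function and placeholder table values (NOT consensus mHDRv2 data; the dose-rate constant is only indicative):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# combined dosimetry table f(r, theta) = g(r) * F(r, theta) -- placeholder values
r_grid = np.array([0.5, 1.0, 2.0, 5.0])               # cm
th_grid = np.array([0.0, 45.0, 90.0, 135.0, 180.0])   # degrees
f_table = np.array([
    [0.92, 1.00, 1.04, 1.00, 0.92],
    [0.91, 0.99, 1.00, 0.99, 0.91],
    [0.88, 0.96, 0.97, 0.96, 0.88],
    [0.78, 0.85, 0.86, 0.85, 0.78],
])
f = RegularGridInterpolator((r_grid, th_grid), f_table)

S_K = 40000.0        # air-kerma strength (U), illustrative
LAMBDA = 1.108       # dose-rate constant (cGy/(h*U)), indicative value for mHDRv2

def dose_rate(r_cm, theta_deg):
    # point-source approximation of the geometry factor, normalised at r0 = 1 cm
    geometry = (1.0 / r_cm ** 2) / (1.0 / 1.0 ** 2)
    return S_K * LAMBDA * geometry * f([r_cm, theta_deg])[0]

print(f"{dose_rate(2.0, 90.0):.1f} cGy/h at r = 2 cm, theta = 90 deg")
```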

  9. Synthesizing genetic sequential logic circuit with clock pulse generator.

    Science.gov (United States)

    Chuang, Chia-Hua; Lin, Chun-Liang

    2014-05-28

    Rhythmic clocks occur widely in biological systems, where they control several aspects of cell physiology, and different cell types run at different rhythmic frequencies. Synthesizing a specific clock signal is a preliminary but necessary step toward the future development of a biological computer. This paper presents a genetic sequential logic circuit with a clock pulse generator based on a synthesized genetic oscillator, which generates a consecutive clock signal whose frequency is an inverse integer multiple of that of the genetic oscillator. An analogue of an electronic waveform-shaping circuit is constructed from a series of genetic buffers, which shape the logic high/low levels of an oscillation input within a basic sinusoidal cycle and generate a pulse-width-modulated (PWM) output with various duty cycles. By controlling the threshold level of the genetic buffer, a genetic clock pulse signal with a frequency matching that of the genetic oscillator is synthesized. A synchronous genetic counter circuit, based on the topology of digital sequential logic circuits and triggered by the clock pulse, then synthesizes a clock signal whose frequency is an inverse multiple of the oscillator's. The function acts like the frequency divider of electronic circuits, which plays a key role in sequential logic circuits with specific operational frequencies. A cascaded genetic logic circuit generating clock pulse signals is proposed. By analogy with digital sequential logic circuits, genetic sequential logic circuits constructed with the proposed approach can generate various clock signals from a single oscillation signal.
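
    The electronic analogue of the described mechanism is easy to state in code: threshold a sinusoidal oscillation into a clock pulse, then divide its frequency with a counter. A minimal sketch (all parameter values are illustrative, and the genetic implementation is of course only mimicked here):

    ```python
    import numpy as np

    def threshold_buffer(signal, level):
        """Waveform-shaping buffer: map an oscillation to logic high/low.
        Raising the threshold narrows the duty cycle of the resulting PWM pulse."""
        return (signal > level).astype(int)

    def divide_frequency(clock, n):
        """Synchronous counter acting as a frequency divider: toggle the output
        every n rising edges, so the output frequency is the input's divided by 2n."""
        out, state, count, prev = [], 0, 0, 0
        for c in clock:
            if c == 1 and prev == 0:   # rising edge detected
                count += 1
                if count == n:
                    state ^= 1
                    count = 0
            prev = c
            out.append(state)
        return out

    t = np.linspace(0, 10, 1000)
    oscillator = np.sin(2 * np.pi * t)      # stand-in for the genetic oscillator
    clk = threshold_buffer(oscillator, 0.5) # clock pulse at the oscillator frequency
    clk_div = divide_frequency(clk, 2)      # inverse-integer-multiple frequency
    ```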

  10. Sequential weak continuity of null Lagrangians at the boundary

    Czech Academy of Sciences Publication Activity Database

    Kalamajska, A.; Kraemer, S.; Kružík, Martin

    2014-01-01

    Vol. 49, 3/4 (2014), pp. 1263-1278 ISSN 0944-2669 R&D Projects: GA ČR GAP201/10/0357 Institutional support: RVO:67985556 Keywords: null Lagrangians * nonhomogeneous nonlinear mappings * sequential weak/in measure continuity Subject RIV: BA - General Mathematics Impact factor: 1.518, year: 2014 http://library.utia.cas.cz/separaty/2013/MTR/kruzik-sequential weak continuity of null lagrangians at the boundary.pdf

  11. Sequential modelling of the effects of mass drug treatments on anopheline-mediated lymphatic filariasis infection in Papua New Guinea.

    Directory of Open Access Journals (Sweden)

    Brajendra K Singh

    Full Text Available Lymphatic filariasis (LF) has been targeted by the WHO for global eradication, leading to the implementation of large-scale intervention programs based on annual mass drug administrations (MDA) worldwide. Recent work has indicated that locality-specific bio-ecological complexities affecting parasite transmission may complicate the prediction of LF extinction endpoints, casting uncertainty on the achievement of this initiative. One source of difficulty is the limited quantity and quality of data used to parameterize models of parasite transmission, implying the important need to update initially-derived parameter values. Sequential analysis of longitudinal data following annual MDAs will also be important to gaining new understanding of the persistence dynamics of LF. Here, we apply a Bayesian statistical-dynamical modelling framework that enables assimilation of information in human infection data recorded from communities in Papua New Guinea that underwent annual MDAs, into our previously developed model of parasite transmission, in order to examine these questions in LF ecology and control. Biological parameters underlying transmission obtained by fitting the model to longitudinal data remained stable throughout the study period. This enabled us to reliably reconstruct the observed baseline data in each community. Endpoint estimates also showed little variation. However, the updating procedure showed a shift towards higher and less variable values for worm kill but not for any other drug-related parameters. An intriguing finding is that the stability in key biological parameters could be disrupted by a significant reduction in the vector biting rate prevailing in a locality. Temporal invariance of biological parameters in the face of intervention perturbations indicates a robust adaptation of LF transmission to local ecological conditions. The results imply that understanding the mechanisms that underlie locally adapted transmission dynamics will

  12. Optimism in the face of uncertainty supported by a statistically-designed multi-armed bandit algorithm.

    Science.gov (United States)

    Kamiura, Moto; Sano, Kohei

    2017-10-01

    The principle of optimism in the face of uncertainty is known as a heuristic in sequential decision-making problems. The Overtaking method based on this principle is an effective algorithm for solving multi-armed bandit problems; in previous work it was defined through a set of heuristic formulation patterns. The objective of the present paper is to redefine the value functions of the Overtaking method and to unify their formulation. The unified Overtaking method associates statistical upper bounds of the confidence intervals of the expected rewards, and the unification of the formulation enhances the universality of the method. We thereby obtain a new Overtaking method for exponentially distributed rewards, analyze it numerically, and show that it outperforms the UCB algorithm on average. The present study suggests that, in the context of multi-armed bandit problems, the principle of optimism in the face of uncertainty should be regarded not as a heuristic but as the statistics-based consequence of the law of large numbers applied to the sample mean of rewards together with the estimation of upper bounds on expected rewards. Copyright © 2017 Elsevier B.V. All rights reserved.
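
    For comparison, the baseline UCB1 algorithm referred to above can be sketched in a few lines; the two reward distributions are illustrative (the exponential family is the one studied in the record):

    ```python
    import math, random

    def ucb1(arms, horizon):
        """UCB1: pull the arm maximizing mean + sqrt(2 ln t / n) -- an upper
        confidence bound implementing optimism in the face of uncertainty."""
        counts = [0] * len(arms)
        sums = [0.0] * len(arms)
        for t in range(1, horizon + 1):
            if t <= len(arms):          # play each arm once to initialize
                a = t - 1
            else:
                a = max(range(len(arms)),
                        key=lambda i: sums[i] / counts[i]
                        + math.sqrt(2 * math.log(t) / counts[i]))
            r = arms[a]()               # draw a reward from the chosen arm
            counts[a] += 1
            sums[a] += r
        return sums, counts

    # Two exponentially distributed arms; rate 0.8 has the higher mean reward.
    arms = [lambda: random.expovariate(1.0), lambda: random.expovariate(0.8)]
    print(ucb1(arms, 10000)[1])   # pull counts concentrate on the better arm
    ```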

  13. Accelerating Sequential Gaussian Simulation with a constant path

    Science.gov (United States)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
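
    The computational saving comes from the fact that, with a constant path and hence constant neighbourhood configurations, each kriging system is solved once per node instead of once per node per realization. A schematic 1D sketch of the idea — the exponential covariance and the two-nearest-neighbour search are deliberate simplifications, not the paper's implementation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_real = 200, 50
    cov = lambda h: np.exp(-np.abs(h) / 10.0)   # illustrative exponential covariance

    path = rng.permutation(n)                   # ONE path shared by all realizations
    weights, stds, neighbours = [], [], []

    # Pass 1: solve each simple-kriging system once and store the weights.
    for k, node in enumerate(path):
        nb = sorted(path[:k], key=lambda j: abs(j - node))[:2]  # nearest simulated nodes
        if nb:
            K = cov(np.subtract.outer(nb, nb))
            rhs = cov(np.array(nb) - node)
            w = np.linalg.solve(K, rhs)
            var = 1.0 - w @ rhs
        else:
            w, var = np.array([]), 1.0
        neighbours.append(nb)
        weights.append(w)
        stds.append(np.sqrt(max(var, 0.0)))

    # Pass 2: re-use the stored weights for every realization.
    fields = np.empty((n_real, n))
    for r in range(n_real):
        for k, node in enumerate(path):
            mean = weights[k] @ fields[r, neighbours[k]] if len(neighbours[k]) else 0.0
            fields[r, node] = mean + stds[k] * rng.standard_normal()
    ```

    Pass 1 is the expensive part; amortizing it over all realizations is what frees the budget for the larger neighbourhoods recommended in the study.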

  14. The Effect of Uncertainties on the Operating Temperature of U-Mo/Al Dispersion Fuel

    Energy Technology Data Exchange (ETDEWEB)

    Sweidana, Faris B.; Mistarihia, Qusai M.; Ryu Ho Jin [KAIST, Daejeon (Korea, Republic of); Yim, Jeong Sik [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In this study, uncertainty and combined-uncertainty analyses were carried out to evaluate the parameters affecting the operating temperature of U-Mo/Al fuel. The uncertainties considered relate to the thermal conductivity of the fuel meat (comprising the effects of thermal diffusivity, density, and specific heat capacity), the interaction layer (IL) that forms between the dispersed fuel and the matrix, the fuel plate dimensions, the heat flux, the heat transfer coefficient, and the outer cladding temperature. As the development of low-enriched uranium (LEU) fuels has been pursued for research reactors to replace highly-enriched uranium (HEU) and improve the proliferation resistance of fuels and the fuel cycle, U-Mo particles dispersed in an Al matrix (U-Mo/Al) are a promising fuel for converting research reactors that currently use HEU fuels to LEU-fueled reactors, owing to their high density and good irradiation stability. Several models have been developed to estimate the thermal conductivity of U-Mo fuel, mainly based on best fits of the very few measured data, without providing uncertainty ranges. The purpose of this study is to provide a reasonable estimate of the upper and lower bounds of fuel temperature with burnup through an evaluation of the uncertainties in the thermal conductivity of irradiated U-Mo/Al dispersion fuel. The combined uncertainty study, using the root-sum-of-squares (RSS) method, evaluated the effect of applying the uncertainty values of all the parameters on the operating temperature of U-Mo/Al fuel. The overall influence on the operating temperature is 16.58 °C at the beginning of life, and it increases with burnup to reach 18.74 °C at a fuel meat fission density of 3.50E+21 fissions/cm³. Further studies are needed to evaluate the behavior more accurately by including the uncertainties of other parameters, such as the interaction-layer thermal conductivity.
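
    The RSS combination used above assumes independent contributions and adds them in quadrature. A one-liner in code, with the individual temperature sensitivities below standing in as placeholders for the study's actual values:

    ```python
    import math

    # Temperature-uncertainty contributions [deg C] from individual parameters
    # (illustrative numbers -- the study's actual sensitivities are not reproduced).
    contributions = {
        "thermal diffusivity": 9.0,
        "density": 3.0,
        "specific heat": 4.0,
        "IL thickness": 8.0,
        "heat flux": 7.0,
        "heat transfer coeff.": 5.0,
    }

    # Root-sum-of-squares of independent terms.
    combined = math.sqrt(sum(u**2 for u in contributions.values()))
    print(f"combined uncertainty = {combined:.2f} deg C")
    ```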

  15. Calculation of uncertainties associated to environmental radioactivity measurements and their functions. Practical Procedure; Calculo de la incertidumbre asociada al recuento en medidas de radiactividad ambiental y funciones basadas en ella. Procedimiento practico

    Energy Technology Data Exchange (ETDEWEB)

    Gasco Leonarte, C; Anton Mateos, M. P.

    1995-07-01

    This report summarizes the procedure used to calculate the uncertainties associated with environmental radioactivity measurements, focusing on those obtained by radiochemical separation in which tracers have been added. Uncertainties linked to activity-concentration calculations, isotopic ratios, inventories, sequential leaching data, chronological dating using the C.R.S. model, and duplicate analyses are described in detail. The objective of this article is to serve as a guide for people unfamiliar with this kind of calculation, with clear practical examples. The entry of the formulas and all the data needed for these calculations into Lotus 1-2-3 WTN is outlined as well. (Author) 13 refs.
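
    The core of such a procedure is standard first-order propagation of counting statistics through the activity formula. A minimal sketch — the activity model and all numbers are assumed for illustration, not taken from the report:

    ```python
    import math

    def activity_with_uncertainty(counts, t, eff, u_eff_rel, mass, u_mass_rel):
        """Massic activity A = N / (eff * t * m) with first-order propagation.
        Counting uncertainty is Poisson: u(N) = sqrt(N)."""
        A = counts / (eff * t * mass)
        rel = math.sqrt((math.sqrt(counts) / counts) ** 2   # counting statistics
                        + u_eff_rel ** 2                     # detection efficiency
                        + u_mass_rel ** 2)                   # sample mass
        return A, A * rel

    A, uA = activity_with_uncertainty(counts=2500, t=60000, eff=0.32,
                                      u_eff_rel=0.03, mass=0.010, u_mass_rel=0.005)
    print(f"A = {A:.2f} +/- {uA:.2f} Bq/kg")
    ```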

  16. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    Science.gov (United States)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  17. Results from the Application of Uncertainty Methods in the CSNI Uncertainty Methods Study (UMS)

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Within licensing procedures there is an incentive to replace the conservative requirements for code application by a 'best estimate' concept supplemented by an uncertainty analysis to account for the predictive uncertainties of code results. Methods have been developed to quantify these uncertainties. The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes. Most of the methods identify and combine input uncertainties. The major differences between the predictions of the methods came from the choice of uncertain parameters and the quantification of the input uncertainties, i.e., the width of the uncertainty ranges. Suitable experimental and analytical information therefore has to be selected to specify these uncertainty ranges or distributions. After the closure of the UMS and after the report was issued, comparison calculations of experiment LSTF-SB-CL-18 were performed by the University of Pisa using different versions of the RELAP5 code. It turned out that the version used by two of the participants calculated a 170 K higher peak clad temperature than other versions using the same input deck. This may contribute to the differences in the upper limits of the uncertainty ranges.

  18. Information Seeking in Uncertainty Management Theory: Exposure to Information About Medical Uncertainty and Information-Processing Orientation as Predictors of Uncertainty Management Success.

    Science.gov (United States)

    Rains, Stephen A; Tukachinsky, Riva

    2015-01-01

    Uncertainty management theory outlines the processes through which individuals cope with health-related uncertainty. Information seeking has been frequently documented as an important uncertainty management strategy. The reported study investigates exposure to specific types of medical information during a search, and one's information-processing orientation as predictors of successful uncertainty management (i.e., a reduction in the discrepancy between the level of uncertainty one feels and the level one desires). A lab study was conducted in which participants were primed to feel more or less certain about skin cancer and then were allowed to search the World Wide Web for skin cancer information. Participants' search behavior was recorded and content analyzed. The results indicate that exposure to two health communication constructs that pervade medical forms of uncertainty (i.e., severity and susceptibility) and information-processing orientation predicted uncertainty management success.

  19. Conditional uncertainty principle

    Science.gov (United States)

    Gour, Gilad; Grudka, Andrzej; Horodecki, Michał; Kłobus, Waldemar; Łodyga, Justyna; Narasimhachar, Varun

    2018-04-01

    We develop a general operational framework that formalizes the concept of conditional uncertainty in a measure-independent fashion. Our formalism is built upon a mathematical relation which we call conditional majorization. We define conditional majorization and, for the case of classical memory, we provide its thorough characterization in terms of monotones, i.e., functions that preserve the partial order under conditional majorization. We demonstrate the application of this framework by deriving two types of memory-assisted uncertainty relations, (1) a monotone-based conditional uncertainty relation and (2) a universal measure-independent conditional uncertainty relation, both of which set a lower bound on the minimal uncertainty that Bob has about Alice's pair of incompatible measurements, conditioned on arbitrary measurement that Bob makes on his own system. We next compare the obtained relations with their existing entropic counterparts and find that they are at least independent.
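
    For context, the best-known entropic counterpart of such memory-assisted relations — the relation of Berta et al., cited here for comparison rather than taken from the record above — bounds Bob's conditional entropies about two incompatible measurements X and Z performed on Alice's system A:

    ```latex
    H(X|B) + H(Z|B) \;\ge\; \log_2 \frac{1}{c} + H(A|B),
    \qquad c = \max_{x,z} \bigl|\langle \psi_x | \phi_z \rangle\bigr|^2 ,
    ```

    where |ψ_x⟩ and |φ_z⟩ are the eigenvectors of the two observables and c measures their overlap (incompatibility).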

  20. Significant uncertainty in global scale hydrological modeling from precipitation data errors

    Science.gov (United States)

    Sperna Weiland, Frederiek C.; Vrugt, Jasper A.; van Beek, Rens (L.) P. H.; Weerts, Albrecht H.; Bierkens, Marc F. P.

    2015-10-01

    In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we focus on large-scale hydrologic modeling and analyze the effect of parameter and rainfall data uncertainty on simulated discharge dynamics with the global hydrologic model PCR-GLOBWB. We use three rainfall data products: the CFSR reanalysis, the ERA-Interim reanalysis, and a combined ERA-40 reanalysis and CRU dataset. Parameter uncertainty is derived from Latin Hypercube Sampling (LHS) using monthly discharge data from five of the largest river systems in the world. Our results demonstrate that the default parameterization of PCR-GLOBWB, derived from global datasets, can be improved by calibrating the model against monthly discharge observations. Yet, it is difficult to find a single parameterization of PCR-GLOBWB that works well for all five river basins considered herein and shows consistent performance during both the calibration and evaluation periods. Still, there may be possibilities for regionalization based on catchment similarities. Our simulations illustrate that parameter uncertainty constitutes only a minor part of predictive uncertainty. Thus, the apparent dichotomy between simulations of global-scale hydrologic behavior and actual data cannot be resolved by simply increasing the model complexity of PCR-GLOBWB and resolving sub-grid processes. Instead, it would be more productive to improve the characterization of global rainfall amounts at spatial resolutions of 0.5° and smaller.
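
    Latin Hypercube Sampling, as used above, stratifies each parameter range into N equiprobable bins and permutes the bins independently per dimension so that every marginal is evenly covered. A minimal sketch (the two parameter ranges are hypothetical, not PCR-GLOBWB's):

    ```python
    import numpy as np

    def latin_hypercube(n_samples, bounds, rng=np.random.default_rng(1)):
        """One sample per equiprobable stratum in each dimension, with the
        strata randomly permuted so the sample fills the space evenly."""
        d = len(bounds)
        strata = np.tile(np.arange(n_samples), (d, 1))
        u = (rng.permuted(strata, axis=1).T
             + rng.random((n_samples, d))) / n_samples   # stratified uniforms in [0,1)
        lo = np.array([b[0] for b in bounds])
        hi = np.array([b[1] for b in bounds])
        return lo + u * (hi - lo)

    # Hypothetical ranges for two model parameters.
    samples = latin_hypercube(100, bounds=[(0.1, 2.0), (10.0, 500.0)])
    print(samples.shape)   # (100, 2)
    ```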

  1. Accuracy Improvement of Boron Meter Adopting New Fitting Function and Multi-Detector

    Directory of Open Access Journals (Sweden)

    Chidong Kong

    2016-12-01

    Full Text Available This paper introduces a boron meter with improved accuracy compared with other commercially available boron meters. Its design includes a new fitting function and a multi-detector. In pressurized water reactors (PWRs) in Korea, many boron meters have been used to continuously monitor the boron concentration in the reactor coolant. However, it is difficult to use the boron meters in practice because the measurement uncertainty is high. For this reason, there has been a strong demand for improvement in their accuracy. In this work, a boron meter evaluation model was developed, and two approaches were considered to improve the boron meter accuracy: the first uses a new fitting function and the second uses a multi-detector. With the new fitting function, the boron concentration error decreased from 3.30 ppm to 0.73 ppm. For the multi-detector analysis, the count signals were contaminated with noise, as in field measurement data, and the analyses were repeated 1,000 times to obtain the averages and standard deviations of the boron concentration errors. Finally, using the new fitting formulation and the multi-detector together, the average error decreased from 5.95 ppm to 1.83 ppm and its standard deviation decreased from 0.64 ppm to 0.26 ppm. This result represents a great improvement in boron meter accuracy.

  2. Accuracy improvement of boron meter adopting new fitting function and multi-detector

    Energy Technology Data Exchange (ETDEWEB)

    Kong, Chidong; Lee, Hyun Suk; Tak, Tae Woo; Lee, Deok Jung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of); KIm, Si Hwan; Lyou, Seok Jean [Users Incorporated Company, Hansin S-MECA, Daejeon (Korea, Republic of)

    2016-12-15

    This paper introduces a boron meter with improved accuracy compared with other commercially available boron meters. Its design includes a new fitting function and a multi-detector. In pressurized water reactors (PWRs) in Korea, many boron meters have been used to continuously monitor the boron concentration in the reactor coolant. However, it is difficult to use the boron meters in practice because the measurement uncertainty is high. For this reason, there has been a strong demand for improvement in their accuracy. In this work, a boron meter evaluation model was developed, and two approaches were considered to improve the boron meter accuracy: the first uses a new fitting function and the second uses a multi-detector. With the new fitting function, the boron concentration error decreased from 3.30 ppm to 0.73 ppm. For the multi-detector analysis, the count signals were contaminated with noise, as in field measurement data, and the analyses were repeated 1,000 times to obtain the averages and standard deviations of the boron concentration errors. Finally, using the new fitting formulation and the multi-detector together, the average error decreased from 5.95 ppm to 1.83 ppm and its standard deviation decreased from 0.64 ppm to 0.26 ppm. This result represents a great improvement in boron meter accuracy.
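
    A toy version of the reported noise study: simulate a calibration curve, contaminate the counts with Poisson noise, fit, invert, and repeat 1,000 times to get the average and standard deviation of the concentration error. The exponential detector response and all constants are assumptions for illustration; the paper's actual fitting function is not reproduced here:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(5)

    def response(c, r0, k):
        """Assumed response: count rate falls with boron concentration,
        since boron is a strong neutron absorber (illustrative form)."""
        return r0 * np.exp(-k * c)

    true_c = 800.0                          # true concentration [ppm]
    calib_c = np.linspace(0, 2000, 21)      # calibration points [ppm]

    errors = []
    for _ in range(1000):                   # repeat the noisy analysis
        calib_counts = rng.poisson(response(calib_c, 1e5, 1e-3))
        p, _ = curve_fit(response, calib_c, calib_counts, p0=(1e5, 1e-3))
        meas = rng.poisson(response(true_c, 1e5, 1e-3))
        c_est = -np.log(meas / p[0]) / p[1] # invert the fitted response
        errors.append(c_est - true_c)

    print(np.mean(errors), np.std(errors))  # average and std of the errors
    ```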

  3. Model uncertainty and multimodel inference in reliability estimation within a longitudinal framework.

    Science.gov (United States)

    Alonso, Ariel; Laenen, Annouschka

    2013-05-01

    Laenen, Alonso, and Molenberghs (2007) and Laenen, Alonso, Molenberghs, and Vangeneugden (2009) proposed a method to assess the reliability of rating scales in a longitudinal context. The methodology is based on hierarchical linear models, and reliability coefficients are derived from the corresponding covariance matrices. However, finding a good parsimonious model to describe complex longitudinal data is a challenging task. Frequently, several models fit the data equally well, raising the problem of model selection uncertainty. When model uncertainty is high one may resort to model averaging, where inferences are based not on one but on an entire set of models. We explored the use of different model building strategies, including model averaging, in reliability estimation. We found that the approach introduced by Laenen et al. (2007, 2009) combined with some of these strategies may yield meaningful results in the presence of high model selection uncertainty and when all models are misspecified, in so far as some of them manage to capture the most salient features of the data. Nonetheless, when all models omit prominent regularities in the data, misleading results may be obtained. The main ideas are further illustrated on a case study in which the reliability of the Hamilton Anxiety Rating Scale is estimated. Importantly, the ambit of model selection uncertainty and model averaging transcends the specific setting studied in the paper and may be of interest in other areas of psychometrics. © 2012 The British Psychological Society.

  4. Analytical probabilistic proton dose calculation and range uncertainties

    Science.gov (United States)

    Bangert, M.; Hennig, P.; Oelfke, U.

    2014-03-01

    We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫dz p(z)d(z) and ∫dz p(z)d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μ_k, widths δ_k, and weights ω_k of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high-accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two-dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
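
    The analytic tractability rests on the Gaussian-Gaussian integral: if d(z) = Σ_k ω_k N(z; μ_k, δ_k) and p(z) = N(z; m, s), then E[d] = Σ_k ω_k N(m; μ_k, sqrt(δ_k² + s²)) in closed form. A sketch with made-up component parameters (the paper fits ten components per proton range; three suffice to show the mechanics):

    ```python
    import numpy as np

    def normal_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    def expected_dose(m, s, weights, mus, deltas):
        """E[d(Z)] for Z ~ N(m, s) and d(z) a weighted sum of Gaussians:
        each convolution integral collapses to a single Gaussian evaluation."""
        sig = np.sqrt(np.asarray(deltas) ** 2 + s ** 2)
        return np.sum(np.asarray(weights) * normal_pdf(m, np.asarray(mus), sig))

    # Toy 3-component depth-dose parameterization (weights, means, widths in mm).
    w, mu, dl = [5.0, 3.0, 8.0], [50.0, 80.0, 100.0], [20.0, 10.0, 3.0]
    print(expected_dose(m=100.0, s=3.0, weights=w, mus=mu, deltas=dl))
    ```

    The same trick applied to d(z)² gives the second moment, hence the standard deviation, without any numerical integration.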

  5. Evaluation of the uncertainty in an EBT3 film dosimetry system utilizing net optical density.

    Science.gov (United States)

    Marroquin, Elsa Y León; Herrera González, José A; Camacho López, Miguel A; Barajas, José E Villarreal; García-Garduño, Olivia A

    2016-09-08

    Radiochromic film has become an important tool to verify dose distributions for intensity-modulated radiotherapy (IMRT) and quality assurance (QA) procedures. A new radiochromic film model, EBT3, has recently become available; the composition and thickness of its sensitive layer are the same as those of the previous EBT2 film, but a matte polyester layer was added to prevent the formation of Newton's rings, and the symmetrical design of EBT3 allows the user to eliminate side-orientation dependence. This film and the flatbed scanner Epson Perfection V750 form a dosimetry system whose intrinsic characteristics were studied in this work. In addition, the uncertainties associated with these intrinsic characteristics and the total uncertainty of the dosimetry system were determined. The analysis of the response of the radiochromic film (net optical density) and the fitting of the experimental data to a potential function yielded uncertainties of 2.6%, 4.3%, and 4.1% for the red, green, and blue channels, respectively. The dosimetry system presents an uncertainty in resolving dose of 1.8% for doses between 0.8 Gy and 6 Gy in the red channel. Films irradiated between 0 and 120 Gy showed differences in response when scanned in portrait or landscape mode; less uncertainty was found when using portrait mode. The response of the film depended on the position on the scanner bed, contributing an uncertainty of 2% for the red, 3% for the green, and 4.5% for the blue channel when the film was placed around the centre of the scanner bed. Furthermore, the uniformity and reproducibility of the radiochromic film and the reproducibility of the scanner response contribute less than 1% to the overall dose uncertainty. Finally, the total dose uncertainty was 3.2%, 4.9%, and 5.2% for the red, green, and blue channels, respectively. These uncertainty values were obtained by minimizing the contribution to the total dose uncertainty
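
    A minimal sketch of the net-optical-density pipeline described above, with synthetic calibration points standing in for scanned film data; the power-law ("potential") calibration form is the one named in the record:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def net_od(pv_before, pv_after):
        """Net optical density from scanner pixel values of the film
        before and after irradiation."""
        return np.log10(pv_before / pv_after)

    def potential(nod, a, b):
        """Power-law calibration function: dose = a * netOD**b."""
        return a * nod ** b

    # Synthetic red-channel calibration points (dose in Gy, netOD dimensionless).
    dose = np.array([0.5, 1.0, 2.0, 4.0, 6.0])
    nod = np.array([0.08, 0.14, 0.24, 0.40, 0.52])

    params, cov = curve_fit(potential, nod, dose, p0=(10.0, 1.2))
    perr = np.sqrt(np.diag(cov))   # 1-sigma fit-parameter uncertainties
    print(params, perr)
    ```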

  6. Volume measurement system for plutonium nitrate solution and its uncertainty to be used for nuclear materials accountancy proved by demonstration over fifteen years

    International Nuclear Information System (INIS)

    Hosoma, Takashi

    2010-10-01

    An accurate volume measurement system for plutonium nitrate solution stored in an accountability tank with dip-tubes has been developed and demonstrated over fifteen years at the Plutonium Conversion Development Facility of the Japan Atomic Energy Agency. Calibrations during the demonstration proved that the measurement uncertainty achieved and maintained in practice was less than 0.1% (systematic character) and 0.15% (random) as one sigma, which is half of the current internationally accepted target uncertainty. It was also proved that the discrepancy between the measured density and the analytically determined density was less than 0.002 g·cm⁻³ as one sigma. These uncertainties include the effects of long-term use of the accountability tank, whose cumulative plutonium throughput is six tons. The system consists of high-precision differential pressure transducers and a dead-weight tester, sequentially controlled valves for periodic zero adjustment, dampers to reduce pressure oscillation, and a procedure to correct measurement biases. The sequence was also useful for carrying out maintenance safely without contamination. The longevity of the transducers exceeded 15 years. Principles and essentials of determining solution volume and plutonium weight, measurement biases and corrections, the accurate pressure measurement system, maintenance and diagnostics, operational experience, and the evaluation of measurement uncertainty are described. (author)

  7. Uncertainty and Cognitive Control

    Directory of Open Access Journals (Sweden)

    Faisal eMushtaq

    2011-10-01

    Full Text Available A growing trend of neuroimaging, behavioural and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we review evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; and (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.

  8. Fitness Club

    CERN Multimedia

    Fitness Club

    2012-01-01

    Open to All: http://cern.ch/club-fitness  fitness.club@cern.ch Boxing Your supervisor makes your life too tough! You really need to release the pressure you've been building up! Come and join the fit-boxers. We train three times a week in Bd 216, with classes for beginners and advanced levels available. Visit our website cern.ch/Boxing General Fitness Escape from your desk with our general fitness classes, to strengthen your heart, muscles and bones, improve your stamina, balance and flexibility, achieve new goals, be more productive and experience a sense of well-being, every Monday, Wednesday and Friday lunchtime, Tuesday mornings before work and Thursday evenings after work – join us for one of our monthly fitness workshops. Nordic Walking Enjoy the great outdoors; Nordic Walking is a great way to get your whole body moving and to significantly improve the condition of your muscles, heart and lungs. It will boost your energy levels no end. Pilates A body-conditioning technique de...

  9. Sequential Extraction Versus Comprehensive Characterization of Heavy Metal Species in Brownfield Soils

    Energy Technology Data Exchange (ETDEWEB)

    Dahlin, Cheryl L.; Williamson, Connie A.; Collins, W. Keith; Dahlin, David C.

    2002-06-01

    The applicability of sequential extraction as a means to determine species of heavy metals was examined by a study on soil samples from two Superfund sites: the National Lead Company site in Pedricktown, NJ, and the Roebling Steel, Inc., site in Florence, NJ. Data from a standard sequential extraction procedure were compared to those from a comprehensive study that combined optical- and scanning-electron microscopy, X-ray diffraction, and chemical analyses. The study shows that larger particles of contaminants, encapsulated contaminants, and/or man-made materials such as slags, coke, metals, and plastics are subject to encasement, non-selectivity, and redistribution in the sequential extraction process. The results indicate that standard sequential extraction procedures that were developed for characterizing species of contaminants in river sediments may be unsuitable for stand-alone determinative evaluations of contaminant species in industrial-site materials. However, if employed as part of a comprehensive, site-specific characterization study, sequential extraction could be a very useful tool.

  10. Aleatoric and epistemic uncertainties in sampling based nuclear data uncertainty and sensitivity analyses

    International Nuclear Information System (INIS)

    Zwermann, W.; Krzykacz-Hausmann, B.; Gallner, L.; Klein, M.; Pautz, A.; Velkov, K.

    2012-01-01

    Sampling based uncertainty and sensitivity analyses due to epistemic input uncertainties, i.e. to an incomplete knowledge of uncertain input parameters, can be performed with arbitrary application programs to solve the physical problem under consideration. For the description of steady-state particle transport, direct simulations of the microscopic processes with Monte Carlo codes are often used. This introduces an additional source of uncertainty, the aleatoric sampling uncertainty, which is due to the randomness of the simulation process performed by sampling, and which adds to the total combined output sampling uncertainty. So far, this aleatoric part of the uncertainty has been minimized by running a sufficiently large number of Monte Carlo histories for each sample calculation, thus making its impact negligible compared to the impact from sampling the epistemic uncertainties. Obviously, this process may incur high computational costs. The present paper shows that in many applications reliable epistemic uncertainty results can also be obtained with substantially lower computational effort by performing and analyzing two appropriately generated series of samples with a much smaller number of Monte Carlo histories each. The method is applied along with the nuclear data uncertainty and sensitivity code package XSUSA in combination with the Monte Carlo transport code KENO-Va to various critical assemblies and a full-scale reactor calculation. It is shown that the proposed method yields output uncertainties and sensitivities equivalent to the traditional approach, with a large reduction in computing time, by factors of the order of 100. (authors)
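
    One way to realize the two-series idea (a sketch of the principle, not necessarily the paper's exact estimator): run the same epistemic samples twice with independent Monte Carlo seeds. The covariance between paired outputs then estimates the epistemic variance alone, because the aleatoric noise of the two series is independent and cancels. With a toy stand-in for the transport code:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_samples = 200

    def mc_transport(theta, n_histories):
        """Toy stand-in for a Monte Carlo transport run: the true response is
        sin(theta); aleatoric noise shrinks as 1/sqrt(n_histories)."""
        return np.sin(theta) + rng.standard_normal() / np.sqrt(n_histories)

    theta = rng.uniform(0, 1, n_samples)                  # epistemic input samples
    y1 = np.array([mc_transport(t, 100) for t in theta])  # series 1, few histories
    y2 = np.array([mc_transport(t, 100) for t in theta])  # series 2, same thetas

    var_total = 0.5 * (y1.var(ddof=1) + y2.var(ddof=1))
    var_epistemic = np.cov(y1, y2, ddof=1)[0, 1]          # aleatoric noise cancels
    var_aleatoric = var_total - var_epistemic
    print(var_epistemic, var_aleatoric)
    ```

    Since Cov(y1, y2) = Var(f(θ)) when the two noise terms are independent, the epistemic variance is recovered even though each individual run is noisy.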

  11. Uncertainty, probability and information-gaps

    International Nuclear Information System (INIS)

    Ben-Haim, Yakov

    2004-01-01

    This paper discusses two main ideas. First, we focus on info-gap uncertainty, as distinct from probability. Info-gap theory is especially suited for modelling and managing uncertainty in system models: we invest all our knowledge in formulating the best possible model; this leaves the modeller with very faulty and fragmentary information about the variation of reality around that optimal model. Second, we examine the interdependence between uncertainty modelling and decision-making. Good uncertainty modelling requires contact with the end-use, namely, with the decision-making application of the uncertainty model. The most important avenue of uncertainty-propagation is from initial data- and model-uncertainties into uncertainty in the decision-domain. Two questions arise. Is the decision robust to the initial uncertainties? Is the decision prone to opportune windfall success? We apply info-gap robustness and opportunity functions to the analysis of representation and propagation of uncertainty in several of the Sandia Challenge Problems

  12. Fundamental uncertainty limit of optical flow velocimetry according to Heisenberg's uncertainty principle.

    Science.gov (United States)

    Fischer, Andreas

    2016-11-01

    Optical flow velocity measurements are important for understanding the complex behavior of flows. Although a huge variety of methods exist, they are either based on a Doppler or a time-of-flight measurement principle. Doppler velocimetry evaluates the velocity-dependent frequency shift of light scattered at a moving particle, whereas time-of-flight velocimetry evaluates the traveled distance of a scattering particle per time interval. Regarding the aim of achieving a minimal measurement uncertainty, it is unclear whether one principle allows lower uncertainties to be achieved or whether both principles can achieve equal uncertainties. For this reason, the natural, fundamental uncertainty limit according to Heisenberg's uncertainty principle is derived for the Doppler and time-of-flight measurement principles, respectively. The obtained limits of the velocity uncertainty are qualitatively identical, showing, e.g., a direct proportionality to the absolute value of the velocity to the power of 3/2 and an inverse proportionality to the square root of the scattered light power. Hence, both measurement principles have identical potentials regarding the fundamental uncertainty limit due to the quantum mechanical behavior of photons. This fundamental limit can be attained (at least asymptotically) in reality with either Doppler or time-of-flight methods, because the respective Cramér-Rao bounds for dominating photon shot noise, which is modeled as white Poissonian noise, coincide with the conclusions from Heisenberg's uncertainty principle.

  13. Imitation of the sequential structure of actions by chimpanzees (Pan troglodytes).

    Science.gov (United States)

    Whiten, A

    1998-09-01

    Imitation was studied experimentally by allowing chimpanzees (Pan troglodytes) to observe alternative patterns of actions for opening a specially designed "artificial fruit." Like problematic foods primates deal with naturally, with the test fruit several defenses had to be removed to gain access to an edible core, but the sequential order and method of defense removal could be systematically varied. Each subject repeatedly observed 1 of 2 alternative techniques for removing each defense and 1 of 2 alternative sequential patterns of defense removal. Imitation of sequential organization emerged after repeated cycles of demonstration and attempts at opening the fruit. Imitation in chimpanzees may thus have some power to produce cultural convergence, counter to the supposition that individual learning processes corrupt copied actions. Imitation of sequential organization was accompanied by imitation of some aspects of the techniques that made up the sequence.

  14. FITS: a function-fitting program

    Energy Technology Data Exchange (ETDEWEB)

    Balestrini, S.J.; Chezem, C.G.

    1982-01-01

    FITS is an iterating computer program that adjusts the parameters of a function to fit a set of data points according to the least squares criterion and then lists and plots the results. The function can be programmed or chosen from a library that is provided. The library can be expanded to include up to 99 functions. A general plotting routine, contained in the program but useful in its own right, is described separately in an Appendix.
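
    FITS itself is not reproduced here; the modern equivalent of its core loop is a nonlinear least-squares fit against a user-supplied model function, e.g. with SciPy (model and data below are illustrative):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def decay(t, a, tau, c):
        """Library-style model: exponential decay plus constant background."""
        return a * np.exp(-t / tau) + c

    # Synthetic data points with Gaussian noise.
    t = np.linspace(0, 10, 40)
    y = decay(t, 5.0, 2.0, 1.0) + 0.1 * np.random.default_rng(3).standard_normal(40)

    # Iterate parameter adjustments under the least-squares criterion.
    popt, pcov = curve_fit(decay, t, y, p0=(1.0, 1.0, 0.0))
    print(popt)                      # fitted parameters
    print(np.sqrt(np.diag(pcov)))    # their 1-sigma uncertainties
    ```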

  15. Organic food consumption in Taiwan: Motives, involvement, and purchase intention under the moderating role of uncertainty.

    Science.gov (United States)

    Teng, Chih-Ching; Lu, Chi-Heng

    2016-10-01

    Despite the progressive development of the organic food sector in Taiwan, little is known about how consumers' consumption motives influence organic food decisions through various degrees of involvement, and whether consumers with various degrees of uncertainty vary in their intention to buy organic foods. The current study aims to examine the effect of consumption motives on behavioral intention related to organic food consumption under the mediating role of involvement as well as the moderating role of uncertainty. Research data were collected from organic food consumers in Taiwan via a questionnaire survey, eventually yielding 457 valid questionnaires for analysis. This study tested the overall model fit and hypotheses through structural equation modeling (SEM). The results show that consumer involvement significantly mediates the effects of health consciousness and ecological motives on organic food purchase intention, but this does not apply to food safety concern. Moreover, the moderating effect of uncertainty is statistically significant, indicating that the relationship between involvement and purchase intention becomes weaker for consumers with a higher degree of uncertainty. Several implications and suggestions are also discussed for organic food providers and marketers. Copyright © 2016. Published by Elsevier Ltd.

  16. Estimation and Uncertainty Analysis of Flammability Properties of Chemicals using Group-Contribution Property Models

    DEFF Research Database (Denmark)

    Frutiger, Jerome; Abildskov, Jens; Sin, Gürkan

    Process safety studies and assessments rely on accurate property data. Flammability data like the lower and upper flammability limit (LFL and UFL) play an important role in quantifying the risk of fire and explosion. If experimental values are not available for the safety analysis due to cost... or time constraints, property prediction models like group contribution (GC) models can estimate flammability data. The estimation needs to be accurate, reliable and as little time-consuming as possible. However, GC property prediction methods frequently lack rigorous uncertainty analysis. Hence.... In this study, the MG-GC-factors are estimated using a systematic data and model evaluation methodology in the following way: 1) Data. Experimental flammability data is used from the AIChE DIPPR 801 Database. 2) Initialization and sequential parameter estimation. An approximation using linear algebra provides...

  17. Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information.

    Science.gov (United States)

    Price, Malcolm J; Welton, Nicky J; Briggs, Andrew H; Ades, A E

    2011-01-01

    Standard approaches to estimation of Markov models with data from randomized controlled trials tend either to make a judgment about which transition(s) treatments act on, or they assume that treatment has a separate effect on every transition. An alternative is to fit a series of models that assume that treatment acts on specific transitions. Investigators can then choose among alternative models using goodness-of-fit statistics. However, structural uncertainty about any chosen parameterization will remain and this may have implications for the resulting decision and the need for further research. We describe a Bayesian approach to model estimation, and model selection. Structural uncertainty about which parameterization to use is accounted for using model averaging and we developed a formula for calculating the expected value of perfect information (EVPI) in averaged models. Marginal posterior distributions are generated for each of the cost-effectiveness parameters using Markov Chain Monte Carlo simulation in WinBUGS, or Monte-Carlo simulation in Excel (Microsoft Corp., Redmond, WA). We illustrate the approach with an example of treatments for asthma using aggregate-level data from a connected network of four treatments compared in three pair-wise randomized controlled trials. The standard errors of incremental net benefit using structured models is reduced by up to eight- or ninefold compared to the unstructured models, and the expected loss attaching to decision uncertainty by factors of several hundreds. Model averaging had considerable influence on the EVPI. Alternative structural assumptions can alter the treatment decision and have an overwhelming effect on model uncertainty and expected value of information. Structural uncertainty can be accounted for by model averaging, and the EVPI can be calculated for averaged models. Copyright © 2011 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights
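
    The EVPI quantity in the record follows the usual definition, EVPI = E_θ[max_d NB(d, θ)] − max_d E_θ[NB(d, θ)], with parameter draws θ spread across the structural models in proportion to their weights. A minimal Monte Carlo sketch — the model weights and net-benefit distributions below are hypothetical, not the asthma example's values:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 100_000

    # Posterior model weights (hypothetical) and per-model net-benefit samples
    # for two treatment decisions d0, d1.
    weights = np.array([0.7, 0.3])
    model = rng.choice(len(weights), size=n, p=weights)   # model averaging step

    nb = np.empty((n, 2))
    m0, m1 = model == 0, model == 1
    nb[m0] = rng.normal([0.0, 500.0], [800.0, 900.0], (m0.sum(), 2))
    nb[m1] = rng.normal([0.0, -200.0], [800.0, 1200.0], (m1.sum(), 2))

    # EVPI: value of resolving all uncertainty (structural + parameter) at once.
    evpi = nb.max(axis=1).mean() - nb.mean(axis=0).max()
    print(f"EVPI per patient = {evpi:.1f}")
    ```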

  18. Sequential determination of important ecotoxic radionuclides in nuclear waste samples

    International Nuclear Information System (INIS)

    Bilohuscin, J.

    2016-01-01

    In the dissertation thesis we focused on the development and optimization of a method for the sequential determination of the radionuclides 93 Zr, 94 Nb, 99 Tc and 126 Sn, employing the extraction chromatography sorbents TEVA (R) Resin and Anion Exchange Resin, supplied by Eichrom Industries. Prior to validating the sequential separation of these radionuclides from radioactive waste samples, a unique sequential procedure for separating 90 Sr, 239 Pu and 241 Am from urine matrices was tried, using molecular recognition sorbents of the AnaLig (R) series and the extraction chromatography sorbent DGA (R) Resin. In these experiments, four different sorbents were used in sequence, including the PreFilter Resin sorbent, which removes interfering organic materials present in raw urine. After positive results were obtained with this sequential procedure, experiments followed on 126 Sn separation using the TEVA (R) Resin and Anion Exchange Resin sorbents. Radiochemical recoveries obtained from samples of radioactive evaporator concentrates and sludge showed high separation efficiency, while the 126 Sn activities were below the minimum detectable activities (MDA). The activity of 126 Sn was determined after ingrowth of the daughter nuclide 126m Sb on an HPGe gamma detector, with minimal contamination from gamma-interfering radionuclides and decontamination factors (D_f) higher than 1400 for 60 Co and 47000 for 137 Cs. Based on these experiments and the results of the separation procedures, a complete method for the sequential separation of 93 Zr, 94 Nb, 99 Tc and 126 Sn was proposed, including optimization steps similar to those used in previous parts of the dissertation. Application of the sequential separation method with the TEVA (R) Resin and Anion Exchange Resin sorbents to real radioactive waste samples provided satisfactory results and an economical, time-saving, efficient method. (author)

  19. A solution for automatic parallelization of sequential assembly code

    Directory of Open Access Journals (Sweden)

    Kovačević Đorđe

    2013-01-01

    Full Text Available Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for the parallelization of sequential assembly code. The main goal is a parallelizer that reads sequential assembly code and outputs parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembly input file into program objects suitable for further processing, after which static single assignment form is constructed. Based on the data-flow graph, the parallelization algorithm distributes instructions across the cores. Once the sequential code has been parallelized, registers are allocated with a linear allocation algorithm, and the final result is distributed assembly code for each of the cores. We evaluate the speedup on a matrix multiplication example processed by the parallelizer. The result is an almost linear speedup of code execution that increases with the number of cores: the speedup on two cores is 1.99, and on 16 cores it is 13.88.
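
    The core step — partitioning instructions across cores from the data-flow graph — can be sketched as greedy list scheduling on a dependency DAG. A simplified illustration in Python (real assembly parsing, latencies and register allocation omitted):

    ```python
    # Greedy list scheduling of a data-dependency DAG onto k cores:
    # an instruction may issue once all of its producers have finished.

    def schedule(deps, n_cores):
        """deps: {instr: set of instructions it depends on}. Returns
        {instr: (core, start_cycle)} assuming unit latency per instruction."""
        done_at, placement = {}, {}
        core_free = [0] * n_cores
        ready = [i for i, d in deps.items() if not d]
        while ready:
            instr = ready.pop(0)
            earliest = max((done_at[d] for d in deps[instr]), default=0)
            core = min(range(n_cores), key=lambda c: max(core_free[c], earliest))
            start = max(core_free[core], earliest)
            core_free[core] = start + 1
            done_at[instr] = start + 1
            placement[instr] = (core, start)
            for j, d in deps.items():   # release newly ready instructions
                if j not in placement and j not in ready and d <= set(done_at):
                    ready.append(j)
        return placement

    # Two independent multiply-add chains parallelize onto separate cores.
    deps = {"mul1": set(), "mul2": set(), "add1": {"mul1"}, "add2": {"mul2"},
            "sum": {"add1", "add2"}}
    print(schedule(deps, n_cores=2))
    ```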

  20. Fitness Club

    CERN Multimedia

    Fitness Club

    2011-01-01

    The CERN Fitness Club is organising Zumba Classes on the first Wednesday of each month, starting 7 September (19.00 – 20.00). What is Zumba®? It’s an exhilarating, effective, easy-to-follow, Latin-inspired, calorie-burning dance fitness-party™ that’s moving millions of people toward joy and health. Above all it’s great fun and an excellent work out. Price: 22 CHF/person Sign-up via the following form: https://espace.cern.ch/club-fitness/Lists/Zumba%20Subscription/NewForm.aspx For more info: fitness.club@cern.ch

  1. A determination of the fragmentation functions of pions, kaons, and protons with faithful uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Bertone, Valerio; Hartland, Nathan P.; Rojo, Juan [VU University, Department of Physics and Astronomy, Amsterdam (Netherlands); Nikhef Theory Group, Amsterdam (Netherlands); Carrazza, Stefano [CERN, Theoretical Physics Department, Geneva (Switzerland); Nocera, Emanuele R. [University of Oxford, Rudolf Peierls Centre for Theoretical Physics, Oxford (United Kingdom); Collaboration: The NNPDF Collaboration

    2017-08-15

    We present NNFF1.0, a new determination of the fragmentation functions (FFs) of charged pions, charged kaons, and protons/antiprotons from an analysis of single-inclusive hadron production data in electron-positron annihilation. This determination, performed at leading, next-to-leading, and next-to-next-to-leading order in perturbative QCD, is based on the NNPDF methodology, a fitting framework designed to provide a statistically sound representation of FF uncertainties and to minimise any procedural bias. We discuss novel aspects of the methodology used in this analysis, namely an optimised parametrisation of FFs and a more efficient χ² minimisation strategy, and validate the FF fitting procedure by means of closure tests. We then present the NNFF1.0 sets, and discuss their fit quality, their perturbative convergence, and their stability upon variations of the kinematic cuts and the fitted dataset. We find that the systematic inclusion of higher-order QCD corrections significantly improves the description of the data, especially in the small-z region. We compare the NNFF1.0 sets to other recent sets of FFs, finding in general a reasonable agreement, but also important differences. Together with existing sets of unpolarised and polarised parton distribution functions (PDFs), FFs and PDFs are now available from a common fitting framework for the first time. (orig.)

  2. A Variation on Uncertainty Principle and Logarithmic Uncertainty Principle for Continuous Quaternion Wavelet Transforms

    Directory of Open Access Journals (Sweden)

    Mawardi Bahri

    2017-01-01

    Full Text Available The continuous quaternion wavelet transform (CQWT) is a generalization of the classical continuous wavelet transform within the context of quaternion algebra. First, we show that the directional quaternion Fourier transform (QFT) uncertainty principle can be obtained using the component-wise QFT uncertainty principle. Based on this method, the directional QFT uncertainty principle in polar-coordinate representation is easily derived. We derive a variation on the uncertainty principle related to the QFT. We then show that the CQWT of a quaternion function can be written in terms of the QFT, and obtain a variation on the uncertainty principle related to the CQWT. Finally, we apply the extended uncertainty principles and properties of the CQWT to establish logarithmic uncertainty principles related to the generalized transform.

  3. New approach to the adjustment of group cross sections fitting integral measurements - 2

    International Nuclear Information System (INIS)

    Chao, Y.A.

    1980-01-01

    The method developed in the first paper, concerning the fitting of group cross sections to integral measurements, is generalized to cover the case where the source of the extracted negligence discrepancy cannot be identified and the theoretical relation between the integral and differential measurements is itself subject to uncertainty. The question of how, in such a case, to divide the negligence discrepancy between the integral and differential data is resolved. Application to a specific problem with real experimental data is shown as a demonstration of the method. 4 refs

  4. Documentscape: Intertextuality, Sequentiality & Autonomy at Work

    DEFF Research Database (Denmark)

    Christensen, Lars Rune; Bjørn, Pernille

    2014-01-01

    On the basis of an ethnographic field study, this article introduces the concept of documentscape to the analysis of document-centric work practices. The concept of documentscape refers to the entire ensemble of documents in their mutual intertextual interlocking. Providing empirical data from...... a global software development case, we show how hierarchical structures and sequentiality across the interlocked documents are critical to how actors make sense of the work of others and what to do next in a geographically distributed setting. Furthermore, we found that while each document is created...... as part of a quasi-sequential order, this characteristic does not make the document, as a single entity, into a stable object. Instead, we found that the documents were malleable and dynamic while suspended in intertextual structures. Our concept of documentscape points to how the hierarchical structure...

  5. On the relationship between aerosol model uncertainty and radiative forcing uncertainty.

    Science.gov (United States)

    Lee, Lindsay A; Reddington, Carly L; Carslaw, Kenneth S

    2016-05-24

    The largest uncertainty in the historical radiative forcing of climate is caused by the interaction of aerosols with clouds. Historical forcing is not a directly measurable quantity, so reliable assessments depend on the development of global models of aerosols and clouds that are well constrained by observations. However, there has been no systematic assessment of how reduction in the uncertainty of global aerosol models will feed through to the uncertainty in the predicted forcing. We use a global model perturbed parameter ensemble to show that tight observational constraint of aerosol concentrations in the model has a relatively small effect on the aerosol-related uncertainty in the calculated forcing between preindustrial and present-day periods. One factor is the low sensitivity of present-day aerosol to natural emissions that determine the preindustrial aerosol state. However, the major cause of the weak constraint is that the full uncertainty space of the model generates a large number of model variants that are equally acceptable compared to present-day aerosol observations. The narrow range of aerosol concentrations in the observationally constrained model gives the impression of low aerosol model uncertainty. However, these multiple "equifinal" models predict a wide range of forcings. To make progress, we need to develop a much deeper understanding of model uncertainty and ways to use observations to constrain it. Equifinality in the aerosol model means that tuning of a small number of model processes to achieve model-observation agreement could give a misleading impression of model robustness.

  6. Decision-making under great uncertainty

    International Nuclear Information System (INIS)

    Hansson, S.O.

    1992-01-01

    Five types of decision-uncertainty are distinguished: uncertainty of consequences, of values, of demarcation, of reliance, and of co-ordination. Strategies are proposed for each type of uncertainty. The general conclusion is that it is meaningful for decision theory to treat cases with greater uncertainty than the textbook case of 'decision-making under uncertainty'. (au)

  7. The relationships of social support, uncertainty, self-efficacy, and commitment to prenatal psychosocial adaptation.

    Science.gov (United States)

    Hui Choi, W H; Lee, G L; Chan, Celia H Y; Cheung, Ray Y H; Lee, Irene L Y; Chan, Cecilia L W

    2012-12-01

    To report a study of the relations of prenatal psychosocial adaptation, social support, demographic and obstetric characteristics, uncertainty, information-seeking behaviour, motherhood normalization, self-efficacy, and commitment to pregnancy. Prenatal psychosocial assessment is recommended to identify psychosocial risk factors early to prevent psychiatric morbidities of mothers and children. However, knowledge on psychosocial adaptation and its explanatory variables is inconclusive. This study was non-experimental, with a cross-sectional, correlational, prospective design. The study investigated Hong Kong Chinese women during late pregnancy. Convenience sampling methods were used, with 550 women recruited from the low-risk clinics of three public hospitals. Data were collected between January and April 2007. A self-reported questionnaire was used, consisting of a number of measurements derived from an integrated framework of the Life Transition Theory and the Theory of Uncertainty in Illness. Explanatory variables of psychosocial adaptation were identified using a structural equation modelling programme. The four explanatory variables of psychosocial adaptation were social support, uncertainty, self-efficacy, and commitment to pregnancy. In the established model, which had good fit indices, greater psychosocial adaptation was associated with higher social support, higher self-efficacy, higher commitment to pregnancy, and lower uncertainty. The findings give clinicians and midwives guidance on the aspects to focus on when providing psychosocial assessment in routine prenatal screening. Since there are insufficient reliable screening tools to assist that assessment, midwives should receive adequate training, and effective screening instruments have to be identified. The explanatory role of uncertainty found in this study should encourage inquiries into the relationship between uncertainty and psychosocial adaptation in pregnancy. © 2012 Blackwell Publishing Ltd.

  8. A Procedure for the Sequential Determination of Radionuclides in Environmental Samples. Liquid Scintillation Counting and Alpha Spectrometry for 90Sr, 241Am and Pu Radioisotopes

    International Nuclear Information System (INIS)

    2014-01-01

    Since 2004, IAEA activities related to the terrestrial environment have aimed at the development of a set of procedures to determine radionuclides in environmental samples. Reliable, comparable and ‘fit for purpose’ results are an essential requirement for any decision based on analytical measurements. For the analyst, tested and validated analytical procedures are extremely important tools for the production of analytical data. For maximum utility, such procedures should be comprehensive, clearly formulated and readily available for reference to both the analyst and the customer. This publication describes a combined procedure for the sequential determination of 90Sr, 241Am and Pu radioisotopes in environmental samples. The method is based on the chemical separation of strontium, americium and plutonium using ion exchange chromatography, extraction chromatography and precipitation, followed by alpha spectrometric and liquid scintillation counting detection. The method was tested and validated in terms of repeatability and trueness in accordance with International Organization for Standardization (ISO) guidelines using reference materials and proficiency test samples. Reproducibility tests were performed later at the IAEA Terrestrial Environment Laboratory. The calculations of the massic activity, uncertainty budget, decision threshold and detection limit are also described in this publication. The procedure is introduced for the determination of 90Sr, 241Am and Pu radioisotopes in environmental samples such as soil, sediment, air filter and vegetation samples. It is expected to be of general use to a wide range of laboratories, including the Analytical Laboratories for the Measurement of Environmental Radioactivity (ALMERA) network, for routine environmental monitoring purposes.

  9. Spatial uncertainty of a geoid undulation model in Guayaquil, Ecuador

    Directory of Open Access Journals (Sweden)

    Chicaiza E.G.

    2017-06-01

    Full Text Available Geostatistics is a discipline that deals with the statistical analysis of regionalized variables. In this case study, geostatistics is used to estimate geoid undulation in the rural area of Guayaquil town in Ecuador. The geostatistical approach was chosen because it provides the estimation error of the prediction map. Open source statistical software R, mainly the geoR, gstat and RGeostats libraries, was used. Exploratory data analysis (EDA), trend and structural analysis were carried out. An automatic model fitting by Iterative Least Squares and other fitting procedures were employed to fit the variogram. Finally, Kriging using the gravity anomaly of Bouguer as external drift and Universal Kriging were used to obtain a detailed map of geoid undulation. The estimation uncertainty lay in the interval [-0.5; +0.5] m for errors, with a maximum estimation standard deviation of 2 mm in relation to the interpolation method applied. The error distribution of the geoid undulation map obtained in this study provides a better result than Earth gravitational models publicly available for the study area, according to a comparison with independent validation points. The main goal of this paper is to confirm the feasibility of combining geoid undulations from Global Navigation Satellite Systems, levelling field measurements and geostatistical techniques for use in high-accuracy engineering projects.

  10. Spatial uncertainty of a geoid undulation model in Guayaquil, Ecuador

    Science.gov (United States)

    Chicaiza, E. G.; Leiva, C. A.; Arranz, J. J.; Buenaño, X. E.

    2017-06-01

    Geostatistics is a discipline that deals with the statistical analysis of regionalized variables. In this case study, geostatistics is used to estimate geoid undulation in the rural area of Guayaquil town in Ecuador. The geostatistical approach was chosen because it provides the estimation error of the prediction map. Open source statistical software R, mainly the geoR, gstat and RGeostats libraries, was used. Exploratory data analysis (EDA), trend and structural analysis were carried out. An automatic model fitting by Iterative Least Squares and other fitting procedures were employed to fit the variogram. Finally, Kriging using the gravity anomaly of Bouguer as external drift and Universal Kriging were used to obtain a detailed map of geoid undulation. The estimation uncertainty lay in the interval [-0.5; +0.5] m for errors, with a maximum estimation standard deviation of 2 mm in relation to the interpolation method applied. The error distribution of the geoid undulation map obtained in this study provides a better result than Earth gravitational models publicly available for the study area, according to a comparison with independent validation points. The main goal of this paper is to confirm the feasibility of combining geoid undulations from Global Navigation Satellite Systems, levelling field measurements and geostatistical techniques for use in high-accuracy engineering projects.

  11. DS02 uncertainty analysis

    International Nuclear Information System (INIS)

    Kaul, Dean C.; Egbert, Stephen D.; Woolson, William A.

    2005-01-01

    In order to avoid the pitfalls that so discredited DS86 and its uncertainty estimates, and to provide DS02 uncertainties that are both defensible and credible, this report not only presents the ensemble uncertainties assembled from uncertainties in individual computational elements and radiation dose components but also describes how these relate to comparisons between observed and computed quantities at critical intervals in the computational process. These comparisons include those between observed and calculated radiation free-field components, where observations include thermal- and fast-neutron activation and gamma-ray thermoluminescence, which are relevant to the estimated systematic uncertainty for DS02. The comparisons also include those between calculated and observed survivor shielding, where the observations consist of biodosimetric measurements for individual survivors, which are relevant to the estimated random uncertainty for DS02. (J.P.N.)

  12. Sequential series for nuclear reactions

    International Nuclear Information System (INIS)

    Izumo, Ko

    1975-01-01

    A new time-dependent treatment of nuclear reactions is given, in which the wave function of the compound nucleus is expanded in a sequential series of the reaction processes. The wave functions of the sequential series form another complete set of the compound nucleus in the limit Δt→0. It is pointed out that the wave function is characterized by the quantities: the number of degrees of freedom of motion n, the period of the motion (Poincaré cycle) t_n, the delay time t_{nμ} and the relaxation time τ_n to the equilibrium of the compound nucleus, instead of the usual quantum number λ, the energy eigenvalue E_λ and the total width Γ_λ of resonance levels, respectively. The transition matrix elements and the yields of nuclear reactions also become functions of time, given by the Fourier transform of the usual ones. The Poincaré cycles of compound nuclei are compared with the observed correlations among resonance levels, which are about 10⁻¹⁷–10⁻¹⁶ s for medium and heavy nuclei and about 10⁻²⁰ s for the intermediate resonances. (auth.)

  13. Application of a Novel Dose-Uncertainty Model for Dose-Uncertainty Analysis in Prostate Intensity-Modulated Radiotherapy

    International Nuclear Information System (INIS)

    Jin Hosang; Palta, Jatinder R.; Kim, You-Hyun; Kim, Siyong

    2010-01-01

    Purpose: To analyze dose uncertainty using a previously published dose-uncertainty model, and to assess potential dosimetric risks in prostate intensity-modulated radiotherapy (IMRT). Methods and Materials: The dose-uncertainty model provides a three-dimensional (3D) dose-uncertainty distribution at a given confidence level (CL). For 8 retrospectively selected patients, dose-uncertainty maps were constructed using the dose-uncertainty model at the 95% CL. In addition to uncertainties inherent to the radiation treatment planning system, four scenarios of spatial errors were considered: machine only (S1), S1 + intrafraction, S1 + interfraction, and S1 + both intrafraction and interfraction errors. To evaluate the potential risks of the IMRT plans, three dose-uncertainty-based plan evaluation tools were introduced: the confidence-weighted dose-volume histogram, the confidence-weighted dose distribution, and the dose-uncertainty-volume histogram. Results: Dose uncertainty caused by interfraction setup error was more significant than that caused by intrafraction motion error. The maximum dose uncertainty (95% confidence) of the clinical target volume (CTV) was smaller than 5% of the prescribed dose in all but two cases (13.9% and 10.2%). The dose uncertainty for 95% of the CTV volume ranged from 1.3% to 2.9% of the prescribed dose. Conclusions: The dose uncertainty in prostate IMRT could be evaluated using the dose-uncertainty model. Prostate IMRT plans satisfying the same plan objectives could generate significantly different dose uncertainties because of a complex interplay of many uncertainty sources. The uncertainty-based plan evaluation contributes to generating reliable and error-resistant treatment plans.

  14. Incorporating Wind Power Forecast Uncertainties Into Stochastic Unit Commitment Using Neural Network-Based Prediction Intervals.

    Science.gov (United States)

    Quan, Hao; Srinivasan, Dipti; Khosravi, Abbas

    2015-09-01

    Penetration of renewable energy resources, such as wind and solar power, into power systems significantly increases the uncertainties in system operation, stability, and reliability in smart grids. In this paper, nonparametric neural network-based prediction intervals (PIs) are implemented for forecast uncertainty quantification. Instead of a single-level PI, wind power forecast uncertainties are represented by a list of PIs, which are then decomposed into quantiles of wind power. A new scenario generation method is proposed to handle wind power forecast uncertainties. For each hour, an empirical cumulative distribution function (ECDF) is fitted to these quantile points, and the Monte Carlo simulation method is used to generate scenarios from the ECDF. The wind power scenarios are then incorporated into a stochastic security-constrained unit commitment (SCUC) model, and a heuristic genetic algorithm is utilized to solve the stochastic SCUC problem. Five deterministic and four stochastic case studies incorporating interval forecasts of wind power are implemented, and their results are presented and discussed together. Generation costs and the scheduled and real-time economic dispatch reserves of different unit commitment strategies are compared. The experimental results show that the stochastic model is more robust than the deterministic ones and thus decreases the risk in system operations of smart grids.
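
    A minimal sketch of the scenario-generation step described above, assuming invented quantile levels and wind power values (the real PIs would come from the trained neural network): an ECDF is interpolated through the hourly quantile points and Monte Carlo scenarios are drawn by inverse-transform sampling.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical PI-derived quantiles of wind power (MW) for one hour.
      levels = np.array([0.05, 0.25, 0.50, 0.75, 0.95])
      quantiles = np.array([12.0, 20.0, 26.0, 33.0, 45.0])

      def sample_scenarios(levels, quantiles, n):
          """Inverse-transform sampling from an ECDF fitted to quantile points;
          the CDF between the tabulated points is linearly interpolated."""
          u = rng.uniform(levels[0], levels[-1], n)  # stay within the tabulated range
          return np.interp(u, levels, quantiles)

      scenarios = sample_scenarios(levels, quantiles, 1000)
      print(scenarios.mean(), np.percentile(scenarios, [5, 50, 95]))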

  15. Particle precipitation: How the spectrum fit impacts atmospheric chemistry

    Science.gov (United States)

    Wissing, J. M.; Nieder, H.; Yakovchouk, O. S.; Sinnhuber, M.

    2016-11-01

    Particle precipitation causes atmospheric ionization. Modeled ionization rates are widely used in atmospheric chemistry/climate simulations of the upper atmosphere. As ionization rates are based on particle measurements, some assumptions concerning the energy spectrum are required. While detectors measure particles binned into certain energy ranges only, the calculation of an ionization profile needs a fit for the whole energy spectrum. The following assumptions are therefore needed: (a) the fit function (e.g. power law or Maxwellian), (b) the energy range, (c) the number of segments in the spectral fit, (d) fixed or variable positions of the intersections between these segments. The aim of this paper is to quantify the impact of different assumptions on ionization rates as well as their consequences for atmospheric chemistry modeling. As the assumptions about the particle spectrum are independent of the ionization model itself, the results of this paper are not restricted to a single ionization model, even though the Atmospheric Ionization Module OSnabrück (AIMOS, Wissing and Kallenrode, 2009) is used here. We include protons only, as this allows us to trace changes in the chemistry model directly back to the different assumptions without the need to interpret superposed ionization profiles. However, since every particle species requires a particle spectrum fit with the mentioned assumptions, the results are generally applicable to all precipitating particles. The reader may argue that the selection of assumptions for the particle fit is of minor interest, but we would like to emphasize this topic as it is a major, if not the main, source of discrepancies between different ionization models (and reality). Depending on the assumptions, single ionization profiles may vary by a factor of 5, long-term calculations may show systematic over- or underestimation at specific altitudes, and even for ideal setups the definition of the energy range involves an intrinsic 25% uncertainty for the
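
    Since the choice of fit function is central to the argument above, here is a minimal sketch of fitting a single power-law segment to channel fluxes (the energies and fluxes are invented; real data come from binned detector channels):

      import numpy as np

      # Hypothetical differential proton fluxes at channel mean energies.
      energy = np.array([5.0, 10.0, 30.0, 60.0, 100.0])     # MeV
      flux = np.array([4.1e3, 9.8e2, 9.1e1, 2.1e1, 7.0])    # 1/(cm^2 s sr MeV)

      # A single power-law segment J(E) = J0 * E**-gamma is linear in log-log
      # space, so the fit reduces to a straight line.
      slope, intercept = np.polyfit(np.log(energy), np.log(flux), 1)
      gamma, J0 = -slope, np.exp(intercept)
      print(f"gamma = {gamma:.2f}, J0 = {J0:.3g}")

      # Extrapolating beyond the measured channels is exactly where the choice
      # of fit function, energy range and segmentation dominates the result.
      print("J(300 MeV) =", J0 * 300.0 ** -gamma)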

  16. A node linkage approach for sequential pattern mining.

    Directory of Open Access Journals (Sweden)

    Osvaldo Navarro

    Full Text Available Sequential pattern mining is a widely addressed problem in data mining, with applications such as analyzing Web usage, examining purchase behavior, and text mining, among others. Nevertheless, with the dramatic increase in data volume, current approaches prove inefficient when dealing with large input datasets, a large number of different symbols, and low minimum supports. In this paper, we propose a new sequential pattern mining algorithm, which follows a pattern-growth scheme to discover sequential patterns. Unlike most pattern-growth algorithms, our approach does not build a data structure to represent the input dataset, but instead accesses the required sequences through pseudo-projection databases, achieving better runtime and reducing memory requirements. Our algorithm traverses the search space in a depth-first fashion and only preserves in memory a pattern node linkage and the pseudo-projections required for the branch being explored at the time. Experimental results show that our new approach, the Node Linkage Depth-First Traversal algorithm (NLDFT), has better performance and scalability in comparison with state-of-the-art algorithms.
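
    As a rough illustration of pattern growth over pseudo-projections (a generic PrefixSpan-style sketch, not the authors' NLDFT algorithm; the toy sequences and minimum support are invented), each projection stores only (sequence id, offset) pairs rather than copies of the data:

      from collections import defaultdict

      def prefixspan(sequences, min_support):
          """Pattern-growth mining over pseudo-projections: each projection is a
          list of (sequence_id, offset) pairs, so sequences are never copied.
          Returns {pattern: support} for all frequent sequential patterns."""
          results = {}

          def grow(prefix, projection):
              # Count, per symbol, the sequences in which it occurs after the offset.
              occurs = defaultdict(set)
              for sid, start in projection:
                  for sym in set(sequences[sid][start:]):
                      occurs[sym].add(sid)
              for sym, sids in occurs.items():
                  if len(sids) < min_support:
                      continue
                  pattern = prefix + (sym,)
                  results[pattern] = len(sids)
                  # Pseudo-project: advance each sequence just past its first match.
                  new_projection = []
                  for sid, start in projection:
                      seq = sequences[sid]
                      for i in range(start, len(seq)):
                          if seq[i] == sym:
                              new_projection.append((sid, i + 1))
                              break
                  grow(pattern, new_projection)

          grow((), [(sid, 0) for sid in range(len(sequences))])
          return results

      data = [list("abcb"), list("acbc"), list("abbc")]
      for pattern, support in sorted(prefixspan(data, 2).items()):
          print("".join(pattern), support)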

  17. Sequential Change-Point Detection via Online Convex Optimization

    Directory of Open Access Journals (Sweden)

    Yang Cao

    2018-02-01

    Full Text Available Sequential change-point detection when the distribution parameters are unknown is a fundamental problem in statistics and machine learning. When the post-change parameters are unknown, we consider a set of detection procedures based on sequential likelihood ratios with non-anticipating estimators constructed using online convex optimization algorithms such as online mirror descent, which provides a more versatile approach to tackling complex situations where recursive maximum likelihood estimators cannot be found. When the underlying distributions belong to an exponential family and the estimators satisfy the logarithmic regret property, we show that this approach is nearly second-order asymptotically optimal. This means that the upper bound for the false alarm rate of the algorithm (measured by the average run length) meets the lower bound asymptotically, up to a log-log factor, as the threshold tends to infinity. Our proof is achieved by making a connection between sequential change-point detection and online convex optimization and leveraging the logarithmic regret bound property of the online mirror descent algorithm. Numerical and real data examples validate our theory.
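
    A toy sketch of a likelihood-ratio detector with a non-anticipating estimator, for a Gaussian mean shift (the exponentially forgetting running average below is a simple stand-in for the paper's online-mirror-descent estimator; the threshold and data are invented):

      import numpy as np

      rng = np.random.default_rng(2)

      def detect_change(x, threshold, pre_mean=0.0, sigma=1.0, forget=0.05):
          """One-sided detection of a mean shift with unknown post-change mean.
          The forgetting average 'est' is non-anticipating: the likelihood
          ratio at time t uses only data observed strictly before t."""
          stat, est = 0.0, pre_mean
          for t, xt in enumerate(x):
              # Log-likelihood ratio of N(est, sigma^2) against N(pre_mean, sigma^2).
              llr = (est - pre_mean) * (xt - 0.5 * (est + pre_mean)) / sigma**2
              stat = max(0.0, stat + llr)        # CUSUM-style reflection at zero
              if stat > threshold:
                  return t                       # alarm time
              est += forget * (xt - est)         # update the estimator afterwards
          return None

      x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(1.0, 1.0, 100)])
      print("alarm at t =", detect_change(x, threshold=10.0))  # change is at t = 200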

  18. Sequential decoders for large MIMO systems

    KAUST Repository

    Ali, Konpal S.; Abediseid, Walid; Alouini, Mohamed-Slim

    2014-01-01

    the Sequential Decoder using the Fano Algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity

  19. Humor Styles and the Intolerance of Uncertainty Model of Generalized Anxiety

    Directory of Open Access Journals (Sweden)

    Nicholas A. Kuiper

    2014-08-01

    Full Text Available Past research suggests that sense of humor may play a role in anxiety. The present study builds upon this work by exploring how individual differences in various humor styles, such as affiliative, self-enhancing, and self-defeating humor, may fit within a contemporary research model of anxiety. In this model, intolerance of uncertainty is a fundamental personality characteristic that heightens excessive worry, thus increasing anxiety. We further propose that greater intolerance of uncertainty may also suppress the use of adaptive humor (affiliative and self-enhancing) and foster increased use of maladaptive self-defeating humor. Initial correlational analyses provide empirical support for these proposals. In addition, we found that excessive worry and affiliative humor both served as significant mediators. In particular, heightened intolerance of uncertainty led to both excessive worry and a reduction in affiliative humor use, which, in turn, increased anxiety. We also explored potential humor mediating effects for each of the individual worry content domains in this model. These analyses confirmed the importance of affiliative humor as a mediator for worry pertaining to a wide range of content domains (e.g., relationships, lack of confidence, the future, and work). These findings were then discussed in terms of a combined model that considers how humor styles may impact the social sharing of positive and negative emotions.

  20. A Bayesian belief network approach for assessing uncertainty in conceptual site models at contaminated sites

    Science.gov (United States)

    Thomsen, Nanna I.; Binning, Philip J.; McKnight, Ursula S.; Tuxen, Nina; Bjerg, Poul L.; Troldborg, Mads

    2016-05-01

    A key component in risk assessment of contaminated sites is the formulation of a conceptual site model (CSM). A CSM is a simplified representation of reality and forms the basis for the mathematical modeling of contaminant fate and transport at the site. The CSM should therefore identify the most important site-specific features and processes that may affect the contaminant transport behavior at the site. However, the development of a CSM will always be associated with uncertainties due to limited data and lack of understanding of the site conditions. CSM uncertainty is often found to be a major source of model error and it should therefore be accounted for when evaluating uncertainties in risk assessments. We present a Bayesian belief network (BBN) approach for constructing CSMs and assessing their uncertainty at contaminated sites. BBNs are graphical probabilistic models that are effective for integrating quantitative and qualitative information, and thus can strengthen decisions when empirical data are lacking. The proposed BBN approach facilitates a systematic construction of multiple CSMs, and then determines the belief in each CSM using a variety of data types and/or expert opinion at different knowledge levels. The developed BBNs combine data from desktop studies and initial site investigations with expert opinion to assess which of the CSMs are more likely to reflect the actual site conditions. The method is demonstrated on a Danish field site, contaminated with chlorinated ethenes. Four different CSMs are developed by combining two contaminant source zone interpretations (presence or absence of a separate phase contamination) and two geological interpretations (fractured or unfractured clay till). The beliefs in each of the CSMs are assessed sequentially based on data from three investigation stages (a screening investigation, a more detailed investigation, and an expert consultation) to demonstrate that the belief can be updated as more information becomes available.
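
    The sequential belief update itself reduces to Bayes' rule over the four candidate CSMs. A minimal sketch, with invented priors and stage likelihoods (in the BBN these would be derived from the data types and expert opinion at each knowledge level):

      import numpy as np

      # Four hypothetical CSMs: source interpretation (separate phase yes/no)
      # crossed with geology (fractured/unfractured clay till).
      csms = ["phase/fractured", "phase/unfractured",
              "no phase/fractured", "no phase/unfractured"]
      belief = np.full(4, 0.25)                       # uniform prior

      # P(evidence | CSM) for each investigation stage -- invented numbers.
      stages = {
          "screening":     np.array([0.6, 0.4, 0.3, 0.2]),
          "detailed":      np.array([0.7, 0.2, 0.4, 0.1]),
          "expert advice": np.array([0.8, 0.3, 0.3, 0.1]),
      }

      for stage, likelihood in stages.items():
          belief = belief * likelihood
          belief /= belief.sum()                      # Bayes' rule, renormalized
          print(f"after {stage}: " + ", ".join(
              f"{c}={b:.2f}" for c, b in zip(csms, belief)))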

  1. Ruminations On NDA Measurement Uncertainty Compared TO DA Uncertainty

    International Nuclear Information System (INIS)

    Salaymeh, S.; Ashley, W.; Jeffcoat, R.

    2010-01-01

    It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support: criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail for the subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non-routine situations, because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice and embracing the continuous improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives and ongoing development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialog and sharing of best practice and furthermore motivate developers to revisit the treatment of measurement uncertainty.

  2. RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY

    Energy Technology Data Exchange (ETDEWEB)

    Salaymeh, S.; Ashley, W.; Jeffcoat, R.

    2010-06-17

    It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support: criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete, or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail for the subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non-routine situations, because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice and embracing the continuous improvement philosophy, a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in nondestructive assay of nuclear materials. This paper outlines the needs analysis, objectives and ongoing development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community, we hope this article will encourage dialog and sharing of best practice and furthermore motivate developers to revisit the treatment of measurement uncertainty.

  3. Measurement uncertainties in science and technology

    CERN Document Server

    Grabe, Michael

    2014-01-01

    This book recasts the classical Gaussian error calculus from scratch, the motivation being the treatment of both random and unknown systematic errors. The idea of the book is to create a formalism fit to localize the true values of physical quantities considered – true with respect to the set of predefined physical units. Remarkably enough, the prevailingly practiced forms of error calculus do not feature this property, which, however, proves in every respect to be physically indispensable. The amended formalism, termed Generalized Gaussian Error Calculus by the author, treats unknown systematic errors as biases and brings random errors to bear via enhanced confidence intervals as laid down by Student. The significantly extended second edition thoroughly restructures and systematizes the text as a whole and illustrates the formalism by numerous numerical examples. They demonstrate the basic principles of how to understand uncertainties in order to localize the true values of measured quantities - a perspective decisive in vi...

  4. Fitness cost

    DEFF Research Database (Denmark)

    Nielsen, Karen L.; Pedersen, Thomas M.; Udekwu, Klas I.

    2012-01-01

    phage types, predominantly only penicillin resistant. We investigated whether isolates of this epidemic were associated with a fitness cost, and we employed a mathematical model to ask whether these fitness costs could have led to the observed reduction in frequency. Bacteraemia isolates of S. aureus...... from Denmark have been stored since 1957. We chose 40 S. aureus isolates belonging to phage complex 83A, clonal complex 8 based on spa type, ranging in time of isolation from 1957 to 1980 and with various antibiograms, including both methicillin-resistant and -susceptible isolates. The relative fitness...... of each isolate was determined in a growth competition assay with a reference isolate. Significant fitness costs of 2-15% were determined for the MRSA isolates studied. There was a significant negative correlation between the number of antibiotic resistances and relative fitness. Multiple regression analysis...

  5. Comment on: "Cell Therapy for Heart Disease: Trial Sequential Analyses of Two Cochrane Reviews"

    DEFF Research Database (Denmark)

    Castellini, Greta; Nielsen, Emil Eik; Gluud, Christian

    2017-01-01

    Trial Sequential Analysis is a frequentist method to help researchers control the risks of random errors in meta-analyses (1). Fisher and colleagues used Trial Sequential Analysis on cell therapy for heart diseases (2). The present article discusses the usefulness of Trial Sequential Analysis and...

  6. Operational hydrological forecasting in Bavaria. Part I: Forecast uncertainty

    Science.gov (United States)

    Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.

    2009-04-01

    observations and several years of archived forecasts, overall empirical error distributions, termed 'overall error', were derived for each gauge for a range of relevant forecast lead times. b) The error distributions vary strongly with the hydrometeorological situation, so a subdivision into the hydrological cases 'low flow', 'rising flood', 'flood' and 'flood recession' was introduced. c) For the sake of numerical compression, theoretical distributions were fitted to the empirical distributions using the method of moments. Here, the normal distribution was generally best suited. d) Further data compression was achieved by representing the distribution parameters as a function (a second-order polynomial) of lead time. In general, the 'overall error' obtained from the above procedure is most useful in regions where large human impact occurs and where the influence of the meteorological forecast is limited. In upstream regions, however, forecast uncertainty depends strongly on the current predictability of the atmosphere, which is contained in the spread of an ensemble forecast. Including this dynamically in the hydrological forecast uncertainty estimation requires prior elimination of the contribution of the weather forecast to the 'overall error'. This was achieved by calculating long series of hydrometeorological forecast tests, where rainfall observations were used instead of forecasts. The resulting error distribution is termed 'model error' and can be applied to hydrological ensemble forecasts, where ensemble rainfall forecasts are used as forcing. The concept will be illustrated by examples (good and bad ones) covering a wide range of catchment sizes, hydrometeorological regimes and quality of hydrological model calibration. The methodology to combine the static and dynamic shares of uncertainty will be presented in part II of this study.
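
    A minimal numerical sketch of steps c) and d), with synthetic forecast errors standing in for the archived ones (all numbers invented): normal distributions are fitted per lead time by the method of moments, and each parameter is then compressed into a second-order polynomial in lead time.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic archive: forecast errors (m) for one gauge and one hydrological
      # case, grouped by lead time (h); real errors come from archived forecasts.
      lead_times = np.array([6, 12, 24, 48, 72])
      errors = {lt: rng.normal(0.01 * lt, 0.05 + 0.02 * lt, 500) for lt in lead_times}

      # Step c): method of moments -- the normal fit is the sample mean and std.
      mu = np.array([errors[lt].mean() for lt in lead_times])
      sd = np.array([errors[lt].std(ddof=1) for lt in lead_times])

      # Step d): compress each parameter as a second-order polynomial in lead time.
      mu_poly = np.polyfit(lead_times, mu, 2)
      sd_poly = np.polyfit(lead_times, sd, 2)

      lt = 36.0  # query a lead time that was never archived
      print(f"mu(36 h) = {np.polyval(mu_poly, lt):.3f} m, "
            f"sd(36 h) = {np.polyval(sd_poly, lt):.3f} m")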

  7. Efficient sequential and parallel algorithms for record linkage.

    Science.gov (United States)

    Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar

    2014-01-01

    Integrating data from multiple sources is a crucial and challenging problem. Even though numerous algorithms exist for record linkage or deduplication, they suffer from either long running times or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)), while achieving the same accuracy. The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes).
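
    The connected-components step is easy to illustrate. A small sketch using union-find (the pair list is invented; in a pipeline like the one above, such pairs would come from blocking, e.g. radix sorting on selected attributes, followed by edit-distance comparison):

      def find(parent, i):
          """Root lookup with path compression for the union-find structure."""
          while parent[i] != i:
              parent[i] = parent[parent[i]]
              i = parent[i]
          return i

      def link_records(n_records, similar_pairs):
          """Group records into entities: connected components of the graph
          whose edges are the pairs judged similar."""
          parent = list(range(n_records))
          for a, b in similar_pairs:
              ra, rb = find(parent, a), find(parent, b)
              if ra != rb:
                  parent[ra] = rb                  # union the two components
          clusters = {}
          for i in range(n_records):
              clusters.setdefault(find(parent, i), []).append(i)
          return list(clusters.values())

      print(link_records(6, [(0, 1), (1, 2), (4, 5)]))  # [[0, 1, 2], [3], [4, 5]]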

  8. Embracing uncertainty in applied ecology.

    Science.gov (United States)

    Milner-Gulland, E J; Shea, K

    2017-12-01

    Applied ecologists often face uncertainty that hinders effective decision-making. Common traps that may catch the unwary are: ignoring uncertainty, acknowledging uncertainty but ploughing on, focussing on trivial uncertainties, believing your models, and unclear objectives. We integrate research insights and examples from a wide range of applied ecological fields to illustrate advances that are generally underused, but could facilitate ecologists' ability to plan and execute research to support management. Recommended approaches to avoid uncertainty traps are: embracing models, using decision theory, using models more effectively, thinking experimentally, and being realistic about uncertainty. Synthesis and applications. Applied ecologists can become more effective at informing management by using approaches that explicitly take account of uncertainty.

  9. Decision-Making under Criteria Uncertainty

    Science.gov (United States)

    Kureychik, V. M.; Safronenkova, I. B.

    2018-05-01

    Uncertainty is an essential part of a decision-making procedure. The paper deals with the problem of decision-making under criteria uncertainty. In this context, decision-making under uncertainty and the types and conditions of uncertainty were examined, and the decision-making problem under uncertainty was formalized. A modification of a mathematical decision support method under uncertainty via ontologies was proposed; a distinctive feature of the developed method is its use of ontologies as base elements. The goal of this work is the development of a decision-making method under criteria uncertainty with the use of ontologies in the area of multilayer board design. The method is aimed at improving the technical and economic characteristics of the examined domain.

  10. Physical Uncertainty Bounds (PUB)

    Energy Technology Data Exchange (ETDEWEB)

    Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  11. Assessment of measurement result uncertainty in determination of 210Pb with the focus on matrix composition effect in gamma-ray spectrometry

    International Nuclear Information System (INIS)

    Iurian, A.R.; Pitois, A.; Kis-Benedek, G.; Migliori, A.; Padilla-Alvarez, R.; Ceccatelli, A.

    2016-01-01

    Reference materials were used to assess measurement result uncertainty in determination of 210Pb by gamma-ray spectrometry, liquid scintillation counting, or indirectly by alpha-particle spectrometry, using its daughter 210Po in radioactive equilibrium. Combined standard uncertainties of 210Pb massic activities obtained by liquid scintillation counting are in the range 2–12%, depending on matrices and massic activity values. They are in the range 1–3% for the measurement of its daughter 210Po using alpha-particle spectrometry. Three approaches (direct computation of counting efficiency and efficiency transfer approaches based on the computation and, respectively, experimental determination of the efficiency transfer factors) were applied for the evaluation of 210Pb using gamma-ray spectrometry. Combined standard uncertainties of gamma-ray spectrometry results were found in the range 2–17%. The effect of matrix composition on self-attenuation was investigated and a detailed assessment of uncertainty components was performed. - Highlights: • Confirmed 210Pb certified values by LSC and alpha-particle spectrometry (210Po). • Assessed 210Po measurement result uncertainty by alpha-particle spectrometry. • Matrix composition effect on gamma-ray spectrometry measurement result uncertainty. • Assessment of 210Pb measurement result uncertainty by gamma-ray spectrometry. • Comparison of techniques and approaches: ‘fit-for-purpose’ considerations.

  12. Automatic synthesis of sequential control schemes

    International Nuclear Information System (INIS)

    Klein, I.

    1993-01-01

    Of all the hardware and software developed for industrial control purposes, the majority is devoted to sequential, or binary valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shut-down to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually without much theoretical support to obtain a systematic approach. We propose a method to create sequential control programs automatically. The main idea is to spend some effort off-line modelling the plant, and from this model to generate the control strategy, that is, the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. The generated plan is proven to be minimal and maximally parallel. For a larger class of problems we propose a method to split the original problem into a number of simple problems that can each be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough. This

  13. Measurement Uncertainty Relations for Discrete Observables: Relative Entropy Formulation

    Science.gov (United States)

    Barchielli, Alberto; Gregoratti, Matteo; Toigo, Alessandro

    2018-02-01

    We introduce a new information-theoretic formulation of quantum measurement uncertainty relations, based on the notion of relative entropy between measurement probabilities. In the case of a finite-dimensional system and for any approximate joint measurement of two target discrete observables, we define the entropic divergence as the maximal total loss of information occurring in the approximation at hand. For fixed target observables, we study the joint measurements minimizing the entropic divergence, and we prove the general properties of its minimum value. Such a minimum is our uncertainty lower bound: the total information lost by replacing the target observables with their optimal approximations, evaluated at the worst possible state. The bound turns out to be also an entropic incompatibility degree, that is, a good information-theoretic measure of incompatibility: indeed, it vanishes if and only if the target observables are compatible, it is state-independent, and it enjoys all the invariance properties which are desirable for such a measure. In this context, we point out the difference between general approximate joint measurements and sequential approximate joint measurements; to do this, we introduce a separate index for the tradeoff between the error of the first measurement and the disturbance of the second one. By exploiting the symmetry properties of the target observables, exact values, lower bounds and optimal approximations are evaluated in two different concrete examples: (1) a couple of spin-1/2 components (not necessarily orthogonal); (2) two Fourier conjugate mutually unbiased bases in prime power dimension. Finally, the entropic incompatibility degree straightforwardly generalizes to the case of many observables, still maintaining all its relevant properties; we explicitly compute it for three orthogonal spin-1/2 components.

  14. Theoretical issues in PDF determination and associated uncertainties

    CERN Document Server

    Ball, Richard D.; Del Debbio, Luigi; Forte, Stefano; Guffanti, Alberto; Rojo, Juan; Ubiali, Maria

    2013-01-01

    We study several sources of theoretical uncertainty in the determination of parton distributions (PDFs) which may affect current PDF sets used for precision physics at the Large Hadron Collider, and explain discrepancies between them. We consider in particular the use of fixed-flavor versus variable-flavor number renormalization schemes, higher twist corrections, and nuclear corrections. We perform our study in the framework of the NNPDF2.3 global PDF determination, by quantifying in each case the impact of different theoretical assumptions on the output PDFs. We also study in each case the implications for benchmark cross sections at the LHC. We find that the impact in a global fit of a fixed-flavor number scheme is substantial, the impact of higher twists is negligible, and the impact of nuclear corrections is moderate and circumscribed.

  15. Fault detection in multiply-redundant measurement systems via sequential testing

    International Nuclear Information System (INIS)

    Ray, A.

    1988-01-01

    The theory and application of a sequential test procedure for fault detection and isolation are presented. The test procedure is suited to the development of intelligent instrumentation in strategic processes like aircraft and nuclear plants, where redundant measurements are usually available for individual critical variables. The test procedure consists of: (1) a generic redundancy management procedure which is essentially independent of the fault detection strategy and measurement noise statistics, and (2) a modified version of the sequential probability ratio test algorithm for fault detection and isolation, which functions within the framework of this redundancy management procedure. The sequential test procedure is suitable for real-time applications using commercially available microcomputers, and its efficacy has been verified by online fault detection in an operating nuclear reactor. 15 references
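
    A compact sketch of the core sequential probability ratio test for a Gaussian mean shift, with Wald's thresholds (the residual stream, error rates and means are illustrative, not taken from the paper):

      import math
      import random

      def sprt(residuals, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
          """Wald's sequential probability ratio test on a sensor residual:
          H0 ~ N(mu0, sigma^2) (healthy) vs H1 ~ N(mu1, sigma^2) (faulty).
          Returns the decision and the number of samples consumed."""
          upper = math.log((1.0 - beta) / alpha)   # accept H1: declare fault
          lower = math.log(beta / (1.0 - alpha))   # accept H0: declare healthy
          llr = 0.0
          for n, x in enumerate(residuals, 1):
              llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma**2
              if llr >= upper:
                  return "faulty", n
              if llr <= lower:
                  return "healthy", n
          return "undecided", len(residuals)

      random.seed(4)
      residuals = [random.gauss(1.0, 1.0) for _ in range(200)]  # a drifted sensor
      print(sprt(residuals))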

  16. The FitTrack Index as fitness indicator: A pilot study | van Rensburg ...

    African Journals Online (AJOL)

    Conclusions: These results suggest that the web-based FitTrack Index may be considered an appropriate tool to evaluate exercise capacity and cardiovascular fitness in healthy individuals following an aerobic training programme. Keywords: Aerobic fitness, Exercise ability, Recreational fitness, Cardiovascular fitness, ...

  17. Methods and uncertainty estimations of 3-D structural modelling in crystalline rocks: a case study

    Science.gov (United States)

    Schneeberger, Raphael; de La Varga, Miguel; Egli, Daniel; Berger, Alfons; Kober, Florian; Wellmann, Florian; Herwegh, Marco

    2017-09-01

    Exhumed basement rocks are often dissected by faults, the latter controlling physical parameters such as rock strength, porosity, or permeability. Knowledge of the three-dimensional (3-D) geometry of the fault pattern and its continuation with depth is therefore of paramount importance for applied geology projects (e.g. tunnelling, nuclear waste disposal) in crystalline bedrock. The central Aar massif (Central Switzerland) serves as a study area where we investigate the 3-D geometry of the Alpine fault pattern by means of both surface (fieldwork and remote sensing) and underground (mapping of the Grimsel Test Site) information. The fault zone pattern consists of planar steep major faults (kilometre scale) interconnected with secondary relay faults (hectometre scale). Starting with surface data, we present a workflow for structural 3-D modelling of the primary faults, comparing three extrapolation approaches based on (a) field data, (b) Delaunay triangulation, and (c) a best-fitting moment of inertia analysis. The quality of these surface-data-based 3-D models is then tested with respect to the fit of the predictions with the underground appearance of faults. All three extrapolation approaches result in a close fit (> 10 %) when compared with underground rock laboratory mapping. Subsequently, we performed a statistical interpolation based on Bayesian inference in order to validate and further constrain the uncertainty of the extrapolation approaches. This comparison indicates that fieldwork at the surface is key for accurately constraining the geometry of the fault pattern and enabling a proper extrapolation of major faults towards depth. Considerable uncertainties, however, persist with respect to smaller-sized secondary structures because of their limited spatial extensions and unknown recurrence intervals.

  18. FITS: a function-fitting program

    Energy Technology Data Exchange (ETDEWEB)

    Balestrini, S.J.; Chezem, C.G.

    1982-08-01

    FITS is an iterative computer program that adjusts the parameters of a function to fit a set of data points according to the least squares criterion and then lists and plots the results. The function can be programmed or chosen from a library that is provided. The library can be expanded to include up to 99 functions. A general plotting routine, contained in the program but useful in its own right, is described separately in Appendix A. An example problem file and its solution are given in Appendix B.
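
    The core iteration FITS performs is ordinary nonlinear least squares. A minimal stand-in sketch (Gauss-Newton with a numeric Jacobian; the model function and data are invented, and this code is unrelated to the FITS program itself):

      import numpy as np

      def fit(f, p0, x, y, steps=50):
          """Iteratively adjust parameters p of the model f(x, p) to minimize
          the sum of squared residuals (Gauss-Newton, numeric Jacobian)."""
          p = np.asarray(p0, dtype=float)
          for _ in range(steps):
              r = y - f(x, p)                       # residuals at the current p
              J = np.empty((x.size, p.size))
              for j in range(p.size):               # central-difference Jacobian
                  dp = np.zeros_like(p)
                  dp[j] = 1e-6 * max(1.0, abs(p[j]))
                  J[:, j] = (f(x, p + dp) - f(x, p - dp)) / (2.0 * dp[j])
              p = p + np.linalg.lstsq(J, r, rcond=None)[0]
          return p

      # Example "library" function: exponential decay plus constant background.
      model = lambda x, p: p[0] * np.exp(-p[1] * x) + p[2]

      rng = np.random.default_rng(5)
      x = np.linspace(0.0, 5.0, 40)
      y = model(x, [2.0, 1.3, 0.5]) + rng.normal(0.0, 0.02, x.size)
      print(fit(model, [1.0, 1.0, 0.0], x, y))  # recovers ~[2.0, 1.3, 0.5]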

  19. Sequential designs for sensitivity analysis of functional inputs in computer experiments

    International Nuclear Information System (INIS)

    Fruth, J.; Roustant, O.; Kuhnt, S.

    2015-01-01

    Computer experiments are nowadays commonly used to analyze industrial processes aimed at achieving a desired outcome. Sensitivity analysis plays an important role in exploring the actual impact of adjustable parameters on the response variable. In this work we focus on sensitivity analysis of a scalar-valued output of a time-consuming computer code depending on scalar and functional input parameters. We investigate a sequential methodology, based on piecewise constant functions and sequential bifurcation, which is both economical and fully interpretable. The new approach is applied to a sheet metal forming problem in three sequential steps, resulting in new insights into the behavior of the forming process over time. - Highlights: • A sensitivity analysis method for functional and scalar inputs is presented. • We focus on the discovery of the most influential parts of the functional domain. • We investigate an economical sequential methodology based on piecewise constant functions. • Normalized sensitivity indices are introduced and investigated theoretically. • Successful application to sheet metal forming on two functional inputs

  20. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William; Dietiker, Jean-François; Li, Tingwen; Sarkar, Avik; Sun, Xin

    2017-05-01

    A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design’s predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.

  1. Systematic uncertainties in long-baseline neutrino oscillations for large θ₁₃

    Energy Technology Data Exchange (ETDEWEB)

    Coloma, Pilar; Huber, Patrick; Kopp, Joachim; Winter, Walter

    2013-02-01

    We study the physics potential of future long-baseline neutrino oscillation experiments at large θ₁₃, focusing especially on systematic uncertainties. We discuss superbeams, beta beams, and neutrino factories, and for the first time compare these experiments on an equal footing with respect to systematic errors. We explicitly simulate near detectors for all experiments, we use the same implementation of systematic uncertainties for all experiments, and we fully correlate the uncertainties among detectors, oscillation channels, and beam polarizations as appropriate. As our primary performance indicator, we use the achievable precision in the measurement of the CP violating phase δ_CP. We find that a neutrino factory is the only instrument that can measure δ_CP with a precision similar to that of its quark sector counterpart. All neutrino beams operating at peak energies ≳2 GeV are quite robust with respect to systematic uncertainties, whereas especially beta beams and T2HK suffer from large cross section uncertainties in the quasi-elastic regime, combined with their inability to measure the appearance signal cross sections at the near detector. A noteworthy exception is the combination of a γ = 100 beta beam with an SPL-based superbeam, in which all relevant cross sections can be measured in a self-consistent way. This provides a performance second only to the neutrino factory. For other superbeam experiments such as LBNO and the setups studied in the context of the LBNE reconfiguration effort, statistics turns out to be the bottleneck. In almost all cases, the near detector is not critical to control systematics, since the combined fit of appearance and disappearance data already constrains the impact of systematics to be small, provided that the three active flavor oscillation framework is valid.

  2. Stereo-particle image velocimetry uncertainty quantification

    International Nuclear Information System (INIS)

    Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric

  3. Methodologies of Uncertainty Propagation Calculation

    International Nuclear Information System (INIS)

    Chojnacki, Eric

    2002-01-01

    After recalling the theoretical principles and the practical difficulties of the methodologies of uncertainty propagation calculation, the author discussed how to propagate input uncertainties. He said there were two kinds of input uncertainty: variability (uncertainty due to heterogeneity) and lack of knowledge (uncertainty due to ignorance). It was therefore necessary to use two different propagation methods. He demonstrated this in a simple example which he generalised, treating the variability uncertainty by probability theory and the lack-of-knowledge uncertainty by fuzzy theory. He cautioned, however, against the systematic use of probability theory, which may lead to unjustifiably and illegitimately precise answers. Mr Chojnacki's conclusions were that the importance of distinguishing variability and lack of knowledge increased as the problem became more and more complex in terms of the number of parameters or time steps, and that it was necessary to develop uncertainty propagation methodologies combining probability theory and fuzzy theory

  4. Patterned Arrays of Functional Lateral Heterostructures via Sequential Template-Directed Printing.

    Science.gov (United States)

    Li, Yifan; Su, Meng; Li, Zheng; Huang, Zhandong; Li, Fengyu; Pan, Qi; Ren, Wanjie; Hu, Xiaotian; Song, Yanlin

    2018-04-30

    The precise integration of microscale dots and lines with controllable interfacing connections is highly important for the fabrication of functional devices. To date, the solution-processible methods are used to fabricate the heterogeneous micropatterns for different materials. However, for increasingly miniaturized and multifunctional devices, it is extremely challenging to engineer the uncertain kinetics of a solution on the microstructures surfaces, resulting in uncontrollable interface connections and poor device performance. Here, a sequential template-directed printing process is demonstrated for the fabrication of arrayed microdots connected by microwires through the regulation of the Rayleigh-Taylor instability of material solution or suspension. Flexibility in the control of fluidic behaviors can realize precise interface connection between the micropatterns, including the microwires traversing, overlapping or connecting the microdots. Moreover, various morphologies such as circular, rhombic, or star-shaped microdots as well as straight, broken or curved microwires can be achieved. The lateral heterostructure printed with two different quantum dots displays bright dichromatic photoluminescence. The ammonia gas sensor printed by polyaniline and silver nanoparticles exhibits a rapid response time. This strategy can construct heterostructures in a facile manner by eliminating the uncertainty of the multimaterials interface connection, which will be promising for the development of novel lateral functional devices. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. On the origin of reproducible sequential activity in neural circuits

    Science.gov (United States)

    Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
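
    A minimal simulation in the spirit of the generalized Lotka-Volterra rate model discussed above (a three-population May-Leonard variant; the coupling values, noise level and integration scheme are illustrative choices, not the authors' parameters). With a < 1 < b and a + b > 2 the activity visits the three saddle states in a fixed, reproducible order:

      import numpy as np

      rng = np.random.default_rng(6)

      # Three inhibitory-coupled populations with asymmetric (May-Leonard)
      # coupling: the heteroclinic cycle among the saddles is attracting,
      # so the switching sequence is robust and reproducible.
      a, b = 0.8, 1.5
      rho = np.array([[1.0, a, b],
                      [b, 1.0, a],
                      [a, b, 1.0]])

      x = np.array([0.9, 0.05, 0.05])   # population firing rates
      dt, noise = 0.01, 1e-8            # Euler step; noise keeps switching going
      order = []
      for _ in range(200_000):
          x = x + dt * x * (1.0 - rho @ x) + noise * rng.random(3)
          winner = int(np.argmax(x))
          if not order or order[-1] != winner:
              order.append(winner)

      print("switching sequence:", order[:12])  # the same cyclic order, repeated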

  6. Physical Fitness Assessment.

    Science.gov (United States)

    Valdes, Alice

    This document presents baseline data on physical fitness that provides an outline for assessing the physical fitness of students. It consists of 4 tasks and a 13-item questionnaire on fitness-related behaviors. The fitness test evaluates cardiorespiratory endurance by a steady state jog; muscular strength and endurance with a two-minute bent-knee…

  7. The state of the art of the impact of sampling uncertainty on measurement uncertainty

    Science.gov (United States)

    Leite, V. J.; Oliveira, E. C.

    2018-03-01

    Measurement uncertainty is a parameter that characterizes the reliability of a result and can be divided into two large groups: sampling and analytical variations. Analytical uncertainty arises from a controlled process performed in the laboratory. The same does not hold for sampling uncertainty, which has been neglected because it faces several practical obstacles and there is little clarity on how to perform the procedures, although it is admittedly indispensable to the measurement process. This paper aims at describing the state of the art of sampling uncertainty and at assessing its relevance to measurement uncertainty.
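
    The two-group split lends itself to the standard duplicate-based treatment in which the sampling and analytical contributions, once expressed as standard uncertainties, are combined in quadrature. A minimal sketch with made-up numbers follows; the combination rule is the usual one for independent contributions, not something prescribed by this particular paper.

      import math

      # Illustrative standard uncertainties for the same measurand, same units.
      u_sampling = 0.12   # e.g. from duplicate samples taken at the same target
      u_analysis = 0.05   # e.g. from laboratory QC / validation data

      # Independent contributions combine in quadrature.
      u_measurement = math.sqrt(u_sampling**2 + u_analysis**2)
      print(f"combined standard uncertainty: {u_measurement:.3f}")
      print(f"expanded uncertainty (k=2):    {2 * u_measurement:.3f}")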

  8. Estimating Uncertainties of Ship Course and Speed in Early Navigations using ICOADS3.0

    Science.gov (United States)

    Chan, D.; Huybers, P. J.

    2017-12-01

    Information on ship position and its uncertainty is potentially important for mapping out climatologies and changes in SSTs. Using the 2-hourly ship reports from the International Comprehensive Ocean Atmosphere Dataset 3.0 (ICOADS 3.0), we estimate the uncertainties of ship course, ship speed, and latitude/longitude corrections during 1870-1900. After reviewing the techniques used in early navigation, we build a forward navigation model that uses the dead-reckoning technique, celestial latitude corrections, and chronometer longitude corrections. The modeled ship tracks exhibit jumps in longitude and latitude when a position correction is applied; these jumps are also seen in the ICOADS 3.0 observations. In this model, the position error at the end of each day grows following a 2D random walk, and the latitude/longitude errors are reset when a latitude/longitude correction is applied. We fit the variance of the magnitude of the latitude/longitude corrections in the observations against model outputs, and estimate standard deviations of 5.5 degrees for ship course, 32% for ship speed, 22 km for latitude corrections, and 27 km for longitude corrections. These estimates provide informative priors for Bayesian methods that quantify the position errors of individual tracks.
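
    A toy version of the forward navigation model sketched above, using the fitted uncertainty scales from the abstract (5.5 degrees of course, 32% of speed) as noise parameters. The route, speeds, and flat-earth kilometre frame are hypothetical simplifications for illustration only.

      import numpy as np

      rng = np.random.default_rng(2)
      true_pos = np.zeros(2)          # (x, y) in km, flat-earth toy frame
      est_pos = np.zeros(2)

      course_sd = np.deg2rad(5.5)     # ship-course uncertainty (fitted value)
      speed_frac_sd = 0.32            # fractional ship-speed uncertainty

      for day in range(10):
          for leg in range(12):       # 2-hourly dead-reckoning legs
              speed_kmh, course = 15.0, np.deg2rad(90.0)   # hypothetical leg
              true_pos += 2 * speed_kmh * np.array([np.cos(course), np.sin(course)])
              noisy_course = course + rng.normal(0.0, course_sd)
              noisy_speed = speed_kmh * (1.0 + rng.normal(0.0, speed_frac_sd))
              est_pos += 2 * noisy_speed * np.array([np.cos(noisy_course),
                                                     np.sin(noisy_course)])
          # The daily celestial/chronometer fix resets the accumulated error,
          # producing the position jumps seen in the reconstructed tracks.
          print(f"day {day}: correction jump = "
                f"{np.linalg.norm(est_pos - true_pos):6.1f} km")
          est_pos = true_pos.copy()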

  9. Sequential spatial processes for image analysis

    NARCIS (Netherlands)

    M.N.M. van Lieshout (Marie-Colette); V. Capasso

    2009-01-01

    We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects

  10. Sequential models for coarsening and missingness

    NARCIS (Netherlands)

    Gill, R.D.; Robins, J.M.

    1997-01-01

    In a companion paper we described what intuitively would seem to be the most general possible way to generate Coarsening at Random mechanisms: a sequential procedure called randomized monotone coarsening. Counterexamples showed that CAR mechanisms exist which cannot be represented in this way. Here we…

  11. Validation and uncertainty quantification of detector response functions for a 1″×2″ NaI collimated detector intended for inverse radioisotope source mapping applications

    Science.gov (United States)

    Nelson, N.; Azmy, Y.; Gardner, R. P.; Mattingly, J.; Smith, R.; Worrall, L. G.; Dewji, S.

    2017-11-01

    Detector response functions (DRFs) are often used for inverse analysis. We compute the DRF of a sodium iodide (NaI) nuclear material holdup field detector using the code named g03, developed by the Center for Engineering Applications of Radioisotopes (CEAR) at NC State University. Three measurement campaigns were performed in order to validate the DRFs constructed by g03: on-axis detection of calibration sources, off-axis measurements of a highly enriched uranium (HEU) disk, and on-axis measurements of the HEU disk with steel plates inserted between the source and the detector to provide attenuation. Furthermore, this work quantifies the uncertainty of the Monte Carlo simulations used in and with g03, as well as the uncertainties associated with each semi-empirical model employed in the full DRF representation. Overall, for the calibration source measurements, the response computed by the DRF for the full-energy peak region was good, i.e., within two standard deviations of the experimental response. In contrast, the DRF tended to overestimate the Compton continuum by about 45-65% due to inadequate tuning of the electron range multiplier fit variable, which empirically represents electron-transport physics that is not modeled explicitly in g03. For the HEU disk measurements, computed DRF responses tended to significantly underestimate (by more than 20%) the secondary full-energy peaks (any peak of lower energy than the highest-energy peak computed) due to scattering in the detector collimator and aluminum can, which is not included in the g03 model. For all of the Monte Carlo simulations, we ran a sufficiently large number of histories to ensure that the statistical uncertainties were lower than their experimental counterparts' Poisson uncertainties. The uncertainties associated with least-squares fits to the experimental data tended to have parameter relative standard deviations lower than the peak channel relative standard…
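
    The acceptance criterion quoted above (a computed full-energy-peak response within two standard deviations of the measured one) reduces to a simple agreement test once the two uncertainties are combined. The counts below are hypothetical, not the paper's data.

      import math

      def within_k_sigma(computed, measured, u_computed, u_measured, k=2.0):
          # Agreement test: |computed - measured| <= k * combined uncertainty.
          return abs(computed - measured) <= k * math.hypot(u_computed, u_measured)

      # Illustrative full-energy-peak counts; Poisson uncertainty for the measurement.
      print(within_k_sigma(computed=9.8e3, measured=1.02e4,
                           u_computed=150.0, u_measured=math.sqrt(1.02e4)))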

  12. The sequential trauma score - a new instrument for the sequential mortality prediction in major trauma*

    Directory of Open Access Journals (Sweden)

    Huber-Wagner S

    2010-05-01

    Background: There are several well-established scores for assessing the prognosis of major trauma patients, all of which have in common that they can be calculated at the earliest during the intensive care unit stay. We intended to develop a sequential trauma score (STS) that allows prognosis at several early stages, based on the information available at each particular time. Study design: In a retrospective multicenter study using data derived from the Trauma Registry of the German Trauma Society (2002-2006), we identified the most relevant prognostic factors from the patients' basic data (P), the prehospital phase (A), and the early (B1) and late (B2) trauma room phases. Univariate and logistic regression models as well as score quality criteria and the explanatory power were calculated. Results: A total of 2,354 patients with complete data were identified. From the patients' basic data (P), logistic regression showed that age was a significant predictor of survival (AUC model P, area under the curve = 0.63). Logistic regression of the prehospital data (A) showed that blood pressure, pulse rate, Glasgow Coma Scale (GCS), and anisocoria were significant predictors (AUC model A = 0.76; AUC model P+A = 0.82). Logistic regression of the early trauma room phase (B1) showed peripheral oxygen saturation, GCS, anisocoria, base excess, and thromboplastin time to be significant predictors of survival (AUC model B1 = 0.78; AUC model P+A+B1 = 0.85). Multivariate analysis of the late trauma room phase (B2) detected cardiac massage, an abbreviated injury score (AIS) of the head ≥ 3, the maximum AIS, and the need for transfusion or massive blood transfusion to be the most important predictors (AUC model B2 = 0.84; AUC final model P+A+B1+B2 = 0.90). The explanatory power, a tool for assessing the relative impact of each segment on mortality, is 25% for P, 7% for A, 17% for B1, and 51% for B2. A spreadsheet for the easy calculation of the sequential trauma…
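
    The staged modelling scheme (cumulative predictor blocks P, P+A, P+A+B1, P+A+B1+B2, each scored by AUC) can be sketched on synthetic data. scikit-learn is assumed to be available, and the feature groupings below are illustrative stand-ins for the registry variables, not the study's predictors.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(3)
      n = 2000
      X = rng.standard_normal((n, 8))                    # 8 synthetic predictors
      logit = 0.8 * X[:, 0] + 0.6 * X[:, 3] + 1.2 * X[:, 6]
      y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # synthetic outcome

      # Cumulative blocks mimicking P, P+A, P+A+B1, P+A+B1+B2.
      for name, k in {"P": 1, "P+A": 4, "P+A+B1": 6, "P+A+B1+B2": 8}.items():
          model = LogisticRegression(max_iter=1000).fit(X[:, :k], y)
          auc = roc_auc_score(y, model.predict_proba(X[:, :k])[:, 1])
          print(f"{name:10s} AUC = {auc:.2f}")           # AUC grows stage by stage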

  13. Sequential Foreign Investments, Regional Technology Platforms and the Evolution of Japanese Multinationals in East Asia

    OpenAIRE

    Song, Jaeyong

    2001-01-01

    In this paper, we investigate the firm-level mechanisms that underlie the sequential foreign direct investment (FDI) decisions of multinational corporations (MNCs). To understand inter-firm heterogeneity in the sequential FDI behaviors of MNCs, we develop a firm capability-based model of sequential FDI decisions. In the setting of Japanese electronics MNCs in East Asia, we empirically examine how prior investments in firm capabilities affect sequential investments into existing produ…

  14. Uncertainties in hydrogen combustion

    International Nuclear Information System (INIS)

    Stamps, D.W.; Wong, C.C.; Nelson, L.S.

    1988-01-01

    Three important areas of uncertainty in hydrogen combustion are identified: high-temperature combustion, flame acceleration and deflagration-to-detonation transition, and aerosol resuspension during hydrogen combustion. The uncertainties associated with high-temperature combustion may affect at least three different accident scenarios: the in-cavity oxidation of combustible gases produced by core-concrete interactions, the direct containment heating hydrogen problem, and the possibility of local detonations. How these uncertainties may affect the sequence of various accident scenarios is discussed, and recommendations are made to reduce these uncertainties. 40 references

  15. Uncertainty in artificial intelligence

    CERN Document Server

    Kanal, LN

    1986-01-01

    How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy.Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.

  16. Retrieval of sea surface velocities using sequential ocean colour monitor (OCM) data

    Digital Repository Service at National Institute of Oceanography (India)

    Prasad, J.S.; Rajawat, A.S.; Pradhan, Y.; Chauhan, O.S.; Nayak, S.R.

    A method for the retrieval of sea surface velocities using sequential Ocean Colour Monitor (OCM) data has been developed. The method is based on matching suspended-sediment dispersion patterns in a sequential pair of time-lapsed images. The pattern matching is performed on an atmospherically corrected and geo-referenced sequential pair of images by Maximum…
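
    The pattern-matching step, commonly implemented as a maximum cross-correlation search, can be sketched as follows: a template from the first image is shifted over the second, the displacement maximizing the normalized correlation is retained, and velocity is displacement divided by the time lapse. Window sizes, pixel scale, and time lapse below are hypothetical.

      import numpy as np

      def mcc_velocity(img1, img2, y, x, win=16, search=8,
                       dt_hours=24.0, km_per_px=1.0):
          # Velocity at (y, x) by maximum cross-correlation of image patches.
          tpl = img1[y:y + win, x:x + win].astype(float)
          tpl -= tpl.mean()
          best, best_dy, best_dx = -np.inf, 0, 0
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  cand = img2[y + dy:y + dy + win, x + dx:x + dx + win].astype(float)
                  cand -= cand.mean()
                  denom = np.sqrt((tpl**2).sum() * (cand**2).sum())
                  if denom > 0 and (tpl * cand).sum() / denom > best:
                      best = (tpl * cand).sum() / denom
                      best_dy, best_dx = dy, dx
          # Displacement in pixels over the time lapse gives a velocity vector.
          return (best_dx * km_per_px / dt_hours, best_dy * km_per_px / dt_hours)

      # Synthetic check: a sediment-like pattern advected 3 px east, 1 px south.
      rng = np.random.default_rng(4)
      a = rng.random((64, 64))
      b = np.roll(np.roll(a, 3, axis=1), 1, axis=0)
      print(mcc_velocity(a, b, y=24, x=24))   # ~(0.125, 0.042) km/h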

  17. Probabilistic learning of nonlinear dynamical systems using sequential Monte Carlo

    Science.gov (United States)

    Schön, Thomas B.; Svensson, Andreas; Murray, Lawrence; Lindsten, Fredrik

    2018-05-01

    Probabilistic modeling provides the capability to represent and manipulate uncertainty in data, models, predictions and decisions. We are concerned with the problem of learning probabilistic models of dynamical systems from measured data. Specifically, we consider learning of probabilistic nonlinear state-space models. There is no closed-form solution available for this problem, implying that we are forced to use approximations. In this tutorial we will provide a self-contained introduction to one of the state-of-the-art methods, the particle Metropolis-Hastings algorithm, which has proven to offer a practical approximation. This is a Monte Carlo based method, where the particle filter is used to guide a Markov chain Monte Carlo method through the parameter space. One of the key merits of the particle Metropolis-Hastings algorithm is that it is guaranteed to converge to the "true solution" under mild assumptions, despite being based on a particle filter with only a finite number of particles. We will also provide a motivating numerical example illustrating the method using a modeling language tailored for sequential Monte Carlo methods. The intention of modeling languages of this kind is to open up the power of sophisticated Monte Carlo methods, including particle Metropolis-Hastings, to a large group of users without requiring them to know all the underlying mathematical details.
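
    A compact sketch of the approach the tutorial describes, on a toy linear-Gaussian state-space model: a bootstrap particle filter supplies a likelihood estimate, which drives a Metropolis-Hastings random walk over the parameter (pseudo-marginal MCMC). The model, the flat prior, and the tuning constants are illustrative choices, not the tutorial's example.

      import numpy as np

      rng = np.random.default_rng(5)

      # Toy state-space model: x_t = theta * x_{t-1} + v_t,  y_t = x_t + e_t.
      theta_true, T, x, ys = 0.7, 100, 0.0, []
      for _ in range(T):
          x = theta_true * x + rng.normal(0.0, 1.0)
          ys.append(x + rng.normal(0.0, 1.0))

      def log_lik(theta, n=200):
          # Bootstrap particle filter estimate of log p(y_1:T | theta).
          particles, ll = np.zeros(n), 0.0
          for y in ys:
              particles = theta * particles + rng.normal(0.0, 1.0, n)
              logw = -0.5 * (y - particles) ** 2 - 0.5 * np.log(2 * np.pi)
              m = logw.max()
              w = np.exp(logw - m)
              ll += m + np.log(w.mean())
              idx = rng.choice(n, n, p=w / w.sum())   # multinomial resampling
              particles = particles[idx]
          return ll

      # Particle Metropolis-Hastings: flat prior, Gaussian random-walk proposal.
      theta, ll, chain = 0.0, log_lik(0.0), []
      for _ in range(500):
          prop = theta + rng.normal(0.0, 0.1)
          ll_prop = log_lik(prop)
          if np.log(rng.random()) < ll_prop - ll:
              theta, ll = prop, ll_prop
          chain.append(theta)
      print(f"posterior mean of theta ~ {np.mean(chain[200:]):.2f}")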

  18. A fast and accurate online sequential learning algorithm for feedforward networks.

    Science.gov (United States)

    Liang, Nan-Ying; Huang, Guang-Bin; Saratchandran, P; Sundararajan, N

    2006-11-01

    In this paper, we develop an online sequential learning algorithm for single hidden layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes in a unified framework. The algorithm is referred to as the online sequential extreme learning machine (OS-ELM) and can learn data one-by-one or chunk-by-chunk (a block of data) with fixed or varying chunk size. The activation functions for additive nodes in OS-ELM can be any bounded nonconstant piecewise continuous functions, and the activation functions for RBF nodes can be any integrable piecewise continuous functions. In OS-ELM, the parameters of the hidden nodes (the input weights and biases of additive nodes, or the centers and impact factors of RBF nodes) are randomly selected, and the output weights are analytically determined based on the sequentially arriving data. The algorithm builds on the extreme learning machine (ELM) ideas of Huang et al., developed for batch learning, which have been shown to be extremely fast with generalization performance better than other batch training methods. Apart from selecting the number of hidden nodes, no other control parameters have to be manually chosen. A detailed performance comparison of OS-ELM with other popular sequential learning algorithms is carried out on benchmark problems drawn from the regression, classification and time series prediction areas. The results show that OS-ELM is faster than the other sequential algorithms and produces better generalization performance.
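
    A minimal OS-ELM sketch for one-dimensional regression, following the structure described above: a random, fixed hidden layer, a batch initialization, then a recursive least-squares update per arriving chunk. The hyperparameters are arbitrary, and the small ridge term in the initialization is an addition for numerical stability, not part of the original formulation.

      import numpy as np

      rng = np.random.default_rng(6)

      L = 25                                   # number of hidden nodes
      W = rng.normal(size=(1, L))              # random input weights (fixed)
      b = rng.normal(size=L)                   # random biases (fixed)

      def hidden(X):
          return 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden layer

      def chunk(n):                            # data arrive chunk by chunk
          X = rng.uniform(0.0, 2.0 * np.pi, size=(n, 1))
          return X, np.sin(X)                  # target: y = sin(x)

      # Initialization phase on a small batch.
      X0, Y0 = chunk(50)
      H0 = hidden(X0)
      P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(L))   # ridge for stability
      beta = P @ H0.T @ Y0

      # Sequential phase: recursive least-squares update, one chunk at a time.
      for _ in range(20):
          X1, Y1 = chunk(20)
          H1 = hidden(X1)
          S = np.linalg.inv(np.eye(len(X1)) + H1 @ P @ H1.T)
          P = P - P @ H1.T @ S @ H1 @ P
          beta = beta + P @ H1.T @ (Y1 - H1 @ beta)

      Xt, Yt = chunk(200)
      print(f"test RMSE: {np.sqrt(np.mean((hidden(Xt) @ beta - Yt) ** 2)):.3f}")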

  19. Characterisation of a reference site for quantifying uncertainties related to soil sampling

    International Nuclear Information System (INIS)

    Barbizzi, Sabrina; Zorzi, Paolo de; Belli, Maria; Pati, Alessandra; Sansone, Umberto; Stellato, Luisa; Barbina, Maria; Deluisa, Andrea; Menegon, Sandro; Coletti, Valter

    2004-01-01

    An integrated approach to quality assurance in soil sampling remains to be accomplished. The paper reports a methodology adopted to address problems related to quality assurance in soil sampling. The SOILSAMP project, funded by the Environmental Protection Agency of Italy (APAT), is aimed at (i) establishing protocols for soil sampling in different environments; (ii) assessing the uncertainties associated with different soil sampling methods in order to select the 'fit-for-purpose' method; and (iii) qualifying, in terms of trace-element spatial variability, a reference site for national and international inter-comparison exercises. Preliminary results and considerations are presented.

  20. Sequential spatial processes for image analysis

    NARCIS (Netherlands)

    Lieshout, van M.N.M.; Capasso, V.

    2009-01-01

    We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects through a video