WorldWideScience

Sample records for sparsity gap uncertainty

  1. Uncertainty, probability and information-gaps

    Ben-Haim, Yakov

    2004-01-01

    This paper discusses two main ideas. First, we focus on info-gap uncertainty, as distinct from probability. Info-gap theory is especially suited for modelling and managing uncertainty in system models: we invest all our knowledge in formulating the best possible model; this leaves the modeller with very faulty and fragmentary information about the variation of reality around that optimal model. Second, we examine the interdependence between uncertainty modelling and decision-making. Good uncertainty modelling requires contact with the end-use, namely, with the decision-making application of the uncertainty model. The most important avenue of uncertainty-propagation is from initial data- and model-uncertainties into uncertainty in the decision-domain. Two questions arise. Is the decision robust to the initial uncertainties? Is the decision prone to opportune windfall success? We apply info-gap robustness and opportunity functions to the analysis of representation and propagation of uncertainty in several of the Sandia Challenge Problems.
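
    The info-gap robustness function described above has a simple computational form: the largest uncertainty horizon within which a decision still meets its performance requirement. Below is a minimal numerical sketch of that idea for a hypothetical one-parameter system model with a fractional-error info-gap model; the model, the requirement and all numbers are illustrative, not taken from the paper or the Sandia Challenge Problems.

```python
import numpy as np

def robustness(decision_q, u_nominal, y_required, h_grid):
    """Info-gap robustness: largest uncertainty horizon h such that the
    performance requirement y >= y_required holds for ALL models
    u in [u_nominal*(1-h), u_nominal*(1+h)] (a simple fractional-error
    info-gap model around the nominal estimate). Illustrative only."""
    h_hat = 0.0
    for h in h_grid:
        u_worst = u_nominal * (1.0 - h)          # worst case for y = q*u with q > 0
        if decision_q * u_worst >= y_required:
            h_hat = h
        else:
            break
    return h_hat

# Two hypothetical decisions: the nominally "optimal" one and a more conservative one.
h_grid = np.linspace(0.0, 1.0, 501)
for q in (1.0, 1.5):
    print(q, robustness(q, u_nominal=2.0, y_required=1.8, h_grid=h_grid))
# The larger q tolerates a larger horizon of uncertainty before the
# requirement y >= 1.8 is violated, i.e. it is the more robust decision.
```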

  2. On Uncertainty and the WTA-WTP Gap

    Douglas D. Davis; Robert J. Reilly

    2012-01-01

    We correct an analysis by Isik (2004) regarding the effects of uncertainty on the WTA-WTP gap. Isik presents as his primary result a proposition that the introduction of uncertainty regarding environmental quality improvements causes WTA to increase and WTP to decrease by identical amounts relative to a certainty condition where WTA=WTP. These conclusions are incorrect. In fact, WTP may equal WTA even with uncertainty, and increases in the uncertainty of environmental quality improvements cau...

  3. Output gap uncertainty and real-time monetary policy

    Francesco Grigoli

    2015-12-01

    Output gap estimates are subject to a wide range of uncertainty owing principally to the difficulty in distinguishing between cycle and trend in real time. We show that country desks tend to overestimate economic slack, especially during recessions, and that uncertainty in initial output gap estimates persists for several years. Only a small share of output gap revisions is predictable based on output dynamics, data quality, and policy frameworks. We also show that for a group of Latin American inflation targeters the prescriptions from monetary policy rules are subject to large changes due to revised output gap estimates. These explain a sizable proportion of the deviation of inflation from target, suggesting this information is not accounted for in real-time policy decisions.

  4. Uncertainty as Information: Narrowing the Science-policy Gap

    G. A. Bradshaw

    2000-07-01

    Conflict and indecision are hallmarks of environmental policy formulation. Some argue that the requisite information and certainty fall short of scientific standards for decision making; others argue that science is not the issue and that indecisiveness reflects a lack of political willpower. One of the most difficult aspects of translating science into policy is scientific uncertainty. Whereas scientists are familiar with uncertainty and complexity, the public and policy makers often seek certainty and deterministic solutions. We assert that environmental policy is most effective if scientific uncertainty is incorporated into a rigorous decision-theoretic framework as knowledge, not ignorance. The policies that best utilize scientific findings are defined here as those that accommodate the full scope of scientifically based predictions.

  5. Uncertainty relations and topological-band insulator transitions in 2D gapped Dirac materials

    Romera, E; Calixto, M

    2015-01-01

    Uncertainty relations are studied for a characterization of topological-band insulator transitions in 2D gapped Dirac materials isostructural with graphene. We show that the relative or Kullback–Leibler entropy in position and momentum spaces, and the standard variance-based uncertainty relation give sharp signatures of topological phase transitions in these systems. (paper)

  6. Managing Uncertainty in Water Infrastructure Design Using Info-gap Robustness

    Irias, X.; Cicala, D.

    2013-12-01

    Info-gap theory, a tool for managing deep uncertainty, can be of tremendous value for design of water systems in areas of high seismic risk. Maintaining reliable water service in those areas is subject to significant uncertainties including uncertainty of seismic loading, unknown seismic performance of infrastructure, uncertain costs of innovative seismic-resistant construction, unknown costs to repair seismic damage, unknown societal impacts from downtime, and more. Practically every major earthquake that strikes a population center reveals additional knowledge gaps. In situations of such deep uncertainty, info-gap can offer advantages over traditional approaches, whether deterministic approaches that use empirical safety factors to address the uncertainties involved, or probabilistic methods that attempt to characterize various stochastic properties and target a compromise between cost and reliability. The reason is that in situations of deep uncertainty, it may not be clear what safety factor would be reasonable, or even if any safety factor is sufficient to address the uncertainties, and we may lack data to characterize the situation probabilistically. Info-gap is a tool that recognizes up front that our best projection of the future may be wrong. Thus, rather than seeking a solution that is optimal for that projection, info-gap seeks a solution that works reasonably well for all plausible conditions. In other words, info-gap seeks solutions that are robust in the face of uncertainty. Info-gap has been used successfully across a wide range of disciplines including climate change science, project management, and structural design. EBMUD is currently using info-gap to help it gain insight into possible solutions for providing reliable water service to an island community within its service area. The island, containing about 75,000 customers, is particularly vulnerable to water supply disruption from earthquakes, since it has negligible water storage and is

  7. Probabilistic models for structured sparsity

    Andersen, Michael Riis

    sparse solutions to linear inverse problems. In this part, the sparsity promoting prior known as the spike-and-slab prior (Mitchell and Beauchamp, 1988) is generalized to the structured sparsity setting. An expectation propagation algorithm is derived for approximate posterior inference. The proposed...
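
    For readers unfamiliar with the spike-and-slab prior referenced above (Mitchell and Beauchamp, 1988), the following toy sketch draws i.i.d. sparse coefficient vectors from it; the structured extension and the expectation propagation inference developed in the thesis are not reproduced, and the sparsity level and slab variance are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spike_and_slab(n, pi=0.1, slab_std=1.0, size=1):
    """Draw `size` coefficient vectors of length n from an i.i.d.
    spike-and-slab prior: each entry is 0 with probability 1-pi (the spike)
    and Normal(0, slab_std^2) with probability pi (the slab)."""
    support = rng.random((size, n)) < pi           # Bernoulli support indicators
    values = rng.normal(0.0, slab_std, (size, n))  # slab values
    return support * values

x = sample_spike_and_slab(n=50, pi=0.1, size=1000)
print("average fraction of non-zeros:", (x != 0).mean())  # close to pi = 0.1
```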

  8. Decision Making Under Uncertainty - Bridging the Gap Between End User Needs and Science Capability

    Verdon-Kidd, D. C.; Kiem, A.; Austin, E. K.

    2012-12-01

    Successful adaptation outcomes depend on decision making based on the best available climate science information. However, a fundamental barrier exists, namely the 'gap' between information that climate science can currently provide and the information that is practically useful for end users and decision makers. This study identifies the major contributing factors to the 'gap' from an Australian perspective and provides recommendations as to ways in which the 'gap' may be narrowed. This was achieved via a literature review, online survey (targeted to providers of climate information and end users of that information), workshop (where both climate scientists and end users came together to discuss key issues) and focus group. The study confirmed that uncertainty in climate science is a key barrier to adaptation. The issue of uncertainty was found to be multi-faceted, with issues identified in terms of communication of uncertainty, misunderstanding of uncertainty and the lack of tools/methods to deal with uncertainty. There were also key differences in terms of expectations for the future - most end users were of the belief that uncertainty associated with future climate projections would decrease within the next five to 10 years; however, producers of climate science information were well aware that this would most likely not be the case. This is a concerning finding as end users may delay taking action on adaptation and risk planning until the uncertainties are reduced - a situation which may never eventuate or may occur after the optimal time for action. Improved communication and packaging of climate information was another key theme that was highlighted in this study. Importantly, it was made clear that improved communication is not just about more glossy brochures and presentations by climate scientists; rather, there is a role for a program or group to fill this role (coined a 'knowledge broker' during the workshop and focus group). The role of the 'knowledge

  9. Probabilistic distributions of pin gaps within a wire-spaced fuel subassembly and sensitivities of the related uncertainties to pin gap

    Sakai, K.; Hishida, H.

    1978-01-01

    Probabilistic fuel pin gap distributions within a wire-spaced fuel subassembly and sensitivities of the related uncertainties to fuel pin gaps are discussed. The analyses consist mainly of expressing a local fuel pin gap in terms of sensitivity functions of the related uncertainties and calculating the corresponding probabilistic distribution through taking all the possible combinations of the distribution of uncertainties. The results of illustrative calculations show that with the reliability level of 0.9987, the maximum deviation of the pin gap at the cladding hot spot of a center fuel subassembly is 8.05% from its nominal value and the corresponding probabilistic pin gap distribution is shifted to the narrower side due to the external confinement of a pin bundle with a wrapper tube. (Auth.)

  10. Lyme disease ecology in a changing world: Consensus, uncertainty and critical gaps for improving control

    Kilpatrick, A. Marm; Dobson, Andrew D.M.; Levi, Taal; Salkeld, Daniel J.; Swei, Andrea; Ginsberg, Howard; Kjemtrup, Anne; Padgett, Kerry A.; Jensen, Per A.; Fish, Durland; Ogden, Nick H.; Diuk-Wasser, Maria A.

    2017-01-01

    Lyme disease is the most common tick-borne disease in temperate regions of North America, Europe and Asia, and the number of reported cases has increased in many regions as landscapes have been altered. Although there has been extensive work on the ecology and epidemiology of this disease in both Europe and North America, substantial uncertainty exists about fundamental aspects that determine spatial and temporal variation in both disease risk and human incidence, which hamper effective and efficient prevention and control. Here we describe areas of consensus that can be built on, identify areas of uncertainty and outline research needed to fill these gaps to facilitate predictive models of disease risk and the development of novel disease control strategies. Key areas of uncertainty include: (i) the precise influence of deer abundance on tick abundance, (ii) how tick populations are regulated, (iii) assembly of host communities and tick-feeding patterns across different habitats, (iv) reservoir competence of host species, and (v) pathogenicity for humans of different genotypes of Borrelia burgdorferi. Filling these knowledge gaps will improve Lyme disease prevention and control and provide general insights into the drivers and dynamics of this emblematic multi-host–vector-borne zoonotic disease.

  11. Rapidity gap survival in central exclusive diffraction: Dynamical mechanisms and uncertainties

    Strikman, Mark; Weiss, Christian

    2009-01-01

    We summarize our understanding of the dynamical mechanisms governing rapidity gap survival in central exclusive diffraction, pp -> p + H + p (H = high-mass system), and discuss the uncertainties in present estimates of the survival probability. The main suppression of diffractive scattering is due to inelastic soft spectator interactions at small pp impact parameters and can be described in a mean-field approximation (independent hard and soft interactions). Moderate extra suppression results from fluctuations of the partonic configurations of the colliding protons. At LHC energies absorptive interactions of hard spectator partons associated with the gg -> H process reach the black-disk regime and cause substantial additional suppression, pushing the survival probability below 0.01.

  12. Research on the effects of geometrical and material uncertainties on the band gap of the undulated beam

    Li, Yi; Xu, Yanlong

    2017-09-01

    Considering uncertain geometrical and material parameters, the lower and upper bounds of the band gap of an undulated beam with periodically arched shape are studied by Monte Carlo Simulation (MCS) and interval analysis based on the Taylor series. Given the random variations of the overall uncertain variables, scatter plots from the MCS are used to analyze the qualitative sensitivities of the band gap with respect to these uncertainties. We find that the influence of uncertainty in the geometrical parameter on the band gap of the undulated beam is stronger than that of the material parameter. This conclusion is also confirmed by the interval analysis based on the Taylor series. Our methodology gives a strategy to reduce the errors between the design and practical values of the band gaps by improving the accuracy of the specially selected uncertain design variables of the periodical structures.
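
    The comparison between Monte Carlo simulation and a first-order Taylor (interval) bound can be illustrated with a toy band-edge function of one geometrical and one material parameter; the function and the 5% intervals below are purely illustrative stand-ins, not the undulated-beam model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "band-gap edge" as a function of a geometrical parameter g and a
# material parameter m (illustrative only, not the beam model).
f = lambda g, m: 100.0 * np.sqrt(g) + 5.0 * m

g0, m0 = 1.0, 2.0            # nominal values
dg, dm = 0.05, 0.05          # interval half-widths (5 % uncertainty)

# Monte Carlo simulation over the uncertain intervals.
g = rng.uniform(g0 - dg, g0 + dg, 100_000)
m = rng.uniform(m0 - dm, m0 + dm, 100_000)
samples = f(g, m)
print("MCS bounds:   ", samples.min(), samples.max())

# First-order Taylor / interval bound: |df| <= |df/dg|*dg + |df/dm|*dm.
dfdg = 100.0 / (2.0 * np.sqrt(g0))
dfdm = 5.0
radius = abs(dfdg) * dg + abs(dfdm) * dm
print("Taylor bounds:", f(g0, m0) - radius, f(g0, m0) + radius)

# The geometrical parameter dominates here because |df/dg|*dg >> |df/dm|*dm,
# mirroring the kind of sensitivity comparison made in the abstract.
print("contributions:", abs(dfdg) * dg, abs(dfdm) * dm)
```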

  13. Making Invasion models useful for decision makers; incorporating uncertainty, knowledge gaps, and decision-making preferences

    Denys Yemshanov; Frank H Koch; Mark Ducey

    2015-01-01

    Uncertainty is inherent in model-based forecasts of ecological invasions. In this chapter, we explore how the perceptions of that uncertainty can be incorporated into the pest risk assessment process. Uncertainty changes a decision maker’s perceptions of risk; therefore, the direct incorporation of uncertainty may provide a more appropriate depiction of risk. Our...

  14. Info-Gap robustness pathway method for transitioning of urban drainage systems under deep uncertainties.

    Zischg, Jonatan; Goncalves, Mariana L R; Bacchin, Taneha Kuzniecow; Leonhardt, Günther; Viklander, Maria; van Timmeren, Arjan; Rauch, Wolfgang; Sitzenfrei, Robert

    2017-09-01

    In the urban water cycle, there are different ways of handling stormwater runoff. Traditional systems mainly rely on underground piped infrastructure, sometimes called 'gray' infrastructure. New, so-called 'green/blue' ambitions aim to treat and convey the runoff at the surface. Such concepts are mainly based on ground infiltration and temporary storage. In this work a methodology to create and compare different planning alternatives for stormwater handling on their pathways to a desired system state is presented. Investigations are made to assess the system performance and robustness when facing the deeply uncertain spatial and temporal developments in the future urban fabric, including impacts caused by climate change, urbanization and other disruptive events, like shifts in the network layout and interactions of 'gray' and 'green/blue' structures. With the Info-Gap robustness pathway method, three planning alternatives are evaluated to identify critical performance levels at different stages over time. This novel methodology is applied to a real case study problem where a city relocation process takes place during the upcoming decades. In this case study it is shown that hybrid systems including green infrastructures are more robust with respect to future uncertainties, compared to traditional network design.

  15. Radioecological consequences of potential accident in Norwegian coastal waters. Uncertainties and knowledges gaps in methodology

    Iosjpe, M.; Reistad, O.; Brown, J.; Jaworska, A.; Amundsen, I.

    2007-01-01

    A potential accident involving the transport of spent nuclear fuel along the Norwegian coastline has been chosen for evaluation of a dose assessment methodology. The accident scenario assumes that the release of the radioactivity takes place under water and that there is free exchange of water between the spent fuel and the sea. The inventory has been calculated using the ORIGEN programme. Radioecological consequences are provided by the NRPA compartment model, which includes the processes of advection of radioactivity between compartments and water-sediment interactions. The contamination of biota is further calculated from the radionuclide concentrations in filtered seawater in the different water regions. Doses to man are calculated on the basis of radionuclide concentrations in marine organisms, water and sediment and dose conversion factors. Collective dose-rates to man, doses to the critical groups, concentrations of radionuclides in biota/sea-foods and doses to marine organisms were calculated through the evaluation of radioecological consequences after accidents. Results of calculations indicate that concentrations of radionuclides for some marine organisms can exceed guideline levels. At the same time, collective dose rates to man as well as doses to a critical group are not higher than guideline levels. Comparison of results from calculations with provisional benchmark values suggests that doses to biota are in most cases unlikely to be of concern. However, doses to some marine organisms can be much higher than the screening dose of 10 μGy/h over long periods. It is apparent that water-sediment distribution coefficients and concentration factors constitute the main sources of uncertainties in the present case. It is important to note that knowledge gaps concerning the influence of relatively low doses to populations of marine organisms over long time periods (many generations) substantially constrain the assessor's ability to

  16. Local sparsity enhanced compressed sensing magnetic resonance imaging in uniform discrete curvelet domain

    Yang, Bingxin; Yuan, Min; Ma, Yide; Zhang, Jiuwen; Zhan, Kun

    2015-01-01

    Compressed sensing (CS) has been well applied to speed up imaging by exploring image sparsity over predefined basis functions or a learnt dictionary. Firstly, the sparse representation is generally obtained in a single transform domain by using wavelet-like methods, which cannot produce optimal sparsity when sparsity, data adaptivity and computational complexity are considered together. Secondly, most state-of-the-art reconstruction models seldom consider composite regularization upon the various structural features of images and transform coefficient sub-bands. Therefore, these two points lead to high sampling rates for reconstructing high-quality images. In this paper, an efficient composite sparsity structure is proposed. It learns an adaptive dictionary from patches of lowpass uniform discrete curvelet transform sub-band coefficients. Consistent with the sparsity structure, a novel composite regularization reconstruction model is developed to improve reconstruction results from highly undersampled k-space data. It is established via minimizing total variation regularization of the spatial image and the lowpass sub-band coefficients, ℓ1 sparse regularization of the transform sub-band coefficients, and a fidelity constraint on the k-space measurements. A new augmented Lagrangian method is then introduced to optimize the reconstruction model. It updates the representation coefficients of the lowpass sub-band coefficients over the dictionary, the transform sub-band coefficients and the k-space measurements upon the ideas of the constrained split augmented Lagrangian shrinkage algorithm. Experimental results on in vivo data show that the proposed method obtains high-quality reconstructed images. The reconstructed images exhibit the least aliasing artifacts and reconstruction error among current CS MRI methods. The proposed sparsity structure can fit and provide hierarchical sparsity for magnetic resonance images simultaneously, bridging the gap between predefined sparse representation methods and explicit dictionaries. The new augmented
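
    As a rough illustration of the CS reconstruction loop described above (data consistency in k-space alternated with sparsity-promoting shrinkage), here is a heavily simplified sketch that uses a global DCT in place of the paper's curvelet/dictionary model and plain soft-thresholding in place of its composite regularization; sizes, sampling rate and threshold are illustrative.

```python
import numpy as np
from scipy.fft import fft2, ifft2, dctn, idctn

rng = np.random.default_rng(2)

# Toy image that is sparse in the 2-D DCT domain (illustrative stand-in for an MR image).
n = 64
coeffs = np.zeros((n, n))
coeffs[rng.integers(0, 8, 20), rng.integers(0, 8, 20)] = rng.normal(0, 10, 20)
image = idctn(coeffs, norm="ortho")

# Random undersampling of k-space (keep ~30 % of the samples).
mask = rng.random((n, n)) < 0.3
y = mask * fft2(image, norm="ortho")

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# POCS-style loop: soft-threshold DCT coefficients, then re-enforce the
# measured k-space samples (data consistency).
x = np.real(ifft2(y, norm="ortho"))
for _ in range(200):
    c = soft(dctn(x, norm="ortho"), 0.02)
    x = idctn(c, norm="ortho")
    k = fft2(x, norm="ortho")
    k[mask] = y[mask]
    x = np.real(ifft2(k, norm="ortho"))

print("relative error:", np.linalg.norm(x - image) / np.linalg.norm(image))
```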

  17. Uncertainties

    To reflect this uncertainty in the climate scenarios, the use of AOGCMs that explicitly simulate the carbon cycle and chemistry of all the substances is needed. The Hadley Centre has developed a version of the climate model that allows the effect of climate change on the carbon cycle and its feedback into climate, to be ...

  18. Uncertainty

    Silva, T.A. da

    1988-01-01

    A comparison between the uncertainty methods recommended by the International Atomic Energy Agency (IAEA) and by the International Committee for Weights and Measures (CIPM) is presented for the calibration of clinical dosimeters in the Secondary Standard Dosimetry Laboratory (SSDL). (C.G.C.) [pt

  19. Sparsity and spectral properties of dual frames

    Krahmer, Felix; Kutyniok, Gitta; Lemvig, Jakob

    2013-01-01

    We study sparsity and spectral properties of dual frames of a given finite frame. We show that any finite frame has a dual with no more than $n^2$ non-vanishing entries, where $n$ denotes the ambient dimension, and that for most frames no sparser dual is possible. Moreover, we derive an expressio...
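
    As background for the dual frames discussed above, a small numpy sketch computes the canonical dual of a random finite frame and verifies perfect reconstruction; the sparse duals with at most $n^2$ non-vanishing entries constructed in the paper are not derived here.

```python
import numpy as np

rng = np.random.default_rng(3)

n, k = 4, 7                       # ambient dimension n, number of frame vectors k
F = rng.normal(size=(n, k))       # frame vectors as columns (generic => full rank)

# Canonical dual frame: G = (F F^T)^{-1} F, so that G F^T = I_n.
G = np.linalg.solve(F @ F.T, F)

x = rng.normal(size=n)
coeffs = F.T @ x                  # analysis with the frame
x_rec = G @ coeffs                # synthesis with the canonical dual
print("perfect reconstruction:", np.allclose(x, x_rec))

# The canonical dual is generally fully populated; the paper shows that some
# dual with at most n^2 non-vanishing entries always exists.
print("non-zeros in canonical dual:", np.count_nonzero(G))
```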

  20. Sparsity in Linear Predictive Coding of Speech

    Giacobello, Daniele

    of the effectiveness of their application in audio processing. The second part of the thesis deals with introducing sparsity directly in the linear prediction analysis-by-synthesis (LPAS) speech coding paradigm. We first propose a novel near-optimal method to look for a sparse approximate excitation using a compressed...... one with direct applications to coding but also consistent with the speech production model of voiced speech, where the excitation of the all-pole filter can be modeled as an impulse train, i.e., a sparse sequence. Introducing sparsity in the LP framework will also bring to develop the concept...... sensing formulation. Furthermore, we define a novel re-estimation procedure to adapt the predictor coefficients to the given sparse excitation, balancing the two representations in the context of speech coding. Finally, the advantages of the compact parametric representation of a segment of speech, given...

  1. Gaps in knowledge and data driving uncertainty in models of photosynthesis.

    Dietze, Michael C

    2014-02-01

    Regional and global models of the terrestrial biosphere depend critically on models of photosynthesis when predicting impacts of global change. This paper focuses on identifying the primary data needs of these models, what scales drive uncertainty, and how to improve measurements. Overall, there is a need for an open, cross-discipline database on leaf-level photosynthesis in general, and response curves in particular. The parameters in photosynthetic models are not constant through time, space, or canopy position but there is a need for a better understanding of whether relationships with drivers, such as leaf nitrogen, are themselves scale dependent. Across time scales, as ecosystem models become more sophisticated in their representations of succession they need to be able to approximate sunfleck responses to capture understory growth and survival. At both high and low latitudes, photosynthetic data are inadequate in general and there is a particular need to better understand thermal acclimation. Simple models of acclimation suggest that shifts in optimal temperature are important. However, there is little advantage to synoptic-scale responses and circadian rhythms may be more beneficial than acclimation over shorter timescales. At high latitudes, there is a need for a better understanding of low-temperature photosynthetic limits, while at low latitudes the need is for a better understanding of phosphorus limitations on photosynthesis. In terms of sampling, measuring multivariate photosynthetic response surfaces is potentially more efficient and more accurate than traditional univariate response curves. Finally, there is a need for greater community involvement in model validation and model-data synthesis.

  2. Accelerated Simulation of Kinetic Transport Using Variational Principles and Sparsity

    Caflisch, Russel [Univ. of California, Los Angeles, CA (United States)

    2017-06-30

    This project is centered on the development and application of techniques of sparsity and compressed sensing for variational principles, PDEs and physics problems, in particular for kinetic transport. This included derivation of sparse modes for elliptic and parabolic problems coming from variational principles. The research results of this project are on methods for sparsity in differential equations and their applications and on application of sparsity ideas to kinetic transport of plasmas.

  3. Sparsity regularization for parameter identification problems

    Jin, Bangti; Maass, Peter

    2012-01-01

    The investigation of regularization schemes with sparsity promoting penalty terms has been one of the dominant topics in the field of inverse problems over the last years, and Tikhonov functionals with ℓp-penalty terms for 1 ⩽ p ⩽ 2 have been studied extensively. The first investigations focused on regularization properties of the minimizers of such functionals with linear operators and on iteration schemes for approximating the minimizers. These results were quickly transferred to nonlinear operator equations, including nonsmooth operators and more general function space settings. The latest results on regularization properties additionally assume a sparse representation of the true solution as well as generalized source conditions, which yield some surprising and optimal convergence rates. The regularization theory with ℓp sparsity constraints is relatively complete in this setting; see the first part of this review. In contrast, the development of efficient numerical schemes for approximating minimizers of Tikhonov functionals with sparsity constraints for nonlinear operators is still ongoing. The basic iterated soft shrinkage approach has been extended in several directions and semi-smooth Newton methods are becoming applicable in this field. In particular, the extension to more general non-convex, non-differentiable functionals by variational principles leads to a variety of generalized iteration schemes. We focus on such iteration schemes in the second part of this review. A major part of this survey is devoted to applying sparsity constrained regularization techniques to parameter identification problems for partial differential equations, which we regard as the prototypical setting for nonlinear inverse problems. Parameter identification problems exhibit different levels of complexity and we aim at characterizing a hierarchy of such problems. The operator defining these inverse problems is the parameter-to-state mapping. We first summarize some
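
    The iterated soft shrinkage approach mentioned in the review has a compact form for a linear operator; the sketch below is a minimal ISTA loop for the ℓ1 (p = 1) Tikhonov functional ½||Ax − y||² + α||x||₁ on a random instance, with illustrative sizes and regularization parameter.

```python
import numpy as np

rng = np.random.default_rng(4)

m, n, s = 60, 200, 8
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(0, 1, s)
y = A @ x_true + 0.01 * rng.normal(size=m)

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# ISTA (iterated soft shrinkage) for min_x 0.5*||Ax - y||^2 + alpha*||x||_1.
alpha = 0.02
L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the smooth part's gradient
x = np.zeros(n)
for _ in range(500):
    x = soft(x - (A.T @ (A @ x - y)) / L, alpha / L)

print("estimated support:", np.flatnonzero(np.abs(x) > 1e-3))
print("true support:     ", np.sort(np.flatnonzero(x_true)))
print("relative error:   ", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```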

  4. Line-averaging measurement methods to estimate the gap in the CO2 balance closure – possibilities, challenges, and uncertainties

    A. Ziemann

    2017-11-01

    An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is the crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated method combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper will give an overview about the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m−2 s−1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for CO2 concentration was estimated due to environmental parameters, instrumental characteristics, and retrieval procedure with a total amount of approximately

  5. Line-averaging measurement methods to estimate the gap in the CO2 balance closure - possibilities, challenges, and uncertainties

    Ziemann, Astrid; Starke, Manuela; Schütze, Claudia

    2017-11-01

    An imbalance of surface energy fluxes using the eddy covariance (EC) method is observed in global measurement networks although all necessary corrections and conversions are applied to the raw data. Mainly during nighttime, advection can occur, resulting in a closing gap that consequently should also affect the CO2 balances. There is the crucial need for representative concentration and wind data to measure advective fluxes. Ground-based remote sensing techniques are an ideal tool as they provide the spatially representative CO2 concentration together with wind components within the same voxel structure. For this purpose, the presented SQuAd (Spatially resolved Quantification of the Advection influence on the balance closure of greenhouse gases) approach applies an integrated method combination of acoustic and optical remote sensing. The innovative combination of acoustic travel-time tomography (A-TOM) and open-path Fourier-transform infrared spectroscopy (OP-FTIR) will enable an upscaling and enhancement of EC measurements. OP-FTIR instrumentation offers the significant advantage of real-time simultaneous measurements of line-averaged concentrations for CO2 and other greenhouse gases (GHGs). A-TOM is a scalable method to remotely resolve 3-D wind and temperature fields. The paper will give an overview about the proposed SQuAd approach and first results of experimental tests at the FLUXNET site Grillenburg in Germany. Preliminary results of the comprehensive experiments reveal a mean nighttime horizontal advection of CO2 of about 10 µmol m-2 s-1 estimated by the spatially integrating and representative SQuAd method. Additionally, uncertainties in determining CO2 concentrations using passive OP-FTIR and wind speed applying A-TOM are systematically quantified. The maximum uncertainty for CO2 concentration was estimated due to environmental parameters, instrumental characteristics, and retrieval procedure with a total amount of approximately 30 % for a single

  6. Distributed Unmixing of Hyperspectral Datawith Sparsity Constraint

    Khoshsokhan, S.; Rajabi, R.; Zayyani, H.

    2017-09-01

    Spectral unmixing (SU) is a data processing problem in hyperspectral remote sensing. The significant challenge in the SU problem is how to identify endmembers and their weights accurately. For estimation of signature and fractional abundance matrices in a blind problem, nonnegative matrix factorization (NMF) and its developments are used widely in the SU problem. One of the constraints added to NMF is a sparsity constraint regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization has been used for spectral unmixing. In the proposed algorithm, a network including single-node clusters has been employed. Each pixel in the hyperspectral image is considered as a node in this network. The distributed unmixing with sparsity constraint has been optimized with the diffusion LMS strategy, and then the update equations for fractional abundance and signature matrices are obtained. Simulation results based on defined performance metrics illustrate the advantage of the proposed algorithm in spectral unmixing of hyperspectral data compared with other methods. The results show that the AAD and SAD of the proposed approach are improved by about 6 and 27 percent, respectively, compared with distributed unmixing at an SNR of 25 dB.
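
    As a centralized (non-distributed) toy version of the sparsity-constrained unmixing described above, the sketch below runs NMF with an L1/2 penalty on the abundances using the multiplicative update commonly associated with L1/2-NMF; the diffusion LMS network of the paper is not reproduced, and all sizes and the regularization weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

bands, pixels, endmembers = 50, 400, 3
A_true = rng.random((bands, endmembers))                      # endmember signatures
S_true = rng.dirichlet(np.ones(endmembers) * 0.3, pixels).T   # sparse-ish abundances
X = A_true @ S_true + 0.005 * rng.random((bands, pixels))

lam, eps = 0.1, 1e-9
A = rng.random((bands, endmembers))
S = rng.random((endmembers, pixels))
for _ in range(500):
    # Multiplicative updates; the L1/2 abundance penalty lam * sum(sqrt(S))
    # contributes the 0.5*lam*S^(-1/2) term in the denominator of the S update.
    A *= (X @ S.T) / (A @ S @ S.T + eps)
    S *= (A.T @ X) / (A.T @ A @ S + 0.5 * lam / np.sqrt(np.maximum(S, eps)) + eps)

print("reconstruction error:", np.linalg.norm(X - A @ S) / np.linalg.norm(X))
```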

  7. Analysis of Coherent Phonon Signals by Sparsity-promoting Dynamic Mode Decomposition

    Murata, Shin; Aihara, Shingo; Tokuda, Satoru; Iwamitsu, Kazunori; Mizoguchi, Kohji; Akai, Ichiro; Okada, Masato

    2018-05-01

    We propose a method to decompose normal modes in a coherent phonon (CP) signal by sparsity-promoting dynamic mode decomposition. While the CP signals can be modeled as the sum of a finite number of damped oscillators, conventional methods such as the Fourier transform adopt continuous bases in the frequency domain. Thus, uncertainty in the frequency appears and it is difficult to estimate the initial phase. Moreover, measurement artifacts are imposed on the CP signal and deform the Fourier spectrum. In contrast, the proposed method can separate the signal from the artifact precisely and can successfully estimate physical properties of the normal modes.
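
    To make the 'finite number of damped oscillators' model concrete, the following sketch recovers the frequencies and damping rates of a synthetic two-mode signal with plain exact DMD on a delay-embedded (Hankel) matrix; the sparsity-promoting mode selection step of the paper is not included, and the signal parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic coherent-phonon-like signal: two damped cosines plus noise.
dt = 0.01
t = np.arange(0, 10, dt)
signal = (np.exp(-0.3 * t) * np.cos(2 * np.pi * 2.0 * t)
          + 0.5 * np.exp(-0.1 * t) * np.cos(2 * np.pi * 5.0 * t)
          + 0.01 * rng.normal(size=t.size))

# Delay (Hankel) embedding turns the scalar series into snapshot vectors.
d = 40
H = np.column_stack([signal[i:i + d] for i in range(t.size - d)])
X, Y = H[:, :-1], H[:, 1:]

# Exact DMD: eigenvalues of the rank-r projected one-step propagator.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 4                                   # two damped cosines -> four complex exponentials
U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
Atilde = U.conj().T @ Y @ Vt.conj().T / s
eigs = np.linalg.eigvals(Atilde)

omega = np.log(eigs) / dt               # continuous-time exponents
print("frequencies (Hz):", np.sort(np.abs(omega.imag) / (2 * np.pi)))   # ~2 and ~5
print("damping rates   :", np.sort(-omega.real))                        # ~0.1 and ~0.3
```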

  8. Sparsity reconstruction in electrical impedance tomography: An experimental evaluation

    Gehre, Matthias; Kluth, Tobias; Lipponen, Antti; Jin, Bangti; Seppä nen, Aku; Kaipio, Jari P.; Maass, Peter

    2012-01-01

    smoothness regularization approach. The results verify that the adoption of ℓ1-type constraints can enhance the quality of EIT reconstructions: in most of the test cases the reconstructions with sparsity constraints are both qualitatively and quantitatively

  9. Sparsity-weighted outlier FLOODing (OFLOOD) method: Efficient rare event sampling method using sparsity of distribution.

    Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru

    2016-03-30

    As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates that have high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice defines a rank in the hierarchy for each outlier, which relates to sparsity in the distribution. In this study, we define a lower rank (first ranked), a medium rank (second ranked), and the highest rank (third ranked) outliers, respectively. For instance, the first-ranked outliers are located in a given conformational space away from the clusters (highly sparse distribution), whereas the third-ranked outliers are near the clusters (a moderately sparse distribution). To achieve the conformational search efficiently, resampling from the outliers with a given rank is performed. As demonstrations, this method was applied to several model systems: Alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD highly accelerated the exploration of conformational space by expanding the edges. In contrast, the third-ranked OFLOOD reproduced local transitions among neighboring metastable states intensively. For quantitative evaluation of sampled snapshots, free energy calculations were performed with a combination of umbrella samplings, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.

  10. Sparsity reconstruction in electrical impedance tomography: An experimental evaluation

    Gehre, Matthias

    2012-02-01

    We investigate the potential of sparsity constraints in the electrical impedance tomography (EIT) inverse problem of inferring the distributed conductivity based on boundary potential measurements. In sparsity reconstruction, inhomogeneities of the conductivity are a priori assumed to be sparse with respect to a certain basis. This prior information is incorporated into a Tikhonov-type functional by including a sparsity-promoting ℓ1-penalty term. The functional is minimized with an iterative soft shrinkage-type algorithm. In this paper, the feasibility of the sparsity reconstruction approach is evaluated by experimental data from water tank measurements. The reconstructions are computed both with sparsity constraints and with a more conventional smoothness regularization approach. The results verify that the adoption of ℓ1-type constraints can enhance the quality of EIT reconstructions: in most of the test cases the reconstructions with sparsity constraints are both qualitatively and quantitatively more feasible than that with the smoothness constraint. © 2011 Elsevier B.V. All rights reserved.

  11. SparseBeads data: benchmarking sparsity-regularized computed tomography

    Jørgensen, Jakob S.; Coban, Sophia B.; Lionheart, William R. B.; McDonald, Samuel A.; Withers, Philip J.

    2017-12-01

    Sparsity regularization (SR) such as total variation (TV) minimization allows accurate image reconstruction in x-ray computed tomography (CT) from fewer projections than analytical methods. Exactly how few projections suffice and how this number may depend on the image remain poorly understood. Compressive sensing connects the critical number of projections to the image sparsity, but does not cover CT; however, empirical results suggest a similar connection. The present work establishes for real CT data a connection between gradient sparsity and the sufficient number of projections for accurate TV-regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels, number of projections and noise levels to allow the systematic assessment of parameters affecting performance of SR reconstruction algorithms. Using the SparseBeads data, TV-regularized reconstruction quality was assessed as a function of numbers of projections and gradient sparsity. The critical number of projections for satisfactory TV-regularized reconstruction increased almost linearly with the gradient sparsity. This establishes a quantitative guideline from which one may predict how few projections to acquire based on expected sample sparsity level as an aid in planning of dose- or time-critical experiments. The results are expected to hold for samples of similar characteristics, i.e. consisting of few, distinct phases with relatively simple structure. Such cases are plentiful in porous media, composite materials, foams, as well as non-destructive testing and metrology. For samples of other characteristics the proposed methodology may be used to investigate similar relations.
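
    The gradient-sparsity measure underlying the benchmark, and the kind of near-linear planning rule it supports, can be sketched in a few lines; the threshold and the proportionality constant below are illustrative assumptions, not values from the SparseBeads study.

```python
import numpy as np

# Toy piecewise-constant phantom: few distinct phases => sparse gradient.
img = np.zeros((128, 128))
img[30:90, 40:100] = 1.0
img[50:70, 60:80] = 2.0

# Gradient sparsity = number of pixels where the discrete gradient is non-zero.
gx = np.diff(img, axis=0, prepend=0)
gy = np.diff(img, axis=1, prepend=0)
grad_sparsity = np.count_nonzero(np.hypot(gx, gy) > 1e-12)
print("gradient sparsity:", grad_sparsity, "of", img.size, "pixels")

# Illustrative planning rule: critical number of projections grows roughly
# linearly with the gradient sparsity; the slope must be calibrated per setup.
c = 0.02
print("suggested number of projections:", int(np.ceil(c * grad_sparsity)))
```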

  12. Topology Identification of Coupling Map Lattice under Sparsity Condition

    Jiangni Yu

    2015-01-01

    Coupling map lattice is an efficient mathematical model for studying complex systems. This paper studies the topology identification of a coupled map lattice (CML) under the sparsity condition. We convert the identification problem into the problem of solving underdetermined linear equations. The l1 norm method is used to solve the underdetermined equations. The requirements on data characteristics and sampling times are discussed in detail. We find that high-entropy and small-coupling-coefficient data are suitable for the identification. When the measurement time is more than 2.86 times the sparsity, the accuracy of identification can reach an acceptable level. When the measurement time reaches 4 times the sparsity, we obtain fairly good accuracy.
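
    The step of 'solving the underdetermined linear equations with the l1 norm method' can be written as a linear program; below is a small basis pursuit sketch with scipy.optimize.linprog on a random instance (the coupled-map-lattice dynamics themselves are not simulated, and the sizes are illustrative).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(8)

# Underdetermined system A x = b with a sparse ground-truth coupling vector x.
m, n, s = 30, 100, 5
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(0, 1, s)
b = A @ x_true

# Basis pursuit:  min ||x||_1  s.t.  A x = b,  written as an LP with x = u - v, u,v >= 0.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]

print("recovered support matches:",
      set(np.flatnonzero(np.abs(x_hat) > 1e-6)) == set(np.flatnonzero(x_true)))
print("measurements / sparsity ratio:", m / s)   # here 6 x the sparsity
```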

  13. SparseBeads data: benchmarking sparsity-regularized computed tomography

    Jørgensen, Jakob Sauer; Coban, Sophia B.; Lionheart, William R. B.

    2017-01-01

    -regularized reconstruction. A collection of 48 x-ray CT datasets called SparseBeads was designed for benchmarking SR reconstruction algorithms. Beadpacks comprising glass beads of five different sizes as well as mixtures were scanned in a micro-CT scanner to provide structured datasets with variable image sparsity levels...

  14. Near-field acoustic imaging based on Laplacian sparsity

    Fernandez Grande, Efren; Daudet, Laurent

    2016-01-01

    We present a sound source identification method for near-field acoustic imaging of extended sources. The methodology is based on a wave superposition method (or equivalent source method) that promotes solutions with sparse higher order spatial derivatives. Instead of promoting direct sparsity......, and the validity of the wave extrapolation used for the reconstruction is examined. It is shown that this methodology can overcome conventional limits of spatial sampling, and is therefore valid for wide-band acoustic imaging of extended sources....

  15. Controlled wavelet domain sparsity for x-ray tomography

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The primal-dual fixed point algorithm showed that the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter ...

  16. Sparsity enabled cluster reduced-order models for control

    Kaiser, Eurika; Morzyński, Marek; Daviller, Guillaume; Kutz, J. Nathan; Brunton, Bingni W.; Brunton, Steven L.

    2018-01-01

    Characterizing and controlling nonlinear, multi-scale phenomena are central goals in science and engineering. Cluster-based reduced-order modeling (CROM) was introduced to exploit the underlying low-dimensional dynamics of complex systems. CROM builds a data-driven discretization of the Perron-Frobenius operator, resulting in a probabilistic model for ensembles of trajectories. A key advantage of CROM is that it embeds nonlinear dynamics in a linear framework, which enables the application of standard linear techniques to the nonlinear system. CROM is typically computed on high-dimensional data; however, access to and computations on this full-state data limit the online implementation of CROM for prediction and control. Here, we address this key challenge by identifying a small subset of critical measurements to learn an efficient CROM, referred to as sparsity-enabled CROM. In particular, we leverage compressive measurements to faithfully embed the cluster geometry and preserve the probabilistic dynamics. Further, we show how to identify fewer optimized sensor locations tailored to a specific problem that outperform random measurements. Both of these sparsity-enabled sensing strategies significantly reduce the burden of data acquisition and processing for low-latency in-time estimation and control. We illustrate this unsupervised learning approach on three different high-dimensional nonlinear dynamical systems from fluids with increasing complexity, with one application in flow control. Sparsity-enabled CROM is a critical facilitator for real-time implementation on high-dimensional systems where full-state information may be inaccessible.

  17. Refocusing criterion via sparsity measurements in digital holography.

    Memmolo, Pasquale; Paturzo, Melania; Javidi, Bahram; Netti, Paolo A; Ferraro, Pietro

    2014-08-15

    Several automatic approaches have been proposed in the past to compute the refocus distance in digital holography (DH). However, most of them are based on a maximization or minimization of a suitable amplitude image contrast measure, regarded as a function of the reconstruction distance parameter. Here we show that, by using the sparsity measure coefficient regarded as a refocusing criterion in the holographic reconstruction, it is possible to recover the focus plane and, at the same time, establish the degree of sparsity of digital holograms, when samples of the diffraction Fresnel propagation integral are used as a sparse signal representation. We employ a sparsity measurement coefficient known as Gini's index, thus showing for the first time, to the best of our knowledge, its application in DH as an effective refocusing criterion. Demonstration is provided for different holographic configurations (i.e., lens and lensless apparatus) and for completely different object preparations (i.e., a thin pure-phase microscopic object such as an in vitro cell, and macroscopic puppets).
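
    The Gini index used as the refocusing criterion has a standard closed form on sorted magnitudes. The sketch below computes it and picks the sharpest of several candidate images, with a Gaussian blur standing in for defocus; in the actual DH setting the candidates would be amplitude reconstructions at different propagation distances, which are not computed here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gini_index(x):
    """Gini sparsity coefficient (Hurley & Rickard): near 0 for a uniform
    signal, approaching 1 for a very sparse one, computed on sorted magnitudes."""
    c = np.sort(np.abs(x).ravel())
    n = c.size
    if c.sum() == 0:
        return 0.0
    k = np.arange(1, n + 1)
    return 1.0 - 2.0 * np.sum(c / c.sum() * (n - k + 0.5) / n)

# A sharp (in-focus) sparse test image and progressively defocused versions of it.
rng = np.random.default_rng(9)
sharp = np.zeros((128, 128))
sharp[rng.integers(0, 128, 40), rng.integers(0, 128, 40)] = 1.0

images = [sharp] + [gaussian_filter(sharp, sigma=s) for s in (1.0, 2.0, 4.0, 8.0)]
scores = [gini_index(im) for im in images]
print("Gini scores:", np.round(scores, 3))
print("sparsest (best-focused) candidate:", int(np.argmax(scores)))   # index 0
```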

  18. Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion

    Tiezhao, B.; Ning, J.; Jianwei, M.

    2017-12-01

    Large station intervals lead to low-resolution images and sometimes prevent imaging of the regions of interest. Sparsity-promotion inversion, a useful method to recover missing data in industrial field acquisition, can be borrowed to interpolate seismic data at non-sampled sites, forming Virtual Seismic Observations (VSOs). Traditional sparsity-promotion inversion suffers when there are large arrival-time differences between adjacent sites, which concerns us most, so we use a shift method to improve it. The interpolation procedure is as follows: we first employ a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. Then we use wavelet-transform-based sparsity-promotion inversion to interpolate waveform data at non-sampled sites, filling in a phase for each missing trace. Finally, we shift the waveforms back to their original arrival times. We call our method FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, we can insert different virtually observed seismic phases into non-sampled sites and obtain dense seismic observation data. To test our method, we randomly hide the real data at a site and use the rest to interpolate the observation at that site, using either direct interpolation or the FSIS method. Compared with directly interpolated data, data interpolated with FSIS preserve amplitude better. Results also show that the arrival times and waveforms of the VSOs reproduce the real data well, which convinces us that our method of forming VSOs is applicable. In this way, we can provide the data needed by advanced seismic techniques like RTM to illuminate shallow structures.

  19. A New Reverberator Based on Variable Sparsity Convolution

    Holm-Rasmussen, Bo; Lehtonen, Heidi-Maria; Välimäki, Vesa

    2013-01-01

    FIR filter coefficients are selected from a velvet noise sequence, which consists of ones, minus ones, and zeros only. In this application, it is sufficient perceptually to use very sparse velvet noise sequences having only about 0.1 to 0.2% non-zero elements, with increasing sparsity along...... the impulse response. The algorithm yields a parametric approximation of the late part of the impulse response, which is more than 100 times more efficient computationally than the direct convolution. The computational load of the proposed algorithm is comparable to that of FFT-based partitioned convolution...
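
    A short sketch of the velvet-noise idea described above: a sequence with roughly 0.2% non-zero taps and a sparse convolution performed by summing a few shifted, scaled copies of the input instead of running a dense FIR filter; the density, length and decay are illustrative, not the paper's parameter choices.

```python
import numpy as np

rng = np.random.default_rng(10)

fs = 44100
length = 2 * fs                       # 2-second late-reverb tail
density = 0.002                       # ~0.2 % non-zero taps

# Velvet noise: one +1/-1 impulse at a random position inside each grid cell.
grid = int(round(1 / density))
n_taps = length // grid
positions = np.arange(n_taps) * grid + rng.integers(0, grid, size=n_taps)
signs = rng.choice([-1.0, 1.0], size=n_taps)
gains = signs * np.exp(-3.0 * positions / length)   # exponential decay of the tail

# Sparse convolution: add a shifted, scaled copy of the input per non-zero tap.
x = rng.normal(size=fs)               # 1 s of input audio (illustrative)
y = np.zeros(x.size + length)
for p, g in zip(positions, gains):
    y[p:p + x.size] += g * x

print("non-zero taps:", n_taps, "of", length, f"({n_taps / length:.3%})")
```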

  20. Distribution Agnostic Structured Sparsity Recovery: Algorithms and Applications

    Masood, Mudassir

    2015-05-01

    Compressed sensing has been a very active area of research and several elegant algorithms have been developed for the recovery of sparse signals in the past few years. However, most of these algorithms are either computationally expensive or make some assumptions that are not suitable for all real world problems. Recently, focus has shifted to Bayesian-based approaches that are able to perform sparse signal recovery at much lower complexity while invoking constraint and/or a priori information about the data. While Bayesian approaches have their advantages, these methods must have access to a priori statistics. Usually, these statistics are unknown and are often difficult or even impossible to predict. An effective workaround is to assume a distribution which is typically considered to be Gaussian, as it makes many signal processing problems mathematically tractable. Seemingly attractive, this assumption necessitates the estimation of the associated parameters; which could be hard if not impossible. In the thesis, we focus on this aspect of Bayesian recovery and present a framework to address the challenges mentioned above. The proposed framework allows Bayesian recovery of sparse signals but at the same time is agnostic to the distribution of the unknown sparse signal components. The algorithms based on this framework are agnostic to signal statistics and utilize a priori statistics of additive noise and the sparsity rate of the signal, which are shown to be easily estimated from data if not available. In the thesis, we propose several algorithms based on this framework which utilize the structure present in signals for improved recovery. In addition to the algorithm that considers just the sparsity structure of sparse signals, tools that target additional structure of the sparsity recovery problem are proposed. These include several algorithms for a) block-sparse signal estimation, b) joint reconstruction of several common support sparse signals, and c

  1. Detecting management and fertilization effects on the carbon balance of winter oilseed rape with manual closed chamber measurements: Can we outrange gap-filling uncertainty and spatiotemporal variability?

    Huth, Vytas; Moffat, Antje Maria; Calmet, Anna; Andres, Monique; Laufer, Judit; Pehle, Natalia; Rach, Bernd; Gundlach, Laura; Augustin, Jürgen

    2017-04-01

    detect the flux differences between specific management practices - with additional chamber measurements installed close to the eddy tower as a reference linking the two techniques. In our experiment, we studied the effect of four different treatments of fertilization (mineral versus organic) and tillage (no-till versus mulch-till versus ploughing) on the NEE of rapeseed cropping for the climatic seasons 2013 to 2015. We compared the NEE of the treatments to the "background" NEE measured by the eddy covariance technique in the nearby reference field for the years 2013 and 2014. With this data, we estimated the uncertainty resulting from gap filling discontinuous chamber measurements and relate it to the observed effects of the four different treatments on the NEE. Here, we present first results on the applicability of the manual-chamber technique to derive the relatively small effects of rapeseed cropping on NEE and SC within a short period of three years of study.

  2. How does structured sparsity work in abnormal event detection?

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    the training, which is due to the fact that abnormal videos are limited or even unavailable in advance in most video surveillance applications. As a result, there could be only one label in the training data which hampers supervised learning; 2) Even though there are multiple types of normal behaviors, how...... many normal patterns lie in the whole surveillance data is still unknown. This is because there is a huge amount of video surveillance data and only a small proportion is used in algorithm learning; consequently, the normal patterns in the training data could be incomplete. As a result, any sparse...... structure learned from the training data could have a high bias and ruin the precision of abnormal event detection. Therefore, in this paper we propose an algorithm to solve the abnormality detection problem by sparse representation, in which local structured sparsity is preserved in coefficients. To better...

  3. Efficient regularization with wavelet sparsity constraints in photoacoustic tomography

    Frikel, Jürgen; Haltmeier, Markus

    2018-02-01

    In this paper, we consider the reconstruction problem of photoacoustic tomography (PAT) with a flat observation surface. We develop a direct reconstruction method that employs regularization with wavelet sparsity constraints. To that end, we derive a wavelet-vaguelette decomposition (WVD) for the PAT forward operator and a corresponding explicit reconstruction formula in the case of exact data. In the case of noisy data, we combine the WVD reconstruction formula with soft-thresholding, which yields a spatially adaptive estimation method. We demonstrate that our method is statistically optimal for white random noise if the unknown function is assumed to lie in any Besov-ball. We present generalizations of this approach and, in particular, we discuss the combination of PAT-vaguelette soft-thresholding with a total variation (TV) prior. We also provide an efficient implementation of the PAT-vaguelette transform that leads to fast image reconstruction algorithms supported by numerical results.

  4. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    Zhu, Xiaofeng; Xie, Qing; Zhu, Yonghua; Liu, Xingyi; Zhang, Shichao

    2015-01-01

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) representing with multiple

  5. An investigation of p-normed sparsity evaluation for SS-LBP-based edge extraction

    Chen-yi, Zhao [School of Physics, Northeast Normal University, Changchun (China); Jia-ning, Sun, E-mail: sunjn118@nenu.edu.cn [School of Mathematics and Statistics, Northeast Normal University, Changchun (China); Shuang, Qiao, E-mail: qiaos810@nenu.edu.cn [School of Physics, Northeast Normal University, Changchun (China)

    2017-02-01

    SS-LBP is a very efficient tool for fast edge extraction in digital radiography. In this paper, we introduce a p-normed sparsity evaluation strategy to improve and generalize the existing SS-LBP framework. To illustrate the feasibility of the proposed approach, several experimental results are presented. Comparisons show that 1-normed sparsity evaluation is more effective and robust in practical applications.

  6. Exploiting Data Sparsity for Large-Scale Matrix Computations

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.

    2018-01-01

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
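
    The tile low-rank idea can be illustrated by compressing one off-diagonal tile of a smooth (data-sparse) kernel matrix with a truncated SVD to a requested accuracy; HiCMA's actual tile format, task decomposition and StarPU scheduling are not shown, and the kernel and tolerance below are illustrative.

```python
import numpy as np

# A smooth (data-sparse) kernel matrix, e.g. a 1-D exponential covariance.
n = 1024
pts = np.linspace(0.0, 1.0, n)
K = np.exp(-np.abs(pts[:, None] - pts[None, :]) / 0.3)

# Take one off-diagonal tile and compress it to a requested relative accuracy.
tile = K[:512, 512:]
U, s, Vt = np.linalg.svd(tile, full_matrices=False)
tol = 1e-8 * s[0]
r = int(np.searchsorted(-s, -tol))        # number of singular values above tol
Uk, Vk = U[:, :r] * s[:r], Vt[:r, :]      # tile ~ Uk @ Vk, stored as two thin factors

print("tile rank for 1e-8 accuracy:", r, "of", min(tile.shape))
print("memory ratio (compressed/dense):", (Uk.size + Vk.size) / tile.size)
print("relative error:", np.linalg.norm(tile - Uk @ Vk) / np.linalg.norm(tile))
```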

  7. Exploiting Data Sparsity for Large-Scale Matrix Computations

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorizations in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  8. Calibrationless Parallel Magnetic Resonance Imaging: A Joint Sparsity Model

    Angshul Majumdar

    2013-12-01

    State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity map for SENSE, SMASH and interpolation weights for GRAPPA, SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we have proposed a parallel MRI technique that does not require any calibration but yields reconstruction results that are at par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method required solving non-convex analysis and synthesis prior joint-sparsity problems. This work also derives the algorithms for solving them. Experimental validation was carried out on two datasets—eight channel brain and eight channel Shepp-Logan phantom. Two sampling methods were used—Variable Density Random sampling and non-Cartesian Radial sampling. For the brain data, an acceleration factor of 4 was used and for the other an acceleration factor of 6 was used. The reconstruction results were quantitatively evaluated based on the Normalised Mean Squared Error between the reconstructed image and the originals. The qualitative evaluation was based on the actual reconstructed images. We compared our work with four state-of-the-art parallel imaging techniques; two calibrated methods—CS SENSE and l1SPIRiT and two calibration free techniques—Distributed CS and SAKE. Our method yields better reconstruction results than all of them.

  9. Brain source localization: A new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements

    Vergallo, P.; Lay-Ekuakille, A.

    2013-08-01

    have a significant improvement compared with the classical MUSIC method, with a small margin of uncertainty about the exact location of the sources. In fact, the constraints of spatial sparsity on the signal field allow power to be concentrated in the directions of the active sources, and consequently the position of the sources within the considered volume conductor can be calculated. The method is then tested on real EEG data as well. The result is in accordance with the clinical report, even if improvements are necessary to obtain more accurate estimates of the source positions.

  10. Sparsity-Based Super Resolution for SEM Images.

    Tsiper, Shahar; Dicker, Or; Kaizerman, Idan; Zohar, Zeev; Segev, Mordechai; Eldar, Yonina C

    2017-09-13

    The scanning electron microscope (SEM) is an electron microscope that produces an image of a sample by scanning it with a focused beam of electrons. The electrons interact with the atoms in the sample, which emit secondary electrons that contain information about the surface topography and composition. The sample is scanned by the electron beam point by point, until an image of the surface is formed. Since its invention in 1942, the capabilities of SEMs have become paramount in the discovery and understanding of the nanometer world, and today it is extensively used for both research and in industry. In principle, SEMs can achieve resolution better than one nanometer. However, for many applications, working at subnanometer resolution implies an exceedingly large number of scanning points. For exactly this reason, the SEM diagnostics of microelectronic chips is performed either at high resolution (HR) over a small area or at low resolution (LR) while capturing a larger portion of the chip. Here, we employ sparse coding and dictionary learning to algorithmically enhance low-resolution SEM images of microelectronic chips, up to the level of the HR images acquired by slow SEM scans, while considerably reducing the noise. Our methodology consists of two steps: an offline stage of learning a joint dictionary from a sequence of LR and HR images of the same region in the chip, followed by a fast online super-resolution step where the resolution of a new LR image is enhanced. We provide several examples with typical chips used in the microelectronics industry, as well as a statistical study on arbitrary images with characteristic structural features. Conceptually, our method works well when the images have similar characteristics, as microelectronics chips do. This work demonstrates that employing sparsity concepts can greatly improve the performance of SEM, thereby considerably increasing the scanning throughput without compromising on analysis quality and resolution.
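
    As an illustration of the two-step methodology sketched above (offline joint dictionary learning over paired LR/HR patches, then fast online sparse coding), here is a minimal Python sketch using scikit-learn. The patch sizes, dictionary size, and synthetic patch data are assumptions for illustration only and do not reproduce the authors' pipeline.

      # Hedged sketch of joint LR/HR dictionary learning for super-resolution.
      import numpy as np
      from sklearn.decomposition import DictionaryLearning, sparse_encode

      rng = np.random.default_rng(0)
      n_patches, lr_dim, hr_dim = 500, 16, 64          # e.g. 4x4 LR and 8x8 HR patches
      hr = rng.standard_normal((n_patches, hr_dim))
      lr = hr[:, ::4] + 0.01 * rng.standard_normal((n_patches, lr_dim))  # toy LR: decimation

      # Offline: one joint dictionary over stacked (LR, HR) patch pairs.
      joint = np.hstack([lr, hr])
      dl = DictionaryLearning(n_components=64, alpha=1.0, max_iter=20, random_state=0)
      dl.fit(joint)
      D_lr, D_hr = dl.components_[:, :lr_dim], dl.components_[:, lr_dim:]

      # Online: sparse-code a new LR patch with D_lr, reuse the same code with D_hr.
      new_lr = lr[:1]
      codes = sparse_encode(new_lr, D_lr, algorithm="omp", n_nonzero_coefs=5)
      hr_estimate = codes @ D_hr
      print(hr_estimate.shape)   # (1, 64): reconstructed HR patch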

  11. Kronecker-Basis-Representation Based Tensor Sparsity and Its Applications to Tensor Recovery.

    Xie, Qi; Zhao, Qian; Meng, Deyu; Xu, Zongben

    2017-08-02

    It is well known that the sparsity/low-rank of a vector/matrix can be rationally measured by the nonzero-entries-number ($l_0$ norm)/nonzero-singular-values-number (rank), respectively. However, data from real applications are often generated by the interaction of multiple factors, which obviously cannot be sufficiently represented by a vector/matrix, while a high order tensor is expected to provide a more faithful representation to deliver the intrinsic structure underlying such data ensembles. Unlike the vector/matrix case, constructing a rational high order sparsity measure for a tensor is a harder task. To this aim, in this paper we propose a measure for tensor sparsity, called the Kronecker-basis-representation based tensor sparsity measure (KBR briefly), which encodes both sparsity insights delivered by Tucker and CANDECOMP/PARAFAC (CP) low-rank decompositions for a general tensor. Then we study the KBR regularization minimization (KBRM) problem, and design an effective ADMM algorithm for solving it, where each involved parameter can be updated with closed-form equations. Such an efficient solver makes it possible to extend KBR to various tasks like tensor completion and tensor robust principal component analysis. A series of experiments, including multispectral image (MSI) denoising, MSI completion and background subtraction, substantiate the superiority of the proposed methods over the state of the art.

  12. Seismic data two-step recovery approach combining sparsity-promoting and hyperbolic Radon transform methods

    Wang, Hanchuang; Chen, Shengchang; Ren, Haoran; Liang, Donghui; Zhou, Huamin; She, Deping

    2015-01-01

    In current research of seismic data recovery problems, the sparsity-promoting method usually produces an insufficient recovery result at the locations of null traces. The HRT (hyperbolic Radon transform) method can be applied to problems of seismic data recovery with approximately hyperbolic events. Influenced by deviations of hyperbolic characteristics between real and ideal travel-time curves, some spurious events are usually introduced and the recovery effect on intermediate and far-offset traces is worse than that on near-offset traces. Sparsity-promoting recovery is primarily dependent on the sparsity of seismic data in the sparse transform domain (i.e. on the local waveform characteristics), whereas HRT recovery is severely affected by the global characteristics of the seismic events. Inspired by the above conclusion, a two-step recovery approach combining sparsity-promoting and time-invariant HRT methods is proposed, which is based on both local and global characteristics of the seismic data. Two implementation strategies are presented in detail, and the selection criteria for the relevant strategies are also discussed. Numerical examples of synthetic and real data verify that the new approach can achieve a better recovery effect by simultaneously overcoming the shortcomings of sparsity-promoting recovery and HRT recovery. (paper)

  13. Uncertainty calculations made easier

    Hogenbirk, A.

    1994-07-01

    The results are presented of a neutron cross section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy integrated flux at the beginning of the cryostat in the no-gap-geometry, compared to an uncertainty of only 5% in the gap-geometry. Therefore, it is essential to take into account the exact geometry in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)

  14. Exploiting Data Sparsity In Covariance Matrix Computations on Heterogeneous Systems

    Charara, Ali M.

    2018-05-24

    Covariance matrices are ubiquitous in computational sciences, typically describing the correlation of elements of large multivariate spatial data sets. For example, covariance matrices are employed in climate/weather modeling for the maximum likelihood estimation to improve prediction, as well as in computational ground-based astronomy to enhance the observed image quality by filtering out noise produced by the adaptive optics instruments and atmospheric turbulence. The structure of these covariance matrices is dense, symmetric, positive-definite, and often data-sparse, therefore hierarchically of low rank. This thesis investigates the performance limit of dense matrix computations (e.g., Cholesky factorization) on covariance matrix problems as the number of unknowns grows, and in the context of the aforementioned applications. We employ recursive formulations of some of the basic linear algebra subroutines (BLAS) to accelerate the covariance matrix computation further, while reducing data traffic across the memory subsystem layers. However, dealing with large data sets (i.e., covariance matrices of billions in size) can rapidly become prohibitive in memory footprint and algorithmic complexity. Most importantly, this thesis investigates the tile low-rank data format (TLR), a new compressed data structure and layout, which is valuable in exploiting data sparsity by approximating the operator. The TLR compressed data structure allows approximating the original problem up to user-defined numerical accuracy. This comes at the expense of dealing with tasks with much lower arithmetic intensities than traditional dense computations. In fact, this thesis consolidates the two trends of dense and data-sparse linear algebra for HPC. Not only does the thesis leverage recursive formulations for dense Cholesky-based matrix algorithms, but it also implements a novel TLR-Cholesky factorization using batched linear algebra operations to increase hardware occupancy and

  15. Empirical average-case relation between undersampling and sparsity in X-ray CT

    Jørgensen, Jakob Sauer; Sidky, Emil Y.; Hansen, Per Christian

    2015-01-01

    In X-ray computed tomography (CT) it is generally acknowledged that reconstruction methods exploiting image sparsity allow reconstruction from a significantly reduced number of projections. The use of such reconstruction methods is inspired by recent progress in compressed sensing (CS). However, the CS framework provides neither guarantees of accurate CT reconstruction, nor any relation between sparsity and a sufficient number of measurements for recovery, i.e., perfect reconstruction from noise-free data. We consider reconstruction through 1-norm minimization, as proposed in CS, from data obtained using a standard CT fan-beam sampling pattern. In empirical simulation studies we establish quantitatively a relation between the image sparsity and the sufficient number of measurements for recovery within image classes motivated by tomographic applications. We show empirically that the specific...

  16. Sparsity-Based Representation for Classification Algorithms and Comparison Results for Transient Acoustic Signals

    2016-05-01

    In this report, we propose a ... to share common sparsity patterns. Yuan and Yan [7] investigated a multitask model for visual classification, which also assumes that multiple

  17. Spatiotemporal Fusion of Remote Sensing Images with Structural Sparsity and Semi-Coupled Dictionary Learning

    Jingbo Wei

    2016-12-01

    Fusion of remote sensing images with different spatial and temporal resolutions is highly needed by diverse earth observation applications. A small number of spatiotemporal fusion methods using sparse representation appear to be more promising than traditional linear mixture methods in reflecting abruptly changing terrestrial content. However, one of the main difficulties is that the results of sparse representation have reduced expressional accuracy; this is due in part to insufficient prior knowledge. For remote sensing images, the cluster and joint structural sparsity of the sparse coefficients could be employed as a priori knowledge. In this paper, a new optimization model is constructed with semi-coupled dictionary learning and structural sparsity to predict the unknown high-resolution image from known images. Specifically, the intra-block correlation and cluster-structured sparsity are considered for single-channel reconstruction, and the inter-band similarity of joint-structured sparsity is considered for multichannel reconstruction, and both are implemented with block sparse Bayesian learning. The detailed optimization steps are given iteratively. In the experimental procedure, the red, green, and near-infrared bands of the Landsat-7 and Moderate Resolution Imaging Spectrometer (MODIS) satellites are fused, with root mean square errors used to check the prediction accuracy. It can be concluded from the experiments that the proposed methods can produce higher-quality results than state-of-the-art methods.

  18. A Psychoacoustic-Based Multiple Audio Object Coding Approach via Intra-Object Sparsity

    Maoshen Jia

    2017-12-01

    Rendering spatial sound scenes via audio objects has become popular in recent years, since it can provide more flexibility for different auditory scenarios, such as 3D movies, spatial audio communication and virtual classrooms. To facilitate high-quality bitrate-efficient distribution for spatial audio objects, an encoding scheme based on intra-object sparsity (approximate k-sparsity) of the audio object itself is proposed in this paper. A statistical analysis is presented to validate the notion that the audio object has a stronger sparseness in the Modified Discrete Cosine Transform (MDCT) domain than in the Short Time Fourier Transform (STFT) domain. By exploiting intra-object sparsity in the MDCT domain, multiple simultaneously occurring audio objects are compressed into a mono downmix signal with side information. To ensure a balanced perception quality of audio objects, a psychoacoustic-based time-frequency instant sorting algorithm and an energy-equalized Number of Preserved Time-Frequency Bins (NPTF) allocation strategy are proposed, which are employed in the underlying compression framework. The downmix signal can be further encoded via the Scalar Quantized Vector Huffman Coding (SQVH) technique at a desirable bitrate, and the side information is transmitted in a lossless manner. Both objective and subjective evaluations show that the proposed encoding scheme outperforms the Sparsity Analysis (SPA) approach and Spatial Audio Object Coding (SAOC) in cases where eight objects were jointly encoded.

  19. Sparsity- and continuity-promoting seismic image recovery with curvelet frames

    Herrmann, Felix J.; Moghaddam, Peyman; Stolk, C.C.

    2008-01-01

    A nonlinear singularity-preserving solution to seismic image recovery with sparseness and continuity constraints is proposed. We observe that curvelets, as a directional frame expansion, lead to sparsity of seismic images and exhibit invariance under the normal operator of the linearized imaging

  20. Sparsity-based shrinkage approach for practicability improvement of H-LBP-based edge extraction

    Zhao, Chenyi [School of Physics, Northeast Normal University, Changchun 130024 (China); Qiao, Shuang, E-mail: qiaos810@nenu.edu.cn [School of Physics, Northeast Normal University, Changchun 130024 (China); Sun, Jianing, E-mail: sunjn118@nenu.edu.cn [School of Mathematics and Statistics, Northeast Normal University, Changchun 130024 (China); Zhao, Ruikun; Wu, Wei [Jilin Cancer Hospital, Changchun 130021 (China)

    2016-07-21

    The local binary pattern with H function (H-LBP) technique enables fast and efficient edge extraction in digital radiography. In this paper, we reformulate the model of H-LBP and propose a novel sparsity-based shrinkage approach, in which the threshold can be adapted to the data sparsity. Using this model, we upgrade the fast H-LBP framework and apply it to real digital radiography. The experiments show that the method improved with the new shrinkage approach avoids elaborate manual tuning of parameters and possesses greater robustness in edge extraction compared with other current methods, without increasing processing time. - Highlights: • A novel sparsity-based shrinkage approach for edge extraction in digital radiography is proposed. • The threshold of SS-LBP can adapt to the data sparsity. • SS-LBP is a development of AH-LBP and H-LBP. • Without increasing processing time or losing processing efficiency, SS-LBP avoids elaborate manual tuning of parameters. • SS-LBP has more robust performance in edge extraction compared with existing methods.

  1. A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings

    Huaqing Wang

    2016-09-01

    The traditional approaches for condition monitoring of roller bearings are almost always achieved under Shannon sampling theorem conditions, leading to a big-data problem. The compressed sensing (CS) theory provides a new solution to the big-data problem. However, the vibration signals are insufficiently sparse and it is difficult to achieve sparsity using the conventional techniques, which impedes the application of CS theory. Therefore, it is of great significance to promote the sparsity when applying the CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoted method called the tunable Q-factor wavelet transform based on decomposing the analyzed signals into transient impact components and high oscillation components is utilized in this work. The former become sparser than the raw signals with noise eliminated, whereas the latter include noise. Thus, the decomposed transient impact components replace the original signals for analysis. The CS theory is applied to extract the fault features without complete reconstruction, which means that the reconstruction can be completed when the components with interested frequencies are detected and the fault diagnosis can be achieved during the reconstruction procedure. The application cases prove that the CS theory assisted by the tunable Q-factor wavelet transform can successfully extract the fault features from the compressed samples.

  2. 3D reconstruction for partial data electrical impedance tomography using a sparsity prior

    Garde, Henrik; Knudsen, Kim

    2015-01-01

    of the conductivity is used to improve reconstructions for the partial data problem with Cauchy data measured only on a subset of the boundary. A sparsity prior is enforced using the ℓ1 norm in the penalty term of a Tikhonov functional, and spatial prior information is incorporated by applying a spatially distributed...

  3. GAP Analysis Program (GAP)

    Kansas Data Access and Support Center — The Kansas GAP Analysis Land Cover database depicts 43 land cover classes for the state of Kansas. The database was generated using a two-stage hybrid classification...

  4. Modularity and Sparsity: Evolution of Neural Net Controllers in Physically Embodied Robots

    Nicholas Livingston

    2016-12-01

    While modularity is thought to be central for the evolution of complexity and evolvability, it remains unclear how systems bootstrap themselves into modularity from random or fully integrated starting conditions. Clune et al. (2013) suggested that a positive correlation between sparsity and modularity is the prime cause of this transition. We sought to test the generality of this modularity-sparsity hypothesis by testing it for the first time in physically embodied robots. A population of ten Tadros (autonomous, surface-swimming robots propelled by a flapping tail) was used. Individuals varied only in the structure of their neural net control, a 2 x 6 x 2 network with recurrence in the hidden layer. Each of the 60 possible connections was coded in the genome, and could achieve one of three states: -1, 0, 1. Inputs were two light-dependent resistors and outputs were two motor control variables to the flapping tail, one for the frequency of the flapping and the other for the turning offset. Each Tadro was tested separately in a circular tank lit by a single overhead light source. Fitness was the amount of light gathered by a vertically oriented sensor that was disconnected from the controller net. Reproduction was asexual, with the top performer cloned and then all individuals entered into a roulette wheel selection process, with genomes mutated to create the offspring. The starting population of networks was randomly generated. Over ten generations, the population's mean fitness increased two-fold. This evolution occurred in spite of an unintentional integer overflow problem in recurrent nodes in the hidden layer that caused outputs to oscillate. Our investigation of the oscillatory behavior showed that the mutual information of inputs and outputs was sufficient for the reactive behaviors observed. While we had predicted that both modularity and sparsity would follow the same trend as fitness, neither did so. Instead, selection gradients

  5. A reconstruction algorithm for electrical impedance tomography based on sparsity regularization

    Jin, Bangti

    2011-08-24

    This paper develops a novel sparse reconstruction algorithm for the electrical impedance tomography problem of determining a conductivity parameter from boundary measurements. The sparsity of the 'inhomogeneity' with respect to a certain basis is a priori assumed. The proposed approach is motivated by a Tikhonov functional incorporating a sparsity-promoting ℓ1-penalty term, and it allows us to obtain quantitative results when the assumption is valid. A novel iterative algorithm of soft shrinkage type was proposed. Numerical results for several two-dimensional problems with both single and multiple convex and nonconvex inclusions were presented to illustrate the features of the proposed algorithm and were compared with one conventional approach based on smoothness regularization. © 2011 John Wiley & Sons, Ltd.
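
    The algorithmic core referred to above, soft shrinkage applied iteratively to an ℓ1-penalized Tikhonov functional, can be sketched generically as follows. This is a plain iterative soft-thresholding (ISTA) loop on a synthetic linear problem, not the paper's EIT forward operator; the matrix, data, and penalty weight are illustrative assumptions.

      # Generic iterative soft-shrinkage sketch for min (1/2)||Ax - y||^2 + lam*||x||_1.
      import numpy as np

      def soft(v, t):
          """Soft-thresholding (shrinkage) operator."""
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def ista(A, y, lam, n_iter=500):
          x = np.zeros(A.shape[1])
          step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L with L = ||A||_2^2
          for _ in range(n_iter):
              x = soft(x - step * A.T @ (A @ x - y), step * lam)
          return x

      rng = np.random.default_rng(1)
      A = rng.standard_normal((60, 200))
      x_true = np.zeros(200)
      x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
      y = A @ x_true + 0.01 * rng.standard_normal(60)
      x_hat = ista(A, y, lam=0.1)
      print("recovered support:", np.flatnonzero(np.abs(x_hat) > 1e-3))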

  6. Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach

    Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun

    2015-02-01

    The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and when the dynamic images vary rapidly. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on low rank matrix factorization of unknown images and enforced sparsity constraints for representing both coefficients and bases. The proposed model is solved via an alternating iterative scheme for which each subproblem is convex and involves the efficient alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for the nonconvex problem relies upon the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). Finally our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method compared to conventional methods.

  7. Observation uncertainty in reversible Markov chains.

    Metzner, Philipp; Weber, Marcus; Schütte, Christof

    2010-09-01

    In many applications one is interested in finding a simplified model which captures the essential dynamical behavior of a real life process. If the essential dynamics can be assumed to be (approximately) memoryless then a reasonable choice for a model is a Markov model whose parameters are estimated by means of Bayesian inference from an observed time series. We propose an efficient Monte Carlo Markov chain framework to assess the uncertainty of the Markov model and related observables. The derived Gibbs sampler allows for sampling distributions of transition matrices subject to reversibility and/or sparsity constraints. The performance of the suggested sampling scheme is demonstrated and discussed for a variety of model examples. The uncertainty analysis of functions of the Markov model under investigation is discussed in application to the identification of conformations of the trialanine molecule via Robust Perron Cluster Analysis (PCCA+) .
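
    A minimal sketch of the kind of posterior uncertainty quantification addressed above: transition matrices are sampled row-wise from Dirichlet posteriors given observed transition counts, and the samples are propagated to an observable (here the stationary distribution). The reversibility and sparsity constraints handled by the paper's Gibbs sampler are omitted, and the counts below are made up.

      # Hedged illustration of posterior uncertainty for an estimated Markov model.
      import numpy as np

      rng = np.random.default_rng(0)
      counts = np.array([[90, 8, 2],      # observed transition counts between 3 states
                         [10, 70, 20],
                         [5, 15, 80]])

      def stationary(P):
          """Stationary distribution from the leading left eigenvector of P."""
          vals, vecs = np.linalg.eig(P.T)
          v = np.real(vecs[:, np.argmax(np.real(vals))])
          return v / v.sum()

      samples = []
      for _ in range(2000):
          # Dirichlet posterior per row with a flat prior (alpha = 1).
          P = np.vstack([rng.dirichlet(row + 1.0) for row in counts])
          samples.append(stationary(P))
      samples = np.array(samples)
      print("stationary distribution mean:", samples.mean(axis=0).round(3))
      print("posterior std (uncertainty):", samples.std(axis=0).round(3))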

  8. Adaptive OFDM Waveform Design for Spatio-Temporal-Sparsity Exploited STAP Radar

    Sen, Satyabrata [ORNL

    2017-11-01

    In this chapter, we describe a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly moving target using an orthogonal frequency division multiplexing (OFDM) radar. The motivation for employing an OFDM signal is that it improves the target detectability against interfering signals by increasing the frequency diversity of the system. However, due to the addition of one extra dimension in terms of frequency, the adaptive degrees-of-freedom in an OFDM-STAP also increase. Therefore, to avoid the construction of a fully adaptive OFDM-STAP, we develop a sparsity-based STAP algorithm. We observe that the interference spectrum is inherently sparse in the spatio-temporal domain, as the clutter responses occupy only a diagonal ridge on the spatio-temporal plane and the jammer signals interfere only from a few spatial directions. Hence, we exploit that sparsity to develop an efficient STAP technique that utilizes a considerably smaller amount of secondary data than other existing STAP techniques, and produces nearly optimum STAP performance. In addition to designing the STAP filter, we optimally design the transmit OFDM signals by maximizing the output signal-to-interference-plus-noise ratio (SINR) in order to improve the STAP performance. The computation of the output SINR depends on the estimated value of the interference covariance matrix, which we obtain by applying the sparse recovery algorithm. Therefore, we analytically assess the effects of the synthesized OFDM coefficients on the sparse recovery of the interference covariance matrix by computing the coherence measure of the sparse measurement matrix. Our numerical examples demonstrate the STAP performance achieved by the sparsity-based technique and adaptive waveform design.

  9. Understanding uncertainty

    Lindley, Dennis V

    2013-01-01

    Praise for the First Edition: "...a reference for everyone who is interested in knowing and handling uncertainty." - Journal of Applied Statistics. The critically acclaimed First Edition of Understanding Uncertainty provided a study of uncertainty addressed to scholars in all fields, showing that uncertainty could be measured by probability, and that probability obeyed three basic rules that enabled uncertainty to be handled sensibly in everyday life. These ideas were extended to embrace the scientific method and to show how decisions, containing an uncertain element, could be rationally made.

  10. Improved l1-SPIRiT using 3D walsh transform-based sparsity basis.

    Feng, Zhen; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart; Guo, He; Wang, Yuxin

    2014-09-01

    l1-SPIRiT is a fast magnetic resonance imaging (MRI) method which combines parallel imaging (PI) with compressed sensing (CS) by performing a joint l1-norm and l2-norm optimization procedure. The original l1-SPIRiT method uses two-dimensional (2D) Wavelet transform to exploit the intra-coil data redundancies and a joint sparsity model to exploit the inter-coil data redundancies. In this work, we propose to stack all the coil images into a three-dimensional (3D) matrix, and then a novel 3D Walsh transform-based sparsity basis is applied to simultaneously reduce the intra-coil and inter-coil data redundancies. Both the 2D Wavelet transform-based and the proposed 3D Walsh transform-based sparsity bases were investigated in the l1-SPIRiT method. The experimental results show that the proposed 3D Walsh transform-based l1-SPIRiT method outperformed the original l1-SPIRiT in terms of image quality and computational efficiency.
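
    A sketch of just the sparsifying transform proposed above, a separable 3D Walsh-Hadamard transform applied to a stack of coil images. The coil-stack dimensions are illustrative (powers of two are required by SciPy's Hadamard construction), and the surrounding l1-SPIRiT reconstruction pipeline is not shown.

      # Separable 3D Walsh-Hadamard transform over an image stack (coils on the 3rd axis).
      import numpy as np
      from scipy.linalg import hadamard

      def walsh_3d(volume):
          """Apply an orthonormal Hadamard (Walsh) transform along each of the 3 axes."""
          out = volume.astype(float)
          for axis, size in enumerate(volume.shape):
              H = hadamard(size) / np.sqrt(size)       # orthonormal, H @ H.T = I
              out = np.moveaxis(np.tensordot(H, np.moveaxis(out, axis, 0), axes=1), 0, axis)
          return out

      stack = np.random.rand(64, 64, 8)                # 8 coil images of size 64x64
      coeffs = walsh_3d(stack)
      # The Hadamard matrix is symmetric and orthonormal here, so the transform is its own inverse.
      recon = walsh_3d(coeffs)
      print(np.allclose(recon, stack))                 # True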

  11. Sparsity-Based Space-Time Adaptive Processing Using OFDM Radar

    Sen, Satyabrata [ORNL

    2012-01-01

    We propose a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly-moving target using an orthogonal frequency division multiplexing (OFDM) radar. We observe that the target and interference spectra are inherently sparse in the spatio-temporal domain, and hence we exploit that sparsity to develop an efficient STAP technique. In addition, the use of an OFDM signal increases the frequency diversity of our system, as different scattering centers of a target resonate at different frequencies, and thus improves the target detectability. First, we formulate a realistic sparse-measurement model for an OFDM radar considering both the clutter and jammer as the interfering sources. Then, we show that the optimal STAP-filter weight-vector is equal to the generalized eigenvector corresponding to the minimum generalized eigenvalue of the interference and target covariance matrices. To estimate the target and interference covariance matrices, we apply a residual sparse-recovery technique that enables us to incorporate the partially known support of the sparse vector. Our numerical results demonstrate that the sparsity-based STAP algorithm, with a considerably smaller amount of secondary data, produces performance equivalent to the other existing STAP techniques.
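
    The filter-design step stated above (STAP weights as the generalized eigenvector at the minimum generalized eigenvalue of the interference/target covariance pencil, which equivalently maximizes the output SINR) can be sketched as below. The covariance matrices are synthetic stand-ins, not estimates from an OFDM radar model.

      # Hedged sketch of STAP weight design via a generalized eigenvalue problem.
      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(0)
      dim = 16                                   # spatio-temporal (x frequency) dimension

      def random_spd(d):
          M = rng.standard_normal((d, d))
          return M @ M.T + d * np.eye(d)

      R_target = random_spd(dim)                 # target covariance estimate
      R_interf = random_spd(dim)                 # clutter + jammer + noise covariance

      # Solve R_interf w = lambda * R_target w; eigh returns eigenvalues in ascending order.
      vals, vecs = eigh(R_interf, R_target)
      w = vecs[:, 0]                             # minimum generalized eigenvalue

      sinr = (w @ R_target @ w) / (w @ R_interf @ w)
      print(f"output SINR of the designed weights: {sinr:.3f}")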

  12. Sparsity-based multi-height phase recovery in holographic microscopy

    Rivenson, Yair; Wu, Yichen; Wang, Hongda; Zhang, Yibo; Feizi, Alborz; Ozcan, Aydogan

    2016-11-01

    High-resolution imaging of densely connected samples such as pathology slides using digital in-line holographic microscopy requires the acquisition of several holograms, e.g., at >6-8 different sample-to-sensor distances, to achieve robust phase recovery and coherent imaging of specimen. Reducing the number of these holographic measurements would normally result in reconstruction artifacts and loss of image quality, which would be detrimental especially for biomedical and diagnostics-related applications. Inspired by the fact that most natural images are sparse in some domain, here we introduce a sparsity-based phase reconstruction technique implemented in wavelet domain to achieve at least 2-fold reduction in the number of holographic measurements for coherent imaging of densely connected samples with minimal impact on the reconstructed image quality, quantified using a structural similarity index. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field-of-view of ~20 mm2 using 2 in-line holograms that are acquired at different sample-to-sensor distances and processed using sparsity-based multi-height phase recovery. This new phase recovery approach that makes use of sparsity can also be extended to other coherent imaging schemes, involving e.g., multiple illumination angles or wavelengths to increase the throughput and speed of coherent imaging.
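
    A hedged sketch of the wavelet-domain sparsity step that this class of methods relies on: decompose, soft-threshold the detail coefficients, and reconstruct. It assumes the PyWavelets package and uses arbitrary wavelet, level, and threshold choices; the multi-height propagation and phase-update steps of the actual algorithm are not shown.

      # Wavelet-domain sparsity projection (soft-thresholding of detail coefficients).
      import numpy as np
      import pywt

      def wavelet_sparsify(img, wavelet="db4", level=3, tau=0.1):
          coeffs = pywt.wavedec2(img, wavelet, level=level)
          new_coeffs = [coeffs[0]]                       # keep the coarse approximation
          for details in coeffs[1:]:
              new_coeffs.append(tuple(pywt.threshold(d, tau, mode="soft") for d in details))
          return pywt.waverec2(new_coeffs, wavelet)

      img = np.random.rand(128, 128)
      proj = wavelet_sparsify(img)
      print("relative change after sparsity projection:",
            np.linalg.norm(proj[:128, :128] - img) / np.linalg.norm(img))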

  13. Joint seismic data denoising and interpolation with double-sparsity dictionary learning

    Zhu, Lingchen; Liu, Entao; McClellan, James H.

    2017-08-01

    Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.

  14. Sparsity-based multi-height phase recovery in holographic microscopy

    Rivenson, Yair

    2016-11-30

    High-resolution imaging of densely connected samples such as pathology slides using digital in-line holographic microscopy requires the acquisition of several holograms, e.g., at >6–8 different sample-to-sensor distances, to achieve robust phase recovery and coherent imaging of specimen. Reducing the number of these holographic measurements would normally result in reconstruction artifacts and loss of image quality, which would be detrimental especially for biomedical and diagnostics-related applications. Inspired by the fact that most natural images are sparse in some domain, here we introduce a sparsity-based phase reconstruction technique implemented in wavelet domain to achieve at least 2-fold reduction in the number of holographic measurements for coherent imaging of densely connected samples with minimal impact on the reconstructed image quality, quantified using a structural similarity index. We demonstrated the success of this approach by imaging Papanicolaou smears and breast cancer tissue slides over a large field-of-view of ~20 mm2 using 2 in-line holograms that are acquired at different sample-to-sensor distances and processed using sparsity-based multi-height phase recovery. This new phase recovery approach that makes use of sparsity can also be extended to other coherent imaging schemes, involving e.g., multiple illumination angles or wavelengths to increase the throughput and speed of coherent imaging.

  15. Measurement Uncertainty

    Koch, Michael

    Measurement uncertainty is one of the key issues in quality assurance. It has become increasingly important for analytical chemistry laboratories with accreditation to ISO/IEC 17025. The uncertainty of a measurement is the most important criterion for deciding whether a measurement result is fit for purpose. It also helps in deciding whether a specification limit is exceeded. Estimation of measurement uncertainty is often not trivial. Several strategies have been developed for this purpose and are described briefly in this chapter. In addition, the different possibilities for taking uncertainty into account in compliance assessment are explained.

  16. Multiple Speech Source Separation Using Inter-Channel Correlation and Relaxed Sparsity

    Maoshen Jia

    2018-01-01

    In this work, a multiple speech source separation method using inter-channel correlation and relaxed sparsity is proposed. A B-format microphone with four spatially located channels is adopted, owing to the size of the microphone array, to preserve the spatial parameter integrity of the original signal. Specifically, we first measure the proportion of overlapped components among multiple sources and find that there exist many overlapped time-frequency (TF) components with increasing source number. Then, considering the relaxed sparsity of speech sources, we propose a dynamic threshold-based separation approach for sparse components where the threshold is determined by the inter-channel correlation among the recording signals. After conducting a statistical analysis of the number of active sources at each TF instant, a form of relaxed sparsity called the half-K assumption is proposed so that the active source number in a certain TF bin does not exceed half the total number of simultaneously occurring sources. By applying the half-K assumption, the non-sparse components are recovered by regarding the extracted sparse components as a guide, combined with vector decomposition and matrix factorization. Eventually, the final TF coefficients of each source are recovered by the synthesis of sparse and non-sparse components. The proposed method has been evaluated using up to six simultaneous speech sources under both anechoic and reverberant conditions. Both objective and subjective evaluations validated that the perceptual quality of the speech separated by the proposed approach outperforms existing blind source separation (BSS) approaches. Besides, it is robust across different speech signals, and all the separated signals exhibit similar perceptual quality.

  17. Entropy Viscosity and L1-based Approximations of PDEs: Exploiting Sparsity

    2015-10-23

    AFRL-AFOSR-VA-TR-2015-0337, final report (period covered: 01-07-2012 to 30-06-2015), Jean-Luc Guermond, Texas A&M University. ... conservation equations can be stabilized by using the so-called entropy viscosity method, and we proposed to investigate this new technique. We

  18. Convex relaxations of spectral sparsity for robust super-resolution and line spectrum estimation

    Chi, Yuejie

    2017-08-01

    We consider recovering the amplitudes and locations of spikes in a point source signal from its low-pass spectrum that may suffer from missing data and arbitrary outliers. We first review and provide a unified view of several recently proposed convex relaxations that characterize and capitalize on the spectral sparsity of the point source signal without discretization under the framework of atomic norms. Next we propose a new algorithm for when the spikes are known a priori to be positive, motivated by applications such as neural spike sorting and fluorescence microscopy imaging. Numerical experiments are provided to demonstrate the effectiveness of the proposed approach.

  19. Resolution enhancement of lung 4D-CT via group-sparsity

    Bhavsar, Arnav; Wu, Guorong; Shen, Dinggang; Lian, Jun

    2013-01-01

    Purpose: 4D-CT typically delivers more accurate information about anatomical structures in the lung, over 3D-CT, due to its ability to capture visual information of the lung motion across different respiratory phases. This helps to better determine the dose during radiation therapy for lung cancer. However, a critical concern with 4D-CT that substantially compromises this advantage is the low superior-inferior resolution due to the smaller number of acquired slices, intended to control the CT radiation dose. To address this limitation, the authors propose an approach to reconstruct missing intermediate slices, so as to improve the superior-inferior resolution.Methods: In this method the authors exploit the observation that sampling information across respiratory phases in 4D-CT can be complementary due to lung motion. The authors' approach uses this locally complementary information across phases in a patch-based sparse-representation framework. Moreover, unlike some recent approaches that treat local patches independently, the authors' approach employs the group-sparsity framework that imposes neighborhood and similarity constraints between patches. This helps in mitigating the trade-off between noise robustness and structure preservation, which is an important consideration in resolution enhancement. The authors discuss the regularizing ability of group-sparsity, which helps in reducing the effect of noise and enables better structural localization and enhancement.Results: The authors perform extensive experiments on the publicly available DIR-Lab Lung 4D-CT dataset [R. Castillo, E. Castillo, R. Guerra, V. Johnson, T. McPhail, A. Garg, and T. Guerrero, "A framework for evaluation of deformable image registration spatial accuracy using large landmark point sets," Phys. Med. Biol. 54, 1849–1870 (2009)]. First, the authors carry out empirical parametric analysis of some important parameters in their approach. The authors then demonstrate, qualitatively as well as
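
    Because the key regularizer here is group sparsity, a minimal sketch of the corresponding proximal operator (block soft-thresholding of the l2,1 norm) may help; the group layout and threshold below are illustrative assumptions, not the authors' patch grouping.

      # Block (group) soft-thresholding: whole groups shrink or vanish together.
      import numpy as np

      def group_soft_threshold(C, tau):
          """C: (n_groups, group_size). Shrink each row by tau in its l2 norm."""
          norms = np.linalg.norm(C, axis=1, keepdims=True)
          scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
          return C * scale

      C = np.random.default_rng(0).standard_normal((5, 4))
      print(group_soft_threshold(C, tau=1.5))   # weak rows vanish entirely, strong rows shrink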

  20. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    Zhu, Xiaofeng

    2015-05-28

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (including test images and training images) represented by multiple visual features, the MMKR first maps them into a high-dimensional space, e.g., a reproducing kernel Hilbert space (RKHS), where test images are then linearly reconstructed by some representative training images, rather than all of them. Furthermore, a classification rule is proposed to classify test images. Experimental results on real datasets show the effectiveness of the proposed MMKR when compared to state-of-the-art algorithms.

  1. Uncertainty theory

    Liu, Baoding

    2015-01-01

    When no samples are available to estimate a probability distribution, we have to invite some domain experts to evaluate the belief degree that each event will happen. Perhaps some people think that the belief degree should be modeled by subjective probability or fuzzy set theory. However, it is usually inappropriate because both of them may lead to counterintuitive results in this case. In order to rationally deal with belief degrees, uncertainty theory was founded in 2007 and subsequently studied by many researchers. Nowadays, uncertainty theory has become a branch of axiomatic mathematics for modeling belief degrees. This is an introductory textbook on uncertainty theory, uncertain programming, uncertain statistics, uncertain risk analysis, uncertain reliability analysis, uncertain set, uncertain logic, uncertain inference, uncertain process, uncertain calculus, and uncertain differential equation. This textbook also shows applications of uncertainty theory to scheduling, logistics, networks, data mining, c...

  2. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography

    Jørgensen, Jakob Sauer; Sidky, E. Y.

    2015-01-01

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers ... measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means...
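
    A hedged sketch of an empirical phase-diagram experiment in the spirit of the abstract: for each sparsity level at a fixed undersampling ratio, draw random instances, solve equality-constrained l1 minimization (basis pursuit, posed here as a linear program), and record the recovery rate. Gaussian matrices stand in for the CT system matrix, so this illustrates the methodology only, not the paper's results.

      # One row of an empirical phase diagram via basis pursuit as a linear program.
      import numpy as np
      from scipy.optimize import linprog

      def basis_pursuit(A, y):
          """min ||x||_1 s.t. Ax = y, via the split x = u - v with u, v >= 0."""
          m, n = A.shape
          c = np.ones(2 * n)
          A_eq = np.hstack([A, -A])
          res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n), method="highs")
          return res.x[:n] - res.x[n:]

      def success_rate(n=60, m=30, k=5, trials=10, tol=1e-4):
          rng = np.random.default_rng(0)
          ok = 0
          for _ in range(trials):
              A = rng.standard_normal((m, n))
              x = np.zeros(n)
              x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
              ok += np.linalg.norm(basis_pursuit(A, A @ x) - x) < tol * np.linalg.norm(x)
          return ok / trials

      # Fixed undersampling m/n, increasing sparsity level k.
      for k in (2, 5, 10, 15):
          print(f"k={k:2d}  recovery rate = {success_rate(k=k):.1f}")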

  3. A sparsity-regularized Born iterative method for reconstruction of two-dimensional piecewise continuous inhomogeneous domains

    Sandhu, Ali Imran; Desmal, Abdulla; Bagci, Hakan

    2016-01-01

    A sparsity-regularized Born iterative method (BIM) is proposed for efficiently reconstructing two-dimensional piecewise-continuous inhomogeneous dielectric profiles. Such profiles are typically not spatially sparse, which reduces the efficiency of the sparsity-promoting regularization. To overcome this problem, scattered fields are represented in terms of the spatial derivative of the dielectric profile and reconstruction is carried out over samples of the dielectric profile's derivative. Then, like the conventional BIM, the nonlinear problem is iteratively converted into a sequence of linear problems (in derivative samples) and sparsity constraint is enforced on each linear problem using the thresholded Landweber iterations. Numerical results, which demonstrate the efficiency and accuracy of the proposed method in reconstructing piecewise-continuous dielectric profiles, are presented.

  4. A sparsity-regularized Born iterative method for reconstruction of two-dimensional piecewise continuous inhomogeneous domains

    Sandhu, Ali Imran

    2016-04-10

    A sparsity-regularized Born iterative method (BIM) is proposed for efficiently reconstructing two-dimensional piecewise-continuous inhomogeneous dielectric profiles. Such profiles are typically not spatially sparse, which reduces the efficiency of the sparsity-promoting regularization. To overcome this problem, scattered fields are represented in terms of the spatial derivative of the dielectric profile and reconstruction is carried out over samples of the dielectric profile's derivative. Then, like the conventional BIM, the nonlinear problem is iteratively converted into a sequence of linear problems (in derivative samples) and sparsity constraint is enforced on each linear problem using the thresholded Landweber iterations. Numerical results, which demonstrate the efficiency and accuracy of the proposed method in reconstructing piecewise-continuous dielectric profiles, are presented.

  5. COLLAGE-BASED INVERSE PROBLEMS FOR IFSM WITH ENTROPY MAXIMIZATION AND SPARSITY CONSTRAINTS

    Herb Kunze

    2013-11-01

    We consider the inverse problem associated with IFSM: given a target function f, find an IFSM such that its invariant fixed point is sufficiently close to f in the Lp distance. In this paper, we extend the collage-based method developed by Forte and Vrscay (1995) along two different directions. We first search for a set of mappings that not only minimizes the collage error but also maximizes the entropy of the dynamical system. We then include an extra term in the minimization process which takes into account the sparsity of the set of mappings. In this new formulation, the minimization of collage error is treated as a multi-criteria problem: we consider three different and conflicting criteria, i.e., collage error, entropy and sparsity. To solve this multi-criteria program we proceed by scalarization and reduce the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented. Numerical studies indicate that a maximum entropy principle exists for this approximation problem, i.e., that the suboptimal solutions produced by collage coding can be improved at least slightly by adding a maximum entropy criterion.

  6. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-01-01

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy. (paper)
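
    The ingredient highlighted above, a closed-form update for an orthogonality-constrained dictionary, can be sketched as an orthogonal Procrustes step alternated with simple thresholding-based sparse coding. The patch data, sparsity level, and iteration count are synthetic assumptions, not the authors' reconstruction algorithm.

      # Orthogonal dictionary learning: thresholding-based coding + Procrustes update.
      import numpy as np

      rng = np.random.default_rng(0)
      d, n, k = 36, 2000, 6                       # patch dimension, #patches, sparsity
      X = rng.standard_normal((d, n))             # columns are (vectorized) image patches
      D = np.linalg.qr(rng.standard_normal((d, d)))[0]   # initial orthogonal dictionary

      for _ in range(10):
          # Sparse coding: with an orthogonal D, the best k-sparse code keeps the k
          # largest-magnitude entries of D^T x for each patch.
          C = D.T @ X
          thresh = np.sort(np.abs(C), axis=0)[-k]         # per-column k-th largest value
          C[np.abs(C) < thresh] = 0.0
          # Dictionary update: argmin_D ||X - D C||_F s.t. D^T D = I  (orthogonal Procrustes).
          U, _, Vt = np.linalg.svd(X @ C.T)
          D = U @ Vt

      print("relative residual:", np.linalg.norm(X - D @ C) / np.linalg.norm(X))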

  7. A novel framework to alleviate the sparsity problem in context-aware recommender systems

    Yu, Penghua; Lin, Lanfen; Wang, Jing

    2017-04-01

    Recommender systems have become indispensable for services in the era of big data. To improve accuracy and satisfaction, context-aware recommender systems (CARSs) attempt to incorporate contextual information into recommendations. Typically, valid and influential contexts are determined in advance by domain experts or feature selection approaches. Most studies have focused on utilizing the unitary context due to the differences between various contexts. Meanwhile, multi-dimensional contexts will aggravate the sparsity problem, which means that the user preference matrix would become extremely sparse. Consequently, there are not enough or even no preferences in most multi-dimensional conditions. In this paper, we propose a novel framework to alleviate the sparsity issue for CARSs, especially when multi-dimensional contextual variables are adopted. Motivated by the intuition that the overall preferences tend to show similarities among specific groups of users and conditions, we first explore to construct one contextual profile for each contextual condition. In order to further identify those user and context subgroups automatically and simultaneously, we apply a co-clustering algorithm. Furthermore, we expand user preferences in a given contextual condition with the identified user and context clusters. Finally, we perform recommendations based on expanded preferences. Extensive experiments demonstrate the effectiveness of the proposed framework.

  8. OFDM Radar Space-Time Adaptive Processing by Exploiting Spatio-Temporal Sparsity

    Sen, Satyabrata [ORNL

    2013-01-01

    We propose a sparsity-based space-time adaptive processing (STAP) algorithm to detect a slowly-moving target using an orthogonal frequency division multiplexing (OFDM) radar. We observe that the target and interference spectra are inherently sparse in the spatio-temporal domain. Hence, we exploit that sparsity to develop an efficient STAP technique that utilizes a considerably smaller amount of secondary data and produces performance equivalent to the other existing STAP techniques. In addition, the use of an OFDM signal increases the frequency diversity of our system, as different scattering centers of a target resonate at different frequencies, and thus improves the target detectability. First, we formulate a realistic sparse-measurement model for an OFDM radar considering both the clutter and jammer as the interfering sources. Then, we apply a residual sparse-recovery technique based on the LASSO estimator to estimate the target and interference covariance matrices, and subsequently compute the optimal STAP-filter weights. Our numerical results demonstrate a comparative performance analysis of the proposed sparse-STAP algorithm with four other existing STAP methods. Furthermore, we discover that the OFDM-STAP filter-weights are adaptable to the frequency variabilities of the target and interference responses, in addition to the spatio-temporal variabilities. Hence, by better utilizing the frequency variabilities, we propose an adaptive OFDM-waveform design technique, and consequently gain a significant amount of STAP-performance improvement.

  9. Multi-energy CT based on a prior rank, intensity and sparsity model (PRISM)

    Gao, Hao; Osher, Stanley; Yu, Hengyong; Wang, Ge

    2011-01-01

    We propose a compressive sensing approach for multi-energy computed tomography (CT), namely the prior rank, intensity and sparsity model (PRISM). To further compress the multi-energy image for allowing the reconstruction with fewer CT data and less radiation dose, the PRISM models a multi-energy image as the superposition of a low-rank matrix and a sparse matrix (with row dimension in space and column dimension in energy), where the low-rank matrix corresponds to the stationary background over energy that has a low matrix rank, and the sparse matrix represents the rest of distinct spectral features that are often sparse. Distinct from previous methods, the PRISM utilizes the generalized rank, e.g., the matrix rank of tight-frame transform of a multi-energy image, which offers a way to characterize the multi-level and multi-filtered image coherence across the energy spectrum. Besides, the energy-dependent intensity information can be incorporated into the PRISM in terms of the spectral curves for base materials, with which the restoration of the multi-energy image becomes the reconstruction of the energy-independent material composition matrix. In other words, the PRISM utilizes prior knowledge on the generalized rank and sparsity of a multi-energy image, and intensity/spectral characteristics of base materials. Furthermore, we develop an accurate and fast split Bregman method for the PRISM and demonstrate the superior performance of the PRISM relative to several competing methods in simulations. (papers)
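
    A hedged sketch of the low-rank plus sparse split that PRISM builds on: decompose a pixels-by-energy matrix into L + S with singular-value thresholding and soft thresholding, here via a plain alternating proximal scheme rather than the paper's split Bregman solver; the data and penalty weights are synthetic.

      # Low-rank (background over energy) + sparse (distinct spectral features) split.
      import numpy as np

      def svt(M, tau):
          """Singular value thresholding: proximal operator of the nuclear norm."""
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

      def soft(M, tau):
          return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

      rng = np.random.default_rng(0)
      n_pix, n_energy, rank = 400, 8, 2
      L_true = rng.standard_normal((n_pix, rank)) @ rng.standard_normal((rank, n_energy))
      S_true = np.zeros((n_pix, n_energy))
      idx = rng.choice(n_pix * n_energy, 80, replace=False)
      S_true.flat[idx] = 5 * rng.standard_normal(80)
      M = L_true + S_true

      L = np.zeros_like(M)
      S = np.zeros_like(M)
      for _ in range(100):
          L = svt(M - S, tau=1.0)          # low-rank background over energy
          S = soft(M - L, tau=0.5)         # sparse spectral features
      print("rank(L) =", np.linalg.matrix_rank(L, tol=1e-6),
            " sparse fraction =", np.mean(np.abs(S) > 1e-6).round(3))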

  10. Sparsity guided empirical wavelet transform for fault diagnosis of rolling element bearings

    Wang, Dong; Zhao, Yang; Yi, Cai; Tsui, Kwok-Leung; Lin, Jianhui

    2018-02-01

    Rolling element bearings are widely used in various industrial machines, such as electric motors, generators, pumps, gearboxes, railway axles, turbines, and helicopter transmissions. Fault diagnosis of rolling element bearings is beneficial to preventing any unexpected accident and reducing economic loss. In the past years, many bearing fault detection methods have been developed. Recently, a new adaptive signal processing method called empirical wavelet transform attracts much attention from readers and engineers and its applications to bearing fault diagnosis have been reported. The main problem of empirical wavelet transform is that Fourier segments required in empirical wavelet transform are strongly dependent on the local maxima of the amplitudes of the Fourier spectrum of a signal, which connotes that Fourier segments are not always reliable and effective if the Fourier spectrum of the signal is complicated and overwhelmed by heavy noises and other strong vibration components. In this paper, sparsity guided empirical wavelet transform is proposed to automatically establish Fourier segments required in empirical wavelet transform for fault diagnosis of rolling element bearings. Industrial bearing fault signals caused by single and multiple railway axle bearing defects are used to verify the effectiveness of the proposed sparsity guided empirical wavelet transform. Results show that the proposed method can automatically discover Fourier segments required in empirical wavelet transform and reveal single and multiple railway axle bearing defects. Besides, some comparisons with three popular signal processing methods including ensemble empirical mode decomposition, the fast kurtogram and the fast spectral correlation are conducted to highlight the superiority of the proposed method.

  11. Sparsity-promoting orthogonal dictionary updating for image reconstruction from highly undersampled magnetic resonance data.

    Huang, Jinhong; Guo, Li; Feng, Qianjin; Chen, Wufan; Feng, Yanqiu

    2015-07-21

    Image reconstruction from undersampled k-space data accelerates magnetic resonance imaging (MRI) by exploiting image sparseness in certain transform domains. Employing image patch representation over a learned dictionary has the advantage of being adaptive to local image structures and thus can better sparsify images than using fixed transforms (e.g. wavelets and total variations). Dictionary learning methods have recently been introduced to MRI reconstruction, and these methods demonstrate significantly reduced reconstruction errors compared to sparse MRI reconstruction using fixed transforms. However, the synthesis sparse coding problem in dictionary learning is NP-hard and computationally expensive. In this paper, we present a novel sparsity-promoting orthogonal dictionary updating method for efficient image reconstruction from highly undersampled MRI data. The orthogonality imposed on the learned dictionary enables the minimization problem in the reconstruction to be solved by an efficient optimization algorithm which alternately updates representation coefficients, orthogonal dictionary, and missing k-space data. Moreover, both sparsity level and sparse representation contribution using updated dictionaries gradually increase during iterations to recover more details, assuming the progressively improved quality of the dictionary. Simulation and real data experimental results both demonstrate that the proposed method is approximately 10 to 100 times faster than the K-SVD-based dictionary learning MRI method and simultaneously improves reconstruction accuracy.
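
    The computational advantage of an orthogonal dictionary can be seen in a minimal sketch: with an orthogonal D, sparse coding reduces to thresholding the analysis coefficients, and the dictionary update has a closed-form solution via an SVD (orthogonal Procrustes). The code below is a generic illustration of that alternating scheme on random patches, not the authors' MRI reconstruction pipeline; the patch data, sparsity level, and iteration count are arbitrary assumptions.

```python
import numpy as np

def sparse_code(D, Y, k):
    """With an orthogonal dictionary, sparse coding is hard thresholding of the
    analysis coefficients D.T @ Y (keep the k largest entries per column)."""
    C = D.T @ Y
    idx = np.argsort(np.abs(C), axis=0)[:-k, :]       # indices of the small entries
    np.put_along_axis(C, idx, 0.0, axis=0)
    return C

def update_dictionary(Y, C):
    """Closed-form orthogonal dictionary update (orthogonal Procrustes):
    argmin_D ||Y - D C||_F s.t. D.T D = I is D = U V.T, with U S V.T = svd(Y C.T)."""
    U, _, Vt = np.linalg.svd(Y @ C.T)
    return U @ Vt

# toy data: 64-dimensional patches, 500 of them, sparsity level 6
rng = np.random.default_rng(1)
Y = rng.standard_normal((64, 500))
D = np.linalg.qr(rng.standard_normal((64, 64)))[0]    # random orthogonal start
for _ in range(10):                                   # alternate coding and update
    C = sparse_code(D, Y, k=6)
    D = update_dictionary(Y, C)
```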

  12. On the interplay of basis smoothness and specific range conditions occurring in sparsity regularization

    Anzengruber, Stephan W; Hofmann, Bernd; Ramlau, Ronny

    2013-01-01

    The convergence rate results in ℓ1-regularization when the sparsity assumption is narrowly missed, presented by Burger et al (2013 Inverse Problems 29 025013), are based on a crucial condition which requires that all basis elements belong to the range of the adjoint of the forward operator. It was partly conjectured that such a condition is very restrictive. In this context, we study sparsity-promoting varieties of Tikhonov regularization for linear ill-posed problems with respect to an orthonormal basis in a separable Hilbert space using ℓ1 and sublinear penalty terms. In particular, we show that the corresponding range condition is always satisfied for all basis elements if the problems are well-posed in a certain weaker topology and the basis elements are chosen appropriately related to an associated Gelfand triple. The Radon transform, Symm’s integral equation and linear integral operators of Volterra type are examples of such behaviour, which allows us to apply convergence rate results for non-sparse solutions, and we further extend these results also to the case of non-convex ℓq-regularization with 0 < q < 1.

  13. Study on highly efficient seismic data acquisition and processing methods based on sparsity constraint

    Wang, H.; Chen, S.; Tao, C.; Qiu, L.

    2017-12-01

    High-density, high-fold and wide-azimuth seismic data acquisition methods are widely used to cope with increasingly sophisticated exploration targets, but acquisition periods and costs keep increasing. We carry out a study of highly efficient seismic data acquisition and processing methods based on sparse representation theory (or compressed sensing theory), and achieve some innovative results. The theoretical principles of highly efficient acquisition and processing are studied. We first formulate sparse representation theory based on the wave equation. Then we study highly efficient seismic sampling methods and present an optimized piecewise-random sampling method based on sparsity prior information. Finally, a reconstruction strategy with a sparsity constraint is developed; a two-step recovery approach combining a sparsity-promoting method and the hyperbolic Radon transform is also put forward. These three aspects constitute the theory of highly efficient seismic data acquisition. Specific implementation strategies for highly efficient acquisition and processing are then studied according to this theory. First, we propose a method for designing highly efficient acquisition networks with the help of the optimized piecewise-random sampling method. Second, we propose two types of highly efficient seismic data acquisition methods based on (1) single sources and (2) blended (or simultaneous) sources. Third, reconstruction procedures corresponding to these two acquisition types are proposed to obtain the seismic data on a regular acquisition network. The impact of blended shooting on the imaging result is discussed. In the end, we implement numerical tests based on the Marmousi model. The achieved results show: (1) the theoretical framework of highly efficient seismic data acquisition and processing

  14. Teaching Uncertainties

    Duerdoth, Ian

    2009-01-01

    The subject of uncertainties (sometimes called errors) is traditionally taught (to first-year science undergraduates) towards the end of a course on statistics that defines probability as the limit of many trials, and discusses probability distribution functions and the Gaussian distribution. We show how to introduce students to the concepts of…

  15. Calibration uncertainty

    Heydorn, Kaj; Anglov, Thomas

    2002-01-01

    Methods recommended by the International Standardization Organisation and Eurachem are not satisfactory for the correct estimation of calibration uncertainty. A novel approach is introduced and tested on actual calibration data for the determination of Pb by ICP-AES. The improved calibration...

  16. Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront

    Tamamitsu, Miu; Zhang, Yibo; Wang, Hongda; Wu, Yichen; Ozcan, Aydogan

    2017-01-01

    of sparsity metrics used in SoG, specifically, the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC

  17. Gap Resolution

    2017-04-25

    Gap Resolution is a software package that was developed to improve Newbler genome assemblies by automating the closure of sequence gaps caused by repetitive regions in the DNA. This is done by performing the following steps: 1) identify and distribute the data for each gap into sub-projects; 2) assemble the data associated with each sub-project using a secondary assembler, such as Newbler or PGA; 3) determine whether any gaps are closed after reassembly, and either design fakes (consensus of the closed gap) for those that closed or lab experiments for those that require additional data. The software requires as input a genome assembly produced by the Newbler assembler provided by Roche and 454 data containing paired-end reads.

  18. l0 Sparsity for Image Denoising with Local and Global Priors

    Xiaoni Gao

    2015-01-01

    We propose an l0 sparsity based approach to remove additive white Gaussian noise from a given image. To achieve this goal, we combine the local prior and the global prior to recover the noise-free values of pixels. The local prior depends on the neighborhood relationships within a search window to help maintain edges and smoothness. The global prior is generated from a hierarchical l0 sparse representation to help eliminate redundant information and preserve global consistency. In addition, to make the correlations between pixels more meaningful, we adopt Principal Component Analysis to measure the similarities, which both reduces the computational complexity and improves the accuracy. Experiments on the benchmark image set show that the proposed approach achieves superior performance to state-of-the-art approaches in both accuracy and perception when removing zero-mean additive white Gaussian noise.

  19. Improved k-t PCA Algorithm Using Artificial Sparsity in Dynamic MRI.

    Wang, Yiran; Chen, Zhifeng; Wang, Jing; Yuan, Lixia; Xia, Ling; Liu, Feng

    2017-01-01

    The k-t principal component analysis (k-t PCA) is an effective approach for high spatiotemporal resolution dynamic magnetic resonance (MR) imaging. However, it suffers from larger residual aliasing artifacts and noise amplification when the reduction factor goes higher. To further enhance the performance of this technique, we propose a new method called sparse k-t PCA that combines the k-t PCA algorithm with an artificial sparsity constraint. It is a self-calibrated procedure that is based on the traditional k-t PCA method by further eliminating the reconstruction error derived from complex subtraction of the sampled k-t space from the original reconstructed k-t space. The proposed method is tested through both simulations and in vivo datasets with different reduction factors. Compared to the standard k-t PCA algorithm, the sparse k-t PCA can improve the normalized root-mean-square error performance and the accuracy of temporal resolution. It is thus useful for rapid dynamic MR imaging.

  20. Landmark matching based retinal image alignment by enforcing sparsity in correspondence matrix.

    Zheng, Yuanjie; Daniel, Ebenezer; Hunter, Allan A; Xiao, Rui; Gao, Jianbin; Li, Hongsheng; Maguire, Maureen G; Brainard, David H; Gee, James C

    2014-08-01

    Retinal image alignment is fundamental to many applications in the diagnosis of eye diseases. In this paper, we address the problem of landmark matching based retinal image alignment. We propose a novel landmark matching formulation by enforcing sparsity in the correspondence matrix and offer solutions based on linear programming. The proposed formulation not only enables a joint estimation of the landmark correspondences and a predefined transformation model but also combines the benefits of the softassign strategy (Chui and Rangarajan, 2003) and the combinatorial optimization of linear programming. We also introduce a set of reinforced self-similarity descriptors which can better characterize local photometric and geometric properties of the retinal image. Theoretical analysis and experimental results with both fundus color images and angiogram images show the superior performance of our algorithms over several state-of-the-art techniques.

  1. Sparsity-Based Pixel Super Resolution for Lens-Free Digital In-line Holography.

    Song, Jun; Leon Swisher, Christine; Im, Hyungsoon; Jeong, Sangmoo; Pathania, Divya; Iwamoto, Yoshiko; Pivovarov, Misha; Weissleder, Ralph; Lee, Hakho

    2016-04-21

    Lens-free digital in-line holography (LDIH) is a promising technology for portable, wide field-of-view imaging. Its resolution, however, is limited by the inherent pixel size of an imaging device. Here we present a new computational approach to achieve sub-pixel resolution for LDIH. The developed method is a sparsity-based reconstruction with the capability to handle the non-linear nature of LDIH. We systematically characterized the algorithm through simulation and LDIH imaging studies. The method achieved the spatial resolution down to one-third of the pixel size, while requiring only single-frame imaging without any hardware modifications. This new approach can be used as a general framework to enhance the resolution in nonlinear holographic systems.

  2. Gap Junctions

    Nielsen, Morten Schak; Axelsen, Lene Nygaard; Sorgen, Paul L.; Verma, Vandana; Delmar, Mario; Holstein-Rathlou, Niels-Henrik

    2013-01-01

    Gap junctions are essential to the function of multicellular animals, which require a high degree of coordination between cells. In vertebrates, gap junctions comprise connexins and currently 21 connexins are known in humans. The functions of gap junctions are highly diverse and include exchange of metabolites and electrical signals between cells, as well as functions, which are apparently unrelated to intercellular communication. Given the diversity of gap junction physiology, regulation of gap junction activity is complex. The structure of the various connexins is known to some extent; and structural rearrangements and intramolecular interactions are important for regulation of channel function. Intercellular coupling is further regulated by the number and activity of channels present in gap junctional plaques. The number of connexins in cell-cell channels is regulated by controlling transcription, translation, trafficking, and degradation; and all of these processes are under strict control. Once in the membrane, channel activity is determined by the conductive properties of the connexin involved, which can be regulated by voltage and chemical gating, as well as a large number of posttranslational modifications. The aim of the present article is to review our current knowledge on the structure, regulation, function, and pharmacology of gap junctions. This will be supported by examples of how different connexins and their regulation act in concert to achieve appropriate physiological control, and how disturbances of connexin function can lead to disease. © 2012 American Physiological Society. Compr Physiol 2:1981-2035, 2012. PMID:23723031

  3. Demand Uncertainty

    Nguyen, Daniel Xuyen

    This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models. This retooling addresses several shortcomings. First, the imperfect correlation of demands reconciles the sales variation observed in and across destinations. Second, since demands for the firm's output are correlated across destinations, a firm can use previously realized demands to forecast unknown demands in untested destinations. The option to forecast demands causes firms to delay exporting in order to gather more information about foreign demand. Third, since uncertainty is resolved after entry, many firms enter a destination and then exit after learning that they cannot profit. This prediction reconciles...

  4. Mythic gaps

    William Hansen

    2014-11-01

    Different kinds of omissions sometimes occur, or are perceived to occur, in traditional narratives and in tradition-inspired literature. A familiar instance is when a narrator realizes that he or she does not fully remember the story that he or she has begun to tell, and so leaves out part of it, which for listeners may possibly result in an unintelligible narrative. But many instances of narrative gap are not so obvious. From straightforward, objective gaps one can distinguish less-obvious subjective gaps: in many cases narrators do not leave out anything crucial or truly relevant from their exposition, and yet readers perceive gaps and take steps to fill them. The present paper considers four examples of subjective gaps drawn from ancient Greek literature (the Pandora myth), ancient Roman literature (the Pygmalion legend), ancient Hebrew literature (the Joseph legend), and early Christian literature (the Jesus legend). I consider the quite varied ways in which interpreters expand the inherited texts of these stories, such as by devising names, manufacturing motives, creating backstories, and in general filling in biographical ellipses. Finally, I suggest an explanation for the phenomenon of subjective gaps, arguing that, despite their variety, they have a single cause.

  5. Uncertainty, joint uncertainty, and the quantum uncertainty principle

    Narasimhachar, Varun; Poostindouz, Alireza; Gour, Gilad

    2016-01-01

    Historically, the element of uncertainty in quantum mechanics has been expressed through mathematical identities called uncertainty relations, a great many of which continue to be discovered. These relations use diverse measures to quantify uncertainty (and joint uncertainty). In this paper we use operational information-theoretic principles to identify the common essence of all such measures, thereby defining measure-independent notions of uncertainty and joint uncertainty. We find that most existing entropic uncertainty relations use measures of joint uncertainty that yield themselves to a small class of operational interpretations. Our notion relaxes this restriction, revealing previously unexplored joint uncertainty measures. To illustrate the utility of our formalism, we derive an uncertainty relation based on one such new measure. We also use our formalism to gain insight into the conditions under which measure-independent uncertainty relations can be found.

  6. Photometric Uncertainties

    Zou, Xiao-Duan; Li, Jian-Yang; Clark, Beth Ellen; Golish, Dathon

    2018-01-01

    The OSIRIS-REx spacecraft, launched in September 2016, will study the asteroid Bennu and return a sample from its surface to Earth in 2023. Bennu is a near-Earth carbonaceous asteroid which will provide insight into the formation and evolution of the solar system. OSIRIS-REx will first approach Bennu in August 2018 and will study the asteroid for approximately two years before sampling. OSIRIS-REx will develop its photometric model (including Lommel-Seeliger, ROLO, McEwen, Minnaert and Akimov) of Bennu with OCAM and OVIRS during the Detailed Survey mission phase. The model developed during this phase will be used to photometrically correct the OCAM and OVIRS data. Here we present an analysis of the error in the photometric corrections. Based on our testing data sets, we find: (1) the model uncertainties are only correct when computed with the covariance matrix, because the parameters are highly correlated; (2) there is no evidence that any single parameter dominates in any of the models; (3) the model error and the data error contribute comparably to the final correction error; (4) tests of the uncertainty module on simulated and real data sets show that model performance depends on data coverage and data quality, giving us a better understanding of how the different models behave in different cases; (5) the L-S model is more reliable than the others, perhaps because the simulated data are based on the L-S model, although the test on real data (SPDIF) also shows a slight advantage for L-S; ROLO is not reliable for calculating Bond albedo; the uncertainty of the McEwen model is large in most cases; and Akimov behaves unphysically on the SOPIE 1 data; (6) L-S is therefore the preferred default choice, a conclusion based mainly on our tests on the SOPIE data and IPDIF.

  7. Uncertainty analysis

    Thomas, R.E.

    1982-03-01

    An evaluation is made of the suitability of analytical and statistical sampling methods for making uncertainty analyses. The adjoint method is found to be well-suited for obtaining sensitivity coefficients for computer programs involving large numbers of equations and input parameters. For this purpose the Latin Hypercube Sampling method is found to be inferior to conventional experimental designs. The Latin hypercube method can be used to estimate output probability density functions, but requires supplementary rank transformations followed by stepwise regression to obtain uncertainty information on individual input parameters. A simple Cork and Bottle problem is used to illustrate the efficiency of the adjoint method relative to certain statistical sampling methods. For linear models of the form Ax=b it is shown that a complete adjoint sensitivity analysis can be made without formulating and solving the adjoint problem. This can be done either by using a special type of statistical sampling or by reformulating the primal problem and using suitable linear programming software.
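
    As a concrete illustration of the statistical-sampling side of such an analysis, the sketch below draws a Latin hypercube design with SciPy, propagates it through a placeholder model, and screens inputs with a rank (Spearman) correlation. The model function, parameter ranges, and sample size are assumptions for illustration only and do not reproduce the report's Cork and Bottle example.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

def model(x):
    """Placeholder model y = f(inputs); replace with the code being analysed."""
    return 3.0 * x[:, 0] - 2.0 * x[:, 1] + 0.5 * x[:, 2] ** 2

# draw a Latin hypercube design for 3 input parameters, 100 runs
sampler = qmc.LatinHypercube(d=3, seed=42)
unit = sampler.random(n=100)                           # samples in [0, 1)^3
lo = np.array([0.0, 1.0, -1.0]); hi = np.array([1.0, 2.0, 1.0])
X = qmc.scale(unit, lo, hi)                            # map to parameter ranges

y = model(X)                                           # propagate through the model
print("output mean %.3f, std %.3f" % (y.mean(), y.std(ddof=1)))

# rank-based screening: Spearman correlation between each input and the output
for j in range(X.shape[1]):
    rho, _ = spearmanr(X[:, j], y)
    print("parameter %d: Spearman rho = %.2f" % (j, rho))
```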

  8. Uncertainty quantification for environmental models

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10
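
    Among the methods listed above, Markov chain Monte Carlo is the easiest to sketch in a few lines. The example below is a minimal random-walk Metropolis sampler for a toy two-parameter exponential-decay model with a Gaussian likelihood; the model, prior, and tuning constants are assumptions for illustration and are not taken from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "environmental model": y = a * exp(-b * t), observed with noise
t = np.linspace(0.0, 10.0, 25)
true_theta = np.array([2.0, 0.3])
y_obs = true_theta[0] * np.exp(-true_theta[1] * t) + 0.05 * rng.standard_normal(t.size)
sigma = 0.05

def log_post(theta):
    """Gaussian log-likelihood plus a flat prior restricted to a, b > 0."""
    if np.any(theta <= 0):
        return -np.inf
    resid = y_obs - theta[0] * np.exp(-theta[1] * t)
    return -0.5 * np.sum((resid / sigma) ** 2)

# random-walk Metropolis
theta = np.array([1.0, 1.0]); lp = log_post(theta)
chain = []
for _ in range(20000):
    prop = theta + 0.02 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:            # accept with prob min(1, ratio)
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain)[5000:]                          # discard burn-in
print("posterior means:", chain.mean(axis=0))
```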

  9. A robust holographic autofocusing criterion based on edge sparsity: comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront

    Tamamitsu, Miu; Zhang, Yibo; Wang, Hongda; Wu, Yichen; Ozcan, Aydogan

    2018-02-01

    The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, where the gradient modulus of the complex refocused hologram is calculated, on which a sparsity metric is applied. Here, we compare two different choices of sparsity metrics used in SoG, specifically, the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistive to background noise. These predictions are also confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that ToG and GoG offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.
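
    The two sparsity metrics compared here have simple closed forms, so a minimal sketch is easy to give: the Gini index of a sorted non-negative vector and the Tamura coefficient sqrt(std/mean), each applied to the gradient modulus of a refocused image (ToG and GoG above appear to denote the Tamura and Gini metrics computed on the gradient). The hologram refocusing step itself is omitted; the refocus function referenced in the comments is hypothetical.

```python
import numpy as np

def gini_index(x):
    """Gini index (Hurley-Rickard form) of a non-negative array; 0 = uniform, -> 1 = sparse."""
    a = np.sort(np.abs(x).ravel())
    N = a.size
    if a.sum() == 0:
        return 0.0
    k = np.arange(1, N + 1)
    return 1.0 - 2.0 * np.sum((a / a.sum()) * (N - k + 0.5) / N)

def tamura_coefficient(x):
    """Tamura coefficient sqrt(std / mean) of a non-negative array."""
    a = np.abs(x).ravel()
    return np.sqrt(a.std() / (a.mean() + 1e-12))

def sparsity_of_gradient(img, metric):
    """Apply a sparsity metric to the gradient modulus of a (complex) refocused image."""
    gy, gx = np.gradient(img)
    return metric(np.sqrt(np.abs(gx) ** 2 + np.abs(gy) ** 2))

# autofocusing loop (sketch): refocus(hologram, z) is a hypothetical placeholder
# scores = [sparsity_of_gradient(refocus(hologram, z), tamura_coefficient) for z in z_candidates]
# z_focus = z_candidates[int(np.argmax(scores))]
```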

  10. A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors.

    He, Xiyan; Condat, Laurent; Bioucas-Diaz, Jose; Chanussot, Jocelyn; Xia, Junshi

    2014-06-27

    The development of multisensor systems in recent years has led to great increase in the amount of available remote sensing data. Image fusion techniques aim at inferring high quality images of a given area from degraded versions of the same area obtained by multiple sensors. This paper focuses on pansharpening, which is the inference of a high spatial resolution multispectral image from two degraded versions with complementary spectral and spatial resolution characteristics: a) a low spatial resolution multispectral image; and b) a high spatial resolution panchromatic image. We introduce a new variational model based on spatial and spectral sparsity priors for the fusion. In the spectral domain we encourage low-rank structure, whereas in the spatial domain we promote sparsity on the local differences. Given the fact that both panchromatic and multispectral images are integrations of the underlying continuous spectra using different channel responses, we propose to exploit appropriate regularizations based on both spatial and spectral links between panchromatic and the fused multispectral images. A weighted version of the vector Total Variation (TV) norm of the data matrix is employed to align the spatial information of the fused image with that of the panchromatic image. With regard to spectral information, two different types of regularization are proposed to promote a soft constraint on the linear dependence between the panchromatic and the fused multispectral images. The first one estimates directly the linear coefficients from the observed panchromatic and low resolution multispectral images by Linear Regression (LR) while the second one employs the Principal Component Pursuit (PCP) to obtain a robust recovery of the underlying low-rank structure. We also show that the two regularizers are strongly related. The basic idea of both regularizers is that the fused image should have low-rank and preserve edge locations. We use a variation of the recently proposed

  11. Divergent NEE balances from manual-chamber CO2 fluxes linked to different measurement and gap-filling strategies: A source for uncertainty of estimated terrestrial C sources and sinks?

    Huth, Vytas; Vaidya, Shrijana; Hoffmann, Mathias

    2017-01-01

    Manual closed-chamber measurements are commonly used to quantify annual net CO2 ecosystem exchange (NEE) in a wide range of terrestrial ecosystems. However, differences in both the acquisition and gap filling of manual closed-chamber data are large in the existing literature, complicating inter... measurements from sunrise to noon (sunrise approach) to capture a span of light conditions for measurements of NEE with transparent chambers. (2) The second level included three different methods of pooling measured ecosystem respiration (RECO) fluxes for empirical modeling of RECO: campaign-wise (19 single... RECO fluxes (direct GPP modeling) or empirically modeled RECO fluxes from measured NEE fluxes (indirect GPP modeling). Measurements were made during 2013–2014 in a lucerne-clover-grass field in NE Germany. Across the different combinations of measurement and gap-filling options, the NEE balances...

  12. Correlation-constrained and sparsity-controlled vector autoregressive model for spatio-temporal wind power forecasting

    Zhao, Yongning; Ye, Lin; Pinson, Pierre

    2018-01-01

    The ever-increasing number of wind farms has brought both challenges and opportunities in the development of wind power forecasting techniques to take advantage of interdependencies between tens or hundreds of spatially distributed wind farms, e.g., over a region. In this paper, a Sparsity-Controlled Vector Autoregressive (SC-VAR) model ... matrices in a direct manner. However, this original SC-VAR is difficult to implement due to its complicated constraints and the lack of guidelines for setting its parameters. To reduce the complexity of this MINLP and to make it possible to incorporate prior expert knowledge to benefit model building...
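
    A much simpler surrogate for sparsity-controlled VAR modelling, shown purely for illustration, is an l1-penalized (Lasso) VAR fitted equation by equation. This is not the authors' MINLP-based SC-VAR, and the lag order, penalty weight, and toy data below are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def fit_sparse_var(X, p=2, alpha=0.05):
    """Fit a VAR(p) by l1-penalized least squares, one equation per wind farm.
    X has shape (T, n_farms); returns coefficients of shape (n_farms, n_farms * p)."""
    T, n = X.shape
    # lagged design matrix: row t -> [x_{t-1}, ..., x_{t-p}]
    Z = np.hstack([X[p - k - 1:T - k - 1] for k in range(p)])
    Y = X[p:]
    coefs = np.zeros((n, n * p))
    for i in range(n):
        model = Lasso(alpha=alpha, fit_intercept=True, max_iter=10000)
        model.fit(Z, Y[:, i])
        coefs[i] = model.coef_
    return coefs

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 8))                     # toy wind-speed anomalies at 8 sites
A = fit_sparse_var(X, p=2, alpha=0.1)
print("fraction of zero coefficients:", np.mean(A == 0.0))
```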

  13. How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography.

    Jørgensen, J S; Sidky, E Y

    2015-06-13

    We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization.
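
    The flavour of an empirical phase-diagram experiment can be sketched in the Gaussian-sensing setting that the paper uses as its near-optimal reference. The toy code below estimates recovery success over a small grid of undersampling and sparsity levels; for brevity it uses orthogonal matching pursuit as the solver rather than the l1/total-variation programs studied in the paper, so the resulting transition is only indicative.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def recovery_success(n=128, m=64, k=10, trials=20, tol=1e-3, seed=0):
    """Fraction of trials in which a k-sparse signal is recovered from m
    Gaussian measurements of an n-dimensional signal."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        x = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        x[support] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)
        b = A @ x
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
        x_hat = omp.fit(A, b).coef_
        hits += np.linalg.norm(x_hat - x) < tol * np.linalg.norm(x)
    return hits / trials

# sweep undersampling (m/n) and sparsity (k) to build a small empirical phase diagram
n = 128
for m in (32, 64, 96):
    row = [recovery_success(n=n, m=m, k=k) for k in (4, 8, 16)]
    print(f"m={m}: " + " ".join(f"{r:.2f}" for r in row))
```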

  14. Comparison of Gini index and Tamura coefficient for holographic autofocusing based on the edge sparsity of the complex optical wavefront

    Tamamitsu, Miu

    2017-08-27

    The Sparsity of the Gradient (SoG) is a robust autofocusing criterion for holography, where the gradient modulus of the complex refocused hologram is calculated, on which a sparsity metric is applied. Here, we compare two different choices of sparsity metrics used in SoG, specifically, the Gini index (GI) and the Tamura coefficient (TC), for holographic autofocusing on dense/connected or sparse samples. We provide a theoretical analysis predicting that for uniformly distributed image data, TC and GI exhibit similar behavior, while for naturally sparse images containing few high-valued signal entries and many low-valued noisy background pixels, TC is more sensitive to distribution changes in the signal and more resistive to background noise. These predictions are also confirmed by experimental results using SoG-based holographic autofocusing on dense and connected samples (such as stained breast tissue sections) as well as highly sparse samples (such as isolated Giardia lamblia cysts). Through these experiments, we found that ToG and GoG offer almost identical autofocusing performance on dense and connected samples, whereas for naturally sparse samples, GoG should be calculated on a relatively small region of interest (ROI) closely surrounding the object, while ToG offers more flexibility in choosing a larger ROI containing more background pixels.

  15. ENDMEMBER EXTRACTION OF HIGHLY MIXED DATA USING L1 SPARSITY-CONSTRAINED MULTILAYER NONNEGATIVE MATRIX FACTORIZATION

    H. Fang

    2018-04-01

    Due to the limited spatial resolution of remote hyperspectral sensors, pixels are usually highly mixed in hyperspectral images. Endmember extraction refers to the process of identifying the pure endmember signatures from the mixture, which is an important step towards the utilization of hyperspectral data. Nonnegative matrix factorization (NMF) is a widely used method of endmember extraction due to its effectiveness and convenience. However, most NMF-based methods have single-layer structures, which may have difficulties in effectively learning the structures of highly mixed and complex data. On the other hand, multilayer algorithms have shown great advantages in learning data features and have been widely studied in many fields. In this paper, we present an L1 sparsity-constrained multilayer NMF method for endmember extraction from highly mixed data. Firstly, the multilayer NMF structure was obtained by unfolding NMF into a certain number of layers. In each layer, the abundance matrix was decomposed into the endmember matrix and the abundance matrix of the next layer. Besides, to improve the performance of NMF, we incorporated sparsity constraints into the multilayer NMF model by adding an L1 regularizer of the abundance matrix to each layer. At last, a layer-wise optimization method based on NeNMF was proposed to train the multilayer NMF structure. Experiments were conducted on both synthetic data and real data. The results demonstrate that our proposed algorithm can achieve better results than several state-of-the-art approaches.
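
    A minimal version of the building block described above is a single-layer NMF with an L1 penalty on the abundance matrix, solved by multiplicative updates and then stacked layer by layer. The sketch below is such a generic implementation (it does not use the paper's NeNMF-based optimizer), and the ranks, penalty weight, and toy data are illustrative assumptions.

```python
import numpy as np

def sparse_nmf(V, r, lam=0.1, n_iter=500, eps=1e-9, seed=0):
    """Single-layer NMF with an l1 penalty on the abundance matrix H:
    min ||V - W H||_F^2 + lam * ||H||_1, solved by multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)); H = rng.random((r, n))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)    # l1 penalty enters the denominator
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def multilayer_sparse_nmf(V, ranks, lam=0.1):
    """Stack sparse NMF layers: each layer factorizes the previous abundance matrix;
    the endmember matrix is the product of the per-layer W factors."""
    Ws, H = [], V
    for r in ranks:
        W, H = sparse_nmf(H, r, lam=lam)
        Ws.append(W)
    endmembers = Ws[0]
    for W in Ws[1:]:
        endmembers = endmembers @ W
    return endmembers, H

rng = np.random.default_rng(5)
V = rng.random((50, 200))                             # toy hyperspectral data: 50 bands x 200 pixels
E, A = multilayer_sparse_nmf(V, ranks=[20, 10, 5])
```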

  16. Spatial sparsity based indoor localization in wireless sensor network for assistive healthcare.

    Pourhomayoun, Mohammad; Jin, Zhanpeng; Fowler, Mark

    2012-01-01

    Indoor localization is one of the key topics in the area of wireless networks with increasing applications in assistive healthcare, where tracking the position and actions of the patient or elderly are required for medical observation or accident prevention. Most of the common indoor localization methods are based on estimating one or more location-dependent signal parameters like TOA, AOA or RSS. However, some difficulties and challenges caused by the complex scenarios within a closed space significantly limit the applicability of those existing approaches in an indoor assistive environment, such as the well-known multipath effect. In this paper, we develop a new one-stage localization method based on spatial sparsity of the x-y plane. In this method, we directly estimate the location of the emitter without going through the intermediate stage of TOA or signal strength estimation. We evaluate the performance of the proposed method using Monte Carlo simulation. The results show that the proposed method is (i) very accurate even with a small number of sensors and (ii) very effective in addressing the multi-path issues.

  17. Sparc: a sparsity-based consensus algorithm for long erroneous sequencing reads

    Chengxi Ye

    2016-06-01

    Motivation. The third generation sequencing (3GS) technology generates long sequences of thousands of bases. However, its current error rates are estimated in the range of 15–40%, significantly higher than those of the prevalent next generation sequencing (NGS) technologies (less than 1%). Fundamental bioinformatics tasks such as de novo genome assembly and variant calling require high-quality sequences that need to be extracted from these long but erroneous 3GS sequences. Results. We describe a versatile and efficient linear-complexity consensus algorithm, Sparc, to facilitate de novo genome assembly. Sparc builds a sparse k-mer graph using a collection of sequences from a targeted genomic region. The heaviest path, which approximates the most likely genome sequence, is searched through a sparsity-induced reweighted graph as the consensus sequence. Sparc supports using NGS and 3GS data together, which leads to significant improvements in both cost efficiency and computational efficiency. Experiments with Sparc show that our algorithm can efficiently provide high-quality consensus sequences using both PacBio and Oxford Nanopore sequencing technologies. With only 30× PacBio data, Sparc can reach a consensus with error rate <0.5%. With the more challenging Oxford Nanopore data, Sparc can also achieve a similar error rate when combined with NGS data. Compared with the existing approaches, Sparc calculates the consensus with higher accuracy, and uses approximately 80% less memory and time. Availability. The source code is available for download at https://github.com/yechengxi/Sparc.
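
    The idea of a weighted k-mer graph and a heaviest-path consensus can be conveyed with a heavily simplified sketch: build edges between successive k-mers weighted by their coverage, then greedily follow the heaviest outgoing edge. This is only a toy illustration of the concept; Sparc itself stores a sparse graph (sampling every g-th k-mer) and performs a proper heaviest-path search on a reweighted graph.

```python
from collections import defaultdict

def build_kmer_graph(reads, k=5):
    """Weighted k-mer graph: edge (kmer -> next kmer) weighted by its coverage."""
    edges = defaultdict(lambda: defaultdict(int))
    for read in reads:
        for i in range(len(read) - k):
            edges[read[i:i + k]][read[i + 1:i + k + 1]] += 1
    return edges

def heaviest_path(edges, start, max_steps=10000):
    """Greedy traversal: always follow the heaviest outgoing edge (a stand-in
    for the heaviest-path search used to call the consensus)."""
    node, path = start, [start]
    for _ in range(max_steps):
        nxt = edges.get(node)
        if not nxt:
            break
        node = max(nxt, key=nxt.get)
        path.append(node)
    return path[0] + "".join(p[-1] for p in path[1:])  # spell out the consensus

reads = ["ACGTACGTGACC", "ACGTACGTGACC", "ACGTACCTGACC"]   # the third read carries an error
graph = build_kmer_graph(reads, k=5)
print(heaviest_path(graph, "ACGTA"))                        # recovers the error-free sequence
```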

  18. Wavelet-sparsity based regularization over time in the inverse problem of electrocardiography.

    Cluitmans, Matthijs J M; Karel, Joël M H; Bonizzi, Pietro; Volders, Paul G A; Westra, Ronald L; Peeters, Ralf L M

    2013-01-01

    Noninvasive, detailed assessment of electrical cardiac activity at the level of the heart surface has the potential to revolutionize diagnostics and therapy of cardiac pathologies. Due to the requirement of noninvasiveness, body-surface potentials are measured and have to be projected back to the heart surface, yielding an ill-posed inverse problem. Ill-posedness ensures that there are non-unique solutions to this problem, resulting in a problem of choice. In the current paper, it is proposed to restrict this choice by requiring that the time series of reconstructed heart-surface potentials is sparse in the wavelet domain. A local search technique is introduced that pursues a sparse solution, using an orthogonal wavelet transform. Epicardial potentials reconstructed from this method are compared to those from existing methods, and validated with actual intracardiac recordings. The new technique improves the reconstructions in terms of smoothness and recovers physiologically meaningful details. Additionally, reconstruction of activation timing seems to be improved when pursuing sparsity of the reconstructed signals in the wavelet domain.
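
    The sparsity-promoting step described above, soft thresholding in an orthogonal wavelet basis, is easy to sketch with PyWavelets, and it can be embedded in a basic iterative-shrinkage loop for a linear inverse problem. The code below is such a generic sketch, not the authors' local search; the forward (transfer) matrix, wavelet, threshold, and toy signal are assumptions.

```python
import numpy as np
import pywt

def wavelet_soft_threshold(signal, wavelet="db4", level=4, thresh=0.1):
    """Transform to an orthogonal wavelet basis, soft-threshold the detail
    coefficients, and transform back: the sparsity-promoting step."""
    coeffs = pywt.wavedec(signal, wavelet, level=level, mode="periodization")
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet, mode="periodization")

def ista_wavelet(A, b, n_iter=200, thresh=0.05):
    """Minimal ISTA-style loop for A x = b with wavelet-domain sparsity on x."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        x = x - step * A.T @ (A @ x - b)              # gradient step on the data term
        x = wavelet_soft_threshold(x, thresh=step * thresh)
    return x

rng = np.random.default_rng(7)
x_true = np.cumsum(rng.standard_normal(256)) * 0.05   # smooth toy "potential" trace
A = rng.standard_normal((128, 256)) / np.sqrt(128)    # toy forward (transfer) matrix
x_hat = ista_wavelet(A, A @ x_true)
```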

  19. An Adaptive Joint Sparsity Recovery for Compressive Sensing Based EEG System

    Hamza Djelouat

    2017-01-01

    Full Text Available The last decade has witnessed tremendous efforts to shape the Internet of things (IoT platforms to be well suited for healthcare applications. These platforms are comprised of a network of wireless sensors to monitor several physical and physiological quantities. For instance, long-term monitoring of brain activities using wearable electroencephalogram (EEG sensors is widely exploited in the clinical diagnosis of epileptic seizures and sleeping disorders. However, the deployment of such platforms is challenged by the high power consumption and system complexity. Energy efficiency can be achieved by exploring efficient compression techniques such as compressive sensing (CS. CS is an emerging theory that enables a compressed acquisition using well-designed sensing matrices. Moreover, system complexity can be optimized by using hardware friendly structured sensing matrices. This paper quantifies the performance of a CS-based multichannel EEG monitoring. In addition, the paper exploits the joint sparsity of multichannel EEG using subspace pursuit (SP algorithm as well as a designed sparsifying basis in order to improve the reconstruction quality. Furthermore, the paper proposes a modification to the SP algorithm based on an adaptive selection approach to further improve the performance in terms of reconstruction quality, execution time, and the robustness of the recovery process.

  20. Time domain localization technique with sparsity constraint for imaging acoustic sources

    Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain

    2017-09-01

    This paper addresses a source localization technique in the time domain for broadband acoustic sources. The objective is to accurately and quickly detect the position and amplitude of noise sources in workplaces in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, the linear inverse problem is solved with an l2-regularization. In this study, two sparsity constraints are used to solve the inverse problem, the orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performances of the technique. High resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi real-time generation of noise source maps. Finally, the technique is tested with real data.
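
    One of the two sparse solvers mentioned above, orthogonal matching pursuit, can be illustrated on a toy grid-based localization problem: each dictionary column is the propagation pattern of one candidate source cell, and the sparse coefficient vector encodes the source positions and amplitudes. The propagation model below (a simple 1/r amplitude decay, with no time delays) is an assumption made purely for illustration and is far simpler than the time-domain model used in the paper; on a very fine grid the high column coherence can make the recovered cell land on a neighbour of the true one.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(11)

# geometry: 16 microphones and a 10 x 10 grid of candidate source positions
mics = rng.uniform(0.0, 1.0, size=(16, 2))
gx, gy = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
grid = np.column_stack([gx.ravel(), gy.ravel()])

# toy propagation model (assumption): amplitude decays as 1/r from source to microphone
dist = np.linalg.norm(mics[:, None, :] - grid[None, :, :], axis=2)
D = 1.0 / (dist + 0.05)                               # dictionary: one column per grid cell

# two sources active on the grid
x_true = np.zeros(grid.shape[0])
x_true[[23, 77]] = [1.0, 0.7]
y = D @ x_true + 0.01 * rng.standard_normal(mics.shape[0])

# sparsity-constrained inversion: recover source cells and amplitudes
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2, fit_intercept=False)
x_hat = omp.fit(D, y).coef_
print("estimated source cells:", np.nonzero(x_hat)[0], "amplitudes:", x_hat[np.nonzero(x_hat)])
```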

  1. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more reasonable sparse representation frame for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form some sparse and redundant representations which promise to facilitate image reconstructions. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction for an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified the proposed reconstruction method showing promising capabilities over conventional regularization.

  2. Spectrum Sensing and Primary User Localization in Cognitive Radio Networks via Sparsity

    Lanchao Liu

    2016-01-01

    The theory of compressive sensing (CS) has been employed to detect available spectrum resources in cognitive radio (CR) networks recently. Capitalizing on spectrum resource underutilization and the spatial sparsity of primary user (PU) locations, CS enables the identification of unused spectrum bands and PU locations at a low sampling rate. Although CS has been studied in the cooperative spectrum sensing mechanism in which CR nodes work collaboratively to accomplish the spectrum sensing and PU localization task, many important issues remain unsettled. Does the designed compressive spectrum sensing mechanism satisfy the Restricted Isometry Property, which guarantees a successful recovery of the original sparse signal? Can the spectrum sensing results help the localization of PUs? What are the characteristics of localization errors? To answer those questions, we try to justify the applicability of the CS theory to the compressive spectrum sensing framework in this paper, and propose a design of PU localization utilizing the spectrum usage information. The localization error is analyzed by the Cramér-Rao lower bound, which can be exploited to improve the localization performance. Detailed analysis and simulations are presented to support the claims and demonstrate the efficacy and efficiency of the proposed mechanism.

  3. Global mapping of stratigraphy of an old-master painting using sparsity-based terahertz reflectometry.

    Dong, Junliang; Locquet, Alexandre; Melis, Marcello; Citrin, D S

    2017-11-08

    The process by which art paintings are produced typically involves the successive applications of preparatory and paint layers to a canvas or other support; however, there is an absence of nondestructive modalities to provide a global mapping of the stratigraphy, information that is crucial for evaluation of its authenticity and attribution, for insights into historical or artist-specific techniques, as well as for conservation. We demonstrate sparsity-based terahertz reflectometry can be applied to extract a detailed 3D mapping of the layer structure of the 17th century easel painting Madonna in Preghiera by the workshop of Giovanni Battista Salvi da Sassoferrato, in which the structure of the canvas support, the ground, imprimatura, underpainting, pictorial, and varnish layers are identified quantitatively. In addition, a hitherto unidentified restoration of the varnish has been found. Our approach unlocks the full promise of terahertz reflectometry to provide a global and detailed account of an easel painting's stratigraphy by exploiting the sparse deconvolution, without which terahertz reflectometry in the past has only provided a meager tool for the characterization of paintings with paint-layer thicknesses smaller than 50 μm. The proposed modality can also be employed across a broad range of applications in nondestructive testing and biomedical imaging.

  4. Gear fault diagnosis based on the structured sparsity time-frequency analysis

    Sun, Ruobin; Yang, Zhibo; Chen, Xuefeng; Tian, Shaohua; Xie, Yong

    2018-03-01

    Over the last decade, sparse representation has become a powerful paradigm in mechanical fault diagnosis due to its excellent capability and the high flexibility for complex signal description. The structured sparsity time-frequency analysis (SSTFA) is a novel signal processing method, which utilizes mixed-norm priors on time-frequency coefficients to obtain a fine match for the structure of signals. In order to extract the transient feature from gear vibration signals, a gear fault diagnosis method based on SSTFA is proposed in this work. The steady modulation components and impulsive components of the defective gear vibration signals can be extracted simultaneously by choosing different time-frequency neighborhood and generalized thresholding operators. Besides, the time-frequency distribution with high resolution is obtained by piling different components in the same diagram. The diagnostic conclusion can be made according to the envelope spectrum of the impulsive components or by the periodicity of impulses. The effectiveness of the method is verified by numerical simulations, and the vibration signals registered from a gearbox fault simulator and a wind turbine. To validate the efficiency of the presented methodology, comparisons are made among some state-of-the-art vibration separation methods and the traditional time-frequency analysis methods. The comparisons show that the proposed method possesses advantages in separating feature signals under strong noise and accounting for the inner time-frequency structure of the gear vibration signals.

  5. Accounting for Households' Perceived Income Uncertainty in Consumption Risk Sharing

    Singh, S.; Stoltenberg, C.A.

    2017-01-01

    We develop a consumption risk-sharing model that distinguishes households' perceived income uncertainty from income uncertainty as measured by an econometrician. Households receive signals on their future disposable income that can drive a gap between the two uncertainties. Accounting for the

  6. Knowledge Gaps

    Lyles, Marjorie; Pedersen, Torben; Petersen, Bent

    2003-01-01

    The study explores what factors influence the reduction of managers' perceived knowledge gaps in the context of the environments of foreign markets. Potential determinants are derived from traditional internationalization theory as well as organizational learning theory, including the concept of absorptive capacity. Building on these literature streams a conceptual model is developed and tested on a set of primary data of Danish firms and their foreign market operations. The empirical study suggests that the factors that pertain to the absorptive capacity concept - capabilities of recognizing, assimilating, and utilizing knowledge - are crucial determinants of knowledge gap elimination. In contrast, the two factors deemed essential in traditional internationalization process theory - elapsed time of operations and experiential learning - are found to have no or limited effect.

  7. Infrared dim moving target tracking via sparsity-based discriminative classifier and convolutional network

    Qian, Kun; Zhou, Huixin; Wang, Bingjian; Song, Shangzhen; Zhao, Dong

    2017-11-01

    Infrared dim and small target tracking is a great challenging task. The main challenge for target tracking is to account for appearance change of an object, which submerges in the cluttered background. An efficient appearance model that exploits both the global template and local representation over infrared image sequences is constructed for dim moving target tracking. A Sparsity-based Discriminative Classifier (SDC) and a Convolutional Network-based Generative Model (CNGM) are combined with a prior model. In the SDC model, a sparse representation-based algorithm is adopted to calculate the confidence value that assigns more weights to target templates than negative background templates. In the CNGM model, simple cell feature maps are obtained by calculating the convolution between target templates and fixed filters, which are extracted from the target region at the first frame. These maps measure similarities between each filter and local intensity patterns across the target template, therefore encoding its local structural information. Then, all the maps form a representation, preserving the inner geometric layout of a candidate template. Furthermore, the fixed target template set is processed via an efficient prior model. The same operation is applied to candidate templates in the CNGM model. The online update scheme not only accounts for appearance variations but also alleviates the migration problem. At last, collaborative confidence values of particles are utilized to generate particles' importance weights. Experiments on various infrared sequences have validated the tracking capability of the presented algorithm. Experimental results show that this algorithm runs in real-time and provides a higher accuracy than state of the art algorithms.

  8. Limited-angle multi-energy CT using joint clustering prior and sparsity regularization

    Zhang, Huayu; Xing, Yuxiang

    2016-03-01

    In this article, we present an easy-to-implement multi-energy CT scanning strategy and a corresponding reconstruction method, which facilitate spectral CT imaging by improving the data efficiency by a factor of the number of energy channels without introducing visible limited-angle artifacts caused by reducing projection views. Leveraging the structure coherence at different energies, we first pre-reconstruct a prior structure-information image using projection data from all energy channels. Then, we perform k-means clustering on the prior image to generate a sparse dictionary representation for the image, which serves as a structure-information constraint. We combine this constraint with a conventional compressed sensing method and propose a new model, which we refer to as Joint Clustering Prior and Sparsity Regularization (CPSR). CPSR is a convex problem and we solve it by the Alternating Direction Method of Multipliers (ADMM). We verify our CPSR reconstruction method with a numerical simulation experiment. A dental phantom with complicated structures of teeth and soft tissues is used. X-ray beams from three spectra of different peak energies (120 kVp, 90 kVp, 60 kVp) irradiate the phantom to form tri-energy projections. Projection data covering only 75° from each energy spectrum are collected for reconstruction. Independent reconstruction for each energy would cause severe limited-angle artifacts even with the help of compressed sensing approaches. Our CPSR provides images free of the limited-angle artifact. All edge details are well preserved in our experimental study.

  9. Stochastically Estimating Modular Criticality in Large-Scale Logic Circuits Using Sparsity Regularization and Compressive Sensing

    Mohammed Alawad

    2015-03-01

    This paper considers the problem of how to efficiently measure a large and complex information field with optimally few observations. Specifically, we investigate how to stochastically estimate modular criticality values in a large-scale digital circuit with a very limited number of measurements in order to minimize the total measurement efforts and time. We prove that, through sparsity-promoting transform domain regularization and by strategically integrating compressive sensing with Bayesian learning, more than 98% of the overall measurement accuracy can be achieved with fewer than 10% of measurements as required in a conventional approach that uses exhaustive measurements. Furthermore, we illustrate that the obtained criticality results can be utilized to selectively fortify large-scale digital circuits for operation with narrow voltage headrooms and in the presence of soft-errors rising at near threshold voltage levels, without excessive hardware overheads. Our numerical simulation results have shown that, by optimally allocating only 10% circuit redundancy, for some large-scale benchmark circuits, we can achieve more than a three-times reduction in its overall error probability, whereas if randomly distributing such 10% hardware resource, less than 2% improvements in the target circuit’s overall robustness will be observed. Finally, we conjecture that our proposed approach can be readily applied to estimate other essential properties of digital circuits that are critical to designing and analyzing them, such as the observability measure in reliability analysis and the path delay estimation in stochastic timing analysis. The only key requirement of our proposed methodology is that these global information fields exhibit a certain degree of smoothness, which is universally true for almost any physical phenomenon.

  10. Improving Weak Lensing Mass Map Reconstructions using Gaussian and Sparsity Priors: Application to DES SV

    Jeffrey, N.; Abdalla, F. B.; Lahav, O.; Lanusse, F.; Starck, J.-L.; Leonard, A.; Kirk, D.; Chang, C.; Baxter, E.; Kacprzak, T.; Seitz, S.; Vikram, V.; Whiteway, L.; Abbott, T. M. C.; Allam, S.; Avila, S.; Bertin, E.; Brooks, D.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Davis, C.; De Vicente, J.; Desai, S.; Doel, P.; Eifler, T. F.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; García-Bellido, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Gschwend, J.; Gutierrez, G.; Hartley, W. G.; Honscheid, K.; Hoyle, B.; James, D. J.; Jarvis, M.; Kuehn, K.; Lima, M.; Lin, H.; March, M.; Melchior, P.; Menanteau, F.; Miquel, R.; Plazas, A. A.; Reil, K.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, M.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Walker, A. R.

    2018-05-01

    Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion, not accounting for survey masks or noise. The Wiener filter is well-motivated for Gaussian density fields in a Bayesian framework. GLIMPSE uses sparsity, aiming to reconstruct non-linearities in the density field. We compare these methods with several tests using public Dark Energy Survey (DES) Science Verification (SV) data and realistic DES simulations. The Wiener filter and GLIMPSE offer substantial improvements over smoothed KS with a range of metrics. Both the Wiener filter and GLIMPSE convergence reconstructions show a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations (a standard data vector when inferring cosmology from peak statistics); the maximum signal-to-noise of these peak statistics is increased by a factor of 3.5 for the Wiener filter and 9 for GLIMPSE. With simulations we measure the reconstruction of the harmonic phases; the phase residuals' concentration is improved 17% by GLIMPSE and 18% by the Wiener filter. The correlation between reconstructions from data and foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE.
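
    The Kaiser-Squires baseline referred to above is a direct Fourier-space inversion and can be written down in a few lines. The sketch below implements the flat-sky relations for both the forward (convergence to shear) and inverse directions on a toy periodic field, with no masks or noise; it is a textbook illustration, not the pipeline used for the DES maps.

```python
import numpy as np

def _l_grids(shape):
    """Fourier frequency grids l1, l2 and l^2 (with the l = 0 mode patched)."""
    ny, nx = shape
    l1 = np.fft.fftfreq(nx)[None, :]
    l2 = np.fft.fftfreq(ny)[:, None]
    l_sq = l1 ** 2 + l2 ** 2
    l_sq[0, 0] = 1.0              # avoid dividing by zero at the unconstrained l = 0 mode
    return l1, l2, l_sq

def shear_from_kappa(kappa):
    """Flat-sky forward model: convergence map -> two shear components."""
    l1, l2, l_sq = _l_grids(kappa.shape)
    k_hat = np.fft.fft2(kappa)
    g1 = np.real(np.fft.ifft2((l1 ** 2 - l2 ** 2) / l_sq * k_hat))
    g2 = np.real(np.fft.ifft2(2.0 * l1 * l2 / l_sq * k_hat))
    return g1, g2

def kaiser_squires(gamma1, gamma2):
    """Direct Kaiser-Squires inversion: shear maps -> convergence map.
    Assumes periodic boundaries and ignores survey masks and noise."""
    l1, l2, l_sq = _l_grids(gamma1.shape)
    g1_hat, g2_hat = np.fft.fft2(gamma1), np.fft.fft2(gamma2)
    kappa_hat = ((l1 ** 2 - l2 ** 2) * g1_hat + 2.0 * l1 * l2 * g2_hat) / l_sq
    kappa_hat[0, 0] = 0.0         # the mean convergence cannot be recovered from shear
    return np.real(np.fft.ifft2(kappa_hat))

# round trip on a toy, mean-free convergence field
rng = np.random.default_rng(42)
kappa = rng.standard_normal((64, 64)); kappa -= kappa.mean()
g1, g2 = shear_from_kappa(kappa)
print("max |reconstruction - truth|:", np.abs(kaiser_squires(g1, g2) - kappa).max())
```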

  11. Exploiting sparsity of interconnections in spatio-temporal wind speed forecasting using Wavelet Transform

    Tascikaraoglu, Akin; Sanandaji, Borhan M.; Poolla, Kameshwar; Varaiya, Pravin

    2016-01-01

    Highlights: • We propose a spatio-temporal approach for wind speed forecasting. • The method is based on a combination of Wavelet decomposition and structured-sparse recovery. • Our analyses confirm that low-dimensional structures govern the interactions between stations. • Our method particularly shows improvements for profiles with high ramps. • We examine our approach on real data and illustrate its superiority over a set of benchmark models. - Abstract: Integration of renewable energy resources into the power grid is essential in achieving the envisioned sustainable energy future. Stochasticity and intermittency characteristics of renewable energies, however, present challenges for integrating these resources into the existing grid on a large scale. Reliable renewable energy integration is facilitated by accurate wind forecasts. In this paper, we propose a novel wind speed forecasting method which first utilizes Wavelet Transform (WT) for decomposition of the wind speed data into more stationary components and then uses a spatio-temporal model on each sub-series for incorporating both temporal and spatial information. The proposed spatio-temporal forecasting approach on each sub-series is based on the assumption that there usually exists an intrinsic low-dimensional structure between time series data in a collection of meteorological stations. Our approach is inspired by Compressive Sensing (CS) and structured-sparse recovery algorithms. Based on detailed case studies, we show that the proposed approach, which exploits the sparsity of correlations between a large set of meteorological stations and decomposes the time series for higher-accuracy forecasts, considerably improves the short-term forecasts compared to the temporal and spatio-temporal benchmark methods.
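
    A minimal sketch of the two-stage idea, under illustrative assumptions (synthetic data, a db4 wavelet, three lags, and a plain Lasso penalty rather than the paper's structured-sparse recovery): decompose each station's series with a discrete wavelet transform, then fit a sparse spatio-temporal autoregression on one sub-series so that only a few station/lag interconnections receive non-zero weights.

```python
# Sketch: wavelet decomposition per station, then a sparse spatio-temporal AR fit.
import numpy as np
import pywt
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_stations, n_t, lags = 5, 600, 3
wind = np.cumsum(rng.standard_normal((n_stations, n_t)), axis=1)   # synthetic wind speeds

# 1) wavelet decomposition of every station's series (db4, 2 levels)
coeffs = [pywt.wavedec(wind[s], 'db4', level=2) for s in range(n_stations)]

# 2) sparse spatio-temporal AR on one sub-series (the level-2 approximation)
sub = np.stack([c[0] for c in coeffs])                              # (stations, len_sub)
X = np.hstack([sub[:, lags - k - 1: -k - 1].T for k in range(lags)])  # lagged values, all stations
y = sub[0, lags:]                                                   # predict station 0
model = Lasso(alpha=0.1, max_iter=20000).fit(X, y)
print("non-zero interconnections:", np.count_nonzero(model.coef_), "of", model.coef_.size)
```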

  12. Improving Weak Lensing Mass Map Reconstructions using Gaussian and Sparsity Priors: Application to DES SV

    Jeffrey, N.; et al.

    2018-01-26

    Mapping the underlying density field, including non-visible dark matter, using weak gravitational lensing measurements is now a standard tool in cosmology. Due to its importance to the science results of current and upcoming surveys, the quality of the convergence reconstruction methods should be well understood. We compare three different mass map reconstruction methods: Kaiser-Squires (KS), Wiener filter, and GLIMPSE. KS is a direct inversion method, taking no account of survey masks or noise. The Wiener filter is well motivated for Gaussian density fields in a Bayesian framework. The GLIMPSE method uses sparsity, with the aim of reconstructing non-linearities in the density field. We compare these methods with a series of tests on the public Dark Energy Survey (DES) Science Verification (SV) data and on realistic DES simulations. The Wiener filter and GLIMPSE methods offer substantial improvement on the standard smoothed KS with a range of metrics. For both the Wiener filter and GLIMPSE convergence reconstructions we present a 12% improvement in Pearson correlation with the underlying truth from simulations. To compare the mapping methods' abilities to find mass peaks, we measure the difference between peak counts from simulated ΛCDM shear catalogues and catalogues with no mass fluctuations. This is a standard data vector when inferring cosmology from peak statistics. The maximum signal-to-noise value of these peak statistic data vectors was increased by a factor of 3.5 for the Wiener filter and by a factor of 9 using GLIMPSE. With simulations we measure the reconstruction of the harmonic phases, showing that the concentration of the phase residuals is improved 17% by GLIMPSE and 18% by the Wiener filter. We show that the correlation between the reconstructions from data and the foreground redMaPPer clusters is increased 18% by the Wiener filter and 32% by GLIMPSE. [Abridged]

  13. Leveraging EAP-Sparsity for Compressed Sensing of MS-HARDI in (k, q)-Space.

    Sun, Jiaqi; Sakhaee, Elham; Entezari, Alireza; Vemuri, Baba C

    2015-01-01

    Compressed Sensing (CS) for the acceleration of MR scans has been widely investigated in the past decade. Lately, considerable progress has been made in achieving similar speed ups in acquiring multi-shell high angular resolution diffusion imaging (MS-HARDI) scans. Existing approaches in this context were primarily concerned with sparse reconstruction of the diffusion MR signal S(q) in the q-space. More recently, methods have been developed to apply the compressed sensing framework to the 6-dimensional joint (k, q)-space, thereby exploiting the redundancy in this 6D space. To guarantee accurate reconstruction from partial MS-HARDI data, the key ingredients of compressed sensing that need to be brought together are: (1) the function to be reconstructed needs to have a sparse representation, (2) the data for reconstruction ought to be acquired in the dual domain (i.e., incoherent sensing), and (3) the reconstruction process involves a (convex) optimization. In this paper, we present a novel approach that uses partial Fourier sensing in the 6D space of (k, q) for the reconstruction of P(x, r). The distinct feature of our approach is a sparsity model that leverages surfacelets in conjunction with total variation for the joint sparse representation of P(x, r). Thus, our method stands to benefit from the practical guarantees for accurate reconstruction from partial (k, q)-space data. Further, we demonstrate significant savings in acquisition time over diffusion spectral imaging (DSI) which is commonly used as the benchmark for comparisons in reported literature. To demonstrate the benefits of this approach, we present several synthetic and real data examples.

  14. Sensitivity-Uncertainty Techniques for Nuclear Criticality Safety

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rising, Michael Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Alwin, Jennifer Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    The sensitivity and uncertainty analysis course will introduce students to keff sensitivity data, cross-section uncertainty data, how keff sensitivity data and keff uncertainty data are generated and how they can be used. Discussion will include how sensitivity/uncertainty data can be used to select applicable critical experiments, to quantify a defensible margin to cover validation gaps and weaknesses, and in development of upper subcritical limits.

  15. Stable and efficient Q-compensated least-squares migration with compressive sensing, sparsity-promoting, and preconditioning

    Chai, Xintao; Wang, Shangxu; Tang, Genyang; Meng, Xiangcui

    2017-10-01

    The anelastic effects of subsurface media decrease the amplitude and distort the phase of the propagating wave. These effects, also referred to as the earth's Q filtering effects, diminish seismic resolution. Ignoring anelastic effects during the seismic imaging process generates an image with reduced amplitude and incorrect position of reflectors, especially for highly absorptive media. The numerical instability and the expensive computational cost are major concerns when compensating for anelastic effects during migration. We propose a stable and efficient Q-compensated imaging methodology with compressive sensing, sparsity-promoting, and preconditioning. The stability is achieved by using the Born operator for forward modeling and the adjoint operator for back propagating the residual wavefields. Constructing the attenuation-compensated operators by reversing the sign of the attenuation operator is avoided. The method proposed is always stable. To reduce the computational cost, which is proportional to the number of wave-equation solves (and thereby the number of frequencies, source experiments, and iterations), we first subsample over both frequencies and source experiments. We mitigate the artifacts caused by the dimensionality reduction via promoting sparsity of the imaging solutions. We further employ depth- and Q-preconditioning operators to accelerate the convergence rate of iterative migration. We adopt a relatively simple linearized Bregman method to solve the sparsity-promoting imaging problem. Singular value decomposition analysis of the forward operator reveals that attenuation increases the condition number of the migration operator, making the imaging problem more ill-conditioned. The visco-acoustic imaging problem converges more slowly than the acoustic case. The stronger the attenuation, the slower the convergence rate. The preconditioning strategy evidently decreases the condition number of the migration operator, which makes the imaging problem less ill-conditioned and accelerates its convergence.

  16. Uncertainty and measurement

    Landsberg, P.T.

    1990-01-01

    This paper explores how the quantum-mechanical uncertainty relation can be considered to result from measurements. A distinction is drawn between the uncertainties obtained by scrutinising experiments and the standard-deviation type of uncertainty definition used in the quantum formalism. (UK)

  17. Diversifying the composition and structure of managed late-successional forests with harvest gaps: What is the optimal gap size?

    Christel C. Kern; Anthony W. D’Amato; Terry F. Strong

    2013-01-01

    Managing forests for resilience is crucial in the face of uncertain future environmental conditions. Because harvest gap size alters the species diversity and vertical and horizontal structural heterogeneity, there may be an optimum range of gap sizes for conferring resilience to environmental uncertainty. We examined the impacts of different harvest gap sizes on...

  18. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    Xu, Qiaofeng; Sawatzky, Alex; Anastasio, Mark A.; Yang, Deshan; Tan, Jun

    2016-01-01

    accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets.

  19. Accelerated fast iterative shrinkage thresholding algorithms for sparsity-regularized cone-beam CT image reconstruction

    Xu, Qiaofeng; Yang, Deshan; Tan, Jun; Sawatzky, Alex; Anastasio, Mark A.

    2016-01-01

    accelerated FISTAs for use with two nonsmooth penalty functions that will lead to further reductions in image reconstruction times while preserving image quality. Moreover, with the help of a mixed sparsity-regularization, better preservation of soft-tissue structures can be potentially obtained. The algorithms were systematically evaluated by use of computer-simulated and clinical data sets. PMID:27036582
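
    Since both records above are truncated, the sketch below only illustrates the core of a FISTA-type iteration for a generic sparsity-regularized least-squares problem (gradient step, soft-thresholding proximal step, Nesterov momentum). The random system matrix, data, and penalty are illustrative assumptions and stand in for, rather than reproduce, the authors' cone-beam CT operators and their specific nonsmooth penalties.

```python
# Minimal FISTA for  min_x 0.5*||A x - b||_2^2 + lam*||x||_1  on a toy problem.
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1]); z = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)
        x_new = soft_threshold(z - grad / L, lam / L)   # proximal (shrinkage) step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + (t - 1.0) / t_new * (x_new - x)     # Nesterov momentum
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200); x_true[rng.choice(200, 10, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = fista(A, b, lam=0.1)
print("entries recovered above 0.1:", np.count_nonzero(np.abs(x_hat) > 0.1))
```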

  20. The uncertainties in estimating measurement uncertainties

    Clark, J.P.; Shull, A.H.

    1994-01-01

    All measurements include some error. Whether measurements are used for accountability, environmental programs or process support, they are of little value unless accompanied by an estimate of the measurement's uncertainty. This fact is often overlooked by the individuals who need measurements to make decisions. This paper will discuss the concepts of measurement, measurement errors (accuracy or bias, and precision or random error), physical and error models, measurement control programs, examples of measurement uncertainty, and uncertainty as related to measurement quality. Measurements are comparisons of unknowns to knowns, estimates of some true value plus an uncertainty, and are no better than the standards to which they are compared. Direct comparisons of unknowns that match the composition of known standards will normally have small uncertainties. In the real world, measurements usually involve indirect comparisons of significantly different materials (e.g., measuring a physical property of a chemical element in a sample having a matrix that is significantly different from the calibration standards' matrix). Consequently, there are many sources of error involved in measurement processes that can affect the quality of a measurement and its associated uncertainty. How the uncertainty estimates are determined and what they mean is as important as the measurement. The process of calculating the uncertainty of a measurement itself has uncertainties that must be handled correctly. Examples of chemistry laboratory measurements will be reviewed in this report and recommendations made for improving measurement uncertainties.

  1. Uncertainty in social dilemmas

    Kwaadsteniet, Erik Willem de

    2007-01-01

    This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size uncertainty). Several researchers have therefore asked themselves the question as to how such uncertainty influences people’s choice behavior. These researchers have repeatedly concluded that uncertainty...

  2. A Strategy for Uncertainty Visualization Design

    2009-10-01

    143–156, Magdeburg, Germany. [11] Thomson, J., Hetzler, E., MacEachren, A., Gahegan, M. and Pavel, M. (2005), A Typology for Visualizing Uncertainty... and Stasko [20] to bridge analytic gaps in visualization design, when tasks in the strategy overlap (and therefore complement) design frameworks.

  3. Uncertainty analysis for hot channel

    Panka, I.; Kereszturi, A.

    2006-01-01

    The fulfillment of the safety analysis acceptance criteria is usually evaluated by separate hot channel calculations using the results of neutronic and/or thermo-hydraulic system calculations. In the case of an ATWS event (inadvertent withdrawal of a control assembly), the analysis shows that a number of fuel rods experience DNB for a prolonged time and must be regarded as failed. Their number must be determined for a further evaluation of the radiological consequences. In the deterministic approach, the global power history must be multiplied by different hot channel factors (kx) taking into account the radial power peaking factors for each fuel pin. If DNB occurs, it is necessary to perform a small number of hot channel calculations to determine the limiting kx leading just to DNB and fuel failure (the conservative DNBR limit is 1.33). Knowing the pin power distribution from the core design calculation, the number of failed fuel pins can be calculated. The above procedure can also be performed with conservative assumptions (e.g. conservative input parameters in the hot channel calculations). In the case of hot channel uncertainty analysis, the relevant input parameters of the hot channel calculations (kx, mass flow, coolant inlet temperature, pin-average burnup, initial gap size, and the selection of the power history influencing the gap conductance value) and the DNBR limit are varied considering the respective uncertainties. An uncertainty analysis methodology was elaborated combining the response surface method with the one-sided tolerance limit method of Wilks. The results of the deterministic and uncertainty hot channel calculations are compared with regard to the number of failed fuel rods, the maximum clad surface temperature and the maximum fuel temperature. (Authors)
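
    The Wilks one-sided tolerance limit mentioned above fixes the number of random code runs needed so that the largest observed output bounds a given population fraction with a given confidence; a minimal sketch of that counting rule follows. The 95%/95% choice is the usual convention and is stated here as an assumption.

```python
# First-order Wilks formula: smallest N with  1 - beta**N >= gamma,
# i.e. the maximum of N random runs bounds the beta-quantile with confidence gamma.
def wilks_sample_size(beta=0.95, gamma=0.95):
    n = 1
    while 1.0 - beta**n < gamma:
        n += 1
    return n

print(wilks_sample_size())   # 59 runs for a one-sided 95%/95% tolerance limit
```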

  4. Pandemic influenza: certain uncertainties

    Morens, David M.; Taubenberger, Jeffery K.

    2011-01-01

    SUMMARY For at least five centuries, major epidemics and pandemics of influenza have occurred unexpectedly and at irregular intervals. Despite the modern notion that pandemic influenza is a distinct phenomenon obeying such constant (if incompletely understood) rules as dramatic genetic change, cyclicity, “wave” patterning, virus replacement, and predictable epidemic behavior, much evidence suggests the opposite. Although there is much that we know about pandemic influenza, there appears to be much more that we do not know. Pandemics arise as a result of various genetic mechanisms, have no predictable patterns of mortality among different age groups, and vary greatly in how and when they arise and recur. Some are followed by new pandemics, whereas others fade gradually or abruptly into long-term endemicity. Human influenza pandemics have been caused by viruses that evolved singly or in co-circulation with other pandemic virus descendants and often have involved significant transmission between, or establishment of, viral reservoirs within other animal hosts. In recent decades, pandemic influenza has continued to produce numerous unanticipated events that expose fundamental gaps in scientific knowledge. Influenza pandemics appear to be not a single phenomenon but a heterogeneous collection of viral evolutionary events whose similarities are overshadowed by important differences, the determinants of which remain poorly understood. These uncertainties make it difficult to predict influenza pandemics and, therefore, to adequately plan to prevent them. PMID:21706672

  5. Do Orthopaedic Surgeons Acknowledge Uncertainty?

    Teunis, Teun; Janssen, Stein; Guitton, Thierry G; Ring, David; Parisien, Robert

    2016-06-01

    R(2), 0.29). The relatively low levels of uncertainty among orthopaedic surgeons and confidence bias seem inconsistent with the paucity of definitive evidence. If patients want to be informed of the areas of uncertainty and surgeon-to-surgeon variation relevant to their care, it seems possible that a low recognition of uncertainty and surgeon confidence bias might hinder adequately informing patients, informed decisions, and consent. Moreover, limited recognition of uncertainty is associated with modifiable factors such as confidence bias, trust in orthopaedic evidence base, and statistical understanding. Perhaps improved statistical teaching in residency, journal clubs to improve the critique of evidence and awareness of bias, and acknowledgment of knowledge gaps at courses and conferences might create awareness about existing uncertainties. Level 1, prognostic study.

  6. Kalman filter-based gap conductance modeling

    Tylee, J.L.

    1983-01-01

    Geometric and thermal property uncertainties contribute greatly to the problem of determining conductance within the fuel-clad gas gap of a nuclear fuel pin. Accurate conductance values are needed for power plant licensing transient analysis and for test analyses at research facilities. Recent work by Meek, Doerner, and Adams has shown that use of Kalman filters to estimate gap conductance is a promising approach. A Kalman filter is simply a mathematical algorithm that employs available system measurements and assumed dynamic models to generate optimal system state vector estimates. This summary addresses another Kalman filter approach to gap conductance estimation and subsequent identification of an empirical conductance model
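
    A minimal scalar Kalman-filter sketch of the general idea only, not of the model in the summary above: the gap conductance h is treated as a slowly drifting state (random walk) observed through a heat-flux measurement q_k ≈ ΔT_k·h, with the gap temperature drop ΔT_k assumed known at each step. All numerical values below are illustrative assumptions.

```python
# Scalar Kalman filter estimating a drifting gap conductance from noisy flux data.
import numpy as np

rng = np.random.default_rng(0)
n_steps = 200
h_true = 5000.0 + 20.0 * np.cumsum(rng.standard_normal(n_steps))   # W/m^2-K, drifting truth
dT = 30.0 + 5.0 * rng.random(n_steps)                              # K, known gap temperature drop
q_meas = dT * h_true + 2000.0 * rng.standard_normal(n_steps)       # W/m^2, noisy flux measurement

Q, R = 500.0**2, 2000.0**2          # process and measurement noise variances (assumed)
h_est, P = 4000.0, 1.0e6            # initial state estimate and covariance
estimates = []
for k in range(n_steps):
    P = P + Q                       # predict (random-walk dynamics)
    H = dT[k]                       # time-varying observation "matrix"
    K = P * H / (H * P * H + R)     # Kalman gain
    h_est = h_est + K * (q_meas[k] - H * h_est)   # update with measurement q_k
    P = (1.0 - K * H) * P
    estimates.append(h_est)

print(f"final estimate: {estimates[-1]:.0f}  true: {h_true[-1]:.0f}")
```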

  7. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity.

    Song, Chenchen; Martínez, Todd J

    2016-05-07

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).

  8. Atomic orbital-based SOS-MP2 with tensor hypercontraction. I. GPU-based tensor construction and exploiting sparsity

    Song, Chenchen; Martínez, Todd J. [Department of Chemistry and the PULSE Institute, Stanford University, Stanford, California 94305 (United States); SLAC National Accelerator Laboratory, Menlo Park, California 94025 (United States)

    2016-05-07

    We present a tensor hypercontracted (THC) scaled opposite spin second order Møller-Plesset perturbation theory (SOS-MP2) method. By using THC, we reduce the formal scaling of SOS-MP2 with respect to molecular size from quartic to cubic. We achieve further efficiency by exploiting sparsity in the atomic orbitals and using graphical processing units (GPUs) to accelerate integral construction and matrix multiplication. The practical scaling of GPU-accelerated atomic orbital-based THC-SOS-MP2 calculations is found to be N^2.6 for reference data sets of water clusters and alanine polypeptides containing up to 1600 basis functions. The errors in correlation energy with respect to density-fitting-SOS-MP2 are less than 0.5 kcal/mol for all systems tested (up to 162 atoms).

  9. Denoising of Microscopy Images: A Review of the State-of-the-Art, and a New Sparsity-Based Method.

    Meiniel, William; Olivo-Marin, Jean-Christophe; Angelini, Elsa D

    2018-08-01

    This paper reviews the state-of-the-art in denoising methods for biological microscopy images and introduces a new and original sparsity-based algorithm. The proposed method combines total variation (TV) spatial regularization, enhancement of low-frequency information, and aggregation of sparse estimators and is able to handle simple and complex types of noise (Gaussian, Poisson, and mixed), without any a priori model and with a single set of parameter values. An extended comparison is also presented that evaluates the denoising performance of thirteen state-of-the-art denoising methods (including ours) specifically designed to handle the different types of noise found in bioimaging. Quantitative and qualitative results on synthetic and real images show that the proposed method outperforms the other ones on the majority of the tested scenarios.
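
    One ingredient named above, total-variation (TV) spatial regularization, is easy to demonstrate in isolation. The sketch below applies scikit-image's Chambolle TV denoiser to a synthetic blob image with additive Gaussian noise; the weight and noise level are assumptions, and this is not the authors' aggregated sparse estimator.

```python
# TV denoising of a synthetic "fluorescence-like" image with scikit-image.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
clean = np.exp(-((xx - 40)**2 + (yy - 50)**2) / 200.0) \
      + np.exp(-((xx - 90)**2 + (yy - 80)**2) / 120.0)      # two blob-like "cells"
noisy = clean + 0.2 * rng.standard_normal(clean.shape)       # additive Gaussian noise

denoised = denoise_tv_chambolle(noisy, weight=0.15)
print("MSE before:", np.mean((noisy - clean) ** 2))
print("MSE after :", np.mean((denoised - clean) ** 2))
```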

  10. The Linearized Bregman Method for Frugal Full-waveform Inversion with Compressive Sensing and Sparsity-promoting

    Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong

    2018-03-01

    Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) allows mitigation of the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that as long as the difference between the initial and true models permits a sparse representation, the ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization problems and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
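
    A minimal linearized Bregman sketch for the generic problem min_x μ‖x‖₁ + ½‖x‖₂² subject to Ax = b, alternating a gradient step on the data-fit residual with soft-thresholding. The random matrix, noiseless data, step size, and μ are illustrative assumptions; a real SPFWI implementation would replace A with the Born/migration operator rather than a dense matrix.

```python
# Linearized Bregman iteration on a toy sparse-recovery problem.
import numpy as np

def soft_threshold(v, mu):
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=1.0, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative fixed step size
    v = np.zeros(A.shape[1])                   # dual/accumulator variable
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = v - step * A.T @ (A @ x - b)       # gradient step on the constraint residual
        x = soft_threshold(v, mu)              # shrinkage keeps the update sparse
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 150))
x_true = np.zeros(150); x_true[rng.choice(150, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true
x_hat = linearized_bregman(A, b, mu=5.0, n_iter=3000)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```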

  11. A Theory of Gender Wage Gap

    Jellal, Mohamed; Nordman, Christophe

    2009-01-01

    In this paper, we introduce uncertainty about the labour productivity of women into a competitive model of wage determination. We demonstrate that more qualified women are then offered much lower wages than men in equilibrium. This result is consistent with the glass ceiling hypothesis, according to which there exist larger gender wage gaps at the upper tail of the wage distribution.

  12. Instrument uncertainty predictions

    Coutts, D.A.

    1991-07-01

    The accuracy of measurements and correlations should normally be provided for most experimental activities. The uncertainty is a measure of the accuracy of a stated value or equation. The uncertainty term reflects a combination of instrument errors, modeling limitations, and phenomena understanding deficiencies. This report provides several methodologies to estimate an instrument's uncertainty when used in experimental work. Methods are shown to predict both the pretest and post-test uncertainty

  13. Info-gap decision theory decisions under severe uncertainty

    Ben-Haim, Yakov

    2006-01-01

    Everyone makes decisions, but not everyone is a decision analyst. A decision analyst uses quantitative models and computational methods to formulate decision algorithms, assess decision performance, identify and evaluate options, determine trade-offs and risks, evaluate strategies for investigation, and so on. This book is written for decision analysts. The term "decision analyst" covers an extremely broad range of practitioners. Virtually all engineers involved in design (of buildings, machines, processes, etc.) or analysis (of safety, reliability, feasibility, etc.) are decision analysts,

  14. Uncertainty analysis guide

    Andres, T.H.

    2002-05-01

    This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)

  15. Uncertainty analysis guide

    Andres, T.H

    2002-05-01

    This guide applies to the estimation of uncertainty in quantities calculated by scientific, analysis and design computer programs that fall within the scope of AECL's software quality assurance (SQA) manual. The guide weaves together rational approaches from the SQA manual and three other diverse sources: (a) the CSAU (Code Scaling, Applicability, and Uncertainty) evaluation methodology; (b) the ISO Guide for the Expression of Uncertainty in Measurement; and (c) the SVA (Systems Variability Analysis) method of risk analysis. This report describes the manner by which random and systematic uncertainties in calculated quantities can be estimated and expressed. Random uncertainty in model output can be attributed to uncertainties of inputs. The propagation of these uncertainties through a computer model can be represented in a variety of ways, including exact calculations, series approximations and Monte Carlo methods. Systematic uncertainties emerge from the development of the computer model itself, through simplifications and conservatisms, for example. These must be estimated and combined with random uncertainties to determine the combined uncertainty in a model output. This report also addresses the method by which uncertainties should be employed in code validation, in order to determine whether experiments and simulations agree, and whether or not a code satisfies the required tolerance for its application. (author)

  16. Uncertainty and Cognitive Control

    Faisal Mushtaq

    2011-10-01

    Full Text Available A growing trend of neuroimaging, behavioural and computational research has investigated the topic of outcome uncertainty in decision-making. Although evidence to date indicates that humans are very effective in learning to adapt to uncertain situations, the nature of the specific cognitive processes involved in the adaptation to uncertainty is still a matter of debate. In this article, we reviewed evidence suggesting that cognitive control processes are at the heart of uncertainty in decision-making contexts. Available evidence suggests that: (1) there is a strong conceptual overlap between the constructs of uncertainty and cognitive control; (2) there is a remarkable overlap between the neural networks associated with uncertainty and the brain networks subserving cognitive control; (3) the perception and estimation of uncertainty might play a key role in monitoring processes and the evaluation of the need for control; (4) potential interactions between uncertainty and cognitive control might play a significant role in several affective disorders.

  17. Uncertainty Reduction for Stochastic Processes on Complex Networks

    Radicchi, Filippo; Castellano, Claudio

    2018-05-01

    Many real-world systems are characterized by stochastic dynamical rules where a complex network of interactions among individual elements probabilistically determines their state. Even with full knowledge of the network structure and of the stochastic rules, the ability to predict system configurations is generally characterized by a large uncertainty. Selecting a fraction of the nodes and observing their state may help to reduce the uncertainty about the unobserved nodes. However, choosing these points of observation in an optimal way is a highly nontrivial task, depending on the nature of the stochastic process and on the structure of the underlying interaction pattern. In this paper, we introduce a computationally efficient algorithm to determine quasioptimal solutions to the problem. The method leverages network sparsity to reduce computational complexity from exponential to almost quadratic, thus allowing the straightforward application of the method to mid-to-large-size systems. Although the method is exact only for equilibrium stochastic processes defined on trees, it turns out to be effective also for out-of-equilibrium processes on sparse loopy networks.

  18. DS02 uncertainty analysis

    Kaul, Dean C.; Egbert, Stephen D.; Woolson, William A.

    2005-01-01

    In order to avoid the pitfalls that so discredited DS86 and its uncertainty estimates, and to provide DS02 uncertainties that are both defensible and credible, this report not only presents the ensemble uncertainties assembled from uncertainties in individual computational elements and radiation dose components but also describes how these relate to comparisons between observed and computed quantities at critical intervals in the computational process. These comparisons include those between observed and calculated radiation free-field components, where observations include thermal- and fast-neutron activation and gamma-ray thermoluminescence, which are relevant to the estimated systematic uncertainty for DS02. The comparisons also include those between calculated and observed survivor shielding, where the observations consist of biodosimetric measurements for individual survivors, which are relevant to the estimated random uncertainty for DS02. (J.P.N.)

  19. Model uncertainty and probability

    Parry, G.W.

    1994-01-01

    This paper discusses the issue of model uncertainty. The use of probability as a measure of an analyst's uncertainty as well as a means of describing random processes has caused some confusion, even though the two uses are representing different types of uncertainty with respect to modeling a system. The importance of maintaining the distinction between the two types is illustrated with a simple example

  20. Uncertainty in artificial intelligence

    Kanal, LN

    1986-01-01

    How to deal with uncertainty is a subject of much controversy in Artificial Intelligence. This volume brings together a wide range of perspectives on uncertainty, many of the contributors being the principal proponents in the controversy. Some of the notable issues which emerge from these papers revolve around an interval-based calculus of uncertainty, the Dempster-Shafer Theory, and probability as the best numeric model for uncertainty. There remain strong dissenting opinions not only about probability but even about the utility of any numeric method in this context.

  1. Uncertainties in hydrogen combustion

    Stamps, D.W.; Wong, C.C.; Nelson, L.S.

    1988-01-01

    Three important areas of hydrogen combustion with uncertainties are identified: high-temperature combustion, flame acceleration and deflagration-to-detonation transition, and aerosol resuspension during hydrogen combustion. The uncertainties associated with high-temperature combustion may affect at least three different accident scenarios: the in-cavity oxidation of combustible gases produced by core-concrete interactions, the direct containment heating hydrogen problem, and the possibility of local detonations. How these uncertainties may affect the sequence of various accident scenarios is discussed and recommendations are made to reduce these uncertainties. 40 references

  2. Uncertainty in hydrological signatures

    McMillan, Hilary; Westerberg, Ida

    2015-04-01

    Information that summarises the hydrological behaviour or flow regime of a catchment is essential for comparing responses of different catchments to understand catchment organisation and similarity, and for many other modelling and water-management applications. Such information types derived as an index value from observed data are known as hydrological signatures, and can include descriptors of high flows (e.g. mean annual flood), low flows (e.g. mean annual low flow, recession shape), the flow variability, flow duration curve, and runoff ratio. Because the hydrological signatures are calculated from observed data such as rainfall and flow records, they are affected by uncertainty in those data. Subjective choices in the method used to calculate the signatures create a further source of uncertainty. Uncertainties in the signatures may affect our ability to compare different locations, to detect changes, or to compare future water resource management scenarios. The aim of this study was to contribute to the hydrological community's awareness and knowledge of data uncertainty in hydrological signatures, including typical sources, magnitude and methods for its assessment. We proposed a generally applicable method to calculate these uncertainties based on Monte Carlo sampling and demonstrated it for a variety of commonly used signatures. The study was made for two data rich catchments, the 50 km2 Mahurangi catchment in New Zealand and the 135 km2 Brue catchment in the UK. For rainfall data the uncertainty sources included point measurement uncertainty, the number of gauges used in calculation of the catchment spatial average, and uncertainties relating to lack of quality control. For flow data the uncertainty sources included uncertainties in stage/discharge measurement and in the approximation of the true stage-discharge relation by a rating curve. The resulting uncertainties were compared across the different signatures and catchments, to quantify uncertainty

  3. A new approach to global seismic tomography based on regularization by sparsity in a novel 3D spherical wavelet basis

    Loris, Ignace; Simons, Frederik J.; Daubechies, Ingrid; Nolet, Guust; Fornasier, Massimo; Vetter, Philip; Judd, Stephen; Voronin, Sergey; Vonesch, Cédric; Charléty, Jean

    2010-05-01

    Global seismic wavespeed models are routinely parameterized in terms of spherical harmonics, networks of tetrahedral nodes, rectangular voxels, or spherical splines. Up to now, Earth model parametrizations by wavelets on the three-dimensional ball remain uncommon. Here we propose such a procedure with the following three goals in mind: (1) The multiresolution character of a wavelet basis allows for the models to be represented with an effective spatial resolution that varies as a function of position within the Earth. (2) This property can be used to great advantage in the regularization of seismic inversion schemes by seeking the most sparse solution vector, in wavelet space, through iterative minimization of a combination of the ℓ2 (to fit the data) and ℓ1 norms (to promote sparsity in wavelet space). (3) With the continuing increase in high-quality seismic data, our focus is also on numerical efficiency and the ability to use parallel computing in reconstructing the model. In this presentation we propose a new wavelet basis to take advantage of these three properties. To form the numerical grid we begin with a surface tessellation known as the 'cubed sphere', a construction popular in fluid dynamics and computational seismology, coupled with a semi-regular radial subdivision that honors the major seismic discontinuities between the core-mantle boundary and the surface. This mapping first divides the volume of the mantle into six portions. In each 'chunk' two angular and one radial variable are used for parametrization. In the new variables standard 'cartesian' algorithms can more easily be used to perform the wavelet transform (or other common transforms). Edges between chunks are handled by special boundary filters. We highlight the benefits of this construction and use it to analyze the information present in several published seismic compressional-wavespeed models of the mantle, paying special attention to the statistics of wavelet and scaling coefficients.

  4. Exploiting sparsity and low-rank structure for the recovery of multi-slice breast MRIs with reduced sampling error.

    Yin, X X; Ng, B W-H; Ramamohanarao, K; Baghai-Wadji, A; Abbott, D

    2012-09-01

    It has been shown that magnetic resonance images (MRIs) with a sparse representation in a transformed domain, e.g. spatial finite differences (FD) or the discrete cosine transform (DCT), can be restored from undersampled k-space by applying current compressive sampling theory. The paper presents a model-based method for the restoration of MRIs. The reduced-order model, in which a full system response is projected onto a subspace of lower dimensionality, has been used to accelerate image reconstruction by reducing the size of the involved linear system. In this paper, the singular value threshold (SVT) technique is applied as a denoising scheme to reduce and select the model order of the inverse Fourier transform image, and to restore multi-slice breast MRIs that have been compressively sampled in k-space. The restored MRIs with SVT denoising show reduced sampling errors compared to the direct MRI restoration methods via spatial FD or DCT. Compressive sampling is a technique for finding sparse solutions to underdetermined linear systems. The sparsity implicit in MRIs is exploited to find the solution to MRI reconstruction, after transformation, from significantly undersampled k-space. The challenge, however, is that incoherent artifacts resulting from the random undersampling add noise-like interference to the sparsely represented image. The recovery algorithms in the literature are not capable of fully removing these artifacts. It is necessary to introduce a denoising procedure to improve the quality of image recovery. This paper applies a singular value threshold algorithm to reduce the model order of the image basis functions, which allows further improvement of the quality of image reconstruction with removal of noise artifacts. The principle of the denoising scheme is to reconstruct the sparse MRI matrices optimally with a lower rank via selecting a smaller number of dominant singular values. The singular value threshold algorithm is performed
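
    The core SVT operation the summary refers to, soft-thresholding the singular values of a data matrix to obtain a reduced-rank estimate, can be written in a few lines. The matrix sizes, rank, and threshold below are illustrative assumptions rather than breast-MRI data.

```python
# Singular value thresholding (SVT) of a noisy, approximately low-rank matrix.
import numpy as np

def svt(M, tau):
    """Soft-threshold the singular values of M at level tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
low_rank = rng.standard_normal((128, 5)) @ rng.standard_normal((5, 96))   # rank-5 "image stack"
noisy = low_rank + 0.5 * rng.standard_normal(low_rank.shape)

denoised = svt(noisy, tau=15.0)
print("rank kept   :", np.linalg.matrix_rank(denoised))
print("error before:", np.linalg.norm(noisy - low_rank) / np.linalg.norm(low_rank))
print("error after :", np.linalg.norm(denoised - low_rank) / np.linalg.norm(low_rank))
```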

  5. The prototype GAPS (pGAPS) experiment

    Mognet, S.A.I., E-mail: mognet@astro.ucla.edu [University of California, Los Angeles, CA 90095 (United States); Aramaki, T. [Columbia University, New York, NY 10027 (United States); Bando, N. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency (ISAS/JAXA), Sagamihara, Kanagawa 252-5210 (Japan); Boggs, S.E.; Doetinchem, P. von [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States); Fuke, H. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency (ISAS/JAXA), Sagamihara, Kanagawa 252-5210 (Japan); Gahbauer, F.H.; Hailey, C.J.; Koglin, J.E.; Madden, N. [Columbia University, New York, NY 10027 (United States); Mori, K.; Okazaki, S. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency (ISAS/JAXA), Sagamihara, Kanagawa 252-5210 (Japan); Ong, R.A. [University of California, Los Angeles, CA 90095 (United States); Perez, K.M.; Tajiri, G. [Columbia University, New York, NY 10027 (United States); Yoshida, T. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency (ISAS/JAXA), Sagamihara, Kanagawa 252-5210 (Japan); Zweerink, J. [University of California, Los Angeles, CA 90095 (United States)

    2014-01-21

    The General Antiparticle Spectrometer (GAPS) experiment is a novel approach for the detection of cosmic ray antiparticles. A prototype GAPS (pGAPS) experiment was successfully flown on a high-altitude balloon in June of 2012. The goals of the pGAPS experiment were: to test the operation of lithium drifted silicon (Si(Li)) detectors at balloon altitudes, to validate the thermal model and cooling concept needed for engineering of a full-size GAPS instrument, and to characterize cosmic ray and X-ray backgrounds. The instrument was launched from the Japan Aerospace Exploration Agency's (JAXA) Taiki Aerospace Research Field in Hokkaido, Japan. The flight lasted a total of 6 h, with over 3 h at float altitude (∼33km). Over one million cosmic ray triggers were recorded and all flight goals were met or exceeded.

  6. The prototype GAPS (pGAPS) experiment

    Mognet, S.A.I.; Aramaki, T.; Bando, N.; Boggs, S.E.; Doetinchem, P. von; Fuke, H.; Gahbauer, F.H.; Hailey, C.J.; Koglin, J.E.; Madden, N.; Mori, K.; Okazaki, S.; Ong, R.A.; Perez, K.M.; Tajiri, G.; Yoshida, T.; Zweerink, J.

    2014-01-01

    The General Antiparticle Spectrometer (GAPS) experiment is a novel approach for the detection of cosmic ray antiparticles. A prototype GAPS (pGAPS) experiment was successfully flown on a high-altitude balloon in June of 2012. The goals of the pGAPS experiment were: to test the operation of lithium drifted silicon (Si(Li)) detectors at balloon altitudes, to validate the thermal model and cooling concept needed for engineering of a full-size GAPS instrument, and to characterize cosmic ray and X-ray backgrounds. The instrument was launched from the Japan Aerospace Exploration Agency's (JAXA) Taiki Aerospace Research Field in Hokkaido, Japan. The flight lasted a total of 6 h, with over 3 h at float altitude (∼33km). Over one million cosmic ray triggers were recorded and all flight goals were met or exceeded

  7. Uncertainty modeling process for semantic technology

    Rommel N. Carvalho

    2016-08-01

    Full Text Available The ubiquity of uncertainty across application domains generates a need for principled support for uncertainty management in semantically aware systems. A probabilistic ontology provides constructs for representing uncertainty in domain ontologies. While the literature has been growing on formalisms for representing uncertainty in ontologies, there remains little guidance in the knowledge engineering literature for how to design probabilistic ontologies. To address the gap, this paper presents the Uncertainty Modeling Process for Semantic Technology (UMP-ST), a new methodology for modeling probabilistic ontologies. To explain how the methodology works and to verify that it can be applied to different scenarios, this paper describes step-by-step the construction of a proof-of-concept probabilistic ontology. The resulting domain model can be used to support identification of fraud in public procurements in Brazil. While the case study illustrates the development of a probabilistic ontology in the PR-OWL probabilistic ontology language, the methodology is applicable to any ontology formalism that properly integrates uncertainty with domain semantics.

  8. SU-G-IeP1-13: Sub-Nyquist Dynamic MRI Via Prior Rank, Intensity and Sparsity Model (PRISM)

    Jiang, B; Gao, H

    2016-01-01

    Purpose: Accelerated dynamic MRI is important for MRI-guided radiotherapy. Inspired by compressive sensing (CS), sub-Nyquist dynamic MRI, i.e., sparse sampling in k-t space for accelerated dynamic MRI, has been an active research area. This work investigates sub-Nyquist dynamic MRI via a previously developed CS model, namely the Prior Rank, Intensity and Sparsity Model (PRISM). Methods: The proposed method utilizes PRISM with rank minimization and incoherent sampling patterns for sub-Nyquist reconstruction. In PRISM, the low-rank background image, which is automatically calculated by rank minimization, is excluded from the L1 minimization step of the CS reconstruction to further sparsify the residual image, thus allowing for higher acceleration rates. Furthermore, the sampling pattern in k-t space is made more incoherent by sampling a different set of k-space points at different temporal frames. Results: Reconstruction results from the L1-sparsity method and the PRISM method with 30% and 15% undersampled data are compared to demonstrate the power of PRISM for dynamic MRI. Conclusion: A sub-Nyquist MRI reconstruction method based on PRISM is developed, with improved image quality relative to the L1-sparsity method.
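
    A toy sketch of the low-rank plus sparse split that PRISM-type models build on: block-coordinate minimization of ½‖D − L − S‖² + τ_L‖L‖_* + τ_S‖S‖₁, alternating singular-value thresholding for the background L with soft-thresholding for the sparse residual S. The matrix, thresholds, and iteration count are illustrative assumptions; the actual PRISM reconstruction operates on undersampled k-t data, which is not modeled here.

```python
# Low-rank background + sparse residual separation by alternating proximal steps.
import numpy as np

def soft(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def svt(M, tau):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def low_rank_plus_sparse(D, tau_l=20.0, tau_s=0.5, n_iter=50):
    L = np.zeros_like(D); S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S, tau_l)      # low-rank "background" update
        S = soft(D - L, tau_s)     # sparse "dynamic residual" update
    return L, S

rng = np.random.default_rng(0)
background = 5.0 * np.outer(rng.standard_normal(100), rng.standard_normal(40))  # rank-1 static part
dynamic = np.zeros((100, 40))
dynamic[rng.integers(0, 100, 60), rng.integers(0, 40, 60)] = 3.0                # sparse changes
L, S = low_rank_plus_sparse(background + dynamic)
print("background error:", np.linalg.norm(L - background) / np.linalg.norm(background))
```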

  9. Uncertainty in social dilemmas

    Kwaadsteniet, Erik Willem de

    2007-01-01

    This dissertation focuses on social dilemmas, and more specifically, on environmental uncertainty in these dilemmas. Real-life social dilemma situations are often characterized by uncertainty. For example, fishermen mostly do not know the exact size of the fish population (i.e., resource size

  10. Uncertainty and Climate Change

    Berliner, L. Mark

    2003-01-01

    Anthropogenic, or human-induced, climate change is a critical issue in science and in the affairs of humankind. Though the target of substantial research, the conclusions of climate change studies remain subject to numerous uncertainties. This article presents a very brief review of the basic arguments regarding anthropogenic climate change with particular emphasis on uncertainty.

  11. Deterministic uncertainty analysis

    Worley, B.A.

    1987-01-01

    Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig

  12. Uncertainty and simulation

    Depres, B.; Dossantos-Uzarralde, P.

    2009-01-01

    More than 150 researchers and engineers from universities and the industrial world met to discuss the new methodologies developed around assessing uncertainty. About 20 papers were presented and the main topics were: methods to study the propagation of uncertainties, sensitivity analysis, nuclear data covariances or multi-parameter optimisation. This report gathers the contributions of CEA researchers and engineers.

  13. Sparsity-based fast CGH generation using layer-based approach for 3D point cloud model

    Kim, Hak Gu; Jeong, Hyunwook; Ro, Yong Man

    2017-03-01

    Computer generated hologram (CGH) is becoming increasingly important for 3-D displays in various applications including virtual reality. In CGH, holographic fringe patterns are generated by numerically calculating them on computer simulation systems. However, a heavy computational cost is required to calculate the complex amplitude on the CGH plane for all points of 3D objects. This paper proposes a new fast CGH generation method based on the sparsity of the CGH for 3D point cloud models. The aim of the proposed method is to significantly reduce computational complexity while maintaining the quality of the holographic fringe patterns. To that end, we present a new layer-based approach for calculating the complex amplitude distribution on the CGH plane by using the sparse FFT (sFFT). We observe that the CGH of a layer of 3D objects is sparse, so that a dominant CGH is rapidly generated from a small set of signals by sFFT. Experimental results have shown that the proposed method is one order of magnitude faster than recently reported fast CGH generation methods.
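
    The layer-based decomposition itself is straightforward to illustrate: each depth layer of the point cloud is propagated to the hologram plane with an FFT-based angular spectrum kernel and the contributions are summed. The sketch below shows only that skeleton with an ordinary FFT and assumed wavelength, pixel pitch, and point positions; the paper's speed-up comes from replacing the dense FFT with a sparse FFT, which is not reproduced here.

```python
# Layer-based CGH skeleton: angular spectrum propagation of each depth layer, then summation.
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field a distance z (m) with the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)[None, :]
    fy = np.fft.fftfreq(ny, d=pitch)[:, None]
    arg = 1.0 / wavelength**2 - fx**2 - fy**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent components dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

wavelength, pitch = 532e-9, 8e-6                        # green laser, 8 um pixel pitch (assumed)
layers = {1e-3: [(64, 64)], 2e-3: [(32, 96), (96, 40)]} # depth (m) -> point coordinates (assumed)

hologram = np.zeros((128, 128), dtype=complex)
for z, points in layers.items():
    layer = np.zeros((128, 128), dtype=complex)
    for (iy, ix) in points:
        layer[iy, ix] = 1.0                             # unit-amplitude point source
    hologram += angular_spectrum(layer, wavelength, pitch, z)

fringe = np.angle(hologram)                             # phase-only fringe pattern
print(fringe.shape, fringe.dtype)
```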

  14. Reconstruction of Undersampled Big Dynamic MRI Data Using Non-Convex Low-Rank and Sparsity Constraints

    Ryan Wen Liu

    2017-03-01

    Full Text Available Dynamic magnetic resonance imaging (MRI) has been extensively utilized for enhancing medical living environment visualization; however, in clinical practice it often suffers from long data acquisition times. Dynamic imaging essentially reconstructs the visual image from raw (k,t)-space measurements, commonly referred to as big data. The purpose of this work is to accelerate big medical data acquisition in dynamic MRI by developing a non-convex minimization framework. In particular, to overcome the inherent speed limitation, both non-convex low-rank and sparsity constraints were combined to accelerate the dynamic imaging. However, the non-convex constraints make the dynamic reconstruction problem difficult to directly solve through the commonly-used numerical methods. To guarantee solution efficiency and stability, a numerical algorithm based on the Alternating Direction Method of Multipliers (ADMM) is proposed to solve the resulting non-convex optimization problem. ADMM decomposes the original complex optimization problem into several simple sub-problems. Each sub-problem has a closed-form solution or could be efficiently solved using existing numerical methods. It has been proven that the quality of images reconstructed from fewer measurements can be significantly improved using non-convex minimization. Numerous experiments have been conducted on two in vivo cardiac datasets to compare the proposed method with several state-of-the-art imaging methods. Experimental results illustrated that the proposed method could guarantee the superior imaging performance in terms of quantitative and visual image quality assessments.

  15. Improved adaptive genetic algorithm with sparsity constraint applied to thermal neutron CT reconstruction of two-phase flow

    Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang

    2018-05-01

    Thermal neutron computer tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and the strong penetrability of neutrons through tube walls constructed with metallic material. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, the neighborhood mutation operator is used to ensure the continuity of the reconstructed object. The adaptive crossover probability Pc and mutation probability Pm are improved to help the adaptive genetic algorithm (AGA) achieve the global optimum. The reconstructed results for projection data, obtained from Monte Carlo simulation, indicate that the comprehensive performance of the IAGA-SC algorithm exceeds the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It especially shows great advantages in restoring simply connected flow regimes and the shape of the object. In addition, the CT experiment for two-phase flow phantoms was conducted on an accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.

  16. Conditional uncertainty principle

    Gour, Gilad; Grudka, Andrzej; Horodecki, Michał; Kłobus, Waldemar; Łodyga, Justyna; Narasimhachar, Varun

    2018-04-01

    We develop a general operational framework that formalizes the concept of conditional uncertainty in a measure-independent fashion. Our formalism is built upon a mathematical relation which we call conditional majorization. We define conditional majorization and, for the case of classical memory, we provide its thorough characterization in terms of monotones, i.e., functions that preserve the partial order under conditional majorization. We demonstrate the application of this framework by deriving two types of memory-assisted uncertainty relations, (1) a monotone-based conditional uncertainty relation and (2) a universal measure-independent conditional uncertainty relation, both of which set a lower bound on the minimal uncertainty that Bob has about Alice's pair of incompatible measurements, conditioned on arbitrary measurement that Bob makes on his own system. We next compare the obtained relations with their existing entropic counterparts and find that they are at least independent.

  17. Physical Uncertainty Bounds (PUB)

    Vaughan, Diane Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Dean L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-19

    This paper introduces and motivates the need for a new methodology for determining upper bounds on the uncertainties in simulations of engineered systems due to limited fidelity in the composite continuum-level physics models needed to simulate the systems. We show that traditional uncertainty quantification methods provide, at best, a lower bound on this uncertainty. We propose to obtain bounds on the simulation uncertainties by first determining bounds on the physical quantities or processes relevant to system performance. By bounding these physics processes, as opposed to carrying out statistical analyses of the parameter sets of specific physics models or simply switching out the available physics models, one can obtain upper bounds on the uncertainties in simulated quantities of interest.

  18. Measurement uncertainty and probability

    Willink, Robin

    2013-01-01

    A measurement result is incomplete without a statement of its 'uncertainty' or 'margin of error'. But what does this statement actually tell us? By examining the practical meaning of probability, this book discusses what is meant by a '95 percent interval of measurement uncertainty', and how such an interval can be calculated. The book argues that the concept of an unknown 'target value' is essential if probability is to be used as a tool for evaluating measurement uncertainty. It uses statistical concepts, such as a conditional confidence interval, to present 'extended' classical methods for evaluating measurement uncertainty. The use of the Monte Carlo principle for the simulation of experiments is described. Useful for researchers and graduate students, the book also discusses other philosophies relating to the evaluation of measurement uncertainty. It employs clear notation and language to avoid the confusion that exists in this controversial field of science.

  19. Advancing Uncertainty: Untangling and Discerning Related Concepts

    Janice Penrod

    2002-12-01

    Full Text Available Methods of advancing concepts within the qualitative paradigm have been developed and articulated. In this section, I describe methodological perspectives of a project designed to advance the concept of uncertainty using multiple qualitative methods. Through a series of earlier studies, the concept of uncertainty arose repeatedly in varied contexts, working its way into prominence, and warranting further investigation. Processes of advanced concept analysis were used to initiate the formal investigation into the meaning of the concept. Through concept analysis, the concept was deconstructed to identify conceptual components and gaps in understanding. Using this skeletal framework of the concept identified through concept analysis, subsequent studies were carried out to add ‘flesh’ to the concept. First, a concept refinement using the literature as data was completed. Findings revealed that the current state of the concept of uncertainty failed to incorporate what was known of the lived experience. Therefore, using interview techniques as the primary data source, a phenomenological study of uncertainty among caregivers was conducted. Incorporating the findings of the phenomenology, the skeletal framework of the concept was further fleshed out using techniques of concept correction to produce a more mature conceptualization of uncertainty. In this section, I describe the flow of this qualitative project investigating the concept of uncertainty, with special emphasis on a particular threat to validity (called conceptual tunnel vision that was identified and addressed during the phases of concept correction. Though in this article I employ a study of uncertainty for illustration, limited substantive findings regarding uncertainty are presented to retain a clear focus on the methodological issues.

  20. Behind the Pay Gap

    Dey, Judy Goldberg; Hill, Catherine

    2007-01-01

    Women have made remarkable gains in education during the past three decades, yet these achievements have resulted in only modest improvements in pay equity. The gender pay gap has become a fixture of the U.S. workplace and is so ubiquitous that many simply view it as normal. "Behind the Pay Gap" examines the gender pay gap for college graduates.…

  1. Uncertainty Propagation in OMFIT

    Smith, Sterling; Meneghini, Orso; Sung, Choongki

    2017-10-01

    A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
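
    The record above names the Python uncertainties package; the following sketch shows how covariant fit uncertainties can be propagated through a modified-tanh profile and its radial derivative with that package. The parameter values, covariance matrix, and exact tanh parameterization are placeholders, not OMFIT, ONETWO, or TGLF output.

```python
# Covariant-uncertainty propagation through a modified-tanh pedestal fit (sketch).
import numpy as np
from uncertainties import correlated_values
from uncertainties import unumpy as unp

# Hypothetical fitted parameters (core value, pedestal height, symmetry point, width)
# and a hypothetical covariance matrix from a least-squares profile fit.
params = [2.0, 0.4, 0.97, 0.04]
cov = np.diag([0.01, 0.002, 1e-4, 4e-5])
core, ped, psi_sym, width = correlated_values(params, cov)

psi = np.linspace(0.85, 1.0, 50)   # normalized flux coordinate

# Assumed profile form: T(psi) = ped + 0.5*(core - ped)*(1 - tanh((psi - psi_sym)/width))
T = ped + 0.5 * (core - ped) * (1 - unp.tanh((psi - psi_sym) / width))

# Derivative with respect to psi, with correlated uncertainties carried through.
dT_dpsi = -0.5 * (core - ped) / width / unp.cosh((psi - psi_sym) / width) ** 2

print(unp.nominal_values(T)[-1], unp.std_devs(T)[-1])          # value and 1-sigma at psi = 1
print(unp.nominal_values(dT_dpsi)[25], unp.std_devs(dT_dpsi)[25])
```

    The uncertainties package performs first-order (linear) error propagation with the full parameter covariance, which is one way to realize the "covariant uncertainties" mentioned in the abstract; the Monte Carlo sampling alternative mentioned there would instead draw parameter sets from the covariance and evaluate the profile repeatedly.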

  2. Verification of uncertainty budgets

    Heydorn, Kaj; Madsen, B.S.

    2005-01-01

    , and therefore it is essential that the applicability of the overall uncertainty budget to actual measurement results be verified on the basis of current experimental data. This should be carried out by replicate analysis of samples taken in accordance with the definition of the measurand, but representing...... the full range of matrices and concentrations for which the budget is assumed to be valid. In this way the assumptions made in the uncertainty budget can be experimentally verified, both as regards sources of variability that are assumed negligible, and dominant uncertainty components. Agreement between...

  3. Evaluating prediction uncertainty

    McKay, M.D.

    1995-03-01

    The probability distribution of a model prediction is presented as a proper basis for evaluating the uncertainty in a model prediction that arises from uncertainty in input values. Determination of important model inputs and subsets of inputs is made through comparison of the prediction distribution with conditional prediction probability distributions. Replicated Latin hypercube sampling and variance ratios are used in estimation of the distributions and in construction of importance indicators. The assumption of a linear relation between model output and inputs is not necessary for the indicators to be effective. A sequential methodology which includes an independent validation step is applied in two analysis applications to select subsets of input variables which are the dominant causes of uncertainty in the model predictions. Comparison with results from methods which assume linearity shows how those methods may fail. Finally, suggestions for treating structural uncertainty for submodels are presented
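
    A simplified illustration of a variance-ratio importance indicator: plain Monte Carlo with binning stands in for the replicated Latin hypercube design of the paper, and the model is an arbitrary nonlinear placeholder.

```python
# Share of prediction variance explained by conditioning on one input (sketch).
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # placeholder nonlinear model: no linearity assumption is needed
    return np.sin(x[:, 0]) + 0.3 * x[:, 1] ** 2 + 0.05 * x[:, 2]

n, d = 20000, 3
x = rng.uniform(-1, 1, size=(n, d))
y = model(x)
var_y = y.var()

for j in range(d):
    # E[Var(Y | X_j)] approximated by averaging the within-bin variance of Y
    bins = np.quantile(x[:, j], np.linspace(0, 1, 21))
    idx = np.clip(np.digitize(x[:, j], bins) - 1, 0, 19)
    within = np.array([y[idx == b].var() for b in range(20)])
    importance = 1 - within.mean() / var_y   # ~ Var(E[Y|X_j]) / Var(Y)
    print(f"input {j}: variance ratio ~ {importance:.2f}")
```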

  4. Uncertainty in oil projects

    Limperopoulos, G.J.

    1995-01-01

    This report presents an oil project valuation under uncertainty by means of two well-known financial techniques: the Capital Asset Pricing Model (CAPM) and the Black-Scholes option pricing formula. CAPM gives a linear positive relationship between expected rate of return and risk but does not take into consideration the flexibility that is crucial for an irreversible investment such as an oil project. Introducing investment decision flexibility by using real options can increase the oil project value substantially. Some simple tests of the importance of stock market uncertainty for oil investments are performed. Uncertainty in stock returns is correlated with aggregate product market uncertainty according to Pindyck (1991). The results of the tests are not satisfactory due to the short data series, but introducing two other explanatory variables, the interest rate and Gross Domestic Product, improves the situation. 36 refs., 18 figs., 6 tabs
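
    A small sketch of the two valuation tools named above: the CAPM expected return and a Black-Scholes call value used as a proxy for the real option to delay an investment. All numbers are illustrative, not the report's data.

```python
# CAPM expected return and a Black-Scholes call as a real-option proxy (sketch).
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def capm_expected_return(rf, beta, market_return):
    """CAPM: linear relation between expected return and systematic risk."""
    return rf + beta * (market_return - rf)

def black_scholes_call(S, K, T, r, sigma):
    """Value of the option to invest: project value S, investment cost K."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(capm_expected_return(rf=0.04, beta=1.2, market_return=0.10))     # 0.112
print(black_scholes_call(S=100.0, K=90.0, T=2.0, r=0.04, sigma=0.35))  # option value
```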

  5. Uncertainties and climatic change

    De Gier, A.M.; Opschoor, J.B.; Van de Donk, W.B.H.J.; Hooimeijer, P.; Jepma, J.; Lelieveld, J.; Oerlemans, J.; Petersen, A.

    2008-01-01

    Which processes in the climate system are misunderstood? How are scientists dealing with uncertainty about climate change? What will be done with the conclusions of the recently published synthesis report of the IPCC? These and other questions were answered during the meeting 'Uncertainties and climate change' that was held on Monday 26 November 2007 at the KNAW in Amsterdam. This report is a compilation of all the presentations and provides some conclusions resulting from the discussions during this meeting.

  6. Mechanics and uncertainty

    Lemaire, Maurice

    2014-01-01

    Science is a quest for certainty, but lack of certainty is the driving force behind all of its endeavors. This book, specifically, examines the uncertainty of technological and industrial science. Uncertainty and Mechanics studies the concepts of mechanical design in an uncertain setting and explains engineering techniques for inventing cost-effective products. Though it references practical applications, this is a book about ideas and potential advances in mechanical science.

  7. Uncertainty: lotteries and risk

    Ávalos, Eloy

    2011-01-01

    In this paper we develop the theory of uncertainty in a context where the risks assumed by the individual are measurable and manageable. We primarily use the definition of lottery to formulate the axioms of the individual's preferences, and its representation through the utility function von Neumann - Morgenstern. We study the expected utility theorem and its properties, the paradoxes of choice under uncertainty and finally the measures of risk aversion with monetary lotteries.

  8. Justification for recommended uncertainties

    Pronyaev, V.G.; Badikov, S.A.; Carlson, A.D.

    2007-01-01

    The uncertainties obtained in an earlier standards evaluation were considered to be unrealistically low by experts of the US Cross Section Evaluation Working Group (CSEWG). Therefore, the CSEWG Standards Subcommittee replaced the covariance matrices of evaluated uncertainties by expanded percentage errors that were assigned to the data over wide energy groups. There are a number of reasons that might lead to low uncertainties of the evaluated data: Underestimation of the correlations existing between the results of different measurements; The presence of unrecognized systematic uncertainties in the experimental data can lead to biases in the evaluated data as well as to underestimations of the resulting uncertainties; Uncertainties for correlated data cannot only be characterized by percentage uncertainties or variances. Covariances between the evaluated value at 0.2 MeV and other points obtained in model (RAC R matrix and PADE2 analytical expansion) and non-model (GMA) fits of the ⁶Li(n,t) TEST1 data and the correlation coefficients are presented, and covariances between the evaluated value at 0.045 MeV and other points (along the line or column of the matrix) as obtained in EDA and RAC R matrix fits of the data available for reactions that pass through the formation of the ⁷Li system are discussed. The GMA fit with the GMA database is shown for comparison. The following diagrams are discussed: Percentage uncertainties of the evaluated cross section for the ⁶Li(n,t) reaction and for the ²³⁵U(n,f) reaction; estimation given by CSEWG experts; GMA result with full GMA database, including experimental data for the ⁶Li(n,t), ⁶Li(n,n) and ⁶Li(n,total) reactions; uncertainties in the GMA combined fit for the standards; EDA and RAC R matrix results, respectively. Uncertainties of absolute and ²⁵²Cf fission spectrum averaged cross section measurements, and deviations between measured and evaluated values for ²³⁵U(n,f) cross-sections in the neutron energy range 1

  9. THE PAL 5 STAR STREAM GAPS

    Carlberg, R. G.; Hetherington, Nathan; Grillmair, C. J.

    2012-01-01

    Pal 5 is a low-mass, low-velocity-dispersion globular cluster with spectacular tidal tails. We use the Sloan Digital Sky Survey Data Release 8 data to extend the density measurements of the trailing star stream to 23 deg distance from the cluster, at which point the stream runs off the edge of the available sky coverage. The size and the number of gaps in the stream are measured using a filter which approximates the structure of the gaps found in stream simulations. We find 5 gaps that are at least 99% confidence detections with about a dozen gaps at 90% confidence. The statistical significance of a gap is estimated using bootstrap resampling of the control regions on either side of the stream. The density minimum closest to the cluster is likely the result of the epicyclic orbits of the tidal outflow and has been discounted. To create the number of 99% confidence gaps per unit length at the mean age of the stream requires a halo population of nearly a thousand dark matter sub-halos with peak circular velocities above 1 km s⁻¹ within 30 kpc of the galactic center. These numbers are a factor of about three below cold stream simulation at this sub-halo mass or velocity but, given the uncertainties in both measurement and more realistic warm stream modeling, are in substantial agreement with the LCDM prediction.
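
    A schematic version of the bootstrap significance test described above: the depth of a density dip in the stream is compared with dips produced by resampling the control regions. The counts and the dip statistic are illustrative placeholders, not the Pal 5 measurements.

```python
# Bootstrap significance of a stream gap against resampled control regions (sketch).
import numpy as np

rng = np.random.default_rng(2)

def dip_depth(counts):
    """Depth of the deepest minimum relative to the mean density."""
    return (counts.mean() - counts.min()) / counts.mean()

stream_counts = rng.poisson(100, size=50).astype(float)
stream_counts[25] = 60.0                      # an apparent gap in the stream profile
control_counts = rng.poisson(100, size=500)   # off-stream control regions

observed = dip_depth(stream_counts)
boot = np.array([
    dip_depth(rng.choice(control_counts, size=stream_counts.size).astype(float))
    for _ in range(2000)
])
confidence = (boot < observed).mean()         # fraction of resamples with a shallower dip
print(f"gap significance ~ {confidence:.3f}")
```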

  10. Integrating uncertainties for climate change mitigation

    Rogelj, Joeri; McCollum, David; Reisinger, Andy; Meinshausen, Malte; Riahi, Keywan

    2013-04-01

    The target of keeping global average temperature increase to below 2°C has emerged in the international climate debate more than a decade ago. In response, the scientific community has tried to estimate the costs of reaching such a target through modelling and scenario analysis. Producing such estimates remains a challenge, particularly because of relatively well-known, but ill-quantified uncertainties, and owing to limited integration of scientific knowledge across disciplines. The integrated assessment community, on one side, has extensively assessed the influence of technological and socio-economic uncertainties on low-carbon scenarios and associated costs. The climate modelling community, on the other side, has worked on achieving an increasingly better understanding of the geophysical response of the Earth system to emissions of greenhouse gases (GHG). This geophysical response remains a key uncertainty for the cost of mitigation scenarios but has only been integrated with assessments of other uncertainties in a rudimentary manner, i.e., for equilibrium conditions. To bridge this gap between the two research communities, we generate distributions of the costs associated with limiting transient global temperature increase to below specific temperature limits, taking into account uncertainties in multiple dimensions: geophysical, technological, social and political. In other words, uncertainties resulting from our incomplete knowledge about how the climate system precisely reacts to GHG emissions (geophysical uncertainties), about how society will develop (social uncertainties and choices), which technologies will be available (technological uncertainty and choices), when we choose to start acting globally on climate change (political choices), and how much money we are or are not willing to spend to achieve climate change mitigation. We find that political choices that delay mitigation have the largest effect on the cost-risk distribution, followed by

  11. Uncertainties as Barriers for Knowledge Sharing with Enterprise Social Media

    Trier, Matthias; Fung, Magdalene; Hansen, Abigail

    2017-01-01

    become a barrier for the participants’ adoption. There is only limited existing research studying the types of uncertainties that employees perceive and their impact on knowledge transfer via social media. To address this gap, this article presents a qualitative interview-based study of the adoption...... of the Enterprise Social Media tool Yammer for knowledge sharing in a large global organization. We identify and categorize nine uncertainties that were perceived as barriers by the respondents. The study revealed that the uncertainty types play an important role in affecting employees’ participation...

  12. Gap and density theorems

    Levinson, N

    1940-01-01

    A typical density theorem of the type discussed in the book deals with a set of exponential functions {e^{iλ_n x}} on an interval of the real line and explores the conditions under which this set generates the entire L_2 space on this interval. A typical gap theorem deals with functions f on the real line such that many Fourier coefficients of f vanish. The main goal of this book is to investigate relations between density and gap theorems and to study various cases where these theorems hold. The author also shows that density- and gap-type theorems are related to various propertie

  13. Bridging the Gap

    Kramer Overgaard, Majken; Broeng, Jes; Jensen, Monika Luniewska

    Bridging the Gap (BtG) is a 2-year project funded by The Danish Industry Foundation. The goal of Bridging the Gap has been to create a new innovation model which will increase the rate at which Danish universities can spinout new technology ventures.

  14. Dealing with exploration uncertainties

    Capen, E.

    1992-01-01

    Exploration for oil and gas should fulfill the most adventurous in their quest for excitement and surprise. This paper tries to cover that tall order. The authors will touch on the magnitude of the uncertainty (which is far greater than in most other businesses), the effects of not knowing target sizes very well, how to build uncertainty into analyses naturally, how to tie reserves and chance estimates to economics, and how to look at the portfolio effect of an exploration program. With no apologies, the authors will be using a different language for some readers - the language of uncertainty, which means probability and statistics. These tools allow one to combine largely subjective exploration information with the more analytical data from the engineering and economic side

  15. TH-EF-BRB-05: 4pi Non-Coplanar IMRT Beam Angle Selection by Convex Optimization with Group Sparsity Penalty

    O’Connor, D; Nguyen, D; Voronenko, Y; Yin, W; Sheng, K

    2016-01-01

    Purpose: Integrated beam orientation and fluence map optimization is expected to be the foundation of robust automated planning but existing heuristic methods do not promise global optimality. We aim to develop a new method for beam angle selection in 4π non-coplanar IMRT systems based on solving (globally) a single convex optimization problem, and to demonstrate the effectiveness of the method by comparison with a state of the art column generation method for 4π beam angle selection. Methods: The beam angle selection problem is formulated as a large scale convex fluence map optimization problem with an additional group sparsity term that encourages most candidate beams to be inactive. The optimization problem is solved using an accelerated first-order method, the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The beam angle selection and fluence map optimization algorithm is used to create non-coplanar 4π treatment plans for several cases (including head and neck, lung, and prostate cases) and the resulting treatment plans are compared with 4π treatment plans created using the column generation algorithm. Results: In our experiments the treatment plans created using the group sparsity method meet or exceed the dosimetric quality of plans created using the column generation algorithm, which was shown superior to clinical plans. Moreover, the group sparsity approach converges in about 3 minutes in these cases, as compared with runtimes of a few hours for the column generation method. Conclusion: This work demonstrates the first non-greedy approach to non-coplanar beam angle selection, based on convex optimization, for 4π IMRT systems. The method given here improves both treatment plan quality and runtime as compared with a state of the art column generation algorithm. When the group sparsity term is set to zero, we obtain an excellent method for fluence map optimization, useful when beam angles have already been selected. NIH R43CA183390, NIH R01CA
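
    A compact sketch of fluence-map optimization with a group-sparsity penalty solved by FISTA, in the spirit of the abstract above. The dose-influence matrix, prescription, group weight, and the handling of nonnegativity are simplifying assumptions, not the published objective or clinical data.

```python
# Group-sparse fluence-map optimization with FISTA (sketch on a random problem).
import numpy as np

rng = np.random.default_rng(3)

n_beams, beamlets_per_beam, n_voxels = 20, 30, 400
n = n_beams * beamlets_per_beam
D = rng.random((n_voxels, n))              # hypothetical dose-influence matrix
d_presc = rng.random(n_voxels)             # hypothetical prescribed dose
groups = np.arange(n).reshape(n_beams, beamlets_per_beam)
lam = 300.0                                # group-sparsity weight (tuning parameter)

def grad(x):
    # gradient of the data-fit term 0.5 * ||D x - d_presc||^2
    return D.T @ (D @ x - d_presc)

def prox_group(x, t):
    # group soft-thresholding (prox of the group-lasso term), followed by a
    # simple projection onto x >= 0 as an approximation of the true constraint
    out = x.copy()
    for g in groups:
        nrm = np.linalg.norm(x[g])
        out[g] = 0.0 if nrm <= t * lam else (1.0 - t * lam / nrm) * x[g]
    return np.maximum(out, 0.0)

L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
x, z, t_k = np.zeros(n), np.zeros(n), 1.0
for _ in range(500):                       # FISTA iterations
    x_new = prox_group(z - grad(z) / L, 1.0 / L)
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_k ** 2))
    z = x_new + ((t_k - 1.0) / t_new) * (x_new - x)
    x, t_k = x_new, t_new

n_active = sum(np.linalg.norm(x[g]) > 1e-6 for g in groups)
print("active beams:", n_active, "of", n_beams)
```

    The design point is the one made in the abstract: because the objective (data fit plus a group-sparsity term) is convex, a single first-order solve selects beams globally, whereas greedy column generation adds beams one at a time; the weight lam trades plan quality against the number of active beams.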

  16. Uncertainty in artificial intelligence

    Levitt, TS; Lemmer, JF; Shachter, RD

    1990-01-01

    Clearly illustrated in this volume is the current relationship between Uncertainty and AI. It has been said that research in AI revolves around five basic questions asked relative to some particular domain: What knowledge is required? How can this knowledge be acquired? How can it be represented in a system? How should this knowledge be manipulated in order to provide intelligent behavior? How can the behavior be explained? In this volume, all of these questions are addressed. From the perspective of the relationship of uncertainty to the basic questions of AI, the book divides naturally i

  17. Sensitivity and uncertainty analysis

    Cacuci, Dan G; Navon, Ionel Michael

    2005-01-01

    As computer-assisted modeling and analysis of physical processes have continued to grow and diversify, sensitivity and uncertainty analyses have become indispensable scientific tools. Sensitivity and Uncertainty Analysis. Volume I: Theory focused on the mathematical underpinnings of two important methods for such analyses: the Adjoint Sensitivity Analysis Procedure and the Global Adjoint Sensitivity Analysis Procedure. This volume concentrates on the practical aspects of performing these analyses for large-scale systems. The applications addressed include two-phase flow problems, a radiative c

  18. Bridge the Gap

    Marselis, Randi

    2017-01-01

    This article focuses on photo projects organised for teenage refugees by the Society for Humanistic Photography (Berlin, Germany). These projects, named Bridge the Gap I (2015), and Bridge the Gap II (2016), were carried out in Berlin and brought together teenagers with refugee and German...

  19. Bridging a Cultural Gap

    Leviatan, Talma

    2008-01-01

    There has been a broad wave of change in tertiary calculus courses in the past decade. However, the much-needed change in tertiary pre-calculus programmes--aimed at bridging the gap between high-school mathematics and tertiary mathematics--is happening at a far slower pace. Following a discussion on the nature of the gap and the objectives of a…

  20. Understanding the Gender Gap.

    Goldin, Claudia

    1985-01-01

    Despite the great influx of women into the labor market, the gap between men's and women's wages has remained stable at 40 percent since 1950. Analysis of labor data suggests that this has occurred because women's educational attainment compared to men has declined. Recently, however, the wage gap has begun to narrow, and this will probably become…

  1. Bridging the Transition Gap

    2013-05-23

    period and provide recommendations to guide future research and policy development. DEFINING THE TRANSITIONAL SECURITY GAP: There have been... BRIDGING THE TRANSITION GAP, a monograph by MAJ J.D. Hansen, United States Army School of Advanced Military Studies, United States Army.

  2. Uncertainty Analyses and Strategy

    Kevin Coppersmith

    2001-01-01

    The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called "conservative" assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the "reasonable assurance" approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository

  3. SPARK: Sparsity-based analysis of reliable k-hubness and overlapping network structure in brain functional connectivity.

    Lee, Kangjoo; Lina, Jean-Marc; Gotman, Jean; Grova, Christophe

    2016-07-01

    Functional hubs are defined as the specific brain regions with dense connections to other regions in a functional brain network. Among them, connector hubs are of great interests, as they are assumed to promote global and hierarchical communications between functionally specialized networks. Damage to connector hubs may have a more crucial effect on the system than does damage to other hubs. Hubs in graph theory are often identified from a correlation matrix, and classified as connector hubs when the hubs are more connected to regions in other networks than within the networks to which they belong. However, the identification of hubs from functional data is more complex than that from structural data, notably because of the inherent problem of multicollinearity between temporal dynamics within a functional network. In this context, we developed and validated a method to reliably identify connectors and corresponding overlapping network structure from resting-state fMRI. This new method is actually handling the multicollinearity issue, since it does not rely on counting the number of connections from a thresholded correlation matrix. The novelty of the proposed method is that besides counting the number of networks involved in each voxel, it allows us to identify which networks are actually involved in each voxel, using a data-driven sparse general linear model in order to identify brain regions involved in more than one network. Moreover, we added a bootstrap resampling strategy to assess statistically the reproducibility of our results at the single subject level. The unified framework is called SPARK, i.e. SParsity-based Analysis of Reliable k-hubness, where k-hubness denotes the number of networks overlapping in each voxel. The accuracy and robustness of SPARK were evaluated using two dimensional box simulations and realistic simulations that examined detection of artificial hubs generated on real data. Then, test/retest reliability of the method was assessed

  4. Uncertainties in repository modeling

    Wilson, J.R.

    1996-12-31

    The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend their regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.

  5. Uncertainties in repository modeling

    Wilson, J.R.

    1996-01-01

    The distant future is very difficult to predict. Unfortunately, our regulators are being encouraged to extend their regulatory period from the standard 10,000 years to 1 million years. Such overconfidence is not justified due to uncertainties in dating, calibration, and modeling.

  6. Risks, uncertainty, vagueness

    Haefele, W.; Renn, O.; Erdmann, G.

    1990-01-01

    The notion of 'risk' is discussed in its social and technological contexts, leading to an investigation of the terms factuality, hypotheticality, uncertainty, and vagueness, and to the problems of acceptance and acceptability especially in the context of political decision finding.

  7. Evaluation of Gap Conductance Approach for Mid-Burnup Fuel LOCA Analysis

    Lee, Joosuk; Woo, Swengwoong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2013-10-15

    In this study, therefore, the applicability of the gap conductance approach to mid-burnup fuel in LOCA analysis was estimated by comparing PCT distributions. In the overall-uncertainty method, the fuel rod uncertainty is taken into account by combining all of the fuel rod uncertainty parameters through a simple random sampling (SRS) technique. There are many fuel rod uncertainty parameters that can change the PCT during LOCA analysis, and these have already been identified in the authors' previous work. However, for 'best-estimate' LOCA safety analysis, a methodology has been developed that does not use all of the uncertainty parameters together but instead uses the gap conductance uncertainty alone to simulate the overall fuel rod uncertainty, because it can represent many uncertainty parameters. Based on this approach, the uncertainty range of gap conductance was prescribed as 0.67∼1.5 in the audit calculation methodology for LBLOCA analysis. This uncertainty was derived from experimental data for fresh or low-burnup fuel. Meanwhile, recent research indicates that the currently used uncertainty range does not seem wide enough to encompass the uncertainty of mid-burnup fuel; instead, it should be changed to 0.5∼2.4 for mid-burnup fuel (30 MWd/kgU)

  8. Evaluation of Gap Conductance Approach for Mid-Burnup Fuel LOCA Analysis

    Lee, Joosuk; Woo, Swengwoong

    2013-01-01

    In this study, therefore, the applicability of the gap conductance approach to mid-burnup fuel in LOCA analysis was estimated by comparing PCT distributions. In the overall-uncertainty method, the fuel rod uncertainty is taken into account by combining all of the fuel rod uncertainty parameters through a simple random sampling (SRS) technique. There are many fuel rod uncertainty parameters that can change the PCT during LOCA analysis, and these have already been identified in the authors' previous work. However, for 'best-estimate' LOCA safety analysis, a methodology has been developed that does not use all of the uncertainty parameters together but instead uses the gap conductance uncertainty alone to simulate the overall fuel rod uncertainty, because it can represent many uncertainty parameters. Based on this approach, the uncertainty range of gap conductance was prescribed as 0.67∼1.5 in the audit calculation methodology for LBLOCA analysis. This uncertainty was derived from experimental data for fresh or low-burnup fuel. Meanwhile, recent research indicates that the currently used uncertainty range does not seem wide enough to encompass the uncertainty of mid-burnup fuel; instead, it should be changed to 0.5∼2.4 for mid-burnup fuel (30 MWd/kgU)
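
    A minimal sketch of the simple random sampling (SRS) treatment described in the two records above: a gap-conductance multiplier is drawn from the prescribed range and would scale the nominal gap conductance in each code run. The uniform distribution, the 59-run sample size, and the burnup switch are illustrative assumptions.

```python
# SRS sampling of a gap-conductance multiplier for LOCA uncertainty runs (sketch).
import numpy as np

rng = np.random.default_rng(4)

def sample_gap_conductance_multipliers(n_runs, burnup_mwd_kgu):
    # 0.67-1.5 for fresh/low-burnup fuel, widened to 0.5-2.4 around 30 MWd/kgU
    low, high = (0.67, 1.5) if burnup_mwd_kgu < 30 else (0.5, 2.4)
    return rng.uniform(low, high, size=n_runs)

# 59 runs is a commonly used first-order 95/95 (Wilks) sample size.
multipliers = sample_gap_conductance_multipliers(n_runs=59, burnup_mwd_kgu=30)
print(multipliers.min(), multipliers.max())
# each multiplier would scale the nominal gap conductance in one code run, and the
# resulting PCT distribution is compared against the all-parameter SRS case
```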

  9. Strategy under uncertainty.

    Courtney, H; Kirkland, J; Viguerie, P

    1997-01-01

    At the heart of the traditional approach to strategy lies the assumption that by applying a set of powerful analytic tools, executives can predict the future of any business accurately enough to allow them to choose a clear strategic direction. But what happens when the environment is so uncertain that no amount of analysis will allow us to predict the future? What makes for a good strategy in highly uncertain business environments? The authors, consultants at McKinsey & Company, argue that uncertainty requires a new way of thinking about strategy. All too often, they say, executives take a binary view: either they underestimate uncertainty to come up with the forecasts required by their companies' planning or capital-budgeting processes, or they overestimate it, abandon all analysis, and go with their gut instinct. The authors outline a new approach that begins by making a crucial distinction among four discrete levels of uncertainty that any company might face. They then explain how a set of generic strategies--shaping the market, adapting to it, or reserving the right to play at a later time--can be used in each of the four levels. And they illustrate how these strategies can be implemented through a combination of three basic types of actions: big bets, options, and no-regrets moves. The framework can help managers determine which analytic tools can inform decision making under uncertainty--and which cannot. At a broader level, it offers executives a discipline for thinking rigorously and systematically about uncertainty and its implications for strategy.

  10. Sparsity Invariant CNNs

    Uhrig, Jonas; Schneider, Nick; Schneider, Lukas; Franke, Uwe; Brox, Thomas; Geiger, Andreas

    2017-01-01

    In this paper, we consider convolutional neural networks operating on sparse inputs with an application to depth upsampling from sparse laser scan data. First, we show that traditional convolutional networks perform poorly when applied to sparse data even when the location of missing data is provided to the network. To overcome this problem, we propose a simple yet effective sparse convolution layer which explicitly considers the location of missing data during the convolution operation. We d...

  11. Direction-of-arrival estimation for co-located multiple-input multiple-output radar using structural sparsity Bayesian learning

    Wen Fang-Qing; Zhang Gong; Ben De

    2015-01-01

    This paper addresses the direction of arrival (DOA) estimation problem for the co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, is capable of being applied to limited data support and low signal-to-noise ratio (SNR) scenes. Furthermore, the proposed algorithm has a lower computational load than the classical Bayesian algorithm. Simulation results show that the proposed algorithm achieves more accurate DOA estimation than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms. (paper)

  12. Direction-of-arrival estimation for co-located multiple-input multiple-output radar using structural sparsity Bayesian learning

    Wen, Fang-Qing; Zhang, Gong; Ben, De

    2015-11-01

    This paper addresses the direction of arrival (DOA) estimation problem for the co-located multiple-input multiple-output (MIMO) radar with random arrays. The spatially distributed sparsity of the targets in the background makes compressive sensing (CS) desirable for DOA estimation. A spatial CS framework is presented, which links the DOA estimation problem to support recovery from a known over-complete dictionary. A modified statistical model is developed to accurately represent the intra-block correlation of the received signal. A structural sparsity Bayesian learning algorithm is proposed for the sparse recovery problem. The proposed algorithm, which exploits intra-signal correlation, is capable of being applied to limited data support and low signal-to-noise ratio (SNR) scenes. Furthermore, the proposed algorithm has a lower computational load than the classical Bayesian algorithm. Simulation results show that the proposed algorithm achieves more accurate DOA estimation than the traditional multiple signal classification (MUSIC) algorithm and other CS recovery algorithms. Project supported by the National Natural Science Foundation of China (Grant Nos. 61071163, 61271327, and 61471191), the Funding for Outstanding Doctoral Dissertation in Nanjing University of Aeronautics and Astronautics, China (Grant No. BCXJ14-08), the Funding of Innovation Program for Graduate Education of Jiangsu Province, China (Grant No. KYLX 0277), the Fundamental Research Funds for the Central Universities, China (Grant No. 3082015NP2015504), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PADA), China.
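
    An illustrative spatial compressive-sensing setup for DOA estimation: an over-complete steering dictionary is built on an angle grid and a sparse support is recovered. Orthogonal matching pursuit stands in here for the structural sparsity Bayesian learning algorithm of the paper; the array geometry and signal parameters are arbitrary.

```python
# Sparse-recovery DOA estimation on an over-complete steering dictionary (sketch).
import numpy as np

rng = np.random.default_rng(5)

n_elems, wavelength, d = 16, 1.0, 0.5          # linear array, half-wavelength spacing
grid = np.deg2rad(np.arange(-90, 90.5, 0.5))   # DOA search grid
positions = d * np.arange(n_elems)
A = np.exp(2j * np.pi / wavelength * np.outer(positions, np.sin(grid)))  # dictionary

true_doas = np.deg2rad([-20.0, 15.0])
x_true = np.zeros(grid.size, complex)
for doa in true_doas:
    x_true[np.argmin(np.abs(grid - doa))] = 1.0
y = A @ x_true + 0.05 * (rng.standard_normal(n_elems) + 1j * rng.standard_normal(n_elems))

def omp(A, y, k):
    """Greedy support recovery: pick k atoms most correlated with the residual."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    return sorted(support)

print(np.rad2deg(grid[omp(A, y, k=2)]))        # estimated DOAs, near -20 and 15 degrees
```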

  13. 'Mind the Gap!'

    Persson, Karl Gunnar

    This paper challenges the widely held view that sharply falling real transport costs closed the transatlantic gap in grain prices in the second half of the 19th century. Several new results emerge from an analysis of a new data set of weekly wheat prices and freight costs from New York to UK...... markets. Firstly, there was a decline in the transatlantic price gap but it was not sharp and the gap remained substantial. Secondly, the fall in the transatlantic price differential had more to do with improved market and marketing efficiency than with falling transport costs. Thirdly, spurious price...

  14. Uncertainty during breast diagnostic evaluation: state of the science.

    Montgomery, Mariann

    2010-01-01

    To present the state of the science on uncertainty in relationship to the experiences of women undergoing diagnostic evaluation for suspected breast cancer. Published articles from Medline, CINAHL, PubMED, and PsycINFO from 1983-2008 using the following key words: breast biopsy, mammography, uncertainty, reframing, inner strength, and disruption. Fifty research studies were examined with all reporting the presence of anxiety persisting throughout the diagnostic evaluation until certitude is achieved through the establishment of a definitive diagnosis. Indirect determinants of uncertainty for women undergoing breast diagnostic evaluation include measures of anxiety, depression, social support, emotional responses, defense mechanisms, and the psychological impact of events. Understanding and influencing the uncertainty experience have been suggested to be key in relieving psychosocial distress and positively influencing future screening behaviors. Several studies examine correlational relationships among anxiety, selection of coping methods, and demographic factors that influence uncertainty. A gap exists in the literature with regard to the relationship of inner strength and uncertainty. Nurses can be invaluable in assisting women in coping with the uncertainty experience by providing positive communication and support. Nursing interventions should be designed and tested for their effects on uncertainty experienced by women undergoing a breast diagnostic evaluation.

  15. CIEEM Skills Gap Project

    Bartlett, Deborah

    2017-01-01

    This paper describes the research conducted for the Chartered Institute for Ecology and Environmental Management to identify skills gaps within the profession. It involved surveys of professionals, conference workshops and an investigation into the views of employers regarding graduate recruitment.

  16. Wide-Gap Chalcopyrites

    Siebentritt, Susanne

    2006-01-01

    Chalcopyrites, in particular those with a wide band gap, are fascinating materials in terms of their technological potential in the next generation of thin-film solar cells and in terms of their basic material properties. They exhibit uniquely low defect formation energies, leading to unusual doping and phase behavior and to extremely benign grain boundaries. This book collects articles on a number of those basic material properties of wide-gap chalcopyrites, comparing them to their low-gap cousins. They explore the doping of the materials, the electronic structure and the transport through interfaces and grain boundaries, the formation of the electric field in a solar cell, the mechanisms and suppression of recombination, the role of inhomogeneities, and the technological role of wide-gap chalcopyrites.

  17. Uncertainty in adaptive capacity

    Neil Adger, W.; Vincent, K.

    2005-01-01

    The capacity to adapt is a critical element of the process of adaptation: it is the vector of resources that represent the asset base from which adaptation actions can be made. Adaptive capacity can in theory be identified and measured at various scales, from the individual to the nation. The assessment of uncertainty within such measures comes from the contested knowledge domain and theories surrounding the nature of the determinants of adaptive capacity and the human action of adaptation. While generic adaptive capacity at the national level, for example, is often postulated as being dependent on health, governance and political rights, and literacy, and economic well-being, the determinants of these variables at national levels are not widely understood. We outline the nature of this uncertainty for the major elements of adaptive capacity and illustrate these issues with the example of a social vulnerability index for countries in Africa. (authors)

  18. Uncertainties about climate

    Laval, Katia; Laval, Guy

    2013-01-01

    Like meteorology, climatology is not an exact science: climate change forecasts necessarily include a share of uncertainty. It is precisely this uncertainty which is brandished and exploited by opponents of the global warming theory to call into question the estimates of its future consequences. Is it legitimate to predict the future using past climate data (well documented up to 100,000 years BP) or the climates of other planets, taking into account the imprecision of the measurements and the intrinsic complexity of the Earth's machinery? How is it possible to model such a huge and interwoven system, for which any exact description has become impossible? Why do water and precipitation play such an important role in local and global forecasts, and how should they be treated? This book, written by two physicists, answers these delicate questions simply, in order to give anyone the possibility of forming their own opinion about global warming and the need to act rapidly

  19. Gender-Pay-Gap

    Eicker, Jannis

    2017-01-01

    The gender pay gap is a statistical indicator for measuring earnings inequality between men* and women*. There are two versions: an "unadjusted" and an "adjusted" one. The "unadjusted" gender pay gap computes the gender-specific earnings difference on the basis of the gross hourly wages of all men* and women* in the population. For the "adjusted" value, by contrast, various factors such as sector, position, and work experience are, depending on the study, factored out...

  20. The Gender Pay Gap

    Alan Manning

    2006-01-01

    Empirical research on gender pay gaps has traditionally focused on the role of gender-specific factors, particularly gender differences in qualifications and differences in the treatment of otherwise equally qualified male and female workers (i.e., labor market discrimination). This paper explores the determinants of the gender pay gap and argues for the importance of an additional factor, wage structure, the array of prices set for labor market skills and the rewards received for employment ...

  1. The uncertainty principle

    Martens, Hans.

    1991-01-01

    The subject of this thesis is the uncertainty principle (UP). The UP is one of the most characteristic points of difference between quantum and classical mechanics. The starting point of this thesis is the work of Niels Bohr, which is both discussed and analyzed. For the discussion of the different aspects of the UP, the formalism of Davies and Ludwig is used instead of the more commonly used formalism of von Neumann and Dirac. (author). 214 refs.; 23 figs

  2. Uncertainty in artificial intelligence

    Shachter, RD; Henrion, M; Lemmer, JF

    1990-01-01

    This volume, like its predecessors, reflects the cutting edge of research on the automation of reasoning under uncertainty. A more pragmatic emphasis is evident, for although some papers address fundamental issues, the majority address practical issues. Topics include the relations between alternative formalisms (including possibilistic reasoning), Dempster-Shafer belief functions, non-monotonic reasoning, Bayesian and decision theoretic schemes, and new inference techniques for belief nets. New techniques are applied to important problems in medicine, vision, robotics, and natural language und

  3. Decision Making Under Uncertainty

    2010-11-01

    A sound approach to rational decision making requires a decision maker to establish decision objectives, identify alternatives, and evaluate those...often violate the axioms of rationality when making decisions under uncertainty. The systematic description of such observations may lead to the...which leads to “anchoring” on the initial value. The fact that individuals have been shown to deviate from rationality when making decisions

  4. Economic uncertainty principle?

    Alexander Harin

    2006-01-01

    The economic principle of (hidden) uncertainty is presented. New probability formulas are offered. Examples of solutions of three types of fundamental problems are reviewed.

  5. Citizen Candidates Under Uncertainty

    Eguia, Jon X.

    2005-01-01

    In this paper we make two contributions to the growing literature on "citizen-candidate" models of representative democracy. First, we add uncertainty about the total vote count. We show that in a society with a large electorate, where the outcome of the election is uncertain and where winning candidates receive a large reward from holding office, there will be a two-candidate equilibrium and no equilibria with a single candidate. Second, we introduce a new concept of equilibrium, which we te...

  6. Calibration Under Uncertainty.

    Swiler, Laura Painton; Trucano, Timothy Guy

    2005-03-01

    This report is a white paper summarizing the literature and different approaches to the problem of calibrating computer model parameters in the face of model uncertainty. Model calibration is often formulated as finding the parameters that minimize the squared difference between the model-computed data (the predicted data) and the actual experimental data. This approach does not allow for explicit treatment of uncertainty or error in the model itself: the model is considered the "true" deterministic representation of reality. While this approach does have utility, it is far from an accurate mathematical treatment of the true model calibration problem in which both the computed data and experimental data have error bars. This year, we examined methods to perform calibration accounting for the error in both the computer model and the data, as well as improving our understanding of its meaning for model predictability. We call this approach Calibration under Uncertainty (CUU). This talk presents our current thinking on CUU. We outline some current approaches in the literature, and discuss the Bayesian approach to CUU in detail.
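
    A sketch contrasting plain least-squares calibration with a calibration that down-weights data where the model itself is less trustworthy, one simple reading of the report's point. The model, data, and error magnitudes are synthetic placeholders, not the report's applications.

```python
# Plain least-squares calibration vs. a simple error-aware weighting (sketch).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)

def model(theta, t):
    # hypothetical computer model with two calibration parameters
    return theta[0] * np.exp(-theta[1] * t)

t = np.linspace(0.0, 5.0, 20)
y_exp = model([2.0, 0.7], t) + rng.normal(0.0, 0.05, t.size)  # synthetic experiment
sigma_exp = 0.05                    # experimental standard error (assumed known)
sigma_model = 0.02 * (1.0 + t)      # assumed model-form error, growing with t

# Classical calibration: minimize the squared mismatch, model treated as exact.
fit_plain = least_squares(lambda th: model(th, t) - y_exp, x0=[1.0, 1.0])

# Calibration under uncertainty (simplified): weight each residual by the combined
# standard deviation of experimental and model error, so points where the model
# itself is less trustworthy count for less.
w = np.sqrt(sigma_exp**2 + sigma_model**2)
fit_cuu = least_squares(lambda th: (model(th, t) - y_exp) / w, x0=[1.0, 1.0])

print("plain calibration:", fit_plain.x)
print("calibration under uncertainty:", fit_cuu.x)
```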

  7. Participation under Uncertainty

    Boudourides, Moses A.

    2003-01-01

    This essay reviews a number of theoretical perspectives about uncertainty and participation in the present-day knowledge-based society. After discussing the on-going reconfigurations of science, technology and society, we examine how appropriate for policy studies are various theories of social complexity. Post-normal science is such an example of a complexity-motivated approach, which justifies civic participation as a policy response to an increasing uncertainty. But there are different categories and models of uncertainties implying a variety of configurations of policy processes. A particular role in all of them is played by expertise whose democratization is an often-claimed imperative nowadays. Moreover, we discuss how different participatory arrangements are shaped into instruments of policy-making and framing regulatory processes. As participation necessitates and triggers deliberation, we proceed to examine the role and the barriers of deliberativeness. Finally, we conclude by referring to some critical views about the ultimate assumptions of recent European policy frameworks and the conceptions of civic participation and politicization that they invoke

  8. Uncertainty analysis techniques

    Marivoet, J.; Saltelli, A.; Cadelli, N.

    1987-01-01

    The origin of the uncertainties affecting Performance Assessments, as well as their propagation to dose and risk results, is discussed. The analysis focuses essentially on the uncertainties introduced by the input parameters, the values of which may range over several orders of magnitude and may be given as probability distribution functions. The paper briefly reviews the existing sampling techniques used for Monte Carlo simulations and the methods for characterizing the output curves and determining their convergence and confidence limits. Annual doses, expectation values of the doses, and risks are computed for a particular case of a possible repository in clay, in order to illustrate the significance of such output characteristics as the mean, the logarithmic mean, and the median, as well as their ratios. The report concludes that, provisionally and owing to its better robustness, an estimate such as the 90th percentile may be substituted for the arithmetic mean when comparing the estimated doses with acceptance criteria. In any case, the results obtained through uncertainty analyses must be interpreted with caution as long as input data distribution functions are not derived from experiments reasonably reproducing the situation in a well characterized repository and site
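
    A tiny sketch of the output characterization discussed above: sample uncertain inputs, run a placeholder dose model, and compare the arithmetic mean with the median and the 90th percentile. Distributions and units are illustrative.

```python
# Monte Carlo output characterization: mean vs. median vs. 90th percentile (sketch).
import numpy as np

rng = np.random.default_rng(7)

n = 10000
k = rng.lognormal(mean=-18.0, sigma=1.0, size=n)   # hypothetical transfer coefficient
q = rng.uniform(1e3, 1e5, size=n)                  # hypothetical source term
dose = k * q                                       # placeholder dose model (arbitrary units)

print("mean            :", dose.mean())
print("median          :", np.median(dose))
print("90th percentile :", np.percentile(dose, 90))
# for skewed outputs the mean can sit well above the median, which is why the report
# considers a percentile a more robust summary for comparison with acceptance criteria
```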

  9. Deterministic uncertainty analysis

    Worley, B.A.

    1987-12-01

    This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well-known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
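
    A sketch of derivative-based (first-order, delta-method) uncertainty propagation compared with a Monte Carlo reference, using a simplified Darcy-type flow expression as a stand-in for the report's borehole model. Finite differences stand in for the direct/adjoint sensitivities; parameter values and distributions are illustrative.

```python
# Derivative-based uncertainty propagation vs. Monte Carlo for a simple flow model.
import numpy as np

rng = np.random.default_rng(8)

def flow(k, dh, mu):
    """Simplified flow rate: permeability k, head difference dh, viscosity mu."""
    return k * dh / mu

# Nominal values and standard deviations of the independent inputs (hypothetical).
nominal = np.array([1.0e-12, 50.0, 1.0e-3])
sigma   = np.array([2.0e-13, 5.0, 1.0e-4])

# First-order (delta method) propagation with central finite-difference derivatives.
grad = np.empty(3)
for i in range(3):
    h = 1e-6 * nominal[i]
    up, dn = nominal.copy(), nominal.copy()
    up[i] += h
    dn[i] -= h
    grad[i] = (flow(*up) - flow(*dn)) / (2 * h)
var_delta = np.sum((grad * sigma) ** 2)

# Reference Monte Carlo estimate with many model runs.
samples = rng.normal(nominal, sigma, size=(100000, 3))
var_mc = flow(samples[:, 0], samples[:, 1], samples[:, 2]).var()

print("delta-method std :", np.sqrt(var_delta))
print("Monte Carlo std  :", np.sqrt(var_mc))
```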

  10. Gap length distributions by PEPR

    Warszawer, T.N.

    1980-01-01

    Conditions guaranteeing exponential gap length distributions are formulated and discussed. Exponential gap length distributions of bubble chamber tracks first obtained on a CRT device are presented. Distributions of resulting average gap lengths and their velocity dependence are discussed. (orig.)

  11. Methodologies of Uncertainty Propagation Calculation

    Chojnacki, Eric

    2002-01-01

    After recalling the theoretical principles and the practical difficulties of uncertainty propagation methodologies, the author discussed how to propagate input uncertainties. He distinguished two kinds of input uncertainty: variability (uncertainty due to heterogeneity) and lack of knowledge (uncertainty due to ignorance). It was therefore necessary to use two different propagation methods. He demonstrated this in a simple example, which he then generalised, treating the variability uncertainty with probability theory and the lack-of-knowledge uncertainty with fuzzy theory. He cautioned, however, against the systematic use of probability theory, which may lead to unjustifiably and illegitimately precise answers. Mr Chojnacki's conclusions were that the importance of distinguishing variability from lack of knowledge increases as the problem becomes more complex in terms of the number of parameters or time steps, and that it is necessary to develop uncertainty propagation methodologies combining probability theory and fuzzy theory
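
    A sketch of the distinction drawn above: variability propagated probabilistically by Monte Carlo, lack of knowledge propagated as an interval, which is used here as a simple stand-in for a fuzzy/possibilistic treatment. The model and numbers are illustrative.

```python
# Mixed propagation: Monte Carlo for variability, interval bounds for ignorance (sketch).
import numpy as np

rng = np.random.default_rng(9)

def response(a, b):
    return a * np.exp(b)          # placeholder model

# 'a' varies from item to item (variability): treat it probabilistically.
a_samples = rng.normal(2.0, 0.2, size=50000)

# 'b' is poorly known (lack of knowledge): only bounds are defensible.
b_lo, b_hi = 0.1, 0.4

# Propagate: for each probabilistic sample, carry the interval through the model
# (the response is monotone in b, so the interval endpoints are enough).
lo = response(a_samples, b_lo)
hi = response(a_samples, b_hi)

# Result: an interval-valued (lower/upper) distribution rather than a single CDF.
print("95th percentile, lower bound:", np.percentile(lo, 95))
print("95th percentile, upper bound:", np.percentile(hi, 95))
```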

  12. LOFT uncertainty-analysis methodology

    Lassahn, G.D.

    1983-01-01

    The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear-reactor-safety research program is described and compared with other methodologies established for performing uncertainty analyses

  13. LOFT uncertainty-analysis methodology

    Lassahn, G.D.

    1983-01-01

    The methodology used for uncertainty analyses of measurements in the Loss-of-Fluid Test (LOFT) nuclear reactor safety research program is described and compared with other methodologies established for performing uncertainty analyses

  14. Do Orthopaedic Surgeons Acknowledge Uncertainty?

    Teunis, Teun; Janssen, Stein; Guitton, Thierry G.; Ring, David; Parisien, Robert

    2016-01-01

    Much of the decision-making in orthopaedics rests on uncertain evidence. Uncertainty is therefore part of our normal daily practice, and yet physician uncertainty regarding treatment could diminish patients' health. It is not known if physician uncertainty is a function of the evidence alone or if

  15. SRTC - Gap Analysis Table

    M.L. Johnson

    2005-01-01

    The purpose of this document is to review the existing SRTC design against the "Nuclear Safety Design Bases for License Application" (NSDB) [Ref. 10] requirements and to identify codes and standards and supplemental requirements to meet these requirements. If these codes and standards and supplemental requirements cannot fully meet these safety requirements, then a "gap" is identified. These gaps will be identified here and addressed using the "Site Rail Transfer Cart (SRTC) Design Development Plan" [Ref. 14]. The codes and standards, supplemental requirements, and design development requirements are provided in the SRTC and associated rails gap analysis table in Appendix A. Because SRTCs are credited with performing functions important to safety (ITS) in the NSDB [Ref. 10], design basis requirements are applicable to ensure equipment is available and performs required safety functions when needed. The gap analysis table is used to identify design objectives and provide a means to satisfy safety requirements. To ensure that the SRTC and rail design perform required safety functions and meet performance criteria, this portion of the gap analysis table supplies codes and standards sections and the supplemental requirements and identifies design development requirements, if needed

  16. Investment and uncertainty

    Greasley, David; Madsen, Jakob B.

    2006-01-01

    A severe collapse of fixed capital formation distinguished the onset of the Great Depression from other investment downturns between the world wars. Using a model estimated for the years 1890-2000, we show that the expected profitability of capital measured by Tobin's q, and the uncertainty...... surrounding expected profits indicated by share price volatility, were the chief influences on investment levels, and that heightened share price volatility played the dominant role in the crucial investment collapse in 1930. Investment did not simply follow the downward course of income at the onset...

  17. Optimization under Uncertainty

    Lopez, Rafael H.

    2016-01-06

    The goal of this poster is to present the main approaches to the optimization of engineering systems in the presence of uncertainties. We begin with an overview of robust optimization. Next, we detail how to deal with probabilistic constraints in optimization, the so-called reliability-based design. Subsequently, we present the risk optimization approach, which includes the expected costs of failure in the objective function. After the basic description of each approach is given, the projects developed by CORE are presented. Finally, the main current topic of research at CORE is described.

  18. Optimizing production under uncertainty

    Rasmussen, Svend

    This Working Paper derives criteria for optimal production under uncertainty based on the state-contingent approach (Chambers and Quiggin, 2000), and discusses potential problems involved in applying the state-contingent approach in a normative context. The analytical approach uses the concept...... of state-contingent production functions and a definition of inputs including both sort of input, activity and allocation technology. It also analyses production decisions where production is combined with trading in state-contingent claims such as insurance contracts. The final part discusses

  19. Commonplaces and social uncertainty

    Lassen, Inger

    2008-01-01

    This article explores the concept of uncertainty in four focus group discussions about genetically modified food. In the discussions, members of the general public interact with food biotechnology scientists while negotiating their attitudes towards genetic engineering. Their discussions offer an example of risk discourse in which the use of commonplaces seems to be a central feature (Myers 2004: 81). My analyses support earlier findings that commonplaces serve important interactional purposes (Barton 1999) and that they are used for mitigating disagreement, for closing topics and for facilitating...

  20. Principles of Uncertainty

    Kadane, Joseph B

    2011-01-01

    An intuitive and mathematical introduction to subjective probability and Bayesian statistics. An accessible, comprehensive guide to the theory of Bayesian statistics, Principles of Uncertainty presents the subjective Bayesian approach, which has played a pivotal role in game theory, economics, and the recent boom in Markov Chain Monte Carlo methods. Both rigorous and friendly, the book contains: Introductory chapters examining each new concept or assumption Just-in-time mathematics -- the presentation of ideas just before they are applied Summary and exercises at the end of each chapter Discus

  1. Mathematical Analysis of Uncertainty

    Angel GARRIDO

    2016-01-01

    Classical Logic showed early its insufficiencies for solving AI problems. The introduction of Fuzzy Logic aims at this problem. There has been research in the conventional Rough direction alone or in the Fuzzy direction alone, and more recently, attempts to combine both into Fuzzy Rough Sets or Rough Fuzzy Sets. We analyse some new and powerful tools in the study of Uncertainty, such as Probabilistic Graphical Models, Chain Graphs, Bayesian Networks, and Markov Networks, integrating our knowledge of graphs and probability.

  2. Robustness of risk maps and survey networks to knowledge gaps about a new invasive pest

    Denys Yemshanov; Frank H. Koch; Yakov Ben-Haim; William D. Smith

    2010-01-01

    In pest risk assessment it is frequently necessary to make management decisions regarding emerging threats under severe uncertainty. Although risk maps provide useful decision support for invasive alien species, they rarely address knowledge gaps associated with the underlying risk model or how they may change the risk estimates. Failure to recognize uncertainty leads...

  3. Summary from the epistemic uncertainty workshop: consensus amid diversity

    Ferson, Scott; Joslyn, Cliff A.; Helton, Jon C.; Oberkampf, William L.; Sentz, Kari

    2004-01-01

    The 'Epistemic Uncertainty Workshop' sponsored by Sandia National Laboratories was held in Albuquerque, New Mexico, on 6-7 August 2002. The workshop was organized around a set of Challenge Problems involving both epistemic and aleatory uncertainty that the workshop participants were invited to solve and discuss. This concluding article in a special issue of Reliability Engineering and System Safety based on the workshop discusses the intent of the Challenge Problems, summarizes some discussions from the workshop, and provides a technical comparison among the papers in this special issue. The Challenge Problems were computationally simple models that were intended as vehicles for the illustration and comparison of conceptual and numerical techniques for use in analyses that involve: (i) epistemic uncertainty, (ii) aggregation of multiple characterizations of epistemic uncertainty, (iii) combination of epistemic and aleatory uncertainty, and (iv) models with repeated parameters. There was considerable diversity of opinion at the workshop about both methods and fundamental issues, and yet substantial consensus about what the answers to the problems were, and even about how each of the four issues should be addressed. Among the technical approaches advanced were probability theory, Dempster-Shafer evidence theory, random sets, sets of probability measures, imprecise coherent probabilities, coherent lower previsions, probability boxes, possibility theory, fuzzy sets, joint distribution tableaux, polynomial chaos expansions, and info-gap models. Although some participants maintained that a purely probabilistic approach is fully capable of accounting for all forms of uncertainty, most agreed that the treatment of epistemic uncertainty introduces important considerations and that the issues underlying the Challenge Problems are legitimate and significant. Topics identified as meriting additional research include elicitation of uncertainty representations, aggregation of

  4. The longevity gender gap

    Aviv, Abraham; Shay, Jerry; Christensen, Kaare

    2005-01-01

    In this Perspective, we focus on the greater longevity of women as compared with men. We propose that, like aging itself, the longevity gender gap is exceedingly complex and argue that it may arise from sex-related hormonal differences and from somatic cell selection that favors cells more resistant to the ravages of time. We discuss the interplay of these factors with telomere biology and oxidative stress and suggest that an explanation for the longevity gender gap may arise from a better understanding of the differences in telomere dynamics between men and women.

  5. Investment, regulation, and uncertainty

    Smyth, Stuart J; McDonald, Jillian; Falck-Zepeda, Jose

    2014-01-01

    As with any technological innovation, time refines the technology, improving upon the original version of the innovative product. The initial GM crops had single traits for either herbicide tolerance or insect resistance. Current varieties have both of these traits stacked together and in many cases other abiotic and biotic traits have also been stacked. This innovation requires investment. While this is relatively straightforward, certain conditions need to exist such that investments can be facilitated. The principal requirement for investment is that regulatory frameworks render consistent and timely decisions. If the certainty of regulatory outcomes weakens, the potential for changes in investment patterns increases. This article provides a summary background to the leading plant breeding technologies that are either currently being used to develop new crop varieties or are in the pipeline to be applied to plant breeding within the next few years. Challenges for existing regulatory systems are highlighted. Utilizing an option value approach from the investment literature, an assessment of uncertainty regarding the regulatory approval for these varying techniques is undertaken. This research highlights which technology development options have the greatest degree of uncertainty and hence, which ones might be expected to see an investment decline. PMID:24499745

  6. Probabilistic Mass Growth Uncertainties

    Plumer, Eric; Elliott, Darren

    2013-01-01

    Mass has been widely used as a variable input parameter for Cost Estimating Relationships (CER) for space systems. As these space systems progress from early concept studies and drawing boards to the launch pad, their masses tend to grow substantially, hence adversely affecting a primary input to most modeling CERs. Modeling and predicting mass uncertainty, based on historical and analogous data, is therefore critical and is an integral part of modeling cost risk. This paper presents the results of an ongoing NASA effort to publish mass growth datasheets for adjusting single-point Technical Baseline Estimates (TBE) of masses of space instruments as well as spacecraft, for both earth orbiting and deep space missions at various stages of a project's lifecycle. This paper also discusses the long-term strategy of NASA Headquarters in publishing similar results, using a variety of cost driving metrics, on an annual basis. This paper provides quantitative results that show decreasing mass growth uncertainties as mass estimate maturity increases. The analysis is based on historical data obtained from the NASA Cost Analysis Data Requirements (CADRe) database.

  7. Embracing uncertainty in applied ecology.

    Milner-Gulland, E J; Shea, K

    2017-12-01

    Applied ecologists often face uncertainty that hinders effective decision-making. Common traps that may catch the unwary are: ignoring uncertainty, acknowledging uncertainty but ploughing on, focussing on trivial uncertainties, believing your models, and unclear objectives. We integrate research insights and examples from a wide range of applied ecological fields to illustrate advances that are generally underused, but could facilitate ecologists' ability to plan and execute research to support management. Recommended approaches to avoid uncertainty traps are: embracing models, using decision theory, using models more effectively, thinking experimentally, and being realistic about uncertainty. Synthesis and applications: Applied ecologists can become more effective at informing management by using approaches that explicitly take account of uncertainty.

  8. Oil price uncertainty in Canada

    Elder, John [Department of Finance and Real Estate, 1272 Campus Delivery, Colorado State University, Fort Collins, CO 80523 (United States); Serletis, Apostolos [Department of Economics, University of Calgary, Calgary, Alberta (Canada)

    2009-11-15

    Bernanke [Bernanke, Ben S. Irreversibility, uncertainty, and cyclical investment. Quarterly Journal of Economics 98 (1983), 85-106.] shows how uncertainty about energy prices may induce optimizing firms to postpone investment decisions, thereby leading to a decline in aggregate output. Elder and Serletis [Elder, John and Serletis, Apostolos. Oil price uncertainty.] find empirical evidence that uncertainty about oil prices has tended to depress investment in the United States. In this paper we assess the robustness of these results by investigating the effects of oil price uncertainty in Canada. Our results are remarkably similar to existing results for the United States, providing additional evidence that uncertainty about oil prices may provide another explanation for why the sharp oil price declines of 1985 failed to produce rapid output growth. Impulse-response analysis suggests that uncertainty about oil prices may tend to reinforce the negative response of output to positive oil shocks. (author)

  9. Quantification of margins and uncertainties: Alternative representations of epistemic uncertainty

    Helton, Jon C.; Johnson, Jay D.

    2011-01-01

    In 2001, the National Nuclear Security Administration of the U.S. Department of Energy in conjunction with the national security laboratories (i.e., Los Alamos National Laboratory, Lawrence Livermore National Laboratory and Sandia National Laboratories) initiated development of a process designated Quantification of Margins and Uncertainties (QMU) for the use of risk assessment methodologies in the certification of the reliability and safety of the nation's nuclear weapons stockpile. A previous presentation, 'Quantification of Margins and Uncertainties: Conceptual and Computational Basis,' describes the basic ideas that underlie QMU and illustrates these ideas with two notional examples that employ probability for the representation of aleatory and epistemic uncertainty. The current presentation introduces and illustrates the use of interval analysis, possibility theory and evidence theory as alternatives to the use of probability theory for the representation of epistemic uncertainty in QMU-type analyses. The following topics are considered: the mathematical structure of alternative representations of uncertainty, alternative representations of epistemic uncertainty in QMU analyses involving only epistemic uncertainty, and alternative representations of epistemic uncertainty in QMU analyses involving a separation of aleatory and epistemic uncertainty. Analyses involving interval analysis, possibility theory and evidence theory are illustrated with the same two notional examples used in the presentation indicated above to illustrate the use of probability to represent aleatory and epistemic uncertainty in QMU analyses.

  10. A study of the influence of forest gaps on fire–atmosphere interactions

    Michael T. Kiefer; Warren E. Heilman; Shiyuan Zhong; Joseph J. (Jay) Charney; Xindi (Randy) Bian

    2016-01-01

    Much uncertainty exists regarding the possible role that gaps in forest canopies play in modulating fire–atmosphere interactions in otherwise horizontally homogeneous forests. This study examines the influence of gaps in forest canopies on atmospheric perturbations induced by a low-intensity fire using the ARPS-CANOPY model, a version of the Advanced Regional...

  11. SU-E-I-93: Improved Imaging Quality for Multislice Helical CT Via Sparsity Regularized Iterative Image Reconstruction Method Based On Tensor Framelet

    Nam, H; Guo, M; Lee, K; Li, R; Xing, L; Gao, H

    2014-01-01

    Purpose: Inspired by compressive sensing, sparsity regularized iterative reconstruction methods have been extensively studied. However, their utility for multislice helical 4D CT for radiotherapy with respect to imaging quality, dose, and time has not been thoroughly addressed. As the beginning of such an investigation, this work carries out an initial comparison of reconstructed imaging quality between a sparsity regularized iterative method and analytic methods through static phantom studies using a state-of-the-art 128-channel multi-slice Siemens helical CT scanner. Methods: In our iterative method, tensor framelet (TF) is chosen as the regularization for its superior performance compared with total variation regularization in terms of reduced piecewise-constant artifacts and improved imaging quality, as demonstrated in our prior work. X-ray transforms and their adjoints are computed on-the-fly through a GPU implementation using our previously developed fast parallel algorithms with O(1) complexity per computing thread. For comparison, both FDK (approximate analytic method) and the Katsevich algorithm (exact analytic method) are used for multislice helical CT image reconstruction. Results: The phantom experimental data with different imaging doses were acquired using a state-of-the-art 128-channel multi-slice Siemens helical CT scanner. The reconstructed image quality was compared between the TF-based iterative method, FDK and the Katsevich algorithm with quantitative analysis characterizing signal-to-noise ratio, image contrast, and spatial resolution of high-contrast and low-contrast objects. Conclusion: The experimental results suggest that our tensor framelet regularized iterative reconstruction algorithm improves helical CT imaging quality over FDK and the Katsevich algorithm for the static experimental phantom studies performed.
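
    The general recipe behind such sparsity regularized reconstruction can be illustrated with a minimal sketch. The snippet below uses plain ISTA with an l1 penalty applied directly to the image vector and a small random matrix as a stand-in forward projector; the tensor framelet regularizer, the GPU-based helical X-ray transform, and all scanner parameters from the abstract are not reproduced, so every name and value in the code is an illustrative assumption.

```python
# Minimal sketch of sparsity-regularized iterative reconstruction (ISTA),
# standing in for the tensor-framelet method described in the abstract.
# The system matrix A, data b, and penalty weight are illustrative; a real
# helical-CT forward projector would replace A.
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_reconstruct(A, b, lam=0.1, n_iter=200):
    """Solve min_x 0.5*||A x - b||^2 + lam*||x||_1 by ISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy usage with a random "projector" and a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 120))
x_true = np.zeros(120)
x_true[rng.choice(120, 8, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_rec = ista_reconstruct(A, b)
```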

  12. Estimating Gender Wage Gaps

    McDonald, Judith A.; Thornton, Robert J.

    2011-01-01

    Course research projects that use easy-to-access real-world data and that generate findings with which undergraduate students can readily identify are hard to find. The authors describe a project that requires students to estimate the current female-male earnings gap for new college graduates. The project also enables students to see to what…

  13. The uncertainty in physical measurements an introduction to data analysis in the physics laboratory

    Fornasini, Paolo

    2008-01-01

    All measurements of physical quantities are affected by uncertainty. Understanding the origin of uncertainty, evaluating its extent and suitably taking it into account in data analysis is essential for assessing the degree of accuracy of phenomenological relationships and physical laws in both scientific research and technological applications. The Uncertainty in Physical Measurements: An Introduction to Data Analysis in the Physics Laboratory presents an introduction to uncertainty and to some of the most common procedures of data analysis. This book will serve the reader well by filling the gap between tutorial textbooks and highly specialized monographs. The book is divided into three parts. The first part is a phenomenological introduction to measurement and uncertainty: properties of instruments, different causes and corresponding expressions of uncertainty, histograms and distributions, and unified expression of uncertainty. The second part contains an introduction to probability theory, random variable...

  14. Determining the 'Gap'

    2009-05-01

    Army training doctrine, and by adjusting the curriculum of the officer corps in order to close the knowledge gap. The author closes by concluding...fight. The research to find these gaps begins with a process trace of doctrine from 1976 to the present, starting with the advent of Active Defense...discovering the one gap, three were found. Upon further examination below, even these initially perceived gaps dissipate under close scrutiny.

  15. Uncertainties of Molecular Structural Parameters

    Császár, Attila G.

    2014-01-01

    performed. Simply, there are significant disagreements between the same bond lengths measured by different techniques. These disagreements are, however, systematic and can be computed via techniques of quantum chemistry which deal not only with the motions of the electrons (electronic structure theory) but also with the often large amplitude motions of the nuclei. As to the relevant quantum chemical computations, since about 1970 electronic structure theory has become able to make quantitative predictions and thus challenge (or even overrule) many experiments. Nevertheless, quantitative agreement of quantum chemical results with experiment can only be expected when the motions of the atoms are also considered. In the fourth age of quantum chemistry we are living in an era where one can bridge quantitatively the gap between 'effective', experimental and 'equilibrium', computed structures at even elevated temperatures of interest, thus minimizing any real uncertainties of structural parameters. The connections mentioned are extremely important as they help to understand the true uncertainty of measured structural parameters. Traditionally it is microwave (MW) and millimeterwave (MMW) spectroscopy, as well as gas-phase electron diffraction (GED), which yielded the most accurate structural parameters of molecules. The accuracy of the MW and GED experiments approached about 0.001 Å and 0.1° under ideal circumstances; worse, sometimes considerably worse, in less than ideal and much more often encountered situations. Quantum chemistry can define both highly accurate equilibrium (so-called Born-Oppenheimer, r_e^BO, and semiexperimental, r_e^SE) structures and, via detailed investigation of molecular motions, accurate temperature-dependent rovibrationally averaged structures. Determining structures is still a rich field for research; understanding the measured or computed uncertainties of structures and structural parameters is still a challenge but there are firm and well

  16. Heisenberg's principle of uncertainty and the uncertainty relations

    Redei, Miklos

    1987-01-01

    The usual verbal form of the Heisenberg uncertainty principle and the usual mathematical formulation (the so-called uncertainty theorem) are not equivalent. The meaning of the concept 'uncertainty' is not unambiguous and different interpretations are used in the literature. Recently a renewed interest has appeared to reinterpret and reformulate the precise meaning of Heisenberg's principle and to find adequate mathematical form. The suggested new theorems are surveyed and critically analyzed. (D.Gy.) 20 refs
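
    For reference, the mathematical statement usually meant by the "uncertainty theorem" is the Kennard/Robertson inequality, stated here in standard notation (ours, not taken from the paper itself):

```latex
% Robertson inequality for observables A and B, and its position-momentum
% special case (Kennard form).
\Delta A \, \Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [A,B] \rangle\bigr|,
\qquad
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}.
```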

  17. Uncertainty as Certainty

    Petzinger, Tom

    I am trying to make money in the biotech industry from complexity science. And I am doing it with inspiration that I picked up on the edge of Appalachia spending time with June Holley and ACEnet when I was a Wall Street Journal reporter. I took some of those ideas to Pittsburgh, in biotechnology, in a completely private setting with an economic development focus, but also with a mission to return profit to private capital. And we are doing that. I submit as a hypothesis, something we are figuring out in the post-industrial era, that business evolves. It is not the definition of business, but business critically involves the design of systems in which uncertainty is treated as a certainty. That is what I have seen and what I have tried to put into practice.

  18. Orientation and uncertainties

    Peters, H.P.; Hennen, L.

    1990-01-01

    The authors report on the results of three representative surveys that made a closer inquiry into perceptions and valuations of information and information sources concerning Chernobyl. It turns out that the information sources are generally considered little trustworthy. This was generally attributable to the interpretation of the events being tied to attitudes on the atomic energy issue. The greatest credit was given to television broadcasting. The authors summarize their discourse as follows: there is good reason to interpret the widespread uncertainty after Chernobyl as proof of the fact that large parts of the population are prepared and willing to assume a critical stance towards information and prefer to draw their information from various sources representing different positions. (orig.) [de]

  19. DOD ELAP Lab Uncertainties

    2012-03-01

    ISO/IEC 17025; Inspection Bodies – ISO/IEC 17020; RMPs – ISO Guide 34 (Reference...certify to: ISO 9001 (QMS), ISO 14001 (EMS), TS 16949 (US Automotive), etc. DoD QSM 4.2 standard; ISO/IEC 17025:2005. Each has uncertainty...IPV6, NLLAP, NEFAP training programs; Certification Bodies – ISO/IEC 17021, accreditation for management systems.

  20. Traceability and Measurement Uncertainty

    Tosello, Guido; De Chiffre, Leonardo

    2004-01-01

    This report is made as a part of the project 'Metro-E-Learn: European e-Learning in Manufacturing Metrology', an EU project under the program SOCRATES MINERVA (ODL and ICT in Education), Contract No: 101434-CP-1-2002-1-DE-MINERVA, coordinated by Friedrich-Alexander-University Erlangen... The project partnership (composed of 7 partners in 5 countries, thus covering a real European spread in high-tech production technology) aims to develop and implement an advanced e-learning system that integrates contributions from quite different disciplines into a user-centred approach that strictly... 8. Machine tool testing; 9. The role of manufacturing metrology for QM; 10. Inspection planning; 11. Quality management of measurements incl. documentation; 12. Advanced manufacturing measurement technology. The present report represents section 2 – Traceability and Measurement Uncertainty – of the e-learning...

  1. Decision making under uncertainty

    Cyert, R.M.

    1989-01-01

    This paper reports on ways of improving the reliability of products and systems in this country if we are to survive as a first-rate industrial power. The use of statistical techniques has, since the 1920s, been viewed as one of the methods for testing quality and estimating the level of quality in a universe of output. Statistical quality control is not relevant, generally, to improving systems in an industry like yours, but certainly the use of probability concepts is of significance. In addition, when it is recognized that part of the problem involves making decisions under uncertainty, it becomes clear that techniques such as sequential decision making and Bayesian analysis become major methodological approaches that must be utilized.

  2. Sustainability and uncertainty

    Jensen, Karsten Klint

    2007-01-01

    The widely used concept of sustainability is seldom precisely defined, and its clarification involves making up one's mind about a range of difficult questions. One line of research (bottom-up) takes sustaining a system over time as its starting point and then infers prescriptions from this requirement. Another line (top-down) takes an economical interpretation of the Brundtland Commission's suggestion that the present generation's need-satisfaction should not compromise the need-satisfaction of future generations as its starting point. It then measures sustainability at the level of society... a clarified ethical goal, disagreements can arise. At present we do not know what substitutions will be possible in the future. This uncertainty clearly affects the prescriptions that follow from the measure of sustainability. Consequently, decisions about how to make future agriculture sustainable...

  3. An uncertainty inventory demonstration - a primary step in uncertainty quantification

    Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2009-01-01

    Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with words, 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.

  4. Modelling ecosystem service flows under uncertainty with stochastic SPAN

    Johnson, Gary W.; Snapp, Robert R.; Villa, Ferdinando; Bagstad, Kenneth J.

    2012-01-01

    Ecosystem service models are increasingly in demand for decision making. However, the data required to run these models are often patchy, missing, outdated, or untrustworthy. Further, communication of data and model uncertainty to decision makers is often either absent or unintuitive. In this work, we introduce a systematic approach to addressing both the data gap and the difficulty in communicating uncertainty through a stochastic adaptation of the Service Path Attribution Networks (SPAN) framework. The SPAN formalism assesses ecosystem services through a set of up to 16 maps, which characterize the services in a study area in terms of flow pathways between ecosystems and human beneficiaries. Although the SPAN algorithms were originally defined deterministically, we present them here in a stochastic framework which combines probabilistic input data with a stochastic transport model in order to generate probabilistic spatial outputs. This enables a novel feature among ecosystem service models: the ability to spatially visualize uncertainty in the model results. The stochastic SPAN model can analyze areas where data limitations are prohibitive for deterministic models. Greater uncertainty in the model inputs (including missing data) should lead to greater uncertainty expressed in the model’s output distributions. By using Bayesian belief networks to fill data gaps and expert-provided trust assignments to augment untrustworthy or outdated information, we can account for uncertainty in input data, producing a model that is still able to run and provide information where strictly deterministic models could not. Taken together, these attributes enable more robust and intuitive modelling of ecosystem services under uncertainty.
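
    As a rough illustration of the probabilistic-input idea described above, the sketch below draws input rasters from per-cell distributions, pushes each draw through a placeholder flow model, and summarises the per-cell spread of the outputs so that uncertainty can be mapped. The flow_model function, the normal input distributions and the grid size are hypothetical stand-ins; the actual SPAN algorithms and their set of output maps are not reproduced here.

```python
# Sketch of stochastic propagation of spatial input uncertainty: sample input
# rasters, run a (placeholder) flow model per draw, and map per-cell spread.
import numpy as np

def flow_model(supply, demand):
    # Hypothetical "service flow": supply is delivered only where demand exists.
    return np.minimum(supply, demand)

def stochastic_run(supply_mean, supply_sd, demand_mean, demand_sd,
                   n_draws=500, seed=1):
    rng = np.random.default_rng(seed)
    outputs = []
    for _ in range(n_draws):
        supply = rng.normal(supply_mean, supply_sd)   # probabilistic input raster
        demand = rng.normal(demand_mean, demand_sd)
        outputs.append(flow_model(supply, demand))
    outputs = np.stack(outputs)
    # Per-cell mean and standard deviation allow uncertainty to be visualised spatially.
    return outputs.mean(axis=0), outputs.std(axis=0)

mean_map, sd_map = stochastic_run(
    supply_mean=np.full((10, 10), 5.0), supply_sd=np.full((10, 10), 1.0),
    demand_mean=np.full((10, 10), 4.0), demand_sd=np.full((10, 10), 2.0))
```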

  5. Essays on model uncertainty in financial models

    Li, Jing

    2018-01-01

    This dissertation studies model uncertainty, particularly in financial models. It consists of two empirical chapters and one theoretical chapter. The first empirical chapter (Chapter 2) classifies model uncertainty into parameter uncertainty and misspecification uncertainty. It investigates the

  6. Interpolation in Time Series: An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment

    Mathieu Lepot

    2017-10-01

    A thorough review has been performed on interpolation methods to fill gaps in time series, efficiency criteria, and uncertainty quantification. On one hand, there are numerous available methods: interpolation, regression, autoregressive, machine learning methods, etc. On the other hand, there are many methods and criteria to estimate the efficiency of these methods, but uncertainties on the interpolated values are rarely calculated. Furthermore, even when such uncertainties are estimated according to standard methods, the prediction uncertainty is not taken into account: a discussion is thus presented on the uncertainty estimation of interpolated/extrapolated data. Finally, some suggestions for further research and a new method are proposed.
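
    As a small, concrete example of the review's theme, the sketch below fills a gap in a synthetic series by linear interpolation and attaches a crude uncertainty estimate derived from leave-one-out residuals on the observed points. Linear interpolation and this particular error proxy are only one of the many method/criterion combinations the review discusses; the data and function names are illustrative assumptions.

```python
# Fill a gap in a time series and attach a rough per-point uncertainty.
import numpy as np

def fill_gap_with_uncertainty(t, y):
    """Return the gap-filled series and a crude uncertainty estimate."""
    known = ~np.isnan(y)
    y_filled = np.interp(t, t[known], y[known])
    # Leave-one-out residuals on interior known points give a rough error scale.
    residuals = []
    idx = np.flatnonzero(known)
    for i in idx[1:-1]:
        mask = known.copy()
        mask[i] = False
        pred = np.interp(t[i], t[mask], y[mask])
        residuals.append(y[i] - pred)
    sigma = np.std(residuals) if residuals else np.nan
    uncertainty = np.where(known, 0.0, sigma)   # zero where observed, sigma in gaps
    return y_filled, uncertainty

t = np.arange(20, dtype=float)
y = np.sin(t / 3.0)
y[7:12] = np.nan                                # introduce a gap
y_filled, u = fill_gap_with_uncertainty(t, y)
```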

  7. A new uncertainty importance measure

    Borgonovo, E.

    2007-01-01

    Uncertainty in parameters is present in many risk assessment problems and leads to uncertainty in model predictions. In this work, we introduce a global sensitivity indicator which looks at the influence of input uncertainty on the entire output distribution without reference to a specific moment of the output (moment independence) and which can be defined also in the presence of correlations among the parameters. We discuss its mathematical properties and highlight the differences between the present indicator, variance-based uncertainty importance measures and a moment independent sensitivity indicator previously introduced in the literature. Numerical results are discussed with application to the probabilistic risk assessment model on which Iman [A matrix-based approach to uncertainty and sensitivity analysis for fault trees. Risk Anal 1987;7(1):22-33] first introduced uncertainty importance measures
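
    A moment-independent indicator of this kind can be approximated by straightforward Monte Carlo: compare the unconditional output density with densities conditional on slices of one input, and average the resulting distances over that input. The sketch below uses histogram density estimates and a hypothetical test model; it follows the spirit of such density-based measures rather than the exact estimator of the paper.

```python
# Rough Monte Carlo sketch of a density-based (moment-independent) importance
# measure: average L1 distance between unconditional and conditional output
# densities, conditioning approximated by quantile slices of one input.
import numpy as np

def delta_estimate(model, sample_x, i, n_slices=20, bins=40):
    y = model(sample_x)
    lo, hi = y.min(), y.max()
    f_y, edges = np.histogram(y, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    order = np.argsort(sample_x[:, i])           # slice the i-th input by quantiles
    shifts = []
    for chunk in np.array_split(order, n_slices):
        f_cond, _ = np.histogram(y[chunk], bins=bins, range=(lo, hi), density=True)
        shifts.append(0.5 * np.sum(np.abs(f_y - f_cond)) * width)  # half L1 distance
    return np.mean(shifts)

rng = np.random.default_rng(2)
X = rng.standard_normal((20000, 3))
model = lambda x: x[:, 0] + 0.5 * x[:, 1] ** 2   # hypothetical test model
print([round(delta_estimate(model, X, i), 3) for i in range(3)])
```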

  8. Uncertainty Management and Sensitivity Analysis

    Rosenbaum, Ralph K.; Georgiadis, Stylianos; Fantke, Peter

    2018-01-01

    Uncertainty is always there and LCA is no exception to that. The presence of uncertainties of different types and from numerous sources in LCA results is a fact, but managing them allows to quantify and improve the precision of a study and the robustness of its conclusions. LCA practice sometimes...... suffers from an imbalanced perception of uncertainties, justifying modelling choices and omissions. Identifying prevalent misconceptions around uncertainties in LCA is a central goal of this chapter, aiming to establish a positive approach focusing on the advantages of uncertainty management. The main...... objectives of this chapter are to learn how to deal with uncertainty in the context of LCA, how to quantify it, interpret and use it, and how to communicate it. The subject is approached more holistically than just focusing on relevant statistical methods or purely mathematical aspects. This chapter...

  9. Additivity of entropic uncertainty relations

    René Schwonnek

    2018-03-01

    We consider the uncertainty between two pairs of local projective measurements performed on a multipartite system. We show that the optimal bound in any linear uncertainty relation, formulated in terms of the Shannon entropy, is additive. This directly implies, against naive intuition, that the minimal entropic uncertainty can always be realized by fully separable states. Hence, in contradiction to proposals by other authors, no entanglement witness can be constructed solely by comparing the attainable uncertainties of entangled and separable states. However, our result gives rise to a huge simplification for computing global uncertainty bounds as they now can be deduced from local ones. Furthermore, we provide the natural generalization of the Maassen and Uffink inequality for linear uncertainty relations with arbitrary positive coefficients.
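
    For reference, the Maassen and Uffink inequality that the abstract generalizes can be stated as follows (a standard result; the notation is ours, not the paper's):

```latex
% Maassen-Uffink entropic uncertainty bound for two measurements with
% eigenbases {|a_j>} and {|b_k>}; c is the largest overlap between the bases.
H(A) + H(B) \;\ge\; -2 \log c,
\qquad
c \;=\; \max_{j,k}\, \bigl|\langle a_j | b_k \rangle\bigr| .
```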

  10. Decommissioning funding: ethics, implementation, uncertainties

    2006-01-01

    This status report on Decommissioning Funding: Ethics, Implementation, Uncertainties also draws on the experience of the NEA Working Party on Decommissioning and Dismantling (WPDD). The report offers, in a concise form, an overview of relevant considerations on decommissioning funding mechanisms with regard to ethics, implementation and uncertainties. Underlying ethical principles found in international agreements are identified, and factors influencing the accumulation and management of funds for decommissioning nuclear facilities are discussed together with the main sources of uncertainties of funding systems. (authors)

  11. Chemical model reduction under uncertainty

    Najm, Habib; Galassi, R. Malpica; Valorani, M.

    2016-01-01

    We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.

  12. Chemical model reduction under uncertainty

    Najm, Habib

    2016-01-05

    We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.

  13. The Uncertainty of Measurement Results

    Ambrus, A. [Hungarian Food Safety Office, Budapest (Hungary)

    2009-07-15

    Factors affecting the uncertainty of measurement are explained, basic statistical formulae given, and the theoretical concept explained in the context of pesticide formulation analysis. Practical guidance is provided on how to determine individual uncertainty components within an analytical procedure. An extended and comprehensive table containing the relevant mathematical/statistical expressions elucidates the relevant underlying principles. Appendix I provides a practical elaborated example on measurement uncertainty estimation, above all utilizing experimental repeatability and reproducibility laboratory data. (author)
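
    The basic relation underlying such component-by-component uncertainty budgets is the standard law of propagation of uncertainty, shown here for uncorrelated inputs; the notation follows common GUM usage rather than the table in the cited appendix.

```latex
% Combined standard uncertainty of a result y = f(x_1, ..., x_N) from the
% individual standard uncertainties u(x_i), and the expanded uncertainty U.
u_c^2(y) \;=\; \sum_{i=1}^{N} \left( \frac{\partial f}{\partial x_i} \right)^{\!2} u^2(x_i),
\qquad
U \;=\; k \, u_c(y) \quad (\text{coverage factor } k).
```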

  14. Uncertainty analysis of environmental models

    Monte, L.

    1990-01-01

    In the present paper an evaluation of the output uncertainty of an environmental model for assessing the transfer of 137Cs and 131I in the human food chain is carried out on the basis of a statistical analysis of data reported in the literature. The uncertainty analysis offers the opportunity to obtain some remarkable information about the uncertainty of models predicting the migration of non-radioactive substances in the environment, mainly in relation to dry and wet deposition.

  15. Mind the Gap!

    Schmidt, Kjeld; Simone, Carla

    2000-01-01

    CSCW at large seems to be pursuing two diverging strategies: on one hand a strategy aiming at coordination technologies that reduce the complexity of coordinating cooperative activities by regulating the coordinative interactions, and on the other hand a strategy that aims at radically flexible means of interaction which do not regulate interaction but rather leave it to the users to cope with the complexity of coordinating their activities. As both strategies reflect genuine requirements, we need to address the issue of how the gap can be bridged, that is, how the two strategies can be... blended in the course of real world cooperative activities. On the basis of this discussion the paper outlines an approach which may help CSCW research to bridge this gap.

  16. Closing the gap

    Moxon, Suzanne

    1999-01-01

    The problem of fish going through turbines at hydroelectric power plants and the growing concern over the survival rate of salmon at the US Army Corps operated Bonneville lock and dam on the Columbia river in the Pacific Northwest is discussed. The protection of the fish, the assessment of the hazards facing fish passing through turbines, the development of a new turbine, and improved turbine efficiency that reduces cavitation, turbulence and shear flow are examined. The closing of the gap between the turbine blades, hub and discharge ring to increase efficiency and reduce the risk to fish, and the development of the minimum gap runner (MGR) are described, and the lower maximum permitted power output of MGR is noted. (UK)

  17. Filling the Gap between IT Governance and IT Project Management

    Lundin, Jette

    2007-01-01

    The goal of this paper is to explore coordination mechanisms as part of a solution to fill the gap between IT governance and IT project management. The gap between IT governance and IT project management has not been fully explored in IT management research. Both in theory and in practice there is a gap between IT governance and IT project management. Theory on IT governance assumes that strategies are implemented through projects but does not go into detail on how to do it. Theories on project management do not include interaction with governance processes. A gap between IT governance and IT project management can result in IT that does not support business strategy and in a lack of flexibility and agility. Competitive, changing business environments combined with the uncertainty and unpredictability of IT implementation projects call for IT governance organisation and processes to sense...

  18. Uncertainty quantification in resonance absorption

    Williams, M.M.R.

    2012-01-01

    We assess the uncertainty in the resonance escape probability due to uncertainty in the neutron and radiation line widths for the first 21 resonances in 232Th as given by . Simulation, quadrature and polynomial chaos methods are used and the resonance data are assumed to obey a beta distribution. We find the uncertainty in the total resonance escape probability to be the equivalent, in reactivity, of 75–130 pcm. Also shown are pdfs of the resonance escape probability for each resonance and the variation of the uncertainty with temperature. The viability of the polynomial chaos expansion method is clearly demonstrated.

  19. Reliability analysis under epistemic uncertainty

    Nannapaneni, Saideep; Mahadevan, Sankaran

    2016-01-01

    This paper proposes a probabilistic framework to include both aleatory and epistemic uncertainty within model-based reliability estimation of engineering systems for individual limit states. Epistemic uncertainty is considered due to both data and model sources. Sparse point and/or interval data regarding the input random variables leads to uncertainty regarding their distribution types, distribution parameters, and correlations; this statistical uncertainty is included in the reliability analysis through a combination of likelihood-based representation, Bayesian hypothesis testing, and Bayesian model averaging techniques. Model errors, which include numerical solution errors and model form errors, are quantified through Gaussian process models and included in the reliability analysis. The probability integral transform is used to develop an auxiliary variable approach that facilitates a single-level representation of both aleatory and epistemic uncertainty. This strategy results in an efficient single-loop implementation of Monte Carlo simulation (MCS) and FORM/SORM techniques for reliability estimation under both aleatory and epistemic uncertainty. Two engineering examples are used to demonstrate the proposed methodology. - Highlights: • Epistemic uncertainty due to data and model included in reliability analysis. • A novel FORM-based approach proposed to include aleatory and epistemic uncertainty. • A single-loop Monte Carlo approach proposed to include both types of uncertainties. • Two engineering examples used for illustration.
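
    A stripped-down version of the single-loop idea can be sketched as follows: epistemic uncertainty (here an interval-valued distribution mean) and aleatory uncertainty are sampled in one Monte Carlo loop, with auxiliary uniform variables mapped through the sampled distributions via the probability integral transform. The limit state, the distributions and the interval bounds are hypothetical, and for simplicity the sketch averages over the epistemic samples rather than reporting a family of failure probabilities as a fuller treatment would.

```python
# Single-loop Monte Carlo sketch mixing epistemic (uncertain mean) and
# aleatory (random variable) uncertainty through auxiliary uniforms.
import numpy as np
from scipy import stats

def failure_probability(n=200_000, seed=3):
    rng = np.random.default_rng(seed)
    # Epistemic: the mean of X1 is only known to lie in an interval (assumed bounds).
    mu1 = rng.uniform(9.0, 11.0, size=n)
    # Aleatory: auxiliary uniforms mapped through the (sampled) distributions.
    u1, u2 = rng.uniform(size=(2, n))
    x1 = stats.norm.ppf(u1, loc=mu1, scale=1.0)
    x2 = stats.norm.ppf(u2, loc=5.0, scale=0.5)
    g = x1 - x2 - 3.0            # hypothetical limit state; failure when g < 0
    return np.mean(g < 0.0)

print(failure_probability())
```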

  20. Simplified propagation of standard uncertainties

    Shull, A.H.

    1997-01-01

    An essential part of any measurement control program is adequate knowledge of the uncertainties of the measurement system standards. Only with an estimate of the standards' uncertainties can one determine if the standard is adequate for its intended use or can one calculate the total uncertainty of the measurement process. Purchased standards usually have estimates of uncertainty on their certificates. However, when standards are prepared and characterized by a laboratory, variance propagation is required to estimate the uncertainty of the standard. Traditional variance propagation typically involves tedious use of partial derivatives, unfriendly software and the availability of statistical expertise. As a result, the uncertainty of prepared standards is often not determined or is determined incorrectly. For situations meeting stated assumptions, easier shortcut methods of estimation are now available which eliminate the need for partial derivatives and require only a spreadsheet or calculator. A system of simplifying the calculations by dividing them into subgroups of absolute and relative uncertainties is utilized. These methods also incorporate the International Standards Organization (ISO) concepts for combining systematic and random uncertainties as published in their Guide to the Expression of Measurement Uncertainty. Details of the simplified methods and examples of their use are included in the paper.
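
    The shortcut grouping into absolute and relative uncertainties rests on the familiar special cases of variance propagation for independent inputs, stated here in generic notation; the paper's own worked examples are not reproduced.

```latex
% For sums and differences, absolute variances add; for products and
% quotients, relative (fractional) variances add (independent inputs assumed).
y = x_1 \pm x_2 : \quad u^2(y) = u^2(x_1) + u^2(x_2),
\qquad
y = x_1 x_2 \ \text{or}\ x_1 / x_2 : \quad
\left( \frac{u(y)}{y} \right)^{2} =
\left( \frac{u(x_1)}{x_1} \right)^{2} + \left( \frac{u(x_2)}{x_2} \right)^{2}.
```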

  1. Minding the Gap

    Firestone, Millicent Anne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    Neutron & X-ray scattering provides nano- to meso-scale details of complex fluid structure; 1D electronic density maps derived from SAXS yield molecular level insights; Neutron reflectivity provides substructure details of substrate-supported complex fluids; Complex fluid composition can be optimized to support a wide variety of both soluble and membrane proteins; The water gap dimensions can be finely tuned through the polymer component.

  2. Gender gap in entrepreneurship

    Startienė, Gražina; Remeikienė, Rita

    2008-01-01

    The article considers a significant global issue - the gender gap in starting and developing one's own business. The field of business was for a long time reserved for men; thus, despite an increasing number of female entrepreneurs during the last decade, the number of female entrepreneurs in Europe, including Lithuania, remains lower than that of male entrepreneurs. According to the data of various statistical sources, the average ratio of enterprises newly established by men and women in EU countries is...

  3. Mind the Gap

    Fairbanks, Terry; Savage, Erica; Adams, Katie; Wittie, Michael; Boone, Edna; Hayden, Andrew; Barnes, Janey; Hettinger, Zach; Gettinger, Andrew

    2016-01-01

    Objective: Decisions made during electronic health record (EHR) implementations profoundly affect usability and safety. This study aims to identify gaps between the current literature and key stakeholders' perceptions of usability and safety practices and the challenges encountered during the implementation of EHRs. Materials and Methods: Two approaches were used: a literature review and interviews with key stakeholders. We performed a systematic review of the literature to identify usability and safety challenges and best practices during implementation. A total of 55 articles were reviewed through searches of PubMed, Web of Science and Scopus. We used a qualitative approach to identify key stakeholders' perceptions; semi-structured interviews were conducted with a diverse set of health IT stakeholders to understand their current practices and challenges related to usability during implementation. We used a grounded theory approach: data were coded, sorted, and emerging themes were identified. Conclusions from both sources of data were compared to identify areas of misalignment. Results: We identified six emerging themes from the literature and stakeholder interviews: cost and resources, risk assessment, governance and consensus building, customization, clinical workflow and usability testing, and training. Across these themes, there were misalignments between the literature and stakeholder perspectives, indicating major gaps. Discussion: Major gaps identified from each of the six emerging themes are discussed as critical areas for future research, opportunities for new stakeholder initiatives, and opportunities to better disseminate resources to improve the implementation of EHRs. Conclusion: Our analysis identified practices and challenges across six different emerging themes, illustrated important gaps, and suggests critical areas for future research and dissemination to improve EHR implementation. PMID:27847961

  4. MV controlled spark gap

    Evdokimovich, V.M.; Evlampiev, S.B.; Korshunov, G.S.; Nikolaev, V.A.; Sviridov, Yu.F.; Khmyrov, V.V.

    1980-01-01

    A megavolt gas-filled trigatron gap with a sectional gas-discharge chamber having a more than three-fold range of operating voltages is described. The discharge chamber consists of ten sections, each 70 mm thick, made of organic glass. The sections are separated from one another by aluminium gradient rings to which an ohmic voltage divider is connected. The insulating sections and gradient rings are braced together by means of metal flanges through gaskets made of oil-resistant rubber with the help of fiberglass-laminate pins. The gap has two electrodes 110 mm in diameter. The trigatron ignition assembly uses a dielectric bushing projecting over the main electrode plane. Use has been made of a gas mixture containing 10% SF6 and 90% air, making it possible to ensure stable gap operation without readjustment in the voltage range from 0.4 to 1.35 MV. The operation time lag in this range is equal to 10 μs at a spread of [ru

  5. Climate Certainties and Uncertainties

    Morel, Pierre

    2012-01-01

    In issue 380 of Futuribles in December 2011, Antonin Pottier analysed in detail the workings of what is today termed 'climate scepticism' - namely the propensity of certain individuals to contest the reality of climate change on the basis of pseudo-scientific arguments. He emphasized particularly that what fuels the debate on climate change is, largely, the degree of uncertainty inherent in the consequences to be anticipated from observation of the facts, not the description of the facts itself. In his view, the main aim of climate sceptics is to block the political measures for combating climate change. However, since they do not admit to this political posture, they choose instead to deny the scientific reality. This month, Futuribles complements this socio-psychological analysis of climate-sceptical discourse with an - in this case, wholly scientific - analysis of what we know (or do not know) about climate change on our planet. Pierre Morel gives a detailed account of the state of our knowledge in the climate field and what we are able to predict in the medium/long-term. After reminding us of the influence of atmospheric meteorological processes on the climate, he specifies the extent of global warming observed since 1850 and the main origin of that warming, as revealed by the current state of knowledge: the increase in the concentration of greenhouse gases. He then describes the changes in meteorological regimes (showing also the limits of climate simulation models), the modifications of hydrological regimes, and also the prospects for rises in sea levels. He also specifies the mechanisms that may potentially amplify all these phenomena and the climate disasters that might ensue. Lastly, he shows what are the scientific data that cannot be disregarded, the consequences of which are now inescapable (melting of the ice-caps, rises in sea level etc.), the only remaining uncertainty in this connection being the date at which these things will happen. 'In this

  6. Planning for robust reserve networks using uncertainty analysis

    Moilanen, A.; Runge, M.C.; Elith, Jane; Tyre, A.; Carmel, Y.; Fegraus, E.; Wintle, B.A.; Burgman, M.; Ben-Haim, Y.

    2006-01-01

    Planning land-use for biodiversity conservation frequently involves computer-assisted reserve selection algorithms. Typically such algorithms operate on matrices of species presence-absence in sites, or on species-specific distributions of model predicted probabilities of occurrence in grid cells. There are practically always errors in input data: erroneous species presence-absence data, structural and parametric uncertainty in predictive habitat models, and lack of correspondence between temporal presence and long-run persistence. Despite these uncertainties, typical reserve selection methods proceed as if there is no uncertainty in the data or models. Having two conservation options of apparently equal biological value, one would prefer the option whose value is relatively insensitive to errors in planning inputs. In this work we show how uncertainty analysis for reserve planning can be implemented within a framework of information-gap decision theory, generating reserve designs that are robust to uncertainty. Consideration of uncertainty involves modifications to the typical objective functions used in reserve selection. Search for robust-optimal reserve structures can still be implemented via typical reserve selection optimization techniques, including stepwise heuristics, integer-programming and stochastic global search.
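
    The core quantity in such an info-gap analysis is the robustness function: the largest uncertainty horizon at which the worst-case performance of a candidate reserve design still meets a critical requirement. In generic notation (ours, not the paper's):

```latex
% Info-gap robustness of design q: the largest horizon alpha such that the
% worst case of performance V over the uncertainty set U(alpha, u~) still
% meets the critical level V_c.
\hat{\alpha}(q, V_c) \;=\; \max \left\{ \alpha \ge 0 \;:\;
\min_{u \,\in\, \mathcal{U}(\alpha, \tilde{u})} V(q, u) \;\ge\; V_c \right\}
```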

  7. Fluctuations, Uncertainty and Income Inequality in Developing Countries

    Fadi Fawaz; Masha Rahnamamoghadam; Victor Valcarcel

    2012-01-01

    We analyze income inequality and high frequency movements in economic activity for low and high income developing countries (LIDC and HIDC). The impact of human capital, capital formation, and economic uncertainty on income inequality for LIDC and HIDC are also evaluated. We find strong evidence that business cycle fluctuations serve to exacerbate income inequality in HIDC while they help narrow the gap in LIDC. Importantly, volatility in output widens inequality across the board but to a con...

  8. Sketching Uncertainty into Simulations.

    Ribicic, H; Waser, J; Gurbat, R; Sadransky, B; Groller, M E

    2012-12-01

    In a variety of application areas, the use of simulation steering in decision making is limited at best. Research focusing on this problem suggests that most user interfaces are too complex for the end user. Our goal is to let users create and investigate multiple, alternative scenarios without the need for special simulation expertise. To simplify the specification of parameters, we move from a traditional manipulation of numbers to a sketch-based input approach. Users steer both numeric parameters and parameters with a spatial correspondence by sketching a change onto the rendering. Special visualizations provide immediate visual feedback on how the sketches are transformed into boundary conditions of the simulation models. Since uncertainty with respect to many intertwined parameters plays an important role in planning, we also allow the user to intuitively setup complete value ranges, which are then automatically transformed into ensemble simulations. The interface and the underlying system were developed in collaboration with experts in the field of flood management. The real-world data they have provided has allowed us to construct scenarios used to evaluate the system. These were presented to a variety of flood response personnel, and their feedback is discussed in detail in the paper. The interface was found to be intuitive and relevant, although a certain amount of training might be necessary.

  9. Uncertainty vs. Information (Invited)

    Nearing, Grey

    2017-04-01

    Information theory is the branch of logic that describes how rational epistemic states evolve in the presence of empirical data (Knuth, 2005), and any logic of science is incomplete without such a theory. Developing a formal philosophy of science that recognizes this fact results in essentially trivial solutions to several longstanding problems that are generally considered intractable, including: • Alleviating the need for any likelihood function or error model. • Derivation of purely logical falsification criteria for hypothesis testing. • Specification of a general quantitative method for process-level model diagnostics. More generally, I make the following arguments: 1. Model evaluation should not proceed by quantifying and/or reducing error or uncertainty, and instead should be approached as a problem of ensuring that our models contain as much information as our experimental data. I propose that the latter is the only question a scientist actually has the ability to ask. 2. Instead of building geophysical models as solutions to differential equations that represent conservation laws, we should build models as maximum entropy distributions constrained by conservation symmetries. This will allow us to derive predictive probabilities directly from first principles. Knuth, K. H. (2005) 'Lattice duality: The origin of probability and entropy', Neurocomputing, 67, pp. 245-274.

  10. Big data uncertainties.

    Maugis, Pierre-André G

    2018-07-01

    Big data-the idea that an always-larger volume of information is being constantly recorded-suggests that new problems can now be subjected to scientific scrutiny. However, can classical statistical methods be used directly on big data? We analyze the problem by looking at two known pitfalls of big datasets. First, that they are biased, in the sense that they do not offer a complete view of the populations under consideration. Second, that they present a weak but pervasive level of dependence between all their components. In both cases we observe that the uncertainty of the conclusion obtained by statistical methods is increased when used on big data, either because of a systematic error (bias), or because of a larger degree of randomness (increased variance). We argue that the key challenge raised by big data is not only how to use big data to tackle new problems, but to develop tools and methods able to rigorously articulate the new risks therein. Copyright © 2016. Published by Elsevier Ltd.

  11. Uncertainty enabled Sensor Observation Services

    Cornford, Dan; Williams, Matthew; Bastin, Lucy

    2010-05-01

    Almost all observations of reality are contaminated with errors, which introduce uncertainties into the actual observation result. Such uncertainty is often held to be a data quality issue, and quantification of this uncertainty is essential for the principled exploitation of the observations. Many existing systems treat data quality in a relatively ad-hoc manner, however if the observation uncertainty is a reliable estimate of the error on the observation with respect to reality then knowledge of this uncertainty enables optimal exploitation of the observations in further processes, or decision making. We would argue that the most natural formalism for expressing uncertainty is Bayesian probability theory. In this work we show how the Open Geospatial Consortium Sensor Observation Service can be implemented to enable the support of explicit uncertainty about observations. We show how the UncertML candidate standard is used to provide a rich and flexible representation of uncertainty in this context. We illustrate this on a data set of user contributed weather data where the INTAMAP interpolation Web Processing Service is used to help estimate the uncertainty on the observations of unknown quality, using observations with known uncertainty properties. We then go on to discuss the implications of uncertainty for a range of existing Open Geospatial Consortium standards including SWE common and Observations and Measurements. We discuss the difficult decisions in the design of the UncertML schema and its relation and usage within existing standards and show various options. We conclude with some indications of the likely future directions for UncertML in the context of Open Geospatial Consortium services.

  12. GapBlaster-A Graphical Gap Filler for Prokaryote Genomes.

    Pablo H C G de Sá

    Full Text Available The advent of NGS (Next Generation Sequencing) technologies has resulted in an exponential increase in the number of complete genomes available in biological databases. This advance has allowed the development of several computational tools enabling analyses of large amounts of data in each of the various steps, from processing and quality filtering to gap filling and manual curation. The tools developed for gap closure are very useful as they result in more complete genomes, which will influence downstream analyses of genomic plasticity and comparative genomics. However, the gap filling step remains a challenge for genome assembly, often requiring manual intervention. Here, we present GapBlaster, a graphical application to evaluate and close gaps. GapBlaster was developed in the Java programming language. The software uses contigs obtained in the assembly of the genome to perform an alignment against a draft of the genome/scaffold, using BLAST or MUMmer, to close gaps. All identified alignments of contigs that extend through the gaps in the draft sequence are then presented to the user for further evaluation via the GapBlaster graphical interface. GapBlaster presents significant results compared to other similar software and has the advantage of offering a graphical interface for manual curation of the gaps. The GapBlaster program, the user guide and the test datasets are freely available at https://sourceforge.net/projects/gapblaster2015/. It requires Sun JDK 8 and BLAST or MUMmer.
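
    The core test the abstract describes, keeping only contig alignments that extend through a gap in the draft, can be sketched as follows; this is a hypothetical illustration, not GapBlaster's code, and the coordinates, flank size and data structures are assumptions.

```python
# Hedged sketch: retain contig alignments that span a draft-sequence gap with some flank.
from typing import List, NamedTuple

class Alignment(NamedTuple):
    contig: str
    ref_start: int   # alignment start on the draft (0-based)
    ref_end: int     # alignment end on the draft (exclusive)

def spans_gap(aln: Alignment, gap_start: int, gap_end: int, flank: int = 50) -> bool:
    # The alignment must cover the whole gap plus an anchoring flank on each side.
    return aln.ref_start <= gap_start - flank and aln.ref_end >= gap_end + flank

gaps = [(1200, 1450), (9800, 9900)]                      # N-runs found in the draft (illustrative)
alignments = [Alignment("contig_17", 1100, 1600),
              Alignment("contig_03", 1300, 2400)]

for gs, ge in gaps:
    candidates: List[Alignment] = [a for a in alignments if spans_gap(a, gs, ge)]
    print(f"gap {gs}-{ge}: {len(candidates)} candidate contig(s) for manual curation")
```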

  13. Alignment measurements uncertainties for large assemblies using probabilistic analysis techniques

    AUTHOR|(CDS)2090816; Almond, Heather

    Big science and ambitious industrial projects continually push forward with technical requirements beyond the grasp of conventional engineering techniques. Examples of these are ultra-high precision requirements in the fields of celestial telescopes, particle accelerators and the aerospace industry. Such extreme requirements are limited largely by the capability of the metrology used, namely its uncertainty in relation to the alignment tolerance required. The current work was initiated as part of a Marie Curie European research project held at CERN, Geneva, aiming to answer those challenges as they relate to future accelerators requiring alignment of 2 m large assemblies to tolerances in the 10 µm range. The thesis has found several gaps in current knowledge limiting such capability. Among those was the lack of application of state-of-the-art uncertainty propagation methods in alignment measurement metrology. Another major limiting factor found was the lack of uncertainty statements in the thermal errors compensatio...

  14. A commentary on model uncertainty

    Apostolakis, G.

    1994-01-01

    A framework is proposed for the identification of model and parameter uncertainties in risk assessment models. Two cases are distinguished; in the first case, a set of mutually exclusive and exhaustive hypotheses (models) can be formulated, while, in the second, only one reference model is available. The relevance of this formulation to decision making and the communication of uncertainties is discussed

  15. Mama Software Features: Uncertainty Testing

    Ruggiero, Christy E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Porter, Reid B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-05-30

    This document reviews how the uncertainty in the calculations is being determined with test image data. The results of this testing give an ‘initial uncertainty’ number that can be used to estimate the ‘back end’ uncertainty in digital image quantification. Statisticians are refining these numbers as part of a UQ effort.

  16. Designing for Uncertainty: Three Approaches

    Bennett, Scott

    2007-01-01

    Higher education wishes to get long life and good returns on its investment in learning spaces. Doing this has become difficult because rapid changes in information technology have created fundamental uncertainties about the future in which capital investments must deliver value. Three approaches to designing for this uncertainty are described…

  17. Gap Analysis Bulletin No. 13

    2005-12-01

    We would like to gather comments from GAP researchers and data users and to facilitate collaboration among GAP projects. ... Smith, S. D., W. A. Brown, C. R. Smith, and M. E. Richmond ... GAP will be focusing on the enduring features of the Great Lakes basin, where human activities have greatly reduced the habitat available.

  18. Uncertainties in historical pollution data from sedimentary records from an Australian urban floodplain lake

    Lintern, A.; Leahy, P.; Deletic, A.; Heijnis, H.; Zawadzki, A.; Gadd, P.; McCarthy, D.

    2018-05-01

    Sediment cores from aquatic environments can provide valuable information about historical pollution levels and sources. However, there is little understanding of the uncertainties associated with these findings. The aim of this study is to fill this knowledge gap by proposing a framework for quantifying the uncertainties in historical heavy metal pollution records reconstructed from sediment cores. This uncertainty framework consists of six sources of uncertainty: uncertainties in (1) metals analysis methods, (2) spatial variability of sediment core heavy metal profiles, (3) sub-sampling intervals, (4) the sediment chronology, (5) the assumption that metal levels in bed sediments reflect the magnitude of metal inputs into the aquatic system, and (6) post-depositional transformation of metals. We apply this uncertainty framework to an urban floodplain lake in South-East Australia (Willsmere Billabong). We find that for this site, uncertainties in historical dated heavy metal profiles can be up to 176%, largely due to uncertainties in the sediment chronology, and in the assumption that the settled heavy metal mass is equivalent to the heavy metal mass entering the aquatic system. As such, we recommend that future studies reconstructing historical pollution records using sediment cores from aquatic systems undertake an investigation of the uncertainties in the reconstructed pollution record, using the uncertainty framework provided in this study. We envisage that quantifying and understanding the uncertainties associated with the reconstructed pollution records will facilitate the practical application of sediment core heavy metal profiles in environmental management projects.
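
    One simple way to aggregate sources like these is to combine relative uncertainties in quadrature, as sketched below; the independence assumption and the percentage values are mine, for illustration only, and are not the framework or the figures of the study.

```python
# Hedged sketch: quadrature combination of independent relative uncertainty sources.
import math

sources = {                              # illustrative relative uncertainties, not the paper's values
    "metals analysis":          0.05,
    "spatial variability":      0.20,
    "sub-sampling interval":    0.10,
    "sediment chronology":      0.80,
    "deposition assumption":    0.60,
    "post-depositional change": 0.15,
}

combined = math.sqrt(sum(u**2 for u in sources.values()))
print(f"combined relative uncertainty: {combined:.0%}")
```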

  19. The homeownership gap

    Andrew F. Haughwout; Richard Peach; Joseph Tracy

    2009-01-01

    After rising for a decade, the U.S. homeownership rate peaked at 69 percent in the third quarter of 2006. Over the next two and a half years, as home prices fell in many parts of the country and the unemployment rate rose sharply, the homeownership rate declined by 1.7 percentage points. An important question is, how much more will this rate decline over the current economic downturn? To address this question, we propose the concept of the 'homeownership gap' as a gauge of downward pressure o...

  20. Gaps in nonsymmetric numerical semigroups

    Fel, Leonid G.; Aicardi, Francesca

    2006-12-01

    There exist two different types of gaps in the nonsymmetric numerical semigroups S(d_1, ..., d_m) finitely generated by a minimal set of positive integers {d_1, ..., d_m}. We give the generating functions for the corresponding sets of gaps. A detailed description of both gap types is given for the first nontrivial case, m = 3. (author)
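
    For readers unfamiliar with the terminology, the gaps of such a semigroup are simply the positive integers that cannot be written as non-negative integer combinations of the generators; the short sketch below enumerates them for an illustrative generator set (my example, not one from the paper).

```python
# Hedged sketch: enumerate the gaps of the numerical semigroup generated by a set
# of positive integers with gcd 1 (otherwise the gap set is infinite).
from functools import reduce
from math import gcd

def semigroup_gaps(generators):
    assert reduce(gcd, generators) == 1
    a_min = min(generators)
    representable = {0}
    found, n, run = [], 1, 0
    while run < a_min:            # a_min consecutive representable integers => all larger ones are too
        if any(n - d in representable for d in generators if n - d >= 0):
            representable.add(n)
            run += 1
        else:
            found.append(n)
            run = 0
        n += 1
    return found

print(semigroup_gaps([5, 7, 9]))   # gaps of S(5, 7, 9), a nonsymmetric example
```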

  1. The Politics of Achievement Gaps

    Valant, J.; Newark, D. A.

    2016-01-01

    on achievement gaps have received little attention from researchers, despite playing an important role in shaping policymakers’ behaviors. Drawing on randomized experiments with a nationally representative sample of adults, we explore the public’s beliefs about test score gaps and its support for gap...

  2. GAP-REACH

    Lewis-Fernández, Roberto; Raggio, Greer A.; Gorritz, Magdaliz; Duan, Naihua; Marcus, Sue; Cabassa, Leopoldo J.; Humensky, Jennifer; Becker, Anne E.; Alarcón, Renato D.; Oquendo, María A.; Hansen, Helena; Like, Robert C.; Weiss, Mitchell; Desai, Prakash N.; Jacobsen, Frederick M.; Foulks, Edward F.; Primm, Annelle; Lu, Francis; Kopelowicz, Alex; Hinton, Ladson; Hinton, Devon E.

    2015-01-01

    Growing awareness of health and health care disparities highlights the importance of including information about race, ethnicity, and culture (REC) in health research. Reporting of REC factors in research publications, however, is notoriously imprecise and unsystematic. This article describes the development of a checklist to assess the comprehensiveness and the applicability of REC factor reporting in psychiatric research publications. The 16-item GAP-REACH© checklist was developed through a rigorous process of expert consensus, empirical content analysis in a sample of publications (N = 1205), and interrater reliability (IRR) assessment (N = 30). The items assess each section in the conventional structure of a health research article. Data from the assessment may be considered on an item-by-item basis or as a total score ranging from 0% to 100%. The final checklist has excellent IRR (κ = 0.91). The GAP-REACH may be used by multiple research stakeholders to assess the scope of REC reporting in a research article. PMID:24080673

  3. Gap Task Force

    Lissuaer, D

    One of the more congested areas in the ATLAS detector is the GAP region (the area between the Barrel Calorimeter and the End Cap calorimeter), where Inner Detector services, LAr services and some Tile services must all co-habit in a very limited area. It has been clear for some time that the space in the GAP region is not sufficient to accommodate all that is needed. In the last few months, additional problems of routing all the services to Z=0 have been encountered due to the very limited space between the Tile Calorimeter and the first layer of Muon chambers. The Technical Management Board (TMB) and the Executive Board (EB) decided in the middle of March to establish a Task Force to look at this problem and come up with a solution within well-specified guidelines. The task force consisted of experts from the ID, Muon, Liquid Argon and Tile systems in addition to experts from the Technical Coordination team and the Physics coordinator. The task force held many meetings and in general there were some very l...

  4. Closing the value gap

    Snyder, A.V.

    1992-01-01

    It's a predicament. For the most part, investor-owned electric utilities trade at a deep discount to the actual (that is, replacement-cost) value to their assets. That's because most utilities fail to earn real returns large enough to justify raising and investing capital. The result is a value gap, where overall market value is significantly lower than the replacement costs of the assets. This gap is wider for utilities than for virtually any other industry in our economy. In addition to providing education and awareness, senior management must determine which businesses and activities create value and which diminish it. Then, management must allocate capital and human resources appropriately, holding down investments in value-diminishing areas until they can improve their profitability, and aggressively investing in value-enhancing businesses while preserving their profitability. But value management must not stop with resource-allocation decisions. To create a lasting transition to a value management philosophy, the utility's compensation system must also change: executives will have motivation to create value when compensation stems from this goal, not from such misleading accounting measures as earnings-per-share growth or ROE. That requires clear value-creation goals, and the organization must continuously evaluate top management's performance in light of the progress made toward those goals

  5. Board affiliation and pay gap

    Shenglan Chen

    2014-06-01

    Full Text Available This paper examines the effects of board affiliation on the corporate pay gap. Using a sample of Chinese listed firms from 2005 to 2011, we find that boards with a greater presence of directors appointed by block shareholders have lower pay gaps. Furthermore, the governance effects of board affiliation with and without pay are distinguished. The empirical results show that board affiliation without pay is negatively related to the pay gap, while board affiliation with pay is positively related to the pay gap. Overall, the results shed light on how block shareholders affect their companies’ pay gaps through board affiliation.

  6. Uncertainties in Nuclear Proliferation Modeling

    Kim, Chul Min; Yim, Man-Sung; Park, Hyeon Seok

    2015-01-01

    There have been various efforts in the research community to understand the determinants of nuclear proliferation and to develop quantitative tools to predict nuclear proliferation events. Such systematic approaches have shown the potential to provide warning for the international community to prevent nuclear proliferation activities. However, there is still considerable debate about the robustness of the estimated effects of the determinants and of the projection results. Some studies have shown that several factors can cause uncertainties in previous quantitative nuclear proliferation modeling works. This paper analyzes the uncertainties in the past approaches and suggests future work from the viewpoint of proliferation history, analysis methods, and variable selection. The research community still lacks knowledge of the sources of uncertainty in current models. Fundamental problems in modeling will remain even if other advanced modeling methods are developed. Before starting to develop elaborate models based on hypotheses about time-dependent proliferation determinants, graph theory, etc., it is important to analyze the uncertainty of current models in order to solve the fundamental problems of nuclear proliferation modeling. The uncertainty from different codings of proliferation history is small. More serious problems stem from limited analysis methods and correlation among the variables. Problems in regression analysis and survival analysis cause large uncertainties when using the same dataset, which decreases the robustness of the results. Inaccurate variables for nuclear proliferation also increase the uncertainty. To overcome these problems, further quantitative research should focus on analyzing the knowledge suggested by qualitative nuclear proliferation studies

  7. Measurement uncertainty: Friend or foe?

    Infusino, Ilenia; Panteghini, Mauro

    2018-02-02

    The definition and enforcement of a reference measurement system, based on the implementation of metrological traceability of patients' results to higher order reference methods and materials, together with a clinically acceptable level of measurement uncertainty, are fundamental requirements to produce accurate and equivalent laboratory results. The uncertainty associated with each step of the traceability chain should be governed to obtain a final combined uncertainty on clinical samples fulfilling the requested performance specifications. It is important that end-users (i.e., clinical laboratory) may know and verify how in vitro diagnostics (IVD) manufacturers have implemented the traceability of their calibrators and estimated the corresponding uncertainty. However, full information about traceability and combined uncertainty of calibrators is currently very difficult to obtain. Laboratory professionals should investigate the need to reduce the uncertainty of the higher order metrological references and/or to increase the precision of commercial measuring systems. Accordingly, the measurement uncertainty should not be considered a parameter to be calculated by clinical laboratories just to fulfil the accreditation standards, but it must become a key quality indicator to describe both the performance of an IVD measuring system and the laboratory itself. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  8. Model uncertainty in safety assessment

    Pulkkinen, U.; Huovinen, T.

    1996-01-01

    The uncertainty analyses are an essential part of any risk assessment. Usually the uncertainties of reliability model parameter values are described by probability distributions and the uncertainty is propagated through the whole risk model. In addition to the parameter uncertainties, the assumptions behind the risk models may be based on insufficient experimental observations and the models themselves may not be exact descriptions of the phenomena under analysis. The description and quantification of this type of uncertainty, model uncertainty, is the topic of this report. The model uncertainty is characterized and some approaches to model and quantify it are discussed. The emphasis is on so called mixture models, which have been applied in PSAs. Some of the possible disadvantages of the mixture model are addressed. In addition to quantitative analyses, also qualitative analysis is discussed shortly. To illustrate the models, two simple case studies on failure intensity and human error modeling are described. In both examples, the analysis is based on simple mixture models, which are observed to apply in PSA analyses. (orig.) (36 refs., 6 figs., 2 tabs.)

  10. Educational segregation and the gender wage gap in Greece

    Livanos, Ilias; Pouliakas, Konstantinos

    2012-01-01

    Purpose: To investigate the extent to which differences in the subject of degree studied by male and female university graduates contribute to the gender pay gap in Greece, an EU country with historically large gender discrepancies in earnings and occupational segregation. In addition, to explore the reasons underlying the distinct educational choices of men and women, with particular emphasis on the role of wage uncertainty. Design/methodology/approach: Using micro-data from the ...

  11. Model uncertainty: Probabilities for models?

    Winkler, R.L.

    1994-01-01

    Like any other type of uncertainty, model uncertainty should be treated in terms of probabilities. The question is how to do this. The most commonly-used approach has a drawback related to the interpretation of the probabilities assigned to the models. If we step back and look at the big picture, asking what the appropriate focus of the model uncertainty question should be in the context of risk and decision analysis, we see that a different probabilistic approach makes more sense, although it raises some implementation questions. Current work that is underway to address these questions looks very promising

  12. Minimizing measurement uncertainties of coniferous needle-leaf optical properties, part II: experimental set-up and error analysis

    Yanez Rausell, L.; Malenovsky, Z.; Clevers, J.G.P.W.; Schaepman, M.E.

    2014-01-01

    We present uncertainties associated with the measurement of coniferous needle-leaf optical properties (OPs) with an integrating sphere using an optimized gap-fraction (GF) correction method, where GF refers to the air gaps appearing between the needles of a measured sample. We used an optically

  13. Minding the gap

    Mia Carlberg

    2013-12-01

    Full Text Available The plan for the Round table session was to focus on organizational and social/cultural differences between librarians and faculty, with the aim of increasing our awareness of these differences when we try to find ways to cooperate within the academy or school. This may help us to sort things out, experience acceptance and take adequate actions, saving energy and perhaps being less frustrated. The questions that the workshop addressed were: What is in the gap between librarians and faculty when dealing with information literacy? How can we fill the gap? Participants discussed this in detail, with the aim of understanding the gap better and finding ways to fill it: by defining it, and thereby making it easier to work out a strategy for future action to improve the teaching of information literacy, including listing possible, impossible or nearly impossible ways. The springboard for the discussion was extracted from some projects that the workshop leader has been engaged in since 2009. The first example is a research circle in which Uppsala University Library used action research to observe and understand the process when we had the opportunity to implement information literacy classes with progression in an undergraduate program. What worked well? What did not? Why? This work was described, together with other examples from Uppsala University, to an international panel working with quality issues. What did they think of our work? Might this change the ways we are working? How? Another example is an ongoing joint project where librarians and faculty members are trying to define ways to increase the cooperation between the library and faculty and make this cooperation sustainable. Recent experience from this was brought to the discussion. There is an overwhelming number of papers written in this field. A few papers have inspired these ideas. One article in particular: Christiansen, L., Stombler, M. & Thaxton, L. (2004). A

  14. Decision-making under great uncertainty

    Hansson, S.O.

    1992-01-01

    Five types of decision-uncertainty are distinguished: uncertainty of consequences, of values, of demarcation, of reliance, and of co-ordination. Strategies are proposed for each type of uncertainty. The general conclusion is that it is meaningful for decision theory to treat cases with greater uncertainty than the textbook case of 'decision-making under uncertainty'. (au)

  15. Bridging the Evaluation Gap

    Paul Wouters

    2017-02-01

    Full Text Available Paul Wouters’ essay is concerned with bridging the gap between what we value in our academic work and how we are assessed in formal evaluation exercises. He reflects on the recent evaluation of his own center, and reminds us that it is productive to see evaluations not as the (obviously impossible) attempt to produce a true representation of past work, but rather as the exploration and performance of “who one wants to be.” Reflecting on why STS should do more than just play along to survive in the indicator game, he suggests that our field should contribute to changing its very rules. In this endeavor, the attitude and sensibilities developed in our field may be more important than any specific theoretical concepts or methodologies.

  16. The GAP-TPC

    Rossi, B.; Anastasio, A.; Boiano, A.; Cocco, A.G.; Meo, P. Di; Vanzanella, A.; Catalanotti, S.; Covone, G.; Longo, G.; Walker, S.; Fiorillo, G.; Wang, H.; Wang, Y.

    2016-01-01

    Several experiments have been conducted worldwide, with the goal of observing low-energy nuclear recoils induced by WIMPs scattering off target nuclei in ultra-sensitive, low-background detectors. In the last few decades noble liquid detectors designed to search for dark matter in the form of WIMPs have been extremely successful in improving their sensitivities and setting the best limits. One of the crucial problems to be faced for the development of large size (multi ton-scale) liquid argon experiments is the lack of reliable and low background cryogenic PMTs: their intrinsic radioactivity, cost, and borderline performance at 87 K rule them out as a possible candidate for photosensors. We propose a brand new concept of liquid argon-based detector for direct dark matter search: the Geiger-mode Avalanche Photodiode Time Projection Chamber (GAP-TPC) optimized in terms of residual radioactivity of the photosensors, energy and spatial resolution, light and charge collection efficiency

  17. Finding the gaps

    Stoneham, A. M.

    Much of the pioneering work on radiation damage was based on very simple potentials. Potentials are now much more sophisticated and accurate. Self-consistent molecular dynamics is routine for adiabatic energy surfaces, at least for modest numbers of atoms and modest timescales. This means that non-equilibrium nuclear processes can be followed dynamically. It might also give the illusion that any damage process can be modelled with success. Sadly, this is not yet so. This paper discusses where the gaps lie, and specifically three groups of challenges. The first challenge concerns electronic excited states. The second challenge concerns timescales, from femtoseconds to tens of years. The third challenge concerns length scales, and the link between microscopic (atomistic) and mesoscopic (microstructural) scales. The context of these challenges is materials modification by excitation: the removal of material, the modification of bulk or surface material, the altering of rates of processes or changing of branching ratios, and damage, good or bad.

  18. Gaps in Political Interest

    Robison, Joshua

    2015-01-01

    Political interest fundamentally influences political behavior, knowledge, and persuasion (Brady, Verba, & Schlozman, 1995; Delli Carpini & Keeter, 1996; Luskin, 1990; Zukin, Andolina, Keeter, Jenkins, & Delli Carpini, 2006). Since the early 1960s, the American National Election Studies (ANES) has sought to measure respondents’ general interest in politics by asking them how often they follow public affairs. In this article, we uncover novel sources of measurement error concerning this question. We first show that other nationally representative surveys that frequently use this item deliver drastically higher estimates of mass interest. We then use a survey experiment included on a wave of the ANES’ Evaluating Government and Society Surveys (EGSS) to explore the influence of question order in explaining this systemic gap in survey results. We show that placing batteries of political...

  19. Insights into water managers' perception and handling of uncertainties - a study of the role of uncertainty in practitioners' planning and decision-making

    Höllermann, Britta; Evers, Mariele

    2017-04-01

    Planning and decision-making under uncertainty is common in water management due to climate variability, simplified models, societal developments and planning restrictions, just to name a few. Dealing with uncertainty can be approached from two sides, thereby affecting the process and form of communication: either improve the knowledge base by reducing uncertainties, or apply risk-based approaches to acknowledge uncertainties throughout the management process. The current understanding is that science focuses more strongly on the former approach, while policy and practice more actively apply a risk-based approach to handle incomplete and/or ambiguous information. The focus of this study is on how water managers perceive and handle uncertainties at the knowledge/decision interface in their daily planning and decision-making routines, how they evaluate the role of uncertainties for their decisions, and how they integrate this information into the decision-making process. Expert interviews and questionnaires among practitioners and scientists provided an insight into their perspectives on uncertainty handling, allowing a comparison of diverse strategies between science and practice as well as between different types of practitioners. Our results confirmed the practitioners' bottom-up approach, working from potential measures upwards instead of from impact assessment downwards as is common in science-based approaches. This science-practice gap may hinder effective uncertainty integration and acknowledgement in final decisions. Additionally, the implementation of an adaptive and flexible management approach acknowledging uncertainties is often stalled by rigid regulations favouring a predict-and-control attitude. However, the study showed that practitioners' level of uncertainty recognition varies with their type of employer and business unit, hence affecting the degree of the science-practice gap with respect to uncertainty recognition. The level of working

  20. Optimal Wind Power Uncertainty Intervals for Electricity Market Operation

    Wang, Ying; Zhou, Zhi; Botterud, Audun; Zhang, Kaifeng

    2018-01-01

    It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty levels. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the models into a mixed integer linear programming (MILP) problem without strong assumptions on the shape of the probability distribution. In order to investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method can not only help to balance the economics and reliability of power system scheduling, but also help to stabilize energy prices in electricity market operation.
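
    The trade-off being optimized can be illustrated with a toy calculation (not the authors' MILP): widening the scheduled uncertainty interval raises reserve costs, while narrowing it raises the expected penalty for uncovered forecast errors. The Gaussian error model and all prices below are assumptions.

```python
# Hedged sketch: choose the wind uncertainty interval half-width that minimizes
# reserve cost plus expected penalty for errors falling outside the interval.
import numpy as np
from scipy.stats import norm

sigma = 100.0          # wind forecast error standard deviation, MW (assumed)
c_reserve = 5.0        # $/MW of reserve held (assumed)
c_penalty = 80.0       # $/MW of uncovered shortfall (assumed)

widths = np.linspace(0.0, 4.0 * sigma, 200)                       # candidate half-widths, MW
# expected shortfall of a N(0, sigma) error beyond +width
expected_shortfall = sigma * norm.pdf(widths / sigma) - widths * norm.sf(widths / sigma)
total_cost = c_reserve * widths + c_penalty * expected_shortfall

best = widths[np.argmin(total_cost)]
print(f"cost-minimizing half-width: {best:.0f} MW "
      f"(~{norm.cdf(best / sigma):.1%} one-sided coverage)")
```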

  1. The Uncertainties of Risk Management

    Vinnari, Eija; Skærbæk, Peter

    2014-01-01

    Purpose – The purpose of this paper is to analyse the implementation of risk management as a tool for internal audit activities, focusing on unexpected effects or uncertainties generated during its application. Design/methodology/approach – Public and confidential documents as well as semi-structured interviews are analysed through the lens of actor-network theory to identify the effects of risk management devices in a Finnish municipality. Findings – The authors found that risk management, rather than reducing uncertainty, itself created unexpected uncertainties that would otherwise not have emerged ... for expanding risk management. More generally, such uncertainties relate to the professional identities and responsibilities of operational managers as defined by the framing devices. Originality/value – The paper offers three contributions to the extant literature: first, it shows how risk management itself ...

  2. Climate Projections and Uncertainty Communication.

    Joslyn, Susan L; LeClerc, Jared E

    2016-01-01

    Lingering skepticism about climate change might be due in part to the way climate projections are perceived by members of the public. Variability between scientists' estimates might give the impression that scientists disagree about the fact of climate change rather than about details concerning the extent or timing. Providing uncertainty estimates might clarify that the variability is due in part to quantifiable uncertainty inherent in the prediction process, thereby increasing people's trust in climate projections. This hypothesis was tested in two experiments. Results suggest that including uncertainty estimates along with climate projections leads to an increase in participants' trust in the information. Analyses explored the roles of time, place, demographic differences (e.g., age, gender, education level, political party affiliation), and initial belief in climate change. Implications are discussed in terms of the potential benefit of adding uncertainty estimates to public climate projections. Copyright © 2015 Cognitive Science Society, Inc.

  3. Relational uncertainty in service dyads

    Kreye, Melanie

    2017-01-01

    in service dyads and how they resolve it through suitable organisational responses to increase the level of service quality. Design/methodology/approach: We apply the overall logic of Organisational Information-Processing Theory (OIPT) and present empirical insights from two industrial case studies collected ... Resolving the relational uncertainty increased the functional quality, while resolving the partner's organisational uncertainty increased the technical quality of the delivered service. Originality: We make two contributions. First, we introduce relational uncertainty to the OM literature as the inability to predict and explain the actions of a partnering organisation due to a lack of knowledge about their abilities and intentions. Second, we present suitable organisational responses to relational uncertainty and their effect on service quality.

  4. Advanced LOCA code uncertainty assessment

    Wickett, A.J.; Neill, A.P.

    1990-11-01

    This report describes a pilot study that identified, quantified and combined uncertainties for the LOBI BL-02 3% small break test. A "dials" version of TRAC-PF1/MOD1, called TRAC-F, was used. (author)

  5. Technique for estimating relocated gap width for gap conductance calculations

    Klink, P.H.

    1978-01-01

    Thermally induced fuel fragmentation and relocation has been demonstrated to influence the thermal behavior of a fuel rod in two ways. The effective fuel pellet conductivity is decreased and pellet-to-cladding heat transfer is improved. This paper presents a correlation between as-built and relocated gap width which, used with the Ross and Stoute Gap Conductance Correlation and an appropriate fuel thermal expansion model, closely predicts the measured gap conductances
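
    A heavily simplified sketch of how a relocated gap width feeds a gap conductance estimate is given below; the relocation factor, gas conductivity and temperature-jump distances are placeholders of my own, not the paper's correlation, and only the open-gap conduction term is shown.

```python
# Hedged sketch: open-gap conduction term of a Ross-Stoute-style gap conductance,
# evaluated with a relocated (reduced) gap width. Radiation and contact terms omitted.
def gap_conductance(as_built_gap_m: float, relocation_factor: float = 0.5,
                    k_gas: float = 0.3, jump_dist_m: float = 5e-6) -> float:
    relocated_gap = relocation_factor * as_built_gap_m     # illustrative relocation model
    return k_gas / (relocated_gap + 2 * jump_dist_m)       # W m-2 K-1

print(f"{gap_conductance(100e-6):.0f} W/m2-K for a 100 micron as-built gap")
```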

  6. How to live with uncertainties?

    Michel, R.

    2012-01-01

    In a short introduction, the problem of uncertainty as a general consequence of incomplete information as well as the approach to quantify uncertainty in metrology are addressed. A little history of the more than 30 years of the working group AK SIGMA is followed by an appraisal of its up-to-now achievements. Then, the potential future of the AK SIGMA is discussed based on its actual tasks and on open scientific questions and future topics. (orig.)

  7. Some remarks on modeling uncertainties

    Ronen, Y.

    1983-01-01

    Several topics related to the question of modeling uncertainties are considered. The first topic is related to the use of the generalized bias operator method for modeling uncertainties. The method is expanded to a more general form of operators. The generalized bias operator is also used in the inverse problem and applied to determine the anisotropic scattering law. The last topic discussed is related to the question of the limit to accuracy and how to establish its value. (orig.) [de

  8. Uncertainty analysis in safety assessment

    Lemos, Francisco Luiz de; Sullivan, Terry

    1997-01-01

    Nuclear waste disposal is a very complex subject which requires the study of many different fields of science, such as hydrogeology, meteorology, geochemistry, etc. In addition, waste disposal facilities are designed to last for a very long period of time. Both of these conditions leave safety assessment projections filled with uncertainty. This paper addresses the treatment of uncertainties in safety assessment modeling due to the variability of data, and some current approaches used to deal with this problem. (author)

  9. Propagation of dynamic measurement uncertainty

    Hessling, J P

    2011-01-01

    The time-dependent measurement uncertainty has been evaluated in a number of recent publications, starting from a known uncertain dynamic model. This could be defined as the 'downward' propagation of uncertainty from the model to the targeted measurement. The propagation of uncertainty 'upward' from the calibration experiment to a dynamic model traditionally belongs to system identification. The use of different representations (time, frequency, etc) is ubiquitous in dynamic measurement analyses. An expression of uncertainty in dynamic measurements is formulated for the first time in this paper independent of representation, joining upward as well as downward propagation. For applications in metrology, the high quality of the characterization may be prohibitive for any reasonably large and robust model to pass the whiteness test. This test is therefore relaxed by not directly requiring small systematic model errors in comparison to the randomness of the characterization. Instead, the systematic error of the dynamic model is propagated to the uncertainty of the measurand, analogously but differently to how stochastic contributions are propagated. The pass criterion of the model is thereby transferred from the identification to acceptance of the total accumulated uncertainty of the measurand. This increases the relevance of the test of the model as it relates to its final use rather than the quality of the calibration. The propagation of uncertainty hence includes the propagation of systematic model errors. For illustration, the 'upward' propagation of uncertainty is applied to determine if an appliance box is damaged in an earthquake experiment. In this case, relaxation of the whiteness test was required to reach a conclusive result
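
    The 'downward' direction can be sketched with a plain Monte Carlo propagation of an uncertain dynamic model to a time-dependent uncertainty band (my simplification; the paper's representation-independent formulation and whiteness-test discussion go well beyond this). The first-order sensor model and parameter spreads are assumptions.

```python
# Hedged sketch: Monte Carlo propagation of uncertain model parameters through a
# first-order dynamic model, yielding a time-dependent measurement uncertainty.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 400)
u = (t > 0.2).astype(float)                   # step input (illustrative)

n_mc = 2000
tau = rng.normal(0.10, 0.01, size=n_mc)       # uncertain time constant, s (assumed)
gain = rng.normal(1.00, 0.02, size=n_mc)      # uncertain static gain (assumed)

dt = t[1] - t[0]
y = np.zeros((n_mc, t.size))
for k in range(1, t.size):                    # explicit Euler step of y' = (K*u - y) / tau
    y[:, k] = y[:, k - 1] + dt * (gain * u[k] - y[:, k - 1]) / tau

std = y.std(axis=0)                           # time-dependent 1-sigma uncertainty band
print(f"max 1-sigma uncertainty during the transient: {std.max():.3f}")
```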

  10. Optimal Taxation under Income Uncertainty

    Xianhua Dai

    2011-01-01

    Optimal taxation under income uncertainty has been extensively developed in expected utility theory, but it is still an open problem for utility functions that are inseparable between income and effort. As an alternative account of decision-making under uncertainty, prospect theory (Kahneman and Tversky (1979), Tversky and Kahneman (1992)) has obtained empirical support, for example in Kahneman and Tversky (1979) and Camerer and Lowenstein (2003). This paper begins to explore optimal taxation in the context of prospect...

  11. New Perspectives on Policy Uncertainty

    Hlatshwayo, Sandile

    2017-01-01

    In recent years, the ubiquitous and intensifying nature of economic policy uncertainty has made it a popular explanation for weak economic performance in developed and developing markets alike. The primary channel for this effect is decreased and delayed investment as firms adopt a ``wait and see'' approach to irreversible investments (Bernanke, 1983; Dixit and Pindyck, 1994). Deep empirical examination of policy uncertainty's impact is rare because of the difficulty associated in measuring i...

  12. Uncertainty Quantification given Discontinuous Climate Model Response and a Limited Number of Model Runs

    Sargsyan, K.; Safta, C.; Debusschere, B.; Najm, H.

    2010-12-01

    Uncertainty quantification in complex climate models is challenged by the sparsity of available climate model predictions due to the high computational cost of model runs. Another feature that prevents classical uncertainty analysis from being readily applicable is bifurcative behavior in climate model response with respect to certain input parameters. A typical example is the Atlantic Meridional Overturning Circulation. The predicted maximum overturning stream function exhibits discontinuity across a curve in the space of two uncertain parameters, namely climate sensitivity and CO2 forcing. We outline a methodology for uncertainty quantification given discontinuous model response and a limited number of model runs. Our approach is two-fold. First we detect the discontinuity with Bayesian inference, thus obtaining a probabilistic representation of the discontinuity curve shape and location for arbitrarily distributed input parameter values. Then, we construct spectral representations of uncertainty, using Polynomial Chaos (PC) expansions on either side of the discontinuity curve, leading to an averaged-PC representation of the forward model that allows efficient uncertainty quantification. The approach is enabled by a Rosenblatt transformation that maps each side of the discontinuity to regular domains where desirable orthogonality properties for the spectral bases hold. We obtain PC modes by either orthogonal projection or Bayesian inference, and argue for a hybrid approach that targets a balance between the accuracy provided by the orthogonal projection and the flexibility provided by the Bayesian inference - where the latter allows obtaining reasonable expansions without extra forward model runs. The model output, and its associated uncertainty at specific design points, are then computed by taking an ensemble average over PC expansions corresponding to possible realizations of the discontinuity curve. The methodology is tested on synthetic examples of
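
    A heavily simplified one-dimensional sketch of the underlying idea follows: locate the discontinuity and fit separate surrogates on each side rather than one global expansion across the jump. The synthetic model, the crude variance-based detector and the cubic fits are my stand-ins for the paper's Bayesian detection, Rosenblatt transformation and Polynomial Chaos machinery.

```python
# Hedged sketch: detect a 1-D discontinuity and build piecewise polynomial surrogates.
import numpy as np
from numpy.polynomial import Polynomial

def model(x):                                   # synthetic bifurcating response (illustrative)
    return np.where(x < 0.35, 1.0 + 0.5 * x, 0.2 * np.sin(3 * x))

x = np.sort(np.random.default_rng(2).uniform(0.0, 1.0, 60))
y = model(x)

# crude detector: split point minimizing the summed within-segment variance
splits = x[5:-5]
scores = [np.var(y[x < s]) + np.var(y[x >= s]) for s in splits]
s_hat = splits[int(np.argmin(scores))]

left  = Polynomial.fit(x[x < s_hat],  y[x < s_hat],  deg=3)
right = Polynomial.fit(x[x >= s_hat], y[x >= s_hat], deg=3)

def surrogate(xq):
    # piecewise surrogate: use the fit from the appropriate side of the discontinuity
    return np.where(xq < s_hat, left(xq), right(xq))

print(f"estimated discontinuity near x = {s_hat:.2f}")
```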

  13. Pharmacological Fingerprints of Contextual Uncertainty.

    Louise Marshall

    2016-11-01

    Full Text Available Successful interaction with the environment requires flexible updating of our beliefs about the world. By estimating the likelihood of future events, it is possible to prepare appropriate actions in advance and execute fast, accurate motor responses. According to theoretical proposals, agents track the variability arising from changing environments by computing various forms of uncertainty. Several neuromodulators have been linked to uncertainty signalling, but comprehensive empirical characterisation of their relative contributions to perceptual belief updating, and to the selection of motor responses, is lacking. Here we assess the roles of noradrenaline, acetylcholine, and dopamine within a single, unified computational framework of uncertainty. Using pharmacological interventions in a sample of 128 healthy human volunteers and a hierarchical Bayesian learning model, we characterise the influences of noradrenergic, cholinergic, and dopaminergic receptor antagonism on individual computations of uncertainty during a probabilistic serial reaction time task. We propose that noradrenaline influences learning of uncertain events arising from unexpected changes in the environment. In contrast, acetylcholine balances attribution of uncertainty to chance fluctuations within an environmental context, defined by a stable set of probabilistic associations, or to gross environmental violations following a contextual switch. Dopamine supports the use of uncertainty representations to engender fast, adaptive responses.

  14. A Bayesian approach to model uncertainty

    Buslik, A.

    1994-01-01

    A Bayesian approach to model uncertainty is taken. For the case of a finite number of alternative models, the model uncertainty is equivalent to parameter uncertainty. A derivation based on Savage's partition problem is given
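
    The first case can be sketched in a few lines of Bayesian model averaging over a finite model set (my toy example, not the paper's): place a prior over the candidate models, update it with each model's likelihood of the data, and carry the posterior model probabilities forward. The two candidate models, their fixed parameters and the data are assumptions.

```python
# Hedged sketch: posterior probabilities over a finite set of alternative models.
import numpy as np
from scipy.stats import nbinom, poisson

counts = np.array([3, 5, 4, 7, 2, 6, 5, 4])        # toy observed event counts

# Model 1: Poisson with mean 4.5; Model 2: negative binomial with the same mean
# but extra dispersion. Parameters are fixed to keep the sketch short.
log_like = np.array([
    poisson.logpmf(counts, 4.5).sum(),
    nbinom.logpmf(counts, 10, 10 / (10 + 4.5)).sum(),
])
prior = np.array([0.5, 0.5])

posterior = prior * np.exp(log_like - log_like.max())
posterior /= posterior.sum()
print("posterior model probabilities:", np.round(posterior, 3))
```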

  15. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  16. Global Gaps in Clean Energy RD and D

    NONE

    2010-07-01

    This report seeks to inform decision makers prioritising RD&D investments in a time of financial uncertainty. It is an update of the December 2009 IEA report Global Gaps in Clean Energy Research, Development and Demonstration, which examined whether rates of LCET investment were sufficient to achieve shared global energy and environmental goals (IEA, 2009). It discusses the impact of the green stimulus spending announcements, and provides private sector perspectives on priorities for government RD&D spending. Finally, it includes a revised assessment of the gaps in public RD&D, together with suggestions for possible areas for expanded international collaboration on specific LCETs. The conclusion re-affirms the first Global Gaps study's finding that governments and industry need to dramatically increase their spending on RD&D for LCETs.

  17. The Adaptation Finance Gap Report

    UNEP’s Adaptation Gap Report series focuses on Finance, Technology and Knowledge gaps in climate change adaptation. It complements the Emissions Gap Report series, and explores the implications of failing to close the emissions gap. The report builds on a 2014 assessment by the United Nations Environment Programme (UNEP), which laid out the concept of ‘adaptation gaps’ and outlined three such gaps: technology, finance and knowledge. The 2016 Adaptation Gap Report assesses the difference between the financial costs of adapting to climate change in developing countries and the amount of money actually available to meet these costs – a difference known as the “adaptation finance gap”. Like the 2014 report, the 2016 report focuses on developing countries, where adaptation capacity is often the lowest and needs the highest, and concentrates on the period up to 2050. The report identifies trends

  18. Gender Pay Gap in Poland

    Oczki, Jarosław

    2016-01-01

    The aim of the article is to investigate the actual and explained gender pay gaps in Poland in comparison with selected highly developed countries, and to discuss the factors determining wage disparities between men and women. Data from Eurostat EU-SILC and the International Labour Organization were used. The article concludes that the gender pay gap in Poland is relatively small and decreasing, and that estimates of the explained gender pay gap published by the Internationa...

  19. Achieving Robustness to Uncertainty for Financial Decision-making

    Barnum, George M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Van Buren, Kendra L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Song, Peter [Univ. of Pennsylvania, Philadelphia, PA (United States)

    2014-01-10

    This report investigates the concept of robustness analysis to support financial decision-making. Financial models, that forecast future stock returns or market conditions, depend on assumptions that might be unwarranted and variables that might exhibit large fluctuations from their last-known values. The analysis of robustness explores these sources of uncertainty, and recommends model settings such that the forecasts used for decision-making are as insensitive as possible to the uncertainty. A proof-of-concept is presented with the Capital Asset Pricing Model. The robustness of model predictions is assessed using info-gap decision theory. Info-gaps are models of uncertainty that express the “distance,” or gap of information, between what is known and what needs to be known in order to support the decision. The analysis yields a description of worst-case stock returns as a function of increasing gaps in our knowledge. The analyst can then decide on the best course of action by trading-off worst-case performance with “risk”, which is how much uncertainty they think needs to be accommodated in the future. The report also discusses the Graphical User Interface, developed using the MATLAB® programming environment, such that the user can control the analysis through an easy-to-navigate interface. Three directions of future work are identified to enhance the present software. First, the code should be re-written using the Python scientific programming software. This change will achieve greater cross-platform compatibility, better portability, allow for a more professional appearance, and render it independent from a commercial license, which MATLAB® requires. Second, a capability should be developed to allow users to quickly implement and analyze their own models. This will facilitate application of the software to the evaluation of proprietary financial models. The third enhancement proposed is to add the ability to evaluate multiple models simultaneously
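
    The info-gap robustness calculation described here can be caricatured in a few lines (a sketch under my own assumptions, not the report's CAPM implementation or its GUI): let the estimated return deviate from its nominal value by at most a horizon of uncertainty alpha and find the largest alpha whose worst case still meets the performance requirement.

```python
# Hedged sketch: info-gap robustness of a required return under an interval-bound
# uncertainty model around the nominal estimate.
import numpy as np

nominal_return = 0.08          # last-known/estimated expected return (assumed)
required_return = 0.02         # performance floor the decision must meet (assumed)

alphas = np.linspace(0.0, 0.15, 151)          # candidate horizons of uncertainty
worst_case = nominal_return - alphas          # worst outcome allowed at each horizon

feasible = alphas[worst_case >= required_return]
robustness = feasible.max() if feasible.size else 0.0
print(f"robustness (largest tolerable gap in knowledge): {robustness:.3f}")
```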

  20. Next Generation Nuclear Plant GAP Analysis Report

    Ball, Sydney J [ORNL; Burchell, Timothy D [ORNL; Corwin, William R [ORNL; Fisher, Stephen Eugene [ORNL; Forsberg, Charles W. [Massachusetts Institute of Technology (MIT); Morris, Robert Noel [ORNL; Moses, David Lewis [ORNL

    2008-12-01

    As a follow-up to the phenomena identification and ranking table (PIRT) studies conducted recently by NRC on next generation nuclear plant (NGNP) safety, a study was conducted to identify the significant 'gaps' between what is needed and what is already available to adequately assess NGNP safety characteristics. The PIRT studies focused on identifying important phenomena affecting NGNP plant behavior, while the gap study gives more attention to off-normal behavior, uncertainties, and event probabilities under both normal operation and postulated accident conditions. Hence, this process also involved incorporating more detailed evaluations of accident sequences and risk assessments. This study considers thermal-fluid and neutronic behavior under both normal and postulated accident conditions, fission product transport (FPT), high-temperature metals, and graphite behavior and their effects on safety. In addition, safety issues related to coupling process heat (hydrogen production) systems to the reactor are addressed, given the limited design information currently available. Recommendations for further study, including analytical methods development and experimental needs, are presented as appropriate in each of these areas.

  1. Gap junctions and motor behavior

    Kiehn, Ole; Tresch, Matthew C.

    2002-01-01

    The production of any motor behavior requires coordinated activity in motor neurons and premotor networks. In vertebrates, this coordination is often assumed to take place through chemical synapses. Here we review recent data suggesting that electrical gap-junction coupling plays an important role in coordinating and generating motor outputs in embryonic and early postnatal life. Considering the recent demonstration of a prevalent expression of gap-junction proteins and gap-junction structures in the adult mammalian spinal cord, we suggest that neuronal gap-junction coupling might also contribute to the production of motor behavior in adult mammals.

  2. Axial gap rotating electrical machine

    None

    2016-02-23

    Direct drive rotating electrical machines with axial air gaps are disclosed. In these machines, a rotor ring and stator ring define an axial air gap between them. Sets of gap-maintaining rolling supports bear between the rotor ring and the stator ring at their peripheries to maintain the axial air gap. Also disclosed are wind turbines using these generators, and structures and methods for mounting direct drive rotating electrical generators to the hubs of wind turbines. In particular, the rotor ring of the generator may be carried directly by the hub of a wind turbine to rotate relative to a shaft without being mounted directly to the shaft.

  3. Radiating gap filler

    2009-01-01

    Full text: In May, corrosion on the outside wall of the over 50-year-old Canadian Chalk River reactor vessel caused a heavy water leak and the reactor was shut down, triggering a worldwide nuclear medicine shortage. The reactor is also a major supplier of the isotope molybdenum-99 (Mo-99), a precursor of the medically widely used technetium-99m. To fill the gap in demand, the Australian Nuclear Science and Technology Organisation has now arranged with US company Lantheus Medical Imaging, Inc., a world leader in medical imaging, to supply Mo-99. Subject to pending Australian regulatory processes, the deal is expected to assist in alleviating the world's current nuclear medicine shortage. As ANSTO is currently also the only global commercial supplier that produces Mo-99 from low enriched uranium (LEU) targets, Lantheus will be the first company bringing LEU-derived Tc-99m to the US market. To date, over 95% of Mo-99 is derived from highly enriched uranium (HEU) targets. However, there are concerns regarding proliferation risks associated with HEU targets, and for commercial uses production from LEU targets would be desirable. ANSTO says that the global Mo-99 supply chain is fragile and limited and that it is working closely with nuclear safety and health regulators, both domestically and overseas, to expedite all necessary approvals to allow long-term production and export of medical isotopes.

  4. Uncertainty of Forest Biomass Estimates in North Temperate Forests Due to Allometry: Implications for Remote Sensing

    Razi Ahmed

    2013-06-01

    Full Text Available Estimates of above ground biomass density in forests are crucial for refining global climate models and understanding climate change. Although data from field studies can be aggregated to estimate carbon stocks on global scales, the sparsity of such field data, temporal heterogeneity and methodological variations introduce large errors. Remote sensing measurements from spaceborne sensors are a realistic alternative for global carbon accounting; however, the uncertainty of such measurements is not well known and remains an active area of research. This article describes an effort to collect field data at the Harvard and Howland Forest sites, set in the temperate forests of the Northeastern United States in an attempt to establish ground truth forest biomass for calibration of remote sensing measurements. We present an assessment of the quality of ground truth biomass estimates derived from three different sets of diameter-based allometric equations over the Harvard and Howland Forests to establish the contribution of errors in ground truth data to the error in biomass estimates from remote sensing measurements.
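
    The allometric contribution to ground-truth uncertainty can be illustrated as below: the same stem diameters pushed through different published diameter-based equations yield different biomass totals. The coefficient pairs and diameters are illustrative placeholders, not the equation sets used at Harvard and Howland Forests.

```python
# Hedged sketch: spread in plot biomass from alternative diameter-based allometries.
import numpy as np

dbh_cm = np.array([12.0, 18.5, 25.0, 33.2, 41.0])       # stem diameters at breast height, cm

allometries = {                                          # biomass = a * DBH**b (kg), illustrative
    "set A": (0.10, 2.40),
    "set B": (0.08, 2.50),
    "set C": (0.12, 2.35),
}

totals = {name: float((a * dbh_cm**b).sum()) for name, (a, b) in allometries.items()}
spread = (max(totals.values()) - min(totals.values())) / np.mean(list(totals.values()))
for name, total in totals.items():
    print(f"{name}: {total:7.0f} kg")
print(f"relative spread across equation sets: {spread:.0%}")
```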

  5. RUMINATIONS ON NDA MEASUREMENT UNCERTAINTY COMPARED TO DA UNCERTAINTY

    Salaymeh, S.; Ashley, W.; Jeffcoat, R.

    2010-06-17

    It is difficult to overestimate the importance that physical measurements performed with nondestructive assay instruments play throughout the nuclear fuel cycle. They underpin decision making in many areas and support: criticality safety, radiation protection, process control, safeguards, facility compliance, and waste measurements. No physical measurement is complete or indeed meaningful, without a defensible and appropriate accompanying statement of uncertainties and how they combine to define the confidence in the results. The uncertainty budget should also be broken down in sufficient detail suitable for subsequent uses to which the nondestructive assay (NDA) results will be applied. Creating an uncertainty budget and estimating the total measurement uncertainty can often be an involved process, especially for non routine situations. This is because data interpretation often involves complex algorithms and logic combined in a highly intertwined way. The methods often call on a multitude of input data subject to human oversight. These characteristics can be confusing and pose a barrier to developing and understanding between experts and data consumers. ASTM subcommittee C26-10 recognized this problem in the context of how to summarize and express precision and bias performance across the range of standards and guides it maintains. In order to create a unified approach consistent with modern practice and embracing the continuous improvement philosophy a consensus arose to prepare a procedure covering the estimation and reporting of uncertainties in non destructive assay of nuclear materials. This paper outlines the needs analysis, objectives and on-going development efforts. In addition to emphasizing some of the unique challenges and opportunities facing the NDA community we hope this article will encourage dialog and sharing of best practice and furthermore motivate developers to revisit the treatment of measurement uncertainty.

  7. Critical loads - assessment of uncertainty

    Barkman, A.

    1998-10-01

    The effects of data uncertainty in applications of the critical loads concept were investigated on different spatial resolutions in Sweden and northern Czech Republic. Critical loads of acidity (CL) were calculated for Sweden using the biogeochemical model PROFILE. Three methods with different structural complexity were used to estimate the adverse effects of SO{sub 2} concentrations in northern Czech Republic. Data uncertainties in the calculated critical loads/levels and exceedances (EX) were assessed using Monte Carlo simulations. Uncertainties within cumulative distribution functions (CDF) were aggregated by accounting for the overlap between site specific confidence intervals. Aggregation of data uncertainties within CDFs resulted in lower CL and higher EX best estimates in comparison with percentiles represented by individual sites. Data uncertainties were consequently found to advocate larger deposition reductions to achieve non-exceedance based on low critical loads estimates on 150 x 150 km resolution. Input data were found to impair the level of differentiation between geographical units at all investigated resolutions. Aggregation of data uncertainty within CDFs involved more constrained confidence intervals for a given percentile. Differentiation as well as identification of grid cells on 150 x 150 km resolution subjected to EX was generally improved. Calculation of the probability of EX was shown to preserve the possibility to differentiate between geographical units. Re-aggregation of the 95%-ile EX on 50 x 50 km resolution generally increased the confidence interval for each percentile. Significant relationships were found between forest decline and the three methods addressing risks induced by SO{sub 2} concentrations. Modifying SO{sub 2} concentrations by accounting for the length of the vegetation period was found to constitute the most useful trade-off between structural complexity, data availability and effects of data uncertainty. Data
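
    The probability of exceedance mentioned above can be sketched with a toy Monte Carlo calculation; the distribution and deposition value are invented and merely stand in for the PROFILE-derived critical loads and mapped deposition of the study.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000

        # Hypothetical uncertain critical load for one grid cell (lognormal, eq ha-1 yr-1).
        critical_load = rng.lognormal(mean=np.log(800.0), sigma=0.3, size=n)
        deposition = 1000.0  # assumed sulphur deposition for the same cell

        exceedance = deposition - critical_load
        p_exceed = (exceedance > 0).mean()

        print(f"P(exceedance) = {p_exceed:.2f}")
        print(f"95th percentile exceedance = {np.percentile(exceedance, 95):.0f} eq ha-1 yr-1")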

  8. Uncertainty Quantification in Numerical Aerodynamics

    Litvinenko, Alexander

    2017-05-16

    We consider the uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations which involve a large number of variables. In the numerical section we compare five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature and a gradient-enhanced version of Kriging, radial basis functions and point collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry [D. Liu et al. '17]. For modeling we used the TAU code, developed at DLR, Germany.
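
    The simplest of such propagation approaches, plain Monte Carlo sampling, can be sketched as below; the quadratic "lift model" is an invented stand-in surrogate, not the TAU solver, and the input distributions are assumed purely for illustration.

        import numpy as np

        rng = np.random.default_rng(42)

        def lift_surrogate(alpha_deg, mach):
            """Toy surrogate for a lift coefficient; replaces an expensive CFD run."""
            return 0.1 * alpha_deg + 0.3 * mach - 0.02 * alpha_deg * (mach - 0.7) ** 2

        n = 50_000
        alpha = rng.normal(2.0, 0.3, n)   # uncertain angle of attack [deg]
        mach = rng.normal(0.73, 0.01, n)  # uncertain Mach number

        cl = lift_surrogate(alpha, mach)
        print(f"C_L: mean = {cl.mean():.4f}, std = {cl.std(ddof=1):.4f}")
        print("95% interval:", np.percentile(cl, [2.5, 97.5]).round(4))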

  9. Uncertainty in spatial planning proceedings

    Aleš Mlakar

    2009-01-01

    Full Text Available Uncertainty is distinctive of spatial planning as it arises from the necessity to co-ordinate the various interests within the area, from the urgency of adopting spatial planning decisions, from the complexity of the environment, physical space and society, from addressing the uncertainty of the future and from the uncertainty of actually making the right decision. Response to uncertainty is a series of measures that mitigate the effects of uncertainty itself. These measures are based on two fundamental principles – standardization and optimization. The measures lie in knowledge enhancement and spatial planning comprehension, in the legal regulation of changes, in the existence of spatial planning as a means of co-ordinating different interests, in active planning and the constructive resolution of current spatial problems, in the integration of spatial planning and the environmental protection process, in the implementation of analysis as the foundation of spatial planners' activities, in methods of thinking outside the parameters, in forming clear spatial concepts, in creating a transparent spatial management system, and in the enforcement of participatory processes.

  10. Uncertainty modeling and decision support

    Yager, Ronald R.

    2004-01-01

    We first formulate the problem of decision making under uncertainty. The importance of the representation of our knowledge about the uncertainty in formulating a decision process is pointed out. We begin with a brief discussion of the case of probabilistic uncertainty. Next, in considerable detail, we discuss the case of decision making under ignorance. For this case the fundamental role of the attitude of the decision maker is noted and its subjective nature is emphasized. Next the case in which a Dempster-Shafer belief structure is used to model our knowledge of the uncertainty is considered. Here we also emphasize the subjective choices the decision maker must make in formulating a decision function. The case in which the uncertainty is represented by a fuzzy measure (monotonic set function) is then investigated. We then return to the Dempster-Shafer belief structure and show its relationship to the fuzzy measure. This relationship allows us to get a deeper understanding of the formulation of the decision function used in the Dempster-Shafer framework. We discuss how this deeper understanding allows a decision analyst to better make the subjective choices needed in the formulation of the decision function.
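
    The role of the decision maker's attitude under ignorance can be made concrete with the classical Hurwicz criterion, one simple instance of the attitude-dependent decision functions discussed; the payoff table below is invented for illustration.

        # Hurwicz criterion: score = alpha * best outcome + (1 - alpha) * worst outcome.
        # alpha encodes the decision maker's attitude (1 = fully optimistic, 0 = fully pessimistic).
        payoffs = {  # hypothetical payoffs of three alternatives under unknown states
            "A1": [20, 5, -10],
            "A2": [12, 10, 4],
            "A3": [30, -5, -20],
        }

        def hurwicz(outcomes, alpha):
            return alpha * max(outcomes) + (1 - alpha) * min(outcomes)

        for alpha in (0.2, 0.5, 0.8):
            best = max(payoffs, key=lambda a: hurwicz(payoffs[a], alpha))
            print(f"alpha = {alpha:.1f}: choose {best}")

    Running the loop shows the recommended alternative switching as alpha grows, which is exactly the subjectivity of attitude the paper emphasizes.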

  11. Does the central bank directly respond to output and inflation uncertainties in Turkey?

    Pelin Öge Güney

    2016-06-01

    Full Text Available This paper investigates the role of inflation and output uncertainties in monetary policy rules in Turkey for the period 2002:01–2014:02. In the literature it is suggested that uncertainty is a key element in monetary policy; hence, empirical models of monetary policy should take uncertainty into account. In this study, we estimate a forward-looking monetary reaction function of the Central Bank of the Republic of Turkey (CBRT). In addition to inflation and output gap variables, our reaction function also includes both the inflation and output growth uncertainties. Our results suggest that the CBRT is mainly concerned with price stability and responds significantly to inflation and growth uncertainties.

  12. Mind the Gap

    2008-09-01

    Astronomers have been able to study planet-forming discs around young Sun-like stars in unsurpassed detail, clearly revealing the motion and distribution of the gas in the inner parts of the disc. This result, which possibly implies the presence of giant planets, was made possible by the combination of a very clever method with ESO's Very Large Telescope. [Uncovering the disc - ESO PR Photo 27a/08: planet-forming disc] Planets could be home to other forms of life, so the study of exoplanets ranks very high in contemporary astronomy. More than 300 planets are already known to orbit stars other than the Sun, and these new worlds show an amazing diversity in their characteristics. But astronomers don't just look at systems where planets have already formed - they can also get great insights by studying the discs around young stars where planets may currently be forming. "This is like going 4.6 billion years back in time to watch how the planets of our own Solar System formed," says Klaus Pontoppidan from Caltech, who led the research. Pontoppidan and colleagues have analysed three young analogues of our Sun that are each surrounded by a disc of gas and dust from which planets could form. These three discs are just a few million years old and were known to have gaps or holes in them, indicating regions where the dust has been cleared and the possible presence of young planets. The new results not only confirm that gas is present in the gaps in the dust, but also enable astronomers to measure how the gas is distributed in the disc and how the disc is oriented. In regions where the dust appears to have been cleared out, molecular gas is still highly abundant. This can either mean that the dust has clumped together to form planetary embryos, or that a planet has already formed and is in the process of clearing the gas in the disc. For one of the stars, SR 21, a likely explanation is the presence of a massive giant planet orbiting at less than 3.5 times the distance

  13. On the uncertainty principle. V

    Halpern, O.

    1976-01-01

    The treatment of ideal experiments connected with the uncertainty principle is continued. The author successively analyzes measurements of momentum and position, and discusses the common reason why the results in all cases differ from the conventional ones. A similar difference exists for the measurement of field strengths. The interpretation given by Weizsaecker, who tried to interpret Bohr's complementarity principle by introducing a multi-valued logic, is analyzed. The treatment of the uncertainty principle ΔE Δt is deferred to a later paper, as is the interpretation of the method of variation of constants. Every ideal experiment discussed shows various lower limits for the value of the uncertainty product; these limits depend on the experimental arrangement and are always (considerably) larger than h. (Auth.)

  14. Davis-Besse uncertainty study

    Davis, C.B.

    1987-08-01

    The uncertainties of calculations of loss-of-feedwater transients at Davis-Besse Unit 1 were determined to address concerns of the US Nuclear Regulatory Commission relative to the effectiveness of feed and bleed cooling. Davis-Besse Unit 1 is a pressurized water reactor of the raised-loop Babcock and Wilcox design. A detailed, quality-assured RELAP5/MOD2 model of Davis-Besse was developed at the Idaho National Engineering Laboratory. The model was used to perform an analysis of the loss-of-feedwater transient that occurred at Davis-Besse on June 9, 1985. A loss-of-feedwater transient followed by feed and bleed cooling was also calculated. The evaluation of uncertainty was based on the comparisons of calculations and data, comparisons of different calculations of the same transient, sensitivity calculations, and the propagation of the estimated uncertainty in initial and boundary conditions to the final calculated results

  15. Decommissioning Funding: Ethics, Implementation, Uncertainties

    2007-01-01

    This status report on decommissioning funding: ethics, implementation, uncertainties is based on a review of recent literature and materials presented at NEA meetings in 2003 and 2004, and particularly at a topical session organised in November 2004 on funding issues associated with the decommissioning of nuclear power facilities. The report also draws on the experience of the NEA Working Party on Decommissioning and Dismantling (WPDD). This report offers, in a concise form, an overview of relevant considerations on decommissioning funding mechanisms with regard to ethics, implementation and uncertainties. Underlying ethical principles found in international agreements are identified, and factors influencing the accumulation and management of funds for decommissioning nuclear facilities are discussed together with the main sources of uncertainties of funding systems

  16. Correlated uncertainties in integral data

    McCracken, A.K.

    1978-01-01

    The use of correlated uncertainties in calculational data is shown, in the cases investigated, to lead to a reduction in the uncertainty of calculated quantities of importance to reactor design. It is stressed, however, that such reductions are likely to be important in a minority of cases of practical interest. The effect of uncertainties in detector cross-sections is considered and is seen to be, in some cases, of equal importance to that in the data used in calculations. Numerical investigations have been limited by the sparse information available on data correlations; some comparisons made of these data reveal quite large inconsistencies for both detector cross-sections and cross-sections of interest for reactor calculations
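
    How correlated data uncertainties can reduce the uncertainty of a calculated quantity follows from the standard first-order propagation formula var(f) = s Σ s^T; the sensitivities and uncertainty values below are illustrative only and are not taken from the report.

        import numpy as np

        # f depends on two cross-sections x1, x2 with relative sensitivities s = d(ln f)/d(ln x).
        s = np.array([1.0, -1.0])          # e.g. a reaction-rate ratio: +1 and -1 sensitivity
        rel_unc = np.array([0.05, 0.05])   # 5 % relative uncertainty on each cross-section

        def combined_rel_unc(corr):
            # Build the relative covariance matrix with the given correlation and propagate it.
            cov = np.outer(rel_unc, rel_unc) * np.array([[1.0, corr], [corr, 1.0]])
            return np.sqrt(s @ cov @ s)

        for corr in (0.0, 0.5, 0.9):
            print(f"correlation = {corr:.1f}: relative uncertainty of f = "
                  f"{combined_rel_unc(corr):.3%}")

    With positively correlated inputs and opposite-signed sensitivities, the combined uncertainty drops well below the uncorrelated value, which is the kind of reduction the abstract refers to.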

  17. Uncertainty and Sensitivity Analyses Plan

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project

  18. PhoneGap for enterprise

    Shotts, Kerri

    2014-01-01

    This book is intended for developers who wish to use PhoneGap to develop useful, rich, secure mobile applications for their enterprise environment. The book assumes you have working knowledge of PhoneGap, HTML5, CSS3, and JavaScript, and a reasonable understanding of networking and n-tier architectures.

  19. Summary of existing uncertainty methods

    Glaeser, Horst

    2013-01-01

    A summary of the existing and most used uncertainty methods is presented, and the main features are compared. One of these methods is the order statistics method based on Wilks' formula. It is applied in safety research as well as in licensing. This method was first proposed by GRS for use in deterministic safety analysis, and is now used by many organisations world-wide. Its advantage is that the number of potential uncertain input and output parameters is not limited to a small number. Such a limitation was necessary for the first demonstration of the Code Scaling Applicability Uncertainty Method (CSAU) by the United States Nuclear Regulatory Commission (USNRC). They did not apply Wilks' formula in their statistical method propagating input uncertainties to obtain the uncertainty of a single output variable, like peak cladding temperature. A Phenomena Identification and Ranking Table (PIRT) was set up in order to limit the number of uncertain input parameters, and consequently, the number of calculations to be performed. Another purpose of such a PIRT process is to identify the most important physical phenomena which a computer code should be able to calculate. The validation of the code should be focused on the identified phenomena. Response surfaces are used in some applications replacing the computer code for performing a high number of calculations. The second well-known uncertainty method is the Uncertainty Methodology Based on Accuracy Extrapolation (UMAE) and the follow-up method 'Code with the Capability of Internal Assessment of Uncertainty (CIAU)' developed by the University of Pisa. Unlike the statistical approaches, the CIAU does compare experimental data with calculation results. It does not consider uncertain input parameters. Therefore, the CIAU is highly dependent on the experimental database. The accuracy gained from the comparison between experimental data and calculated results is extrapolated to obtain the uncertainty of the system code predictions
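
    The order-statistics argument behind Wilks' formula reduces to a one-line calculation: for a first-order, one-sided tolerance limit, the smallest sample size n satisfying 1 - coverage**n >= confidence gives the required number of code runs. The sketch below reproduces the familiar 95 %/95 % result of 59 runs; the helper name is my own.

        def wilks_one_sided(coverage=0.95, confidence=0.95):
            """Smallest n with 1 - coverage**n >= confidence (first-order, one-sided)."""
            n = 1
            while 1.0 - coverage**n < confidence:
                n += 1
            return n

        print(wilks_one_sided(0.95, 0.95))  # -> 59
        print(wilks_one_sided(0.99, 0.95))  # -> 299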

  20. Uncertainty analysis in safety assessment

    Lemos, Francisco Luiz de [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN), Belo Horizonte, MG (Brazil); Sullivan, Terry [Brookhaven National Lab., Upton, NY (United States)

    1997-12-31

    Nuclear waste disposal is a very complex subject which requires the study of many different fields of science, like hydrogeology, meteorology, geochemistry, etc. In addition, the waste disposal facilities are designed to last for a very long period of time. Both of these conditions make safety assessment projections filled with uncertainty. This paper addresses approaches for treatment of uncertainties in the safety assessment modeling due to the variability of data and some current approaches used to deal with this problem. (author) 13 refs.; e-mail: lemos at bnl.gov; sulliva1 at bnl.gov

  1. Awe, uncertainty, and agency detection.

    Valdesolo, Piercarlo; Graham, Jesse

    2014-01-01

    Across five studies, we found that awe increases both supernatural belief (Studies 1, 2, and 5) and intentional-pattern perception (Studies 3 and 4) - two phenomena that have been linked to agency detection, or the tendency to interpret events as the consequence of intentional and purpose-driven agents. Effects were both directly and conceptually replicated, and mediational analyses revealed that these effects were driven by the influence of awe on tolerance for uncertainty. Experiences of awe decreased tolerance for uncertainty, which, in turn, increased the tendency to believe in nonhuman agents and to perceive human agency in random events.

  2. Uncertainty and sensitivity analysis of the nuclear fuel thermal behavior

    Boulore, A., E-mail: antoine.boulore@cea.fr [Commissariat a l' Energie Atomique (CEA), DEN, Fuel Research Department, 13108 Saint-Paul-lez-Durance (France); Struzik, C. [Commissariat a l' Energie Atomique (CEA), DEN, Fuel Research Department, 13108 Saint-Paul-lez-Durance (France); Gaudier, F. [Commissariat a l' Energie Atomique (CEA), DEN, Systems and Structure Modeling Department, 91191 Gif-sur-Yvette (France)

    2012-12-15

    Highlights: • A complete quantitative method for uncertainty propagation and sensitivity analysis is applied. • The thermal conductivity of UO{sub 2} is modeled as a random variable. • The first source of uncertainty is the linear heat rate. • The second source of uncertainty is the thermal conductivity of the fuel. - Abstract: In the global framework of nuclear fuel behavior simulation, the response of the models describing the physical phenomena occurring during the irradiation in reactor is mainly conditioned by the confidence in the calculated temperature of the fuel. Amongst all parameters influencing the temperature calculation in our fuel rod simulation code (METEOR V2), several sources of uncertainty have been identified as being the most sensitive: thermal conductivity of UO{sub 2}, radial distribution of power in the fuel pellet, local linear heat rate in the fuel rod, geometry of the pellet and thermal transfer in the gap. Expert judgment and inverse methods have been used to model the uncertainty of these parameters using theoretical distributions and correlation matrices. Propagation of these uncertainties in the METEOR V2 code using the URANIE framework and a Monte-Carlo technique has been performed in different experimental irradiations of UO{sub 2} fuel. At every time step of the simulated experiments, we get a temperature statistical distribution which results from the initial distributions of the uncertain parameters. We then can estimate confidence intervals of the calculated temperature. In order to quantify the sensitivity of the calculated temperature to each of the uncertain input parameters and data, we have also performed a sensitivity analysis using the Sobol' indices at first order.
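
    A stripped-down analogue of such a propagation study is sketched below: the classical steady-state relation dT = q' / (4*pi*k) for the centre-to-surface temperature rise of a cylindrical pellet stands in for the METEOR V2 code, and the input distributions are invented rather than the expert-judgment ones used in the paper.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 100_000

        # Hypothetical uncertain inputs (normal, 1-sigma):
        k = rng.normal(3.0, 0.3, n)        # UO2 thermal conductivity [W/m/K]
        q_lin = rng.normal(20e3, 1e3, n)   # linear heat rate [W/m]
        t_surf = 700.0                     # assumed pellet surface temperature [K]

        # Steady-state centreline temperature of a cylindrical pellet:
        # T_centre = T_surface + q' / (4 * pi * k)
        t_centre = t_surf + q_lin / (4.0 * np.pi * k)

        lo, hi = np.percentile(t_centre, [2.5, 97.5])
        print(f"T_centre: mean = {t_centre.mean():.0f} K, 95% interval = [{lo:.0f}, {hi:.0f}] K")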

  3. Linear Programming Problems for Generalized Uncertainty

    Thipwiwatpotjana, Phantipa

    2010-01-01

    Uncertainty occurs when there is more than one realization that can represent a piece of information. This dissertation concerns only discrete realizations of an uncertainty. Different interpretations of an uncertainty and their relationships are addressed when the uncertainty is not a probability of each realization. A well-known model that can handle…

  4. Survey and Evaluate Uncertainty Quantification Methodologies

    Lin, Guang; Engel, David W.; Eslinger, Paul W.

    2012-02-01

    The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that will develop and deploy state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models with uncertainty quantification, optimization, risk analysis and decision making capabilities. The CCSI Toolset will incorporate commercial and open-source software currently in use by industry and will also develop new software tools as necessary to fill technology gaps identified during execution of the project. The CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. The goal of CCSI is to deliver a toolset that can simulate the scale-up of a broad set of new carbon capture technologies from laboratory scale to full commercial scale. To provide a framework around which the toolset can be developed and demonstrated, we will focus on three Industrial Challenge Problems (ICPs) related to carbon capture technologies relevant to U.S. pulverized coal (PC) power plants. Post combustion capture by solid sorbents is the technology focus of the initial ICP (referred to as ICP A). The goal of the uncertainty quantification (UQ) task (Task 6) is to provide a set of capabilities to the user community for the quantification of uncertainties associated with the carbon

  5. The fluctuating gap model

    Cao, Xiaobin

    2011-01-15

    The quasi-one-dimensional systems exhibit some unusual phenomena, such as the Peierls instability, the pseudogap phenomenon and the absence of a Fermi-Dirac distribution function line shape in photoemission spectroscopy. Ever since the discovery of materials with highly anisotropic properties, it has been recognized that fluctuations play an important role above the three-dimensional phase transition. This regime, where precursor fluctuations are present, can be described by the so-called fluctuating gap model (FGM) which was derived from the Froehlich Hamiltonian to study the low energy physics of the one-dimensional electron-phonon system. Not only is the FGM of great interest in the context of quasi-one-dimensional materials, liquid metals and spin waves above T{sub c} in ferromagnets, but also in the semiclassical approximation of superconductivity, it is possible to replace the original three-dimensional problem by a directional average over an effectively one-dimensional problem, which in the weak coupling limit is described by the FGM. In this work, we investigate the FGM in a wide temperature range with different statistics of the order parameter fluctuations. We derive a formally exact solution to this problem and calculate the density of states, the spectral function and the optical conductivity. In our calculation, we show that a Dyson singularity appears in the low energy density of states for Gaussian fluctuations in the commensurate case. In the incommensurate case, there is no such singularity, and the zero frequency density of states varies differently as a function of the correlation lengths for different statistics of the order parameter fluctuations. Using the density of states we calculated with non-Gaussian order parameter fluctuations, we are able to calculate the static spin susceptibility which agrees with the experimental data very well. In the calculation of the spectral functions, we show that as the correlation increases, the

  7. Bridging the terahertz gap

    Davies, Giles; Linfield, Edmund

    2004-01-01

    Over the last century or so, physicists and engineers have progressively explored and conquered the electromagnetic spectrum. Starting with visible light, we have encroached outwards, developing techniques for generating and detecting radiation at both higher and lower frequencies. And as each successive region of the spectrum has been colonized, we have developed technology to exploit the radiation found there. X-rays, for example, are routinely used to image hidden objects. Near-infrared radiation is used in fibre-optic communications and in compact-disc players, while microwaves are used to transmit signals from your mobile phone. But there is one part of the electromagnetic spectrum that has steadfastly resisted our advances. This is the terahertz region, which ranges from frequencies of about 300 GHz to 10 THz (10 x 10{sup 12} Hz). This corresponds to wavelengths of between about 1 and 0.03 mm, and lies between the microwave and infrared regions of the spectrum. However, the difficulties involved in making suitably compact terahertz sources and detectors have meant that this region of the spectrum has only begun to be explored thoroughly over the last decade. A particularly intriguing feature of terahertz radiation is that the semiconductor devices that generate radiation at frequencies above and below this range operate in completely different ways. At lower frequencies, microwaves and millimetre-waves can be generated by 'electronic' devices such as those found in mobile phones. At higher frequencies, near-infrared and visible light are generated by 'optical' devices such as semiconductor laser diodes, in which electrons emit light when they jump across the semiconductor band gap. Unfortunately, neither electronic nor optical devices can conveniently be made to work in the terahertz region because the terahertz frequency range sits between the electronic and optical regions of the electromagnetic spectrum. Developing a terahertz source is therefore a

  8. Differentiating intolerance of uncertainty from three related but distinct constructs.

    Rosen, Natalie O; Ivanova, Elena; Knäuper, Bärbel

    2014-01-01

    Individual differences in uncertainty have been associated with heightened anxiety, stress and approach-oriented coping. Intolerance of uncertainty (IU) is a trait characteristic that arises from negative beliefs about uncertainty and its consequences. Researchers have established the central role of IU in the development of problematic worry and maladaptive coping, highlighting the importance of this construct to anxiety disorders. However, there is a need to improve our understanding of the phenomenology of IU. The goal of this paper was to present hypotheses regarding the similarities and differences between IU and three related constructs--intolerance of ambiguity, uncertainty orientation, and need for cognitive closure--and to call for future empirical studies to substantiate these hypotheses. To assist with achieving this goal, we conducted a systematic review of the literature, which also served to identify current gaps in knowledge. This paper differentiates these constructs by outlining each definition and general approaches to assessment, reviewing the existing empirical relations, and proposing theoretical similarities and distinctions. Findings may assist researchers in selecting the appropriate construct to address their research questions. Future research directions for the application of these constructs, particularly within the field of clinical and health psychology, are discussed.

  9. Harvest Regulations and Implementation Uncertainty in Small Game Harvest Management

    Pål F. Moa

    2017-09-01

    Full Text Available A main challenge in harvest management is to set policies that maximize the probability that management goals are met. While the management cycle includes multiple sources of uncertainty, only some of these have received considerable attention. Currently, there is a large gap in our knowledge about implementation of harvest regulations, and about the extent to which indirect control methods such as harvest regulations are actually able to regulate harvest in accordance with intended management objectives. In this perspective article, we first summarize and discuss hunting regulations currently used in management of grouse species (Tetraonidae) in Europe and North America. Management models suggested for grouse are most often based on proportional harvest or threshold harvest principles. These models are all built on theoretical principles for sustainable harvesting, and provide in the end an estimate of the total allowable catch. However, implementation uncertainty is rarely examined in empirical or theoretical harvest studies, and few general findings have been reported. Nevertheless, circumstantial evidence suggests that many of the most popular regulations act in a depensatory way, so that harvest bags are more limited in years (or areas) where game density is high, contrary to general recommendations. A better understanding of the implementation uncertainty related to harvest regulations is crucial in order to establish sustainable management systems. We suggest that scenario tools like Management System Evaluation (MSE) should be more frequently used to examine robustness of currently applied harvest regulations to such implementation uncertainty until more empirical evidence is available.

  10. Designing Phononic Crystals with Wide and Robust Band Gaps

    Jia, Zian; Chen, Yanyu; Yang, Haoxiang; Wang, Lifeng

    2018-04-01

    Phononic crystals (PnCs) engineered to manipulate and control the propagation of mechanical waves have enabled the design of a range of novel devices, such as waveguides, frequency modulators, and acoustic cloaks, for which wide and robust phononic band gaps are highly preferable. While numerous PnCs have been designed in recent decades, to the best of our knowledge, PnCs that possess simultaneously wide and robust band gaps (to randomness and deformations) have not yet been reported. Here, we demonstrate that by combining the band-gap formation mechanisms of Bragg scattering and local resonances (the latter being dominant), PnCs with wide and robust phononic band gaps can be established. The robustness of the phononic band gaps is then discussed from two aspects: robustness to geometric randomness (manufacturing defects) and robustness to deformations (mechanical stimuli). Analytical formulations further predict the optimal design parameters, and an uncertainty analysis quantifies the randomness effect of each design parameter. Moreover, we show that the deformation robustness originates from a local resonance-dominant mechanism together with the suppression of structural instability. Importantly, the proposed PnCs require only a small number of layers of elements (three unit cells) to obtain broad, robust, and strong attenuation bands, which offer great potential in designing flexible and deformable phononic devices.

  11. Minimum period and the gap in periods of Cataclysmic binaries

    Paczynski, B.; Sienkiewicz, R.

    1983-01-01

    The 81-minute cutoff to the orbital periods of hydrogen-rich cataclysmic binaries is consistent with evolution of those systems being dominated by angular momentum losses due to gravitational radiation. Unfortunately, many uncertainties, mainly poorly known atmospheric opacities below 2000 K, make it physically impossible to verify the quadrupole formula for gravitational radiation by using the observed cutoff at 81 minutes. The upper boundary of the gap in orbital periods observed at about 3 hours is almost certainly due to enhanced angular momentum losses from cataclysmic binaries which have longer periods. The physical mechanism of those losses is not identified, but a possible importance of stellar winds is pointed out. The lower boundary of the gap may be explained by the oldest cataclysmic binaries, whose periods evolved past the minimum at 81 minutes and reached the value of 2 hours within about 12 x 10{sup 9} years after the binary had formed. Those binaries should have secondary components of only 0.02 solar masses, and their periods could be used to estimate ages of the oldest cataclysmic stars, and presumably the age of the Galaxy. An alternative explanation for the gap requires that binaries should be detached while crossing the gap. A possible mechanism for this phenomenon is discussed. It requires the secondary components to be about 0.2 solar masses in the binaries just below the gap

  13. Understanding the Uncertainty of an Effectiveness-Cost Ratio in Educational Resource Allocation: A Bayesian Approach

    Pan, Yilin

    2016-01-01

    Given the necessity to bridge the gap between what happened and what is likely to happen, this paper aims to explore how to apply Bayesian inference to cost-effectiveness analysis so as to capture the uncertainty of a ratio-type efficiency measure. The first part of the paper summarizes the characteristics of the evaluation data that are commonly…

  14. Interpolation in Time Series : An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment

    Lepot, M.J.; Aubin, Jean Baptiste; Clemens, F.H.L.R.

    2017-01-01

    A thorough review has been performed on interpolation methods to fill gaps in time-series, efficiency criteria, and uncertainty quantifications. On one hand, there are numerous available methods: interpolation, regression, autoregressive, machine learning methods, etc. On the other hand, there are
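
    As the simplest member of the reviewed family of methods, linear interpolation over a gap can be written in a few lines; the series below is synthetic and serves only to show the mechanics.

        import numpy as np

        # Synthetic time series with a gap (NaN marks missing samples).
        t = np.arange(10.0)
        y = np.array([1.0, 1.2, 1.1, np.nan, np.nan, np.nan, 2.0, 2.1, 2.3, 2.2])

        missing = np.isnan(y)
        y_filled = y.copy()
        # np.interp linearly interpolates the missing times from the observed ones.
        y_filled[missing] = np.interp(t[missing], t[~missing], y[~missing])

        print(y_filled)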

  15. Chapter 3: Traceability and uncertainty

    McEwen, Malcolm

    2014-01-01

    Chapter 3 presents: an introduction; Traceability (measurement standard, role of the Bureau International des Poids et Mesures, Secondary Standards Laboratories, documentary standards and traceability as process review); Uncertainty (Example 1 - Measurement, M{sub raw}(SSD); Example 2 - Calibration data, N{sub D,w}{sup 60Co}, k{sub Q}; Example 3 - Correction factor, P{sub TP}); and Conclusion

  16. Competitive Capacity Investment under Uncertainty

    X. Li (Xishu); R.A. Zuidwijk (Rob); M.B.M. de Koster (René); R. Dekker (Rommert)

    2016-01-01

    We consider a long-term capacity investment problem in a competitive market under demand uncertainty. Two firms move sequentially in the competition and a firm’s capacity decision interacts with the other firm’s current and future capacity. Throughout the investment race, a firm can

  17. Uncertainty quantification and error analysis

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  18. Uncertainties in radioecological assessment models

    Hoffman, F.O.; Miller, C.W.; Ng, Y.C.

    1983-01-01

    Environmental radiological assessments rely heavily on the use of mathematical models. The predictions of these models are inherently uncertain because models are inexact representations of real systems. The major sources of this uncertainty are related to bias in model formulation and imprecision in parameter estimation. The magnitude of uncertainty is a function of the questions asked of the model and the specific radionuclides and exposure pathways of dominant importance. It is concluded that models developed as research tools should be distinguished from models developed for assessment applications. Furthermore, increased model complexity does not necessarily guarantee increased accuracy. To improve the realism of assessment modeling, stochastic procedures are recommended that translate uncertain parameter estimates into a distribution of predicted values. These procedures also permit the importance of model parameters to be ranked according to their relative contribution to the overall predicted uncertainty. Although confidence in model predictions can be improved through site-specific parameter estimation and increased model validation, health risk factors and internal dosimetry models will probably remain important contributors to the amount of uncertainty that is irreducible. 41 references, 4 figures, 4 tables

  19. Numerical modeling of economic uncertainty

    Schjær-Jacobsen, Hans

    2007-01-01

    Representation and modeling of economic uncertainty are addressed by different modeling methods, namely stochastic variables and probabilities, interval analysis, and fuzzy numbers, in particular triple estimates. Focusing on discounted cash flow analysis, numerical results are presented, comparisons are made between alternative modeling methods, and characteristics of the methods are discussed.
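
    One of the compared representations, interval analysis, can be illustrated for a discounted cash flow; the cash flows and discount rates below are invented, and the bounds rely on the NPV being monotone in each input (all future flows are non-negative here), which keeps the example simple.

        # Interval NPV under a fixed initial outlay, interval future cash flows and an
        # interval discount rate.  With non-negative future cash flows, NPV is increasing
        # in each cash flow and decreasing in the rate, so the bounds are easy to pick.
        initial_outlay = 1000.0
        cash_flows = [(380.0, 420.0), (380.0, 450.0), (400.0, 500.0)]  # (low, high) per year
        rate = (0.05, 0.08)                                            # (low, high)

        def npv(flows, r):
            return -initial_outlay + sum(cf / (1.0 + r) ** t for t, cf in enumerate(flows, 1))

        npv_low = npv([lo for lo, _ in cash_flows], rate[1])   # worst case: low flows, high rate
        npv_high = npv([hi for _, hi in cash_flows], rate[0])  # best case: high flows, low rate
        print(f"NPV interval: [{npv_low:.0f}, {npv_high:.0f}]")

    The resulting interval straddles zero, which is precisely the kind of qualitative conclusion interval-based economic modeling is meant to expose.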

  20. Uncertainty covariances in robotics applications

    Smith, D.L.

    1984-01-01

    The application of uncertainty covariance matrices in the analysis of robot trajectory errors is explored. First, relevant statistical concepts are reviewed briefly. Then, a simple, hypothetical robot model is considered to illustrate methods for error propagation and performance test data evaluation. The importance of including error correlations is emphasized
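
    A minimal sketch of what such an analysis involves, assuming a planar two-link arm with small joint-angle errors, is the first-order propagation Sigma_xy = J Sigma_theta J^T; the link lengths, angles and covariances are made up and are not the hypothetical robot model of the paper.

        import numpy as np

        # Planar 2-link arm: end-effector position as a function of joint angles.
        l1, l2 = 0.5, 0.3            # link lengths [m] (hypothetical)
        th1, th2 = 0.4, 0.9          # nominal joint angles [rad]

        # Jacobian of (x, y) with respect to (th1, th2).
        J = np.array([
            [-l1*np.sin(th1) - l2*np.sin(th1+th2), -l2*np.sin(th1+th2)],
            [ l1*np.cos(th1) + l2*np.cos(th1+th2),  l2*np.cos(th1+th2)],
        ])

        # Joint-angle covariance: 0.5 deg standard deviation, slightly correlated.
        sd = np.deg2rad(0.5)
        Sigma_theta = sd**2 * np.array([[1.0, 0.3], [0.3, 1.0]])

        # First-order propagation of joint uncertainty to Cartesian position uncertainty.
        Sigma_xy = J @ Sigma_theta @ J.T
        print("position covariance [m^2]:\n", Sigma_xy)
        print("1-sigma position errors [mm]:", 1e3 * np.sqrt(np.diag(Sigma_xy)))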

  1. Regulating renewable resources under uncertainty

    Hansen, Lars Gårn

    ) that a pro-quota result under uncertainty about prices and marginal costs is unlikely, requiring that the resource growth function is highly concave locally around the optimum and, 3) that quotas are always preferred if uncertainty about underlying structural economic parameters dominates. These results, showing that quotas are preferred in a number of situations, qualify the pro-fee message dominating prior studies.

  2. Uncertainty in the Real World

    Uncertainty in the Real World - Fuzzy Sets. Satish Kumar. General Article, Resonance – Journal of Science Education, Volume 4, Issue 2, February 1999, pp 37-47. Permanent link: https://www.ias.ac.in/article/fulltext/reso/004/02/0037-0047

  3. Uncertainty of dustfall monitoring results

    Martin A. van Nierop

    2017-06-01

    Full Text Available Fugitive dust has the ability to cause a nuisance and pollute the ambient environment, particularly from human activities, including construction and industrial sites and mining operations. As such, dustfall monitoring has occurred for many decades in South Africa; little has been published on the repeatability, uncertainty, accuracy and precision of dustfall monitoring. Repeatability assesses the consistency associated with the results of a particular measurement under the same conditions; the consistency of the laboratory is assessed to determine the uncertainty associated with dustfall monitoring conducted by the laboratory. The aim of this study was to improve the understanding of the uncertainty in dustfall monitoring, thereby improving the confidence in dustfall monitoring. Uncertainty of dustfall monitoring was assessed through a 12-month study of 12 sites that were located on the boundary of the study area. Each site contained a directional dustfall sampler, which was modified by removing the rotating lid, with four buckets (A, B, C and D) installed. Having four buckets on one stand allows each bucket to be exposed to the same conditions, for the same period of time; therefore, equal amounts of dust should be deposited in these buckets. The difference in the weight (mg) of the dust recorded from each bucket at each respective site was determined using the American Society for Testing and Materials method D1739 (ASTM D1739). The variability of the dust would provide the confidence level of dustfall monitoring when reporting to clients.
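
    The repeatability check described above boils down to comparing the four co-located buckets at each site; a toy version of that comparison is sketched below with invented deposition values, not the study's data.

        import numpy as np

        # Hypothetical dustfall results (mg) for buckets A-D at three sites in one month.
        site_results = {
            "site 1": [412.0, 398.0, 405.0, 421.0],
            "site 2": [150.0, 162.0, 149.0, 158.0],
            "site 3": [890.0, 905.0, 870.0, 915.0],
        }

        # Since the four buckets share the same exposure, their spread estimates repeatability.
        for site, w in site_results.items():
            w = np.asarray(w)
            cv = w.std(ddof=1) / w.mean()
            print(f"{site}: mean = {w.mean():.0f} mg, range = {w.max() - w.min():.0f} mg, CV = {cv:.1%}")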

  4. Knowledge Uncertainty and Composed Classifier

    Klimešová, Dana; Ocelíková, E.

    2007-01-01

    Vol. 1, No. 2 (2007), pp. 101-105, ISSN 1998-0140. Institutional research plan: CEZ:AV0Z10750506. Keywords: Boosting architecture * contextual modelling * composed classifier * knowledge management * knowledge * uncertainty. Subject RIV: IN - Informatics, Computer Science

  5. Uncertainty propagation in nuclear forensics

    Pommé, S.; Jerome, S.M.; Venchiarutti, C.

    2014-01-01

    Uncertainty propagation formulae are presented for age dating in support of nuclear forensics. The age of radioactive material in this context refers to the time elapsed since a particular radionuclide was chemically separated from its decay product(s). The decay of the parent radionuclide and ingrowth of the daughter nuclide are governed by statistical decay laws. Mathematical equations allow calculation of the age of specific nuclear material through the atom ratio between parent and daughter nuclides, or through the activity ratio provided that the daughter nuclide is also unstable. The derivation of the uncertainty formulae of the age may present some difficulty to the user community and so the exact solutions, some approximations, a graphical representation and their interpretation are presented in this work. Typical nuclides of interest are actinides in the context of non-proliferation commitments. The uncertainty analysis is applied to a set of important parent–daughter pairs and the need for more precise half-life data is examined. - Highlights: • Uncertainty propagation formulae for age dating with nuclear chronometers. • Applied to parent–daughter pairs used in nuclear forensics. • Investigated need for better half-life data
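
    A numerical sketch of the atom-ratio chronometer and its uncertainty is given below for the common Pu-241/Am-241 pair (approximate half-lives); the measured ratio and its uncertainty are invented, and the age uncertainty is propagated by a simple finite-difference derivative rather than the exact formulae derived in the paper.

        import numpy as np

        LN2 = np.log(2.0)
        # Approximate half-lives [years]; consult evaluated nuclear data for real work.
        lam_p = LN2 / 14.3    # Pu-241 (parent)
        lam_d = LN2 / 432.2   # Am-241 (daughter)

        def age_from_atom_ratio(R):
            """Time since separation, assuming no daughter atoms at t = 0.
            R = N_daughter / N_parent; standard ingrowth solution of the decay chain."""
            a = lam_d - lam_p
            return -np.log(1.0 - R * a / lam_p) / a

        R, sigma_R = 0.65, 0.02          # hypothetical measured atom ratio and its 1-sigma
        t = age_from_atom_ratio(R)

        # Propagate sigma_R numerically: sigma_t ~ |dt/dR| * sigma_R.
        h = 1e-6
        dt_dR = (age_from_atom_ratio(R + h) - age_from_atom_ratio(R - h)) / (2 * h)
        print(f"age = {t:.2f} y +/- {abs(dt_dR) * sigma_R:.2f} y")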

  6. WASH-1400: quantifying the uncertainties

    Erdmann, R.C.; Leverenz, F.L. Jr.; Lellouche, G.S.

    1981-01-01

    The purpose of this paper is to focus on the limitations of the WASH-1400 analysis in estimating the risk from light water reactors (LWRs). This assessment attempts to modify the quantification of the uncertainty in and estimate of risk as presented by the RSS (reactor safety study). 8 refs

  7. Model uncertainty in growth empirics

    Prüfer, P.

    2008-01-01

    This thesis applies so-called Bayesian model averaging (BMA) to three different economic questions substantially exposed to model uncertainty. Chapter 2 addresses a major issue of modern development economics: the analysis of the determinants of pro-poor growth (PPG), which seeks to combine high

  8. Uncertainty governance: an integrated framework for managing and communicating uncertainties

    Umeki, H.; Naito, M.; Takase, H.

    2004-01-01

    Treatment of uncertainty, or in other words, reasoning with imperfect information is widely recognised as being of great importance within performance assessment (PA) of the geological disposal mainly because of the time scale of interest and spatial heterogeneity that geological environment exhibits. A wide range of formal methods have been proposed for the optimal processing of incomplete information. Many of these methods rely on the use of numerical information, the frequency based concept of probability in particular, to handle the imperfections. However, taking quantitative information as a base for models that solve the problem of handling imperfect information merely creates another problem, i.e., how to provide the quantitative information. In many situations this second problem proves more resistant to solution, and in recent years several authors have looked at a particularly ingenious way in accordance with the rules of well-founded methods such as Bayesian probability theory, possibility theory, and the Dempster-Shafer theory of evidence. Those methods, while drawing inspiration from quantitative methods, do not require the kind of complete numerical information required by quantitative methods. Instead they provide information that, though less precise than that provided by quantitative techniques, is often, if not sufficient, the best that could be achieved. Rather than searching for the best method for handling all imperfect information, our strategy for uncertainty management, that is recognition and evaluation of uncertainties associated with PA followed by planning and implementation of measures to reduce them, is to use whichever method best fits the problem at hand. Such an eclectic position leads naturally to integration of the different formalisms. While uncertainty management based on the combination of semi-quantitative methods forms an important part of our framework for uncertainty governance, it only solves half of the problem

  9. The generaltion gap in nursing

    J.G.P. van Niekerk

    1984-09-01

    Full Text Available Generation gap is one of those catch phrases that we so often use, and misuse, to excuse ourselves or to cover up for our shortcomings. It is like the shortage of nurses behind which we hide from all our nursing problems. Although it is such a commonly used phrase, do we really know what it means? When you consult the Oxford Dictionary, you will find that it defines generation gap as: differences of opinion between those of different generations. It will surprise most people that the generation gap becomes a problem only when there are differences of opinion.

  10. Wide gap semiconductor microwave devices

    Buniatyan, V V; Aroutiounian, V M

    2007-01-01

    A review of properties of wide gap semiconductor materials such as diamond, diamond-like carbon films, SiC, GaP, GaN and AlGaN/GaN that are relevant to electronic, optoelectronic and microwave applications is presented. We discuss the latest situation and perspectives based on experimental and theoretical results obtained for wide gap semiconductor devices. Parameters are taken from the literature and from some of our theoretical works. The correspondence between theoretical results and parameters of devices is critically analysed. (review article)

  11. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    Elert, M.

    1996-09-01

    In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario show that the predictions in many cases are very similar, e g in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come to focus in this study: There are large differences in the predicted soil hydrology and as a consequence also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root

  12. Stereo-particle image velocimetry uncertainty quantification

    Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric

  13. Experimental uncertainty estimation and statistics for data having interval uncertainty.

    Kreinovich, Vladik (Applied Biomathematics, Setauket, New York); Oberkampf, William Louis (Applied Biomathematics, Setauket, New York); Ginzburg, Lev (Applied Biomathematics, Setauket, New York); Ferson, Scott (Applied Biomathematics, Setauket, New York); Hajagos, Janos (Applied Biomathematics, Setauket, New York)

    2007-05-01

    This report addresses the characterization of measurements that include epistemic uncertainties in the form of intervals. It reviews the application of basic descriptive statistics to data sets which contain intervals rather than exclusively point estimates. It describes algorithms to compute various means, the median and other percentiles, variance, interquartile range, moments, confidence limits, and other important statistics and summarizes the computability of these statistics as a function of sample size and characteristics of the intervals in the data (degree of overlap, size and regularity of widths, etc.). It also reviews the prospects for analyzing such data sets with the methods of inferential statistics such as outlier detection and regressions. The report explores the tradeoff between measurement precision and sample size in statistical results that are sensitive to both. It also argues that an approach based on interval statistics could be a reasonable alternative to current standard methods for evaluating, expressing and propagating measurement uncertainties.
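
    As a small, hedged illustration of descriptive statistics on interval-valued data (not the report's own algorithms), the sketch below computes sharp bounds on the sample mean and median of a set of intervals; bounds on the variance are computationally harder in general and are not attempted here. The example data are invented.

        import numpy as np

        # Each measurement is an interval [lo, hi] rather than a point value (invented data).
        intervals = np.array([[1.0, 1.4], [0.8, 1.1], [1.2, 1.9], [0.9, 1.0], [1.1, 1.6]])
        lo, hi = intervals[:, 0], intervals[:, 1]

        # The mean of interval data is itself an interval: [mean of lower ends, mean of upper ends].
        mean_bounds = (lo.mean(), hi.mean())

        # The median is bounded by the medians of the endpoint samples.
        median_bounds = (np.median(lo), np.median(hi))

        print("mean lies in  ", mean_bounds)
        print("median lies in", median_bounds)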

  14. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    Elert, M. [Kemakta Konsult AB, Stockholm (Sweden)] [ed.]

    1996-09-01

    In the Model Complexity subgroup of BIOMOVS II, models of varying complexity have been applied to the problem of downward transport of radionuclides in soils. A scenario describing a case of surface contamination of a pasture soil was defined. Three different radionuclides with different environmental behavior and radioactive half-lives were considered: Cs-137, Sr-90 and I-129. The intention was to give a detailed specification of the parameters required by different kinds of model, together with reasonable values for the parameter uncertainty. A total of seven modelling teams participated in the study using 13 different models. Four of the modelling groups performed uncertainty calculations using nine different modelling approaches. The models used range in complexity from analytical solutions of a 2-box model using annual average data to numerical models coupling hydrology and transport using data varying on a daily basis. The complex models needed to consider all aspects of radionuclide transport in a soil with a variable hydrology are often impractical to use in safety assessments. Instead, simpler models, often box models, are preferred. The comparison of predictions made with the complex models and the simple models for this scenario shows that the predictions are in many cases very similar, e.g. in the predictions of the evolution of the root zone concentration. However, in other cases differences of many orders of magnitude can appear. One example is the prediction of the flux to the groundwater of radionuclides being transported through the soil column. Some issues that have come into focus in this study: There are large differences in the predicted soil hydrology and, as a consequence, also in the radionuclide transport, which suggests that there are large uncertainties in the calculation of effective precipitation and evapotranspiration. The approach used for modelling the water transport in the root zone has an impact on the predictions of the decline in root

  15. Rapid research and implementation priority setting for wound care uncertainties.

    Trish A Gray

    Full Text Available People with complex wounds are more likely to be elderly, living with multimorbidity and wound related symptoms. A variety of products are available for managing complex wounds and a range of healthcare professionals are involved in wound care, yet there is a lack of good evidence to guide practice and services. These factors create uncertainty for those who deliver and those who manage wound care. Formal priority setting for research and implementation topics is needed to more accurately target the gaps in treatment and services. We solicited practitioner and manager uncertainties in wound care and held a priority setting workshop to facilitate a collaborative approach to prioritising wound care-related uncertainties. We recruited healthcare professionals who regularly cared for patients with complex wounds, were wound care specialists or managed wound care services. Participants submitted up to five wound care uncertainties in consultation with their colleagues, via an on-line survey, and attended a priority setting workshop. Submitted uncertainties were collated, sorted and categorised according to professional group. On the day of the workshop, participants were divided into four groups depending on their profession. Uncertainties submitted by their professional group were viewed, discussed and amended, prior to the first of three individual voting rounds. Participants cast up to ten votes for the uncertainties they judged as being high priority. Continuing in the professional groups, the top 10 uncertainties from each group were displayed, and the process was repeated. Groups were then brought together for a plenary session in which the final priorities were individually scored on a scale of 0-10 by participants. Priorities were ranked and results presented. Nominal group technique was used for generating the final uncertainties, voting and discussions. Thirty-three participants attended the workshop comprising: 10 specialist nurses, 10 district

  16. Rapid research and implementation priority setting for wound care uncertainties

    Dumville, Jo C.; Christie, Janice; Cullum, Nicky A.

    2017-01-01

    Introduction People with complex wounds are more likely to be elderly, living with multimorbidity and wound related symptoms. A variety of products are available for managing complex wounds and a range of healthcare professionals are involved in wound care, yet there is a lack of good evidence to guide practice and services. These factors create uncertainty for those who deliver and those who manage wound care. Formal priority setting for research and implementation topics is needed to more accurately target the gaps in treatment and services. We solicited practitioner and manager uncertainties in wound care and held a priority setting workshop to facilitate a collaborative approach to prioritising wound care-related uncertainties. Methods We recruited healthcare professionals who regularly cared for patients with complex wounds, were wound care specialists or managed wound care services. Participants submitted up to five wound care uncertainties in consultation with their colleagues, via an on-line survey, and attended a priority setting workshop. Submitted uncertainties were collated, sorted and categorised according to professional group. On the day of the workshop, participants were divided into four groups depending on their profession. Uncertainties submitted by their professional group were viewed, discussed and amended, prior to the first of three individual voting rounds. Participants cast up to ten votes for the uncertainties they judged as being high priority. Continuing in the professional groups, the top 10 uncertainties from each group were displayed, and the process was repeated. Groups were then brought together for a plenary session in which the final priorities were individually scored on a scale of 0–10 by participants. Priorities were ranked and results presented. Nominal group technique was used for generating the final uncertainties, voting and discussions. Results Thirty-three participants attended the workshop comprising: 10 specialist nurses

  17. Closing the Cybersecurity Skills Gap

    Rebecca Vogel

    2016-05-01

    Full Text Available The current consensus is that there is a worldwide gap in the skills needed for a competent cybersecurity workforce. This skills gap has implications for the national security sector, both public and private. Although the view is that this will take a concerted effort to rectify, it presents an opportunity for IT professionals, university students, and aspirants to take up jobs in national security, national intelligence, as well as military and law enforcement intelligence. This paper examines the context of the issue, the nature of the cybersecurity skills gap, and some key responses by governments to address the problem. The paper also examines the emerging employment trends, some of the employment challenges, and what these might mean for practice. The paper argues that the imperative is to close the cyber skills gap by taking advantage of the window of opportunity, allowing individuals interested in moving into the cybersecurity field to do so via education and training.

  18. Gap Surface Plasmon Waveguide Analysis

    Nielsen, Michael Grøndahl; Bozhevolnyi, Sergey I.

    2014-01-01

    Plasmonic waveguides supporting gap surface plasmons (GSPs) localized in a dielectric spacer between metal films are investigated numerically and the waveguiding properties at telecommunication wavelengths are presented. Especially, we emphasize that the mode confinement can advantageously...

  19. Prioritization Assessment for Capability Gaps in Weapon System of Systems Based on the Conditional Evidential Network

    Dong Pei

    2018-02-01

    Full Text Available The prioritization of capability gaps for weapon system of systems is the basis for design and capability planning in the system of systems development process. In order to address input information uncertainties, the prioritization of capability gaps is computed in two steps using the conditional evidential network method. First, we evaluated the belief distribution of degree of required satisfaction for capabilities, and then calculated the reverse conditional belief function between capability hierarchies. We also provided verification for the feasibility and effectiveness of the proposed method through a prioritization of capability gaps calculation using an example of a spatial-navigation-and-positioning system of systems.

  20. Applied research in uncertainty modeling and analysis

    Ayyub, Bilal

    2005-01-01

    Uncertainty has been a concern to engineers, managers, and scientists for many years. For a long time uncertainty has been considered synonymous with random, stochastic, statistic, or probabilistic. Since the early sixties views on uncertainty have become more heterogeneous. In the past forty years numerous tools that model uncertainty, above and beyond statistics, have been proposed by several engineers and scientists. The tool/method to model uncertainty in a specific context should really be chosen by considering the features of the phenomenon under consideration, not independent of what is known about the system and what causes uncertainty. In this fascinating overview of the field, the authors provide broad coverage of uncertainty analysis/modeling and its application. Applied Research in Uncertainty Modeling and Analysis presents the perspectives of various researchers and practitioners on uncertainty analysis and modeling outside their own fields and domain expertise. Rather than focusing explicitly on...

  1. Understanding the carbon dioxide gaps.

    Scheeren, Thomas W L; Wicke, Jannis N; Teboul, Jean-Louis

    2018-06-01

    The current review attempts to demonstrate the value of several forms of carbon dioxide (CO2) gaps in resuscitation of the critically ill patient as a monitor for the adequacy of the circulation, as a target for fluid resuscitation and also as a predictor of outcome. Fluid resuscitation is one of the key treatments in many intensive care patients. It remains a challenge in daily practice as both a shortage and an overload in intravascular volume are potentially harmful. Many different approaches have been developed for use as targets of fluid resuscitation. CO2 gaps can be used as a surrogate for the adequacy of cardiac output (CO) and as a marker for tissue perfusion and are therefore a potential target for resuscitation. CO2 gaps are easily measured via point-of-care analysers. We shed light on their potential use, as they are not yet widely used in clinical practice. Many studies were conducted on partial CO2 pressure differences or CO2 content (cCO2) differences either alone, or in combination with other markers for outcome or resuscitation adequacy. Furthermore, some studies deal with CO2 gap to O2 gap ratios as a target for goal-directed fluid therapy or as a marker for outcome. The CO2 gap is a sensitive marker of tissue hypoperfusion, with added value over traditional markers of tissue hypoxia in situations in which an oxygen diffusion barrier exists, such as tissue oedema and impaired microcirculation. Venous-to-arterial cCO2 or partial pressure gaps can be used to evaluate whether attempts to increase CO should be made. Considering the potential of the several forms of CO2 measurements and their ease of use via point-of-care analysers, it is advisable to implement CO2 gaps in standard clinical practice.
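
    As a purely illustrative, hedged sketch of the arithmetic behind the venous-to-arterial CO2 gap discussed above (not clinical guidance), the function below computes the partial-pressure gap and flags values above a commonly cited cut-off of about 6 mmHg; the cut-off default and the example readings are assumptions.

        def co2_gap(pv_co2_mmhg: float, pa_co2_mmhg: float, cutoff_mmhg: float = 6.0):
            """Venous-to-arterial CO2 partial pressure gap (assumed cut-off of ~6 mmHg)."""
            gap = pv_co2_mmhg - pa_co2_mmhg
            # A widened gap is read here as a possible sign of flow inadequate to metabolic
            # demand; this is a toy rule for illustration, not a clinical decision aid.
            return gap, gap > cutoff_mmhg

        # Illustrative values only
        gap, widened = co2_gap(pv_co2_mmhg=52.0, pa_co2_mmhg=44.0)
        print(f"CO2 gap = {gap:.1f} mmHg, widened = {widened}")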

  2. Colour reconnections and rapidity gaps

    Loennblad, Leif

    1996-01-01

    I argue that the success of recently proposed models describing events with large rapidity gaps in DIS at HERA in terms of non-perturbative colour exchange is heavily reliant on suppression of perturbative gluon emission in the proton direction. There is little or no physical motivation for such suppression and I show that a model without this suppression cannot describe the rapidity gap events at HERA. (author)

  3. Bridging the Gap (BRIEFING CHARTS)

    2007-03-05

    Briefing charts: "Bridging the Gap", Defense Advanced Research Projects Agency, presented by Dr. Robert F. Leheny, Deputy Director. [The source text contains only Report Documentation Page (Form Approved OMB) boilerplate; no abstract is available.]

  4. Improving the driver-automation interaction: an approach using automation uncertainty.

    Beller, Johannes; Heesen, Matthias; Vollrath, Mark

    2013-12-01

    The aim of this study was to evaluate whether communicating automation uncertainty improves the driver-automation interaction. A false system understanding of infallibility may provoke automation misuse and can lead to severe consequences in case of automation failure. The presentation of automation uncertainty may prevent this false system understanding and, as was shown by previous studies, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap. We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely. Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of fallibility for the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance. The presentation of automation uncertainty through a symbol improves overall driver-automation cooperation. Most automated systems in driving could benefit from displaying reliability information. This display might improve the acceptance of fallible systems and further enhance driver-automation cooperation.

  5. Uncertainty of the calibration factor

    1995-01-01

    According to present definitions, an error is the difference between a measured value and the "true" value. Thus an error has both a numerical value and a sign. In contrast, the uncertainty associated with a measurement is a parameter that characterizes the dispersion of the values "that could reasonably be attributed to the measurand". This parameter is normally an estimated standard deviation. An uncertainty, therefore, has no known sign and is usually assumed to be symmetrical. It is a measure of our lack of exact knowledge, after all recognized "systematic" effects have been eliminated by applying appropriate corrections. If errors were known exactly, the true value could be determined and there would be no problem left. In reality, errors are estimated in the best possible way and corrections made for them. Therefore, after application of all known corrections, errors need no further consideration (their expectation value being zero) and the only quantities of interest are uncertainties. 3 refs, 2 figs

  6. Quantifying the uncertainty in heritability.

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-05-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.
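
    To illustrate the two ways of summarizing uncertainty contrasted above, here is a minimal sketch (with simulated numbers, not the ARIC data or the paper's estimator) that reports a frequentist asymptotic-normal confidence interval from a point estimate and standard error alongside a credible interval from Bayesian posterior draws.

        import numpy as np

        rng = np.random.default_rng(0)

        # Frequentist: heritability MLE with an asymptotic standard error (assumed values).
        h2_hat, se_hat = 0.45, 0.08
        ci_freq = (h2_hat - 1.96 * se_hat, h2_hat + 1.96 * se_hat)

        # Bayesian: summarize uncertainty from posterior draws of h2 (simulated stand-in
        # for draws produced by an actual sampler).
        posterior_draws = np.clip(rng.normal(0.45, 0.08, size=20_000), 0.0, 1.0)
        ci_bayes = tuple(np.percentile(posterior_draws, [2.5, 97.5]))

        print(f"95% asymptotic CI:     ({ci_freq[0]:.3f}, {ci_freq[1]:.3f})")
        print(f"95% credible interval: ({ci_bayes[0]:.3f}, {ci_bayes[1]:.3f})")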

  7. Uncertainty in hydrological change modelling

    Seaby, Lauren Paige

    Hydrological change modelling methodologies generally use climate model outputs to force hydrological simulations under changed conditions. There are nested sources of uncertainty throughout this methodology, including the choice of climate model and subsequent bias correction methods. This Ph.D. study evaluates the uncertainty of the impact of climate change in hydrological simulations given multiple climate models and bias correction methods of varying complexity. Three distribution based scaling methods (DBS) were developed and benchmarked against a more simplistic and commonly used delta ... applied at the grid scale. Flux and state hydrological outputs which integrate responses over time and space showed more sensitivity to precipitation mean spatial biases and less so on extremes. In the investigated catchments, the projected change of groundwater levels and basin discharge between current ...

  8. Visualizing Summary Statistics and Uncertainty

    Potter, K.

    2010-08-12

    The graphical depiction of uncertainty information is emerging as a problem of great importance. Scientific data sets are not considered complete without indications of error, accuracy, or levels of confidence. The visual portrayal of this information is a challenging task. This work takes inspiration from graphical data analysis to create visual representations that show not only the data value, but also important characteristics of the data including uncertainty. The canonical box plot is reexamined and a new hybrid summary plot is presented that incorporates a collection of descriptive statistics to highlight salient features of the data. Additionally, we present an extension of the summary plot to two dimensional distributions. Finally, a use-case of these new plots is presented, demonstrating their ability to present high-level overviews as well as detailed insight into the salient features of the underlying data distribution. © 2010 The Eurographics Association and Blackwell Publishing Ltd.
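
    As a hedged, minimal illustration of augmenting a canonical box plot with additional descriptive statistics in the spirit described above (not the authors' hybrid summary plot itself), the sketch below overlays the sample mean and a one-standard-deviation band on a standard matplotlib box plot, using invented data.

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(1)
        data = rng.gamma(shape=2.0, scale=1.5, size=300)   # skewed, invented sample

        fig, ax = plt.subplots(figsize=(3, 4))
        ax.boxplot(data, positions=[1], widths=0.5)        # canonical box plot

        mean, std = data.mean(), data.std(ddof=1)
        ax.axhspan(mean - std, mean + std, xmin=0.3, xmax=0.7, alpha=0.2)  # +/- 1 sd band
        ax.plot(1, mean, marker="D")                        # sample mean marker
        ax.set_ylabel("value")
        ax.set_xticks([])
        fig.tight_layout()
        plt.show()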

  9. Visualizing Summary Statistics and Uncertainty

    Potter, K.; Kniss, J.; Riesenfeld, R.; Johnson, C.R.

    2010-01-01

    The graphical depiction of uncertainty information is emerging as a problem of great importance. Scientific data sets are not considered complete without indications of error, accuracy, or levels of confidence. The visual portrayal of this information is a challenging task. This work takes inspiration from graphical data analysis to create visual representations that show not only the data value, but also important characteristics of the data including uncertainty. The canonical box plot is reexamined and a new hybrid summary plot is presented that incorporates a collection of descriptive statistics to highlight salient features of the data. Additionally, we present an extension of the summary plot to two dimensional distributions. Finally, a use-case of these new plots is presented, demonstrating their ability to present high-level overviews as well as detailed insight into the salient features of the underlying data distribution. © 2010 The Eurographics Association and Blackwell Publishing Ltd.

  10. Statistical uncertainties and unrecognized relationships

    Rankin, J.P.

    1985-01-01

    Hidden relationships in specific designs directly contribute to inaccuracies in reliability assessments. Uncertainty factors at the system level may sometimes be applied in attempts to compensate for the impact of such unrecognized relationships. Often uncertainty bands are used to relegate unknowns to a miscellaneous category of low-probability occurrences. However, experience and modern analytical methods indicate that perhaps the dominant, most probable and significant events are sometimes overlooked in statistical reliability assurances. The author discusses the utility of two unique methods of identifying the otherwise often unforeseeable system interdependencies for statistical evaluations. These methods are sneak circuit analysis and a checklist form of common cause failure analysis. Unless these techniques (or a suitable equivalent) are also employed along with the more widely-known assurance tools, high reliability of complex systems may not be adequately assured. This concern is indicated by specific illustrations. 8 references, 5 figures

  11. The uncertainty budget in pharmaceutical industry

    Heydorn, Kaj

    ... of their uncertainty, exactly as described in GUM [2]. Pharmaceutical industry has therefore over the last 5 years shown increasing interest in accreditation according to ISO 17025 [3], and today uncertainty budgets are being developed for all so-called critical measurements. The uncertainty of results obtained ... that the uncertainty of a particular result is independent of the method used for its estimation. Several examples of uncertainty budgets for critical parameters based on the bottom-up procedure will be discussed, and it will be shown how the top-down method is used as a means of verifying uncertainty budgets, based ...

  12. Improvement of uncertainty relations for mixed states

    Park, Yong Moon

    2005-01-01

    We study a possible improvement of uncertainty relations. The Heisenberg uncertainty relation employs the commutator of a pair of conjugate observables to set the limit of quantum measurement of the observables. The Schroedinger uncertainty relation improves the Heisenberg uncertainty relation by adding the correlation in terms of the anti-commutator. However, both relations are insensitive to whether the state used is pure or mixed. We improve the uncertainty relations by introducing additional terms which measure the mixedness of the state. For the momentum and position operators as conjugate observables and for the thermal state of the quantum harmonic oscillator, it turns out that the equalities in the improved uncertainty relations hold.
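
    For reference, the two standard relations the abstract builds on can be written as follows (textbook results, not the paper's improved mixed-state bounds): the Robertson (Heisenberg-type) relation and the Robertson-Schroedinger relation, which adds the anti-commutator correlation term.

        \sigma_A^2 \sigma_B^2 \ge \left| \frac{1}{2i} \langle [A,B] \rangle \right|^2
        \quad \text{(Robertson)}

        \sigma_A^2 \sigma_B^2 \ge \left( \frac{1}{2} \langle \{A,B\} \rangle
          - \langle A \rangle \langle B \rangle \right)^2
          + \left| \frac{1}{2i} \langle [A,B] \rangle \right|^2
        \quad \text{(Robertson-Schroedinger)}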

  13. Adjoint-Based Uncertainty Quantification with MCNP

    Seifried, Jeffrey E. [Univ. of California, Berkeley, CA (United States)

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
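
    Below is a hedged illustration of the first-order ("sandwich") propagation of a nuclear-data covariance matrix through computed sensitivities that underlies adjoint-based uncertainty quantification in general; this is a generic sketch, not MCNP6 tooling, and the sensitivity vector and covariance matrix are invented.

        import numpy as np

        # Relative sensitivities dR/R per dσ/σ for three nuclear-data parameters (invented).
        S = np.array([0.8, -0.3, 0.15])

        # Relative covariance matrix of the nuclear data (invented, symmetric).
        C = np.array([
            [1.0e-4, 2.0e-5, 0.0],
            [2.0e-5, 4.0e-4, 1.0e-5],
            [0.0,    1.0e-5, 9.0e-5],
        ])

        # First-order sandwich rule: relative variance of the response R.
        rel_var_R = S @ C @ S
        print(f"Relative uncertainty in R: {np.sqrt(rel_var_R) * 100:.2f} %")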

  14. Conditional Betas and Investor Uncertainty

    Fernando D. Chague

    2013-01-01

    We derive theoretical expressions for market betas from a rational expectation equilibrium model where the representative investor does not observe if the economy is in a recession or an expansion. Market betas in this economy are time-varying and related to investor uncertainty about the state of the economy. The dynamics of betas will also vary across assets according to the assets' cash-flow structure. In a calibration exercise, we show that value and growth firms have cash-flow structures...

  15. Aggregate Uncertainty, Money and Banking

    Hongfei Sun

    2006-01-01

    This paper studies the problem of monitoring the monitor in a model of money and banking with aggregate uncertainty. It shows that when inside money is required as a means of bank loan repayment, a market of inside money is entailed at the repayment stage and generates information-revealing prices that perfectly discipline the bank. The incentive problem of a bank is costlessly overcome simply by involving inside money in repayment. Inside money distinguishes itself from outside money by its ...

  16. Decision Under Uncertainty in Diagnosis

    Kalme, Charles I.

    2013-01-01

    This paper describes the incorporation of uncertainty in diagnostic reasoning based on the set covering model of Reggia et al., extended to what, in the Artificial Intelligence dichotomy between deep and compiled (shallow, surface) knowledge based diagnosis, may be viewed as the generic form at the compiled end of the spectrum. A major undercurrent in this is advocating the need for a strong underlying model and an integrated set of support tools for carrying such a model in order to deal with ...

  17. Forecast Accuracy Uncertainty and Momentum

    Bing Han; Dong Hong; Mitch Warachka

    2009-01-01

    We demonstrate that stock price momentum and earnings momentum can result from uncertainty surrounding the accuracy of cash flow forecasts. Our model has multiple information sources issuing cash flow forecasts for a stock. The investor combines these forecasts into an aggregate cash flow estimate that has minimal mean-squared forecast error. This aggregate estimate weights each cash flow forecast by the estimated accuracy of its issuer, which is obtained from their past forecast errors. Mome...
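
    As a hedged sketch of the aggregation step described above (a generic minimum-MSE combination, not the paper's model), forecasts are weighted by the inverse of each issuer's estimated mean-squared forecast error computed from past errors; all numbers are invented.

        import numpy as np

        # Past forecast errors of three information sources (rows = past periods), invented.
        past_errors = np.array([
            [0.4, -1.2, 0.1],
            [-0.3, 0.9, -0.2],
            [0.5, -1.5, 0.3],
            [-0.2, 1.1, -0.1],
        ])

        # Estimate each issuer's accuracy as the inverse of its mean-squared error.
        mse = (past_errors ** 2).mean(axis=0)
        weights = (1.0 / mse) / (1.0 / mse).sum()

        # Current cash-flow forecasts from the three sources (invented).
        forecasts = np.array([2.1, 2.6, 2.0])
        aggregate = weights @ forecasts

        print("weights:", np.round(weights, 3))
        print(f"aggregate cash-flow estimate: {aggregate:.3f}")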

  18. Microeconomic Uncertainty and Macroeconomic Indeterminacy

    Fagnart, Jean-François; Pierrard, Olivier; Sneessens, Henri

    2005-01-01

    The paper proposes a stylized intertemporal macroeconomic model wherein the combination of decentralized trading and microeconomic uncertainty (taking the form of privately observed and uninsured idiosyncratic shocks) creates an information problem between agents and generates indeterminacy of the macroeconomic equilibrium. For a given value of the economic fundamentals, the economy admits a continuum of equilibria that can be indexed by the sales expectations of firms at the time of investme...

  19. LOFT differential pressure uncertainty analysis

    Evans, R.P.; Biladeau, G.L.; Quinn, P.A.

    1977-03-01

    A performance analysis of the LOFT differential pressure (ΔP) measurement is presented. Along with completed descriptions of test programs and theoretical studies that have been conducted on the ΔP, specific sources of measurement uncertainty are identified, quantified, and combined to provide an assessment of the ability of this measurement to satisfy the SDD 1.4.1C (June 1975) requirement of measurement of differential pressure

  20. Knowledge, decision making, and uncertainty

    Fox, J.

    1986-01-01

    Artificial intelligence (AI) systems depend heavily upon the ability to make decisions. Decisions require knowledge, yet there is no knowledge-based theory of decision making. To the extent that AI uses a theory of decision-making it adopts components of the traditional statistical view in which choices are made by maximizing some function of the probabilities of decision options. A knowledge-based scheme for reasoning about uncertainty is proposed, which extends the traditional framework but is compatible with it

  1. Accommodating Uncertainty in Prior Distributions

    Picard, Richard Roy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]; Vander Wiel, Scott Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2017-01-19

    A fundamental premise of Bayesian methodology is that a priori information is accurately summarized by a single, precisely defined prior distribution. In many cases, especially involving informative priors, this premise is false, and the (mis)application of Bayes methods produces posterior quantities whose apparent precisions are highly misleading. We examine the implications of uncertainty in prior distributions, and present graphical methods for dealing with them.
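
    A small, hedged illustration of the underlying concern (a generic prior-sensitivity check, not the report's graphical methods): the same binomial data are analyzed under several a priori plausible Beta priors, and the spread of the resulting posterior means indicates how much the "single precise prior" assumption matters. All numbers are invented.

        import numpy as np

        # Observed data: k successes in n trials (invented).
        k, n = 7, 20

        # A family of plausible Beta(a, b) priors rather than one precise prior.
        priors = [(1.0, 1.0), (2.0, 2.0), (0.5, 0.5), (4.0, 1.0), (1.0, 4.0)]

        posterior_means = []
        for a, b in priors:
            post_a, post_b = a + k, b + (n - k)      # Beta-binomial conjugate update
            posterior_means.append(post_a / (post_a + post_b))

        posterior_means = np.array(posterior_means)
        print("posterior means:", np.round(posterior_means, 3))
        print(f"spread due to prior choice: {posterior_means.max() - posterior_means.min():.3f}")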

  2. Managing project risks and uncertainties

    Mike Mentis

    2015-01-01

    Full Text Available This article considers threats to a project slipping on budget, schedule and fit-for-purpose. Threat is used here as the collective for risks (quantifiable bad things that can happen) and uncertainties (poorly or not quantifiable bad possible events). Based on experience with projects in developing countries, this review considers that (a) project slippage is due to uncertainties rather than risks, (b) while eventuation of some bad things is beyond control, managed execution and oversight are still the primary means to keeping within budget, on time and fit-for-purpose, (c) improving project delivery is less about bigger and more complex and more about coordinated focus, effectiveness and developing thought-out heuristics, and (d) projects take longer and cost more partly because threat identification is inaccurate, the scope of identified threats is too narrow, and the threat assessment product is not integrated into overall project decision-making and execution. Almost by definition, what is poorly known is likely to cause problems. Yet it is not just the unquantifiability and intangibility of uncertainties causing project slippage, but that they are insufficiently taken into account in project planning and execution that causes budget and time overruns. Improving project performance requires purpose-driven and managed deployment of scarce seasoned professionals. This can be aided with independent oversight by deeply experienced panelists who contribute technical insights and can potentially show that diligence is seen to be done.

  3. Chemical model reduction under uncertainty

    Malpica Galassi, Riccardo

    2017-03-06

    A general strategy for analysis and reduction of uncertain chemical kinetic models is presented, and its utility is illustrated in the context of ignition of hydrocarbon fuel–air mixtures. The strategy is based on a deterministic analysis and reduction method which employs computational singular perturbation analysis to generate simplified kinetic mechanisms, starting from a detailed reference mechanism. We model uncertain quantities in the reference mechanism, namely the Arrhenius rate parameters, as random variables with prescribed uncertainty factors. We propagate this uncertainty to obtain the probability of inclusion of each reaction in the simplified mechanism. We propose probabilistic error measures to compare predictions from the uncertain reference and simplified models, based on the comparison of the uncertain dynamics of the state variables, where the mixture entropy is chosen as progress variable. We employ the construction for the simplification of an uncertain mechanism in an n-butane–air mixture homogeneous ignition case, where a 176-species, 1111-reactions detailed kinetic model for the oxidation of n-butane is used with uncertainty factors assigned to each Arrhenius rate pre-exponential coefficient. This illustration is employed to highlight the utility of the construction, and the performance of a family of simplified models produced depending on chosen thresholds on importance and marginal probabilities of the reactions.
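
    A hedged toy version of the uncertainty-propagation idea described above (not the authors' CSP-based machinery): Arrhenius pre-exponential factors are sampled within assigned uncertainty factors, a mock importance measure is evaluated for each sample, and the probability of each reaction being included in the simplified mechanism is estimated. The importance function, sampling convention, and all numbers are placeholders.

        import numpy as np

        rng = np.random.default_rng(2)

        n_reactions, n_samples = 6, 5_000
        A_nominal = np.array([1e13, 5e12, 2e12, 8e12, 3e12, 1e12])   # pre-exponentials (invented)
        unc_factor = np.array([2.0, 3.0, 2.0, 1.5, 5.0, 2.0])        # assigned uncertainty factors

        # Sample each A log-uniformly within [A/f, A*f] (one common convention; an assumption here).
        log_lo, log_hi = np.log(A_nominal / unc_factor), np.log(A_nominal * unc_factor)
        A_samples = np.exp(rng.uniform(log_lo, log_hi, size=(n_samples, n_reactions)))

        # Mock importance measure: each reaction's share of the summed sampled rate coefficients.
        importance = A_samples / A_samples.sum(axis=1, keepdims=True)

        threshold = 0.05                                   # importance threshold (placeholder)
        inclusion_probability = (importance > threshold).mean(axis=0)

        for i, p in enumerate(inclusion_probability):
            print(f"reaction {i}: P(included) = {p:.2f}")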

  4. Detection capacity, information gaps and the design of surveillance programs for invasive forest pests

    Denys Yemshanov; Frank Koch; Yakov Ben-Haim; William Smith

    2010-01-01

    Integrated pest risk maps and their underlying assessments provide broad guidance for establishing surveillance programs for invasive species, but they rarely account for knowledge gaps regarding the pest of interest or how these can be reduced. In this study we demonstrate how the somewhat competing notions of robustness to uncertainty and potential knowledge gains...

  5. Characterizing Epistemic Uncertainty for Launch Vehicle Designs

    Novack, Steven D.; Rogers, Jim; Hark, Frank; Al Hassan, Mohammad

    2016-01-01

    NASA Probabilistic Risk Assessment (PRA) has the task of estimating the aleatory (randomness) and epistemic (lack of knowledge) uncertainty of launch vehicle loss of mission and crew risk and communicating the results. Launch vehicles are complex engineered systems designed with sophisticated subsystems that are built to work together to accomplish mission success. Some of these systems or subsystems are in the form of heritage equipment, while some have never been previously launched. For these cases, characterizing the epistemic uncertainty is of foremost importance, and it is anticipated that the epistemic uncertainty of a modified launch vehicle design versus a design of well understood heritage equipment would be greater. For reasons that will be discussed, standard uncertainty propagation methods using Monte Carlo simulation produce counter intuitive results and significantly underestimate epistemic uncertainty for launch vehicle models. Furthermore, standard PRA methods such as Uncertainty-Importance analyses used to identify components that are significant contributors to uncertainty are rendered obsolete since sensitivity to uncertainty changes are not reflected in propagation of uncertainty using Monte Carlo methods.This paper provides a basis of the uncertainty underestimation for complex systems and especially, due to nuances of launch vehicle logic, for launch vehicles. It then suggests several alternative methods for estimating uncertainty and provides examples of estimation results. Lastly, the paper shows how to implement an Uncertainty-Importance analysis using one alternative approach, describes the results, and suggests ways to reduce epistemic uncertainty by focusing on additional data or testing of selected components.

  6. Is there a gap in the gap? Regional differences in the gender pay gap

    Hirsch, Boris; König, Marion; Möller, Joachim

    2009-01-01

    In this paper, we investigate regional differences in the gender pay gap both theoretically and empirically. Within a spatial oligopsony model, we show that more densely populated labour markets are more competitive and constrain employers' ability to discriminate against women. Utilising a large administrative data set for western Germany and a flexible semi-parametric propensity score matching approach, we find that the unexplained gender pay gap for young workers is substantially lower in ...

  7. Metrology and analytical chemistry: Bridging the cultural gap

    King, Bernard

    2002-01-01

    Metrology in general and issues such as traceability and measurement uncertainty in particular are new to most analytical chemists and many remain to be convinced of their value. There is a danger of the cultural gap between metrologists and analytical chemists widening with unhelpful consequences and it is important that greater collaboration and cross-fertilisation is encouraged. This paper discusses some of the similarities and differences in the approaches adopted by metrologists and analytical chemists and indicates how these approaches can be combined to establish a unique metrology of chemical measurement which could be accepted by both cultures. (author)

  8. Robust Energy Hub Management Using Information Gap Decision Theory

    Javadi, Mohammad Sadegh; Anvari-Moghaddam, Amjad; Guerrero, Josep M.

    2017-01-01

    This paper proposes a robust optimization framework for energy hub management. It is well known that the operation of energy systems can be negatively affected by uncertain parameters, such as stochastic load demand or generation. In this regard, it is of high significance to propose efficient ... tools in order to deal with uncertainties and to provide reliable operating conditions. On a broader scale, an energy hub includes diverse energy sources for supplying both electrical load and heating/cooling demands with stochastic behaviors. Therefore, this paper utilizes the Information Gap Decision ...
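
    Below is a hedged, minimal sketch of the info-gap robustness idea underlying such formulations (a generic one-dimensional example, not the paper's energy hub model): the robustness is the largest uncertainty horizon alpha for which the worst-case operating cost still stays below a critical budget. The cost model and all numbers are invented.

        import numpy as np

        def worst_case_cost(supply_kw: float, alpha: float, demand_nominal: float,
                            price: float = 0.12, penalty: float = 0.50) -> float:
            """Worst case over demand in [nominal*(1-alpha), nominal*(1+alpha)]: unmet
            demand is penalized, so the worst case is the highest demand."""
            worst_demand = demand_nominal * (1.0 + alpha)
            unmet = max(worst_demand - supply_kw, 0.0)
            return price * supply_kw + penalty * unmet

        def robustness(supply_kw: float, critical_cost: float, demand_nominal: float) -> float:
            """Largest alpha such that the worst-case cost stays within the critical cost."""
            alphas = np.linspace(0.0, 1.0, 1001)
            feasible = [a for a in alphas
                        if worst_case_cost(supply_kw, a, demand_nominal) <= critical_cost]
            return max(feasible) if feasible else 0.0

        # Invented numbers: scheduled supply, nominal demand, and an acceptable cost budget.
        print(f"robustness alpha-hat = {robustness(120.0, critical_cost=20.0, demand_nominal=100.0):.3f}")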

  9. Uncertainty analysis of the FRAP code

    Peck, S.O.

    1978-01-01

    A user oriented, automated uncertainty analysis capability has been built into the FRAP code (Fuel Rod Analysis Program) and applied to a PWR fuel rod undergoing a LOCA. The method of uncertainty analysis is the Response Surface Method (RSM). (author)

  10. Two multi-dimensional uncertainty relations

    Skala, L; Kapsa, V

    2008-01-01

    Two multi-dimensional uncertainty relations, one related to the probability density and the other one related to the probability density current, are derived and discussed. Both relations are stronger than the usual uncertainty relations for the coordinates and momentum

  11. Change and uncertainty in quantum systems

    Franson, J.D.

    1996-01-01

    A simple inequality shows that any change in the expectation value of an observable quantity must be associated with some degree of uncertainty. This inequality is often more restrictive than the Heisenberg uncertainty principle. copyright 1996 The American Physical Society

  12. Measure of uncertainty in regional grade variability

    Tutmez, B.; Kaymak, U.; Melin, P.; Castillo, O.; Gomez Ramirez, E.; Kacprzyk, J.; Pedrycz, W.

    2007-01-01

    Because geological events are neither homogeneous nor isotropic, geological investigations are characterized by particularly high uncertainties. This paper presents a hybrid methodology for measuring uncertainty in regional grade variability. In order to evaluate the fuzziness in grade

  13. Explaining the Gender Wealth Gap

    Ruel, Erin; Hauser, Robert M.

    2013-01-01

    To assess and explain the United States’ gender wealth gap, we use the Wisconsin Longitudinal Study to examine wealth accumulated by a single cohort over 50 years by gender, by marital status, and limited to the respondents who are their family’s best financial reporters. We find large gender wealth gaps between currently married men and women, and never-married men and women. The never-married accumulate less wealth than the currently married, and there is a marital disruption cost to wealth accumulation. The status-attainment model shows the most power in explaining gender wealth gaps between these groups explaining about one-third to one-half of the gap, followed by the human-capital explanation. In other words, a lifetime of lower earnings for women translates into greatly reduced wealth accumulation. A gender wealth gap remains between married men and women after controlling for the full model that we speculate may be related to gender differences in investment strategies and selection effects. PMID:23264038

  14. Uncertainty and its propagation in dynamics models

    Devooght, J.

    1994-01-01

    The purpose of this paper is to bring together some characteristics due to uncertainty when we deal with dynamic models and therefore with the propagation of uncertainty. The respective roles of uncertainty and inaccuracy are examined. A mathematical formalism based on the Chapman-Kolmogorov equation allows one to define a 'subdynamics' where the evolution equation takes the uncertainty into account. The problem of choosing or combining models is examined through a loss function associated with a decision
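
    For reference, the Chapman-Kolmogorov equation invoked above can be written in its standard form (a textbook statement, not the paper's specific formalism), for a Markov process with transition density p:

        p(x_3, t_3 \mid x_1, t_1) = \int p(x_3, t_3 \mid x_2, t_2)\, p(x_2, t_2 \mid x_1, t_1)\, \mathrm{d}x_2,
        \qquad t_1 < t_2 < t_3 .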

  15. Some illustrative examples of model uncertainty

    Bier, V.M.

    1994-01-01

    In this paper, we first discuss the view of model uncertainty proposed by Apostolakis. We then present several illustrative examples related to model uncertainty, some of which are not well handled by this formalism. Thus, Apostolakis' approach seems to be well suited to describing some types of model uncertainty, but not all. Since a comprehensive approach for characterizing and quantifying model uncertainty is not yet available, it is hoped that the examples presented here will serve as a springboard for further discussion

  16. The Uncertainty Multiplier and Business Cycles

    Saijo, Hikaru

    2013-01-01

    I study a business cycle model where agents learn about the state of the economy by accumulating capital. During recessions, agents invest less, and this generates noisier estimates of macroeconomic conditions and an increase in uncertainty. The endogenous increase in aggregate uncertainty further reduces economic activity, which in turn leads to more uncertainty, and so on. Thus, through changes in uncertainty, learning gives rise to a multiplier effect that amplifies business cycles. I use ...

  17. Prediction of Gap Asymmetry in Differential Micro Accelerometers

    Xiaoping He

    2012-05-01

    Full Text Available Gap asymmetry in differential capacitors is the primary source of the zero bias output of force-balanced micro accelerometers. It is also used to evaluate the applicability of differential structures in MEMS manufacturing. Therefore, determining the asymmetry level has considerable significance for the design of MEMS devices. This paper proposes an experimental-theoretical method for predicting gap asymmetry in differential sensing capacitors of micro accelerometers. The method involves three processes: first, bi-directional measurement, which can sharply reduce the influence of the feedback circuit on bias output, is proposed. Experiments are then carried out on a centrifuge to obtain the input and output data of an accelerometer. Second, the analytical input-output relationship of the accelerometer with gap asymmetry and circuit error is theoretically derived. Finally, the prediction methodology combines the measurement results and analytical derivation to identify the asymmetric error of 30 accelerometers fabricated by DRIE. Results indicate that the level of asymmetry induced by fabrication uncertainty is about ±5 × 10−2, and that the absolute error is about ±0.2 µm under a 4 µm gap.
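
    As a hedged illustration of the bi-directional measurement idea mentioned above (a generic +1 g / -1 g separation, not the paper's derivation): averaging the two readings isolates the bias term, while half their difference estimates the scale factor. The readings are invented.

        # Accelerometer outputs with the sensitive axis along +1 g and -1 g (invented, in volts).
        v_plus, v_minus = 1.0260, -0.9740

        # For output = bias + scale * a, the two orientations give:
        #   v_plus  = bias + scale * (+1 g)
        #   v_minus = bias + scale * (-1 g)
        bias = 0.5 * (v_plus + v_minus)          # terms proportional to g cancel
        scale_per_g = 0.5 * (v_plus - v_minus)   # bias cancels

        print(f"bias = {bias:+.4f} V, scale = {scale_per_g:.4f} V/g")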

  18. The insurance industry and unconventional gas development: Gaps and recommendations

    Wetherell, Daniel; Evensen, Darrick

    2016-01-01

    The increasingly growing and controversial practice of natural gas development by horizontal drilling and high volume hydraulic fracturing (‘fracking’) faces a severe environmental insurance deficit at the industry level. Part of this deficit is arguably inherent to the process, whereas another part is caused by current risk information shortfalls on the processes and impacts associated with development. In the short and long terms, there are several conventional and unconventional methods by which industry-level and governmental-level policy can insure against these risks. Whilst academic attention has been afforded to the potential risks associated with unconventional natural gas development, little consideration has been given to the lack of insurance opportunities against these risks or to the additional risks promulgated by the dearth of insurance options. We chronicle the ways in which insurance options are limited due to unconventional gas development, the problems caused by lack of insurance offerings, and we highlight potential policy remedies for addressing these gaps, including a range of government- and industry-specific approaches. - Highlights: •A gap exists in provision of liability insurance for ‘fracking’-related risks. •The market gap is due primarily to uncertainties about probabilistic risk. •Insurance for risks similar to ‘fracking’ highlight potential policy options. •Government regulation and/or industry agreements can effectively fill the gap. •Policies on insurance and liability coverage necessitate ethical considerations.

  19. Uncertainty Characterization of Reactor Vessel Fracture Toughness

    Li, Fei; Modarres, Mohammad

    2002-01-01

    To perform fracture mechanics analysis of a reactor vessel, fracture toughness (KIc) at various temperatures would be necessary. In a best estimate approach, KIc uncertainties resulting from both lack of sufficient knowledge and randomness in some of the variables of KIc must be characterized. Although it may be argued that there is only one type of uncertainty, which is lack of perfect knowledge about the subject under study, as a matter of practice KIc uncertainties can be divided into two types: aleatory and epistemic. Aleatory uncertainty is related to uncertainty that is very difficult to reduce, if not impossible; epistemic uncertainty, on the other hand, can be practically reduced. Distinction between aleatory and epistemic uncertainties facilitates decision-making under uncertainty and allows for proper propagation of uncertainties in the computation process. Typically, epistemic uncertainties representing, for example, parameters of a model are sampled (to generate a 'snapshot', single value of the parameters), but the totality of aleatory uncertainties is carried through the calculation as available. In this paper a description of an approach to account for these two types of uncertainties associated with KIc has been provided. (authors)

  20. Uncertainty in prediction and in inference

    Hilgevoord, J.; Uffink, J.

    1991-01-01

    The concepts of uncertainty in prediction and inference are introduced and illustrated using the diffraction of light as an example. The close relationship between the concepts of uncertainty in inference and resolving power is noted. A general quantitative measure of uncertainty in

  1. Entropic uncertainty relations-a survey

    Wehner, Stephanie; Winter, Andreas

    2010-01-01

    Uncertainty relations play a central role in quantum mechanics. Entropic uncertainty relations in particular have gained significant importance within quantum information, providing the foundation for the security of many quantum cryptographic protocols. Yet, little is known about entropic uncertainty relations with more than two measurement settings. In the present survey, we review known results and open questions.

  2. Flood modelling : Parameterisation and inflow uncertainty

    Mukolwe, M.M.; Di Baldassarre, G.; Werner, M.; Solomatine, D.P.

    2014-01-01

    This paper presents an analysis of uncertainty in hydraulic modelling of floods, focusing on the inaccuracy caused by inflow errors and parameter uncertainty. In particular, the study develops a method to propagate the uncertainty induced by, firstly, application of a stage–discharge rating curve
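
    A hedged sketch of one way inflow uncertainty from a stage-discharge rating curve can be propagated (a generic Monte Carlo exercise, not the paper's method): discharge is computed as Q = a (h - h0)^b with uncertain a and b, yielding an ensemble of inflow values. All parameter values are invented.

        import numpy as np

        rng = np.random.default_rng(3)

        h, h0 = 3.2, 0.4                 # observed stage and cease-to-flow level (m), invented
        n = 10_000

        # Rating-curve parameters with assumed uncertainty (lognormal/normal around nominal values).
        a = rng.lognormal(mean=np.log(15.0), sigma=0.10, size=n)
        b = rng.normal(loc=1.6, scale=0.05, size=n)

        Q = a * (h - h0) ** b            # ensemble of inflow discharges (m^3/s)

        q05, q50, q95 = np.percentile(Q, [5, 50, 95])
        print(f"inflow Q: median {q50:.1f} m^3/s, 90% band [{q05:.1f}, {q95:.1f}] m^3/s")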

  3. Uncertainty and validation. Effect of user interpretation on uncertainty estimates

    Kirchner, G.; Peterson, R.

    1996-11-01

    Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test user's influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the

  4. Uncertainty and validation. Effect of user interpretation on uncertainty estimates

    Kirchner, G. [Univ. of Bremen (Germany)]; Peterson, R. [AECL, Chalk River, ON (Canada)] [and others]

    1996-11-01

    Uncertainty in predictions of environmental transfer models arises from, among other sources, the adequacy of the conceptual model, the approximations made in coding the conceptual model, the quality of the input data, the uncertainty in parameter values, and the assumptions made by the user. In recent years efforts to quantify the confidence that can be placed in predictions have been increasing, but have concentrated on a statistical propagation of the influence of parameter uncertainties on the calculational results. The primary objective of this Working Group of BIOMOVS II was to test user's influence on model predictions on a more systematic basis than has been done before. The main goals were as follows: To compare differences between predictions from different people all using the same model and the same scenario description with the statistical uncertainties calculated by the model. To investigate the main reasons for different interpretations by users. To create a better awareness of the potential influence of the user on the modeling results. Terrestrial food chain models driven by deposition of radionuclides from the atmosphere were used. Three codes were obtained and run with three scenarios by a maximum of 10 users. A number of conclusions can be drawn some of which are general and independent of the type of models and processes studied, while others are restricted to the few processes that were addressed directly: For any set of predictions, the variation in best estimates was greater than one order of magnitude. Often the range increased from deposition to pasture to milk probably due to additional transfer processes. The 95% confidence intervals about the predictions calculated from the parameter distributions prepared by the participants did not always overlap the observations; similarly, sometimes the confidence intervals on the predictions did not overlap. Often the 95% confidence intervals of individual predictions were smaller than the

  5. Virtual gap dielectric wall accelerator

    Caporaso, George James; Chen, Yu-Jiuan; Nelson, Scott; Sullivan, Jim; Hawkins, Steven A

    2013-11-05

    A virtual, moving accelerating gap is formed along an insulating tube in a dielectric wall accelerator (DWA) by locally controlling the conductivity of the tube. Localized voltage concentration is thus achieved by sequential activation of a variable resistive tube or stalk down the axis of an inductive voltage adder, producing a "virtual" traveling wave along the tube. The tube conductivity can be controlled at a desired location, which can be moved at a desired rate, by light illumination, or by photoconductive switches, or by other means. As a result, an impressed voltage along the tube appears predominantly over a local region, the virtual gap. By making the length of the tube large in comparison to the virtual gap length, the effective gain of the accelerator can be made very large.

  6. Hard diffraction and rapidity gaps

    Brandt, A.

    1995-09-01

    The field of hard diffraction, which studies events with a rapidity gap and a hard scattering, has expanded dramatically recently. A review of new results from CDF, DØ, H1 and ZEUS will be given. These results include diffractive jet production, deep-inelastic scattering in large rapidity gap events, rapidity gaps between high transverse energy jets, and a search for diffractive W-boson production. The combination of these results gives new insight into the exchanged object, believed to be the pomeron. The results are consistent with factorization and with a hard pomeron that contains both quarks and gluons. There is also evidence for the exchange of a strongly interacting color singlet in high momentum transfer (36 2 ) events

  7. ABORT GAP CLEANING IN RHIC

    DREES, A.; AHRENS, L.; FLILLER, R. III; GASSNER, D.; MCINTYRE, G.T.; MICHNOFF, R.; TRBOJEVIC, D.

    2002-01-01

    During the RHIC Au-run in 2001 the 200 MHz storage cavity system was used for the first time. The rebucketing procedure caused significant beam debunching in addition to amplifying debunching due to other mechanisms. At the end of a four hour store, debunched beam could account for approximately 30%-40% of the total beam intensity. Some of it will be in the abort gap. In order to minimize the risk of magnet quenching due to uncontrolled beam losses at the time of a beam dump, a combination of a fast transverse kicker and copper collimators were used to clean the abort gap. This report gives an overview of the gap cleaning procedure and the achieved performance

  8. Failure probability under parameter uncertainty.

    Gerrard, R; Tsanakas, A

    2011-05-01

    In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision-maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
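
    A hedged numerical sketch of the phenomenon described above (a generic simulation, not the paper's analytical result): a threshold is set at the nominal 99th percentile of a lognormal fitted to a small sample, and repeating the experiment shows that the realized exceedance frequency under the true parameters is larger than the nominal 1%. All settings are invented.

        import math
        import numpy as np

        rng = np.random.default_rng(4)

        mu_true, sigma_true = 0.0, 1.0   # true lognormal parameters, unknown to the decision-maker
        nominal_p_fail = 0.01            # required (nominal) failure probability
        n_data, n_experiments = 30, 20_000
        z99 = 2.326347874                # standard normal 99th percentile

        realized = np.empty(n_experiments)
        for i in range(n_experiments):
            sample = rng.lognormal(mu_true, sigma_true, size=n_data)
            log_s = np.log(sample)
            mu_hat, sigma_hat = log_s.mean(), log_s.std(ddof=1)
            threshold = math.exp(mu_hat + z99 * sigma_hat)        # plug-in 99% quantile
            # True probability that the risk factor exceeds this threshold.
            z = (math.log(threshold) - mu_true) / sigma_true
            realized[i] = 0.5 * math.erfc(z / math.sqrt(2.0))     # 1 - Phi(z)

        print(f"nominal failure probability : {nominal_p_fail:.2%}")
        print(f"average realized exceedance : {realized.mean():.2%}")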

  9. Integrating uncertainty into public energy research and development decisions

    Anadón, Laura Díaz; Baker, Erin; Bosetti, Valentina

    2017-05-01

    Public energy research and development (R&D) is recognized as a key policy tool for transforming the world's energy system in a cost-effective way. However, managing the uncertainty surrounding technological change is a critical challenge for designing robust and cost-effective energy policies. The design of such policies is particularly important if countries are going to both meet the ambitious greenhouse-gas emissions reductions goals set by the Paris Agreement and achieve the required harmonization with the broader set of objectives dictated by the Sustainable Development Goals. The complexity of informing energy technology policy requires, and is producing, a growing collaboration between different academic disciplines and practitioners. Three analytical components have emerged to support the integration of technological uncertainty into energy policy: expert elicitations, integrated assessment models, and decision frameworks. Here we review efforts to incorporate all three approaches to facilitate public energy R&D decision-making under uncertainty. We highlight emerging insights that are robust across elicitations, models, and frameworks, relating to the allocation of public R&D investments, and identify gaps and challenges that remain.

  10. Quantum Uncertainty and Fundamental Interactions

    Tosto S.

    2013-04-01

    The paper proposes a simplified theoretical approach to infer some essential concepts on the fundamental interactions between charged particles and their relative strengths at comparable energies by exploiting the quantum uncertainty only. The worth of the present approach relies on the way of obtaining the results, rather than on the results themselves: concepts today acknowledged as fingerprints of the electroweak and strong interactions appear indeed rooted in the same theoretical frame including also the basic principles of special and general relativity along with the gravity force.

  11. Uncertainty analysis in seismic tomography

    Owoc, Bartosz; Majdański, Mariusz

    2017-04-01

    The velocity field obtained from seismic travel time tomography depends on several factors such as regularization, inversion path, and model parameterization. The result also strongly depends on the initial velocity model and on the precision of travel time picking. In this research we test the dependence on the starting model in layered tomography and compare it with the effect of picking precision. Moreover, in our analysis the uncertainty distribution for manual travel time picking is asymmetric, which shifts the results toward faster velocities. For the calculations we use the JIVE3D travel time tomographic code. We used data from geo-engineering and industrial-scale investigations, which were collected by our team from IG PAS.

  12. Modelling of Transport Projects Uncertainties

    Salling, Kim Bang; Leleur, Steen

    2009-01-01

    This paper proposes a new way of handling the uncertainties present in transport decision making based on infrastructure appraisals. The paper suggests combining the principle of Optimism Bias, which depicts the historical tendency of overestimating transport-related benefits and underestimating...... to supplement Optimism Bias and the associated Reference Class Forecasting (RCF) technique with a new technique that makes use of a scenario-grid. We tentatively introduce and refer to this as Reference Scenario Forecasting (RSF). The final RSF output from the CBA-DK model consists of a set of scenario......-based graphs which function as risk-related decision support for the appraised transport infrastructure project....

  13. Medical Need, Equality, and Uncertainty.

    Horne, L Chad

    2016-10-01

    Many hold that distributing healthcare according to medical need is a requirement of equality. Most egalitarians believe, however, that people ought to be equal on the whole, by some overall measure of well-being or life-prospects; it would be a massive coincidence if distributing healthcare according to medical need turned out to be an effective way of promoting equality overall. I argue that distributing healthcare according to medical need is important for reducing individuals' uncertainty surrounding their future medical needs. In other words, distributing healthcare according to medical need is a natural feature of healthcare insurance; it is about indemnity, not equality. © 2016 John Wiley & Sons Ltd.

  14. The Emissions Gap Report 2014

    Farrell, Timothy Clifford

    This fifth Emissions Gap report has a different focus from previous years. While it updates the 2020 emissions gap analysis, it gives particular attention to the implications of the global carbon dioxide emissions budget for staying within the 2 °C limit beyond 2020. It does so because countries...... are giving increasing attention to where they need to be in 2025, 2030 and beyond. Furthermore, this year’s update of the report benefits from the findings on the emissions budget from the latest series of Intergovernmental Panel on Climate Change (IPCC) reports...

  15. Specific features of accounting probable excursions of power in periphery WWER rods caused by in-process gap changes between fuel assemblies

    Mikailov, E. F.; Shishkov, L. K.

    2011-01-01

    The paper discusses a way of accounting for the uncertainties in calculated WWER-1000 rod powers caused by changes of fuel assembly shape in the course of operation. The difficulty is that the gap-affected power distribution does not obey the normal distribution law, while the dispersion does not influence the total uncertainty in the power distribution. The paper proposes methods for accounting for this uncertainty and gives a worked example. (Authors)

  16. The economic implications of carbon cycle uncertainty

    Smith, Steven J.; Edmonds, James A.

    2006-01-01

    This paper examines the implications of uncertainty in the carbon cycle for the cost of stabilizing carbon dioxide concentrations. Using a state-of-the-art integrated assessment model, we find that uncertainty in our understanding of the carbon cycle has significant implications for the costs of a climate stabilization policy, with cost differences denominated in trillions of dollars. Uncertainty in the carbon cycle is equivalent to a change in concentration target of up to 100 ppmv. The impact of carbon cycle uncertainties is smaller than that of climate sensitivity, and broadly comparable to the effect of uncertainty in technology availability

  17. Uncertainty budget for k0-NAA

    Robouch, P.; Arana, G.; Eguskiza, M.; Etxebarria, N.

    2000-01-01

    The concepts of the Guide to the Expression of Uncertainty in Measurement (GUM) for chemical measurements and the recommendations of the Eurachem document 'Quantifying Uncertainty in Analytical Methods' are applied to set up the uncertainty budget for k0-NAA. The 'universally applicable spreadsheet technique' described by Kragten is applied to the basic k0-NAA equations for the computation of uncertainties. The variance components (individual standard uncertainties) highlight the contribution and the importance of the different parameters to be taken into account. (author)
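    As a rough illustration of the Kragten-style spreadsheet technique mentioned above, the Python sketch below perturbs each input quantity by its standard uncertainty, records the change in the result as that input's contribution, and combines the contributions in quadrature. The measurement function and all numerical values are placeholders, not the actual k0-NAA equation or data.

      import math

      def kragten(f, values, uncerts):
          """Return (result, combined standard uncertainty, per-input contributions)."""
          y0 = f(values)
          contributions = {}
          for name, u in uncerts.items():
              perturbed = dict(values)
              perturbed[name] += u                     # shift one input by its standard uncertainty
              contributions[name] = f(perturbed) - y0  # approximates c_i * u(x_i)
          u_c = math.sqrt(sum(c * c for c in contributions.values()))
          return y0, u_c, contributions

      # Placeholder measurement model (hypothetical): concentration = A * w / (m * k0 * eps)
      model = lambda x: x["A"] * x["w"] / (x["m"] * x["k0"] * x["eps"])
      values  = {"A": 1.25e4, "m": 0.1000, "k0": 0.85, "eps": 0.032, "w": 1.0}
      uncerts = {"A": 1.0e2,  "m": 0.0002, "k0": 0.01, "eps": 0.001, "w": 0.005}

      y, u_c, contrib = kragten(model, values, uncerts)
      print(f"result = {y:.5g}, combined standard uncertainty = {u_c:.3g}")
      for name, c in sorted(contrib.items(), key=lambda kv: abs(kv[1]), reverse=True):
          print(f"  contribution of {name}: {c:.3g}")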

  18. Risk uncertainty analysis methods for NUREG-1150

    Benjamin, A.S.; Boyd, G.J.

    1987-01-01

    Evaluation and display of risk uncertainties for NUREG-1150 constitute a principal focus of the Severe Accident Risk Rebaselining/Risk Reduction Program (SARRP). Some of the principal objectives of the uncertainty evaluation are: (1) to provide a quantitative estimate that reflects, for those areas considered, a credible and realistic range of uncertainty in risk; (2) to rank the various sources of uncertainty with respect to their importance for various measures of risk; and (3) to characterize the state of understanding of each aspect of the risk assessment for which major uncertainties exist. This paper describes the methods developed to fulfill these objectives

  19. Uncertainty Communication. Issues and good practice

    Kloprogge, P.; Van der Sluijs, J.; Wardekker, A.

    2007-12-01

    In 2003 the Netherlands Environmental Assessment Agency (MNP) published the RIVM/MNP Guidance for Uncertainty Assessment and Communication. The Guidance assists in dealing with uncertainty in environmental assessments. Dealing with uncertainty is essential because assessment results regarding complex environmental issues are of limited value if the uncertainties have not been taken into account adequately. A careful analysis of uncertainties in an environmental assessment is required, but even more important is the effective communication of these uncertainties in the presentation of assessment results. The Guidance yields rich and differentiated insights in uncertainty, but the relevance of this uncertainty information may vary across audiences and uses of assessment results. Therefore, the reporting of uncertainties is one of the six key issues that is addressed in the Guidance. In practice, users of the Guidance felt a need for more practical assistance in the reporting of uncertainty information. This report explores the issue of uncertainty communication in more detail, and contains more detailed guidance on the communication of uncertainty. In order to make this a 'stand alone' document several questions that are mentioned in the detailed Guidance have been repeated here. This document thus has some overlap with the detailed Guidance. Part 1 gives a general introduction to the issue of communicating uncertainty information. It offers guidelines for (fine)tuning the communication to the intended audiences and context of a report, discusses how readers of a report tend to handle uncertainty information, and ends with a list of criteria that uncertainty communication needs to meet to increase its effectiveness. Part 2 helps writers to analyze the context in which communication takes place, and helps to map the audiences, and their information needs. It further helps to reflect upon anticipated uses and possible impacts of the uncertainty information on the

  20. When 1+1 can be >2: Uncertainties compound when simulating climate, fisheries and marine ecosystems

    Evans, Karen; Brown, Jaclyn N.; Sen Gupta, Alex; Nicol, Simon J.; Hoyle, Simon; Matear, Richard; Arrizabalaga, Haritz

    2015-03-01

    Multi-disciplinary approaches that combine oceanographic, biogeochemical, ecosystem, fisheries population and socio-economic models are vital tools for modelling whole ecosystems. Interpreting the outputs from such complex models requires an appreciation of the many different types of modelling frameworks being used and their associated limitations and uncertainties. Both users and developers of particular model components will often have little involvement or understanding of other components within such modelling frameworks. Failure to recognise limitations and uncertainties associated with components and how these uncertainties might propagate throughout modelling frameworks can potentially result in poor advice for resource management. Unfortunately, many of the current integrative frameworks do not propagate the uncertainties of their constituent parts. In this review, we outline the major components of a generic whole of ecosystem modelling framework incorporating the external pressures of climate and fishing. We discuss the limitations and uncertainties associated with each component of such a modelling system, along with key research gaps. Major uncertainties in modelling frameworks are broadly categorised into those associated with (i) deficient knowledge in the interactions of climate and ocean dynamics with marine organisms and ecosystems; (ii) lack of observations to assess and advance modelling efforts and (iii) an inability to predict with confidence natural ecosystem variability and longer term changes as a result of external drivers (e.g. greenhouse gases, fishing effort) and the consequences for marine ecosystems. As a result of these uncertainties and intrinsic differences in the structure and parameterisation of models, users are faced with considerable challenges associated with making appropriate choices on which models to use. We suggest research directions required to address these uncertainties, and caution against overconfident predictions

  1. To be or not to be: How do we speak about uncertainty in public?

    Todesco, Micol; Lolli, Barbara; Sheldrake, Tom; Odbert, Henry

    2016-04-01

    One of the challenges related to hazard communication concerns the public perception and understanding of scientific uncertainties, and of their implications in terms of hazard assessment and mitigation. Often science is perceived as an effective dispenser of resolving answers to the main issues posed by the complexities of life and nature. In this perspective, uncertainty is seen as a pernicious lack of knowledge that hinders our ability to face complex problems. From a scientific perspective, however, the definition of uncertainty is the only valuable tool we have to handle errors affecting our data and propagating through the increasingly complex models we develop to describe reality. Through uncertainty, scientists acknowledge the great variability that characterises natural systems and account for it in their assessment of possible scenarios. From this point of view, uncertainty is not ignorance; rather, it provides a great deal of the information that is needed to inform decision making. To find effective ways to bridge the gap between these different meanings of uncertainty, we asked high-school students for assistance. With their help, we gathered definitions of the term 'uncertainty' by interviewing different categories of people, including schoolmates and professors, neighbours, families and friends. These definitions will be compared with those provided by scientists, to find differences and similarities. To understand the role of uncertainty in judgment, a hands-on experiment is performed in which students have to estimate the exact time of explosion of party poppers subjected to a variable degree of pull. At the end of the project, the students will express their own understanding of uncertainty in a video, which will be made available for sharing. Materials collected during all the activities will contribute to our understanding of how uncertainty is portrayed and can be better expressed to improve our hazard communication.

  2. Uncertainties in human health risk assessment of environmental contaminants: A review and perspective.

    Dong, Zhaomin; Liu, Yanju; Duan, Luchun; Bekele, Dawit; Naidu, Ravi

    2015-12-01

    Addressing uncertainties in human health risk assessment is a critical issue when evaluating the effects of contaminants on public health. A range of uncertainties exist through the source-to-outcome continuum, including exposure assessment, hazard and risk characterisation. While various strategies have been applied to characterising uncertainty, classical approaches largely rely on how to maximise the available resources. Expert judgement, defaults and tools for characterising quantitative uncertainty attempt to fill the gap between data and regulation requirements. The experience of researching 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) illustrates uncertainty sources and how to maximise available information to determine uncertainties, and thereby provide 'adequate' protection against contaminant exposure. As regulatory requirements and recurring issues increase, the assessment of complex scenarios involving a large number of chemicals requires more sophisticated tools. Recent advances in exposure and toxicology science provide a large data set for environmental contaminants and public health. In particular, biomonitoring information, in vitro data streams and computational toxicology are crucial factors in the NexGen risk assessment, as well as in minimising uncertainties. Although in this review we cannot yet predict how exposure science and modern toxicology will develop in the long term, current techniques from emerging science can be integrated to improve decision-making. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Uncertainty Relations and Possible Experience

    Gregg Jaeger

    2016-06-01

    The uncertainty principle can be understood as a condition of joint indeterminacy of classes of properties in quantum theory. The mathematical expressions most closely associated with this principle have been the uncertainty relations, various inequalities exemplified by the well-known expression regarding position and momentum introduced by Heisenberg. Here, recent work involving a new sort of “logical” indeterminacy principle and associated relations introduced by Pitowsky, expressible directly in terms of probabilities of outcomes of measurements of sharp quantum observables, is reviewed and its quantum nature is discussed. These novel relations are derivable from Boolean “conditions of possible experience” of the quantum realm and have been considered both as fundamentally logical and as fundamentally geometrical. This work focuses on the relationship of indeterminacy to the propositions regarding the values of discrete, sharp observables of quantum systems. Here, reasons for favoring each of these two positions are considered. Finally, with an eye toward future research related to indeterminacy relations, further novel approaches grounded in category theory and intended to capture and reconceptualize the complementarity characteristics of quantum propositions are discussed in relation to the former.

  4. Inverse Problems and Uncertainty Quantification

    Litvinenko, Alexander

    2014-01-06

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
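    For readers unfamiliar with the linear Bayesian update compared in the abstract, the Python sketch below shows its best-known special case: the Kalman-type update of a Gaussian prior through a linear observation operator. It does not reproduce the paper's sampling-free functional/spectral (polynomial) implementation or the quadratic update; all matrices and values are illustrative.

      import numpy as np

      m_prior = np.array([1.0, 0.0])            # prior mean of the uncertain state
      P_prior = np.array([[0.5, 0.1],
                          [0.1, 0.3]])          # prior covariance
      H = np.array([[1.0, 0.0]])                # observation operator (only the first component is observed)
      R = np.array([[0.05]])                    # observation error covariance
      y = np.array([1.4])                       # measured value

      # Kalman gain K = P H^T (H P H^T + R)^(-1)
      S = H @ P_prior @ H.T + R
      K = P_prior @ H.T @ np.linalg.inv(S)

      # conditional-expectation (linear Bayesian) update of mean and covariance
      m_post = m_prior + K @ (y - H @ m_prior)
      P_post = (np.eye(2) - K @ H) @ P_prior

      print("posterior mean:", m_post)
      print("posterior covariance:\n", P_post)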

  5. Uncertainties in the proton lifetime

    Ellis, J.; Nanopoulos, D.V.; Rudaz, S.; Gaillard, M.K.

    1980-04-01

    We discuss the masses of the leptoquark bosons m(X) and the proton lifetime in Grand Unified Theories based principally on SU(5). It is emphasized that estimates of m(X) based on the QCD coupling and the fine structure constant are probably more reliable than those using the experimental value of sin^2 θ_W. Uncertainties in the QCD Λ parameter and the correct value of α are discussed. We estimate higher order effects on the evolution of coupling constants in a momentum space renormalization scheme. It is shown that increasing the number of generations of fermions beyond the minimal three increases m(X) by almost a factor of 2 per generation. Additional uncertainties exist for each generation of technifermions that may exist. We discuss and discount the possibility that proton decay could be 'Cabibbo-rotated' away, and a speculation that Lorentz invariance may be violated in proton decay at a detectable level. We estimate that in the absence of any substantial new physics beyond that in the minimal SU(5) model the proton lifetime is 8 × 10^(30±2) years

  6. Inverse Problems and Uncertainty Quantification

    Litvinenko, Alexander; Matthies, Hermann G.

    2014-01-01

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.

  7. Inverse problems and uncertainty quantification

    Litvinenko, Alexander

    2013-12-18

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.

  8. The Adaptation Gap Report - a Preliminary Assessment

    Alverson, Keith; Olhoff, Anne; Noble, Ian

    This first Adaptation Gap report provides an equally sobering assessment of the gap between adaptation needs and reality, based on preliminary thinking on how baselines, future goals or targets, and gaps between them might be defined for climate change adaptation. The report focuses on gaps...... in developing countries in three important areas: finance, technology and knowledge....

  9. Needs of the CSAU uncertainty method

    Prosek, A.; Mavko, B.

    2000-01-01

    The use of best estimate codes for safety analysis requires quantification of the uncertainties. These uncertainties are inherently linked to the chosen safety analysis methodology. Worldwide, various methods have been proposed for this quantification. The purpose of this paper was to identify the needs of the Code Scaling, Applicability, and Uncertainty (CSAU) methodology and then to address those needs. The specific procedural steps were combined from other methods for uncertainty evaluation, and new tools and procedures were proposed. The uncertainty analysis approach and tools were then utilized for a confirmatory study. The uncertainty was quantified for the RELAP5/MOD3.2 thermal-hydraulic computer code. The results of the adapted CSAU approach applied to the small-break loss-of-coolant accident (SB LOCA) show that the adapted CSAU can be used for any thermal-hydraulic safety analysis with uncertainty evaluation. However, it was indicated that there are still some limitations in the CSAU approach that need to be resolved. (author)

  10. Decision-Making under Criteria Uncertainty

    Kureychik, V. M.; Safronenkova, I. B.

    2018-05-01

    Uncertainty is an essential part of a decision-making procedure. The paper deals with the problem of decision-making under criteria uncertainty. In this context, decision-making under uncertainty and the types and conditions of uncertainty were examined. The decision-making problem under uncertainty was formalized. A modification of the mathematical decision support method under uncertainty via ontologies was proposed. A distinctive feature of the developed method is the use of ontologies as its base elements. The goal of this work is the development of a decision-making method under criteria uncertainty with the use of ontologies in the area of multilayer board design. This method is oriented toward improving the technical and economic characteristics of the examined domain.

  11. Uncertainty in geological and hydrogeological data

    B. Nilsson

    2007-09-01

    Uncertainty in conceptual model structure and in environmental data is of essential interest when dealing with uncertainty in water resources management. To make quantification of uncertainty possible it is necessary to identify and characterise the uncertainty in geological and hydrogeological data. This paper discusses a range of available techniques to describe the uncertainty related to geological model structure and scale of support. Literature examples on uncertainty in hydrogeological variables such as saturated hydraulic conductivity, specific yield, specific storage, effective porosity and dispersivity are given. Field data usually have a spatial and temporal scale of support that is different from the one on which numerical models for water resources management operate. Uncertainty in hydrogeological data variables is characterised and assessed within the methodological framework of the HarmoniRiB classification.

  12. Strategic environmental assessment and monitoring: Arctic key gaps and bridging pathways

    Azcárate, Juan; Balfors, Berit; Bring, Arvid; Destouni, Georgia

    2013-01-01

    The Arctic region undergoes rapid and unprecedented environmental change. Environmental assessment and monitoring is needed to understand and decide how to mitigate and/or adapt to the changes and their impacts on society and ecosystems. This letter analyzes the application of strategic environmental assessment (SEA) and the monitoring, based on environmental observations, that should be part of SEA, elucidates main gaps in both, and proposes an overarching SEA framework to systematically link and improve both with focus on the rapidly changing Arctic region. Shortcomings in the monitoring of environmental change are concretized by examples of main gaps in the observations of Arctic hydroclimatic changes. For relevant identification and efficient reduction of such gaps and remaining uncertainties under typical conditions of limited monitoring resources, the proposed overarching framework for SEA application includes components for explicit gap/uncertainty handling and monitoring, systematically integrated within all steps of the SEA process. The framework further links to adaptive governance, which should explicitly consider key knowledge and information gaps that are identified through and must be handled in the SEA process, and accordingly (re)formulate and promote necessary new or modified monitoring objectives for bridging these gaps. (letter)

  13. The Widening Income Achievement Gap

    Reardon, Sean F.

    2013-01-01

    Has the academic achievement gap between high-income and low-income students changed over the last few decades? If so, why? And what can schools do about it? Researcher Sean F. Reardon conducted a comprehensive analysis of research to answer these questions and came up with some striking findings. In this article, he shows that income-related…

  14. Closing the Gaps. Research Brief

    Johnston, Howard

    2011-01-01

    Achievement gaps between groups of students (minority and white, rich and poor, English speakers and English language learners) are complex and intractable. Increasingly, they are being seen as a result of disparities between opportunities for learning available to different groups. By changing the opportunity structures of schools and…

  15. The Emissions Gap Report 2015

    Following the historic signing of the 2030 Agenda for Sustainable Development, this sixth edition of the UNEP Emissions Gap Report comes as world leaders start gathering in Paris to establish a new agreement on climate change. The report offers an independent assessment of the mitigation...

  16. Project LOCAL - Bridging The Gap

    Haven, Robert N.

    1975-01-01

    Project LOCAL, a not-for-profit regional consortium, offers a broad spectrum of in-service training courses tailored to meet the needs of educators in various disciplines and levels of experience. The purpose of these offerings is to bridge the communication gap between innovative centers in computer-oriented education and staff members in Boston…

  17. Gender Wealth Gap in Slovakia

    S.K. Trommlerová (Sofia Karina)

    2017-01-01

    No data on wealth were available in Slovakia prior to the Household Finance and Consumption Survey. Therefore, only studies on labor market participation and gender wage gaps are available to date. These studies indicate that Slovak women earn on average 25% less than men.

  18. Investigations of Pulsed Vacuum Gap.

    1981-02-10

    "Violet Spectra of Hot Sparks in High Vacua," Phys. Rev., Vol. 12, p. 167 (1913). A. Maitland, "Spark Conditioning Equation for Plane Electrodes..." Appl. Phys., Vol. 1, 1291. G. Theophilus, K. Srivastava, and R. van Heeswijk, "In-situ Observation of Microparticles in a Vacuum-Insulated Gap Using

  19. Featured Image: Simulating Planetary Gaps

    Kohler, Susanna

    2017-03-01

    The authors' model of how the above disk would look as we observe it in a scattered-light image. The morphology of the gap can be used to estimate the mass of the planet that caused it. [Dong & Fung 2017] The above image from a computer simulation reveals the dust structure of a protoplanetary disk (with the star obscured in the center) as a newly formed planet orbits within it. A recent study by Ruobing Dong (Steward Observatory, University of Arizona) and Jeffrey Fung (University of California, Berkeley) examines how we can determine the mass of such a planet based on our observations of the gap that the planet opens in the disk as it orbits. The authors' models help us to better understand how our observations of gaps might change if the disk is inclined relative to our line of sight, and how we can still constrain the mass of the gap-opening planet and the viscosity of the disk from the scattered-light images we have recently begun to obtain of distant protoplanetary disks. For more information, check out the paper below! Citation: Ruobing Dong and Jeffrey Fung 2017 ApJ 835 146. doi:10.3847/1538-4357/835/2/146

  20. Globalization and the Gender Gap

    Oostendorp, R.H.

    2004-01-01

    There are several theoretical reasons why globalization will have a narrowing as well as a widening effect on the gender wage gap, but little is known about the actual impact, except for some country studies. This study contributes to the literature in three respects. First, it is a large

  1. PSS: beyond the implementation gap

    Geertman, S.C.M.

    2017-01-01

    In the last couple of decades, a large number of papers on planning support systems (PSS) have been published in national and international, scientific and professional journals. What is remarkable about PSS is that for quite some time their history has been dominated by an implementation gap, that

  2. Denmark and the gap year

    Katznelson, Noemi; Juul, Tilde Mette

    2013-01-01

    This paper describes three different educational offers to young people: “The Folk High School”, “The ‘After-school’” and 10th class. All can be considered optional Gap Years. The following diagram shows how the Danish education system is structured. The Folk High School is a training course...

  3. Quantifying uncertainty and trade-offs in resilience assessments

    Craig R. Allen

    2018-03-01

    Several frameworks have been developed to assess the resilience of social-ecological systems, but most require substantial data inputs, time, and technical expertise. Stakeholders and practitioners often lack the resources for such intensive efforts. Furthermore, most end with problem framing and fail to explicitly address trade-offs and uncertainty. To remedy this gap, we developed a rapid survey assessment that compares the relative resilience of social-ecological systems with respect to a number of resilience properties. This approach generates large amounts of information relative to stakeholder inputs. We targeted four stakeholder categories: government (policy, regulation, management), end users (farmers, ranchers, landowners, industry), agency/public science (research, university, extension), and NGOs (environmental, citizen, social justice) in four North American watersheds, to assess social-ecological resilience through surveys. Conceptually, social-ecological systems are comprised of components ranging from strictly human to strictly ecological, but that relate directly or indirectly to one another. They have soft boundaries and several important dimensions or axes that together describe the nature of social-ecological interactions, e.g., variability, diversity, modularity, slow variables, feedbacks, capital, innovation, redundancy, and ecosystem services. There is no absolute measure of resilience, so our design takes advantage of cross-watershed comparisons and therefore focuses on relative resilience. Our approach quantifies and compares the relative resilience across watershed systems and potential trade-offs among different aspects of the social-ecological system, e.g., between social, economic, and ecological contributions. This approach permits explicit assessment of several types of uncertainty (e.g., self-assigned uncertainty for stakeholders; uncertainty across respondents, watersheds, and subsystems, and subjectivity in

  4. BOOK REVIEW: Evaluating the Measurement Uncertainty: Fundamentals and practical guidance

    Lira, Ignacio

    2003-08-01

    Evaluating the Measurement Uncertainty is a book written for anyone who makes and reports measurements. It attempts to fill the gaps in the ISO Guide to the Expression of Uncertainty in Measurement, or the GUM, and does a pretty thorough job. The GUM was written with the intent of being applicable by all metrologists, from the shop floor to the National Metrology Institute laboratory; however, the GUM has often been criticized for its lack of user-friendliness because it is primarily filled with statements, but with little explanation. Evaluating the Measurement Uncertainty gives lots of explanations. It is well written and makes use of many good figures and numerical examples. Also important, this book is written by a metrologist from a National Metrology Institute, and therefore up-to-date ISO rules, style conventions and definitions are correctly used and supported throughout. The author sticks very closely to the GUM in topical theme and with frequent reference, so readers who have not read GUM cover-to-cover may feel as if they are missing something. The first chapter consists of a reprinted lecture by T J Quinn, Director of the Bureau International des Poids et Mesures (BIPM), on the role of metrology in today's world. It is an interesting and informative essay that clearly outlines the importance of metrology in our modern society, and why accurate measurement capability, and by definition uncertainty evaluation, should be so important. Particularly interesting is the section on the need for accuracy rather than simply reproducibility. Evaluating the Measurement Uncertainty then begins at the beginning, with basic concepts and definitions. The third chapter carefully introduces the concept of standard uncertainty and includes many derivations and discussion of probability density functions. The author also touches on Monte Carlo methods, calibration correction quantities, acceptance intervals or guardbanding, and many other interesting cases. The book goes

  5. The uncertainty of reference standards--a guide to understanding factors impacting uncertainty, uncertainty calculations, and vendor certifications.

    Gates, Kevin; Chang, Ning; Dilek, Isil; Jian, Huahua; Pogue, Sherri; Sreenivasan, Uma

    2009-10-01

    Certified solution standards are widely used in forensic toxicological, clinical/diagnostic, and environmental testing. Typically, these standards are purchased as ampouled solutions with a certified concentration. Vendors present concentration and uncertainty differently on their Certificates of Analysis. Understanding the factors that impact uncertainty and which factors have been considered in the vendor's assignment of uncertainty are critical to understanding the accuracy of the standard and the impact on testing results. Understanding these variables is also important for laboratories seeking to comply with ISO/IEC 17025 requirements and for those preparing reference solutions from neat materials at the bench. The impact of uncertainty associated with the neat material purity (including residual water, residual solvent, and inorganic content), mass measurement (weighing techniques), and solvent addition (solution density) on the overall uncertainty of the certified concentration is described along with uncertainty calculations.
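    As a simplified illustration of how the factors listed above combine (not any vendor's certification procedure), the Python sketch below adds the relative standard uncertainties of purity, weighed mass, and solution volume in quadrature to obtain the combined and expanded uncertainty of a certified concentration. All numerical values are invented for the example.

      import math

      purity,  u_purity  = 0.998, 0.002     # mass fraction of the neat material and its standard uncertainty
      mass_mg, u_mass_mg = 10.00, 0.02      # weighed mass of neat material (mg)
      vol_mL,  u_vol_mL  = 10.000, 0.015    # solution volume, e.g. from solvent mass and density (mL)

      conc = purity * mass_mg / vol_mL      # certified concentration (mg/mL)

      # relative standard uncertainties combined in quadrature
      rel_u = math.sqrt((u_purity / purity) ** 2 +
                        (u_mass_mg / mass_mg) ** 2 +
                        (u_vol_mL / vol_mL) ** 2)

      print(f"concentration                 = {conc:.4f} mg/mL")
      print(f"combined standard uncertainty = {conc * rel_u:.4f} mg/mL (k = 1)")
      print(f"expanded uncertainty (k = 2)  = {2 * conc * rel_u:.4f} mg/mL")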

  6. Addressing uncertainties in the ERICA Integrated Approach

    Oughton, D.H.; Agueero, A.; Avila, R.; Brown, J.E.; Copplestone, D.; Gilek, M.

    2008-01-01

    Like any complex environmental problem, ecological risk assessment of the impacts of ionising radiation is confounded by uncertainty. At all stages, from problem formulation through to risk characterisation, the assessment is dependent on models, scenarios, assumptions and extrapolations. These include technical uncertainties related to the data used, conceptual uncertainties associated with models and scenarios, as well as social uncertainties such as economic impacts, the interpretation of legislation, and the acceptability of the assessment results to stakeholders. The ERICA Integrated Approach has been developed to allow an assessment of the risks of ionising radiation, and includes a number of methods that are intended to make the uncertainties and assumptions inherent in the assessment more transparent to users and stakeholders. Throughout its development, ERICA has recommended that assessors deal openly with the deeper dimensions of uncertainty and acknowledge that uncertainty is intrinsic to complex systems. Since the tool is based on a tiered approach, the approaches to dealing with uncertainty vary between the tiers, ranging from a simple, but highly conservative screening to a full probabilistic risk assessment including sensitivity analysis. This paper gives on overview of types of uncertainty that are manifest in ecological risk assessment and the ERICA Integrated Approach to dealing with some of these uncertainties

  7. Reusable launch vehicle model uncertainties impact analysis

    Chen, Jiaye; Mu, Rongjun; Zhang, Xin; Deng, Yanpeng

    2018-03-01

    A reusable launch vehicle (RLV) has the typical characteristics of complex aerodynamic shape and propulsion system coupling, and its flight environment is highly complicated and intensely changeable. Its model therefore has large uncertainty, which makes the nominal system quite different from the real system. Studying the influence of these uncertainties on the stability of the control system is thus of great significance for controller design. In order to improve the performance of the RLV, this paper proposes an approach for analyzing the influence of the model uncertainties. For a typical RLV, the coupled dynamic and kinematic models are built. The different factors that cause uncertainties during model building are then analyzed and summarized. After that, the model uncertainties are expressed according to the additive uncertainty model. The maximum singular value of the uncertainty matrix is chosen as the boundary model, and the norm of the uncertainty matrix is used to show how much influence the uncertainty factors have on the stability of the control system. The simulation results illustrate that the inertial factors have the largest influence on the stability of the system, and that it is necessary and important to take the model uncertainties into consideration before designing the controller of this kind of aircraft (like the RLV).
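    A minimal sketch of the bounding step described above: for an additive uncertainty model G_real = G_nominal + Delta, the largest singular value of Delta (its induced 2-norm) is taken as the size of the uncertainty, and the factor with the largest such norm dominates the robustness analysis. The matrices below are illustrative placeholders, not an actual RLV model.

      import numpy as np

      G_nominal = np.array([[1.00, 0.20],
                            [0.00, 0.80]])     # nominal model matrix (illustrative)
      G_real    = np.array([[1.08, 0.25],
                            [0.03, 0.74]])     # model including one uncertain factor (illustrative)

      Delta = G_real - G_nominal               # additive uncertainty
      sigma_max = np.linalg.norm(Delta, 2)     # largest singular value = induced 2-norm

      print(f"largest singular value of the uncertainty matrix: {sigma_max:.4f}")
      # Repeating this for each uncertain factor (aerodynamic, inertial, ...) and comparing
      # the resulting norms indicates which factor has the largest influence on stability.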

  8. Image restoration, uncertainty, and information.

    Yu, F T

    1969-01-01

    Some of the physical interpretations of image restoration are discussed. From the theory of information, the unrealizability of an inverse filter can be explained by degradation of information, which is due to distortion of the recorded image. Image restoration is a time and space problem, which can be recognized from the theory of relativity (the problem of image restoration is related to Heisenberg's uncertainty principle in quantum mechanics). A detailed discussion of the relationship between information and energy is given. Two general results may be stated: (1) restoration of the image from the distorted signal is possible only if it satisfies the detectability condition; however, the restored image, at best, can only approach the maximum allowable time criterion. (2) Restoration of an image by superimposing the distorted signal (due to smearing) is a physically unrealizable method; however, this restoration procedure may be achieved by the expenditure of an infinite amount of energy.

  9. Modelling of Transport Projects Uncertainties

    Salling, Kim Bang; Leleur, Steen

    2012-01-01

    This paper proposes a new way of handling the uncertainties present in transport decision making based on infrastructure appraisals. The paper suggests combining the principle of Optimism Bias, which depicts the historical tendency of overestimating transport-related benefits and underestimating...... to supplement Optimism Bias and the associated Reference Class Forecasting (RCF) technique with a new technique that makes use of a scenario-grid. We tentatively introduce and refer to this as Reference Scenario Forecasting (RSF). The final RSF output from the CBA-DK model consists of a set of scenario......-based graphs which function as risk-related decision support for the appraised transport infrastructure project. The presentation of RSF is demonstrated by using an appraisal case concerning a new airfield in the capital of Greenland, Nuuk.

  10. Gap junctions and inhibitory synapses modulate inspiratory motoneuron synchronization.

    Bou-Flores, C; Berger, A J

    2001-04-01

    Interneuronal electrical coupling via gap junctions and chemical synaptic inhibitory transmission are known to have roles in the generation and synchronization of activity in neuronal networks. Uncertainty exists regarding the roles of these two modes of interneuronal communication in the central respiratory rhythm-generating system. To assess their roles, we performed studies on both the neonatal mouse medullary slice and en bloc brain stem-spinal cord preparations where rhythmic inspiratory motor activity can readily be recorded from both hypoglossal and phrenic nerve roots. The rhythmic inspiratory activity observed had two temporal characteristics: the basic respiratory frequency occurring on a long time scale and the synchronous neuronal discharge within the inspiratory burst occurring on a short time scale. In both preparations, we observed that bath application of gap-junction blockers, including 18 alpha-glycyrrhetinic acid, 18 beta-glycyrrhetinic acid, and carbenoxolone, all caused a reduction in respiratory frequency. In contrast, peak integrated phrenic and hypoglossal inspiratory activity was not significantly changed by gap-junction blockade. On a short-time-scale, gap-junction blockade increased the degree of synchronization within an inspiratory burst observed in both nerves. In contrast, opposite results were observed with blockade of GABA(A) and glycine receptors. We found that respiratory frequency increased with receptor blockade, and simultaneous blockade of both receptors consistently resulted in a reduction in short-time-scale synchronized activity observed in phrenic and hypoglossal inspiratory bursts. These results support the concept that the central respiratory system has two components: a rhythm generator responsible for the production of respiratory cycle timing and an inspiratory pattern generator that is involved in short-time-scale synchronization. In the neonatal rodent, properties of both components can be regulated by interneuronal

  11. Attitudes, beliefs, uncertainty and risk

    Greenhalgh, Geoffrey [Down Park Place, Crawley Down (United Kingdom)

    2001-07-01

    There is now unmistakable evidence of a widening split within the Western industrial nations arising from conflicting views of society; for and against change. The argument is over the benefits of 'progress' and growth. On one side are those who seek more jobs, more production and consumption, higher standards of living, an ever-increasing GNP with an increasing globalisation of production and welcome the advances of science and technology confident that any temporary problems that arise can be solved by further technological development - possible energy shortages as a growing population increases energy usage can be met by nuclear power development; food shortages by the increased yields of GM crops. In opposition are those who put the quality of life before GNP, advocate a more frugal life-style, reducing needs and energy consumption, and, pointing to the harm caused by increasing pollution, press for cleaner air and water standards. They seek to reduce the pressure of an ever-increasing population and above all to preserve the natural environment. This view is associated with a growing uncertainty as the established order is challenged with the rise in status of 'alternative' science and medicine. This paper argues that these conflicting views reflect instinctive attitudes. These in turn draw support from beliefs selected from those which uncertainty offers. Where there is scope for argument over the truth or validity of a 'fact', the choice of which of the disputed views to believe will be determined by a value judgement. This applies to all controversial social and political issues. Nuclear waste disposal and biotechnology are but two particular examples in the technological field; joining the EMU is a current political controversy where value judgements based on attitudes determine beliefs. When, or if, a controversy is finally resolved the judgement arrived at will be justified by the belief that the consequences of the course chosen will be more favourable

  12. Attitudes, beliefs, uncertainty and risk

    Greenhalgh, Geoffrey [Down Park Place, Crawley Down (United Kingdom)

    2001-07-01

    There is now unmistakable evidence of a widening split within the Western industrial nations arising from conflicting views of society; for and against change. The argument is over the benefits of 'progress' and growth. On one side are those who seek more jobs, more production and consumption, higher standards of living, an ever-increasing GNP with an increasing globalisation of production and welcome the advances of science and technology confident that any temporary problems that arise can be solved by further technological development - possible energy shortages as a growing population increases energy usage can be met by nuclear power development; food shortages by the increased yields of GM crops. In opposition are those who put the quality of life before GNP, advocate a more frugal life-style, reducing needs and energy consumption, and, pointing to the harm caused by increasing pollution, press for cleaner air and water standards. They seek to reduce the pressure of an ever-increasing population and above all to preserve the natural environment. This view is associated with a growing uncertainty as the established order is challenged with the rise in status of 'alternative' science and medicine. This paper argues that these conflicting views reflect instinctive attitudes. These in turn draw support from beliefs selected from those which uncertainty offers. Where there is scope for argument over the truth or validity of a 'fact', the choice of which of the disputed views to believe will be determined by a value judgement. This applies to all controversial social and political issues. Nuclear waste disposal and biotechnology are but two particular examples in the technological field; joining the EMU is a current political controversy where value judgements based on attitudes determine beliefs. When, or if, a controversy is finally resolved the judgement arrived at will be justified by the belief that the consequences of the course

  13. Attitudes, beliefs, uncertainty and risk

    Greenhalgh, Geoffrey

    2001-01-01

    There is now unmistakable evidence of a widening split within the Western industrial nations arising from conflicting views of society; for and against change. The argument is over the benefits of 'progress' and growth. On one side are those who seek more jobs, more production and consumption, higher standards of living, an ever-increasing GNP with an increasing globalisation of production and welcome the advances of science and technology confident that any temporary problems that arise can be solved by further technological development - possible energy shortages as a growing population increases energy usage can be met by nuclear power development; food shortages by the increased yields of GM crops. In opposition are those who put the quality of life before GNP, advocate a more frugal life-style, reducing needs and energy consumption, and, pointing to the harm caused by increasing pollution, press for cleaner air and water standards. They seek to reduce the pressure of an ever-increasing population and above all to preserve the natural environment. This view is associated with a growing uncertainty as the established order is challenged with the rise in status of 'alternative' science and medicine. This paper argues that these conflicting views reflect instinctive attitudes. These in turn draw support from beliefs selected from those which uncertainty offers. Where there is scope for argument over the truth or validity of a 'fact', the choice of which of the disputed views to believe will be determined by a value judgement. This applies to all controversial social and political issues. Nuclear waste disposal and biotechnology are but two particular examples in the technological field; joining the EMU is a current political controversy where value judgements based on attitudes determine beliefs. When, or if, a controversy is finally resolved the judgement arrived at will be justified by the belief that the consequences of the course chosen will be more favourable

  14. Emplacement Gantry Gap Analysis Study

    Thornley, R.

    2005-01-01

    To date, the project has established important to safety (ITS) performance requirements for structures, systems, and components (SSCs) based on the identification and categorization of event sequences that may result in a radiological release. These performance requirements are defined within the ''Nuclear Safety Design Bases for License Application'' (NSDB) (BSC 2005 [DIRS 171512], Table A-11). Further, SSCs credited with performing safety functions are classified as ITS. In turn, assurance that these SSCs will perform as required is sought through the use of consensus codes and standards. This gap analysis is based on the design completed for license application only. Accordingly, identification of ITS SSCs beyond those defined within the NSDB is based on designs that may be subject to further development during detail design. Furthermore, several design alternatives may still be under consideration to satisfy certain safety functions, and final selection will not be determined until further design development has occurred. Therefore, for completeness, alternative designs currently under consideration will be discussed throughout this study. This gap analysis will evaluate each code and standard identified within the ''Emplacement Gantry ITS Standards Identification Study'' (BSC 2005 [DIRS 173586]) to ensure each ITS performance requirement is fully satisfied. When a performance requirement is not fully satisfied, a gap is highlighted. This study will identify requirements to supplement or augment the code or standard to meet performance requirements. Further, this gap analysis will identify nonstandard areas of the design that will be subject to a design development plan. Nonstandard components and nonstandard design configurations are defined as areas of the design that do not follow standard industry practices or codes and standards. For such items, assurance that an SSC will perform as required may not be readily obtained through the use of consensus standards. This

  15. Uncertainty quantification metrics for whole product life cycle cost estimates in aerospace innovation

    Schwabe, O.; Shehab, E.; Erkoyuncu, J.

    2015-08-01

    The lack of defensible methods for quantifying cost estimate uncertainty over the whole product life cycle of aerospace innovations such as propulsion systems or airframes poses a significant challenge to the creation of accurate and defensible cost estimates. Based on the axiomatic definition of uncertainty as the actual prediction error of the cost estimate, this paper provides a comprehensive overview of metrics used for the uncertainty quantification of cost estimates based on a literature review, an evaluation of publicly funded projects such as those in the CORDIS or Horizon 2020 programs, and an analysis of established approaches used by organizations such as NASA, the U.S. Department of Defense, the ESA, and various commercial companies. The metrics are categorized based on their foundational character (foundations), their use in practice (state-of-practice), their availability for practice (state-of-art) and those suggested for future exploration (state-of-future). Insights gained were that a variety of uncertainty quantification metrics exist whose suitability depends on the volatility of available relevant information, as defined by technical and cost readiness level, and the number of whole product life cycle phases the estimate is intended to be valid for. Information volatility and number of whole product life cycle phases can hereby be considered as defining multi-dimensional probability fields admitting various uncertainty quantification metric families with identifiable thresholds for transitioning between them. The key research gaps identified were the lack of theoretically grounded guidance for the selection of uncertainty quantification metrics and the lack of practical alternatives to metrics based on the Central Limit Theorem. An innovative uncertainty quantification framework consisting of a set-theory-based typology, a data library, a classification system, and a corresponding input-output model is put forward to address this research gap as the basis

  16. Gaps in EU Foreign Policy

    Larsen, Henrik

    of Capability-Expectations Gap in the study of European foreign policy. Through examples from relevant literature, Larsen not only demonstrates how this concept sets up standards for the EU as a foreign policy actor (that are not met by most other international actors) but also shows how this curtails analysis...... of EU foreign policy. The author goes on to discuss how the widespread use of the concept of ‘gap' affects the way in which EU foreign policy has been studied; and that it always produces the same result: the EU is an unfulfilled actor outside the realm of “normal” actors in IR. This volume offers new...... perspectives on European foreign policy research and advice and serves as an invaluable resource for students of EU foreign policy and, more broadly, European Studies....

  17. Phenomenon of Uncertainty as a Subjective Experience

    Lifintseva A.A.

    2018-04-01

    The phenomenon of uncertainty in illness among patients is discussed and analyzed in this article. Uncertainty in illness is a condition that accompanies the patient from the moment the first somatic symptoms of the disease appear, and it can be strengthened or weakened by many psychosocial factors. The level of uncertainty is related to the level of stress, emotional maladjustment, affective states, coping strategies, mechanisms of psychological defense, etc. Uncertainty can have destructive effects, acting as a trigger for stressful conditions and setting off negative emotional experiences. On the positive side, uncertainty leaves room for the patient to interpret the disease in a positive light. In addition, a state of uncertainty can prompt the patient to mobilize resources for coping with the disease, among which the leading role belongs to social support.

  18. Visual Semiotics & Uncertainty Visualization: An Empirical Study.

    MacEachren, A M; Roth, R E; O'Brien, J; Li, B; Swingley, D; Gahegan, M

    2012-12-01

    This paper presents two linked empirical studies focused on uncertainty visualization. The experiments are framed from two conceptual perspectives. First, a typology of uncertainty is used to delineate kinds of uncertainty matched with the space, time, and attribute components of data. Second, concepts from visual semiotics are applied to characterize the kind of visual signification that is appropriate for representing those different categories of uncertainty. This framework guided the two experiments reported here. The first addresses representation intuitiveness, considering both visual variables and iconicity of representation. The second addresses the relative performance of the most intuitive abstract and iconic representations of uncertainty on a map reading task. The combined results suggest initial guidelines for representing uncertainty, and the discussion focuses on the practical applicability of the results.
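
    As a concrete illustration of the kind of visual signification the study examines, the sketch below maps an attribute value to colour and its uncertainty to transparency, a common abstract visual-variable choice; the gridded data and the specific variable assignment are invented for illustration and are not the stimuli used in the experiments.

```python
# Hedged sketch: value encoded as colour, uncertainty encoded as transparency,
# so more uncertain locations appear more faded. Data are synthetic.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm, colors

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
value = np.sin(3 * x) * np.cos(3 * y)           # the mapped attribute
uncertainty = rng.uniform(0.0, 1.0, x.shape)    # hypothetical per-cell uncertainty

cmap = plt.get_cmap("viridis")
norm = colors.Normalize(vmin=value.min(), vmax=value.max())
rgba = cmap(norm(value.ravel()))
rgba[:, 3] = 1.0 - 0.8 * uncertainty.ravel()    # alpha channel carries uncertainty

fig, ax = plt.subplots(figsize=(5, 4))
ax.scatter(x.ravel(), y.ravel(), c=rgba, s=120, marker="s")
fig.colorbar(cm.ScalarMappable(norm=norm, cmap=cmap), ax=ax, label="attribute value")
ax.set_title("Value as colour, uncertainty as transparency")
plt.show()
```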

  19. Uncertainty quantification theory, implementation, and applications

    Smith, Ralph C

    2014-01-01

    The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...
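
    One of the book's core topics, forward propagation of input uncertainty through a simulation model, can be sketched with a simple Monte Carlo loop; the toy model, the input distributions, and the correlation-based sensitivity proxy below are illustrative assumptions, not examples taken from the book.

```python
# Hedged sketch of Monte Carlo forward propagation of parameter uncertainty.
import numpy as np

rng = np.random.default_rng(1)

def model(k, c):
    """Toy response in place of a real simulation model (purely illustrative)."""
    force = 10.0
    return force / k + 0.1 * c

# Hypothetical input uncertainties: stiffness k and damping c.
k = rng.normal(loc=2.0, scale=0.2, size=50_000)
c = rng.uniform(0.5, 1.5, size=50_000)

response = model(k, c)
print(f"mean response {response.mean():.3f}, std {response.std(ddof=1):.3f}")
print("95 % prediction interval:", np.percentile(response, [2.5, 97.5]).round(3))

# Crude global sensitivity proxy: squared correlation of each input with the output
# (a stand-in for the variance-based indices discussed in the book).
for name, sample in (("k", k), ("c", c)):
    rho = np.corrcoef(sample, response)[0, 1]
    print(f"first-order sensitivity proxy for {name}: {rho ** 2:.2f}")
```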

  20. Report on the uncertainty methods study

    1998-06-01

    The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes: the Pisa method (based on extrapolation from integral experiments) and four methods identifying and combining input uncertainties. Three of these, the GRS, IPSN and ENUSA methods, use subjective probability distributions, and one, the AEAT method, performs a bounding analysis. Each method has been used to calculate the uncertainty in specified parameters for the LSTF SB-CL-18 5% cold leg small break LOCA experiment in the ROSA-IV Large Scale Test Facility (LSTF). The uncertainty analysis was conducted essentially blind and the participants did not use experimental measurements from the test as input apart from initial and boundary conditions. Participants calculated uncertainty ranges for experimental parameters including pressurizer pressure, primary circuit inventory and clad temperature (at a specified position) as functions of time
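
    In the sampling-based methods that identify and combine input uncertainties (the GRS-style approaches), a common device is to size the number of code runs with Wilks' formula and read a one-sided 95 %/95 % tolerance bound off the order statistics. The sketch below uses a stand-in function in place of a thermal-hydraulic code and invented input distributions; it is not the procedure of any specific UMS participant.

```python
# Hedged sketch of a Wilks-style, sampling-based uncertainty statement.
# The "code" is a stand-in function and all input distributions are invented.
import math
import numpy as np

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n with 1 - coverage**n >= confidence (one-sided, first order)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

def stand_in_code(gap_conductance, decay_heat_factor, discharge_coeff):
    """Placeholder for a best-estimate code run returning peak clad temperature (K)."""
    return 900.0 + 150.0 * decay_heat_factor - 40.0 * gap_conductance + 60.0 * discharge_coeff

n = wilks_sample_size()          # 59 runs for a one-sided 95 %/95 % bound
rng = np.random.default_rng(7)

peaks = [stand_in_code(rng.uniform(0.8, 1.2),
                       rng.normal(1.0, 0.05),
                       rng.triangular(0.8, 1.0, 1.2))
         for _ in range(n)]

print(f"{n} code runs; 95 %/95 % upper bound on peak clad temperature: {max(peaks):.1f} K")
```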