WorldWideScience

Sample records for ensemble-based deterministic ozone

  1. Deterministic Mean-Field Ensemble Kalman Filtering

    KAUST Repository

    Law, Kody

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.

  2. Deterministic Mean-Field Ensemble Kalman Filtering

    KAUST Repository

    Law, Kody; Tembine, Hamidou; Tempone, Raul

    2016-01-01

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.

  3. Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data

    Directory of Open Access Journals (Sweden)

    I. Kioutsioukis

    2016-12-01

    Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10). Apart from the unconditional ensemble average, the approach behind the other three methods relies on adding optimum weights to members or constraining the ensemble to those members that meet certain conditions in the time or frequency domain. The two datasets were created for the first and second phases of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground-level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent we can extract predictable signals from an ensemble with superior skill over the single models and the ensemble mean. Verification statistics show that the deterministic models simulate O3 better than NO2 and PM10, linked to different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill than each station's best deterministic model at no more than 60 % of the sites, indicating a combination of members with unbalanced skill difference and error dependence for the rest. Promoting the right amount of accuracy and diversity within the ensemble results in an average additional skill of up to 31 % compared to using the full ensemble in an unconditional way. The skill improvements were higher for O3 and lower for PM10, associated with the extent of potential changes in the joint
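
    The contrast between an unconditional ensemble mean and a weighted one can be sketched as follows. All numbers and the member count are hypothetical, and inverse-MSE weighting is only one simple stand-in for the optimum-weight methods compared in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hourly ozone observations and forecasts from 4 ensemble members
# with deliberately unbalanced skill (error std 5, 10, 15, 20 ppb)
obs = rng.normal(60.0, 15.0, size=500)
members = obs + rng.normal(0.0, [[5], [10], [15], [20]], size=(4, 500))

# Unconditional ensemble mean: equal weight for every member
mean_unconditional = members.mean(axis=0)

# Skill-weighted mean: weights inversely proportional to each member's MSE
mse = ((members - obs) ** 2).mean(axis=1)
weights = (1.0 / mse) / (1.0 / mse).sum()
mean_weighted = weights @ members

rmse = lambda f: np.sqrt(((f - obs) ** 2).mean())
print(rmse(mean_unconditional), rmse(mean_weighted))
```

With members of unbalanced skill, the weighted combination down-weights the poor members and beats the unconditional mean, mirroring the study's finding that the equal-weight mean is not always the best combination.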

  4. Towards deterministic optical quantum computation with coherently driven atomic ensembles

    International Nuclear Information System (INIS)

    Petrosyan, David

    2005-01-01

    Scalable and efficient quantum computation with photonic qubits requires (i) deterministic sources of single photons, (ii) giant nonlinearities capable of entangling pairs of photons, and (iii) reliable single-photon detectors. In addition, an optical quantum computer would need a robust reversible photon storage device. Here we discuss several related techniques, based on the coherent manipulation of atomic ensembles in the regime of electromagnetically induced transparency, that are capable of implementing all of the above prerequisites for deterministic optical quantum computation with single photons.

  5. Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

    KAUST Repository

    Liu, Bo

    2015-11-11

    We consider the Bayesian filtering problem for data assimilation following the kernel-based ensemble Gaussian-mixture filtering (EnGMF) approach introduced by Anderson and Anderson (1999). In this approach, the posterior distribution of the system state is propagated with the model using the ensemble Monte Carlo method, providing a forecast ensemble that is then used to construct a prior Gaussian mixture (GM) based on the kernel density estimator. This results in two update steps: a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for an arbitrary observation operator, we analyze the influence of the bandwidth parameter of the kernel function on the covariance of the posterior distribution. We then focus on two aspects: (i) the efficient implementation of EnGMF with (relatively) small ensembles, where we propose a new deterministic resampling strategy that preserves the first two moments of the posterior GM to limit the sampling error; and (ii) the analysis of the effect of the bandwidth parameter on the contributions of the KF and PF updates and on the variance of the weights. Numerical results using the Lorenz-96 model are presented to assess the behavior of EnGMF with deterministic resampling, study its sensitivity to different parameters and settings, and evaluate its performance against ensemble KFs. The proposed EnGMF approach with deterministic resampling yields improved estimates in all tested scenarios, and is shown to require less localization and to be less sensitive to the choice of filtering parameters.
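
    Two of the EnGMF ingredients named above, the PF-like weight update and a deterministic resampling that preserves the first two posterior moments, can be illustrated on a scalar toy problem. This is a simplified sketch with invented numbers; the Gaussian-mixture and kernel-bandwidth machinery of the actual EnGMF is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forecast ensemble (scalar state) and one observation y = x + noise
ens = rng.normal(0.0, 2.0, size=50)
y, obs_var = 1.5, 0.5 ** 2

# PF-like weight update: reweight members by the observation likelihood
loglik = -0.5 * (y - ens) ** 2 / obs_var
w = np.exp(loglik - loglik.max())
w /= w.sum()

# Posterior mean and variance implied by the weighted ensemble
m = np.sum(w * ens)
v = np.sum(w * (ens - m) ** 2)

# Deterministic "resampling": affine map of the equally weighted ensemble so
# its first two moments match (m, v), avoiding Monte Carlo sampling error
new_ens = m + (ens - ens.mean()) * np.sqrt(v / ens.var())
print(new_ens.mean(), new_ens.var())
```

Because the map is affine, the resampled ensemble reproduces the weighted mean and variance exactly, which is the moment-preservation property the record highlights.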

  6. Mixed deterministic statistical modelling of regional ozone air pollution

    KAUST Repository

    Kalenderski, Stoitchko

    2011-03-17

    We develop a physically motivated statistical model for regional ozone air pollution by separating the ground-level pollutant concentration field into three components, namely: transport, local production and a large-scale mean trend mostly dominated by emission rates. The model is novel in the field of environmental spatial statistics in that it is a combined deterministic-statistical model, which gives a new perspective to the modelling of air pollution. The model is presented in a Bayesian hierarchical formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution: the Lower Fraser Valley of British Columbia, Canada. As a predictive tool, we demonstrate that the model vastly outperforms existing, simpler modelling approaches. Our study highlights the importance of simultaneously considering different aspects of an air pollution problem as well as taking into account the physical bases that govern the processes of interest. © 2011 John Wiley & Sons, Ltd.

  7. Multimodel ensemble simulations of present-day and near-future tropospheric ozone

    NARCIS (Netherlands)

    Stevenson, D.S.; Dentener, F.J.; Schultz, M.G.; Ellingsen, K.; Noije, van T.P.C.; Wild, O.; Zeng, G.; Amann, M.; Atherton, C.S.; Bell, N.; Bergmann, D.J.; Bey, I.; Butler, T.; Cofala, J.; Collins, W.J.; Derwent, R.G.; Doherty, R.M.; Drevet, J.; Eskes, H.J.; Fiore, A.M.; Gauss, M.; Hauglustaine, D.A.; Horowitz, L.W.; Isaksen, I.S.A.; Krol, M.C.; Lamarque, J.F.; Lawrence, M.G.; Montanaro, V.; Muller, J.F.; Pitari, G.; Prather, M.J.; Pyle, J.A.; Rast, S.; Rodriguez, J.M.; Sanderson, M.G.; Savage, N.H.; Shindell, D.T.; Strahan, S.E.; Sudo, K.; Szopa, S.

    2006-01-01

    Global tropospheric ozone distributions, budgets, and radiative forcings from an ensemble of 26 state-of-the-art atmospheric chemistry models have been intercompared and synthesized as part of a wider study into both the air quality and climate roles of ozone. Results from three 2030 emissions

  8. Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset documents the source of the data analyzed in the manuscript "Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII...

  9. Estimating predictive hydrological uncertainty by dressing deterministic and ensemble forecasts; a comparison, with application to Meuse and Rhine

    Science.gov (United States)

    Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.

    2017-12-01

    Two statistical post-processing approaches for estimating predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of the hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. The resulting forecasts are assessed for their reliability and sharpness, and compared in terms of multiple verification scores, including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability
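
    The 'source-specific' dressing step, adding realizations of hydrological uncertainty to each streamflow ensemble member, can be illustrated with a toy Gaussian error model. All numbers are hypothetical; in the paper the hydrological error distribution is estimated statistically rather than assumed:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 10-member streamflow forecast (m^3/s) at one lead time,
# spread driven by the meteorological ensemble
hydro_ens = rng.normal(250.0, 30.0, size=10)

# 'Dressing': add K realizations of hydrological uncertainty to each member,
# yielding a 10 x K sample of the total uncertainty
K = 100
hydro_error = rng.normal(0.0, 15.0, size=(10, K))   # assumed error model
total_ens = hydro_ens[:, None] + hydro_error

# The dressed sample is wider than the raw ensemble: it now also carries
# the hydrological component of the uncertainty
print(hydro_ens.std(), total_ens.std())
```

The 'lumped' alternative would instead dress a single deterministic forecast with one combined (meteorological plus hydrological) error distribution.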

  10. Multimodel ensemble simulations of present-day and near-future tropospheric ozone

    NARCIS (Netherlands)

    Stevenson, D.S.; Dentener, F.J.; van Noije, T.P.C.; Eskes, H.J.; Krol, M.C.

    2006-01-01

    Global tropospheric ozone distributions, budgets, and radiative forcings from an ensemble of 26 state-of-the-art atmospheric chemistry models have been intercompared and synthesized as part of a wider study into both the air quality and climate roles of ozone. Results from three 2030 emissions

  11. Robust Deterministic Controlled Phase-Flip Gate and Controlled-Not Gate Based on Atomic Ensembles Embedded in Double-Sided Optical Cavities

    Science.gov (United States)

    Liu, A.-Peng; Cheng, Liu-Yong; Guo, Qi; Zhang, Shou

    2018-02-01

    We first propose a scheme for a controlled phase-flip gate between a flying photon qubit and the collective spin wave (magnon) of an atomic ensemble assisted by double-sided cavity quantum systems. Then we propose a deterministic controlled-not gate on magnon qubits with parity-check building blocks. Both gates can be accomplished with 100% success probability in principle. Atomic ensembles are employed so that the light-matter coupling is markedly improved by collective enhancement. We assess the performance of the gates and the results show that they can be faithfully constituted with current experimental techniques.

  12. EnsembleGraph: Interactive Visual Analysis of Spatial-Temporal Behavior for Ensemble Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Shu, Qingya; Guo, Hanqi; Che, Limei; Yuan, Xiaoru; Liu, Junfeng; Liang, Jie

    2016-04-19

    We present a novel visualization framework—EnsembleGraph—for analyzing ensemble simulation data, in order to help scientists understand behavior similarities between ensemble members over space and time. A graph-based representation is used to visualize individual spatiotemporal regions with similar behaviors, which are extracted by hierarchical clustering algorithms. A user interface with multiple linked views enables users to explore, locate, and compare regions that have similar behaviors across ensemble members, and then to investigate and analyze the selected regions in detail. The driving application of this paper is the study of regional emission influences on tropospheric ozone, based on ensemble simulations conducted with different anthropogenic emission sources switched off, using the MOZART-4 (model of ozone and related tracers, version 4) model. We demonstrate the effectiveness of our method by visualizing the MOZART-4 ensemble simulation data and evaluating the relative regional emission influences on tropospheric ozone concentrations. Positive feedback from domain experts and two case studies further confirm the usefulness of our method.

  13. Hong-Ou-Mandel Interference between Two Deterministic Collective Excitations in an Atomic Ensemble

    Science.gov (United States)

    Li, Jun; Zhou, Ming-Ti; Jing, Bo; Wang, Xu-Jie; Yang, Sheng-Jun; Jiang, Xiao; Mølmer, Klaus; Bao, Xiao-Hui; Pan, Jian-Wei

    2016-10-01

    We demonstrate deterministic generation of two distinct collective excitations in one atomic ensemble, and we realize the Hong-Ou-Mandel interference between them. Using Rydberg blockade we create single collective excitations in two different Zeeman levels, and we use stimulated Raman transitions to perform a beam-splitter operation between the excited atomic modes. By converting the atomic excitations into photons, the two-excitation interference is measured by photon coincidence detection with a visibility of 0.89(6). The Hong-Ou-Mandel interference witnesses an entangled NOON state of the collective atomic excitations, and we demonstrate its two times enhanced sensitivity to a magnetic field compared with a single excitation. Our work implements a minimal instance of boson sampling and paves the way for further multimode and multiexcitation studies with collective excitations of atomic ensembles.

  14. Cluster-based analysis of multi-model climate ensembles

    Science.gov (United States)

    Hyde, Richard; Hossaini, Ryan; Leeson, Amber A.

    2018-06-01

    Clustering - the automated grouping of similar data - can provide powerful and unique insight into large and complex data sets, in a fast and computationally efficient manner. While clustering has been used in a variety of fields (from medical image processing to economics), its application within atmospheric science has been fairly limited to date, and the potential benefits of the application of advanced clustering techniques to climate data (both model output and observations) has yet to be fully realised. In this paper, we explore the specific application of clustering to a multi-model climate ensemble. We hypothesise that clustering techniques can provide (a) a flexible, data-driven method of testing model-observation agreement and (b) a mechanism with which to identify model development priorities. We focus our analysis on chemistry-climate model (CCM) output of tropospheric ozone - an important greenhouse gas - from the recent Atmospheric Chemistry and Climate Model Intercomparison Project (ACCMIP). Tropospheric column ozone from the ACCMIP ensemble was clustered using the Data Density based Clustering (DDC) algorithm. We find that a multi-model mean (MMM) calculated using members of the most-populous cluster identified at each location offers a reduction of up to ˜ 20 % in the global absolute mean bias between the MMM and an observed satellite-based tropospheric ozone climatology, with respect to a simple, all-model MMM. On a spatial basis, the bias is reduced at ˜ 62 % of all locations, with the largest bias reductions occurring in the Northern Hemisphere - where ozone concentrations are relatively large. However, the bias is unchanged at 9 % of all locations and increases at 29 %, particularly in the Southern Hemisphere. The latter demonstrates that although cluster-based subsampling acts to remove outlier model data, such data may in fact be closer to observed values in some locations. We further demonstrate that clustering can provide a viable and
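
    The cluster-based multi-model mean (MMM) idea can be sketched at a single grid cell as follows. The ozone values are invented, and the one-line grouping rule is a crude stand-in for the Data Density based Clustering (DDC) algorithm used in the paper:

```python
import numpy as np

def populous_cluster_mean(values, tol):
    """Mean of the largest group of model values lying within +/- tol of a
    common centre; a crude stand-in for a density-based clustering step."""
    best = max((np.abs(values - c) <= tol for c in values),
               key=lambda mask: mask.sum())
    return values[best].mean()

# Hypothetical tropospheric column ozone (DU) from 7 models at one location
models = np.array([29.8, 30.1, 30.4, 30.2, 36.0, 29.9, 41.5])

all_model_mmm = models.mean()                       # simple all-model MMM
cluster_mmm = populous_cluster_mean(models, tol=1.0)  # most-populous cluster
print(all_model_mmm, cluster_mmm)
```

Here the cluster-based MMM discards the two outlying models, which reduces the bias if the consensus is closer to the observations, but, as the record notes, can also discard models that happen to be closer to the truth in some regions.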

  15. A new deterministic Ensemble Kalman Filter with one-step-ahead smoothing for storm surge forecasting

    KAUST Repository

    Raboudi, Naila

    2016-11-01

    The Ensemble Kalman Filter (EnKF) is a popular data assimilation method for state-parameter estimation. Following a sequential assimilation strategy, it breaks the problem into alternating cycles of forecast and analysis steps. In the forecast step, the dynamical model is used to integrate a stochastic sample approximating the state analysis distribution (called the analysis ensemble) to obtain a forecast ensemble. In the analysis step, the forecast ensemble is updated with the incoming observation using a Kalman-like correction, which is then used for the next forecast step. In realistic large-scale applications, EnKFs are implemented with limited ensembles and often poorly known model error statistics, leading to a crude approximation of the forecast covariance. This strongly limits the filter performance. Recently, a new EnKF was proposed in [1] following a one-step-ahead smoothing strategy (EnKF-OSA), which involves an OSA smoothing of the state between two successive analyses. At each time step, EnKF-OSA exploits the observation twice. The incoming observation is first used to smooth the ensemble at the previous time step. The resulting smoothed ensemble is then integrated forward to compute a "pseudo forecast" ensemble, which is again updated with the same observation. The idea of constraining the state with future observations is to add more information to the estimation process in order to mitigate the suboptimal character of EnKF-like methods. The second EnKF-OSA "forecast" is computed from the smoothed ensemble and should therefore provide an improved background. In this work, we propose a deterministic variant of EnKF-OSA, based on the Singular Evolutive Interpolated Ensemble Kalman (SEIK) filter. The motivation is to avoid the observation perturbations of the EnKF in order to improve the scheme's behavior when assimilating big data sets with small ensembles. The new SEIK-OSA scheme is implemented and its efficiency is demonstrated
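
    One EnKF-OSA cycle, using the observation first to smooth the previous ensemble and then to update the re-forecast, can be sketched on a scalar toy model. Note that this sketch uses the stochastic (perturbed-observation) EnKF update for brevity, whereas the record proposes a deterministic SEIK-based variant precisely to avoid those perturbations; all numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(ens, y, h, r, rng):
    """Stochastic EnKF analysis step for a scalar state, obs y = h*x + noise."""
    P = np.var(ens, ddof=1)
    K = P * h / (h * h * P + r)                          # Kalman gain
    y_pert = y + rng.normal(0.0, np.sqrt(r), ens.size)   # perturbed observations
    return ens + K * (y_pert - h * ens)

# One OSA cycle on the toy linear model x_{k+1} = 0.9 x_k (+ small model noise)
ens = rng.normal(0.0, 1.0, size=40)   # analysis ensemble at time k
y = 1.2                               # observation valid at time k+1

# Step 1: smooth the time-k ensemble with the future observation
# (effective operator = model * obs operator = 0.9 for this linear toy)
smoothed = enkf_update(ens, y, h=0.9, r=0.25, rng=rng)

# Step 2: re-forecast the smoothed ensemble ("pseudo forecast")
pseudo_forecast = 0.9 * smoothed + rng.normal(0.0, 0.1, 40)

# Step 3: update the pseudo forecast with the SAME observation
analysis = enkf_update(pseudo_forecast, y, h=1.0, r=0.25, rng=rng)
print(analysis.mean())
```

Using the observation twice in this way is what gives the OSA scheme its improved background compared with the standard forecast-then-analyze cycle.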

  16. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    Science.gov (United States)

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  17. Parameter estimation for stiff deterministic dynamical systems via ensemble Kalman filter

    International Nuclear Information System (INIS)

    Arnold, Andrea; Calvetti, Daniela; Somersalo, Erkki

    2014-01-01

    A commonly encountered problem in numerous areas of applications is to estimate the unknown coefficients of a dynamical system from direct or indirect observations at discrete times of some of the components of the state vector. A related problem is to estimate unobserved components of the state. An egregious example of such a problem is provided by metabolic models, in which the numerous model parameters and the concentrations of the metabolites in tissue are to be estimated from concentration data in the blood. A popular method for addressing similar questions in stochastic and turbulent dynamics is the ensemble Kalman filter (EnKF), a particle-based filtering method that generalizes classical Kalman filtering. In this work, we adapt the EnKF algorithm for deterministic systems in which the numerical approximation error is interpreted as a stochastic drift with variance based on classical error estimates of numerical integrators. This approach, which is particularly suitable for stiff systems where the stiffness may depend on the parameters, allows us to effectively exploit the parallel nature of particle methods. Moreover, we demonstrate how spatial prior information about the state vector, which helps the stability of the computed solution, can be incorporated into the filter. The viability of the approach is shown by computed examples, including a metabolic system modeling an ischemic episode in skeletal muscle, with a high number of unknown parameters. (paper)
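
    The core device described above, augmenting the state with the unknown parameters so the EnKF estimates both jointly, can be sketched on a toy scalar decay model. The stiff metabolic systems and the numerical-error-based model noise of the paper are far beyond this illustration; all values here are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Truth: x decays with unknown rate k_true; we observe x with small noise
k_true, dt, r = 0.5, 0.1, 1e-4
x, N = 1.0, 100

# Augmented ensemble: row 0 = state x, row 1 = parameter k (kept positive)
ens = np.vstack([rng.normal(1.0, 0.1, N), rng.lognormal(-1.0, 0.5, N)])

for _ in range(50):
    x *= np.exp(-k_true * dt)                 # truth propagation
    y = x + rng.normal(0.0, np.sqrt(r))       # noisy observation of x
    ens[0] *= np.exp(-ens[1] * dt)            # forecast each member with ITS k
    C = np.cov(ens)                           # 2x2 sample covariance of (x, k)
    K = C[:, 0] / (C[0, 0] + r)               # gain for obs operator H = [1, 0]
    ens += np.outer(K, y + rng.normal(0.0, np.sqrt(r), N) - ens[0])

print(ens[1].mean())   # posterior estimate of the decay rate (should approach k_true)
```

The parameter is never observed directly; it is corrected through its sample cross-covariance with the observed state, which is exactly how the EnKF turns a filtering method into a parameter estimator.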

  18. Establishing and storing of deterministic quantum entanglement among three distant atomic ensembles.

    Science.gov (United States)

    Yan, Zhihui; Wu, Liang; Jia, Xiaojun; Liu, Yanhong; Deng, Ruijie; Li, Shujing; Wang, Hai; Xie, Changde; Peng, Kunchi

    2017-09-28

    It is crucial for the physical realization of quantum information networks to first establish entanglement among multiple space-separated quantum memories and then, at a user-controlled moment, to transfer the stored entanglement to quantum channels for the distribution and conveyance of information. Here we present an experimental demonstration of the generation, storage, and transfer of deterministic quantum entanglement among three spatially separated atomic ensembles. The off-line prepared multipartite entanglement of optical modes is mapped into three distant atomic ensembles to establish entanglement of atomic spin waves via electromagnetically induced transparency light-matter interaction. Then the stored atomic entanglement is transferred into a tripartite quadrature-entangled state of light, which is space-separated and can be dynamically allocated to three quantum channels for conveying quantum information. The existence of entanglement among the three released optical modes verifies that the system has the capacity to preserve multipartite entanglement. The presented protocol can be directly extended to larger quantum networks with more nodes. Continuous-variable encoding is a promising approach for quantum information and communication networks. Here, the authors show how to map entanglement from three spatial optical modes to three separated atomic samples via electromagnetically induced transparency, releasing it later on demand.

  19. Risk-based and deterministic regulation

    International Nuclear Information System (INIS)

    Fischer, L.E.; Brown, N.W.

    1995-07-01

    Both risk-based and deterministic methods are used for regulating the nuclear industry to protect public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events, which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is a need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need for using risk-based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose.

  20. On evaluation of ensemble precipitation forecasts with observation-based ensembles

    Directory of Open Access Journals (Sweden)

    S. Jaun

    2007-04-01

    Spatial interpolation of precipitation data is uncertain. How important is this uncertainty and how can it be considered in the evaluation of high-resolution probabilistic precipitation forecasts? These questions are discussed through an experimental evaluation of the COSMO consortium's limited-area ensemble prediction system COSMO-LEPS. The applied performance measure is the often-used Brier skill score (BSS). The observational references in the evaluation are (a) analyzed rain gauge data interpolated by ordinary Kriging and (b) ensembles of interpolated rain gauge data generated by stochastic simulation. This permits the consideration of either a deterministic reference (the event is observed or not with 100% certainty) or a probabilistic reference that makes allowance for uncertainties in spatial averaging. The evaluation experiments show that the evaluation uncertainties are substantial even for the large area (41 300 km2) of Switzerland, with a mean rain gauge distance as small as 7 km: the one- to three-day precipitation forecasts have skill that decreases with forecast lead time, but the one- and two-day forecast performances do not differ significantly.
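
    The Brier skill score computation with a deterministic (binary) versus a probabilistic (observation-ensemble) reference can be written down directly. The forecast and reference probabilities below are invented for illustration:

```python
import numpy as np

def brier_score(p_forecast, o):
    """Brier score: mean squared difference between forecast probability and
    outcome; o may be binary (deterministic reference) or itself a probability
    (an observation-ensemble reference that reflects interpolation uncertainty)."""
    return np.mean((np.asarray(p_forecast, float) - np.asarray(o, float)) ** 2)

def brier_skill_score(p_forecast, o, p_clim):
    """BSS relative to a constant climatological probability p_clim."""
    bs_ref = brier_score(np.full(len(o), p_clim), o)
    return 1.0 - brier_score(p_forecast, o) / bs_ref

# Event: daily precipitation above some threshold at a grid point
p_fc = [0.9, 0.2, 0.7, 0.1, 0.8]      # ensemble forecast probabilities
o_det = [1, 0, 1, 0, 1]               # deterministic reference: event yes/no
o_prob = [0.95, 0.1, 0.6, 0.05, 0.9]  # probabilistic reference probabilities

bss_det = brier_skill_score(p_fc, o_det, p_clim=0.5)
bss_prob = brier_skill_score(p_fc, o_prob, p_clim=0.5)
print(bss_det, bss_prob)
```

Comparing the two BSS values shows how the choice of observational reference alone changes the apparent forecast skill, which is the evaluation uncertainty the record quantifies.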

  1. Creating a Satellite-Based Record of Tropospheric Ozone

    Science.gov (United States)

    Oetjen, Hilke; Payne, Vivienne H.; Kulawik, Susan S.; Eldering, Annmarie; Worden, John; Edwards, David P.; Francis, Gene L.; Worden, Helen M.

    2013-01-01

    The TES retrieval algorithm has been applied to IASI radiances. We compare the retrieved ozone profiles with ozone sonde profiles for mid-latitudes for the year 2008. We find a positive bias in the IASI ozone profiles in the UTLS region of up to 22 %. The spatial coverage of the IASI instrument allows sampling of effectively the same air mass with several IASI scenes simultaneously. Comparisons of the root-mean-square of an ensemble of IASI profiles to theoretical errors indicate that the measurement noise and the interference of temperature and water vapour on the retrieval together mostly explain the empirically derived random errors. The total degrees of freedom for signal of the retrieval for ozone are 3.1 +/- 0.2 and the tropospheric degrees of freedom are 1.0 +/- 0.2 for the described cases. IASI ozone profiles agree within the error bars with coincident ozone profiles derived from a TES stare sequence for the ozone sonde station at Bratt's Lake (50.2 deg N, 104.7 deg W).

  2. Learning to Run with Actor-Critic Ensemble

    OpenAIRE

    Huang, Zhewei; Zhou, Shuchang; Zhuang, BoEr; Zhou, Xinyu

    2017-01-01

    We introduce an Actor-Critic Ensemble (ACE) method for improving the performance of the Deep Deterministic Policy Gradient (DDPG) algorithm. At inference time, our method uses a critic ensemble to select the best action from proposals of multiple actors running in parallel. By having a larger candidate set, our method can avoid actions that have fatal consequences, while staying deterministic. Using ACE, we won 2nd place in the NIPS'17 Learning to Run competition, under the name of "Megvii-hzw...
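
    The ACE selection rule at inference time (each actor proposes an action, the critic ensemble scores every proposal, and the best-valued proposal wins) reduces to a few lines. The linear "actors" and "critics" below are trivial stand-ins for the trained networks:

```python
import numpy as np

def ace_select(state, actors, critics):
    """Actor-Critic Ensemble action selection (toy sketch): each actor proposes
    an action; the critic ensemble scores every proposal and the action with
    the highest mean critic value is executed, keeping the policy deterministic."""
    proposals = [actor(state) for actor in actors]
    scores = [np.mean([critic(state, a) for critic in critics])
              for a in proposals]
    return proposals[int(np.argmax(scores))]

# Hypothetical stand-ins: scalar linear actors and quadratic critics whose
# preferred action is 1.2 * state
actors = [lambda s, w=w: w * s for w in (0.5, 1.0, 1.5)]
critics = [lambda s, a, b=b: -(a - 1.2 * s) ** 2 + b for b in (0.0, 0.1)]

best = ace_select(2.0, actors, critics)
print(best)   # the proposal closest to the critics' preferred action
```

Averaging over several critics reduces the chance that a single over-optimistic value estimate selects a catastrophic action, which is the failure mode ACE targets.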

  3. Dispersion of aerosol particles in the free atmosphere using ensemble forecasts

    Directory of Open Access Journals (Sweden)

    T. Haszpra

    2013-10-01

    The dispersion of aerosol particle pollutants is studied using 50 members of an ensemble forecast for the example of a hypothetical free-atmospheric emission above Fukushima over a period of 2.5 days. Considerable differences are found among the dispersion predictions of the different ensemble members, as well as between the ensemble mean and the deterministic result at the end of the observation period. The variance is found to decrease with the particle size. The geographical area where a threshold concentration is exceeded in at least one ensemble member expands to a region 5–10 times larger than the area from the deterministic forecast, both for the air column "concentration" and for the "deposition" field. We demonstrate that the root-mean-square distance of any particle from its own clones in the ensemble members can reach values on the order of one thousand kilometers. Even the centers of mass of the particle clouds of the ensemble members deviate considerably from that obtained by the deterministic forecast. All this indicates that investigating the dispersion of aerosol particles in the spirit of ensemble forecasting provides useful hints for the improvement of risk assessment.
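
    The root-mean-square distance diagnostic mentioned above, for the clones of a single particle across ensemble members, is straightforward to compute. The positions below are synthetic, with an assumed spread of roughly 500 km:

```python
import numpy as np

rng = np.random.default_rng(6)

# Positions (x, y in km) of the SAME particle in 50 ensemble members after
# some integration time; the ~500 km spread here is purely illustrative
clones = rng.normal(0.0, 500.0, size=(50, 2))

# Root-mean-square distance of the clones from their centre of mass
com = clones.mean(axis=0)
rms = np.sqrt(np.mean(np.sum((clones - com) ** 2, axis=1)))
print(rms)   # several hundred kilometres for this synthetic spread
```

Computing this quantity per particle and per lead time is what lets the study show clone separations growing to the order of a thousand kilometres.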

  4. Deterministic and Storable Single-Photon Source Based on a Quantum Memory

    International Nuclear Information System (INIS)

    Chen Shuai; Chen, Y.-A.; Strassel, Thorsten; Zhao Bo; Yuan Zhensheng; Pan Jianwei; Schmiedmayer, Joerg

    2006-01-01

    A single-photon source is realized with a cold atomic ensemble (87Rb atoms). A single excitation, written into an atomic quantum memory by Raman scattering of a laser pulse, is retrieved deterministically as a single photon at a predetermined time. It is shown that the production rate of single photons can be enhanced considerably by a feedback circuit while the single-photon quality is conserved. Such a single-photon source is well suited for future large-scale realization of quantum communication and linear optical quantum computation.

  5. Probabilistic Predictions of PM2.5 Using a Novel Ensemble Design for the NAQFC

    Science.gov (United States)

    Kumar, R.; Lee, J. A.; Delle Monache, L.; Alessandrini, S.; Lee, P.

    2017-12-01

Poor air quality (AQ) in the U.S. is estimated to cause about 60,000 premature deaths, with costs of $100B-$150B, annually. To reduce such losses, the National AQ Forecasting Capability (NAQFC) at the National Oceanic and Atmospheric Administration (NOAA) produces forecasts of ozone, particulate matter less than 2.5 μm in diameter (PM2.5), and other pollutants, so that advance notice and warnings can be issued to help individuals and communities limit exposure and reduce air-pollution-caused health problems. The current NAQFC, based on the U.S. Environmental Protection Agency Community Multi-scale AQ (CMAQ) modeling system, provides only deterministic AQ forecasts and does not quantify the uncertainty associated with the predictions, which could be large due to the chaotic nature of the atmosphere and the nonlinearity of atmospheric chemistry. This project aims to take NAQFC a step further in the direction of probabilistic AQ prediction by exploring and quantifying the potential value of ensemble predictions of PM2.5, perturbing three key aspects of PM2.5 modeling: the meteorology, the emissions, and the CMAQ secondary organic aerosol formulation. This presentation focuses on the impact of meteorological variability, which is represented by three members of NOAA's Short-Range Ensemble Forecast (SREF) system that were down-selected by hierarchical cluster analysis. These three SREF members provide the physics configurations and initial/boundary conditions for Weather Research and Forecasting (WRF) model runs that generate the output variables needed to drive CMAQ that are missing from operational SREF output. We conducted WRF runs for January, April, July, and October 2016 to capture seasonal changes in meteorology. Emissions of trace gases and aerosols were estimated with the Sparse Matrix Operator Kernel Emissions (SMOKE) system using the WRF output. The WRF and SMOKE output drive a 3-member CMAQ mini-ensemble of once-daily, 48-h PM2.5 forecasts for the same four months.
The CMAQ mini-ensemble
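The abstract says the three SREF members were down-selected by hierarchical cluster analysis. A minimal sketch of such a down-selection, assuming Ward linkage on flattened forecast fields and a nearest-to-centroid representative rule (neither detail is specified in the abstract):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def select_representative_members(forecasts, n_clusters=3):
    """Down-select ensemble members by hierarchical cluster analysis.

    forecasts: (n_members, n_gridpoints) array of flattened member fields.
    Returns sorted indices of one representative member per cluster
    (the member closest to its cluster centroid).
    """
    labels = fcluster(linkage(forecasts, method="ward"),
                      t=n_clusters, criterion="maxclust")
    reps = []
    for c in range(1, n_clusters + 1):
        idx = np.where(labels == c)[0]
        centroid = forecasts[idx].mean(axis=0)
        dists = np.linalg.norm(forecasts[idx] - centroid, axis=1)
        reps.append(int(idx[np.argmin(dists)]))
    return sorted(reps)

# Example: 21 synthetic members clustered around 3 distinct regimes.
rng = np.random.default_rng(0)
members = np.concatenate([rng.normal(m, 0.1, size=(7, 50))
                          for m in (0.0, 1.0, 2.0)])
reps = select_representative_members(members, n_clusters=3)
```

Picking the member nearest the centroid (rather than the centroid itself) keeps a dynamically consistent model state for driving the downstream WRF/CMAQ runs.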

  6. Appearance of deterministic mixing behavior from ensembles of fluctuating hydrodynamics simulations of the Richtmyer-Meshkov instability

    KAUST Repository

    Narayanan, Kiran

    2018-04-19

We obtain numerical solutions of the two-fluid fluctuating compressible Navier-Stokes (FCNS) equations, which consistently account for thermal fluctuations from meso- to macroscales, in order to study the effect of such fluctuations on the mixing behavior in the Richtmyer-Meshkov instability (RMI). The numerical method used was successfully verified in two stages: for the deterministic fluxes by comparison against an air-SF6 RMI experiment, and for the stochastic terms by comparison against direct simulation Monte Carlo results for He-Ar RMI. We present results from fluctuating hydrodynamic RMI simulations for three He-Ar systems whose length scales decrease in order of magnitude from macroscopic to mesoscopic, with different levels of thermal fluctuations characterized by a nondimensional Boltzmann number (Bo). For a multidimensional FCNS system on a regular Cartesian grid, when using a discretization of the space-time stochastic flux Z(x,t) of the form Z(x,t) → (1/√(h³Δt)) N(ih, nΔt), for spatial interval h, time interval Δt, and Gaussian noise N, the spatial interval h should be greater than h₀, with h₀ corresponding to a cell volume that contains a sufficient number of molecules of the fluid such that the fluctuations are physically meaningful and produce the right equilibrium spectrum. For the mesoscale RMI systems simulated, it was desirable to use a cell size smaller than this limit in order to resolve the viscous shock. This was achieved by using a modified regularization of the noise term via Z(x,t) → (1/√(max(h³, h₀³)Δt)) N(ih, nΔt), with h₀ = ξh. When Bo ≪ 1, deterministic mixing behavior emerges as the ensemble-averaged behavior of several fluctuating instances, whereas when Bo ≈ 1, a deviation from deterministic behavior is observed. For all cases, the FCNS solution provides bounds on the growth rate of the amplitude of the mixing layer.
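The regularized noise amplitude described above can be sketched directly. The helper below is illustrative only: the 1/√(max(h³, h₀³)Δt) scaling follows the reconstructed formula, while the parameter values in the demo are arbitrary:

```python
import numpy as np

def stochastic_flux_sample(h, dt, h0, rng, shape):
    """Sample a discretized space-time white-noise flux with amplitude
    1/sqrt(V * dt), where the cell volume is regularized as
    V = max(h**3, h0**3): cells smaller than h0 (too few molecules for
    physically meaningful fluctuations) do not over-amplify the noise."""
    volume = max(h**3, h0**3)
    return rng.standard_normal(shape) / np.sqrt(volume * dt)

# With h < h0 the noise amplitude is capped at its h = h0 value.
rng = np.random.default_rng(1)
z_fine = stochastic_flux_sample(h=0.5e-6, dt=1e-9, h0=1e-6, rng=rng, shape=(4,))
z_ref = stochastic_flux_sample(h=1e-6, dt=1e-9, h0=1e-6, rng=rng, shape=(4,))
```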

  7. Appearance of deterministic mixing behavior from ensembles of fluctuating hydrodynamics simulations of the Richtmyer-Meshkov instability

    KAUST Repository

    Narayanan, Kiran; Samtaney, Ravi

    2018-01-01

We obtain numerical solutions of the two-fluid fluctuating compressible Navier-Stokes (FCNS) equations, which consistently account for thermal fluctuations from meso- to macroscales, in order to study the effect of such fluctuations on the mixing behavior in the Richtmyer-Meshkov instability (RMI). The numerical method used was successfully verified in two stages: for the deterministic fluxes by comparison against an air-SF6 RMI experiment, and for the stochastic terms by comparison against direct simulation Monte Carlo results for He-Ar RMI. We present results from fluctuating hydrodynamic RMI simulations for three He-Ar systems whose length scales decrease in order of magnitude from macroscopic to mesoscopic, with different levels of thermal fluctuations characterized by a nondimensional Boltzmann number (Bo). For a multidimensional FCNS system on a regular Cartesian grid, when using a discretization of the space-time stochastic flux Z(x,t) of the form Z(x,t) → (1/√(h³Δt)) N(ih, nΔt), for spatial interval h, time interval Δt, and Gaussian noise N, the spatial interval h should be greater than h₀, with h₀ corresponding to a cell volume that contains a sufficient number of molecules of the fluid such that the fluctuations are physically meaningful and produce the right equilibrium spectrum. For the mesoscale RMI systems simulated, it was desirable to use a cell size smaller than this limit in order to resolve the viscous shock. This was achieved by using a modified regularization of the noise term via Z(x,t) → (1/√(max(h³, h₀³)Δt)) N(ih, nΔt), with h₀ = ξh. When Bo ≪ 1, deterministic mixing behavior emerges as the ensemble-averaged behavior of several fluctuating instances, whereas when Bo ≈ 1, a deviation from deterministic behavior is observed. For all cases, the FCNS solution provides bounds on the growth rate of the amplitude of the mixing layer.

  8. Advances in sequential data assimilation and numerical weather forecasting: An Ensemble Transform Kalman-Bucy Filter, a study on clustering in deterministic ensemble square root filters, and a test of a new time stepping scheme in an atmospheric model

    Science.gov (United States)

    Amezcua, Javier

This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. Therefore, we present a suitable integration scheme that handles the stiffening of the differential equations involved and does not incur further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. Advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation using deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models: an M-member ensemble detaches into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reverted by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model.
The RAW filter is an improvement to the widely popular Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion
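The deterministic ensemble square root filters discussed above replace the perturbed-observation update with a deterministic transform of the ensemble perturbations. A self-contained sketch of one member of that family, the ensemble transform Kalman filter (ETKF) analysis step with a symmetric square root (a standard textbook formulation, not necessarily the dissertation's exact variant):

```python
import numpy as np

def etkf_update(Xf, y, H, R):
    """Deterministic (square-root) ETKF analysis step.

    Xf : (n, m) forecast ensemble (n state dims, m members)
    y  : (p,) observations;  H : (p, n) linear obs operator
    R  : (p, p) observation error covariance
    Returns the (n, m) analysis ensemble.
    """
    n, m = Xf.shape
    xb = Xf.mean(axis=1)
    Xp = Xf - xb[:, None]                       # forecast perturbations
    Yp = H @ Xp                                 # perturbations in obs space
    Rinv = np.linalg.inv(R)
    A = (m - 1) * np.eye(m) + Yp.T @ Rinv @ Yp
    evals, evecs = np.linalg.eigh(A)
    Ainv = evecs @ np.diag(1.0 / evals) @ evecs.T
    # symmetric square root of (m-1) * A^{-1}: transforms the perturbations
    W = evecs @ np.diag(np.sqrt((m - 1) / evals)) @ evecs.T
    wa = Ainv @ Yp.T @ Rinv @ (y - H @ xb)      # mean-update weights
    return xb[:, None] + Xp @ (wa[:, None] + W)
```

The symmetric choice of square root preserves the ensemble mean exactly; the random-rotation variants mentioned in the abstract post-multiply W by a random orthogonal matrix to break up clustering.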

  9. Deterministic multimode photonic device for quantum-information processing

    DEFF Research Database (Denmark)

    Nielsen, Anne E. B.; Mølmer, Klaus

    2010-01-01

    We propose the implementation of a light source that can deterministically generate a rich variety of multimode quantum states. The desired states are encoded in the collective population of different ground hyperfine states of an atomic ensemble and converted to multimode photonic states by exci...

  10. Ozone Source Apportionment in CMAQ

    Science.gov (United States)

Ozone source attribution has been used to support various policy purposes, including interstate transport (the Cross-State Air Pollution Rule) by the U.S. EPA and ozone nonattainment area designations by state agencies. Common scientific applications include tracking intercontinental transport of ozone and ozone precursors and delineating anthropogenic and non-anthropogenic contributions to ozone in North America. As of the public release due in September 2013, CMAQ's Integrated Source Apportionment Method (ISAM) attributes PM EC/OC, sulfate, nitrate, ammonium, and ozone and its precursors NOx and VOC to sectors/regions of users' interest. Although the peroxide-to-nitric-acid production ratio has been the most common indicator for distinguishing NOx-limited ozone production from VOC-limited production, additional indicators are implemented, allowing for an ensemble decision based on a total of 9 available indicator ratios. Moreover, an alternative approach to ozone attribution, based on the idea of chemical sensitivity in a linearized system that has formed the basis of the chemical treatment in forward DDM/backward adjoint tools, has been implemented in CMAQ. This method does not require categorization into either ozone regime. In this study, ISAM will simulate 2010 North American ozone using all of the above gas-phase attribution methods. The results are to be compared with zero-out differences for those sectors in the host model runs. In addition, ozone contribution wil

  11. Ozone sensitivity to varying greenhouse gases and ozone-depleting substances in CCMI-1 simulations

    Directory of Open Access Journals (Sweden)

    O. Morgenstern

    2018-01-01

Ozone fields simulated for the first phase of the Chemistry-Climate Model Initiative (CCMI-1) will be used as forcing data in the 6th Coupled Model Intercomparison Project. Here we assess, using reference and sensitivity simulations produced for CCMI-1, the suitability of CCMI-1 model results for this process, investigating the degree of consistency amongst models regarding their responses to variations in individual forcings. We consider the influences of methane, nitrous oxide, a combination of chlorinated or brominated ozone-depleting substances, and a combination of carbon dioxide and other greenhouse gases. We find varying degrees of consistency in the models' responses in ozone to these individual forcings, including some considerable disagreement. In particular, the response of total-column ozone to these forcings is less consistent across the multi-model ensemble than the corresponding ozone-profile responses. We analyse how stratospheric age of air, a commonly used diagnostic of stratospheric transport, responds to the forcings. For this diagnostic we find some salient differences in model behaviour, which may explain some of the findings for ozone. The findings imply that the ozone fields derived from CCMI-1 are subject to considerable uncertainties regarding the impacts of these anthropogenic forcings. We offer some thoughts on how to best approach the problem of generating a consensus ozone database from a multi-model ensemble such as CCMI-1.

  12. Ozone Sensitivity to Varying Greenhouse Gases and Ozone-Depleting Substances in CCMI-1 Simulations

    Science.gov (United States)

Morgenstern, Olaf; Stone, Kane A.; Schofield, Robyn; Akiyoshi, Hideharu; Yamashita, Yousuke; Kinnison, Douglas E.; Garcia, Rolando R.; Sudo, Kengo; Plummer, David A.; Scinocca, John

    2018-01-01

Ozone fields simulated for the first phase of the Chemistry-Climate Model Initiative (CCMI-1) will be used as forcing data in the 6th Coupled Model Intercomparison Project. Here we assess, using reference and sensitivity simulations produced for CCMI-1, the suitability of CCMI-1 model results for this process, investigating the degree of consistency amongst models regarding their responses to variations in individual forcings. We consider the influences of methane, nitrous oxide, a combination of chlorinated or brominated ozone-depleting substances, and a combination of carbon dioxide and other greenhouse gases. We find varying degrees of consistency in the models' responses in ozone to these individual forcings, including some considerable disagreement. In particular, the response of total-column ozone to these forcings is less consistent across the multi-model ensemble than the corresponding ozone-profile responses. We analyse how stratospheric age of air, a commonly used diagnostic of stratospheric transport, responds to the forcings. For this diagnostic we find some salient differences in model behaviour, which may explain some of the findings for ozone. The findings imply that the ozone fields derived from CCMI-1 are subject to considerable uncertainties regarding the impacts of these anthropogenic forcings. We offer some thoughts on how to best approach the problem of generating a consensus ozone database from a multi-model ensemble such as CCMI-1.

  13. Development of a regional ensemble prediction method for probabilistic weather prediction

    International Nuclear Information System (INIS)

    Nohara, Daisuke; Tamura, Hidetoshi; Hirakuchi, Hiromaru

    2015-01-01

A regional ensemble prediction method has been developed to provide probabilistic weather prediction using a numerical weather prediction model. To obtain perturbations consistent with the synoptic weather pattern, both initial and lateral boundary perturbations were given by differences between the control and ensemble members of the Japan Meteorological Agency (JMA)'s operational one-week ensemble forecast. The method provides multiple ensemble members with a horizontal resolution of 15 km for 48 hours, based on a downscaling of the JMA's operational global forecast combined with the perturbations. The ensemble prediction was examined for the heavy snowfall event in the Kanto area on January 14, 2013. The results showed that the predictions represent different features of the high-resolution spatiotemporal distribution of precipitation, depending on the intensity and location of the extra-tropical cyclone in each ensemble member. Although the ensemble prediction has model biases in the means and variances of some variables, such as wind speed and solar radiation, it has the potential to add probabilistic information to a deterministic prediction. (author)
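The perturbation strategy described above (regional member = regional analysis plus the difference between a global ensemble member and the global control) can be sketched as follows; the function and field shapes are illustrative assumptions, not the paper's code:

```python
import numpy as np

def build_regional_initial_conditions(control_global, members_global, regional_analysis):
    """Regional ensemble initial conditions: add the difference between each
    global ensemble member and the global control forecast to the
    high-resolution regional analysis.  All fields are assumed to be already
    interpolated onto the regional grid and flattened to 1-D."""
    perturbations = members_global - control_global[None, :]
    return regional_analysis[None, :] + perturbations
```

Lateral boundary perturbations would be built the same way at each boundary-update time, which is what keeps the perturbations consistent with the synoptic pattern of the driving global ensemble.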

  14. Wind power application research on the fusion of the determination and ensemble prediction

    Science.gov (United States)

    Lan, Shi; Lina, Xu; Yuzhu, Hao

    2017-07-01

A fused wind-speed product for wind farms is designed using wind-speed products of the ensemble prediction from the European Centre for Medium-Range Weather Forecasts (ECMWF) and professional numerical wind-power model products based on the fifth-generation Mesoscale Model (MM5) and the Beijing Rapid Update Cycle (BJ-RUC), which are suitable for short-term wind power forecasting and electric dispatch. A single-valued forecast is formed by calculating different ensemble statistics of the Bayesian probabilistic forecast representing the uncertainty of the ECMWF ensemble prediction. An autoregressive integrated moving average (ARIMA) model is used to improve the time resolution of the single-valued forecast, and, based on Bayesian model averaging (BMA) and the deterministic numerical model prediction, the optimal wind speed forecasting curve and its confidence interval are provided. The results show that the fused forecast clearly improves accuracy relative to the existing numerical forecasting products. Compared with the 0-24 h existing deterministic forecast in the validation period, the mean absolute error (MAE) is decreased by 24.3 % and the correlation coefficient (R) is increased by 12.5 %. In comparison with the ECMWF ensemble forecast, the MAE is reduced by 11.7 % and R is increased by 14.5 %. Additionally, the MAE does not increase as the forecast lead time lengthens.
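One simple way to form a single-valued forecast from ensemble statistics, in the spirit described above, is to pick the statistic that scores best over a training period. This is a deliberate simplification; the paper's Bayesian probabilistic machinery (BMA, ARIMA) is not reproduced here:

```python
import numpy as np

def fuse_single_valued(ens_train, obs_train, ens_new):
    """Choose the ensemble statistic (mean, median, or a quantile) that
    minimizes MAE over a training period, then apply it to new forecasts.
    Rows are forecast times, columns are ensemble members."""
    stats = {
        "mean":   lambda e: e.mean(axis=1),
        "median": lambda e: np.median(e, axis=1),
        "q25":    lambda e: np.quantile(e, 0.25, axis=1),
        "q75":    lambda e: np.quantile(e, 0.75, axis=1),
    }
    maes = {k: np.abs(f(ens_train) - obs_train).mean() for k, f in stats.items()}
    best = min(maes, key=maes.get)
    return best, stats[best](ens_new)
```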

  15. Gridded Calibration of Ensemble Wind Vector Forecasts Using Ensemble Model Output Statistics

    Science.gov (United States)

    Lazarus, S. M.; Holman, B. P.; Splitt, M. E.

    2017-12-01

A computationally efficient method is developed that performs gridded post-processing of ensemble wind vector forecasts. An expansive set of idealized WRF model simulations is generated to provide physically consistent high-resolution winds over a coastal domain characterized by an intricate land/water mask. Ensemble model output statistics (EMOS) is used to calibrate the ensemble wind vector forecasts at observation locations. The local EMOS predictive parameters (mean and variance) are then spread throughout the grid using flow-dependent statistical relationships extracted from the downscaled WRF winds. Using data withdrawal at 28 east-central Florida stations, the method is applied to one year of 24-h wind forecasts from the Global Ensemble Forecast System (GEFS). Compared to the raw GEFS, the approach improves both deterministic and probabilistic forecast skill. Analysis of multivariate rank histograms indicates the post-processed forecasts are calibrated. Two downscaling case studies are presented: a quiescent easterly flow event and a frontal passage. Strengths and weaknesses of the approach are presented and discussed.
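EMOS (nonhomogeneous Gaussian regression) fits a predictive distribution whose mean and variance are affine functions of the ensemble mean and variance. A minimal sketch for a scalar variable, fitting the four coefficients by minimum CRPS; the bivariate wind-vector version used in the study is more involved, and the optimizer choice here is an assumption:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of a Gaussian predictive distribution N(mu, sigma^2)."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z)
                    - 1.0 / np.sqrt(np.pi))

def fit_emos(ens_mean, ens_var, obs):
    """Fit EMOS coefficients (a, b, c, d) for the predictive distribution
    N(a + b*ens_mean, c + d*ens_var) by minimizing the mean CRPS."""
    def loss(p):
        a, b, c, d = p
        mu = a + b * ens_mean
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))  # keep variance positive
        return crps_gaussian(mu, sigma, obs).mean()
    res = minimize(loss, x0=[0.0, 1.0, 1.0, 1.0], method="Powell")
    return res.x
```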

  16. Developing an Ensemble Prediction System based on COSMO-DE

    Science.gov (United States)

    Theis, S.; Gebhardt, C.; Buchhold, M.; Ben Bouallègue, Z.; Ohl, R.; Paulat, M.; Peralta, C.

    2010-09-01

The numerical weather prediction model COSMO-DE is a configuration of the COSMO model with a horizontal grid size of 2.8 km. It has been running operationally at DWD since 2007; it covers the area of Germany and produces forecasts with a lead time of 0-21 hours. The model COSMO-DE is convection-permitting, which means that it forgoes a parametrisation of deep convection and simulates deep convection explicitly. One aim is an improved forecast of convective heavy-rain events. Convection-permitting models are in operational use at several weather services, but currently not in ensemble mode. It is expected that an ensemble system could reveal the advantages of a convection-permitting model even better. The probabilistic approach is necessary because the explicit simulation of convective processes for more than a few hours can no longer be viewed as a deterministic forecast. This is due to the chaotic behaviour and short life cycle of the processes which are now simulated explicitly. In the framework of the project COSMO-DE-EPS, DWD is developing and implementing an ensemble prediction system (EPS) for the model COSMO-DE. The project COSMO-DE-EPS comprises the generation of ensemble members, the verification and visualization of the ensemble forecasts, and statistical postprocessing. A pre-operational mode of the EPS with 20 ensemble members is foreseen to start in 2010. Operational use is envisaged for 2012, after an upgrade to 40 members and the inclusion of statistical postprocessing. The presentation introduces the project COSMO-DE-EPS and describes the design of the ensemble as planned for the pre-operational mode. In particular, the currently implemented method for the generation of ensemble members will be explained and discussed. The method includes variations of initial conditions, lateral boundary conditions, and model physics.
At present, pragmatic methods are applied which resemble the basic ideas of a multi-model approach

  17. Comparison of projection skills of deterministic ensemble methods using pseudo-simulation data generated from multivariate Gaussian distribution

    Science.gov (United States)

    Oh, Seok-Geun; Suh, Myoung-Seok

    2017-07-01

The projection skills of five ensemble methods were analyzed with respect to simulation skill, training period, and number of ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were: equal-weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root-mean-square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved upon the best member of each category. However, their projection skills are significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than the non-weighted methods, in particular for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skill, it was strongly sensitive to the PSD categories, training periods, and number of members. On the other hand, WEA_Tay and WEA_RAC showed relatively superior skill in both accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
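A weighting scheme in the spirit of WEA_RAC, rewarding high correlation with the training truth and penalizing high RMSE, might look like the following. The exact weighting formula used in the paper may differ; this is one plausible form:

```python
import numpy as np

def wea_rac_weights(members_train, obs_train, eps=1e-12):
    """Skill weights for weighted ensemble averaging: each member's weight
    grows with its correlation to observations and shrinks with its RMSE.
    members_train: (n_members, n_times); obs_train: (n_times,)."""
    w = []
    for m in members_train:
        rmse = np.sqrt(np.mean((m - obs_train) ** 2))
        corr = np.corrcoef(m, obs_train)[0, 1]
        w.append(max(corr, 0.0) / (rmse + eps))  # negative correlation gets zero weight
    w = np.asarray(w)
    return w / w.sum()
```

The weighted projection is then simply `weights @ members_future`; equal-weighted averaging (EWA) corresponds to uniform weights.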

  18. Ensemble Kalman filtering with one-step-ahead smoothing

    KAUST Repository

    Raboudi, Naila F.

    2018-01-11

The ensemble Kalman filter (EnKF) is widely used for sequential data assimilation. It operates as a succession of forecast and analysis steps. In realistic large-scale applications, EnKFs are implemented with small ensembles and poorly known model error statistics. This limits their representation of the background error covariances and, thus, their performance. This work explores the efficiency of the one-step-ahead (OSA) smoothing formulation of the Bayesian filtering problem for enhancing the data assimilation performance of EnKFs. Filtering with OSA smoothing introduces an additional update step with the future observation, conditioning the ensemble sampling on more information. This should provide an improved background ensemble in the analysis step, which may help to mitigate the suboptimal character of EnKF-based methods. Here, the authors demonstrate the efficiency of a stochastic EnKF with OSA smoothing for state estimation. They then introduce a deterministic-like EnKF-OSA based on the singular evolutive interpolated ensemble Kalman (SEIK) filter. The authors show that the proposed SEIK-OSA outperforms both SEIK, as it efficiently exploits the data twice, and the stochastic EnKF-OSA, as it avoids observational error undersampling. They present extensive assimilation results from numerical experiments conducted with the Lorenz-96 model to demonstrate SEIK-OSA's capabilities.
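For reference, the stochastic (perturbed-observation) EnKF analysis step that the OSA-smoothing variants build on can be sketched as follows; the SEIK/OSA machinery itself is not reproduced here:

```python
import numpy as np

def enkf_update(Xf, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.

    Xf : (n, m) forecast ensemble;  y : (p,) observations
    H  : (p, n) linear obs operator;  R : (p, p) obs error covariance
    Each member is updated against its own perturbed copy of y, which is
    the "observational error sampling" the deterministic variants avoid.
    """
    n, m = Xf.shape
    Xp = Xf - Xf.mean(axis=1, keepdims=True)
    Pf = Xp @ Xp.T / (m - 1)                          # sample background covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)    # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=m).T
    return Xf + K @ (Y - H @ Xf)
```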

  19. Deterministically entangling multiple remote quantum memories inside an optical cavity

    Science.gov (United States)

    Yan, Zhihui; Liu, Yanhong; Yan, Jieli; Jia, Xiaojun

    2018-01-01

Quantum memory for nonclassical states of light and entanglement among multiple remote quantum nodes hold promise for a large-scale quantum network; however, continuous-variable (CV) memory efficiency and the achievable degree of entanglement are limited by imperfect implementations. Here we propose a scheme to deterministically entangle multiple distant atomic ensembles based on CV cavity-enhanced quantum memory. The memory efficiency can be improved with the help of cavity-enhanced electromagnetically-induced-transparency dynamics. A high degree of entanglement among multiple atomic ensembles can be obtained by mapping the quantum state from multiple entangled optical modes into a collection of atomic spin waves inside optical cavities. Besides being of interest in terms of unconditional entanglement among multiple macroscopic objects, our scheme paves the way towards the practical application of quantum networks.

  20. Sequential ensemble-based optimal design for parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Li, Weixuan [Pacific Northwest National Laboratory, Richland, Washington, USA]; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside, California, USA]

    2016-10-01

The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupling EnKF with information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), the degrees of freedom for signal (DFS), and the relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies provide more accurate parameter estimation and state prediction than conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated: overall, a larger ensemble improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can equally be applied to other hydrological problems.
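The relative entropy (RE) metric built from first- and second-order ensemble statistics can be sketched with a Gaussian approximation of the prior and posterior parameter ensembles. This is illustrative; the paper's exact estimator may differ:

```python
import numpy as np

def gaussian_relative_entropy(ens_post, ens_prior):
    """KL divergence D(post || prior) between Gaussian approximations of two
    parameter ensembles, built from each ensemble's mean and covariance.
    Rows are parameters, columns are ensemble members."""
    mu1, mu0 = ens_post.mean(axis=1), ens_prior.mean(axis=1)
    S1 = np.atleast_2d(np.cov(ens_post))
    S0 = np.atleast_2d(np.cov(ens_prior))
    k = len(mu1)
    S0inv = np.linalg.inv(S0)
    d = mu1 - mu0
    return 0.5 * (np.trace(S0inv @ S1) + d @ S0inv @ d - k
                  + np.log(np.linalg.det(S0) / np.linalg.det(S1)))
```

In the sequential design loop, the candidate measurement that maximizes this information gain (averaged over plausible observation outcomes) would be collected next.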

  1. Pseudo-random number generator based on asymptotic deterministic randomness

    Science.gov (United States)

    Wang, Kai; Pei, Wenjiang; Xia, Haishan; Cheung, Yiu-ming

    2008-06-01

A novel approach to generating pseudorandom-bit sequences from an asymptotic deterministic randomness system is proposed in this Letter. We study the multi-value correspondence characteristic of the asymptotic deterministic randomness constructed from a piecewise linear map and a noninvertible nonlinear transform, and then give the discretized systems in the finite digitized state space. The statistical characteristics of the asymptotic deterministic randomness, such as the stationary probability density function and random-like behavior, are investigated numerically. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos-based pseudorandom-bit generators (PRBGs) and increase the resistance against entropy attacks and symbolic-dynamics attacks.
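A toy chaos-based PRBG built from a piecewise-linear map illustrates the general idea of extracting bits from a chaotic orbit. This is a generic tent-map example, not the Letter's asymptotic-deterministic-randomness construction, which is specifically designed to resist the attacks that plague such simple generators:

```python
def tent_map_prbg(seed, n_bits, mu=1.9999):
    """Toy pseudorandom-bit generator: iterate the piecewise-linear tent map
    on [0, 1] and emit one bit per iteration by thresholding the state.
    mu is kept slightly below 2 to avoid floating-point orbit collapse."""
    x = seed
    bits = []
    for _ in range(n_bits):
        x = mu * min(x, 1.0 - x)          # tent map step
        bits.append(1 if x > 0.5 else 0)  # symbolic (threshold) output
    return bits

stream = tent_map_prbg(seed=0.123456789, n_bits=1000)
```

The threshold output is exactly the "symbolic sequence" analyzed in the abstract; for a bare tent map that sequence leaks the orbit, which is the weakness the proposed construction addresses.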

  2. Pseudo-random number generator based on asymptotic deterministic randomness

    International Nuclear Information System (INIS)

    Wang Kai; Pei Wenjiang; Xia Haishan; Cheung Yiuming

    2008-01-01

A novel approach to generating pseudorandom-bit sequences from an asymptotic deterministic randomness system is proposed in this Letter. We study the multi-value correspondence characteristic of the asymptotic deterministic randomness constructed from a piecewise linear map and a noninvertible nonlinear transform, and then give the discretized systems in the finite digitized state space. The statistical characteristics of the asymptotic deterministic randomness, such as the stationary probability density function and random-like behavior, are investigated numerically. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos-based pseudorandom-bit generators (PRBGs) and increase the resistance against entropy attacks and symbolic-dynamics attacks.

  3. Ensemble-based Kalman Filters in Strongly Nonlinear Dynamics

    Institute of Scientific and Technical Information of China (English)

    Zhaoxia PU; Joshua HACKER

    2009-01-01

This study examines the effectiveness of ensemble Kalman filters in data assimilation with the strongly nonlinear dynamics of the Lorenz-63 model, and in particular their use in predicting the regime transition that occurs when the model jumps from one basin of attraction to the other. Four configurations of ensemble-based Kalman filtering data assimilation techniques, namely the ensemble Kalman filter, the ensemble adjustment Kalman filter, the ensemble square root filter, and the ensemble transform Kalman filter, are evaluated for their ability to predict the regime transition (also called phase transition) and are compared in terms of their sensitivity to both observational and sampling errors. The sensitivity of each ensemble-based filter to the size of the ensemble is also examined.
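A twin experiment with a stochastic EnKF on Lorenz-63 illustrates the kind of setup such studies evaluate. The configuration below (ensemble size, observation error, inflation factor, cycling interval) is an assumed generic choice, and the paper's four filter variants are not reproduced:

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4(f, x, dt):
    k1 = f(x); k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def enkf_twin_experiment(n_cycles=60, m=20, dt=0.01, steps=25,
                         obs_err=2.0, inflation=1.05, seed=0):
    """Twin experiment: a stochastic EnKF assimilates noisy observations of
    all three Lorenz-63 variables every `steps` model steps.
    Returns the analysis RMSE per cycle."""
    rng = np.random.default_rng(seed)
    truth = np.array([1.0, 1.0, 1.0])
    ens = truth + rng.normal(0, 2.0, size=(m, 3))
    rmse = []
    for _ in range(n_cycles):
        for _ in range(steps):                       # forecast step
            truth = rk4(lorenz63, truth, dt)
            ens = np.array([rk4(lorenz63, e, dt) for e in ens])
        obs = truth + rng.normal(0, obs_err, size=3)
        Xp = (ens - ens.mean(axis=0)) * inflation    # multiplicative inflation
        Pf = Xp.T @ Xp / (m - 1)
        K = Pf @ np.linalg.inv(Pf + obs_err**2 * np.eye(3))
        perturbed_obs = obs + rng.normal(0, obs_err, size=(m, 3))
        inflated = ens.mean(axis=0) + Xp
        ens = inflated + (perturbed_obs - inflated) @ K.T
        rmse.append(np.sqrt(np.mean((ens.mean(axis=0) - truth) ** 2)))
    return np.array(rmse)
```

Tracking whether the analysis ensemble straddles both lobes of the attractor just before a jump is one way to diagnose the regime-transition prediction skill the abstract describes.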

  4. An evaluation of the Canadian global meteorological ensemble prediction system for short-term hydrological forecasting

    Directory of Open Access Journals (Sweden)

    F. Anctil

    2009-11-01

Hydrological forecasting consists in the assessment of future streamflow. Current deterministic forecasts do not provide any information concerning their uncertainty, which can be limiting in a decision-making process. Ensemble forecasts are expected to fill this gap.

In July 2007, the Meteorological Service of Canada improved its ensemble prediction system, which has been operational since 1998. It uses the GEM model to generate a 20-member ensemble on a 100 km grid at mid-latitudes. This improved system is used for the first time for hydrological ensemble predictions. Five watersheds in Quebec (Canada) are studied: Chaudière, Châteauguay, Du Nord, Kénogami and Du Lièvre. An interesting 17-day rainfall event in October 2007 was selected. Forecasts are produced at a 3 h time step for a 3-day forecast horizon. The deterministic forecast is also available, and it is compared with the ensemble ones. In order to correct the bias of the ensemble, an updating procedure was applied to the output data. Results showed that ensemble forecasts are more skilful than the deterministic ones, as measured by the Continuous Ranked Probability Score (CRPS), especially for 72 h forecasts. However, the hydrological ensemble forecasts are under-dispersed: a situation that improves with increasing length of the prediction horizon. We conjecture that this is due in part to the fact that uncertainty in the initial conditions of the hydrological model is not taken into account.
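The CRPS used above to compare ensemble and deterministic forecasts has a simple empirical form for a finite ensemble; for a deterministic (single-valued) forecast it reduces to the absolute error, which is what makes the comparison fair:

```python
import numpy as np

def crps_ensemble(members, y):
    """Empirical CRPS of an ensemble forecast for a scalar observation y:
    CRPS = mean|X - y| - 0.5 * mean|X - X'|, lower is better.
    For a single member the second term vanishes and CRPS = |X - y|."""
    members = np.asarray(members, dtype=float)
    term1 = np.abs(members - y).mean()
    term2 = np.abs(members[:, None] - members[None, :]).mean()
    return term1 - 0.5 * term2

# A sharp, well-centred ensemble beats a biased one:
good = crps_ensemble([9.8, 10.1, 10.0, 9.9], 10.0)
bad = crps_ensemble([12.0, 12.1, 11.9, 12.0], 10.0)
```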

  5. Deterministic chaos in entangled eigenstates

    Science.gov (United States)

    Schlegel, K. G.; Förster, S.

    2008-05-01

    We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential, that in the case of entanglement among three energy eigenvalues the maximum Lyapunov parameters of a representative ensemble of trajectories develop at large times into a narrow positive distribution, which indicates nearly completely chaotic dynamics. We also briefly present results from two time-dependent systems, the anisotropic and the Rabi oscillator.

  6. Deterministic chaos in entangled eigenstates

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, K.G. [Fakultaet fuer Physik, Universitaet Bielefeld, Postfach 100131, D-33501 Bielefeld (Germany)], E-mail: guenter.schlegel@arcor.de; Foerster, S. [Fakultaet fuer Physik, Universitaet Bielefeld, Postfach 100131, D-33501 Bielefeld (Germany)

    2008-05-12

    We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential, that in the case of entanglement among three energy eigenvalues the maximum Lyapunov parameters of a representative ensemble of trajectories develop at large times into a narrow positive distribution, which indicates nearly completely chaotic dynamics. We also briefly present results from two time-dependent systems, the anisotropic and the Rabi oscillator.

  7. Deterministic chaos in entangled eigenstates

    International Nuclear Information System (INIS)

    Schlegel, K.G.; Foerster, S.

    2008-01-01

    We investigate the problem of deterministic chaos in connection with entangled states using the Bohmian formulation of quantum mechanics. We show, for a two-particle system in a harmonic oscillator potential, that in the case of entanglement among three energy eigenvalues the maximum Lyapunov parameters of a representative ensemble of trajectories develop at large times into a narrow positive distribution, which indicates nearly completely chaotic dynamics. We also briefly present results from two time-dependent systems, the anisotropic and the Rabi oscillator.

  8. Wave ensemble forecast system for tropical cyclones in the Australian region

    Science.gov (United States)

    Zieger, Stefan; Greenslade, Diana; Kepert, Jeffrey D.

    2018-05-01

    Forecasting of waves under extreme conditions such as tropical cyclones is vitally important for many offshore industries, but many challenges remain. For Northwest Western Australia (NW WA), wave forecasts issued by the Australian Bureau of Meteorology have previously been limited to products from deterministic operational wave models forced by deterministic atmospheric models. The wave models are run over global (resolution 1/4∘) and regional (resolution 1/10∘) domains with forecast ranges of +7 and +3 days respectively. Because of this relatively coarse resolution (both in the wave models and in the forcing fields), the accuracy of these products is limited under tropical cyclone conditions. Given this limited accuracy, a new ensemble-based wave forecasting system for the NW WA region has been developed. To achieve this, a new dedicated 8-km resolution grid was nested in the global wave model. Over this grid, the wave model is forced with winds from a bias-corrected European Centre for Medium-Range Weather Forecasts atmospheric ensemble that comprises 51 ensemble members, to take into account the uncertainties in location, intensity and structure of a tropical cyclone system. A unique technique is used to select restart files for each wave ensemble member. The system is designed to operate in real time during the cyclone season, providing +10-day forecasts. This paper describes the wave forecast components of this system and presents the verification metrics and skill for specific events.

  9. MSEBAG: a dynamic classifier ensemble generation based on 'minimum-sufficient ensemble' and bagging

    Science.gov (United States)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.
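
    A loose sketch of the 'minimum-sufficient ensemble' idea, i.e. the smallest ensemble whose combined vote attains the best in-sample fitness. The greedy, accuracy-ranked search and the toy votes below are simplifications for illustration, not the paper's algorithm:

```python
import numpy as np

def majority_vote(preds):
    # preds: (k, n) array of 0/1 votes from k base classifiers
    return (preds.mean(axis=0) > 0.5).astype(int)

def minimum_sufficient(preds, y):
    # rank members by individual in-sample accuracy, best first
    order = np.argsort([-np.mean(p == y) for p in preds])
    best_acc, best_k = -1.0, 1
    for k in range(1, len(preds) + 1):
        acc = np.mean(majority_vote(preds[order[:k]]) == y)
        if acc > best_acc:          # keep the smallest ensemble at max fitness
            best_acc, best_k = acc, k
    return order[:best_k], best_acc

y = np.array([1, 1, 0, 0])
preds = np.array([[1, 1, 0, 0],     # a perfect member
                  [1, 0, 0, 0],
                  [0, 1, 0, 1]])
members, acc = minimum_sufficient(preds, y)
```

    Here one member already reaches the maximum in-sample accuracy, so the minimum-sufficient ensemble has size one; the paper's backward stepwise stage then grows a collection of ensembles around this core.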

  10. Time-optimal path planning in uncertain flow fields using ensemble method

    KAUST Repository

    Wang, Tong

    2016-01-06

    An ensemble-based approach is developed to conduct time-optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where a set of deterministic predictions is used to model and quantify uncertainty in the predictions. In the operational setting, much about the dynamics, topography and forcing of the ocean environment is uncertain, and as a result a single path produced by a model simulation has limited utility. To overcome this limitation, we rely on a finite-size ensemble of deterministic forecasts to quantify the impact of variability in the dynamics. The uncertainty of the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the optimal path by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy, and to develop insight into extensions dealing with regional or general circulation models. In particular, the ensemble method enables us to perform a statistical analysis of travel times, and consequently to develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.
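
    The travel-time statistics step can be illustrated in miniature: sample the uncertain current from a known density, evaluate a travel time per realization, and summarize the ensemble. A fixed 100 km transect with hypothetical speeds stands in for the BVP-based optimal paths:

```python
import numpy as np

rng = np.random.default_rng(2)

boat_speed = 5.0                          # m/s through water (hypothetical)
currents = rng.normal(0.0, 0.5, 500)      # m/s along-track, one per member
times_h = 100e3 / (boat_speed + currents) / 3600.0   # hours over 100 km

mean_t = times_h.mean()                   # expected travel time
q95 = np.quantile(times_h, 0.95)          # risk-aware planning quantile
```

    A planner can then choose among candidate paths using such ensemble statistics (mean, quantiles) rather than a single deterministic travel time.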

  11. Multi-model ensembles for assessment of flood losses and associated uncertainty

    Science.gov (United States)

    Figueiredo, Rui; Schröter, Kai; Weiss-Motz, Alexander; Martina, Mario L. V.; Kreibich, Heidi

    2018-05-01

    Flood loss modelling is a crucial part of risk assessments. However, it is subject to large uncertainty that is often neglected. Most models available in the literature are deterministic, providing only single point estimates of flood loss, and large disparities tend to exist among them. Adopting any one such model in a risk assessment context is likely to lead to inaccurate loss estimates and sub-optimal decision-making. In this paper, we propose the use of multi-model ensembles to address these issues. This approach, which has been applied successfully in other scientific fields, is based on the combination of different model outputs with the aim of improving the skill and usefulness of predictions. We first propose a model rating framework to support ensemble construction, based on a probability tree of model properties, which establishes relative degrees of belief between candidate models. Using 20 flood loss models in two test cases, we then construct numerous multi-model ensembles, based both on the rating framework and on a stochastic method, differing in terms of participating members, ensemble size and model weights. We evaluate the performance of ensemble means, as well as their probabilistic skill and reliability. Our results demonstrate that well-designed multi-model ensembles represent a pragmatic approach to consistently obtain more accurate flood loss estimates and reliable probability distributions of model uncertainty.
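
    The combination step of a multi-model ensemble reduces to a weighted aggregation of member outputs. A minimal sketch with invented per-model loss estimates and rating-derived weights (in the paper the weights come from a probability tree of model properties):

```python
import numpy as np

# hypothetical per-model flood loss estimates and rating-based weights
# (illustrative numbers, not from the paper)
losses  = np.array([1.2e6, 0.8e6, 2.0e6, 1.5e6])   # loss per model, e.g. EUR
weights = np.array([0.4, 0.3, 0.2, 0.1])           # degrees of belief, sum to 1

ensemble_mean = np.sum(weights * losses)           # combined point estimate
# weighted spread of the members gives a crude measure of model uncertainty
ensemble_var = np.sum(weights * (losses - ensemble_mean) ** 2)
```

    The weighted member spread is what turns a set of disparate deterministic point estimates into a probability distribution of model uncertainty.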

  12. Appearance of deterministic mixing behavior from ensembles of fluctuating hydrodynamics simulations of the Richtmyer-Meshkov instability

    Science.gov (United States)

    Narayanan, Kiran; Samtaney, Ravi

    2018-04-01

    We obtain numerical solutions of the two-fluid fluctuating compressible Navier-Stokes (FCNS) equations, which consistently account for thermal fluctuations from meso- to macroscales, in order to study the effect of such fluctuations on the mixing behavior in the Richtmyer-Meshkov instability (RMI). The numerical method used was successfully verified in two stages: for the deterministic fluxes by comparison against an air-SF6 RMI experiment, and for the stochastic terms by comparison against direct simulation Monte Carlo results for He-Ar RMI. We present results from fluctuating hydrodynamic RMI simulations for three He-Ar systems whose length scales decrease in order of magnitude from macroscopic to mesoscopic, with different levels of thermal fluctuations characterized by a nondimensional Boltzmann number (Bo). For a multidimensional FCNS system on a regular Cartesian grid, when using a discretization of the space-time stochastic flux Z(x,t) of the form Z(x,t) → N(ih, nΔt)/√(h³Δt) for spatial interval h, time interval Δt and Gaussian noise N, the spatial interval h should be greater than h₀, with h₀ corresponding to a cell volume that contains a sufficient number of molecules of the fluid such that the fluctuations are physically meaningful and produce the right equilibrium spectrum. For the mesoscale RMI systems simulated, it was desirable to use a cell size smaller than this limit in order to resolve the viscous shock. This was achieved by using a modified regularization of the noise term via Z(x,t) → N(ih, nΔt)/√(max(h³, h₀³)Δt), with h₀ = ξh for h < h₀. When Bo ≪ 1, deterministic mixing behavior emerges as the ensemble-averaged behavior of several fluctuating instances, whereas when Bo ≈ 1, a deviation from deterministic behavior is observed. For all cases, the FCNS solution provides bounds on the growth rate of the amplitude of the mixing layer.

  13. Ensemble Bayesian forecasting system Part I: Theory and algorithms

    Science.gov (United States)

    Herr, Henry D.; Krzysztofowicz, Roman

    2015-05-01

    The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of
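
    The Monte Carlo chain described above (input ensemble, deterministic hydrologic model, stochastic uncertainty processor) has a simple overall shape. The toy components below are invented placeholders for the IEF, the hydrologic model and the HUP, meant only to show how the ensemble flows through the three stages:

```python
import numpy as np

rng = np.random.default_rng(3)

def ief(n):
    # input ensemble forecaster: toy precipitation totals (mm)
    return rng.gamma(2.0, 10.0, n)

def hydro_model(precip):
    # deterministic hydrologic model: a linear toy stand-in
    return 0.6 * precip + 2.0

def hup(output):
    # hydrologic uncertainty processor: additive stochastic noise
    return output + rng.normal(0.0, 1.0, output.shape)

# input ensemble -> deterministic transform -> stochastic transform
stages = hup(hydro_model(ief(1000)))     # ensemble of predictand values
```

    The EBFSR variant described in the abstract attacks the cost of the middle (deterministic) stage by drawing multiple HUP samples per hydrologic model run.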

  14. Mixed deterministic statistical modelling of regional ozone air pollution

    KAUST Repository

    Kalenderski, Stoitchko; Steyn, Douw G.

    2011-01-01

    formalism, and explicitly accounts for advection of pollutants, using the advection equation. We apply the model to a specific case of regional ozone pollution-the Lower Fraser valley of British Columbia, Canada. As a predictive tool, we demonstrate

  15. A new deterministic Ensemble Kalman Filter with one-step-ahead smoothing for storm surge forecasting

    KAUST Repository

    Raboudi, Naila

    2016-01-01

    KF-OSA exploits the observation twice. The incoming observation is first used to smooth the ensemble at the previous time step. The resulting smoothed ensemble is then integrated forward to compute a "pseudo forecast" ensemble, which is again updated with the same

  16. A WRF/Chem sensitivity study using ensemble modelling for a high ozone episode in Slovenia and the Northern Adriatic area

    Science.gov (United States)

    Žabkar, Rahela; Koračin, Darko; Rakovec, Jože

    2013-10-01

    A high-ozone (O3) concentration episode during a heat wave event in the Northeastern Mediterranean was investigated using the WRF/Chem model. To understand the major model uncertainties and errors, as well as the impacts of model inputs on model accuracy, an ensemble modelling experiment was conducted. The 51-member ensemble was designed by varying model physics parameterization options (PBL schemes with different surface layer and land-surface modules, and radiation schemes); chemical initial and boundary conditions; anthropogenic and biogenic emission inputs; and model domain setup and resolution. The main impacts of the geographical and emission characteristics of three distinct regions (suburban Mediterranean, continental urban, and continental rural) on model accuracy and O3 predictions were investigated. In spite of the large ensemble size, the model generally failed to simulate the extremes; however, as expected from probabilistic forecasting, the ensemble spread improved results for the extremes compared to the reference run. Noticeable model nighttime overestimations at the Mediterranean and some urban and rural sites can be explained by too-strong simulated winds, which reduce the impact of dry deposition and O3 titration in the near-surface layers during the nighttime. Another possible explanation could be inaccuracies in the chemical mechanisms, which are also suggested by the model's insensitivity to variations in nitrogen oxide (NOx) and volatile organic compound (VOC) emissions. Major impact factors for underestimations of the daytime O3 maxima at the Mediterranean and some rural sites include overestimation of the PBL depths, a lack of information on forest fires, too-strong surface winds, and possible inaccuracies in biogenic emissions. This numerical experiment with the ensemble runs also provided guidance on an optimum model setup and input data.

  17. Long-term ensemble forecast of snowmelt inflow into the Cheboksary Reservoir under two different weather scenarios

    Science.gov (United States)

    Gelfan, Alexander; Moreydo, Vsevolod; Motovilov, Yury; Solomatine, Dimitri P.

    2018-04-01

    A long-term forecasting ensemble methodology, applied to water inflows into the Cheboksary Reservoir (Russia), is presented. The methodology is based on a version of the semi-distributed hydrological model ECOMAG (ECOlogical Model for Applied Geophysics) that allows for the calculation of an ensemble of inflow hydrographs using two different sets of weather ensembles for the lead time period: observed weather data, constructed on the basis of the Ensemble Streamflow Prediction methodology (ESP-based forecast), and synthetic weather data, simulated by a multi-site weather generator (WG-based forecast). We have studied the following: (1) whether there is any advantage of the developed ensemble forecasts in comparison with the currently issued operational forecasts of water inflow into the Cheboksary Reservoir, and (2) whether there is any noticeable improvement in probabilistic forecasts when using the WG-simulated ensemble compared to the ESP-based ensemble. We have found that for a 35-year period beginning from the reservoir filling in 1982, both continuous and binary model-based ensemble forecasts (issued in the deterministic form) outperform the operational forecasts of the April-June inflow volume actually used and, additionally, provide acceptable forecasts of additional water regime characteristics besides the inflow volume. We have also demonstrated that the model performance measures (in the verification period) obtained from the WG-based probabilistic forecasts, which are based on a large number of possible weather scenarios, appeared to be more statistically reliable than the corresponding measures calculated from the ESP-based forecasts based on the observed weather scenarios.

  18. Long-term ensemble forecast of snowmelt inflow into the Cheboksary Reservoir under two different weather scenarios

    Directory of Open Access Journals (Sweden)

    A. Gelfan

    2018-04-01

    Full Text Available A long-term forecasting ensemble methodology, applied to water inflows into the Cheboksary Reservoir (Russia), is presented. The methodology is based on a version of the semi-distributed hydrological model ECOMAG (ECOlogical Model for Applied Geophysics) that allows for the calculation of an ensemble of inflow hydrographs using two different sets of weather ensembles for the lead time period: observed weather data, constructed on the basis of the Ensemble Streamflow Prediction methodology (ESP-based forecast), and synthetic weather data, simulated by a multi-site weather generator (WG-based forecast). We have studied the following: (1) whether there is any advantage of the developed ensemble forecasts in comparison with the currently issued operational forecasts of water inflow into the Cheboksary Reservoir, and (2) whether there is any noticeable improvement in probabilistic forecasts when using the WG-simulated ensemble compared to the ESP-based ensemble. We have found that for a 35-year period beginning from the reservoir filling in 1982, both continuous and binary model-based ensemble forecasts (issued in the deterministic form) outperform the operational forecasts of the April–June inflow volume actually used and, additionally, provide acceptable forecasts of additional water regime characteristics besides the inflow volume. We have also demonstrated that the model performance measures (in the verification period) obtained from the WG-based probabilistic forecasts, which are based on a large number of possible weather scenarios, appeared to be more statistically reliable than the corresponding measures calculated from the ESP-based forecasts based on the observed weather scenarios.

  19. Towards an Australian ensemble streamflow forecasting system for flood prediction and water management

    Science.gov (United States)

    Bennett, J.; David, R. E.; Wang, Q.; Li, M.; Shrestha, D. L.

    2016-12-01

    Flood forecasting in Australia has historically relied on deterministic forecasting models run only when floods are imminent, with considerable forecaster input and interpretation. These now co-exist with a continually available 7-day streamflow forecasting service (also deterministic) aimed at operational water management applications such as environmental flow releases. The 7-day service is not optimised for flood prediction. We describe progress on developing a system for ensemble streamflow forecasting that is suitable for both flood prediction and water management applications. Precipitation uncertainty is handled through post-processing of Numerical Weather Prediction (NWP) output with a Bayesian rainfall post-processor (RPP). The RPP corrects biases, downscales NWP output, and produces reliable ensemble spread. Ensemble precipitation forecasts are used to force a semi-distributed conceptual rainfall-runoff model. Uncertainty in precipitation forecasts is insufficient to reliably describe streamflow forecast uncertainty, particularly at shorter lead times. We characterise hydrological prediction uncertainty separately with a 4-stage error model. The error model relies on data transformation to ensure residuals are homoscedastic and symmetrically distributed. To ensure streamflow forecasts are accurate and reliable, the residuals are modelled using a mixture-Gaussian distribution with distinct parameters for the rising and falling limbs of the forecast hydrograph. In a case study of the Murray River in south-eastern Australia, we show that ensemble predictions of floods generally have lower errors than deterministic forecasting methods. We also discuss some of the challenges in operationalising short-term ensemble streamflow forecasts in Australia, including meeting the need for accurate predictions across all flow ranges and comparing forecasts generated by event and continuous hydrological models.

  20. Ensemble-Based Data Assimilation in Reservoir Characterization: A Review

    Directory of Open Access Journals (Sweden)

    Seungpil Jung

    2018-02-01

    Full Text Available This paper presents a review of ensemble-based data assimilation for strongly nonlinear problems in the characterization of heterogeneous reservoirs with different production histories. It concentrates on the ensemble Kalman filter (EnKF) and ensemble smoother (ES) as representative frameworks, discusses their pros and cons, and investigates recent progress to overcome their drawbacks. The typical weaknesses of ensemble-based methods are non-Gaussian parameters, improper prior ensembles and finite population size. Three categories of approaches to mitigate these limitations are reviewed along with recent accomplishments: improvement of Kalman gains, add-on transformation functions, and independent evaluation of observed data. Data assimilation in heterogeneous reservoirs, applying the improved ensemble methods, is discussed with respect to predicting unknown dynamic data in reservoir characterization.

  1. Quantum deterministic key distribution protocols based on the authenticated entanglement channel

    International Nuclear Information System (INIS)

    Zhou Nanrun; Wang Lijun; Ding Jie; Gong Lihua

    2010-01-01

    Based on the quantum entanglement channel, two secure quantum deterministic key distribution (QDKD) protocols are proposed. Unlike quantum random key distribution (QRKD) protocols, the proposed QDKD protocols can distribute the deterministic key securely, which is of significant importance in the field of key management. The security of the proposed QDKD protocols is analyzed in detail using information theory. It is shown that the proposed QDKD protocols can safely and effectively hand over the deterministic key to the specific receiver and their physical implementation is feasible with current technology.

  2. Quantum deterministic key distribution protocols based on the authenticated entanglement channel

    Energy Technology Data Exchange (ETDEWEB)

    Zhou Nanrun; Wang Lijun; Ding Jie; Gong Lihua [Department of Electronic Information Engineering, Nanchang University, Nanchang 330031 (China)], E-mail: znr21@163.com, E-mail: znr21@hotmail.com

    2010-04-15

    Based on the quantum entanglement channel, two secure quantum deterministic key distribution (QDKD) protocols are proposed. Unlike quantum random key distribution (QRKD) protocols, the proposed QDKD protocols can distribute the deterministic key securely, which is of significant importance in the field of key management. The security of the proposed QDKD protocols is analyzed in detail using information theory. It is shown that the proposed QDKD protocols can safely and effectively hand over the deterministic key to the specific receiver and their physical implementation is feasible with current technology.

  3. The GMAO Hybrid Ensemble-Variational Atmospheric Data Assimilation System: Version 2.0

    Science.gov (United States)

    Todling, Ricardo; El Akkraoui, Amal

    2018-01-01

    This document describes the implementation and usage of the Goddard Earth Observing System (GEOS) Hybrid Ensemble-Variational Atmospheric Data Assimilation System (Hybrid EVADAS). Its aim is to provide comprehensive guidance to users of GEOS ADAS interested in experimenting with its hybrid functionalities. The document is also aimed at providing a short summary of the state-of-science in this release of the hybrid system. As explained here, the ensemble data assimilation system (EnADAS) mechanism added to GEOS ADAS to enable hybrid data assimilation applications has been introduced to the pre-existing machinery of GEOS in the most non-intrusive possible way. Only very minor changes have been made to the original scripts controlling GEOS ADAS with the objective of facilitating its usage by both researchers and the GMAO's near-real-time Forward Processing applications. In a hybrid scenario two data assimilation systems run concurrently in a two-way feedback mode such that: the ensemble provides background ensemble perturbations required by the ADAS deterministic (typically high resolution) hybrid analysis; and the deterministic ADAS provides analysis information for recentering of the EnADAS analyses and information necessary to ensure that observation bias correction procedures are consistent between both the deterministic ADAS and the EnADAS. The nonintrusive approach to introducing hybrid capability to GEOS ADAS means, in particular, that previously existing features continue to be available. Thus, not only is this upgraded version of GEOS ADAS capable of supporting new applications such as Hybrid 3D-Var, 3D-EnVar, 4D-EnVar and Hybrid 4D-EnVar, it remains possible to use GEOS ADAS in its traditional 3D-Var mode which has been used in both MERRA and MERRA-2. Furthermore, as described in this document, GEOS ADAS also supports a configuration for exercising a purely ensemble-based assimilation strategy which can be fully decoupled from its variational component. 

  4. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2008-01-01

    We revisit the deterministic graphical games of Washburn. A deterministic graphical game can be described as a simple stochastic game (a notion due to Anne Condon), except that we allow arbitrary real payoffs but disallow moves of chance. We study the complexity of solving deterministic graphical games and obtain an almost-linear time comparison-based algorithm for computing an equilibrium of such a game. The existence of a linear time comparison-based algorithm remains an open problem.

  5. A Comparison of Ensemble Kalman Filters for Storm Surge Assimilation

    KAUST Repository

    Altaf, Muhammad

    2014-08-01

    This study evaluates and compares the performances of several variants of the popular ensemble Kalman filter for the assimilation of storm surge data with the advanced circulation (ADCIRC) model. Using meteorological data from Hurricane Ike to force the ADCIRC model on a domain including the Gulf of Mexico coastline, the authors implement and compare the standard stochastic ensemble Kalman filter (EnKF) and three deterministic square root EnKFs: the singular evolutive interpolated Kalman (SEIK) filter, the ensemble transform Kalman filter (ETKF), and the ensemble adjustment Kalman filter (EAKF). Covariance inflation and localization are implemented in all of these filters. The results from twin experiments suggest that the square root ensemble filters could lead to very comparable performances with appropriate tuning of inflation and localization, suggesting that practical implementation details are at least as important as the choice of the square root ensemble filter itself. These filters also perform reasonably well with a relatively small ensemble size, whereas the stochastic EnKF requires larger ensemble sizes to provide similar accuracy for forecasts of storm surge.
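
    For reference, the analysis step of one of the deterministic square root filters compared here (the ETKF, in a common symmetric square-root formulation) can be sketched as follows; the toy ensemble, observation operator and error levels are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(4)

def etkf_update(X, y, H, R):
    """Deterministic ETKF analysis: no perturbed observations; the
    anomalies are rescaled by a transform in ensemble space."""
    n, N = X.shape
    xm = X.mean(axis=1)
    A = X - xm[:, None]                           # state anomalies
    Yh = H @ A                                    # observed anomalies
    Rinv = np.linalg.inv(R)
    # analysis covariance in ensemble space
    Pa = np.linalg.inv((N - 1) * np.eye(N) + Yh.T @ Rinv @ Yh)
    wm = Pa @ Yh.T @ Rinv @ (y - H @ xm)          # mean-update weights
    evals, evecs = np.linalg.eigh((N - 1) * Pa)   # symmetric square root
    W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T
    return (xm + A @ wm)[:, None] + A @ W

X = rng.normal(0.0, 1.0, size=(3, 50))
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.01]])
Xa = etkf_update(X, np.array([2.0]), H, R)
```

    Because no observation perturbations are sampled, the analysis spread is set exactly by the transform, which is why such filters tolerate smaller ensembles than the stochastic EnKF.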

  6. A Comparison of Ensemble Kalman Filters for Storm Surge Assimilation

    KAUST Repository

    Altaf, Muhammad; Butler, T.; Mayo, T.; Luo, X.; Dawson, C.; Heemink, A. W.; Hoteit, Ibrahim

    2014-01-01

    This study evaluates and compares the performances of several variants of the popular ensemble Kalman filter for the assimilation of storm surge data with the advanced circulation (ADCIRC) model. Using meteorological data from Hurricane Ike to force the ADCIRC model on a domain including the Gulf of Mexico coastline, the authors implement and compare the standard stochastic ensemble Kalman filter (EnKF) and three deterministic square root EnKFs: the singular evolutive interpolated Kalman (SEIK) filter, the ensemble transform Kalman filter (ETKF), and the ensemble adjustment Kalman filter (EAKF). Covariance inflation and localization are implemented in all of these filters. The results from twin experiments suggest that the square root ensemble filters could lead to very comparable performances with appropriate tuning of inflation and localization, suggesting that practical implementation details are at least as important as the choice of the square root ensemble filter itself. These filters also perform reasonably well with a relatively small ensemble size, whereas the stochastic EnKF requires larger ensemble sizes to provide similar accuracy for forecasts of storm surge.

  7. Dynamic principle for ensemble control tools.

    Science.gov (United States)

    Samoletov, A; Vasiev, B

    2017-11-28

    Dynamical equations describing physical systems in contact with a thermal bath are commonly extended by mathematical tools called "thermostats." These tools are designed for sampling ensembles in statistical mechanics. Here we propose a dynamic principle underlying a range of thermostats which is derived using fundamental laws of statistical physics and ensures invariance of the canonical measure. The principle covers both stochastic and deterministic thermostat schemes. Our method has a clear advantage over a range of proposed and widely used thermostat schemes that are based on formal mathematical reasoning. Following the derivation of the proposed principle, we show its generality and illustrate its applications, including the design of temperature control tools that differ from the Nosé-Hoover-Langevin scheme.
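
    As a minimal concrete example of a thermostat in this family (here a stochastic one: a BAOAB-split Langevin integrator for a 1D harmonic oscillator with m = k = k_B = 1, not the specific scheme derived in the paper), the trajectory should leave the canonical measure invariant, so the sampled position variance approaches the temperature T:

```python
import numpy as np

rng = np.random.default_rng(1)

def baoab(q, p, dt, gamma, T, steps):
    """BAOAB-split Langevin dynamics for V(q) = q^2/2, m = 1.

    The O step is an exact Ornstein-Uhlenbeck solve: friction and noise
    balanced so the canonical measure at temperature T is invariant.
    """
    c1 = np.exp(-gamma * dt)
    c2 = np.sqrt(T * (1.0 - c1 * c1))
    qs = np.empty(steps)
    for i in range(steps):
        p -= 0.5 * dt * q                 # B: half kick (force = -q)
        q += 0.5 * dt * p                 # A: half drift
        p = c1 * p + c2 * rng.normal()    # O: friction + thermal noise
        q += 0.5 * dt * p                 # A: half drift
        p -= 0.5 * dt * q                 # B: half kick
        qs[i] = q
    return qs

qs = baoab(q=0.0, p=0.0, dt=0.1, gamma=1.0, T=1.0, steps=100000)
# equipartition: the sampled position variance should approach T
```

    Deterministic schemes in the Nosé-Hoover family replace the stochastic O step by auxiliary thermostat variables while targeting the same invariant measure.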

  8. The Use of Artificial-Intelligence-Based Ensembles for Intrusion Detection: A Review

    Directory of Open Access Journals (Sweden)

    Gulshan Kumar

    2012-01-01

    Full Text Available In supervised learning-based classification, ensembles have been successfully applied to many application domains. In the literature, researchers have proposed different ensembles by varying the combination method, training dataset, base classifiers, and many other factors. Artificial-intelligence (AI)-based techniques play a prominent role in the development of ensembles for intrusion detection (ID) and have many benefits over other techniques. However, there is no comprehensive review of ensembles in general, and of AI-based ensembles for ID in particular, that examines their current research status. Here, an updated review of ensembles and their taxonomies is presented. The paper also reviews various AI-based ensembles for ID, in particular those developed during the last decade. The related studies of AI-based ensembles are compared using a set of evaluation criteria derived from (1) the architecture and approach followed; (2) the methods utilized in the different phases of ensemble learning; and (3) other measures used to evaluate the classification performance of the ensembles. The paper also outlines future directions for research in this area, and should aid understanding of the different directions in which ensemble research has been conducted, both in general and specifically in the field of intrusion detection systems (IDSs).
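
    The simplest combination method surveyed in such reviews is majority voting over the base classifiers' decisions. A minimal sketch (hypothetical detectors and labels, not tied to any specific IDS):

```python
import numpy as np

def majority_vote(predictions):
    """Combine binary base-classifier outputs (n_classifiers x n_samples)
    by simple majority voting, a common ensemble combination method."""
    predictions = np.asarray(predictions)
    return (predictions.sum(axis=0) * 2 > predictions.shape[0]).astype(int)

# Three hypothetical base detectors labelling 5 events (1 = intrusion)
preds = [[1, 0, 1, 1, 0],
         [1, 1, 0, 1, 0],
         [0, 0, 1, 1, 1]]
print(majority_vote(preds))  # -> [1 0 1 1 0]
```

    More elaborate schemes (weighted voting, stacking) differ only in how the base outputs are combined.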

  9. Deterministic and efficient quantum cryptography based on Bell's theorem

    International Nuclear Information System (INIS)

    Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg

    2006-01-01

    We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs entangled in both the polarization and the time degrees of freedom; each measurement in which both communicating parties register a photon establishes one and only one perfect correlation, and thus deterministically creates a key bit. Eavesdropping can be detected by a violation of local realism. A variation of the protocol offers higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation with current technology.

  10. The effects of greenhouse gases on the Antarctic ozone hole in the past, present, and future

    Science.gov (United States)

    Newman, P. A.; Li, F.; Lait, L. R.; Oman, L.

    2017-12-01

    The Antarctic ozone hole is primarily caused by human-produced ozone depleting substances (ODSs) such as chlorine-containing chlorofluorocarbons (CFCs) and bromine-containing halons. The large springtime ozone depletion relies on the very cold conditions of the Antarctic lower stratosphere and the general containment of air by the polar night jet over Antarctica. Here we use the Goddard Earth Observing System Chemistry Climate Model (GEOSCCM), a coupled ocean-atmosphere-chemistry model, to explore the impact of increasing greenhouse gases (GHGs). Model simulations covering the 1960-2010 period are shown for: 1) a control ensemble with observed levels of ODSs and GHGs, 2) an ensemble with fixed 1960 GHG concentrations, and 3) an ensemble with fixed 1960 ODS levels. We examine a similar set of simulations (control, 2005 fixed GHG levels, and 2005 fixed ODS levels) with a new version of GEOSCCM over the period 2005-2100. These future simulations show that the decrease of ODSs leads to similar ozone recovery for both the control run and the fixed-GHG scenario, in spite of GHG-forced changes to stratospheric ozone levels. The simulations demonstrate that GHG levels will have major impacts on the stratosphere by 2100, but only small impacts on the Antarctic ozone hole.

  11. An application of ensemble/multi model approach for wind power production forecast.

    Science.gov (United States)

    Alessandrini, S.; Decimi, G.; Hagedorn, R.; Sperati, S.

    2010-09-01

    Wind power forecasts for the 3-day-ahead period are becoming ever more useful and important in reducing the problems of grid integration and energy price trading caused by increasing wind power penetration. It is therefore clear that the accuracy of this forecast is one of the most important requirements for a successful application. The wind power forecast is based on mesoscale meteorological models that provide the 3-day-ahead wind data. A Model Output Statistics correction is then performed to reduce systematic errors caused, for instance, by a wrong representation of surface roughness or topography in the meteorological models. The corrected wind data are then used as input to the wind farm power curve to obtain the power forecast. These computations require historical time series of measured wind data (from an anemometer located in the wind farm or on the nacelle) and power data in order to perform the statistical analysis on the past. For this purpose a Neural Network (NN) is trained on the past data and then applied in the forecast task. Considering that anemometer measurements are not always available in a wind farm, a different approach has also been adopted: training the NN to link the forecasted meteorological data directly to the power data. The normalized RMSE forecast error seems to be lower in most cases with the second approach. We have examined two wind farms, one located in Denmark on flat terrain and one located in a mountainous area in the south of Italy (Sicily). In both cases we compare the performance of a prediction based on meteorological data coming from a single model with that obtained by using two or more models (RAMS, ECMWF deterministic, LAMI, HIRLAM). It is shown that the multi-model approach reduces the day-ahead normalized RMSE forecast error by at least 1% compared to the single-model approach. Moreover the use of a deterministic global model, (e.g. ECMWF deterministic
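
    The skill measure used above, RMSE normalized by the farm's nominal power, can be sketched as follows (the power values are hypothetical, chosen only for illustration):

```python
import numpy as np

def normalized_rmse(forecast, observed, nominal_power):
    """Day-ahead power forecast error: RMSE normalized by the wind farm's
    nominal power, the measure used to compare forecasting approaches."""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return np.sqrt(np.mean((forecast - observed) ** 2)) / nominal_power

# Hypothetical hourly power values (MW) for a 10 MW farm
fc = [4.0, 5.5, 6.0, 3.0]
ob = [4.5, 5.0, 7.0, 2.5]
print(normalized_rmse(fc, ob, nominal_power=10.0))
```

    A 1% reduction in this quantity, as reported for the multi-model approach, is a 0.01 change in the value returned here.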

  12. An application of ensemble/multi model approach for wind power production forecasting

    Science.gov (United States)

    Alessandrini, S.; Pinson, P.; Hagedorn, R.; Decimi, G.; Sperati, S.

    2011-02-01

    Wind power forecasts for the 3-day-ahead period are becoming ever more useful and important in reducing the problems of grid integration and energy price trading caused by increasing wind power penetration. It is therefore clear that the accuracy of this forecast is one of the most important requirements for a successful application. The wind power forecast applied in this study is based on meteorological models that provide the 3-day-ahead wind data. A Model Output Statistics correction is then performed to reduce systematic errors caused, for instance, by a wrong representation of surface roughness or topography in the meteorological models. For this purpose a Neural Network (NN) has been trained to link the forecasted meteorological data directly to the power data. One wind farm has been examined, located in a mountainous area in the south of Italy (Sicily). First, we compare the performance of a prediction based on meteorological data coming from a single model with that obtained by a combination of models (RAMS, ECMWF deterministic, LAMI). It is shown that the multi-model approach reduces the day-ahead normalized RMSE forecast error (normalized by nominal power) by at least 1% compared to the single-model approach. Finally, we have focused on the possibility of using the ensemble prediction system (EPS by ECMWF) to estimate the accuracy of the hourly, three-days-ahead power forecast. Contingency diagrams between the RMSE of the deterministic power forecast and the spread of the ensemble members' wind forecasts have been produced. From this first analysis it seems that the ensemble spread could be used as an indicator of the forecast's accuracy, at least for the first three days ahead.
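
    The spread-skill idea in the last sentences can be illustrated with a toy check (the numbers below are invented solely to show the computation, not results from the study): if ensemble spread is a useful accuracy indicator, it should correlate positively with the deterministic forecast error.

```python
import numpy as np

# Hypothetical per-day ensemble wind-speed spreads and the corresponding
# errors of the deterministic power forecast (arbitrary units)
spread = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
error  = np.array([0.6, 0.9, 1.7, 1.9, 2.7, 2.8])

# A strong positive correlation supports using spread as an accuracy indicator
r = np.corrcoef(spread, error)[0, 1]
print(r)
```

    A contingency diagram, as used in the study, bins the same two quantities instead of summarizing them in a single correlation coefficient.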

  13. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Full Text Available Ensembles are a well-established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling observational data (i.e., bagging and boosting) significantly improve the predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase in the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models, while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles that sample the domain-specific knowledge instead of sampling the data. We apply the proposed method to, and evaluate its performance on, a set of problems of automated predictive modeling in three lake ecosystems, using a library of process-based knowledge for modeling population dynamics. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to that of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.
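
    The data-sampling baseline the paper compares against (bagging) can be sketched generically: fit each base model on a bootstrap resample and average the predictions. This toy version uses polynomial base models in place of process-based models, purely for illustration.

```python
import numpy as np

def bagged_fit(x, y, n_models=25, degree=3, seed=0):
    """Bagging baseline: fit each base model on a bootstrap resample of the
    observations and average the ensemble's predictions."""
    rng = np.random.default_rng(seed)
    coefs = []
    for _ in range(n_models):
        idx = rng.integers(0, len(x), size=len(x))  # bootstrap sample
        coefs.append(np.polyfit(x[idx], y[idx], degree))
    def predict(x_new):
        return np.mean([np.polyval(c, x_new) for c in coefs], axis=0)
    return predict

x = np.linspace(0, 2 * np.pi, 60)
y = np.sin(x) + 0.1 * np.random.default_rng(1).standard_normal(60)
model = bagged_fit(x, y)
print(model(np.array([np.pi / 2])))  # close to sin(pi/2) = 1
```

    The proposed method replaces the bootstrap over *data* with sampling over the library of *domain-specific knowledge*, avoiding repeated learning runs on resampled time series.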

  14. Path planning in uncertain flow fields using ensemble method

    KAUST Repository

    Wang, Tong

    2016-08-20

    An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.
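
    The statistical-analysis step can be miniaturized to a toy problem (all values hypothetical): each ensemble member is one realization of an uncertain current, each realization yields a travel time, and planning decisions use statistics over the resulting sample.

```python
import numpy as np

# Minimal analogue of the ensemble approach: a vessel crossing a channel of
# length L at speed v through an uncertain along-track current u. Each
# ensemble member is one realization of u; travel time T = L / (v + u).
rng = np.random.default_rng(42)
L, v = 100.0, 5.0                       # km, km/h (hypothetical values)
u = rng.normal(0.0, 1.0, size=1000)     # ensemble of current realizations
times = L / (v + u)

# Ensemble statistics (mean, risk quantiles) support risk-aware path planning
print(times.mean(), np.quantile(times, 0.95))
```

    In the paper, each realization instead requires solving a boundary value problem for the time-optimal path, but the statistical post-processing has the same structure.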

  15. Evaluation of the Plant-Craig stochastic convection scheme in an ensemble forecasting system

    Science.gov (United States)

    Keane, R. J.; Plant, R. S.; Tennant, W. J.

    2015-12-01

    The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme, whose only stochastic element is a simple random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered, over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.

  16. Evaluation of the Plant-Craig stochastic convection scheme (v2.0) in the ensemble forecasting system MOGREPS-R (24 km) based on the Unified Model (v7.3)

    Science.gov (United States)

    Keane, Richard J.; Plant, Robert S.; Tennant, Warren J.

    2016-05-01

    The Plant-Craig stochastic convection parameterization (version 2.0) is implemented in the Met Office Regional Ensemble Prediction System (MOGREPS-R) and is assessed in comparison with the standard convection scheme, whose only stochastic element is a simple random parameter variation. A set of 34 ensemble forecasts, each with 24 members, is considered, over the month of July 2009. Deterministic and probabilistic measures of the precipitation forecasts are assessed. The Plant-Craig parameterization is found to improve probabilistic forecast measures, particularly the results for lower precipitation thresholds. The impact on deterministic forecasts at the grid scale is neutral, although the Plant-Craig scheme does deliver improvements when forecasts are made over larger areas. The improvements found are greater in conditions of relatively weak synoptic forcing, for which convective precipitation is likely to be less predictable.

  17. A class of energy-based ensembles in Tsallis statistics

    International Nuclear Information System (INIS)

    Chandrashekar, R; Naina Mohammed, S S

    2011-01-01

    A comprehensive investigation is carried out on the class of energy-based ensembles. The eight ensembles are divided into two main classes. In the isothermal class of ensembles the individual members are at the same temperature. A unified framework is evolved to describe the four isothermal ensembles using the currently accepted third-constraint formalism. The isothermal–isobaric, grand canonical and generalized ensembles are illustrated through a study of the classical nonrelativistic and extreme relativistic ideal gas models. An exact calculation is possible only in the case of the isothermal–isobaric ensemble. The study of the ideal gas models in the grand canonical and the generalized ensembles has been carried out using a perturbative procedure with the nonextensivity parameter (1 − q) as the expansion parameter. Though all the thermodynamic quantities have been computed up to a particular order in (1 − q), the procedure can be extended to any arbitrary order in the expansion parameter. In the adiabatic class of ensembles the individual members of the ensemble have the same value of the heat function, and a unified formulation to describe all four ensembles is given. The nonrelativistic and the extreme relativistic ideal gases are studied in the isoenthalpic–isobaric ensemble, the adiabatic ensemble with number fluctuations, and the adiabatic ensemble with number and particle fluctuations.
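
    For orientation, the expansion parameter (1 − q) enters through the q-exponential that generalizes the Boltzmann weight; a sketch of the relation underlying the perturbative procedure (standard Tsallis-statistics notation, not taken verbatim from the paper) is:

```latex
% q-exponential generalization of the Boltzmann weight
e_q(x) = \left[1 + (1-q)\,x\right]^{\frac{1}{1-q}}, \qquad
e_q(-\beta E) \xrightarrow{\,q \to 1\,} e^{-\beta E}
% Expanding in the nonextensivity parameter (1-q):
e_q(-\beta E) = e^{-\beta E}\left[1 - \tfrac{1}{2}(1-q)\,\beta^2 E^2
                + O\!\left((1-q)^2\right)\right]
```

    Thermodynamic quantities computed with this weight thus acquire corrections order by order in (1 − q), which is the structure exploited in the perturbative treatment of the ideal gas models.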

  18. Deterministic Echo State Networks Based Stock Price Forecasting

    Directory of Open Access Journals (Sweden)

    Jingpei Dan

    2014-01-01

    Full Text Available Echo state networks (ESNs), as efficient and powerful computational models for approximating nonlinear dynamical systems, have been successfully applied to financial time series forecasting. Reservoir construction in standard ESNs relies on trial and error in real applications, due to a series of randomized model-building stages. A novel form of ESN with a deterministically constructed reservoir is competitive with the standard ESN, offering minimal complexity and the possibility of optimizing ESN specifications. In this paper, the forecasting performance of deterministic ESNs is investigated in stock price prediction applications. The experimental results on two benchmark datasets (Shanghai Composite Index and S&P500) demonstrate that deterministic ESNs outperform the standard ESN in both accuracy and efficiency, which indicates the promise of deterministic ESNs for financial prediction.

  19. Shallow cumuli ensemble statistics for development of a stochastic parameterization

    Science.gov (United States)

    Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs

    2014-05-01

    According to the conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation (LES) model and a cloud-tracking algorithm, followed by conditional sampling of clouds at the cloud-base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. The stochastic model simulates a compound random process, with the number of convective elements drawn from a
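
    The exponential (Boltzmann) mass-flux distribution of Craig and Cohen implies that the total mass flux of an ensemble of N clouds fluctuates with relative magnitude of order 1/sqrt(N), which is why sub-grid variability grows as grid boxes (and hence N) shrink. A toy check, with an assumed unit mean mass flux:

```python
import numpy as np

# Individual cloud-base mass fluxes m are exponentially distributed,
# p(m) = exp(-m/<m>)/<m>; the total over N clouds then has relative
# fluctuations that shrink like 1/sqrt(N).
rng = np.random.default_rng(7)
mean_flux = 1.0                     # <m>, arbitrary units (assumed)
for n_clouds in (10, 100, 1000):
    totals = rng.exponential(mean_flux, size=(5000, n_clouds)).sum(axis=1)
    rel_std = totals.std() / totals.mean()
    print(n_clouds, rel_std)        # approaches 1/sqrt(n_clouds)
```

    A stochastic parameterization draws such a cloud ensemble per grid box, so coarser grids (larger N) naturally give weaker fluctuations.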

  20. Flood Forecasting Based on TIGGE Precipitation Ensemble Forecast

    Directory of Open Access Journals (Sweden)

    Jinyin Ye

    2016-01-01

    Full Text Available TIGGE (THORPEX Interactive Grand Global Ensemble) was a major part of THORPEX (The Observing System Research and Predictability Experiment). It integrates ensemble precipitation products from all the major forecast centers in the world and provides systematic evaluation of multimodel ensemble prediction systems. Development of a meteorologic-hydrologic coupled flood forecasting model and early warning model based on the TIGGE precipitation ensemble forecast can provide probabilistic flood forecasts, extend the lead time of the flood forecast, and gain more time for decision-makers to make the right decision. In this study, precipitation ensemble forecast products from ECMWF, NCEP, and CMA are used to drive the distributed hydrologic model TOPX. We focus on the Yi River catchment and aim to build a flood forecasting and early warning system. The results show that the meteorologic-hydrologic coupled model can satisfactorily predict the flow process of four flood events. The predicted occurrence times of the peak discharges are close to the observations. However, the magnitudes of the peak discharges differ significantly, due to the varying performance of the ensemble prediction systems. The coupled forecasting model can accurately predict the occurrence time of the peak and the corresponding risk probability of the peak discharge exceeding the warning level, based on the probability distributions of peak time and peak discharge, which provides users with a strong theoretical foundation and valuable information, making this a promising new approach.
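
    The risk probability mentioned above reduces, in its simplest form, to the fraction of ensemble members whose simulated peak discharge exceeds the warning level. A sketch with invented discharge values:

```python
import numpy as np

# Ensemble flood forecast: the risk probability of exceeding a warning
# discharge is the fraction of ensemble members above the threshold.
peak_discharge = np.array([820., 950., 1010., 760., 1100.,
                           890., 1030., 970., 840., 1200.])  # m^3/s, one per member
warning_level = 1000.0
risk = np.mean(peak_discharge > warning_level)
print(risk)  # -> 0.4
```

    The same computation per lead time gives the probability distribution of peak timing used for early warning.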

  1. Evaluation of ACCMIP ozone simulations and ozonesonde sampling biases using a satellite-based multi-constituent chemical reanalysis

    Science.gov (United States)

    Miyazaki, Kazuyuki; Bowman, Kevin

    2017-07-01

    The Atmospheric Chemistry Climate Model Intercomparison Project (ACCMIP) ensemble ozone simulations for the present day (from the 2000-decade simulation results) are evaluated against a state-of-the-art multi-constituent atmospheric chemical reanalysis that ingests multiple satellite data sets, including the Tropospheric Emission Spectrometer (TES), the Microwave Limb Sounder (MLS), the Ozone Monitoring Instrument (OMI), and the Measurement of Pollution in the Troposphere (MOPITT), for 2005-2009. Validation of the chemical reanalysis against global ozonesondes shows good agreement throughout the free troposphere and lower stratosphere for both seasonal and year-to-year variations, with an annual mean bias of less than 0.9 ppb in the middle and upper troposphere in the tropics and mid-latitudes. The reanalysis provides a comprehensive spatiotemporal evaluation of chemistry-model performance that complements direct ozonesonde comparisons, which are shown to suffer from significant sampling bias. The reanalysis reveals that the ACCMIP ensemble mean overestimates ozone in the northern extratropics by 6-11 ppb while underestimating it by up to 18 ppb in the southern tropics over the Atlantic in the lower troposphere. Most models underestimate the spatial variability of the annual mean lower tropospheric concentrations in the extratropics of both hemispheres by up to 70%. The ensemble mean also overestimates the seasonal amplitude by 25-70% in the northern extratropics and overestimates the inter-hemispheric gradient by about 30% in the lower and middle troposphere. Part of the discrepancy can be attributed to comparing the 5-year reanalysis data with the decadal model simulations. However, these differences are less evident with the current sonde network. To estimate ozonesonde sampling biases, we computed model bias separately for global coverage and for the ozonesonde network. The ozonesonde sampling bias in the evaluated model bias for the seasonal mean concentration relative to global
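
    The sampling-bias estimate described in the last sentences amounts to computing the model-minus-reference bias twice: over the full grid, and over only the cells containing sonde stations. A minimal sketch with hypothetical ozone values:

```python
import numpy as np

# Model-minus-reanalysis ozone bias over the full grid versus over the
# sonde sites only; the difference is the bias due to the sparse network.
model      = np.array([42., 48., 51., 39., 55., 60., 44., 47.])  # ppb (toy)
reanalysis = np.array([40., 45., 50., 41., 50., 52., 43., 46.])
sonde_idx  = np.array([0, 2, 5])          # grid cells with a sonde station

global_bias  = np.mean(model - reanalysis)
network_bias = np.mean(model[sonde_idx] - reanalysis[sonde_idx])
print(global_bias, network_bias, network_bias - global_bias)
```

    A network-only evaluation would here report a larger bias than truly exists globally, which is the effect the reanalysis-based evaluation avoids.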

  2. Kinetics of pulp mill effluent treatment by ozone-based processes

    International Nuclear Information System (INIS)

    Ko, Chun-Han; Hsieh, Po-Hung; Chang, Meng-Wen; Chern, Jia-Ming; Chiang, Shih-Min; Tzeng, Chewn-Jeng

    2009-01-01

    The wastewaters generated by wood pulping and paper production processes are traditionally treated by biological and physicochemical processes. In order to reduce chemical oxygen demand (COD) and color to meet increasingly strict discharge standards, advanced oxidation processes (AOPs) are being adopted as polishing treatment units. Various ozone-based processes were used in this study to treat simulated wastewaters prepared from the black liquor of a hardwood Kraft pulp mill in Taiwan. The experimental results showed that COD and color were primarily removed by direct ozone oxidation and activated carbon adsorption. While the addition of activated carbon enhanced COD and color removal during ozonation, the addition of hydrogen peroxide improved color removal only. For the various ozone-based treatment processes, kinetic models were developed that satisfactorily predict the COD and color removal rates. According to the kinetic parameters obtained for the various ozone-based processes, the enhanced COD and color removal of ozonation in the presence of activated carbon was attributed to regeneration of the activated carbon by ozonation. These kinetic models can be used for reactor and process design to treat pulping wastewater using ozone-based processes.
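
    As a generic illustration of the kinetic-modeling step (not the paper's specific rate laws), COD decay during ozonation is often described by pseudo-first-order kinetics, C(t) = C0·exp(−kt), and the rate constant k can be recovered from a log-linear fit:

```python
import numpy as np

# Pseudo-first-order decay C(t) = C0 * exp(-k t); recover k from ln C vs t.
t = np.array([0., 10., 20., 30., 40., 60.])   # min
k_true, C0 = 0.03, 850.0                      # 1/min, mg/L (hypothetical)
C = C0 * np.exp(-k_true * t)                  # synthetic COD measurements

slope, intercept = np.polyfit(t, np.log(C), 1)
print(-slope, np.exp(intercept))              # recovered k and C0
```

    With real data the fitted k would summarize each treatment variant (ozone alone, ozone + activated carbon, ozone + H2O2) for design purposes.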

  3. Simulation of quantum computation : A deterministic event-based approach

    NARCIS (Netherlands)

    Michielsen, K; De Raedt, K; De Raedt, H

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  4. Simulation of Quantum Computation : A Deterministic Event-Based Approach

    NARCIS (Netherlands)

    Michielsen, K.; Raedt, K. De; Raedt, H. De

    2005-01-01

    We demonstrate that locally connected networks of machines that have primitive learning capabilities can be used to perform a deterministic, event-based simulation of quantum computation. We present simulation results for basic quantum operations such as the Hadamard and the controlled-NOT gate, and

  5. Extraction of wind and temperature information from hybrid 4D-Var assimilation of stratospheric ozone using NAVGEM

    Science.gov (United States)

    Allen, Douglas R.; Hoppel, Karl W.; Kuhl, David D.

    2018-03-01

    Extraction of wind and temperature information from stratospheric ozone assimilation is examined within the context of the Navy Global Environmental Model (NAVGEM) hybrid four-dimensional variational (4D-Var) data assimilation (DA) system. Ozone can improve the wind and temperature through two different DA mechanisms: (1) through the flow-of-the-day ensemble background error covariance that is blended together with the static background error covariance, and (2) via the ozone continuity equation in the tangent linear model and adjoint used for minimizing the cost function. All experiments assimilate actual conventional data in order to maintain a similar, realistic troposphere. In the stratosphere, the experiments assimilate simulated ozone and/or radiance observations in various combinations. The simulated observations are constructed for a case study based on a 16-day cycling truth experiment (TE), which is an analysis with no stratospheric observations. The impact of ozone on the analysis is evaluated by comparing the experiments to the TE for the last 6 days, allowing for a 10-day spin-up. Ozone assimilation benefits the wind and temperature when data are of sufficient quality and frequency. For example, assimilation of perfect (no applied error) global hourly ozone data constrains the stratospheric wind and temperature to within ~2 m s-1 and ~1 K. This demonstrates that there is dynamical information in the ozone distribution that can potentially be used to improve the stratosphere. This is particularly important for the tropics, where radiance observations have difficulty constraining wind due to the breakdown of geostrophic balance. Global ozone assimilation provides the largest benefit when the hybrid blending coefficient takes an intermediate value (0.5 was used in this study), rather than 0.0 (no ensemble background error covariance) or 1.0 (no static background error covariance), which is consistent with other hybrid DA studies. When perfect global ozone is
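
    The hybrid blending mechanism (1) amounts to a weighted combination of the static and ensemble background error covariances. A minimal sketch with toy matrices (the 0.5 weight matches the study; the numbers are otherwise invented):

```python
import numpy as np

# Hybrid background error covariance:
#   B = (1 - beta) * B_static + beta * B_ens,  beta = 0.5 in the study.
beta = 0.5
B_static = np.eye(2)                           # climatological covariance (toy)
ensemble = np.array([[1.2, 0.9, 1.1, 0.8],     # perturbations of 2 state vars
                     [0.4, 0.6, 0.5, 0.5]])
B_ens = np.cov(ensemble)                       # flow-of-the-day covariance
B_hybrid = (1 - beta) * B_static + beta * B_ens
print(B_hybrid)
```

    The off-diagonal terms of B_ens are what let an ozone observation update wind and temperature: flow-dependent cross-covariances couple the variables, which the static B alone cannot provide.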

  6. Performance assessment of deterministic and probabilistic weather predictions for the short-term optimization of a tropical hydropower reservoir

    Science.gov (United States)

    Mainardi Fan, Fernando; Schwanenberg, Dirk; Alvarado, Rodolfo; Assis dos Reis, Alberto; Naumann, Steffi; Collischonn, Walter

    2016-04-01

    Hydropower is the most important electricity source in Brazil. During recent years, it accounted for 60% to 70% of the total electric power supply. Marginal costs of hydropower are lower than those of thermal power plants; therefore, there is a strong economic motivation to maximize its share. On the other hand, hydropower depends on the availability of water, which has a natural variability. Its extremes lead to the risks of power production deficits during droughts and safety issues in the reservoir and downstream river reaches during flood events. One building block of the proper management of hydropower assets is the short-term forecast of reservoir inflows as input for an online, event-based optimization of the release strategy. While deterministic forecasts and optimization schemes are the established techniques for short-term reservoir management, the use of probabilistic ensemble forecasts and stochastic optimization techniques is receiving growing attention, and a number of studies have shown their benefit. The present work shows one of the first hindcasting and closed-loop control experiments for a multi-purpose hydropower reservoir in a tropical region of Brazil. The case study is the hydropower project (HPP) Três Marias, located in southeast Brazil. The HPP reservoir is operated with two main objectives: (i) hydroelectricity generation and (ii) flood control at Pirapora City, located 120 km downstream of the dam. In the experiments, precipitation forecasts based on observed data, together with deterministic and probabilistic forecasts with 50 ensemble members from the ECMWF, are used to force the MGB-IPH hydrological model and generate streamflow forecasts over a period of 2 years. The online optimization relies on a deterministic and a multi-stage stochastic version of a model predictive control scheme. Results for the perfect forecasts show the potential benefit of the online optimization and indicate a desired forecast lead time of 30 days. In comparison, the use of

  7. Extreme value analysis for evaluating ozone control strategies.

    Science.gov (United States)

    Reich, Brian; Cooley, Daniel; Foley, Kristen; Napelenok, Sergey; Shaby, Benjamin

    2013-06-01

    Tropospheric ozone is one of six criteria pollutants regulated by the US EPA, and has been linked to respiratory and cardiovascular endpoints and adverse effects on vegetation and ecosystems. Regional photochemical models have been developed to study the impacts of emission reductions on ozone levels. The standard approach is to run the deterministic model under new emission levels and attribute the change in ozone concentration to the emission control strategy. However, running the deterministic model requires substantial computing time, and this approach does not provide a measure of uncertainty for the change in ozone levels. Recently, a reduced form model (RFM) has been proposed to approximate the complex model as a simple function of a few relevant inputs. In this paper, we develop a new statistical approach to make full use of the RFM to study the effects of various control strategies on the probability and magnitude of extreme ozone events. We fuse the model output with monitoring data to calibrate the RFM by modeling the conditional distribution of monitoring data given the RFM using a combination of flexible semiparametric quantile regression for the center of the distribution where data are abundant and a parametric extreme value distribution for the tail where data are sparse. Selected parameters in the conditional distribution are allowed to vary by the RFM value and the spatial location. Also, due to the simplicity of the RFM, we are able to embed the RFM in our Bayesian hierarchical framework to obtain a full posterior for the model input parameters, and propagate this uncertainty to the estimation of the effects of the control strategies. We use the new framework to evaluate three potential control strategies, and find that reducing mobile-source emissions has a larger impact than reducing point-source emissions or a combination of several emission sources.
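
    The tail-modeling idea above, a parametric extreme value distribution fitted to exceedances over a high threshold, can be sketched with the simplest such model: an exponential tail (the zero-shape special case of the generalized Pareto distribution), whose maximum-likelihood scale is just the mean exceedance. Data and threshold choice below are hypothetical.

```python
import numpy as np

# Model ozone exceedances over a high threshold u with an exponential tail;
# the MLE of the scale is the mean exceedance.
rng = np.random.default_rng(3)
ozone = rng.gamma(shape=4.0, scale=15.0, size=20_000)  # synthetic daily maxima, ppb
u = np.quantile(ozone, 0.95)                           # high threshold
exc = ozone[ozone > u] - u
scale = exc.mean()                                     # MLE for Exp(scale)

def prob_exceed(level):
    """P(O3 > level) for a level above the threshold u."""
    return 0.05 * np.exp(-(level - u) / scale)

print(u, scale, prob_exceed(u + 2 * scale))
```

    Re-fitting the tail under each emission-control scenario's RFM output is what allows the paper to compare strategies by their effect on extreme-event probabilities, with uncertainty propagated through the Bayesian hierarchy.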

  8. Radiative forcing and climate metrics for ozone precursor emissions: the impact of multi-model averaging

    Directory of Open Access Journals (Sweden)

    C. R. MacIntosh

    2015-04-01

Full Text Available Multi-model ensembles are frequently used to assess understanding of the response of ozone and methane lifetime to changes in emissions of ozone precursors such as NOx, VOCs (volatile organic compounds) and CO. When these ozone changes are used to calculate radiative forcing (RF) and climate metrics such as the global warming potential (GWP) and global temperature-change potential (GTP), there is a methodological choice, determined partly by the available computing resources, as to whether the mean ozone (and methane) concentration changes are input to the radiation code, or whether each model's ozone and methane changes are used as input, with the average RF computed from the individual model RFs. We use data from the Task Force on Hemispheric Transport of Air Pollution source–receptor global chemical transport model ensemble to assess the impact of this choice for emission changes in four regions (East Asia, Europe, North America and South Asia). We conclude that using the multi-model mean ozone and methane responses is accurate for calculating the mean RF, with differences up to 0.6% for CO, 0.7% for VOCs and 2% for NOx. Differences of up to 60% for NOx, 7% for VOCs and 3% for CO are introduced into the 20 year GWP. The differences for the 20 year GTP are smaller than for the GWP for NOx, and similar for the other species. However, estimates of the standard deviation calculated from the ensemble-mean input fields (where the standard deviation at each point on the model grid is added to or subtracted from the mean field) are almost always substantially larger in RF, GWP and GTP metrics than the true standard deviation, and can be larger than the model range for short-lived ozone RF, and for the 20 and 100 year GWP and 100 year GTP. The order of averaging has most impact on the metrics for NOx, as the net values for these quantities are the residual of the sum of terms of opposing signs. For example, the standard deviation for the 20 year GWP is 2–3

  9. Using statistical models to explore ensemble uncertainty in climate impact studies: the example of air pollution in Europe

    Directory of Open Access Journals (Sweden)

    V. E. P. Lemaire

    2016-03-01

Full Text Available Because of its sensitivity to unfavorable weather patterns, air pollution is sensitive to climate change, so that, in the future, a climate penalty could jeopardize the expected efficiency of air pollution mitigation measures. A common method to assess the impact of climate on air quality consists in implementing chemistry-transport models forced by climate projections. However, the computing cost of such methods requires optimizing ensemble exploration techniques. By using a training data set from a deterministic projection of climate and air quality over Europe, we identified the main meteorological drivers of air quality for eight regions in Europe and developed statistical models that could be used to predict air pollutant concentrations. The evolution of the key climate variables driving either particulate or gaseous pollution allows selecting the members of the EuroCordex ensemble of regional climate projections that should be used in priority for future air quality projections (CanESM2/RCA4, CNRM-CM5-LR/RCA4, CSIRO-Mk3-6-0/RCA4 and MPI-ESM-LR/CCLM, following the EuroCordex terminology). After having tested the validity of the statistical model in predictive mode, we can provide ranges of uncertainty attributed to the spread of the regional climate projection ensemble by the end of the century (2071–2100) for RCP8.5. In the three regions where the statistical model of the impact of climate change on PM2.5 offers satisfactory performance, we find a climate benefit (a decrease of PM2.5 concentrations under future climate) of −1.08 (±0.21), −1.03 (±0.32) and −0.83 (±0.14) µg m−3 for Eastern Europe, Mid-Europe and Northern Italy, respectively. In the British-Irish Isles, Scandinavia, France, the Iberian Peninsula and the Mediterranean, the statistical model is not considered skillful enough to draw any conclusion for PM2.5. In Eastern Europe, France, the Iberian Peninsula, Mid-Europe and Northern Italy, the statistical model of the

  10. A note on the multi model super ensemble technique for reducing forecast errors

    International Nuclear Information System (INIS)

    Kantha, L.; Carniel, S.; Sclavo, M.

    2008-01-01

The multi-model superensemble (SE) technique has been used with considerable success to improve meteorological forecasts and is now being applied to ocean models. Although the technique has been shown to produce deterministic forecasts that can be superior to the individual models in the ensemble or to a simple multi-model ensemble forecast, there is a clear need to understand its strengths and limitations. This paper is an attempt to do so in simple, easily understood contexts. The results demonstrate that the SE forecast is almost always better than the simple ensemble forecast, the degree of improvement depending on the properties of the models in the ensemble. However, the skill of the SE forecast with respect to the truth depends on a number of factors, principal among which is the skill of the models in the ensemble. As can be expected, if the ensemble consists of models with poor skill, the SE forecast will also be poor, although better than the ensemble forecast. On the other hand, the inclusion of even a single skillful model in the ensemble increases the forecast skill significantly.
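
The core of the superensemble idea is a regression of observed anomalies on the individual model anomalies over a training period, with the fitted weights then applied to new forecasts. The sketch below is a minimal illustration of that mechanism on synthetic data, not the authors' implementation; the toy "models" (one biased, one damped) are assumptions for demonstration.

```python
import numpy as np

def superensemble_weights(train_fcst, train_obs):
    """Least-squares weights for combining model anomalies (SE sketch).
    train_fcst: (n_times, n_models); train_obs: (n_times,)."""
    F = train_fcst - train_fcst.mean(axis=0)   # model anomalies
    y = train_obs - train_obs.mean()           # observed anomalies
    w, *_ = np.linalg.lstsq(F, y, rcond=None)
    return w, train_obs.mean(), train_fcst.mean(axis=0)

def superensemble_forecast(fcst, w, obs_mean, model_means):
    """Combine new model forecasts using training-period weights and means."""
    return obs_mean + (fcst - model_means) @ w

# toy example: one warm-biased model and one amplitude-damped model
rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0.0, 6.0, 80))
models = np.column_stack([
    truth + 0.5 + 0.1 * rng.standard_normal(80),
    0.7 * truth + 0.05 * rng.standard_normal(80),
])
w, om, mm = superensemble_weights(models[:60], truth[:60])
se = superensemble_forecast(models[60:], w, om, mm)
simple = models[60:].mean(axis=1)
rmse_se = float(np.sqrt(np.mean((se - truth[60:]) ** 2)))
rmse_simple = float(np.sqrt(np.mean((simple - truth[60:]) ** 2)))
```

Because the regression removes each model's systematic bias and amplitude error, the SE forecast beats the simple ensemble mean whenever those errors are stable between training and forecast periods, which mirrors the paper's point that SE skill is bounded by the skill of the member models.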

  11. Pre- and post-processing of hydro-meteorological ensembles for the Norwegian flood forecasting system in 145 basins.

    Science.gov (United States)

    Jahr Hegdahl, Trine; Steinsland, Ingelin; Merete Tallaksen, Lena; Engeland, Kolbjørn

    2016-04-01

Probabilistic flood forecasting has an added value for decision making. The Norwegian flood forecasting service is based on a flood forecasting model that runs for 145 basins. Covering all of Norway, the basins differ in both size and hydrological regime. Currently the flood forecasting is based on deterministic meteorological forecasts, and an auto-regressive procedure is used to achieve probabilistic forecasts. An alternative approach is to use meteorological and hydrological ensemble forecasts to quantify the uncertainty in forecasted streamflow. The hydrological ensembles are based on forcing a hydrological model with meteorological ensemble forecasts of precipitation and temperature. However, the ensembles of precipitation are often biased and the spread is too small, especially for the shortest lead times, i.e. they are not calibrated. These properties will, to some extent, propagate to the hydrological ensembles, which most likely will be uncalibrated as well. Pre- and post-processing methods are commonly used to obtain calibrated meteorological and hydrological ensembles, respectively. Quantitative studies showing the effect of the combined processing of the meteorological (pre-processing) and the hydrological (post-processing) ensembles are however few. The aim of this study is to evaluate the influence of pre- and post-processing on the skill of streamflow predictions, and we will especially investigate whether the forecasting skill depends on lead time, basin size and hydrological regime. This aim is achieved by applying the 51-member medium-range ensemble forecasts of precipitation and temperature provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). These ensembles are used as input to the operational Norwegian flood forecasting model, both raw and pre-processed. Precipitation ensembles are calibrated using a zero-adjusted gamma distribution. Temperature ensembles are calibrated using a Gaussian distribution and altitude corrected by a constant gradient.
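
To make the pre-processing step concrete, here is a much-simplified stand-in for Gaussian calibration of a temperature ensemble: an affine bias correction of the ensemble mean plus rescaling of the member spread to match the training-period error spread. This is a generic EMOS/NGR-style sketch on synthetic data, not the paper's zero-adjusted gamma scheme or its altitude correction.

```python
import numpy as np

rng = np.random.default_rng(42)

def calibrate_gaussian(ens_train, obs_train, ens_new):
    """Toy Gaussian calibration: regress obs on the ensemble mean, then
    rescale member spread to the training residual spread."""
    m_train = ens_train.mean(axis=1)
    b, a = np.polyfit(m_train, obs_train, 1)           # obs ≈ a + b * ens mean
    err_std = np.std(obs_train - (a + b * m_train))    # residual spread
    centered = ens_new - ens_new.mean(axis=1, keepdims=True)
    raw_std = centered.std()
    return (a + b * ens_new.mean(axis=1))[:, None] + centered * (err_std / raw_std)

# synthetic temperature ensemble: warm-biased (+2 K) and underdispersive
truth = rng.normal(0.0, 3.0, size=400)
fcst_err = rng.normal(0.0, 1.0, size=400)
raw = (truth + fcst_err)[:, None] + 2.0 + 0.2 * rng.normal(size=(400, 20))
cal = calibrate_gaussian(raw[:300], truth[:300], raw[300:])
```

The calibrated ensemble is (approximately) unbiased and its spread reflects the actual forecast error, which is exactly the property whose propagation into the hydrological ensembles the study investigates.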

  12. Potential of an ensemble Kalman smoother for stratospheric chemical-dynamical data assimilation

    Directory of Open Access Journals (Sweden)

    Thomas Milewski

    2013-02-01

Full Text Available A new stratospheric ensemble Kalman smoother (EnKS) system is introduced, and the potential of assimilating posterior stratospheric observations to better constrain the whole model state at analysis time is investigated. A set of idealised perfect-model Observation System Simulation Experiments (OSSEs) assimilating synthetic limb-sounding temperature or ozone retrievals is performed with a chemistry–climate model. The impact during the analysis step is characterised in terms of the root mean square error reduction between the forecast state and the analysis state. The performances of (1) a fixed-lag EnKS assimilating observations spread over 48 hours and (2) an ensemble Kalman filter (EnKF) assimilating a denser network of observations are compared with a reference EnKF. The ozone assimilation with the EnKS shows a significant additional reduction of analysis error, of the order of 10%, for dynamical and chemical variables in the extratropical upper troposphere–lower stratosphere (UTLS) and polar vortex regions when compared to the reference EnKF. This reduction is of similar magnitude to the one achieved by the denser-network EnKF assimilation. Similarly, the temperature assimilation with the EnKS significantly decreases the error in the UTLS for the wind variables, like the denser-network EnKF assimilation. However, the temperature assimilation with the EnKS has little or no significant impact on the temperature and ozone analyses, whereas the denser-network EnKF shows improvement with respect to the reference EnKF. The different analysis impacts from the assimilation of current and posterior ozone observations indicate the capacity of time-lagged background-error covariances to represent temporal interactions of up to 48 hours between variables during the ensemble data assimilation analysis step, and the possibility to use posterior observations whenever additional current observations are unavailable. The possible application of the EnKS for reanalyses is

  13. Cluster Ensemble-Based Image Segmentation

    Directory of Open Access Journals (Sweden)

    Xiaoru Wang

    2013-07-01

Full Text Available Image segmentation is the foundation of computer vision applications. In this paper, we propose a new cluster ensemble-based image segmentation algorithm, which overcomes several problems of traditional methods. We make two main contributions in this paper. First, we introduce the cluster ensemble concept to fuse the segmentation results from different types of visual features effectively, which can deliver a better final result and achieve a much more stable performance for broad categories of images. Second, we exploit the PageRank idea from Internet applications and apply it to the image segmentation task. This can improve the final segmentation results by combining the spatial information of the image and the semantic similarity of regions. Our experiments on four public image databases validate the superiority of our algorithm over conventional algorithms based on a single type or multiple types of features, since our algorithm can fuse multiple types of features effectively for better segmentation results. Moreover, our method also proves very competitive in comparison with other state-of-the-art segmentation algorithms.
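
The PageRank idea the abstract borrows can be sketched as power iteration over a region-similarity graph; how the paper actually builds that graph from spatial and semantic cues is not specified here, so the symmetric similarity matrix below is purely illustrative.

```python
import numpy as np

def pagerank(A, d=0.85, iters=1000, tol=1e-10):
    """Power iteration with damping factor d on a similarity matrix A.
    Assumes every node has at least one nonzero edge (no dangling nodes)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = A / A.sum(axis=0, keepdims=True)   # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r_new = (1.0 - d) / n + d * M @ r
        if np.abs(r_new - r).max() < tol:
            break
        r = r_new
    return r_new

# three hypothetical regions: 0 and 1 are strongly similar, 2 is weakly linked
sim = np.array([[0.0, 2.0, 1.0],
                [2.0, 0.0, 1.0],
                [1.0, 1.0, 0.0]])
r = pagerank(sim)
```

Regions that are strongly similar to many others accumulate rank, which is the signal a cluster-ensemble method can use to decide which candidate regions to trust when fusing segmentations.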

  14. Simultaneous calibration of ensemble river flow predictions over an entire range of lead times

    Science.gov (United States)

    Hemri, S.; Fundel, F.; Zappa, M.

    2013-10-01

Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions, statistical postprocessing is required. In this work, Bayesian model averaging (BMA) is applied to statistically postprocess raw ensemble runoff forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is an increase in reliability when the forecast system is changing due to model availability.
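
A pared-down sketch of the univariate ingredients: a Box-Cox transform toward normality, followed by an EM estimate of BMA weights with normal kernels. Real BMA implementations also bias-correct each member and typically use per-member variances; the single shared variance, the fixed Box-Cox exponent, and the two synthetic members below are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def boxcox(x, lam=0.25):
    """Box-Cox transform toward normality for skewed, positive runoff."""
    return (np.power(x, lam) - 1.0) / lam

def bma_normal_weights(fcst, obs, iters=300):
    """EM for BMA weights with normal kernels and one shared variance."""
    n, k = fcst.shape
    w = np.full(k, 1.0 / k)
    sig2 = float(np.var(obs - fcst.mean(axis=1)))
    for _ in range(iters):
        dens = np.exp(-0.5 * (obs[:, None] - fcst) ** 2 / sig2)
        dens /= np.sqrt(2.0 * np.pi * sig2)
        z = w * dens                         # E-step: member responsibilities
        z /= z.sum(axis=1, keepdims=True)
        w = z.mean(axis=0)                   # M-step: weights and variance
        sig2 = float(np.sum(z * (obs[:, None] - fcst) ** 2) / n)
    return w, sig2

runoff = rng.gamma(shape=2.0, scale=3.0, size=400) + 1.0   # skewed, positive
y = boxcox(runoff)
members = np.column_stack([y + 0.1 * rng.standard_normal(400),   # sharp member
                           y + 1.0 * rng.standard_normal(400)])  # noisy member
w, sig2 = bma_normal_weights(members, y)
```

EM gives most of the weight to the member whose errors are consistently small, which is how BMA downweights unreliable forcing models in the mixture.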

  15. JuPOETs: a constrained multiobjective optimization approach to estimate biochemical model ensembles in the Julia programming language.

    Science.gov (United States)

    Bassen, David M; Vilkhovoy, Michael; Minot, Mason; Butcher, Jonathan T; Varner, Jeffrey D

    2017-01-25

Ensemble modeling is a promising approach for obtaining robust predictions and coarse grained population behavior in deterministic mathematical models. Ensemble approaches address model uncertainty by using parameter or model families instead of single best-fit parameters or fixed model structures. Parameter ensembles can be selected based upon simulation error, along with other criteria such as diversity or steady-state performance. Simulations using parameter ensembles can estimate confidence intervals on model variables, and robustly constrain model predictions, despite having many poorly constrained parameters. In this software note, we present a multiobjective-based technique to estimate parameter or model ensembles, the Pareto Optimal Ensemble Technique in the Julia programming language (JuPOETs). JuPOETs integrates simulated annealing with Pareto optimality to estimate ensembles on or near the optimal tradeoff surface between competing training objectives. We demonstrate JuPOETs on a suite of multiobjective problems, including test functions with parameter bounds and system constraints as well as for the identification of a proof-of-concept biochemical model with four conflicting training objectives. JuPOETs identified optimal or near optimal solutions approximately six-fold faster than a corresponding implementation in Octave for the suite of test functions. For the proof-of-concept biochemical model, JuPOETs produced an ensemble of parameters that captured the mean of the training data for conflicting data sets while simultaneously identifying parameter sets that performed well on each individual objective function. JuPOETs is a promising approach for the estimation of parameter and model ensembles using multiobjective optimization. JuPOETs can be adapted to solve many problem types, including mixed binary and continuous variable types, bilevel optimization problems and constrained problems without altering the base algorithm. JuPOETs is open
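
The Pareto-optimality test at the heart of POETs-style samplers can be stated in a few lines. This is a generic sketch of Pareto dominance and ranking under a minimization convention, not code from JuPOETs itself; the example objective vectors are made up.

```python
def dominates(a, b):
    """a Pareto-dominates b when a is no worse in every objective and
    strictly better in at least one (minimization convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_rank(points):
    """Rank each candidate by how many others dominate it; rank 0 is the
    non-dominated front. POETs-style samplers use such ranks to decide
    whether a perturbed parameter set is accepted during annealing."""
    return [sum(dominates(q, p) for q in points) for p in points]

# hypothetical training errors of four parameter sets on two objectives
errors = [(1.0, 5.0), (5.0, 1.0), (3.0, 3.0), (4.0, 4.0)]
ranks = pareto_rank(errors)
```

Here the first three candidates are mutually non-dominated (each trades one objective against the other), while the fourth is dominated by (3.0, 3.0); accepting candidates by rank rather than by a single scalarized error is what lets the ensemble cover the tradeoff surface between conflicting objectives.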

  16. Extraction of wind and temperature information from hybrid 4D-Var assimilation of stratospheric ozone using NAVGEM

    Directory of Open Access Journals (Sweden)

    D. R. Allen

    2018-03-01

Full Text Available Extraction of wind and temperature information from stratospheric ozone assimilation is examined within the context of the Navy Global Environmental Model (NAVGEM) hybrid 4-D variational (4D-Var) data assimilation (DA) system. Ozone can improve the wind and temperature through two different DA mechanisms: (1) through the flow-of-the-day ensemble background error covariance that is blended together with the static background error covariance and (2) via the ozone continuity equation in the tangent linear model and adjoint used for minimizing the cost function. All experiments assimilate actual conventional data in order to maintain a similar realistic troposphere. In the stratosphere, the experiments assimilate simulated ozone and/or radiance observations in various combinations. The simulated observations are constructed for a case study based on a 16-day cycling truth experiment (TE), which is an analysis with no stratospheric observations. The impact of ozone on the analysis is evaluated by comparing the experiments to the TE for the last 6 days, allowing for a 10-day spin-up. Ozone assimilation benefits the wind and temperature when data are of sufficient quality and frequency. For example, assimilation of perfect (no applied error) global hourly ozone data constrains the stratospheric wind and temperature to within ∼ 2 m s−1 and ∼ 1 K. This demonstrates that there is dynamical information in the ozone distribution that can potentially be used to improve the stratosphere. This is particularly important for the tropics, where radiance observations have difficulty constraining wind due to the breakdown of geostrophic balance. Global ozone assimilation provides the largest benefit when the hybrid blending coefficient takes an intermediate value (0.5 was used in this study) rather than 0.0 (no ensemble background error covariance) or 1.0 (no static background error covariance), which is consistent with other hybrid DA studies. When

  17. Quasi-static ensemble variational data assimilation: a theoretical and numerical study with the iterative ensemble Kalman smoother

    Science.gov (United States)

    Fillion, Anthony; Bocquet, Marc; Gratton, Serge

    2018-04-01

    The analysis in nonlinear variational data assimilation is the solution of a non-quadratic minimization. Thus, the analysis efficiency relies on its ability to locate a global minimum of the cost function. If this minimization uses a Gauss-Newton (GN) method, it is critical for the starting point to be in the attraction basin of a global minimum. Otherwise the method may converge to a local extremum, which degrades the analysis. With chaotic models, the number of local extrema often increases with the temporal extent of the data assimilation window, making the former condition harder to satisfy. This is unfortunate because the assimilation performance also increases with this temporal extent. However, a quasi-static (QS) minimization may overcome these local extrema. It accomplishes this by gradually injecting the observations in the cost function. This method was introduced by Pires et al. (1996) in a 4D-Var context. We generalize this approach to four-dimensional strong-constraint nonlinear ensemble variational (EnVar) methods, which are based on both a nonlinear variational analysis and the propagation of dynamical error statistics via an ensemble. This forces one to consider the cost function minimizations in the broader context of cycled data assimilation algorithms. We adapt this QS approach to the iterative ensemble Kalman smoother (IEnKS), an exemplar of nonlinear deterministic four-dimensional EnVar methods. Using low-order models, we quantify the positive impact of the QS approach on the IEnKS, especially for long data assimilation windows. We also examine the computational cost of QS implementations and suggest cheaper algorithms.

  18. The DMSP/MFR total ozone and radiance data base

    International Nuclear Information System (INIS)

    Ellis, J.S.; Lovill, J.E.; Luther, F.M.; Sullivan, T.J.; Taylor, S.S.; Weichel, R.L.

    1992-01-01

The radiance measurements by the multichannel filter radiometer (MFR), a scanning instrument carried on the Defense Meteorological Satellite Program (DMSP) Block 5D series of satellites (flight models F1, F2, F3 and F4), were used to calculate total column ozone globally for the period March 1977 through February 1980. These data were calibrated and mapped to earth coordinates at LLNL. Total column ozone was derived from the calibrated radiance data, and both the ozone and calibrated radiance data were placed into a computer data base called SOAC (Satellite Ozone Analysis Center) using the FRAMIS database manager. The uncalibrated radiance data tapes were initially sent to the National Climate Center, Asheville, North Carolina, and then to the Satellite Data Services Branch/EDS/NOAA in Suitland, Maryland, where they were archived. Copies of the data base containing the total ozone and calibrated radiance data reside both at LLNL and at the National Space Science Data Center, NASA Goddard Space Flight Center, Greenbelt, Maryland. This report describes the entries in the data base in sufficient detail that the data base might be useful to others. The characteristics of the MFR sensor are briefly discussed and a complete index to the data base tapes is given.

  19. Ensemble of classifiers based network intrusion detection system performance bound

    CSIR Research Space (South Africa)

    Mkuzangwe, Nenekazi NP

    2017-11-01

Full Text Available This paper provides a performance bound of a network intrusion detection system (NIDS) that uses an ensemble of classifiers. Currently, researchers rely on implementing the ensemble-of-classifiers-based NIDS before they can determine its performance...

  20. On the forecast skill of a convection-permitting ensemble

    Science.gov (United States)

    Schellander-Gorgas, Theresa; Wang, Yong; Meier, Florian; Weidle, Florian; Wittmann, Christoph; Kann, Alexander

    2017-01-01

    The 2.5 km convection-permitting (CP) ensemble AROME-EPS (Applications of Research to Operations at Mesoscale - Ensemble Prediction System) is evaluated by comparison with the regional 11 km ensemble ALADIN-LAEF (Aire Limitée Adaption dynamique Développement InterNational - Limited Area Ensemble Forecasting) to show whether a benefit is provided by a CP EPS. The evaluation focuses on the abilities of the ensembles to quantitatively predict precipitation during a 3-month convective summer period over areas consisting of mountains and lowlands. The statistical verification uses surface observations and 1 km × 1 km precipitation analyses, and the verification scores involve state-of-the-art statistical measures for deterministic and probabilistic forecasts as well as novel spatial verification methods. The results show that the convection-permitting ensemble with higher-resolution AROME-EPS outperforms its mesoscale counterpart ALADIN-LAEF for precipitation forecasts. The positive impact is larger for the mountainous areas than for the lowlands. In particular, the diurnal precipitation cycle is improved in AROME-EPS, which leads to a significant improvement of scores at the concerned times of day (up to approximately one-third of the scored verification measure). Moreover, there are advantages for higher precipitation thresholds at small spatial scales, which are due to the improved simulation of the spatial structure of precipitation.

  1. An analog ensemble for short-term probabilistic solar power forecast

    International Nuclear Information System (INIS)

    Alessandrini, S.; Delle Monache, L.; Sperati, S.; Cervone, G.

    2015-01-01

Highlights: • A novel method for solar power probabilistic forecasting is proposed. • The forecast accuracy does not depend on the nominal power. • The impact of climatology on forecast accuracy is evaluated. - Abstract: The energy produced by photovoltaic farms has a variable nature depending on astronomical and meteorological factors. The former are the solar elevation and the solar azimuth, which are easily predictable without any uncertainty. The amount of liquid water met by the solar radiation within the troposphere is the main meteorological factor influencing the solar power production, as a fraction of short wave solar radiation is reflected by the water particles and cannot reach the earth's surface. The total cloud cover is a meteorological variable often used to indicate the presence of liquid water in the troposphere and has a limited predictability, which is also reflected in the global horizontal irradiance and, as a consequence, in solar photovoltaic power prediction. This lack of predictability makes the solar energy integration into the grid challenging. A cost-effective utilization of solar energy over a grid strongly depends on the accuracy and reliability of the power forecasts available to the Transmission System Operators (TSOs). Furthermore, several countries have in place legislation requiring solar power producers to pay penalties proportional to the errors of day-ahead energy forecasts, which makes the accuracy of such predictions a determining factor for producers to reduce their economic losses. Probabilistic predictions can provide accurate deterministic forecasts along with a quantification of their uncertainty, as well as a reliable estimate of the probability to overcome a certain production threshold. In this paper we propose the application of an analog ensemble (AnEn) method to generate probabilistic solar power forecasts (SPF). The AnEn is based on a historical set of deterministic numerical weather prediction (NWP) model
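
The analog ensemble mechanism is simple to state: find the past deterministic forecasts most similar to the current one, and let the observations that verified those past forecasts form the predictive ensemble. The sketch below uses a plain Euclidean distance over two made-up predictors; operational AnEn implementations weight predictors and search within a time window around the forecast lead time.

```python
import numpy as np

def analog_ensemble(hist_fcst, hist_obs, current_fcst, n_analogs=10):
    """AnEn sketch: the observations matching the n_analogs past forecasts
    closest to the current forecast form the predictive ensemble."""
    d = np.linalg.norm(hist_fcst - current_fcst, axis=1)
    idx = np.argsort(d)[:n_analogs]
    return hist_obs[idx]

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(500, 2))   # past NWP predictors (hypothetical)
y = 3.0 * X[:, 0] - X[:, 1] + 0.05 * rng.standard_normal(500)  # past power obs
ens = analog_ensemble(X, y, np.array([0.5, 0.5]))
```

The spread of the returned observations directly quantifies the forecast uncertainty, and no separate ensemble NWP run is needed, which is the method's main practical appeal.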

  2. Modelling the Ozone-Based Treatments for Inactivation of Microorganisms

    Directory of Open Access Journals (Sweden)

    Agnieszka Joanna Brodowska

    2017-10-01

Full Text Available The paper presents the development of a model for ozone treatment in a dynamic bed of different microorganisms (Bacillus subtilis, B. cereus, B. pumilus, Escherichia coli, Pseudomonas fluorescens, Aspergillus niger, Eupenicillium cinnamopurpureum) on a heterogeneous matrix (juniper berries, cardamom seeds) initially treated with numerous ozone doses during various contact times. Taking into account the varying susceptibility of microorganisms to ozone, it was of great importance to develop a sufficiently effective ozone dose to preserve food products, using different strains based on the microbial model. For this purpose, we have chosen the Weibull model to describe the survival curves of the different microorganisms. Based on the results of modelling microorganism survival after ozone treatment, and considering the strains least susceptible to ozone, we selected the critical ones. Among the tested strains, those from the genus Bacillus were recognized as the most critical. In particular, B. subtilis and B. pumilus possessed the highest resistance to ozone treatment because the time needed to achieve the lowest level of survival was the longest (up to 17.04 min and 16.89 min for B. pumilus reduction on the juniper berry and cardamom seed matrix, respectively). Ozone treatment allows inactivating microorganisms to lower survival rates at a given ozone dose (20.0 g O3/m3 O2, with a flow rate of 0.4 L/min) and contact time (up to 20 min). The results demonstrated a linear correlation between the parameters p and k of the Weibull distribution, providing an opportunity to calculate a fitted equation for the process.

  3. Modelling the Ozone-Based Treatments for Inactivation of Microorganisms

    Science.gov (United States)

    Brodowska, Agnieszka Joanna; Nowak, Agnieszka; Kondratiuk-Janyska, Alina; Piątkowski, Marcin; Śmigielski, Krzysztof

    2017-01-01

The paper presents the development of a model for ozone treatment in a dynamic bed of different microorganisms (Bacillus subtilis, B. cereus, B. pumilus, Escherichia coli, Pseudomonas fluorescens, Aspergillus niger, Eupenicillium cinnamopurpureum) on a heterogeneous matrix (juniper berries, cardamom seeds) initially treated with numerous ozone doses during various contact times. Taking into account the varying susceptibility of microorganisms to ozone, it was of great importance to develop a sufficiently effective ozone dose to preserve food products, using different strains based on the microbial model. For this purpose, we have chosen the Weibull model to describe the survival curves of the different microorganisms. Based on the results of modelling microorganism survival after ozone treatment, and considering the strains least susceptible to ozone, we selected the critical ones. Among the tested strains, those from the genus Bacillus were recognized as the most critical. In particular, B. subtilis and B. pumilus possessed the highest resistance to ozone treatment because the time needed to achieve the lowest level of survival was the longest (up to 17.04 min and 16.89 min for B. pumilus reduction on the juniper berry and cardamom seed matrix, respectively). Ozone treatment allows inactivating microorganisms to lower survival rates at a given ozone dose (20.0 g O3/m3 O2, with a flow rate of 0.4 L/min) and contact time (up to 20 min). The results demonstrated a linear correlation between the parameters p and k of the Weibull distribution, providing an opportunity to calculate a fitted equation for the process. PMID:28991199
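
One common form of the Weibull survival model used for such inactivation curves is log10(N/N0) = -k·t^p, which becomes linear after a log-log transform and can be fitted by least squares. Whether the paper uses exactly this (k, p) parameterization is not stated in the abstract, so treat the sketch below as an illustrative form (it is equivalent to the delta-based version via k = delta^(-p)).

```python
import numpy as np

def weibull_log10_survival(t, k, p):
    """Weibull inactivation model: log10(N/N0) = -k * t**p."""
    return -k * np.asarray(t, dtype=float) ** p

def fit_weibull(t, log10_surv):
    """Least-squares fit of (k, p) on log-log axes.
    Requires log10_surv < 0, i.e. some inactivation at every time point."""
    u = np.log(np.asarray(t, dtype=float))
    v = np.log(-np.asarray(log10_surv, dtype=float))   # v = ln k + p * ln t
    p, lnk = np.polyfit(u, v, 1)
    return float(np.exp(lnk)), float(p)

# noiseless synthetic survival data; the fit recovers the parameters exactly
t = np.arange(1, 11, dtype=float)
surv = weibull_log10_survival(t, k=0.2, p=1.5)
k_hat, p_hat = fit_weibull(t, surv)
```

With p > 1 the curve has a "shoulder" (slow initial kill, then acceleration), which is consistent with resistant Bacillus strains needing long contact times before survival drops to its lowest level.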

  4. On the proper use of Ensembles for Predictive Uncertainty assessment

    Science.gov (United States)

    Todini, Ezio; Coccia, Gabriele; Ortiz, Enrique

    2015-04-01

uncertainty of the ensemble mean and that of the ensemble spread. The results of this new approach are illustrated by using data and forecasts from an operational real time flood forecasting. Coccia, G. and Todini, E. 2011. Recent developments in predictive uncertainty assessment based on the Model Conditional Processor approach. Hydrology and Earth System Sciences, 15, 3253-3274. doi:10.5194/hess-15-3253-2011. Krzysztofowicz, R. 1999. Bayesian theory of probabilistic forecasting via deterministic hydrologic model, Water Resour. Res., 35, 2739-2750. Raftery, A. E., T. Gneiting, F. Balabdaoui, and M. Polakowski, 2005. Using Bayesian model averaging to calibrate forecast ensembles, Mon. Weather Rev., 133, 1155-1174. Reggiani, P., Renner, M., Weerts, A., and van Gelder, P., 2009. Uncertainty assessment via Bayesian revision of ensemble streamflow predictions in the operational river Rhine forecasting system, Water Resour. Res., 45, W02428, doi:10.1029/2007WR006758. Todini, E. 2004. Role and treatment of uncertainty in real-time flood forecasting. Hydrological Processes, 18(14), 2743-2746. Todini, E. 2008. A model conditional processor to assess predictive uncertainty in flood forecasting. Intl. J. River Basin Management, 6(2): 123-137.

  5. Predictor-weighting strategies for probabilistic wind power forecasting with an analog ensemble

    Directory of Open Access Journals (Sweden)

    Constantin Junk

    2015-04-01

Full Text Available Unlike deterministic forecasts, probabilistic predictions provide estimates of uncertainty, which is an additional value for decision-making. Previous studies have proposed the analog ensemble (AnEn), a technique to generate uncertainty information from a purely deterministic forecast. The objective of this study is to improve the AnEn performance for wind power forecasts by developing static and dynamic weighting strategies, which optimize the predictor combination with a brute-force continuous ranked probability score (CRPS) minimization and a principal component analysis (PCA) of the predictors. Predictors are taken from the high-resolution deterministic forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF), including forecasts of wind at several heights, geopotential height, pressure, and temperature, among others. The weighting strategies are compared at five wind farms in Europe and the U.S. situated in regions with different terrain complexity, both onshore and offshore, and significantly improve the deterministic and probabilistic AnEn forecast performance compared to the AnEn with 10-m wind speed and direction as predictors and compared to PCA-based approaches. The AnEn methodology also provides a reliable estimation of the forecast uncertainty. The optimized predictor combinations are strongly dependent on terrain complexity, local wind regimes, and atmospheric stratification. Since the proposed predictor-weighting strategies accomplish both the selection of relevant predictors and the finding of their optimal weights, the AnEn performance is improved by up to 20% at onshore and offshore sites.
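
The score being brute-force minimized, the CRPS, has a closed sample form for a finite ensemble: CRPS = E|X - y| - 0.5 E|X - X'|, where X, X' are independent draws from the ensemble and y is the verifying observation. A minimal implementation (not the paper's code):

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample CRPS for an ensemble forecast against a scalar observation:
    mean |x_i - y| minus half the mean pairwise member distance."""
    m = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(m - obs))
    term2 = 0.5 * np.mean(np.abs(m[:, None] - m[None, :]))
    return float(term1 - term2)
```

For a single member the CRPS reduces to the absolute error, so it generalizes deterministic verification; a predictor weighting that lowers the average CRPS improves calibration and sharpness jointly, which is why it is a natural target for the weighting search.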

  6. Preserving the Boltzmann ensemble in replica-exchange molecular dynamics.

    Science.gov (United States)

    Cooke, Ben; Schmidler, Scott C

    2008-10-28

    We consider the convergence behavior of replica-exchange molecular dynamics (REMD) [Sugita and Okamoto, Chem. Phys. Lett. 314, 141 (1999)] based on properties of the numerical integrators in the underlying isothermal molecular dynamics (MD) simulations. We show that a variety of deterministic algorithms favored by molecular dynamics practitioners for constant-temperature simulation of biomolecules fail to be either measure-invariant or irreducible, and are therefore not ergodic. We then show that REMD using these algorithms also fails to be ergodic. As a result, the entire configuration space may not be explored even in an infinitely long simulation, and the simulation may not converge to the desired equilibrium Boltzmann ensemble. Moreover, our analysis shows that for initial configurations with unfavorable energy, it may be impossible for the system to reach a region surrounding the minimum energy configuration. We demonstrate these failures of REMD algorithms for three small systems: a Gaussian distribution (simple harmonic oscillator dynamics), a bimodal mixture of Gaussians distribution, and the alanine dipeptide. Examination of the resulting phase plots and equilibrium configuration densities indicates significant errors in the ensemble generated by REMD simulation. We describe a simple modification to address these failures based on a stochastic hybrid Monte Carlo correction, and prove that this is ergodic.
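
For context, the replica-exchange step whose target distribution the underlying integrators must preserve is the textbook Metropolis swap criterion, sketched below; this is standard REMD background, not the paper's hybrid Monte Carlo correction, and the function name is illustrative:

```python
import math
import random

def attempt_swap(E_i, E_j, T_i, T_j, rng=random):
    """Standard REMD exchange test between replicas at temperatures T_i and
    T_j with potential energies E_i and E_j (k_B = 1):
    accept with probability min(1, exp[(1/T_i - 1/T_j) * (E_i - E_j)])."""
    delta = (1.0 / T_i - 1.0 / T_j) * (E_i - E_j)
    return delta >= 0.0 or rng.random() < math.exp(delta)

# Handing the lower-energy configuration to the colder replica is always accepted:
always = attempt_swap(E_i=5.0, E_j=1.0, T_i=1.0, T_j=2.0)
```

The paper's point is that even when this swap step is correct, non-ergodic constant-temperature integrators between swaps break convergence to the Boltzmann ensemble.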

  7. Ozone Measurements Monitoring Using Data-Based Approach

    KAUST Repository

    Harrou, Fouzi; Kadri, Farid; Khadraoui, Sofiane; Sun, Ying

    2016-01-01

    The complexity of ozone (O3) formation mechanisms in the troposphere makes fast and accurate modeling of ozone very challenging. In the absence of a process model, principal component analysis (PCA) has been extensively used as a data-based monitoring technique for highly correlated process variables; however, conventional PCA-based detection indices often fail to detect small or moderate anomalies. In this work, we propose an innovative method for detecting small anomalies in highly correlated multivariate data. The developed method combines the multivariate exponentially weighted moving average (MEWMA) monitoring scheme with PCA modelling in order to enhance anomaly detection performance. Such a choice is mainly motivated by the greater ability of the MEWMA monitoring scheme to detect small changes in the process mean. The proposed PCA-based MEWMA monitoring scheme is successfully applied to ozone measurements collected from the Upper Normandy region, France, via the network of air quality monitoring stations. The detection results of the proposed method are compared to those declared by the Air Normand air monitoring association.
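
A minimal sketch of the PCA-based MEWMA idea follows, assuming an eigen-decomposition PCA and the asymptotic MEWMA covariance; the function name, the smoothing constant lam=0.2, and the synthetic data are illustrative, not the authors' implementation:

```python
import numpy as np

def pca_mewma(X_train, X_new, n_pc=2, lam=0.2):
    """PCA-based MEWMA monitoring statistic (sketch).

    Fit PCA on in-control data, project new samples onto the retained
    principal components, then smooth the scores with a multivariate EWMA:
    z_t = lam*s_t + (1-lam)*z_{t-1},  T2_t = z_t' Sigma_z^{-1} z_t,
    using the asymptotic MEWMA covariance Sigma_z = lam/(2-lam) * Sigma."""
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train - mu, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1][:n_pc]
    P, var = evecs[:, order], evals[order]
    inv_sz = np.linalg.inv((lam / (2.0 - lam)) * np.diag(var))
    z, t2 = np.zeros(n_pc), []
    for x in X_new:
        z = lam * ((x - mu) @ P) + (1.0 - lam) * z
        t2.append(float(z @ inv_sz @ z))
    return np.array(t2)

rng = np.random.default_rng(0)
train = rng.normal(size=(500, 4)); train[:, 0] *= 3.0          # one dominant direction
incontrol = rng.normal(size=(50, 4)); incontrol[:, 0] *= 3.0
shifted = rng.normal(size=(50, 4)); shifted[:, 0] = 3.0 * shifted[:, 0] + 6.0
t2_ok = pca_mewma(train, incontrol)        # in-control statistic
t2_shift = pca_mewma(train, shifted)       # sustained mean shift along PC1
```

The smoothing is what gives MEWMA its sensitivity to small sustained shifts: the statistic accumulates the shift over successive samples instead of testing each sample in isolation.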

  9. Integrated Deterministic-Probabilistic Safety Assessment Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Kudinov, P.; Vorobyev, Y.; Sanchez-Perea, M.; Queral, C.; Jimenez Varas, G.; Rebollo, M. J.; Mena, L.; Gomez-Magin, J.

    2014-02-01

    IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) is a family of methods which use tightly coupled probabilistic and deterministic approaches to address their respective sources of uncertainties, enabling risk-informed decision making in a consistent manner. The starting point of the IDPSA framework is that safety justification must be based on the coupling of deterministic (consequences) and probabilistic (frequency) considerations to address the mutual interactions between stochastic disturbances (e.g. equipment failures, human actions, stochastic physical phenomena) and the deterministic response of the plant (i.e. transients). This paper gives a general overview of some IDPSA methods as well as some possible applications to PWR safety analyses. (Author)

  10. Probabilistic Solar Wind Forecasting Using Large Ensembles of Near-Sun Conditions With a Simple One-Dimensional "Upwind" Scheme.

    Science.gov (United States)

    Owens, Mathew J; Riley, Pete

    2017-11-01

    Long lead-time space-weather forecasting requires accurate prediction of the near-Earth solar wind. The current state of the art uses a coronal model to extrapolate the observed photospheric magnetic field to the upper corona, where it is related to solar wind speed through empirical relations. These near-Sun solar wind and magnetic field conditions provide the inner boundary condition to three-dimensional numerical magnetohydrodynamic (MHD) models of the heliosphere out to 1 AU. This physics-based approach can capture dynamic processes within the solar wind, which affect the resulting conditions in near-Earth space. However, this deterministic approach lacks a quantification of forecast uncertainty. Here we describe a complementary method to exploit the near-Sun solar wind information produced by coronal models and provide a quantitative estimate of forecast uncertainty. By sampling the near-Sun solar wind speed at a range of latitudes about the sub-Earth point, we produce a large ensemble (N = 576) of time series at the base of the Sun-Earth line. Propagating these conditions to Earth by a three-dimensional MHD model would be computationally prohibitive; thus, a computationally efficient one-dimensional "upwind" scheme is used. The variance in the resulting near-Earth solar wind speed ensemble is shown to provide an accurate measure of the forecast uncertainty. Applying this technique over 1996-2016, the upwind ensemble is found to provide a more "actionable" forecast than a single deterministic forecast; potential economic value is increased for all operational scenarios, but particularly when false alarms are important (i.e., where the cost of taking mitigating action is relatively large).
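
The advective core of such a one-dimensional "upwind" mapping can be sketched as below; the published scheme includes additional acceleration corrections, so this shows only the Burgers-type transport step, with illustrative grid parameters:

```python
import numpy as np

R_SUN_KM = 6.96e5   # solar radius in km

def upwind_map(v0, dt=3600.0, r_in=30.0, r_out=215.0, dr=1.0):
    """March an inner-boundary solar wind speed series v0(t) [km/s] from
    r_in to r_out (in solar radii), solving dv/dt + v dv/dr = 0 with a
    first-order upwind difference in t (all flow is outward, v > 0)."""
    v = np.asarray(v0, dtype=float).copy()
    for _ in range(int(round((r_out - r_in) / dr))):
        dvdt = (v - np.roll(v, 1)) / dt       # backward (upwind) time difference
        dvdt[0] = 0.0                         # hold the first sample fixed
        v = v - (dr * R_SUN_KM / v) * dvdt    # stable: dr*R_SUN_KM/(v*dt) < 1 here
    return v

v_const = upwind_map(np.full(200, 400.0))                     # constant inflow
v_step = upwind_map(350.0 + 300.0 * (np.arange(200) > 100))   # slow-then-fast inflow
```

Because each radial step is a convex combination of neighbouring samples under the stability condition, a constant inflow is preserved exactly and structured inflow stays within its original bounds; this cheapness is what makes a 576-member ensemble tractable where 3-D MHD would not be.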

  12. Assessing uncertainties in flood forecasts for decision making: prototype of an operational flood management system integrating ensemble predictions

    Directory of Open Access Journals (Sweden)

    J. Dietrich

    2009-08-01

    Full Text Available Ensemble forecasts aim at framing the uncertainties of the potential future development of the hydro-meteorological situation. A probabilistic evaluation can be used to communicate forecast uncertainty to decision makers. Here an operational system for ensemble-based flood forecasting is presented, which combines forecasts from the European COSMO-LEPS, SRNWP-PEPS and COSMO-DE prediction systems. A multi-model lagged average super-ensemble is generated by recombining members from different runs of these meteorological forecast systems. A subset of the super-ensemble is selected based on a priori model weights, which are obtained from ensemble calibration. Flood forecasts are simulated by the conceptual rainfall-runoff model ArcEGMO. Parameter uncertainty of the model is represented by a parameter ensemble, which is generated a priori from a comprehensive uncertainty analysis during model calibration. The use of a computationally efficient hydrological model within a flood management system allows us to compute the hydro-meteorological model chain for all members of the sub-ensemble. The model chain is not re-computed before new ensemble forecasts are available, but the probabilistic assessment of the output is updated when new information from deterministic short-range forecasts or from assimilation of measured data becomes available. For hydraulic modelling, with the desired result of a probabilistic inundation map with high spatial resolution, a replacement model can help to overcome computational limitations. A prototype of the developed framework has been applied to a case study in the Mulde river basin. However, these techniques, in particular the probabilistic assessment and the derivation of decision rules, are still in their infancy. Further research is necessary and promising.
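
A weighted super-ensemble can be turned into a decision-relevant exceedance probability in a few lines; the function and values below are illustrative, not the system's code:

```python
import numpy as np

def exceedance_probability(members, weights, threshold):
    """Probability that forecast discharge exceeds a flood threshold, from
    super-ensemble members and a priori model weights (normalized here)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return float(np.sum(w * (np.asarray(members, dtype=float) > threshold)))

# Four members; the third comes from a model given double weight (illustrative):
p_flood = exceedance_probability([120.0, 95.0, 150.0, 80.0], [1, 1, 2, 1],
                                 threshold=100.0)
```

Decision rules of the kind discussed in the abstract would then compare such probabilities against warning thresholds rather than acting on a single deterministic trace.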

  13. Multimodel hydrological ensemble forecasts for the Baskatong catchment in Canada using the TIGGE database.

    Science.gov (United States)

    Tito Arandia Martinez, Fabian

    2014-05-01

    Adequate uncertainty assessment is an important issue in hydrological modelling. For hydropower producers, it is essential to obtain ensemble forecasts which truly grasp the uncertainty linked to upcoming streamflows. If properly assessed, this uncertainty can lead to optimal reservoir management and energy production (e.g., [1]). The meteorological inputs to the hydrological model account for an important part of the total uncertainty in streamflow forecasting. Since the creation of the THORPEX initiative and the TIGGE database, meteorological ensemble forecasts from nine agencies throughout the world have been made available. This allows for hydrological ensemble forecasts based on multiple meteorological ensemble forecasts. Consequently, both the uncertainty linked to the architecture of the meteorological model and the uncertainty linked to the initial condition of the atmosphere can be accounted for. The main objective of this work is to show that a weighted combination of meteorological ensemble forecasts based on different atmospheric models can lead to improved hydrological ensemble forecasts, for horizons from one to ten days. This experiment is performed for the Baskatong watershed, a head subcatchment of the Gatineau watershed in the province of Quebec, in Canada. The Baskatong watershed is of great importance for hydro-power production, as it comprises the main reservoir for the Gatineau watershed, on which there are six hydropower plants managed by Hydro-Québec. Since the 1970s, they have been using pseudo-ensemble forecasts based on deterministic meteorological forecasts to which variability derived from past forecasting errors is added. We use a combination of meteorological ensemble forecasts from different models (precipitation and temperature) as the main inputs for the hydrological model HSAMI ([2]). 
The meteorological ensembles from eight of the nine agencies available through TIGGE are weighted according to their individual performance and

  14. An ensemble-based dynamic Bayesian averaging approach for discharge simulations using multiple global precipitation products and hydrological models

    Science.gov (United States)

    Qi, Wei; Liu, Junguo; Yang, Hong; Sweetapple, Chris

    2018-03-01

    Global precipitation products are very important datasets in flow simulations, especially in poorly gauged regions. Uncertainties resulting from precipitation products, hydrological models and their combinations vary with time and data magnitude, and undermine their application to flow simulations. However, previous studies have not quantified these uncertainties individually and explicitly. This study developed an ensemble-based dynamic Bayesian averaging approach (e-Bay) for deterministic discharge simulations using multiple global precipitation products and hydrological models. In this approach, the joint probability of precipitation products and hydrological models being correct is quantified based on uncertainties in maximum and mean estimation, the posterior probability is quantified as a function of the magnitude and timing of discharges, and the law of total probability is applied to calculate expected discharges. Six global fine-resolution precipitation products and two hydrological models of different complexities are included in an illustrative application. e-Bay can effectively quantify uncertainties and therefore generates better deterministic discharges than traditional approaches (weighted average methods with equal and varying weights and the maximum likelihood approach). The mean Nash-Sutcliffe Efficiency values of e-Bay are up to 0.97 and 0.85 in the training and validation periods respectively, at least 0.06 and 0.13 higher than traditional approaches. In addition, with increased training data, the assessment criteria values of e-Bay fluctuate less than those of traditional approaches and its performance improves further. The proposed e-Bay approach bridges the gap between global precipitation products and their pragmatic application to discharge simulations, and is beneficial to water resources management in ungauged or poorly gauged regions across the world.
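
The law-of-total-probability averaging step and the Nash-Sutcliffe Efficiency used as an assessment criterion can be sketched as follows (weights and data are illustrative, and the dynamic weight estimation of e-Bay is not reproduced):

```python
import numpy as np

def expected_discharge(simulations, weights):
    """Law-of-total-probability step of a Bayesian-averaging scheme:
    expected discharge as the weighted mean over all (precipitation
    product, hydrological model) combinations."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return w @ np.asarray(simulations, dtype=float)

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency: 1 is perfect; 0 matches the mean of obs."""
    sim, obs = np.asarray(sim, dtype=float), np.asarray(obs, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([10.0, 30.0, 20.0, 40.0])
sims = np.array([[12.0, 28.0, 18.0, 43.0],    # combination 1
                 [6.0, 35.0, 25.0, 33.0]])    # combination 2
q_exp = expected_discharge(sims, weights=[0.7, 0.3])
```

In e-Bay the weights would additionally depend on discharge magnitude and timing instead of being fixed, which is what the "dynamic" in the name refers to.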

  15. An Integrated Ensemble-Based Operational Framework to Predict Urban Flooding: A Case Study of Hurricane Sandy in the Passaic and Hackensack River Basins

    Science.gov (United States)

    Saleh, F.; Ramaswamy, V.; Georgas, N.; Blumberg, A. F.; Wang, Y.

    2016-12-01

    Advances in computational resources and modeling techniques are opening the path to effectively integrate existing complex models. In the context of flood prediction, recent extreme events have demonstrated the importance of integrating components of the hydrosystem to better represent the interactions amongst different physical processes and phenomena. As such, there is a pressing need to develop holistic and cross-disciplinary modeling frameworks that effectively integrate existing models and better represent the operative dynamics. This work presents a novel Hydrologic-Hydraulic-Hydrodynamic Ensemble (H3E) flood prediction framework that operationally integrates existing predictive models representing coastal (New York Harbor Observing and Prediction System, NYHOPS), hydrologic (US Army Corps of Engineers Hydrologic Modeling System, HEC-HMS) and hydraulic (2-dimensional River Analysis System, HEC-RAS) components. The state-of-the-art framework is forced with 125 ensemble meteorological inputs from numerical weather prediction models including the Global Ensemble Forecast System, the European Centre for Medium-Range Weather Forecasts (ECMWF), the Canadian Meteorological Centre (CMC), the Short Range Ensemble Forecast (SREF) and the North American Mesoscale Forecast System (NAM). The framework produces, within a 96-hour forecast horizon, on-the-fly Google Earth flood maps that provide critical information for decision makers and emergency preparedness managers. The utility of the framework was demonstrated by retrospectively forecasting an extreme flood event, Hurricane Sandy, in the Passaic and Hackensack watersheds (New Jersey, USA). Hurricane Sandy caused significant damage to a number of critical facilities in this area including the New Jersey Transit's main storage and maintenance facility. The results of this work demonstrate that ensemble-based frameworks provide improved flood predictions and useful information about associated uncertainties, thus

  16. A molecular dynamics algorithm for simulation of field theories in the canonical ensemble

    International Nuclear Information System (INIS)

    Kogut, J.B.; Sinclair, D.K.

    1986-01-01

    We add a single scalar degree of freedom ("demon") to the microcanonical ensemble which converts its molecular dynamics into a simulation method for the canonical ensemble (euclidean path integral) of the underlying field theory. This generalization of the microcanonical molecular dynamics algorithm simulates the field theory at fixed coupling with a completely deterministic procedure. We discuss the finite size effects of the method, the equipartition theorem and ergodicity. The method is applied to the planar model in two dimensions and SU(3) lattice gauge theory with four species of light, dynamical quarks in four dimensions. The method is much less sensitive to its discrete time step than conventional Langevin equation simulations of the canonical ensemble. The method is a straightforward generalization of a procedure introduced by S. Nosé for molecular physics. (orig.)
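
The "demon" idea can be illustrated with a Nosé-Hoover-style thermostatted harmonic oscillator; this sketch shows the completely deterministic, time-reversible character of the extended dynamics, not the paper's lattice gauge theory code, and all parameter values are illustrative:

```python
import numpy as np

def nh_step(x, p, xi, dt, omega=1.0, T=1.0, Q=1.0):
    """One symmetric (palindromic) step of a Nose-Hoover-thermostatted
    harmonic oscillator. xi is the extra 'demon' degree of freedom that
    couples the deterministic dynamics to a bath at temperature T
    (k_B = 1, unit mass)."""
    xi += 0.5 * dt * (p * p - T) / Q        # thermostat kick
    p *= np.exp(-0.5 * dt * xi)             # thermostat scaling
    p += 0.5 * dt * (-omega ** 2 * x)       # force kick
    x += dt * p                             # drift
    p += 0.5 * dt * (-omega ** 2 * x)       # force kick
    p *= np.exp(-0.5 * dt * xi)             # thermostat scaling
    xi += 0.5 * dt * (p * p - T) / Q        # thermostat kick
    return x, p, xi

# The composition is deterministic and exactly time-reversible: integrate
# forward, negate p and xi, integrate again, and the start is recovered.
x, p, xi = 1.0, 0.3, 0.0
for _ in range(500):
    x, p, xi = nh_step(x, p, xi, dt=0.01)
p, xi = -p, -xi
for _ in range(500):
    x, p, xi = nh_step(x, p, xi, dt=0.01)
```

Note that for a single harmonic oscillator this thermostat is known not to be ergodic, which is exactly the kind of subtlety the ergodicity discussion in the abstract concerns.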

  17. Surface drift prediction in the Adriatic Sea using hyper-ensemble statistics on atmospheric, ocean and wave models: Uncertainties and probability distribution areas

    Science.gov (United States)

    Rixen, M.; Ferreira-Coelho, E.; Signell, R.

    2008-01-01

    Despite numerous and regular improvements in underlying models, surface drift prediction in the ocean remains a challenging task because of our yet limited understanding of all processes involved. Hence, deterministic approaches to the problem are often limited by empirical assumptions on underlying physics. Multi-model hyper-ensemble forecasts, which exploit the power of an optimal local combination of available information including ocean, atmospheric and wave models, may show superior forecasting skills when compared to individual models because they allow for local correction and/or bias removal. In this work, we explore in greater detail the potential and limitations of the hyper-ensemble method in the Adriatic Sea, using a comprehensive surface drifter database. The performance of the hyper-ensembles and the individual models are discussed by analyzing associated uncertainties and probability distribution maps. Results suggest that the stochastic method may reduce position errors significantly for 12 to 72 h forecasts and hence compete with pure deterministic approaches. © 2007 NATO Undersea Research Centre (NURC).
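
The local optimal combination underlying a hyper-ensemble can be sketched as a least-squares fit of per-model weights plus a bias to past observations; the data and names below are synthetic and illustrative:

```python
import numpy as np

def hyper_ensemble_weights(model_preds, observations):
    """Fit a local linear combination (bias + one weight per model) of
    several model predictions to past observations by least squares --
    the core of the hyper-ensemble idea."""
    A = np.column_stack([np.ones(len(observations)), *model_preds])
    w, *_ = np.linalg.lstsq(A, observations, rcond=None)
    return w

def hyper_ensemble_predict(w, model_preds):
    A = np.column_stack([np.ones(len(model_preds[0])), *model_preds])
    return A @ w

# Synthetic training set where truth is a biased mix of two imperfect models:
rng = np.random.default_rng(1)
m1 = rng.normal(size=50)
m2 = rng.normal(size=50)
obs = 0.3 + 0.7 * m1 + 0.2 * m2
w = hyper_ensemble_weights([m1, m2], obs)
pred = hyper_ensemble_predict(w, [m1, m2])
```

In practice the fit is done locally in space and time over a recent training window, so the weights themselves provide the "local correction and/or bias removal" the abstract mentions.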

  18. On Ensemble Nonlinear Kalman Filtering with Symmetric Analysis Ensembles

    KAUST Repository

    Luo, Xiaodong; Hoteit, Ibrahim; Moroz, Irene M.

    2010-01-01

    However, by adopting the Monte Carlo method, the EnSRF also incurs certain sampling errors. One way to alleviate this problem is to introduce certain symmetry to the ensembles, which can reduce the sampling errors and spurious modes in evaluation of the means and covariances of the ensembles [7]. In this contribution, we present two methods to produce symmetric ensembles. One is based on the unscented transform [8, 9], which leads to the unscented Kalman filter (UKF) [8, 9] and its variant, the ensemble unscented Kalman filter (EnUKF) [7]. The other is based on Stirling’s interpolation formula (SIF), which results in the divided difference filter (DDF) [10]. Here we propose a simplified divided difference filter (sDDF) in the context of ensemble filtering. The similarity and difference between the sDDF and the EnUKF will be discussed. Numerical experiments will also be conducted to investigate the performance of the sDDF and the EnUKF, and compare them to a well‐established EnSRF, the ensemble transform Kalman filter (ETKF) [2].
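
A symmetric ensemble of the unscented-transform kind can be generated so that its sample mean and covariance match the prescribed moments exactly; kappa and the test values below are illustrative:

```python
import numpy as np

def symmetric_sigma_points(mean, cov, kappa=0.0):
    """Generate a symmetric (unscented-transform) ensemble: 2n+1 sigma
    points whose weighted sample mean and covariance reproduce `mean`
    and `cov` exactly, removing the sampling error of a random draw."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
    pts, w = [mean], [kappa / (n + kappa)]
    for i in range(n):
        pts.append(mean + L[:, i])
        pts.append(mean - L[:, i])
        w.extend([0.5 / (n + kappa), 0.5 / (n + kappa)])
    return np.array(pts), np.array(w)

mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.3], [0.3, 1.0]])
pts, w = symmetric_sigma_points(mean, cov, kappa=1.0)
rec_mean = w @ pts
rec_cov = (pts - rec_mean).T @ (w[:, None] * (pts - rec_mean))
```

The pairwise plus/minus construction is the symmetry referred to in the abstract: odd sample moments vanish identically, which suppresses spurious modes in the estimated mean and covariance.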

  20. An adaptive Gaussian process-based iterative ensemble smoother for data assimilation

    Science.gov (United States)

    Ju, Lei; Zhang, Jiangjiang; Meng, Long; Wu, Laosheng; Zeng, Lingzao

    2018-05-01

    Accurate characterization of subsurface hydraulic conductivity is vital for modeling of subsurface flow and transport. The iterative ensemble smoother (IES) has been proposed to estimate the heterogeneous parameter field. As a Monte Carlo-based method, IES requires a relatively large ensemble size to guarantee its performance. To improve the computational efficiency, we propose an adaptive Gaussian process (GP)-based iterative ensemble smoother (GPIES) in this study. At each iteration, the GP surrogate is adaptively refined by adding a few new base points chosen from the updated parameter realizations. Then the sensitivity information between model parameters and measurements is calculated from a large number of realizations generated by the GP surrogate with virtually no computational cost. Since the original model evaluations are only required for base points, whose number is much smaller than the ensemble size, the computational cost is significantly reduced. The applicability of GPIES in estimating heterogeneous conductivity is evaluated by the saturated and unsaturated flow problems, respectively. Without sacrificing estimation accuracy, GPIES achieves about an order of magnitude of speed-up compared with the standard IES. Although subsurface flow problems are considered in this study, the proposed method can be equally applied to other hydrological models.
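
A single Kalman-type update of an iterative ensemble smoother can be sketched as below; the GP-surrogate refinement of GPIES is not reproduced here, only the ensemble update that the surrogate accelerates, on a toy linear problem with assumed names:

```python
import numpy as np

def ies_update(params, preds, obs, obs_err_std, rng, step=1.0):
    """One Kalman-type update of an (iterative) ensemble smoother.
    params: (Ne, Np) parameter ensemble; preds: (Ne, Nd) corresponding
    model outputs; obs: (Nd,) data. Cross-covariances come from the
    ensemble; in GPIES the preds would be supplied cheaply by the GP."""
    Ne = params.shape[0]
    dp = params - params.mean(axis=0)
    dd = preds - preds.mean(axis=0)
    C_pd = dp.T @ dd / (Ne - 1)                          # parameter-data covariance
    C_dd = dd.T @ dd / (Ne - 1) + obs_err_std**2 * np.eye(len(obs))
    K = C_pd @ np.linalg.inv(C_dd)                       # Kalman gain
    innov = obs + obs_err_std * rng.normal(size=preds.shape) - preds
    return params + step * innov @ K.T

# Linear toy problem d = G p: the ensemble mean converges toward p_true.
rng = np.random.default_rng(0)
G = np.array([[1.0, 0.5], [0.2, 1.5], [1.0, -1.0]])
p_true = np.array([0.8, -0.4])
obs = G @ p_true
params = rng.normal(size=(100, 2))                       # prior ensemble
for _ in range(3):
    preds = params @ G.T
    params = ies_update(params, preds, obs, obs_err_std=0.05, rng=rng)
est = params.mean(axis=0)
```

The expensive part of the real problem is producing `preds` from a flow model; replacing that step with a GP surrogate refined only at a few base points is what yields the reported order-of-magnitude speed-up.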

  1. Space-Based Diagnosis of Surface Ozone Sensitivity to Anthropogenic Emissions

    Science.gov (United States)

    Martin, Randall V.; Fiore, Arlene M.; VanDonkelaar, Aaron

    2004-01-01

    We present a novel capability in satellite remote sensing with implications for air pollution control strategy. We show that the ratio of formaldehyde columns to tropospheric nitrogen dioxide columns is an indicator of the relative sensitivity of surface ozone to emissions of nitrogen oxides (NO(x) = NO + NO2) and volatile organic compounds (VOCs). The diagnosis from these space-based observations is highly consistent with current understanding of surface ozone chemistry based on in situ observations. The satellite-derived ratios indicate that surface ozone is more sensitive to emissions of NO(x) than of VOCs throughout most continental regions of the Northern Hemisphere during summer. Exceptions include Los Angeles and industrial areas of Germany. A seasonal transition occurs in the fall when surface ozone becomes less sensitive to NO(x) and more sensitive to VOCs.
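
The column-ratio diagnostic reduces to a one-line classifier; the transition threshold below is an assumption for illustration, not the paper's calibrated value:

```python
def ozone_regime(hcho_column, no2_column, threshold=1.0):
    """Classify the surface ozone production regime from satellite columns.
    ratio = HCHO / tropospheric NO2; above the threshold, ozone is taken
    as NOx-sensitive, below it as VOC-sensitive. The cutoff of 1.0 is an
    illustrative assumption, not the paper's calibrated transition value."""
    ratio = hcho_column / no2_column
    return "NOx-sensitive" if ratio > threshold else "VOC-sensitive"

regime_rural = ozone_regime(hcho_column=8e15, no2_column=2e15)   # molecules/cm2
regime_urban = ozone_regime(hcho_column=1e15, no2_column=5e15)
```

High-NO2, low-HCHO conditions (dense urban or industrial areas) fall on the VOC-sensitive side, matching the Los Angeles and Germany exceptions noted above.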

  2. Siting criteria based on the prevention of deterministic effects from plutonium inhalation exposures

    International Nuclear Information System (INIS)

    Sorensen, S.A.; Low, J.O.

    1998-01-01

    Siting criteria are established by regulatory authorities to evaluate potential accident scenarios associated with proposed nuclear facilities. The 0.25 Sv (25 rem) siting criterion adopted in the United States has historically been based on the prevention of deterministic effects from acute, whole-body exposures. The Department of Energy has extended the applicability of this criterion to radionuclides that deliver chronic, organ-specific irradiation through the specification of a 0.25 Sv (25 rem) committed effective dose equivalent siting criterion. A methodology is developed to determine siting criteria based on the prevention of deterministic effects from inhalation intakes of radionuclides which deliver chronic, organ-specific irradiation. Revised siting criteria, expressed in terms of committed effective dose equivalent, are proposed for nuclear facilities that handle primarily plutonium compounds. The analysis determined that a siting criterion of 1.2 Sv (120 rem) committed effective dose equivalent for inhalation exposures to weapons-grade plutonium meets the historical goal of preventing deterministic effects during a facility accident scenario. The criterion also meets the Nuclear Regulatory Commission and Department of Energy Nuclear Safety Goals provided that the frequency of the accident is sufficiently low.

  3. Wave ensemble forecast in the Western Mediterranean Sea, application to an early warning system.

    Science.gov (United States)

    Pallares, Elena; Hernandez, Hector; Moré, Jordi; Espino, Manuel; Sairouni, Abdel

    2015-04-01

    The Western Mediterranean Sea is a highly heterogeneous and variable area, as is reflected on the wind field, the current field, and the waves, mainly in the first kilometers offshore. As a result of this variability, the wave forecast in these regions is quite complicated to perform, usually with some accuracy problems during energetic storm events. Moreover, is in these areas where most of the economic activities take part, including fisheries, sailing, tourism, coastal management and offshore renewal energy platforms. In order to introduce an indicator of the probability of occurrence of the different sea states and give more detailed information of the forecast to the end users, an ensemble wave forecast system is considered. The ensemble prediction systems have already been used in the last decades for the meteorological forecast; to deal with the uncertainties of the initial conditions and the different parametrizations used in the models, which may introduce some errors in the forecast, a bunch of different perturbed meteorological simulations are considered as possible future scenarios and compared with the deterministic forecast. In the present work, the SWAN wave model (v41.01) has been implemented for the Western Mediterranean sea, forced with wind fields produced by the deterministic Global Forecast System (GFS) and Global Ensemble Forecast System (GEFS). The wind fields includes a deterministic forecast (also named control), between 11 and 21 ensemble members, and some intelligent member obtained from the ensemble, as the mean of all the members. Four buoys located in the study area, moored in coastal waters, have been used to validate the results. The outputs include all the time series, with a forecast horizon of 8 days and represented in spaghetti diagrams, the spread of the system and the probability at different thresholds. The main goal of this exercise is to be able to determine the degree of the uncertainty of the wave forecast, meaningful

  4. Spatio-temporal modelling of atmospheric pollution based on observations provided by an air quality monitoring network at a regional scale

    International Nuclear Information System (INIS)

    Coman, A.

    2008-01-01

    This study is devoted to the spatio-temporal modelling of air pollution at a regional scale using a set of statistical methods in order to treat the measurements of pollutant concentrations (NO2, O3) provided by an air quality monitoring network (AIRPARIF). The main objective is the improvement of pollutant field mapping using either interpolation methods based on the spatial or spatio-temporal structure of the data (spatial or spatio-temporal kriging) or algorithms taking into account the observations in order to correct the concentrations simulated by a deterministic model (Ensemble Kalman Filter). The results show that nitrogen dioxide mapping based only on spatial interpolation (kriging) gives the best results, given that the spatial distribution of the monitoring sites is adequate. For ozone mapping, it is sequential data assimilation that leads to a better reconstruction of the plume's form and position for the analyzed cases. Complementary to pollutant mapping, another objective was to perform a local prediction of ozone concentrations over a 24-hour horizon; this task was performed using Artificial Neural Networks. The performance indices obtained using two types of neural architectures indicate fair accuracy, especially for the first 8 hours of the prediction horizon. (author)
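
Of the two interpolation families mentioned, ordinary kriging is straightforward to sketch; the exponential covariance model, its parameters, and the station values below are assumptions for illustration:

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_target, sill=1.0, corr_len=20.0):
    """Ordinary kriging with an (assumed) exponential covariance model
    C(h) = sill * exp(-h / corr_len). The Lagrange multiplier row enforces
    unit-sum weights, making the predictor unbiased and exact at stations."""
    def cov(a, b):
        h = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return sill * np.exp(-h / corr_len)
    n = len(z_obs)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(xy_obs, xy_obs)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(xy_obs, xy_target[None, :])[:, 0]
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z_obs)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])     # km, illustrative
o3 = np.array([60.0, 80.0, 70.0])                               # ug/m3, illustrative
z_exact = ordinary_kriging(stations, o3, np.array([0.0, 0.0]))  # at a station
z_mid = ordinary_kriging(stations, o3, np.array([3.0, 3.0]))    # between stations
```

Exactness at station locations is the property that makes kriging perform well where the monitoring network is dense, which is the regime the NO2 result above relies on.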

  5. Ensemble control of the Hardhof well field under constraints

    Science.gov (United States)

    Marti, Beatrice; McLaughlin, Dennis; Kinzelbach, Wolfgang; Kaiser, Hans-Peter

    2013-04-01

    Practical control of flow in aquifers has been based on deterministic models, not including stochastic information in the optimization (Bauser et al., 2010 or Marti et al., 2012). Only recently has robust ensemble control of aquatic systems been analyzed, in linear and synthetic problems (Lin, B., 2012). We propose a control under constraints which takes into account the stochastic information contained in an ensemble of realizations of a groundwater flow model with uncertain parameters, boundary and initial conditions. This control is applied to a real-life problem setting (the Hardhof well field in Zurich) and analyzed with regard to efficiency of the control compared to a similar control based on a deterministic model. The Hardhof well field, which lies in the city of Zurich, Switzerland, provides roughly 15% of the town's drinking water demand from the Limmat valley aquifer. Groundwater and river filtrate are withdrawn in four large horizontal wells, each with a capacity of up to 48,000 m3 per day. The well field is threatened by potential pollution from leachate of a nearby landfill, possible accidents on the adjacent rail and road lines, and by diffuse pollution from former industrial sites and sewers located upstream of the well field. A line of recharge wells and basins forms a hydraulic barrier against the potentially contaminated water and increases the capacity of the well field. The amount and distribution of the artificial recharge to 3 infiltration basins and 12 infiltration wells has to be controlled on a daily basis to guarantee the effectiveness of the hydraulic barrier in the highly dynamic flow field. The Hardhof well field is simulated with a 2D real-time groundwater flow model. The model is coupled to a controller, minimizing the inflow of potentially contaminated groundwater to the drinking water wells under various constraints (i.e. 
keeping the groundwater level between given thresholds, guaranteeing production of the drinking water demand

  6. Reinterpretation of ozone data from Base Roi Baudouin

    Science.gov (United States)

    Kelder, H.; Muller, C.

    1994-01-01

    The ozone Dobson measurements obtained in Antarctica at the Belgian 'Base Roi Baudouin' (70 deg 26 min S, 24 deg 19 min E) in 1965 and 1966 were retrieved from the KNMI (Royal Netherlands Meteorological Institute) archives in De Bilt. Despite excellent treatment at the time by the meteorologists in charge at the KNMI (Wisse and Meerburg, 1969), a study of the original observers' notes was made in order to check possible seasonal ozone phenomena. No systematic anomaly in the first analysis was found; meteorological data from the site together with Brewer-Mast ozone soundings concur that the conditions in neither 1965 nor 1966 corresponded to the current ozone-hole situation (Farman et al., 1985). However, the data yield an excellent correlation with stratospheric temperature and show in 1966 a clear November maximum, in contrast to an October value around 344 Dobson units.

  7. Village Building Identification Based on Ensemble Convolutional Neural Networks

    Science.gov (United States)

    Guo, Zhiling; Chen, Qi; Xu, Yongwei; Shibasaki, Ryosuke; Shao, Xiaowei

    2017-01-01

    In this study, we present the Ensemble Convolutional Neural Network (ECNN), an elaborate CNN framework formulated by ensembling state-of-the-art CNN models, to identify village buildings from open high-resolution remote sensing (HRRS) images. First, to optimize and mine the capability of CNN for village mapping and to ensure compatibility with our classification targets, a few state-of-the-art models were carefully optimized and enhanced based on a series of rigorous analyses and evaluations. Second, rather than directly implementing building identification by using these models, we exploited most of their advantages by ensembling their feature extractor parts into a stronger model called ECNN based on the multiscale feature learning method. Finally, the generated ECNN was applied to a pixel-level classification frame to implement object identification. The proposed method can serve as a viable tool for village building identification with high accuracy and efficiency. The experimental results obtained from the test area in Savannakhet province, Laos, prove that the proposed ECNN model significantly outperforms existing methods, improving overall accuracy from 96.64% to 99.26%, and kappa from 0.57 to 0.86. PMID:29084154

  8. Online cross-validation-based ensemble learning.

    Science.gov (United States)

    Benkeser, David; Ju, Cheng; Lendle, Sam; van der Laan, Mark

    2018-01-30

    Online estimators update a current estimate with a new incoming batch of data without having to revisit past data, thereby providing streaming estimates that are scalable to big data. We develop flexible, ensemble-based online estimators of an infinite-dimensional target parameter, such as a regression function, in the setting where data are generated sequentially by a common conditional data distribution given summary measures of the past. This setting encompasses a wide range of time-series models and, as a special case, models for independent and identically distributed data. Our estimator considers a large library of candidate online estimators and uses online cross-validation to identify the algorithm with the best performance. We show that by basing estimates on the cross-validation-selected algorithm, we are asymptotically guaranteed to perform as well as the true, unknown best-performing algorithm. We provide extensions of this approach, including online estimation of the optimal ensemble of candidate online estimators. We illustrate the excellent performance of our methods using simulations and a real data example where we make streaming predictions of infectious disease incidence using data from a large database. Copyright © 2017 John Wiley & Sons, Ltd.
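    The selection scheme described above can be sketched in a few lines: each candidate online learner is scored on every incoming observation before being updated with it (so the loss is out-of-sample), and the current cumulative-loss minimizer is selected. The candidate learners below are hypothetical toy predictors, not those from the paper.

    ```python
    # Hedged sketch of online cross-validation-based selection among
    # candidate online learners (toy candidates, invented data).
    class OnlineMean:
        """Running-mean predictor."""
        def __init__(self):
            self.n, self.total = 0, 0.0
        def predict(self, x):
            return self.total / self.n if self.n else 0.0
        def update(self, x, y):
            self.n += 1
            self.total += y

    class OnlineLast:
        """Persistence predictor: repeats the most recent observation."""
        def __init__(self):
            self.last = 0.0
        def predict(self, x):
            return self.last
        def update(self, x, y):
            self.last = y

    def online_cv_select(stream, learners):
        """Score each candidate on incoming data *before* training on it,
        then pick the current minimizer of cumulative squared error."""
        cum_loss = [0.0] * len(learners)
        selections = []
        for x, y in stream:
            for i, lrn in enumerate(learners):
                cum_loss[i] += (lrn.predict(x) - y) ** 2  # out-of-sample loss
                lrn.update(x, y)                          # then learn from it
            selections.append(min(range(len(learners)),
                                  key=cum_loss.__getitem__))
        return selections

    stream = [(t, float(t)) for t in range(20)]   # drifting series y = t
    picks = online_cv_select(stream, [OnlineMean(), OnlineLast()])
    ```

    On this drifting series the running mean lags badly, so the selector settles on the persistence learner (index 1).
    
    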

  9. Southern Hemisphere Additional Ozonesondes (SHADOZ) Ozone Climatology (2005-2009): Tropospheric and Tropical Tropopause Layer (TTL) Profiles with Comparisons to Omi-based Ozone Products

    Science.gov (United States)

    Thompson, Anne M.; Miller, Sonya K.; Tilmes, Simone; Kollonige, Debra W.; Witte, Jacquelyn C.; Oltmans, Samuel J.; Johnson, Brian J.; Fujiwara, Masatomo; Schmidlin, F. J.; Coetzee, G. J. R.

    2012-01-01

    We present a regional and seasonal climatology of SHADOZ ozone profiles in the troposphere and tropical tropopause layer (TTL) based on measurements taken during the first five years of Aura, 2005-2009, when new stations joined the network at Hanoi, Vietnam; Hilo, Hawaii; Alajuela/Heredia, Costa Rica; Cotonou, Benin. In all, 15 stations operated during that period. A west-to-east progression of decreasing convective influence and increasing pollution leads to distinct tropospheric ozone profiles in three regions: (1) western Pacific/eastern Indian Ocean; (2) equatorial Americas (San Cristobal, Alajuela, Paramaribo); (3) Atlantic and Africa. Comparisons in total ozone column from soundings, the Ozone Monitoring Instrument (OMI, on Aura, 2004-) satellite and ground-based instrumentation are presented. Most stations show better agreement with OMI than they did for EP-TOMS comparisons (1998-2004; Earth Probe Total Ozone Mapping Spectrometer), partly due to a revised above-burst ozone climatology. Possible station biases in the stratospheric segment of the ozone measurement noted in the first 7 years of SHADOZ ozone profiles are re-examined. The high stratospheric bias observed during the TOMS period appears to persist at one station. Comparisons of SHADOZ tropospheric ozone and the daily Trajectory-enhanced Tropospheric Ozone Residual (TTOR) product (based on OMI/MLS) show that the satellite-derived column amount averages 25% low. Correlations between TTOR and the SHADOZ sondes are quite good (typical r2 = 0.5-0.8), however, which may account for why some published residual-based OMI products capture tropospheric interannual variability fairly realistically. On the other hand, no clear explanations emerge for why TTOR-sonde discrepancies vary over a wide range at most SHADOZ sites.

  10. Assimilation of IASI partial tropospheric columns with an Ensemble Kalman Filter over Europe

    Directory of Open Access Journals (Sweden)

    A. Coman

    2012-03-01

    Partial lower tropospheric ozone columns provided by the IASI (Infrared Atmospheric Sounding Interferometer) instrument have been assimilated into a chemistry-transport model at continental scale (CHIMERE) using an Ensemble Square Root Kalman Filter (EnSRF). Analyses are made for the month of July 2007 over the European domain. Launched in 2006 aboard the MetOp-A satellite, IASI shows high sensitivity for ozone in the free troposphere and low sensitivity at the ground; therefore it is important to evaluate whether assimilation of these observations can improve free tropospheric ozone, and possibly surface ozone. The analyses are validated against independent ozone observations from sondes, MOZAIC aircraft and ground-based stations (AIRBASE, the European air quality database) and compared with the free run of CHIMERE. These comparisons show a decrease in error of 6 parts per billion (ppb) in the free troposphere over the Frankfurt area, and also a reduction of the root mean square error (respectively bias) at the surface of 19% (respectively 33%) for more than 90% of existing ground stations. This provides evidence of the potential of data assimilation of tropospheric IASI columns to better describe the tropospheric ozone distribution, including surface ozone, despite the lower sensitivity.

    The changes in concentration resulting from the observational constraints were quantified, and several geophysical explanations for the findings of this study were drawn. The corrections were most pronounced over Italy and the Mediterranean region, where we noted an average reduction of 8–9 ppb in the free troposphere with respect to the free run, and still a reduction of 5.5 ppb at the ground, likely due to a longer residence time of air masses in this area, associated with the general circulation pattern (i.e. dominant westerly circulation) and with persistent anticyclonic conditions over the Mediterranean basin. This is an important geophysical result, since the
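    The kind of ensemble square-root analysis step used in such systems can be sketched for a single scalar observation in the serial (Whitaker-Hamill) form; the state dimension, observation operator and numbers below are illustrative, not the CHIMERE/IASI configuration.

    ```python
    import numpy as np

    # Minimal serial EnSRF update for one scalar observation
    # (a sketch under illustrative numbers, not the paper's system).
    def ensrf_update(X, H, y, r):
        """X: (n_state, n_ens) ensemble; H: (n_state,) observation operator;
        y: scalar observation with error variance r."""
        n_ens = X.shape[1]
        xm = X.mean(axis=1)
        Xp = X - xm[:, None]                    # ensemble perturbations
        hx = H @ X                              # ensemble in observation space
        hxm = hx.mean()
        hxp = hx - hxm
        phh = hxp @ hxp / (n_ens - 1)           # HPH^T (scalar)
        pxh = Xp @ hxp / (n_ens - 1)            # PH^T  (vector)
        k = pxh / (phh + r)                     # Kalman gain
        alpha = 1.0 / (1.0 + np.sqrt(r / (phh + r)))  # square-root factor
        xm_a = xm + k * (y - hxm)               # analysis mean
        Xp_a = Xp - alpha * np.outer(k, hxp)    # analysis perturbations
        return xm_a[:, None] + Xp_a

    rng = np.random.default_rng(0)
    X = 50.0 + 10.0 * rng.standard_normal((3, 100))  # e.g. ozone (ppb) ensemble
    H = np.array([1.0, 0.0, 0.0])                    # observe first component
    Xa = ensrf_update(X, H, y=40.0, r=4.0)
    ```

    The analysis mean of the observed component is pulled toward the observation and the ensemble spread shrinks, consistent with the Kalman update.
    
    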

  11. Constructing Support Vector Machine Ensembles for Cancer Classification Based on Proteomic Profiling

    Institute of Scientific and Technical Information of China (English)

    Yong Mao; Xiao-Bo Zhou; Dao-Ying Pi; You-Xian Sun

    2005-01-01

    In this study, we present a constructive algorithm for training cooperative support vector machine ensembles (CSVMEs). CSVME combines ensemble architecture design with cooperative training for individual SVMs in ensembles. Unlike most previous studies on training ensembles, CSVME puts emphasis on both accuracy and collaboration among individual SVMs in an ensemble. A group of SVMs selected on the basis of recursive classifier elimination is used in CSVME, and the number of the individual SVMs selected to construct CSVME is determined by 10-fold cross-validation. This kind of SVME has been tested on two ovarian cancer datasets previously obtained by proteomic mass spectrometry. By combining several individual SVMs, the proposed method achieves better performance than the SVME of all base SVMs.

  12. Comprehensive Study on Lexicon-based Ensemble Classification Sentiment Analysis

    Directory of Open Access Journals (Sweden)

    Łukasz Augustyniak

    2015-12-01

    We propose a novel method for counting sentiment orientation that outperforms supervised learning approaches in time and memory complexity and is not statistically significantly different from them in accuracy. Our method consists of a novel approach to generating unigram, bigram and trigram lexicons. The proposed method, called frequentiment, is based on calculating the frequency of features (words) in the document and averaging their impact on the sentiment score, as opposed to documents that do not contain these features. Afterwards, we use ensemble classification to improve the overall accuracy of the method. Importantly, the frequentiment-based lexicons with sentiment threshold selection outperform other popular lexicons and some supervised learners, while being 3–5 times faster than the supervised approach. We compare 37 methods (lexicons, ensembles with lexicons' predictions as input, and supervised learners) applied to 10 Amazon review data sets and provide the first statistical comparison of sentiment annotation methods that include ensemble approaches. It is one of the most comprehensive comparisons of domain sentiment analysis in the literature.
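    The lexicon-generation idea can be sketched roughly as follows. The scoring rule here is a hedged reconstruction from the abstract (a word's score is taken as the mean label of documents containing it minus the mean label of documents without it), not the authors' exact frequentiment formula, and the toy reviews are invented.

    ```python
    # Hedged sketch of a frequentiment-style unigram lexicon
    # (simplified reconstruction; toy data).
    def build_lexicon(docs, labels):
        vocab = {w for d in docs for w in d.split()}
        lex = {}
        for w in vocab:
            with_w = [y for d, y in zip(docs, labels) if w in d.split()]
            without = [y for d, y in zip(docs, labels) if w not in d.split()]
            if with_w and without:
                # impact of the word: documents with it vs. without it
                lex[w] = (sum(with_w) / len(with_w)
                          - sum(without) / len(without))
        return lex

    def score(doc, lex):
        """Average lexicon score of the words present in the document."""
        vals = [lex[w] for w in doc.split() if w in lex]
        return sum(vals) / len(vals) if vals else 0.0

    docs = ["good great phone", "bad awful battery",
            "great battery", "awful phone"]
    labels = [1, -1, 1, -1]            # 1 = positive, -1 = negative
    lex = build_lexicon(docs, labels)
    ```

    A sentiment threshold (here simply zero) then turns the continuous score into a polarity decision.
    
    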

  13. Experimental aspects of deterministic secure quantum key distribution

    Energy Technology Data Exchange (ETDEWEB)

    Walenta, Nino; Korn, Dietmar; Puhlmann, Dirk; Felbinger, Timo; Hoffmann, Holger; Ostermeyer, Martin [Universitaet Potsdam (Germany). Institut fuer Physik; Bostroem, Kim [Universitaet Muenster (Germany)

    2008-07-01

    Most common protocols for quantum key distribution (QKD) use non-deterministic algorithms to establish a shared key. But deterministic implementations can allow for higher net key transfer rates and eavesdropping detection rates. The Ping-Pong coding scheme by Bostroem and Felbinger [1] employs deterministic information encoding in entangled states, with its characteristic quantum channel from Bob to Alice and back to Bob. Based on a table-top implementation of this protocol with polarization-entangled photons, fundamental advantages as well as practical issues like transmission losses, photon storage and requirements for progress towards longer transmission distances are discussed and compared to non-deterministic protocols. Modifications of common protocols towards deterministic quantum key distribution are addressed.

  14. A multi-model analysis of vertical ozone profiles

    Directory of Open Access Journals (Sweden)

    J. E. Jonson

    2010-06-01

    emissions reduced by 20% by region. Intercontinental transport of ozone is finally determined based on differences in model ensemble calculations. With emissions perturbed by 20% per region, calculated intercontinental contributions to ozone in the free troposphere range from less than 1 ppb to 3 ppb, with small contributions in winter. The results are corroborated by the retroplume calculations. At several locations the seasonal contributions to ozone in the free troposphere from intercontinental transport differ from what was shown earlier at the surface using the same dataset. The large spread in model results points to a need for further evaluation of the chemical and physical processes in order to improve the credibility of global model results.

  15. Ozone time scale decomposition and trend assessment from surface observations

    Science.gov (United States)

    Boleti, Eirini; Hueglin, Christoph; Takahama, Satoshi

    2017-04-01

    Emissions of ozone precursors have been regulated in Europe since around 1990, with control measures primarily targeting industries and traffic. In order to understand how these measures have affected air quality, it is now important to investigate concentrations of tropospheric ozone in different types of environments, based on their NOx burden, and in different geographic regions. In this study, we analyze high-quality data sets for Switzerland (NABEL network) and the whole of Europe (AirBase) for the last 25 years to calculate long-term trends of ozone concentrations. A sophisticated time scale decomposition method, called Ensemble Empirical Mode Decomposition (EEMD) (Huang, 1998; Wu, 2009), is used to decompose the different time scales of the variation of ozone, namely the long-term trend, seasonal and short-term variability. This allows subtraction of the seasonal pattern of ozone from the observations and estimation of long-term changes of ozone concentrations with lower uncertainty ranges compared to typical methodologies. We observe that, despite the implementation of regulations, at most measurement sites daily mean ozone values kept increasing until around the mid-2000s. Afterwards, we observe a decline or a leveling off in the concentrations; likely a delayed effect of limitations on ozone precursor emissions. On the other hand, peak ozone concentrations have been decreasing in almost all regions. The evolution of the trend exhibits some differences between the different types of measurement site. In addition, ozone is known to be strongly affected by meteorology. In the applied approach, some of the meteorological effects are captured by the seasonal signal and thus removed in the de-seasonalized ozone time series. 
For adjustment of the influence of meteorology on the higher-frequency ozone variation, a statistical approach based on Generalized Additive Models (GAM) (Hastie, 1990; Wood, 2006), which corrects for meteorological
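    EEMD itself requires a full empirical-mode-decomposition sifting routine. As a much simpler, hedged stand-in that illustrates the same idea of separating long-term, seasonal, and short-term time scales before trend estimation, nested moving averages can be used (the window lengths and synthetic series below are illustrative, not the NABEL/AirBase data):

    ```python
    import numpy as np

    # Hedged stand-in for EEMD time-scale separation: nested moving
    # averages split a synthetic ozone-like series into long-term,
    # seasonal, and short-term parts (illustrative only).
    def moving_average(x, w):
        pad = np.pad(x, (w // 2, w - w // 2 - 1), mode="edge")
        return np.convolve(pad, np.ones(w) / w, mode="valid")

    t = np.arange(10 * 365)                          # ten years, daily
    trend = 0.002 * t                                # slow increase
    season = 10.0 * np.sin(2 * np.pi * t / 365)      # annual cycle
    noise = np.random.default_rng(1).normal(0, 2, t.size)
    ozone = 30.0 + trend + season + noise

    long_term = moving_average(ozone, 731)           # > 2-year window
    seasonal = moving_average(ozone - long_term, 31) # keeps the annual cycle
    short_term = ozone - long_term - seasonal
    deseasonalized = ozone - seasonal                # trend + short-term part
    ```

    Subtracting the seasonal component, as in the de-seasonalized series above, is what allows the long-term change to be estimated with a smaller uncertainty range.
    
    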

  16. Evaluation of TIGGE Ensemble Forecasts of Precipitation in Distinct Climate Regions in Iran

    Science.gov (United States)

    Aminyavari, Saleh; Saghafian, Bahram; Delavar, Majid

    2018-04-01

    The application of numerical weather prediction (NWP) products is increasing dramatically. Existing reports indicate that ensemble predictions have better skill than deterministic forecasts. In this study, numerical ensemble precipitation forecasts in the TIGGE database were evaluated using deterministic, dichotomous (yes/no), and probabilistic techniques over Iran for the period 2008-16. Thirteen rain gauges spread over eight homogeneous precipitation regimes were selected for evaluation. The Inverse Distance Weighting and Kriging methods were adopted for interpolation of the prediction values, downscaled to the stations at lead times of one to three days. To enhance the forecast quality, NWP values were post-processed via Bayesian Model Averaging. The results showed that ECMWF had better scores than other products. However, products of all centers underestimated precipitation in high precipitation regions while overestimating precipitation in other regions. This points to a systematic bias in forecasts and demands application of bias correction techniques. Based on dichotomous evaluation, NCEP did better at most stations, although all centers overpredicted the number of precipitation events. Compared to those of ECMWF and NCEP, UKMO yielded higher scores in mountainous regions, but performed poorly at other selected stations. Furthermore, the evaluations showed that all centers had better skill in wet than in dry seasons. The quality of post-processed predictions was better than those of the raw predictions. In conclusion, the accuracy of the NWP predictions made by the selected centers could be classified as medium over Iran, while post-processing of predictions is recommended to improve the quality.
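    True Bayesian Model Averaging fits mixture weights and variances by EM; as a hedged simplification of the post-processing idea, the sketch below just weights each center's forecast inversely to its training-period mean squared error (all numbers are invented, not TIGGE data):

    ```python
    # Hedged, simplified skill-weighted combination in the spirit of BMA
    # (real BMA estimates mixture weights/variances by EM).
    def combine(train_preds, train_obs, new_preds):
        """train_preds: one forecast list per center over a training period;
        new_preds: one new forecast per center."""
        mses = [sum((p - o) ** 2 for p, o in zip(preds, train_obs))
                / len(train_obs) for preds in train_preds]
        inv = [1.0 / (m + 1e-9) for m in mses]       # inverse-MSE skill
        weights = [v / sum(inv) for v in inv]
        blended = sum(w * p for w, p in zip(weights, new_preds))
        return weights, blended

    train_preds = [[1.0, 2.0, 3.0],   # center A: unbiased on training data
                   [2.0, 3.0, 4.0]]   # center B: +1 bias
    train_obs = [1.0, 2.0, 3.0]
    weights, blended = combine(train_preds, train_obs, new_preds=[5.0, 6.0])
    ```

    The unbiased center receives nearly all the weight, so the blended forecast tracks it; a bias-correction step would normally precede the weighting.
    
    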

  17. Dynamical mean-field theory of noisy spiking neuron ensembles: Application to the Hodgkin-Huxley model

    International Nuclear Information System (INIS)

    Hasegawa, Hideo

    2003-01-01

    A dynamical mean-field approximation (DMA) previously proposed by the present author [H. Hasegawa, Phys. Rev. E 67, 041903 (2003)] has been extended to ensembles described by a general noisy spiking neuron model. Ensembles of N-unit neurons, each of which is expressed by coupled K-dimensional differential equations (DEs), are assumed to be subject to spatially correlated white noises. The original KN-dimensional stochastic DEs have been replaced by K(K+2)-dimensional deterministic DEs expressed in terms of the means and second-order moments of local and global variables; the fourth-order contributions are taken into account by the Gaussian decoupling approximation. Our DMA has been applied to an ensemble of Hodgkin-Huxley (HH) neurons (K=4), for which the effects of the noise, the coupling strength, and the ensemble size on the response to a single-spike input have been investigated. Numerical results calculated by the DMA theory are in good agreement with those obtained by direct simulations, although the former computation is about a thousand times faster than the latter for a typical HH neuron ensemble with N=100.

  18. Deterministic and efficient quantum cryptography based on Bell's theorem

    International Nuclear Information System (INIS)

    Chen, Z.-B.; Zhang, Q.; Bao, X.-H.; Schmiedmayer, J.; Pan, J.-W.

    2005-01-01

    Full text: We propose a novel double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish a key bit with the help of classical communications. Eavesdropping can be detected by checking the violation of local realism for the detected events. We also show that our protocol allows a robust implementation under current technology. (author)

  19. Ozone-UV-catalysis based advanced oxidation process for wastewater treatment.

    Science.gov (United States)

    Tichonovas, Martynas; Krugly, Edvinas; Jankunaite, Dalia; Racys, Viktoras; Martuzevicius, Dainius

    2017-07-01

    A bench-scale advanced oxidation (AO) reactor was investigated for the degradation of six pollutants (2-naphthol, phenol, oxalic acid, phthalate, methylene blue, and D-glucose) in model wastewater, with the aim of testing opportunities for further upscaling to industrial applications. Six experimental conditions were designed to examine the experimental reactor completely, including photolysis, photocatalysis, ozonation, photolytic ozonation, catalytic ozonation, and photocatalytic ozonation. The stationary catalyst construction was made from commercially available TiO2 nanopowder by mounting it on a glass support, and was subsequently characterized for morphology (X-ray diffraction analysis and scanning electron microscopy) as well as durability. The ozone was generated in a dielectric barrier discharge reactor using air as the source of oxygen. The degradation efficiency was estimated by the decrease in total organic carbon (TOC) concentration as well as by toxicity using Daphnia magna, and degradation by-products were identified by ultra-performance liquid chromatography-mass spectrometry. Photocatalytic ozonation was the most effective treatment for all model wastewaters, outperforming ozonation and photolytic ozonation at all tested pH values. A complete loss of toxicity was obtained after treatment using photocatalytic ozonation. A possible degradation pathway of the phthalate by oxidation was suggested, based on aromatic-ring-opening reactions. The catalyst used in this experiment proved durable in continuous use, with almost no loss of activity over time. The design of the reactor was found to be very effective for water treatment using photocatalytic ozonation. Such a design has high potential and can be further upscaled to industrial applications due to the simplicity and versatility of its manufacture and maintenance.

  20. Force Sensor Based Tool Condition Monitoring Using a Heterogeneous Ensemble Learning Model

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2014-11-01

    Tool condition monitoring (TCM) plays an important role in improving machining efficiency and guaranteeing workpiece quality. In order to realize reliable recognition of the tool condition, a robust classifier needs to be constructed to depict the relationship between tool wear states and sensory information. However, because of the complexity of the machining process and the uncertainty of the tool wear evolution, it is hard for a single classifier to fit all the collected samples without sacrificing generalization ability. In this paper, heterogeneous ensemble learning is proposed to realize tool condition monitoring, in which the support vector machine (SVM), hidden Markov model (HMM) and radial basis function (RBF) network are selected as base classifiers and a stacking ensemble strategy is further used to reflect the relationship between the outputs of these base classifiers and the tool wear states. Based on the heterogeneous ensemble learning classifier, an online monitoring system is constructed in which harmonic features are extracted from force signals and a minimal redundancy and maximal relevance (mRMR) algorithm is utilized to select the most prominent features. To verify the effectiveness of the proposed method, a titanium alloy milling experiment was carried out and samples with different tool wear states were collected to build the proposed heterogeneous ensemble learning classifier. Moreover, a homogeneous ensemble learning model and a majority voting strategy are also adopted for comparison. The analysis and comparison results show that the proposed heterogeneous ensemble learning classifier performs better in both classification accuracy and stability.
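    A stacking ensemble along these lines can be sketched as follows; toy threshold rules stand in for the SVM/HMM/RBF base classifiers of the paper, a simple perceptron serves as the level-1 combiner, and all data are invented:

    ```python
    # Hedged sketch of stacking: base-classifier outputs become the
    # features of a level-1 (meta) learner. Toy stand-ins throughout.
    class ThresholdRule:
        """Trivial base classifier: 1 if a chosen feature exceeds a threshold."""
        def __init__(self, feature, thresh):
            self.f, self.t = feature, thresh
        def predict(self, x):
            return 1 if x[self.f] > self.t else 0

    def train_stack(bases, X, y, epochs=20, lr=0.1):
        """Train a perceptron on the base classifiers' outputs."""
        meta_X = [[b.predict(x) for b in bases] for x in X]
        w = [0.0] * len(bases)
        b0 = 0.0
        for _ in range(epochs):
            for feats, target in zip(meta_X, y):
                s = sum(wi * f for wi, f in zip(w, feats)) + b0
                err = target - (1 if s > 0 else 0)
                w = [wi + lr * err * f for wi, f in zip(w, feats)]
                b0 += lr * err
        return w, b0

    def predict_stack(bases, w, b0, x):
        feats = [b.predict(x) for b in bases]
        return 1 if sum(wi * f for wi, f in zip(w, feats)) + b0 > 0 else 0

    # toy "tool wear" data: feature 0 is informative, feature 1 is noise
    X = [(0.2, 0.9), (0.3, 0.1), (0.8, 0.8), (0.9, 0.2)]
    y = [0, 0, 1, 1]
    bases = [ThresholdRule(0, 0.5), ThresholdRule(1, 0.5)]
    w, b0 = train_stack(bases, X, y)
    ```

    The meta-learner discovers that the first base classifier is the reliable one and weights it accordingly, which is the point of stacking over plain majority voting.
    
    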

  1. Comparative Visualization of Vector Field Ensembles Based on Longest Common Subsequence

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Richen; Guo, Hanqi; Zhang, Jiang; Yuan, Xiaoru

    2016-04-19

    We propose a longest common subsequence (LCS) based approach to compute the distance among vector field ensembles. By measuring how many common data blocks the ensemble pathlines pass through, the LCS distance defines the similarity among vector field ensembles by counting the number of shared domain data blocks. Compared to traditional methods (e.g. point-wise Euclidean distance or dynamic time warping distance), the proposed approach is robust to outliers, missing data, and the sampling rate of pathline timesteps. Taking advantage of smaller, reusable intermediate output, visualization based on the proposed LCS approach reveals temporal trends in the data at low storage cost and avoids tracing pathlines repeatedly. Finally, we evaluate our method on both synthetic data and simulation data, which demonstrates the robustness of the proposed approach.
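    The core of the approach, an LCS-based dissimilarity between the block-ID sequences that two pathlines traverse, can be sketched with the standard dynamic program (the block IDs below are hypothetical):

    ```python
    # LCS over sequences of data-block IDs visited by pathlines
    # (a sketch of the idea; block IDs are hypothetical).
    def lcs_length(a, b):
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                dp[i + 1][j + 1] = (dp[i][j] + 1 if ai == bj
                                    else max(dp[i][j + 1], dp[i + 1][j]))
        return dp[len(a)][len(b)]

    def lcs_distance(a, b):
        """Dissimilarity in [0, 1]: 1 minus the normalized shared-block count."""
        if not a or not b:
            return 1.0
        return 1.0 - lcs_length(a, b) / max(len(a), len(b))

    p1 = [3, 4, 4, 5, 9, 9, 10]   # pathline 1: visited block IDs
    p2 = [3, 4, 5, 9, 10]         # same route, coarser sampling
    p3 = [0, 1, 2, 6, 7]          # different route
    ```

    Because the comparison is over visited blocks rather than exact point positions, resampling a pathline (p2 vs. p1) changes the distance only slightly, which is the robustness property claimed above.
    
    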

  2. Ensemble prediction of floods – catchment non-linearity and forecast probabilities

    Directory of Open Access Journals (Sweden)

    C. Reszler

    2007-07-01

    Quantifying the uncertainty of flood forecasts by ensemble methods is becoming increasingly important for operational purposes. The aim of this paper is to examine how the ensemble distribution of precipitation forecasts propagates in the catchment system, and to interpret the flood forecast probabilities relative to the forecast errors. We use the 622 km2 Kamp catchment in Austria as an example where a comprehensive data set, including a 500 yr and a 1000 yr flood, is available. A spatially-distributed continuous rainfall-runoff model is used along with ensemble and deterministic precipitation forecasts that combine rain gauge data, radar data and the forecast fields of the ALADIN and ECMWF numerical weather prediction models. The analyses indicate that, for long lead times, the variability of the precipitation ensemble is amplified as it propagates through the catchment system as a result of non-linear catchment response. In contrast, for lead times shorter than the catchment lag time (e.g. 12 h and less), the variability of the precipitation ensemble is decreased as the forecasts are mainly controlled by observed upstream runoff and observed precipitation. Assuming that all ensemble members are equally likely, the statistical analyses for five flood events at the Kamp showed that the ensemble spread of the flood forecasts is always narrower than the distribution of the forecast errors. This is because the ensemble forecasts focus on the uncertainty in forecast precipitation as the dominant source of uncertainty, and other sources of uncertainty are not accounted for. However, a number of analyses, including Relative Operating Characteristic diagrams, indicate that the ensemble spread is a useful indicator to assess potential forecast errors for lead times larger than 12 h.

  3. The Development of Storm Surge Ensemble Prediction System and Case Study of Typhoon Meranti in 2016

    Science.gov (United States)

    Tsai, Y. L.; Wu, T. R.; Terng, C. T.; Chu, C. H.

    2017-12-01

    Taiwan, located in a zone of potentially severe storm generation, is under the threat of storm surge and associated inundation. The use of ensemble prediction can help forecasters understand the characteristics of storm surge under the uncertainty of track and intensity. In addition, it can complement deterministic forecasting. In this study, the kernel of the ensemble prediction system is based on COMCOT-SURGE (COrnell Multi-grid COupled Tsunami Model - Storm Surge). COMCOT-SURGE solves the nonlinear shallow water equations in open-ocean and coastal regions with a nested-grid scheme and adopts wet-dry-cell treatment to calculate the potential inundation area. In order to consider tide-surge interaction, the global TPXO 7.1 tide model provides the tidal boundary conditions. After a series of validations and case studies, COMCOT-SURGE has become an official operational system of the Central Weather Bureau (CWB) in Taiwan. In this study, the strongest typhoon of 2016, Typhoon Meranti, is chosen as a case study. We adopt twenty ensemble members from the CWB WRF Ensemble Prediction System (CWB WEPS), whose members differ in microphysics, boundary-layer, cumulus, and surface parameterizations. From the box-and-whisker results, maximum observed storm surges were located between the first and third quartiles at more than 70% of gauge locations, e.g. Toucheng, Chengkung, and Jiangjyun. In conclusion, ensemble prediction can effectively help forecasters predict storm surge, especially under the uncertainty of storm track and intensity.

  4. Accounting for model error due to unresolved scales within ensemble Kalman filtering

    OpenAIRE

    Mitchell, Lewis; Carrassi, Alberto

    2014-01-01

    We propose a method to account for model error due to unresolved scales in the context of the ensemble transform Kalman filter (ETKF). The approach extends to this class of algorithms the deterministic model error formulation recently explored for variational schemes and the extended Kalman filter. The model error statistic required in the analysis update is estimated using historical reanalysis increments and a suitable model error evolution law. Two different versions of the method are described...

  5. Forecasting crude oil price with an EMD-based neural network ensemble learning paradigm

    International Nuclear Information System (INIS)

    Yu, Lean; Wang, Shouyang; Lai, Kin Keung

    2008-01-01

    In this study, an empirical mode decomposition (EMD) based neural network ensemble learning paradigm is proposed for world crude oil spot price forecasting. For this purpose, the original crude oil spot price series were first decomposed into a finite, and often small, number of intrinsic mode functions (IMFs). Then a three-layer feed-forward neural network (FNN) model was used to model each of the extracted IMFs, so that the tendencies of these IMFs could be accurately predicted. Finally, the prediction results of all IMFs are combined with an adaptive linear neural network (ALNN) to formulate an ensemble output for the original crude oil price series. For verification and testing, two main crude oil price series, West Texas Intermediate (WTI) crude oil spot price and Brent crude oil spot price, are used to test the effectiveness of the proposed EMD-based neural network ensemble learning methodology. Empirical results obtained demonstrate the attractiveness of the proposed EMD-based neural network ensemble learning paradigm. (author)
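    The final ALNN stage is essentially an adaptive linear combiner; a hedged sketch of learning the combination weights of per-component forecasts by least-mean-squares follows (the component forecasts are synthetic, not IMF predictions of real oil prices):

    ```python
    # Hedged sketch of the ensemble-output stage: an adaptive linear
    # combiner trained by LMS to blend per-component forecasts
    # (synthetic data, not real IMF or oil-price forecasts).
    def lms_combine(component_preds, targets, lr=0.01, epochs=500):
        """component_preds: rows of per-component forecasts;
        learns weights w so that w . row approximates the target."""
        k = len(component_preds[0])
        w = [0.0] * k
        for _ in range(epochs):
            for feats, y in zip(component_preds, targets):
                yhat = sum(wi * f for wi, f in zip(w, feats))
                err = y - yhat                      # instantaneous error
                w = [wi + lr * err * f for wi, f in zip(w, feats)]
        return w

    # each row: forecasts of two components; target is their sum
    preds = [(1.0, 0.5), (2.0, -0.5), (3.0, 1.5), (4.0, 0.0)]
    targets = [f1 + f2 for f1, f2 in preds]
    w = lms_combine(preds, targets)
    ```

    Since the target here is exactly the sum of the components, the learned weights converge toward (1, 1), i.e. the combiner recovers the additive reconstruction.
    
    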

  6. On Ensemble Nonlinear Kalman Filtering with Symmetric Analysis Ensembles

    KAUST Repository

    Luo, Xiaodong

    2010-09-19

    The ensemble square root filter (EnSRF) [1, 2, 3, 4] is a popular method for data assimilation in high dimensional systems (e.g., geophysics models). Essentially the EnSRF is a Monte Carlo implementation of the conventional Kalman filter (KF) [5, 6]. It differs from the KF mainly at the prediction steps, where it is ensembles of the system state, rather than the means and covariance matrices, that are propagated forward. In doing this, the EnSRF is computationally more efficient than the KF, since propagating a covariance matrix forward in high dimensional systems is prohibitively expensive. In addition, the EnSRF is very convenient to implement. By propagating the ensembles of the system state, the EnSRF can be directly applied to nonlinear systems without any change in comparison to the assimilation procedures in linear systems. However, by adopting the Monte Carlo method, the EnSRF also incurs certain sampling errors. One way to alleviate this problem is to introduce certain symmetry to the ensembles, which can reduce the sampling errors and spurious modes in evaluation of the means and covariances of the ensembles [7]. In this contribution, we present two methods to produce symmetric ensembles. One is based on the unscented transform [8, 9], which leads to the unscented Kalman filter (UKF) [8, 9] and its variant, the ensemble unscented Kalman filter (EnUKF) [7]. The other is based on Stirling's interpolation formula (SIF), which results in the divided difference filter (DDF) [10]. Here we propose a simplified divided difference filter (sDDF) in the context of ensemble filtering. The similarity and difference between the sDDF and the EnUKF will be discussed. Numerical experiments will also be conducted to investigate the performance of the sDDF and the EnUKF, and compare them to a well-established EnSRF, the ensemble transform Kalman filter (ETKF) [2].
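    One standard way to produce such a symmetric ensemble is the unscented transform: 2n+1 sigma points whose weighted sample mean and covariance match a given mean and covariance exactly. A minimal sketch (the mean, covariance and kappa value are illustrative):

    ```python
    import numpy as np

    # Symmetric sigma-point ensemble via the unscented transform:
    # 2n+1 members matching (m, P) exactly in weighted mean/covariance.
    def sigma_ensemble(m, P, kappa=0.0):
        n = len(m)
        L = np.linalg.cholesky((n + kappa) * P)   # matrix square root
        pts = [m]
        for i in range(n):
            pts.append(m + L[:, i])               # symmetric +/- pairs
            pts.append(m - L[:, i])
        w0 = kappa / (n + kappa)
        wi = 1.0 / (2 * (n + kappa))
        weights = [w0] + [wi] * (2 * n)
        return np.array(pts), np.array(weights)

    m = np.array([1.0, -2.0])
    P = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
    pts, w = sigma_ensemble(m, P, kappa=1.0)
    mean = w @ pts
    cov = (pts - mean).T @ np.diag(w) @ (pts - mean)
    ```

    The +/- pairing is what removes the odd-moment sampling error that a random Monte Carlo ensemble of the same size would carry.
    
    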

  7. 77 FR 26441 - Approval and Promulgation of Implementation Plans; North Carolina; Charlotte; Ozone 2002 Base...

    Science.gov (United States)

    2012-05-04

    ... Promulgation of Implementation Plans; North Carolina; Charlotte; Ozone 2002 Base Year Emissions Inventory... final action to approve the ozone 2002 base year emissions inventory portion of the state implementation... Air Act (CAA or Act). EPA will take action on the South Carolina submission for the ozone 2002 base...

  8. Ozone decomposition

    Directory of Open Access Journals (Sweden)

    Batakliev Todor

    2014-06-01

    Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of that literature, concentrating on analysis of the physico-chemical properties, synthesis, and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals has stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. The kinetics of ozone decomposition has been determined to be first order. A mechanism for the catalytic ozone decomposition reaction is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates.
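    First-order kinetics of this kind can be sketched directly; this is the generic textbook rate law with a hypothetical rate constant, not parameters from the review:

```python
import math

def ozone_concentration(c0, k, t):
    """First-order decay, dC/dt = -k*C, so C(t) = C0 * exp(-k*t).
    c0: initial ozone concentration; k: hypothetical rate constant (1/s)."""
    return c0 * math.exp(-k * t)

# half-life for an illustrative k = 0.01 1/s
half_life = math.log(2) / 0.01
```

    A first-order process has a concentration-independent half-life, which is one practical way such kinetics are recognized experimentally.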

  9. Generalized outcome-based strategy classification: comparing deterministic and probabilistic choice models.

    Science.gov (United States)

    Hilbig, Benjamin E; Moshagen, Morten

    2014-12-01

    Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.

  10. Novel Water Treatment Processes Based on Hybrid Membrane-Ozonation Systems: A Novel Ceramic Membrane Contactor for Bubbleless Ozonation of Emerging Micropollutants

    Directory of Open Access Journals (Sweden)

    Stylianos K. Stylianou

    2015-01-01

    The aim of this study is the presentation of novel water treatment systems based on ozonation combined with ceramic membranes for the treatment of refractory organic compounds found in natural water sources such as groundwater. It includes, firstly, a short review of possible membrane-based hybrid processes for water treatment from various sources. Several practical and theoretical aspects of the application of hybrid membrane-ozonation systems are discussed, along with theoretical background regarding the transformation of target organic pollutants by ozone. Next, a novel ceramic membrane contactor, bringing the gas phase (ozone) and the water phase into contact without the creation of bubbles (bubbleless ozonation), is presented. Experimental data showing the membrane contactor's efficiency for oxidation of atrazine, endosulfan, and methyl tert-butyl ether (MTBE) are shown and discussed. Almost complete endosulfan degradation was achieved with the ceramic contactor, whereas atrazine degradation higher than 50% could not be achieved even after 60 min of reaction time. Single ozonation of water containing MTBE did not result in significant MTBE degradation. MTBE mineralization by the O3/H2O2 combination increased at higher pH values and an O3/H2O2 molar ratio of 0.2, reaching a maximum of around 65%.

  11. Harmony Search Based Parameter Ensemble Adaptation for Differential Evolution

    Directory of Open Access Journals (Sweden)

    Rammohan Mallipeddi

    2013-01-01

    In the differential evolution (DE) algorithm, depending on the characteristics of the problem at hand and the available computational resources, different strategies combined with different sets of parameters may be effective. In addition, a single, well-tuned combination of strategies and parameters may not guarantee optimal performance, because different strategies combined with different parameter settings can be appropriate during different stages of the evolution. Therefore, various adaptive/self-adaptive techniques have been proposed to adapt the DE strategies and parameters during the course of evolution. In this paper, we propose a new parameter adaptation technique for DE based on an ensemble approach and the harmony search algorithm (HS). In the proposed method, an ensemble of parameters is randomly sampled to form the initial harmony memory. The parameter ensemble evolves during the course of the optimization process via the HS algorithm. Each parameter combination in the harmony memory is evaluated by testing it on the DE population. The performance of the proposed adaptation method is evaluated using two recently proposed strategies (DE/current-to-pbest/bin and DE/current-to-gr_best/bin) as basic DE frameworks. Numerical results demonstrate the effectiveness of the proposed adaptation technique compared to state-of-the-art DE-based algorithms on a set of challenging test problems (CEC 2005).
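    A minimal sketch of the core harmony search loop such an adaptation builds on, here tuning a single stand-in DE control parameter against a toy objective (all names and parameter values are illustrative; the actual method evolves whole parameter ensembles evaluated on the DE population):

```python
import random

def harmony_search(objective, low, high, hms=5, hmcr=0.9, par=0.3,
                   bw=0.1, iters=200, seed=1):
    """Minimal 1-D harmony search; the 'parameter' plays the role of a DE
    control parameter, and `objective` stands in for testing a parameter
    combination on the DE population."""
    rng = random.Random(seed)
    memory = [rng.uniform(low, high) for _ in range(hms)]  # harmony memory
    scores = [objective(x) for x in memory]
    for _ in range(iters):
        if rng.random() < hmcr:            # draw from harmony memory...
            x = rng.choice(memory)
            if rng.random() < par:         # ...with optional pitch adjustment
                x += rng.uniform(-bw, bw)
        else:                              # or improvise a random value
            x = rng.uniform(low, high)
        x = min(max(x, low), high)
        s = objective(x)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:              # replace the worst harmony
            memory[worst], scores[worst] = x, s
    return min(memory, key=objective)

best = harmony_search(lambda f: (f - 0.5) ** 2, 0.0, 1.0)  # toy optimum: 0.5
```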

  12. Stochastic Watershed Models for Risk Based Decision Making

    Science.gov (United States)

    Vogel, R. M.

    2017-12-01

    Over half a century ago, the Harvard Water Program introduced the field of operational or synthetic hydrology, providing stochastic streamflow models (SSMs) that could generate ensembles of synthetic streamflow traces useful for hydrologic risk management. The application of SSMs, based on streamflow observations alone, revolutionized water resources planning activities, yet has fallen out of favor due, in part, to their inability to account for the now nearly ubiquitous anthropogenic influences on streamflow. This commentary advances the modern equivalent of SSMs, termed 'stochastic watershed models' (SWMs), useful as input to nearly all modern risk-based water resource decision making approaches. SWMs are deterministic watershed models implemented using stochastic meteorological series, model parameters, and model errors to generate ensembles of streamflow traces that represent the variability in possible future streamflows. SWMs combine deterministic watershed models, which are ideally suited to accounting for anthropogenic influences, with recent developments in uncertainty analysis and principles of stochastic simulation.
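    The SWM idea, a deterministic watershed model driven by stochastic inputs, can be sketched as follows (a toy single-linear-reservoir model with hypothetical Gaussian rainfall, not any specific operational model):

```python
import random

def watershed_model(precip, k=0.5, s0=10.0):
    """Toy deterministic watershed model: a single linear reservoir whose
    storage gains precipitation and releases a fraction k as streamflow."""
    s, flows = s0, []
    for p in precip:
        s += p
        q = k * s          # streamflow is proportional to storage
        s -= q
        flows.append(q)
    return flows

def stochastic_watershed_ensemble(n_traces, n_steps, rng):
    """SWM sketch: run the deterministic model on stochastic rainfall
    series to get an ensemble of possible streamflow traces."""
    ensemble = []
    for _ in range(n_traces):
        precip = [max(0.0, rng.gauss(5.0, 2.0)) for _ in range(n_steps)]
        ensemble.append(watershed_model(precip))
    return ensemble

traces = stochastic_watershed_ensemble(100, 30, random.Random(42))
```

    The spread across the 100 traces, rather than any single trace, is what feeds a risk-based decision analysis.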

  13. Ensemble-based Probabilistic Forecasting at Horns Rev

    DEFF Research Database (Denmark)

    Pinson, Pierre; Madsen, Henrik

    2009-01-01

    forecasting methodology. In a first stage, ensemble forecasts of meteorological variables are converted to power through a suitable power curve model. This model employs local polynomial regression, and is adaptively estimated with an orthogonal fitting method. The obtained ensemble forecasts of wind power...

  14. 'Lazy' quantum ensembles

    International Nuclear Information System (INIS)

    Parfionov, George; Zapatrin, Roman

    2006-01-01

    We compare different strategies aimed at preparing an ensemble with a given density matrix ρ. Preparing the ensemble of eigenstates of ρ with appropriate probabilities can be treated as the 'generous' strategy: it provides maximal accessible information about the state. The other extreme is the so-called 'Scrooge' ensemble, which is the stingiest in sharing its information. We introduce 'lazy' ensembles, which require minimal effort to prepare the density matrix, selecting pure states by completely random choice. We consider two parties, Alice and Bob, playing a kind of game. Bob wishes to guess which pure state is prepared by Alice. His null hypothesis, based on the lack of any information about Alice's intention, is that Alice prepares any pure state with equal probability. Then the average quantum state measured by Bob turns out to be ρ, and he has to form a new hypothesis about Alice's intention based solely on the information that the observed density matrix is ρ. The arising 'lazy' ensemble is shown to be the alternative hypothesis which minimizes the type I error.

  15. Estimation of surface UV levels based on Meteor-3/TOMS ozone data

    Energy Technology Data Exchange (ETDEWEB)

    Borisov, Y A [Central Aerological Observatory, Moscow (Russian Federation); Geogdzhaev, I V [Moscow Inst. of Physics and Technology, Moscow (Russian Federation); Khattatov, V U [Central Aerological Observatory, Moscow (Russian Federation)

    1996-12-31

    The major consequence of ozone layer depletion for the environment is an increase of harmful ultraviolet (UV) radiation at the Earth's surface and in the upper ocean. This implies the importance of environmental UV monitoring. Since direct global monitoring is not currently possible, indirect estimates of surface UV levels may be made based on satellite ozone data (Madronich, S. 1992). The Total Ozone Mapping Spectrometer (TOMS) on board the METEOR-3 satellite provided a regular set of data for such estimates. During its time of operation (August 1991 - December 1994) the instrument registered several ozone hole events over Antarctica, when ozone levels dropped by as much as 60% from their unperturbed values. Probably even more alarming ozone depletions were observed over highly populated regions of the middle latitudes of the northern hemisphere. Radiative transfer modeling was used to convert METEOR-3/TOMS daily ozone values into regional and global maps of biologically active UV. Calculations demonstrate the effect on surface UV levels produced by the ozone hole over Antarctica and the ozone depletions over the territory of Russia (March 1994). UV contour lines deviate from their normal appearance, which is determined by solar elevation growing southward. UV contour lines are almost perpendicular to the ozone contours in the ozone depletion areas. The 30% ozone depletion over Siberia caused more than a 30% increase in noontime erythemal UV levels, which is equivalent to a 10-15 degree southward latitude displacement. Higher UV radiation increases were found in the ozone hole over South America (October 1992), equivalent to about a 20 degree southward displacement.
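    A common first-order shortcut for ozone-to-UV estimates (not necessarily the radiative transfer model used in the study) is the radiation amplification factor (RAF) power law, under which erythemal UV scales roughly as ozone to the power -RAF:

```python
def erythemal_uv_scaling(uv_ref, ozone_ref, ozone, raf=1.1):
    """Power law UV ~ O3**(-RAF): scale a reference erythemal UV level to
    a new total-ozone value. RAF = 1.1 is an illustrative textbook value
    for erythemal UV, not a parameter from this study."""
    return uv_ref * (ozone_ref / ozone) ** raf

# a 30% ozone depletion (300 -> 210 DU) raises erythemal UV by roughly half
uv = erythemal_uv_scaling(100.0, 300.0, 210.0)
```

    This crude scaling is consistent with the abstract's observation that a 30% ozone depletion produced more than a 30% increase in erythemal UV.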

  17. Deterministic versus evidence-based attitude towards clinical diagnosis.

    Science.gov (United States)

    Soltani, Akbar; Moayyeri, Alireza

    2007-08-01

    Generally, two basic classes have been proposed for the scientific explanation of events. Deductive reasoning emphasizes reaching conclusions about a hypothesis based on verification of universal laws pertinent to that hypothesis, while inductive or probabilistic reasoning explains an event by calculating the probability that the event is related to a given hypothesis. Although both types of reasoning are used in clinical practice, evidence-based medicine stresses the advantages of the second approach for most instances in medical decision making. While 'probabilistic or evidence-based' reasoning seems to involve more mathematical formulas at first glance, this attitude is more dynamic and less imprisoned by the rigidity of mathematics compared with the 'deterministic or mathematical attitude'. In the field of medical diagnosis, appreciation of uncertainty in clinical encounters and use of the likelihood ratio as a measure of accuracy seem to be the most important characteristics of evidence-based doctors. Other characteristics include the use of series of tests to refine probability, changing diagnostic thresholds in light of external evidence and the nature of the disease, and attention to confidence intervals to estimate the uncertainty of research-derived parameters.
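    The likelihood-ratio update described above is Bayes' rule in odds form; a minimal sketch with hypothetical numbers:

```python
def post_test_probability(pretest_p, likelihood_ratio):
    """Bayes' rule in odds form: post-test odds = pre-test odds * LR,
    the likelihood-ratio update used in evidence-based diagnosis."""
    odds = pretest_p / (1.0 - pretest_p)
    post_odds = odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# hypothetical test: pre-test probability 20%, positive likelihood ratio 8
p = post_test_probability(0.2, 8.0)
```

    With these illustrative numbers the pre-test odds of 1:4 become post-test odds of 2:1, i.e. a post-test probability of about 67%; chaining further tests just multiplies in further likelihood ratios.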

  18. A Matrix-Free Posterior Ensemble Kalman Filter Implementation Based on a Modified Cholesky Decomposition

    Directory of Open Access Journals (Sweden)

    Elias D. Nino-Ruiz

    2017-07-01

    In this paper, a matrix-free posterior ensemble Kalman filter implementation based on a modified Cholesky decomposition is proposed. The method works as follows: the precision matrix of the background error distribution is estimated based on a modified Cholesky decomposition. The resulting estimator can be expressed in terms of Cholesky factors, which can be updated by a series of rank-one matrices in order to approximate the precision matrix of the analysis distribution. By using this matrix, the posterior ensemble can be built by either sampling from the posterior distribution or using synthetic observations. Furthermore, the computational effort of the proposed method is linear with regard to the model dimension and the number of observed components of the model domain. Experimental tests are performed using the Lorenz-96 model. The results reveal that the accuracy of the proposed implementation in terms of root-mean-square error is similar to, and in some cases better than, that of a well-known ensemble Kalman filter (EnKF) implementation: the local ensemble transform Kalman filter. In addition, the results are comparable to those obtained by the EnKF with large ensemble sizes.
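    For orientation, a textbook stochastic EnKF analysis step for a scalar, directly observed state (a baseline for contrast, not the paper's matrix-free modified-Cholesky implementation) looks like:

```python
import random
import statistics

def enkf_scalar_update(ensemble, y_obs, obs_err_sd, rng):
    """Textbook stochastic (perturbed-observation) EnKF analysis step for
    a scalar state observed directly (H = 1)."""
    var_b = statistics.variance(ensemble)          # background variance
    gain = var_b / (var_b + obs_err_sd ** 2)       # Kalman gain
    # move each member toward its own perturbed copy of the observation
    return [x + gain * (y_obs + rng.gauss(0.0, obs_err_sd) - x)
            for x in ensemble]

rng = random.Random(0)
prior = [rng.gauss(2.0, 1.0) for _ in range(500)]  # background ensemble
post = enkf_scalar_update(prior, 0.0, 0.5, rng)    # assimilate y = 0.0
```

    The analysis ensemble contracts toward the observation, with variance reduced according to the gain; the paper's contribution is doing the analogous high-dimensional update without ever forming the full covariance matrix.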

  19. A deep learning-based multi-model ensemble method for cancer prediction.

    Science.gov (United States)

    Xiao, Yawen; Wu, Jun; Lin, Zongli; Zhao, Xiaodong

    2018-01-01

    Cancer is a complex worldwide health problem associated with high mortality. With the rapid development of the high-throughput sequencing technology and the application of various machine learning methods that have emerged in recent years, progress in cancer prediction has been increasingly made based on gene expression, providing insight into effective and accurate treatment decision making. Thus, developing machine learning methods, which can successfully distinguish cancer patients from healthy persons, is of great current interest. However, among the classification methods applied to cancer prediction so far, no one method outperforms all the others. In this paper, we demonstrate a new strategy, which applies deep learning to an ensemble approach that incorporates multiple different machine learning models. We supply informative gene data selected by differential gene expression analysis to five different classification models. Then, a deep learning method is employed to ensemble the outputs of the five classifiers. The proposed deep learning-based multi-model ensemble method was tested on three public RNA-seq data sets of three kinds of cancers, Lung Adenocarcinoma, Stomach Adenocarcinoma and Breast Invasive Carcinoma. The test results indicate that it increases the prediction accuracy of cancer for all the tested RNA-seq data sets as compared to using a single classifier or the majority voting algorithm. By taking full advantage of different classifiers, the proposed deep learning-based multi-model ensemble method is shown to be accurate and effective for cancer prediction. Copyright © 2017 Elsevier B.V. All rights reserved.
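    The majority-voting baseline the ensemble is compared against can be sketched in a few lines (illustrative labels; the paper's method replaces this combiner with a deep network trained on the base classifiers' outputs):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine base classifiers' labels by majority voting, the baseline
    combiner that a learned (e.g. deep-network) combiner replaces."""
    return Counter(predictions).most_common(1)[0][0]

# five hypothetical base-classifier outputs for one sample
label = majority_vote(["cancer", "healthy", "cancer", "cancer", "healthy"])
```

    Majority voting weighs every base classifier equally; a learned combiner can instead exploit which classifiers are reliable on which kinds of samples.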

  20. Combining 2-m temperature nowcasting and short range ensemble forecasting

    Directory of Open Access Journals (Sweden)

    A. Kann

    2011-12-01

    During recent years, numerical ensemble prediction systems have become an important tool for estimating the uncertainties of dynamical and physical processes as represented in numerical weather models. The latest generation of limited area ensemble prediction systems (LAM-EPSs) allows for probabilistic forecasts at high resolution in both space and time. However, these systems still suffer from systematic deficiencies. Especially for nowcasting (0-6 h) applications, the ensemble spread is smaller than the actual forecast error. This paper tries to generate probabilistic short range 2-m temperature forecasts by combining a state-of-the-art nowcasting method and a limited area ensemble system, and compares the results with statistical methods. The Integrated Nowcasting Through Comprehensive Analysis (INCA) system, which has been in operation at the Central Institute for Meteorology and Geodynamics (ZAMG) since 2006 (Haiden et al., 2011), provides short range deterministic forecasts at high temporal (15 min-60 min) and spatial (1 km) resolution. An INCA Ensemble (INCA-EPS) of 2-m temperature forecasts is constructed by applying a dynamical approach, a statistical approach, and a combined dynamic-statistical method. The dynamical method takes uncertainty information (i.e. ensemble variance) from the operational limited area ensemble system ALADIN-LAEF (Aire Limitée Adaptation Dynamique Développement InterNational Limited Area Ensemble Forecasting), which runs operationally at ZAMG (Wang et al., 2011). The purely statistical method assumes a well-calibrated spread-skill relation and applies ensemble spread according to the skill of the INCA forecast of the most recent past. The combined dynamic-statistical approach adapts the ensemble variance gained from ALADIN-LAEF with non-homogeneous Gaussian regression (NGR), which yields a statistical correction of the first and second moment (mean bias and dispersion) for Gaussian distributed continuous
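    The purely statistical approach can be sketched as follows (hypothetical error values; `statistical_spread` and `temperature_ensemble` are illustrative names, and the operational system works on INCA analysis fields rather than scalars):

```python
import math
import random

def statistical_spread(recent_errors):
    """Spread from recent skill: assuming a calibrated spread-skill
    relation, set the ensemble spread to the RMSE of the most recent
    deterministic forecast errors."""
    return math.sqrt(sum(e * e for e in recent_errors) / len(recent_errors))

def temperature_ensemble(det_forecast, spread, n, rng):
    # Gaussian ensemble centred on the deterministic 2-m temperature nowcast
    return [rng.gauss(det_forecast, spread) for _ in range(n)]

rng = random.Random(7)
spread = statistical_spread([0.4, -0.6, 0.5, -0.3])   # hypothetical errors (K)
members = temperature_ensemble(15.0, spread, 50, rng)
```

    The dynamical and combined variants replace this fixed spread with (possibly NGR-recalibrated) variance taken from the LAM-EPS.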

  1. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy.

    Science.gov (United States)

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. Random vector functional link networks (RVFL) without direct input-to-output links are suitable base classifiers for ensemble systems because of their fast learning speed, simple structure, and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. When using ARPSO to select the optimal base RVFL, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFL are pruned, and thus a more compact ensemble of RVFL is obtained. Moreover, this paper presents theoretical analysis and justification of how to prune the base classifiers for classification problems, and proposes a simple and practically feasible strategy for pruning redundant base classifiers for both classification and regression problems. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single optimization methods. Experiment results on function approximation and classification problems verify that the proposed method can improve convergence accuracy as well as reduce the complexity of the ensemble system.

  2. An enhanced deterministic K-Means clustering algorithm for cancer subtype prediction from gene expression data.

    Science.gov (United States)

    Nidheesh, N; Abdul Nazeer, K A; Ameer, P M

    2017-12-01

    Clustering algorithms with steps involving randomness usually give different results on different executions for the same dataset. This non-deterministic nature of algorithms such as the K-Means clustering algorithm limits their applicability in areas such as cancer subtype prediction using gene expression data. It is hard to sensibly compare the results of such algorithms with those of other algorithms. The non-deterministic nature of K-Means is due to its random selection of data points as initial centroids. We propose an improved, density-based version of K-Means, which involves a novel and systematic method for selecting initial centroids. The key idea of the algorithm is to select as the initial centroids data points which belong to dense regions and which are adequately separated in feature space. We compared the proposed algorithm to eleven widely used single clustering algorithms and a prominent ensemble clustering algorithm used for cancer data classification, based on performance on ten cancer gene expression datasets. The proposed algorithm showed better overall performance than the others. There is a pressing need in the biomedical domain for simple, easy-to-use and more accurate machine learning tools for cancer subtype prediction. The proposed algorithm is simple, easy to use and gives stable results. Moreover, it provides comparatively better predictions of cancer subtypes from gene expression data. Copyright © 2017 Elsevier Ltd. All rights reserved.
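    A minimal sketch of deterministic, density-based centroid seeding in the same spirit (1-D points for brevity; the selection rule here is illustrative, not the paper's exact method):

```python
def density_init(points, k, radius):
    """Deterministic centroid seeding: repeatedly take the densest
    remaining point, then exclude its neighbourhood so the seeds stay
    well separated. No randomness, so repeated runs agree exactly."""
    def density(p, pts):
        return sum(1 for q in pts if abs(p - q) <= radius)
    remaining = sorted(points)            # sorting makes ties deterministic
    centroids = []
    while remaining and len(centroids) < k:
        c = max(remaining, key=lambda p: density(p, remaining))
        centroids.append(c)
        remaining = [p for p in remaining if abs(p - c) > radius]
    return centroids

seeds = density_init([0.0, 0.1, 0.2, 10.0, 10.1], k=2, radius=2.0)
# -> [0.0, 10.0]: one seed per dense, well-separated region
```

    Feeding such seeds to standard K-Means removes the run-to-run variability that makes random initialization awkward for subtype prediction.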

  3. RBE for deterministic effects

    International Nuclear Information System (INIS)

    1990-01-01

    In the present report, data on RBE values for effects in tissues of experimental animals and man are analysed to assess whether for specific tissues the present dose limits or annual limits of intake based on Q values, are adequate to prevent deterministic effects. (author)

  4. Using Analog Ensemble to generate spatially downscaled probabilistic wind power forecasts

    Science.gov (United States)

    Delle Monache, L.; Shahriari, M.; Cervone, G.

    2017-12-01

    We use the Analog Ensemble (AnEn) method to generate probabilistic 80-m wind power forecasts. We use data from the NCEP GFS (28 km resolution) and NCEP NAM (12 km resolution): forecast data from NAM and GFS, and analysis data from NAM, which enables us to 1) use a lower-resolution model to create higher-resolution forecasts, and 2) use a higher-resolution model to create higher-resolution forecasts. The former essentially increases computing speed and the latter increases forecast accuracy. An aggregated model of the former can be compared against the latter to measure the accuracy of the AnEn spatial downscaling. The AnEn works by taking a deterministic future forecast and comparing it with past forecasts. The model searches for the best matching estimates within the past forecasts and selects the predictand values corresponding to these past forecasts as the ensemble prediction for the future forecast. Our study is based on predicting wind speed and air density at more than 13,000 grid points in the continental US. We run the AnEn model twice: 1) estimating 80-m wind speed using predictor variables such as temperature, pressure, geopotential height, and the U- and V-components of wind; 2) estimating air density using predictors such as temperature, pressure, and relative humidity. We use the air density values to correct the standard wind power curves for different values of air density. The standard deviation of the ensemble members (i.e. the ensemble spread) is used as the degree of difficulty of predicting wind power at different locations. The value of the correlation coefficient between the ensemble spread and the forecast error determines the appropriateness of this measure. This measure is important for wind farm developers, as building wind farms in regions with higher predictability will reduce the real-time risks of operating in the electricity markets.
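    The analog search at the heart of AnEn can be sketched as follows (toy predictor tuples and observations; operational AnEn typically also weights predictors and compares forecasts over a short time window):

```python
def analog_ensemble(target, past_forecasts, past_obs, n_members):
    """AnEn in sketch form: rank past deterministic forecasts by
    similarity to the current forecast and return their verifying
    observations as the ensemble members (toy Euclidean metric)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(range(len(past_forecasts)),
                    key=lambda i: dist(target, past_forecasts[i]))
    return [past_obs[i] for i in ranked[:n_members]]

# toy archive: past (wind speed, temperature) forecasts with observed 80-m wind
past_f = [(5.0, 10.0), (5.1, 10.2), (9.0, 3.0), (2.0, 20.0)]
past_o = [4.8, 5.0, 8.5, 2.2]
members = analog_ensemble((5.02, 10.05), past_f, past_o, 2)
# -> [4.8, 5.0], the observations of the two closest analogs
```

    The spread of `members` is exactly the spread statistic the abstract uses as a site-by-site measure of predictability.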

  5. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł

    2010-01-01

    We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. Rules of the first kind, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). In contrast, a rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used to improve the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.
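    The inhibitory-rule idea can be sketched as follows (hypothetical predicates and decision values; the actual algorithms extract this information lazily from the decision table rather than enumerating rules):

```python
def classify_with_inhibitory_rules(obj, rules, decisions):
    """Inhibitory rules in sketch form: each rule is (predicate,
    blocked_decision), and every firing rule eliminates exactly one
    decision value; whatever survives is the candidate decision set."""
    remaining = set(decisions)
    for predicate, blocked in rules:
        if predicate(obj):
            remaining.discard(blocked)
    return remaining

rules = [(lambda o: o["temp"] > 30, "cold"),    # hot objects are not 'cold'
         (lambda o: o["humid"] > 0.8, "dry")]   # humid objects are not 'dry'
left = classify_with_inhibitory_rules({"temp": 35, "humid": 0.9},
                                      rules, ["cold", "dry", "hot"])
# -> {"hot"}
```

    When the surviving set is a singleton, elimination has classified the object; bounded nondeterministic rules sit between this and fully deterministic rules by naming a few allowed decisions instead of forbidding one.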

  6. Human resource recommendation algorithm based on ensemble learning and Spark

    Science.gov (United States)

    Cong, Zihan; Zhang, Xingming; Wang, Haoxiang; Xu, Hongjie

    2017-08-01

    Aiming at the problem of “information overload” in the human resources industry, this paper proposes a human resource recommendation algorithm based on ensemble learning. The algorithm considers the characteristics and behaviours of both job seekers and jobs in real business circumstances. First, the algorithm uses two ensemble learning methods, Bagging and Boosting. The outputs from both learning methods are then merged to form a user interest model. Based on the user interest model, job recommendations can be extracted for users. The algorithm is implemented as a parallelized recommendation system on Spark. A set of experiments have been conducted and analysed. The proposed algorithm achieves significant improvements in accuracy, recall rate, and coverage, compared with recommendation algorithms such as UserCF and ItemCF.

  7. Molecular dynamics with deterministic and stochastic numerical methods

    CERN Document Server

    Leimkuhler, Ben

    2015-01-01

    This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications.  Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...
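    As a flavor of the deterministic methods the book covers, a velocity Verlet integrator (the standard symplectic scheme of molecular dynamics) for an illustrative 1-D harmonic potential:

```python
def velocity_verlet(x, v, force, dt, steps, mass=1.0):
    """Velocity Verlet for a single 1-D particle: positions and
    velocities are advanced with the average of old and new forces."""
    f = force(x)
    traj = []
    for _ in range(steps):
        x += v * dt + 0.5 * (f / mass) * dt * dt
        f_new = force(x)
        v += 0.5 * ((f + f_new) / mass) * dt
        f = f_new
        traj.append((x, v))
    return traj

# harmonic oscillator, F = -x: total energy should be (nearly) conserved
traj = velocity_verlet(1.0, 0.0, lambda x: -x, dt=0.01, steps=1000)
```

    The near-constant energy over long runs, rather than raw per-step accuracy, is what makes symplectic integrators the workhorse of deterministic molecular dynamics.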

  8. Antarctic ozone loss in 1989-2010: evidence for ozone recovery?

    Science.gov (United States)

    Kuttippurath, J.; Lefèvre, F.; Pommereau, J.-P.; Roscoe, H. K.; Goutail, F.; Pazmiño, A.; Shanklin, J. D.

    2012-04-01

    We present a detailed estimation of chemical ozone loss in the Antarctic polar vortex from 1989 to 2010. The analyses include ozone loss estimates for 12 Antarctic ground-based (GB) stations. All GB observations show minimum ozone in the late September-early October period. Among the stations, the lowest minimum ozone values are observed at South Pole and the highest at Dumont d'Urville. The ozone loss starts by mid-June at the vortex edge and then progresses towards the vortex core with time. The loss intensifies in August-September, peaks by the end of September-early October, and recovers thereafter. The average ozone loss in the Antarctic is revealed to be about 33-50% in 1989-1992 in agreement with the increase in halogens during this period, and then stayed at around 48% due to saturation of the loss. The ozone loss in the warmer winters (e.g. 2002, and 2004) is lower (37-46%) and in the colder winters (e.g. 2003, and 2006) is higher (52-55%). Because of small inter-annual variability, the correlation between ozone loss and the volume of polar stratospheric clouds yields ~0.51. The GB ozone and ozone loss values are in good agreement with those found from the space-based observations of the Total Ozone Mapping Spectrometer/Ozone Monitoring Instrument (TOMS/OMI), the Global Ozone Monitoring Experiment (GOME), the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY), and the Aura Microwave Limb Sounder (MLS), where the differences are within ±5% and are mostly within the error bars of the measurements. The piece-wise linear trends computed from the September-November vortex average GB and TOMS/OMI ozone show about -4 to -5.6 DU (Dobson Unit) yr-1 in 1989-1996 and about +1 DU yr-1 in 1997-2010. The trend during the former period is significant at 95% confidence intervals, but the trend in 1997-2010 is significant only at 85% confidence intervals. Our analyses suggest a period of about 9-10 yr to get the first detectable ozone
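    Piece-wise linear trends of this kind are ordinary least-squares fits computed separately over each sub-period; a minimal sketch with made-up ozone values:

```python
def linear_trend(years, values):
    """Ordinary least-squares slope, e.g. in DU per year for an ozone
    series; fitting each sub-period separately gives piece-wise trends."""
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, values))
    sxx = sum((x - mx) ** 2 for x in years)
    return sxy / sxx

# made-up ozone values declining by 5 DU per year
slope = linear_trend([1989, 1990, 1991], [220.0, 215.0, 210.0])
```

    A change in the sign of the fitted slope between the two sub-periods, with appropriate confidence intervals, is what such analyses read as a hint of recovery.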

  9. Comparison of different incremental analysis update schemes in a realistic assimilation system with Ensemble Kalman Filter

    Science.gov (United States)

    Yan, Y.; Barth, A.; Beckers, J. M.; Brankart, J. M.; Brasseur, P.; Candille, G.

    2017-07-01

    In this paper, three incremental analysis update schemes (IAU 0, IAU 50 and IAU 100) are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The difference between the three IAU schemes lies in the position of the increment update window. The relevance of each IAU scheme is evaluated through analyses of both thermohaline and dynamical variables. The validation of the assimilation results is performed according to both deterministic and probabilistic metrics against different sources of observations. For deterministic validation, the ensemble mean and the ensemble spread are compared to the observations. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centred random variable (RCRV) score. The obtained results show that: (1) the IAU 50 scheme has the same performance as the IAU 100 scheme; (2) the IAU 50/100 schemes outperform the IAU 0 scheme in error covariance propagation for thermohaline variables in relatively stable regions, while the IAU 0 scheme outperforms the IAU 50/100 schemes in the estimation of dynamical variables in dynamically active regions; (3) in cases with a sufficient number of observations and good error specification, the impact of the IAU scheme is negligible. The differences between the IAU 0 scheme and the IAU 50/100 schemes are mainly due to differences in model integration time and in the instabilities (density inversions, large vertical velocities, etc.) induced by the increment update. The longer model integration time with the IAU 50/100 schemes, especially the free model integration, on the one hand allows for better re-establishment of the equilibrium model state, and on the other hand smooths the strong gradients in dynamically active regions.
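For the probabilistic validation described above, the CRPS of an ensemble against a scalar observation can be estimated directly from the members. A minimal sketch using the empirical kernel form of the CRPS (not the authors' code):

```python
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS of a 1-D ensemble against a scalar observation:
    CRPS = E|X - y| - 0.5 * E|X - X'|, averaged over ensemble members/pairs."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2
```

A perfect zero-spread ensemble sitting on the observation scores 0; the score grows as the ensemble drifts or spreads away from the observation, which is what makes it usable for the reliability/resolution decomposition mentioned in the abstract.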

  10. Prediction of required ozone dosage for pilot recirculating aquaculture systems based on laboratory studies

    DEFF Research Database (Denmark)

    Spiliotopoulou, Aikaterini; Rojas-Tirado, Paula Andrea; Kaarsholm, Kamilla Marie Speht

    2017-01-01

    In recirculating aquaculture systems (RAS), the water quality changes continuously. Organic and inorganic compounds accumulate, creating toxic conditions for the farmed organisms. Ozone improves water quality by significantly diminishing both bacterial load and dissolved organic matter. However......, in a non-meticulously designed system, residual ozone might reach the culture tanks, causing significant harm to cultured species or excess costs. The aim of the study was to predict the suitable ozone dosage in pilot RAS, for water treatment purposes, based on laboratory studies. The ozone effect on water...... quality of freshwater RAS and the system’s ozone demand were investigated. Bench-scale ozonation experiments revealed the ozone demand of the system to be 180 mg O3/h. Three different ozone dosages were applied to four replicated systems with fixed feed loading (1.56 kg feed/m3 make-up water). Results...

  11. New stomatal flux-based critical levels for ozone effects on vegetation

    Science.gov (United States)

    Mills, Gina; Pleijel, Håkan; Braun, Sabine; Büker, Patrick; Bermejo, Victoria; Calvo, Esperanza; Danielsson, Helena; Emberson, Lisa; Fernández, Ignacio González; Grünhage, Ludger; Harmens, Harry; Hayes, Felicity; Karlsson, Per-Erik; Simpson, David

    2011-09-01

    The critical levels for ozone effects on vegetation have been reviewed and revised by the LRTAP Convention. Eight new or revised critical levels based on the accumulated stomatal flux of ozone (POD Y, the Phytotoxic Ozone Dose above a threshold flux of Y nmol m^-2 PLA s^-1, where PLA is the projected leaf area) have been agreed. For each receptor, data were combined from experiments conducted under naturally fluctuating environmental conditions in 2-4 countries, resulting in linear dose-response relationships with response variables specific to each receptor (r2 = 0.49-0.87, p Norway spruce. For (semi-)natural vegetation, the critical level for effects on productive and high-conservation-value perennial grasslands was based on effects on important component species of the genus Trifolium (clover species). These critical levels can be used to assess protection against the damaging effects of ozone on food security, important ecosystem services provided by forest trees (roundwood production, C sequestration, soil stability and flood prevention) and the vitality of pasture.
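The POD Y metric above is simply the stomatal ozone flux accumulated over a threshold. A minimal sketch of that accumulation, assuming hourly flux values; the function name and unit conversion are illustrative, not taken from the LRTAP mapping manual:

```python
import numpy as np

def pod_y(flux_nmol_m2_s, y_threshold, dt_s=3600.0):
    """Accumulated Phytotoxic Ozone Dose above threshold Y (POD_Y).
    flux: hourly stomatal ozone flux values in nmol m^-2 PLA s^-1.
    Returns POD_Y in mmol m^-2 PLA (nmol -> mmol: divide by 1e6)."""
    flux = np.asarray(flux_nmol_m2_s, dtype=float)
    excess = np.clip(flux - y_threshold, 0.0, None)  # only flux above Y counts
    return excess.sum() * dt_s / 1e6

# e.g. ten hours at a constant 7 nmol m^-2 s^-1 with Y = 6 accumulates
# 1 nmol m^-2 s^-1 * 10 h = 36000 nmol m^-2 = 0.036 mmol m^-2
```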

  12. Ensemble based system for whole-slide prostate cancer probability mapping using color texture features.

    LENUS (Irish Health Repository)

    DiFranco, Matthew D

    2011-01-01

    We present a tile-based approach for producing clinically relevant probability maps of prostatic carcinoma in histological sections from radical prostatectomy. Our methodology incorporates ensemble learning for feature selection and classification on expert-annotated images. Random forest feature selection performed over varying training sets provides a subset of generalized CIEL*a*b* co-occurrence texture features, while sample selection strategies with minimal constraints reduce training data requirements to achieve reliable results. Ensembles of classifiers are built using expert-annotated tiles from training images, and scores for the probability of cancer presence are calculated from the responses of each classifier in the ensemble. Spatial filtering of tile-based texture features prior to classification results in increased heat-map coherence as well as AUC values of 95% using ensembles of either random forests or support vector machines. Our approach is designed for adaptation to different imaging modalities, image features, and histological decision domains.
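The core of the tile-based mapping described above is averaging per-tile classifier responses into a probability map and then smoothing spatially. A minimal numpy sketch with random stand-in scores; the grid size, ensemble size, and 3x3 box filter are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical per-tile scores from an ensemble of 5 classifiers over a 6x8 grid
tile_scores = rng.random((5, 6, 8))

# ensemble probability map: mean response across the classifiers, in [0, 1]
prob_map = tile_scores.mean(axis=0)

# simple 3x3 box smoothing (a stand-in for the paper's spatial filtering step)
padded = np.pad(prob_map, 1, mode="edge")
smoothed = sum(padded[i:i + 6, j:j + 8] for i in range(3) for j in range(3)) / 9.0
```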

  13. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices

    Science.gov (United States)

    Monajemi, Hatef; Jafarpour, Sina; Gavish, Matan; Donoho, David L.; Ambikasaran, Sivaram; Bacallado, Sergio; Bharadia, Dinesh; Chen, Yuxin; Choi, Young; Chowdhury, Mainak; Chowdhury, Soham; Damle, Anil; Fithian, Will; Goetz, Georges; Grosenick, Logan; Gross, Sam; Hills, Gage; Hornstein, Michael; Lakkam, Milinda; Lee, Jason; Li, Jian; Liu, Linxi; Sing-Long, Carlos; Marx, Mike; Mittal, Akshay; Monajemi, Hatef; No, Albert; Omrani, Reza; Pekelis, Leonid; Qin, Junjie; Raines, Kevin; Ryu, Ernest; Saxe, Andrew; Shi, Dai; Siilats, Keith; Strauss, David; Tang, Gary; Wang, Chaojun; Zhou, Zoey; Zhu, Zhen

    2013-01-01

    In compressed sensing, one takes n < N samples of an N-dimensional vector x0 using an n × N matrix A, obtaining undersampled measurements y = Ax0. For random matrices with independent standard Gaussian entries, it is known that, when x0 is k-sparse, there is a precisely determined phase transition: for a certain region in the (δ, ρ)-phase diagram, with undersampling fraction δ = n/N and sparsity fraction ρ = k/n, convex optimization typically finds the sparsest solution, whereas outside that region, it typically fails. It has been shown empirically that the same property—with the same phase transition location—holds for a wide range of non-Gaussian random matrix ensembles. We report extensive experiments showing that the Gaussian phase transition also describes numerous deterministic matrices, including Spikes and Sines, Spikes and Noiselets, Paley Frames, Delsarte-Goethals Frames, Chirp Sensing Matrices, and Grassmannian Frames. Namely, for each of these deterministic matrices in turn, for a typical k-sparse object, we observe that convex optimization is successful over a region of the phase diagram that coincides with the region known for Gaussian random matrices. Our experiments considered coefficients constrained to a set X for four different sets X, and the results establish our finding for each of the four associated phase transitions. PMID:23277588
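The experiments behind such phase diagrams rest on solving the convex problem min ||x||_1 subject to Ax = y and checking whether the k-sparse object is recovered. A minimal sketch with a Gaussian matrix at δ = n/N = 0.4, ρ = k/n = 0.1, well inside the success region (for the deterministic ensembles named above one would substitute the corresponding A); the LP reformulation with x = u - v, u, v ≥ 0 is the standard one:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, N, k = 40, 100, 4                      # delta = n/N = 0.4, rho = k/n = 0.1

A = rng.standard_normal((n, N))           # Gaussian sensing matrix
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x0                                # undersampled measurements

# basis pursuit: min ||x||_1 s.t. Ax = y, as an LP in (u, v) with x = u - v
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]             # recovered vector
```

At this (δ, ρ) the minimizer coincides with x0 up to solver tolerance; pushing ρ upward past the transition makes the same code start returning non-sparse solutions.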

  14. Ensemble-based flash-flood modelling: Taking into account hydrodynamic parameters and initial soil moisture uncertainties

    Science.gov (United States)

    Edouard, Simon; Vincendon, Béatrice; Ducrocq, Véronique

    2018-05-01

    Intense precipitation events in the Mediterranean often lead to devastating flash floods (FF). FF modelling is affected by several kinds of uncertainties, and Hydrological Ensemble Prediction Systems (HEPS) are designed to take those uncertainties into account. The major source of uncertainty comes from rainfall forcing, and convective-scale meteorological ensemble prediction systems can manage it for forecasting purposes. But other sources are related to the hydrological modelling part of the HEPS. This study focuses on the uncertainties arising from the hydrological model parameters and initial soil moisture, with the aim of designing an ensemble-based version of a hydrological model dedicated to simulating Mediterranean fast-responding rivers, the ISBA-TOP coupled system. The first step consists in identifying the parameters that have the strongest influence on FF simulations by assuming perfect precipitation. A sensitivity study is carried out, first using a synthetic framework and then for several real events and several catchments. Perturbation methods varying the most sensitive parameters as well as initial soil moisture make it possible to design an ensemble-based version of ISBA-TOP. The first results of this system on some real events are presented. The direct perspective of this work is to drive this ensemble-based version with the members of a convective-scale meteorological ensemble prediction system to build a complete HEPS for FF forecasting.
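The ensemble design described above perturbs the most sensitive hydrodynamic parameters together with initial soil moisture. A minimal sketch of generating such a perturbed member set; the parameter names, nominal values, and perturbation magnitudes below are hypothetical, not those identified by the ISBA-TOP sensitivity study:

```python
import numpy as np

rng = np.random.default_rng(42)
n_members = 16

# nominal values for two illustrative sensitive inputs (names/values hypothetical)
nominal = {"lateral_transmissivity": 2.0e-3, "initial_soil_wetness": 0.55}

# multiplicative log-normal perturbation for the positive-definite parameter,
# additive bounded perturbation for soil wetness; one draw per ensemble member
members = [
    {
        "lateral_transmissivity": nominal["lateral_transmissivity"]
        * np.exp(rng.normal(0.0, 0.3)),
        "initial_soil_wetness": float(
            np.clip(nominal["initial_soil_wetness"] + rng.normal(0.0, 0.05), 0.0, 1.0)
        ),
    }
    for _ in range(n_members)
]
```

Each member dict would then parameterize one hydrological run; driving every member with one rainfall member of a meteorological ensemble gives the complete HEPS the abstract points to.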

  15. An Integrated Scenario Ensemble-Based Framework for Hurricane Evacuation Modeling: Part 1-Decision Support System.

    Science.gov (United States)

    Davidson, Rachel A; Nozick, Linda K; Wachtendorf, Tricia; Blanton, Brian; Colle, Brian; Kolar, Randall L; DeYoung, Sarah; Dresback, Kendra M; Yi, Wenqi; Yang, Kun; Leonardo, Nicholas

    2018-03-30

    This article introduces a new integrated scenario-based evacuation (ISE) framework to support hurricane evacuation decision making. It explicitly captures the dynamics, uncertainty, and human-natural system interactions that are fundamental to the challenge of hurricane evacuation, but have not been fully captured in previous formal evacuation models. The hazard is represented with an ensemble of probabilistic scenarios, population behavior with a dynamic decision model, and traffic with a dynamic user equilibrium model. The components are integrated in a multistage stochastic programming model that minimizes risk and travel times to provide a tree of evacuation order recommendations and an evaluation of the risk and travel time performance for that solution. The ISE framework recommendations offer an advance in the state of the art because they: (1) are based on an integrated hazard assessment (designed to ultimately include inland flooding), (2) explicitly balance the sometimes competing objectives of minimizing risk and minimizing travel time, (3) offer a well-hedged solution that is robust under the range of ways the hurricane might evolve, and (4) leverage the substantial value of increasing information (or decreasing degree of uncertainty) over the course of a hurricane event. A case study for Hurricane Isabel (2003) in eastern North Carolina is presented to demonstrate how the framework is applied, the type of results it can provide, and how it compares to two available methods: single-scenario deterministic analysis and a two-stage stochastic program. © 2018 Society for Risk Analysis.

  16. Ensemble manifold regularization.

    Science.gov (United States)

    Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng

    2012-06-01

    We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and from overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so that it (1) learns both the composite manifold and the semi-supervised learner jointly, (2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, (3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and (4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.

  17. Dobson spectrophotometer ozone measurements during international ozone rocketsonde intercomparison

    Science.gov (United States)

    Parsons, C. L.

    1980-01-01

    Measurements of the total ozone content of the atmosphere, made with seven ground-based instruments at a site near Wallops Island, Virginia, are discussed in terms of serving as control values with which the rocketborne sensor data products can be compared. These products are profiles of O3 concentration with altitude. By integrating over the range of altitudes from the surface to the rocket apogee, and by appropriately estimating the residual ozone amount from apogee to the top of the atmosphere, a total ozone amount can be computed from the profiles that can be directly compared with the ground-based instrumentation results. Dobson spectrophotometers were used for two of the ground-based instruments. Preliminary data collected during the IORI from Dobson spectrophotometers 72 and 38 are presented. The agreement between the two and the variability of the total ozone overburden through the experiment period are discussed.
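Comparing rocket profiles with the Dobson results requires integrating the concentration profile to a total column and adding a residual above apogee, as described above. A minimal sketch using the trapezoid rule and the definition 1 DU = 2.687e20 molecules m^-2; the profile values are synthetic, not IORI data:

```python
import numpy as np

DU_M2 = 2.687e20  # molecules per m^2 in one Dobson Unit

def column_du(z_m, n_m3, residual_du=0.0):
    """Integrate an O3 number-density profile (molecules m^-3 vs. altitude in m)
    with the trapezoid rule, then add the estimated above-apogee residual (DU)."""
    z = np.asarray(z_m, dtype=float)
    n = np.asarray(n_m3, dtype=float)
    column = 0.5 * np.sum((n[1:] + n[:-1]) * np.diff(z))  # molecules m^-2
    return column / DU_M2 + residual_du

# constant-density sanity check: 6e18 m^-3 over 10 km -> 6e22 m^-2, about 223 DU
z = np.linspace(0.0, 1.0e4, 101)
n = np.full_like(z, 6.0e18)
```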

  18. Generalized rate-code model for neuron ensembles with finite populations

    International Nuclear Information System (INIS)

    Hasegawa, Hideo

    2007-01-01

    We have proposed a generalized Langevin-type rate-code model subjected to multiplicative noise, in order to study stationary and dynamical properties of an ensemble containing a finite number N of neurons. Calculations using the Fokker-Planck equation have shown that, owing to the multiplicative noise, our rate model yields various kinds of stationary non-Gaussian distributions such as Γ, inverse-Gaussian-like, and log-normal-like distributions, which have been experimentally observed. The dynamical properties of the rate model have been studied with the use of the augmented moment method (AMM), which was previously proposed by the author from a macroscopic point of view for finite-unit stochastic systems. In the AMM, the original N-dimensional stochastic differential equations (DEs) are transformed into three-dimensional deterministic DEs for the means and fluctuations of local and global variables. The dynamical responses of the neuron ensemble to pulse and sinusoidal inputs calculated by the AMM are in good agreement with those obtained by direct simulation. The synchronization in the neuronal ensemble is discussed. The variabilities of the firing rate and of the interspike interval are shown to increase with increasing magnitude of multiplicative noise, which may be a conceivable origin of the observed large variability in cortical neurons.
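A Langevin rate model with multiplicative noise of the kind discussed above can be simulated directly by Euler-Maruyama integration; the drift and noise parameters below are illustrative stand-ins, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(7)

# Euler-Maruyama for a Langevin rate model with multiplicative noise:
#   dx = (-lam * x + I) dt + beta * x dW      (parameter values illustrative)
lam, I, beta = 1.0, 1.0, 0.5
dt, n_steps, n_neurons = 0.01, 2000, 500

x = np.full(n_neurons, I / lam)           # start at the deterministic fixed point
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_neurons)
    x += (-lam * x + I) * dt + beta * x * dW

# state-dependent (multiplicative) noise skews the stationary distribution
# away from the Gaussian one a purely additive noise term would produce
```

The ensemble mean still relaxes to I/lam (the mean equation is unaffected by zero-mean multiplicative noise), while the spread and skewness grow with beta, mirroring the variability increase the abstract reports.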

  19. Ozonation-based decolorization of food dyes for recovery of fruit leather wastes.

    Science.gov (United States)

    Zhu, Wenda; Koziel, Jacek A; Cai, Lingshuang; Brehm-Stecher, Byron F; Ozsoy, H Duygu; van Leeuwen, J Hans

    2013-08-28

    Commercial manufacture of fruit leathers (FL) usually results in a portion of the product that is out of specification. The disposition of this material poses special challenges in the food industry. Because the material remains edible and contains valuable ingredients (fruit pulp, sugars, acidulates, etc.), an ideal solution would be to recover this material for product rework. A key practical obstacle to such recovery is that compositing of differently colored wastes results in an unsalable gray product. Therefore, a safe and scalable method for decolorization of FL prior to product rework is needed. This research introduces a novel approach utilizing ozonation for color removal. To explore the use of ozonation as a decolorization step, we first applied it to simple solutions of the commonly used food colorants 2-naphthalenesulfonic acid (Red 40), tartrazine (Yellow 5), and erioglaucine (Blue 1). Decolorization was measured by UV/vis spectrometry at visible wavelengths and with a Hunter colorimeter. Volatile and semivolatile byproducts from ozone-based colorant decomposition were identified and quantified with solid-phase microextraction coupled with gas chromatography-mass spectrometry (SPME-GC-MS). Removals of about 65%, 80%, and 90% for Yellow 5, Red 40, and Blue 1, respectively, were accomplished with 70 g of ozone applied per 1 kg of redissolved and resuspended FL. Carbonyl compounds were identified as major byproducts from ozone-induced decomposition of the food colorants. A conservative risk assessment, based on quantification results and published toxicity information for potentially toxic byproducts, determined that ozone-based decolorization of FL before recycling is acceptable from a safety standpoint. A preliminary cost estimate based on recycling of 1000 tons of FL annually suggests a potential annual profit of $275,000 from this practice at one production facility alone.

  20. An "Ensemble Approach" to Modernizing Extreme Precipitation Estimation for Dam Safety Decision-Making

    Science.gov (United States)

    Cifelli, R.; Mahoney, K. M.; Webb, R. S.; McCormick, B.

    2017-12-01

    To ensure structural and operational safety of dams and other water management infrastructure, water resources managers and engineers require information about the potential for heavy precipitation. The methods and data used to estimate extreme rainfall amounts for managing risk are based on 40-year-old science and are in need of improvement. The need to evaluate new approaches based on the best science available has led the states of Colorado and New Mexico to engage a body of scientists and engineers in an innovative "ensemble approach" to updating extreme precipitation estimates. NOAA is at the forefront of one of three technical approaches that make up the "ensemble study"; the three approaches are conducted concurrently and in collaboration with each other. One approach is the conventional deterministic, "storm-based" method, another is a risk-based regional precipitation frequency estimation tool, and the third is an experimental approach utilizing NOAA's state-of-the-art High Resolution Rapid Refresh (HRRR) physically based dynamical weather prediction model. The goal of the overall project is to use the individual strengths of these different methods to define an updated and broadly acceptable state of the practice for evaluation and design of dam spillways. This talk will highlight the NOAA research and NOAA's role in the overarching goal of better understanding and characterizing extreme precipitation estimation uncertainty. The research led by NOAA explores a novel high-resolution dataset and post-processing techniques using a super-ensemble of hourly forecasts from the HRRR model. We also investigate how this rich dataset may be combined with statistical methods to optimally cast the data in probabilistic frameworks. NOAA expertise in the physical processes that drive extreme precipitation is also employed to develop careful testing and improved understanding of the limitations of older estimation methods and assumptions.
The process of decision making in the

  1. Towards a GME ensemble forecasting system: Ensemble initialization using the breeding technique

    Directory of Open Access Journals (Sweden)

    Jan D. Keller

    2008-12-01

    Full Text Available The quantitative forecast of precipitation requires a probabilistic background, particularly with regard to forecast lead times of more than 3 days. As only ensemble simulations can provide useful information on the underlying probability density function, we built a new ensemble forecasting system (GME-EFS) based on the GME model of the German Meteorological Service (DWD). For the generation of appropriate initial ensemble perturbations we chose the breeding technique developed by Toth and Kalnay (1993, 1997), which develops perturbations by estimating the regions of largest model-error-induced uncertainty. This method is applied and tested in the framework of quasi-operational forecasts for a three-month period in 2007. The performance of the resulting ensemble forecasts is compared to the operational ensemble prediction systems ECMWF EPS and NCEP GFS by means of the ensemble spread of free-atmosphere parameters (geopotential and temperature) and the ensemble skill of precipitation forecasting. This comparison indicates that the GME ensemble forecasting system (GME-EFS) provides reasonable forecasts with a spread skill score comparable to that of the NCEP GFS. An analysis with the continuous ranked probability score exhibits a lack of resolution for the GME forecasts compared to the operational ensembles. However, with significant enhancements during the 3-month test period, the first results of our work with the GME-EFS indicate possibilities for further development as well as the potential for later operational usage.
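The breeding technique of Toth and Kalnay used above can be illustrated on a toy chaotic system: run a control and a perturbed forecast, let their difference grow freely, and periodically rescale it to a fixed amplitude so that it aligns with the fastest-growing error directions. A minimal sketch on the Lorenz-63 model (GME itself is vastly larger; amplitude, cycle length, and initial state are illustrative):

```python
import numpy as np

def l63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz-63 tendencies, a standard chaotic toy model."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(state, dt=0.01):
    """One classical RK4 integration step."""
    k1 = l63(state)
    k2 = l63(state + 0.5 * dt * k1)
    k3 = l63(state + 0.5 * dt * k2)
    k4 = l63(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(3)
amp = 1e-2                                  # fixed breeding amplitude
ctrl = np.array([1.0, 1.0, 20.0])
pert = ctrl + amp * rng.standard_normal(3)

for _ in range(50):                         # 50 breeding cycles
    for _ in range(20):                     # free error growth between rescalings
        ctrl, pert = step(ctrl), step(pert)
    diff = pert - ctrl
    bred = amp * diff / np.linalg.norm(diff)  # rescale to the breeding amplitude
    pert = ctrl + bred
```

After the cycles, `bred` is the bred vector: a flow-dependent perturbation suitable for initializing ensemble members, which is exactly the role it plays in the GME-EFS.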

  2. Enhanced WWTP effluent organic matter removal in hybrid ozonation-coagulation (HOC) process catalyzed by Al-based coagulant

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Xin [School of Environmental and Municipal Engineering, Xi’an University of Architecture and Technology, Xi’an, Shaanxi Province, 710055 (China); Jin, Pengkang, E-mail: pkjin@hotmail.com [School of Environmental and Municipal Engineering, Xi’an University of Architecture and Technology, Xi’an, Shaanxi Province, 710055 (China); Hou, Rui [School of Environmental and Municipal Engineering, Xi’an University of Architecture and Technology, Xi’an, Shaanxi Province, 710055 (China); Yang, Lei [Department of Materials Science and Engineering, Monash University, Clayton, VIC, 3800 (Australia); Wang, Xiaochang C., E-mail: xcwang@xauat.edu.cn [School of Environmental and Municipal Engineering, Xi’an University of Architecture and Technology, Xi’an, Shaanxi Province, 710055 (China)

    2017-04-05

    Highlights: • A novel HOC process was first put forward for application in wastewater reclamation. • Interactions between ozone and Al-based coagulants were found in the HOC process. • Ozonation can be catalyzed and enhanced by Al-based coagulants in the HOC process. • The HOC process showed better organics removal than the pre-ozonation-coagulation process. - Abstract: A novel hybrid ozonation-coagulation (HOC) process was developed for application in wastewater reclamation. In this process, ozonation and coagulation occur simultaneously within a single unit. Compared with the conventional pre-ozonation-coagulation process, the HOC process exhibited much better performance in removing dissolved organic matter. In particular, the maximal organic matter removal efficiency was obtained at an ozone dosage of 1 mg O3/mg DOC at each pH value (pH 5, 7 and 9). In order to interpret the mechanism of the HOC process, ozone decomposition was monitored. The results indicated that ozone decomposed much faster in the HOC process. Moreover, by using the O3-resistant hydroxyl radical (·OH) probe compound para-chlorobenzoic acid (pCBA) and electron paramagnetic resonance (EPR) analysis, it was observed that the HOC process generated a higher content of ·OH than the pre-ozonation process. This indicates that the ·OH oxidation reaction, as the key step, can be catalyzed and enhanced by Al-based coagulants and their hydrolyzed products in this process. Thus, based on the catalytic effects of Al-based coagulants on ozonation, the HOC process provides a promising alternative to the conventional technology for wastewater reclamation in terms of higher efficiency.

  3. Ensembl 2017

    OpenAIRE

    Aken, Bronwen L.; Achuthan, Premanand; Akanni, Wasiu; Amode, M. Ridwan; Bernsdorff, Friederike; Bhai, Jyothish; Billis, Konstantinos; Carvalho-Silva, Denise; Cummins, Carla; Clapham, Peter; Gil, Laurent; Girón, Carlos García; Gordon, Leo; Hourlier, Thibaut; Hunt, Sarah E.

    2016-01-01

    Ensembl (www.ensembl.org) is a database and genome browser for enabling research on vertebrate genomes. We import, analyse, curate and integrate a diverse collection of large-scale reference data to create a more comprehensive view of genome biology than would be possible from any individual dataset. Our extensive data resources include evidence-based gene and regulatory region annotation, genome variation and gene trees. An accompanying suite of tools, infrastructure and programmatic access ...

  4. Improved ensemble-mean forecast skills of ENSO events by a zero-mean stochastic model-error model of an intermediate coupled model

    Science.gov (United States)

    Zheng, F.; Zhu, J.

    2015-12-01

    To perform an ensemble-based ENSO probabilistic forecast, the crucial issue is to design a reliable ensemble prediction strategy that includes the major uncertainties of the forecast system. In this study, we developed a new general ensemble perturbation technique to improve the ensemble-mean predictive skill of forecasting ENSO using an intermediate coupled model (ICM). The model uncertainties are first estimated and analyzed from EnKF analysis results obtained by assimilating observed SST. Then, based on the pre-analyzed properties of the model errors, a zero-mean stochastic model-error model is developed, mainly to represent the model uncertainties induced by important physical processes missing from the coupled model (i.e., stochastic atmospheric forcing/MJO, extra-tropical cooling and warming, the Indian Ocean Dipole mode, etc.). Each member of an ensemble forecast is perturbed by the stochastic model-error model at each step during the 12-month forecast process, and the stochastic perturbations are added to the modeled physical fields to mimic the presence of these high-frequency stochastic noises and model biases and their effect on the predictability of the coupled system. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr retrospective forecast experiments. The two forecast schemes are differentiated by whether they consider the model stochastic perturbations, with both initialized by the ensemble-mean analysis states from EnKF. The comparison results suggest that the stochastic model-error perturbations have significant and positive impacts on improving the ensemble-mean prediction skills during the entire 12-month forecast process.
Because the nonlinear feature of the coupled model can induce the nonlinear growth of the added stochastic model errors with model integration, especially through the nonlinear heating mechanism with the vertical advection term of the model, the

  5. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still open two questions: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
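The "tailored 0-1 knapsack" for base-classifier selection can be sketched with the classic dynamic-programming knapsack: maximize the summed validation accuracy of the selected classifiers under a budget on a redundancy cost, so that accurate but mutually redundant classifiers are not all chosen. The paper's tailoring differs in detail; the accuracies, costs, and budget below are made up:

```python
def knapsack_select(accuracies, costs, budget):
    """Classic 0-1 knapsack by dynamic programming: pick base classifiers that
    maximize summed accuracy under an integer 'redundancy cost' budget.
    (A stand-in sketch for the paper's tailored knapsack.)"""
    n = len(accuracies)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]                     # skip classifier i-1
            if costs[i - 1] <= b:                            # or take it
                cand = best[i - 1][b - costs[i - 1]] + accuracies[i - 1]
                if cand > best[i][b]:
                    best[i][b] = cand
    # backtrack the chosen subset of classifier indices
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return best[n][budget], sorted(chosen)

# two cheaper, slightly weaker classifiers beat one strong but costly one
value, picks = knapsack_select([0.9, 0.8, 0.75, 0.6], [4, 3, 2, 1], 5)
```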

  6. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    Directory of Open Access Journals (Sweden)

    Cuicui Zhang

    2014-12-01

    Full Text Available Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still open two questions: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  7. 77 FR 24399 - Approval and Promulgation of Implementation Plans; Georgia; Atlanta; Ozone 2002 Base Year...

    Science.gov (United States)

    2012-04-24

    ... Promulgation of Implementation Plans; Georgia; Atlanta; Ozone 2002 Base Year Emissions Inventory AGENCY... approve the ozone 2002 base year emissions inventory, portion of the state implementation plan (SIP... technology (RACT), contingency measures, a 2002 base- year emissions inventory and other planning SIP...

  8. Height-Deterministic Pushdown Automata

    DEFF Research Database (Denmark)

    Nowotka, Dirk; Srba, Jiri

    2007-01-01

    We define the notion of height-deterministic pushdown automata, a model where for any given input string the stack heights during any (nondeterministic) computation on the input are a priori fixed. Different subclasses of height-deterministic pushdown automata, strictly containing the class of regular languages and still closed under boolean language operations, are considered. Several such language classes have been described in the literature. Here, we suggest a natural and intuitive model that subsumes all the formalisms proposed so far by employing height-deterministic pushdown automata...

  9. The dialectical thinking about deterministic and probabilistic safety analysis

    International Nuclear Information System (INIS)

    Qian Yongbai; Tong Jiejuan; Zhang Zuoyi; He Xuhong

    2005-01-01

    There are two methods in designing and analysing the safety performance of a nuclear power plant, the traditional deterministic method and the probabilistic method. To date, the design of nuclear power plant is based on the deterministic method. It has been proved in practice that the deterministic method is effective on current nuclear power plant. However, the probabilistic method (Probabilistic Safety Assessment - PSA) considers a much wider range of faults, takes an integrated look at the plant as a whole, and uses realistic criteria for the performance of the systems and constructions of the plant. PSA can be seen, in principle, to provide a broader and realistic perspective on safety issues than the deterministic approaches. In this paper, the historical origins and development trend of above two methods are reviewed and summarized in brief. Based on the discussion of two application cases - one is the changes to specific design provisions of the general design criteria (GDC) and the other is the risk-informed categorization of structure, system and component, it can be concluded that the deterministic method and probabilistic method are dialectical and unified, and that they are being merged into each other gradually, and being used in coordination. (authors)

  10. Beginning of the ozone recovery over Europe? − Analysis of the total ozone data from the ground-based observations, 1964−2004

    Directory of Open Access Journals (Sweden)

    J. W. Krzyścin

    2005-07-01

    Full Text Available The total ozone variations over Europe (~50° N) in the period 1964–2004 are analyzed for detection of signals of ozone recovery. The ozone deviations from the long-term monthly means (1964–1980) for selected European stations, where the ozone observations (by Dobson spectrophotometers) have been carried out continuously for at least 3–4 decades, are averaged and examined by a regression model. A new method is proposed to disclose both the ozone trend variations and the date of the trend turnaround. The regression model contains a piecewise linear trend component and terms describing the ozone response to forcing by "natural" changes in the atmosphere. Standard proxies for the dynamically driven ozone variations are used. The Multivariate Adaptive Regression Splines (MARS) methodology and principal component analysis are used to find an optimal set of explanatory variables and the trend pattern. A turnaround of the ozone trend in 1994 is suggested from the pattern of the piecewise linear trend component. Thus, the changes in the ozone mean level are calculated over the periods 1970–1994 and 1994–2003, for both the original time series and the time series with "natural" variations removed. Statistical significance of the changes is derived by bootstrapping. A first stage of recovery (according to the definition of the International Ozone Commission, i.e. a lessening of the negative trend) is found over Europe. It seems possible that the increase in the ozone mean level since 1994 of about 1–2% is due to a superposition of the "natural" processes. Comparison of the total ozone ground-based network (the Dobson and Brewer spectrophotometers) and satellite (TOMS, version 8) data over Europe shows a small bias in the mean values for the period 1996–2004, but the differences between the daily ozone values from these instruments are not trendless, and this may hamper identification of the next stage of the ozone recovery over
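
    The piecewise linear trend described above can be sketched with a hinge parameterization; this is a minimal sketch assuming a turnaround at 1994, with purely illustrative coefficients (the fitted values are not given in this record):

```python
def piecewise_linear_trend(t, a, b1, b2, t0=1994.0):
    """Piecewise linear trend, continuous at the turnaround year t0:
    slope b1 before t0 and slope b2 after it (hinge parameterization)."""
    return a + b1 * (t - t0) + (b2 - b1) * max(0.0, t - t0)

# Illustrative coefficients only: a decline before 1994, a weak rise after.
print(piecewise_linear_trend(1984.0, a=0.0, b1=-0.3, b2=0.1))  # pre-turnaround
print(piecewise_linear_trend(2004.0, a=0.0, b1=-0.3, b2=0.1))  # post-turnaround
```

    In a real fit, a, b1 and b2 (and possibly t0) would be estimated jointly with the "natural" proxy terms of the regression model.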

  11. Deterministic behavioural models for concurrency

    DEFF Research Database (Denmark)

    Sassone, Vladimiro; Nielsen, Mogens; Winskel, Glynn

    1993-01-01

    This paper offers three candidates for a deterministic, noninterleaving, behaviour model which generalizes Hoare traces to the noninterleaving situation. The three models are all proved equivalent in the rather strong sense of being equivalent as categories. The models are: deterministic labelled event structures, generalized trace languages in which the independence relation is context-dependent, and deterministic languages of pomsets.

  12. Skill prediction of local weather forecasts based on the ECMWF ensemble

    Directory of Open Access Journals (Sweden)

    C. Ziehmann

    2001-01-01

    Full Text Available Ensemble prediction has become an essential part of numerical weather forecasting. In this paper we investigate the ability of ensemble forecasts to provide an a priori estimate of the expected forecast skill. Several quantities derived from the local ensemble distribution are investigated for a two year data set of European Centre for Medium-Range Weather Forecasts (ECMWF) temperature and wind speed ensemble forecasts at 30 German stations. The results indicate that the population of the ensemble mode provides useful information about the uncertainty in temperature forecasts. The ensemble entropy is a similarly good measure. This is not true for the spread if it is simply calculated as the variance of the ensemble members with respect to the ensemble mean. The number of clusters in the C regions is almost unrelated to the local skill. For wind forecasts, the results are less promising.
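
    The uncertainty measures named above can be illustrated on a toy ensemble; the fixed-width binning used for the entropy here is an assumption, not the paper's definition:

```python
import math
from collections import Counter

def ensemble_spread(members):
    """Spread as the standard deviation of members about the ensemble mean."""
    mean = sum(members) / len(members)
    return math.sqrt(sum((m - mean) ** 2 for m in members) / len(members))

def ensemble_entropy(members, bin_width=1.0):
    """Shannon entropy of the binned ensemble distribution; the bin width
    is an illustrative choice."""
    counts = Counter(int(m // bin_width) for m in members)
    n = len(members)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

members = [12.1, 12.4, 12.3, 13.9, 12.2]  # toy 5-member 2-m temperature forecast
print(round(ensemble_spread(members), 3))
print(round(ensemble_entropy(members), 3))
```

    A tight cluster of members gives low spread and low entropy; an outlier member raises both, which is the kind of a priori skill signal the paper evaluates.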

  13. Unequivocal detection of ozone recovery in the Antarctic Ozone Hole through significant increases in atmospheric layers with minimum ozone

    Science.gov (United States)

    de Laat, Jos; van Weele, Michiel; van der A, Ronald

    2015-04-01

    An important new landmark in present-day ozone research is presented through MLS satellite observations of significant ozone increases during the ozone hole season that are attributed unequivocally to declining ozone depleting substances. For many decades the Antarctic ozone hole has been the prime example both of the detrimental effects of human activities on our environment and of how to construct effective and successful environmental policies. Nowadays atmospheric concentrations of ozone depleting substances are on the decline, and first signs of recovery of stratospheric ozone and of ozone in the Antarctic ozone hole have been observed. The claimed detection of significant recovery, however, is still a subject of debate. In this talk we will first discuss current uncertainties in the assessment of ozone recovery in the Antarctic ozone hole using multi-variate regression methods, and secondly present an alternative approach to identify ozone hole recovery unequivocally. Even though multi-variate regression methods help to reduce uncertainties in estimates of ozone recovery, great care has to be taken in their application due to the existence of uncertainties and degrees of freedom in the choice of independent variables. We show that, taking all uncertainties in the regressions into account, the formal recovery of ozone in the Antarctic ozone hole cannot be established yet, though it is likely before the end of the decade (before 2020). Rather than focusing on time and area averages of total ozone columns or ozone profiles, we argue that the time evolution of the probability distribution of vertically resolved ozone in the Antarctic ozone hole contains a better fingerprint for the detection of ozone recovery in the Antarctic ozone hole. The advantages of this method over more traditional methods of trend analysis based on spatio-temporal average ozone are discussed. The 10-year record of MLS satellite measurements of ozone in the Antarctic ozone hole shows a

  14. On the skill of various ensemble spread estimators for probabilistic short range wind forecasting

    Science.gov (United States)

    Kann, A.

    2012-05-01

    A variety of applications, ranging from civil protection associated with severe weather to economic interests, are heavily dependent on meteorological information. For example, precise planning of an energy supply with a high share of renewables requires detailed meteorological information at high temporal and spatial resolution. With respect to wind power, detailed analyses and forecasts of wind speed are of crucial interest for energy management. Although the applicability and the current skill of state-of-the-art probabilistic short range forecasts have increased during recent years, ensemble systems still show systematic deficiencies which limit their practical use. This paper presents methods to improve the ensemble skill of 10-m wind speed forecasts by combining deterministic information from a nowcasting system at very high horizontal resolution with uncertainty estimates from a limited area ensemble system. It is shown for a one month validation period that a statistical post-processing procedure (a modified non-homogeneous Gaussian regression) adds further skill to the probabilistic forecasts, especially beyond the nowcasting range after +6 h.
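
    The post-processing step can be sketched as a plain non-homogeneous Gaussian regression (the paper uses a modified variant whose details are not given in this record); the coefficients a, b, c, d below are illustrative placeholders, normally fitted to past forecast/observation pairs:

```python
import math

def ngr_predictive(ens_mean, ens_var, a=0.2, b=0.95, c=0.3, d=0.8):
    """Non-homogeneous Gaussian regression: predictive distribution
    N(a + b*mean, c + d*var), so the predictive variance grows with
    the ensemble variance."""
    mu = a + b * ens_mean
    sigma = math.sqrt(c + d * ens_var)
    return mu, sigma

def exceedance_prob(threshold, mu, sigma):
    """P(wind speed > threshold) under the calibrated Gaussian."""
    return 0.5 * math.erfc((threshold - mu) / (sigma * math.sqrt(2.0)))

mu, sigma = ngr_predictive(ens_mean=8.0, ens_var=1.5)   # 10-m wind, m/s
print(round(exceedance_prob(10.0, mu, sigma), 3))
```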

  15. Evaluation of quantitative precipitation forecasts by TIGGE ensembles for south China during the presummer rainy season

    Science.gov (United States)

    Huang, Ling; Luo, Yali

    2017-08-01

    Based on The Observing System Research and Predictability Experiment Interactive Grand Global Ensemble (TIGGE) data set, this study evaluates the ability of global ensemble prediction systems (EPSs) from the European Centre for Medium-Range Weather Forecasts (ECMWF), U.S. National Centers for Environmental Prediction, Japan Meteorological Agency (JMA), Korean Meteorological Administration, and China Meteorological Administration (CMA) to predict presummer rainy season (April-June) precipitation in south China. Evaluation of 5 day forecasts in three seasons (2013-2015) demonstrates the higher skill of probability matching forecasts compared to simple ensemble mean forecasts and shows that the deterministic forecast is a close second. The EPSs overestimate light-to-heavy rainfall (0.1 to 30 mm/12 h) and underestimate heavier rainfall (>30 mm/12 h), with JMA being the worst. By analyzing the synoptic situations predicted by the identified more skillful (ECMWF) and less skillful (JMA and CMA) EPSs and the ensemble sensitivity for four representative cases of torrential rainfall, the transport of warm-moist air into south China by the low-level southwesterly flow, upstream of the torrential rainfall regions, is found to be a key synoptic factor that controls the quantitative precipitation forecast. The results also suggest that prediction of locally produced torrential rainfall is more challenging than prediction of more extensively distributed torrential rainfall. A slight improvement in the performance is obtained by shortening the forecast lead time from 30-36 h to 18-24 h to 6-12 h for the cases with large-scale forcing, but not for the locally produced cases.

  16. Three-dimensional theory of quantum memories based on Λ-type atomic ensembles

    International Nuclear Information System (INIS)

    Zeuthen, Emil; Grodecka-Grad, Anna; Soerensen, Anders S.

    2011-01-01

    We develop a three-dimensional theory for quantum memories based on light storage in ensembles of Λ-type atoms, where two long-lived atomic ground states are employed. We consider light storage in an ensemble of finite spatial extent and we show that within the paraxial approximation the Fresnel number of the atomic ensemble and the optical depth are the only important physical parameters determining the quality of the quantum memory. We analyze the influence of these parameters on the storage of light followed by either forward or backward read-out from the quantum memory. We show that for small Fresnel numbers the forward memory provides higher efficiencies, whereas for large Fresnel numbers the backward memory is advantageous. The optimal light modes to store in the memory are presented together with the corresponding spin waves and outcoming light modes. We show that for high optical depths such Λ-type atomic ensembles allow for highly efficient backward and forward memories even for small Fresnel numbers F ≳ 0.1.

  17. An empirical study of ensemble-based semi-supervised learning approaches for imbalanced splice site datasets.

    Science.gov (United States)

    Stanescu, Ana; Caragea, Doina

    2015-01-01

    Recent biochemical advances have led to inexpensive, time-efficient production of massive volumes of raw genomic data. Traditional machine learning approaches to genome annotation typically rely on large amounts of labeled data. The process of labeling data can be expensive, as it requires domain knowledge and expert involvement. Semi-supervised learning approaches that can make use of unlabeled data, in addition to small amounts of labeled data, can help reduce the costs associated with labeling. In this context, we focus on the problem of predicting splice sites in a genome using semi-supervised learning approaches. This is a challenging problem, due to the highly imbalanced distribution of the data, i.e., small number of splice sites as compared to the number of non-splice sites. To address this challenge, we propose to use ensembles of semi-supervised classifiers, specifically self-training and co-training classifiers. Our experiments on five highly imbalanced splice site datasets, with positive to negative ratios of 1-to-99, showed that the ensemble-based semi-supervised approaches represent a good choice, even when the amount of labeled data consists of less than 1% of all training data. In particular, we found that ensembles of co-training and self-training classifiers that dynamically balance the set of labeled instances during the semi-supervised iterations show improvements over the corresponding supervised ensemble baselines. In the presence of limited amounts of labeled data, ensemble-based semi-supervised approaches can successfully leverage the unlabeled data to enhance supervised ensembles learned from highly imbalanced data distributions. Given that such distributions are common for many biological sequence classification problems, our work can be seen as a stepping stone towards more sophisticated ensemble-based approaches to biological sequence annotation in a semi-supervised framework.
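
    The self-training component can be illustrated with a deliberately simple 1-D nearest-centroid base learner; the ensemble and co-training machinery of the paper are omitted, and the distance margin is an invented stand-in for a confidence score:

```python
def centroid(points):
    return sum(points) / len(points)

def self_train(pos, neg, unlabeled, rounds=3, margin=1.0):
    """Self-training: each round, unlabeled points classified with a
    confident distance margin are pseudo-labeled and added to the
    training set; the rest stay unlabeled."""
    pos, neg, unlabeled = list(pos), list(neg), list(unlabeled)
    for _ in range(rounds):
        cp, cn = centroid(pos), centroid(neg)
        still = []
        for x in unlabeled:
            dp, dn = abs(x - cp), abs(x - cn)
            if dn - dp > margin:
                pos.append(x)        # confidently positive
            elif dp - dn > margin:
                neg.append(x)        # confidently negative
            else:
                still.append(x)      # too ambiguous, keep unlabeled
        unlabeled = still
    return pos, neg, unlabeled

pos, neg, rest = self_train([0.0, 1.0], [10.0, 11.0], [0.5, 2.0, 9.0, 5.2])
print(sorted(pos), sorted(neg), rest)
```

    The ambiguous point near the midpoint is never pseudo-labeled, which mirrors why the paper balances the labeled set dynamically across iterations.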

  18. Ozonation control and effects of ozone on water quality in recirculating aquaculture systems

    DEFF Research Database (Denmark)

    Spiliotopoulou, Aikaterini; Rojas-Tirado, Paula Andrea; Chetri, Ravi K.

    2018-01-01

    To address the undesired effects of chemotherapeutants in aquaculture, ozone has been suggested as an alternative to improve water quality. To ensure safe and robust treatment, it is vital to define the ozone demand and ozone kinetics of the specific water matrix to avoid ozone overdose. Different ozone dosages were applied to water in freshwater recirculating aquaculture systems (RAS). Experiments were performed to investigate ozone kinetics and demand, and to evaluate the effects on the water quality, particularly in relation to fluorescent organic matter. This study aimed at predicting a suitable ozone dosage for water treatment based on daily ozone demand via laboratory studies. These ozone dosages will eventually be applied and maintained at these levels in pilot-scale RAS to verify predictions. Selected water quality parameters were measured, including natural fluorescence and organic...

  19. Ozone decay in chemical reactor for ozone-dynamical disintegration of used tyres

    International Nuclear Information System (INIS)

    Golota, V.I.; Manuilenko, O.V.; Taran, G.V.; Dotsenko, Yu.V.; Pismenetskii, A.S.; Zamuriev, A.A.; Benitskaja, V.A.

    2011-01-01

    The ozone decay kinetics in a chemical reactor intended for used-tyre disintegration is investigated experimentally and theoretically. Ozone was synthesized in barrierless ozonizers based on the streamer discharge. The chemical reactor for tyre disintegration in an ozone-air environment is a cylindrical chamber, fed from the ozonizer with an ozone-air mixture at a specified volumetric flow rate and with known ozone concentration. The used mixture, whose volumetric flow rate is also known, is extracted through an ozone destructor. As a result of ozone decay in the volume and on the reactor walls, and of the outflow of the used mixture, the ozone concentration in the reactor varies with time. In the paper, an analytical expression is obtained for the dependence of the ozone concentration in the reactor on time and on the parameters of the problem: the volumetric feed rate, the ozone concentration at the reactor inlet, the volumetric flow rate of the used mixture, the volume of the reactor and the area of its internal surface. It is shown that the experimental results agree well with the analytical ones.
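
    One plausible reading of the described balance is a well-mixed reactor with first-order volume and wall losses; this model form and all parameter values below are assumptions for illustration, not the paper's actual derivation:

```python
import math

def ozone_concentration(t, Q, V, C_in, k_v, k_w, S, C0=0.0):
    """Ozone concentration at time t in a well-mixed reactor: volumetric
    feed rate Q, reactor volume V, inlet concentration C_in, first-order
    volume decay k_v, wall decay k_w over internal surface area S."""
    lam = Q / V + k_v + (S / V) * k_w      # total first-order loss rate
    C_ss = (Q / V) * C_in / lam            # steady-state concentration
    return C_ss + (C0 - C_ss) * math.exp(-lam * t)

# Relaxation from an empty reactor toward steady state (toy parameters).
print(round(ozone_concentration(300.0, Q=1.0, V=100.0, C_in=10.0,
                                k_v=0.01, k_w=0.001, S=50.0), 3))
```

    The exponential relaxation toward a feed/loss steady state is the qualitative behavior the abstract attributes to its analytical expression.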

  20. Regionalization based on spatial and seasonal variation in ground-level ozone concentrations across China.

    Science.gov (United States)

    Cheng, Linjun; Wang, Shuai; Gong, Zhengyu; Li, Hong; Yang, Qi; Wang, Yeyao

    2018-05-01

    Owing to the vast territory of China and the strong regional character of ozone pollution, it is desirable for policy makers to have a targeted and prioritized regulation and ozone pollution control strategy in China based on scientific evidence. It is important to assess the current pollution status as well as spatial and temporal variation patterns across China. Recent advances in national monitoring networks provide an opportunity to gain insight into ozone pollution. Here, we present a rotated empirical orthogonal function (REOF) analysis used to study the spatiotemporal characteristics of daily ozone concentrations. Based on the results of REOF analysis in pollution seasons for 3 years' observations, twelve regions with clear patterns were identified in China. The patterns of temporal variation of ozone in each region were well separated and differed from each other, reflecting local meteorological, photochemical or pollution features. A rising trend in annual averaged eight-hour average ozone concentrations (O3-8hr) from 2014 to 2016 was observed for all regions, except for the Tibetan Plateau. The mean values of annual and 90th-percentile concentrations for all 338 cities were 82.6 ± 14.6 and 133.9 ± 25.8 μg/m3, respectively, in 2015. The regionalization results for ozone were found to be influenced greatly by terrain features, indicating significant terrain and landform effects on ozone spatial correlations. Among the 12 regions, the North China Plain, Huanghuai Plain, Central Yangtze River Plain, Pearl River Delta and Sichuan Basin were identified as priority regions for mitigation strategies, due to their higher ozone concentrations and dense population. Copyright © 2017. Published by Elsevier B.V.

  1. Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction

    Science.gov (United States)

    Mons, Vincent; Wang, Qi; Zaki, Tamer

    2017-11-01

    Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and is challenging due to several aspects. Firstly, the numerical estimation of the scalar dispersion in a turbulent flow requires significant computational resources. Secondly, in actual practice, only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180 . This approach combines the components of variational data assimilation and ensemble Kalman filtering, and inherits the robustness from the former and the ease of implementation from the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the condition of the inverse problem, which enhances the performances of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).
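
    A minimal sketch of one stochastic ensemble Kalman analysis step for a single scalar source parameter; the forward operator and all numbers here are invented for illustration, and the ensemble-variational scheme of the abstract is considerably richer:

```python
import random

def enkf_update(ensemble, obs, obs_err_var, forward, seed=0):
    """One stochastic (perturbed-observation) EnKF analysis step for a
    scalar parameter; `forward` maps the parameter to the observed value."""
    n = len(ensemble)
    hx = [forward(x) for x in ensemble]
    x_bar = sum(ensemble) / n
    h_bar = sum(hx) / n
    cov_xh = sum((x - x_bar) * (h - h_bar) for x, h in zip(ensemble, hx)) / (n - 1)
    var_h = sum((h - h_bar) ** 2 for h in hx) / (n - 1)
    gain = cov_xh / (var_h + obs_err_var)      # Kalman gain
    rng = random.Random(seed)                  # fixed seed: reproducible sketch
    return [x + gain * (obs + rng.gauss(0.0, obs_err_var ** 0.5) - h)
            for x, h in zip(ensemble, hx)]

prior = [0.5, 1.5, 2.5, 3.5]                   # toy source-strength ensemble
post = enkf_update(prior, obs=6.0, obs_err_var=0.1, forward=lambda s: 2.0 * s)
print(sum(post) / len(post))                   # pulled toward obs/2 = 3.0
```

    The update both shifts the ensemble mean toward the observation and shrinks its spread, which is the behavior the variational minimization inherits from the Kalman component.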

  2. Improvement of OMI Ozone Profile Retrievals in the Troposphere and Lower Troposphere by the Use of the Tropopause-Based Ozone Profile Climatology

    Science.gov (United States)

    Bak, Juseon; Liu, X.; Wei, J.; Kim, J. H.; Chance, K.; Barnet, C.

    2011-01-01

    An advanced algorithm based on the optimal estimation technique has been developed to derive ozone profiles from GOME UV radiances and has been adapted to OMI UV radiances. The OMI vertical resolution is 7-11 km in the troposphere and 10-14 km in the stratosphere. Satellite ultraviolet measurements (GOME, OMI) contain little vertical information on small-scale ozone structure, especially in the upper troposphere (UT) and lower stratosphere (LS), where the sharp O3 gradient across the tropopause and large ozone variability are observed. Therefore, retrievals depend greatly on the a priori knowledge in the UTLS

  3. The Ensembl REST API: Ensembl Data for Any Language.

    Science.gov (United States)

    Yates, Andrew; Beal, Kathryn; Keenan, Stephen; McLaren, William; Pignatelli, Miguel; Ritchie, Graham R S; Ruffier, Magali; Taylor, Kieron; Vullo, Alessandro; Flicek, Paul

    2015-01-01

    We present a Web service to access Ensembl data using Representational State Transfer (REST). The Ensembl REST server enables the easy retrieval of a wide range of Ensembl data by most programming languages, using standard formats such as JSON and FASTA while minimizing client work. We also introduce bindings to the popular Ensembl Variant Effect Predictor tool permitting large-scale programmatic variant analysis independent of any specific programming language. The Ensembl REST API can be accessed at http://rest.ensembl.org and source code is freely available under an Apache 2.0 license from http://github.com/Ensembl/ensembl-rest. © The Author 2014. Published by Oxford University Press.
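
    As a quick illustration, the documented /lookup/symbol endpoint can be queried with Python's standard library alone (no Ensembl-specific bindings); the gene symbol below is just an example query:

```python
import json
import urllib.request

SERVER = "https://rest.ensembl.org"

def lookup_symbol_url(species, symbol):
    """URL for the gene-symbol lookup endpoint, requesting JSON output."""
    return f"{SERVER}/lookup/symbol/{species}/{symbol}?content-type=application/json"

def lookup_symbol(species, symbol):
    """Fetch the lookup result from the live REST server (needs network access)."""
    with urllib.request.urlopen(lookup_symbol_url(species, symbol)) as resp:
        return json.load(resp)

# Example (live call, so commented out here):
#   gene = lookup_symbol("homo_sapiens", "BRCA2")
#   print(gene["id"])   # the Ensembl stable gene ID
print(lookup_symbol_url("homo_sapiens", "BRCA2"))
```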

  4. A target recognition method for maritime surveillance radars based on hybrid ensemble selection

    Science.gov (United States)

    Fan, Xueman; Hu, Shengliang; He, Jingbo

    2017-11-01

    In order to improve the generalisation ability of the maritime surveillance radar, a novel ensemble selection technique, termed Optimisation and Dynamic Selection (ODS), is proposed. During the optimisation phase, the non-dominated sorting genetic algorithm II for multi-objective optimisation is used to find the Pareto front, i.e. a set of ensembles of classifiers representing different tradeoffs between the classification error and diversity. During the dynamic selection phase, the meta-learning method is used to predict whether a candidate ensemble is competent enough to classify a query instance based on three different aspects, namely, feature space, decision space and the extent of consensus. The classification performance and time complexity of ODS are compared against nine other ensemble methods using a self-built full polarimetric high resolution range profile data-set. The experimental results clearly show the effectiveness of ODS. In addition, the influence of the selection of diversity measures is studied concurrently.

  5. Evaluation of medium-range ensemble flood forecasting based on calibration strategies and ensemble methods in Lanjiang Basin, Southeast China

    Science.gov (United States)

    Liu, Li; Gao, Chao; Xuan, Weidong; Xu, Yue-Ping

    2017-11-01

    Ensemble flood forecasts by hydrological models using numerical weather prediction products as forcing data are becoming more commonly used in operational flood forecasting applications. In this study, a hydrological ensemble flood forecasting system comprising an automatically calibrated Variable Infiltration Capacity model and quantitative precipitation forecasts from the TIGGE dataset is constructed for Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated. The hydrological model is optimized by the parallel programmed ε-NSGA II multi-objective algorithm. According to the solutions by ε-NSGA II, two differently parameterized models are determined to simulate daily flows and peak flows at each of the three hydrological stations. Then a simple yet effective modular approach is proposed to combine these daily and peak flows at the same station into one composite series. Five ensemble methods and various evaluation metrics are adopted. The results show that ε-NSGA II can provide an objective determination of parameter estimation, and the parallel program permits a more efficient simulation. It is also demonstrated that the forecasts from ECMWF have more favorable skill scores than the other Ensemble Prediction Systems. The multimodel ensembles have advantages over all the single model ensembles, and the multimodel methods weighted on members and skill scores outperform the other methods. Furthermore, the overall performance at the three stations can be satisfactory up to ten days; however, the hydrological errors can degrade the skill score by approximately 2 days, and the influence persists until a lead time of 10 days with a weakening trend. With respect to peak flows selected by the Peaks Over Threshold approach, the ensemble means from single models or multimodels are generally underestimated, indicating that the ensemble mean can bring overall improvement in forecasting of flows. For
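
    One simple reading of the "weighted on skill scores" combination is a skill-proportional weighted mean; the paper's exact weighting scheme may differ, and the numbers below are invented:

```python
def weighted_ensemble_mean(forecasts, skill_scores):
    """Combine single-model forecasts with weights proportional to their
    skill scores; higher-skill members contribute more to the multimodel
    ensemble value."""
    total = sum(skill_scores)
    return sum((s / total) * f for s, f in zip(skill_scores, forecasts))

# toy daily-flow forecasts (m3/s) from three EPS-driven model chains
print(round(weighted_ensemble_mean([820.0, 760.0, 900.0], [0.9, 0.6, 0.75]), 2))
```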

  6. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: randomized algorithms which are guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  7. Deterministic Predictions of Vessel Responses Based on Past Measurements

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Jensen, Jørgen Juncher

    2017-01-01

    The paper deals with a prediction procedure from which global wave-induced responses can be deterministically predicted a short time, 10-50 s, ahead of current time. The procedure relies on the autocorrelation function and takes into account prior measurements only; i.e. knowledge about wave...

  8. The role of ensemble-based statistics in variational assimilation of cloud-affected observations from infrared imagers

    Science.gov (United States)

    Hacker, Joshua; Vandenberghe, Francois; Jung, Byoung-Jo; Snyder, Chris

    2017-04-01

    Effective assimilation of cloud-affected radiance observations from space-borne imagers, with the aim of improving cloud analysis and forecasting, has proven to be difficult. Large observation biases, nonlinear observation operators, and non-Gaussian innovation statistics present many challenges. Ensemble-variational data assimilation (EnVar) systems offer the benefits of flow-dependent background error statistics from an ensemble, and the ability of variational minimization to handle nonlinearity. The specific benefits of ensemble statistics, relative to static background errors more commonly used in variational systems, have not been quantified for the problem of assimilating cloudy radiances. A simple experiment framework is constructed with a regional NWP model and operational variational data assimilation system, to provide the basis understanding the importance of ensemble statistics in cloudy radiance assimilation. Restricting the observations to those corresponding to clouds in the background forecast leads to innovations that are more Gaussian. The number of large innovations is reduced compared to the more general case of all observations, but not eliminated. The Huber norm is investigated to handle the fat tails of the distributions, and allow more observations to be assimilated without the need for strict background checks that eliminate them. Comparing assimilation using only ensemble background error statistics with assimilation using only static background error statistics elucidates the importance of the ensemble statistics. Although the cost functions in both experiments converge to similar values after sufficient outer-loop iterations, the resulting cloud water, ice, and snow content are greater in the ensemble-based analysis. The subsequent forecasts from the ensemble-based analysis also retain more condensed water species, indicating that the local environment is more supportive of clouds. 
In this presentation we provide details that explain the

  9. Fault Detection for Nonlinear Process With Deterministic Disturbances: A Just-In-Time Learning Based Data Driven Method.

    Science.gov (United States)

    Yin, Shen; Gao, Huijun; Qiu, Jianbin; Kaynak, Okyay

    2017-11-01

    Data-driven fault detection plays an important role in industrial systems due to its applicability in the case of unknown physical models. In fault detection, disturbances must be taken into account as an inherent characteristic of processes. Nevertheless, fault detection for nonlinear processes with deterministic disturbances still receives little attention, especially in the data-driven field. To solve this problem, a just-in-time learning-based data-driven (JITL-DD) fault detection method for nonlinear processes with deterministic disturbances is proposed in this paper. JITL-DD employs a JITL scheme for process description with local model structures to cope with process dynamics and nonlinearity. The proposed method provides a data-driven fault detection solution for nonlinear processes with deterministic disturbances, and offers inherent online adaptation and high fault detection accuracy. Two nonlinear systems, i.e., a numerical example and a sewage treatment process benchmark, are employed to show the effectiveness of the proposed method.

  10. Identifying Different Transportation Modes from Trajectory Data Using Tree-Based Ensemble Classifiers

    Directory of Open Access Journals (Sweden)

    Zhibin Xiao

    2017-02-01

    Full Text Available Recognition of transportation modes can be used in different applications including human behavior research, transport management and traffic control. Previous work on transportation mode recognition has often relied on using multiple sensors or matching Geographic Information System (GIS) information, which is not possible in many cases. In this paper, an approach based on ensemble learning is proposed to infer hybrid transportation modes using only Global Position System (GPS) data. First, in order to distinguish between different transportation modes, we used a statistical method to generate global features and extract several local features from sub-trajectories after trajectory segmentation, before these features were combined in the classification stage. Second, to obtain a better performance, we used tree-based ensemble models (Random Forest, Gradient Boosting Decision Tree, and XGBoost) instead of traditional methods (K-Nearest Neighbor, Decision Tree, and Support Vector Machines) to classify the different transportation modes. The experimental results on the latter have shown the efficacy of our proposed approach. Among them, the XGBoost model produced the best performance with a classification accuracy of 90.77% obtained on the GEOLIFE dataset, and we used a tree-based ensemble method to ensure accurate feature selection so as to reduce the model complexity.
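
    The ensemble idea behind the tree-based models can be reduced to a majority vote over simple classifiers; the stumps, feature names and thresholds below are invented stand-ins for trained trees:

```python
from collections import Counter

def majority_vote(classifiers, features):
    """Tree-style ensemble prediction by majority vote; a minimal stand-in
    for the Random Forest / XGBoost models evaluated in the paper."""
    votes = [clf(features) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# toy stumps on (mean_speed, max_speed) in m/s from one GPS sub-trajectory
stumps = [
    lambda f: "walk" if f[0] < 2.0 else "drive",
    lambda f: "walk" if f[1] < 7.0 else "drive",
    lambda f: "walk" if f[0] < 1.5 else "drive",
]
print(majority_vote(stumps, (1.2, 3.0)))
```

    A real forest differs in how the trees are grown (bootstrap samples, random feature subsets, boosting), but the aggregation step is essentially this vote.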

  11. An Ensemble Learning Based Framework for Traditional Chinese Medicine Data Analysis with ICD-10 Labels

    Directory of Open Access Journals (Sweden)

    Gang Zhang

    2015-01-01

    Full Text Available Objective. This study aims to establish a model to analyze the clinical experience of TCM veteran doctors. We propose an ensemble learning based framework to analyze clinical records with ICD-10 label information for effective diagnosis and acupoint recommendation. Methods. We propose an ensemble learning framework for the analysis task. A set of base learners composed of decision trees (DT) and support vector machines (SVM) is trained by bootstrapping the training dataset. The base learners are sorted by accuracy and diversity through a nondominated sort (NDS) algorithm and combined through a deep ensemble learning strategy. Results. We evaluate the proposed method in comparison with two currently successful methods on a clinical diagnosis dataset with manually labeled ICD-10 information. ICD-10 label annotation and acupoint recommendation are evaluated for all three methods. The proposed method achieves an accuracy rate of 88.2% ± 2.8% measured by zero-one loss for the first evaluation session and 79.6% ± 3.6% measured by Hamming loss, which are superior to the other two methods. Conclusion. The proposed ensemble model can effectively model the implied knowledge and experience in historic clinical data records. The computational cost of training a set of base learners is relatively low.
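
    The nondominated sort step, which ranks base learners by accuracy and diversity jointly, amounts to a Pareto-front computation; a minimal sketch with invented (accuracy, diversity) pairs:

```python
def nondominated_sort(learners):
    """Sort (accuracy, diversity) pairs into Pareto fronts (higher is
    better in both); front 0 holds the nondominated learners."""
    fronts, remaining = [], list(learners)
    while remaining:
        # A learner is nondominated if no other is >= in both criteria
        front = [p for p in remaining
                 if not any(q != p and q[0] >= p[0] and q[1] >= p[1]
                            for q in remaining)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

# Invented (accuracy, diversity) pairs for four base learners
learners = [(0.9, 0.2), (0.8, 0.5), (0.7, 0.6), (0.6, 0.4)]
print(nondominated_sort(learners))
```

    The first front here keeps three learners that trade accuracy against diversity; the dominated fourth learner falls to the second front.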

  12. Det-WiFi: A Multihop TDMA MAC Implementation for Industrial Deterministic Applications Based on Commodity 802.11 Hardware

    Directory of Open Access Journals (Sweden)

    Yujun Cheng

    2017-01-01

    Full Text Available Wireless control systems for industrial automation have been gaining popularity in recent years thanks to their ease of deployment and the low cost of their components. However, traditional low-sample-rate industrial wireless sensor networks cannot support high-speed applications, while high-speed IEEE 802.11 networks are not designed for real-time applications and cannot provide deterministic features. Thus, in this paper, we propose Det-WiFi, a real-time TDMA MAC implementation for high-speed multihop industrial applications. It is able to support high-speed applications and provide deterministic network features, since it combines the advantages of a high-speed IEEE 802.11 physical layer and a software Time Division Multiple Access (TDMA) based MAC layer. We implement Det-WiFi on commercial off-the-shelf hardware and compare the deterministic performance of 802.11s and Det-WiFi in a real industrial environment full of field devices and industrial equipment. We varied the hop number and the packet payload size in each experiment, and all of the results show that Det-WiFi has better deterministic performance.
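
    The deterministic property of a TDMA MAC of this kind comes from a collision-free slot schedule; a minimal greedy sketch (topology and frame length are hypothetical, and the real MAC also handles time synchronization and retransmission):

```python
def tdma_schedule(flows, frame_slots):
    """Greedy, collision-free slot assignment: every hop (sender, receiver)
    of every multihop flow gets its own slot in the TDMA frame."""
    schedule = {}  # slot index -> link
    slot = 0
    for path in flows:
        for link in zip(path, path[1:]):
            if slot >= frame_slots:
                raise ValueError("frame too short for all hops")
            schedule[slot] = link
            slot += 1
    return schedule

# Two hypothetical flows over a string topology A-B-C-D
flows = [["A", "B", "C", "D"], ["D", "C", "B", "A"]]
print(tdma_schedule(flows, frame_slots=8))
```

    Because each link transmits only in its own slot, the worst-case delay of a packet is bounded by the frame length, which is what makes the latency deterministic.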

  13. Measurements of total and tropospheric ozone from IASI: comparison with correlative satellite, ground-based and ozonesonde observations

    Directory of Open Access Journals (Sweden)

    A. Boynard

    2009-08-01

    Full Text Available In this paper, we present measurements of total and tropospheric ozone retrieved from infrared radiance spectra recorded by the Infrared Atmospheric Sounding Interferometer (IASI), which was launched on board the MetOp-A European satellite in October 2006. We compare IASI total ozone columns to Global Ozone Monitoring Experiment-2 (GOME-2) observations and ground-based measurements from the Dobson and Brewer network for one full year of observations (2008). The IASI total ozone columns are shown to be in good agreement with both GOME-2 and ground-based data, with correlation coefficients of about 0.9 and 0.85, respectively. On average, IASI ozone retrievals exhibit a positive bias of about 9 DU (3.3%) compared to both GOME-2 and ground-based measurements. In addition to total ozone columns, the good spectral resolution of IASI enables the retrieval of tropospheric ozone concentrations. Comparisons of IASI tropospheric columns to 490 collocated ozone soundings available from several stations around the globe have been performed for the period June 2007–August 2008. IASI tropospheric ozone columns compare well with sonde observations, with correlation coefficients of 0.95 and 0.77 for the [surface–6 km] and [surface–12 km] partial columns, respectively. IASI retrievals tend to overestimate the tropospheric ozone columns in comparison with ozonesonde measurements. Positive average biases of 0.15 DU (1.2%) and 3 DU (11%) are found for the [surface–6 km] and [surface–12 km] partial columns, respectively.

  14. Prediction of drug synergy in cancer using ensemble-based machine learning techniques

    Science.gov (United States)

    Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder

    2018-04-01

    Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents. It can be developed as a pre-processing tool for therapeutic successes. Examination of different drug-drug interactions can be done via the drug synergy score. It needs efficient regression-based machine learning approaches to minimize the prediction errors. Numerous machine learning techniques such as neural networks, support vector machines, random forests, LASSO, Elastic Nets, etc., have been used in the past to realize the requirement mentioned above. However, these techniques individually do not provide significant accuracy for the drug synergy score. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques have been implemented on the drug synergy data. Based on the accuracy of each model, the four techniques with the highest accuracy are selected to develop the ensemble-based machine learning model. These models are Random Forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning method (GFS.GCCL), Adaptive-Network-Based Fuzzy Inference System (ANFIS) and Dynamic Evolving Neural-Fuzzy Inference System method (DENFIS). Ensembling is achieved by evaluating the biased weighted aggregation (i.e. adding more weight to the models with higher prediction scores) of the data predicted by the selected models. The proposed and existing machine learning techniques have been evaluated on drug synergy score data. The comparative analysis reveals that the proposed method outperforms the others in terms of accuracy, root mean square error and coefficient of correlation.
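
    The biased weighted aggregation step can be sketched as follows, with hypothetical per-model synergy predictions and accuracy scores (the paper's actual base models are Random Forest, GFS.GCCL, ANFIS and DENFIS):

```python
def weighted_ensemble(predictions, scores):
    """Biased weighted aggregation: average per-model predictions with
    weights proportional to each model's accuracy score."""
    total = sum(scores)
    weights = [s / total for s in scores]
    n = len(predictions[0])
    return [sum(w * preds[i] for w, preds in zip(weights, predictions))
            for i in range(n)]

# Hypothetical synergy-score predictions from four base models
preds = [[1.0, 2.0], [1.2, 2.2], [0.8, 1.8], [1.0, 2.0]]
scores = [0.9, 0.8, 0.6, 0.7]  # higher score -> larger weight
print(weighted_ensemble(preds, scores))
```

    The more accurate models pull the aggregate toward their own predictions, which is the "biased" aspect of the weighting.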

  15. Ensemble-based forecasting at Horns Rev: Ensemble conversion and kernel dressing

    DEFF Research Database (Denmark)

    Pinson, Pierre; Madsen, Henrik

    The obtained ensemble forecasts of wind power are then converted into predictive distributions with an original adaptive kernel dressing method. The shape of the kernels is driven by a mean-variance model, the parameters of which are recursively estimated in order to maximize the overall skill of obtained

  16. New flux based dose–response relationships for ozone for European forest tree species

    International Nuclear Information System (INIS)

    Büker, P.; Feng, Z.; Uddling, J.; Briolat, A.; Alonso, R.; Braun, S.; Elvira, S.; Gerosa, G.; Karlsson, P.E.; Le Thiec, D.

    2015-01-01

    To derive O3 dose–response relationships (DRR) for five European forest tree species and broadleaf deciduous and needleleaf tree plant functional types (PFTs), phytotoxic O3 doses (PODy) were related to biomass reductions. PODy was calculated using a stomatal flux model with a range of cut-off thresholds (y) indicative of varying detoxification capacities. Linear regression analysis showed that the DRR for PFTs and individual tree species differed in their robustness. A simplified parameterisation of the flux model was tested and showed that, for most non-Mediterranean tree species, this simplified model led to similarly robust DRR as a species- and climate-region-specific parameterisation. Experimentally induced soil water stress was not found to substantially reduce PODy, mainly due to the short duration of soil water stress periods. This study validates the stomatal O3 flux concept and represents a step forward in predicting O3 damage to forests in a spatially and temporally varying climate. - Highlights: • We present new ozone flux based dose–response relationships for European trees. • The model-based study accounted for the soil water effect on stomatal flux. • Different statistically derived ozone flux thresholds were applied. • Climate region specific parameterisation often outperformed simplified parameterisation. • Findings could help redefine critical levels for ozone effects on trees. - New stomatal flux based ozone dose–response relationships for tree species are derived for the regional risk assessment of ozone effects on European forest ecosystems.
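
    The PODy metric underlying these dose–response relationships accumulates the stomatal flux in excess of the cut-off threshold y; a minimal sketch, with invented hourly flux values and simplified unit handling:

```python
def pod_y(hourly_flux, y):
    """Accumulated phytotoxic ozone dose above a flux threshold y (POD_y).

    hourly_flux: stomatal O3 flux per hour, in nmol m-2 s-1 (one value/hour).
    Only the portion of the flux exceeding y is accumulated; the result is
    converted from nmol m-2 to mmol m-2."""
    seconds_per_hour = 3600
    dose_nmol = sum((f - y) * seconds_per_hour for f in hourly_flux if f > y)
    return dose_nmol * 1e-6  # nmol m-2 -> mmol m-2

# Invented hourly fluxes; only hours with flux > 1.0 contribute
flux = [0.5, 1.2, 2.0, 3.5, 0.8]
print(round(pod_y(flux, y=1.0), 5))
```

    A larger y mimics a larger detoxification capacity: less of the flux is counted as phytotoxic dose.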

  17. Intelligent and robust prediction of short term wind power using genetic programming based ensemble of neural networks

    International Nuclear Information System (INIS)

    Zameer, Aneela; Arshad, Junaid; Khan, Asifullah; Raja, Muhammad Asif Zahoor

    2017-01-01

    Highlights: • Genetic programming based ensemble of neural networks is employed for short term wind power prediction. • The proposed predictor shows resilience against abrupt changes in weather. • Genetic programming evolves a nonlinear mapping between meteorological measures and wind power. • The proposed approach gives mathematical expressions relating wind power to its independent variables. • The proposed model shows relatively accurate and steady wind-power prediction performance. - Abstract: The inherent instability of wind power production leads to critical problems for smooth power generation from wind turbines, which then requires an accurate forecast of wind power. In this study, an effective short term wind power prediction methodology is presented, which uses an intelligent ensemble regressor that comprises Artificial Neural Networks and Genetic Programming. In contrast to existing series-based combinations of wind power predictors, whereby the error or variation in the leading predictor is propagated downstream to the next predictors, the proposed intelligent ensemble predictor avoids this shortcoming by introducing a Genetic Programming based semi-stochastic combination of neural networks. It is observed that the decisions of the individual base regressors may vary due to the frequent and inherent fluctuations in the atmospheric conditions and thus meteorological properties. The novelty of the reported work lies in creating an ensemble to generate an intelligent, collective and robust decision space and thereby avoiding large errors due to the sensitivity of the individual wind predictors. The proposed ensemble based regressor, Genetic Programming based ensemble of Artificial Neural Networks, has been implemented and tested on data taken from five different wind farms located in Europe. Obtained numerical results of the proposed model in terms of various error measures are compared with the recent artificial intelligence based strategies to demonstrate the

  18. A new strategy for snow-cover mapping using remote sensing data and ensemble based systems techniques

    Science.gov (United States)

    Roberge, S.; Chokmani, K.; De Sève, D.

    2012-04-01

    The snow cover plays an important role in the hydrological cycle of Quebec (Eastern Canada). Consequently, evaluating its spatial extent interests the authorities responsible for the management of water resources, especially hydropower companies. The main objective of this study is the development of a snow-cover mapping strategy using remote sensing data and ensemble based systems techniques. Planned to be tested in a near real-time operational mode, this snow-cover mapping strategy has the advantage of providing the probability of a pixel being snow covered, together with its uncertainty. Ensemble systems are made of two key components. First, a method is needed to build an ensemble of classifiers that is as diverse as possible. Second, an approach is required to combine the outputs of the individual classifiers that make up the ensemble in such a way that correct decisions are amplified and incorrect ones are cancelled out. In this study, we demonstrate the potential of ensemble systems for snow-cover mapping using remote sensing data. The chosen classifier is a sequential thresholds algorithm using NOAA-AVHRR data adapted to conditions over Eastern Canada. Its special feature is the use of a combination of six sequential thresholds varying according to the day in the winter season. Two versions of the snow-cover mapping algorithm have been developed: one specific to autumn (from October 1st to December 31st) and the other to spring (from March 16th to May 31st). In order to build the ensemble based system, different versions of the algorithm are created by randomly varying its parameters. One hundred such versions are included in the ensemble. The probability of a pixel being snow, no-snow or cloud covered corresponds to the proportion of classifiers that voted for that class. The overall performance of ensemble based mapping is compared to the overall performance of the chosen classifier, and also with ground observations at meteorological

  19. Constructing Better Classifier Ensemble Based on Weighted Accuracy and Diversity Measure

    Directory of Open Access Journals (Sweden)

    Xiaodong Zeng

    2014-01-01

    Full Text Available This paper presents weighted accuracy and diversity (WAD), a novel measure for evaluating the quality of a classifier ensemble, assisting in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis: a robust classifier ensemble should not only be accurate, but its members should also differ from one another. In fact, accuracy and diversity are mutually constraining factors; an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble on unknown data. The quality assessment for an ensemble is performed such that the final score is achieved by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and to two threshold measures that consider only accuracy or only diversity, using two heuristic search algorithms, a genetic algorithm and forward hill-climbing, in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases.
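
    One plausible reading of the WAD score, as a weighted harmonic mean of ensemble accuracy and diversity (the exact weighting scheme in the paper may differ; the weights below are illustrative):

```python
def wad(accuracy, diversity, w_acc=0.5, w_div=0.5):
    """Weighted harmonic mean of ensemble accuracy and diversity.

    Two weight parameters balance the factors; the harmonic mean punishes
    an ensemble that is strong in one factor but weak in the other."""
    return (w_acc + w_div) / (w_acc / accuracy + w_div / diversity)

# An accurate-but-uniform ensemble vs. a more balanced one
print(round(wad(0.95, 0.10), 4))
print(round(wad(0.90, 0.60), 4))
```

    The harmonic mean is what makes the score selective: the accurate-but-uniform ensemble scores far lower than the balanced one.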

  20. A convection-allowing ensemble forecast based on the breeding growth mode and associated optimization of precipitation forecast

    Science.gov (United States)

    Li, Xiang; He, Hongrang; Chen, Chaohui; Miao, Ziqing; Bai, Shigang

    2017-10-01

    A convection-allowing ensemble forecast experiment on a squall line was conducted based on the breeding growth mode (BGM). Meanwhile, the probability matched mean (PMM) and neighborhood ensemble probability (NEP) methods were used to optimize the associated precipitation forecast. The ensemble forecast predicted the precipitation tendency accurately and was closer to the observations than the control forecast. For heavy rainfall, the precipitation center produced by the ensemble forecast was also better placed. The Fractions Skill Score (FSS) results indicated that the ensemble mean was skillful for light rainfall, while the PMM produced a better probability distribution of precipitation for heavy rainfall. Preliminary results demonstrate that convection-allowing ensemble forecasting can improve precipitation forecast skill by providing valuable probability forecasts. It is necessary to employ new methods, such as the PMM and NEP, to generate precipitation probability forecasts. Nonetheless, the lack of spread and the overprediction of precipitation by the ensemble members are still problems that need to be solved.
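
    The probability matched mean keeps the smooth spatial pattern of the ensemble mean but restores the amplitude distribution of the pooled member values (the plain mean damps heavy-rain maxima); a minimal sketch on toy flattened fields:

```python
def probability_matched_mean(members):
    """Probability matched mean: keep the spatial pattern of the ensemble
    mean but take the amplitude distribution from the pooled member values.

    members: list of equally sized, flattened precipitation fields."""
    n_mem, n_pts = len(members), len(members[0])
    mean = [sum(m[i] for m in members) / n_mem for i in range(n_pts)]
    pooled = sorted((v for m in members for v in m), reverse=True)
    targets = pooled[::n_mem]  # n_pts values, largest first
    # Rank grid points by the ensemble mean, then reassign amplitudes
    ranked = sorted(range(n_pts), key=mean.__getitem__, reverse=True)
    pmm = [0.0] * n_pts
    for rank, idx in enumerate(ranked):
        pmm[idx] = targets[rank]
    return pmm

# Two toy members on a 4-point grid
m1 = [0.0, 2.0, 8.0, 1.0]
m2 = [1.0, 3.0, 6.0, 0.0]
print(probability_matched_mean([m1, m2]))
```

    Note how the wettest point recovers the pooled maximum (8.0) instead of the damped ensemble-mean value (7.0).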

  1. Ensemble-based prediction of RNA secondary structures.

    Science.gov (United States)

    Aghaeepour, Nima; Hoos, Holger H

    2013-04-24

    Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between

  2. A Two-Timescale Response to Ozone Depletion: Importance of the Background State

    Science.gov (United States)

    Seviour, W.; Waugh, D.; Gnanadesikan, A.

    2015-12-01

    It has recently been suggested that the response of Southern Ocean sea-ice extent to stratospheric ozone depletion is time-dependent: the ocean surface initially cools due to enhanced northward Ekman drift caused by a poleward shift in the eddy-driven jet, and then warms after some time due to upwelling of warm waters from below the mixed layer. It is therefore possible that ozone depletion could act to favor a short-term increase in sea-ice extent. However, many uncertainties remain in understanding this mechanism, with different models showing widely differing time-scales and magnitudes of the response. Here, we analyze an ensemble of coupled model simulations with a step-function ozone perturbation. The two-timescale response is present, with an approximately 30-year initial cooling period. The response is further shown to be highly dependent upon the background ocean temperature and salinity stratification, which is influenced by both natural internal variability and the isopycnal eddy mixing parameterization. It is suggested that the majority of inter-model differences in the Southern Ocean response to ozone depletion are caused by differences in stratification.

  3. New dynamic NNORSY ozone profile climatology

    Science.gov (United States)

    Kaifel, A. K.; Felder, M.; Declercq, C.; Lambert, J.-C.

    2012-01-01

    Climatological ozone profile data are widely used as a priori information for total ozone retrievals of the DOAS type, for ozone profile retrieval using optimal estimation, for data assimilation, for the evaluation of 3-D chemistry-transport models, and for many other applications in atmospheric sciences and remote sensing. For most applications it is important that the climatology represents not only long-term mean values but also the links between ozone and dynamic input parameters. These dynamic input parameters should be easily accessible from auxiliary datasets or easily measurable, and should obviously correlate strongly with ozone. For ozone profiles, these parameters are mainly the total ozone column and temperature profile data. This was the outcome of a user consultation carried out in the framework of developing a new, dynamic ozone profile climatology. The new ozone profile climatology is based on the Neural Network Ozone Retrieval System (NNORSY), widely used for ozone profile retrieval from UV and IR satellite sounder data. NNORSY allows implicit modelling of any non-linear correspondence between input parameters (predictors) and the ozone profile target vector. This paper presents the approach, setup and validation of a new family of ozone profile climatologies with static as well as dynamic input parameters (total ozone and temperature profile). The neural network training relies on ozone profile measurement data of well known quality provided by ground based (ozonesondes) and satellite based (SAGE II, HALOE, and POAM-III) measurements over the years 1995-2007. In total, four different combinations (modes) of input parameters (date, geolocation, total ozone column and temperature profile) are available. 
The geophysical validation spans from pole to pole using independent ozonesonde, lidar and satellite data (ACE-FTS, AURA-MLS) for individual and time series comparisons as well as for analysing the vertical and meridian structure of different modes of

  4. Deterministic linear-optics quantum computing based on a hybrid approach

    International Nuclear Information System (INIS)

    Lee, Seung-Woo; Jeong, Hyunseok

    2014-01-01

    We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources.

  5. Deterministic linear-optics quantum computing based on a hybrid approach

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Woo; Jeong, Hyunseok [Center for Macroscopic Quantum Control, Department of Physics and Astronomy, Seoul National University, Seoul, 151-742 (Korea, Republic of)

    2014-12-04

    We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources.

  6. Impact of increasing heat waves on U.S. ozone episodes in the 2050s: Results from a multimodel analysis using extreme value theory

    Science.gov (United States)

    Shen, L.; Mickley, L. J.; Gilleland, E.

    2016-04-01

    We develop a statistical model using extreme value theory to estimate the 2000-2050 changes in ozone episodes across the United States. We model the relationships between daily maximum temperature (Tmax) and maximum daily 8 h average (MDA8) ozone in May-September over 2003-2012 using a Point Process (PP) model. At ~20% of the sites, a marked decrease in the ozone-temperature slope occurs at high temperatures, defined as ozone suppression. The PP model sometimes fails to capture ozone-Tmax relationships, so we refit the ozone-Tmax slope using logistic regression and a generalized Pareto distribution model. We then apply the resulting hybrid-extreme value theory model to projections of Tmax from an ensemble of downscaled climate models. Assuming constant anthropogenic emissions at the present level, we find an average increase of 2.3 d a⁻¹ in ozone episodes (>75 ppbv) across the United States by the 2050s, with changes of +3–9 d a⁻¹ at many sites.
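
    As a simplified sketch of the peaks-over-threshold idea behind the hybrid model, a generalized Pareto tail with shape ξ = 0 (an exponential tail) can be fitted by the mean excess over the threshold; the MDA8 values, thresholds and season length below are invented:

```python
import math

def fit_exponential_tail(values, threshold):
    """Peaks-over-threshold fit of a GPD with shape xi = 0 (exponential
    tail): the maximum-likelihood scale is the mean excess over threshold."""
    excesses = [v - threshold for v in values if v > threshold]
    scale = sum(excesses) / len(excesses)
    rate = len(excesses) / len(values)  # empirical exceedance probability
    return scale, rate

def expected_episode_days(scale, rate, level, threshold, season_days):
    """Expected days per season with ozone above `level` ppbv."""
    p = rate * math.exp(-(level - threshold) / scale)
    return p * season_days

# Invented MDA8 ozone values (ppbv) for a handful of summer days
mda8 = [50, 55, 62, 70, 66, 58, 75, 61, 59, 64]
scale, rate = fit_exponential_tail(mda8, threshold=60)
print(round(expected_episode_days(scale, rate, 75, 60, 153), 2))
```

    The paper's model is richer (a nonzero GPD shape and a temperature-dependent exceedance rate), but the episode-count calculation follows the same tail logic.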

  7. The ARPAL operational high resolution Poor Man's Ensemble, description and validation

    Science.gov (United States)

    Corazza, Matteo; Sacchetti, Davide; Antonelli, Marta; Drofa, Oxana

    2018-05-01

    The Meteo Hydrological Functional Center for Civil Protection of the Environmental Protection Agency of the Liguria Region is responsible for issuing forecasts primarily aimed at Civil Protection needs. Several deterministic high resolution models, run every 6 or 12 h, are regularly used in the Center to elaborate weather forecasts at short to medium range. The Region is frequently affected by severe flash floods over its very small basins, which are characterized by steep orography close to the sea. These conditions have led the Center in past years to pay particular attention to the use and development of high resolution model chains for the explicit simulation of convective phenomena. For years, forecasters have exploited the availability of several models for subjective analyses of the potential evolution of the atmosphere and of its uncertainty. More recently, an Interactive Poor Man's Ensemble has been developed, aimed at providing statistical ensemble variables to support the forecasters' evaluations. In this paper the structure of this system is described and its results are validated using the dense regional ground observation network.

  8. Analyzing the uncertainty of ensemble-based gridded observations in land surface simulations and drought assessment

    Science.gov (United States)

    Ahmadalipour, Ali; Moradkhani, Hamid

    2017-12-01

    Hydrologic modeling is one of the primary tools utilized for drought monitoring and drought early warning systems. Several sources of uncertainty in hydrologic modeling have been addressed in the literature. However, few studies have assessed the uncertainty of gridded observation datasets from a drought monitoring perspective. This study provides a hydrologic-modeling-oriented analysis of gridded observation data uncertainties over the Pacific Northwest (PNW) and their implications for drought assessment. We utilized a recently developed 100-member ensemble of observed forcing data to simulate hydrologic fluxes at 1/8° spatial resolution using the Variable Infiltration Capacity (VIC) model, and compared the results with a deterministic observation dataset. Meteorological and hydrological droughts are studied at multiple timescales over the basin, and seasonal long-term trends and variations of drought extent are investigated for each case. Results reveal large uncertainty in the observed datasets at the monthly timescale, with systematic differences in temperature records, mainly due to different lapse rates. This uncertainty results in large disparities in drought characteristics. In general, an increasing trend is found for winter drought extent across the PNW. Furthermore, a ∼3% decrease per decade is detected for snow water equivalent (SWE) over the PNW, with the region being more susceptible to SWE variations in the northern Rockies than in the western Cascades. The agricultural areas of southern Idaho demonstrate a decreasing trend of natural soil moisture as a result of precipitation decline, which implies greater demand for anthropogenic water storage and irrigation systems.

  9. Conceptual design for the breakwater system of the south of Doson naval base : Optimisation versus deterministic design

    NARCIS (Netherlands)

    Viet, N.D.; Verhagen, H.J.; Van Gelder, P.H.A.J.M.; Vrijling, J.K.

    2008-01-01

    In 2006 a Vietnamese Engineering Consultancy Company carried out a design study of a Naval Base at the location of the South of Doson Peninsula in Vietnam. A deterministic approach applied to the conceptual design of the breakwater system of the Naval Base resulted in a cross-section with a big

  10. ROCOZ-A (improved rocket launched ozone sensor) for middle atmosphere ozone measurements

    International Nuclear Information System (INIS)

    Lee, H.S.; Parsons, C.L.

    1987-01-01

    An improved interference-filter-based ultraviolet photometer (ROCOZ-A) for measuring stratospheric ozone is discussed. The payload is launched aboard a Super-Loki to a typical apogee of 70 km. The instrument measures the solar ultraviolet irradiance as it descends on a parachute. The total cumulative ozone is then calculated from the Beer-Lambert law. The precision of the cumulative ozone measured in this way is 2.0% to 2.5% over the altitude range of 20 to 55 km. Results of the intercomparison between ROCOZ-A data and SBUV overpass data are also discussed.
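
    The Beer-Lambert retrieval step can be sketched as follows; the irradiance and cross-section values are purely illustrative, and the real instrument additionally accounts for the solar zenith angle and the filter bandpass:

```python
import math

def ozone_column_above(i_measured, i_top, cross_section, airmass=1.0):
    """Cumulative ozone column above the payload via the Beer-Lambert law:
    I = I_top * exp(-sigma * mu * N)  =>  N = ln(I_top / I) / (sigma * mu).

    cross_section: ozone absorption cross-section sigma, cm^2 per molecule;
    airmass: optical path factor mu; returns N in molecules cm^-2."""
    return math.log(i_top / i_measured) / (cross_section * airmass)

# Illustrative numbers only: sigma ~ 1e-17 cm^2 in the Hartley band
n = ozone_column_above(i_measured=0.37, i_top=1.0, cross_section=1e-17)
print("%.3e molecules cm^-2" % n)
```

    Differencing the column between successive altitudes during descent then yields the ozone profile.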

  11. Ensemble-based evaluation of extreme water levels for the eastern Baltic Sea

    Science.gov (United States)

    Eelsalu, Maris; Soomere, Tarmo

    2016-04-01

    The risks and damages associated with coastal flooding, which naturally grow with the magnitude of extreme storm surges, are one of the largest concerns of countries with extensive low-lying nearshore areas. The relevant risks are even more pronounced for semi-enclosed water bodies such as the Baltic Sea, where subtidal (weekly-scale) variations in the water volume of the sea substantially contribute to the water level and lead to a large spread in projections of future extreme water levels. We explore the options for using large ensembles of projections to more reliably evaluate return periods of extreme water levels. Single projections of the ensemble are constructed by fitting several sets of block maxima with various extreme value distributions. The ensemble is based on two simulated data sets produced at the Swedish Meteorological and Hydrological Institute: a hindcast by the Rossby Centre Ocean model sampled with a resolution of 6 h, and a similar hindcast by the circulation model NEMO with a resolution of 1 h. As the annual maxima of water levels in the Baltic Sea are not always uncorrelated, we employ maxima for calendar years and for stormy seasons. As the shape parameter of the Generalised Extreme Value distribution changes its sign and substantially varies in magnitude along the eastern coast of the Baltic Sea, the use of a single distribution for the entire coast is inappropriate. The ensemble involves projections based on the Generalised Extreme Value, Gumbel and Weibull distributions. The parameters of these distributions are evaluated in three different ways: by the maximum likelihood method and by the method of moments based on both biased and unbiased estimates. The total number of projections in the ensemble is 40. As some of the resulting estimates contain limited additional information, the members of pairs of projections that are highly correlated are assigned weights of 0.6. A comparison of the ensemble-based projection of
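
    One member of such an ensemble, a method-of-moments Gumbel fit to annual maxima together with the corresponding return-level formula, can be sketched with invented water-level data:

```python
import math
import statistics

def gumbel_fit_moments(maxima):
    """Method-of-moments fit of a Gumbel distribution to block maxima."""
    beta = statistics.stdev(maxima) * math.sqrt(6.0) / math.pi
    mu = statistics.mean(maxima) - 0.5772156649015329 * beta  # Euler-Mascheroni
    return mu, beta

def return_level(mu, beta, period_years):
    """Level exceeded on average once per `period_years` block maxima."""
    p = 1.0 / period_years
    return mu - beta * math.log(-math.log(1.0 - p))

annual_max = [96, 110, 134, 88, 152, 121, 99, 143, 105, 128]  # cm, invented
mu, beta = gumbel_fit_moments(annual_max)
print("50-year level: %.1f cm" % return_level(mu, beta, 50))
```

    Repeating the fit with other distributions (GEV, Weibull) and estimators (maximum likelihood, biased/unbiased moments) and over both calendar-year and stormy-season maxima yields the 40-member ensemble described above.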

  12. The Drag-based Ensemble Model (DBEM) for Coronal Mass Ejection Propagation

    Science.gov (United States)

    Dumbović, Mateja; Čalogović, Jaša; Vršnak, Bojan; Temmer, Manuela; Mays, M. Leila; Veronig, Astrid; Piantschitsch, Isabell

    2018-02-01

    The drag-based model for heliospheric propagation of coronal mass ejections (CMEs) is a widely used analytical model that can predict CME arrival time and speed at a given heliospheric location. It is based on the assumption that the propagation of CMEs in interplanetary space is solely under the influence of magnetohydrodynamical drag, where CME propagation is determined based on CME initial properties as well as the properties of the ambient solar wind. We present an upgraded version, the drag-based ensemble model (DBEM), that covers ensemble modeling to produce a distribution of possible ICME arrival times and speeds. Multiple runs using uncertainty ranges for the input values can be performed in almost real-time, within a few minutes. This allows us to define the most likely ICME arrival times and speeds, quantify prediction uncertainties, and determine forecast confidence. The performance of the DBEM is evaluated and compared to that of ensemble WSA-ENLIL+Cone model (ENLIL) using the same sample of events. It is found that the mean error is ME = ‑9.7 hr, mean absolute error MAE = 14.3 hr, and root mean square error RMSE = 16.7 hr, which is somewhat higher than, but comparable to ENLIL errors (ME = ‑6.1 hr, MAE = 12.8 hr and RMSE = 14.4 hr). Overall, DBEM and ENLIL show a similar performance. Furthermore, we find that in both models fast CMEs are predicted to arrive earlier than observed, most likely owing to the physical limitations of models, but possibly also related to an overestimation of the CME initial speed for fast CMEs.
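
    The analytic drag-based kinematics underlying the DBEM can be sketched as follows (the inputs are illustrative; the DBEM wraps many such runs with perturbed initial speed, solar wind speed and drag parameter to build an arrival-time distribution):

```python
import math

def dbm_distance(t, v0, w, gamma, r0=0.0):
    """Distance travelled under the analytic drag-based model,
    a = -gamma * (v - w) * |v - w|; t in s, speeds in km/s, gamma in km^-1."""
    sign = 1.0 if v0 >= w else -1.0
    x = 1.0 + sign * gamma * (v0 - w) * t  # sign*(v0-w) = |v0-w| >= 0
    return r0 + sign * math.log(x) / gamma + w * t

def arrival_time(target_km, v0, w, gamma, t_max=1.0e6):
    """Bisect for the time at which the CME front reaches target_km."""
    lo, hi = 0.0, t_max
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if dbm_distance(mid, v0, w, gamma) < target_km:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative inputs only: 1000 km/s CME, 400 km/s wind, gamma = 2e-8 km^-1
t = arrival_time(1.5e8, v0=1000.0, w=400.0, gamma=2e-8)
print("arrival after %.1f hours" % (t / 3600.0))
```

    Because the solution is closed-form, thousands of perturbed runs complete in minutes, which is what makes near-real-time ensemble forecasting feasible.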

  13. R-FCN Object Detection Ensemble based on Object Resolution and Image Quality

    DEFF Research Database (Denmark)

    Rasmussen, Christoffer Bøgelund; Nasrollahi, Kamal; Moeslund, Thomas B.

    2017-01-01

    Object detection can be difficult due to challenges such as variations in objects both inter- and intra-class. Additionally, variations can also be present between images. Based on this, research was conducted into creating an ensemble of Region-based Fully Convolutional Networks (R-FCN) object d...

  14. Ensemble data assimilation in the Red Sea: sensitivity to ensemble selection and atmospheric forcing

    KAUST Repository

    Toye, Habib

    2017-05-26

    We present our efforts to build an ensemble data assimilation and forecasting system for the Red Sea. The system consists of the high-resolution Massachusetts Institute of Technology general circulation model (MITgcm) to simulate ocean circulation and of the Data Research Testbed (DART) for ensemble data assimilation. DART has been configured to integrate all members of an ensemble adjustment Kalman filter (EAKF) in parallel, based on which we adapted the ensemble operations in DART to use an invariant ensemble, i.e., an ensemble Optimal Interpolation (EnOI) algorithm. This approach requires only a single forward model integration in the forecast step and therefore saves substantial computational cost. To deal with the strong seasonal variability of the Red Sea, the EnOI ensemble is then seasonally selected from a climatology of long-term model outputs. Observations of remote sensing sea surface height (SSH) and sea surface temperature (SST) are assimilated every 3 days. Real-time atmospheric fields from the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium-Range Weather Forecasts (ECMWF) are used as forcing in different assimilation experiments. We investigate the behaviors of the EAKF and (seasonal-) EnOI and compare their performances for assimilating and forecasting the circulation of the Red Sea. We further assess the sensitivity of the assimilation system to various filtering parameters (ensemble size, inflation) and atmospheric forcing.
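The core of the EnOI analysis step mentioned above can be sketched as follows: a single forecast is corrected with a Kalman gain built from a static ensemble of climatological anomalies, so no ensemble of forecasts needs to be integrated. Dimensions and covariances are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 50, 20, 5        # state size, static-ensemble size, obs count

# Static (invariant) ensemble drawn from a long model "climatology";
# its anomalies supply the background covariance B.
X_clim = rng.normal(size=(n, m))
A = X_clim - X_clim.mean(axis=1, keepdims=True)
B = A @ A.T / (m - 1)

# Observe p randomly chosen state components with error covariance R.
H = np.zeros((p, n))
H[np.arange(p), rng.choice(n, size=p, replace=False)] = 1.0
R = 0.25 * np.eye(p)

x_f = rng.normal(size=n)                       # single forecast state
y = rng.multivariate_normal(np.zeros(p), R)    # synthetic observations

# EnOI analysis: one Kalman update, one model integration per cycle.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_f + K @ (y - H @ x_f)
```

The seasonal selection in the study amounts to choosing which climatological members populate `X_clim` for the current season.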

  15. 20 Years of Total and Tropical Ozone Time Series Based on European Satellite Observations

    Science.gov (United States)

    Loyola, D. G.; Heue, K. P.; Coldewey-Egbers, M.

    2016-12-01

    Ozone is an important trace gas in the atmosphere: while the stratospheric ozone layer protects the earth's surface from incident UV radiation, tropospheric ozone acts as a greenhouse gas and causes health damage as well as crop loss. The total ozone column is dominated by the stratospheric column; the tropospheric column contributes only about 10% to the total. The ozone column data from the European satellite instruments GOME, SCIAMACHY, OMI, GOME-2A and GOME-2B are available within the ESA Climate Change Initiative project with a high degree of inter-sensor consistency. The tropospheric ozone columns are based on the convective cloud differential algorithm. The datasets encompass a period of more than 20 years between 1995 and 2015; for the trend analysis the data sets were harmonized relative to one of the instruments. For the tropics we found an increase in the tropospheric ozone column of 0.75 ± 0.12 DU decade^{-1}, with local variations between 1.8 and -0.8 DU decade^{-1}. The largest trends were observed over southern Africa and the Atlantic Ocean. A seasonal trend analysis suggested that the increase is caused by additional forest fires. The trend in the total column was less certain: based on model-predicted trends and the measurement uncertainty, we estimated that another 10 to 15 years of observations will be required to observe a statistically significant trend. In the mid-latitudes the trends are currently hidden in the large variability, and for the tropics the modelled trends are low. The possibility of diverging trends at different altitudes must also be considered; an increase in tropospheric ozone might be accompanied by decreasing stratospheric ozone. The European satellite data record will be extended over the next two decades with the atmospheric satellite missions Sentinel 5 Precursor (launch end of 2016), Sentinel 4 and Sentinel 5.

  16. Mean-field Ensemble Kalman Filter

    KAUST Repository

    Law, Kody

    2015-01-07

    A proof of convergence of the standard EnKF generalized to non-Gaussian state-space models is provided. A density-based deterministic approximation of the mean-field limiting EnKF (MFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d < 2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from the nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.
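For orientation, the standard (stochastic) EnKF analysis step whose mean-field limit the paper studies looks roughly like this toy sketch; a linear observation operator and Gaussian observation noise are assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, p = 10, 100, 3       # state dim, ensemble size, obs dim

X = rng.normal(size=(n, m))            # forecast ensemble (columns = members)
H = np.eye(p, n)                       # observe the first p components
R = 0.1 * np.eye(p)
y = rng.normal(size=p)                 # observation vector

A = X - X.mean(axis=1, keepdims=True)
Pf = A @ A.T / (m - 1)                 # sample forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)

# Perturbed-observation update: each member assimilates a noisy copy of y.
Y = y[:, None] + rng.multivariate_normal(np.zeros(p), R, size=m).T
Xa = X + K @ (Y - H @ X)
```

The density-based approximation in the paper replaces this Monte Carlo ensemble with a PDE solve for the filtering density plus a quadrature rule.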

  17. Deterministic and fuzzy-based methods to evaluate community resilience

    Science.gov (United States)

    Kammouh, Omar; Noori, Ali Zamani; Taurino, Veronica; Mahin, Stephen A.; Cimellaro, Gian Paolo

    2018-04-01

    Community resilience is becoming a growing concern for authorities and decision makers. This paper introduces two indicator-based methods to evaluate the resilience of communities based on the PEOPLES framework. PEOPLES is a multi-layered framework that defines community resilience using seven dimensions. Each of the dimensions is described through a set of resilience indicators collected from the literature, and each indicator is linked to a measure allowing the analytical computation of its performance. The first method proposed in this paper requires data on previous disasters as input and returns as output a performance function for each indicator and a performance function for the whole community. The second method exploits knowledge-based fuzzy modeling for its implementation. This method allows a quantitative evaluation of the PEOPLES indicators using descriptive knowledge rather than deterministic data, while accounting for the uncertainty involved in the analysis. The output of the fuzzy-based method is a resilience index for each indicator as well as a resilience index for the community. The paper also introduces an open-source online tool in which the first method is implemented. A case study illustrating the application of the first method and the usage of the tool is also provided.

  18. Toward an ozone standard to protect vegetation based on effective dose: a review of deposition resistances and a possible metric

    Science.gov (United States)

    Massman, W. J.

    Present air quality standards to protect vegetation from ozone are based on measured concentrations (i.e., exposure) rather than on plant uptake rates (or dose). Some familiar cumulative exposure-based indices include SUM06, AOT40, and W126. However, plant injury is more closely related to dose, or more appropriately to effective dose, than to exposure. This study develops and applies a simple model for estimating effective ozone dose that combines the plant canopy's rate of stomatal ozone uptake with the plant's defense to ozone uptake. Here the plant defense is explicitly parameterized as a function of gross photosynthesis and the model is applied using eddy covariance (ozone and CO2) flux data obtained at a vineyard site in the San Joaquin Valley during the California Ozone Deposition Experiment (CODE91). With the ultimate intention of applying these concepts using prognostic models and remotely sensed data, the pathways for ozone deposition are parameterized (as much as possible) in terms of canopy LAI and the surface friction velocity. Results indicate that (1) the daily maximum potential for plant injury (based on effective dose) tends to coincide with the daily peak in ozone mixing ratio (ppbV), (2) potentially there are some significant differences between ozone metrics based on dose (no plant defense) and effective dose, and (3) nocturnal conductance can contribute significantly to the potential for plant ozone injury.
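The effective-dose idea can be caricatured in a few lines: cumulative stomatal ozone uptake is discounted by a defense term tied to gross photosynthesis. All numbers and the linear defense form below are invented for illustration, not the parameterization of the study:

```python
import numpy as np

hours = np.arange(24)
daylight = np.clip(np.sin((hours - 6) * np.pi / 12), 0, None)

# Stomatal ozone uptake with a small constant nocturnal component
# (nmol m-2 s-1) and gross photosynthesis (umol m-2 s-1).
ozone_uptake = 8.0 * daylight + 1.0
gross_photo = 20.0 * daylight

# Defense proportional to gross photosynthesis: full at peak
# assimilation, absent at night, so nocturnal uptake counts in full,
# echoing finding (3) of the abstract.
defense = gross_photo / gross_photo.max()
effective_rate = ozone_uptake * (1.0 - defense)

daily_effective_dose = effective_rate.sum() * 3600.0   # nmol m-2 day-1
print(f"daily effective dose: {daily_effective_dose:.0f} nmol m-2")
```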

  19. Tropospheric Ozone from the TOMS TDOT (TOMS-Direct-Ozone-in-Troposphere) Technique During SAFARI-2000

    Science.gov (United States)

    Stone, J. B.; Thompson, A. M.; Frolov, A. D.; Hudson, R. D.; Bhartia, P. K. (Technical Monitor)

    2002-01-01

    There are a number of published residual-type methods for deriving tropospheric ozone from TOMS (Total Ozone Mapping Spectrometer). The basic concept of these methods is that within a zone of constant stratospheric ozone, the tropospheric ozone column can be computed by subtracting stratospheric ozone from the TOMS Level 2 total ozone column. We used the modified-residual method for retrieving tropospheric ozone during SAFARI-2000 and found disagreements with in-situ ozone data over Africa in September 2000. Using the newly developed TDOT (TOMS-Direct-Ozone-in-Troposphere) method, which uses TOMS radiances and a modified lookup table based on actual profiles during high ozone pollution periods, new maps were prepared and found to agree better with soundings over Lusaka, Zambia (15.5 S, 28 E), Nairobi and several African cities where MOZAIC aircraft operated in September 2000. The TDOT technique and comparisons are described in detail.

  20. Improving a Deep Learning based RGB-D Object Recognition Model by Ensemble Learning

    DEFF Research Database (Denmark)

    Aakerberg, Andreas; Nasrollahi, Kamal; Heder, Thomas

    2018-01-01

    Augmenting RGB images with depth information is a well-known method to significantly improve the recognition accuracy of object recognition models. Another method to improve the performance of visual recognition models is ensemble learning. However, this method has not been widely explored...... in combination with deep convolutional neural network based RGB-D object recognition models. Hence, in this paper, we form different ensembles of complementary deep convolutional neural network models, and show that this can be used to increase the recognition performance beyond existing limits. Experiments...

  1. Visualizing Confidence in Cluster-Based Ensemble Weather Forecast Analyses.

    Science.gov (United States)

    Kumpf, Alexander; Tost, Bianca; Baumgart, Marlene; Riemer, Michael; Westermann, Rudiger; Rautenhaus, Marc

    2018-01-01

    In meteorology, cluster analysis is frequently used to determine representative trends in ensemble weather predictions in a selected spatio-temporal region, e.g., to reduce a set of ensemble members to simplify and improve their analysis. Identified clusters (i.e., groups of similar members), however, can be very sensitive to small changes of the selected region, so that clustering results can be misleading and bias subsequent analyses. In this article, we, a team of visualization scientists and meteorologists, deliver visual analytics solutions to analyze the sensitivity of clustering results with respect to changes of a selected region. We propose an interactive visual interface that enables simultaneous visualization of a) the variation in composition of identified clusters (i.e., their robustness), b) the variability in cluster membership for individual ensemble members, and c) the uncertainty in the spatial locations of identified trends. We demonstrate that our solution shows meteorologists how representative a clustering result is, and with respect to which changes in the selected region it becomes unstable. Furthermore, our solution helps to identify those ensemble members which stably belong to a given cluster and can thus be considered similar. In a real-world application case we show how our approach is used to analyze the clustering behavior of different regions in a forecast of "Tropical Cyclone Karl", guiding the user towards the cluster robustness information required for subsequent ensemble analysis.

  2. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    Science.gov (United States)

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting the carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE. © 2007 IEEE.

  3. The Effect of Future Ambient Air Pollution on Human Premature Mortality to 2100 Using Output from the ACCMIP Model Ensemble

    Science.gov (United States)

    Silva, Raquel A.; West, J. Jason; Lamarque, Jean-Francois; Shindell, Drew T.; Collins, William J.; Dalsoren, Stig; Faluvegi, Greg; Folberth, Gerd; Horowitz, Larry W.; Nagashima, Tatsuya; hide

    2016-01-01

    Ambient air pollution from ground-level ozone and fine particulate matter (PM(sub 2.5)) is associated with premature mortality. Future concentrations of these air pollutants will be driven by natural and anthropogenic emissions and by climate change. Using anthropogenic and biomass burning emissions projected in the four Representative Concentration Pathway scenarios (RCPs), the ACCMIP ensemble of chemistry climate models simulated future concentrations of ozone and PM(sub 2.5) at selected decades between 2000 and 2100. We use output from the ACCMIP ensemble, together with projections of future population and baseline mortality rates, to quantify the human premature mortality impacts of future ambient air pollution. Future air-pollution-related premature mortality in 2030, 2050 and 2100 is estimated for each scenario and for each model using a health impact function based on changes in concentrations of ozone and PM(sub 2.5) relative to 2000 and projected future population and baseline mortality rates. Additionally, the global mortality burden of ozone and PM(sub 2.5) in 2000 and each future period is estimated relative to 1850 concentrations, using present-day and future population and baseline mortality rates. The change in future ozone concentrations relative to 2000 is associated with excess global premature mortality in some scenarios/periods, particularly in RCP8.5 in 2100 (316 thousand deaths per year), likely driven by the large increase in methane emissions and by the net effect of climate change projected in this scenario, but it leads to considerable avoided premature mortality for the three other RCPs. However, the global mortality burden of ozone markedly increases from 382000 (121000 to 728000) deaths per year in 2000 to between 1.09 and 2.36 million deaths per year in 2100, across RCPs, mostly due to the effect of increases in population and baseline mortality rates. PM(sub 2.5) concentrations decrease relative to 2000 in all scenarios, due to
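The health impact function referred to above is typically a log-linear concentration-response relation; the sketch below shows its usual form, with placeholder values for the relative risk, baseline rate, and population (none taken from the study):

```python
import numpy as np

def excess_mortality(delta_conc, rr_per_10, baseline_rate, population):
    """Log-linear health impact function widely used in air-pollution
    burden studies: attributable fraction AF = 1 - exp(-beta * dX),
    with beta derived from a relative risk per 10-unit increase."""
    beta = np.log(rr_per_10) / 10.0
    af = 1.0 - np.exp(-beta * delta_conc)
    return baseline_rate * af * population

# Hypothetical numbers, purely illustrative.
deaths = excess_mortality(delta_conc=5.0,      # ppb ozone change vs. 2000
                          rr_per_10=1.04,      # assumed relative risk
                          baseline_rate=0.005, # deaths per person-year
                          population=1e6)
print(f"{deaths:.0f} excess deaths per year")
```

The study applies this kind of function per model, scenario, and period, which is how the per-RCP mortality ranges quoted above arise.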

  4. The effect of future ambient air pollution on human premature mortality to 2100 using output from the ACCMIP model ensemble

    Directory of Open Access Journals (Sweden)

    R. A. Silva

    2016-08-01

    Full Text Available Ambient air pollution from ground-level ozone and fine particulate matter (PM2.5) is associated with premature mortality. Future concentrations of these air pollutants will be driven by natural and anthropogenic emissions and by climate change. Using anthropogenic and biomass burning emissions projected in the four Representative Concentration Pathway scenarios (RCPs), the ACCMIP ensemble of chemistry–climate models simulated future concentrations of ozone and PM2.5 at selected decades between 2000 and 2100. We use output from the ACCMIP ensemble, together with projections of future population and baseline mortality rates, to quantify the human premature mortality impacts of future ambient air pollution. Future air-pollution-related premature mortality in 2030, 2050 and 2100 is estimated for each scenario and for each model using a health impact function based on changes in concentrations of ozone and PM2.5 relative to 2000 and projected future population and baseline mortality rates. Additionally, the global mortality burden of ozone and PM2.5 in 2000 and each future period is estimated relative to 1850 concentrations, using present-day and future population and baseline mortality rates. The change in future ozone concentrations relative to 2000 is associated with excess global premature mortality in some scenarios/periods, particularly in RCP8.5 in 2100 (316 thousand deaths year^{-1}), likely driven by the large increase in methane emissions and by the net effect of climate change projected in this scenario, but it leads to considerable avoided premature mortality for the three other RCPs. However, the global mortality burden of ozone markedly increases from 382 000 (121 000 to 728 000) deaths year^{-1} in 2000 to between 1.09 and 2.36 million deaths year^{-1} in 2100, across RCPs, mostly due to the effect of increases in population and baseline mortality rates. PM2.5 concentrations decrease relative to 2000 in all scenarios

  5. Deterministic global optimization algorithm based on outer approximation for the parameter estimation of nonlinear dynamic biological systems.

    Science.gov (United States)

    Miró, Anton; Pozo, Carlos; Guillén-Gosálbez, Gonzalo; Egea, Jose A; Jiménez, Laureano

    2012-05-10

    The estimation of parameter values for mathematical models of biological systems is an optimization problem that is particularly challenging due to the nonlinearities involved. One major difficulty is the existence of multiple minima, in which standard optimization methods may become trapped during the search. Deterministic global optimization methods overcome this limitation, ensuring convergence to the global optimum within a desired tolerance. Global optimization techniques are usually classified into stochastic and deterministic. The former typically lead to lower CPU times but offer no guarantee of convergence to the global minimum in a finite number of iterations. In contrast, deterministic methods provide solutions of a given quality (i.e., optimality gap), but tend to lead to large computational burdens. This work presents a deterministic outer approximation-based algorithm for the global optimization of dynamic problems arising in the parameter estimation of models of biological systems. Our approach, which offers a theoretical guarantee of convergence to the global minimum, is based on reformulating the set of ordinary differential equations into an equivalent set of algebraic equations through the use of orthogonal collocation methods, giving rise to a nonconvex nonlinear programming (NLP) problem. This nonconvex NLP is decomposed into two hierarchical levels: a master mixed-integer linear programming problem (MILP) that provides a rigorous lower bound on the optimal solution, and a reduced-space slave NLP that yields an upper bound. The algorithm iterates between these two levels until a termination criterion is satisfied. The capabilities of our approach were tested in two benchmark problems, in which the performance of our algorithm was compared with that of the commercial global optimization package BARON. The proposed strategy produced near optimal solutions (i.e., within a desired tolerance) in a fraction of the CPU time required by BARON.

  6. One-day-ahead streamflow forecasting via super-ensembles of several neural network architectures based on the Multi-Level Diversity Model

    Science.gov (United States)

    Brochero, Darwin; Hajji, Islem; Pina, Jasson; Plana, Queralt; Sylvain, Jean-Daniel; Vergeynst, Jenna; Anctil, Francois

    2015-04-01

    Theories about the generalization error of ensembles are mainly based on the diversity concept, which promotes resorting to many members of different properties to support mutually agreeable decisions. Kuncheva (2004) proposed the Multi Level Diversity Model (MLDM) to promote diversity in model ensembles, combining different data subsets, input subsets, models, and parameters, and including a combiner level in order to optimize the final ensemble. This work tests the hypothesis that ensembles of Neural Network (NN) structures minimise the generalization error. We used the MLDM to evaluate two different scenarios: (i) ensembles from a same NN architecture, and (ii) a super-ensemble built by a combination of sub-ensembles of many NN architectures. The time series used correspond to the 12 basins of the MOdel Parameter Estimation eXperiment (MOPEX) project that were used by Duan et al. (2006) and Vos (2013) as a benchmark. Six architectures are evaluated: FeedForward NN (FFNN) trained with the Levenberg Marquardt algorithm (Hagan et al., 1996), FFNN trained with SCE (Duan et al., 1993), Recurrent NN trained with a complex method (Weins et al., 2008), Dynamic NARX NN (Leontaritis and Billings, 1985), Echo State Network (ESN), and leak integrator neuron (L-ESN) (Lukosevicius and Jaeger, 2009). Each architecture performs separately an Input Variable Selection (IVS) according to a forward stepwise selection (Anctil et al., 2009) using mean square error as the objective function. Post-processing by Predictor Stepwise Selection (PSS) of the super-ensemble has been done following the method proposed by Brochero et al. (2011). IVS results showed that the lagged stream flow, lagged precipitation, and Standardized Precipitation Index (SPI) (McKee et al., 1993) were the most relevant variables: they were respectively selected as one of the first three variables in 66, 45, and 28 of the 72 scenarios. A relationship between aridity index (Arora, 2002) and NN

  7. Semi-Supervised Multi-View Ensemble Learning Based On Extracting Cross-View Correlation

    Directory of Open Access Journals (Sweden)

    ZALL, R.

    2016-05-01

    Full Text Available Correlated information between different views is useful for learning in multi-view data. Canonical correlation analysis (CCA) plays an important role in extracting this information. However, CCA only extracts the correlated information between paired data and cannot preserve correlated information between within-class samples. In this paper, we propose a two-view semi-supervised learning method called semi-supervised random correlation ensemble based on spectral clustering (SS_RCE). SS_RCE uses a multi-view method based on spectral clustering which takes advantage of discriminative information in multiple views to estimate labeling information of unlabeled samples. In order to enhance the discriminative power of CCA features, we incorporate the labeling information of both unlabeled and labeled samples into CCA. Then, we use random correlation between within-class samples across views to extract diverse correlated features for training component classifiers. Furthermore, we extend a general model, namely SSMV_RCE, to construct an ensemble method to tackle semi-supervised learning in the presence of multiple views. Finally, we compare the proposed methods with existing multi-view feature extraction methods using multi-view semi-supervised ensembles. Experimental results on various multi-view data sets are presented to demonstrate the effectiveness of the proposed methods.

  8. Intercomparison of ground-based ozone and NO2 measurements during the MANTRA 2004 campaign

    Directory of Open Access Journals (Sweden)

    K. Strong

    2007-11-01

    Full Text Available The MANTRA (Middle Atmosphere Nitrogen TRend Assessment) 2004 campaign took place in Vanscoy, Saskatchewan, Canada (52° N, 107° W) from 3 August to 15 September, 2004. In support of the main balloon launch, a suite of five zenith-sky and direct-Sun-viewing UV-visible ground-based spectrometers was deployed, primarily measuring ozone and NO2 total columns. Three Fourier transform spectrometers (FTSs) that were part of the balloon payload also performed ground-based measurements of several species, including ozone. Ground-based measurements of ozone and NO2 differential slant column densities from the zenith-viewing UV-visible instruments are presented herein. They are found to partially agree within NDACC (Network for the Detection of Atmospheric Composition Change) standards for instruments certified for process studies and satellite validation. Vertical column densities of ozone from the zenith-sky UV-visible instruments, the FTSs, a Brewer spectrophotometer, and ozonesondes are compared, and found to agree within the combined error estimates of the instruments (15%). NO2 vertical column densities from two of the UV-visible instruments are compared, and are also found to agree within combined error (15%).

  9. Ensemble atmospheric dispersion calculations for decision support systems

    International Nuclear Information System (INIS)

    Borysiewicz, M.; Potempski, S.; Galkowski, A.; Zelazny, R.

    2003-01-01

    This document describes two approaches to long-range atmospheric dispersion of pollutants based on the ensemble concept. In the first part of the report some experiences related to the exercises undertaken under the ENSEMBLE project of the European Union are presented. The second part is devoted to the implementation of the mesoscale numerical prediction model RAMS and the atmospheric dispersion model HYPACT on a Beowulf cluster, and their use for ensemble forecasting and long-range atmospheric ensemble dispersion calculations based on available meteorological data from NCEO, NOAA (USA). (author)

  10. Ensemble Methods

    Science.gov (United States)

    Re, Matteo; Valentini, Giorgio

    2012-03-01

    Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall "ensemble" decision is rooted in our culture at least from the classical age of ancient Greece, and it has been formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is represented by the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of "votes" (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, first of all the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers of this area [14,62,85,149,173]. Several theories have been
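The majority-vote combiner described above is a one-liner in practice; a minimal sketch:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the class label predicted by the most learners
    (ties broken by first occurrence)."""
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical classifiers vote on the same sample:
print(majority_vote(["cat", "dog", "cat"]))  # -> cat
```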

  11. Deterministic secure communication protocol without using entanglement

    OpenAIRE

    Cai, Qing-yu

    2003-01-01

    We show a deterministic secure direct communication protocol using single qubit in mixed state. The security of this protocol is based on the security proof of BB84 protocol. It can be realized with current technologies.

  12. Multiobjective anatomy-based dose optimization for HDR-brachytherapy with constraint free deterministic algorithms

    International Nuclear Information System (INIS)

    Milickovic, N.; Lahanas, M.; Papagiannopoulou, M.; Zamboglou, N.; Baltas, D.

    2002-01-01

    In high dose rate (HDR) brachytherapy, conventional dose optimization algorithms consider multiple objectives in the form of an aggregate function that transforms the multiobjective problem into a single-objective problem. As a result, there is a loss of information on the available alternative possible solutions. This method assumes that the treatment planner exactly understands the correlation between competing objectives and knows the physical constraints. This knowledge is provided by the Pareto trade-off set obtained by single-objective optimization algorithms with a repeated optimization with different importance vectors. A mapping technique avoids non-feasible solutions with negative dwell weights and allows the use of constraint free gradient-based deterministic algorithms. We compare various such algorithms and methods which could improve their performance. This finally allows us to generate a large number of solutions in a few minutes. We use objectives expressed in terms of dose variances obtained from a few hundred sampling points in the planning target volume (PTV) and in organs at risk (OAR). We compare two- to four-dimensional Pareto fronts obtained with the deterministic algorithms and with a fast-simulated annealing algorithm. For PTV-based objectives, due to the convex objective functions, the obtained solutions are global optimal. If OARs are included, then the solutions found are also global optimal, although local minima may be present as suggested. (author)

  13. Tweet-based Target Market Classification Using Ensemble Method

    Directory of Open Access Journals (Sweden)

    Muhammad Adi Khairul Anshary

    2016-09-01

    Full Text Available Target market classification is aimed at focusing marketing activities on the right targets. Classification of target markets can be done through data mining and by utilizing data from social media, e.g. Twitter. The end result of data mining is a set of learning models that can classify new data. Ensemble methods can improve the accuracy of the models and therefore provide better results. In this study, classification of target markets was conducted on a dataset of 3000 tweets in order to extract features. Classification models were constructed to manipulate the training data using two ensemble methods (bagging and boosting). To investigate the effectiveness of the ensemble methods, this study used the CART (classification and regression tree) algorithm for comparison. Three categories of consumer goods (computers, mobile phones and cameras) and three categories of sentiments (positive, negative and neutral) were classified towards three target-market categories. Machine learning was performed using Weka 3.6.9. The results on the test data showed that the bagging method improved the accuracy of CART by 1.9% (to 85.20%). On the other hand, for sentiment classification, the ensemble methods were not successful in increasing the accuracy of CART. The results of this study may be taken into consideration by companies who approach their customers through social media, especially Twitter.
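The bagging-versus-CART comparison in the study can be mimicked on synthetic data; a sketch with scikit-learn (the dataset here is synthetic, not the tweet corpus, and the accuracy values will differ from those reported above):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the extracted tweet features (3 classes).
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

tree = DecisionTreeClassifier(random_state=0)       # single CART-style tree
bagged = BaggingClassifier(tree, n_estimators=50, random_state=0)

acc_tree = cross_val_score(tree, X, y, cv=5).mean()
acc_bag = cross_val_score(bagged, X, y, cv=5).mean()
print(f"CART: {acc_tree:.3f}  bagged: {acc_bag:.3f}")
```

Bagging reduces the variance of the unstable base tree, which is why it tends to raise accuracy here just as it did for the target-market task.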

  14. A Prediction Method of Airport Noise Based on Hybrid Ensemble Learning

    Directory of Open Access Journals (Sweden)

    Tao XU

    2014-05-01

    Full Text Available Using monitoring history data to build and train a prediction model for airport noise has become a common method in recent years. However, single models built in different ways vary in storage requirements, efficiency and accuracy. In order to predict noise accurately in the complex environment around an airport, this paper presents a prediction method based on hybrid ensemble learning. The proposed method ensembles three algorithms: an artificial neural network as an active learner, nearest neighbor as a passive learner and nonlinear regression as a synthesized learner. The experimental results show that the three learners can meet forecast demands in online, near-line and offline settings, respectively, and that the accuracy of prediction is improved by integrating the three learners' results.
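
    The hybrid idea of combining heterogeneous learners can be illustrated on synthetic data. The three learners below (nearest neighbour, linear regression, polynomial regression) are simple stand-ins for the record's ANN / nearest-neighbour / nonlinear-regression trio, and averaging their outputs is one common integration rule; the record does not specify its exact combination scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic noise-level series standing in for monitored airport noise.
t = np.linspace(0, 10, 120)
noise_db = 70 + 5 * np.sin(t) + rng.normal(scale=1.0, size=t.size)
perm = rng.permutation(t.size)
tr, te = perm[:100], perm[100:]  # random train/test split

def knn_predict(tq):
    """Passive learner: value of the nearest training point."""
    i = np.abs(t[tr][:, None] - tq[None, :]).argmin(axis=0)
    return noise_db[tr][i]

def poly_predict(tq, deg):
    """Regression learners: global polynomial fits of varying flexibility."""
    return np.polyval(np.polyfit(t[tr], noise_db[tr], deg), tq)

preds = {
    "nearest neighbour": knn_predict(t[te]),
    "linear regression": poly_predict(t[te], 1),
    "nonlinear regression": poly_predict(t[te], 5),
}
rmse = {k: np.sqrt(np.mean((v - noise_db[te]) ** 2)) for k, v in preds.items()}

# Simple integration rule: average the three learners' outputs.
ensemble = np.mean(list(preds.values()), axis=0)
ensemble_rmse = np.sqrt(np.mean((ensemble - noise_db[te]) ** 2))
```

    By convexity of the L2 norm, the averaged prediction can never have a higher RMSE than the worst individual learner.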

  15. Impacts of calibration strategies and ensemble methods on ensemble flood forecasting over Lanjiang basin, Southeast China

    Science.gov (United States)

    Liu, Li; Xu, Yue-Ping

    2017-04-01

    Ensemble flood forecasting driven by numerical weather prediction products is becoming more commonly used in operational flood forecasting applications. In this study, a hydrological ensemble flood forecasting system based on the Variable Infiltration Capacity (VIC) model and quantitative precipitation forecasts from the TIGGE dataset is constructed for the Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated. The hydrological model is optimized by a parallel-programmed ɛ-NSGAII multi-objective algorithm, and two separately parameterized models are determined to simulate daily flows and peak flows, coupled through a modular approach. The results indicate that the ɛ-NSGAII algorithm permits more efficient optimization and rational determination of parameter settings. It is demonstrated that the multimodel ensemble streamflow mean has better skill than the best single-model ensemble mean (ECMWF) and that multimodel ensembles weighted on members and skill scores outperform other multimodel ensembles. For a typical flood event, the flood can be predicted 3-4 days in advance, but the flows in the rising limb can be captured only 1-2 days ahead due to their flashy nature. With respect to peak flows selected by a Peaks Over Threshold approach, the ensemble means from either single models or multimodels are generally underestimated, as the extreme values are smoothed out by the ensemble process.

  16. Industrial wastewater advanced treatment via catalytic ozonation with an Fe-based catalyst.

    Science.gov (United States)

    Li, Xufang; Chen, Weiyu; Ma, Luming; Wang, Hongwu; Fan, Jinhong

    2018-03-01

    An Fe-based catalyst was used as a heterogeneous catalyst for the ozonation of industrial wastewater, and key operational parameters (pH and catalyst dosage) were studied. The results indicated that the Fe-based catalyst significantly improved the mineralization of organic pollutants in the wastewater. TOC (total organic carbon) removal was high, at 78.7%, with a catalyst concentration of 200 g/L, but only 31.6% with ozonation alone. The Fe-based catalyst promoted ozone decomposition in aqueous solution by 70%. The existence of hydroxyl radicals (·OH) was directly confirmed by EPR (electron paramagnetic resonance) experiments, and ·OH were verified to account for about 34.4% of TOC removal using NaHCO3 as a radical scavenger. Through characterization by SEM-EDS (field emission scanning electron microscopy with energy-dispersive spectrometry), XRD (X-ray powder diffraction) and XPS (X-ray photoelectron spectroscopy), it was deduced that FeOOH on the surface of the catalyst was the dominant contributor to the catalytic efficiency. The catalyst showed good stability and excellent reusability over 50 successive operations and could simultaneously be used as a filler. It is therefore a promising catalyst for advanced treatment of practical industrial wastewater. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Coastal aquifer management under parameter uncertainty: Ensemble surrogate modeling based simulation-optimization

    Science.gov (United States)

    Janardhanan, S.; Datta, B.

    2011-12-01

    Surrogate models are widely used to develop computationally efficient simulation-optimization models for solving complex groundwater management problems. Artificial intelligence based models are most often used for this purpose, trained on predictor-predictand data obtained from a numerical simulation model. Most often this is implemented with the assumption that the parameters and boundary conditions used in the numerical simulation model are perfectly known; however, in most practical situations these values are uncertain, which limits the applicability of such approximation surrogates. In this study we develop a surrogate-model-based coupled simulation-optimization methodology for determining optimal pumping strategies for coastal aquifers under parameter uncertainty. An ensemble surrogate modeling approach is used together with multiple-realization optimization. The methodology is applied to a multi-objective coastal aquifer management problem with two conflicting objectives, treating hydraulic conductivity and aquifer recharge as uncertain. The three-dimensional coupled flow and transport simulation model FEMWATER is used to simulate the aquifer responses for a number of scenarios corresponding to Latin hypercube samples of pumping rates and uncertain parameters, generating input-output patterns for training the surrogate models. Non-parametric bootstrap sampling of this original dataset is used to generate multiple data sets belonging to different regions of the multi-dimensional decision and parameter space. These data sets are used to train and test multiple surrogate models based on genetic programming. The ensemble of surrogate models is then linked to a multi-objective genetic algorithm to solve the pumping optimization problem. Two conflicting objectives, viz, maximizing total pumping from beneficial wells and minimizing the total pumping from barrier wells for hydraulic control of
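
    The ensemble-surrogate construction (bootstrap-resample the simulator's input-output data, train one surrogate per resample, use the ensemble spread as an uncertainty proxy) can be sketched as below. Polynomial surrogates stand in for the genetic-programming models, and a synthetic salinity response stands in for FEMWATER output; both are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic simulator output: salinity response to a pumping rate.
pumping = rng.uniform(0, 1, 40)
salinity = 1.0 - 0.8 * pumping + 0.3 * pumping ** 2 + rng.normal(scale=0.05, size=40)

# Ensemble of surrogates, each trained on a non-parametric bootstrap resample.
surrogates = []
for _ in range(30):
    idx = rng.integers(0, 40, 40)
    surrogates.append(np.polyfit(pumping[idx], salinity[idx], 2))

# Ensemble prediction: mean across surrogates; spread as uncertainty proxy.
x = np.linspace(0, 1, 11)
preds = np.array([np.polyval(c, x) for c in surrogates])
ens_mean, ens_std = preds.mean(axis=0), preds.std(axis=0)
```

    The `ens_mean` curve would then be the quantity handed to the multi-objective optimizer, with `ens_std` flagging regions where the surrogates disagree.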

  18. The classicality and quantumness of a quantum ensemble

    International Nuclear Information System (INIS)

    Zhu Xuanmin; Pang Shengshi; Wu Shengjun; Liu Quanhui

    2011-01-01

    In this Letter, we investigate the classicality and quantumness of a quantum ensemble. We define a quantity called ensemble classicality based on classical cloning strategy (ECCC) to characterize how classical a quantum ensemble is. An ensemble of commuting states has a unit ECCC, while a general ensemble can have an ECCC less than 1. We also study how quantum an ensemble is by defining a related quantity called quantumness. We find that the classicality of an ensemble is closely related to how perfectly the ensemble can be cloned, and that the quantumness of the ensemble used in a quantum key distribution (QKD) protocol is exactly the attainable lower bound of the error rate in the sifted key. - Highlights: → A quantity is defined to characterize how classical a quantum ensemble is. → The classicality of an ensemble is closely related to the cloning performance. → Another quantity is also defined to investigate how quantum an ensemble is. → This quantity gives the lower bound of the error rate in a QKD protocol.

  19. Ozone pollution and ozone biomonitoring in European cities. Part I: Ozone concentrations and cumulative exposure indices at urban and suburban sites

    DEFF Research Database (Denmark)

    Klumpp, A.; Ansel, W.; Klumpp, G.

    2006-01-01

    In the framework of a European research project on air quality in urban agglomerations, data on ozone concentrations from 23 automated urban and suburban monitoring stations in 11 cities from seven countries were analysed and evaluated. Daily and summer mean and maximum concentrations were computed...... based on hourly mean values, and cumulative ozone exposure indices (Accumulated exposure Over a Threshold of 40 ppb (AOT40), AOT20) were calculated. The diurnal profiles showed a characteristic pattern in most city centres, with minimum values in the early morning hours, a strong rise during the morning......, by contrast, maximum values were lower and diurnal variation was much smaller. Based on ozone concentrations as well as on cumulative exposure indices, a clear north-south gradient in ozone pollution, with increasing levels from northern and northwestern sites to central and southern European sites...
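
    The cumulative exposure indices are straightforward to compute from hourly mean values: AOT40 accumulates the exceedance of each hourly ozone concentration above 40 ppb, and AOT20 does the same above 20 ppb. The concentrations below are hypothetical, and the official definitions additionally restrict the accumulation to daylight hours and a fixed season, which this snippet does not reproduce:

```python
# Hypothetical hourly mean ozone concentrations (ppb) for one day's window.
hourly_ppb = [28, 31, 35, 42, 48, 55, 61, 58, 50, 44, 39, 33]

# AOT40: sum of hourly exceedances above the 40 ppb threshold.
aot40 = sum(max(0.0, c - 40.0) for c in hourly_ppb)

# AOT20: the same accumulation with a 20 ppb threshold.
aot20 = sum(max(0.0, c - 20.0) for c in hourly_ppb)
```

    Because every hour contributes to AOT20 at least as much as to AOT40, AOT20 is always the larger index.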

  20. Features of ozone intraannual variability in polar regions based on ozone sounding data obtained at the Resolute and Amundsen-Scott stations

    Energy Technology Data Exchange (ETDEWEB)

    Gruzdev, A.N.; Sitnov, S.A. (AN SSSR, Institut Fiziki Atmosfery, Moscow (USSR))

    1991-04-01

    Ozone sounding data obtained at the Resolute and Amundsen-Scott stations are used to analyze ozone intraannual variability in Southern and Northern polar regions. For the Arctic, in particular, features associated with winter stratospheric warmings, stratospheric-tropospheric exchange, and the isolated evolution of surface ozone are noted. Correlative connections between ozone and temperature making it possible to concretize ozone variability mechanisms are analyzed. 31 refs.

  1. Two Years of Ozone Vertical Profiles Collected from Aircraft over California and the Pacific Ocean

    Science.gov (United States)

    Austerberry, D.; Yates, E. L.; Roby, M.; Chatfield, R. B.; Iraci, L. T.; Pierce, B.; Fairlie, T. D.; Johnson, B. J.; Ives, M.

    2012-12-01

    … Ensemble back trajectories along the flight tracks and Reverse Domain Filling maps and curtains will be analyzed to further evaluate ozone transport pathways.

  2. A user credit assessment model based on clustering ensemble for broadband network new media service supervision

    Science.gov (United States)

    Liu, Fang; Cao, San-xing; Lu, Rui

    2012-04-01

    This paper proposes a user credit assessment model based on a clustering ensemble, aiming to solve the problem that users illegally spread pirated and pornographic media content within self-service oriented broadband network new media platforms. The idea is to assess new media users' credit by establishing an indices system based on user credit behaviors; illegal users can then be identified from the credit assessment results, thus curbing the bad videos and audios transmitted on the network. The proposed model integrates the advantages of swarm intelligence clustering, which is suitable for user credit behavior analysis, and K-means clustering, which can eliminate the scattered users remaining in the result of swarm intelligence clustering, thus classifying all users' credit automatically. The model is verified experimentally on a standard credit application dataset from the UCI machine learning repository. The statistical results of a comparative experiment with a single swarm intelligence clustering model indicate that the clustering ensemble model has a stronger ability to distinguish creditworthiness, especially in predicting the user clusters with the best and worst credit, which will help operators take incentive or punitive measures accurately. Moreover, compared with the experimental results of a Logistic-regression-based model under the same conditions, the clustering ensemble model is robust and has better prediction accuracy.
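
    The second, scatter-removing stage of such a clustering ensemble (K-means on the user-credit features) can be sketched as follows. The swarm intelligence stage is not reproduced, and the 1-D "credit behaviour" scores are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 1-D "credit behaviour" scores: a large compliant group and a
# small group of abusive uploaders (both hypothetical).
scores = np.concatenate([rng.normal(0.2, 0.05, 60), rng.normal(0.8, 0.05, 15)])

# Plain k-means with k=2 as the scatter-removing second stage.
centres = np.array([scores.min(), scores.max()])  # seed at the extremes
for _ in range(20):
    labels = np.abs(scores[:, None] - centres[None, :]).argmin(axis=1)
    centres = np.array([scores[labels == k].mean() for k in (0, 1)])

suspect_users = scores[labels == 1]  # cluster seeded at the high end
```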

  3. Deterministic extraction from weak random sources

    CERN Document Server

    Gabizon, Ariel

    2011-01-01

    In this research monograph, the author constructs deterministic extractors for several types of sources, using a methodology of recycling randomness which enables increasing the output length of deterministic extractors to near optimal length.

  4. Solution of the inverse problem of polarimetry for deterministic objects on the base of incomplete Mueller matrices

    CERN Document Server

    Savenkov, S M

    2002-01-01

    Using the Mueller matrix representation in the basis of the matrices of amplitude and phase anisotropies, a generalized solution of the inverse problem of polarimetry for deterministic objects is obtained on the basis of incomplete Mueller matrices measured by the method of three input polarizations.

  5. Solution of the inverse problem of polarimetry for deterministic objects on the base of incomplete Mueller matrices

    International Nuclear Information System (INIS)

    Savenkov, S.M.; Oberemok, Je.A.

    2002-01-01

    Using the Mueller matrix representation in the basis of the matrices of amplitude and phase anisotropies, a generalized solution of the inverse problem of polarimetry for deterministic objects is obtained on the basis of incomplete Mueller matrices measured by the method of three input polarizations.

  6. A Compact Mobile Ozone Lidar for Atmospheric Ozone and Aerosol Profiling

    Science.gov (United States)

    De Young, Russell; Carrion, William; Pliutau, Denis

    2014-01-01

    A compact mobile differential absorption lidar (DIAL) system has been developed at NASA Langley Research Center to provide ozone, aerosol and cloud atmospheric measurements from a mobile trailer for ground-based atmospheric ozone air quality campaigns. This lidar is integrated into the Tropospheric Ozone Lidar Network (TOLNet), currently made up of four other ozone lidars across the country. The lidar system consists of a UV and green laser transmitter, a telescope and an optical signal receiver with associated Licel photon counting and analog channels. The laser transmitter consists of a Q-switched Nd:YLF intra-cavity doubled laser pumping a Ce:LiCAF tunable UV laser, with all the associated power and lidar control support units on a single system rack. The system has been configured for mobile operation from a trailer and was deployed to Denver, CO, July 15-August 15, 2014, supporting the DISCOVER-AQ campaign. Ozone curtain plots and the resulting science are presented.

  7. Merged SAGE II, Ozone_cci and OMPS ozone profile dataset and evaluation of ozone trends in the stratosphere

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2017-10-01

    Full Text Available In this paper, we present a merged dataset of ozone profiles from several satellite instruments: SAGE II on ERBS; GOMOS, SCIAMACHY and MIPAS on Envisat; OSIRIS on Odin; ACE-FTS on SCISAT; and OMPS on Suomi-NPP. The merged dataset is created in the framework of the European Space Agency Climate Change Initiative (Ozone_cci) with the aim of analyzing stratospheric ozone trends. For the merged dataset, we used the latest versions of the original ozone datasets. The datasets from the individual instruments have been extensively validated and intercompared; only those datasets which are in good agreement, and do not exhibit significant drifts with respect to collocated ground-based observations and with respect to each other, are used for merging. The long-term SAGE–CCI–OMPS dataset is created by computing and merging deseasonalized anomalies from the individual instruments. The merged SAGE–CCI–OMPS dataset consists of deseasonalized ozone anomalies in 10° latitude bands from 90° S to 90° N and from 10 to 50 km in steps of 1 km, covering the period from October 1984 to July 2016. This newly created dataset is used for evaluating stratospheric ozone trends through multiple linear regression. Negative ozone trends in the upper stratosphere are observed before 1997 and positive trends are found after 1997. The upper stratospheric trends are statistically significant at midlatitudes and indicate ozone recovery, as expected from the decrease of stratospheric halogens that started in the middle of the 1990s, and from stratospheric cooling.
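
    The core processing chain (deseasonalise each record into anomalies, then estimate trends by linear regression) can be sketched on a synthetic monthly series. The trend size, seasonal amplitude and noise level below are illustrative, and the piecewise pre/post-1997 regression of the paper is reduced here to a single linear fit:

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic monthly ozone series: seasonal cycle + linear trend + noise
# (stand-in for one latitude/altitude cell of a merged dataset).
months = np.arange(240)  # 20 years of monthly values
seasonal = 5.0 * np.sin(2 * np.pi * months / 12)
trend_per_month = 0.02
series = 300 + seasonal + trend_per_month * months + rng.normal(0, 1, 240)

# Deseasonalise: subtract the long-term mean of each calendar month.
clim = np.array([series[m::12].mean() for m in range(12)])
anomaly = series - clim[months % 12]

# Trend estimate via linear regression on the anomalies.
slope, intercept = np.polyfit(months, anomaly, 1)
```

    Because the monthly climatology absorbs the seasonal cycle (and the mean level) but not the slope, the fitted `slope` recovers the imposed trend.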

  8. SVM and SVM Ensembles in Breast Cancer Prediction.

    Science.gov (United States)

    Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong

    2017-01-01

    Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers.
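
    A runnable sketch of an SVM bagging ensemble of the kind compared here, using a linear SVM trained by the Pegasos subgradient method (an assumption; the record does not state a trainer) and a synthetic two-class dataset in place of the breast cancer data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-class toy data standing in for breast-cancer feature vectors.
n = 200
X = np.vstack([rng.normal(-1, 1, (n // 2, 2)), rng.normal(1, 1, (n // 2, 2))])
y = np.concatenate([-np.ones(n // 2), np.ones(n // 2)])

def linear_svm(Xb, yb, lam=0.01, epochs=50):
    """Linear-kernel SVM trained with the Pegasos subgradient method."""
    w = np.zeros(2)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(yb)):
            t += 1
            eta = 1.0 / (lam * t)
            if yb[i] * (Xb[i] @ w) < 1:           # hinge-loss margin violated
                w = (1 - eta * lam) * w + eta * yb[i] * Xb[i]
            else:
                w = (1 - eta * lam) * w
    return w

# Bagging ensemble: one SVM per bootstrap resample, majority vote.
ws = []
for _ in range(9):
    idx = rng.integers(0, n, n)
    ws.append(linear_svm(X[idx], y[idx]))
votes = np.sign(np.mean([np.sign(X @ w) for w in ws], axis=0))
accuracy = np.mean(votes == y)
```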

  9. SVM and SVM Ensembles in Breast Cancer Prediction.

    Directory of Open Access Journals (Sweden)

    Min-Wei Huang

    Full Text Available Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers.

  10. Utilising Tree-Based Ensemble Learning for Speaker Segmentation

    DEFF Research Database (Denmark)

    Abou-Zleikha, Mohamed; Tan, Zheng-Hua; Christensen, Mads Græsbøll

    2014-01-01

    In audio and speech processing, accurate detection of the changing points between multiple speakers in speech segments is an important stage for several applications such as speaker identification and tracking. Bayesian Information Criteria (BIC)-based approaches are the most traditionally used...... for a certain condition, the model becomes biased to the data used for training limiting the model’s generalisation ability. In this paper, we propose a BIC-based tuning-free approach for speaker segmentation through the use of ensemble-based learning. A forest of segmentation trees is constructed in which each...... tree is trained using a sampled version of the speech segment. During the tree construction process, a set of randomly selected points in the input sequence is examined as potential segmentation points. The point that yields the highest ΔBIC is chosen and the same process is repeated for the resultant...
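
    The ΔBIC criterion at the heart of such segmentation approaches compares one Gaussian model for a whole segment against two Gaussians split at a candidate point; the candidate with the highest ΔBIC is taken as the change point. A 1-D sketch on synthetic features with a known change point follows (the ensemble of sampled segmentation trees itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(6)

# 1-D feature stream with a synthetic speaker change at sample 120.
seg = np.concatenate([rng.normal(0.0, 1.0, 120), rng.normal(3.0, 1.0, 80)])
n = seg.size

def delta_bic(x, i, lam=1.0):
    """Delta-BIC for splitting x at index i (1-D Gaussian models)."""
    n1, n2 = i, len(x) - i
    penalty = lam * 0.5 * 2 * np.log(len(x))  # extra mean + variance params
    return (0.5 * len(x) * np.log(x.var())
            - 0.5 * n1 * np.log(x[:i].var())
            - 0.5 * n2 * np.log(x[i:].var())
            - penalty)

# Examine candidate points and keep the one with the highest delta-BIC.
candidates = range(20, n - 20)
best = max(candidates, key=lambda i: delta_bic(seg, i))
```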

  11. Evaluation of ensemble precipitation forecasts generated through post-processing in a Canadian catchment

    Science.gov (United States)

    Jha, Sanjeev K.; Shrestha, Durga L.; Stadnyk, Tricia A.; Coulibaly, Paulin

    2018-03-01

    Flooding in Canada is often caused by heavy rainfall during the snowmelt period. Hydrologic forecast centres rely on precipitation forecasts obtained from numerical weather prediction (NWP) models to drive hydrological models for streamflow forecasting. The uncertainties in raw quantitative precipitation forecasts (QPFs) are enhanced by physiography and orography effects over a diverse landscape, particularly in the western catchments of Canada. A Bayesian post-processing approach called rainfall post-processing (RPP), developed in Australia (Robertson et al., 2013; Shrestha et al., 2015), has been applied to assess its forecast performance in a Canadian catchment. Raw QPFs obtained from two sources are used in this study: the Global Ensemble Forecasting System (GEFS) Reforecast 2 project, from the National Centers for Environmental Prediction, and the Global Deterministic Forecast System (GDPS), from Environment and Climate Change Canada. The study period from January 2013 to December 2015 covers a major flood event in Calgary, Alberta, Canada. The post-processed results show that the RPP is able to remove the bias and reduce the errors of both GEFS and GDPS forecasts. Ensembles generated from the RPP reliably quantify the forecast uncertainty.
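
    A crude stand-in for the post-processing idea (calibrate a correction against observations, then generate an ensemble that quantifies the remaining uncertainty) is sketched below with a linear correction and residual resampling. The actual RPP is a Bayesian joint-probability method, which this sketch does not reproduce; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic raw QPFs with a multiplicative bias against observations.
obs = rng.gamma(2.0, 5.0, 100)            # "observed" daily precip, mm
raw = 1.4 * obs + rng.normal(0, 3, 100)   # biased, noisy forecast

# Calibrate a linear correction on a training window...
a, b = np.polyfit(raw[:70], obs[:70], 1)
corrected = a * raw[70:] + b

# ...then build an ensemble by resampling the training residuals.
residuals = obs[:70] - (a * raw[:70] + b)
ensemble = corrected[None, :] + rng.choice(residuals, size=(50, corrected.size))

bias_raw = np.mean(raw[70:] - obs[70:])
bias_corr = np.mean(corrected - obs[70:])
```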

  12. Ozone Depletion Caused by Rocket Engine Emissions: A Fundamental Limit on the Scale and Viability of Space-Based Geoengineering Schemes

    Science.gov (United States)

    Ross, M. N.; Toohey, D.

    2008-12-01

    Emissions from solid and liquid propellant rocket engines reduce global stratospheric ozone levels. Currently ~ one kiloton of payloads are launched into earth orbit annually by the global space industry. Stratospheric ozone depletion from present day launches is a small fraction of the ~ 4% globally averaged ozone loss caused by halogen gases. Thus rocket engine emissions are currently considered a minor, if poorly understood, contributor to ozone depletion. Proposed space-based geoengineering projects designed to mitigate climate change would require order of magnitude increases in the amount of material launched into earth orbit. The increased launches would result in comparable increases in the global ozone depletion caused by rocket emissions. We estimate global ozone loss caused by three space-based geoengineering proposals to mitigate climate change: (1) mirrors, (2) sunshade, and (3) space-based solar power (SSP). The SSP concept does not directly engineer climate, but is touted as a mitigation strategy in that SSP would reduce CO2 emissions. We show that launching the mirrors or sunshade would cause global ozone loss between 2% and 20%. Ozone loss associated with an economically viable SSP system would be at least 0.4% and possibly as large as 3%. It is not clear which, if any, of these levels of ozone loss would be acceptable under the Montreal Protocol. The large uncertainties are mainly caused by a lack of data or validated models regarding liquid propellant rocket engine emissions. Our results offer four main conclusions. (1) The viability of space-based geoengineering schemes could well be undermined by the relatively large ozone depletion that would be caused by the required rocket launches. (2) Analysis of space-based geoengineering schemes should include the difficult tradeoff between the gain of long-term (~ decades) climate control and the loss of short-term (~ years) deep ozone loss. (3) The trade can be properly evaluated only if our

  13. A hydro-meteorological ensemble prediction system for real-time flood forecasting purposes in the Milano area

    Science.gov (United States)

    Ravazzani, Giovanni; Amengual, Arnau; Ceppi, Alessandro; Romero, Romualdo; Homar, Victor; Mancini, Marco

    2015-04-01

    Analysis of forecasting strategies that can provide a tangible basis for flood early warning procedures and mitigation measures over the Western Mediterranean region is one of the fundamental motivations of the European HyMeX programme. Here, we examine a set of hydro-meteorological episodes that affected the Milano urban area, for which the complex flood protection system of the city did not completely succeed against the flash floods that occurred. Indeed, flood damages have increased exponentially in the area during the last 60 years due to industrial and urban development. Improving the Milano flood control system therefore requires a synergy between structural and non-structural approaches. The flood forecasting system tested in this work comprises the Flash-flood Event-based Spatially distributed rainfall-runoff Transformation, including Water Balance (FEST-WB) and the Weather Research and Forecasting (WRF) models, which together provide a hydrological ensemble prediction system (HEPS). Deterministic and probabilistic quantitative precipitation forecasts (QPFs) have been provided by the WRF model in a set of 48-hour experiments. The HEPS has been generated by combining different physical parameterizations (i.e. cloud microphysics, moist convection and boundary-layer schemes) of the WRF model in order to better encompass the atmospheric processes leading to high precipitation amounts. We have thus been able to test the value of a probabilistic versus a deterministic framework for driving Quantitative Discharge Forecasts (QDFs). Results highlight (i) the benefits of using a high-resolution HEPS in conveying uncertainties for this complex orographic area and (ii) a better simulation of most of the extreme precipitation events, potentially enabling valuable probabilistic QDFs. Hence, the HEPS copes with the significant deficiencies found in the deterministic QPFs. These shortcomings would prevent correct forecasting of the location and timing of high precipitation rates and

  14. An Adjoint-Based Adaptive Ensemble Kalman Filter

    KAUST Repository

    Song, Hajoon

    2013-10-01

    A new hybrid ensemble Kalman filter/four-dimensional variational data assimilation (EnKF/4D-VAR) approach is introduced to mitigate background covariance limitations in the EnKF. The work is based on the adaptive EnKF (AEnKF) method, which bears a strong resemblance to the hybrid EnKF/three-dimensional variational data assimilation (3D-VAR) method. In the AEnKF, the representativeness of the EnKF ensemble is regularly enhanced with new members generated after back projection of the EnKF analysis residuals to state space using a 3D-VAR [or optimal interpolation (OI)] scheme with a preselected background covariance matrix. The idea here is to reformulate the transformation of the residuals as a 4D-VAR problem, constraining the new member with model dynamics and the previous observations. This should provide more information for the estimation of the new member and reduce dependence of the AEnKF on the assumed stationary background covariance matrix. This is done by integrating the analysis residuals backward in time with the adjoint model. Numerical experiments are performed with the Lorenz-96 model under different scenarios to test the new approach and to evaluate its performance with respect to the EnKF and the hybrid EnKF/3D-VAR. The new method leads to the least root-mean-square estimation errors as long as the linear assumption guaranteeing the stability of the adjoint model holds. It is also found to be less sensitive to choices of the assimilation system inputs and parameters.

  15. An Adjoint-Based Adaptive Ensemble Kalman Filter

    KAUST Repository

    Song, Hajoon; Hoteit, Ibrahim; Cornuelle, Bruce D.; Luo, Xiaodong; Subramanian, Aneesh C.

    2013-01-01

    A new hybrid ensemble Kalman filter/four-dimensional variational data assimilation (EnKF/4D-VAR) approach is introduced to mitigate background covariance limitations in the EnKF. The work is based on the adaptive EnKF (AEnKF) method, which bears a strong resemblance to the hybrid EnKF/three-dimensional variational data assimilation (3D-VAR) method. In the AEnKF, the representativeness of the EnKF ensemble is regularly enhanced with new members generated after back projection of the EnKF analysis residuals to state space using a 3D-VAR [or optimal interpolation (OI)] scheme with a preselected background covariance matrix. The idea here is to reformulate the transformation of the residuals as a 4D-VAR problem, constraining the new member with model dynamics and the previous observations. This should provide more information for the estimation of the new member and reduce dependence of the AEnKF on the assumed stationary background covariance matrix. This is done by integrating the analysis residuals backward in time with the adjoint model. Numerical experiments are performed with the Lorenz-96 model under different scenarios to test the new approach and to evaluate its performance with respect to the EnKF and the hybrid EnKF/3D-VAR. The new method leads to the least root-mean-square estimation errors as long as the linear assumption guaranteeing the stability of the adjoint model holds. It is also found to be less sensitive to choices of the assimilation system inputs and parameters.
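
    The stochastic-EnKF analysis step that both the EnKF and the AEnKF build on can be written in a few lines. The toy 3-variable state, single observation and noise levels below are illustrative, not the Lorenz-96 setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(8)

# Stochastic EnKF analysis step (perturbed observations).
Ne, Nx = 40, 3
truth = np.array([1.0, -0.5, 2.0])
H = np.array([[1.0, 0.0, 0.0]])   # observation operator: first variable
R = np.array([[0.1]])             # observation error variance

ensemble = truth + rng.normal(scale=1.0, size=(Ne, Nx))   # forecast ensemble
y = H @ truth + rng.normal(scale=np.sqrt(R[0, 0]))        # one noisy obs

# Sample forecast covariance and Kalman gain.
Xf = ensemble - ensemble.mean(axis=0)
Pf = Xf.T @ Xf / (Ne - 1)
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)

# Update each member against its own perturbed copy of the observation.
perturbed_obs = y + rng.normal(scale=np.sqrt(R[0, 0]), size=(Ne, 1))
analysis = ensemble + (perturbed_obs - ensemble @ H.T) @ K.T
```

    The analysis ensemble contracts toward the observation in the observed variable, which is the spread reduction the AEnKF's adjoint-generated members are designed to keep representative.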

  16. Estimating Uncertainty of Point-Cloud Based Single-Tree Segmentation with Ensemble Based Filtering

    Directory of Open Access Journals (Sweden)

    Matthew Parkan

    2018-02-01

    Full Text Available Individual tree crown segmentation from Airborne Laser Scanning data is a key problem in forest remote sensing. Focusing on single-layered spruce and fir dominated coniferous forests, this article addresses the problem of directly estimating 3D segment shape uncertainty (i.e., without field/reference surveys) using a probabilistic approach. First, a coarse segmentation (marker-controlled watershed) is applied. Then, the 3D alpha hull and several descriptors are computed for each segment. Based on these descriptors, the alpha hulls are grouped to form ensembles (i.e., groups of similar tree shapes). By examining how frequently regions of a shape occur within an ensemble, it is possible to assign a shape probability to each point within a segment. The shape probability can subsequently be thresholded to obtain improved (filtered) tree segments. Results indicate this approach can be used to produce segmentation reliability maps. A comparison to manually segmented tree crowns also indicates that the approach is able to produce more reliable tree shapes than the initial (unfiltered) segmentation.
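
    The occurrence-frequency idea (how often each region of a shape appears across an ensemble of similar segments) can be sketched on 2-D binary occupancy grids, standing in for the article's 3-D alpha hulls; the grids and noise level are synthetic:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy stand-in for an ensemble of similar tree-crown shapes: binary
# occupancy grids instead of 3-D alpha hulls.
core = np.zeros((20, 20))
core[5:15, 5:15] = 1  # the "true" crown footprint shared by the ensemble

ensemble = []
for _ in range(30):
    spurious = rng.random((20, 20)) < 0.1      # spurious cells per segment
    ensemble.append(np.clip(core + spurious, 0, 1))

# Shape probability: how frequently each cell occurs within the ensemble.
prob = np.mean(ensemble, axis=0)

# Threshold the probability map to obtain a filtered segment.
filtered = prob >= 0.5
```

    Cells shared by all ensemble members keep probability 1, while spurious cells rarely co-occur and fall below the threshold, so the filtered segment recovers the shared footprint.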

  17. Making decisions based on an imperfect ensemble of climate simulators: strategies and future directions

    Science.gov (United States)

    Sanderson, B. M.

    2017-12-01

    The CMIP ensembles represent the most comprehensive source of information available to decision-makers for climate adaptation, yet it is clear that there are fundamental limitations in our ability to treat the ensemble as an unbiased sample of possible future climate trajectories. There is considerable evidence that models are not independent, and increasing complexity and resolution combined with computational constraints prevent a thorough exploration of parametric uncertainty or internal variability. Although more data than ever is available for calibration, the optimization of each model is influenced by institutional priorities, historical precedent and available resources. The resulting ensemble thus represents a miscellany of climate simulators which defy traditional statistical interpretation. Models are in some cases interdependent, but are sufficiently complex that the degree of interdependency is conditional on the application. Configurations have been updated using available observations to some degree, but not in a consistent or easily identifiable fashion. This means that the ensemble cannot be viewed as a true posterior distribution updated by available data, but nor can observational data alone be used to assess individual model likelihood. We assess recent literature for combining projections from an imperfect ensemble of climate simulators. Beginning with our published methodology for addressing model interdependency and skill in the weighting scheme for the 4th US National Climate Assessment, we consider strategies for incorporating process-based constraints on future response, perturbed parameter experiments and multi-model output into an integrated framework. We focus on a number of guiding questions: Is the traditional framework of confidence in projections inferred from model agreement leading to biased or misleading conclusions? 
Can the benefits of upweighting skillful models be reconciled with the increased risk of truth lying outside the
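The skill-and-independence weighting strategy named in this record can be sketched in a few lines. The Gaussian kernels and the radii `D_q`/`D_u` below follow the general form of published model-weighting schemes, but the specific values and distances are illustrative assumptions, not the NCA4 calibration:

```python
import math

def model_weights(skill_dist, pair_dist, D_q=0.8, D_u=0.5):
    """Combine a skill weight with an independence (uniqueness) weight.

    skill_dist[i]  : distance of model i from observations (RMSE-like)
    pair_dist[i][j]: inter-model distance between models i and j
    D_q, D_u       : radii controlling how sharply skill/similarity count
    (illustrative values -- assumptions, not a published calibration)
    """
    n = len(skill_dist)
    # Skill weight: models close to observations get larger weight.
    w_q = [math.exp(-(d / D_q) ** 2) for d in skill_dist]
    # Uniqueness weight: models with many near-duplicates are down-weighted.
    w_u = [1.0 / (1.0 + sum(math.exp(-(pair_dist[i][j] / D_u) ** 2)
                            for j in range(n) if j != i))
           for i in range(n)]
    w = [q * u for q, u in zip(w_q, w_u)]
    total = sum(w)
    return [x / total for x in w]

# Three models: A and B are near-clones; C is independent but slightly less skillful.
skill = [0.2, 0.2, 0.3]
pairs = [[0.0, 0.1, 1.0],
         [0.1, 0.0, 1.0],
         [1.0, 1.0, 0.0]]
w = model_weights(skill, pairs)
# The clones share their weight, so the independent model C gains relative weight.
```

The down-weighting of near-duplicates is the key feature distinguishing this from a pure skill ranking.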

  18. Tropospheric and total ozone columns over Paris (France) measured using medium-resolution ground-based solar-absorption Fourier-transform infrared spectroscopy

    Directory of Open Access Journals (Sweden)

    C. Viatte

    2011-10-01

    Ground-based Fourier-transform infrared (FTIR) solar absorption spectroscopy is a powerful remote sensing technique providing information on the vertical distribution of various atmospheric constituents. This work presents the first evaluation of a mid-resolution ground-based FTIR for measuring tropospheric ozone independently of stratospheric ozone. This is demonstrated using a new atmospheric observatory (named OASIS, for "Observations of the Atmosphere by Solar absorption Infrared Spectroscopy") installed in Créteil (France). The capacity of the technique to separate stratospheric and tropospheric ozone is demonstrated. Daily mean tropospheric ozone columns derived from the Infrared Atmospheric Sounding Interferometer (IASI) and from OASIS measurements are compared for summer 2009, and a good agreement of −5.6 (±16.1)% is observed. Also, a qualitative comparison between in-situ surface ozone measurements and OASIS data reveals OASIS's capacity to monitor seasonal tropospheric ozone variations, as well as ozone pollution episodes around Paris in summer 2009. Two extreme pollution events are identified (on 1 July and 6 August 2009), for which ozone partial columns from OASIS and predictions from a regional air-quality model (CHIMERE) are compared following strict criteria of temporal and spatial coincidence. An average bias of 0.2%, a mean square error deviation of 7.6%, and a correlation coefficient of 0.91 are found between CHIMERE and OASIS, demonstrating the potential of a mid-resolution FTIR instrument in ground-based solar absorption geometry for tropospheric ozone monitoring.

  19. Reducing false-positive incidental findings with ensemble genotyping and logistic regression based variant filtering methods.

    Science.gov (United States)

    Hwang, Kyu-Baek; Lee, In-Hee; Park, Jin-Ho; Hambuch, Tina; Choe, Yongjoon; Kim, MinHyeok; Lee, Kyungjoon; Song, Taemin; Neu, Matthew B; Gupta, Neha; Kohane, Isaac S; Green, Robert C; Kong, Sek Won

    2014-08-01

    As whole genome sequencing (WGS) uncovers variants associated with rare and common diseases, an immediate challenge is to minimize false-positive findings due to sequencing and variant calling errors. False positives can be reduced by combining results from orthogonal sequencing methods, but this is costly. Here, we present variant filtering approaches using logistic regression (LR) and ensemble genotyping to minimize false positives without sacrificing sensitivity. We evaluated the methods using paired WGS datasets of an extended family prepared using two sequencing platforms and a validated set of variants in NA12878. Using LR- or ensemble-genotyping-based filtering, false-negative rates were significantly reduced by 1.1- to 17.8-fold at the same levels of false discovery rates (5.4% for heterozygous and 4.5% for homozygous single nucleotide variants (SNVs); 30.0% for heterozygous and 18.7% for homozygous insertions; 25.2% for heterozygous and 16.6% for homozygous deletions) compared to filtering based on genotype quality scores. Moreover, ensemble genotyping excluded > 98% (105,080 of 107,167) of false positives while retaining > 95% (897 of 937) of true positives in de novo mutation (DNM) discovery in NA12878, and performed better than a consensus method using two sequencing platforms. Our proposed methods were effective in prioritizing phenotype-associated variants, and ensemble genotyping would be essential to minimize false-positive DNM candidates.
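The logistic-regression filtering idea can be sketched with a small self-contained classifier. The features (scaled genotype quality, allele balance), labels and the 0.5 cutoff below are hypothetical stand-ins for illustration, not the feature set or training data used in the paper:

```python
import math

def train_logreg(X, y, lr=0.3, epochs=3000):
    """Plain stochastic-gradient-descent logistic regression (bias + weights)."""
    w = [0.0] * (len(X[0]) + 1)            # w[0] is the intercept
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(a * b for a, b in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                     # gradient of the log-loss w.r.t. z
            w[0] -= lr * g
            for j, a in enumerate(xi):
                w[j + 1] -= lr * g * a
    return w

def predict(w, xi):
    z = w[0] + sum(a * b for a, b in zip(w[1:], xi))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical training set: features = (genotype quality / 100, allele balance);
# label 1 = validated variant, 0 = sequencing artifact.
X = [(0.9, 0.50), (0.8, 0.45), (0.95, 0.55), (0.2, 0.10), (0.3, 0.95), (0.1, 0.05)]
y = [1, 1, 1, 0, 0, 0]
w = train_logreg(X, y)
# Keep a call only if its predicted probability of being real exceeds a cutoff.
keep = [predict(w, xi) > 0.5 for xi in X]
```

The trained weights play the role of the paper's LR filter: calls with low predicted probability are discarded instead of applying a hard genotype-quality threshold.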

  20. Uncertainties in models of tropospheric ozone based on Monte Carlo analysis: Tropospheric ozone burdens, atmospheric lifetimes and surface distributions

    Science.gov (United States)

    Derwent, Richard G.; Parrish, David D.; Galbally, Ian E.; Stevenson, David S.; Doherty, Ruth M.; Naik, Vaishali; Young, Paul J.

    2018-05-01

    Recognising that global tropospheric ozone models have many uncertain input parameters, an attempt has been made to employ Monte Carlo sampling to quantify the uncertainties in model output that arise from global tropospheric ozone precursor emissions and from ozone production and destruction in a global Lagrangian chemistry-transport model. Ninety-eight quasi-randomly sampled Monte Carlo model runs were completed and the uncertainties were quantified in tropospheric burdens and lifetimes of ozone, carbon monoxide and methane, together with the surface distribution and seasonal cycle in ozone. The results have shown a satisfactory degree of convergence and provide a first estimate of the likely uncertainties in tropospheric ozone model outputs. There are likely to be diminishing returns in carrying out many more Monte Carlo runs in order to refine these outputs further. Uncertainties due to model formulation were separately addressed using the results from 14 Atmospheric Chemistry Coupled Climate Model Intercomparison Project (ACCMIP) chemistry-climate models. The 95% confidence ranges surrounding the ACCMIP model burdens and lifetimes for ozone, carbon monoxide and methane were somewhat smaller than for the Monte Carlo estimates. This reflected the situation where the ACCMIP models used harmonised emissions data and differed only in their meteorological data and model formulations, whereas a conscious effort was made to describe the uncertainties in the ozone precursor emissions and in the kinetic and photochemical data in the Monte Carlo runs. Attention was focussed on the model predictions of the ozone seasonal cycles at three marine boundary layer stations: Mace Head, Ireland; Trinidad Head, California; and Cape Grim, Tasmania. Despite comprehensively addressing the uncertainties due to global emissions and ozone sources and sinks, none of the Monte Carlo runs were able to generate seasonal cycles that matched the observations at all three MBL stations. 
Although
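The Monte Carlo approach in this record can be illustrated with a toy steady-state box model, in which the tropospheric burden is production divided by loss frequency. The parameter ranges below are illustrative placeholders, not the assessed uncertainties of the study:

```python
import random

random.seed(42)

# Toy steady-state box model: burden B = P / L, lifetime tau = 1 / L.
# The ranges below are illustrative placeholders, not assessed uncertainties.
P_RANGE = (4000.0, 6000.0)   # ozone production, Tg/yr
L_RANGE = (9.0, 13.0)        # loss frequency, 1/yr

runs = []
for _ in range(98):          # mirroring the 98 sampled runs in the study
    P = random.uniform(*P_RANGE)
    L = random.uniform(*L_RANGE)
    runs.append((P / L, 1.0 / L))   # (burden in Tg, lifetime in yr)

def percentile(values, q):
    """Linear-interpolation percentile of a list, q in [0, 1]."""
    s = sorted(values)
    i = (len(s) - 1) * q
    lo, hi = int(i), min(int(i) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (i - lo)

burdens = [b for b, _ in runs]
ci = (percentile(burdens, 0.025), percentile(burdens, 0.975))
# ci brackets the spread of burden arising from the two uncertain inputs
```

In the real study the sampled parameters number in the dozens (emissions, kinetic and photochemical data) and the "model" is a full chemistry-transport model, but the percentile bookkeeping is the same.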

  1. Tropospheric ozone. Formation, properties, effects. Expert opinion; Ozon in der Troposphaere. Bildung, Eigenschaften, Wirkungen. Gutachten

    Energy Technology Data Exchange (ETDEWEB)

    Elstner, E.F. [Technische Univ. Muenchen (Germany). Lehrstuhl fuer Phytopathologie

    1996-06-01

    The formation and dispersion of tropospheric ozone are discussed only marginally in this expert opinion; the key interest is in the effects of ground level ozone on plants, animals, and humans. The expert opinion is based on an analysis of the available scientific publications. (orig./MG)

  2. Seasonal Changes in Tropospheric Ozone Concentrations over South Korea and Its Link to Ozone Precursors

    Science.gov (United States)

    Jung, H. C.; Moon, B. K.; Wie, J.

    2017-12-01

    Concentrations of tropospheric ozone over South Korea have risen steadily in recent decades, mainly due to rapid industrialization and urbanization in East Asia. To identify the characteristics of tropospheric ozone in South Korea, we fitted a sine function to surface ozone concentration data from 2005 to 2014. Based on the fitted sine curves, we analyzed shifts in the dates on which ozone concentration reached its peak in the calendar year. Ozone monitoring sites can be classified into two types: those where the annual ozone peak kept occurring earlier in the year (E sites) and those where it kept occurring later (L sites). The seasonal analysis shows that over the past decade surface ozone increased more rapidly at E sites than at L sites during springtime, and vice versa during summertime. To explain these differing seasonal trends, we examined the relationship between ozone and its precursors. We found that changes in ground-level ozone concentration in spring and summer are considerably influenced by changes in nitrogen dioxide concentration, and that this is closely linked to the destruction (production) of ozone by nitrogen dioxide in spring (summer). The link between tropospheric ozone and nitrogen dioxide discussed in this study will have to be examined thoroughly through climate-chemistry modeling in the future. Acknowledgements This research was supported by the Korea Ministry of Environment (MOE) as "Climate Change Correspondence Program."
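The sine-fitting step described in this record can be sketched as a least-squares fit of an annual harmonic, with the peak date recovered from the phase. The synthetic daily series below is an assumption for demonstration:

```python
import math

def fit_seasonal_peak(o3, period=365.0):
    """Fit o3[n] ~ m + a*sin(w n) + b*cos(w n) and return the peak day.

    With equally spaced samples over whole periods the sin/cos basis is
    orthogonal, so the least-squares coefficients are simple projections.
    """
    n = len(o3)
    w = 2.0 * math.pi / period
    m = sum(o3) / n
    a = 2.0 / n * sum((y - m) * math.sin(w * t) for t, y in enumerate(o3))
    b = 2.0 / n * sum((y - m) * math.cos(w * t) for t, y in enumerate(o3))
    # a*sin + b*cos = R*sin(wt + phi) with phi = atan2(b, a); peak at wt + phi = pi/2
    t_peak = ((math.pi / 2 - math.atan2(b, a)) / w) % period
    return m, a, b, t_peak

# Synthetic series peaking on day 120 (early May), mean 40 ppb, amplitude 15 ppb
w = 2.0 * math.pi / 365.0
series = [40.0 + 15.0 * math.cos(w * (t - 120)) for t in range(365)]
mean, a, b, peak = fit_seasonal_peak(series)
# The year-to-year drift of `peak` is the quantity behind the E-site / L-site
# classification in the study.
```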

  3. Deterministic Graphical Games Revisited

    DEFF Research Database (Denmark)

    Andersson, Klas Olof Daniel; Hansen, Kristoffer Arnsfelt; Miltersen, Peter Bro

    2012-01-01

    Starting from Zermelo’s classical formal treatment of chess, we trace through history the analysis of two-player win/lose/draw games with perfect information and potentially infinite play. Such chess-like games have appeared in many different research communities, and methods for solving them, such as retrograde analysis, have been rediscovered independently. We then revisit Washburn’s deterministic graphical games (DGGs), a natural generalization of chess-like games to arbitrary zero-sum payoffs. We study the complexity of solving DGGs and obtain an almost-linear time comparison-based algorithm.

  4. Evaluation of ozone profile and tropospheric ozone retrievals from GEMS and OMI spectra

    Directory of Open Access Journals (Sweden)

    J. Bak

    2013-02-01

    South Korea is planning to launch the GEMS (Geostationary Environment Monitoring Spectrometer) instrument on the GeoKOMPSAT (Geostationary Korea Multi-Purpose SATellite) platform in 2018 to monitor tropospheric air pollutants on an hourly basis over East Asia. GEMS will measure backscattered UV radiances covering the 300–500 nm wavelength range with a spectral resolution of 0.6 nm. The main objective of this study is to evaluate ozone profiles and stratospheric column ozone amounts retrieved from simulated GEMS measurements. Ozone Monitoring Instrument (OMI) Level 1B radiances, which cover the spectral range 270–500 nm at a spectral resolution of 0.42–0.63 nm, are used to simulate the GEMS radiances. An optimal-estimation-based ozone profile algorithm is used to retrieve ozone profiles from the simulated GEMS radiances. Firstly, we compare the retrieval characteristics (including averaging kernels, degrees of freedom for signal (DFS), and retrieval error) derived from the 270–330 nm (OMI) and 300–330 nm (GEMS) wavelength ranges. This comparison shows that the effect of not using measurements below 300 nm on retrieval characteristics in the troposphere is insignificant. However, the stratospheric ozone information in terms of DFS decreases greatly from OMI to GEMS, by a factor of ∼2. The number of independent pieces of information available from GEMS measurements is estimated to be 3 on average in the stratosphere, with associated retrieval errors of ~1% in stratospheric column ozone. The difference between OMI and GEMS retrieval characteristics is apparent for ozone layers above ~20 km, with a reduction in sensitivity and an increase in retrieval errors for GEMS. We further investigate whether GEMS can resolve the stratospheric ozone variation observed by the high-vertical-resolution Earth Observing System (EOS) Microwave Limb Sounder (MLS). The differences in stratospheric ozone profiles between GEMS and MLS are comparable to those

  5. Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

    KAUST Repository

    Liu, Bo; Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim

    2015-01-01

    The ensemble Gaussian mixture filter (EnGMF) combines a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, we analyze the influence
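The analysis step named in this record (a KF-like member update, a PF-like weight update, then resampling) can be sketched for a scalar state. This is a simplified illustration in the spirit of an EnGMF step, not the paper's formulation; the identity observation operator and the bandwidth factor `h` are assumptions:

```python
import math, random

random.seed(1)

def engmf_step(members, weights, y, obs_var, h=0.5):
    """One analysis step in the spirit of an ensemble Gaussian mixture filter.

    Each member anchors a Gaussian kernel of variance kvar; members get a
    KF-like update, weights get a PF-like likelihood update, and multinomial
    resampling closes the cycle. Scalar state and identity observation
    operator are simplifying assumptions.
    """
    n = len(members)
    mean = sum(w * x for w, x in zip(weights, members))
    var = sum(w * (x - mean) ** 2 for w, x in zip(weights, members))
    kvar = h * h * var                      # kernel (per-component) covariance
    gain = kvar / (kvar + obs_var)          # scalar Kalman gain
    # KF-like update of every member toward the observation
    updated = [x + gain * (y - x) for x in members]
    # PF-like weight update: likelihood of y under each component
    pred_var = kvar + obs_var
    lik = [math.exp(-0.5 * (y - x) ** 2 / pred_var) for x in members]
    w_new = [w * l for w, l in zip(weights, lik)]
    total = sum(w_new)
    w_new = [w / total for w in w_new]
    # Resample to equal weights for the next forecast cycle
    resampled = random.choices(updated, weights=w_new, k=n)
    return resampled, [1.0 / n] * n

prior = [random.gauss(0.0, 1.0) for _ in range(200)]
post, w = engmf_step(prior, [1.0 / 200] * 200, y=2.0, obs_var=0.25)
post_mean = sum(post) / len(post)
# posterior mean lies between the prior mean (~0) and the observation (2.0)
```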

  6. An assessment of 10-year NOAA aircraft-based tropospheric ozone profiling in Colorado

    Science.gov (United States)

    Leonard, Mark; Petropavlovskikh, Irina; Lin, Meiyun; McClure-Begley, Audra; Johnson, Bryan J.; Oltmans, Samuel J.; Tarasick, David

    2017-06-01

    The Global Greenhouse Gas Reference Network Aircraft Program at NOAA has sampled ozone and other atmospheric trace constituents in North America for over a decade (2005-present). This paper describes the method used to derive a tropospheric ozone climatology from light aircraft measurements made with 2B Technology instruments. Since ozone instruments at most aircraft locations are flown once a month, this raises the question of whether the sampling frequency allows for deriving a climatology that can adequately represent ozone seasonal and vertical variability over various locations. Here we interpret the representativeness of the tropospheric ozone climatology derived from these under-sampled observations using hindcast simulations conducted with the Geophysical Fluid Dynamics Laboratory chemistry-climate model (GFDL-AM3). We first focus on ozone measurements from monthly aircraft profiles over the Front Range of Colorado and weekly ozonesondes launched in Boulder, Colorado. The climatology is presented as monthly 5th, 25th, 50th, 75th, and 95th percentiles, averaged over three vertical layers: lower (1.6-3 km), middle (3-6 km), and upper (6-8 km) troposphere. The aircraft-based climatology is compared to climatologies derived from the nearest ozonesondes launched from Boulder, Colorado, from GFDL-AM3 co-sampled in time with the in-situ observations, and from continuous 3-h GFDL-AM3 samples. Based on these analyses, we recommend the sampling frequency needed to obtain an adequate representation of the ozone climatology in the free troposphere. The 3-h sampled AM3 model is used as a benchmark reference for the under-sampled time series. We find that the minimum number of soundings per month required for all altitude bins (1.6-3, 3-6, and 6-8 km) to sufficiently match the 95% confidence level of the fully sampled monthly ozone means varies between 3 and 5 soundings per month, except in August, which requires a minimum of 6 soundings per month.

  7. Optical remote measurement of ozone in cirrus clouds; Optische Fernmessung von Ozon in Zirruswolken

    Energy Technology Data Exchange (ETDEWEB)

    Reichardt, J. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Physikalische und Chemische Analytik

    1998-12-31

    The subject of this thesis is the theoretical and experimental investigation of the simultaneous optical remote measurement of atmospheric ozone concentration and particle properties. A lidar system was developed that combines the Raman lidar and the polarization lidar with the Raman-DIAL technique. An error analysis is given for ozone measurements in clouds. It turns out that the wavelength dependencies of photon multiple scattering and of the particle extinction coefficient necessitate a correction of the measured ozone concentration. To quantify the cloud influence, model calculations based on particle size distributions of spheres are carried out. The most important experimental result of this thesis is the observed evidence of pronounced minima in the ozone distribution in a humid upper troposphere shortly before and during cirrus observations. Good correlation between ozone-depleted altitude ranges and ice clouds is found. This finding is in contrast to ozone profiles measured in a dry and cloud-free troposphere. (orig.) 151 refs.

  8. A continuous variable quantum deterministic key distribution based on two-mode squeezed states

    International Nuclear Information System (INIS)

    Gong, Li-Hua; Song, Han-Chong; Liu, Ye; Zhou, Nan-Run; He, Chao-Sheng

    2014-01-01

    The distribution of deterministic keys is of significance in personal communications, but the existing continuous variable quantum key distribution protocols can only generate random keys. By exploiting the entanglement properties of two-mode squeezed states, a continuous variable quantum deterministic key distribution (CVQDKD) scheme is presented for handing over the pre-determined key to the intended receiver. The security of the CVQDKD scheme is analyzed in detail from the perspective of information theory. It shows that the scheme can securely and effectively transfer pre-determined keys under ideal conditions. The proposed scheme can resist both the entanglement and beam splitter attacks under a relatively high channel transmission efficiency. (paper)

  9. Estonian total ozone climatology

    Directory of Open Access Journals (Sweden)

    K. Eerme

    The climatological characteristics of total ozone over Estonia based on the Total Ozone Mapping Spectrometer (TOMS) data are discussed. The mean annual cycle during 1979–2000 for the site at 58.3° N and 26.5° E is compiled. Ground-level data available from before the TOMS era, suitably interpolated, have been used for trend detection. During the last two decades, the quasi-biennial oscillation (QBO)-corrected systematic decrease of total ozone from February–April was 3 ± 2.6% per decade. Before 1980, a spring decrease was not detectable. No decreasing trend was found in either the late autumn ozone minimum or in the summer total ozone. The QBO-related signal in the spring total ozone has an amplitude of ±20 DU and a phase lag of 20 months. Between 1987–1992, the lagged covariance between the Singapore wind and the studied total ozone was weak. The spring (April–May) and summer (June–August) total ozone have the best correlation (coefficient 0.7) in the yearly cycle. The correlation between the May and August total ozone is higher than that between the other summer months. Seasonal power spectra of the total ozone variance show preferred periods at an over 95% significance level. Since 1986, during winter/spring, a period of 32 days has prevailed instead of the earlier dominant 26 days. The spectral densities of periods from 4 days to 2 weeks exhibit high interannual variability.

    Key words. Atmospheric composition and structure (middle atmosphere – composition and chemistry; volcanic effects) – Meteorology and atmospheric dynamics (climatology)

  10. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
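The DUA idea of propagating parameter uncertainty through analytically computed derivatives can be shown on a toy model (a stand-in for illustration, not PRESTO-II):

```python
import math

def dua_first_order(value, derivs, sigmas):
    """First-order deterministic uncertainty analysis.

    Given a nominal result, analytic sensitivities dy/dp_i (the kind of
    derivatives systems like GRESS/ADGEN obtain by automated differentiation),
    and parameter standard deviations, propagate to a result uncertainty:
        sigma_y^2 = sum_i (dy/dp_i * sigma_i)^2   (independent parameters)
    """
    var = sum((d * s) ** 2 for d, s in zip(derivs, sigmas))
    return value, math.sqrt(var)

# Toy model y = k * exp(-lam * t) at t = 2, with analytic derivatives.
k, lam, t = 10.0, 0.3, 2.0
y = k * math.exp(-lam * t)
dy_dk = math.exp(-lam * t)             # partial derivative w.r.t. k
dy_dlam = -k * t * math.exp(-lam * t)  # partial derivative w.r.t. lam
y0, sigma_y = dua_first_order(y, [dy_dk, dy_dlam], [0.5, 0.05])
```

Unlike Monte Carlo sampling, a single model evaluation plus its derivatives yields the uncertainty estimate, which is the appeal of the deterministic approach for expensive models.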

  11. Tropospheric Ozone Source Attribution in Southern California during Summer 2014 Based on Lidar Measurements and Model Simulations

    Science.gov (United States)

    Granados Munoz, Maria Jose; Johnson, Matthew S.; Leblanc, Thierry

    2016-01-01

    In the past decades, significant efforts have been made to expand long-term monitoring of tropospheric ozone. A large number of ground-based, airborne and space-borne instruments currently provide valuable data that contribute to a better understanding of the tropospheric ozone budget and its variability. Nonetheless, most of these instruments provide in-situ surface or column-integrated data, whereas vertically resolved measurements are still scarce. Besides ozonesondes and aircraft, lidar measurements have proven to be valuable tropospheric ozone profilers. Using measurements from the tropospheric ozone differential absorption lidar (DIAL) located at the JPL Table Mountain Facility, California, together with GEOS-Chem and GEOS-5 model outputs, the impact of the North American monsoon on tropospheric ozone during summer 2014 is investigated. The influence of monsoon lightning-induced NOx will be evaluated against other sources (e.g. local anthropogenic emissions and the stratosphere), also using complementary data such as backward-trajectory analyses, coincident water vapor lidar measurements, and in-situ surface ozone measurements.

  12. A Two-Timescale Response of the Southern Ocean to Ozone Depletion: Importance of the Background State

    Science.gov (United States)

    Seviour, W.; Waugh, D.; Gnanadesikan, A.

    2016-02-01

    It has recently been suggested that the response of Southern Ocean sea-ice extent to stratospheric ozone depletion is time-dependent: the ocean surface initially cools due to enhanced northward Ekman drift caused by a poleward shift in the eddy-driven jet, and then warms after some time due to upwelling of warm waters from below the mixed layer. It is therefore possible that ozone depletion could act to favor a short-term increase in sea-ice extent. However, many uncertainties remain in understanding this mechanism, with different models showing widely differing timescales and magnitudes of the response. Here, we analyze an ensemble of coupled model simulations with a step-function ozone perturbation. The two-timescale response is present, with an approximately 30-year initial cooling period. The response is further shown to be highly dependent upon the background ocean temperature and salinity stratification, which is influenced by both natural internal variability and the isopycnal eddy mixing parameterization. It is suggested that the majority of inter-model differences in the Southern Ocean response to ozone depletion are caused by differences in stratification.

  13. Wang-Landau Reaction Ensemble Method: Simulation of Weak Polyelectrolytes and General Acid-Base Reactions.

    Science.gov (United States)

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-02-14

    We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
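A minimal sketch of Wang-Landau sampling applied to a toy (de)protonation problem: four independent, energy-degenerate sites, for which the density of states over the number of protonated sites should converge to the binomial coefficients. The move set, flatness criterion and modification-factor schedule are simplified assumptions, far from the production method described above:

```python
import math, random

random.seed(7)

N_SITES = 4
# Wang-Landau estimate of the density of states g(n) over the number n of
# protonated sites in a toy chain of N_SITES independent sites (energies set
# to zero, so g(n) should converge to the binomial coefficient C(N, n)).
ln_g = [0.0] * (N_SITES + 1)
state = [0] * N_SITES
f = 1.0                                     # modification factor
while f > 1e-4:
    hist = [0] * (N_SITES + 1)
    for _ in range(20000):
        i = random.randrange(N_SITES)
        n_old = sum(state)
        state[i] ^= 1                       # propose (de)protonating one site
        n_new = sum(state)
        # Accept with probability min(1, g_old / g_new): flat-histogram rule.
        delta = ln_g[n_old] - ln_g[n_new]
        if math.log(1.0 - random.random()) >= delta:
            state[i] ^= 1                   # reject: undo the flip
        n = sum(state)
        ln_g[n] += f                        # update density of states estimate
        hist[n] += 1                        # and the visit histogram
    if min(hist) > 0.8 * (sum(hist) / len(hist)):   # flatness check
        f /= 2.0                            # refine and start a new stage

# Normalize so g(0) = 1; ratios should approach C(N_SITES, n) = 1, 4, 6, 4, 1.
g = [math.exp(x - ln_g[0]) for x in ln_g]
```

In the actual method the walk runs over reaction states of an interacting polyelectrolyte and the resulting density of states feeds titration curves and free energies; the flat-histogram bookkeeping is the common core.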

  14. AUC-Maximizing Ensembles through Metalearning.

    Science.gov (United States)

    LeDell, Erin; van der Laan, Mark J; Petersen, Maya

    2016-05-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
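The core of an AUC-maximizing metalearner can be sketched with a rank-based AUC and a brute-force search over a single convex-combination weight. The actual Super Learner metalearner solves a richer nonlinear optimization over many base learners, so this one-parameter version is only an illustrative stand-in:

```python
def auc(scores, labels):
    """Rank-based AUC: probability that a random positive outranks a random
    negative (ties count one half)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_blend(p1, p2, labels, grid=101):
    """Pick the convex weight w maximizing AUC of w*p1 + (1-w)*p2."""
    best_w, best_auc = 0.0, -1.0
    for k in range(grid):
        w = k / (grid - 1)
        blended = [w * a + (1 - w) * b for a, b in zip(p1, p2)]
        score = auc(blended, labels)
        if score > best_auc:
            best_w, best_auc = w, score
    return best_w, best_auc

# Base learner 1 ranks the rare positives well; base learner 2 is noisy.
labels = [1, 0, 0, 0, 1, 0, 0, 0]
p1 =     [0.9, 0.2, 0.1, 0.3, 0.8, 0.2, 0.4, 0.1]
p2 =     [0.3, 0.6, 0.2, 0.1, 0.4, 0.9, 0.2, 0.3]
w, a = best_blend(p1, p2, labels)
```

Because AUC is a ranking criterion, the optimal blend can differ from the weights that would minimize, say, squared error, which is exactly why a dedicated AUC-maximizing metalearner can help on imbalanced data.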

  15. Are Bavarian Forests (southern Germany) at risk from ground-level ozone? Assessment using exposure and flux based ozone indices

    International Nuclear Information System (INIS)

    Baumgarten, Manuela; Huber, Christian; Bueker, Patrick; Emberson, Lisa; Dietrich, Hans-Peter; Nunn, Angela J.; Heerdt, Christian; Beudert, Burkhard; Matyssek, Rainer

    2009-01-01

    Exposure- and flux-based indices of O3 risk were compared at 19 forest locations across Bavaria in southern Germany from 2002 to 2005; leaf symptoms on mature beech trees found at these locations were also examined for O3 injury. O3 flux modelling was performed using continuously recorded O3 concentrations in combination with meteorological and soil moisture data collected from Level II forest sites. O3 measurements at nearby rural open-field sites proved appropriate as surrogates in cases where O3 data were lacking at forest sites (with altitude-dependent average differences of about 10% between O3 concentrations). Operational thresholds of biomass loss for both O3 indices were exceeded at the majority of the forest locations, suggesting similar risk under long-term average climate conditions. However, exposure-based indices estimated higher O3 risk during dry years as compared to the flux-based approach. In comparison, minor O3-like leaf injury symptoms were detected only at a few of the forest sites investigated. Relationships between flux-based risk thresholds and tree response need to be established for mature forest stands for validation of predicted growth reductions under the prevailing O3 regimes. - Exposure- and flux-based ozone indices suggest Bavarian forests to be at risk from ozone; the flux-based index offers a means of incorporating stand-specific and ecological variables that influence risk.
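The contrast between exposure- and flux-based indices can be made concrete with a toy day of hourly data. The AOT40 definition (accumulated exceedance over 40 ppb during daylight hours) is standard; the conductance values and the unitless "dose" below are illustrative assumptions:

```python
def aot40(hourly_ppb, daylight):
    """AOT40 (ppb h): accumulated hourly exceedance over 40 ppb during
    daylight hours -- the standard exposure-based index."""
    return sum(max(0.0, c - 40.0) for c, d in zip(hourly_ppb, daylight) if d)

def stomatal_dose(hourly_ppb, gsto):
    """Flux-style index: instantaneous stomatal uptake ~ conductance times
    concentration, accumulated over the same hours (illustrative units,
    no flux threshold applied)."""
    return sum(g * c for g, c in zip(gsto, hourly_ppb))

# One illustrative day: midday ozone peak, but stomata partly closed at midday
# (e.g. drought stress) -- exposure and flux indices then tell different stories.
o3    = [30, 35, 45, 60, 70, 65, 50, 40]            # ppb, daylight hours only
light = [True] * 8
gs    = [0.4, 0.5, 0.5, 0.2, 0.1, 0.1, 0.3, 0.4]    # relative stomatal conductance

exposure = aot40(o3, light)   # counts the midday peak fully
dose = stomatal_dose(o3, gs)  # discounts hours when stomata are closed
```

This is the mechanism behind the record's finding: in dry years stomatal closure caps the actual uptake, so exposure-based indices overstate risk relative to flux-based ones.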

  16. Ensemble hydro-meteorological forecasting for early warning of floods and scheduling of hydropower production

    Science.gov (United States)

    Solvang Johansen, Stian; Steinsland, Ingelin; Engeland, Kolbjørn

    2016-04-01

    on where they are situated and the hydrological regime. There is an improvement in CRPS for all catchments compared to the raw EPS ensembles, persisting up to lead times of 5-7. The postprocessing also improves the MAE of the median of the predictive PDF compared to the median of the raw EPS, but by less than for CRPS, often only up to lead times of 2-3. The streamflow ensembles are to some extent used operationally at Statkraft Energi (hydropower company, Norway) for early warning, risk assessment and decision-making. Presently all forecasts used operationally for short-term scheduling are deterministic, but ensembles are used visually for expert assessment of risk in difficult situations, e.g. where there is a chance of overflow in a reservoir. However, there are plans to incorporate ensembles in the daily scheduling of hydropower production.
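The two verification scores compared in this record can be computed directly from an ensemble. The empirical CRPS below is the standard kernel form; the ensembles themselves are made-up numbers:

```python
def crps(ensemble, obs):
    """Empirical CRPS for one forecast-observation pair:
    mean|x_i - y| - 0.5 * mean|x_i - x_j|  (kernel form of the score)."""
    m = len(ensemble)
    term1 = sum(abs(x - obs) for x in ensemble) / m
    term2 = sum(abs(a - b) for a in ensemble for b in ensemble) / (2 * m * m)
    return term1 - term2

def median(values):
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else 0.5 * (s[mid - 1] + s[mid])

# A raw 5-member streamflow ensemble vs. a postprocessed (recentred, sharper) one.
obs = 100.0
raw  = [70.0, 80.0, 90.0, 95.0, 120.0]
post = [90.0, 95.0, 100.0, 105.0, 110.0]

# CRPS rewards the whole predictive distribution; MAE of the median only its centre.
crps_raw, crps_post = crps(raw, obs), crps(post, obs)
mae_raw, mae_post = abs(median(raw) - obs), abs(median(post) - obs)
```

Because CRPS scores the full distribution, postprocessing can keep improving CRPS at lead times where the median (and hence MAE) has stopped improving, which matches the pattern reported above.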

  17. Ozone et propriétés oxydantes de la troposphère Ozone and Oxidizing Properties of the Troposhere

    Directory of Open Access Journals (Sweden)

    Académie des Sciences Groupe de Travail

    2006-11-01

    Full Text Available Jusqu'à aujourd'hui, le problème de l'augmentation des concentrations d'ozone et des photo-oxydants dans la troposphère est resté moins connu des décideurs que ceux de l'effet de serre additionnel ou de la diminution de la couche d'ozone stratosphérique. Or, les conséquences directes de cette augmentation concernent l'équilibre des écosystèmes végétaux et la santé des populations, altérés par le caractère oxydant puissant de l'ozone, ainsi que les équilibres climatiques, puisque l'ozone est un gaz à effet de serre près de 1000 fois plus actif, à concentration égale, que le gaz carbonique. A l'échelle globale, les observations expérimentales montrent que, depuis le début du xxe siècle, le niveau d'ozone dans l'atmosphère libre a été multiplié par 4 dans l'hémisphère nord, et par près de 2 dans l'hémisphère sud. Cette augmentation résulte de la production directe d'ozone dans la basse atmosphère par photo-oxydation, mettant en jeu oxydes d'azote, composés organiques volatils, monoxyde de carbone, méthane, dont les teneurs croissent rapidement du fait des activités anthropiques. A cette augmentation à l'échelle globale vient s'ajouter un accroissement de la fréquence d'occurrence des épisodes de pollution locale, lié essentiellement à l'accumulation des précurseurs de l'ozone : oxydes d'azote et composés organiques volatils. Les phénomènes de pollution oxydante ne sont plus seulement le fait de quelques grandes agglomérations mais se généralisent à l'ensemble des pays développés ou en voie de développement. Afin de faire face à cette évolution rapide qui, au rythme actuel, conduirait à un doublement des concentrations d'ozone dans la troposphère en moins de 40 ans, des mesures de régulation sont définies dans plusieurs pays et notamment dans l'Union Européenne. Leur respect nécessite d'élaborer des stratégies cohérentes de réduction des précurseurs, fondées sur des mod

  18. Moving beyond the cost-loss ratio: economic assessment of streamflow forecasts for a risk-averse decision maker

    Science.gov (United States)

    Matte, Simon; Boucher, Marie-Amélie; Boucher, Vincent; Fortier Filion, Thomas-Charles

    2017-06-01

    A large effort has been made over the past 10 years to promote the operational use of probabilistic or ensemble streamflow forecasts. Numerous studies have shown that ensemble forecasts are of higher quality than deterministic ones. Many studies also conclude that ensemble rather than deterministic forecasts lead to better decisions in the context of flood mitigation. Hence, it is believed that ensemble forecasts possess a greater economic and social value for both decision makers and the general population. However, the vast majority of, if not all, existing hydro-economic studies rely on a cost-loss ratio framework that assumes a risk-neutral decision maker. To overcome this important flaw, this study borrows from economics and evaluates the economic value of early warning flood systems using the well-known Constant Absolute Risk Aversion (CARA) utility function, which explicitly accounts for the level of risk aversion of the decision maker. This new framework allows for the full exploitation of the information related to a forecast's uncertainty, making it especially suited for the economic assessment of ensemble or probabilistic forecasts. Rather than comparing deterministic and ensemble forecasts, this study focuses on comparing different types of ensemble forecasts. There are multiple ways of assessing and representing forecast uncertainty; consequently, there exist many different means of building an ensemble forecasting system for future streamflow. One such possibility is to dress deterministic forecasts using the statistics of past forecast errors. Such dressing methods are popular among operational agencies because of their simplicity and intuitiveness. Another approach is the use of ensemble meteorological forecasts for precipitation and temperature, which are then provided as inputs to one or many hydrological model(s). 
In this study, three concurrent ensemble streamflow forecasting systems are compared: simple statistically dressed
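
The CARA framework described above can be sketched in a few lines. The following is illustrative only (the monetary outcomes and the risk-aversion value are assumptions, not the authors' setup): it evaluates an ensemble of monetary outcomes through the CARA utility and reports the certainty equivalent, which falls below the ensemble mean for a risk-averse decision maker.

```python
import math

def cara_utility(wealth, alpha):
    """CARA utility U(w) = -exp(-alpha*w)/alpha; alpha > 0 is the risk-aversion level."""
    return -math.exp(-alpha * wealth) / alpha

def expected_utility(outcomes, alpha):
    """Expected utility over equally weighted ensemble members."""
    return sum(cara_utility(w, alpha) for w in outcomes) / len(outcomes)

def certainty_equivalent(outcomes, alpha):
    """Sure wealth level whose utility equals the ensemble's expected utility."""
    return -math.log(-alpha * expected_utility(outcomes, alpha)) / alpha
```

For a degenerate (deterministic) ensemble the certainty equivalent equals the outcome itself; for a spread-out ensemble it is lower, which is how the framework rewards forecasts that reduce uncertainty.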

  19. Ozone sonde cell current measurements and implications for observations of near-zero ozone concentrations in the tropical upper troposphere

    Directory of Open Access Journals (Sweden)

    H. Vömel

    2010-04-01

    Full Text Available Laboratory measurements of the Electrochemical Concentration Cell (ECC) ozone sonde cell current using ozone-free air as well as defined amounts of ozone reveal that background current measurements during sonde preparation are neither constant as a function of time, nor constant as a function of ozone concentration. Using a background current measured at a defined time after exposure to high ozone may often overestimate the real background, leading to artificially low ozone concentrations in the upper tropical troposphere, and may frequently lead to operator-dependent uncertainties. Based on these laboratory measurements an improved cell current to partial pressure conversion is proposed, which removes operator-dependent variability in the background reading and possible artifacts in this measurement. Data from the Central Equatorial Pacific Experiment (CEPEX) have been reprocessed using the improved background treatment based on these laboratory measurements. In the reprocessed data set near-zero ozone events no longer occur. At Samoa, Fiji, Tahiti, and San Cristóbal, nearly all near-zero ozone concentrations occur in soundings with larger background currents. To a large extent, these events are no longer observed in the reprocessed data set using the improved background treatment.
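
The role of the background current is easiest to see in the standard ECC conversion from cell current to ozone partial pressure (Komhyr-type; the constant and units below are the commonly quoted ones and are included here as an assumption for illustration):

```python
def ecc_ozone_partial_pressure(i_meas_uA, i_bg_uA, t_pump_K, t100_s):
    """Convert ECC cell current to ozone partial pressure (mPa).

    Standard ECC conversion: p_O3 = 4.307e-3 * (I_M - I_B) * T_p * t_100,
    with currents in microamperes, pump temperature in kelvin, and t_100
    the time (s) for the pump to push 100 ml of air through the cell.
    """
    return 4.307e-3 * (i_meas_uA - i_bg_uA) * t_pump_K * t100_s
```

Because the background current I_B is subtracted directly from the measured current, any overestimate of I_B depresses the retrieved ozone one-for-one, and in the low-ozone tropical upper troposphere it can even drive the result negative, producing the spurious near-zero events discussed above.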

  20. The Ecophysiology Of A Pinus Ponderosa Ecosystem Exposed To High Tropospheric Ozone: Implications For Stomatal And Non-Stomatal Ozone Fluxes

    Science.gov (United States)

    Fares, S.; McKay, M.; Goldstein, A.

    2008-12-01

    Ecosystems remove ozone from the troposphere through both stomatal and non-stomatal deposition. The portion of ozone taken up through stomata has an oxidative effect causing damage. We used a multi-year dataset to assess the physiological controls over ozone deposition. Environmental parameters, CO2 and ozone fluxes were measured continuously from January 2001 to December 2006 above a ponderosa pine plantation at Blodgett Forest, near Georgetown, California. We studied the dynamics of NEE (Net Ecosystem Exchange, -838 g C m⁻² yr⁻¹) and water evapotranspiration on annual and daily bases. These processes are tightly coupled to stomatal aperture, which also controls ozone fluxes. High ozone concentrations (~100 ppb) were observed during the spring-summer period, with correspondingly high ozone fluxes (~30 μmol m⁻² h⁻¹). During the summer season, a large portion of the total ozone flux was due to non-stomatal processes, and we propose that a plant physiological control, the release of BVOC (Biogenic Volatile Organic Compounds), is mainly responsible. We analyzed the correlations of common ozone exposure metrics based on accumulation of concentrations (AOT40 and SUM0) with ozone fluxes (total, stomatal and non-stomatal). Stomatal flux showed poorer correlation with ozone concentrations than non-stomatal flux during the summer and fall seasons, which largely correspond to the growing period. We therefore suggest that AOT40 and SUM0 are poor predictors of ozone damage and that a physiologically based metric would be more effective.
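
The exposure metrics compared above are simple accumulations over hourly concentrations; a minimal sketch (concentrations in ppb, restriction to daylight hours assumed done beforehand):

```python
def aot40(hourly_ppb):
    """AOT40: Accumulated Ozone exposure over a Threshold of 40 ppb,
    summed over hours whose concentration exceeds the threshold."""
    return sum(c - 40.0 for c in hourly_ppb if c > 40.0)

def sum0(hourly_ppb):
    """SUM0: straight sum of all hourly concentrations (zero threshold)."""
    return sum(hourly_ppb)
```

Both metrics depend only on ambient concentration, which is why they cannot distinguish stomatal uptake (the damaging pathway) from non-stomatal deposition — the limitation the abstract points out.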

  1. Multi sensor reanalysis of total ozone

    Directory of Open Access Journals (Sweden)

    R. J. van der A

    2010-11-01

    Full Text Available A single coherent total ozone dataset, called the Multi Sensor Reanalysis (MSR), has been created from all available ozone column data measured by polar orbiting satellites in the near-ultraviolet Huggins band in the last thirty years. Fourteen total ozone satellite retrieval datasets from the instruments TOMS (on the satellites Nimbus-7 and Earth Probe), SBUV (Nimbus-7, NOAA-9, NOAA-11 and NOAA-16), GOME (ERS-2), SCIAMACHY (Envisat), OMI (EOS-Aura), and GOME-2 (Metop-A) have been used in the MSR. As a first step, a bias correction scheme is applied to all satellite observations, based on independent ground-based total ozone data from the World Ozone and Ultraviolet Data Center. The correction is a function of solar zenith angle, viewing angle, time (trend), and effective ozone temperature. As a second step, data assimilation was applied to create a global dataset of total ozone analyses. The data assimilation method is a sub-optimal implementation of the Kalman filter technique, and is based on a chemical transport model driven by ECMWF meteorological fields. The chemical transport model provides a detailed description of (stratospheric) transport and uses parameterisations for gas-phase and ozone hole chemistry. The MSR dataset results from a 30-year data assimilation run with the 14 corrected satellite datasets as input, and is available on a grid of 1° × 1.5° with a sample frequency of 6 h for the complete time period (1978–2008). The Observation-minus-Analysis (OmA) statistics show that the bias of the MSR analyses is less than 1%, with an RMS standard deviation of about 2%, as compared to the corrected satellite observations used.
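
The analysis step at the heart of any Kalman-filter-based assimilation, shown here in its scalar textbook form (not the MSR's sub-optimal multivariate implementation), blends the model forecast with a bias-corrected observation according to their respective variances:

```python
def kalman_update(x_f, p_f, y, r):
    """Scalar Kalman analysis step.

    x_f, p_f: forecast mean and variance (e.g. total ozone in DU);
    y, r:     observation and its error variance.
    Returns the analysis mean and variance.
    """
    k = p_f / (p_f + r)           # Kalman gain: weight given to the observation
    x_a = x_f + k * (y - x_f)     # analysis mean pulled toward the observation
    p_a = (1.0 - k) * p_f         # analysis variance always shrinks
    return x_a, p_a
```

With equal forecast and observation variances the analysis sits halfway between the two and its variance is halved, which is the sense in which assimilation combines a transport model with fourteen corrected satellite records into one coherent dataset.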

  2. Influence of the ozone profile above Madrid (Spain) on Brewer estimation of ozone air mass factor

    Energy Technology Data Exchange (ETDEWEB)

    Anton, M. [Univ. de Extremadura, Badajoz (Spain). Dept. de Fisica; Evora Univ. (Portugal). Geophysics Centre of Evora (CGE)]; Lopez, M.; Banon, M. [Agencia Estatal de Meteorologia (AEMET), Madrid (Spain)]; Costa, M.J.; Silva, A.M. [Evora Univ. (Portugal). Geophysics Centre of Evora (CGE); Evora Univ. (Portugal). Dept. of Physics]; Serrano, A. [Univ. de Extremadura, Badajoz (Spain). Dept. de Fisica]; Bortoli, D. [Evora Univ. (Portugal). Geophysics Centre of Evora (CGE)]; Vilaplana, J.M. [Instituto Nacional de Tecnica Aeroespacial (INTA), Huelva (Spain). Estacion de Sondeos Atmosfericos 'El Arenosillo']

    2009-07-01

    The methodology used by Brewer spectroradiometers to estimate the ozone column is based on differential absorption spectroscopy. This methodology employs the ozone air mass factor (AMF) to derive the total ozone column from the slant path ozone amount. For calculating the ozone AMF, the Brewer algorithm assumes that the ozone layer is located at a fixed height of 22 km. However, at a real site the ozone presents a certain profile, which varies spatially and temporally depending on the latitude, altitude and dynamical conditions of the atmosphere above the measurement site. In this sense, this work addresses the reliability of the mentioned assumption and analyses the influence of the ozone profiles measured above Madrid (Spain) on the ozone AMF calculations. The approximated ozone AMF used by the Brewer algorithm is compared with simulations obtained using the libRadtran radiative transfer model code. The results show an excellent agreement between the simulated and the approximated AMF values for solar zenith angles lower than 75°. In addition, the relative differences remain lower than 2% at 85°. These good results are mainly due to the fact that the ozone layer altitude assumed constant by the Brewer algorithm for all latitudes can be considered representative of the real ozone profile above Madrid (average value of 21.7±1.8 km). The operational ozone AMF calculations for Brewer instruments are limited, in general, to SZA below 80°. Extending the usable SZA range is especially relevant for Brewer instruments located at high mid-latitudes. (orig.)
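
The single-layer geometric AMF that the Brewer algorithm approximates can be written down directly. The sketch below (the mean Earth radius is an assumption; refraction is neglected) reproduces the insensitivity to layer height reported above: moving the layer from 22 km to the observed mean of 21.7 km changes the AMF by well under 1% at SZA 75°.

```python
import math

R_EARTH_KM = 6370.0  # assumed mean Earth radius

def ozone_amf(sza_deg, layer_height_km=22.0):
    """Geometric ozone air mass factor for a thin layer at a fixed height
    (the single-layer approximation used by the Brewer algorithm).

    The apparent zenith angle at the layer follows from spherical geometry:
    sin(theta') = R/(R+h) * sin(theta); AMF = 1/cos(theta').
    """
    sin_apparent = (R_EARTH_KM / (R_EARTH_KM + layer_height_km)) * math.sin(math.radians(sza_deg))
    return 1.0 / math.sqrt(1.0 - sin_apparent ** 2)
```

The AMF grows steeply toward the horizon, which is why small profile errors only start to matter at very large solar zenith angles.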

  3. Ozonation for source treatment of pharmaceuticals in hospital wastewater - ozone lifetime and required ozone dose

    DEFF Research Database (Denmark)

    Hansen, Kamilla Marie Speht; Spiliotopoulou, Aikaterini; Chhetri, Ravi Kumar

    2016-01-01

    Ozonation aimed at removing pharmaceuticals was studied in an effluent from an experimental pilot system using staged moving bed biofilm reactor (MBBR) tanks for the optimal biological treatment of wastewater from a medical care unit of Aarhus University Hospital. Dissolved organic carbon (DOC) and pH in samples varied considerably, and the effect of these two parameters on ozone lifetime and the efficiency of ozone in removing pharmaceuticals were determined. The pH in the effluent varied from 5.0 to 9.0, resulting in approximately a doubling of the required ozone dose at the highest pH for each pharmaceutical. DOC varied from 6 to 20 mg-DOC/L. The ozone required for removing each pharmaceutical varied linearly with DOC and thus, ozone doses normalized to DOC (specific ozone dose) agreed between water samples (typically within 15%). At neutral pH the specific ozone dose required...

  4. Deterministic methods in radiation transport

    International Nuclear Information System (INIS)

    Rice, A.F.; Roussin, R.W.

    1992-06-01

    The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community

  5. Evaluation of ozone generation and indoor organic compounds removal by air cleaners based on chamber tests

    Science.gov (United States)

    Yu, Kuo-Pin; Lee, Grace Whei-May; Hsieh, Ching-Pei; Lin, Chi-Chi

    2011-01-01

    Ozone can cause many health problems, including exacerbation of asthma, throat irritation, cough, chest ache, shortness of breath, and respiratory infections. Air cleaners are one of the sources of indoor ozone, and thus evaluating the ozone generated by air cleaners is a significant issue. Most proposed evaluation methods are based on chamber tests. However, the adsorption and desorption of ozone on the walls of the test chamber and the deposition of ozone resulting from surface reactions can influence the evaluation results. In this study, we developed a mass balance model that took the adsorption, desorption and deposition of ozone into consideration to evaluate the effective ozone emission rates of six selected air cleaners. The experiments were conducted in a stainless steel chamber with a volume of 11.3 m³ at 25 °C and 60% relative humidity. The adsorption, desorption and deposition rate constants of ozone obtained by fitting the model to the experimental data were k_a = 0.149 ± 0.052 m h⁻¹, k_d = 0.013 ± 0.007 h⁻¹, and k_r = 0.050 ± 0.020 h⁻¹, respectively. The effective ozone emission rates of Air Cleaners No. 1, 2, and 3 ranged between 13,400-24,500 μg h⁻¹, 7190-10,400 μg h⁻¹, and 4880-6560 μg h⁻¹, respectively, and were more stable than those of No. 4, 5, and 6. The effective ozone emission rates of Air Cleaners No. 4, 5, and 6 increased with time of operation, which might be related to the decrease in ozone removal by the "aging" filters installed in these cleaners. The removal of toluene and formaldehyde by these six air cleaners was also evaluated, and the clean air delivery rates (CADRs) of these two pollutants ranged from non-detectable to 0.42 ± 0.08 m³ h⁻¹, and from non-detectable to 0.75 ± 0.07 m³ h⁻¹, respectively. The CADRs showed an insignificant relationship with the effective ozone emission rates. Thus, the removal of toluene and formaldehyde might result from the adsorption on the filters and the
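
One plausible form of such a chamber mass balance — a sketch consistent with the quoted rate constants, not necessarily the authors' exact equations; the wall area and emission rate below are assumptions — couples gas-phase ozone to a wall-adsorbed reservoir:

```python
def simulate_chamber(emission_ug_h, V_m3, A_m2, k_a, k_d, k_r,
                     hours, C0=0.0, m0=0.0, dt=0.001):
    """Forward-Euler integration of a two-compartment chamber balance.

    C (ug/m3): gas-phase ozone;  m (ug/m2): wall-adsorbed ozone.
      dC/dt = E/V - (A/V)*k_a*C + (A/V)*k_d*m   # emission, adsorption, desorption
      dm/dt = k_a*C - (k_d + k_r)*m             # k_r: loss by surface reaction
    """
    C, m = C0, m0
    for _ in range(int(hours / dt)):
        dC = emission_ug_h / V_m3 - (A_m2 / V_m3) * k_a * C + (A_m2 / V_m3) * k_d * m
        dm = k_a * C - (k_d + k_r) * m
        C, m = C + dC * dt, m + dm * dt
    return C, m
```

Fitting the three rate constants to a decay experiment (emission switched off) is what lets the effective emission rate be separated from wall effects in the chamber data.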

  6. Extreme events in total ozone over the Northern mid-latitudes: an analysis based on long-term data sets from five European ground-based stations

    Energy Technology Data Exchange (ETDEWEB)

    Rieder, Harald E. (Inst. for Atmospheric and Climate Science, ETH Zurich, Zurich (Switzerland)), e-mail: hr2302@columbia.edu; Jancso, Leonhardt M. (Inst. for Atmospheric and Climate Science, ETH Zurich, Zurich (Switzerland); Inst. for Meteorology and Geophysics, Univ. of Innsbruck, Innsbruck (Austria)); Di Rocco, Stefania (Inst. for Atmospheric and Climate Science, ETH Zurich, Zurich (Switzerland); Dept. of Geography, Univ. of Zurich, Zurich (Switzerland)) (and others)

    2011-11-15

    We apply methods from extreme value theory to identify extreme events in high (termed EHOs) and low (termed ELOs) total ozone and to describe the distribution tails (i.e. very high and very low values) of five long-term European ground-based total ozone time series. The influence of these extreme events on observed mean values, long-term trends and changes is analysed. The results show a decrease in EHOs and an increase in ELOs during the last decades, and establish that the observed downward trend in column ozone during the 1970s-1990s is strongly dominated by changes in the frequency of extreme events. Furthermore, it is shown that clear 'fingerprints' of atmospheric dynamics (NAO, ENSO) and chemistry [ozone depleting substances (ODSs), polar vortex ozone loss] can be found in the frequency distribution of ozone extremes, even if no attribution is possible from standard metrics (e.g. annual mean values). The analysis complements earlier analysis of the world's longest total ozone record at Arosa, Switzerland, confirming the strong influence of atmospheric dynamics on observed ozone changes. The results provide clear evidence that, in addition to ODSs, volcanic eruptions and strong/moderate ENSO and NAO events had a significant influence on column ozone in the European sector.

  7. Assessing North American multimodel ensemble (NMME) seasonal forecast skill to assist in the early warning of hydrometeorological extremes over East Africa

    Science.gov (United States)

    Shukla, Shraddhanand; Roberts, Jason B.; Hoell, Andrew; Funk, Chris; Robertson, Franklin R.; Kirtman, Benjamin

    2016-01-01

    The skill of North American multimodel ensemble (NMME) seasonal forecasts in East Africa (EA), which encompasses one of the most food and water insecure areas of the world, is evaluated using deterministic, categorical, and probabilistic evaluation methods. The skill is estimated for all three primary growing seasons: March–May (MAM), July–September (JAS), and October–December (OND). It is found that the precipitation forecast skill in this region is generally limited and statistically significant over only a small part of the domain. In the case of MAM (JAS) [OND] season it exceeds the skill of climatological forecasts in parts of equatorial EA (Northern Ethiopia) [equatorial EA] for up to 2 (5) [5] months lead. Temperature forecast skill is generally much higher than precipitation forecast skill (in terms of deterministic and probabilistic skill scores) and statistically significant over a majority of the region. Over the region as a whole, temperature forecasts also exhibit greater reliability than the precipitation forecasts. The NMME ensemble forecasts are found to be more skillful and reliable than the forecast from any individual model. The results also demonstrate that for some seasons (e.g. JAS), the predictability of precipitation signals varies and is higher during certain climate events (e.g. ENSO). Finally, potential room for improvement in forecast skill is identified in some models by comparing homogeneous predictability in individual NMME models with their respective forecast skill.
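
Probabilistic skill of the kind evaluated here is typically summarized with scores such as the Brier skill score; a minimal sketch for a binary event (e.g. above-normal seasonal rainfall) against a climatological reference:

```python
def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts against binary outcomes (0 = perfect)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def brier_skill_score(probs, outcomes, climatology):
    """Skill relative to always forecasting the climatological probability.
    1 = perfect, 0 = no better than climatology, negative = worse."""
    ref = brier_score([climatology] * len(outcomes), outcomes)
    return 1.0 - brier_score(probs, outcomes) / ref
```

A positive score over only a small part of the domain, as reported above for precipitation, means the forecasts beat climatology there and nowhere else.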

  8. A retrospective streamflow ensemble forecast for an extreme hydrologic event: a case study of Hurricane Irene on the Hudson River basin

    Science.gov (United States)

    Saleh, Firas; Ramaswamy, Venkatsundar; Georgas, Nickitas; Blumberg, Alan F.; Pullen, Julie

    2016-07-01

    This paper investigates the uncertainties in hourly streamflow ensemble forecasts for an extreme hydrological event using a hydrological model forced with short-range ensemble weather prediction models. A state-of-the-art, automated, short-term hydrologic prediction framework was implemented using GIS and a regional scale hydrological model (HEC-HMS). The hydrologic framework was applied to the Hudson River basin (~36 000 km²) in the United States using gridded precipitation data from the National Centers for Environmental Prediction (NCEP) North American Regional Reanalysis (NARR) and was validated against streamflow observations from the United States Geological Survey (USGS). Finally, 21 precipitation ensemble members of the latest Global Ensemble Forecast System (GEFS/R) were used to force HEC-HMS and generate a retrospective streamflow ensemble forecast for an extreme hydrological event, Hurricane Irene. The work shows that ensemble stream discharge forecasts provide improved predictions and useful information about associated uncertainties, thus improving the assessment of risks when compared with deterministic forecasts. The uncertainties in weather inputs may result in false warnings and missed river flooding events, reducing the potential to effectively mitigate flood damage. The findings demonstrate how errors in the ensemble median streamflow forecast and time of peak, as well as the ensemble spread (uncertainty), are reduced 48 h pre-event by utilizing the ensemble framework. The methodology and implications of this work benefit efforts of short-term streamflow forecasting at regional scales, notably regarding the peak timing of an extreme hydrologic event when combined with a flood threshold exceedance diagram. Although the modeling framework was implemented on the Hudson River basin, it is flexible and applicable in other parts of the world where atmospheric reanalysis products and streamflow data are available.

  9. Study on deterministic response time design for a class of nuclear Instrumentation and Control systems

    International Nuclear Information System (INIS)

    Chen, Chang-Kuo; Hou, Yi-You; Luo, Cheng-Long

    2012-01-01

    Highlights: ► An efficient design procedure for deterministic response time design of nuclear I and C systems. ► We model the concurrent operations based on sequence diagrams and Petri nets. ► The model can achieve deterministic behavior by using symbolic time representation. ► An illustrative example of the bistable processor logic is given. - Abstract: This study is concerned with a deterministic response time design for computer-based systems in the nuclear industry. In the current approach, Petri nets are used to model the requirements of a system specified with sequence diagrams. Also, linear logic is proposed to characterize state changes in the Petri net model accurately, using symbolic time representation for the purpose of acquiring deterministic behavior. An illustrative example of the bistable processor logic is provided to demonstrate the practicability of the proposed approach.

  10. Design of deterministic interleaver for turbo codes

    International Nuclear Information System (INIS)

    Arif, M.A.; Sheikh, N.M.; Sheikh, A.U.H.

    2008-01-01

    The choice of suitable interleaver for turbo codes can improve the performance considerably. For long block lengths, random interleavers perform well, but for some applications it is desirable to keep the block length shorter to avoid latency. For such applications deterministic interleavers perform better. The performance and design of a deterministic interleaver for short frame turbo codes is considered in this paper. The main characteristic of this class of deterministic interleaver is that their algebraic design selects the best permutation generator such that the points in smaller subsets of the interleaved output are uniformly spread over the entire range of the information data frame. It is observed that the interleaver designed in this manner improves the minimum distance or reduces the multiplicity of first few spectral lines of minimum distance spectrum. Finally we introduce a circular shift in the permutation function to reduce the correlation between the parity bits corresponding to the original and interleaved data frames to improve the decoding capability of MAP (Maximum A Posteriori) probability decoder. Our solution to design a deterministic interleaver outperforms the semi-random interleavers and the deterministic interleavers reported in the literature. (author)
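
One well-known family of algebraically designed deterministic interleavers — not necessarily the authors' construction, shown here only to make the idea concrete — is the quadratic permutation polynomial (QPP) interleaver; the circular shift mirrors the decorrelation step described above:

```python
def qpp_interleaver(n, f1, f2):
    """Quadratic permutation polynomial interleaver: pi(i) = (f1*i + f2*i^2) mod n.

    For suitable coefficients (f1 coprime to n, f2 chosen so the polynomial
    permutes 0..n-1), nearby input indices are spread far apart in the output.
    """
    return [(f1 * i + f2 * i * i) % n for i in range(n)]

def circular_shift(perm, s):
    """Circular shift of a permutation, e.g. to decorrelate the parity streams
    of the original and interleaved data frames."""
    n = len(perm)
    return [(p + s) % n for p in perm]
```

For example, `qpp_interleaver(8, 3, 4)` yields a valid permutation of 0..7; verifying that the output is a permutation is the basic sanity check in any algebraic interleaver design.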

  11. A Link-Based Cluster Ensemble Approach For Improved Gene Expression Data Analysis

    Directory of Open Access Journals (Sweden)

    P.Balaji

    2015-01-01

    Full Text Available Abstract Selecting the most suitable clustering algorithm and configuration for a given set of gene expression data is difficult, because there is a huge number of candidate algorithms and a huge number of gene expression datasets. At present many researchers prefer hierarchical clustering in its various forms, but this is not always optimal. Cluster ensemble research addresses this problem by automatically merging multiple data partitions, drawn from a wide range of different clusterings, to improve both the quality and the robustness of the clustering result. However, many existing ensemble approaches use an association matrix to condense sample-cluster co-occurrence statistics, so relations within the ensemble are encapsulated only at a raw level, while relations existing among clusters are discarded. Recovering these missing associations can greatly expand the capability of ensemble methodologies for microarray data clustering. We propose a general K-means cluster ensemble approach for clustering categorical data into the required number of partitions.
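
The co-occurrence statistics mentioned above are usually condensed into a co-association matrix; a minimal evidence-accumulation sketch (the link-based refinements discussed in the paper go beyond this baseline):

```python
def co_association(partitions, n):
    """n x n matrix whose (i, j) entry is the fraction of base clusterings
    that place samples i and j in the same cluster.

    partitions: list of label lists, one per base clustering, each of length n.
    """
    M = [[0.0] * n for _ in range(n)]
    for labels in partitions:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    M[i][j] += 1.0
    p = float(len(partitions))
    return [[v / p for v in row] for row in M]
```

A consensus partition is then obtained by clustering this matrix (e.g. with hierarchical or K-means-style methods), which is where ensemble approaches diverge in how much inter-cluster structure they preserve.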

  12. Effect of Pulse Width on Oxygen-fed Ozonizer

    Science.gov (United States)

    Okada, Sho; Wang, Douyan; Namihira, Takao; Katsuki, Sunao; Akiyama, Hidenori

    Though general ozonizers based on silent discharge (barrier discharge) have been used to supply ozone in many industrial settings, some problems remain, such as improving the ozone yield. In this work, ozone was generated by pulsed discharge in order to improve the characteristics of ozone generation. Pulse width is known to have a strong effect on the improvement of energy efficiency in exhaust gas processing. In this paper, the effect of pulse duration on ozone generation by pulsed discharge in oxygen is reported.

  13. Symmetric minimally entangled typical thermal states, grand-canonical ensembles, and the influence of the collapse bases

    Science.gov (United States)

    Binder, Moritz; Barthel, Thomas

    Based on DMRG, strongly correlated quantum many-body systems at finite temperatures can be simulated by sampling over a certain class of pure matrix product states (MPS) called minimally entangled typical thermal states (METTS). Here, we show how symmetries of the system can be exploited to considerably reduce computation costs in the METTS algorithm. While this is straightforward for the canonical ensemble, we introduce a modification of the algorithm to efficiently simulate the grand-canonical ensemble under utilization of symmetries. In addition, we construct novel symmetry-conserving collapse bases for the transitions in the Markov chain of METTS that improve the speed of convergence of the algorithm by reducing autocorrelations.

  14. Non deterministic finite automata for power systems fault diagnostics

    Directory of Open Access Journals (Sweden)

    LINDEN, R.

    2009-06-01

    Full Text Available This paper introduces an application based on finite non-deterministic automata for power systems diagnosis. Automata for the simpler faults are presented and the proposed system is compared with an established expert system.
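
The core of a non-deterministic automaton diagnoser is tracking the set of states reachable after each observed symbol; a minimal subset-simulation sketch (the states and alarm alphabet below are hypothetical, not taken from the paper):

```python
def nfa_accepts(transitions, start, accepting, symbols):
    """Simulate an NFA by tracking the set of currently reachable states.

    transitions: dict mapping (state, symbol) -> tuple of successor states.
    Returns True if any reachable state after consuming `symbols` is accepting.
    """
    current = {start}
    for sym in symbols:
        current = {t for s in current for t in transitions.get((s, sym), ())}
    return bool(current & accepting)
```

In a diagnosis setting, symbols would be observed alarms or breaker events, and each accepting state would correspond to a diagnosed fault class.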

  15. Conservative strategy-based ensemble surrogate model for optimal groundwater remediation design at DNAPLs-contaminated sites

    Science.gov (United States)

    Ouyang, Qi; Lu, Wenxi; Lin, Jin; Deng, Wenbing; Cheng, Weiguo

    2017-08-01

    The surrogate-based simulation-optimization techniques are frequently used for optimal groundwater remediation design. When this technique is used, surrogate errors caused by surrogate-modeling uncertainty may lead to generation of infeasible designs. In this paper, a conservative strategy that pushes the optimal design into the feasible region was used to address surrogate-modeling uncertainty. In addition, chance-constrained programming (CCP) was adopted to compare with the conservative strategy in addressing this uncertainty. Three methods, multi-gene genetic programming (MGGP), Kriging (KRG) and support vector regression (SVR), were used to construct surrogate models for a time-consuming multi-phase flow model. To improve the performance of the surrogate model, ensemble surrogates were constructed based on combinations of different stand-alone surrogate models. The results show that: (1) the surrogate-modeling uncertainty was successfully addressed by the conservative strategy, which means that this method is promising for addressing surrogate-modeling uncertainty. (2) The ensemble surrogate model that combines MGGP with KRG showed the most favorable performance, which indicates that this ensemble surrogate can utilize both stand-alone surrogate models to improve the performance of the surrogate model.
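
A common way to combine stand-alone surrogates into an ensemble surrogate is inverse-error weighting; this is a generic sketch, and the paper's exact weighting scheme may differ:

```python
def ensemble_predict(predictions, cv_errors):
    """Combine stand-alone surrogate predictions at one input point.

    predictions: prediction of each surrogate (e.g. MGGP, KRG, SVR);
    cv_errors:   each surrogate's cross-validation error (must be > 0).
    Surrogates with smaller error receive larger weight.
    """
    weights = [1.0 / e for e in cv_errors]
    total = sum(weights)
    return sum(w * p for w, p in zip(weights, predictions)) / total
```

With equal errors this reduces to a plain average; as one surrogate's error shrinks, the ensemble prediction converges to that surrogate, which is how combining MGGP with KRG can outperform either alone.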

  16. Ozone modeling for compliance planning: A synopsis of "The Use of Photochemical Air Quality Models for Evaluating Emission Control Strategies: A Synthesis Report"

    International Nuclear Information System (INIS)

    Blanchard, C.L.

    1992-12-01

    The 1990 federal Clean Air Act Amendments require that many nonattainment areas use gridded, photochemical air quality models to develop compliance plans for meeting the ambient ozone standard. Both industry and regulatory agencies will need to consider explicitly the strengths and limitations of the models. Photochemical air quality models constitute the principal tool available for evaluating the relative effectiveness of alternative emission control strategies. Limitations in the utility of modeling results stem from the uncertainty and bias of predictions for modeled episodes, possible compensating errors, limitations in the number of modeled episodes, and incompatibility between deterministic model predictions and the statistical form of the air quality standard for ozone. If emissions estimates (including naturally produced "biogenic" emissions) are accurate, intensive aerometric data are available, and an evaluation of performance (including diagnostic evaluations) is successfully completed, gridded photochemical air quality models can determine (1) the types of emission controls - VOC, NOx, or both - that would be most effective for reducing ozone concentrations, and (2) the approximate magnitudes - to within about 20-40% - of the estimated ozone reductions.

  17. Deterministic Properties of Serially Connected Distributed Lag Models

    Directory of Open Access Journals (Sweden)

    Piotr Nowak

    2013-01-01

    Full Text Available Distributed lag models are an important tool in modeling dynamic systems in economics. In the analysis of composite forms of such models, the component models are ordered in parallel (with the same independent variable) and/or in series (where the independent variable is also the dependent variable in the preceding model). This paper presents an analysis of certain deterministic properties of composite distributed lag models composed of component distributed lag models arranged in sequence, and of their asymptotic properties in particular. The models considered are in discrete form. Even though the paper focuses on deterministic properties of distributed lag models, the derivations are based on analytical tools commonly used in probability theory, such as probability distributions and the central limit theorem. (original abstract)
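
For models in series, the composite lag distribution is the convolution of the component lag weights, which is why central-limit-type arguments apply to long chains: repeated convolution of normalized weights produces an increasingly bell-shaped composite lag. A minimal sketch:

```python
def compose_lags(w1, w2):
    """Lag weights of two distributed-lag models connected in series.

    The composite distribution is the discrete convolution of the component
    weight sequences; if each sums to 1, so does the result, and mean lags add.
    """
    out = [0.0] * (len(w1) + len(w2) - 1)
    for i, a in enumerate(w1):
        for j, b in enumerate(w2):
            out[i + j] += a * b
    return out
```

For example, chaining two models that each split their effect equally over lags 0 and 1 yields composite weights (0.25, 0.5, 0.25), already peaked in the middle.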

  18. Protein folding simulations by generalized-ensemble algorithms.

    Science.gov (United States)

    Yoda, Takao; Sugita, Yuji; Okamoto, Yuko

    2014-01-01

    In the protein folding problem, conventional simulations in physical statistical mechanical ensembles, such as the canonical ensemble with fixed temperature, face a great difficulty. This is because there exist a huge number of local-minimum-energy states in the system, and conventional simulations tend to get trapped in these states, giving wrong results. Generalized-ensemble algorithms are based on artificial unphysical ensembles and overcome the above difficulty by performing random walks in potential energy, volume, and other physical quantities or their corresponding conjugate parameters such as temperature, pressure, etc. The advantage of generalized-ensemble simulations lies in the fact that they not only avoid getting trapped in states of energy local minima but also allow the calculation of physical quantities as functions of temperature or other parameters from a single simulation run. In this article we review generalized-ensemble algorithms. Four examples, the multicanonical algorithm, the replica-exchange method, the replica-exchange multicanonical algorithm, and the multicanonical replica-exchange method, are described in detail. Examples of their applications to the protein folding problem are presented.
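
The replica-exchange move at the core of the second method above swaps configurations between temperatures with a Metropolis criterion; a minimal sketch (β = 1/kT; the injectable `rng` is just for testability):

```python
import math
import random

def exchange_accept(beta_i, beta_j, E_i, E_j, rng=random.random):
    """Metropolis criterion for swapping replicas i and j in replica exchange.

    Accept the swap with probability min(1, exp(delta)), where
    delta = (beta_i - beta_j) * (E_i - E_j).
    """
    delta = (beta_i - beta_j) * (E_i - E_j)
    return delta >= 0 or rng() < math.exp(delta)
```

Swaps that hand a lower-energy configuration to the colder replica are always accepted; the others are accepted stochastically, which lets low-temperature replicas escape local minima by borrowing high-temperature exploration.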

  19. Evaluation of ensemble precipitation forecasts generated through post-processing in a Canadian catchment

    Directory of Open Access Journals (Sweden)

    S. K. Jha

    2018-03-01

    Full Text Available Flooding in Canada is often caused by heavy rainfall during the snowmelt period. Hydrologic forecast centers rely on precipitation forecasts obtained from numerical weather prediction (NWP) models to force hydrological models for streamflow forecasting. The uncertainties in raw quantitative precipitation forecasts (QPFs) are enhanced by physiography and orography effects over a diverse landscape, particularly in the western catchments of Canada. A Bayesian post-processing approach called rainfall post-processing (RPP), developed in Australia (Robertson et al., 2013; Shrestha et al., 2015), has been applied to assess its forecast performance in a Canadian catchment. Raw QPFs obtained from two sources, the Global Ensemble Forecasting System (GEFS) Reforecast 2 project, from the National Centers for Environmental Prediction, and the Global Deterministic Forecast System (GDPS), from Environment and Climate Change Canada, are used in this study. The study period from January 2013 to December 2015 covered a major flood event in Calgary, Alberta, Canada. Post-processed results show that the RPP is able to remove the bias and reduce the errors of both GEFS and GDPS forecasts. Ensembles generated from the RPP reliably quantify the forecast uncertainty.

  20. Ensembl 2004.

    Science.gov (United States)

    Birney, E; Andrews, D; Bevan, P; Caccamo, M; Cameron, G; Chen, Y; Clarke, L; Coates, G; Cox, T; Cuff, J; Curwen, V; Cutts, T; Down, T; Durbin, R; Eyras, E; Fernandez-Suarez, X M; Gane, P; Gibbins, B; Gilbert, J; Hammond, M; Hotz, H; Iyer, V; Kahari, A; Jekosch, K; Kasprzyk, A; Keefe, D; Keenan, S; Lehvaslaiho, H; McVicker, G; Melsopp, C; Meidl, P; Mongin, E; Pettett, R; Potter, S; Proctor, G; Rae, M; Searle, S; Slater, G; Smedley, D; Smith, J; Spooner, W; Stabenau, A; Stalker, J; Storey, R; Ureta-Vidal, A; Woodwark, C; Clamp, M; Hubbard, T

    2004-01-01

    The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organize biology around the sequences of large genomes. It is a comprehensive and integrated source of annotation of large genome sequences, available via interactive website, web services or flat files. As well as being one of the leading sources of genome annotation, Ensembl is an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements. The facilities of the system range from sequence analysis to data storage and visualization and installations exist around the world both in companies and at academic sites. With a total of nine genome sequences available from Ensembl and more genomes to follow, recent developments have focused mainly on closer integration between genomes and external data.

  1. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of a deterministic sensitivity analysis capability in existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions and obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by standard statistical methods and by the DUA method. The DUA method gives a more accurate result based upon only two model executions, compared with fifty executions in the statistical case.
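
    The contrast between the two approaches can be illustrated with first-order, derivative-based variance propagation on a toy flow model; the function and parameter values below are invented for illustration and are not the paper's borehole model:

```python
import numpy as np

def flow(k, h):
    # toy stand-in for a borehole-type flow model (not the GRESS/ADGEN model)
    return k * h / (1.0 + 0.1 * k)

# parameter means and standard deviations (hypothetical)
mu_k, sd_k = 5.0, 0.5
mu_h, sd_h = 10.0, 1.0

# deterministic approach: a couple of extra runs give finite-difference derivatives
eps = 1e-5
dk = (flow(mu_k + eps, mu_h) - flow(mu_k - eps, mu_h)) / (2 * eps)
dh = (flow(mu_k, mu_h + eps) - flow(mu_k, mu_h - eps)) / (2 * eps)
var_dua = dk**2 * sd_k**2 + dh**2 * sd_h**2   # first-order variance propagation

# statistical approach: many Monte Carlo executions of the model
rng = np.random.default_rng(1)
samples = flow(rng.normal(mu_k, sd_k, 50_000), rng.normal(mu_h, sd_h, 50_000))
print(round(var_dua, 2), round(float(samples.var()), 2))
```

    For a mildly nonlinear model the two variance estimates agree closely, but the derivative-based route needs only a handful of model executions rather than thousands.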

  2. Numerical Approach to Spatial Deterministic-Stochastic Models Arising in Cell Biology.

    Science.gov (United States)

    Schaff, James C; Gao, Fei; Li, Ye; Novak, Igor L; Slepchenko, Boris M

    2016-12-01

    Hybrid deterministic-stochastic methods provide an efficient alternative to a fully stochastic treatment of models which include components with disparate levels of stochasticity. However, general-purpose hybrid solvers for spatially resolved simulations of reaction-diffusion systems are not widely available. Here we describe fundamentals of a general-purpose spatial hybrid method. The method generates realizations of a spatially inhomogeneous hybrid system by appropriately integrating capabilities of a deterministic partial differential equation solver with a popular particle-based stochastic simulator, Smoldyn. Rigorous validation of the algorithm is detailed, using a simple model of calcium 'sparks' as a testbed. The solver is then applied to a deterministic-stochastic model of spontaneous emergence of cell polarity. The approach is general enough to be implemented within biologist-friendly software frameworks such as Virtual Cell.

  3. Proving Non-Deterministic Computations in Agda

    Directory of Open Access Journals (Sweden)

    Sergio Antoy

    2017-01-01

    Full Text Available We investigate proving properties of Curry programs using Agda. First, we address the functional correctness of Curry functions that, apart from some syntactic and semantic differences, lie in the intersection of the two languages. Second, we use Agda to model non-deterministic functions with two distinct, competing approaches to incorporating the non-determinism. The first approach eliminates non-determinism by considering the set of all non-deterministic values produced by an application. The second approach encodes every non-deterministic choice that the application could perform. We consider our initial experiment a success. Although proving properties of programs is a notoriously difficult task, the functional logic paradigm does not seem to add any significant layer of difficulty or complexity to the task.

  4. Ensemble forecasting for renewable energy applications - status and current challenges for their generation and verification

    Science.gov (United States)

    Pinson, Pierre

    2016-04-01

    The operational management of renewable energy generation in power systems and electricity markets requires forecasts in various forms, e.g., deterministic or probabilistic, continuous or categorical, depending upon the decision process at hand. Besides, such forecasts may also be necessary at various spatial and temporal scales, from high temporal resolutions (on the order of minutes) and very localized for an offshore wind farm, to coarser temporal resolutions (hours) and covering a whole country for day-ahead power scheduling problems. As of today, weather predictions are a common input to forecasting methodologies for renewable energy generation. Since for most decision processes optimal decisions can only be made by accounting for forecast uncertainties, ensemble predictions and density forecasts are increasingly seen as the product of choice. After discussing some of the basic approaches to obtaining ensemble forecasts of renewable power generation, it will be argued that generating space-time trajectories of renewable power production may or may not necessitate post-processing of ensemble forecasts for relevant weather variables. Example approaches and test case applications will be covered, e.g., looking at the Horns Rev offshore wind farm in Denmark, or gridded forecasts for the whole of continental Europe. Finally, we will illustrate some of the limitations of current frameworks for forecast verification, which make it difficult to fully assess the quality of post-processing approaches used to obtain renewable energy predictions.

  5. Recent achievements of the neo-deterministic seismic hazard assessment in the CEI region

    International Nuclear Information System (INIS)

    Panza, G.F.; Vaccari, F.; Kouteva, M.

    2008-03-01

    A review of the recent achievements of the innovative neo-deterministic approach for seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purpose of earthquake engineering, based on the deterministic seismic wave propagation modelling at different scales - regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contribution of the earthquake source and seismic wave propagation media to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study some examples focused on CEI region concerning both regional seismic hazard assessment and seismic microzonation of the selected metropolitan areas are shown. (author)

  6. An Ensemble Approach to Knowledge-Based Intensity-Modulated Radiation Therapy Planning

    Directory of Open Access Journals (Sweden)

    Jiahan Zhang

    2018-03-01

    Full Text Available Knowledge-based planning (KBP utilizes experienced planners’ knowledge embedded in prior plans to estimate optimal achievable dose volume histogram (DVH of new cases. In the regression-based KBP framework, previously planned patients’ anatomical features and DVHs are extracted, and prior knowledge is summarized as the regression coefficients that transform features to organ-at-risk DVH predictions. In our study, we find that in different settings, different regression methods work better. To improve the robustness of KBP models, we propose an ensemble method that combines the strengths of various linear regression models, including stepwise, lasso, elastic net, and ridge regression. In the ensemble approach, we first obtain individual model prediction metadata using in-training-set leave-one-out cross validation. A constrained optimization is subsequently performed to decide individual model weights. The metadata is also used to filter out impactful training set outliers. We evaluate our method on a fresh set of retrospectively retrieved anonymized prostate intensity-modulated radiation therapy (IMRT cases and head and neck IMRT cases. The proposed approach is more robust against small training set size, wrongly labeled cases, and dosimetric inferior plans, compared with other individual models. In summary, we believe the improved robustness makes the proposed method more suitable for clinical settings than individual models.

  7. AUC-based biomarker ensemble with an application on gene scores predicting low bone mineral density.

    Science.gov (United States)

    Zhao, X G; Dai, W; Li, Y; Tian, L

    2011-11-01

    The area under the receiver operating characteristic (ROC) curve (AUC), long regarded as a 'golden' measure for the predictiveness of a continuous score, has propelled the need to develop AUC-based predictors. However, AUC-based ensemble methods are rather scant, largely due to the fact that the associated objective function is neither continuous nor concave. Indeed, there is no reliable numerical algorithm for identifying the optimal combination of a set of biomarkers to maximize the AUC, especially when the number of biomarkers is large. We have proposed a novel AUC-based statistical ensemble method for combining multiple biomarkers to differentiate a binary response of interest. Specifically, we propose to replace the non-continuous and non-convex AUC objective function by a convex surrogate loss function, whose minimizer can be efficiently identified. With the established framework, the lasso and other regularization techniques enable feature selection. Extensive simulations have demonstrated the superiority of the new methods over the existing methods. The proposal has been applied to a gene expression dataset to construct gene expression scores to differentiate elderly women with low bone mineral density (BMD) from those with normal BMD. The AUCs of the resulting scores in the independent test dataset have been satisfactory. Aiming to directly maximize the AUC, the proposed AUC-based ensemble method provides an efficient means of generating a stable combination of multiple biomarkers, which is especially useful under high-dimensional settings. lutian@stanford.edu. Supplementary data are available at Bioinformatics online.
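
    The core trick, replacing the discontinuous AUC objective with a convex pairwise surrogate, can be sketched as follows; the biomarkers are synthetic and a pairwise logistic surrogate stands in for the paper's loss:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 5
X = rng.normal(size=(n, p))                              # synthetic biomarkers
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)

# all positive-negative pair differences: AUC counts the sign of w @ diff
pos, neg = X[y == 1], X[y == 0]
diff = (pos[:, None, :] - neg[None, :, :]).reshape(-1, p)

w = np.zeros(p)
for _ in range(500):                                     # gradient descent on the
    s = diff @ w                                         # smooth logistic surrogate
    grad = -(diff * (1 / (1 + np.exp(s)))[:, None]).mean(axis=0)
    w -= 0.5 * grad

score = X @ w
auc = (score[y == 1][:, None] > score[y == 0][None, :]).mean()
print(auc)
```

    The empirical AUC of the combined score is computed directly from the pairwise comparisons, the quantity the surrogate loss approximates smoothly.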

  8. Distinguishing deterministic and noise components in ELM time series

    International Nuclear Information System (INIS)

    Zvejnieks, G.; Kuzovkov, V.N.

    2004-01-01

    Full text: One of the main problems in preliminary data analysis is distinguishing the deterministic and noise components in experimental signals. For example, in plasma physics the question arises when analyzing edge localized modes (ELMs): is the observed ELM behavior governed by complicated deterministic chaos or just by random processes? We have developed a methodology based on financial engineering principles which allows us to distinguish deterministic and noise components. We extended the linear autoregression method (AR) by including non-linearity (the NAR method). As a starting point we chose the non-linearity in polynomial form; however, the NAR method can be extended to any other type of non-linear function. The best polynomial model describing the experimental ELM time series was selected using the Bayesian Information Criterion (BIC). With this method we analyzed type I ELM behavior in a subset of ASDEX Upgrade shots. The results obtained indicate that a linear AR model can describe the ELM behavior. In turn, this means that type I ELM behavior is of a relaxation or random type.
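
    A minimal version of the AR-versus-polynomial-NAR comparison with BIC selection, run on a synthetic linear AR(1) series rather than ELM data:

```python
import numpy as np

def fit_bic(X, y):
    """Least-squares fit plus the Bayesian Information Criterion."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    n, k = len(y), X.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(3)
x = np.zeros(300)
for t in range(1, 300):                 # synthetic series: linear AR(1) plus noise
    x[t] = 0.8 * x[t - 1] + rng.normal(0, 0.1)

y, lag = x[1:], x[:-1]
bic_ar = fit_bic(np.column_stack([np.ones_like(lag), lag]), y)
bic_nar = fit_bic(np.column_stack([np.ones_like(lag), lag, lag**2, lag**3]), y)
print(bic_ar < bic_nar)   # BIC prefers the simpler linear AR model
```

    Because the series is genuinely linear, the cubic NAR terms buy almost no extra fit while paying the BIC complexity penalty, mirroring the paper's conclusion that a linear AR model suffices for type I ELMs.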

  9. An Efficient Ensemble Learning Method for Gene Microarray Classification

    Directory of Open Access Journals (Sweden)

    Alireza Osareh

    2013-01-01

    Full Text Available Gene microarray analysis and classification have demonstrated an effective way for the diagnosis of diseases and cancers. However, it has also been revealed that basic classification techniques have intrinsic drawbacks in achieving accurate gene classification and cancer diagnosis. On the other hand, classifier ensembles have received increasing attention in various applications. Here, we address the gene classification issue using the RotBoost ensemble methodology. This method is a combination of the Rotation Forest and AdaBoost techniques which in turn preserves both desirable features of an ensemble architecture, that is, accuracy and diversity. To select a concise subset of informative genes, 5 different feature selection algorithms are considered. To assess the efficiency of RotBoost, other non-ensemble/ensemble techniques including Decision Trees, Support Vector Machines, Rotation Forest, AdaBoost, and Bagging are also deployed. Experimental results have revealed that the combination of the fast correlation-based feature selection method with the ICA-based RotBoost ensemble is highly effective for gene classification. In fact, the proposed method can create ensemble classifiers which outperform not only the classifiers produced by conventional machine learning but also the classifiers generated by two widely used ensemble learning methods, that is, Bagging and AdaBoost.
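
    A rough sketch of the rotation-plus-boosting idea: a single PCA rotation per bootstrap sample stands in for Rotation Forest's per-feature-subset rotations, and the data are synthetic rather than a real microarray:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# synthetic stand-in for a (dimension-reduced) microarray dataset
X, y = make_classification(n_samples=200, n_features=30, n_informative=10,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
votes = np.zeros((len(yte), 2))
for _ in range(10):                      # 10 rotated AdaBoost ensemble members
    boot = rng.choice(len(Xtr), len(Xtr), replace=True)   # bootstrap sample
    rot = PCA().fit(Xtr[boot])                            # rotation of the feature space
    clf = AdaBoostClassifier(n_estimators=50, random_state=0)
    clf.fit(rot.transform(Xtr), ytr)
    votes += clf.predict_proba(rot.transform(Xte))        # accumulate soft votes

acc = (votes.argmax(axis=1) == yte).mean()
print(acc)
```

    The rotations inject diversity among the boosted members, while AdaBoost keeps each member accurate, the two properties the abstract credits RotBoost with combining.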

  10. Defense meteorological satellite measurements of total ozone

    International Nuclear Information System (INIS)

    Lovill, J.E.; Ellis, J.S.; Luther, F.M.; Sullivan, R.J.; Weichel, R.L.

    1992-01-01

    A multichannel filter radiometer (MFR) on Defense Meteorological Satellites (DMS) that measured total ozone on a global scale from March 1977 to February 1980 is described. The total ozone data measured by the MFR were compared with total ozone data taken by surface-based Dobson spectrophotometers. When comparisons were made for five months, the Dobson spectrophotometer measured 2-5% more total ozone than the MFR. Comparisons between the Dobson spectrophotometer and the MFR showed a reduced RMS difference as the comparisons were made in closer proximity. A Northern Hemisphere total ozone distribution obtained from MFR data is presented.

  11. Ozone Layer Protection

    Science.gov (United States)

    The stratospheric ozone layer is Earth's "sunscreen", protecting the surface from harmful ultraviolet radiation. This website addresses stratospheric ozone protection, as distinct from ground-level ozone pollution, and covers programs such as the GreenChill Partnership and the Responsible Appliance Disposal (RAD) Program.

  12. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension of work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied: a coastal release (SF6) and an inland release (Freon), the latter consisting of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the computing resources required for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where spatial and temporal differences due to interior valley heating lead to inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data are assimilated into the simulation, and enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.
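
    The EnKF analysis step used in such data assimilation can be sketched for a toy two-state system; this is a generic stochastic EnKF update, not SRNL's implementation:

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err, H):
    """Stochastic EnKF analysis step: nudge each member toward a
    perturbed observation using the ensemble-estimated covariance."""
    n_ens = ensemble.shape[1]
    rng = np.random.default_rng(0)
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)             # observed-space anomalies
    P_hh = HXp @ HXp.T / (n_ens - 1) + obs_err * np.eye(len(obs))
    P_xh = X @ HXp.T / (n_ens - 1)
    K = P_xh @ np.linalg.inv(P_hh)                        # Kalman gain
    perturbed = obs[:, None] + rng.normal(0, np.sqrt(obs_err), (len(obs), n_ens))
    return ensemble + K @ (perturbed - HX)

# 2-state toy system, observing only the first component
truth = np.array([1.0, -2.0])
H = np.array([[1.0, 0.0]])
prior = np.random.default_rng(1).normal(0, 1, (2, 40))    # 40-member prior
post = enkf_update(prior, H @ truth, 0.01, H)
print(abs(post[0].mean() - 1.0) < abs(prior[0].mean() - 1.0))  # pulled toward truth
```

    The observed component of the posterior ensemble mean moves toward the measurement, while the unobserved component is adjusted through the ensemble-estimated cross-covariance.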

  13. Deterministic nanoparticle assemblies: from substrate to solution

    International Nuclear Information System (INIS)

    Barcelo, Steven J; Gibson, Gary A; Yamakawa, Mineo; Li, Zhiyong; Kim, Ansoon; Norris, Kate J

    2014-01-01

    The deterministic assembly of metallic nanoparticles is an exciting field with many potential benefits. Many promising techniques have been developed, but challenges remain, particularly for the assembly of larger nanoparticles which often have more interesting plasmonic properties. Here we present a scalable process combining the strengths of top down and bottom up fabrication to generate deterministic 2D assemblies of metallic nanoparticles and demonstrate their stable transfer to solution. Scanning electron and high-resolution transmission electron microscopy studies of these assemblies suggested the formation of nanobridges between touching nanoparticles that hold them together so as to maintain the integrity of the assembly throughout the transfer process. The application of these nanoparticle assemblies as solution-based surface-enhanced Raman scattering (SERS) materials is demonstrated by trapping analyte molecules in the nanoparticle gaps during assembly, yielding uniformly high enhancement factors at all stages of the fabrication process. (paper)

  14. Mean-field Ensemble Kalman Filter

    KAUST Repository

    Law, Kody; Tembine, Hamidou; Tempone, Raul

    2015-01-01

    A proof of convergence of the standard EnKF generalized to non-Gaussian state space models is provided. A density-based deterministic approximation of the mean-field limiting EnKF (MFEnKF) is proposed, consisting of a PDE solver and a quadrature

  15. Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view

    Science.gov (United States)

    Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.

    2015-12-01

    Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODE) as the elementary part of the system. To perform the analyses, scenes of study are generated based on ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics is chosen to be chaotic in order to ensure sensitivity to initial conditions, one fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose for both its topological and algebraic simplicity [3,4]. Two cases are thus considered: the chaotic oscillators composing the scene of study are taken either independent, or in phase synchronization. Scale behaviors are analyzed considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, but with a modified parameterization [6]. This is shown based on numerical analyses. It is then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow, J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E, 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons

  16. Data assimilation in integrated hydrological modeling using ensemble Kalman filtering

    DEFF Research Database (Denmark)

    Rasmussen, Jørn; Madsen, H.; Jensen, Karsten Høgh

    2015-01-01

    Groundwater head and stream discharge is assimilated using the ensemble transform Kalman filter in an integrated hydrological model with the aim of studying the relationship between the filter performance and the ensemble size. In an attempt to reduce the required number of ensemble members...... and estimating parameters requires a much larger ensemble size than just assimilating groundwater head observations. However, the required ensemble size can be greatly reduced with the use of adaptive localization, which by far outperforms distance-based localization. The study is conducted using synthetic data...

  17. Handbook of EOQ inventory problems stochastic and deterministic models and applications

    CERN Document Server

    Choi, Tsan-Ming

    2013-01-01

    This book explores deterministic and stochastic EOQ-model-based problems and applications, presenting technical analyses of single-echelon EOQ-model-based inventory problems, and applications of the EOQ model to multi-echelon supply chain inventory analysis.
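
    The deterministic EOQ model at the heart of such analyses reduces to the classic square-root formula Q* = sqrt(2DK/h), where D is the demand rate, K the fixed ordering cost and h the unit holding cost; the figures below are illustrative:

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classic deterministic economic order quantity: Q* = sqrt(2DK/h)."""
    return math.sqrt(2 * demand_rate * order_cost / holding_cost)

# D = 1200 units/year, K = $50 per order, h = $3 per unit per year
q = eoq(1200, 50, 3)
print(round(q))  # → 200
```

    At Q* the annual ordering cost DK/Q equals the annual holding cost hQ/2, which is what makes the formula the benchmark against which the book's stochastic extensions are measured.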

  18. Bioactive focus in conformational ensembles: a pluralistic approach

    Science.gov (United States)

    Habgood, Matthew

    2017-12-01

    Computational generation of conformational ensembles is key to contemporary drug design. Selecting the members of the ensemble that will approximate the conformation most likely to bind to a desired target (the bioactive conformation) is difficult, given that the potential energy usually used to generate and rank the ensemble is a notoriously poor discriminator between bioactive and non-bioactive conformations. In this study an approach to generating a focused ensemble is proposed in which each conformation is assigned multiple rankings based not just on potential energy but also on solvation energy, hydrophobic or hydrophilic interaction energy, radius of gyration, and on a statistical potential derived from Cambridge Structural Database data. The best ranked structures derived from each system are then assembled into a new ensemble that is shown to be better focused on bioactive conformations. This pluralistic approach is tested on ensembles generated by the Molecular Operating Environment's Low Mode Molecular Dynamics module, and by the Cambridge Crystallographic Data Centre's conformation generator software.
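
    The pluralistic re-ranking can be sketched as taking, for each criterion, the top-ranked conformations and pooling them into a focused ensemble; random scores stand in for the actual energy and geometry terms:

```python
import numpy as np

rng = np.random.default_rng(4)
n_conf, n_criteria = 50, 5
# one row per conformation, one column per ranking criterion (potential energy,
# solvation energy, interaction energy, radius of gyration, statistical potential)
scores = rng.normal(size=(n_conf, n_criteria))

ranks = scores.argsort(axis=0).argsort(axis=0)   # rank within each criterion (0 = best)
top_k = 5
focused = set()
for c in range(n_criteria):                      # union of the best-ranked per criterion
    focused |= set(np.where(ranks[:, c] < top_k)[0])

print(sorted(focused))
```

    No single criterion has to discriminate bioactive conformations on its own; the focused ensemble only needs each criterion to rank some of them highly.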

  19. Comparative study of ozonized olive oil and ozonized sunflower oil

    Directory of Open Access Journals (Sweden)

    Díaz Maritza F.

    2006-01-01

    Full Text Available In this study, ozonized olive and sunflower oils are compared chemically and microbiologically. These oils were introduced into a reactor with bubbling ozone gas in a water bath at room temperature until they solidified. The peroxide, acidity and iodine values, along with antimicrobial activity, were determined. The effects of ozonization on the fatty acid composition of these oils were analyzed using gas-liquid chromatography. An increase in peroxide and acidity values was observed in both oils, but the increases were higher in ozonized sunflower oil. The iodine value was zero in ozonized olive oil, whereas in ozonized sunflower oil it was 8.8 g iodine per 100 g. The antimicrobial activity was similar for both ozonized oils except for the Minimum Bactericidal Concentration against Pseudomonas aeruginosa. The fatty acid composition of both ozonized oils showed a gradual decrease in unsaturated fatty acids (C18:1, C18:2) with a gradual increase in ozone dose.

  20. Deterministic chaotic dynamics of Raba River flow (Polish Carpathian Mountains)

    Science.gov (United States)

    Kędra, Mariola

    2014-02-01

    Is the underlying dynamics of river flow random or deterministic? If it is deterministic, is it deterministic chaotic? This issue is still controversial. The application of several independent methods, techniques and tools to daily river flow data gives consistent, reliable and clear-cut answers to the question. The outcomes indicate that the investigated discharge dynamics is not random but deterministic. Moreover, the results completely confirm the nonlinear deterministic chaotic nature of the studied process. The research was conducted on daily discharge from two selected gauging stations of a mountain river in southern Poland, the Raba River.

  1. Comparison of ensemble post-processing approaches based on empirical and dynamical error modelling of rainfall-runoff model forecasts

    Science.gov (United States)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Électricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and improve the human expertise of hydrological forecasts, which is essential to synthesize the available information coming from different meteorological and hydrological models and human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where a large amount of human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (called the empirical approach) is based on statistical modelling of the empirical errors of perfect forecasts, using streamflow sub-samples stratified by quantile class and lead time. The second method (called the dynamical approach) is based on streamflow sub-samples stratified by quantile class, streamflow variation and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure a good post-processing of the hydrological ensemble, allowing a good improvement of the reliability, skill and sharpness of ensemble forecasts.
    The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological
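
    The empirical approach can be sketched as stratifying archived forecast errors by streamflow quantile class and resampling them to dress a new deterministic forecast; the data are synthetic and the lead-time stratification is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
fcst = rng.gamma(3.0, 10.0, n)               # archived deterministic forecasts
obs = fcst * rng.lognormal(0, 0.2, n)        # matching observations (synthetic)

# split the archive into flow-quantile classes and store the empirical
# multiplicative errors observed in each class
n_classes = 4
edges = np.quantile(fcst, np.linspace(0, 1, n_classes + 1))
cls = np.clip(np.searchsorted(edges, fcst) - 1, 0, n_classes - 1)
errors = {c: obs[cls == c] / fcst[cls == c] for c in range(n_classes)}

def dress(new_fcst, n_members=50):
    """Dress a deterministic forecast with the empirical errors of its class."""
    c = int(np.clip(np.searchsorted(edges, new_fcst) - 1, 0, n_classes - 1))
    return new_fcst * rng.choice(errors[c], n_members)

ens = dress(30.0)
print(len(ens), float(ens.min()) > 0)
```

    Stratifying by quantile class lets high flows inherit the (typically larger) errors historically seen at high flows, which is what makes the dressed ensemble reliable across the discharge range.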

  2. Deterministic chaos in the pitting phenomena of passivable alloys

    International Nuclear Information System (INIS)

    Hoerle, Stephane

    1998-01-01

    It was shown that electrochemical noise recorded in stable pitting conditions exhibits deterministic (even chaotic) features. The occurrence of deterministic behaviors depends on the material/solution severity. Thus, electrolyte composition ([Cl - ]/[NO 3 - ] ratio, pH), passive film thickness or alloy composition can change the deterministic features. A single pit is sufficient to observe deterministic behaviors. The electrochemical noise signals are non-stationary, which is a hint of a change with time in the pit behavior (propagation speed or mean). Modifications of the electrolyte composition reveal transitions between random and deterministic behaviors. Spontaneous transitions between deterministic behaviors with different features (bifurcations) are also evidenced. Such bifurcations illuminate various routes to chaos. The routes to chaos and the features of the chaotic signals suggest models (both continuous and discontinuous models are proposed) of the electrochemical mechanisms inside a pit that describe the experimental behaviors and the effect of the various parameters quite well. The analysis of the chaotic behaviors of a pit leads to a better understanding of propagation mechanisms and gives tools for pit monitoring. (author) [fr

  3. Enhancing COSMO-DE ensemble forecasts by inexpensive techniques

    Directory of Open Access Journals (Sweden)

    Zied Ben Bouallègue

    2013-02-01

    Full Text Available COSMO-DE-EPS, a convection-permitting ensemble prediction system based on the high-resolution numerical weather prediction model COSMO-DE, has been pre-operational since December 2010, providing probabilistic forecasts which cover Germany. This ensemble system comprises 20 members based on variations of the lateral boundary conditions, the physics parameterizations and the initial conditions. In order to increase the sample size in a computationally inexpensive way, COSMO-DE-EPS is combined with alternative ensemble techniques: the neighborhood method and the time-lagged approach. Their impact on the quality of the resulting probabilistic forecasts is assessed. Objective verification is performed over a six-month period; scores based on the Brier score and its decomposition are shown for June 2011. The combination of the ensemble system with the alternative approaches improves probabilistic forecasts of precipitation, in particular for high precipitation thresholds. Moreover, combining COSMO-DE-EPS with only the time-lagged approach improves the skill of area probabilities for precipitation and does not deteriorate the skill of 2 m temperature and wind gust forecasts.
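
    The time-lagged approach can be sketched as pooling members from successive model runs, aligned to a common valid time, to enlarge the sample at no extra model cost; the member counts and gamma-distributed precipitation below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)
n_members, n_lead = 20, 24
# precipitation forecasts from the latest run and the two previous (lagged) runs
latest = rng.gamma(1.5, 2.0, (n_members, n_lead))
lag1 = rng.gamma(1.5, 2.0, (n_members, n_lead + 3))[:, 3:]   # shift to a common valid time
lag2 = rng.gamma(1.5, 2.0, (n_members, n_lead + 6))[:, 6:]

lagged_ens = np.vstack([latest, lag1, lag2])       # 60 members, no extra model runs
prob_exceed = (lagged_ens > 5.0).mean(axis=0)      # P(precip > 5 mm) at each lead time
print(lagged_ens.shape, prob_exceed.shape)
```

    The older runs are slightly staler but free, so the exceedance probabilities are estimated from three times as many members, which is what sharpens the Brier-score components for rare, high-threshold events.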

  4. Ozone's impact on public health: Contributions from indoor exposures to ozone and products of ozone-initiated chemistry

    DEFF Research Database (Denmark)

    Weschler, Charles J.

    2006-01-01

    OBJECTIVES: The associations between ozone concentrations measured outdoors and both morbidity and mortality may be partially due to indoor exposures to ozone and ozone-initiated oxidation products. In this article I examine the contributions of such indoor exposures to overall ozone-related heal...

  5. Ensemble downscaling in coupled solar wind-magnetosphere modeling for space weather forecasting.

    Science.gov (United States)

    Owens, M J; Horbury, T S; Wicks, R T; McGregor, S L; Savani, N P; Xiong, M

    2014-06-01

    Advanced forecasting of space weather requires simulation of the whole Sun-to-Earth system, which necessitates driving magnetospheric models with the outputs from solar wind models. This presents a fundamental difficulty, as the magnetosphere is sensitive to both large-scale solar wind structures, which can be captured by solar wind models, and small-scale solar wind "noise," which is far below typical solar wind model resolution and results primarily from stochastic processes. Following similar approaches in terrestrial climate modeling, we propose statistical "downscaling" of solar wind model results prior to their use as input to a magnetospheric model. As magnetospheric response can be highly nonlinear, this is preferable to downscaling the results of magnetospheric modeling. To demonstrate the benefit of this approach, we first approximate solar wind model output by smoothing solar wind observations with an 8 h filter, then add small-scale structure back in through the addition of random noise with the observed spectral characteristics. Here we use a very simple parameterization of noise based upon the observed probability distribution functions of solar wind parameters, but more sophisticated methods will be developed in the future. An ensemble of results from the simple downscaling scheme are tested using a model-independent method and shown to add value to the magnetospheric forecast, both improving the best estimate and quantifying the uncertainty. We suggest a number of features desirable in an operational solar wind downscaling scheme. Key points: solar wind models must be downscaled in order to drive magnetospheric models; ensemble downscaling is more effective than deterministic downscaling; the magnetosphere responds nonlinearly to small-scale solar wind fluctuations.
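
    The downscaling recipe, smooth to model resolution and then add back noise resampled from the observed small-scale residuals, can be sketched as follows; the hourly series is synthetic and a running mean stands in for the solar wind model:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0, 24 * 27)                      # hourly samples, one solar rotation
obs = 400 + 50 * np.sin(2 * np.pi * t / (24 * 9)) + rng.normal(0, 30, len(t))

# stand-in for solar wind model output: an 8 h running mean of the observations
kernel = np.ones(8) / 8
model = np.convolve(obs, kernel, mode="same")

# downscale: add back noise resampled from the observed small-scale residuals
residuals = obs - model
ensemble = model + rng.choice(residuals, (20, len(t)))   # 20 downscaled members

print(float(np.std(ensemble[0])) > float(np.std(model)))  # small-scale variance restored
```

    Each member carries plausible small-scale structure on top of the resolved large-scale signal, so driving a nonlinear magnetospheric model with the whole ensemble samples the response that a single smoothed input would miss.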

  6. Decoding of Human Movements Based on Deep Brain Local Field Potentials Using Ensemble Neural Networks

    Directory of Open Access Journals (Sweden)

    Mohammad S. Islam

    2017-01-01

Full Text Available Decoding neural activities related to voluntary and involuntary movements is fundamental to understanding human brain motor circuits and neuromotor disorders and can lead to the development of neuromotor prosthetic devices for neurorehabilitation. This study explores using recorded deep brain local field potentials (LFPs) for robust movement decoding of Parkinson's disease (PD) and dystonia patients. The LFP data from voluntary movement activities such as left and right hand index finger clicking were recorded from patients who underwent surgeries for implantation of deep brain stimulation electrodes. Movement-related LFP signal features were extracted by computing instantaneous power related to motor response in different neural frequency bands. An innovative neural network ensemble classifier has been proposed and developed for accurate prediction of finger movement and its forthcoming laterality. The ensemble classifier contains three base neural network classifiers, namely, feedforward, radial basis, and probabilistic neural networks. The majority voting rule is used to fuse the decisions of the three base classifiers to generate the final decision of the ensemble classifier. The overall decoding performance reaches a level of agreement (kappa value) of about 0.729±0.16 for decoding movement from the resting state and about 0.671±0.14 for decoding left and right visually cued movements.
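The fusion step described here (a majority vote over three base networks) can be sketched as follows; the trial decisions and the tie-breaking convention are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

def majority_vote(decisions):
    """Fuse base-classifier decisions by the majority rule; ties fall
    back to the first classifier (an illustrative convention)."""
    label, n = Counter(decisions).most_common(1)[0]
    return label if n > len(decisions) // 2 else decisions[0]

# Invented decisions of the three base networks (feedforward, RBF,
# probabilistic) over four trials: 'L'/'R' = left/right index-finger click
trials = [('L', 'L', 'R'), ('R', 'R', 'R'), ('L', 'R', 'R'), ('R', 'L', 'L')]
fused = [majority_vote(t) for t in trials]   # ['L', 'R', 'R', 'L']
```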

  7. The influence of the new ECMWF Ensemble Prediction System resolution on wind power forecast accuracy and uncertainty estimation

    DEFF Research Database (Denmark)

    Alessandrini, S.; Pinson, Pierre; Sperati, S.

    2011-01-01

The importance of wind power forecasting (WPF) is nowadays widely recognized because it is a useful tool for reducing grid-integration problems and facilitating energy trading. While prediction accuracy is fundamental to these purposes, it has also become clear...... by a recalibration procedure that allowed a more uniform distribution to be obtained among the 51 intervals, making the ensemble spread large enough to include the observations. After that, it was observed that the EPS power spread seemed to be sufficiently correlated with the error calculated on the deterministic forecast...

  8. Ozone modeling within plasmas for ozone sensor applications

    OpenAIRE

    Arshak, Khalil; Forde, Edward; Guiney, Ivor

    2007-01-01

peer-reviewed Ozone (O3) is potentially hazardous to human health, and accurate prediction and measurement of this gas are essential in addressing its associated health risks. This paper presents theory to predict the levels of ozone concentration emitted from a dielectric barrier discharge (DBD) plasma for ozone sensing applications. This is done by postulating the kinetic model for ozone generation, with a DBD plasma at atmospheric pressure in air, in the form of a set of rate equations....
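As a hedged illustration of the rate-equation approach, a single net generation/destruction balance can be integrated with forward Euler. The real kinetic model couples many species and reactions; `G` and `kd` below are invented placeholder values, not actual DBD rate constants.

```python
# Forward-Euler integration of one toy rate equation for the ozone
# number density n in a DBD reactor:  dn/dt = G - kd*n
G = 1.0e16      # cm^-3 s^-1, net generation rate (illustrative)
kd = 50.0       # s^-1, effective destruction rate (illustrative)
dt = 1.0e-4     # s, integration step
n = 0.0
for _ in range(2000):          # 0.2 s of discharge time
    n += dt * (G - kd * n)

steady_state = G / kd          # analytic fixed point of the toy model
```

After a few destruction time constants the numerical solution settles onto the analytic steady state, which is the quantity an ozone sensor calibration would target.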

  9. Creating ensembles of decision trees through sampling

    Science.gov (United States)

    Kamath, Chandrika; Cantu-Paz, Erick

    2005-08-30

    A system for decision tree ensembles that includes a module to read the data, a module to sort the data, a module to evaluate a potential split of the data according to some criterion using a random sample of the data, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method is based on statistical sampling techniques and includes the steps of reading the data; sorting the data; evaluating a potential split according to some criterion using a random sample of the data, splitting the data, and combining multiple decision trees in ensembles.
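The core idea, evaluating a candidate split on a random sample rather than the full data set, can be sketched with one-feature decision stumps. Everything below (the misclassification criterion, the stump representation, the toy data) is a simplified stand-in for the patented system.

```python
import random

def best_split(data, sample_size, rng):
    """Choose a stump threshold by evaluating the split criterion on a
    random sample of the data, not on the full set."""
    sample = rng.sample(data, min(sample_size, len(data)))
    def err(t):  # misclassifications of the rule "x <= t means True"
        return sum((x <= t) != y for x, y in sample)
    return min(sorted({x for x, _ in sample}), key=err)

def tree_ensemble(data, n_trees, sample_size, seed=0):
    rng = random.Random(seed)
    return [best_split(data, sample_size, rng) for _ in range(n_trees)]

def predict(stumps, x):
    """Combine the stumps in the ensemble by majority vote."""
    return sum(x <= t for t in stumps) * 2 > len(stumps)

# One-feature toy data: label is True exactly when x <= 0.5
data = [(i / 10, i / 10 <= 0.5) for i in range(11)]
stumps = tree_ensemble(data, n_trees=25, sample_size=7)
```

Each stump sees a different random sample, so the ensemble members differ even though the training set is fixed.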

  10. The probabilistic approach and the deterministic licensing procedure

    International Nuclear Information System (INIS)

    Fabian, H.; Feigel, A.; Gremm, O.

    1984-01-01

If safety goals are given, the creativity of the engineers is necessary to transform the goals into actual safety measures. That is, safety goals are not sufficient for the derivation of a safety concept; the licensing process asks "What does a safe plant look like?" The answer cannot be given by a probabilistic procedure, but needs definite deterministic statements; the conclusion is that the licensing process needs a deterministic approach. The probabilistic approach should be used in a complementary role in cases where deterministic criteria are not complete, not detailed enough or not consistent, and where additional arguments for decision making in connection with the adequacy of a specific measure are necessary. But also in these cases the probabilistic answer has to be transformed into a clear deterministic statement. (orig.)

  11. Global budget of tropospheric ozone: Evaluating recent model advances with satellite (OMI), aircraft (IAGOS), and ozonesonde observations

    Science.gov (United States)

    Hu, Lu; Jacob, Daniel J.; Liu, Xiong; Zhang, Yi; Zhang, Lin; Kim, Patrick S.; Sulprizio, Melissa P.; Yantosca, Robert M.

    2017-10-01

    The global budget of tropospheric ozone is governed by a complicated ensemble of coupled chemical and dynamical processes. Simulation of tropospheric ozone has been a major focus of the GEOS-Chem chemical transport model (CTM) over the past 20 years, and many developments over the years have affected the model representation of the ozone budget. Here we conduct a comprehensive evaluation of the standard version of GEOS-Chem (v10-01) with ozone observations from ozonesondes, the OMI satellite instrument, and MOZAIC-IAGOS commercial aircraft for 2012-2013. Global validation of the OMI 700-400 hPa data with ozonesondes shows that OMI maintained persistent high quality and no significant drift over the 2006-2013 period. GEOS-Chem shows no significant seasonal or latitudinal bias relative to OMI and strong correlations in all seasons on the 2° × 2.5° horizontal scale (r = 0.88-0.95), improving on previous model versions. The most pronounced model bias revealed by ozonesondes and MOZAIC-IAGOS is at high northern latitudes in winter-spring where the model is 10-20 ppbv too low. This appears to be due to insufficient stratosphere-troposphere exchange (STE). Model updates to lightning NOx, Asian anthropogenic emissions, bromine chemistry, isoprene chemistry, and meteorological fields over the past decade have overall led to gradual increase in the simulated global tropospheric ozone burden and more active ozone production and loss. From simulations with different versions of GEOS meteorological fields we find that tropospheric ozone in GEOS-Chem v10-01 has a global production rate of 4960-5530 Tg a-1, lifetime of 20.9-24.2 days, burden of 345-357 Tg, and STE of 325-492 Tg a-1. Change in the intensity of tropical deep convection between these different meteorological fields is a major factor driving differences in the ozone budget.
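The quoted budget terms can be checked for rough steady-state consistency: with sources taken as chemical production plus STE, lifetime ≈ burden / sources. The mid-range numbers below are picked for illustration from the ranges in the abstract; this is an arithmetic sanity check, not the model's actual lifetime diagnostic.

```python
# Steady-state bookkeeping for the tropospheric ozone budget
P = 5250.0        # Tg/a, chemical production (mid-range, illustrative)
STE = 400.0       # Tg/a, stratosphere-troposphere exchange (illustrative)
burden = 350.0    # Tg, tropospheric ozone burden (illustrative)

lifetime_days = burden / (P + STE) * 365.25   # ~22.6 days
```

The result falls inside the 20.9-24.2 day lifetime range reported across the different GEOS meteorological fields.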

  12. First Reprocessing of Southern Hemisphere Additional Ozonesondes (SHADOZ) Ozone Profiles (1998-2016): 2. Comparisons With Satellites and Ground-Based Instruments

    Science.gov (United States)

    Thompson, Anne M.; Witte, Jacquelyn C.; Sterling, Chance; Jordan, Allen; Johnson, Bryan J.; Oltmans, Samuel J.; Fujiwara, Masatomo; Vömel, Holger; Allaart, Marc; Piters, Ankie; Coetzee, Gert J. R.; Posny, Françoise; Corrales, Ernesto; Diaz, Jorge Andres; Félix, Christian; Komala, Ninong; Lai, Nga; Ahn Nguyen, H. T.; Maata, Matakite; Mani, Francis; Zainal, Zamuna; Ogino, Shin-ya; Paredes, Francisco; Penha, Tercio Luiz Bezerra; da Silva, Francisco Raimundo; Sallons-Mitro, Sukarni; Selkirk, Henry B.; Schmidlin, F. J.; Stübi, Rene; Thiongo, Kennedy

    2017-12-01

    The Southern Hemisphere ADditional OZonesonde (SHADOZ) network was assembled to validate a new generation of ozone-monitoring satellites and to better characterize the vertical structure of tropical ozone in the troposphere and stratosphere. Beginning with nine stations in 1998, more than 7,000 ozone and P-T-U profiles are available from 14 SHADOZ sites that have operated continuously for at least a decade. We analyze ozone profiles from the recently reprocessed SHADOZ data set that is based on adjustments for inconsistencies caused by varying ozonesonde instruments and operating techniques. First, sonde-derived total ozone column amounts are compared to the overpasses from the Earth Probe/Total Ozone Mapping Spectrometer, Ozone Monitoring Instrument, and Ozone Mapping and Profiler Suite satellites that cover 1998-2016. Second, characteristics of the stratospheric and tropospheric columns are examined along with ozone structure in the tropical tropopause layer (TTL). We find that (1) relative to our earlier evaluations of SHADOZ data, in 2003, 2007, and 2012, sonde-satellite total ozone column offsets at 12 stations are 2% or less, a significant improvement; (2) as in prior studies, the 10 tropical SHADOZ stations, defined as within ±19° latitude, display statistically uniform stratospheric column ozone, 229 ± 3.9 DU (Dobson units), and a tropospheric zonal wave-one pattern with a 14 DU mean amplitude; (3) the TTL ozone column, which is also zonally uniform, masks complex vertical structure, and this argues against using satellites for lower stratospheric ozone trends; and (4) reprocessing has led to more uniform stratospheric column amounts across sites and reduced bias in stratospheric profiles. As a consequence, the uncertainty in total column ozone now averages 5%.

  13. A comparative study of breast cancer diagnosis based on neural network ensemble via improved training algorithms.

    Science.gov (United States)

    Azami, Hamed; Escudero, Javier

    2015-08-01

Breast cancer is one of the most common types of cancer in women all over the world. Early diagnosis of this kind of cancer can significantly increase the chances of long-term survival. Since diagnosis of breast cancer is a complex problem, neural network (NN) approaches have been used as a promising solution. Considering the low speed of the back-propagation (BP) algorithm to train a feed-forward NN, we consider a number of improved NN training algorithms for the Wisconsin breast cancer dataset: BP with momentum, BP with adaptive learning rate, BP with adaptive learning rate and momentum, Polak-Ribière conjugate gradient algorithm (CGA), Fletcher-Reeves CGA, Powell-Beale CGA, scaled CGA, resilient BP (RBP), one-step secant and quasi-Newton methods. An NN ensemble, which is a learning paradigm to combine a number of NN outputs, is used to improve the accuracy of the classification task. Results demonstrate that NN ensemble-based classification methods have better performance than single-NN algorithms. The highest overall average accuracy, 97.68%, is obtained by an NN ensemble trained by RBP under the 50%-50% training-test evaluation method.
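Why an ensemble of networks can beat a single one: under the strong (and in practice often violated) assumption of independent errors, the accuracy of a majority vote is a binomial tail sum. A small sketch with an illustrative per-network accuracy of 0.90:

```python
from math import comb

def majority_accuracy(p, n):
    """Accuracy of a majority vote over n independent classifiers,
    each correct with probability p (n odd) -- a binomial tail sum."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n // 2 + 1, n + 1))

single = 0.90                            # illustrative per-network accuracy
ensemble = majority_accuracy(single, 5)  # ~0.991 under independence
```

Correlated errors between the base networks shrink this gain, which is why diverse training algorithms (as compared in the paper) help.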

  14. Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

    Science.gov (United States)

    Kasiviswanathan, K.; Sudheer, K.

    2013-05-01

Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows compared to conceptual or physics based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that ANN point predictions lack reliability, since the uncertainty of the predictions is not quantified, and this limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques in a neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very limited studies have considered assessment of the predictive uncertainty of ANN based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases; during stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the 2nd stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived
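Objectives (ii) and (iii) correspond to standard interval metrics, coverage probability (PICP) and mean interval width, which can be computed from an ensemble like so; the numbers below are illustrative, not from the case study.

```python
def interval_metrics(ensemble, observed):
    """Coverage (PICP) and mean width of the per-step min..max interval
    spanned by an ensemble of model outputs."""
    lo = [min(col) for col in zip(*ensemble)]
    hi = [max(col) for col in zip(*ensemble)]
    picp = sum(l <= o <= h
               for l, h, o in zip(lo, hi, observed)) / len(observed)
    width = sum(h - l for l, h in zip(lo, hi)) / len(observed)
    return picp, width

# Three invented ensemble members of a flow forecast (m3/s) vs. observations
members = [[95, 118, 130], [105, 126, 141], [99, 121, 152]]
observed = [101, 124, 160]
picp, width = interval_metrics(members, observed)  # 2 of 3 points covered
```

The calibration in the paper trades these two quantities off: widening the interval raises PICP but hurts the width objective.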

  15. Deterministic indexing for packed strings

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Skjoldjensen, Frederik Rye

    2017-01-01

Given a string S of length n, the classic string indexing problem is to preprocess S into a compact data structure that supports efficient subsequent pattern queries. In the deterministic variant the goal is to solve the string indexing problem without any randomization (at preprocessing time...... or query time). In the packed variant the strings are stored with several characters in a single word, giving us the opportunity to read multiple characters simultaneously. Our main result is a new string index in the deterministic and packed setting. Given a packed string S of length n over an alphabet σ...
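The packed model can be illustrated in a few lines: pack 8-bit characters into 64-bit words so that eight characters are compared per word operation. This sketch only shows the word-aligned comparison model, not the paper's index structure.

```python
CHARS_PER_WORD = 8   # 8-bit characters in a 64-bit word

def pack(s):
    """Pack the characters of s into word-sized integers."""
    data = s.encode('ascii')
    return [int.from_bytes(data[i:i + CHARS_PER_WORD], 'little')
            for i in range(0, len(data), CHARS_PER_WORD)]

def prefix_match(packed_text, packed_pattern):
    """Word-aligned case: compare 8 characters per word comparison."""
    return all(w == packed_text[i] for i, w in enumerate(packed_pattern))

T = pack("deterministicindexing")   # 21 characters -> 3 words
P = pack("determin")                # exactly one full word
```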

  16. Forests and ozone: productivity, carbon storage, and feedbacks.

    Science.gov (United States)

    Wang, Bin; Shugart, Herman H; Shuman, Jacquelyn K; Lerdau, Manuel T

    2016-02-22

Tropospheric ozone is a serious air pollutant with large impacts on plant function. This study demonstrates that tropospheric ozone, although it damages plant metabolism, does not necessarily reduce ecosystem processes such as productivity or carbon sequestration, because diversity changes and compensatory processes at the community scale ameliorate negative impacts at the individual level. This study assesses the impact of ozone on forest composition and ecosystem dynamics with an individual-based gap model that includes basic physiology as well as species-specific metabolic properties. Elevated tropospheric ozone leads to no reduction of forest productivity and carbon stock and to increased isoprene emissions, which result from enhanced dominance by isoprene-emitting species (which tolerate ozone stress better than non-emitters). This study suggests that tropospheric ozone may not diminish forest carbon sequestration capacity. This study also suggests that, because of the often positive relationship between isoprene emission and ozone formation, there is a positive feedback loop between forest communities and ozone, which further aggravates ozone pollution.

  17. Entropy of network ensembles

    Science.gov (United States)

    Bianconi, Ginestra

    2009-03-01

    In this paper we generalize the concept of random networks to describe network ensembles with nontrivial features by a statistical mechanics approach. This framework is able to describe undirected and directed network ensembles as well as weighted network ensembles. These networks might have nontrivial community structure or, in the case of networks embedded in a given space, they might have a link probability with a nontrivial dependence on the distance between the nodes. These ensembles are characterized by their entropy, which evaluates the cardinality of networks in the ensemble. In particular, in this paper we define and evaluate the structural entropy, i.e., the entropy of the ensembles of undirected uncorrelated simple networks with given degree sequence. We stress the apparent paradox that scale-free degree distributions are characterized by having small structural entropy while they are so widely encountered in natural, social, and technological complex systems. We propose a solution to the paradox by proving that scale-free degree distributions are the most likely degree distribution with the corresponding value of the structural entropy. Finally, the general framework we present in this paper is able to describe microcanonical ensembles of networks as well as canonical or hidden-variable network ensembles with significant implications for the formulation of network-constructing algorithms.
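The entropy in question is, in its general form, the Gibbs entropy of the graph ensemble; for a canonical ensemble in which links are drawn independently with probabilities p_ij (the hidden-variable case mentioned in the abstract), it reduces to a sum of link-level entropies. A hedged sketch of the two forms:

```latex
S \;=\; -\sum_{G \in \mathcal{G}} P(G)\,\ln P(G)
\qquad\longrightarrow\qquad
S \;=\; -\sum_{i<j}\Big[\,p_{ij}\ln p_{ij} \;+\; (1-p_{ij})\ln\!\big(1-p_{ij}\big)\Big]
```

The structural entropy of the paper is the microcanonical counterpart, counting graphs with a fixed degree sequence; the abstract's paradox is that scale-free degree sequences make this count small.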

  18. Quantifying Uncertainty in Flood Inundation Mapping Using Streamflow Ensembles and Multiple Hydraulic Modeling Techniques

    Science.gov (United States)

    Hosseiny, S. M. H.; Zarzar, C.; Gomez, M.; Siddique, R.; Smith, V.; Mejia, A.; Demir, I.

    2016-12-01

The National Water Model (NWM) provides a platform for operationalizing nationwide flood inundation forecasting and mapping. The ability to model flood inundation on a national scale will provide invaluable information to decision makers and local emergency officials. Often, forecast products use deterministic model output to provide a visual representation of a single inundation scenario, which is subject to uncertainty from various sources. While this provides a straightforward representation of the potential inundation, the inherent uncertainty associated with the model output should be considered to optimize this tool for decision making support. The goal of this study is to produce ensembles of future flood inundation conditions (i.e. extent, depth, and velocity) to spatially quantify and visually assess uncertainties associated with the predicted flood inundation maps. The setting for this study is a highly urbanized watershed along Darby Creek in Pennsylvania. A forecasting framework coupling the NWM with multiple hydraulic models was developed to produce a suite of ensembles of future flood inundation predictions. Time-lagged ensembles from the NWM short range forecasts were used to account for uncertainty associated with the hydrologic forecasts. The forecasts from the NWM were input to the iRIC and HEC-RAS two-dimensional software packages, from which water extent, depth, and flow velocity were output. Quantifying the agreement between output ensembles for each forecast grid provided the uncertainty metrics for predicted flood water inundation extent, depth, and flow velocity. For visualization, a series of flood maps that display flood extent, water depth, and flow velocity, along with the underlying uncertainty associated with each of the forecasted variables, were produced.
The results from this study demonstrate the potential to incorporate and visualize model uncertainties in flood inundation maps in order to identify the high flood risk zones.
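The per-cell agreement metric can be sketched directly: for each grid cell, the fraction of ensemble members predicting inundation. The 4x5 member maps below are invented for illustration.

```python
def flood_agreement(member_maps):
    """Per-cell fraction of ensemble members predicting inundation;
    values near 0 or 1 indicate consensus, near 0.5 high uncertainty."""
    n = len(member_maps)
    return [sum(col) / n for col in zip(*member_maps)]

# Four invented ensemble members over five grid cells (True = wet)
members = [
    [True,  True,  True,  False, False],
    [True,  True,  False, False, False],
    [True,  True,  True,  True,  False],
    [True,  False, True,  False, False],
]
agreement = flood_agreement(members)   # [1.0, 0.75, 0.75, 0.25, 0.0]
```

Mapping these fractions over the domain highlights exactly the high-risk, high-uncertainty zones the study aims to expose.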

  19. Ensembl variation resources

    Directory of Open Access Journals (Sweden)

    Marin-Garcia Pablo

    2010-05-01

    Full Text Available Abstract Background The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools and connecting directly to the data servers programmatically. It gives a good overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org.

  20. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution, including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...

  1. Generation of scenarios from calibrated ensemble forecasts with a dual ensemble copula coupling approach

    DEFF Research Database (Denmark)

    Ben Bouallègue, Zied; Heppelmann, Tobias; Theis, Susanne E.

    2016-01-01

    the original ensemble forecasts. Based on the assumption of error stationarity, parametric methods aim to fully describe the forecast dependence structures. In this study, the concept of ECC is combined with past data statistics in order to account for the autocorrelation of the forecast error. The new...... approach, called d-ECC, is applied to wind forecasts from the high resolution ensemble system COSMO-DE-EPS run operationally at the German weather service. Scenarios generated by ECC and d-ECC are compared and assessed in the form of time series by means of multivariate verification tools and in a product...
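The basic ECC step (before the d-ECC extension) reorders independently calibrated samples so that they inherit the rank structure, i.e. the empirical copula, of the raw ensemble. A minimal sketch with invented numbers:

```python
def ecc_reorder(raw_ensemble, calibrated):
    """Plain ensemble copula coupling: impose the rank structure of the
    raw ensemble on calibrated samples, independently per lead time."""
    scenarios = []
    for raw, cal in zip(raw_ensemble, calibrated):
        order = sorted(range(len(raw)), key=raw.__getitem__)  # raw ranks
        cal_sorted = sorted(cal)
        step = [0.0] * len(raw)
        for rank, member in enumerate(order):
            step[member] = cal_sorted[rank]
        scenarios.append(step)
    return scenarios

# Two lead times x three members; all values invented
raw = [[5.0, 1.0, 3.0], [2.0, 6.0, 4.0]]
cal = [[10.0, 30.0, 20.0], [15.0, 35.0, 25.0]]
scenarios = ecc_reorder(raw, cal)
# Member 0 is the largest member at lead time 1 and the smallest at
# lead time 2, and it keeps those ranks after calibration.
```

Because ranks are copied per lead time, the temporal correlation of the scenarios is only as good as that of the raw ensemble, which is the limitation d-ECC addresses with past error statistics.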

  2. Evaluation of tropospheric and stratospheric ozone trends over Western Europe from ground-based FTIR network observations

    Directory of Open Access Journals (Sweden)

    C. Vigouroux

    2008-12-01

Full Text Available Within the European project UFTIR (Time series of Upper Free Troposphere observations from a European ground-based FTIR network), six ground-based stations in Western Europe, from 79° N to 28° N, all equipped with Fourier Transform infrared (FTIR) instruments and part of the Network for the Detection of Atmospheric Composition Change (NDACC), have joined their efforts to evaluate the trends of several direct and indirect greenhouse gases over the period 1995–2004. The retrievals of CO, CH4, C2H6, N2O, CHClF2, and O3 have been optimized. Using the optimal estimation method, some vertical information can be obtained in addition to total column amounts. A bootstrap resampling method has been implemented to determine annual partial and total column trends for the target gases. The present work focuses on the ozone results. The retrieved time series of partial and total ozone columns are validated with ground-based correlative data (Brewer, Dobson, UV-Vis, ozonesondes, and Lidar). The observed total column ozone trends are in agreement with previous studies: (1) no total column ozone trend is seen at the lowest latitude station, Izaña (28° N); (2) slightly positive total column trends are seen at the two mid-latitude stations Zugspitze and Jungfraujoch (47° N), only one of them being significant; (3) the highest latitude stations Harestua (60° N), Kiruna (68° N) and Ny-Ålesund (79° N) show significant positive total column trends. Following the vertical information contained in the ozone FTIR retrievals, we provide partial column trends for the layers: ground-10 km, 10–18 km, 18–27 km, and 27–42 km, which helps to distinguish the contributions from dynamical and chemical changes to the total column ozone trends. We obtain no statistically significant trends in the ground-10 km layer for five out of the six ground-based stations. We find significant positive trends for the lowermost
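The bootstrap trend estimation mentioned above can be sketched as pairwise resampling of (year, column) points and recomputing the least-squares slope; the synthetic ozone series and the percentile-interval convention below are illustrative assumptions, not the paper's data or exact procedure.

```python
import random

def slope(pairs):
    """Ordinary least-squares trend of y against x."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    num = sum((x - mx) * (y - my) for x, y in pairs)
    return num / sum((x - mx) ** 2 for x, _ in pairs)

def bootstrap_trend(years, values, n_boot=2000, seed=1):
    """Trend plus a 95% bootstrap percentile confidence interval."""
    data = list(zip(years, values))
    rng = random.Random(seed)
    boots = sorted(slope([rng.choice(data) for _ in data])
                   for _ in range(n_boot))
    return slope(data), (boots[int(0.025 * n_boot)],
                         boots[int(0.975 * n_boot)])

# Synthetic 1995-2004 total ozone columns (DU): trend plus alternating noise
years = list(range(1995, 2005))
ozone = [300.0 + 0.8 * (y - 1995) + 1.5 * (-1) ** y for y in years]
trend, (lo, hi) = bootstrap_trend(years, ozone)
```

A trend is then called significant when the bootstrap interval excludes zero, which is the criterion behind findings (1)-(3).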

  3. NYYD Ensemble

    Index Scriptorium Estoniae

    2002-01-01

On the NYYD Ensemble duo Traksmann–Lukk performing E.-S. Tüür's work "Symbiosis", which has also been recorded on the recently released NYYD Ensemble CD. Concerts on 2 March in the small hall of the Rakvere Theatre and on 3 March in the Rotermann Salt Storage; the programme includes Tüür, Kaumann, Berio, Reich, Yun, Hauta-aho and Buckinx.

  4. Observing lowermost tropospheric ozone pollution with a new multispectral synergic approach of IASI infrared and GOME-2 ultraviolet satellite measurements

    Science.gov (United States)

    Cuesta, Juan; Foret, Gilles; Dufour, Gaëlle; Eremenko, Maxim; Coman, Adriana; Gaubert, Benjamin; Beekmann, Matthias; Liu, Xiong; Cai, Zhaonan; Von Clarmann, Thomas; Spurr, Robert; Flaud, Jean-Marie

    2014-05-01

Tropospheric ozone is currently one of the air pollutants posing the greatest threats to human health and ecosystems. Monitoring ozone pollution at the regional, continental and global scale is a crucial societal issue, and only spaceborne remote sensing is capable of observing tropospheric ozone at such scales. The spatio-temporal coverage of new satellite-based instruments, such as IASI or GOME-2, offers great potential for monitoring air quality in synergy with regional chemistry-transport models, for both inter-validation and full data assimilation. However, current spaceborne observations using single-band UV or IR measurements show limited sensitivity to ozone in the atmospheric boundary layer, which is the major concern for air quality. Very recently, we have developed an innovative multispectral approach, called IASI+GOME-2, which combines IASI and GOME-2 observations in the IR and UV, respectively. This unique multispectral approach has allowed the observation of ozone plumes in the lowermost troposphere (LMT, below 3 km of altitude) over Europe for the first time from space. Our first analyses focus on typical ozone pollution events during the summer of 2009 over Europe. During these events, LMT ozone plumes in different regions are produced photochemically in the boundary layer, transported upwards to the free troposphere and also downwards from the stratosphere. We have analysed them using IASI+GOME-2 observations, in comparison with single-band methods (IASI, GOME-2 and OMI). Only IASI+GOME-2 depicts ozone plumes located below 3 km of altitude, both over land and ocean. Indeed, the multispectral sensitivity in the LMT is greater by 40% and peaks at 2 to 2.5 km of altitude over land, at least 0.8 to 1 km below that of all single-band methods. Over Europe during the summer of 2009, IASI+GOME-2 shows a 1% mean bias and 21% precision in direct comparisons with ozonesondes, and also good agreement with CHIMERE model simulations.

  5. An efficient deterministic secure quantum communication scheme based on cluster states and identity authentication

    International Nuclear Information System (INIS)

    Wen-Jie, Liu; Han-Wu, Chen; Zhi-Qiang, Li; Zhi-Hao, Liu; Wen-Bo, Hu; Ting-Huai, Ma

    2009-01-01

    A novel efficient deterministic secure quantum communication scheme based on four-qubit cluster states and single-photon identity authentication is proposed. In this scheme, the two authenticated users can transmit two bits of classical information per cluster state, and its efficiency of the quantum communication is 1/3, which is approximately 1.67 times that of the previous protocol presented by Wang et al [Chin. Phys. Lett. 23 (2006) 2658]. Security analysis shows the present scheme is secure against intercept-resend attack and the impersonator's attack. Furthermore, it is more economic with present-day techniques and easily processed by a one-way quantum computer. (general)

  6. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
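A minimal stand-in for the modality-classification step: histogram the ensemble values at one location and count local maxima of the bin counts. The paper's actual classifier and confidence metrics are more sophisticated; this crude proxy only illustrates the unimodal-vs-bimodal distinction that summary statistics miss.

```python
def count_modes(samples, n_bins):
    """Crude modality estimate: histogram the ensemble values at one
    location and count local maxima of the bin counts."""
    lo, hi = min(samples), max(samples)
    w = (hi - lo) / n_bins or 1.0          # guard against zero range
    counts = [0] * n_bins
    for s in samples:
        counts[min(int((s - lo) / w), n_bins - 1)] += 1
    padded = [0] + counts + [0]
    return sum(padded[i - 1] < padded[i] >= padded[i + 1]
               for i in range(1, n_bins + 1))

# Invented per-location ensemble values: one consensus, one divergence
unimodal = [4.8, 4.9, 5.0, 5.0, 5.1, 5.2, 5.0, 4.95, 5.05]
bimodal = [1.0, 1.1, 1.2, 9.0, 9.1, 9.2]
modes_uni = count_modes(unimodal, n_bins=3)   # 1
modes_bi = count_modes(bimodal, n_bins=3)     # 2
```

Note that both example sets have similar means, so a mean/variance summary would not separate them; the mode count does.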

  7. Deterministic chaos in the processor load

    International Nuclear Information System (INIS)

    Halbiniak, Zbigniew; Jozwiak, Ireneusz J.

    2007-01-01

    In this article we present the results of research whose purpose was to identify the phenomenon of deterministic chaos in the processor load. We analysed the time series of the processor load during efficiency tests of database software. Our research was done on a Sparc Alpha processor working on the UNIX Sun Solaris 5.7 operating system. The conducted analyses proved the presence of the deterministic chaos phenomenon in the processor load in this particular case
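A standard signature of deterministic chaos in a time series generated by a known map is a positive largest Lyapunov exponent. As a hedged, self-contained illustration (the paper analyses measured processor load, not the logistic map, and uses its own identification method):

```python
from math import log

def lyapunov_logistic(r, x0=0.2, n=100_000, burn=1_000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    averaged along the orbit; a positive value signals chaos."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += log(abs(r * (1 - 2 * x)))   # log of local stretching
    return total / n

lam = lyapunov_logistic(4.0)   # theory: ln 2 ~ 0.693 at r = 4
```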

  8. Development of multimodel ensemble based district level medium ...

    Indian Academy of Sciences (India)

tively by computing the anomaly correlation coefficient between the predicted rainfall and observed rainfall. High resolution (lat./long.) gridded data ..... particularly in the prediction of intensity and mesoscale rainfall features causing inland flooding. During recent years, the Ensemble Prediction System (EPS) has emerged as ...

  9. Cr(VI) formation during ozonation of Cr-containing materials in ...

    African Journals Online (AJOL)

    Ozonation, or advanced oxidation processes (utilising ozone decomposition products as oxidants) are widely used in industrial wastewater and drinking water treatment plants. In these applications the use of ozone is based on ozone and its decomposition by-products being strong oxidants. In this paper, the possible ...

  10. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    Science.gov (United States)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
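
    A minimal sketch of the "decomposition and ensemble" pattern the abstract describes, with deliberately simple stand-ins: a centered moving average replaces CEEMD, and linear/seasonal-naive predictors replace the GWO-optimized SVRs. The helper names and synthetic series are hypothetical, not from the paper.

```python
import numpy as np

def decompose(y, period=12):
    """Toy decomposition into a smooth trend plus an oscillatory residual
    (a stand-in for CEEMD's intrinsic mode functions)."""
    window = period + 1
    offset = period // 2
    trend = np.convolve(y, np.ones(window) / window, mode="valid")
    resid = y[offset:len(y) - offset] - trend   # aligned: resid[i] is time i + offset
    return trend, resid, offset

def forecast_next(y, period=12):
    """Decompose, predict each component separately, then sum the
    component forecasts (the 'ensemble' step). Assumes an even period."""
    trend, resid, offset = decompose(y, period)
    j = np.arange(trend.size)                   # trend[j] corresponds to time j + offset
    slope, intercept = np.polyfit(j, trend, 1)  # linear model for the trend component
    next_trend = slope * (len(y) - offset) + intercept
    next_resid = resid[-offset]                 # seasonal-naive: one period before time len(y)
    return next_trend + next_resid

t = np.arange(120)
y = 0.1 * t + np.sin(2 * np.pi * t / 12)        # synthetic trending, seasonal series
pred = forecast_next(y)                          # true next value y[120] would be 12.0
print(round(pred, 2))
```

    The point is purely structural: each component is easier to predict than the raw series, and the final forecast is the aggregate of the component forecasts.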

  11. Deterministic and Stochastic Study of Wind Farm Harmonic Currents

    DEFF Research Database (Denmark)

    Sainz, Luis; Mesas, Juan Jose; Teodorescu, Remus

    2010-01-01

    Wind farm harmonic emissions are a well-known power quality problem, but little data based on actual wind farm measurements are available in literature. In this paper, harmonic emissions of an 18 MW wind farm are investigated using extensive measurements, and the deterministic and stochastic char...

  12. Urban Ozone Concentration Forecasting with Artificial Neural Network in Corsica

    Directory of Open Access Journals (Sweden)

    Tamas Wani

    2014-03-01

    Atmospheric pollutant concentration forecasting is an important issue in air quality monitoring. Qualitair Corse, the organization responsible for monitoring air quality in Corsica (France), needs a short-term prediction model to carry out its mission of informing the public. Various deterministic models exist for local forecasting, but they require substantial computing resources and a good knowledge of atmospheric processes, and they can be inaccurate because of local climatic or geographical particularities, as observed in Corsica, a mountainous island located in the Mediterranean Sea. We therefore focus in this study on statistical models, and particularly Artificial Neural Networks (ANNs), which have shown good results in predicting ozone concentration one hour ahead from locally measured data. The purpose of this study is to build a predictor of ozone concentration 24 hours ahead in Corsica in order to anticipate the formation of pollution peaks and to take appropriate preventive measures. Specific meteorological conditions are known to lead to particular pollution events in Corsica (e.g. Saharan dust events). Therefore, an ANN model is used with pollutant and meteorological data for operational forecasting. The index of agreement of this model, calculated on a one-year test dataset, reached 0.88.
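
    The reported skill metric, the index of agreement (Willmott's d), is straightforward to compute. A sketch with made-up ozone series and noise levels, for illustration only:

```python
import numpy as np

def index_of_agreement(obs, pred):
    """Willmott's index of agreement d in [0, 1]; 1 means a perfect fit."""
    obs = np.asarray(obs, float)
    pred = np.asarray(pred, float)
    num = np.sum((pred - obs) ** 2)
    den = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1.0 - num / den

rng = np.random.default_rng(1)
o3 = 40 + 20 * np.sin(np.linspace(0, 6 * np.pi, 200))  # synthetic hourly ozone
good = o3 + rng.normal(0, 2, o3.size)                  # small forecast error
poor = o3 + rng.normal(0, 25, o3.size)                 # large forecast error
print(round(index_of_agreement(o3, good), 3), round(index_of_agreement(o3, poor), 3))
```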

  13. The state of the art of flood forecasting - Hydrological Ensemble Prediction Systems

    Science.gov (United States)

    Thielen-Del Pozo, J.; Pappenberger, F.; Salamon, P.; Bogner, K.; Burek, P.; de Roo, A.

    2010-09-01

    Flood forecasting systems form a key part of 'preparedness' strategies for disastrous floods and provide hydrological services, civil protection authorities and the public with information on upcoming events. Provided the warning lead time is sufficiently long, adequate preparatory actions can be taken to efficiently reduce the impacts of the flooding. Because of the specific characteristics of each catchment, varying data availability and end-user demands, the design of the best flood forecasting system may differ from catchment to catchment. However, despite the differences in concept and data needs, one underlying issue spans all systems. There has been a growing awareness and acceptance that uncertainty is a fundamental issue of flood forecasting and needs to be dealt with at the different spatial and temporal scales as well as the different stages of the flood-generating processes. Today, operational flood forecasting centres increasingly change from single deterministic forecasts to probabilistic forecasts with various representations of the different contributions of uncertainty. The move towards these so-called Hydrological Ensemble Prediction Systems (HEPS) in flood forecasting represents the state of the art in forecasting science, following on the success of the use of ensembles for weather forecasting (Buizza et al., 2005) and paralleling the move towards ensemble forecasting in other related disciplines such as climate change prediction. The use of HEPS has been internationally fostered by initiatives such as "The Hydrologic Ensemble Prediction Experiment" (HEPEX), created with the aim of investigating how best to produce, communicate and use hydrologic ensemble forecasts in short-, medium- and long-term prediction of hydrological processes. The advantages of quantifying the different contributions of uncertainty as well as the overall uncertainty to obtain reliable and useful flood forecasts also for extreme events

  14. Precision production: enabling deterministic throughput for precision aspheres with MRF

    Science.gov (United States)

    Maloney, Chris; Entezarian, Navid; Dumas, Paul

    2017-10-01

    Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer being used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production-manufacturing of precision aspheres has emerged and is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.

  15. Ozone pollution and ozone biomonitoring in European cities Part II. Ozone-induced plant injury and its relationship with descriptors of ozone pollution

    DEFF Research Database (Denmark)

    Klumpp, A.; Ansel, W.; Klumpp, G.

    2006-01-01

    within local networks were relatively small, but seasonal and inter-annual differences were strong due to the variability of meteorological conditions and related ozone concentrations. The 2001 data revealed a significant relationship between foliar injury degree and various descriptors of ozone...... pollution such as mean value, AOT20 and AOT40. Examining individual sites of the local monitoring networks separately, however, yielded noticeable differences. Some sites showed no association between ozone pollution and ozone-induced effects, whereas others featured almost linear relationships...

  16. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Science.gov (United States)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation and tendency to overfit of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - the Marina catchment (Singapore) and the Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided can be given a physically meaningful interpretation.
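
    The distinguishing feature of Extra-Trees is that candidate cut-points are drawn at random rather than optimized. A sketch of that node-splitting rule, with a hypothetical function name and synthetic regression data (not the paper's catchment data):

```python
import numpy as np

def extra_split(X, y, rng):
    """One Extra-Trees-style split: draw a single uniformly random
    cut-point per feature, then keep the feature whose random cut-point
    reduces the total variance of y the most."""
    best = (None, None, -np.inf)
    parent = y.var() * y.size
    for j in range(X.shape[1]):
        thr = rng.uniform(X[:, j].min(), X[:, j].max())
        mask = X[:, j] <= thr
        left, right = y[mask], y[~mask]
        if left.size == 0 or right.size == 0:
            continue
        score = parent - (left.var() * left.size + right.var() * right.size)
        if score > best[2]:
            best = (j, thr, score)
    return best

rng = np.random.default_rng(42)
X = rng.uniform(0.0, 1.0, size=(500, 5))
y = X[:, 0] + 0.05 * rng.normal(size=500)   # only feature 0 drives the response
feature, threshold, gain = extra_split(X, y, rng)
print(feature, round(threshold, 3))
```

    Even with a random rather than optimal threshold, the informative feature wins the variance-reduction comparison, which is what makes the fully randomized trees cheap yet useful for ranking input variables.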

  17. Reconciliation of Halogen-Induced Ozone Loss with the Total-Column Ozone Record

    Science.gov (United States)

    Shepherd, T. G.; Plummer, D. A.; Scinocca, J. F.; Hegglin, M. I.; Fioletov, V. E.; Reader, M. C.; Remsberg, E.; von Clarmann, T.; Wang, H. J.

    2014-01-01

    The observed depletion of the ozone layer from the 1980s onwards is attributed to halogen source gases emitted by human activities. However, the precision of this attribution is complicated by year-to-year variations in meteorology, that is, dynamical variability, and by changes in tropospheric ozone concentrations. As such, key aspects of the total-column ozone record, which combines changes in both tropospheric and stratospheric ozone, remain unexplained, such as the apparent absence of a decline in total-column ozone levels before 1980, and of any long-term decline in total-column ozone levels in the tropics. Here we use a chemistry-climate model to estimate changes in halogen-induced ozone loss between 1960 and 2010; the model is constrained by observed meteorology to remove the effects of dynamical variability, and driven by emissions of tropospheric ozone precursors to separate out changes in tropospheric ozone. We show that halogen-induced ozone loss closely followed stratospheric halogen loading over the studied period. Pronounced enhancements in ozone loss were apparent in both hemispheres following the volcanic eruptions of El Chichon and, in particular, Mount Pinatubo, which significantly enhanced stratospheric aerosol loads. We further show that approximately 40% of the long-term non-volcanic ozone loss occurred before 1980, and that long-term ozone loss also occurred in the tropical stratosphere. Finally, we show that halogen-induced ozone loss has declined by over 10% since stratospheric halogen loading peaked in the late 1990s, indicating that the recovery of the ozone layer is well underway.

  18. Introducing Synchronisation in Deterministic Network Models

    DEFF Research Database (Denmark)

    Schiøler, Henrik; Jessen, Jan Jakob; Nielsen, Jens Frederik D.

    2006-01-01

    The paper addresses performance analysis for distributed real time systems through deterministic network modelling. Its main contribution is the introduction and analysis of models for synchronisation between tasks and/or network elements. Typical patterns of synchronisation are presented leading...... to the suggestion of suitable network models. An existing model for flow control is presented and an inherent weakness is revealed and remedied. Examples are given and numerically analysed through deterministic network modelling. Results are presented to highlight the properties of the suggested models...

  19. Analysis of ensemble learning using simple perceptrons based on online learning theory

    Science.gov (United States)

    Miyoshi, Seiji; Hara, Kazuyuki; Okada, Masato

    2005-03-01

    Ensemble learning of K nonlinear perceptrons, which determine their outputs by sign functions, is discussed within the framework of online learning and statistical mechanics. One purpose of statistical learning theory is to theoretically obtain the generalization error. This paper shows that ensemble generalization error can be calculated by using two order parameters, that is, the similarity between a teacher and a student, and the similarity among students. The differential equations that describe the dynamical behaviors of these order parameters are derived in the case of general learning rules. The concrete forms of these differential equations are derived analytically in the cases of three well-known rules: Hebbian learning, perceptron learning, and AdaTron (adaptive perceptron) learning. Ensemble generalization errors of these three rules are calculated by using the results determined by solving their differential equations. As a result, these three rules show different characteristics in their affinity for ensemble learning, that is “maintaining variety among students.” Results show that AdaTron learning is superior to the other two rules with respect to that affinity.
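
    The flavor of the result can be checked numerically: a majority vote over K noisy "students" of a common "teacher" perceptron generalizes better than an average single student. A Monte Carlo sketch in which the dimensions and noise level are arbitrary choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, trials = 50, 7, 4000                 # input dimension, students, test inputs

teacher = rng.normal(size=N)
students = teacher + 0.8 * rng.normal(size=(K, N))  # independently noisy copies

X = rng.normal(size=(trials, N))
truth = np.sign(X @ teacher)
outs = np.sign(X @ students.T)             # (trials, K) individual sign outputs

single_err = np.mean(outs != truth[:, None])            # mean single-student error
vote_err = np.mean(np.sign(outs.sum(axis=1)) != truth)  # majority vote (K odd)
print(round(single_err, 3), round(vote_err, 3))
```

    Because each student errs on a partly different region of input space, the vote cancels uncorrelated mistakes; the "variety among students" the abstract mentions is exactly what makes the ensemble error lower.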

  20. Representing Color Ensembles.

    Science.gov (United States)

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-10-01

    Colors are rarely uniform, yet little is known about how people represent color distributions. We introduce a new method for studying color ensembles based on intertrial learning in visual search. Participants looked for an oddly colored diamond among diamonds with colors taken from either uniform or Gaussian color distributions. On test trials, the targets had various distances in feature space from the mean of the preceding distractor color distribution. Targets on test trials therefore served as probes into probabilistic representations of distractor colors. Test-trial response times revealed a striking similarity between the physical distribution of colors and their internal representations. The results demonstrate that the visual system represents color ensembles in a more detailed way than previously thought, coding not only mean and variance but, most surprisingly, the actual shape (uniform or Gaussian) of the distribution of colors in the environment.

  1. Deterministic Compressed Sensing

    Science.gov (United States)

    2011-11-01

    [Fragmentary front-matter excerpt from the report: the table of contents includes Digital Communications and Group Testing, bounds for deterministic design matrices (O() constants ignored), and a List of Algorithms including the Iterative Hard Thresholding algorithm.] ...sensing is information-theoretically possible using any (2k, )-RIP sensing matrix. The following celebrated results of Candès, Romberg and Tao [54

  2. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    Science.gov (United States)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

    The multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for conclusions in major coordinated studies, e.g. the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes at tremendous computational cost, which is especially inhibiting for regional climate modeling, as model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This gives the new method a theoretical advantage in addition to reducing computational cost: the ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions in the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
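
    The theoretical point, that one run driven by averaged forcings is not the same as averaging many runs, and always remains a genuine model solution, can be illustrated with a toy scalar stand-in for the RCM. The function below is hypothetical, chosen only to be nonlinear; the forcing values are made up.

```python
import numpy as np

def rcm(forcing):
    """Hypothetical nonlinear stand-in for a regional climate model."""
    return np.tanh(2.0 * forcing) + 0.1 * forcing ** 2

gcm_ibcs = np.array([-1.2, -0.3, 0.4, 1.1, 1.8, -0.9])  # six GCM forcings

mme = rcm(gcm_ibcs).mean()      # conventional MME: six runs, average the outputs
erf = rcm(gcm_ibcs.mean())      # ERF: a single run driven by the averaged IBCs
print(round(mme, 3), round(erf, 3))
```

    The two numbers differ (a Jensen-gap effect of the nonlinearity): the MME average is a blend of six solutions that need not satisfy the model equations, whereas the ERF value is, by construction, an actual trajectory of the model.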

  3. Deterministic uncertainty analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1987-12-01

    This paper presents a deterministic uncertainty analysis (DUA) method for calculating uncertainties that has the potential to significantly reduce the number of computer runs compared to conventional statistical analysis. The method is based upon the availability of derivative and sensitivity data such as that calculated using the well-known direct or adjoint sensitivity analysis techniques. Formation of response surfaces using derivative data and the propagation of input probability distributions are discussed relative to their role in the DUA method. A sample problem that models the flow of water through a borehole is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. Propagation of uncertainties by the DUA method is compared for ten cases in which the number of reference model runs was varied from one to ten. The DUA method gives a more accurate representation of the true cumulative distribution of the flow rate based upon as few as two model executions, compared to fifty model executions using a statistical approach. 16 refs., 4 figs., 5 tabs
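
    The idea can be sketched on a toy response function: one nominal run plus derivative (sensitivity) information propagates input uncertainty at a fraction of the cost of sampling. The function and input distributions below are illustrative, not the paper's borehole model.

```python
import numpy as np

# Hypothetical response: q = k * h**2, chosen only to illustrate
# derivative-based (first-order) uncertainty propagation.
def q(k, h):
    return k * h ** 2

k0, h0 = 2.0, 3.0                    # nominal inputs
sk, sh = 0.05, 0.04                  # input standard deviations (independent)

# DUA-style: one nominal evaluation plus analytic sensitivities dq/dk, dq/dh
dq_dk = h0 ** 2
dq_dh = 2 * k0 * h0
var_dua = (dq_dk * sk) ** 2 + (dq_dh * sh) ** 2

# Conventional statistical approach: many sampled model runs
rng = np.random.default_rng(0)
samples = q(k0 + sk * rng.normal(size=200_000),
            h0 + sh * rng.normal(size=200_000))
var_mc = samples.var()

print(round(var_dua, 4), round(var_mc, 4))
```

    For small input uncertainties the two variances agree closely, but the derivative route needed only the nominal run and two sensitivities rather than thousands of model executions.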

  4. JEnsembl: a version-aware Java API to Ensembl data systems.

    Science.gov (United States)

    Paterson, Trevor; Law, Andy

    2012-11-01

    The Ensembl Project provides release-specific Perl APIs for efficient high-level programmatic access to data stored in various Ensembl database schema. Although Perl scripts are perfectly suited for processing large volumes of text-based data, Perl is not ideal for developing large-scale software applications nor embedding in graphical interfaces. The provision of a novel Java API would facilitate type-safe, modular, object-orientated development of new Bioinformatics tools with which to access, analyse and visualize Ensembl data. The JEnsembl API implementation provides basic data retrieval and manipulation functionality from the Core, Compara and Variation databases for all species in Ensembl and EnsemblGenomes and is a platform for the development of a richer API to Ensembl datasources. The JEnsembl architecture uses a text-based configuration module to provide evolving, versioned mappings from database schema to code objects. A single installation of the JEnsembl API can therefore simultaneously and transparently connect to current and previous database instances (such as those in the public archive) thus facilitating better analysis repeatability and allowing 'through time' comparative analyses to be performed. Project development, released code libraries, Maven repository and documentation are hosted at SourceForge (http://jensembl.sourceforge.net).

  5. CONTRIBUTION TO INDOOR OZONE LEVELS OF AN OZONE GENERATOR

    Science.gov (United States)

    This report gives results of a study of a commonly used commercially available ozone generator, undertaken to determine its impact on indoor ozone levels. Experiments were conducted in a typical mechanically ventilated office and in a test house. The generated ozone and the in-room ...

  6. Development of Compact Ozonizer with High Ozone Output by Pulsed Power

    Science.gov (United States)

    Tanaka, Fumiaki; Ueda, Satoru; Kouno, Kanako; Sakugawa, Takashi; Akiyama, Hidenori; Kinoshita, Youhei

    Conventional ozonizers with high ozone output based on silent or surface discharges need a cooling system and a dielectric barrier, and are therefore large machines. A compact ozonizer without a cooling system or dielectric barrier has been developed using pulsed-power-generated discharges. Wire-to-plane electrodes made of metal have been used; however, the ozone output was low. Here, a compact, high-repetition-rate pulsed power generator is used as the electric source of a compact ozonizer. An ozone output of 6.1 g/h and an ozone yield of 86 g/kWh are achieved at 500 pulses per second, an input average power of 280 W and an air flow rate of 20 L/min.

  7. Ozone from fireworks: Chemical processes or measurement interference?

    Science.gov (United States)

    Xu, Zheng; Nie, Wei; Chi, Xuguang; Huang, Xin; Zheng, Longfei; Xu, Zhengning; Wang, Jiaping; Xie, Yuning; Qi, Ximeng; Wang, Xinfeng; Xue, Likun; Ding, Aijun

    2018-08-15

    Fireworks have been identified as an ozone source, via photolysis of NO2 or O2, and are believed to be potentially important for nighttime ozone during firework events. In this study, we conducted both lab and field experiments on two types of fireworks, with low and high energy, with the goal of distinguishing whether the visible ozone signal during firework displays is real. The results suggest that the previous understanding of the ozone formation mechanism during fireworks is mistaken. Ultraviolet (UV)-based ozone monitors suffer interference from aerosols and some specific VOCs. High-energy fireworks emit high concentrations of particulate matter and low VOCs, so the artificial ozone signal can be easily removed by an aerosol filter. Low-energy fireworks emit large amounts of VOCs, mostly from the combustion of the fireworks' cardboard, which largely interfere with the ozone monitor. Benzene and phenol might be major contributors to the artificial ozone signal. We further checked the nighttime ozone concentration in Jinan and Beijing, China, during Chinese New Year, a period with intense fireworks. A signal of 3-8 ppbv ozone was detected and positively correlated with NO and SO2, suggesting a considerable influence of these chemicals in interfering with ambient ozone monitoring. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. Deterministic uncertainty analysis

    International Nuclear Information System (INIS)

    Worley, B.A.

    1987-01-01

    Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig

  9. Recognition of deterministic ETOL languages in logarithmic space

    DEFF Research Database (Denmark)

    Jones, Neil D.; Skyum, Sven

    1977-01-01

    It is shown that if G is a deterministic ETOL system, there is a nondeterministic log space algorithm to determine membership in L(G). Consequently, every deterministic ETOL language is recognizable in polynomial time. As a corollary, all context-free languages of finite index, and all Indian...

  10. Robust Ensemble Filtering and Its Relation to Covariance Inflation in the Ensemble Kalman Filter

    KAUST Repository

    Luo, Xiaodong; Hoteit, Ibrahim

    2011-01-01

    A robust ensemble filtering scheme based on the H∞ filtering theory is proposed. The optimal H∞ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used

  11. A multi-model ensemble approach to seabed mapping

    Science.gov (United States)

    Diesing, Markus; Stephens, David

    2015-06-01

    Seabed habitat mapping based on swath acoustic data and ground-truth samples is an emergent and active marine science discipline. Significant progress could be achieved by transferring techniques and approaches that have been successfully developed and employed in such fields as terrestrial land cover mapping. One such promising approach is the multiple classifier system, which aims at improving classification performance by combining the outputs of several classifiers. Here we present results of a multi-model ensemble applied to multibeam acoustic data covering more than 5000 km2 of seabed in the North Sea with the aim to derive accurate spatial predictions of seabed substrate. A suite of six machine learning classifiers (k-Nearest Neighbour, Support Vector Machine, Classification Tree, Random Forest, Neural Network and Naïve Bayes) was trained with ground-truth sample data classified into seabed substrate classes and their prediction accuracy was assessed with an independent set of samples. The three and five best performing models were combined to classifier ensembles. Both ensembles led to increased prediction accuracy as compared to the best performing single classifier. The improvements were however not statistically significant at the 5% level. Although the three-model ensemble did not perform significantly better than its individual component models, we noticed that the five-model ensemble did perform significantly better than three of the five component models. A classifier ensemble might therefore be an effective strategy to improve classification performance. Another advantage is the fact that the agreement in predicted substrate class between the individual models of the ensemble could be used as a measure of confidence. We propose a simple and spatially explicit measure of confidence that is based on model agreement and prediction accuracy.
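
    The proposed confidence measure based on model agreement can be sketched directly: take the per-location majority vote across classifiers, and report the agreeing fraction as confidence. The substrate labels and predictions below are made up for illustration.

```python
import numpy as np

# Hypothetical per-pixel substrate labels (0=mud, 1=sand, 2=gravel) from
# five trained classifiers over a tiny five-pixel seabed map.
preds = np.array([[0, 1, 2, 2, 1],
                  [0, 1, 2, 1, 1],
                  [0, 2, 2, 2, 0],
                  [0, 1, 2, 2, 1],
                  [0, 1, 1, 2, 1]])            # shape (models, pixels)

n_models, n_pixels = preds.shape
n_classes = 3
votes = np.zeros((n_classes, n_pixels), int)   # vote counts per class, per pixel
for m in range(n_models):
    for p in range(n_pixels):
        votes[preds[m, p], p] += 1

ensemble = votes.argmax(axis=0)                # majority-vote substrate map
confidence = votes.max(axis=0) / n_models      # model agreement per pixel
print(ensemble, confidence)
```

    Pixels where all models agree get confidence 1.0; disagreement lowers the score, giving the spatially explicit confidence layer the abstract proposes alongside the classified map.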

  12. Improving the ensemble-optimization method through covariance-matrix adaptation

    NARCIS (Netherlands)

    Fonseca, R.M.; Leeuwenburgh, O.; Hof, P.M.J. van den; Jansen, J.D.

    2015-01-01

    Ensemble optimization (referred to throughout the remainder of the paper as EnOpt) is a rapidly emerging method for reservoirmodel-based production optimization. EnOpt uses an ensemble of controls to approximate the gradient of the objective function with respect to the controls. Current

  13. Ozone uptake (flux) as it relates to ozone-induced foliar symptoms of Prunus serotina and Populus maximowizii x trichocarpa

    International Nuclear Information System (INIS)

    Orendovici-Best, T.; Skelly, J.M.; Davis, D.D.; Ferdinand, J.A.; Savage, J.E.; Stevenson, R.E.

    2008-01-01

    Field studies were conducted during 2003 and 2004 from early June to the end of August, at 20 sites of lower or higher elevation within north-central Pennsylvania, using seedlings of black cherry (Prunus serotina, Ehrh.) and ramets of hybrid poplar (Populus maximowizii x trichocarpa). A linear model was developed to estimate the influence of local environmental conditions on stomatal conductance. The most significant factors explaining stomatal variance were tree species, air temperature, leaf vapor pressure deficit, elevation, and time of day. Overall, environmental factors explained less than 35% of the variation in stomatal conductance. Ozone did not affect gas exchange rates in either poplar or cherry. Ozone-induced foliar injury was positively correlated with cumulative ozone exposures, expressed as SUM40. Overall, the amount of foliar injury was better correlated to a flux-based approach rather than to an exposure-based approach. More severe foliar injuries were observed on plants growing at higher elevations. - Within heterogeneous environments, ozone flux does not completely explain the variation observed in ozone-induced visible injury

  14. Secondary maxima in ozone profiles

    Directory of Open Access Journals (Sweden)

    R. Lemoine

    2004-01-01

    Ozone profiles from balloon soundings as well as SAGE II ozone profiles were used to detect anomalously large concentrations of ozone in the lower stratosphere. These secondary ozone maxima are found to be the result of differential advection of ozone-poor and ozone-rich air associated with Rossby wave breaking events. The frequency and intensity of secondary ozone maxima and their geographical distribution are presented. The occurrence and amplitude of secondary ozone maxima are connected to ozone variability and trend at Uccle and account for a large part of the total ozone and lower stratospheric ozone variability.

  15. Oxidation of Ce(III) in Foam Decontaminant by Ozone

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Chong Hun; Yoon, I. H.; Choi, W. K.; Moon, J. K.; Yang, H. B. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, J. S. [Gachon University, Seongnam (Korea, Republic of)

    2016-10-15

    A nanoparticle-based foam decontaminant is composed of a surfactant and nanoparticles for the generation and maintenance of foam, and a chemical decontamination agent made of Ce(IV) dissolved in nitric acid. Ce(IV) is reduced to Ce(III) during the decontamination process, and oxidizing the Ce(III) allows it to be reused as the decontamination agent Ce(IV). Ozone-based oxidation treatment exploits ozone's strong oxidizing power and can be regarded as an environmentally friendly process: because ozone quickly decays into diatomic oxygen, it cannot be stored and transported like other industrial gases and must be produced on site, and spent ozone decomposes immediately. Effective regeneration requires ozonation of Ce(III) in a foam decontaminant that contains a surfactant. The present study was therefore undertaken to determine the optimal conditions for ozonation in the regeneration of Ce(III) to Ce(IV) in a nanoparticle-based foam decontaminant containing a TBS surfactant. The oxidation conversion rate of Ce(III) increased with the flow rate of the gas mixture and with the ozone injection amount. The oxidation time required for 100% conversion of Ce(III) to Ce(IV) at a given ozone injection amount can be predicted from these experimental data.

  16. Oxidation of Ce(III) in Foam Decontaminant by Ozone

    International Nuclear Information System (INIS)

    Jung, Chong Hun; Yoon, I. H.; Choi, W. K.; Moon, J. K.; Yang, H. B.; Lee, J. S.

    2016-01-01

    A nanoparticle-based foam decontaminant is composed of a surfactant and nanoparticles for the generation and maintenance of foam, and a chemical decontamination agent made of Ce(IV) dissolved in nitric acid. Ce(IV) is reduced to Ce(III) during the decontamination process; once oxidized back, the cerium can be reused as the decontamination agent Ce(IV). Oxidation treatment by ozone exploits ozone's strong oxidizing power. It can be regarded as an environmentally friendly process: because ozone quickly decays into diatomic oxygen, it cannot be stored and transported like other industrial gases and must be produced on site, and residual ozone decomposes immediately. Ozonation of Ce(III) in a foam decontaminant containing a surfactant is necessary for its effective regeneration. Thus, the present study was undertaken to determine the optimal conditions for ozonation in the regeneration of Ce(III) to Ce(IV) in a nanoparticle-based foam decontaminant containing a TBS surfactant. The oxidation conversion rate of Ce(III) increased with the flow rate of the gas mixture and the ozone injection amount. The oxidation time required for 100% conversion of Ce(III) to Ce(IV) at a given ozone injection amount can be predicted from these experimental data.

  17. Imprinting and recalling cortical ensembles.

    Science.gov (United States)

    Carrillo-Reid, Luis; Yang, Weijian; Bando, Yuki; Peterka, Darcy S; Yuste, Rafael

    2016-08-12

    Neuronal ensembles are coactive groups of neurons that may represent building blocks of cortical circuits. These ensembles could be formed by Hebbian plasticity, whereby synapses between coactive neurons are strengthened. Here we report that repetitive activation with two-photon optogenetics of neuronal populations from ensembles in the visual cortex of awake mice builds neuronal ensembles that recur spontaneously after being imprinted and do not disrupt preexisting ones. Moreover, imprinted ensembles can be recalled by single-cell stimulation and remain coactive on consecutive days. Our results demonstrate the persistent reconfiguration of cortical circuits by two-photon optogenetics into neuronal ensembles that can perform pattern completion. Copyright © 2016, American Association for the Advancement of Science.

  18. Ozone concentrations in the Brazilian Amazonia during BASE-A

    International Nuclear Information System (INIS)

    Setzer, A.W.; Kirchhoff, V.W.J.H.; Pereira, M.C.

    1991-01-01

    During the Biomass Burning Airborne and Spaceborne Experiment--Amazonia, thermal images of fires were made with the Advanced Very High Resolution Radiometer (AVHRR) on board meteorological NOAA series satellites. The results of ozone measurements made on board the Brazilian Institute for Space Research (INPE) airplane during September of 1989 are presented and analyzed in relation to the temporal and geographical location of fires detected before and during the sampling. Results show that on a synoptic scale, concentrations of ozone rise sharply in regions of more intense burning

  19. Investigating energy-based pool structure selection in the structure ensemble modeling with experimental distance constraints: The example from a multidomain protein Pub1.

    Science.gov (United States)

    Zhu, Guanhua; Liu, Wei; Bao, Chenglong; Tong, Dudu; Ji, Hui; Shen, Zuowei; Yang, Daiwen; Lu, Lanyuan

    2018-05-01

    The structural variations of multidomain proteins with flexible parts mediate many biological processes, and a structure ensemble can be determined by selecting a weighted combination of representative structures from a simulated structure pool, producing the best fit to experimental constraints such as interatomic distance. In this study, a hybrid structure-based and physics-based atomistic force field with an efficient sampling strategy is adopted to simulate a model di-domain protein against experimental paramagnetic relaxation enhancement (PRE) data that correspond to distance constraints. The molecular dynamics simulations produce a wide range of conformations depicted on a protein energy landscape. Subsequently, a conformational ensemble recovered with low-energy structures and the minimum-size restraint is identified in good agreement with experimental PRE rates, and the result is also supported by chemical shift perturbations and small-angle X-ray scattering data. It is illustrated that the regularizations of energy and ensemble-size prevent an arbitrary interpretation of protein conformations. Moreover, energy is found to serve as a critical control to refine the structure pool and prevent data overfitting, because the absence of energy regularization exposes ensemble construction to the noise from high-energy structures and causes a more ambiguous representation of protein conformations. Finally, we perform structure-ensemble optimizations with a topology-based structure pool, to enhance the understanding on the ensemble results from different sources of pool candidates. © 2018 Wiley Periodicals, Inc.

  20. Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.

    Science.gov (United States)

    Kang, Yun; Lanchier, Nicolas

    2011-06-01

    We investigate the impact of Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. The existence of a regime with exactly three attractors only appears when patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density populations can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of the global extinction and the basin of attraction of the global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the

  1. Verification of an ensemble prediction system for storm surge forecast in the Adriatic Sea

    Science.gov (United States)

    Mel, Riccardo; Lionello, Piero

    2014-12-01

    In the Adriatic Sea, storm surges present a significant threat to Venice and to the flat coastal areas of the northern coast of the basin. Sea level forecast is of paramount importance for the management of daily activities and for operating the movable barriers that are presently being built for the protection of the city. In this paper, an EPS (ensemble prediction system) for operational forecasting of storm surge in the northern Adriatic Sea is presented and applied to a 3-month-long period (October-December 2010). The sea level EPS is based on the HYPSE (hydrostatic Padua Sea elevation) model, which is a standard single-layer nonlinear shallow water model, whose forcings (mean sea level pressure and surface wind fields) are provided by the ensemble members of the ECMWF (European Center for Medium-Range Weather Forecasts) EPS. Results are verified against observations at five tide gauges located along the Croatian and Italian coasts of the Adriatic Sea. Forecast uncertainty increases with the predicted value of the storm surge and with the forecast lead time. The EMF (ensemble mean forecast) provided by the EPS has a rms (root mean square) error lower than the DF (deterministic forecast), especially for short (up to 3 days) lead times. Uncertainty for short lead times of the forecast and for small storm surges is mainly caused by uncertainty of the initial condition of the hydrodynamical model. Uncertainty for large lead times and large storm surges is mainly caused by uncertainty in the meteorological forcings. The EPS spread increases with the rms error of the forecast. For large lead times the EPS spread and the forecast error substantially coincide. However, the EPS spread in this study, which does not account for uncertainty in the initial condition, underestimates the error during the early part of the forecast and for small storm surge values. On the contrary, it overestimates the rms error for large surge values. 
The PF (probability forecast) of the EPS
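    The basic ensemble statistics compared in this record (ensemble mean forecast, spread, rms error) can be sketched with toy numbers. The sea-level values and function names below are illustrative, not data from the verification study; note also the general property, which the sketch exercises, that the rms error of the ensemble mean never exceeds the mean rms error of the members:

    ```python
    import numpy as np

    def emf_and_spread(member_forecasts):
        """Ensemble mean forecast and spread (std across members) per lead time."""
        m = np.asarray(member_forecasts, dtype=float)  # (members, lead_times)
        return m.mean(axis=0), m.std(axis=0, ddof=1)

    def rmse(forecast, observed):
        f, o = np.asarray(forecast, float), np.asarray(observed, float)
        return float(np.sqrt(np.mean((f - o) ** 2)))

    # toy sea-level forecasts (cm) from 4 members at 3 lead times
    members = [[80, 85, 95],
               [82, 90, 110],
               [78, 88, 100],
               [81, 93, 104]]
    obs = [80, 89, 103]
    emf, spread = emf_and_spread(members)
    ```

    A spread that grows with lead time and tracks the rms error, as reported above, is the usual diagnostic of a well-calibrated EPS.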

  2. Using a satisfiability solver to identify deterministic finite state automata

    NARCIS (Netherlands)

    Heule, M.J.H.; Verwer, S.

    2009-01-01

    We present an exact algorithm for identification of deterministic finite automata (DFA) which is based on satisfiability (SAT) solvers. Despite the size of the low level SAT representation, our approach seems to be competitive with alternative techniques. Our contributions are threefold: First, we

  3. Ensemble Network Architecture for Deep Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Xi-liang Chen

    2018-01-01

    Full Text Available The popular deep Q-learning algorithm is known to be unstable because of oscillation and overestimation of action values under certain conditions, which tend to adversely affect performance. In this paper, we develop an ensemble network architecture for deep reinforcement learning based on value function approximation. The temporal ensemble stabilizes the training process by reducing the variance of the target approximation error, and the ensemble of target values reduces overestimation and improves performance by estimating more accurate Q-values. Our results show that this architecture leads to statistically significantly better value evaluation and more stable, better performance on several classical control tasks in the OpenAI Gym environment.
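    The target-ensemble idea can be illustrated in a few lines: averaging several target networks' Q-estimates before taking the greedy maximum lowers target variance, and pointwise the max of the mean never exceeds the mean of the per-network maxes, so the ensemble target is never more optimistic than the average single-network target. A hedged NumPy sketch with simulated network outputs rather than real networks (the function name is an assumption, not the paper's API):

    ```python
    import numpy as np

    def ensemble_td_target(rewards, next_q_ensemble, gamma=0.99):
        """TD target from an ensemble of target-network Q estimates:
        average over networks first, then take the greedy maximum."""
        mean_q = np.asarray(next_q_ensemble).mean(axis=0)  # (batch, actions)
        return np.asarray(rewards) + gamma * mean_q.max(axis=1)

    rng = np.random.default_rng(0)
    rewards = np.zeros(4)
    # 3 simulated target networks, batch of 4 states, 2 actions
    q_ens = rng.normal(loc=1.0, scale=0.5, size=(3, 4, 2))
    targets = ensemble_td_target(rewards, q_ens)

    # average of the single-network greedy targets, for comparison
    single_net_targets = 0.99 * q_ens.max(axis=2).mean(axis=0)
    ```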

  4. World Music Ensemble: Kulintang

    Science.gov (United States)

    Beegle, Amy C.

    2012-01-01

    As instrumental world music ensembles such as steel pan, mariachi, gamelan and West African drums are becoming more the norm than the exception in North American school music programs, there are other world music ensembles just starting to gain popularity in particular parts of the United States. The kulintang ensemble, a drum and gong ensemble…

  5. The stratospheric ozone and the ozone layer

    International Nuclear Information System (INIS)

    Zea Mazo, Jorge Anibal; Leon Aristizabal, Gloria Esperanza; Eslava Ramirez, Jesus Antonio

    2000-01-01

    An overview is presented of the principal characteristics of the stratospheric ozone in the Earth's atmosphere, with particular emphasis on the tropics and the ozone hole over the poles. Some effects produced in the atmosphere as a consequence of the different human activities will be described, and some data on stratospheric ozone will be shown. We point out the existence of a nucleus of least ozone in the tropics, stretching from South America to central Africa, with annual mean values less than 240 DU, a value lower than in the middle latitudes and close to the mean values at the South Pole. The existence of such a minimum is confirmed by mean values from measurements made on satellites or with earthbound instruments, for different sectors in Colombia, like Medellin, Bogota and Leticia

  6. OZONE CONCENTRATION ATTRIBUTABLE PREMATURE DEATH IN POLAND

    Directory of Open Access Journals (Sweden)

    Krzysztof Skotak

    2010-03-01

    Full Text Available Ozone in the lower part of the atmosphere (the troposphere), a strong photochemical oxidant, is not emitted directly to the atmosphere but formed through a series of complex reactions. Ozone concentrations depend on precursor air contamination (mainly nitrogen dioxide and non-methane volatile organic compounds) and on meteorological conditions (temperature and solar radiation). The main sectors emitting ozone precursors are road transport, power and heat generation plants, households (heating), industry, and petrol storage and distribution. Ozone and some of its precursors are also transported long distances in the atmosphere and are therefore considered a transboundary problem. As a result, ozone concentrations are often low in busy urban areas and higher in suburban and rural areas. Alongside particulate matter, ozone is now one of the most widespread global air pollution problems. In and around urban areas, relatively large gradients of ozone can be observed. Because of its high reactivity, ozone at elevated concentrations causes serious health problems and damage to ecosystems, agricultural crops, and materials. The main ill-health endpoints of ozone exposure are effects on the pulmonary and cardiovascular systems, morbidity and mortality time series, the development of atherosclerosis and asthma, and finally a reduction in life expectancy. The association between ozone concentrations and increased daily mortality is confirmed by many epidemiological studies. The main aim of the project was to estimate the level of selected ill-health endpoints (mortality in total and due to cardiovascular and respiratory causes) resulting from short-term ozone exposure in Poland. Final results were obtained using an estimation method elaborated by the WHO, ozone measurements from the National Air Quality Monitoring System, and statistical information such as mortality rates and populations. 
All analyses have been done in

  7. Finding diversity for building one-day ahead Hydrological Ensemble Prediction System based on artificial neural network stacks

    Science.gov (United States)

    Brochero, Darwin; Anctil, Francois; Gagné, Christian; López, Karol

    2013-04-01

    In this study, we addressed the application of Artificial Neural Networks (ANN) in the context of Hydrological Ensemble Prediction Systems (HEPS). Such systems have become popular in the past years as a tool to include forecast uncertainty in the decision-making process. HEPS fundamentally considers the uncertainty cascade model [4] for uncertainty representation. Analogously, the machine learning community has proposed models of multiple classifier systems that take into account the variability in datasets, input space, model structures, and parametric configuration [3]. This approach is based primarily on the well-known "no free lunch" theorem [1]. Consequently, we propose a framework based on two separate but complementary topics: data stratification and input variable selection (IVS). Thus, we promote an ANN prediction stack in which each predictor is trained on an input space defined by applying IVS to a different stratified sub-sample. All this, added to the inherent variability of classical ANN optimization, leads us to our ultimate goal: diversity in the prediction, defined as the complementarity of the individual predictors. The stratification application on the 12 basins used in this study, which originate from the second and third workshops of the MOPEX project [2], shows that the informativeness of the data is far more important than the quantity used for ANN training. Additionally, the input-space variability leads to ANN stacks that outperform an ANN stack trained with 100% of the available information but with a random selection of the dataset used in the early-stopping method (scenario R100P). The results show that, from a deterministic view, the main advantage lies in the efficient selection of the training information, which is an equally important concept for the calibration of conceptual hydrological models. On the other hand, the diversity achieved is reflected in a substantial improvement in the scores that define the

  8. Ozone uptake by adult urban trees based on sap flow measurement

    International Nuclear Information System (INIS)

    Wang Hua; Zhou Weiqi; Wang Xiaoke; Gao Fuyuan; Zheng Hua; Tong Lei; Ouyang Zhiyun

    2012-01-01

    The O3 uptake of 17 adult trees of six urban species was evaluated by the sap-flow-based approach under free atmospheric conditions. The results showed very large species differences in ground-area-scaled whole-tree ozone uptake (F_O3), with estimates ranging from 0.61 ± 0.07 nmol m^-2 s^-1 in Robinia pseudoacacia to 4.80 ± 1.04 nmol m^-2 s^-1 in Magnolia liliiflora. However, average F_O3 by deciduous foliage was not significantly higher than that by evergreen foliage (3.13 vs 2.21 nmol m^-2 s^-1, p = 0.160). Species with high canopy conductance for O3 (G_O3) took up more O3 than those with low G_O3, but their sensitivity to vapour pressure deficit (D) was also higher, and their F_O3 decreased faster with increasing D, regardless of species. The responses of F_O3 to D and total radiation led to a relatively high flux of O3 uptake, indicating high ozone risk for urban tree species. - Highlights: ► O3 uptake by urban trees varied with species and study period. ► The responses of G_O3 to microclimate lead to relatively high O3 uptake by urban trees. ► Many urban species are susceptible to O3 damage. ► The annual O3 uptake in our study is much less than that from modeling approaches. ► The difference suggests considering the species-specific flux in O3 risk assessment. - Sap-flow-based O3 uptake among urban species suggests a high capacity for, and variation of, ozone uptake, as well as potentially detrimental effects on urban species.
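    The flux formulation underlying the sap-flow approach reduces to uptake = canopy conductance for ozone (derived from sap flow) times the ambient ozone concentration. A minimal sketch with hypothetical values; `ozone_flux` is an assumed helper name, not from the study:

    ```python
    def ozone_flux(g_o3, o3_ambient):
        """Canopy ozone uptake flux F_O3 = G_O3 * [O3].

        With conductance in m s^-1 and ambient concentration in
        nmol m^-3, the flux comes out in nmol m^-2 s^-1.
        """
        return g_o3 * o3_ambient

    # hypothetical values: G_O3 = 0.004 m/s, [O3] = 20 ppb ~ 800 nmol/m^3
    flux = ozone_flux(0.004, 800.0)  # nmol m^-2 s^-1
    ```

    The resulting 3.2 nmol m^-2 s^-1 happens to fall inside the study's reported range, but real estimates require the sap-flow-derived conductance and its response to vapour pressure deficit.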

  9. Techno-economic simulation data based deterministic and stochastic for product engineering research and development BATAN

    International Nuclear Information System (INIS)

    Petrus Zacharias; Abdul Jami

    2010-01-01

    Research conducted by BATAN's researchers has resulted in a number of competences that can be used to produce goods and services for the industrial sector. However, it is difficult to convey R and D products to, and have them utilized by, industry. Evaluation results show that each research result should be accompanied by a techno-economic analysis to establish the feasibility of a product for industry. Further analysis will be done on the multi-product concept, in which one business can produce many main products. For this purpose, a software package simulating techno-economic/economic feasibility, using deterministic and stochastic data (Monte Carlo method), has been developed for multi-product operation including side products. The programming language used is Visual Basic Studio .NET 2003, with SQL as the database processing software. The software applies a sensitivity test to identify which investment criteria are sensitive for the prospective businesses. A performance (trial) test has been conducted, and the results are in line with the design requirements: investment feasibility and sensitivity are displayed both deterministically and stochastically, and the results can be interpreted well to support business decisions. Validation has been performed using Microsoft Excel (for a single product). The trial test and validation results show that this package meets demand and is ready for use. (author)

  10. On-line Learning of Unlearnable True Teacher through Mobile Ensemble Teachers

    Science.gov (United States)

    Hirama, Takeshi; Hukushima, Koji

    2008-09-01

    The on-line learning of a hierarchical learning model is studied by a method based on statistical mechanics. In our model, a student of a simple perceptron learns not from a true teacher directly, but from ensemble teachers who learn from a true teacher with a perceptron learning rule. Since the true teacher and ensemble teachers are expressed as nonmonotonic and simple perceptrons, respectively, the ensemble teachers go around the unlearnable true teacher with the distance between them fixed in an asymptotic steady state. The generalization performance of the student is shown to exceed that of the ensemble teachers in a transient state, as was shown in similar ensemble-teachers models. Furthermore, it is found that moving the ensemble teachers even in the steady state, in contrast to keeping them fixed, is efficient for the performance of the student.

  11. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    Science.gov (United States)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
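    The core algebraic idea can be shown in a deliberately simplified form: if the correlation model is separable in the three coordinate directions, the full matrix and its eigendecomposition factor into Kronecker products of small 1-D pieces, so the high-dimensional C never has to be decomposed directly. The sketch below omits the paper's low-resolution EOF truncation and spline-interpolation steps, and the Gaussian correlation shape is an assumption:

    ```python
    import numpy as np

    def corr_1d(n, length_scale):
        """Gaussian correlation matrix on a unit-spaced 1-D grid."""
        d = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
        return np.exp(-0.5 * (d / length_scale) ** 2)

    # 1-D correlation matrices along the three coordinate directions
    Cx, Cy, Cz = corr_1d(4, 2.0), corr_1d(3, 1.5), corr_1d(2, 1.0)

    # For a separable correlation model the full 3-D matrix is exactly
    # the Kronecker product of the 1-D factors ...
    C = np.kron(np.kron(Cx, Cy), Cz)

    # ... and its eigendecomposition factors the same way, so only the
    # small 1-D matrices are ever decomposed.
    wx, Vx = np.linalg.eigh(Cx)
    wy, Vy = np.linalg.eigh(Cy)
    wz, Vz = np.linalg.eigh(Cz)
    w = np.kron(np.kron(wx, wy), wz)   # eigenvalues of C
    V = np.kron(np.kron(Vx, Vy), Vz)   # eigenvectors of C
    ```

    Decomposing three small matrices instead of one of dimension nx·ny·nz is what makes the localization implementation affordable at model resolution.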

  12. Are deterministic methods suitable for short term reserve planning?

    International Nuclear Information System (INIS)

    Voorspools, Kris R.; D'haeseleer, William D.

    2005-01-01

    Although deterministic methods for establishing minutes reserve (such as the N-1 reserve or the percentage reserve) ignore the stochastic nature of reliability issues, they are commonly used in energy modelling as well as in practical applications. In order to check the validity of such methods, two test procedures are developed. The first checks whether the N-1 reserve is a logical fixed value for minutes reserve. The second investigates whether deterministic methods can realise a stable reliability that is independent of demand. In both evaluations, the loss-of-load expectation is used as the objective stochastic criterion. The first test shows no particular reason to choose the largest unit as minutes reserve. The expected jump in reliability, resulting in low reliability for reserve margins lower than the largest unit and high reliability above, is not observed. The second test shows that neither the N-1 reserve nor the percentage reserve method provides a stable reliability level that is independent of power demand. For the N-1 reserve, the reliability increases with decreasing maximum demand. For the percentage reserve, the reliability decreases with decreasing demand. The answer to the question raised in the title, therefore, has to be that probability-based methods are to be preferred over deterministic methods.
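    The stochastic criterion used in the paper can be mimicked with a toy Monte Carlo: with identical units and a fixed N-1-style reserve, the loss-of-load probability is not independent of demand, since more committed units mean more chances of multiple simultaneous outages exceeding the fixed margin. The fleet parameters below are invented for illustration, and this simplified model ignores unit-size diversity and load uncertainty:

    ```python
    import numpy as np

    def lole(demand, reserve, unit_size=100.0, outage_prob=0.05,
             n_draws=200_000, seed=1):
        """Loss-of-load expectation (probability of a shortfall) for a
        fleet of identical units committed to cover demand + reserve."""
        rng = np.random.default_rng(seed)
        n_units = int(np.ceil((demand + reserve) / unit_size))
        up = rng.random((n_draws, n_units)) > outage_prob  # availability draws
        available = up.sum(axis=1) * unit_size
        return float(np.mean(available < demand))

    # N-1-style rule: reserve fixed at one unit (100 MW) regardless of demand
    low_demand = lole(demand=500.0, reserve=100.0)
    high_demand = lole(demand=2000.0, reserve=100.0)
    ```

    The same fixed reserve yields a markedly worse reliability at high demand, in line with the paper's second test.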

  13. Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    Science.gov (United States)

    Ye, Qiang

    2010-01-01

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…

  14. Benchmarking Commercial Conformer Ensemble Generators.

    Science.gov (United States)

    Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes

    2017-11-27

    We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.

  15. Emissions lifetimes and ozone formation in power plant plumes

    Energy Technology Data Exchange (ETDEWEB)

    Ryerson, T.B.; Buhr, M.P.; Frost, G.J.; Goldan, P.D.; Holloway, J.S.; Huebler, G.; Jobson, B.T.; Kuster, W.C.; McKeen, S.A.; Parrish, D.D.; Roberts, J.M.; Sueper, D.T.; Trainer, M.; Williams, J.; Fehsenfeld, F.C. [NOAA Aeronomy Laboratory, Boulder, CO (United States)

    1998-09-20

    The concept of ozone production efficiency (OPE) per unit NO{sub x} is based on photochemical models and provides a tool with which to assess potential regional tropospheric ozone control strategies involving NO{sub x} emissions reductions. An aircraft study provided data from which power plant emissions removal rates and measurement-based estimates of OPE are estimated. This study was performed as part of the Southern Oxidants Study - 1995 Nashville intensive and focuses on the evolution of NO{sub x}, SO{sub 2}, and ozone concentrations in coal-fired power plant plumes during transport. Two approaches are examined. A mass balance approach accounts for mixing effects within the boundary layer and is used to calculate effective boundary layer removal rates for NO{sub x} and SO{sub 2} and to estimate net OPE. Net OPE is more directly comparable to photochemical model results than previous measurement-based estimates. Derived net production efficiencies from mass balance range from 1 to 3 molecules of ozone produced per molecule of NO{sub x} emitted. A concentration ratio approach provides an estimate of removal rates of primary emissions relative to a tracer species. This approach can be combined with emissions ratio information to provide upper limit estimates of OPE that range from 2 to 7. Both approaches illustrate the dependence of ozone production on NO{sub x} source strength in these large point source plumes. The dependence of total ozone production, ozone production efficiency, and the rate of ozone production on NO{sub x} source strength is examined. These results are interpreted in light of potential ozone control strategies for the region. 42 refs., 8 figs., 5 tabs.
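    The ratio arithmetic behind a net OPE estimate is simple to state: excess ozone formed in the plume divided by the NOx oxidized (emitted minus still present). The sketch below uses hypothetical mixing ratios, not the paper's measurements, and skips the boundary-layer mixing corrections of the full mass-balance treatment:

    ```python
    def net_ope(delta_o3, nox_emitted, nox_remaining):
        """Net ozone production efficiency: excess ozone produced per
        molecule of NOx consumed (all in the same mixing-ratio units)."""
        nox_consumed = nox_emitted - nox_remaining
        return delta_o3 / nox_consumed

    # hypothetical plume transect values (ppbv)
    ope = net_ope(delta_o3=30.0, nox_emitted=20.0, nox_remaining=5.0)  # -> 2.0
    ```

    A value of 2 molecules of ozone per molecule of NOx would sit inside the 1-3 range reported from the mass-balance approach.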

  16. Emissions lifetimes and ozone formation in power plant plumes

    International Nuclear Information System (INIS)

    Ryerson, T.B.; Buhr, M.P.; Frost, G.J.; Goldan, P.D.; Holloway, J.S.; Huebler, G.; Jobson, B.T.; Kuster, W.C.; McKeen, S.A.; Parrish, D.D.; Roberts, J.M.; Sueper, D.T.; Trainer, M.; Williams, J.; Fehsenfeld, F.C.

    1998-01-01

    The concept of ozone production efficiency (OPE) per unit NO x is based on photochemical models and provides a tool with which to assess potential regional tropospheric ozone control strategies involving NO x emissions reductions. An aircraft study provided data from which power plant emissions removal rates and measurement-based estimates of OPE are estimated. This study was performed as part of the Southern Oxidants Study - 1995 Nashville intensive and focuses on the evolution of NO x , SO 2 , and ozone concentrations in coal-fired power plant plumes during transport. Two approaches are examined. A mass balance approach accounts for mixing effects within the boundary layer and is used to calculate effective boundary layer removal rates for NO x and SO 2 and to estimate net OPE. Net OPE is more directly comparable to photochemical model results than previous measurement-based estimates. Derived net production efficiencies from mass balance range from 1 to 3 molecules of ozone produced per molecule of NO x emitted. A concentration ratio approach provides an estimate of removal rates of primary emissions relative to a tracer species. This approach can be combined with emissions ratio information to provide upper limit estimates of OPE that range from 2 to 7. Both approaches illustrate the dependence of ozone production on NO x source strength in these large point source plumes. The dependence of total ozone production, ozone production efficiency, and the rate of ozone production on NO x source strength is examined. These results are interpreted in light of potential ozone control strategies for the region. 42 refs., 8 figs., 5 tabs.

  17. Equivalence relations between deterministic and quantum mechanical systems

    International Nuclear Information System (INIS)

    Hooft, G.

    1988-01-01

    Several quantum mechanical models are shown to be equivalent to certain deterministic systems because a basis can be found in terms of which the wave function does not spread. This suggests that apparently indeterministic behavior typical for a quantum mechanical world can be the result of locally deterministic laws of physics. We show how certain deterministic systems allow the construction of a Hilbert space and a Hamiltonian so that at long distance scales they may appear to behave as quantum field theories, including interactions but as yet no mass term. These observations are suggested to be useful for building theories at the Planck scale

  18. Operational State Complexity of Deterministic Unranked Tree Automata

    Directory of Open Access Journals (Sweden)

    Xiaoxue Piao

    2010-08-01

    Full Text Available We consider the state complexity of basic operations on tree languages recognized by deterministic unranked tree automata. For the operations of union and intersection the upper and lower bounds of both weakly and strongly deterministic tree automata are obtained. For tree concatenation we establish a tight upper bound that is of a different order than the known state complexity of concatenation of regular string languages. We show that (n+1)((m+1)2^n - 2^(n-1) - 1) vertical states are sufficient, and necessary in the worst case, to recognize the concatenation of tree languages recognized by (strongly or weakly) deterministic automata with, respectively, m and n vertical states.
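    The concatenation bound can be evaluated directly. Note the parenthesization (n+1)((m+1)2^n - 2^(n-1) - 1) is a reconstruction from the garbled formula in this record and should be checked against the original paper:

```python
# Hedged sketch: evaluating the (reconstructed) vertical-state bound for
# concatenation of tree languages accepted by deterministic unranked tree
# automata with m and n vertical states.

def concat_bound(m, n):
    """Vertical states sufficient (and worst-case necessary) for concatenation,
    assuming the bound reads (n+1)*((m+1)*2**n - 2**(n-1) - 1)."""
    return (n + 1) * ((m + 1) * 2**n - 2**(n - 1) - 1)

print(concat_bound(2, 3))  # (3+1)*((2+1)*8 - 4 - 1) -> 76
```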

  19. Web-Based Tools for Modelling and Analysis of Multivariate Data: California Ozone Pollution Activity

    Science.gov (United States)

    Dinov, Ivo D.; Christou, Nicolas

    2011-01-01

    This article presents a hands-on web-based activity motivated by the relation between human health and ozone pollution in California. This case study is based on multivariate data collected monthly at 20 locations in California between 1980 and 2006. Several strategies and tools for data interrogation and exploratory data analysis, model fitting…

  20. Regional interdependency of precipitation indices across Denmark in two ensembles of high-resolution RCMs

    DEFF Research Database (Denmark)

    Sunyer Pinya, Maria Antonia; Madsen, Henrik; Rosbjerg, Dan

    2013-01-01

    all these methods is that the climate models are independent. This study addresses the validity of this assumption for two ensembles of regional climate models (RCMs) from the Ensemble-Based Predictions of Climate Changes and their Impacts (ENSEMBLES) project based on the land cells covering Denmark. Daily precipitation indices from an ensemble of RCMs driven by the 40-yr ECMWF Re-Analysis (ERA-40) and an ensemble of the same RCMs driven by different general circulation models (GCMs) are analyzed. Two different methods are used to estimate the amount of independent information in the ensembles. These are based on different statistical properties of a measure of climate model error. Additionally, a hierarchical cluster analysis is carried out. Regardless of the method used, the effective number of RCMs is smaller than the total number of RCMs. The estimated effective number of RCMs varies depending...

  1. Ensemble modelling of nitrogen fluxes: data fusion for a Swedish meso-scale catchment

    Directory of Open Access Journals (Sweden)

    J.-F. Exbrayat

    2010-12-01

    Full Text Available Model predictions of biogeochemical fluxes at the landscape scale are highly uncertain, both with respect to stochastic (parameter) and structural uncertainty. In this study 5 different models (LASCAM, LASCAM-S, a self-developed tool, SWAT and HBV-N-D) designed to simulate hydrological fluxes as well as mobilisation and transport of one or several nitrogen species were applied to the mesoscale River Fyris catchment in mid-eastern Sweden.

    Hydrological calibration against 5 years of recorded daily discharge at two stations gave highly variable results, with Nash-Sutcliffe Efficiency (NSE) ranging between 0.48 and 0.83. Using the calibrated hydrological parameter sets, the parameter uncertainty linked to the nitrogen parameters was explored in order to cover the range of possible predictions of exported loads for 3 nitrogen species: nitrate (NO3), ammonium (NH4) and total nitrogen (Tot-N). For each model and each nitrogen species, predictions were ranked in two different ways according to the performance indicated by two different goodness-of-fit measures: the coefficient of determination R2 and the root mean square error RMSE. A total of 2160 deterministic Single Model Ensembles (SME) was generated using an increasing number of members (from the 2 best to the 10 best single predictions). Finally the best SME for each model, nitrogen species and discharge station were selected and merged into 330 different Multi-Model Ensembles (MME). The evolution of changes in R2 and RMSE was used as a performance descriptor of the ensemble procedure.

    In each studied case, numerous ensemble merging schemes were identified which outperformed any of their members. Improvement rates were generally higher when worse members were introduced. The highest improvements were achieved for the nitrogen SMEs compiled with multiple linear regression models with R2-selected members, which...
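    The single-model-ensemble idea in this record (rank candidate predictions by a goodness-of-fit score, then merge the best k) can be sketched as follows. The synthetic data and the simple mean-merging are illustrative assumptions; the study also used multiple linear regression for merging:

```python
# Hedged sketch of an SME: rank members by RMSE against observations,
# average the k best, and check the ensemble against its members.
import math

def rmse(pred, obs):
    """Root mean square error between a prediction and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def build_sme(members, obs, k):
    """Average the k members with the lowest RMSE against observations."""
    best = sorted(members, key=lambda m: rmse(m, obs))[:k]
    return [sum(vals) / k for vals in zip(*best)]

obs = [1.0, 2.0, 3.0, 4.0]
members = [
    [1.2, 2.1, 2.8, 4.3],  # good member, errors partly cancel with the next
    [0.9, 2.2, 3.3, 3.8],  # good member
    [2.0, 1.0, 4.0, 3.0],  # poor member, excluded by the ranking
]
sme = build_sme(members, obs, k=2)
# The merged prediction outperforms each of its members here.
print(rmse(sme, obs) < min(rmse(m, obs) for m in members))
```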

  2. Earth's ozone layer

    International Nuclear Information System (INIS)

    Lasa, J.

    1991-01-01

    The paper contains the actual results of investigations of the influence of human activity on the Earth's ozone layer. The history of ozone measurements and of the changes in ozone concentration within the last few years is given. The influence of trace gases on both local and global ozone concentrations is discussed. Probable changes in ozone concentration are presented on the basis of modelling investigations. The effect of a decrease in global ozone concentration on human health and on the biosphere is also presented. (author). 33 refs, 36 figs, 5 tabs

  3. Integrated Science Assessment (ISA) of Ozone and Related ...

    Science.gov (United States)

    EPA announced the availability of the final report, Integrated Science Assessment of Ozone and Related Photochemical Oxidants. This document represents a concise synthesis and evaluation of the most policy-relevant science and will ultimately provide the scientific basis for EPA's decision regarding the adequacy of the current national ambient air quality standards for ozone to protect human health, public welfare, and the environment. It critically evaluates and integrates the evidence on the health and environmental effects of ozone to provide scientific support for the review of the NAAQS for ozone.

  4. Improving the ensemble optimization method through covariance matrix adaptation (CMA-EnOpt)

    NARCIS (Netherlands)

    Fonseca, R.M.; Leeuwenburgh, O.; Hof, P.M.J. van den; Jansen, J.D.

    2013-01-01

    Ensemble Optimization (EnOpt) is a rapidly emerging method for reservoir model based production optimization. EnOpt uses an ensemble of controls to approximate the gradient of the objective function with respect to the controls. Current implementations of EnOpt use a Gaussian ensemble with a

  5. Urban Summertime Ozone of China: Peak Ozone Hour and Nighttime Mixing

    Science.gov (United States)

    Qu, H.; Wang, Y.; Zhang, R.

    2017-12-01

    We investigate the observed diurnal cycle of summertime ozone in the cities of China using a regional chemical transport model. The simulated daytime ozone is in general agreement with the observations. Model simulations suggest that the ozone peak time and peak concentration are a function of NOx (NO + NO2) and volatile organic compound (VOC) emissions. The differences between simulated and observed ozone peak time and peak concentration in some regions can be applied to understand biases in the emission inventories. For example, the VOC emissions are underestimated over the Pearl River Delta (PRD) region, and either NOx emissions are underestimated or VOC emissions are overestimated over the Yangtze River Delta (YRD) region. In contrast to the generally good daytime ozone simulations, the simulated nighttime ozone has a large low bias of up to 40 ppbv. Nighttime ozone in urban areas is sensitive to the nocturnal boundary-layer mixing, and enhanced nighttime mixing (from the surface to 200-500 m) is necessary for the model to reproduce the observed level of ozone.

  6. Total ozone changes in the 1987 Antarctic ozone hole

    Science.gov (United States)

    Krueger, Arlin J.; Schoeberl, Mark R.; Doiron, Scott D.; Sechrist, Frank; Galimore, Reginald

    1988-01-01

    The development of the Antarctic ozone minimum was observed in 1987 with the Nimbus 7 Total Ozone Mapping Spectrometer (TOMS) instrument. In the first half of August the near-polar (60 and 70 deg S) ozone levels were similar to those of recent years. By September, however, the ozone at 70 and 80 deg S was clearly lower than any previous year including 1985, the prior record low year. The levels continued to decrease throughout September until October 5 when a new record low of 109 DU was established at a point near the South Pole. This value is 29 DU less than the lowest observed in 1985 and 48 DU less than the 1986 low. The zonal mean total ozone at 60 deg S remained constant throughout the time of ozone hole formation. The ozone decline was punctuated by local minima formed away from the polar night boundary at about 75 deg S. The first of these, on August 15 to 17, formed just east of the Palmer Peninsula and appears to be a mountain wave. The second major minimum formed on September 5 to 7 again downwind of the Palmer Peninsula. This event was larger in scale than the August minimum and initiated the decline of ozone across the polar region. The 1987 ozone hole was nearly circular and pole centered for its entire life. In previous years the hole was perturbed by intrusions of the circumpolar maximum into the polar regions, thus causing the hole to be elliptical. The 1987 hole also remained in place until the end of November, a few days longer than in 1985, and this persistence resulted in the latest time for recovery to normal values yet observed.

  7. Application of Ozone MBBR Process in Refinery Wastewater Treatment

    Science.gov (United States)

    Lin, Wang

    2018-01-01

    Moving Bed Biofilm Reactor (MBBR) is a sewage treatment technology based on the fluidized bed; it can be regarded as an efficient new reactor type between the activated sludge method and the biofilm method. This paper studies the application of the ozone MBBR process in refinery wastewater treatment, focusing on the design of a combined ozone + MBBR process based on the MBBR process. The ozone + MBBR process is applied to the treatment of the concentrated water COD discharged from a refinery wastewater treatment plant. The experimental results show that the average removal rate of COD is 46.0%~67.3% when treating reverse osmosis concentrate with the ozone MBBR process, and the effluent meets the relevant standard requirements. Compared with the traditional process, the ozone MBBR process is more flexible. The investment for this process is mainly the ozone generator, blower and similar equipment; these items are relatively inexpensive, and their cost can be offset by the extra investment required by a traditional activated sludge process. At the same time, the ozone MBBR process has obvious advantages in water quality, stability and other aspects.

  8. Hybrid nanomembrane-based capacitors for the determination of the dielectric constant of semiconducting molecular ensembles

    Science.gov (United States)

    Petrini, Paula A.; Silva, Ricardo M. L.; de Oliveira, Rafael F.; Merces, Leandro; Bof Bufon, Carlos C.

    2018-06-01

    Considerable advances in the field of molecular electronics have been achieved over the recent years. One persistent challenge, however, is the exploitation of the electronic properties of molecules fully integrated into devices. Typically, the molecular electronic properties are investigated using sophisticated techniques incompatible with a practical device technology, such as scanning tunneling microscopy. The incorporation of molecular materials in devices is not a trivial task, as the typical dimensions of electrical contacts are much larger than the molecular ones. To tackle this issue, we report on hybrid capacitors using mechanically-compliant nanomembranes to encapsulate ultrathin molecular ensembles for the investigation of molecular dielectric properties. As the prototype material, copper (II) phthalocyanine (CuPc) has been chosen, as information on its dielectric constant (kCuPc) at the molecular scale is missing. Here, hybrid nanomembrane-based capacitors containing metallic nanomembranes, insulating Al2O3 layers, and the CuPc molecular ensembles have been fabricated and evaluated. The Al2O3 is used to prevent short circuits through the capacitor plates, as the molecular layer is considerably thin. From electrical measurements of devices with molecular layers of different thicknesses, the CuPc dielectric constant has been reliably determined (kCuPc = 4.5 ± 0.5). This value suggests a mild contribution of the molecular orientation to the CuPc dielectric properties. The reported nanomembrane-based capacitor is a viable strategy for the dielectric characterization of ultrathin molecular ensembles integrated into a practical, real device technology.

  9. Inquiry Based Projects Using Student Ozone Measurements and the Status of Using Plants as Bio-Indicators

    Science.gov (United States)

    Ladd, I. H.; Fishman, J.; Pippin, M.; Sachs, S.; Skelly, J.; Chappelka, A.; Neufeld, H.; Burkey, K.

    2006-05-01

    Students around the world work cooperatively with their teachers and the scientific research community measuring local surface ozone levels using a hand-held optical scanner and ozone-sensitive chemical strips. Through the GLOBE (Global Learning and Observations to Benefit the Environment) Program, students measuring local ozone levels are connected with the chemistry of the air they breathe and with how human activity impacts air quality. Educational tools have been developed and correlated with the National Science and Mathematics Standards to facilitate integrating the study of surface ozone with the core curriculum. Ozone air pollution has been identified as the major pollutant causing foliar injury to plants exposed to elevated concentrations of surface ozone. Combining native and agricultural plants with surface ozone measurements provides an Earth system approach to understanding surface ozone. An implementation guide for investigating ozone-induced foliar injury has been developed and field tested. The guide, Using Sensitive Plants as Bio-Indicators of Ozone Pollution, provides the background information and protocol for implementing an "Ozone Garden" with native and agricultural plants, and a unique opportunity to involve students in a project that will develop and increase their awareness of surface ozone air pollution and its impact on plants.

  10. Quantification of ozone uptake at the stand level in a Pinus canariensis forest in Tenerife, Canary Islands: An approach based on sap flow measurements

    International Nuclear Information System (INIS)

    Wieser, Gerhard; Luis, Vanessa C.; Cuevas, Emilio

    2006-01-01

    Ozone uptake was studied in a pine forest in Tenerife, Canary Islands, an ecotone with strong seasonal changes in climate. Ambient ozone concentration showed a pronounced seasonal course, with high concentrations during the dry and warm period and low concentrations during the wet and cold season. Ozone uptake, by contrast, showed no clear seasonal trend, because canopy conductance decreased significantly with declining soil water availability and increasing vapour pressure deficit. Mean daily ozone uptake averaged 1.9 nmol m^-2 s^-1 during the wet and cold season, and 1.5 nmol m^-2 s^-1 during the warm and dry period. The corresponding daily mean ambient ozone concentrations were 42 and 51 nl l^-1, respectively. Thus we conclude that in Mediterranean-type forest ecosystems the flux-based approach is better suited to risk assessment than an external, concentration-based approach. - Sap flow measurements can be used for estimating ozone uptake at the stand level and for parameterisation of O3 uptake models
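    The flux-based approach in this record amounts to multiplying a canopy conductance by the ambient ozone concentration. A minimal sketch, assuming a molar-volume unit conversion and a sample conductance value not taken from the study:

```python
# Hedged sketch: stomatal ozone uptake as canopy conductance times ambient
# ozone concentration. The molar volume and the sample conductance below
# are illustrative assumptions, not the study's parameterisation.

MOLAR_VOLUME_L = 24.4  # litres per mole of air at ~25 degC, 1 atm (assumption)

def ozone_uptake(canopy_conductance_m_s, o3_nl_l):
    """Return ozone flux in nmol m^-2 s^-1 for conductance in m/s and
    ambient ozone mixing ratio in nl/l."""
    o3_nmol_m3 = o3_nl_l / MOLAR_VOLUME_L * 1000.0  # nl/l -> nmol/m^3
    return canopy_conductance_m_s * o3_nmol_m3

# 42 nl/l ambient ozone with a hypothetical ~1.1 mm/s canopy conductance
# reproduces the order of the ~1.9 nmol m^-2 s^-1 wet-season mean above.
print(round(ozone_uptake(0.0011, 42.0), 2))  # -> 1.89
```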

  11. Stochastic Modeling and Deterministic Limit of Catalytic Surface Processes

    DEFF Research Database (Denmark)

    Starke, Jens; Reichert, Christian; Eiswirth, Markus

    2007-01-01

    Three levels of modeling, microscopic, mesoscopic and macroscopic, are discussed for the CO oxidation on low-index platinum single crystal surfaces. The introduced models on the microscopic and mesoscopic level are stochastic while the model on the macroscopic level is deterministic. It can ..., such that in contrast to the microscopic model the spatial resolution is reduced. The derivation of deterministic limit equations is in correspondence with the successful description of experiments under low-pressure conditions by deterministic reaction-diffusion equations, while for intermediate pressures phenomena ...

  12. Ozone modeling

    International Nuclear Information System (INIS)

    McIllvaine, C.M.

    1994-01-01

    Exhaust gases from power plants that burn fossil fuels contain concentrations of sulfur dioxide (SO2), nitric oxide (NO), particulate matter, hydrocarbon compounds and trace metals. Estimated emissions from the operation of a hypothetical 500 MW coal-fired power plant are given. Ozone is considered a secondary pollutant, since it is not emitted directly into the atmosphere but is formed from other air pollutants, specifically nitrogen oxides (NOx) and non-methane organic compounds (NMOC), in the presence of sunlight. (NMOC are sometimes referred to as hydrocarbons, HC, or volatile organic compounds, VOC, and they may or may not include methane.) Additionally, ozone formation is a function of the ratio of NMOC concentrations to NOx concentrations. A typical ozone isopleth is shown, generated with the Empirical Kinetic Modeling Approach (EKMA) option of the Environmental Protection Agency's (EPA) Ozone Isopleth Plotting Mechanism (OZIPM-4) model. Ozone isopleth diagrams, originally generated with smog chamber data, are more commonly generated with photochemical reaction mechanisms and tested against smog chamber data. The shape of the isopleth curves is a function of the region (i.e. background conditions) where ozone concentrations are simulated. The location of an ozone concentration on the isopleth diagram is defined by the NMOC and NOx coordinates of the point, whose quotient is known as the NMOC/NOx ratio. Results obtained by the described model are presented
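    Locating a point on an EKMA-style isopleth diagram by its NMOC/NOx ratio can be sketched as below. The ~8:1 regime threshold is a commonly cited rule of thumb and an illustrative assumption here, not a value from this record:

```python
# Hedged sketch: classify the ozone-formation regime from the NMOC/NOx
# ratio of a point on an isopleth diagram. The threshold is an assumption.

def nmoc_nox_ratio(nmoc_ppbc, nox_ppb):
    """Ratio of NMOC to NOx concentrations (the x/y coordinates of a point)."""
    return nmoc_ppbc / nox_ppb

def regime(ratio, threshold=8.0):
    """Low ratios: ozone production limited by NMOC; high ratios: by NOx."""
    return "NMOC-limited" if ratio < threshold else "NOx-limited"

r = nmoc_nox_ratio(120.0, 30.0)  # -> 4.0
print(regime(r))  # -> NMOC-limited
```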

  13. A deterministic, gigabit serial timing, synchronization and data link for the RHIC LLRF

    International Nuclear Information System (INIS)

    Hayes, T.; Smith, K.S.; Severino, F.

    2011-01-01

    A critical capability of the new RHIC low level rf (LLRF) system is the ability to synchronize signals across multiple locations. The 'Update Link' provides this functionality. The 'Update Link' is a deterministic serial data link based on the Xilinx RocketIO protocol that is broadcast over fiber optic cable at 1 gigabit per second (Gbps). The link provides timing events and data packets as well as time stamp information for synchronizing diagnostic data from multiple sources. The new RHIC LLRF was designed to be a flexible, modular system. The system is constructed of numerous independent RF Controller chassis. To provide synchronization among all of these chassis, the Update Link system was designed. The Update Link system provides a low latency, deterministic data path to broadcast information to all receivers in the system. The Update Link system is based on a central hub, the Update Link Master (ULM), which generates the data stream that is distributed via fiber optic links. Downstream chassis have non-deterministic connections back to the ULM that allow any chassis to provide data that is broadcast globally.

  14. California Baseline Ozone Transport Study (CABOTS): Ozonesonde Measurements

    Science.gov (United States)

    Eiserloh, A. J., Jr.; Chiao, S.; Spitze, J.; Cauley, S.; Clark, J.; Roberts, M.

    2016-12-01

    Because the EPA recently lowered the ambient air quality standard for the 8-hr average of ozone (O3) to 70 ppbv, California must continue to achieve significant reductions in ozone precursor emissions and prepare for new State Implementation Plans (SIP) to demonstrate how ground-level ambient ozone will be reduced below the new health-based standard. Prior studies suggest that background levels of ozone traveling across the Pacific Ocean can significantly influence surface ozone throughout California, particularly during the spring. Evidence has been presented indicating that background levels of ozone have continued to increase in the western United States over the recent few decades, implying more ozone exceedances in the future. To better understand the contributions of the external natural and anthropogenic pollution sources as well as atmospheric processes for surface ozone concentrations in California during the spring and summer months, the California Baseline Ozone Transport Study (CABOTS) has been established. One major goal of CABOTS is to implement near-daily ozonesonde measurements along the California coast to quantify background ozone aloft before it enters the state during high ozone season. CABOTS has been ongoing from May through August of 2016, launching ozonesondes from Bodega Bay and Half Moon Bay, California. The temporal progression of ozonesonde measurements and subsequent analysis of the data will be discussed with a focus on the contribution of background ozone to surface ozone sites inland as well as likely origins of layers aloft. Comparisons of current ozonesondes versus prior ozonesonde studies of California will also be performed. A few selected cases of high ozone layers moving onshore from different sources will be discussed as well.

  15. Fluorescent Binary Ensemble Based on Pyrene Derivative and Sodium Dodecyl Sulfate Assemblies as a Chemical Tongue for Discriminating Metal Ions and Brand Water.

    Science.gov (United States)

    Zhang, Lijun; Huang, Xinyan; Cao, Yuan; Xin, Yunhong; Ding, Liping

    2017-12-22

    Enormous effort has been put into the detection and recognition of various heavy metal ions due to their involvement in serious environmental pollution and many major diseases. The present work has developed a single fluorescent sensor ensemble that can distinguish and identify a variety of heavy metal ions. A pyrene-based fluorophore (PB) containing a metal ion receptor group was specially designed and synthesized. Anionic surfactant sodium dodecyl sulfate (SDS) assemblies can effectively adjust its fluorescence behavior. The selected binary ensemble based on PB/SDS assemblies can exhibit multiple emission bands and provide wavelength-based cross-reactive responses to a series of metal ions to realize pattern recognition ability. The combination of surfactant assembly modulation and the receptor for metal ions empowers the present sensor ensemble with strong discrimination power, which could well differentiate 13 metal ions, including Cu2+, Co2+, Ni2+, Cr3+, Hg2+, Fe3+, Zn2+, Cd2+, Al3+, Pb2+, Ca2+, Mg2+, and Ba2+. Moreover, this single sensing ensemble could be further applied for identifying different brands of drinking water.

  16. 3-D visualization of ensemble weather forecasts - Part 2: Forecasting warm conveyor belt situations for aircraft-based field campaigns

    Science.gov (United States)

    Rautenhaus, M.; Grams, C. M.; Schäfler, A.; Westermann, R.

    2015-02-01

    We present the application of interactive 3-D visualization of ensemble weather predictions to forecasting warm conveyor belt (WCB) situations during aircraft-based atmospheric research campaigns. Motivated by forecast requirements of the T-NAWDEX-Falcon 2012 campaign, a method to predict 3-D probabilities of the spatial occurrence of WCBs has been developed. Probabilities are derived from Lagrangian particle trajectories computed on the forecast wind fields of the ECMWF ensemble prediction system. Integration of the method into the 3-D ensemble visualization tool Met.3D, introduced in the first part of this study, facilitates interactive visualization of WCB features and derived probabilities in the context of the ECMWF ensemble forecast. We investigate the sensitivity of the method with respect to trajectory seeding and forecast wind field resolution. Furthermore, we propose a visual analysis method to quantitatively analyse the contribution of ensemble members to a probability region and, thus, to assist the forecaster in interpreting the obtained probabilities. A case study, revisiting a forecast case from T-NAWDEX-Falcon, illustrates the practical application of Met.3D and demonstrates the use of 3-D and uncertainty visualization for weather forecasting and for planning flight routes in the medium forecast range (three to seven days before take-off).

  17. 77 FR 24440 - Approval and Promulgation of Implementation Plans; Georgia; Atlanta; Ozone 2002 Base Year...

    Science.gov (United States)

    2012-04-24

    ... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 52 [EPA-R04-OAR-2010-0021(b); FRL-9661-9] Approval and Promulgation of Implementation Plans; Georgia; Atlanta; Ozone 2002 Base Year Emissions Inventory AGENCY... 2002 base year emissions inventory portion of the state implementation plan (SIP) revision submitted by...

  18. Geophysical validation of SCIAMACHY Limb Ozone Profiles

    Directory of Open Access Journals (Sweden)

    E. J. Brinksma

    2006-01-01

    Full Text Available We discuss the quality of the two available SCIAMACHY limb ozone profile products. They were retrieved with the University of Bremen IFE's algorithm version 1.61 (hereafter IFE, and the official ESA offline algorithm (hereafter OL versions 2.4 and 2.5. The ozone profiles were compared to a suite of correlative measurements from ground-based lidar and microwave, sondes, SAGE II and SAGE III (Stratospheric Aerosol and Gas Experiment. To correct for the expected Envisat pointing errors, which have not been corrected implicitly in either of the algorithms, we applied a constant altitude shift of -1.5 km to the SCIAMACHY ozone profiles. The IFE ozone profile data between 16 and 40 km are biased low by 3-6%. The average difference profiles have a typical standard deviation of 10% between 20 and 35 km. We show that more than 20% of the SCIAMACHY official ESA offline (OL ozone profiles version 2.4 and 2.5 have unrealistic ozone values, most of these are north of 15° S. The remaining OL profiles compare well to correlative instruments above 24 km. Between 20 and 24 km, they underestimate ozone by 15±5%.

  19. An application and verification of ensemble forecasting on wind power to assess operational risk indicators in power grids

    Energy Technology Data Exchange (ETDEWEB)

    Alessandrini, S.; Ciapessoni, E.; Cirio, D.; Pitto, A.; Sperati, S. [Ricerca sul Sistema Energetico RSE S.p.A., Milan (Italy). Power System Development Dept. and Environment and Sustainable Development Dept.; Pinson, P. [Technical University of Denmark, Lyngby (Denmark). DTU Informatics

    2012-07-01

    Wind energy is part of the so-called non-schedulable renewable sources, i.e. it must be exploited when it is available, otherwise it is lost. In European regulation it has priority of dispatch over conventional generation, to maximize green energy production. However, being variable and uncertain, wind (and solar) generation raises several issues for the security of power grid operation. In particular, Transmission System Operators (TSOs) need forecasts that are as accurate as possible. Nowadays a deterministic approach to wind power forecasting (WPF) can easily be considered insufficient to face the uncertainty associated with wind energy. In order to obtain information about the accuracy of a forecast and a reliable estimation of its uncertainty, probabilistic forecasting is becoming increasingly widespread. In this paper we investigate the performance of the COnsortium for Small-scale MOdelling Limited area Ensemble Prediction System (COSMO-LEPS). First, the ensemble's properties (i.e. consistency, reliability) are assessed using different verification indices and diagrams calculated on wind power. Then we provide examples of how EPS-based wind power forecasts can be used in power system security analyses. Quantifying the forecast uncertainty makes it possible to determine the regulation reserve requirements more accurately, hence improving security of operation and reducing system costs. In particular, the paper also presents a probabilistic power flow (PPF) technique developed at RSE and aimed at evaluating the impact of wind power forecast accuracy on the probability of security violations in power systems. (orig.)
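    One way ensemble forecasts feed into reserve sizing, as described in this record, is to take a quantile of the spread around a deterministic reference forecast. The empirical-quantile scheme and the member values below are illustrative assumptions, not the paper's method:

```python
# Hedged sketch: size the regulation reserve from an ensemble wind power
# forecast as an empirical quantile of the downward deviation from the
# ensemble mean. Numbers and the quantile choice are illustrative.

def reserve_requirement(ensemble_mw, coverage=0.9):
    """Downward deviation from the ensemble mean not exceeded with
    (empirical) probability `coverage`."""
    mean = sum(ensemble_mw) / len(ensemble_mw)
    shortfalls = sorted(max(mean - m, 0.0) for m in ensemble_mw)
    idx = int(coverage * (len(shortfalls) - 1))  # simple empirical quantile
    return shortfalls[idx]

# Ten hypothetical ensemble members (MW) for one delivery hour.
members = [80.0, 95.0, 100.0, 105.0, 110.0, 120.0, 60.0, 90.0, 100.0, 115.0]
print(reserve_requirement(members))  # -> 17.5
```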

  20. Simulating Quantitative Cellular Responses Using Asynchronous Threshold Boolean Network Ensembles

    Directory of Open Access Journals (Sweden)

    Shah Imran

    2011-07-01

    Full Text Available Background: With increasing knowledge about the potential mechanisms underlying cellular functions, it is becoming feasible to predict the response of biological systems to genetic and environmental perturbations. Due to the lack of homogeneity in living tissues it is difficult to estimate the physiological effect of chemicals, including potential toxicity. Here we investigate a biologically motivated model for estimating tissue level responses by aggregating the behavior of a cell population. We assume that the molecular state of individual cells is independently governed by discrete non-deterministic signaling mechanisms. This results in noisy but highly reproducible aggregate level responses that are consistent with experimental data. Results: We developed an asynchronous threshold Boolean network simulation algorithm to model signal transduction in a single cell, and then used an ensemble of these models to estimate the aggregate response across a cell population. Using published data, we derived a putative crosstalk network involving growth factors and cytokines (Epidermal Growth Factor, Insulin, Insulin-like Growth Factor Type 1, and Tumor Necrosis Factor α) to describe early signaling events in cell proliferation signal transduction. Reproducibility of the modeling technique across ensembles of Boolean networks representing cell populations is investigated. Furthermore, we compare our simulation results to experimental observations of hepatocytes reported in the literature. Conclusion: A systematic analysis of the results following differential stimulation of this model by growth factors and cytokines suggests that: (a) using Boolean network ensembles with asynchronous updating provides biologically plausible noisy individual cellular responses with reproducible mean behavior for large cell populations, and (b) with sufficient data our model can estimate the response to different concentrations of extracellular ligands. Our...
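    The simulation idea in this record can be sketched with a toy network: each "cell" is a copy of the same threshold Boolean network updated one randomly chosen node at a time, and the tissue-level response is the ensemble mean of an output node. The 3-node wiring below is an illustrative assumption, not the paper's crosstalk network:

```python
# Hedged sketch of an asynchronous threshold Boolean network ensemble.
import random

WEIGHTS = {            # WEIGHTS[target][source]: signed influence (toy wiring)
    "R": {},           # receptor: held at its input (ligand) value
    "S": {"R": 1},     # signaling node activated by the receptor
    "P": {"S": 1},     # proliferation output activated by signaling
}

def step(state):
    """Asynchronous update: pick one non-input node, apply the threshold rule."""
    node = random.choice(["S", "P"])
    total = sum(w * state[src] for src, w in WEIGHTS[node].items())
    state[node] = 1 if total > 0 else 0

def ensemble_response(ligand, n_cells=200, n_steps=20, seed=0):
    """Fraction of cells in the population with the output node switched on."""
    random.seed(seed)
    active = 0
    for _ in range(n_cells):
        state = {"R": ligand, "S": 0, "P": 0}
        for _ in range(n_steps):
            step(state)
        active += state["P"]
    return active / n_cells

# Stimulated cells respond; unstimulated cells stay off, but individual
# cells reach the response at different (random) update orders.
print(ensemble_response(1) > ensemble_response(0))  # -> True
```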

  1. A two-stage method of quantitative flood risk analysis for reservoir real-time operation using ensemble-based hydrologic forecasts

    Science.gov (United States)

    Liu, P.

    2013-12-01

    Quantitative analysis of the risk of reservoir real-time operation is a hard task owing to the difficulty of accurately describing inflow uncertainties. Ensemble-based hydrologic forecasts directly depict the inflows, not only their marginal distributions but also their persistence via scenarios. This motivates us to analyze the reservoir real-time operating risk with ensemble-based hydrologic forecasts as inputs. A method is developed by using the forecast horizon point to divide the future time into two stages, the forecast lead-time and the unpredicted time. The risk within the forecast lead-time is computed by counting the number of failing forecast scenarios, and the risk in the unpredicted time is estimated using reservoir routing with the design floods and the reservoir water levels at the forecast horizon point. As a result, a two-stage risk analysis method is set up to quantify the entire flood risk by defining the ratio of the number of scenarios that exceed the critical value to the total number of scenarios. China's Three Gorges Reservoir (TGR) is selected as a case study, where parameter and precipitation uncertainties are considered to produce ensemble-based hydrologic forecasts. Bayesian inference, via Markov chain Monte Carlo, is used to account for the parameter uncertainty. Two reservoir operation schemes, the real operated and scenario optimization, are evaluated for flood risk and hydropower profit analysis. With the 2010 flood, it is found that improving the hydrologic forecast accuracy does not necessarily decrease the reservoir real-time operation risk, and most risks come from the forecast lead-time. It is therefore valuable to decrease the variance of ensemble-based hydrologic forecasts with less bias for reservoir operational purposes.
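    The scenario-counting risk definition in this record reduces to a simple fraction. A minimal sketch with hypothetical water-level scenarios (not TGR data):

```python
# Hedged sketch: lead-time flood risk as the fraction of ensemble scenarios
# whose peak reservoir water level exceeds a critical value.

def lead_time_risk(scenarios, critical_level):
    """Fraction of scenarios whose maximum water level exceeds the critical value."""
    failures = sum(1 for s in scenarios if max(s) > critical_level)
    return failures / len(scenarios)

# Four hypothetical forecast scenarios of water levels (m), critical level 175 m.
scenarios = [
    [170.0, 172.5, 174.0],
    [171.0, 175.5, 176.2],   # exceeds
    [169.5, 171.0, 173.8],
    [172.0, 174.9, 175.1],   # exceeds
]
print(lead_time_risk(scenarios, 175.0))  # -> 0.5
```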

  2. Tropospheric ozone. Formation, properties, effects. Expert opinion

    International Nuclear Information System (INIS)

    Elstner, E.F.

    1996-01-01

    The formation and dispersion of tropospheric ozone are discussed only marginally in this expert opinion; the key interest is in the effects of ground level ozone on plants, animals, and humans. The expert opinion is based on an analysis of the available scientific publications. (orig./MG) [de

  3. DETERMINISTIC METHODS USED IN FINANCIAL ANALYSIS

    Directory of Open Access Journals (Sweden)

    MICULEAC Melania Elena

    2014-06-01

Full Text Available The deterministic methods are quantitative methods whose goal is to capture, through numerical quantification, the mechanisms by which factorial and causal relations of influence and propagation of effects are created and expressed, where the phenomenon can be expressed through a direct cause-effect functional relation. Functional, deterministic relations are causal relations in which a certain value of the characteristic corresponds to a well-defined value of the resulting phenomenon. They can directly express the correlation between the phenomenon and its influence factors, in the form of a function-type mathematical formula.

  4. Effect of Pulse Width on Ozone Generation in Pulsed Streamer Discharges

    OpenAIRE

    Tamaribuchi, Hiroyuki; Wang, Douyan; Namihira, Takao; Katsuki, Sunao; Akiyama, Hidenori; タマリブチ, ヒロユキ; オウ, トエン; ナミヒラ, タカオ; カツキ, スナオ; アキヤマ, ヒデノリ; 溜渕, 浩之; 王, 斗艶; 浪平, 隆男; 勝木, 淳; 秋山, 秀典

    2007-01-01

Ozone has been used in treatment of drinking water and waste water (e.g., deodorization, decolorization, and disinfection). Though general ozonizers based on silent discharge or barrier discharge have been used to supply ozone at many industrial sites, there are still some problems, such as improvements of ozone concentration and ozone yield. In this work, ozone was generated by pulsed power discharge in order to improve the characteristics of ozone generation. High electric field with short pulse ...

  5. Evaluation of LDA Ensembles Classifiers for Brain Computer Interface

    International Nuclear Information System (INIS)

    Arjona, Cristian; Pentácolo, José; Gareis, Iván; Atum, Yanina; Gentiletti, Gerardo; Acevedo, Rubén; Rufiner, Leonardo

    2011-01-01

The Brain Computer Interface (BCI) translates brain activity into computer commands. To increase the performance of the BCI, it is necessary to improve the feature extraction and classification techniques used to decode the user's intentions. In this article the performance of an ensemble of three linear discriminant analysis (LDA) classifiers is studied. An ensemble-based system can theoretically achieve better classification results than its individual counterpart, depending on the individual classifier generation algorithm and the procedure for combining their outputs. Classic ensemble algorithms such as bagging and boosting are discussed here. For the BCI application, it was concluded that the results generated using ER and AUC as performance indices do not give enough information to establish which configuration is best.
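As a rough illustration of the bagging approach discussed above, the following sketch trains a small ensemble of two-class Fisher LDA classifiers on bootstrap resamples and combines them by majority vote. The synthetic data stand in for BCI feature vectors, and the minimal LDA here is an assumption for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_lda(X, y):
    """Minimal two-class Fisher LDA: w = Sw^-1 (mu1 - mu0), midpoint threshold."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2.0
    return w, b

def predict(model, X):
    w, b = model
    return (X @ w + b > 0).astype(int)

def bagged_lda(X, y, n_models=5):
    """Train an ensemble of LDAs on bootstrap resamples (bagging)."""
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))   # bootstrap resample
        models.append(fit_lda(X[idx], y[idx]))
    return models

def vote(models, X):
    """Combine member outputs by majority vote."""
    votes = np.stack([predict(m, X) for m in models])
    return (votes.mean(axis=0) > 0.5).astype(int)

# Toy two-class data standing in for EEG feature vectors
X = np.vstack([rng.normal(-1, 1, (50, 3)), rng.normal(+1, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
ensemble = bagged_lda(X, y)
acc = (vote(ensemble, X) == y).mean()
```

Boosting would differ mainly in reweighting the training samples between members instead of resampling them uniformly.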

  6. Ensemble Deep Learning for Biomedical Time Series Classification

    Directory of Open Access Journals (Sweden)

    Lin-peng Jin

    2016-01-01

Full Text Available Ensemble learning has been proven, in both theory and practice, to improve generalization ability effectively. In this paper, we first briefly outline the current status of research on it. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.

  7. Device and Method for Gathering Ensemble Data Sets

    Science.gov (United States)

    Racette, Paul E. (Inventor)

    2014-01-01

An ensemble detector uses calibrated noise references to produce ensemble sets of data from which properties of non-stationary processes may be extracted. The ensemble detector comprises: a receiver; a switching device coupled to the receiver, the switching device configured to selectively connect each of a plurality of reference noise signals to the receiver; and a gain modulation circuit coupled to the receiver and configured to vary a gain of the receiver based on a forcing signal; whereby the switching device selectively connects each of the plurality of reference noise signals to the receiver to produce an output signal derived from the plurality of reference noise signals and the forcing signal.

  8. Probabilistic and deterministic soil structure interaction analysis including ground motion incoherency effects

    International Nuclear Information System (INIS)

    Elkhoraibi, T.; Hashemi, A.; Ostadan, F.

    2014-01-01

Soil-structure interaction (SSI) is a major step in the seismic design of massive and stiff structures typical of nuclear facilities and civil infrastructures such as tunnels, underground stations, dams and lock head structures. Currently most SSI analyses are performed deterministically, incorporating a limited range of variation in soil and structural properties and without consideration of ground motion incoherency effects. This often leads to overestimation of the seismic response, particularly of the In-Structure Response Spectra (ISRS), imposing significant design and equipment qualification costs, especially in the case of high-frequency sensitive equipment at stiff soil or rock sites. The reluctance to adopt a more comprehensive probabilistic approach is mainly due to the fact that the computational cost of performing probabilistic SSI analysis, even without incoherency function considerations, has been prohibitive. As such, bounding deterministic approaches have been preferred by the industry and accepted by the regulatory agencies. However, given the recently available and growing computing capabilities, the need for a probabilistic-based approach to SSI analysis is becoming clear with the advances in performance-based engineering and the utilization of fragility analysis in the decision making process, whether by the owners or the regulatory agencies. This paper demonstrates the use of both probabilistic and deterministic SSI analysis techniques to identify important engineering demand parameters in the structure. A typical nuclear industry structure is used as an example for this study. The system is analyzed for two different site conditions: rock and deep soil. Both deterministic and probabilistic SSI analysis approaches are performed, using the program SASSI, with and without ground motion incoherency considerations. In both approaches, the analysis begins at the hard rock level using the low frequency and high frequency hard rock

  9. Probabilistic and deterministic soil structure interaction analysis including ground motion incoherency effects

    Energy Technology Data Exchange (ETDEWEB)

    Elkhoraibi, T., E-mail: telkhora@bechtel.com; Hashemi, A.; Ostadan, F.

    2014-04-01

Soil-structure interaction (SSI) is a major step in the seismic design of massive and stiff structures typical of nuclear facilities and civil infrastructures such as tunnels, underground stations, dams and lock head structures. Currently most SSI analyses are performed deterministically, incorporating a limited range of variation in soil and structural properties and without consideration of ground motion incoherency effects. This often leads to overestimation of the seismic response, particularly of the In-Structure Response Spectra (ISRS), imposing significant design and equipment qualification costs, especially in the case of high-frequency sensitive equipment at stiff soil or rock sites. The reluctance to adopt a more comprehensive probabilistic approach is mainly due to the fact that the computational cost of performing probabilistic SSI analysis, even without incoherency function considerations, has been prohibitive. As such, bounding deterministic approaches have been preferred by the industry and accepted by the regulatory agencies. However, given the recently available and growing computing capabilities, the need for a probabilistic-based approach to SSI analysis is becoming clear with the advances in performance-based engineering and the utilization of fragility analysis in the decision making process, whether by the owners or the regulatory agencies. This paper demonstrates the use of both probabilistic and deterministic SSI analysis techniques to identify important engineering demand parameters in the structure. A typical nuclear industry structure is used as an example for this study. The system is analyzed for two different site conditions: rock and deep soil. Both deterministic and probabilistic SSI analysis approaches are performed, using the program SASSI, with and without ground motion incoherency considerations. In both approaches, the analysis begins at the hard rock level using the low frequency and high frequency hard rock

  10. Solar dynamics influence on the atmospheric ozone

    International Nuclear Information System (INIS)

    Gogosheva, T.; Grigorieva, V.; Mendeva, B.; Krastev, D.; Petkov, B.

    2007-01-01

A response of the atmospheric ozone to the solar dynamics has been studied using the total ozone content data taken from the satellite experiments GOME on ERS-2 and TOMS-EP, together with data obtained from the ground-based spectrophotometer Photon operating in Stara Zagora, Bulgaria, during the period 1999-2005. We also use data from surface ozone observations performed in Sofia, Bulgaria. The solar activity was characterized by the daily sunspot numbers W, the solar radio flux at 10.7 cm (F10.7) and the MgII wing-to-core ratio solar index. The impact of the solar activity on the total ozone has been investigated by analysing the ozone response to sharp changes of these parameters. Some of the examined cases showed a positive correlation between the ozone and the solar parameters, while a negative correlation was found in other cases. There were some cases when sharp increases of the solar activity did not provoke any ozone changes. The solar radiation changes during an eclipse can be considered a particular case of the solar dynamics, as this event causes a sharp change of irradiance within a comparatively short time interval. The results of both the total and surface ozone measurements carried out during the eclipses on 11 August 1999, 31 May 2003 and 29 March 2006 are presented. It was found that the atmospheric ozone behavior shows a strong response to the fast solar radiation changes which take place during a solar eclipse. (authors)

  11. Ozone impact minimization through coordinated scheduling of turnaround operations from multiple olefin plants in an ozone nonattainment area

    Science.gov (United States)

    Ge, Sijie; Wang, Sujing; Xu, Qiang; Ho, Thomas

    2018-03-01

Turnaround operations (start-up and shutdown) are critical operations in olefin plants, which emit large quantities of VOCs, NOx and CO. These emissions have great potential to impact the ozone level in ozone nonattainment areas. This study demonstrates a novel practice to minimize the ozone impact through coordinated scheduling of turnaround operations from multiple olefin plants located in Houston, Texas, an ozone nonattainment area. The study considered two olefin plants scheduled to conduct turnaround operations, one start-up and one shutdown, simultaneously on the same day within a five-hour window. Through dynamic simulations of the turnaround operations using ASPEN Plus Dynamics and air quality simulations using CAMx, the study predicts the ozone impact from the combined effect of the two turnaround operations under different starting-time scenarios. The simulations predict that the ozone impact from planned turnaround operations ranges from a maximum of 11.4 ppb to a minimum of 1.4 ppb. Hence, a reduction of up to 10.0 ppb can be achieved on a single day based on the selected two simulation days. This study demonstrates a cost-effective and environmentally benign ozone control practice for relevant stakeholders, including environmental agencies, regional plant operators, and local communities.

  12. Improving quantitative precipitation nowcasting with a local ensemble transform Kalman filter radar data assimilation system: observing system simulation experiments

    Directory of Open Access Journals (Sweden)

    Chih-Chien Tsai

    2014-03-01

    Full Text Available This study develops a Doppler radar data assimilation system, which couples the local ensemble transform Kalman filter with the Weather Research and Forecasting model. The benefits of this system to quantitative precipitation nowcasting (QPN are evaluated with observing system simulation experiments on Typhoon Morakot (2009, which brought record-breaking rainfall and extensive damage to central and southern Taiwan. The results indicate that the assimilation of radial velocity and reflectivity observations improves the three-dimensional winds and rain-mixing ratio most significantly because of the direct relations in the observation operator. The patterns of spiral rainbands become more consistent between different ensemble members after radar data assimilation. The rainfall intensity and distribution during the 6-hour deterministic nowcast are also improved, especially for the first 3 hours. The nowcasts with and without radar data assimilation have similar evolution trends driven by synoptic-scale conditions. Furthermore, we carry out a series of sensitivity experiments to develop proper assimilation strategies, in which a mixed localisation method is proposed for the first time and found to give further QPN improvement in this typhoon case.

  13. A study on reducing update frequency of the forecast samples in the ensemble-based 4DVar data assimilation method

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Aimei; Xu, Daosheng [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province; Chinese Academy of Meteorological Sciences, Beijing (China). State Key Lab. of Severe Weather; Qiu, Xiaobin [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province; Tianjin Institute of Meteorological Science (China); Qiu, Chongjian [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province

    2013-02-15

In the ensemble-based four-dimensional variational assimilation method (SVD-En4DVar), a singular value decomposition (SVD) technique is used to select the leading eigenvectors, and the analysis variables are expressed as an orthogonal-basis expansion of the eigenvectors. Experiments with a two-dimensional shallow-water equation model and simulated observations show that the truncation error and the rejection of observed signals due to the reduced-dimensional reconstruction of the analysis variable are the major factors that damage the analysis when the ensemble size is not large enough. However, a larger ensemble imposes a daunting computational burden. Experiments with a shallow-water equation model also show that the forecast error covariances remain relatively constant over time. For that reason, we propose an approach that increases the members of the forecast ensemble while reducing the update frequency of the forecast error covariance, in order to increase analysis accuracy and to reduce the computational cost. A series of experiments were conducted with the shallow-water equation model to test the efficiency of this approach. The experimental results indicate that this approach is promising. Further experiments with the WRF model show that this approach is also suitable for the real atmospheric data assimilation problem, but the update frequency of the forecast error covariances should not be too low. (orig.)
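The proposed trade-off, a larger ensemble combined with a lower covariance-update frequency, can be illustrated with a toy scalar assimilation loop. The dynamics, noise levels, and the `cov_update_every` schedule below are all hypothetical stand-ins, not the SVD-En4DVar scheme itself:

```python
import numpy as np

rng = np.random.default_rng(4)

def assimilate(cycles, ens_size, cov_update_every):
    """Toy scalar assimilation in which the forecast error covariance B is
    recomputed from the ensemble only every `cov_update_every` cycles."""
    truth, mean, r = 1.0, 0.0, 0.04          # true state, analysis mean, obs error var
    ens = rng.normal(mean, 1.0, ens_size)
    B = ens.var()
    cov_updates = 0
    for cycle in range(cycles):
        truth = 0.9 * truth + 0.2            # stand-in forecast dynamics
        ens = 0.9 * ens + 0.2
        mean = 0.9 * mean + 0.2
        if cycle % cov_update_every == 0:    # reduced covariance-update frequency
            B = ens.var()
            cov_updates += 1
        y = truth + rng.normal(0.0, np.sqrt(r))
        K = B / (B + r)                      # scalar Kalman gain with possibly stale B
        mean += K * (y - mean)
        ens += K * (y - ens)                 # shift members toward the observation
    return abs(mean - truth), cov_updates

err, n_updates = assimilate(cycles=12, ens_size=400, cov_update_every=4)
```

Reusing B across cycles is what makes a larger ensemble affordable here; the abstract's caveat is that `cov_update_every` must not be made too large.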

  14. Ozone modeling

    Energy Technology Data Exchange (ETDEWEB)

    McIllvaine, C M

    1994-07-01

Exhaust gases from power plants that burn fossil fuels contain concentrations of sulfur dioxide (SO{sub 2}), nitric oxide (NO), particulate matter, hydrocarbon compounds and trace metals. Estimated emissions from the operation of a hypothetical 500 MW coal-fired power plant are given. Ozone is considered a secondary pollutant, since it is not emitted directly into the atmosphere but is formed from other air pollutants, specifically nitrogen oxides (NO{sub x}) and non-methane organic compounds (NMOC), in the presence of sunlight. (NMOC are sometimes referred to as hydrocarbons, HC, or volatile organic compounds, VOC, and they may or may not include methane.) Additionally, ozone formation is a function of the ratio of NMOC concentrations to NO{sub x} concentrations. A typical ozone isopleth is shown, generated with the Empirical Kinetic Modeling Approach (EKMA) option of the Environmental Protection Agency's (EPA) Ozone Isopleth Plotting Mechanism (OZIPM-4) model. Ozone isopleth diagrams, originally generated with smog chamber data, are more commonly generated with photochemical reaction mechanisms and tested against smog chamber data. The shape of the isopleth curves is a function of the region (i.e. background conditions) where ozone concentrations are simulated. The location of an ozone concentration on the isopleth diagram is defined by the NMOC and NO{sub x} coordinates of the point, whose quotient is known as the NMOC/NO{sub x} ratio. Results obtained by the described model are presented.

  15. Localization of atomic ensembles via superfluorescence

    International Nuclear Information System (INIS)

    Macovei, Mihai; Evers, Joerg; Keitel, Christoph H.; Zubairy, M. Suhail

    2007-01-01

    The subwavelength localization of an ensemble of atoms concentrated to a small volume in space is investigated. The localization relies on the interaction of the ensemble with a standing wave laser field. The light scattered in the interaction of the standing wave field and the atom ensemble depends on the position of the ensemble relative to the standing wave nodes. This relation can be described by a fluorescence intensity profile, which depends on the standing wave field parameters and the ensemble properties and which is modified due to collective effects in the ensemble of nearby particles. We demonstrate that the intensity profile can be tailored to suit different localization setups. Finally, we apply these results to two localization schemes. First, we show how to localize an ensemble fixed at a certain position in the standing wave field. Second, we discuss localization of an ensemble passing through the standing wave field

  16. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...
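The idea can be sketched for a simple multi-armed bandit: keep several perturbed models, sample one uniformly each round, and act greedily under it. The arm means, noise level, and perturbation scheme below are illustrative assumptions, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([0.1, 0.3, 0.9])          # hypothetical 3-armed bandit
n_arms, n_models, noise = 3, 10, 0.3

sums = np.zeros((n_models, n_arms))
counts = np.ones((n_models, n_arms))            # pseudo-count acting as a weak prior
est = rng.normal(0.0, 1.0, (n_models, n_arms))  # diverse prior estimates per member
pulls = np.zeros(n_arms, dtype=int)

for t in range(2000):
    m = rng.integers(n_models)                  # sample one model uniformly...
    arm = int(np.argmax(est[m]))                # ...and act greedily under it
    reward = true_means[arm] + rng.normal(0.0, noise)
    pulls[arm] += 1
    # each member sees its own perturbed copy of the reward, preserving diversity
    perturbed = reward + rng.normal(0.0, noise, n_models)
    sums[:, arm] += perturbed
    counts[:, arm] += 1
    est[:, arm] = sums[:, arm] / counts[:, arm]

best_arm = int(np.argmax(pulls))
```

Sampling a member uniformly plays the role of sampling from the posterior in Thompson sampling; with neural-network models, each member would be a separately perturbed network rather than a table of means.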

  17. Impact of a future H2-based road transportation sector on the composition and chemistry of the atmosphere - Part 2: Stratospheric ozone

    Science.gov (United States)

    Wang, D.; Jia, W.; Olsen, S. C.; Wuebbles, D. J.; Dubey, M. K.; Rockett, A. A.

    2013-07-01

The prospective future adoption of molecular hydrogen (H2) to power the road transportation sector could greatly improve tropospheric air quality but also raises the question of whether the adoption would have adverse effects on the stratospheric ozone. The possibility of undesirable impacts must be fully evaluated to guide future policy decisions. Here we evaluate the possible impact of a future (2050) H2-based road transportation sector on stratospheric composition and chemistry, especially on the stratospheric ozone, with the MOZART (Model for OZone And Related chemical Tracers) model. Since future growth is highly uncertain, we evaluate the impact of two world evolution scenarios, one based on an IPCC (Intergovernmental Panel on Climate Change) high-emitting scenario (A1FI) and the other on an IPCC low-emitting scenario (B1), as well as two technological options: H2 fuel cells and H2 internal combustion engines. We assume a H2 leakage rate of 2.5% and a complete market penetration of H2 vehicles in 2050. The model simulations show that a H2-based road transportation sector would reduce stratospheric ozone concentrations as a result of perturbed catalytic ozone destruction cycles. The magnitude of the impact depends on which growth scenario evolves and which H2 technology option is applied. For a given growth scenario, stratospheric ozone decreases more in the H2 fuel cell scenarios than in the H2 internal combustion engine scenarios because of the NOx emissions in the latter case. If the same technological option is applied, the impact is larger in the A1FI emission scenario. The largest impact, a 0.54% decrease in annual average global mean stratospheric column ozone, is found with a H2 fuel cell type road transportation sector in the A1FI scenario; whereas the smallest impact, a 0.04% increase in stratospheric ozone, is found with applications of H2 internal combustion engine vehicles in the B1 scenario. The impacts of the other two scenarios fall

  18. Assessing an ensemble Kalman filter inference of Manning's n coefficient of an idealized tidal inlet against a polynomial chaos-based MCMC

    Science.gov (United States)

    Siripatana, Adil; Mayo, Talea; Sraj, Ihab; Knio, Omar; Dawson, Clint; Le Maitre, Olivier; Hoteit, Ibrahim

    2017-08-01

Bayesian estimation/inversion is commonly used to quantify and reduce modeling uncertainties in coastal ocean models, especially in the framework of parameter estimation. Based on Bayes' rule, the posterior probability distribution function (pdf) of the estimated quantities is obtained conditioned on available data. It can be computed either directly, using a Markov chain Monte Carlo (MCMC) approach, or by sequentially processing the data following a data assimilation approach, which is heavily exploited in large dimensional state estimation problems. The advantage of data assimilation schemes over MCMC-type methods arises from the ability to algorithmically accommodate a large number of uncertain quantities without significant increase in the computational requirements. However, only approximate estimates are generally obtained by this approach due to the restricted Gaussian prior and noise assumptions that are generally imposed in these methods. This contribution aims at evaluating the effectiveness of utilizing an ensemble Kalman-based data assimilation method for parameter estimation of a coastal ocean model against an MCMC polynomial chaos (PC)-based scheme. We focus on quantifying the uncertainties of a coastal ocean ADvanced CIRCulation (ADCIRC) model with respect to the Manning's n coefficients. Based on a realistic framework of observation system simulation experiments (OSSEs), we apply an ensemble Kalman filter and the MCMC method employing a surrogate of ADCIRC constructed by a non-intrusive PC expansion for evaluating the likelihood, and test both approaches under identical scenarios. We study the sensitivity of the estimated posteriors with respect to the parameters of the inference methods, including ensemble size, inflation factor, and PC order. A full analysis of both methods, in the context of a coastal ocean model, suggests that an ensemble Kalman filter with appropriate ensemble size and well-tuned inflation provides reliable mean estimates and

  19. Multi-objective optimization for generating a weighted multi-model ensemble

    Science.gov (United States)

    Lee, H.

    2017-12-01

Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic
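The single-metric baseline described above, weighting factors inversely proportional to each model's error, can be sketched as follows. The model predictions and observations are toy numbers; note that this simple scheme does not address the trade-off between conflicting metrics that motivates the multi-objective optimization:

```python
import numpy as np

def inverse_error_weights(errors):
    """Weighting factors inversely proportional to each model's error."""
    w = 1.0 / np.asarray(errors, dtype=float)
    return w / w.sum()

# Three hypothetical models predicting the same 4 grid points, plus observations
preds = np.array([[1.1, 2.0, 3.1, 3.9],
                  [1.2, 2.1, 3.3, 4.1],
                  [0.5, 1.5, 2.5, 3.5]])
obs = np.array([1.0, 2.0, 3.0, 4.0])

rmse = np.sqrt(((preds - obs) ** 2).mean(axis=1))
w = inverse_error_weights(rmse)
blend = w @ preds                     # weighted multi-model ensemble mean
blend_rmse = np.sqrt(((blend - obs) ** 2).mean())
```

With several metrics, the multi-objective step would instead search for weight vectors on the Pareto front of the metric scores rather than inverting a single error.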

  20. Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones

    Science.gov (United States)

    Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto

    2015-04-01

Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code redaction, is based on a Poissonian description of the temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with log-normal distribution of PGA or response spectrum. The main positive aspect of this approach stems from the fact that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors like site effects and source characteristics, such as duration of the strong motion and directivity, that could significantly influence the expected motion at the site are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation and soil parameters. We compare these two approaches in selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for a magnitude less than 6. We focus on sites liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable is the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to induce liquefaction; unfortunately a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of the approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions

  1. Extending Correlation Filter-Based Visual Tracking by Tree-Structured Ensemble and Spatial Windowing.

    Science.gov (United States)

    Gundogdu, Erhan; Ozkan, Huseyin; Alatan, A Aydin

    2017-11-01

Correlation filters have been successfully used in visual tracking due to their modeling power and computational efficiency. However, the state-of-the-art correlation filter-based (CFB) tracking algorithms tend to quickly discard the previous poses of the target, since they consider only a single filter in their models. On the contrary, our approach is to register multiple CFB trackers for previous poses and exploit the registered knowledge when an appearance change occurs. To this end, we propose a novel tracking algorithm [of complexity O(D)] based on a large ensemble of CFB trackers. The ensemble [of size O(2^D)] is organized over a binary tree (depth D), and learns the target appearance subspaces such that each constituent tracker becomes an expert of a certain appearance. During tracking, the proposed algorithm combines only the appearance-aware relevant experts to produce boosted tracking decisions. Additionally, we propose a versatile spatial windowing technique to enhance the individual expert trackers. For this purpose, spatial windows are learned for target objects as well as the correlation filters and then the windowed regions are processed for more robust correlations. In our extensive experiments on benchmark datasets, we achieve a substantial performance increase by using the proposed tracking algorithm together with the spatial windowing.
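A minimal single correlation filter, the building block that the tree-structured ensemble above organizes, can be sketched in the Fourier domain. This is a MOSSE-style regression on a random patch with an invented regularizer, not the paper's ensemble tracker:

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-2):
    """MOSSE-style correlation filter learned in the Fourier domain:
    H = G * conj(F) / (F * conj(F) + lam)."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(H, patch):
    """Correlation response of a new patch; its peak locates the target."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(patch)))

rng = np.random.default_rng(3)
patch = rng.normal(size=(32, 32))
g = np.zeros((32, 32))
g[16, 16] = 1.0                       # desired response: a peak at the centre
H = train_filter(patch, g)

# the same patch shifted cyclically by (3, 5) should move the peak by (3, 5)
shifted = np.roll(np.roll(patch, 3, axis=0), 5, axis=1)
resp = respond(H, shifted)
peak = np.unravel_index(np.argmax(resp), resp.shape)
```

The spatial windowing proposed in the paper would additionally multiply `patch` by a learned window before the FFT to suppress background correlations.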

  2. CSL model checking of deterministic and stochastic Petri nets

    NARCIS (Netherlands)

    Martinez Verdugo, J.M.; Haverkort, Boudewijn R.H.M.; German, R.; Heindl, A.

    2006-01-01

Deterministic and Stochastic Petri Nets (DSPNs) are a widely used high-level formalism for modeling discrete-event systems where events may occur either without consuming time, after a deterministic time, or after an exponentially distributed time. The underlying process defined by DSPNs, under

  3. Tracer-tracer relations as a tool for research on polar ozone loss

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, Rolf

    2010-07-01

The report includes the following chapters: (1) Introduction: ozone in the atmosphere, anthropogenic influence on the ozone layer, polar stratospheric ozone loss; (2) Tracer-tracer relations in the stratosphere: tracer-tracer relations as a tool in atmospheric research; impact of cosmic-ray-induced heterogeneous chemistry on polar ozone; (3) Quantifying polar ozone loss from ozone-tracer relations: principles of tracer-tracer correlation techniques; reference ozone-tracer relations in the early polar vortex; impact of mixing on ozone-tracer relations in the polar vortex; impact of mesospheric intrusions on ozone-tracer relations in the stratospheric polar vortex; calculation of chemical ozone loss in the Arctic in March 2003 based on ILAS-II measurements; (4) Epilogue.
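The core of the ozone-tracer correlation technique, chemical loss estimated as the departure of observed ozone from an early-vortex reference relation, can be sketched as follows. The linear reference fit and the N2O/O3 values are invented for illustration:

```python
import numpy as np

def ozone_loss(tracer, o3_obs, ref_poly):
    """Chemical ozone loss estimated as the departure of observed ozone
    from the early-vortex reference relation O3 = f(tracer)."""
    o3_ref = np.polyval(ref_poly, tracer)
    return o3_ref - o3_obs

# Hypothetical early-vortex reference: O3 (ppmv) vs N2O (ppbv), linear fit
ref = np.array([-0.01, 4.5])          # O3 = -0.01 * N2O + 4.5
n2o = np.array([100.0, 150.0, 200.0]) # long-lived tracer observations
o3 = np.array([2.8, 2.4, 2.1])        # ozone observed later in the winter
loss = ozone_loss(n2o, o3, ref)
```

Because the long-lived tracer is conserved while ozone is chemically destroyed, a positive departure isolates chemical loss, provided mixing and mesospheric intrusions (the complications treated in the report) are accounted for.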

  4. EnsembleGASVR: A novel ensemble method for classifying missense single nucleotide polymorphisms

    KAUST Repository

    Rapakoulia, Trisevgeni; Theofilatos, Konstantinos A.; Kleftogiannis, Dimitrios A.; Likothanasis, Spiridon D.; Tsakalidis, Athanasios K.; Mavroudi, Seferina P.

    2014-01-01

    do not support their predictions with confidence scores. Results: To overcome these limitations, a novel ensemble computational methodology is proposed. EnsembleGASVR facilitates a two-step algorithm, which in its first step applies a novel

  5. The Antarctic ozone hole

    International Nuclear Information System (INIS)

    Jones, Anna E

    2008-01-01

    Since the mid-1970s, the ozone layer over Antarctica has experienced massive destruction during every spring. In this article, we will consider the atmosphere, and what ozone and the ozone layer actually are. We explore the chemistry responsible for the ozone destruction, and learn why conditions favour ozone destruction over Antarctica. For the historical perspective, the events leading up to the discovery of the 'hole' are presented, as well as the response from the international community and the measures taken to protect the ozone layer now and into the future.

  6. Molecular storage of ozone in a clathrate hydrate: an attempt at preserving ozone at high concentrations.

    Directory of Open Access Journals (Sweden)

    Takahiro Nakajima

    Full Text Available This paper reports an experimental study of the formation of a mixed O3 + O2 + CO2 hydrate and its frozen storage under atmospheric pressure, which aimed to establish a hydrate-based technology for preserving ozone (O3), a chemically unstable substance, for various industrial, medical and consumer uses. By improving the experimental technique that we recently devised for forming an O3 + O2 + CO2 hydrate, we succeeded in significantly increasing the fraction of ozone contained in the hydrate. For a hydrate formed at a system pressure of 3.0 MPa, the mass fraction of ozone was initially about 0.9%; and even after a 20-day storage at -25°C and atmospheric pressure, it was still about 0.6%. These results support the prospect of establishing an economical, safe, and easy-to-handle ozone-preservation technology of practical use.

  7. Observing Tropospheric Ozone From Space

    Science.gov (United States)

    Fishman, Jack

    2000-01-01

    The importance of tropospheric ozone embraces a spectrum of relevant scientific issues ranging from local environmental concerns, such as damage to the biosphere and human health, to those that impact global change questions, such as climate warming. From an observational perspective, the challenge is to determine the global distribution of tropospheric ozone. Because its lifetime is short compared with other important greenhouse gases that have been monitored over the past several decades, the distribution of tropospheric ozone cannot be inferred from a relatively small set of monitoring stations. Therefore, the best way to obtain a true global picture is through space-based instrumentation, with which important spatial gradients over vast ocean expanses and other uninhabited areas can be properly characterized. In this paper, the development of the capability to measure tropospheric ozone from space over the past 15 years is summarized. Research in the late 1980s successfully led to the determination of the climatology of tropospheric ozone as a function of season; more recently, the methodology has improved to the extent where regional air pollution episodes can be characterized. The most recent modifications now provide quasi-global (50° N to 50° S) maps on a daily basis. Such a data set would allow for the study of long-range (intercontinental) transport of air pollution and the quantification of how regional emissions feed into the global tropospheric ozone budget. Future measurement capabilities within this decade promise to offer the ability to provide concurrent maps of the precursors to the in situ formation of tropospheric ozone, from which the scientific community will gain unprecedented insight into the processes that control global tropospheric chemistry.

  8. The cointegrated vector autoregressive model with general deterministic terms

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    2017-01-01

    In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class...... of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended...... model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are chi-squared distributed....
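    As a minimal illustration of the additive decomposition in this abstract (our own sketch, not the authors' estimator), one can simulate X(t) = Z(t) + Y(t) with a linear deterministic trend Z(t) and a zero-mean stationary VAR(1) standing in for the CVAR component Y(t):

    ```python
    # Sketch: simulate the additive model X(t) = Z(t) + Y(t).
    # Z(t) is a deterministic regressor (a linear trend here); Y(t) is a
    # zero-mean stable VAR(1), a stand-in for the zero-mean CVAR component.
    import numpy as np

    rng = np.random.default_rng(42)
    T, p = 200, 2                       # sample length, dimension
    A = np.array([[0.5, 0.1],           # stable VAR(1) coefficient matrix
                  [0.0, 0.3]])          # (eigenvalues 0.5, 0.3 inside unit circle)

    Y = np.zeros((T, p))
    for t in range(1, T):
        Y[t] = A @ Y[t - 1] + rng.normal(size=p)

    trend = np.arange(T)[:, None] * np.array([0.05, -0.02])  # Z(t)
    X = trend + Y                       # observed series X(t) = Z(t) + Y(t)
    ```

    Estimating the deterministic parameters from X directly is what the paper's extended model and reduced rank regression address; the simulation above only fixes the notation.
    
    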

  9. The cointegrated vector autoregressive model with general deterministic terms

    DEFF Research Database (Denmark)

    Johansen, Søren; Nielsen, Morten Ørregaard

    In the cointegrated vector autoregression (CVAR) literature, deterministic terms have until now been analyzed on a case-by-case, or as-needed basis. We give a comprehensive unified treatment of deterministic terms in the additive model X(t) = Z(t) + Y(t), where Z(t) belongs to a large class...... of deterministic regressors and Y(t) is a zero-mean CVAR. We suggest an extended model that can be estimated by reduced rank regression and give a condition for when the additive and extended models are asymptotically equivalent, as well as an algorithm for deriving the additive model parameters from the extended...... model parameters. We derive asymptotic properties of the maximum likelihood estimators and discuss tests for rank and tests on the deterministic terms. In particular, we give conditions under which the estimators are asymptotically (mixed) Gaussian, such that associated tests are chi-squared distributed....

  10. Evidence for a continuous decline in lower stratospheric ozone offsetting ozone layer recovery

    Science.gov (United States)

    Ball, William T.; Alsing, Justin; Mortlock, Daniel J.; Staehelin, Johannes; Haigh, Joanna D.; Peter, Thomas; Tummon, Fiona; Stübi, Rene; Stenke, Andrea; Anderson, John; Bourassa, Adam; Davis, Sean M.; Degenstein, Doug; Frith, Stacey; Froidevaux, Lucien; Roth, Chris; Sofieva, Viktoria; Wang, Ray; Wild, Jeannette; Yu, Pengfei; Ziemke, Jerald R.; Rozanov, Eugene V.

    2018-02-01

    Ozone forms in the Earth's atmosphere from the photodissociation of molecular oxygen, primarily in the tropical stratosphere. It is then transported to the extratropics by the Brewer-Dobson circulation (BDC), forming a protective ozone layer around the globe. Human emissions of halogen-containing ozone-depleting substances (hODSs) led to a decline in stratospheric ozone until they were banned by the Montreal Protocol, and since 1998 ozone in the upper stratosphere is rising again, likely the recovery from halogen-induced losses. Total column measurements of ozone between the Earth's surface and the top of the atmosphere indicate that the ozone layer has stopped declining across the globe, but no clear increase has been observed at latitudes between 60° S and 60° N outside the polar regions (60-90°). Here we report evidence from multiple satellite measurements that ozone in the lower stratosphere between 60° S and 60° N has indeed continued to decline since 1998. We find that, even though upper stratospheric ozone is recovering, the continuing downward trend in the lower stratosphere prevails, resulting in a downward trend in stratospheric column ozone between 60° S and 60° N. We find that total column ozone between 60° S and 60° N appears not to have decreased only because of increases in tropospheric column ozone that compensate for the stratospheric decreases. The reasons for the continued reduction of lower stratospheric ozone are not clear; models do not reproduce these trends, and thus the causes now urgently need to be established.

  11. Evidence for a Continuous Decline in Lower Stratospheric Ozone Offsetting Ozone Layer Recovery

    Science.gov (United States)

    Ball, William T.; Alsing, Justin; Mortlock, Daniel J.; Staehelin, Johannes; Haigh, Joanna D.; Peter, Thomas; Tummon, Fiona; Stuebi, Rene; Stenke, Andrea; Anderson, John; et al.

    2018-01-01

    Ozone forms in the Earth's atmosphere from the photodissociation of molecular oxygen, primarily in the tropical stratosphere. It is then transported to the extratropics by the Brewer-Dobson circulation (BDC), forming a protective "ozone layer" around the globe. Human emissions of halogen-containing ozone-depleting substances (hODSs) led to a decline in stratospheric ozone until they were banned by the Montreal Protocol, and since 1998 ozone in the upper stratosphere is rising again, likely the recovery from halogen-induced losses. Total column measurements of ozone between the Earth's surface and the top of the atmosphere indicate that the ozone layer has stopped declining across the globe, but no clear increase has been observed at latitudes between 60° S and 60° N outside the polar regions (60-90°). Here we report evidence from multiple satellite measurements that ozone in the lower stratosphere between 60° S and 60° N has indeed continued to decline since 1998. We find that, even though upper stratospheric ozone is recovering, the continuing downward trend in the lower stratosphere prevails, resulting in a downward trend in stratospheric column ozone between 60° S and 60° N. We find that total column ozone between 60° S and 60° N appears not to have decreased only because of increases in tropospheric column ozone that compensate for the stratospheric decreases. The reasons for the continued reduction of lower stratospheric ozone are not clear; models do not reproduce these trends, and thus the causes now urgently need to be established.

  12. Ozonated Olive Oils and Troubles

    Directory of Open Access Journals (Sweden)

    Bulent Uysal

    2014-04-01

    Full Text Available One of the commonly used methods for ozone therapy is ozonated oils. The most prominent type of oil used is extra virgin olive oil, although any type of unsaturated oil may be used for ozonation. There is a lot of incorrect information on the internet about ozonated oils and their use as we