WorldWideScience

Sample records for weighted ensemble path sampling

  1. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...
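
    For intuition, a minimal sketch of the idea on a Gaussian multi-armed bandit is given below: each ensemble member holds independently perturbed reward estimates, one member is drawn uniformly at each step, and the action is greedy under that member. This is an illustrative toy, not the authors' implementation; all constants and names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

K_ARMS, M_MODELS, T_STEPS, NOISE = 5, 10, 2000, 1.0
true_means = rng.normal(0.0, 1.0, K_ARMS)

# Each ensemble member keeps its own randomly perturbed reward sums and counts;
# the perturbations keep the members diverse, mimicking posterior spread.
sums = rng.normal(0.0, NOISE, (M_MODELS, K_ARMS))
counts = np.ones((M_MODELS, K_ARMS))

total_reward = 0.0
for t in range(T_STEPS):
    m = rng.integers(M_MODELS)               # sample one model uniformly
    arm = int(np.argmax(sums[m] / counts[m]))  # act greedily under that model
    reward = true_means[arm] + rng.normal(0.0, NOISE)
    total_reward += reward
    # every member is updated with an independently perturbed copy of the reward
    sums[:, arm] += reward + rng.normal(0.0, NOISE, M_MODELS)
    counts[:, arm] += 1

print(f"average reward {total_reward / T_STEPS:.3f}, "
      f"best arm mean {true_means.max():.3f}")
```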

  2. Path planning in uncertain flow fields using ensemble method

    KAUST Repository

    Wang, Tong

    2016-08-20

    An ensemble-based approach is developed to conduct optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where an ensemble of deterministic predictions is used to model and quantify uncertainty. In an operational setting, much about dynamics, topography, and forcing of the ocean environment is uncertain. To address this uncertainty, the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy and to develop insight into extensions dealing with general circulation ocean models. In particular, the ensemble method enables us to perform a statistical analysis of travel times and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.

  3. Girsanov reweighting for path ensembles and Markov state models

    Science.gov (United States)

    Donati, L.; Hartmann, C.; Keller, B. G.

    2017-06-01

    The sensitivity of molecular dynamics to changes in the potential energy function plays an important role in understanding the dynamics and function of complex molecules. We present a method to obtain path ensemble averages of a perturbed dynamics from a set of paths generated by a reference dynamics. It is based on the concept of path probability measure and the Girsanov theorem, a result from stochastic analysis that allows one to estimate a change of measure of a path ensemble. Since Markov state models (MSMs) of the molecular dynamics can be formulated as a combined phase-space and path ensemble average, the method can be extended to reweight MSMs by combining it with a reweighting of the Boltzmann distribution. We demonstrate how to efficiently implement the Girsanov reweighting in a molecular dynamics simulation program by calculating parts of the reweighting factor "on the fly" during the simulation, and we benchmark the method on test systems ranging from a two-dimensional diffusion process and an artificial many-body system to alanine dipeptide and valine dipeptide in implicit and explicit water. The method can be used to study the sensitivity of molecular dynamics to external perturbations as well as to reweight trajectories generated by enhanced sampling schemes to the original dynamics.
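
    To make the "on the fly" accumulation concrete, the following sketch reweights endpoint averages of a 1-D overdamped Langevin reference simulation to a tilted potential U + V. It assumes unit friction and the standard discretized Girsanov log-weight log M = Σ u_k ΔW_k − ½ Σ u_k² Δt with u = −V′(x)/σ; the potential and all parameters are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

EPS, SIGMA, DT, N_STEPS, N_PATHS = 0.3, 1.0, 1e-3, 5000, 200

def dU(x):  # derivative of the reference potential U(x) = x^4/4 - x^2/2
    return x**3 - x

def dV(x):  # derivative of the perturbation V(x) = EPS * x (constant tilt)
    return EPS

log_w = np.zeros(N_PATHS)
x = np.full(N_PATHS, -1.0)            # start all paths in the left well
for _ in range(N_STEPS):
    dW = rng.normal(0.0, np.sqrt(DT), N_PATHS)
    # Girsanov integrand u = (perturbed drift - reference drift) / sigma
    u = -dV(x) / SIGMA
    log_w += u * dW - 0.5 * u**2 * DT  # accumulate log path weight on the fly
    x += -dU(x) * DT + SIGMA * dW      # propagate the *reference* dynamics

w = np.exp(log_w - log_w.max())
w /= w.sum()
print("reweighted <x> under perturbed dynamics:", float(np.sum(w * x)))
print("unweighted <x> under reference dynamics:", float(x.mean()))
```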

  4. Time-optimal path planning in uncertain flow fields using ensemble method

    KAUST Repository

    Wang, Tong; Le Maître, Olivier; Hoteit, Ibrahim; Knio, Omar

    2016-01-01

    the performance of the sampling strategy, and develop insight into extensions dealing with regional or general circulation models. In particular, the ensemble method enables us to perform a statistical analysis of travel times, and consequently develop a path planning

  5. Nonadiabatic transition path sampling

    International Nuclear Information System (INIS)

    Sherman, M. C.; Corcelli, S. A.

    2016-01-01

    Fewest-switches surface hopping (FSSH) is combined with transition path sampling (TPS) to produce a new method called nonadiabatic path sampling (NAPS). The NAPS method is validated on a model electron transfer system coupled to a Langevin bath. Numerically exact rate constants are computed using the reactive flux (RF) method over a broad range of solvent frictions that span from the energy diffusion (low friction) regime to the spatial diffusion (high friction) regime. The NAPS method is shown to quantitatively reproduce the RF benchmark rate constants over the full range of solvent friction. Integrating FSSH within the TPS framework expands the applicability of both approaches and creates a new method that will be helpful in determining detailed mechanisms for nonadiabatic reactions in the condensed-phase.

  6. Ensemble Weight Enumerators for Protograph LDPC Codes

    Science.gov (United States)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear-minimum-distance property is sensitive to the proportion of degree-2 variable nodes. The results derived here on ensemble weight enumerators show that the linear-minimum-distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  7. Creating ensembles of decision trees through sampling

    Science.gov (United States)

    Kamath, Chandrika; Cantu-Paz, Erick

    2005-08-30

    A system for decision tree ensembles that includes a module to read the data, a module to sort the data, a module to evaluate a potential split of the data according to some criterion using a random sample of the data, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method is based on statistical sampling techniques and includes the steps of reading the data; sorting the data; evaluating a potential split according to some criterion using a random sample of the data; splitting the data; and combining multiple decision trees in ensembles.
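
    A minimal sketch of the sampling-based ensemble idea, using bootstrap samples and majority voting; scikit-learn trees stand in for the patented modules, and the split-evaluation subsampling described above is not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X, y = make_classification(n_samples=600, n_features=10, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

# Train each tree on a bootstrap sample of the data, then combine by majority vote.
trees = []
for _ in range(25):
    idx = rng.integers(0, len(X_tr), len(X_tr))  # random sample with replacement
    trees.append(DecisionTreeClassifier().fit(X_tr[idx], y_tr[idx]))

votes = np.mean([t.predict(X_te) for t in trees], axis=0)
ensemble_pred = (votes >= 0.5).astype(int)
print("ensemble accuracy:", (ensemble_pred == y_te).mean())
```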

  8. Path Minima Queries in Dynamic Weighted Trees

    DEFF Research Database (Denmark)

    Davoodi, Pooya; Brodal, Gerth Stølting; Satti, Srinivasa Rao

    2011-01-01

    In the path minima problem on a tree, each edge is assigned a weight and a query asks for the edge with minimum weight on a path between two nodes. For the dynamic version of the problem, where the edge weights can be updated, we give data structures that achieve optimal query time...
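
    For contrast with such data structures, a naive baseline is easy to state: walk both endpoints up to their lowest common ancestor while tracking the minimum edge weight. The sketch below assumes a rooted tree with precomputed depths; updates are O(1) but queries cost time proportional to the path length.

```python
import math

def path_min(u, v, parent, w, depth):
    """Minimum edge weight on the u-v path; w[v] is the weight of (v, parent[v])."""
    best = math.inf
    while depth[u] > depth[v]:
        best = min(best, w[u]); u = parent[u]
    while depth[v] > depth[u]:
        best = min(best, w[v]); v = parent[v]
    while u != v:                        # climb in lockstep to the LCA
        best = min(best, w[u], w[v])
        u, v = parent[u], parent[v]
    return best

# Example: a small tree rooted at 0.
parent = {0: None, 1: 0, 2: 0, 3: 1, 4: 1}
w      = {1: 5, 2: 2, 3: 7, 4: 1}        # edge weights to each node's parent
depth  = {0: 0, 1: 1, 2: 1, 3: 2, 4: 2}
print(path_min(3, 2, parent, w, depth))  # path 3-1-0-2 -> minimum weight 2
```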

  9. Current path in light emitting diodes based on nanowire ensembles

    International Nuclear Information System (INIS)

    Limbach, F; Hauswald, C; Lähnemann, J; Wölz, M; Brandt, O; Trampert, A; Hanke, M; Jahn, U; Calarco, R; Geelhaar, L; Riechert, H

    2012-01-01

    Light emitting diodes (LEDs) have been fabricated using ensembles of free-standing (In, Ga)N/GaN nanowires (NWs) grown on Si substrates in the self-induced growth mode by molecular beam epitaxy. Electron-beam-induced current analysis, cathodoluminescence as well as biased μ-photoluminescence spectroscopy, transmission electron microscopy, and electrical measurements indicate that the electroluminescence of such LEDs is governed by the differences in the individual current densities of the single-NW LEDs operated in parallel, i.e. by the inhomogeneity of the current path in the ensemble LED. In addition, the optoelectronic characterization leads to the conclusion that these NWs exhibit N-polarity and that the (In, Ga)N quantum well states in the NWs are subject to a non-vanishing quantum confined Stark effect. (paper)

  10. Time-optimal path planning in uncertain flow fields using ensemble method

    KAUST Repository

    Wang, Tong

    2016-01-06

    An ensemble-based approach is developed to conduct time-optimal path planning in unsteady ocean currents under uncertainty. We focus our attention on two-dimensional steady and unsteady uncertain flows, and adopt a sampling methodology that is well suited to operational forecasts, where a set of deterministic predictions is used to model and quantify uncertainty in the predictions. In the operational setting, much about the dynamics, topography and forcing of the ocean environment is uncertain, and as a result a single path produced by a model simulation has limited utility. To overcome this limitation, we rely on a finite-size ensemble of deterministic forecasts to quantify the impact of variability in the dynamics. The uncertainty of the flow field is parametrized using a finite number of independent canonical random variables with known densities, and the ensemble is generated by sampling these variables. For each of the resulting realizations of the uncertain current field, we predict the optimal path by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values for the BVP solver. This allows us to examine and analyze the performance of the sampling strategy, and develop insight into extensions dealing with regional or general circulation models. In particular, the ensemble method enables us to perform a statistical analysis of travel times, and consequently develop a path planning approach that accounts for these statistics. The proposed methodology is tested for a number of scenarios. We first validate our algorithms by reproducing simple canonical solutions, and then demonstrate our approach in more complex flow fields, including idealized, steady and unsteady double-gyre flows.
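
    The statistical post-processing step lends itself to a very small illustration. The toy below replaces the BVP/Pontryagin solve with a closed-form crossing time for a straight channel traversal under one uncertain current variable, then reports ensemble travel-time statistics; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the ensemble step: the current strength is one canonical
# random variable with known density; the real method solves a BVP per member.
N_ENS, BOAT_SPEED, WIDTH = 500, 1.0, 1.0
current = np.clip(rng.normal(0.5, 0.2, N_ENS), 0.0, 0.9)  # truncate realizations
                                                          # faster than the boat

# Crossing a channel of width WIDTH on a straight track: the boat must cancel
# the along-channel current, so its across-channel speed is sqrt(V^2 - c^2).
across = np.sqrt(BOAT_SPEED**2 - current**2)
travel_time = WIDTH / across

print(f"mean {travel_time.mean():.3f}, std {travel_time.std():.3f}, "
      f"95th pct {np.quantile(travel_time, 0.95):.3f}")
```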

  11. Approximate Shortest Homotopic Paths in Weighted Regions

    KAUST Repository

    Cheng, Siu-Wing; Jin, Jiongxin; Vigneron, Antoine; Wang, Yajun

    2010-01-01

    Let P be a path between two points s and t in a polygonal subdivision T with obstacles and weighted regions. Given a relative error tolerance ε ∈(0,1), we present the first algorithm to compute a path between s and t that can be deformed to P

  12. Approximate shortest homotopic paths in weighted regions

    KAUST Repository

    Cheng, Siuwing; Jin, Jiongxin; Vigneron, Antoine E.; Wang, Yajun

    2012-01-01

    A path P between two points s and t in a polygonal subdivision T with obstacles and weighted regions defines a class of paths that can be deformed to P without passing over any obstacle. We present the first algorithm that, given P and a relative

  13. Weight Distribution for Non-binary Cluster LDPC Code Ensemble

    Science.gov (United States)

    Nozaki, Takayuki; Maehara, Masaki; Kasai, Kenta; Sakaniwa, Kohichi

    In this paper, we derive the average weight distributions for the irregular non-binary cluster low-density parity-check (LDPC) code ensembles. Moreover, we give the exponential growth rate of the average weight distribution in the limit of large code length. We show that there exist (2, d_c)-regular non-binary cluster LDPC code ensembles whose normalized typical minimum distances are strictly positive.

  14. Approximate Shortest Homotopic Paths in Weighted Regions

    KAUST Repository

    Cheng, Siu-Wing

    2010-01-01

    Let P be a path between two points s and t in a polygonal subdivision T with obstacles and weighted regions. Given a relative error tolerance ε ∈ (0,1), we present the first algorithm to compute a path between s and t that can be deformed to P without passing over any obstacle and whose cost is within a factor 1 + ε of the optimum. The running time is O(h³/ε² kn polylog(k, n, 1/ε)), where k is the number of segments in P and h and n are the numbers of obstacles and vertices in T, respectively. The constant in the running time of our algorithm depends on some geometric parameters and the ratio of the maximum region weight to the minimum region weight. © 2010 Springer-Verlag.

  15. Approximate shortest homotopic paths in weighted regions

    KAUST Repository

    Cheng, Siuwing

    2012-02-01

    A path P between two points s and t in a polygonal subdivision T with obstacles and weighted regions defines a class of paths that can be deformed to P without passing over any obstacle. We present the first algorithm that, given P and a relative error tolerance ε ∈ (0, 1), computes a path from this class with cost at most 1 + ε times the optimum. The running time is O(h³/ε² kn polylog(k, n, 1/ε)), where k is the number of segments in P and h and n are the numbers of obstacles and vertices in T, respectively. The constant in the running time of our algorithm depends on some geometric parameters and the ratio of the maximum region weight to the minimum region weight. © 2012 World Scientific Publishing Company.

  16. Equipartition terms in transition path ensemble: Insights from molecular dynamics simulations of alanine dipeptide

    Science.gov (United States)

    Li, Wenjin

    2018-02-01

    The transition path ensemble consists of reactive trajectories and possesses all the information necessary for understanding the mechanism and dynamics of important condensed-phase processes. However, a quantitative description of the properties of the transition path ensemble is far from established. Here, with numerical calculations on a model system, the equipartition terms defined in thermal equilibrium were estimated for the first time in the transition path ensemble. It was not surprising to observe that the energy was not equally distributed among all the coordinates. However, the energies distributed on a pair of conjugate coordinates remained equal. Higher energies were observed on several coordinates that are strongly coupled to the reaction coordinate, while the rest were almost equally distributed. In addition, the ensemble-averaged energy on each coordinate as a function of time was also quantified. These quantitative analyses of energy distributions provide new insights into the transition path ensemble.

  17. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Directory of Open Access Journals (Sweden)

    Luis C González

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  18. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2012-01-01

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, the number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  19. Water evaporation: a transition path sampling study.

    Science.gov (United States)

    Varilly, Patrick; Chandler, David

    2013-02-07

    We use transition path sampling to study evaporation in the SPC/E model of liquid water. On the basis of thousands of evaporation trajectories, we characterize the members of the transition state ensemble (TSE), which exhibit a liquid-vapor interface with predominantly negative mean curvature at the site of evaporation. We also find that after evaporation is complete, the distributions of translational and angular momenta of the evaporated water are Maxwellian with a temperature equal to that of the liquid. To characterize the evaporation trajectories in their entirety, we find that it suffices to project them onto just two coordinates: the distance of the evaporating molecule to the instantaneous liquid-vapor interface and the velocity of the water along the average interface normal. In this projected space, we find that the TSE is well-captured by a simple model of ballistic escape from a deep potential well, with no additional barrier to evaporation beyond the cohesive strength of the liquid. Equivalently, they are consistent with a near-unity probability for a water molecule impinging upon a liquid droplet to condense. These results agree with previous simulations and with some, but not all, recent experiments.

  20. Weighted ensemble transform Kalman filter for image assimilation

    Directory of Open Access Journals (Sweden)

    Sebastien Beyou

    2013-01-01

    This study proposes an extension of the Weighted Ensemble Kalman filter (WEnKF) proposed by Papadakis et al. (2010) for the assimilation of image observations. The main focus of this study is on a novel formulation of the Weighted filter with the Ensemble Transform Kalman filter (WETKF), incorporating directly as a measurement model a non-linear image reconstruction criterion. This technique has been compared to the original WEnKF on numerical and real-world data of 2-D turbulence observed through the transport of a passive scalar. In particular, it has been applied for the reconstruction of oceanic surface current vorticity fields from sea surface temperature (SST) satellite data. This latter technique enables a consistent recovery along time of oceanic surface currents and vorticity maps in the presence of large missing data areas and strong noise.
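
    For reference, a minimal ETKF analysis step (square-root form following Hunt et al. 2007) with a linear observation operator looks roughly as follows; the paper's contribution, a non-linear image reconstruction criterion as measurement model, is not reproduced here.

```python
import numpy as np

def etkf_update(X, y, H, R):
    """One ETKF analysis step. X: (n, N) ensemble, y: (m,) observations,
    H: (m, n) linear observation operator, R: (m, m) observation covariance."""
    n, N = X.shape
    x_mean = X.mean(axis=1)
    A = X - x_mean[:, None]                  # state anomalies
    Y = H @ A                                # observation-space anomalies
    C = (N - 1) * np.eye(N) + Y.T @ np.linalg.solve(R, Y)
    evals, evecs = np.linalg.eigh(C)         # C = [(N-1)I + Y^T R^-1 Y]
    Cinv = evecs @ np.diag(1.0 / evals) @ evecs.T
    w_mean = Cinv @ Y.T @ np.linalg.solve(R, y - H @ x_mean)
    W = evecs @ np.diag(np.sqrt((N - 1) / evals)) @ evecs.T  # sqrt transform
    return x_mean[:, None] + A @ (w_mean[:, None] + W)

# Tiny usage example: a 3-variable state observed directly with noise.
rng = np.random.default_rng(4)
X = rng.normal(0.0, 1.0, (3, 20)) + np.array([[1.0], [2.0], [3.0]])
Xa = etkf_update(X, np.array([1.2, 1.8, 3.1]), np.eye(3), 0.1 * np.eye(3))
print("analysis mean:", Xa.mean(axis=1))
```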

  1. ANALYSIS OF SST IMAGES BY WEIGHTED ENSEMBLE TRANSFORM KALMAN FILTER

    OpenAIRE

    Sai , Gorthi; Beyou , Sébastien; Memin , Etienne

    2011-01-01

    This paper presents a novel, efficient scheme for the analysis of Sea Surface Temperature (SST) ocean images. We consider the estimation of the velocity fields and vorticity values from a sequence of oceanic images. The contribution of this paper lies in proposing a novel, robust and simple approach based on the Weighted Ensemble Transform Kalman filter (WETKF) data assimilation technique for the analysis of real SST images, that may contain coast regions or large areas of ...

  2. Path planning in uncertain flow fields using ensemble method

    KAUST Repository

    Wang, Tong; Le Maître, Olivier P.; Hoteit, Ibrahim; Knio, Omar

    2016-01-01

    , we predict the path that minimizes the travel time by solving a boundary value problem (BVP), based on the Pontryagin maximum principle. A family of backward-in-time trajectories starting at the end position is used to generate suitable initial values

  3. Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.

    Science.gov (United States)

    Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel

    2017-06-01

    Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.

  4. Statistical Analysis of the First Passage Path Ensemble of Jump Processes

    Science.gov (United States)

    von Kleist, Max; Schütte, Christof; Zhang, Wei

    2018-02-01

    The transition mechanism of jump processes between two different subsets in state space reveals important dynamical information of the processes and therefore has attracted considerable attention in the past years. In this paper, we study the first passage path ensemble of both discrete-time and continuous-time jump processes on a finite state space. The main approach is to divide each first passage path into nonreactive and reactive segments and to study them separately. The analysis can be applied to jump processes which are non-ergodic, as well as continuous-time jump processes where the waiting time distributions are non-exponential. In the particular case that the jump processes are both Markovian and ergodic, our analysis elucidates the relations between the study of the first passage paths and the study of the transition paths in transition path theory. We provide algorithms to numerically compute statistics of the first passage path ensemble. The computational complexity of these algorithms scales with the complexity of solving a linear system, for which efficient methods are available. Several examples demonstrate the wide applicability of the derived results across research areas.
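
    One of the standard computations behind such analyses reduces to a linear solve: for a discrete-time Markov chain, the mean first passage times into a target set satisfy (I − Q)t = 1, where Q restricts the transition matrix to the non-target states. A sketch with an illustrative chain, not taken from the paper:

```python
import numpy as np

def mean_first_passage_times(P, target):
    """Expected steps to reach `target` (a set of state indices) from each
    state, for a discrete-time Markov chain with transition matrix P."""
    n = P.shape[0]
    rest = [i for i in range(n) if i not in target]
    Q = P[np.ix_(rest, rest)]           # transitions among non-target states
    t_rest = np.linalg.solve(np.eye(len(rest)) - Q, np.ones(len(rest)))
    t = np.zeros(n)
    t[rest] = t_rest                    # target states have hitting time 0
    return t

# Three-state chain; expected hitting times of state 2.
P = np.array([[0.8, 0.15, 0.05],
              [0.1, 0.8,  0.1 ],
              [0.2, 0.2,  0.6 ]])
print(mean_first_passage_times(P, {2}))
```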

  5. Asymmetric similarity-weighted ensembles for image segmentation

    DEFF Research Database (Denmark)

    Cheplygina, V.; Van Opbroek, A.; Ikram, M. A.

    2016-01-01

    Supervised classification is widely used for image segmentation. To work effectively, these techniques need large amounts of labeled training data that is representative of the test data. Different patient groups, different scanners or different scanning protocols can lead to differences between… the images, thus representative data might not be available. Transfer learning techniques can be used to account for these differences, thus taking advantage of all the available data acquired with different protocols. We investigate the use of classifier ensembles, where each classifier is weighted… and the direction of measurement needs to be chosen carefully. We also show that a point set similarity measure is robust across different studies, and outperforms state-of-the-art results on a multi-center brain tissue segmentation task.

  6. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory]; Diks, Cees G H [NON LANL]; Clark, Martyn P [NON LANL]

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
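
    A minimal sketch of the EM iteration for Gaussian BMA (bias correction omitted; a common predictive variance is assumed, as in Raftery et al.); all data below are synthetic.

```python
import numpy as np

def bma_em(forecasts, obs, n_iter=200):
    """EM for Gaussian BMA. forecasts: (K, T) member forecasts, obs: (T,).
    Returns the member weights and the common predictive variance."""
    K, T = forecasts.shape
    w = np.full(K, 1.0 / K)
    var = np.var(obs - forecasts.mean(axis=0))
    for _ in range(n_iter):
        # E-step: responsibility of member k for observation t
        dens = np.exp(-0.5 * (obs - forecasts) ** 2 / var) / np.sqrt(2 * np.pi * var)
        z = w[:, None] * dens
        z /= z.sum(axis=0, keepdims=True)
        # M-step: update weights and the common predictive variance
        w = z.mean(axis=1)
        var = np.sum(z * (obs - forecasts) ** 2) / T
    return w, var

rng = np.random.default_rng(5)
truth = rng.normal(0, 1, 300)
fc = np.stack([truth + rng.normal(0, s, 300) for s in (0.3, 0.6, 1.5)])
print(bma_em(fc, truth))   # the sharpest member should get the largest weight
```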

  7. Competitive Learning Neural Network Ensemble Weighted by Predicted Performance

    Science.gov (United States)

    Ye, Qiang

    2010-01-01

    Ensemble approaches have been shown to enhance classification by combining the outputs from a set of voting classifiers. Diversity in error patterns among base classifiers promotes ensemble performance. Multi-task learning is an important characteristic for Neural Network classifiers. Introducing a secondary output unit that receives different…

  8. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    Energy Technology Data Exchange (ETDEWEB)

    Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Huang, Maoyi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hou, Zhangshuan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bao, Jie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ren, Huiying [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-08-01

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces the per-chain sampling burden and enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently hones in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors, which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.
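
    The differential-evolution proposal at the heart of such samplers is compact enough to sketch: each chain jumps along the difference of two other randomly chosen chains, scaled by γ = 2.38/√(2d), plus a small jitter. The toy below targets a 2-D Gaussian; it is not the SAChES code.

```python
import numpy as np

rng = np.random.default_rng(6)

def log_post(x):                        # toy target: standard 2-D Gaussian
    return -0.5 * np.sum(x**2, axis=-1)

N_CHAINS, DIM, N_STEPS = 10, 2, 5000
gamma = 2.38 / np.sqrt(2 * DIM)         # standard DE-MC jump scale
chains = rng.normal(0, 3, (N_CHAINS, DIM))
samples = []
for _ in range(N_STEPS):
    for i in range(N_CHAINS):
        a, b = rng.choice([j for j in range(N_CHAINS) if j != i], 2,
                          replace=False)
        prop = chains[i] + gamma * (chains[a] - chains[b]) \
               + rng.normal(0, 1e-4, DIM)       # small jitter term
        if np.log(rng.uniform()) < log_post(prop) - log_post(chains[i]):
            chains[i] = prop                    # Metropolis accept
    samples.append(chains.copy())
samples = np.concatenate(samples[1000:])        # drop burn-in
print("sample mean:", samples.mean(axis=0), "sample var:", samples.var(axis=0))
```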

  9. Path analysis for selection of feijoa with greater pulp weight

    Directory of Open Access Journals (Sweden)

    Joel Donazzolo

    ABSTRACT: The objective of this paper was to identify the direct and indirect effects of feijoa (Acca sellowiana) fruit traits on pulp weight, in order to use these traits in indirect selection of genotypes. Fruits of five feijoa plants were collected in Rio Grande do Sul in the years 2009, 2010 and 2011. Six traits were evaluated: diameter, length, total weight, pulp weight, peel thickness and number of seeds per fruit. In the path analysis, with or without ridge regression, pulp weight was considered as the basic variable, and the other traits were considered as explanatory variables. Total weight and fruit diameter had high direct effects, and are the main traits associated with pulp weight. These traits may serve as criteria for indirect selection to increase feijoa pulp weight, since they are easy to measure.

  10. Probability weighted ensemble transfer learning for predicting interactions between HIV-1 and human proteins.

    Directory of Open Access Journals (Sweden)

    Suyu Mei

    Reconstruction of host-pathogen protein interaction networks is of great significance to reveal the underlying microbic pathogenesis. However, the current experimentally-derived networks are generally small and should be augmented by computational methods for less-biased biological inference. From the point of view of computational modelling, data scarcity, data unavailability and negative data sampling are the three major problems for host-pathogen protein interaction network reconstruction. In this work, we are motivated to address the three concerns and propose a probability weighted ensemble transfer learning model for HIV-human protein interaction prediction (PWEN-TLM), where a support vector machine (SVM) is adopted as the individual classifier of the ensemble model. In the model, data scarcity and data unavailability are tackled by homolog knowledge transfer. The importance of homolog knowledge is measured by the ROC-AUC metric of the individual classifiers, whose outputs are probability weighted to yield the final decision. In addition, we further validate the assumption that homolog knowledge alone is sufficient to train a satisfactory model for host-pathogen protein interaction prediction. Thus the model is more robust against data unavailability, with a less demanding data constraint. As regards negative data construction, experiments show that exclusiveness of subcellularly co-localized proteins is unbiased and more reliable than random sampling. Last, we conduct an analysis of overlapped predictions between our model and the existing models, and apply the model to novel host-pathogen PPI recognition for further biological research.
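
    The probability-weighted combination itself is straightforward to sketch: each member's output probability is weighted by its ROC-AUC. The toy below uses logistic regressions on disjoint feature views as stand-ins for the paper's homolog-knowledge SVMs; in practice the AUC weights would come from a separate validation split rather than the test set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=800, n_features=12, random_state=10)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=10)

# Three members trained on different feature views (illustrative only).
views = [slice(0, 4), slice(4, 8), slice(8, 12)]
members = [LogisticRegression(max_iter=1000).fit(X_tr[:, v], y_tr)
           for v in views]

# Weight each member's probability output by its ROC-AUC.
aucs = np.array([roc_auc_score(y_te, m.predict_proba(X_te[:, v])[:, 1])
                 for m, v in zip(members, views)])
w = aucs / aucs.sum()
p = sum(wi * m.predict_proba(X_te[:, v])[:, 1]
        for wi, m, v in zip(w, members, views))
print("weights:", w.round(3), "ensemble AUC:", round(roc_auc_score(y_te, p), 3))
```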

  11. Constructing Better Classifier Ensemble Based on Weighted Accuracy and Diversity Measure

    Directory of Open Access Journals (Sweden)

    Xiaodong Zeng

    2014-01-01

    A weighted accuracy and diversity (WAD) method is presented, a novel measure used to evaluate the quality of a classifier ensemble and assist in the ensemble selection task. The proposed measure is motivated by a commonly accepted hypothesis: a robust classifier ensemble should not only be accurate but also different from every other member. In fact, accuracy and diversity are mutually restraining factors; an ensemble with high accuracy may have low diversity, and an overly diverse ensemble may negatively affect accuracy. This study proposes a method to find the balance between accuracy and diversity that enhances the predictive ability of an ensemble for unknown data. The quality assessment for an ensemble is performed such that the final score is achieved by computing the harmonic mean of accuracy and diversity, where two weight parameters are used to balance them. The measure is compared to two representative measures, Kappa-Error and GenDiv, and two threshold measures that consider only accuracy or diversity, with two heuristic search algorithms (a genetic algorithm and a forward hill-climbing algorithm) in ensemble selection tasks performed on 15 UCI benchmark datasets. The empirical results demonstrate that the WAD measure is superior to the others in most cases.

  12. Multi-objective optimization for generating a weighted multi-model ensemble

    Science.gov (United States)

    Lee, H.

    2017-12-01

    Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic

  13. Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling

    Science.gov (United States)

    Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji

    We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.

  14. Hybrid algorithm of ensemble transform and importance sampling for assimilation of non-Gaussian observations

    Directory of Open Access Journals (Sweden)

    Shin'ya Nakano

    2014-05-01

    A hybrid algorithm that combines the ensemble transform Kalman filter (ETKF) and the importance sampling approach is proposed. Since the ETKF assumes a linear Gaussian observation model, the estimate obtained by the ETKF can be biased in cases with nonlinear or non-Gaussian observations. The particle filter (PF) is based on the importance sampling technique, and is applicable to problems with nonlinear or non-Gaussian observations. However, the PF usually requires an unrealistically large sample size in order to achieve a good estimation, and thus it is computationally prohibitive. In the proposed hybrid algorithm, we obtain a proposal distribution similar to the posterior distribution by using the ETKF. A large number of samples are then drawn from the proposal distribution, and these samples are weighted to approximate the posterior distribution according to the importance sampling principle. Since the importance sampling provides an estimate of the probability density function (PDF) without assuming linearity or Gaussianity, we can resolve the bias due to the nonlinear or non-Gaussian observations. Finally, in the next forecast step, we reduce the sample size to achieve computational efficiency based on the Gaussian assumption, while we use a relatively large number of samples in the importance sampling in order to capture the non-Gaussian features of the posterior PDF. The use of the ETKF is also beneficial in terms of the computational simplicity of generating a number of random samples from the proposal distribution and in weighting each of the samples. The proposed algorithm is not necessarily effective in the case that the ensemble is located far from the true state. However, monitoring the effective sample size and tuning the factor for covariance inflation can resolve this problem. In this paper, the proposed hybrid algorithm is introduced and its performance is evaluated through experiments with non-Gaussian observations.

  15. AWE-WQ: fast-forwarding molecular dynamics using the accelerated weighted ensemble.

    Science.gov (United States)

    Abdul-Wahid, Badi'; Feng, Haoyun; Rajan, Dinesh; Costaouec, Ronan; Darve, Eric; Thain, Douglas; Izaguirre, Jesús A

    2014-10-27

    A limitation of traditional molecular dynamics (MD) is that reaction rates are difficult to compute. This is due to the rarity of observing transitions between metastable states, since high energy barriers trap the system in these states. Recently the weighted ensemble (WE) family of methods has emerged, which can flexibly and efficiently sample conformational space without being trapped and allows calculation of unbiased rates. However, while WE can sample correctly and efficiently, a scalable implementation applicable to interesting biomolecular systems is not available. We provide here a GPLv2 implementation called AWE-WQ of a WE algorithm using the master/worker distributed computing WorkQueue (WQ) framework. AWE-WQ is scalable to thousands of nodes and supports dynamic allocation of computer resources, heterogeneous resource usage (such as central processing units (CPUs) and graphical processing units (GPUs) concurrently), seamless heterogeneous cluster usage (i.e., campus grids and cloud providers), and arbitrary MD codes such as GROMACS, while ensuring that all statistics are unbiased. We applied AWE-WQ to a 34-residue protein which simulated 1.5 ms over 8 months with peak aggregate performance of 1000 ns/h. Comparison was done with a 200 μs simulation collected on a GPU over a similar timespan. The folding and unfolding rates were of comparable accuracy.
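
    The split/merge resampling that keeps WE statistically unbiased can be sketched in a few lines: walkers are binned along a progress coordinate, heavy walkers are split (weight halved) and light walkers merged (weights summed, the survivor chosen with probability proportional to weight) until each occupied bin holds a target count. A 1-D toy in the spirit of Huber and Kim, not the AWE-WQ implementation:

```python
import numpy as np

rng = np.random.default_rng(7)

def we_resample(coords, weights, bin_edges, m_per_bin=4):
    """One weighted-ensemble split/merge step: keep m_per_bin walkers per
    occupied bin while exactly conserving the total probability weight."""
    new_x, new_w = [], []
    bins = np.digitize(coords, bin_edges)
    for b in np.unique(bins):
        x = list(coords[bins == b]); w = list(weights[bins == b])
        while len(x) > m_per_bin:              # merge the two lightest walkers
            i, j = np.argsort(w)[:2]
            keep = i if rng.uniform() < w[i] / (w[i] + w[j]) else j
            drop = j if keep == i else i
            w[keep] += w[drop]
            x.pop(drop); w.pop(drop)
        while len(x) < m_per_bin:              # split the heaviest walker
            i = int(np.argmax(w))
            w[i] /= 2.0
            x.append(x[i]); w.append(w[i])
        new_x += x; new_w += w
    return np.array(new_x), np.array(new_w)

x = rng.normal(-1, 0.3, 20)                    # walkers clustered in one state
w = np.full(20, 1.0 / 20)
x2, w2 = we_resample(x, w, bin_edges=np.linspace(-2, 2, 9))
print(len(x2), "walkers, total weight =", w2.sum())  # weight is conserved
```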

  16. A Bayesian posterior predictive framework for weighting ensemble regional climate models

    Directory of Open Access Journals (Sweden)

    Y. Fan

    2017-06-01

    We present a novel Bayesian statistical approach to computing model weights in climate change projection ensembles in order to create probabilistic projections. The weight of each climate model is obtained by weighting the current day observed data under the posterior distribution admitted under competing climate models. We use a linear model to describe the model output and observations. The approach accounts for uncertainty in model bias, trend and internal variability, including error in the observations used. Our framework is general, requires very little problem-specific input, and works well with default priors. We carry out cross-validation checks that confirm that the method produces the correct coverage.

  17. Entropic sampling in the path integral Monte Carlo method

    International Nuclear Information System (INIS)

    Vorontsov-Velyaminov, P N; Lyubartsev, A P

    2003-01-01

    We have extended the entropic sampling Monte Carlo method to the case of path integral representation of a quantum system. A two-dimensional density of states is introduced into path integral form of the quantum canonical partition function. Entropic sampling technique within the algorithm suggested recently by Wang and Landau (Wang F and Landau D P 2001 Phys. Rev. Lett. 86 2050) is then applied to calculate the corresponding entropy distribution. A three-dimensional quantum oscillator is considered as an example. Canonical distributions for a wide range of temperatures are obtained in a single simulation run, and exact data for the energy are reproduced
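
    The underlying Wang-Landau update is simple to sketch in its classical (non-path-integral) form; the toy below estimates ln g(E) for a small 1-D Ising ring, with a fixed sweep schedule standing in for the usual histogram-flatness check.

```python
import numpy as np

rng = np.random.default_rng(8)

N = 12                                     # spins on a ring
spins = rng.choice([-1, 1], N)

def energy(s):
    return -int(np.sum(s * np.roll(s, 1)))

# Allowed ring energies are -N, -N+4, ..., N-4, N; store ln g(E) in a dict.
ln_g = {}
ln_f = 1.0                                 # initial modification factor ln f
E = energy(spins)
while ln_f > 1e-6:
    for _ in range(20000):                 # fixed sweeps replace the usual
        i = rng.integers(N)                # histogram-flatness check
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        # accept with min(1, g(E)/g(E_new)); unseen energies default ln g = 0
        if np.log(rng.uniform()) < ln_g.get(E, 0.0) - ln_g.get(E + dE, 0.0):
            spins[i] *= -1
            E += dE
        ln_g[E] = ln_g.get(E, 0.0) + ln_f  # raise the entropy estimate at E
    ln_f /= 2.0                            # tighten the modification factor
print({e: round(v - min(ln_g.values()), 2) for e, v in sorted(ln_g.items())})
```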

  18. Advanced path sampling of the kinetic network of small proteins

    NARCIS (Netherlands)

    Du, W.

    2014-01-01

    This thesis is focused on developing advanced path sampling simulation methods to study protein folding and unfolding, and to build kinetic equilibrium networks describing these processes. In Chapter 1, the basic knowledge of protein structure and folding theories is introduced, along with a brief overview…

  19. Efficient Unbiased Rendering using Enlightened Local Path Sampling

    DEFF Research Database (Denmark)

    Kristensen, Anders Wang

    measurements, which are the solution to the adjoint light transport problem. The second is a representation of the distribution of radiance and importance in the scene. We also derive a new method of particle sampling, which is advantageous compared to existing methods. Together we call the resulting algorithm… The downside to using these algorithms is that they can be slow to converge. Due to the nature of Monte Carlo methods, the results are random variables subject to variance. This manifests itself as noise in the images, which can only be reduced by generating more samples. The reason these methods are slow… is because of a lack of effective methods of importance sampling. Most global illumination algorithms are based on local path sampling, which is essentially a recipe for constructing random walks. Using this procedure paths are built based on information given explicitly as part of the scene description…

  20. A Bayesian Method for Weighted Sampling

    OpenAIRE

    Lo, Albert Y.

    1993-01-01

    Bayesian statistical inference for sampling from weighted distribution models is studied. Small-sample Bayesian bootstrap clone (BBC) approximations to the posterior distribution are discussed. A second-order property for the BBC in unweighted i.i.d. sampling is given. A consequence is that BBC approximations to a posterior distribution of the mean and to the sampling distribution of the sample average, can be made asymptotically accurate by a proper choice of the random variables that genera...

  1. Ensemble averaged coherent state path integral for disordered bosons with a repulsive interaction (Derivation of mean field equations)

    International Nuclear Information System (INIS)

    Mieck, B.

    2007-01-01

    We consider bosonic atoms with a repulsive contact interaction in a trap potential for a Bose-Einstein condensation (BEC) and additionally include a random potential. The ensemble averages for two models of static (I) and dynamic (II) disorder are performed and investigated in parallel. The bosonic many body systems of the two disorder models are represented by coherent state path integrals on the Keldysh time contour which allow exact ensemble averages for zero and finite temperatures. These ensemble averages of coherent state path integrals therefore present alternatives to replica field theories or super-symmetric averaging techniques. Hubbard-Stratonovich transformations (HST) lead to two corresponding self-energies for the hermitian repulsive interaction and for the non-hermitian disorder-interaction. The self-energy of the repulsive interaction is absorbed by a shift into the disorder-self-energy, which comprises, as an element of a larger symplectic Lie algebra sp(4M), the self-energy of the repulsive interaction as a subalgebra (which is equivalent to the direct product of M x sp(2); 'M' is the number of discrete time intervals of the disorder-self-energy in the generating function). After removal of the remaining Gaussian integral for the self-energy of the repulsive interaction, the first order variations of the coherent state path integrals result in the exact mean field or saddle point equations, solely depending on the disorder-self-energy matrix. These equations can be solved by continued fractions and are reminiscent of the 'Nambu-Gorkov' Green function formalism in superconductivity, because anomalous terms or pair condensates of the bosonic atoms are also included in the self-energies. The derived mean field equations of the models with static (I) and dynamic (II) disorder are particularly applicable for BEC in d=3 spatial dimensions because of the singularity of the density of states at vanishing wavevector. However, one usually starts out from…

  2. Evaluating the effect of disturbed ensemble distributions on SCFG based statistical sampling of RNA secondary structures

    Directory of Open Access Journals (Sweden)

    Scheid Anika

    2012-07-01

    Background: Over the past years, statistical and Bayesian approaches have become increasingly appreciated to address the long-standing problem of computational RNA structure prediction. Recently, a novel probabilistic method for the prediction of RNA secondary structures from a single sequence has been studied which is based on generating statistically representative and reproducible samples of the entire ensemble of feasible structures for a particular input sequence. This method samples the possible foldings from a distribution implied by a sophisticated (traditional or length-dependent) stochastic context-free grammar (SCFG) that mirrors the standard thermodynamic model applied in modern physics-based prediction algorithms. Specifically, that grammar represents an exact probabilistic counterpart to the energy model underlying the Sfold software, which employs a sampling extension of the partition function (PF) approach to produce statistically representative subsets of the Boltzmann-weighted ensemble. Although both sampling approaches have the same worst-case time and space complexities, it has been indicated that they differ in performance (both with respect to prediction accuracy and quality of generated samples), where neither of these two competing approaches generally outperforms the other. Results: In this work, we consider the SCFG based approach in order to analyze how the quality of generated sample sets and the corresponding prediction accuracy change when different degrees of disturbance are incorporated into the needed sampling probabilities. This is motivated by the fact that if the results prove to be resistant to large errors in the distinct sampling probabilities (compared to the exact ones), then it will be an indication that these probabilities do not need to be computed exactly; it may be sufficient and more efficient to approximate them. Thus, it might then be possible to decrease the worst…

  3. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

    The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely…, and subsequently reweighting the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained…

  4. Evaluation of effective factors on low birth weight neonates' mortality using path analysis

    Directory of Open Access Journals (Sweden)

    Babaee Gh

    2008-06-01

    Background: This study was conducted to determine, by path analysis, the factors with direct or indirect effects on the mortality of low birth weight neonates. Methods: In this cohort study, 445 pairs of mothers and their neonates in Tehran participated. The data were gathered on an answer sheet recording maternal age, gestational age, Apgar score, pregnancy-induced hypertension (PIH) and birth weight. Sampling was by convenience, and the study included neonates of women referred to 15 government and private hospitals in Tehran. The survival status of the neonates was determined up to 24 hours after delivery. Results: The largest change in mortality rate is related to birth weight; its negative score means that higher birth weight increases the chance of survival. The second is Apgar score; its negative score means that a higher Apgar score decreases the chance of neonatal death. The third is gestational age; its negative score means that a longer gestation increases the chance of survival. The smallest change in mortality rate is due to hypertensive disorders in pregnancy. Conclusion: The methodology used here could be adopted in other investigations to distinguish and measure the effect of predictive factors on the risk of an outcome.

  5. Finite-size anomalies of the Drude weight: Role of symmetries and ensembles

    Science.gov (United States)

    Sánchez, R. J.; Varma, V. K.

    2017-12-01

    We revisit the numerical problem of computing the high temperature spin stiffness, or Drude weight, D of the spin-1/2 XXZ chain using exact diagonalization to systematically analyze its dependence on system symmetries and ensemble. Within the canonical ensemble and for states with zero total magnetization, we find D vanishes exactly due to spin-inversion symmetry for all but the anisotropies Δ̃_MN = cos(πM/N), with N, M ∈ Z+ coprime and N > M, provided system sizes L ≥ 2N, for which states with different spin-inversion signature become degenerate due to the underlying sl2 loop algebra symmetry. All these loop-algebra degenerate states carry finite currents, which we conjecture based on data from the accessible system sizes and anisotropies Δ̃_MN (with N …). A magnetic flux not only breaks spin-inversion in the zero magnetization sector but also lifts the loop-algebra degeneracies in all symmetry sectors—this effect is more pertinent at smaller Δ due to the larger contributions to D coming from the low-magnetization sectors which are more sensitive to the system's symmetries. Thus we generically find a finite D for fluxed rings and arbitrary 0 < Δ…

  6. Simulated Tempering Distributed Replica Sampling, Virtual Replica Exchange, and Other Generalized-Ensemble Methods for Conformational Sampling.

    Science.gov (United States)

    Rauscher, Sarah; Neale, Chris; Pomès, Régis

    2009-10-13

    Generalized-ensemble algorithms in temperature space have become popular tools to enhance conformational sampling in biomolecular simulations. A random walk in temperature leads to a corresponding random walk in potential energy, which can be used to cross over energetic barriers and overcome the problem of quasi-nonergodicity. In this paper, we introduce two novel methods: simulated tempering distributed replica sampling (STDR) and virtual replica exchange (VREX). These methods are designed to address the practical issues inherent in the replica exchange (RE), simulated tempering (ST), and serial replica exchange (SREM) algorithms. RE requires a large, dedicated, and homogeneous cluster of CPUs to function efficiently when applied to complex systems. ST and SREM both have the drawback of requiring extensive initial simulations, possibly adaptive, for the calculation of weight factors or potential energy distribution functions. STDR and VREX alleviate the need for lengthy initial simulations, and for synchronization and extensive communication between replicas. Both methods are therefore suitable for distributed or heterogeneous computing platforms. We perform an objective comparison of all five algorithms in terms of both implementation issues and sampling efficiency. We use disordered peptides in explicit water as test systems, for a total simulation time of over 42 μs. Efficiency is defined in terms of both structural convergence and temperature diffusion, and we show that these definitions of efficiency are in fact correlated. Importantly, we find that ST-based methods exhibit faster temperature diffusion and correspondingly faster convergence of structural properties compared to RE-based methods. Within the RE-based methods, VREX is superior to both SREM and RE. On the basis of our observations, we conclude that ST is ideal for simple systems, while STDR is well-suited for complex systems.
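
    All of these methods share the same exchange rule: a swap (or temperature move) between inverse temperatures β_i and β_j with energies E_i and E_j is accepted with probability min(1, exp((β_i − β_j)(E_i − E_j))). A minimal parallel-tempering toy on a 1-D double well, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(9)

def U(x):                                   # double-well potential
    return (x**2 - 1.0)**2

betas = np.array([1.0, 2.0, 4.0, 8.0])      # inverse temperatures, hot -> cold
x = np.zeros(len(betas))
cold_samples = []
for step in range(20000):
    # one Metropolis move per replica at its own temperature
    for i, b in enumerate(betas):
        prop = x[i] + rng.normal(0, 0.5)
        if np.log(rng.uniform()) < -b * (U(prop) - U(x[i])):
            x[i] = prop
    # attempt a swap between a random pair of neighbouring replicas
    i = rng.integers(len(betas) - 1)
    log_acc = (betas[i] - betas[i + 1]) * (U(x[i]) - U(x[i + 1]))
    if np.log(rng.uniform()) < log_acc:
        x[i], x[i + 1] = x[i + 1], x[i]
    cold_samples.append(x[-1])
print("fraction of cold samples in right well:",
      np.mean(np.array(cold_samples[5000:]) > 0))
```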

  7. Predictor-weighting strategies for probabilistic wind power forecasting with an analog ensemble

    Directory of Open Access Journals (Sweden)

    Constantin Junk

    2015-04-01

    Unlike deterministic forecasts, probabilistic predictions provide estimates of uncertainty, which is an additional value for decision-making. Previous studies have proposed the analog ensemble (AnEn), a technique to generate uncertainty information from a purely deterministic forecast. The objective of this study is to improve the AnEn performance for wind power forecasts by developing static and dynamic weighting strategies, which optimize the predictor combination with a brute-force continuous ranked probability score (CRPS) minimization and a principal component analysis (PCA) of the predictors. Predictors are taken from the high-resolution deterministic forecasts of the European Centre for Medium-Range Weather Forecasts (ECMWF), including forecasts of wind at several heights, geopotential height, pressure, and temperature, among others. The weighting strategies are compared at five wind farms in Europe and the U.S. situated in regions with different terrain complexity, both onshore and offshore, and significantly improve the deterministic and probabilistic AnEn forecast performance compared to the AnEn with 10-m wind speed and direction as predictors and compared to PCA-based approaches. The AnEn methodology also provides reliable estimation of the forecast uncertainty. The optimized predictor combinations are strongly dependent on terrain complexity, local wind regimes, and atmospheric stratification. Since the proposed predictor-weighting strategies can accomplish both the selection of relevant predictors as well as the finding of their optimal weights, the AnEn performance is improved by up to 20% at onshore and offshore sites.
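
    The CRPS that such brute-force weight searches minimize has a simple ensemble estimator, CRPS ≈ mean|x_i − y| − ½ mean|x_i − x_j|; a sketch:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Ensemble CRPS estimate for one forecast: mean |x_i - y| minus half the
    mean pairwise member spread. Lower is better."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# A sharp, well-centred ensemble scores better than a wide, scattered one.
print(crps_ensemble([4.8, 5.1, 5.3, 4.9], obs=5.0))
print(crps_ensemble([2.0, 8.0, 1.0, 9.0], obs=5.0))
```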

  8. The MEXSAS2 Sample and the Ensemble X-ray Variability of Quasars

    Energy Technology Data Exchange (ETDEWEB)

    Serafinelli, Roberto [Dipartimento di Fisica, Università di Roma Tor Vergata, Rome (Italy); Dipartimento di Fisica, Università di Roma Sapienza, Rome (Italy); Vagnetti, Fausto; Chiaraluce, Elia [Dipartimento di Fisica, Università di Roma Tor Vergata, Rome (Italy); Middei, Riccardo, E-mail: roberto.serafinelli@roma2.infn.it [Dipartimento di Matematica e Fisica, Università Roma Tre, Rome (Italy)

    2017-10-11

    We present the second Multi-Epoch X-ray Serendipitous AGN Sample (MEXSAS2), extracted from the 6th release of the XMM Serendipitous Source Catalog (XMMSSC-DR6), cross-matched with Sloan Digital Sky Survey quasar Catalogs DR7Q and DR12Q. Our sample also includes the available measurements for masses, bolometric luminosities, and Eddington ratios. Analyses of the ensemble structure function and spectral variability are presented, together with their dependences on such parameters. We confirm a decrease of the structure function with the X-ray luminosity, and find a weak dependence on the black hole mass. We introduce a new spectral variability estimator, taking errors on both fluxes and spectral indices into account. We confirm an ensemble softer when brighter trend, with no dependence of such estimator on black hole mass, Eddington ratio, redshift, X-ray and bolometric luminosity.
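
    A minimal sketch of a first-order structure function estimator of the kind used here, written for a single light curve; measurement errors are subtracted in quadrature, and the ensemble version pools the pairs from many quasars (the paper's exact normalization may differ).

        import numpy as np

        def structure_function(times, mags, errs, lag_bins):
            # SF(tau) = sqrt( <[m(t2) - m(t1)]^2> - <e1^2 + e2^2> ),
            # with pairs binned by time lag tau = |t2 - t1|.
            times, mags, errs = (np.asarray(a, float) for a in (times, mags, errs))
            i, j = np.triu_indices(len(times), k=1)
            lags = np.abs(times[j] - times[i])
            dm2 = (mags[j] - mags[i]) ** 2
            noise = errs[i] ** 2 + errs[j] ** 2
            sf = []
            for lo, hi in zip(lag_bins[:-1], lag_bins[1:]):
                sel = (lags >= lo) & (lags < hi)
                sf.append(np.sqrt(max(dm2[sel].mean() - noise[sel].mean(), 0.0))
                          if sel.any() else np.nan)
            return np.array(sf)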

  9. The MEXSAS2 Sample and the Ensemble X-ray Variability of Quasars

    Directory of Open Access Journals (Sweden)

    Roberto Serafinelli

    2017-10-01

    Full Text Available We present the second Multi-Epoch X-ray Serendipitous AGN Sample (MEXSAS2), extracted from the 6th release of the XMM Serendipitous Source Catalog (XMMSSC-DR6), cross-matched with Sloan Digital Sky Survey quasar Catalogs DR7Q and DR12Q. Our sample also includes the available measurements for masses, bolometric luminosities, and Eddington ratios. Analyses of the ensemble structure function and spectral variability are presented, together with their dependences on such parameters. We confirm a decrease of the structure function with the X-ray luminosity, and find a weak dependence on the black hole mass. We introduce a new spectral variability estimator, taking errors on both fluxes and spectral indices into account. We confirm an ensemble softer when brighter trend, with no dependence of such estimator on black hole mass, Eddington ratio, redshift, X-ray and bolometric luminosity.

  10. Statistics of equally weighted random paths on a class of self-similar structures

    International Nuclear Information System (INIS)

    Knezevic, Milan; Knezevic, Dragica; Spasojevic, Djordje

    2004-01-01

    We study the statistics of equally weighted random walk paths on a family of Sierpinski gasket lattices whose members are labelled by an integer b (2 ≤ b < ∞). We show that, for b > 2, the mean path end-to-end distance grows more slowly than any power of its length N. We provide arguments for the emergence of the usual power-law critical behaviour in the limit b → ∞, when the fractal lattices become almost compact

  11. A Frequency-Weighted Energy Operator and complementary ensemble empirical mode decomposition for bearing fault detection

    Science.gov (United States)

    Imaouchen, Yacine; Kedadouche, Mourad; Alkama, Rezak; Thomas, Marc

    2017-01-01

    Signal processing techniques for non-stationary and noisy signals have recently attracted considerable attention. Among them is the empirical mode decomposition (EMD), an adaptive and efficient method for decomposing signals from high to low frequencies into intrinsic mode functions (IMFs). Ensemble EMD (EEMD) was proposed to overcome the mode mixing problem of the EMD. In the present paper, the Complementary EEMD (CEEMD) is used for bearing fault detection. As a noise-improved method, the CEEMD not only overcomes the mode mixing, but also eliminates the residual added white noise persisting in the IMFs and enhances the computational efficiency of the EEMD method. Afterward, a selection method is developed to choose the relevant IMFs containing information about defects. Subsequently, a signal is reconstructed from the sum of relevant IMFs and a Frequency-Weighted Energy Operator is tailored to extract both the amplitude and frequency modulations from the selected IMFs. This operator outperforms the conventional energy operator and enveloping methods, especially in the presence of strong noise and multiple vibration interferences. Furthermore, simulation and experimental results show that the proposed method improves performance in detecting bearing faults. The method also has high computational efficiency and is able to detect faults at an early stage of degradation.
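
    The sketch below implements the discrete Teager-Kaiser energy operator together with one plausible frequency-weighted variant, obtained by applying the operator to the differentiated signal so that high-frequency (fault-related) content is emphasized; the paper's exact weighting may differ.

        import numpy as np

        def teager_kaiser(x):
            # Discrete energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1].
            x = np.asarray(x, float)
            psi = np.zeros_like(x)
            psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
            return psi

        def frequency_weighted_energy(x):
            # Apply the operator to the central-difference derivative, which
            # weights each frequency component by (roughly) its frequency.
            return teager_kaiser(np.gradient(np.asarray(x, float)))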

  12. Malliavin Weight Sampling: A Practical Guide

    Directory of Open Access Journals (Sweden)

    Patrick B. Warren

    2013-12-01

    Full Text Available Malliavin weight sampling (MWS is a stochastic calculus technique for computing the derivatives of averaged system properties with respect to parameters in stochastic simulations, without perturbing the system’s dynamics. It applies to systems in or out of equilibrium, in steady state or time-dependent situations, and has applications in the calculation of response coefficients, parameter sensitivities and Jacobian matrices for gradient-based parameter optimisation algorithms. The implementation of MWS has been described in the specific contexts of kinetic Monte Carlo and Brownian dynamics simulation algorithms. Here, we present a general theoretical framework for deriving the appropriate MWS update rule for any stochastic simulation algorithm. We also provide pedagogical information on its practical implementation.
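
    A minimal sketch of MWS for overdamped Brownian dynamics: for the Euler-Maruyama update x -> x + f(x; lam) dt + sqrt(2 D dt) xi, the Malliavin weight accumulates dq = (df/dlam) xi sqrt(dt / (2 D)), and <A q> estimates d<A>/dlam. The Ornstein-Uhlenbeck example and its parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        k, D, lam = 1.0, 1.0, 0.5          # spring constant, diffusion, parameter
        dt, nsteps, nwalkers = 1e-3, 5000, 2000

        x = np.zeros(nwalkers)             # positions, x(0) = 0
        q = np.zeros(nwalkers)             # Malliavin weights, q(0) = 0

        for _ in range(nsteps):
            xi = rng.standard_normal(nwalkers)
            x += -k * (x - lam) * dt + np.sqrt(2.0 * D * dt) * xi
            q += k * xi * np.sqrt(dt / (2.0 * D))   # df/dlam = k here

        # d<x(t)>/dlam estimated as <x q>; analytically 1 - exp(-k t) ~ 0.993
        print(np.mean(x * q))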

  13. Creating ensembles of oblique decision trees with evolutionary algorithms and sampling

    Science.gov (United States)

    Cantu-Paz, Erick [Oakland, CA; Kamath, Chandrika [Tracy, CA

    2006-06-13

    A decision tree system that is part of a parallel object-oriented pattern recognition system, which in turn is part of an object oriented data mining system. A decision tree process includes the step of reading the data. If necessary, the data is sorted. A potential split of the data is evaluated according to some criterion. An initial split of the data is determined. The final split of the data is determined using evolutionary algorithms and statistical sampling techniques. The data is split. Multiple decision trees are combined in ensembles.

  14. Performance Assessment of Multi-Source Weighted-Ensemble Precipitation (MSWEP Product over India

    Directory of Open Access Journals (Sweden)

    Akhilesh S. Nair

    2017-01-01

    Full Text Available Error characterization is vital for the advancement of precipitation algorithms, the evaluation of numerical model outputs, and their integration in various hydro-meteorological applications. The Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) has been a benchmark for successive Global Precipitation Measurement (GPM) based products and has paved the way for the evolution of many multi-satellite precipitation products. This study evaluates the performance of the newly released multi-satellite Multi-Source Weighted-Ensemble Precipitation (MSWEP) product, whose temporal variability was determined based on several data products including TMPA 3B42 RT. The evaluation was conducted over India with respect to IMD gauge-based rainfall for pre-monsoon, monsoon, and post-monsoon seasons at the daily scale for a 35-year (1979–2013) period. The rainfall climatology is examined over India and over four geographical extents within India known to be subject to uniform rainfall. The performance evaluation of the rainfall time series was carried out. In addition, the performance of the product over different rainfall classes was evaluated, along with the contribution of each class to the total rainfall. Further, the seasonal evaluation of the MSWEP products was based on categorical and volumetric indices derived from the contingency table. Upon evaluation, it was observed that the MSWEP products show large errors in detecting the higher quantiles of rainfall (>75th and >95th quantiles). The MSWEP precipitation product, available at a 0.25° × 0.25° spatial resolution and daily temporal resolution, matched well with the daily IMD rainfall over India. Overall, the results suggest that a suitable region- and season-dependent bias correction is essential before its integration in hydrological applications. While the MSWEP was observed to perform well for daily rainfall, it suffered from poor detection capabilities for higher quantiles, making
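
    For reference, the sketch below computes three standard categorical indices from the 2x2 contingency table used in this kind of seasonal evaluation; the 1 mm/day event threshold is illustrative.

        import numpy as np

        def categorical_indices(forecast, observed, threshold=1.0):
            # Rain/no-rain contingency table at the given threshold (mm/day).
            f = np.asarray(forecast) >= threshold
            o = np.asarray(observed) >= threshold
            hits = np.sum(f & o)
            misses = np.sum(~f & o)
            false_alarms = np.sum(f & ~o)
            pod = hits / (hits + misses)                  # probability of detection
            far = false_alarms / (hits + false_alarms)    # false alarm ratio
            csi = hits / (hits + misses + false_alarms)   # critical success index
            return pod, far, csi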

  15. Multilevel ensemble Kalman filtering

    KAUST Repository

    Hoel, Haakon

    2016-01-08

    The ensemble Kalman filter (EnKF) is a sequential filtering method that uses an ensemble of particle paths to estimate the means and covariances required by the Kalman filter by the use of sample moments, i.e., the Monte Carlo method. EnKF is often both robust and efficient, but its performance may suffer in settings where the computational cost of accurate simulations of particles is high. The multilevel Monte Carlo method (MLMC) is an extension of classical Monte Carlo methods which by sampling stochastic realizations on a hierarchy of resolutions may reduce the computational cost of moment approximations by orders of magnitude. In this work we have combined the ideas of MLMC and EnKF to construct the multilevel ensemble Kalman filter (MLEnKF) for the setting of finite dimensional state and observation spaces. The main ideas of this method are to compute particle paths on a hierarchy of resolutions and to apply multilevel estimators on the ensemble hierarchy of particles to compute Kalman filter means and covariances. Theoretical results and a numerical study of the performance gains of MLEnKF over EnKF will be presented. Some ideas on the extension of MLEnKF to settings with infinite dimensional state spaces will also be presented.
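
    A minimal sketch of the single-level, perturbed-observation EnKF analysis step that MLEnKF generalizes; the sample moments of the forecast ensemble supply the covariances in the Kalman gain, and all shapes and names are illustrative.

        import numpy as np

        def enkf_analysis(X, y, H, R, rng):
            # X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
            # H: (n_obs, n_state) linear observation operator; R: (n_obs, n_obs).
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)     # ensemble anomalies
            P = A @ A.T / (n_ens - 1)                 # sample covariance
            S = H @ P @ H.T + R
            K = np.linalg.solve(S, (P @ H.T).T).T     # gain P H^T S^-1 (S symmetric)
            # perturbed observations keep the analysis spread consistent
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
            return X + K @ (Y - H @ X)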

  16. Multilevel ensemble Kalman filtering

    KAUST Repository

    Hoel, Haakon; Chernov, Alexey; Law, Kody; Nobile, Fabio; Tempone, Raul

    2016-01-01

    The ensemble Kalman filter (EnKF) is a sequential filtering method that uses an ensemble of particle paths to estimate the means and covariances required by the Kalman filter by the use of sample moments, i.e., the Monte Carlo method. EnKF is often both robust and efficient, but its performance may suffer in settings where the computational cost of accurate simulations of particles is high. The multilevel Monte Carlo method (MLMC) is an extension of classical Monte Carlo methods which by sampling stochastic realizations on a hierarchy of resolutions may reduce the computational cost of moment approximations by orders of magnitude. In this work we have combined the ideas of MLMC and EnKF to construct the multilevel ensemble Kalman filter (MLEnKF) for the setting of finite dimensional state and observation spaces. The main ideas of this method are to compute particle paths on a hierarchy of resolutions and to apply multilevel estimators on the ensemble hierarchy of particles to compute Kalman filter means and covariances. Theoretical results and a numerical study of the performance gains of MLEnKF over EnKF will be presented. Some ideas on the extension of MLEnKF to settings with infinite dimensional state spaces will also be presented.

  17. A unified thermostat scheme for efficient configurational sampling for classical/quantum canonical ensembles via molecular dynamics

    Science.gov (United States)

    Zhang, Zhijun; Liu, Xinzijian; Chen, Zifei; Zheng, Haifeng; Yan, Kangyu; Liu, Jian

    2017-07-01

    We show a unified second-order scheme for constructing simple, robust, and accurate algorithms for typical thermostats for configurational sampling of the canonical ensemble. When Langevin dynamics is used, the scheme leads to the BAOAB algorithm that has recently been investigated. We show that the scheme is also useful for other types of thermostats, such as the Andersen thermostat and the Nosé-Hoover chain, regardless of whether the thermostat is deterministic or stochastic. In addition to analytical analysis, two 1-dimensional models and three typical real molecular systems, ranging from the gas phase and clusters to the condensed phase, are used in numerical examples for demonstration. Accuracy may be increased by an order of magnitude for estimating coordinate-dependent properties in molecular dynamics (when the same time interval is used), irrespective of which type of thermostat is applied. The scheme is especially useful for path integral molecular dynamics because it consistently improves the efficiency of evaluating all thermodynamic properties for any type of thermostat.
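
    For concreteness, a sketch of one BAOAB step for Langevin dynamics (B: half kick, A: half drift, O: exact Ornstein-Uhlenbeck update); force is any callable returning -dU/dx, and the reduced units are illustrative.

        import numpy as np

        def baoab_step(x, p, force, m, dt, beta, gamma, rng):
            p = p + 0.5 * dt * force(x)                          # B
            x = x + 0.5 * dt * p / m                             # A
            c1 = np.exp(-gamma * dt)                             # O: exact OU update
            c2 = np.sqrt((1.0 - c1 ** 2) * m / beta)
            p = c1 * p + c2 * rng.standard_normal(np.shape(p))
            x = x + 0.5 * dt * p / m                             # A
            p = p + 0.5 * dt * force(x)                          # B
            return x, p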

  18. Rare events via multiple reaction channels sampled by path replica exchange

    NARCIS (Netherlands)

    Bolhuis, P.G.

    2008-01-01

    Transition path sampling (TPS) was developed for studying activated processes in complex systems with unknown reaction coordinate. Transition interface sampling (TIS) allows efficient evaluation of the rate constants. However, when the transition can occur via more than one reaction channel

  19. Mechanistic Insights on Human Phosphoglucomutase Revealed by Transition Path Sampling and Molecular Dynamics Calculations.

    Science.gov (United States)

    Brás, Natércia F; Fernandes, Pedro A; Ramos, Maria J; Schwartz, Steven D

    2018-02-06

    Human α-phosphoglucomutase 1 (α-PGM) catalyzes the isomerization of glucose-1-phosphate into glucose-6-phosphate (G6P) through two sequential phosphoryl transfer steps with a glucose-1,6-bisphosphate (G16P) intermediate. Given that the release of G6P in gluconeogenesis raises glucose output levels, α-PGM represents a tempting pharmacological target for type 2 diabetes. Here, we provide the first theoretical study of the catalytic mechanism of human α-PGM. We performed transition-path sampling simulations to unveil the atomic details of the two catalytic chemical steps, which could be key for developing transition state (TS) analogue molecules with inhibitory properties. Our calculations revealed that both steps proceed through a concerted SN2-like mechanism, with a loose metaphosphate-like TS. Even though experimental data suggest that the two steps are identical, we observed noticeable differences: 1) the transition state ensemble has a well-defined TS region and a late TS for the second step, and 2) larger coordinated protein motions are required to reach the TS of the second step. We have identified key residues (Arg23, Ser117, His118, Lys389) and the Mg2+ ion that contribute in different ways to the reaction coordinate. Accelerated molecular dynamics simulations suggest that the G16P intermediate may reorient without leaving the enzymatic binding pocket, through significant conformational rearrangements of the G16P and of specific loop regions of the human α-PGM. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Filter Bank Regularized Common Spatial Pattern Ensemble for Small Sample Motor Imagery Classification.

    Science.gov (United States)

    Park, Sang-Hoon; Lee, David; Lee, Sang-Goog

    2018-02-01

    For the last few years, many feature extraction methods have been proposed based on biological signals. Among these, brain signals have the advantage that they can be obtained even from people with peripheral nervous system damage. Motor imagery electroencephalograms (EEG) are inexpensive to measure, offer a high temporal resolution, and are intuitive. Therefore, they have received a significant amount of attention in various fields, including signal processing, cognitive science, and medicine. The common spatial pattern (CSP) algorithm is a useful method for feature extraction from motor imagery EEG. However, performance degradation occurs in a small-sample setting (SSS), because the CSP depends on sample-based covariance. Since the active frequency range differs across subjects, it is also inconvenient to set the frequency range anew for each subject. In this paper, we propose a feature extraction method based on a filter bank to solve these problems. The proposed method consists of five steps. First, the motor imagery EEG is divided using a filter bank. Second, the regularized CSP (R-CSP) is applied to the divided EEG. Third, features are selected according to mutual information, based on the individual feature algorithm. Fourth, parameter sets are selected for the ensemble. Finally, classification is performed using an ensemble based on the selected features. The brain-computer interface competition III data set IVa is used to evaluate the performance of the proposed method. The proposed method improves the mean classification accuracy by 12.34%, 11.57%, 9%, 4.95%, and 4.47% compared with CSP, SR-CSP, R-CSP, filter bank CSP (FBCSP), and SR-FBCSP. Compared with the filter bank R-CSP, a parameter selection version of the proposed method, the classification accuracy is improved by 3.49%. In particular, the proposed method shows a large improvement in performance in the SSS.
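
    A minimal sketch of the core CSP computation underlying the proposed filter-bank pipeline: spatial filters come from a generalized eigenproblem on the two class covariances, and log-variance features are extracted per sub-band (the covariance regularization of R-CSP is omitted here).

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_pairs=3):
            # Each trial has shape (n_channels, n_samples); solve
            # C_a w = lambda (C_a + C_b) w and keep the extreme eigenvectors.
            def mean_cov(trials):
                return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
            Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
            _, vecs = eigh(Ca, Ca + Cb)               # ascending eigenvalues
            idx = np.r_[:n_pairs, -n_pairs:0]         # both ends of the spectrum
            return vecs[:, idx].T

        def csp_features(trial, W):
            # Normalized log-variance of the spatially filtered trial.
            var = (W @ trial).var(axis=1)
            return np.log(var / var.sum())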

  1. Generating maximally-path-entangled number states in two spin ensembles coupled to a superconducting flux qubit

    Science.gov (United States)

    Maleki, Yusef; Zheltikov, Aleksei M.

    2018-01-01

    An ensemble of nitrogen-vacancy (NV) centers coupled to a circuit QED device is shown to enable an efficient, high-fidelity generation of high-N00N states. Instead of first creating entanglement and then increasing the number of entangled particles N, our source of high-N00N states first prepares a high-N Fock state in one of the NV ensembles and then entangles it to the rest of the system. With such a strategy, high-N N00N states can be generated in just a few operational steps with extraordinary fidelity. Once prepared, such a state can be stored over a longer period of time due to the remarkably long coherence time of NV centers.

  2. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions.

    Science.gov (United States)

    Cendagorta, Joseph R; Bačić, Zlatko; Tuckerman, Mark E

    2018-03-14

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  3. An open-chain imaginary-time path-integral sampling approach to the calculation of approximate symmetrized quantum time correlation functions

    Science.gov (United States)

    Cendagorta, Joseph R.; Bačić, Zlatko; Tuckerman, Mark E.

    2018-03-01

    We introduce a scheme for approximating quantum time correlation functions numerically within the Feynman path integral formulation. Starting with the symmetrized version of the correlation function expressed as a discretized path integral, we introduce a change of integration variables often used in the derivation of trajectory-based semiclassical methods. In particular, we transform to sum and difference variables between forward and backward complex-time propagation paths. Once the transformation is performed, the potential energy is expanded in powers of the difference variables, which allows us to perform the integrals over these variables analytically. The manner in which this procedure is carried out results in an open-chain path integral (in the remaining sum variables) with a modified potential that is evaluated using imaginary-time path-integral sampling rather than requiring the generation of a large ensemble of trajectories. Consequently, any number of path integral sampling schemes can be employed to compute the remaining path integral, including Monte Carlo, path-integral molecular dynamics, or enhanced path-integral molecular dynamics. We believe that this approach constitutes a different perspective in semiclassical-type approximations to quantum time correlation functions. Importantly, we argue that our approximation can be systematically improved within a cumulant expansion formalism. We test this approximation on a set of one-dimensional problems that are commonly used to benchmark approximate quantum dynamical schemes. We show that the method is at least as accurate as the popular ring-polymer molecular dynamics technique and linearized semiclassical initial value representation for correlation functions of linear operators in most of these examples and improves the accuracy of correlation functions of nonlinear operators.

  4. Neighborhood Social Predictors of Weight-related Measures in Underserved African Americans in the PATH Trial.

    Science.gov (United States)

    McDaniel, Tyler C; Wilson, Dawn K; Coulon, Sandra M; Hand, Gregory A; Siceloff, E Rebekah

    2015-11-05

    African Americans have the highest rate of obesity in the United States relative to other ethnic minority groups. Bioecological factors, including neighborhood social and physical environmental variables, may be important predictors of weight-related measures, specifically body mass index (BMI), in African American adults. Baseline data from the Positive Action for Today's Health (PATH) trial were collected from 417 African American adults. Overall, a multiple regression model for BMI was significant, showing associations with average daily moderate-to-vigorous physical activity (MVPA) (B = -.21, P < .05) and social interaction (B = -.13, P < .05). Greater social interaction was associated with healthier BMI, highlighting it as a potential critical factor for future interventions in underserved African American communities.

  5. Computing the Stretch Factor of Paths, Trees, and Cycles in Weighted Fixed Orientation Metrics

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    2008-01-01

    Let G be a graph embedded in the L_1-plane. The stretch factor of G is the maximum over all pairs of distinct vertices p and q of G of the ratio L_1^G(p,q)/L_1(p,q), where L_1^G(p,q) is the L_1-distance in G between p and q. We show how to compute the stretch factor of an n-vertex path in O(n (log n)^2) worst-case time and O(n) space, and we mention generalizations to trees and cycles, to general weighted fixed orientation metrics, and to higher dimensions.
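
    As a baseline for the result above, a naive quadratic-time reference implementation of the stretch factor of a path under the L_1 metric; the paper's contribution is computing the same quantity in O(n (log n)^2) time and O(n) space.

        def l1_path_stretch_factor(points):
            # points: list of (x, y) path vertices in order along the path.
            prefix = [0.0]             # prefix[i] = path length from vertex 0 to i
            for (x0, y0), (x1, y1) in zip(points, points[1:]):
                prefix.append(prefix[-1] + abs(x1 - x0) + abs(y1 - y0))
            best = 0.0
            for i in range(len(points)):
                for j in range(i + 1, len(points)):
                    d = abs(points[j][0] - points[i][0]) + abs(points[j][1] - points[i][1])
                    if d > 0.0:
                        best = max(best, (prefix[j] - prefix[i]) / d)
            return best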

  6. Weight Management Preferences in a Non-Treatment Seeking Sample

    Directory of Open Access Journals (Sweden)

    Victoria B. Barry

    2013-12-01

    Full Text Available Background: Obesity is a serious public health issue in the United States, with the CDC reporting that most adult Americans are now either overweight or obese. Little is known about the comparative acceptability of available weight management approaches in non-treatment-seeking samples. Method: This report presents preliminary survey data collected from an online sample on weight management preferences for 8 different weight management strategies, including a proposed incentive-based program. Participants were 72 individuals (15 men, 55 women, and 2 transgendered individuals) who self-reported being overweight or obese, or who currently self-reported a normal weight but had attempted to lose weight in the past. Results: ANOVA and pairwise comparisons indicated clear preferences for certain treatments over others in the full sample; most notably, the most popular option in our sample for managing weight was to diet and exercise without professional assistance. Several differences in preference between the three weight groups were also observed. Conclusions: Dieting and exercising without any professional assistance is the most highly endorsed weight management option among all groups. Overweight and obese individuals may find self-management strategies for weight loss less attractive than normal weight individuals do, but still prefer them to other alternatives. This has implications for the development and dissemination of empirically based self-management strategies for weight.

  7. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    Directory of Open Access Journals (Sweden)

    Cuicui Zhang

    2014-12-01

    Full Text Available Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem, arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  8. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem, arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  9. Degree distribution of shortest path trees and bias of network sampling algorithms

    NARCIS (Netherlands)

    Bhamidi, S.; Goodman, J.A.; Hofstad, van der R.W.; Komjáthy, J.

    2013-01-01

    In this article, we explicitly derive the limiting distribution of the degree distribution of the shortest path tree from a single source on various random network models with edge weights. We determine the power-law exponent of the degree distribution of this tree and compare it to the degree

  10. Degree distribution of shortest path trees and bias of network sampling algorithms

    NARCIS (Netherlands)

    Bhamidi, S.; Goodman, J.A.; Hofstad, van der R.W.; Komjáthy, J.

    2015-01-01

    In this article, we explicitly derive the limiting degree distribution of the shortest path tree from a single source on various random network models with edge weights. We determine the asymptotics of the degree distribution for large degrees of this tree and compare it to the degree distribution

  11. Life course path analysis of birth weight, childhood growth, and adult systolic blood pressure

    DEFF Research Database (Denmark)

    Gamborg, Michael; Andersen, Per Kragh; Baker, Jennifer L

    2009-01-01

    body size, and thereby the total effect, of size and changes in size on later outcomes. Using data on childhood body size and adult systolic blood pressure from a sample of 1,284 Danish men born between 1936 and 1970, the authors compared results from path analysis with results from 3 standard regression methods. Path analysis produced easily interpretable results, and compared with standard regression methods it produced a noteworthy gain in statistical power. The effect of change in relative body size on adult blood pressure was more pronounced after age 11 years than in earlier childhood. These results suggest that increases in body size prior to age 11 years are less harmful to adult blood pressure than increases occurring after this age.

  12. Non-response weighting adjustment approach in survey sampling ...

    African Journals Online (AJOL)

    Hence the discussion is illustrated with real examples from surveys (in particular 2003 KDHS) conducted by Central Bureau of Statistics (CBS) - Kenya. Some suggestions are made for improving the quality of non-response weighting. Keywords: Survey non-response; non-response adjustment factors; weighting; sampling ...

  13. GIS-based groundwater potential analysis using novel ensemble weights-of-evidence with logistic regression and functional tree models.

    Science.gov (United States)

    Chen, Wei; Li, Hui; Hou, Enke; Wang, Shengquan; Wang, Guirong; Panahi, Mahdi; Li, Tao; Peng, Tao; Guo, Chen; Niu, Chao; Xiao, Lele; Wang, Jiale; Xie, Xiaoshen; Ahmad, Baharin Bin

    2018-09-01

    The aim of the current study was to produce groundwater spring potential maps using novel ensemble weights-of-evidence (WoE) with logistic regression (LR) and functional tree (FT) models. First, a total of 66 springs were identified by field surveys, out of which 70% of the spring locations were used for training the models and 30% of the spring locations were employed for the validation process. Second, a total of 14 affecting factors including aspect, altitude, slope, plan curvature, profile curvature, stream power index (SPI), topographic wetness index (TWI), sediment transport index (STI), lithology, normalized difference vegetation index (NDVI), land use, soil, distance to roads, and distance to streams were used to analyze the spatial relationship between these affecting factors and spring occurrences. Multicollinearity analysis and feature selection of the correlation attribute evaluation (CAE) method were employed to optimize the affecting factors. Subsequently, the novel ensembles of the WoE, LR, and FT models were constructed using the training dataset. Finally, the receiver operating characteristic (ROC) curves, standard error, confidence interval (CI) at 95%, and significance level P were employed to validate and compare the performance of the three models. Overall, all three models performed well for groundwater spring potential evaluation. The prediction capability of the FT model, with the highest AUC values, the smallest standard errors, the narrowest CIs, and the smallest P values for the training and validation datasets, is better than that of the other models. The groundwater spring potential maps can be adopted for the management of water resources and land use by planners and engineers. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Energy-Aware Path Planning for UAS Persistent Sampling and Surveillance

    Science.gov (United States)

    Shaw-Cortez, Wenceslao

    The focus of this work is to develop an energy-aware path planning algorithm that maximizes UAS endurance while performing sampling and surveillance missions in a known, stationary wind environment. The energy-aware aspect is specifically tailored to extract energy from the wind to reduce thrust use, thereby increasing aircraft endurance. Wind energy extraction is performed by static soaring and dynamic soaring. Static soaring involves using upward wind currents to increase altitude and potential energy. Dynamic soaring involves taking advantage of wind gradients to exchange potential and kinetic energy. The path planning algorithm developed in this work uses optimization to combine these soaring trajectories with the overarching sampling and surveillance mission. The path planning algorithm uses a simplified aircraft model to tractably optimize soaring trajectories. This aircraft model is presented along with the derivation of the equations of motion. A nonlinear program is used to create the soaring trajectories based on a given optimization problem. This optimization problem is defined using a heuristic decision tree, which defines appropriate problems given a sampling and surveillance mission and a wind model. Simulations are performed to assess the path planning algorithm. The results are used to identify properties of soaring trajectories as well as to determine which wind conditions support minimal-thrust soaring. Additional results show how the path planning algorithm can be tuned between maximizing aircraft endurance and performing the sampling and surveillance mission. A means of trajectory stitching is demonstrated to show how the periodic soaring segments can be combined to provide a full solution to an infinite/long-horizon problem.

  15. Accelerated sampling by infinite swapping of path integral molecular dynamics with surface hopping

    Science.gov (United States)

    Lu, Jianfeng; Zhou, Zhennan

    2018-02-01

    To accelerate the thermal equilibrium sampling of multi-level quantum systems, the infinite swapping limit of a recently proposed multi-level ring polymer representation is investigated. In the infinite swapping limit, the ring polymer evolves according to an averaged Hamiltonian with respect to all possible surface index configurations of the ring polymer and thus connects the surface hopping approach to mean-field path-integral molecular dynamics. A multiscale integrator for the infinite swapping limit is also proposed to enable efficient sampling based on the limiting dynamics. Numerical results demonstrate the large improvement in sampling efficiency of infinite swapping compared with direct simulation of path-integral molecular dynamics with surface hopping.

  16. Can Weighting Compensate for Sampling Issues in Internet Surveys?

    NARCIS (Netherlands)

    Vaske, J.J.; Jacobs, M.H.; Sijtsma, M.T.J.; Beaman, J.

    2011-01-01

    While Internet surveys have increased in popularity, results may not be representative of target populations. Weighting is commonly used to compensate for sampling issues. This article compared two surveys conducted in the Netherlands—a random mail survey (n = 353) and a convenience Internet survey

  17. Communication: importance sampling including path correlation in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Pan, Feng; Tao, Guohua

    2013-03-07

    Full semiclassical (SC) initial value representation (IVR) for time correlation functions involves a double phase space average over a set of two phase points, each of which evolves along a classical path. Conventionally, the two initial phase points are sampled independently for all degrees of freedom (DOF) in the Monte Carlo procedure. Here, we present an efficient importance sampling scheme by including the path correlation between the two initial phase points for the bath DOF, which greatly improves the performance of the SC-IVR calculations for large molecular systems. Satisfactory convergence in the study of quantum coherence in vibrational relaxation has been achieved for a benchmark system-bath model with up to 21 DOF.

  18. Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2015-01-01

    Roč. 52, č. 2 (2015), s. 419-440 ISSN 0021-9002 Grant - others:GA AV ČR(CZ) 171396 Institutional support: RVO:67985556 Keywords : Dominated Convergence theorem for the expected average criterion * Discrepancy function * Kolmogorov inequality * Innovations * Strong sample-path optimality Subject RIV: BC - Control Systems Theory Impact factor: 0.665, year: 2015 http://library.utia.cas.cz/separaty/2015/E/sladky-0449029.pdf

  19. Path integral for stochastic inflation: Nonperturbative volume weighting, complex histories, initial conditions, and the end of inflation

    Science.gov (United States)

    Gratton, Steven

    2011-09-01

    In this paper we present a path integral formulation of stochastic inflation. Volume weighting can be naturally implemented from this new perspective in a very straightforward way when compared to conventional Langevin approaches. With an in-depth study of inflation in a quartic potential, we investigate how the inflaton evolves and how inflation typically ends both with and without volume weighting. The calculation can be carried to times beyond those accessible to conventional Fokker-Planck approaches. Perhaps unexpectedly, complex histories sometimes emerge with volume weighting. The reward for this excursion into the complex plane is an insight into how volume-weighted inflation both loses memory of initial conditions and ends via slow roll. The slow-roll end of inflation mitigates certain “Youngness Paradox”-type criticisms of the volume-weighted paradigm. Thus it is perhaps time to rehabilitate proper-time volume weighting as a viable measure for answering at least some interesting cosmological questions.

  20. Path integral for stochastic inflation: Nonperturbative volume weighting, complex histories, initial conditions, and the end of inflation

    International Nuclear Information System (INIS)

    Gratton, Steven

    2011-01-01

    In this paper we present a path integral formulation of stochastic inflation. Volume weighting can be naturally implemented from this new perspective in a very straightforward way when compared to conventional Langevin approaches. With an in-depth study of inflation in a quartic potential, we investigate how the inflaton evolves and how inflation typically ends both with and without volume weighting. The calculation can be carried to times beyond those accessible to conventional Fokker-Planck approaches. Perhaps unexpectedly, complex histories sometimes emerge with volume weighting. The reward for this excursion into the complex plane is an insight into how volume-weighted inflation both loses memory of initial conditions and ends via slow roll. The slow-roll end of inflation mitigates certain “Youngness Paradox”-type criticisms of the volume-weighted paradigm. Thus it is perhaps time to rehabilitate proper-time volume weighting as a viable measure for answering at least some interesting cosmological questions.

  1. Weighted statistical parameters for irregularly sampled time series

    Science.gov (United States)

    Rimoldini, Lorenzo

    2014-01-01

    Unevenly spaced time series are common in astronomy because of the day-night cycle, weather conditions, dependence on the source position in the sky, allocated telescope time and corrupt measurements, for example, or inherent to the scanning law of satellites like Hipparcos and the forthcoming Gaia. Irregular sampling often causes clumps of measurements and gaps with no data which can severely disrupt the values of estimators. This paper aims at improving the accuracy of common statistical parameters when linear interpolation (in time or phase) can be considered an acceptable approximation of a deterministic signal. A pragmatic solution is formulated in terms of a simple weighting scheme, adapting to the sampling density and noise level, applicable to large data volumes at minimal computational cost. Tests on time series from the Hipparcos periodic catalogue led to significant improvements in the overall accuracy and precision of the estimators with respect to the unweighted counterparts and those weighted by inverse-squared uncertainties. Automated classification procedures employing statistical parameters weighted by the suggested scheme confirmed the benefits of the improved input attributes. The classification of eclipsing binaries, Mira, RR Lyrae, Delta Cephei and Alpha2 Canum Venaticorum stars employing exclusively weighted descriptive statistics achieved an overall accuracy of 92 per cent, about 6 per cent higher than with unweighted estimators.
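
    A sketch of one natural instance of such a weighting scheme: each observation is weighted by the half-spans to its temporal neighbours, so clumped measurements count less and isolated ones more (the paper's scheme additionally adapts to the noise level).

        import numpy as np

        def interp_weights(t):
            # t is assumed sorted; weights are proportional to the average gap
            # to the neighbouring epochs, with single-sided spans at the edges.
            dt = np.diff(t)
            w = np.empty_like(t, dtype=float)
            w[1:-1] = 0.5 * (dt[:-1] + dt[1:])
            w[0], w[-1] = dt[0], dt[-1]
            return w / w.sum()

        def weighted_mean_var(t, x):
            order = np.argsort(t)
            t, x = np.asarray(t, float)[order], np.asarray(x, float)[order]
            w = interp_weights(t)
            mean = np.sum(w * x)
            # unbiased weighted variance for normalized reliability weights
            var = np.sum(w * (x - mean) ** 2) / (1.0 - np.sum(w ** 2))
            return mean, var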

  2. Heating and thermal control of brazing technique to break contamination path for potential Mars sample return

    Science.gov (United States)

    Bao, Xiaoqi; Badescu, Mircea; Sherrit, Stewart; Bar-Cohen, Yoseph; Campos, Sergio

    2017-04-01

    The potential return of Mars sample material is of great interest to the planetary science community, as it would enable extensive analysis of samples with highly sensitive laboratory instruments. It is important to make sure such a mission concept would not bring any living microbes, which may possibly exist on Mars, back to Earth's environment. In order to ensure the isolation of Mars microbes from Earth's atmosphere, a brazing sealing and sterilizing technique was proposed to break the Mars-to-Earth contamination path. Effectively heating the brazing zone in high-vacuum space and controlling the sample temperature to preserve sample integrity are key challenges to the implementation of this technique. The break-the-chain procedures for the container configurations under consideration were simulated by multi-physics finite element models. Different heating methods, including induction and resistive/radiation heating, were evaluated. The temperature profiles of Martian samples in a proposed container structure were predicted. The results show that the sealing and sterilizing process can be controlled such that the sample temperature is maintained below the level that may cause damage, and that the brazing technique is a feasible approach to breaking the contamination path.

  3. Adaptive enhanced sampling with a path-variable for the simulation of protein folding and aggregation

    Science.gov (United States)

    Peter, Emanuel K.

    2017-12-01

    In this article, we present a novel adaptive enhanced sampling molecular dynamics (MD) method for the accelerated simulation of protein folding and aggregation. We introduce a path-variable L based on the un-biased momenta p and displacements dq for the definition of the bias s applied to the system, and derive 3 algorithms: general adaptive bias MD, adaptive path-sampling, and a hybrid method which combines the first 2 methodologies. Through the analysis of the correlations between the bias and the un-biased gradient in the system, we find that the hybrid methodology leads to an improved force correlation and acceleration in the sampling of the phase space. We apply our method to SPC/E water, where we find that the average water structure is preserved. We then use our method to sample dialanine and the folding of TrpCage, where we find good agreement with simulation data reported in the literature. Finally, we apply our methodologies to the initial stages of aggregation of a hexamer of Alzheimer's amyloid β fragment 25-35 (Aβ 25-35) and find that transitions within the hexameric aggregate are dominated by entropic barriers, while we speculate that the conformational entropy in particular plays a major role in the formation of the fibril as a rate-limiting factor.

  4. Sampling of high molecular weight hydrocarbons with adsorbent tubes

    International Nuclear Information System (INIS)

    Stroemberg, B.

    1996-12-01

    Adsorption tubes have been used to determine the content of hydrocarbons in gas samples from small-scale combustion and gasification of biomass. Compounds from benzene (mw 78) to indeno(1,2,3-cd)pyrene (mw 276) have been examined. The results show that it is possible to analyze polyaromatic hydrocarbons (PAH) with 4 aromatic rings (mw 202). Detection limits for these compounds are 3 . PAH with higher molecular weight can be identified and quantified in samples with high amounts of PAH, e.g., from gasification of biomass. Sampling on adsorption tubes is extremely quick and easy. The tube is inserted in the gas of interest and the sample is sucked through the tube with a pump. Sampling times of 2-10 minutes are often sufficient. High moisture content in the gas may result in losses of the most volatile compounds during drying. Even very low concentrations of water in the tube may cause ice formation in the cold trap, destroying the sample. The analysis is unfortunately time-consuming because the desorption oven must be cooled between analyses, which reduces the number of samples that can be analyzed per day. The tubes can be stored for several weeks before analysis without deterioration. 4 refs, 5 figs, 3 tabs

  5. Simultaneous escaping of explicit and hidden free energy barriers: application of the orthogonal space random walk strategy in generalized ensemble based conformational sampling.

    Science.gov (United States)

    Zheng, Lianqing; Chen, Mengen; Yang, Wei

    2009-06-21

    To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures. As usually observed in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction, and these residual free energy barriers can greatly degrade the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimensional subset of the target system; then the "Hamiltonian lagging" problem, in which necessary structural relaxation falls behind the motion of the collective variable, is likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW-based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.

  6. Probabilistic Near and Far-Future Climate Scenarios of Precipitation and Surface Temperature for the North American Monsoon Region Under a Weighted CMIP5-GCM Ensemble Approach.

    Science.gov (United States)

    Montero-Martinez, M. J.; Colorado, G.; Diaz-Gutierrez, D. E.; Salinas-Prieto, J. A.

    2017-12-01

    It is well known that the North American Monsoon (NAM) region is already very dry and under considerable stress due to the lack of water resources at multiple locations. It is nevertheless remarkable that, even under those conditions, the Mexican part of the NAM region is the most agriculturally productive in Mexico. Thus, it is very important to have realistic climate scenarios for climate variables such as temperature, precipitation, relative humidity, and radiation. This study tackles that problem by generating probabilistic climate scenarios using a weighted CMIP5-GCM ensemble approach based on the technique of Xu et al. (2010), itself an improvement on the better-known Reliability Ensemble Averaging algorithm of Giorgi and Mearns (2002). In addition, the individual performances of the 20-plus GCMs and of the weighted ensemble are compared against observed data (CRU TS2.1) using different metrics and Taylor diagrams. This study focuses on probabilistic results reaching a given threshold, since products of this type could be of potential use for agricultural applications.
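
    A minimal sketch of performance-based ensemble weighting in this spirit: each model is weighted by the inverse of its error against observations over a common baseline period (the actual technique also rewards inter-model convergence, which is omitted here).

        import numpy as np

        def weighted_ensemble(models, obs, eps=1e-12):
            # models: (n_models, n_times) baseline-period simulations;
            # obs: (n_times,) observed series, e.g., CRU TS2.1 values.
            models = np.asarray(models, float)
            rmse = np.sqrt(np.mean((models - np.asarray(obs, float)) ** 2, axis=1))
            w = 1.0 / (rmse + eps)          # skill weight ~ inverse RMSE
            w /= w.sum()
            return w, w @ models            # weights and weighted ensemble mean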

  7. PERPADUAN COMBINED SAMPLING DAN ENSEMBLE OF SUPPORT VECTOR MACHINE (ENSVM UNTUK MENANGANI KASUS CHURN PREDICTION PERUSAHAAN TELEKOMUNIKASI

    Directory of Open Access Journals (Sweden)

    Fernandy Marbun

    2010-07-01

    Full Text Available Churn prediction is a way of predicting which customers are likely to churn. Data mining, and classification in particular, appears to be an alternative solution for building accurate churn prediction models. However, classification results become inaccurate because churn data are imbalanced: the class distribution is skewed toward the class with the larger share of the data. One way to handle this problem is to modify the dataset used, better known as resampling. Resampling techniques include over-sampling, under-sampling, and combined sampling. The Ensemble of SVM (EnSVM) method is expected to minimize the major- and minor-class misclassifications produced by a single SVM classifier. In this study, we combine combined sampling and EnSVM for churn prediction. Evaluation is performed by comparing the classification results of CombinedSampling-EnSVM with SMOTE-SVM (a combination of over-sampling and SVM) and pure SVM. The results show that the CombinedSampling-EnSVM method generally only achieves a better Gini Index performance than the SMOTE-SVM and no-resampling (pure-SVM) methods.
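
    A rough sketch of the combined-sampling-plus-ensemble idea using scikit-learn's SVC; combined sampling is implemented here as naive random over- and under-sampling rather than SMOTE, and labels are assumed to be 0/1 with churners as the minority class 1.

        import numpy as np
        from sklearn.svm import SVC

        def combined_resample(X, y, rng, minority=1):
            # Over-sample the minority class and under-sample the majority
            # class so that both end up at roughly the average class size.
            X_min, X_maj = X[y == minority], X[y != minority]
            target = (len(X_min) + len(X_maj)) // 2
            over = rng.choice(len(X_min), size=target, replace=True)
            under = rng.choice(len(X_maj), size=target, replace=False)
            Xr = np.vstack([X_min[over], X_maj[under]])
            yr = np.array([minority] * target + [1 - minority] * target)
            return Xr, yr

        def ensvm_predict(X_train, y_train, X_test, n_members=9, seed=0):
            # Each member sees its own resampled dataset; the ensemble votes
            # by averaging the SVM decision values.
            rng = np.random.default_rng(seed)
            scores = np.zeros(len(X_test))
            for _ in range(n_members):
                Xr, yr = combined_resample(X_train, y_train, rng)
                scores += SVC(kernel="rbf").fit(Xr, yr).decision_function(X_test)
            return (scores / n_members > 0.0).astype(int)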

  8. Weight management behaviors in a sample of Iranian adolescent girls.

    Science.gov (United States)

    Garousi, S; Garrusi, B; Baneshi, Mohammad Reza; Sharifi, Z

    2016-09-01

    Attempts to obtain the ideal body shape portrayed in advertising can result in behaviors that lead to an unhealthy reduction in weight. This study was designed to identify factors that may be effective in changing weight-related behavior in a sample of Iranian adolescents. Three hundred fifty adolescent girls from high schools in Kerman, Iran participated in a cross-sectional study based on a self-administered questionnaire. Multifactorial logistic regression modeling was used to identify the factors influencing each body management method, and a decision tree model was constructed to identify individuals who were more or less likely to attempt to change their body shape. Approximately one-third of the adolescent girls had attempted dieting, and 37% of them had exercised to lose weight. The logistic regression model showed that pressure from their mothers and the media, the father's education level, and body mass index (BMI) were important factors in dieting. BMI and perceived pressure from the media were risk factors for attempting exercise. BMI and perceived pressure from relatives, particularly mothers, and the media were important factors in attempts by adolescent girls to lose weight.

  9. Detecting reactive islands using Lagrangian descriptors and the relevance to transition path sampling.

    Science.gov (United States)

    Patra, Sarbani; Keshavamurthy, Srihari

    2018-02-14

    It has been known for some time now that isomerization reactions, classically, are mediated by phase space structures called reactive islands (RI). RIs provide one possible route to correcting for nonstatistical effects in the reaction dynamics. In this work, we map out the reactive islands for the two-dimensional Müller-Brown model potential and show that the reactive islands are intimately linked to the issue of rare event sampling. In particular, we establish the sensitivity of the so-called committor probabilities, useful quantities in the transition path sampling technique, to the hierarchical RI structures. Mapping out the RI structure for high-dimensional systems, however, is a challenging task. Here, we show that the technique of Lagrangian descriptors is able to effectively identify the RI hierarchy in the model system. Based on our results, we suggest that Lagrangian descriptors can be useful for detecting RIs in high-dimensional systems.
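
    A sketch of an arc-length Lagrangian descriptor evaluated on the Müller-Brown potential (standard parameter set), integrating the trajectory through a phase point forward and backward in time with velocity Verlet at unit mass; the window tau and step size are illustrative.

        import numpy as np

        A  = np.array([-200.0, -100.0, -170.0, 15.0])   # Muller-Brown parameters
        a  = np.array([-1.0, -1.0, -6.5, 0.7])
        b  = np.array([0.0, 0.0, 11.0, 0.6])
        c  = np.array([-10.0, -10.0, -6.5, 0.7])
        X0 = np.array([1.0, 0.0, -0.5, -1.0])
        Y0 = np.array([0.0, 0.5, 1.5, 1.0])

        def grad_v(x, y):
            dx, dy = x - X0, y - Y0
            e = A * np.exp(a * dx**2 + b * dx * dy + c * dy**2)
            return np.array([np.sum(e * (2*a*dx + b*dy)),
                             np.sum(e * (2*c*dy + b*dx))])

        def lagrangian_descriptor(q0, p0, tau=1.0, dt=1e-4):
            # M = int_{-tau}^{tau} |dq/dt| dt; the backward branch is obtained
            # by reversing the momentum and integrating forward in time.
            m_val = 0.0
            for sign in (1.0, -1.0):
                q, p = np.array(q0, float), sign * np.array(p0, float)
                f = -grad_v(*q)
                for _ in range(int(tau / dt)):
                    p_half = p + 0.5 * dt * f
                    q = q + dt * p_half
                    f = -grad_v(*q)
                    p = p_half + 0.5 * dt * f
                    m_val += np.linalg.norm(p) * dt   # unit mass: |dq/dt| = |p|
            return m_val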

  10. Aβ monomers transiently sample oligomer and fibril-like configurations: ensemble characterization using a combined MD/NMR approach.

    Science.gov (United States)

    Rosenman, David J; Connors, Christopher R; Chen, Wen; Wang, Chunyu; García, Angel E

    2013-09-23

    Amyloid β (Aβ) peptides are a primary component of fibrils and oligomers implicated in the etiology of Alzheimer's disease (AD). However, the intrinsic flexibility of these peptides has frustrated efforts to investigate the secondary and tertiary structure of Aβ monomers, whose conformational landscapes directly contribute to the kinetics and thermodynamics of Aβ aggregation. In this work, de novo replica exchange molecular dynamics (REMD) simulations on the microseconds-per-replica timescale are used to characterize the structural ensembles of Aβ42, Aβ40, and M35-oxidized Aβ42, three physiologically relevant isoforms with substantially different aggregation properties. J-coupling data calculated from the REMD trajectories were compared to corresponding NMR-derived values acquired through two different pulse sequences, revealing that all simulations converge on the order of hundreds of nanoseconds-per-replica toward ensembles that yield good agreement with experiment. Though all three Aβ species adopt highly heterogeneous ensembles, these are considerably more structured compared to simulations on shorter timescales. Prominent in the C-terminus are antiparallel β-hairpins between L17-A21, A30-L36, and V39-I41, similar to oligomer and fibril intrapeptide models that expose these hydrophobic side chains to solvent and may serve as hotspots for self-association. Compared to reduced Aβ42, the absence of a second β-hairpin in Aβ40 and the sampling of alternate β topologies by M35-oxidized Aβ42 may explain the reduced aggregation rates of these forms. A persistent V24-K28 bend motif, observed in all three species, is stabilized by buried backbone to side-chain hydrogen bonds with D23 and a cross-region salt bridge between E22 and K28, highlighting the role of the familial AD-linked E22 and D23 residues in Aβ monomer folding. These characterizations help illustrate the conformational landscapes of Aβ monomers at atomic resolution and provide insight into

  11. Automated Prediction of Catalytic Mechanism and Rate Law Using Graph-Based Reaction Path Sampling.

    Science.gov (United States)

    Habershon, Scott

    2016-04-12

    In a recent article [J. Chem. Phys. 2015, 143, 094106], we introduced a novel graph-based sampling scheme which can be used to generate chemical reaction paths in many-atom systems in an efficient and highly automated manner. The main goal of this work is to demonstrate how this approach, when combined with direct kinetic modeling, can be used to determine the mechanism and phenomenological rate law of a complex catalytic cycle, namely cobalt-catalyzed hydroformylation of ethene. Our graph-based sampling scheme generates 31 unique chemical products and 32 unique chemical reaction pathways; these sampled structures and reaction paths enable automated construction of a kinetic network model of the catalytic system when combined with density functional theory (DFT) calculations of free energies and resultant transition-state theory rate constants. Direct simulations of this kinetic network across a range of initial reactant concentrations enable determination of both the reaction mechanism and the associated rate law in an automated fashion, without the need either to presuppose a mechanism or to make steady-state approximations in the kinetic analysis. Most importantly, we find that the reaction mechanism which emerges from these simulations is exactly that originally proposed by Heck and Breslow; furthermore, the simulated rate law is also consistent with previous experimental and computational studies, exhibiting a complex dependence on carbon monoxide pressure. While the inherent errors of using DFT simulations to model chemical reactivity limit the quantitative accuracy of our calculated rates, this work confirms that our automated simulation strategy enables direct analysis of catalytic mechanisms from first principles.
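
    The kinetic-network step described above can be mimicked in a few lines: integrate mass-action ODEs for a reaction network and read off the effective reaction order numerically. The three-reaction network and rate constants below are invented stand-ins, not the sampled hydroformylation cycle or its DFT-derived rates.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy network: cat + CO <-> cat.CO (k1, k2); cat.CO + H2 -> product + cat (k3)
        k1, k2, k3 = 2.0, 1.0, 0.5          # illustrative rate constants

        def rhs(t, y, co, h2):
            cat, catco, prod = y
            f1 = k1 * cat * co - k2 * catco  # reversible CO binding
            f2 = k3 * catco * h2             # product-forming step
            return [-f1 + f2, f1 - f2, f2]

        def mean_rate(co, h2=1.0, t_end=0.2):
            """Average product-formation rate at fixed CO and H2 levels."""
            sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0, 0.0],
                            args=(co, h2), rtol=1e-8, atol=1e-10)
            return sol.y[2, -1] / sol.t[-1]

        # effective order in CO from a log-log finite difference
        co = np.array([0.5, 1.0, 2.0, 4.0])
        rates = np.array([mean_rate(c) for c in co])
        print(np.gradient(np.log(rates), np.log(co)))

    The printed orders decrease as the catalyst saturates with CO, a simple instance of the concentration-dependent rate law behaviour the abstract refers to.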

  12. Multiscale simulations of patchy particle systems combining Molecular Dynamics, Path Sampling and Green's Function Reaction Dynamics

    Science.gov (United States)

    Bolhuis, Peter

    Important reaction-diffusion processes, such as biochemical networks in living cells, or self-assembling soft matter, span many orders of magnitude in length and time scales. In these systems, the reactants' spatial dynamics at mesoscopic length and time scales of microns and seconds is coupled to the reactions between the molecules at microscopic length and time scales of nanometers and milliseconds. This wide range of length and time scales makes these systems notoriously difficult to simulate. While mean-field rate equations cannot describe such processes, the mesoscopic Green's Function Reaction Dynamics (GFRD) method enables efficient simulation at the particle level provided the microscopic dynamics can be integrated out. Yet, many processes exhibit non-trivial microscopic dynamics that can qualitatively change the macroscopic behavior, calling for an atomistic, microscopic description. The recently developed multiscale Molecular Dynamics Green's Function Reaction Dynamics (MD-GFRD) approach combines GFRD for simulating the system at the mesoscopic scale, where particles are far apart, with microscopic Molecular (or Brownian) Dynamics for simulating the system at the microscopic scale, where reactants are in close proximity. The association and dissociation of particles are treated with rare event path sampling techniques. I will illustrate the efficiency of this method for patchy particle systems. Replacing the microscopic simulation with a pre-computed Markov State Model (MSM) avoids entering the microscopic regime altogether; the MSM is pre-computed using advanced path-sampling techniques such as multistate transition interface sampling. I illustrate this approach on patchy particle systems that show multiple modes of binding. MD-GFRD is generic, and can be used to efficiently simulate reaction-diffusion systems at the particle level, including the orientational dynamics, opening up the possibility for large-scale simulations of e.g. protein signaling networks.

  13. Another Look at the Mechanisms of Hydride Transfer Enzymes with Quantum and Classical Transition Path Sampling.

    Science.gov (United States)

    Dzierlenga, Michael W; Antoniou, Dimitri; Schwartz, Steven D

    2015-04-02

    The mechanisms involved in enzymatic hydride transfer have been studied for years, but questions remain due, in part, to the difficulty of probing the effects of protein motion and hydrogen tunneling. In this study, we use transition path sampling (TPS) with normal mode centroid molecular dynamics (CMD) to calculate the barrier to hydride transfer in yeast alcohol dehydrogenase (YADH) and human heart lactate dehydrogenase (LDH). Calculation of the work applied to the hydride allowed for observation of the change in barrier height upon inclusion of quantum dynamics. Similar calculations were performed using deuterium as the transferring particle in order to approximate kinetic isotope effects (KIEs). The change in barrier height in YADH is indicative of a zero-point energy (ZPE) contribution and is evidence that catalysis occurs via a protein compression that mediates a near-barrierless hydride transfer. Calculation of the KIE using the difference in barrier height between the hydride and deuteride agreed well with experimental results.
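
    The final step described above, turning two barrier heights into a kinetic isotope effect, is a single Arrhenius-type exponential. A minimal sketch, assuming equal prefactors for the two isotopes and purely illustrative barrier values:

        import numpy as np

        KB = 0.0019872041          # Boltzmann constant, kcal/(mol K)

        def kie_from_barriers(dG_H, dG_D, T=300.0):
            """Semiclassical estimate KIE = k_H / k_D = exp((dG_D - dG_H) / kB T),
            assuming the isotopes differ only in activation free energy."""
            return np.exp((dG_D - dG_H) / (KB * T))

        # e.g. a barrier 0.9 kcal/mol higher for deuteride transfer at 300 K
        print(kie_from_barriers(13.1, 14.0))    # ~ 4.5, a typical primary KIE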

  14. Path integral methods for primordial density perturbations - sampling of constrained Gaussian random fields

    International Nuclear Information System (INIS)

    Bertschinger, E.

    1987-01-01

    Path integrals may be used to describe the statistical properties of a random field such as the primordial density perturbation field. In this framework the probability distribution is given for a Gaussian random field subjected to constraints such as the presence of a protovoid or supercluster at a specific location in the initial conditions. An algorithm has been constructed for generating samples of a constrained Gaussian random field on a lattice using Monte Carlo techniques. The method makes possible a systematic study of the density field around peaks or other constrained regions in the biased galaxy formation scenario, and it is effective for generating initial conditions for N-body simulations with rare objects in the computational volume. 21 references
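
    The constrained-sampling idea in this abstract can be illustrated with the standard conditioning identity for Gaussians (the Hoffman-Ribak construction): draw an unconstrained field, then correct it so the linear constraints hold exactly. The lattice size, covariance kernel and the single "peak" constraint below are illustrative choices, not the paper's setup.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 128
        x = np.arange(n)
        # stationary Gaussian covariance on a 1-D lattice (illustrative kernel)
        C = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 8.0) ** 2)
        L = np.linalg.cholesky(C + 1e-10 * np.eye(n))

        def constrained_sample(H, d):
            """Draw f ~ N(0, C) subject to H f = d: sample an unconstrained
            realization, then add the Wiener-filter correction."""
            f = L @ rng.standard_normal(n)
            K = C @ H.T @ np.linalg.inv(H @ C @ H.T)
            return f + K @ (d - H @ f)

        # constrain a 3-sigma peak at lattice site 64
        H = np.zeros((1, n)); H[0, 64] = 1.0
        field = constrained_sample(H, np.array([3.0]))
        print(field[64])            # exactly 3.0 up to round-off

    Generating many such realizations gives an ensemble of initial conditions around the constrained peak, of the kind the abstract describes for N-body initial conditions.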

  15. Lakshmibai-Seshadri paths of level-zero weight shape and one-dimensional sums associated to level-zero fundamental representations

    OpenAIRE

    Naito, Satoshi; Sagaki, Daisuke

    2006-01-01

    We give interpretations of energy functions and (classically restricted) one-dimensional sums associated to tensor products of level-zero fundamental representations of quantum affine algebras in terms of Lakshmibai-Seshadri paths of level-zero weight shape.

  16. Information in small neuronal ensemble activity in the hippocampal CA1 during delayed non-matching to sample performance in rats

    Directory of Open Access Journals (Sweden)

    Takahashi Susumu

    2009-09-01

    Background: The matrix-like organization of the hippocampus, with its several inputs and outputs, has given rise to several theories related to hippocampal information processing. Single-cell electrophysiological studies and studies of lesions or genetically altered animals using recognition memory tasks such as delayed non-matching-to-sample (DNMS) tasks support the theories. However, a complete understanding of hippocampal function necessitates knowledge of the encoding of information by multiple neurons in a single trial. The role of neuronal ensembles in the hippocampal CA1 for a DNMS task was assessed quantitatively in this study using multi-neuronal recordings and an artificial neural network classifier as a decoder. Results: The activity of small neuronal ensembles (6-18 cells) over brief time intervals (2-50 ms) contains accurate information specifically related to the matching/non-matching of continuously presented stimuli (stimulus comparison). The accuracy of the combination of neurons pooled over all the ensembles was markedly lower than those of the ensembles over all examined time intervals. Conclusion: The results show that the spatiotemporal patterns of spiking activity among cells in the small neuronal ensemble contain much information that is specifically useful for the stimulus comparison. Small neuronal networks in the hippocampal CA1 might therefore act as a comparator during recognition memory tasks.

  17. Multilevel ensemble Kalman filter

    KAUST Repository

    Chernov, Alexey; Hoel, Haakon; Law, Kody; Nobile, Fabio; Tempone, Raul

    2016-01-01

    This work embeds a multilevel Monte Carlo (MLMC) sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF). In terms of computational cost vs. approximation error, the asymptotic performance of the multilevel ensemble Kalman filter (MLEnKF) is superior to that of the EnKF.
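
    The multilevel idea can be shown on a toy problem: estimate E[X_T] for an SDE by telescoping coarse-to-fine corrections computed with coupled Brownian increments. The SDE, the levels and the crude sample-allocation rule below are illustrative, not the MLEnKF algorithm itself, which embeds such an estimator inside the EnKF update.

        import numpy as np

        rng = np.random.default_rng(1)
        T = 1.0

        def level_samples(level, n):
            """Coupled Euler-Maruyama for dX = -X dt + 0.5 dW, X0 = 1.
            The fine path uses step T/2**level; the coarse path (one level
            up) is driven by the same Brownian increments, summed pairwise."""
            hf = T / 2 ** level
            Xf = np.ones(n); Xc = np.ones(n); dWc = np.zeros(n)
            for k in range(2 ** level):
                dW = np.sqrt(hf) * rng.standard_normal(n)
                Xf += -Xf * hf + 0.5 * dW
                if level > 0:
                    dWc += dW
                    if k % 2 == 1:          # one coarse step = two fine steps
                        Xc += -Xc * (2 * hf) + 0.5 * dWc
                        dWc = np.zeros(n)
            return Xf, Xc

        def mlmc_mean(L=4, n0=40000):
            """Telescoping estimate E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
            with (crudely) geometrically decreasing sample counts per level."""
            est = level_samples(0, n0)[0].mean()
            for l in range(1, L + 1):
                fine, coarse = level_samples(l, n0 // 2 ** l)
                est += (fine - coarse).mean()
            return est

        print(mlmc_mean())      # close to exp(-1) ~ 0.368 for this SDE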

  18. Multilevel ensemble Kalman filter

    KAUST Repository

    Chernov, Alexey

    2016-01-06

    This work embeds a multilevel Monte Carlo (MLMC) sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF). In terms of computational cost vs. approximation error, the asymptotic performance of the multilevel ensemble Kalman filter (MLEnKF) is superior to that of the EnKF.

  19. Cis-to-Trans Isomerization of Azobenzene Derivatives Studied with Transition Path Sampling and Quantum Mechanical/Molecular Mechanical Molecular Dynamics.

    Science.gov (United States)

    Muždalo, Anja; Saalfrank, Peter; Vreede, Jocelyne; Santer, Mark

    2018-04-10

    Azobenzene-based molecular photoswitches are becoming increasingly important for the development of photoresponsive, functional soft-matter material systems. Upon illumination with light, fast interconversion between a more stable trans and a metastable cis configuration can be established, resulting in pronounced changes in conformation, dipole moment or hydrophobicity. A rational design of functional photosensitive molecules with embedded azo moieties requires a thorough understanding of isomerization mechanisms and rates, especially the thermally activated relaxation. For small azo derivatives considered in the gas phase or in simple solvents, Eyring's classical transition state theory (TST) approach yields useful predictions for trends in activation energies and the corresponding half-life times of the cis isomer. However, TST or improved theories cannot easily be applied when the azo moiety is part of a larger molecular complex or embedded in a heterogeneous environment, where a multitude of possible reaction pathways may exist. In these cases, only the sampling of an ensemble of dynamic reactive trajectories (transition path sampling, TPS) with explicit models of the environment may reveal the nature of the processes involved. In the present work we show how a TPS approach can conveniently be implemented for the relaxation isomerization of azobenzenes, starting with the simple examples of pure azobenzene and a push-pull derivative immersed in a polar (DMSO) and an apolar (toluene) solvent. The latter are represented explicitly at a molecular mechanical (MM) level, and the azo moiety at a quantum mechanical (QM) level. We demonstrate for the push-pull azobenzene that path sampling in combination with the chosen QM/MM scheme produces the expected change in isomerization pathway, from inversion to rotation, on going from a low- to a high-permittivity (explicit) solvent model. We discuss the potential of the simulation procedure presented for comparative calculation of

  20. Prevalence of Human Papillomavirus Infection in Unselected SurePath Samples Using the APTIMA HPV mRNA Assay

    DEFF Research Database (Denmark)

    Rebolj, Matejka; Preisler, Sarah; Ejegod, Ditte M

    2013-01-01

    The APTIMA Human Papillomavirus (HPV) Assay detects E6/E7 mRNA from 14 human papillomavirus genotypes. Horizon was a population-based split-sample study among well-screened women, with the aim of comparing APTIMA, Hybrid Capture 2 (HC2), and liquid-based cytology (LBC) using SurePath samples. APTIMA

  1. Entropy of network ensembles

    Science.gov (United States)

    Bianconi, Ginestra

    2009-03-01

    In this paper we generalize the concept of random networks to describe network ensembles with nontrivial features by a statistical mechanics approach. This framework is able to describe undirected and directed network ensembles as well as weighted network ensembles. These networks might have nontrivial community structure or, in the case of networks embedded in a given space, they might have a link probability with a nontrivial dependence on the distance between the nodes. These ensembles are characterized by their entropy, which evaluates the cardinality of networks in the ensemble. In particular, in this paper we define and evaluate the structural entropy, i.e., the entropy of the ensembles of undirected uncorrelated simple networks with given degree sequence. We stress the apparent paradox that scale-free degree distributions are characterized by having small structural entropy while they are so widely encountered in natural, social, and technological complex systems. We propose a solution to the paradox by proving that scale-free degree distributions are the most likely degree distribution with the corresponding value of the structural entropy. Finally, the general framework we present in this paper is able to describe microcanonical ensembles of networks as well as canonical or hidden-variable network ensembles with significant implications for the formulation of network-constructing algorithms.
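
    For the canonical (hidden-variable style) case in which edges are independent Bernoulli variables, the ensemble entropy reduces to a sum of binary entropies over node pairs. A minimal sketch with a Chung-Lu style factorized linking probability; the example degree sequence is arbitrary, and this is one simple member of the family of ensembles the paper treats, not its general formalism.

        import numpy as np

        def ensemble_entropy(degrees):
            """Shannon (Gibbs) entropy, in nats, of a canonical network
            ensemble in which edge (i, j) is present independently with
            p_ij = k_i * k_j / 2M (assumed < 1 for all pairs)."""
            k = np.asarray(degrees, float)
            p = np.outer(k, k) / k.sum()
            iu = np.triu_indices(len(k), 1)          # each pair counted once
            p = np.clip(p[iu], 1e-12, 1.0 - 1e-12)
            return float(-(p * np.log(p) + (1 - p) * np.log1p(-p)).sum())

        print(ensemble_entropy([3, 2, 2, 2, 1]))

    Broad (e.g. scale-free) degree sequences push many p_ij toward 0 or 1 and so lower this entropy, which is the small-structural-entropy effect the abstract discusses.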

  2. Using path analysis to understand parents' perceptions of their children's weight, physical activity and eating habits in the Champlain region of Ontario.

    Science.gov (United States)

    Adamo, Kristi B; Papadakis, Sophia; Dojeiji, Laurie; Turnau, Micheline; Simmons, Louise; Parameswaran, Meena; Cunningham, John; Pipe, Andrew L; Reid, Robert D

    2010-11-01

    Parents have a fundamental role in promoting the healthy weight of their children. To determine parental perceptions of their child's body weight, eating and physical activity (PA) behaviours, and to test a predictive model of parental perceptions regarding their child's PA and healthy eating behaviours. A random-digit telephone survey was conducted among parents of children four to 12 years of age living in the Champlain region of Ontario. Descriptive statistics were used to summarize the responses. Path analysis was used to identify predictors of parental perceptions of PA and healthy eating. The study sample consisted of 1940 parents/caregivers. Only 0.2% of parents reported their child as being obese; 8.6% reported their child as being overweight. Most parents perceived their child to be physically active and eating healthily. Approximately 25% of parents reported that their child spent 2 h/day or more in front of a screen, and that their child consumed less than three servings of fruits and vegetables daily, and regularly consumed fast food. Variables that correlated with PA perceptions included time spent reading/doing homework, interest in PA, perceived importance of PA, frequency of PA, level of parental PA, participation in organized sport, child weight and parental concern for weight. Variables that predicted perceptions regarding healthy eating were parental education, household income, preparation of home-cooked meals, fruit and vegetable intake, and concern for and influence on the child's weight. Parents in the present study sample did not appear to understand, or had little knowledge of the recommendations for PA and healthy eating in children. Parents appeared to base their judgment of healthy levels of PA or healthy eating behaviours using minimal criteria; these criteria are inconsistent with those used by health professionals to define adequate PA and healthy eating. The present survey highlights an important knowledge gap between scientific

  3. Impact of climate change on the stream flow of the lower Brahmaputra: trends in high and low flows based on discharge-weighted ensemble modelling

    Directory of Open Access Journals (Sweden)

    A. K. Gain

    2011-05-01

    Climate change is likely to have significant effects on hydrology. The Ganges-Brahmaputra river basin is one of the most vulnerable areas in the world, as it is subject to the combined effects of glacier melt, extreme monsoon rainfall and sea level rise. To what extent climate change will impact river flow in the Brahmaputra basin is as yet unclear, as climate model studies show ambiguous results. In this study we investigate the effect of climate change on both low and high flows of the lower Brahmaputra. We apply a novel method of discharge-weighted ensemble modelling using outputs from a global hydrological model forced with 12 different global climate models (GCMs). Our analysis shows that only a limited number of GCMs are required to reconstruct observed discharge. Based on the GCM outputs and long-term records of observed flow at Bahadurabad station, our method results in a multi-model weighted ensemble of transient stream flow for the period 1961–2100. Using the constructed transients, we subsequently project future trends in low and high river flow. The analysis shows that extreme low flow conditions are likely to occur less frequently in the future. However, a very strong increase in peak flows is projected, which may, in combination with projected sea level change, have devastating effects for Bangladesh. The methods presented in this study are more widely applicable, in that existing multi-model streamflow simulations from global hydrological models can be weighted against observed streamflow data to assess, to first order, the effects of climate change for specific river basins.
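
    One simple way to realize the discharge weighting described above is to score each GCM-driven simulation against the observed discharge record and weight by inverse error. The inverse-RMSE rule and the synthetic series below are illustrative; the paper's exact weighting scheme may differ.

        import numpy as np

        rng = np.random.default_rng(0)

        def discharge_weights(sims, obs):
            """Weight each ensemble member by its inverse RMSE against
            observed discharge, normalized to sum to one."""
            rmse = np.sqrt(((sims - obs) ** 2).mean(axis=1))
            w = 1.0 / rmse
            return w / w.sum()

        obs = 2.0 + np.sin(np.linspace(0.0, 12.0, 120))                # synthetic flow
        sims = obs + rng.normal(0.0, [[0.1], [0.4], [1.0]], (3, 120))  # 3 'GCMs'
        w = discharge_weights(sims, obs)
        print(w)                     # the best-matching member dominates
        weighted = w @ sims          # discharge-weighted ensemble series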

  4. Can an inadequate cervical cytology sample in ThinPrep be converted to a satisfactory sample by processing it with a SurePath preparation?

    Science.gov (United States)

    Sørbye, Sveinung Wergeland; Pedersen, Mette Kristin; Ekeberg, Bente; Williams, Merete E Johansen; Sauer, Torill; Chen, Ying

    2017-01-01

    The Norwegian Cervical Cancer Screening Program recommends screening every 3 years for women between 25 and 69 years of age. There is a large difference in the percentage of unsatisfactory samples between laboratories that use different brands of liquid-based cytology. We wished to examine whether inadequate ThinPrep samples could be made satisfactory by processing them with the SurePath protocol. A total of 187 inadequate ThinPrep specimens from the Department of Clinical Pathology at University Hospital of North Norway were sent to Akershus University Hospital for conversion to SurePath medium. Ninety-one (48.7%) were processed through the automated "gynecologic" application for cervical cytology samples, and 96 (51.3%) were processed with the "nongynecological" automatic program. Out of 187 samples that had been unsatisfactory by ThinPrep, 93 (49.7%) were satisfactory after being converted to SurePath. The rate of satisfactory cytology was 36.6% and 62.5% for samples run through the "gynecology" program and "nongynecology" program, respectively. Of the 93 samples that became satisfactory after conversion from ThinPrep to SurePath, 80 (86.0%) were screened as normal, while 13 samples (14.0%) were given an abnormal diagnosis, which included 5 atypical squamous cells of undetermined significance, 5 low-grade squamous intraepithelial lesions, 2 atypical glandular cells not otherwise specified, and 1 atypical squamous cells cannot exclude high-grade squamous intraepithelial lesion. A total of 2.1% (4/187) of the women received a diagnosis of cervical intraepithelial neoplasia 2 or higher at a later follow-up. Converting cytology samples from ThinPrep to SurePath processing can reduce the number of unsatisfactory samples. The samples should be run through the "nongynecology" program to ensure an adequate number of cells.

  5. Cervical cancer incidence after normal cytological sample in routine screening using SurePath, ThinPrep, and conventional cytology

    DEFF Research Database (Denmark)

    Rozemeijer, Kirsten; Naber, Steffie K; Penning, Corine

    2017-01-01

    of histo- and cytopathology in the Netherlands (PALGA), January 2000 to March 2013. Population: Women with 5 924 474 normal screening samples (23 833 123 person years). Exposure: Use of SurePath or ThinPrep versus conventional cytology as the screening test. Main outcome measure: 72-month cumulative incidence

  6. Constrained approximation of effective generators for multiscale stochastic reaction networks and application to conditioned path sampling

    Energy Technology Data Exchange (ETDEWEB)

    Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk

    2016-10-15

    Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area of research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing the effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the "fast" and "slow" variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can be applied iteratively. This results in breaking the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi-steady-state assumption, the effective dynamics that are approximated are highly accurate and, in the case of systems with only monomolecular reactions, are exact. We will demonstrate this with some numerics, and also use the effective generators to sample paths of the slow variables which are conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
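
    The key computation above, the null space of the constrained system's generator, fits in a few lines for a small example. The birth-death generator below is an invented stand-in for a fast subsystem; its stationary law is the zero-eigenvalue left eigenvector.

        import numpy as np

        # Birth-death chain on {0,...,N}: birth at rate kb, death at rate kd*n
        N, kb, kd = 20, 5.0, 1.0            # illustrative rate constants
        Q = np.zeros((N + 1, N + 1))        # generator: Q[i, j] = rate i -> j
        for i in range(N + 1):
            if i < N: Q[i, i + 1] = kb
            if i > 0: Q[i, i - 1] = kd * i
            Q[i, i] = -Q[i].sum()           # rows of a generator sum to zero

        # the stationary distribution spans the null space of Q^T
        w, V = np.linalg.eig(Q.T)
        pi = np.real(V[:, np.argmin(np.abs(w))])
        pi /= pi.sum()
        print(pi.argmax())          # near kb/kd = 5 (Poisson-like law)

    In the constrained approach this eigenproblem is solved for each fixed value of the slow variables, giving the entries of the effective generator without stochastic simulation.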

  7. A study on reducing update frequency of the forecast samples in the ensemble-based 4DVar data assimilation method

    Energy Technology Data Exchange (ETDEWEB)

    Shao, Aimei; Xu, Daosheng [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province; Chinese Academy of Meteorological Sciences, Beijing (China). State Key Lab. of Severe Weather; Qiu, Xiaobin [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province; Tianjin Institute of Meteorological Science (China); Qiu, Chongjian [Lanzhou Univ. (China). Key Lab. of Arid Climatic Changing and Reducing Disaster of Gansu Province

    2013-02-15

    In the ensemble-based four-dimensional variational assimilation method (SVD-En4DVar), a singular value decomposition (SVD) technique is used to select the leading eigenvectors, and the analysis variables are expressed as an expansion in the orthogonal basis of those eigenvectors. The experiments with a two-dimensional shallow-water equation model and simulated observations show that the truncation error and the rejection of observed signals due to the reduced-dimensional reconstruction of the analysis variable are the major factors that damage the analysis when the ensemble size is not large enough. However, a larger ensemble imposes a daunting computational burden. Experiments with the shallow-water equation model also show that the forecast error covariances remain relatively constant over time. For that reason, we propose an approach that increases the number of members in the forecast ensemble while reducing the update frequency of the forecast error covariance, in order to increase analysis accuracy and reduce the computational cost. A series of experiments were conducted with the shallow-water equation model to test the efficiency of this approach. The experimental results indicate that this approach is promising. Further experiments with the WRF model show that this approach is also suitable for the real atmospheric data assimilation problem, but the update frequency of the forecast error covariances should not be too low. (orig.)

  8. 7 CFR 201.46 - Weight of working sample.

    Science.gov (United States)

    2010-01-01

    [Fragment of the regulation's computation instructions — multiply each component's percentage by the weight specified in column 2 of table 1, add the products, and total the component percentages — followed by garbled rows of the working-sample weight table (Pepper, Pumpkin, Radish, Rhubarb, Rutabaga, Sage, ...); the table layout is not recoverable from this extract.]

  9. Ensemble Methods

    Science.gov (United States)

    Re, Matteo; Valentini, Giorgio

    2012-03-01

    Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall "ensemble" decision is rooted in our culture at least from the classical age of ancient Greece, and it was formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is represented by the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of "votes" (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, first of all the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers of this area [14,62,85,149,173]. Several theories have been
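
    The majority-vote combiner used as the running example above is nearly a one-liner in practice. A minimal sketch with three hypothetical classifiers' hard predictions:

        import numpy as np
        from collections import Counter

        def majority_vote(predictions):
            """Return, for each sample (column), the class predicted by the
            largest number of base learners (ties -> first class seen)."""
            return np.array([Counter(col).most_common(1)[0][0]
                             for col in np.asarray(predictions).T])

        votes = [[0, 1, 1, 0],       # learner 1
                 [0, 1, 0, 0],       # learner 2
                 [1, 1, 1, 0]]       # learner 3
        print(majority_vote(votes))  # -> [0 1 1 0]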

  10. Predictors of initial weight loss among women with abdominal obesity: a path model using self-efficacy and health-promoting behaviour.

    Science.gov (United States)

    Choo, Jina; Kang, Hyuncheol

    2015-05-01

    To identify predictors of initial weight loss among women with abdominal obesity by using a path model. Successful weight loss in the initial stages of long-term weight management may promote weight loss maintenance. A longitudinal study design. Study participants were 75 women with abdominal obesity, who were enrolled in a 12-month Community-based Heart and Weight Management Trial and followed until a 6-month assessment. The Weight Efficacy Lifestyle, Exercise Self-Efficacy and Health Promoting Lifestyle Profile-II measured diet self-efficacy, exercise self-efficacy and health-promoting behaviour respectively. All endogenous and exogenous variables used in our path model were change variables from baseline to 6 months. Data were collected between May 2011-May 2012. Based on the path model, increases in both diet and exercise self-efficacy had significant effects on increases in health-promoting behaviour. Increases in diet self-efficacy had a significant indirect effect on initial weight loss via increases in health-promoting behaviour. Increases in health-promoting behaviour had a significant effect on initial weight loss. Among women with abdominal obesity, increased diet self-efficacy and health-promoting behaviour were predictors of initial weight loss. A mechanism by which increased diet self-efficacy predicts initial weight loss may be partially attributable to health-promoting behavioural change. However, more work is still needed to verify causality. Based on the current findings, intensive nursing strategies for increasing self-efficacy for weight control and health-promoting behaviour may be essential components for better weight loss in the initial stage of a weight management intervention. © 2015 John Wiley & Sons Ltd.

  11. Relative humidity effects on water vapour fluxes measured with closed-path eddy-covariance systems with short sampling lines

    DEFF Research Database (Denmark)

    Fratini, Gerardo; Ibrom, Andreas; Arriga, Nicola

    2012-01-01

    It has been formerly recognised that increasing relative humidity in the sampling line of closed-path eddy-covariance systems leads to increasing attenuation of water vapour turbulent fluctuations, resulting in strong latent heat flux losses. This occurrence has been analyzed for very long (50 m...... from eddy-covariance systems featuring short (4 m) and very short (1 m) sampling lines running at the same clover field and show that relative humidity effects persist also for these setups, and should not be neglected. Starting from the work of Ibrom and co-workers, we propose a mixed method...... and correction method proposed here is deemed applicable to closed-path systems featuring a broad range of sampling lines, and indeed applicable also to passive gases as a special case. The methods described in this paper are incorporated, as processing options, in the free and open-source eddy...

  12. NYYD Ensemble

    Index Scriptorium Estoniae

    2002-01-01

    On the duo Traksmann-Lukk of the NYYD Ensemble and E.-S. Tüür's work "Symbiosis", which has also been recorded on the recently released NYYD Ensemble CD. Concerts on 2 March in the small hall of the Rakvere Theatre and on 3 March at Rotermanni Soolaladu (the Rotermann Salt Storage); the programme includes Tüür, Kaumann, Berio, Reich, Yun, Hauta-aho and Buckinx.

  13. Lessons from Climate Modeling on the Design and Use of Ensembles for Crop Modeling

    Science.gov (United States)

    Wallach, Daniel; Mearns, Linda O.; Ruane, Alexander C.; Roetter, Reimund P.; Asseng, Senthold

    2016-01-01

    Working with ensembles of crop models is a recent but important development in crop modeling which promises to lead to better uncertainty estimates for model projections and predictions, better predictions using the ensemble mean or median, and closer collaboration within the modeling community. There are numerous open questions about the best way to create and analyze such ensembles. Much can be learned from the field of climate modeling, given its much longer experience with ensembles. We draw on that experience to identify questions and make propositions that should help make ensemble modeling with crop models more rigorous and informative. The propositions include defining criteria for acceptance of models in a crop multi-model ensemble (MME), exploring criteria for evaluating the degree of relatedness of models in an MME, studying the effect of the number of models in the ensemble, development of a statistical model of model sampling, creation of a repository for MME results, studies of possible differential weighting of models in an ensemble, creation of single-model ensembles based on sampling from the uncertainty distribution of parameter values or inputs specifically oriented toward uncertainty estimation, the creation of super ensembles that sample more than one source of uncertainty, the analysis of super ensemble results to obtain information on total uncertainty and the separate contributions of different sources of uncertainty, and finally further investigation of the use of the multi-model mean or median as a predictor.

  14. A Counterexample on Sample-Path Optimality in Stable Markov Decision Chains with the Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2014-01-01

    Roč. 163, č. 2 (2014), s. 674-684 ISSN 0022-3239 Grant - others:PSF Organization(US) 012/300/02; CONACYT (México) and ASCR (Czech Republic)(MX) 171396 Institutional support: RVO:67985556 Keywords : Strong sample-path optimality * Lyapunov function condition * Stationary policy * Expected average reward criterion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.509, year: 2014 http://library.utia.cas.cz/separaty/2014/E/sladky-0432661.pdf

  15. Ensembl 2004.

    Science.gov (United States)

    Birney, E; Andrews, D; Bevan, P; Caccamo, M; Cameron, G; Chen, Y; Clarke, L; Coates, G; Cox, T; Cuff, J; Curwen, V; Cutts, T; Down, T; Durbin, R; Eyras, E; Fernandez-Suarez, X M; Gane, P; Gibbins, B; Gilbert, J; Hammond, M; Hotz, H; Iyer, V; Kahari, A; Jekosch, K; Kasprzyk, A; Keefe, D; Keenan, S; Lehvaslaiho, H; McVicker, G; Melsopp, C; Meidl, P; Mongin, E; Pettett, R; Potter, S; Proctor, G; Rae, M; Searle, S; Slater, G; Smedley, D; Smith, J; Spooner, W; Stabenau, A; Stalker, J; Storey, R; Ureta-Vidal, A; Woodwark, C; Clamp, M; Hubbard, T

    2004-01-01

    The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organize biology around the sequences of large genomes. It is a comprehensive and integrated source of annotation of large genome sequences, available via interactive website, web services or flat files. As well as being one of the leading sources of genome annotation, Ensembl is an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements. The facilities of the system range from sequence analysis to data storage and visualization and installations exist around the world both in companies and at academic sites. With a total of nine genome sequences available from Ensembl and more genomes to follow, recent developments have focused mainly on closer integration between genomes and external data.

  16. Ensembl 2017

    OpenAIRE

    Aken, Bronwen L.; Achuthan, Premanand; Akanni, Wasiu; Amode, M. Ridwan; Bernsdorff, Friederike; Bhai, Jyothish; Billis, Konstantinos; Carvalho-Silva, Denise; Cummins, Carla; Clapham, Peter; Gil, Laurent; Girón, Carlos García; Gordon, Leo; Hourlier, Thibaut; Hunt, Sarah E.

    2016-01-01

    Ensembl (www.ensembl.org) is a database and genome browser for enabling research on vertebrate genomes. We import, analyse, curate and integrate a diverse collection of large-scale reference data to create a more comprehensive view of genome biology than would be possible from any individual dataset. Our extensive data resources include evidence-based gene and regulatory region annotation, genome variation and gene trees. An accompanying suite of tools, infrastructure and programmatic access ...

  17. Developing Students' Reasoning about Samples and Sampling Variability as a Path to Expert Statistical Thinking

    Science.gov (United States)

    Garfield, Joan; Le, Laura; Zieffler, Andrew; Ben-Zvi, Dani

    2015-01-01

    This paper describes the importance of developing students' reasoning about samples and sampling variability as a foundation for statistical thinking. Research on expert-novice thinking as well as statistical thinking is reviewed and compared. A case is made that statistical thinking is a type of expert thinking, and as such, research…

  18. General and smoking cessation weight concern in a Hispanic sample of light and intermittent smokers.

    Science.gov (United States)

    Landrau-Cribbs, Erica; Cabriales, José Alonso; Cooper, Theodore V

    2015-02-01

    This study assessed general and cessation-related weight concerns in a Hispanic sample of light (≤10 cigarettes per day) and intermittent (non-daily) smokers (LITS) participating in a brief smoking cessation intervention. Three hundred and fifty-four Hispanic LITS (Mage=34.2, SD=14; 51.1% male; 57.9% Mexican American; 59.0% daily light, 41.0% intermittent) completed baseline measures assessing demographics, tobacco use/history, stage of change (SOC), general weight concern, and cessation-related weight concern. Three multiple logistic regression models examined potential predictors (i.e., age, gender, SOC, cigarettes per month, smoking status [daily vs non-daily], weight, cessation-related weight concern, general weight concern) of general weight concern, cessation-related weight concern, and past 30-day abstinence (controlling for the intervention). Study results indicated that a majority of participants reported general weight concern (59.6%), and slightly more than a third (35.6%) reported post-cessation weight gain concern (mean and median weight tolerated before relapse were within the 10-12 lb range). Lower weight and endorsing general weight concern were associated with cessation-related weight concern. Female gender, higher weight, and endorsing cessation-related weight concern were associated with general weight concern. Monthly cigarette use was associated with smoking cessation at the three-month follow-up. The results indicate a substantial prevalence of general weight concern and non-trivial rates of cessation-related weight concern in Hispanic LITS attempting to quit, and greater success in quitting among those who reported lower rates of cigarettes smoked per month. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Ensembl variation resources

    Directory of Open Access Journals (Sweden)

    Marin-Garcia Pablo

    2010-05-01

    Background: The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description: The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools, to connecting directly to the data servers programmatically. It gives a good overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions: Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org.

  20. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    Science.gov (United States)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
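
    A minimal way to recover such hybrid weights from an archive is to regress squared innovations on the ensemble sample variance alongside a static (climatological) variance. This least-squares sketch ignores observation error and uses synthetic data; it is an illustration of the idea, not the authors' exact formula.

        import numpy as np

        rng = np.random.default_rng(4)

        def hybrid_weights(innov2, ens_var, clim_var):
            """Fit E[(o - f)^2 | s^2] ~ a * clim_var + b * s^2; the
            coefficients (a, b) act as the static and ensemble weights."""
            X = np.column_stack([np.full(ens_var.shape, clim_var), ens_var])
            coef, *_ = np.linalg.lstsq(X, innov2, rcond=None)
            return coef

        # synthetic archive: true error variance = 0.3*clim + 0.7*sample var
        clim = 2.0
        s2 = rng.gamma(4.0, 0.5, 50000)              # ensemble sample variances
        innov = rng.normal(0.0, np.sqrt(0.3 * clim + 0.7 * s2))
        print(hybrid_weights(innov ** 2, s2, clim))  # ~ [0.3, 0.7]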

  1. Selecting a climate model subset to optimise key ensemble properties

    Directory of Open Access Journals (Sweden)

    N. Herger

    2018-02-01

    Full Text Available End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.

  2. Selecting a climate model subset to optimise key ensemble properties

    Science.gov (United States)

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.

  3. Hybrid Capture 2 and cobas human papillomavirus assays perform similarly on SurePath samples from women with abnormalities

    DEFF Research Database (Denmark)

    Fornari, D; Rebolj, M; Bjerregaard, B

    2016-01-01

    OBJECTIVE: In two laboratories (Departments of Pathology, Copenhagen University Hospitals of Herlev and Hvidovre), we compared cobas and Hybrid Capture 2 (HC2) human papillomavirus (HPV) assays using SurePath® samples from women with atypical squamous cells of undetermined significance (ASCUS) at ≥30 years and women after treatment of cervical intraepithelial neoplasia (CIN). METHODS: Samples from 566 women with ASCUS and 411 women after treatment were routinely tested with HC2 and, thereafter, with cobas. Histological outcomes were retrieved from the Danish Pathology Data Base. We calculated the overall agreement between the assays, and compared their sensitivity and specificity for ≥CIN2. RESULTS: In women with ASCUS, HC2 and cobas testing results were similar in the two laboratories. The overall agreement was 91% (95% CI, 88-93). After CIN treatment, the overall agreement was 87% (95% CI, 82

  4. Assessment of Processes of Change for Weight Management in a UK Sample

    Science.gov (United States)

    Andrés, Ana; Saldaña, Carmina; Beeken, Rebecca J.

    2015-01-01

    Objective The present study aimed to validate the English version of the Processes of Change questionnaire in weight management (P-Weight). Methods Participants were 1,087 UK adults, including people enrolled in a behavioural weight management programme, university students and an opportunistic sample. The mean age of the sample was 34.80 (SD = 13.56) years, and 83% were women. BMI ranged from 18.51 to 55.36 (mean = 25.92, SD = 6.26) kg/m2. Participants completed both the stages and processes questionnaires in weight management (S-Weight and P-Weight), and subscales from the EDI-2 and EAT-40. A refined version of the P-Weight consisting of 32 items was obtained based on the item analysis. Results The internal structure of the scale fitted a four-factor model, and statistically significant correlations with external measures supported the convergent validity of the scale. Conclusion The adequate psychometric properties of the P-Weight English version suggest that it could be a useful tool to tailor weight management interventions. PMID:25765163

  5. Weight of the Shortest Path to the First Encountered Peer in a Peer Group of Size m

    NARCIS (Netherlands)

    Van Mieghem, P.; Tang, S.

    We model the weight (e.g. delay, distance or cost) from an arbitrary node to the nearest (in weight) peer in a peer-to-peer (P2P) network. The exact probability generating function and an asymptotic analysis are presented for a random graph with i.i.d. exponential link weights. The asymptotic
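
    The quantity being modelled is easy to estimate by simulation, which makes a useful check on such analytic results. The sketch below uses an Erdős-Rényi graph with i.i.d. Exp(1) link weights; the graph model and sizes are illustrative stand-ins for the paper's setting.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(2)

        def weight_to_nearest_peer(n=200, p=0.05, m=10):
            """One realization: shortest-path weight from node 0 to the
            nearest (in weight) of m randomly chosen peers."""
            G = nx.gnp_random_graph(n, p, seed=int(rng.integers(10 ** 9)))
            for u, v in G.edges:
                G[u][v]["weight"] = rng.exponential(1.0)
            peers = rng.choice(np.arange(1, n), size=m, replace=False)
            dist = nx.single_source_dijkstra_path_length(G, 0, weight="weight")
            return min(dist.get(int(q), np.inf) for q in peers)

        print(np.mean([weight_to_nearest_peer() for _ in range(100)]))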

  6. The attention-weighted sample-size model of visual short-term memory

    DEFF Research Database (Denmark)

    Smith, Philip L.; Lilburn, Simon D.; Corbett, Elaine A.

    2016-01-01

    exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items...

  7. Particle filtering with path sampling and an application to a bimodal ocean current model

    International Nuclear Information System (INIS)

    Weare, Jonathan

    2009-01-01

    This paper introduces a recursive particle filtering algorithm designed to filter high-dimensional systems with complicated non-linear and non-Gaussian effects. The method incorporates a parallel marginalization (PMMC) step in conjunction with the hybrid Monte Carlo (HMC) scheme to improve samples generated by standard particle filters. Parallel marginalization is an efficient Markov chain Monte Carlo (MCMC) strategy that uses lower-dimensional approximate marginal distributions of the target distribution to accelerate equilibration. As validation, the algorithm is tested on a 2516-dimensional, bimodal stochastic model motivated by the Kuroshio current that runs along the Japanese coast. The results of this test indicate that the method is an attractive alternative for problems that require the generality of a particle filter but have been inaccessible due to the limitations of standard particle filtering strategies.
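
    For contrast with the PMMC/HMC-enhanced filter, the baseline bootstrap particle filter, whose weight degeneracy in high dimensions motivates the paper, takes only a few lines on a toy scalar model (the model and noise levels here are invented):

        import numpy as np

        rng = np.random.default_rng(3)

        def bootstrap_pf(obs, n=1000, sx=0.5, sy=0.3):
            """Propagate / weight / resample for x_t = 0.9 x_{t-1} + N(0, sx^2),
            observed as y_t = x_t + N(0, sy^2)."""
            x = rng.standard_normal(n)
            means = []
            for y in obs:
                x = 0.9 * x + sx * rng.standard_normal(n)    # propagate
                logw = -0.5 * ((y - x) / sy) ** 2            # Gaussian likelihood
                w = np.exp(logw - logw.max()); w /= w.sum()
                x = x[rng.choice(n, n, p=w)]                 # multinomial resample
                means.append(x.mean())
            return np.array(means)

        # synthetic observations from the same model
        xt, ys = 0.0, []
        for _ in range(50):
            xt = 0.9 * xt + 0.5 * rng.standard_normal()
            ys.append(xt + 0.3 * rng.standard_normal())
        print(bootstrap_pf(np.array(ys))[-5:])   # filtered means track x_t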

  8. Examination of weight control practices in a non-clinical sample of college women.

    Science.gov (United States)

    Hayes, S; Napolitano, M A

    2012-09-01

    The current study examined healthy weight control practices among a sample of college women enrolled at an urban university (N=715; age=19.87±1.16; 77.2% Caucasian; 13.4% African American, 7.2% Asian, 2.2% other races). Participants completed measures as part of an on-line study about health habits, behaviors, and attitudes. Items from the Three Factor Eating Questionnaire were selected and evaluated with exploratory factor analysis to create a healthy weight control practices scale. Results revealed that college women, regardless of weight status, used a comparable number (four of eight) of practices. Examination of racial differences between Caucasian and African American women revealed that normal weight African American women used significantly fewer strategies than Caucasian women. Of note, greater use of healthy weight control practices was associated with higher cognitive restraint, drive for thinness, minutes of physical activity, and more frequent use of compensatory strategies. Higher scores on measures of binge and disinhibited eating, body dissatisfaction, negative affect, and depressive symptoms were associated with greater use of healthy weight control practices by underweight/normal weight but not by overweight/obese college women. Results suggest that among a sample of college females, a combination of healthy and potentially unhealthy weight control practices occurs. Implications of the findings suggest the need for effective weight management and eating disorder prevention programs for this critical developmental life stage. Such programs should be designed to help students learn how to appropriately use healthy weight control practices, as motivations for use may vary by weight status.

  9. Probability Maps for the Visualization of Assimilation Ensemble Flow Data

    KAUST Repository

    Hollt, Thomas

    2015-05-25

    Ocean forecasts nowadays are created by running ensemble simulations in combination with data assimilation techniques. Most of these techniques resample the ensemble members after each assimilation cycle. This means that in a time series, after resampling, every member can follow up on any of the members before resampling. Tracking behavior over time, such as all possible paths of a particle in an ensemble vector field, becomes very difficult, as the number of combinations rises exponentially with the number of assimilation cycles. In general a single possible path is not of interest but only the probabilities that any point in space might be reached by a particle at some point in time. In this work we present an approach using probability-weighted piecewise particle trajectories to allow such a mapping interactively, instead of tracing quadrillions of individual particles. We achieve interactive rates by binning the domain and splitting up the tracing process into the individual assimilation cycles, so that particles that fall into the same bin after a cycle can be treated as a single particle with a larger probability as input for the next time step. As a result we lose the possibility to track individual particles, but can create probability maps for any desired seed at interactive rates.
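
    The binned bookkeeping that keeps the cost linear in the number of cycles can be shown on a one-dimensional ring of bins. Each "member" here simply shifts mass by a fixed number of bins; the shifts and member probabilities are invented for illustration.

        import numpy as np

        def cycle(prob, shifts, weights):
            """One assimilation cycle: every member advects the mass of each
            bin by its own shift; mass landing in the same bin is merged,
            weighted by the member's probability, instead of tracking each
            of the exponentially many member combinations."""
            new = np.zeros_like(prob)
            for s, w in zip(shifts, weights):
                new += w * np.roll(prob, s)
            return new

        prob = np.zeros(50); prob[25] = 1.0              # seed in bin 25
        shifts, weights = [1, 2, 3], [0.5, 0.3, 0.2]     # 3 toy members
        for _ in range(4):                               # four cycles
            prob = cycle(prob, shifts, weights)
        print(prob.sum(), prob.argmax())  # mass conserved; most probable bin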

  10. Behaviours associated with weight loss maintenance and regaining in a Mediterranean population sample. A qualitative study.

    Science.gov (United States)

    Karfopoulou, E; Mouliou, K; Koutras, Y; Yannakoulia, M

    2013-10-01

    In the US, the National Weight Control Registry revealed lifestyle behaviours shared by weight loss maintainers. In the US and the UK, qualitative studies compared the experiences of weight loss maintainers and regainers. High rates of physical activity, a low-energy/low-fat diet, weight self-monitoring, breakfast consumption and flexible control of eating are well-established maintenance behaviours. The Mediterranean lifestyle has not been studied relative to weight loss maintenance. This study focused on a sample of Greek maintainers and regainers. Maintainers emphasized home-cooked meals; their diet does not appear to be low-fat, as home-cooked Greek meals are rich in olive oil. Having a small dinner is a common strategy among maintainers. Health motives were not mentioned by maintainers. Maintainers, but not regainers, appeared to compensate for emotional eating. Weight loss maintenance is imperative to successful obesity treatment. We qualitatively explored lifestyle behaviours associated with weight regulation, in a sample of Greek volunteers who had lost weight and either maintained or regained it. A 10% intentional loss maintained for at least one year was considered successful maintenance. Volunteers (n = 44, 41% men) formed eight focus groups, four of maintainers and four of regainers. Questions regarded weight loss, weight maintenance or regaining, and beliefs on weight maintenance and regaining. All discussions were tape recorded. Maintainers lost weight on their own, whereas regainers sought professional help. Maintainers exercised during both the loss and maintenance phases, whereas regainers showed inconsistent physical activity levels. Health motives for weight loss were mentioned only by regainers. Emotional eating was a common barrier, but only maintainers compensated for it. Maintainers continuously applied specific strategies to maintain their weight: emphasizing home-cooked meals, high eating frequency, a small dinner, portion size

  11. Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses.

    Science.gov (United States)

    Liu, Ruijie; Holik, Aliaksei Z; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E; Asselin-Labat, Marie-Liesse; Smyth, Gordon K; Ritchie, Matthew E

    2015-09-03

    Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean-variance relationship of the log-counts-per-million using 'voom'. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source 'limma' package. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. APTIMA assay on SurePath liquid-based cervical samples compared to endocervical swab samples facilitated by a real time database

    Directory of Open Access Journals (Sweden)

    Khader Samer

    2010-01-01

    Background: Liquid-based cytology (LBC) cervical samples are increasingly being used to test for pathogens, including HPV, Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (GC), using nucleic acid amplification tests. Several reports have shown the accuracy of such testing on ThinPrep (TP) LBC samples. Fewer studies have evaluated SurePath (SP) LBC samples, which utilize a different specimen preservative. This study was undertaken to assess the performance of the Aptima Combo 2 Assay (AC2) for CT and GC on SP versus endocervical swab samples in our laboratory. Materials and Methods: The live pathology database of Montefiore Medical Center was searched for patients with AC2 endocervical swab specimens and SP Paps taken the same day. SP samples from CT- and/or GC-positive endocervical swab patients and randomly selected negative patients were studied. In each case, 1.5 ml of the residual SP vial sample, which was in SP preservative and stored at room temperature, was transferred within seven days of collection to APTIMA specimen transfer tubes without any sample or patient identifiers. Blind testing with the AC2 assay was performed on the Tigris DTS System (Gen-Probe, San Diego, CA). Finalized SP results were compared with the previously reported endocervical swab results for the entire group and separately for patients 25 years and younger and patients over 25 years. Results: SP specimens from 300 patients were tested. This included 181 swab CT-positive, 12 swab GC-positive, 7 CT and GC positive and 100 randomly selected swab CT and GC negative patients. Using the endocervical swab results as the patient's infection status, AC2 assay of the SP samples showed: CT sensitivity 89.3%, CT specificity 100.0%; GC sensitivity and specificity 100.0%. CT sensitivity for patients 25 years or younger was 93.1%, versus 80.7% for patients over 25 years, a statistically significant difference (P = 0.02). Conclusions: Our results show that AC2 assay of 1.5 ml SP

  13. On Ensemble Nonlinear Kalman Filtering with Symmetric Analysis Ensembles

    KAUST Repository

    Luo, Xiaodong

    2010-09-19

    The ensemble square root filter (EnSRF) [1, 2, 3, 4] is a popular method for data assimilation in high dimensional systems (e.g., geophysics models). Essentially the EnSRF is a Monte Carlo implementation of the conventional Kalman filter (KF) [5, 6]. It is mainly different from the KF at the prediction steps, where it is some ensembles, rather than the means and covariance matrices, of the system state that are propagated forward. In doing this, the EnSRF is computationally more efficient than the KF, since propagating a covariance matrix forward in high dimensional systems is prohibitively expensive. In addition, the EnSRF is also very convenient in implementation. By propagating the ensembles of the system state, the EnSRF can be directly applied to nonlinear systems without any change in comparison to the assimilation procedures in linear systems. However, by adopting the Monte Carlo method, the EnSRF also incurs certain sampling errors. One way to alleviate this problem is to introduce certain symmetry to the ensembles, which can reduce the sampling errors and spurious modes in evaluation of the means and covariances of the ensembles [7]. In this contribution, we present two methods to produce symmetric ensembles. One is based on the unscented transform [8, 9], which leads to the unscented Kalman filter (UKF) [8, 9] and its variant, the ensemble unscented Kalman filter (EnUKF) [7]. The other is based on Stirling’s interpolation formula (SIF), which results in the divided difference filter (DDF) [10]. Here we propose a simplified divided difference filter (sDDF) in the context of ensemble filtering. The similarity and difference between the sDDF and the EnUKF will be discussed. Numerical experiments will also be conducted to investigate the performance of the sDDF and the EnUKF, and compare them to a well-established EnSRF, the ensemble transform Kalman filter (ETKF) [2].
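
    A concrete example of a symmetric ensemble is the sigma-point set of the unscented transform mentioned here. A minimal sketch of the basic construction (the simple kappa parameterization only; variants rescale these points):

        import numpy as np

        def symmetric_sigma_ensemble(mean, cov, kappa=1.0):
            """2n+1 symmetric sigma points of the unscented transform with weights."""
            n = len(mean)
            S = np.linalg.cholesky((n + kappa) * cov)        # matrix square root
            pts = [mean] + [mean + S[:, i] for i in range(n)] \
                         + [mean - S[:, i] for i in range(n)]
            weights = np.r_[kappa / (n + kappa), np.full(2 * n, 0.5 / (n + kappa))]
            return np.array(pts), weights

        mean, cov = np.zeros(3), np.diag([1.0, 2.0, 0.5])
        pts, w = symmetric_sigma_ensemble(mean, cov)
        dev = pts - mean
        print(np.allclose(w @ pts, mean))                    # ensemble mean is exact
        print(np.allclose((w[:, None] * dev).T @ dev, cov))  # ensemble covariance is exact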

  14. Efficient sampling techniques for uncertainty quantification in history matching using nonlinear error models and ensemble level upscaling techniques

    KAUST Repository

    Efendiev, Y.

    2009-11-01

    Markov chain Monte Carlo (MCMC) is a rigorous sampling method for quantifying uncertainty in subsurface characterization. However, MCMC usually requires many flow and transport simulations to evaluate the posterior distribution and can be computationally expensive for fine-scale geological models. We propose a methodology that combines coarse- and fine-scale information to improve the efficiency of MCMC methods. The proposed method employs off-line computations for modeling the relation between coarse- and fine-scale error responses. This relation is modeled using nonlinear functions with prescribed error precisions, which are used for efficient sampling within the MCMC framework. We propose a two-stage MCMC in which inexpensive coarse-scale simulations are performed to determine whether or not to run the fine-scale (resolved) simulations. The latter is determined on the basis of a statistical model developed off-line. The proposed method is an extension of approaches considered earlier, where linear relations are used for modeling the response between coarse-scale and fine-scale models. The approach considered here does not rely on the proximity of the approximate and resolved models and can employ much coarser and more inexpensive models to guide the fine-scale simulations. Numerical results for three-phase flow and transport demonstrate the advantages, efficiency, and utility of the method for uncertainty assessment in history matching. Copyright 2009 by the American Geophysical Union.
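
    A minimal sketch of the two-stage idea, with stand-in one-dimensional "coarse" and "fine" posteriors (the real method uses coarse- and fine-scale flow simulations plus an off-line error model): a proposal must pass a cheap coarse-scale accept/reject test before the expensive fine-scale model is evaluated, and the second-stage correction preserves the exact target.

        import numpy as np

        rng = np.random.default_rng(1)

        def coarse_logpost(x):   # cheap surrogate (stand-in for a coarse-scale run)
            return -0.5 * (x - 1.0) ** 2 / 0.8

        def fine_logpost(x):     # expensive target (stand-in for a fine-scale run)
            return -0.5 * (x - 1.0) ** 2

        def two_stage_mcmc(x0, n_steps, step=1.0):
            x, chain, fine_evals = x0, [], 0
            lc, lf = coarse_logpost(x), fine_logpost(x)
            for _ in range(n_steps):
                y = x + step * rng.normal()
                lc_y = coarse_logpost(y)
                if np.log(rng.random()) < lc_y - lc:          # stage 1: coarse screen
                    lf_y = fine_logpost(y)
                    fine_evals += 1
                    if np.log(rng.random()) < (lf_y - lf) - (lc_y - lc):  # stage 2
                        x, lc, lf = y, lc_y, lf_y
                chain.append(x)
            return np.array(chain), fine_evals

        chain, n_fine = two_stage_mcmc(0.0, 5000)
        print(chain.mean(), n_fine)   # posterior mean near 1; fine evaluations << 5000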

  15. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform the reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, which is referred to as the ‘failure probability function (FPF)’. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology
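
    The FPF idea, failure probability expressed as a weighted sum over the samples of a single reliability analysis, can be sketched with likelihood-ratio weights for a toy limit state (my own construction; the paper's weighting scheme differs in detail):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)

        # Limit state: failure when g(X) = 3 - X < 0, with X ~ N(mu, 1); mu is the design.
        mu0 = 0.0                                    # design of the single reliability run
        x = rng.normal(mu0, 1.0, size=200_000)
        fail = (3.0 - x) < 0                          # failure indicator

        def fpf(mu):
            """Failure probability at design mu as a weighted sum over existing samples."""
            w = stats.norm.pdf(x, mu, 1.0) / stats.norm.pdf(x, mu0, 1.0)
            return np.mean(w * fail)

        for mu in (0.0, 0.5, 1.0):
            print(mu, fpf(mu), 1 - stats.norm.cdf(3.0 - mu))   # estimate vs exact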

  16. On Ensemble Nonlinear Kalman Filtering with Symmetric Analysis Ensembles

    KAUST Repository

    Luo, Xiaodong; Hoteit, Ibrahim; Moroz, Irene M.

    2010-01-01

    However, by adopting the Monte Carlo method, the EnSRF also incurs certain sampling errors. One way to alleviate this problem is to introduce certain symmetry to the ensembles, which can reduce the sampling errors and spurious modes in evaluation of the means and covariances of the ensembles [7]. In this contribution, we present two methods to produce symmetric ensembles. One is based on the unscented transform [8, 9], which leads to the unscented Kalman filter (UKF) [8, 9] and its variant, the ensemble unscented Kalman filter (EnUKF) [7]. The other is based on Stirling’s interpolation formula (SIF), which results in the divided difference filter (DDF) [10]. Here we propose a simplified divided difference filter (sDDF) in the context of ensemble filtering. The similarity and difference between the sDDF and the EnUKF will be discussed. Numerical experiments will also be conducted to investigate the performance of the sDDF and the EnUKF, and compare them to a well-established EnSRF, the ensemble transform Kalman filter (ETKF) [2].

  17. Perception of weight and psychological variables in a sample of Spanish adolescents

    Directory of Open Access Journals (Sweden)

    Jáuregui-Lobera I

    2011-06-01

    Ignacio Jáuregui-Lobera (1,2), Patricia Bolaños-Ríos (2), María José Santiago-Fernández (2), Olivia Garrido-Casals (2), Elsa Sánchez (3). (1) Department of Nutrition and Bromatology, Pablo de Olavide University, Seville, Spain; (2) Behavioral Sciences Institute, Seville, Spain; (3) Professional Schools Sagrada Familia, Écija, Seville, Spain. Background: This study explored the relationship between body mass index (BMI) and weight perception, self-esteem, positive body image, food beliefs, and mental health status, along with any gender differences in weight perception, in a sample of adolescents in Spain. Methods: The sample comprised 85 students (53 females and 32 males, mean age 17.4 ± 5.5 years) with no psychiatric history who were recruited from a high school in Écija, Seville. Weight and height were recorded for all participants, who were then classified according to whether they perceived themselves as slightly overweight, very overweight, very underweight, slightly underweight, or about the right weight, using the question “How do you think of yourself in terms of weight?”. Finally, a series of questionnaires were administered, including the Irrational Food Beliefs Scale, Body Appreciation Scale, Self Esteem Scale, and General Health Questionnaire. Results: Overall, 23.5% of participants misperceived their weight. Taking into account only those with a normal BMI (percentiles 5–85), there was a significant gender difference with respect to those who perceived themselves as overweight (slightly overweight and very overweight); 13.9% of females and 7.9% of males perceived themselves as overweight (χ2 = 3.957, P < 0.05). There was a significant difference for age, with participants who perceived their weight adequately being of mean age 16.34 ± 3.17 years and those who misperceived their weight being of mean age 18.50 ± 4.02 years (F = 3.112, P < 0.05). Conclusion: Misperception of overweight seems to be more frequent in female adolescents, and mainly among

  18. Childhood weight status and timing of first substance use in an ethnically diverse sample.

    Science.gov (United States)

    Duckworth, Jennifer C; Doran, Kelly A; Waldron, Mary

    2016-07-01

    We examined associations between weight status during childhood and timing of first cigarette, alcohol, and marijuana use in an ethnically diverse sample. Data were drawn from child respondents of the 1979 National Longitudinal Survey of Youth, including 1448 Hispanic, 2126 non-Hispanic Black, and 3304 non-Hispanic, non-Black (White) respondents aged 10 years and older as of last assessment. Cox proportional hazards regression was conducted predicting age at first use from weight status (obese, overweight, and underweight relative to healthy weight) assessed at ages 7/8, separately by substance class, sex, and race/ethnicity. Tests of interactions between weight status and respondent sex and race/ethnicity were also conducted. Compared to healthy-weight females of the same race/ethnicity, overweight Hispanic females were at increased likelihood of alcohol and marijuana use and overweight White females were at increased likelihood of cigarette and marijuana use. Compared to healthy-weight males of the same race/ethnicity, obese White males were at decreased likelihood of cigarette and alcohol use and underweight Hispanic and Black males were at decreased likelihood of alcohol and marijuana use. Significant differences in associations by sex and race/ethnicity were observed in tests of interactions. Findings highlight childhood weight status as a predictor of timing of first substance use among Hispanic and Non-Hispanic Black and White female and male youth. Results suggest that collapsing across sex and race/ethnicity, a common practice in prior research, may obscure important within-group patterns of associations and thus may be of limited utility for informing preventive and early intervention efforts. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
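
    A sketch of the kind of model described, using the survival-analysis package lifelines (an assumption; the record does not name software) on invented data with illustrative column names:

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(3)
        n = 500
        df = pd.DataFrame({
            "overweight": rng.integers(0, 2, n),          # childhood weight status
            "event": rng.integers(0, 2, n),               # 1 = first use observed
            "age_first_use": rng.uniform(10, 21, n),      # age at first use or censoring
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="age_first_use", event_col="event")
        print(cph.summary[["coef", "exp(coef)", "p"]])    # hazard ratio for weight status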

  19. Non-Boltzmann Ensembles and Monte Carlo Simulations

    International Nuclear Information System (INIS)

    Murthy, K. P. N.

    2016-01-01

    Boltzmann sampling based on the Metropolis algorithm has been extensively used for simulating a canonical ensemble and for calculating macroscopic properties of a closed system at desired temperatures. An estimate of a mechanical property, like energy, of an equilibrium system is made by averaging over a large number of microstates generated by Boltzmann Monte Carlo methods. This is possible because we can assign a numerical value for energy to each microstate. However, a thermal property like entropy is not easily accessible to these methods. The reason is simple. We cannot assign a numerical value for entropy to a microstate. Entropy is not a property associated with any single microstate. It is a collective property of all the microstates. Toward calculating entropy and other thermal properties, a non-Boltzmann Monte Carlo technique called umbrella sampling was proposed some forty years ago. Umbrella sampling has since undergone several metamorphoses and we now have multicanonical Monte Carlo, entropic sampling, flat histogram methods, the Wang-Landau algorithm, etc. This class of methods generates non-Boltzmann ensembles, which are unphysical. However, physical quantities can be calculated as follows. First, un-weight a microstate of the entropic ensemble; then re-weight it to the desired physical ensemble. Carry out a weighted average over the entropic ensemble to estimate physical quantities. In this talk I shall tell you of the most recent non-Boltzmann Monte Carlo method and show how to calculate free energy for a few systems. We first consider estimation of free energy as a function of energy at different temperatures to characterize the phase transition in a hairpin DNA in the presence of an unzipping force. Next we consider free energy as a function of order parameter, and to this end we estimate the density of states g(E, M) as a function of both energy E and order parameter M. This is carried out in two stages. We estimate g(E) in the first stage
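
    The un-weight/re-weight step can be shown in a few lines. A toy sketch (my own construction, not from the talk): for N non-interacting two-level spins the density of states g(E) is binomial, so samples from a flat-histogram (entropic) run can be re-weighted by g(E)exp(-beta*E) to recover canonical averages:

        import numpy as np
        from math import comb

        rng = np.random.default_rng(4)
        N = 20                                                           # two-level spins
        g = np.array([comb(N, k) for k in range(N + 1)], dtype=float)    # g(E) = C(N, E)

        # Stand-in output of an entropic run: energy levels visited uniformly.
        E = rng.integers(0, N + 1, size=100_000)

        def canonical_average(beta):
            """Re-weight entropic samples by g(E)*exp(-beta*E) to get canonical <E>."""
            w = g[E] * np.exp(-beta * E)
            return np.sum(w * E) / np.sum(w)

        beta = 0.5
        print(canonical_average(beta), N / (1.0 + np.exp(beta)))   # estimate vs exact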

  20. Analysis of inconsistent source sampling in monte carlo weight-window variance reduction methods

    Directory of Open Access Journals (Sweden)

    David P. Griesheimer

    2017-09-01

    The application of Monte Carlo (MC) to large-scale fixed-source problems has recently become possible with new hybrid methods that automate generation of parameters for variance reduction techniques. Two common variance reduction techniques, weight windows and source biasing, have been automated and popularized by the consistent adjoint-driven importance sampling (CADIS) method. This method uses the adjoint solution from an inexpensive deterministic calculation to define a consistent set of weight windows and source particles for a subsequent MC calculation. One of the motivations for source consistency is to avoid the splitting or rouletting of particles at birth, which requires computational resources. However, it is not always possible or desirable to implement such consistency, which results in inconsistent source biasing. This paper develops an original framework that mathematically expresses the coupling of the weight window and source biasing techniques, allowing the authors to explore the impact of inconsistent source sampling on the variance of MC results. A numerical experiment supports this new framework and suggests that certain classes of problems may be relatively insensitive to inconsistent source sampling schemes with moderate levels of splitting and rouletting.
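
    For reference, the splitting/rouletting that a weight window performs on a single particle can be sketched as follows (a generic textbook scheme, not the paper's framework; the window bounds and split cap are arbitrary). Both operations preserve the expected total weight, which is the "fair game" property the variance analysis relies on:

        import numpy as np

        rng = np.random.default_rng(5)

        def apply_weight_window(weight, w_low, w_high, n_max=5):
            """Split or roulette one particle so its weight lands inside [w_low, w_high]."""
            if weight > w_high:                          # split a heavy particle
                n = min(int(np.ceil(weight / w_high)), n_max)
                return [weight / n] * n
            if weight < w_low:                           # roulette a light particle
                return [w_low] if rng.random() < weight / w_low else []
            return [weight]                              # already inside the window

        # Expected total weight is preserved on average:
        bank = [apply_weight_window(w, 0.5, 2.0) for w in rng.uniform(0.01, 8.0, 100_000)]
        print(np.mean([sum(p) for p in bank]))           # close to the mean input weight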

  1. Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

    KAUST Repository

    Liu, Bo

    2015-11-11

    We consider the Bayesian filtering problem for data assimilation following the kernel-based ensemble Gaussian-mixture filtering (EnGMF) approach introduced by Anderson and Anderson (1999). In this approach, the posterior distribution of the system state is propagated with the model using the ensemble Monte Carlo method, providing a forecast ensemble that is then used to construct a prior Gaussian-mixture (GM) based on the kernel density estimator. This results in two update steps: a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, we analyze the influence of the bandwidth parameter of the kernel function on the covariance of the posterior distribution. We then focus on two aspects: i) the efficient implementation of EnGMF with (relatively) small ensembles, where we propose a new deterministic resampling strategy preserving the first two moments of the posterior GM to limit the sampling error; and ii) the analysis of the effect of the bandwidth parameter on contributions of KF and PF updates and on the weights variance. Numerical results using the Lorenz-96 model are presented to assess the behavior of EnGMF with deterministic resampling, study its sensitivity to different parameters and settings, and evaluate its performance against ensemble KFs. The proposed EnGMF approach with deterministic resampling suggests improved estimates in all tested scenarios, and is shown to require less localization and to be less sensitive to the choice of filtering parameters.

  2. Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation

    International Nuclear Information System (INIS)

    Helgesson, P.; Sjöstrand, H.; Koning, A.J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.

    2016-01-01

    In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to incorporate experimental information into the distributions for the ND. As the included number of correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In the practical cases studied, the estimates for the likelihood weights converge impractically slowly with the sample size, compared to matrix inversion. The computational time is estimated to be greater than for matrix inversion in cases with more experimental points, too. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to interpret intuitively than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work can both have pedagogical value and help motivate the conventional assumption of a multivariate Gaussian for experimental data. The sampling of systematic errors could also
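
    A sketch of the two likelihood computations being compared, for a toy model with one shared systematic error (all numbers invented): the sampled estimate converges to the multivariate-Gaussian likelihood as the number of sampled systematic errors grows.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)
        sig_r, sig_s = 0.1, 0.3                       # random and systematic std devs
        t = np.array([1.0, 1.2, 0.9, 1.1])            # model prediction
        y = np.array([1.35, 1.55, 1.20, 1.45])        # data sharing one systematic shift

        # Conventional route: multivariate Gaussian, requires the covariance inverse.
        cov = sig_r**2 * np.eye(4) + sig_s**2 * np.ones((4, 4))
        L_exact = stats.multivariate_normal.pdf(y, mean=t, cov=cov)

        # Sampled route: draw the systematic error, average the residual Gaussians.
        shifts = rng.normal(0.0, sig_s, size=100_000)
        dens = stats.norm.pdf(y[None, :], t[None, :] + shifts[:, None], sig_r)
        L_mc = np.prod(dens, axis=1).mean()
        print(L_exact, L_mc)                          # agree as the sample size grows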

  3. Integral transforms of the quantum mechanical path integral: Hit function and path-averaged potential

    Science.gov (United States)

    Edwards, James P.; Gerber, Urs; Schubert, Christian; Trejo, Maria Anabel; Weber, Axel

    2018-04-01

    We introduce two integral transforms of the quantum mechanical transition kernel that represent physical information about the path integral. These transforms can be interpreted as probability distributions on particle trajectories measuring respectively the relative contribution to the path integral from paths crossing a given spatial point (the hit function) and the likelihood of values of the line integral of the potential along a path in the ensemble (the path-averaged potential).

  4. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    Directory of Open Access Journals (Sweden)

    Ying Yan

    2017-01-01

    Due to the complexity of systems and lack of expertise, epistemic uncertainties may be present in experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the idea of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed based on the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty, which makes it broadly compatible, and it avoids both the difficulty of effectively fusing high-conflict group decision-making information and the large information loss that follows fusion. Original expert judgments are retained rather objectively throughout the procedure. Construction of the cumulative probability function and the random sampling process require no human intervention or judgment, and the method can easily be implemented in computer programs, giving it a clear advantage in evaluation practice for fairly large index systems.

  5. MVL spatiotemporal analysis for model intercomparison in EPS: application to the DEMETER multi-model ensemble

    Science.gov (United States)

    Fernández, J.; Primo, C.; Cofiño, A. S.; Gutiérrez, J. M.; Rodríguez, M. A.

    2009-08-01

    In a recent paper, Gutiérrez et al. (Nonlinear Process Geophys 15(1):109-114, 2008) introduced a new characterization of spatiotemporal error growth—the so-called mean-variance logarithmic (MVL) diagram—and applied it to study ensemble prediction systems (EPS); in particular, they analyzed single-model ensembles obtained by perturbing the initial conditions. In the present work, the MVL diagram is applied to multi-model ensembles, also analyzing the effect of differences in model formulation. To this aim, the MVL diagram is systematically applied to the multi-model ensemble produced in the EU-funded DEMETER project. It is shown that the shared building blocks (atmospheric and ocean components) impose similar dynamics among different models and, thus, contribute to poor sampling of the model formulation uncertainty. This dynamical similarity should be taken into account, at least as a pre-screening process, before applying any objective weighting method.

  6. Probability Maps for the Visualization of Assimilation Ensemble Flow Data

    KAUST Repository

    Hollt, Thomas; Hadwiger, Markus; Knio, Omar; Hoteit, Ibrahim

    2015-01-01

    resampling, every member can follow up on any of the members before resampling. Tracking behavior over time, such as all possible paths of a particle in an ensemble vector field, becomes very difficult, as the number of combinations rises exponentially

  7. Iterated Leavitt Path Algebras

    International Nuclear Information System (INIS)

    Hazrat, R.

    2009-11-01

    Leavitt path algebras associate to directed graphs a Z-graded algebra and in their simplest form recover the Leavitt algebras L(1,k). In this note, we introduce iterated Leavitt path algebras associated to directed weighted graphs which have natural ± Z grading and in their simplest form recover the Leavitt algebras L(n,k). We also characterize Leavitt path algebras which are strongly graded. (author)

  8. Multilevel ensemble Kalman filtering

    KAUST Repository

    Hoel, Hakon

    2016-06-14

    This work embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. The resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.
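
    A minimal sketch of the multilevel estimator on a scalar SDE (geometric Brownian motion with an Euler scheme): the estimate is the coarsest-level mean plus coupled coarse/fine correction terms, which is the core of the multilevel Monte Carlo step embedded in the filter (all parameters invented):

        import numpy as np

        rng = np.random.default_rng(7)

        def euler_pair(n_paths, n_steps, T=1.0, x0=1.0, a=-1.0, b=0.2):
            """Euler paths of dX = a*X dt + b*X dW on a fine grid, coupled with a
            half-resolution coarse grid driven by the same Brownian increments."""
            dt = T / n_steps
            dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
            xf = np.full(n_paths, x0)
            for k in range(n_steps):
                xf = xf + a * xf * dt + b * xf * dW[:, k]
            if n_steps == 1:
                return xf, None
            xc = np.full(n_paths, x0)
            dWc = dW[:, 0::2] + dW[:, 1::2]          # summed increments couple the grids
            for k in range(n_steps // 2):
                xc = xc + a * xc * (2 * dt) + b * xc * dWc[:, k]
            return xf, xc

        # Telescoping estimator of E[X_T]: level 0 plus per-level corrections.
        levels = [1, 2, 4, 8, 16]
        samples = [40_000, 20_000, 10_000, 5_000, 2_500]
        est = euler_pair(samples[0], levels[0])[0].mean()
        for n_steps, n_paths in zip(levels[1:], samples[1:]):
            xf, xc = euler_pair(n_paths, n_steps)
            est += (xf - xc).mean()
        print(est, np.exp(-1.0))                     # exact E[X_T] = x0 * exp(a*T)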

  9. Multilevel ensemble Kalman filtering

    KAUST Repository

    Hoel, Hakon; Law, Kody J. H.; Tempone, Raul

    2016-01-01

    This work embeds a multilevel Monte Carlo sampling strategy into the Monte Carlo step of the ensemble Kalman filter (EnKF) in the setting of finite dimensional signal evolution and noisy discrete-time observations. The signal dynamics is assumed to be governed by a stochastic differential equation (SDE), and a hierarchy of time grids is introduced for multilevel numerical integration of that SDE. The resulting multilevel EnKF is proved to asymptotically outperform EnKF in terms of computational cost versus approximation accuracy. The theoretical results are illustrated numerically.

  10. Accuracy of self-reported height, weight and waist circumference in a Japanese sample.

    Science.gov (United States)

    Okamoto, N; Hosono, A; Shibata, K; Tsujimura, S; Oka, K; Fujita, H; Kamiya, M; Kondo, F; Wakabayashi, R; Yamada, T; Suzuki, S

    2017-12-01

    Inconsistent results have been found in prior studies investigating the accuracy of self-reported waist circumference, and no study has investigated the validity of self-reported waist circumference among Japanese individuals. This study used the diagnostic standard of metabolic syndrome to assess the accuracy of individuals' self-reported height, weight and waist circumference in a Japanese sample. Study participants included 7,443 Japanese men and women aged 35-79 years. They participated in a cohort study's baseline survey between 2007 and 2011. Participants' height, weight and waist circumference were measured, and their body mass index was calculated. Self-reported values were collected through a questionnaire before the examination. Strong correlations between measured and self-reported values for height, weight and body mass index were detected. The correlation was lowest for waist circumference (men, 0.87; women, 0.73). Men significantly overestimated their waist circumference (mean difference, 0.8 cm), whereas women significantly underestimated theirs (mean difference, 5.1 cm). The sensitivity of self-reported waist circumference using the cut-off value of metabolic syndrome was 0.83 for men and 0.57 for women. Due to systematic and random errors, the accuracy of self-reported waist circumference was low. Therefore, waist circumference should be measured without relying on self-reported values, particularly in the case of women.

  11. Simultaneous determination of sample thickness, tilt, and electron mean free path using tomographic tilt images based on Beer-Lambert law.

    Science.gov (United States)

    Yan, Rui; Edwards, Thomas J; Pankratz, Logan M; Kuhn, Richard J; Lanman, Jason K; Liu, Jun; Jiang, Wen

    2015-11-01

    Cryo-electron tomography (cryo-ET) is an emerging technique that can elucidate the architecture of macromolecular complexes and cellular ultrastructure in a near-native state. Some important sample parameters, such as thickness and tilt, are needed for 3-D reconstruction. However, these parameters can currently only be determined using trial 3-D reconstructions. An accurate electron mean free path plays a significant role in modeling the image formation process, which is essential for simulating electron microscopy images and for model-based iterative 3-D reconstruction methods; however, its value is voltage- and sample-dependent and has only been experimentally measured for a limited number of sample conditions. Here, we report a computational method, tomoThickness, based on the Beer-Lambert law, to simultaneously determine the sample thickness, tilt and electron inelastic mean free path by solving an overdetermined nonlinear least squares optimization problem that exploits the strong constraints of tilt relationships. The method has been extensively tested with both stained and cryo datasets. The fitted electron mean free paths are consistent with reported experimental measurements. The accurate thickness estimation eliminates the need for a generous assignment of the Z-dimension size of the tomogram. Interestingly, we have also found that nearly all samples are tilted a few degrees relative to the electron beam. Compensation for the intrinsic sample tilt can result in horizontal structure and a reduced Z-dimension of tomograms. Our fast, pre-reconstruction method can thus provide important sample parameters that can help improve performance of tomographic reconstruction for a wide range of samples. Copyright © 2015 Elsevier Inc. All rights reserved.
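
    A toy version of the Beer-Lambert fit (not the tomoThickness code): taking logs, the intensities of a tilt series constrain the thickness-to-mean-free-path ratio and an intrinsic tilt offset; separating thickness from the mean free path requires the additional constraints the paper exploits. All numbers below are invented.

        import numpy as np
        from scipy.optimize import curve_fit

        def log_intensity(theta_deg, log_I0, t_over_mfp, tilt_offset_deg):
            """Beer-Lambert for a slab: path length grows as 1/cos(true tilt)."""
            theta = np.deg2rad(theta_deg + tilt_offset_deg)
            return log_I0 - t_over_mfp / np.cos(theta)

        rng = np.random.default_rng(8)
        tilts = np.arange(-60.0, 61.0, 5.0)                # nominal tilt angles (deg)
        true = (0.0, 0.8, 3.0)                             # log I0, t/mfp, 3 deg offset
        y = log_intensity(tilts, *true) + rng.normal(0, 0.01, tilts.size)

        popt, _ = curve_fit(log_intensity, tilts, y, p0=(0.1, 0.5, 0.0))
        print(popt)   # recovers log I0, thickness/mean-free-path ratio, intrinsic tilt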

  12. Risk-Based Sampling: I Don't Want to Weight in Vain.

    Science.gov (United States)

    Powell, Mark R

    2015-12-01

    Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.

  13. Bounded Memory, Inertia, Sampling and Weighting Model for Market Entry Games

    Directory of Open Access Journals (Sweden)

    Yi-Shan Lee

    2011-03-01

    This paper describes the “Bounded Memory, Inertia, Sampling and Weighting” (BI-SAW) model, which won the Market Entry Prediction Competition (http://sites.google.com/site/gpredcomp/) in 2010. The BI-SAW model refines the I-SAW model (Erev et al. [1]) by adding the assumption of a limited memory span. In particular, we assume that when players draw a small sample to weight against the average payoff of all past experience, they can only recall 6 trials of past experience. On the other hand, we keep all other key features of the I-SAW model: (1) reliance on a small sample of past experiences, (2) strong inertia and recency effects, and (3) surprise triggers change. We estimate this model using the first set of experimental results run by the competition organizers, and use it to predict the results of a second set of similar experiments later run by the organizers. We find significant improvement in out-of-sample predictability (against the I-SAW model) in terms of smaller mean normalized MSD, and this result is robust to resampling the predicted game set and reversing the roles of the sets of experimental results. Our model’s performance is the best among all the participants.

  14. World Music Ensemble: Kulintang

    Science.gov (United States)

    Beegle, Amy C.

    2012-01-01

    As instrumental world music ensembles such as steel pan, mariachi, gamelan and West African drums are becoming more the norm than the exception in North American school music programs, there are other world music ensembles just starting to gain popularity in particular parts of the United States. The kulintang ensemble, a drum and gong ensemble…

  15. Path Dependency

    OpenAIRE

    Mark Setterfield

    2015-01-01

    Path dependency is defined, and three different specific concepts of path dependency – cumulative causation, lock in, and hysteresis – are analyzed. The relationships between path dependency and equilibrium, and path dependency and fundamental uncertainty are also discussed. Finally, a typology of dynamical systems is developed to clarify these relationships.

  16. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    Science.gov (United States)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.

  17. Method validation using weighted linear regression models for quantification of UV filters in water samples.

    Science.gov (United States)

    da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues

    2015-01-01

    This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L⁻¹. The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
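
    Since the key point of the record is the weighted calibration model, here is a minimal numpy sketch of a weighted least-squares line fit under heteroscedastic noise, using the common empirical 1/x² weighting (concentrations and noise model are invented, not the paper's data):

        import numpy as np

        rng = np.random.default_rng(9)
        conc = np.array([10, 25, 50, 100, 250, 500, 1000], dtype=float)   # ng/L
        signal = 2.0 * conc + 5.0 + rng.normal(0, 0.05 * conc)   # variance grows with x

        w = 1.0 / conc**2                        # empirical 1/x^2 weights
        X = np.column_stack([np.ones_like(conc), conc])
        W = np.diag(w)
        intercept, slope = np.linalg.solve(X.T @ W @ X, X.T @ W @ signal)
        print(intercept, slope)                  # weighted fit favors the low end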

  18. A Communicative Model of Mothers’ Lifestyles During Pregnancy with Low Birth Weight Based on Social Determinants of Health: A Path Analysis

    Directory of Open Access Journals (Sweden)

Zohreh Mahmoodi

    2017-07-01

    Objectives: Low birth weight (LBW) is one of the major health problems worldwide. It is important to identify the factors that play a role in the incidence of this adverse pregnancy outcome. This study aimed to develop a tool to measure mothers’ lifestyles during pregnancy with a view to the effects of social determinants of health, and to develop a correlation model of mothers’ lifestyles with LBW. Methods: This study was conducted using methodological and case-control designs in four stages by selecting 750 mothers with infants weighing less than 4000 g using multistage sampling. The questionnaire contained 160 items. Face, content, criterion, and construct validity were used to study the psychometrics of the instrument. Results: After psychometric evaluation, 132 items were approved in six domains. Test results indicated the utility and the high fitness of the model and reasonable relationships among variables, adjusted on the basis of the conceptual model. In the correlation model of lifestyle, occupation (-0.263) and social relationships (0.248) had the greatest overall effects on birth weight. Conclusions: The review of lifestyle dimensions showed that all of the dimensions affected birth weight directly, indirectly, or both. Thus, given the importance of lifestyle as a determinant of birth weight, attention and training interventions are important to promote healthy lifestyles.

  19. Comparison of projection skills of deterministic ensemble methods using pseudo-simulation data generated from multivariate Gaussian distribution

    Science.gov (United States)

    Oh, Seok-Geun; Suh, Myoung-Seok

    2017-07-01

    The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved compared with the best member in each category. However, their projection skills are significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than the non-weighted methods, in particular for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, WEA_Tay and WEA_RAC showed relatively superior skills in both accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
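
    A minimal sketch of inverse-RMSE weighting of ensemble members, in the spirit of (but not identical to) WEA_RAC; the toy members differ only in their noise level, so the better members receive larger weights:

        import numpy as np

        def weighted_ensemble_mean(members, obs_train):
            """Weight each member by inverse RMSE against a training series."""
            rmse = np.sqrt(((members - obs_train) ** 2).mean(axis=1))
            w = (1.0 / rmse) / np.sum(1.0 / rmse)     # normalized inverse-RMSE weights
            return w, members.T @ w                   # weights and weighted mean series

        rng = np.random.default_rng(10)
        truth = np.sin(np.linspace(0, 6, 120))
        members = truth + rng.normal(0, [[0.1], [0.3], [0.8]], size=(3, 120))
        w, combo = weighted_ensemble_mean(members, truth)
        print(w)                                      # better members, larger weights
        print(np.sqrt(((combo - truth) ** 2).mean())) # RMSE of the weighted ensemble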

  20. Weighted piecewise LDA for solving the small sample size problem in face verification.

    Science.gov (United States)

    Kyperountas, Marios; Tefas, Anastasios; Pitas, Ioannis

    2007-03-01

    A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes are used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated throughout a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.

  1. Weighted fractional permutation entropy and fractional sample entropy for nonlinear Potts financial dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Kaixuan, E-mail: kaixuanxubjtu@yeah.net; Wang, Jun

    2017-02-26

    In this paper, the recently introduced permutation entropy and sample entropy are further developed to the fractional cases, weighted fractional permutation entropy (WFPE) and fractional sample entropy (FSE). The fractional-order generalization of information entropy is utilized in the above two complexity approaches to detect the statistical characteristics of fractional-order information in complex systems. The effectiveness analysis of the proposed methods on synthetic data and real-world data reveals that tuning the fractional order allows a high sensitivity and more accurate characterization of the signal evolution, which is useful in describing the dynamics of complex systems. Moreover, the numerical research on nonlinear complexity behaviors is compared between the returns series of the Potts financial model and actual stock markets, and the empirical results confirm the feasibility of the proposed model. - Highlights: • Two new entropy approaches for estimation of nonlinear complexity are proposed for the financial market. • Effectiveness analysis of the proposed methods is presented and their respective features are studied. • Empirical research of the proposed analysis on seven world financial market indices. • Numerical simulation of Potts financial dynamics is performed for nonlinear complexity behaviors.
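
    For orientation, the non-fractional building block, weighted permutation entropy, is easy to state in code; the following sketch uses the common variance-weighted definition (the paper's fractional generalization further modifies the entropy functional itself):

        import numpy as np
        from itertools import permutations
        from math import factorial

        def weighted_permutation_entropy(x, m=3, tau=1):
            """Variance-weighted permutation entropy, normalized to [0, 1]."""
            counts = {p: 0.0 for p in permutations(range(m))}
            for i in range(len(x) - (m - 1) * tau):
                window = x[i : i + m * tau : tau]
                counts[tuple(np.argsort(window))] += np.var(window)  # weight = variance
            p = np.array([v for v in counts.values() if v > 0])
            p /= p.sum()
            return -(p * np.log(p)).sum() / np.log(factorial(m))

        rng = np.random.default_rng(11)
        print(weighted_permutation_entropy(rng.normal(size=2000)))          # noise: near 1
        print(weighted_permutation_entropy(np.sin(0.1 * np.arange(2000))))  # regular: low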

  2. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
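
    A sketch of the WBMS idea as described (inclusion probabilities act as variable weights; one iteration samples sub-models, keeps the best, and updates the weights); the scoring function and all parameters here are invented:

        import numpy as np

        rng = np.random.default_rng(12)

        def wbms(weights, n_models):
            """Weighted binary matrix sampling: row j includes variable i w.p. weights[i]."""
            return rng.random((n_models, len(weights))) < weights

        def score(mask):
            """Toy fitness (lower is better): variables 0-4 are informative, rest are noise."""
            return 5 - mask[:5].sum() + 0.2 * mask[5:].sum()

        def update_weights(weights, n_models=1000, top_frac=0.1):
            """One iteration: sample sub-models, keep the best, set weights to frequencies."""
            B = wbms(weights, n_models)
            scores = np.array([score(row) for row in B])
            best = B[np.argsort(scores)[: int(top_frac * n_models)]]
            return best.mean(axis=0)

        w = np.full(20, 0.5)                      # start every variable at weight 0.5
        for _ in range(10):
            w = update_weights(w)
        print(np.round(w, 2))                     # weights of variables 0-4 approach 1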

  3. Weighted fractional permutation entropy and fractional sample entropy for nonlinear Potts financial dynamics

    International Nuclear Information System (INIS)

    Xu, Kaixuan; Wang, Jun

    2017-01-01

    In this paper, recently introduced permutation entropy and sample entropy are further developed to the fractional cases, weighted fractional permutation entropy (WFPE) and fractional sample entropy (FSE). The fractional order generalization of information entropy is utilized in the above two complexity approaches, to detect the statistical characteristics of fractional order information in complex systems. The effectiveness analysis of proposed methods on the synthetic data and the real-world data reveals that tuning the fractional order allows a high sensitivity and more accurate characterization to the signal evolution, which is useful in describing the dynamics of complex systems. Moreover, the numerical research on nonlinear complexity behaviors is compared between the returns series of Potts financial model and the actual stock markets. And the empirical results confirm the feasibility of the proposed model. - Highlights: • Two new entropy approaches for estimation of nonlinear complexity are proposed for the financial market. • Effectiveness analysis of proposed methods is presented and their respective features are studied. • Empirical research of proposed analysis on seven world financial market indices. • Numerical simulation of Potts financial dynamics is preformed for nonlinear complexity behaviors.

  4. Impacts of calibration strategies and ensemble methods on ensemble flood forecasting over Lanjiang basin, Southeast China

    Science.gov (United States)

    Liu, Li; Xu, Yue-Ping

    2017-04-01

    Ensemble flood forecasting driven by numerical weather prediction products is becoming more commonly used in operational flood forecasting applications. In this study, a hydrological ensemble flood forecasting system based on the Variable Infiltration Capacity (VIC) model and quantitative precipitation forecasts from the TIGGE dataset is constructed for the Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated. The hydrological model is optimized by a parallel-programmed ɛ-NSGAII multi-objective algorithm, and two separately parameterized models are determined to simulate daily flows and peak flows, coupled with a modular approach. The results indicate that the ɛ-NSGAII algorithm permits more efficient optimization and rational determination of parameter settings. It is demonstrated that the multi-model ensemble streamflow mean has better skill than the best single-model ensemble mean (ECMWF), and that multi-model ensembles weighted on members and skill scores outperform other multi-model ensembles. For a typical flood event, it is shown that the flood can be predicted 3-4 days in advance, but the flows in the rising limb can be captured only 1-2 days ahead due to their flash-flood character. With respect to peak flows selected by the Peaks Over Threshold approach, the ensemble means from either single models or multi-models are generally underestimated, as the extreme values are smoothed out by the ensemble process.

  5. Active pharmaceutical ingredients detected in herbal food supplements for weight loss samples on the Dutch market

    NARCIS (Netherlands)

    Reeuwijk, N.M.; Venhuis, B.J.; Kaste, de D.; Hoogenboom, L.A.P.; Rietjens, I.; Martena, M.J.

    2014-01-01

    Herbal food supplements claiming to reduce weight may contain active pharmaceutical ingredients (APIs) that can be used for the treatment of overweight and obesity. The aim of this study was to determine whether herbal food supplements for weight loss on the Dutch market contain APIs with weight

  6. Ensemble Kalman filtering with residual nudging

    KAUST Repository

    Luo, X.; Hoteit, Ibrahim

    2012-01-01

    Covariance inflation and localisation are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF) by (in effect) adjusting the sample covariances of the estimates in the state space. In this work

  7. Multivariate localization methods for ensemble Kalman filtering

    KAUST Repository

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, Marc G.

    2015-01-01

    the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function
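
    The construction described in this fragment, localizing the sample covariance via a Schur product with a distance-dependent correlation matrix, can be sketched directly. The Gaspari-Cohn fifth-order function used below is the common choice of localization function; the grid, radius, and ensemble size are invented:

        import numpy as np

        def gaspari_cohn(z):
            """Gaspari-Cohn compactly supported correlation; z = distance / radius."""
            z = np.abs(z)
            f = np.zeros_like(z)
            m1 = z <= 1.0
            m2 = (z > 1.0) & (z < 2.0)
            z1, z2 = z[m1], z[m2]
            f[m1] = -0.25*z1**5 + 0.5*z1**4 + 0.625*z1**3 - (5/3)*z1**2 + 1
            f[m2] = (1/12)*z2**5 - 0.5*z2**4 + 0.625*z2**3 + (5/3)*z2**2 \
                    - 5*z2 + 4 - (2/3)/z2
            return f

        rng = np.random.default_rng(13)
        n, n_ens, radius = 40, 10, 5.0
        X = rng.normal(size=(n, n_ens))                 # small ensemble on a 1-D grid
        P = np.cov(X)                                   # noisy sample covariance
        dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        P_loc = P * gaspari_cohn(dist / radius)         # Schur (element-wise) product
        print(np.round(P_loc[0, :8], 2))                # long-range spurious terms damped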

  8. Canonical sampling of a lattice gas

    International Nuclear Information System (INIS)

    Mueller, W.F.

    1997-01-01

    It is shown that a sampling algorithm, recently proposed in conjunction with a lattice-gas model of nuclear fragmentation, samples the canonical ensemble only in an approximate fashion. A residual weight factor has to be taken into account to calculate correct thermodynamic averages. Then, however, the algorithm is numerically inefficient. copyright 1997 The American Physical Society

  9. Optimized expanded ensembles for simulations involving molecular insertions and deletions. II. Open systems

    Science.gov (United States)

    Escobedo, Fernando A.

    2007-11-01

    In the Grand Canonical, osmotic, and Gibbs ensembles, chemical potential equilibrium is attained via transfers of molecules between the system and either a reservoir or another subsystem. In this work, the expanded ensemble (EXE) methods described in part I [F. A. Escobedo and F. J. Martínez-Veracoechea, J. Chem. Phys. 127, 174103 (2007)] of this series are extended to these ensembles to overcome the difficulties associated with implementing such whole-molecule transfers. In EXE, such moves occur via a target molecule that undergoes transitions through a number of intermediate coupling states. To minimize the tunneling time between the fully coupled and fully decoupled states, the intermediate states could be either: (i) sampled with an optimal frequency distribution (the sampling problem) or (ii) selected with an optimal spacing distribution (staging problem). The sampling issue is addressed by determining the biasing weights that would allow generating an optimal ensemble; discretized versions of this algorithm (well suited for small number of coupling stages) are also presented. The staging problem is addressed by selecting the intermediate stages in such a way that a flat histogram is the optimized ensemble. The validity of the advocated methods is demonstrated by their application to two model problems, the solvation of large hard spheres into a fluid of small and large spheres, and the vapor-liquid equilibrium of a chain system.

  10. Multivariate localization methods for ensemble Kalman filtering

    OpenAIRE

    S. Roh; M. Jun; I. Szunyogh; M. G. Genton

    2015-01-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of ...

  11. Sparsity-weighted outlier FLOODing (OFLOOD) method: Efficient rare event sampling method using sparsity of distribution.

    Science.gov (United States)

    Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru

    2016-03-30

    As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates that have high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice assigns each outlier a rank in the hierarchy, which relates to sparsity in the distribution. In this study, we define lower-rank (first-ranked), medium-rank (second-ranked), and highest-rank (third-ranked) outliers, respectively. For instance, the first-ranked outliers are located in regions of the conformational space away from the clusters (highly sparse distribution), whereas the third-ranked outliers lie near the clusters (a moderately sparse distribution). To achieve the conformational search efficiently, resampling from the outliers with a given rank is performed. As demonstrations, this method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD highly accelerated the exploration of conformational space by expanding the edges. In contrast, the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of the sampled snapshots, free energy calculations were performed in combination with umbrella sampling, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.

  12. Ensemble-based Kalman Filters in Strongly Nonlinear Dynamics

    Institute of Scientific and Technical Information of China (English)

    Zhaoxia PU; Joshua HACKER

    2009-01-01

    This study examines the effectiveness of ensemble Kalman filters in data assimilation with the strongly nonlinear dynamics of the Lorenz-63 model, and in particular their use in predicting the regime transition that occurs when the model jumps from one basin of attraction to the other. Four configurations of the ensemble-based Kalman filtering data assimilation techniques, including the ensemble Kalman filter, ensemble adjustment Kalman filter, ensemble square root filter and ensemble transform Kalman filter, are evaluated with their ability in predicting the regime transition (also called phase transition) and also are compared in terms of their sensitivity to both observational and sampling errors. The sensitivity of each ensemble-based filter to the size of the ensemble is also examined.

  13. Examining weight and eating behavior by sexual orientation in a sample of male veterans.

    Science.gov (United States)

    Bankoff, Sarah M; Richards, Lauren K; Bartlett, Brooke; Wolf, Erika J; Mitchell, Karen S

    2016-07-01

    Eating disorders are understudied in men and in sexual minority populations; however, extant evidence suggests that gay men have higher rates of disordered eating than heterosexual men. The present study examined the associations between sexual orientation, body mass index (BMI), disordered eating behaviors, and food addiction in a sample of male veterans. Participants included 642 male veterans from the Knowledge Networks-GfK Research Panel. They were randomly selected from a larger study based on previously reported trauma exposure; 96% identified as heterosexual. Measures included the Eating Disorder Diagnostic Scale, the Yale Food Addiction Scale, and self-reported height and weight. Heterosexual and sexual minority men did not differ significantly in terms of BMI. However, gay and bisexual men (n=24) endorsed significantly greater eating disorder symptoms and food addiction compared to heterosexual men. Our findings that sexual minority male veterans may be more likely to experience eating disorder and food addiction symptoms compared to heterosexual male veterans highlight the importance of prevention, assessment, and treatment efforts targeted to this population. Published by Elsevier Inc.

  14. Various multistage ensembles for prediction of heating energy consumption

    Directory of Open Access Journals (Sweden)

    Radisa Jovanovic

    2015-04-01

    Full Text Available Feedforward neural network models are created for prediction of the daily heating energy consumption of the NTNU university campus Gloshaugen, using actual measured data for training and testing. An improvement in prediction accuracy is proposed by using a neural network ensemble. Previously trained feedforward neural networks are first separated into clusters, using the k-means algorithm, and then the best network of each cluster is chosen as a member of the ensemble. Two conventional averaging methods for obtaining the ensemble output are applied: simple and weighted. In order to achieve better prediction results, a multistage ensemble is investigated. As a second level, adaptive neuro-fuzzy inference systems with various clustering and membership functions are used to aggregate the selected ensemble members. A feedforward neural network as the second stage is also analyzed. It is shown that an ensemble of neural networks can predict heating energy consumption with better accuracy than the best trained single neural network, while the best results are achieved with the multistage ensemble.
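
    The first-stage construction (cluster the trained networks by their predictions, keep the best of each cluster, average the survivors) can be sketched as follows; the data, network sizes, and cluster count are invented, and the neuro-fuzzy second stage is omitted.

      # Cluster-then-select ensemble on synthetic regression data.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(400, 3))
      y = X[:, 0]**2 + np.sin(3*X[:, 1]) + 0.1*rng.normal(size=400)
      Xtr, ytr, Xval, yval = X[:300], y[:300], X[300:], y[300:]

      nets = [MLPRegressor(hidden_layer_sizes=(h,), max_iter=2000,
                           random_state=s).fit(Xtr, ytr)
              for s, h in enumerate([4, 8, 8, 16, 16, 32])]
      preds = np.array([n.predict(Xval) for n in nets])     # (nets, samples)

      # Group networks with similar behavior, keep the best of each group.
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(preds)
      members = [min((n for n, l in zip(nets, labels) if l == k),
                     key=lambda n: np.mean((n.predict(Xval) - yval)**2))
                 for k in range(3)]

      ensemble_pred = np.mean([m.predict(Xval) for m in members], axis=0)
      print("ensemble val MSE:", np.mean((ensemble_pred - yval)**2))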

  15. Determination of the populations and structures of multiple conformers in an ensemble from NMR data: Multiple-copy refinement of nucleic acid structures using floating weights

    International Nuclear Information System (INIS)

    Goerler, Adrian; Ulyanov, Nikolai B.; James, Thomas L.

    2000-01-01

    A new algorithm is presented for determination of structural conformers and their populations based on NMR data. Restrained Metropolis Monte Carlo simulations or restrained energy minimizations are performed for several copies of a molecule simultaneously. The calculations are restrained with dipolar relaxation rates derived from measured NOE intensities via complete relaxation matrix analysis. The novel feature of the algorithm is that the weights of individual conformers are determined in every refinement step, by the quadratic programming algorithm, in such a way that the restraint energy is minimized. Its design ensures that the calculated populations of the individual conformers are based only on experimental restraints. Presence of internally inconsistent restraints is the driving force for determination of distinct multiple conformers. The method is applied to various simulated test systems. Conformational calculations on nucleic acids are carried out using generalized helical parameters with the program DNAminiCarlo. From different mixtures of A- and B-DNA, minor fractions as low as 10% could be determined with restrained energy minimization. For B-DNA with three local conformers (C2'-endo, O4'-exo, C3'-endo), the minor O4'-exo conformer could not be reliably determined using NOE data typically measured for DNA. The other two conformers, C2'-endo and C3'-endo, could be reproduced by Metropolis Monte Carlo simulated annealing. The behavior of the algorithm in various situations is analyzed, and a number of refinement protocols are discussed. Prior to application of this algorithm to each experimental system, it is suggested that the presence of internal inconsistencies in experimental data be ascertained. In addition, because the performance of the algorithm depends on the type of conformers involved and experimental data available, it is advisable to carry out test calculations with simulated data modeling each experimental system studied.
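
    The per-step population fit can be viewed as a small constrained least-squares problem: find non-negative weights summing to one that minimize the mismatch between the population-averaged predictions and the restraints. The sketch below solves that problem with a generic SLSQP solver on synthetic data; it stands in for, and is not, the quadratic programming routine used inside DNAminiCarlo.

      # Floating-weights idea: refit conformer populations to minimize a
      # least-squares restraint penalty (all data synthetic).
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      n_conf, n_restraints = 3, 20
      A = rng.normal(size=(n_conf, n_restraints))   # per-conformer predicted rates
      w_true = np.array([0.7, 0.3, 0.0])
      target = w_true @ A + 0.01*rng.normal(size=n_restraints)

      def penalty(w):
          return np.sum((w @ A - target)**2)

      res = minimize(penalty, x0=np.full(n_conf, 1/n_conf),
                     bounds=[(0, 1)]*n_conf,
                     constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
                     method="SLSQP")
      print("recovered populations:", np.round(res.x, 3))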

  16. The BD Onclarity HPV assay on SurePath collected samples meets the International Guidelines for Human Papillomavirus Test Requirements for Cervical Screening

    DEFF Research Database (Denmark)

    Ejegod, Ditte; Bottari, Fabio; Pedersen, Helle

    2016-01-01

    This study describes a validation of the BD Onclarity HPV (Onclarity) assay using the international guidelines for HPV test requirements for cervical cancer screening of women 30 years and above, using Danish SurePath screening samples. The clinical specificity (0.90, 95% CI: 0.88-0.91) and sensitivity (0.97, 95% CI: 0.87-1.0) of the Onclarity assay were shown to be non-inferior to those of the reference assay (specificity 0.90, 95% CI: 0.88-0.92; sensitivity 0.98, 95% CI: 0.91-1.0). The intra-laboratory reproducibility of Onclarity was 97% with a lower confidence bound of 96% (kappa value: 0...

  17. Implicit and Explicit Weight Bias in a National Sample of 4732 Medical Students: The Medical Student CHANGES Study

    OpenAIRE

    Phelan, Sean M.; Dovidio, John F.; Puhl, Rebecca M.; Burgess, Diana J.; Nelson, David B.; Yeazel, Mark W.; Hardeman, Rachel; Perry, Sylvia; van Ryn, Michelle

    2014-01-01

    Objective: To examine the magnitude of explicit and implicit weight biases compared to biases against other groups, and to identify student factors predicting bias in a large national sample of medical students. Design and Methods: A web-based survey was completed by 4732 first-year medical students from 49 medical schools as part of a longitudinal study of medical education. The survey included a validated measure of implicit weight bias, the implicit association test, and two measures of explicit bias...

  18. Microcanonical ensemble formulation of lattice gauge theory

    International Nuclear Information System (INIS)

    Callaway, D.J.E.; Rahman, A.

    1982-01-01

    A new formulation of lattice gauge theory without explicit path integrals or sums is obtained by using the microcanonical ensemble of statistical mechanics. Expectation values in the new formalism are calculated by solving a large set of coupled, nonlinear, ordinary differential equations. The average plaquette for compact electrodynamics calculated in this fashion agrees with standard Monte Carlo results. Possible advantages of the microcanonical method in applications to fermionic systems are discussed

  19. The Ensembl REST API: Ensembl Data for Any Language.

    Science.gov (United States)

    Yates, Andrew; Beal, Kathryn; Keenan, Stephen; McLaren, William; Pignatelli, Miguel; Ritchie, Graham R S; Ruffier, Magali; Taylor, Kieron; Vullo, Alessandro; Flicek, Paul

    2015-01-01

    We present a Web service to access Ensembl data using Representational State Transfer (REST). The Ensembl REST server enables the easy retrieval of a wide range of Ensembl data by most programming languages, using standard formats such as JSON and FASTA while minimizing client work. We also introduce bindings to the popular Ensembl Variant Effect Predictor tool permitting large-scale programmatic variant analysis independent of any specific programming language. The Ensembl REST API can be accessed at http://rest.ensembl.org and source code is freely available under an Apache 2.0 license from http://github.com/Ensembl/ensembl-rest. © The Author 2014. Published by Oxford University Press.
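
    For example, a single lookup request against the documented endpoint can be issued from Python with the requests package (network access required; the gene identifier below is an arbitrary example, and the printed fields follow the service's documented lookup payload):

      # Query the Ensembl REST lookup endpoint for a gene record.
      import requests

      url = "https://rest.ensembl.org/lookup/id/ENSG00000157764"
      r = requests.get(url, headers={"Content-Type": "application/json"})
      r.raise_for_status()
      gene = r.json()
      print(gene["display_name"], gene["seq_region_name"], gene["start"], gene["end"])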

  20. Sampling strategies for the analysis of reactive low-molecular weight compounds in air

    NARCIS (Netherlands)

    Henneken, H.

    2006-01-01

    Within this thesis, new sampling and analysis strategies for the determination of airborne workplace contaminants have been developed. Special focus has been directed towards the development of air sampling methods that involve diffusive sampling. In an introductory overview, the current ...

  1. Path Expressions

    Science.gov (United States)

    1975-06-01

    Traditionally, synchronization of concurrent processes is coded in line by operations on semaphores or similar objects. Path expressions move the ... discussion about a variety of synchronization primitives. An analysis of their relative power is found in [3]. Path expressions do not introduce yet ... another synchronization primitive. A path expression relates to such primitives as a for- or while-statement of an ALGOL-like language relates to a JUMP.

  2. Molecular Weights of Bovine and Porcine Heparin Samples: Comparison of Chromatographic Methods and Results of a Collaborative Survey

    Directory of Open Access Journals (Sweden)

    Sabrina Bertini

    2017-07-01

    Full Text Available In a collaborative study involving six laboratories in the USA, Europe, and India the molecular weight distributions of a panel of heparin sodium samples were determined, in order to compare heparin sodium of bovine intestinal origin with that of bovine lung and porcine intestinal origin. Porcine samples met the current criteria as laid out in the USP Heparin Sodium monograph. Bovine lung heparin samples had consistently lower average molecular weights. Bovine intestinal heparin was variable in molecular weight; some samples fell below the USP limits, some fell within these limits and others fell above the upper limits. These data will inform the establishment of pharmacopeial acceptance criteria for heparin sodium derived from bovine intestinal mucosa. The method for MW determination as described in the USP monograph uses a single, broad standard calibrant to characterize the chromatographic profile of heparin sodium on high-resolution silica-based GPC columns. These columns may be short-lived in some laboratories. Using the panel of samples described above, methods based on the use of robust polymer-based columns have been developed. In addition to the use of the USP’s broad standard calibrant for heparin sodium with these columns, a set of conditions have been devised that allow light-scattering detected molecular weight characterization of heparin sodium, giving results that agree well with the monograph method. These findings may facilitate the validation of variant chromatographic methods with some practical advantages over the USP monograph method.

  3. Molecular Weights of Bovine and Porcine Heparin Samples: Comparison of Chromatographic Methods and Results of a Collaborative Survey.

    Science.gov (United States)

    Bertini, Sabrina; Risi, Giulia; Guerrini, Marco; Carrick, Kevin; Szajek, Anita Y; Mulloy, Barbara

    2017-07-19

    In a collaborative study involving six laboratories in the USA, Europe, and India the molecular weight distributions of a panel of heparin sodium samples were determined, in order to compare heparin sodium of bovine intestinal origin with that of bovine lung and porcine intestinal origin. Porcine samples met the current criteria as laid out in the USP Heparin Sodium monograph. Bovine lung heparin samples had consistently lower average molecular weights. Bovine intestinal heparin was variable in molecular weight; some samples fell below the USP limits, some fell within these limits and others fell above the upper limits. These data will inform the establishment of pharmacopeial acceptance criteria for heparin sodium derived from bovine intestinal mucosa. The method for MW determination as described in the USP monograph uses a single, broad standard calibrant to characterize the chromatographic profile of heparin sodium on high-resolution silica-based GPC columns. These columns may be short-lived in some laboratories. Using the panel of samples described above, methods based on the use of robust polymer-based columns have been developed. In addition to the use of the USP's broad standard calibrant for heparin sodium with these columns, a set of conditions have been devised that allow light-scattering detected molecular weight characterization of heparin sodium, giving results that agree well with the monograph method. These findings may facilitate the validation of variant chromatographic methods with some practical advantages over the USP monograph method.

  4. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    OpenAIRE

    Yan, Ying; Suo, Bin

    2017-01-01

    Due to the complexity of system and lack of expertise, epistemic uncertainties may present in the experts’ judgment on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the idea of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. Similarity matrix b...

  5. A path model analysis on predictors of dropout (at 6 and 12 months) during the weight loss interventions in endocrinology outpatient division.

    Science.gov (United States)

    Perna, Simone; Spadaccini, Daniele; Riva, Antonella; Allegrini, Pietro; Edera, Chiara; Faliva, Milena Anna; Peroni, Gabriella; Naso, Maurizio; Nichetti, Mara; Gozzer, Carlotta; Vigo, Beatrice; Rondanelli, Mariangela

    2018-02-22

    This study aimed to identify the dropout rate at 6 and 12 months from the first outpatient visit, and to analyze dropout risk factors among the following areas: biochemical examinations, anthropometric measures, psychological tests, personal data, and life attitudes such as smoking, physical activity, and pathologies. This is a retrospective longitudinal observational study. Patients underwent an outpatient endocrinology visit, which included collecting biographical data, anthropometric measurements, physical and pathological history, psychological tests, and biochemical examinations. The sample consists of 913 subjects (682 women and 231 men), with an average age of 50.88 years (±15.80) and a BMI of 33.11 ± 5.65 kg/m2. Of the patients, 51.9% abandoned therapy within 6 months of their first visit, and at 12 months 69.5% of subjects had abandoned therapy. The main predictor of dropout risk at 6 and 12 months was the weight loss during the first 3 months (p < ...). Patients who introduced physical activity had a 17% (at 6 months) and 13% (at 12 months) reduction in dropout risk (p < ...). Employed patients had a higher dropout risk than other categories of worker (i = 0.58; p < ...). Dropout risk at 12 months decreased in patients with a previous history of cancer or of endocrine and psychic and behavioral disorders (p < ...).

  6. Polarized ensembles of random pure states

    International Nuclear Information System (INIS)

    Cunden, Fabio Deelan; Facchi, Paolo; Florio, Giuseppe

    2013-01-01

    A new family of polarized ensembles of random pure states is presented. These ensembles are obtained by linear superposition of two random pure states with suitable distributions, and are quite manageable. We will use the obtained results for two purposes: on the one hand we will be able to derive an efficient strategy for sampling states from isopurity manifolds. On the other, we will characterize the deviation of a pure quantum state from separability under the influence of noise. (paper)
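
    A quick way to see the construction is to superpose two Haar-random states with fixed weights and inspect the ensemble-averaged density matrix. The sketch below does this numerically; the dimension, weight, and sample count are arbitrary choices, and the specific distributions analyzed in the paper are not reproduced.

      # Sample a "polarized" ensemble built from superposed Haar-random states
      # and estimate the purity of the ensemble-average density matrix.
      import numpy as np

      rng = np.random.default_rng(3)
      d, p, n_samples = 4, 0.8, 5000

      def haar_state(d):
          v = rng.normal(size=d) + 1j*rng.normal(size=d)
          return v / np.linalg.norm(v)

      rho = np.zeros((d, d), dtype=complex)
      for _ in range(n_samples):
          psi = np.sqrt(p)*haar_state(d) + np.sqrt(1-p)*haar_state(d)
          psi /= np.linalg.norm(psi)            # renormalize the superposition
          rho += np.outer(psi, psi.conj())
      rho /= n_samples
      print("purity of average state:", np.real(np.trace(rho @ rho)))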

  7. Polarized ensembles of random pure states

    Science.gov (United States)

    Deelan Cunden, Fabio; Facchi, Paolo; Florio, Giuseppe

    2013-08-01

    A new family of polarized ensembles of random pure states is presented. These ensembles are obtained by linear superposition of two random pure states with suitable distributions, and are quite manageable. We will use the obtained results for two purposes: on the one hand we will be able to derive an efficient strategy for sampling states from isopurity manifolds. On the other, we will characterize the deviation of a pure quantum state from separability under the influence of noise.

  8. Effect of sample matrix composition on INAA sample weights, measurement precisions, limits of detection, and optimum conditions

    International Nuclear Information System (INIS)

    Guinn, V.P.; Nakazawa, L.; Leslie, J.

    1984-01-01

    The instrumental neutron activation analysis (INAA) Advance Prediction Computer Program (APCP) is extremely useful in guiding one to optimum subsequent experimental analyses of samples of all types of matrices. By taking into account the contributions to the cumulative Compton-continuum levels from all significant induced gamma-emitting radionuclides, it provides good INAA advance estimates of detectable photopeaks, measurement precisions, concentration lower limits of detection (LOD's) and optimum irradiation/decay/counting conditions - as well as of the very important maximum allowable sample size for each set of conditions calculated. The usefulness and importance of the four output parameters cited in the title are discussed using the INAA APCP outputs for NBS SRM-1632 Coal as the example

  9. Musical ensembles in Ancient Mesopotamia

    NARCIS (Netherlands)

    Krispijn, T.J.H.; Dumbrill, R.; Finkel, I.

    2010-01-01

    Identification of musical instruments from ancient Mesopotamia by comparing musical ensembles attested in Sumerian and Akkadian texts with depicted ensembles. Lexicographical contributions to the Sumerian and Akkadian lexicon.

  10. Exploring the Relationship between Reward and Punishment Sensitivity and Gambling Disorder in a Clinical Sample: A Path Modeling Analysis.

    Science.gov (United States)

    Jiménez-Murcia, Susana; Fernández-Aranda, Fernando; Mestre-Bach, Gemma; Granero, Roser; Tárrega, Salomé; Torrubia, Rafael; Aymamí, Neus; Gómez-Peña, Mónica; Soriano-Mas, Carles; Steward, Trevor; Moragas, Laura; Baño, Marta; Del Pino-Gutiérrez, Amparo; Menchón, José M

    2017-06-01

    Most individuals will gamble during their lifetime, yet only a select few will develop gambling disorder. Gray's Reinforcement Sensitivity Theory holds promise for providing insight into gambling disorder etiology and symptomatology as it ascertains that neurobiological differences in reward and punishment sensitivity play a crucial role in determining an individual's affect and motives. The aim of the study was to assess a mediational pathway, which included patients' sex, personality traits, reward and punishment sensitivity, and gambling-severity variables. The Sensitivity to Punishment and Sensitivity to Reward Questionnaire, the South Oaks Gambling Screen, the Symptom Checklist-Revised, and the Temperament and Character Inventory-Revised were administered to a sample of gambling disorder outpatients (N = 831), diagnosed according to DSM-5 criteria, attending a specialized outpatient unit. Sociodemographic variables were also recorded. A structural equation model found that both reward and punishment sensitivity were positively and directly associated with increased gambling severity, sociodemographic variables, and certain personality traits while also revealing a complex mediational role for these dimensions. To this end, our findings suggest that the Sensitivity to Punishment and Sensitivity to Reward Questionnaire could be a useful tool for gaining a better understanding of different gambling disorder phenotypes and developing tailored interventions.

  11. Monthly ENSO Forecast Skill and Lagged Ensemble Size

    Science.gov (United States)

    Trenary, L.; DelSole, T.; Tippett, M. K.; Pegion, K.

    2018-04-01

    The mean square error (MSE) of a lagged ensemble of monthly forecasts of the Niño 3.4 index from the Climate Forecast System (CFSv2) is examined with respect to ensemble size and configuration. Although the real-time forecast is initialized 4 times per day, it is possible to infer the MSE for arbitrary initialization frequency and for burst ensembles by fitting error covariances to a parametric model and then extrapolating to arbitrary ensemble size and initialization frequency. Applying this method to real-time forecasts, we find that the MSE consistently reaches a minimum for a lagged ensemble size between one and eight days, when four initializations per day are included. This ensemble size is consistent with the 8-10 day lagged ensemble configuration used operationally. Interestingly, the skill of both ensemble configurations is close to the estimated skill of the infinite ensemble. The skill of the weighted, lagged, and burst ensembles is found to be comparable. Certain unphysical features of the estimated error growth were tracked down to problems with the climatology and data discontinuities.
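
    The extrapolation step can be mimicked with a toy parametric covariance: let the error variance grow with lag and let errors of nearby initializations correlate, and the MSE of the equal-weight lagged-ensemble mean traces out a minimum in ensemble size. All parameter values below are invented, not fitted to CFSv2.

      # MSE of an equal-weight lagged ensemble under a toy error covariance.
      import numpy as np

      sigma0, growth, rho = 1.0, 1.08, 0.6   # hypothetical fit parameters

      def mse_lagged(N):
          lags = np.arange(N)
          sig = sigma0 * growth**lags        # error std dev grows with lag
          C = rho**np.abs(lags[:, None] - lags[None, :]) * np.outer(sig, sig)
          return C.mean()                    # MSE of the equal-weight mean

      sizes = np.arange(1, 31)
      mses = [mse_lagged(N) for N in sizes]
      print("optimal lagged ensemble size:", sizes[int(np.argmin(mses))])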

  12. Effectiveness of prediction equations in estimating energy expenditure sample of Brazilian and Spanish women with excess body weight

    OpenAIRE

    Lopes Rosado, Eliane; Santiago de Brito, Roberta; Bressan, Josefina; Martínez Hernández, José Alfredo

    2014-01-01

    Objective: To assess the adequacy of predictive equations for the estimation of energy expenditure (EE), compared with EE measured by indirect calorimetry, in a sample of Brazilian and Spanish women with excess body weight. Methods: This is a cross-sectional study with 92 obese adult women [26 Brazilian (G1) and 66 Spanish (G2), aged 20-50]. Weight and height were evaluated during fasting for the calculation of body mass index and the predictive equations. EE was evaluated using open-circuit indirect...

  13. A census-weighted, spatially-stratified household sampling strategy for urban malaria epidemiology

    Directory of Open Access Journals (Sweden)

    Slutsker Laurence

    2008-02-01

    Full Text Available Background: Urban malaria is likely to become increasingly important as a consequence of the growing proportion of Africans living in cities. A novel sampling strategy was developed for urban areas to generate a sample simultaneously representative of population and inhabited environments. Such a strategy should facilitate analysis of important epidemiological relationships in this ecological context. Methods: Census maps and summary data for Kisumu, Kenya, were used to create a pseudo-sampling frame using the geographic coordinates of census-sampled structures. For every enumeration area (EA) designated as urban by the census (n = 535), a sample of structures equal to one-tenth the number of households was selected. In EAs designated as rural (n = 32), a geographically random sample totalling one-tenth the number of households was selected from a grid of points at 100 m intervals. The selected samples were cross-referenced to a geographic information system, and coordinates transferred to handheld global positioning units. Interviewers found the closest eligible household to the sampling point and interviewed the caregiver of a child aged under ... years. Results: 4,336 interviews were completed in 473 of the 567 study area EAs from June 2002 through February 2003. EAs without completed interviews were randomly distributed, and non-response was approximately 2%. Mean distance from the assigned sampling point to the completed interview was 74.6 m, and was significantly less in urban than rural EAs, even when controlling for number of households. The selected sample had significantly more children and females of childbearing age than the general population, and fewer older individuals. Conclusion: This method selected a sample that was simultaneously population-representative and inclusive of important environmental variation. The use of a pseudo-sampling frame and pre-programmed handheld GPS units is more efficient and may yield a more complete sample than ...

  14. Ensemble Data Mining Methods

    Science.gov (United States)

    Oza, Nikunj C.

    2004-01-01

    Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve better prediction accuracy than any of the individual models could on their own. The basic goal when designing an ensemble is the same as when establishing a committee of people: each member of the committee should be as competent as possible, but the members should be complementary to one another. If the members are not complementary, i.e., if they always agree, then the committee is unnecessary: any one member is sufficient. If the members are complementary, then when one or a few members make an error, the probability is high that the remaining members can correct this error. Research in ensemble methods has largely revolved around designing ensembles consisting of competent yet complementary models.
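
    The committee intuition admits a standard worked example: if N members err independently with probability e, the majority vote errs only when more than half do, which collapses quickly as N grows. The numbers below follow from the binomial formula under that independence idealization, not from the record itself.

      # Majority-vote error of N independent members, each with error rate e.
      from math import comb

      def majority_error(n, e):
          return sum(comb(n, k) * e**k * (1-e)**(n-k) for k in range((n//2)+1, n+1))

      print(majority_error(1, 0.3))    # 0.30  single model
      print(majority_error(5, 0.3))    # ~0.16 five complementary models
      print(majority_error(21, 0.3))   # ~0.026 twenty-one models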

  15. Ensemble Data Mining Methods

    Data.gov (United States)

    National Aeronautics and Space Administration — Ensemble Data Mining Methods, also known as Committee Methods or Model Combiners, are machine learning methods that leverage the power of multiple models to achieve...

  16. Comparison of Spot and Time Weighted Averaging (TWA) Sampling with SPME-GC/MS Methods for Trihalomethane (THM) Analysis

    Directory of Open Access Journals (Sweden)

    Don-Roger Parkinson

    2016-02-01

    Full Text Available Water samples were collected and analyzed for conductivity, pH, temperature and trihalomethanes (THMs) during the fall of 2014 at two monitored municipal drinking water source ponds. Both spot (or grab) and time weighted average (TWA) sampling methods were assessed over the same two-day sampling period. For spot sampling, replicate samples were taken at each site and analyzed within 12 h of sampling by both headspace (HS)- and direct (DI)- solid phase microextraction (SPME) sampling/extraction methods followed by gas chromatography/mass spectrometry (GC/MS). For TWA, a two-day passive on-site TWA sampling was carried out at the same sampling points in the ponds. All SPME sampling methods used a 65-µm PDMS/DVB SPME fiber, which was found optimal for THM sampling. Sampling conditions were optimized in the laboratory using calibration standards of chloroform, bromoform, bromodichloromethane, dibromochloromethane, 1,2-dibromoethane and 1,2-dichloroethane, prepared in aqueous solutions from analytical grade samples. Calibration curves for all methods with R2 values ranging from 0.985-0.998 (N = 5) over the quantitation linear range of 3-800 ppb were achieved. The different sampling methods were compared for quantification of the water samples, and results showed that DI- and TWA- sampling methods gave better data and analytical metrics. Addition of 10% wt./vol. of (NH4)2SO4 salt to the sampling vial was found to aid extraction of THMs by increasing GC peak areas by about 10%, which resulted in lower detection limits for all techniques studied. However, for on-site TWA analysis of THMs in natural waters, the calibration standards' ionic strength conditions must be carefully matched to natural water conditions to properly quantitate THM concentrations. The data obtained from the TWA method may better reflect actual natural water conditions.

  17. Loss of control eating and weight outcomes after bariatric surgery: a study with a Portuguese sample

    OpenAIRE

    Conceição, Eva Martins; Silva, Ana Isabel Pinto Bastos Leite; Brandão, Isabel; Vaz, Ana Rita Rendeiro Ribeiro; Ramalho, Sofia Marlene Marques; Arrojado, Filipa; Costa, José; Machado, Paulo P. P.

    2014-01-01

    The present study aims to investigate the frequency of loss of control eating (LOC) episodes in three groups with different assessment times: one before, one at short term, and one at long term after bariatric surgery; and to explore the association of postoperative problematic eating behaviors with weight outcomes and psychological characteristics. This cross-sectional study compared a group of preoperative bariatric surgery patients (n = 176) and two postoperative groups, one at short term...

  18. The Relationship Between Sleep and Weight in a Sample of Adolescents

    OpenAIRE

    Lytle, Leslie A.; Pasch, Keryn E.; Farbakhsh, Kian

    2010-01-01

    Research to date in young children and adults shows a strong, inverse relationship between sleep duration and risk for overweight and obesity. Fewer studies examining this relationship have been conducted in adolescents. The purpose of the article is to describe the relationship between sleep and weight in a population of adolescents, controlling for demographics, energy intake, energy expenditure, and depression. This is a cross-sectional study of 723 adolescents participating in population-...

  19. The relationship between sleep and weight in a sample of adolescents.

    Science.gov (United States)

    Lytle, Leslie A; Pasch, Keryn E; Farbakhsh, Kian

    2011-02-01

    Research to date in young children and adults shows a strong, inverse relationship between sleep duration and risk for overweight and obesity. Fewer studies examining this relationship have been conducted in adolescents. The purpose of the article is to describe the relationship between sleep and weight in a population of adolescents, controlling for demographics, energy intake, energy expenditure, and depression. This is a cross-sectional study of 723 adolescents participating in population-based studies of the etiologic factors related to obesity. We examined the relationship between three weight-related dependent variables obtained through a clinical assessment and three sleep variables obtained through self-report. Average caloric intake from dietary recalls, average activity counts based on accelerometers, and depression were included as covariates and the analysis was stratified by gender and grade level. Our results show that the relationship between sleep duration and BMI is evident in middle-school boys (β = -0.32, s.e. = 0.06; P < ...) but not in high-school students. Differences in sleep patterns have little association with weight in males, but in high-school girls, waking up late on weekends as compared to weekdays is associated with lower body fat (β = -0.80, s.e. = 0.40; P = 0.05) and a healthy weight status (β = -0.28, s.e. = 0.14; P = 0.05). This study adds to the evidence that, particularly for middle-school boys and girls, inadequate sleep is a risk factor for early adolescent obesity. Future research needs to examine the relationship longitudinally and to study potential mediators of the relationship.

  20. Weighted Moments Estimators of the Parameters for the Extreme Value Distribution Based on the Multiply Type II Censored Sample

    Directory of Open Access Journals (Sweden)

    Jong-Wuu Wu

    2013-01-01

    Full Text Available We propose the weighted moments estimators (WMEs) of the location and scale parameters for the extreme value distribution based on the multiply type II censored sample. Simulated mean squared errors (MSEs) of the best linear unbiased estimator (BLUE) and exact MSEs of the WMEs are compared to study the behavior of the different estimation methods. The results show the best estimator among the WMEs and the BLUE under different combinations of censoring schemes.

  1. 'Lazy' quantum ensembles

    International Nuclear Information System (INIS)

    Parfionov, George; Zapatrin, Roman

    2006-01-01

    We compare different strategies aimed to prepare an ensemble with a given density matrix ρ. Preparing the ensemble of eigenstates of ρ with appropriate probabilities can be treated as 'generous' strategy: it provides maximal accessible information about the state. Another extremity is the so-called 'Scrooge' ensemble, which is mostly stingy in sharing the information. We introduce 'lazy' ensembles which require minimal effort to prepare the density matrix by selecting pure states with respect to completely random choice. We consider two parties, Alice and Bob, playing a kind of game. Bob wishes to guess which pure state is prepared by Alice. His null hypothesis, based on the lack of any information about Alice's intention, is that Alice prepares any pure state with equal probability. Then, the average quantum state measured by Bob turns out to be ρ, and he has to make a new hypothesis about Alice's intention solely based on the information that the observed density matrix is ρ. The arising 'lazy' ensemble is shown to be the alternative hypothesis which minimizes type I error

  2. The semantic similarity ensemble

    Directory of Open Access Journals (Sweden)

    Andrea Ballatore

    2013-12-01

    Full Text Available Computational measures of semantic similarity between geographic terms provide valuable support across geographic information retrieval, data mining, and information integration. To date, a wide variety of approaches to geo-semantic similarity have been devised. A judgment of similarity is not intrinsically right or wrong, but obtains a certain degree of cognitive plausibility, depending on how closely it mimics human behavior. Thus selecting the most appropriate measure for a specific task is a significant challenge. To address this issue, we make an analogy between computational similarity measures and soliciting domain expert opinions, which incorporate a subjective set of beliefs, perceptions, hypotheses, and epistemic biases. Following this analogy, we define the semantic similarity ensemble (SSE) as a composition of different similarity measures, acting as a panel of experts having to reach a decision on the semantic similarity of a set of geographic terms. The approach is evaluated in comparison to human judgments, and results indicate that an SSE performs better than the average of its parts. Although the best member tends to outperform the ensemble, all ensembles outperform the average performance of their members. Hence, in contexts where the best measure is unknown, the ensemble provides a more cognitively plausible approach.

  3. Improving the Network Scale-Up Estimator: Incorporating Means of Sums, Recursive Back Estimation, and Sampling Weights.

    Directory of Open Access Journals (Sweden)

    Patrick Habecker

    Full Text Available Researchers interested in studying populations that are difficult to reach through traditional survey methods can now draw on a range of methods to access these populations. Yet many of these methods are more expensive and difficult to implement than studies using conventional sampling frames and trusted sampling methods. The network scale-up method (NSUM) provides a middle ground for researchers who wish to estimate the size of a hidden population but lack the resources to conduct a more specialized hidden population study. Through this method it is possible to generate population estimates for a wide variety of groups that are perhaps unwilling to self-identify as such (for example, users of illegal drugs or other stigmatized populations) via traditional survey tools such as telephone or mail surveys, by asking a representative sample to estimate the number of people they know who are members of such a "hidden" subpopulation. The original estimator is formulated to minimize the weight a single scaling variable can exert upon the estimates. We argue that this introduces hidden and difficult-to-predict biases, and instead propose a series of methodological advances on the traditional scale-up estimation procedure, including a new estimator. Additionally, we formalize the incorporation of sample weights into the network scale-up estimation process, and propose a recursive process of back estimation "trimming" to identify and remove poorly performing predictors from the estimation process. To demonstrate these suggestions we use data from a network scale-up mail survey conducted in Nebraska during 2014. We find that using the new estimator and recursive trimming process provides more accurate estimates, especially when used in conjunction with sampling weights.
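
    For reference, the classic scale-up estimator that the paper builds on scales the fraction of hidden-group members among respondents' network alters by the known total population; survey weights fold in as weighted sums. The sketch below shows only that baseline on synthetic numbers; the paper's new estimator and recursive trimming are not reproduced.

      # Weighted network scale-up estimate on synthetic survey data.
      import numpy as np

      rng = np.random.default_rng(4)
      N_pop = 1_900_000                      # known total population
      n = 500
      degree = rng.poisson(290, size=n)      # estimated personal network sizes
      y = rng.binomial(degree, 0.002)        # alters known in the hidden group
      w = rng.uniform(0.5, 2.0, size=n)      # survey sampling weights

      est = N_pop * np.sum(w * y) / np.sum(w * degree)
      print(f"estimated hidden population size: {est:,.0f}")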

  4. Mind-Body Practice and Body Weight Status in a Large Population-Based Sample of Adults.

    Science.gov (United States)

    Camilleri, Géraldine M; Méjean, Caroline; Bellisle, France; Hercberg, Serge; Péneau, Sandrine

    2016-04-01

    In industrialized countries characterized by a high prevalence of obesity and chronic stress, mind-body practices such as yoga or meditation may facilitate body weight control. However, virtually no data are available to ascertain whether practicing mind-body techniques is associated with weight status. The purpose of this study is to examine the relationship between the practice of mind-body techniques and weight status in a large population-based sample of adults. A total of 61,704 individuals aged ≥18 years participating in the NutriNet-Santé study (2009-2014) were included in this cross-sectional analysis conducted in 2014. Data on mind-body practices were collected, as well as self-reported weight and height. The association between the practice of mind-body techniques and weight status was assessed using multiple linear and multinomial logistic regression models adjusted for sociodemographic, lifestyle, and dietary factors. After adjusting for sociodemographic and lifestyle factors, regular users of mind-body techniques were less likely to be overweight (OR=0.68, 95% CI=0.63, 0.74) or obese (OR=0.55, 95% CI=0.50, 0.61) than never users. In addition, regular users had a lower BMI than never users (-3.19%, 95% CI=-3.71, -2.68). These data provide novel information about an inverse relationship between mind-body practice and weight status. If causal links were demonstrated in further prospective studies, such practice could be fostered in obesity prevention and treatment. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.

  5. Differentiating between endocervical glandular neoplasia and high grade squamous intraepithelial lesions in endocervical crypts: cytological features in ThinPrep and SurePath cervical cytology samples.

    Science.gov (United States)

    Thiryayi, Sakinah A; Marshall, Janet; Rana, Durgesh N

    2009-05-01

    A recent audit at our institution revealed a higher number of cases diagnosed as endocervical glandular neoplasia on ThinPrep (TP) cervical cytology samples (9 cases) as opposed to SurePath (SP) (1 case), which on histology showed only high-grade cervical intraepithelial neoplasia (CIN) with endocervical crypt involvement (CI). We attempted to ascertain the reasons for this finding by reviewing the available slides of these cases, as well as slides of cases diagnosed as glandular neoplasia on cytology and histology; cases diagnosed as high-grade squamous intraepithelial lesions (HSIL) on cytology which had CIN with CI on histology and cases with mixed glandular and squamous abnormalities diagnosed both cytologically and histologically. Single neoplastic glandular cells and short pseudostratified strips were more prevalent in SP than TP with the cell clusters in glandular neoplasia 3-4 cells thick, in contrast to the dense crowded centre of cell groups in HSIL with CI. The cells at the periphery of groups can be misleading. Cases with HSIL and glandular neoplasia have a combination of the features of each entity in isolation. The diagnosis of glandular neoplasia remains challenging and conversion from conventional to liquid based cervical cytology requires a period of learning and adaptation, which can be facilitated by local audit and review of the cytology slides in cases with a cytology-histology mismatch. (c) 2009 Wiley-Liss, Inc.

  6. Eating style, overeating and weight gain. A prospective 2-year follow-up study in a representative Dutch sample.

    Science.gov (United States)

    van Strien, Tatjana; Herman, C Peter; Verheijden, Marieke W

    2012-12-01

    This study examined which individuals are particularly at risk for developing overweight and whether there are behavioral lifestyle factors that may attenuate this susceptibility. A prospective study with a 2-year follow-up was conducted in a sample representative of the general population of The Netherlands (n=590). Body mass change (self-reported) was assessed in relation to overeating and change in physical activity (both self-reported), dietary restraint, emotional eating, and external eating, as assessed by the Dutch Eating Behavior Questionnaire. There was a consistent main (suppressive) effect of increased physical activity on BMI change. Only emotional eating and external eating moderated the relation between overeating and body mass change. However, the interaction effect of external eating became borderline significant with Yes or No meaningful weight gain (weight gain >3%) as dependent variable. It was concluded that whilst increasing physical activity may attenuate weight gain, particularly high emotional eaters seem at risk for developing overweight, because overconsumption seems to be more strongly related to weight gain in people with high degrees of emotional eating. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Path Dependence

    DEFF Research Database (Denmark)

    Madsen, Mogens Ove

    The concept of path dependence was originally developed within New Institutional Economics by, among others, David, Arthur, and North. The concept has since spread widely across the social sciences and undergone considerable development. This paper argues that this development has been so extensive that one can now speak of a first and a second generation of the path dependence concept. The most recent development of the concept is relevant to the methodological discussions relating to Keynes...

  8. Path optimization method for the sign problem

    Directory of Open Access Journals (Sweden)

    Ohnishi Akira

    2018-01-01

    Full Text Available We propose a path optimization method (POM) to evade the sign problem in Monte Carlo calculations for complex actions. Among the many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or sampled stochastically. When the action has singular points, or multiple critical points near the original integration surface, however, one risks encountering the residual and global sign problems or the singular drift term problem. One way to avoid the singular points is to optimize the integration path so that it does not hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + i f(t) (with f(t) real) and by optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose the POM and discuss how the sign problem can be avoided in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.
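
    A toy version of the idea can be coded in a few lines: take a one-variable complex action, restrict the deformation f(t) to a constant imaginary shift, and tune the shift to maximize the average phase factor of the weight. The action and parametrization below are invented for illustration and are not the toy model of the proceedings.

      # Optimize a constant imaginary shift of the integration path
      # for the action S(z) = z**2/2 + i*beta*z (Jacobian dz/dt = 1).
      import numpy as np

      beta = 2.0
      t = np.linspace(-8, 8, 4001)

      def avg_phase(a):
          z = t + 1j*a
          w = np.exp(-(z**2/2 + 1j*beta*z))      # Boltzmann weight on the path
          return np.abs(np.trapz(w, t)) / np.trapz(np.abs(w), t)

      shifts = np.linspace(-4, 0, 81)
      best = shifts[np.argmax([avg_phase(a) for a in shifts])]
      print("best constant shift:", best, "phase factor:", avg_phase(best))

    For this quadratic action the optimum is the shift a = -beta, where the integrand's phase becomes constant and the average phase factor reaches one.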

  9. On the maximal use of Monte Carlo samples: re-weighting events at NLO accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Mattelaer, Olivier [Durham University, Institute for Particle Physics Phenomenology (IPPP), Durham (United Kingdom)

    2016-12-15

    Accurate Monte Carlo simulations for high-energy events at CERN's Large Hadron Collider are very expensive, both from the computing and storage points of view. We describe a method that allows one to consistently re-use parton-level samples accurate up to NLO in QCD under different theoretical hypotheses. We implement it in MadGraph5_aMC@NLO and show its validation by applying it to several cases of practical interest for the search for new physics at the LHC. (orig.)
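
    At its core, reweighting a sample generated under one hypothesis to another multiplies each event weight by the ratio of the squared matrix elements evaluated on the fixed parton-level kinematics. The sketch below shows only that leading-order core with toy densities standing in for the matrix elements; the NLO bookkeeping handled by the paper's method is not reproduced.

      # Matrix-element reweighting on toy "events".
      import numpy as np

      rng = np.random.default_rng(5)
      x = rng.normal(loc=0.0, scale=1.0, size=10_000)  # parton-level events
      w_old = np.ones_like(x)                          # generated under hypothesis A

      def me2_A(x):   # stand-in for |M_A|^2, the generation hypothesis
          return np.exp(-x**2 / 2)

      def me2_B(x):   # stand-in for |M_B|^2, the new hypothesis
          return np.exp(-(x - 0.5)**2 / 2)

      w_new = w_old * me2_B(x) / me2_A(x)
      print("reweighted mean:", np.average(x, weights=w_new))  # close to 0.5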

  10. Bayesian ensemble refinement by replica simulations and reweighting

    Science.gov (United States)

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.
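
    A minimal maximum-entropy flavor of such refinement can be sketched directly: reweight the configurations of a reference ensemble so that ensemble-averaged observables match targets, penalized by the relative entropy to the reference weights. Everything below is synthetic and schematic; the balance parameter theta and the softmax parametrization are generic choices, not those of the paper.

      # Maximum-entropy-style ensemble reweighting on synthetic observables.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(6)
      n_conf, n_obs, theta = 50, 5, 1.0
      obs = rng.normal(size=(n_conf, n_obs))    # per-configuration observables
      w0 = np.full(n_conf, 1/n_conf)            # reference (prior) weights
      target = obs[:10].mean(axis=0)            # pretend "experimental" averages

      def objective(v):
          w = np.exp(v); w /= w.sum()           # softmax keeps w on the simplex
          chi2 = np.sum((w @ obs - target)**2)  # fit to averaged observables
          kl = np.sum(w * np.log(w / w0))       # relative entropy to reference
          return 0.5*chi2 + theta*kl

      res = minimize(objective, x0=np.zeros(n_conf), method="L-BFGS-B")
      w = np.exp(res.x); w /= w.sum()
      print("effective sample size:", 1.0 / np.sum(w**2))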

  11. Automation of registration of sample weights for high-volume neutron activation analysis at the IBR-2 reactor of FLNP, JINR

    International Nuclear Information System (INIS)

    Dmitriev, A.Yu.; Dmitriev, F.A.

    2015-01-01

    The 'Weight' software tool was created at FLNP JINR to automate the reading of analytical balance readouts and the saving of these values in the NAA database. An analytical balance connected to a personal computer is used to measure the weight values. The 'Weight' software tool controls the reading of weight values and the exchange of information with the NAA database, reliably supporting the weighing of large numbers of samples during high-volume neutron activation analysis.

  12. Microdosimetry calculations for monoenergetic electrons using Geant4-DNA combined with a weighted track sampling algorithm.

    Science.gov (United States)

    Famulari, Gabriel; Pater, Piotr; Enger, Shirin A

    2017-07-07

    The aim of this study was to calculate microdosimetric distributions for low energy electrons simulated using the Monte Carlo track structure code Geant4-DNA. Tracks for monoenergetic electrons with kinetic energies ranging from 100 eV to 1 MeV were simulated in an infinite spherical water phantom using the Geant4-DNA extension included in Geant4 toolkit version 10.2 (patch 02). The microdosimetric distributions were obtained through random sampling of transfer points and overlaying scoring volumes within the associated volume of the tracks. Relative frequency distributions of energy deposition f(>E)/f(>0) and dose mean lineal energy (ȳD) values were calculated in nanometer-sized spherical and cylindrical targets. The effects of scoring volume and scoring techniques were examined. The results were compared with published data generated using MOCA8B and KURBUC. Geant4-DNA produces a lower frequency of higher energy deposits than MOCA8B. The ȳD values calculated with Geant4-DNA are smaller than those calculated using MOCA8B and KURBUC. The differences are mainly due to the lower ionization and excitation cross sections of Geant4-DNA for low energy electrons. To a lesser extent, discrepancies can also be attributed to the implementation in this study of a new and fast scoring technique that differs from that used in previous studies. For the same mean chord length (l̄), the ȳD values calculated in cylindrical volumes are larger than those calculated in spherical volumes. The discrepancies due to cross sections and scoring geometries increase with decreasing scoring site dimensions. A new set of ȳD values has been presented for monoenergetic electrons using a fast track sampling algorithm and the most recent physics models implemented in Geant4-DNA. This dataset can be combined with primary electron spectra to predict the radiation quality of photon and electron beams.
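
    For the reported quantity itself, a worked example helps: lineal energy is y = ε/l̄ for an energy deposit ε in a site of mean chord length l̄, and the dose-mean value weights each event by its energy, ȳD = Σy²/Σy. The deposit distribution below is synthetic, not Geant4-DNA output.

      # Frequency-mean and dose-mean lineal energy from sampled deposits.
      import numpy as np

      rng = np.random.default_rng(7)
      lbar = 10.0                                          # mean chord length, nm
      eps = rng.gamma(shape=2.0, scale=30.0, size=100_000) # energy deposits, eV
      y = eps / lbar                                       # lineal energy, eV/nm

      y_f = y.mean()                                       # frequency-mean
      y_d = np.sum(y**2) / np.sum(y)                       # dose-mean
      print(f"y_F = {y_f:.1f} eV/nm, y_D = {y_d:.1f} eV/nm")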

  13. On the use of transition matrix methods with extended ensembles.

    Science.gov (United States)

    Escobedo, Fernando A; Abreu, Charlles R A

    2006-03-14

    Different extended ensemble schemes for non-Boltzmann sampling (NBS) of a selected reaction coordinate λ were formulated so that they employ (i) "variable" sampling window schemes (including the "successive umbrella sampling" method) to comprehensively explore the λ domain and (ii) transition matrix methods to iteratively obtain the underlying free-energy landscape η (or "importance" weights) associated with λ. The connection between "acceptance ratio" and transition matrix methods was first established to form the basis of the approach for estimating η(λ). The validity and performance of the different NBS schemes were then assessed using as the λ coordinate the configurational energy of the Lennard-Jones fluid. For the cases studied, it was found that the convergence rate in the estimation of η is little affected by the use of data from high-order transitions, while it is noticeably improved by the use of a broader sampling window in the variable window methods. Finally, it is shown how an "elastic" sampling window can be used to effectively enact (nonuniform) preferential sampling over the λ domain, and how to stitch the weights from separate one-dimensional NBS runs to produce an η surface over a two-dimensional domain.
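
    The pairwise logic behind such estimates can be illustrated with a toy Markov chain: under detailed balance the stationary weights obey π_i T_ij = π_j T_ji, so free-energy differences between adjacent bins of λ follow from the estimated transition matrix. The sketch below uses a synthetic double-well profile and single-bin hops, and shows only the lowest-order (adjacent-transition) estimate, not the full machinery of the paper.

      # Recover a free-energy profile from observed bin-to-bin transitions.
      import numpy as np

      rng = np.random.default_rng(8)
      nbins = 20
      xg = np.linspace(-1.5, 1.5, nbins)
      eta_true = 2.0 * (xg**2 - 1.0)**2          # double-well free energy

      state = nbins // 2
      counts = np.zeros((nbins, nbins))
      for _ in range(200_000):                   # Metropolis walk over bins
          trial = state + rng.choice([-1, 1])
          if 0 <= trial < nbins and rng.random() < np.exp(eta_true[state] - eta_true[trial]):
              counts[state, trial] += 1
              state = trial
          else:
              counts[state, state] += 1          # rejections are self-transitions

      T = counts / counts.sum(axis=1, keepdims=True)
      i = np.arange(nbins - 1)
      deta = np.log(T[i + 1, i] / T[i, i + 1])   # eta_{i+1} - eta_i, detailed balance
      eta_est = np.concatenate([[0.0], np.cumsum(deta)])
      print("max abs deviation:",
            np.max(np.abs((eta_est - eta_est.min()) - (eta_true - eta_true.min()))))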

  14. Neural Network Ensembles

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Salamon, Peter

    1990-01-01

    We propose several means for improving the performance an training of neural networks for classification. We use crossvalidation as a tool for optimizing network parameters and architecture. We show further that the remaining generalization error can be reduced by invoking ensembles of similar...... networks....

  15. SSAGES: Software Suite for Advanced General Ensemble Simulations

    Science.gov (United States)

    Sidky, Hythem; Colón, Yamil J.; Helfferich, Julian; Sikora, Benjamin J.; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z.; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J.; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S.; Reid, Daniel R.; Sevgen, Emre; Thapar, Vikram; Webb, Michael A.; Whitmer, Jonathan K.; de Pablo, Juan J.

    2018-01-01

    Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulations packages. SSAGES allows facile application of a variety of enhanced sampling techniques—including adaptive biasing force, string methods, and forward flux sampling—that extract meaningful free energy and transition path data from all-atom and coarse-grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite. The code may be found at: https://github.com/MICCoM/SSAGES-public.

  16. SSAGES: Software Suite for Advanced General Ensemble Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Sidky, Hythem; Colón, Yamil J.; Helfferich, Julian; Sikora, Benjamin J.; Bezik, Cody; Chu, Weiwei; Giberti, Federico; Guo, Ashley Z.; Jiang, Xikai; Lequieu, Joshua; Li, Jiyuan; Moller, Joshua; Quevillon, Michael J.; Rahimi, Mohammad; Ramezani-Dakhel, Hadi; Rathee, Vikramjit S.; Reid, Daniel R.; Sevgen, Emre; Thapar, Vikram; Webb, Michael A.; Whitmer, Jonathan K.; de Pablo, Juan J. (University of Notre Dame; University of Chicago; Argonne National Laboratory; Karlsruhe Institute of Technology)

    2018-01-28

    Molecular simulation has emerged as an essential tool for modern-day research, but obtaining proper results and making reliable conclusions from simulations requires adequate sampling of the system under consideration. To this end, a variety of methods exist in the literature that can enhance sampling considerably, and increasingly sophisticated, effective algorithms continue to be developed at a rapid pace. Implementation of these techniques, however, can be challenging for experts and non-experts alike. There is a clear need for software that provides rapid, reliable, and easy access to a wide range of advanced sampling methods, and that facilitates implementation of new techniques as they emerge. Here we present SSAGES, a publicly available Software Suite for Advanced General Ensemble Simulations designed to interface with multiple widely used molecular dynamics simulations packages. SSAGES allows facile application of a variety of enhanced sampling techniques—including adaptive biasing force, string methods, and forward flux sampling—that extract meaningful free energy and transition path data from all-atom and coarse grained simulations. A noteworthy feature of SSAGES is a user-friendly framework that facilitates further development and implementation of new methods and collective variables. In this work, the use of SSAGES is illustrated in the context of simple representative applications involving distinct methods and different collective variables that are available in the current release of the suite.

  17. Path Creation

    DEFF Research Database (Denmark)

    Karnøe, Peter; Garud, Raghu

    2012-01-01

    This paper employs path creation as a lens to follow the emergence of the Danish wind turbine cluster. Supplier competencies, regulations, user preferences and a market for wind power did not pre-exist; all had to emerge in a transformative manner involving multiple actors and artefacts. Competencies emerged through processes and mechanisms such as co-creation that implicated multiple learning processes. The process was not an orderly linear one as emergent contingencies influenced the learning processes. An implication is that public policy to catalyse clusters cannot be based...

  18. Screening for DSM-5 Other Specified Feeding or Eating Disorder in a Weight-Loss Treatment–Seeking Obese Sample

    Science.gov (United States)

    Gorman, Mark J.; Sogg, Stephanie; Lamont, Evan M.; Eddy, Kamryn T.; Becker, Anne E.; Thomas, Jennifer J.

    2014-01-01

    Objective: To evaluate the effectiveness of specific self-report questionnaires in detecting DSM-5 eating disorders identified via structured clinical interview in a weight-loss treatment–seeking obese sample, to improve eating disorder recognition in general clinical settings. Method: Individuals were recruited over a 3-month period (November 2, 2011, to January 10, 2012) when initially presenting to a hospital-based weight-management center in the northeastern United States, which offers evaluation and treatment for outpatients who are overweight or obese. Participants (N = 100) completed the Structured Clinical Interview for DSM-IV eating disorder module, a DSM-5 feeding and eating disorders interview, and a battery of self-report questionnaires. Results: Self-reports and interviews agreed substantially in the identification of bulimia nervosa (DSM-IV and DSM-5: tau-b = 0.71; DSM-5: tau-b = 0.60). Discussion: Current self-report assessments are likely to identify full-syndrome DSM-5 eating disorders in treatment-seeking obese samples, but unlikely to detect DSM-5 other specified feeding or eating disorders. We propose specific content changes that might enhance clinical utility as suggestions for future evaluation. PMID:25667810

  19. Coping with perceived weight discrimination: testing a theoretical model for examining the relationship between perceived weight discrimination and depressive symptoms in a representative sample of individuals with obesity.

    Science.gov (United States)

    Spahlholz, J; Pabst, A; Riedel-Heller, S G; Luck-Sikorski, C

    2016-12-01

    The association between obesity and perceived weight discrimination has been investigated in several studies. Although there is evidence that perceived weight discrimination is associated with negative outcomes on psychological well-being, there is a lack of research examining possible buffering effects of coping strategies in dealing with experiences of weight discrimination. The present study aims to fill that gap. We examined the relationship between perceived weight discrimination and depressive symptoms and tested whether problem-solving strategies and/or avoidant coping strategies mediated this effect. Using structural equation modeling, we analyzed representative cross-sectional data of n=484 German-speaking individuals with obesity (BMI ≥ 30 kg/m²), aged 18 years and older. Results revealed a direct effect of perceived weight discrimination on depressive symptoms. Further, the data supported a mediational linkage for avoidant coping strategies, not for problem-solving strategies. Higher scores of perceived weight discrimination experiences were associated with both coping strategies, but only avoidant coping strategies were positively linked to more symptoms of depression. Perceived weight discrimination was associated with increased depressive symptoms both directly and indirectly through situational coping strategies. Avoidant coping has the potential to exacerbate depressive symptoms, whereas problem-solving strategies were ineffective in dealing with experiences of weight discrimination. We emphasize the importance of coping strategies in dealing with experiences of weight discrimination and the need to distinguish between using a strategy and benefiting from it without detriment.

  20. A weighted sampling algorithm for the design of RNA sequences with targeted secondary structure and nucleotide distribution.

    Science.gov (United States)

    Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme

    2013-07-01

    The design of RNA sequences folding into predefined secondary structures is a milestone for many synthetic biology and gene therapy studies. Most current software uses similar local search strategies (i.e. a random seed is progressively adapted to acquire the desired folding properties) and, more importantly, does not allow the user to explicitly control the nucleotide distribution, such as the GC-content, of the sequences. However, the latter is an important criterion for large-scale applications, as it could presumably be used to design sequences with better transcription rates and/or structural plasticity. In this article, we introduce IncaRNAtion, a novel algorithm to design RNA sequences folding into target secondary structures with a predefined nucleotide distribution. IncaRNAtion uses a global sampling approach and weighted sampling techniques. We show that our approach is fast (i.e. running time comparable to or better than local search methods), seedless (we remove the bias of the seed in local search heuristics), and successfully generates high-quality sequences (i.e. thermodynamically stable) for any GC-content. To complete this study, we develop a hybrid method combining our global sampling approach with local search strategies. Remarkably, our glocal methodology outperforms both local and global approaches for sampling sequences with a specific GC-content and target structure. IncaRNAtion is available at csb.cs.mcgill.ca/incarnation/. Supplementary data are available at Bioinformatics online.
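
    To illustrate the idea of weighted sampling toward a target nucleotide distribution (a simplified stand-in for the global sampling approach; IncaRNAtion's actual algorithm also enforces the target secondary structure, which is ignored here), G/C draws can be weighted relative to A/U draws so that the expected GC-content matches the target:

        import random

        def sample_sequences(length, gc_target, trials=2000):
            # With weight w on G/C and 1 on A/U at independent positions, the
            # expected GC-content is w / (w + 1); solving for w gives the line below.
            w = gc_target / (1.0 - gc_target)
            nts, probs = ('G', 'C', 'A', 'U'), (w, w, 1.0, 1.0)
            seqs = [''.join(random.choices(nts, weights=probs, k=length))
                    for _ in range(trials)]
            gc = sum(s.count('G') + s.count('C') for s in seqs) / (trials * length)
            return seqs[0], gc

        seq, gc = sample_sequences(50, gc_target=0.6)
        print(seq, round(gc, 3))   # empirical GC-content close to 0.6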

  1. Size exclusion chromatography with online ICP-MS enables molecular weight fractionation of dissolved phosphorus species in water samples.

    Science.gov (United States)

    Venkatesan, Arjun K; Gan, Wenhui; Ashani, Harsh; Herckes, Pierre; Westerhoff, Paul

    2018-04-15

    Phosphorus (P) is an important and often limiting element in terrestrial and aquatic ecosystems. A lack of understanding of its distribution and structures in the environment limits the design of effective P mitigation and recovery approaches. Here we developed a robust method employing size exclusion chromatography (SEC) coupled to ICP-MS to determine the molecular weight (MW) distribution of P in environmental samples. The most abundant fraction of P varied widely across environmental samples: (i) orthophosphate was the dominant fraction (93-100%) in one lake, two aerosol, and DOC isolate samples; (ii) species in the 400-600 Da range were abundant (74-100%) in two surface waters; and (iii) species in the 150-350 Da range were abundant in wastewater effluents. SEC-DOC of the aqueous samples using a similar SEC column showed overlapping peaks for the 400-600 Da species in two surface waters, and for >20 kDa species in the effluents, suggesting that these fractions are likely associated with organic matter. The MW resolution and performance of SEC-ICP-MS agreed well with the time-integrated results obtained using the conventional ultrafiltration method. The results show that SEC in combination with ICP-MS and DOC has the potential to be a powerful and easy-to-use method for identifying unknown fractions of P in the environment. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Representing Color Ensembles.

    Science.gov (United States)

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-10-01

    Colors are rarely uniform, yet little is known about how people represent color distributions. We introduce a new method for studying color ensembles based on intertrial learning in visual search. Participants looked for an oddly colored diamond among diamonds with colors taken from either uniform or Gaussian color distributions. On test trials, the targets had various distances in feature space from the mean of the preceding distractor color distribution. Targets on test trials therefore served as probes into probabilistic representations of distractor colors. Test-trial response times revealed a striking similarity between the physical distribution of colors and their internal representations. The results demonstrate that the visual system represents color ensembles in a more detailed way than previously thought, coding not only mean and variance but, most surprisingly, the actual shape (uniform or Gaussian) of the distribution of colors in the environment.

  3. Tailored Random Graph Ensembles

    International Nuclear Information System (INIS)

    Roberts, E S; Annibale, A; Coolen, A C C

    2013-01-01

    Tailored graph ensembles are a developing bridge between biological networks and statistical mechanics. The aim is to use this concept to generate a suite of rigorous tools that can be used to quantify and compare the topology of cellular signalling networks, such as protein-protein interaction networks and gene regulation networks. We calculate exact and explicit formulae for the leading orders in the system size of the Shannon entropies of random graph ensembles constrained with degree distribution and degree-degree correlation. We also construct an ergodic detailed balance Markov chain with non-trivial acceptance probabilities which converges to a strictly uniform measure and is based on edge swaps that conserve all degrees. The acceptance probabilities can be generalized to define Markov chains that target any alternative desired measure on the space of directed or undirected graphs, in order to generate graphs with more sophisticated topological features.
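
    The degree-preserving edge swap underlying such Markov chains is easy to sketch. In the toy move below all acceptance probabilities are set to one, whereas the paper's point is precisely that non-trivial acceptance probabilities are required for a strictly uniform measure; the helper function is hypothetical:

        import random

        def edge_swap_step(edges):
            # Propose swapping (a,b),(c,d) -> (a,d),(c,b); reject self-loops and
            # multi-edges. All node degrees are conserved by construction.
            edge_set = set(edges)
            (a, b), (c, d) = random.sample(edges, 2)
            if len({a, b, c, d}) < 4:
                return edges                      # shared endpoint: reject
            new1, new2 = tuple(sorted((a, d))), tuple(sorted((c, b)))
            if new1 in edge_set or new2 in edge_set:
                return edges                      # would create a multi-edge: reject
            edge_set -= {(a, b), (c, d)}
            edge_set |= {new1, new2}
            return list(edge_set)

        edges = [(0, 1), (2, 3), (0, 2), (1, 3)]  # undirected edges stored sorted
        for _ in range(100):
            edges = edge_swap_step(edges)
        print(edges)                              # degree sequence is unchanged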

  4. Corrigendum to “Relative humidity effects on water vapour fluxes measured with closed-path eddy-covariance systems with short sampling lines” [Agric. Forest Meteorol. 165 (2012) 53–63

    DEFF Research Database (Denmark)

    Fratini, Gerardo; Ibrom, Andreas; Arriga, Nicola

    2012-01-01

    It has been formerly recognised that increasing relative humidity in the sampling line of closed-path eddy-covariance systems leads to increasing attenuation of water vapour turbulent fluctuations, resulting in strong latent heat flux losses. This occurrence has been analyzed for very long (50 m) sampling lines... Here, using data from eddy-covariance systems featuring short (4 m) and very short (1 m) sampling lines running at the same clover field, we show that relative humidity effects also persist for these setups and should not be neglected. Starting from the work of Ibrom and co-workers, we propose a mixed method... The correction method proposed here is deemed applicable to closed-path systems featuring a broad range of sampling lines, and indeed applicable also to passive gases as a special case. The methods described in this paper are incorporated, as processing options, in the free and open-source eddy...

  5. Making decisions based on an imperfect ensemble of climate simulators: strategies and future directions

    Science.gov (United States)

    Sanderson, B. M.

    2017-12-01

    The CMIP ensembles represent the most comprehensive source of information available to decision-makers for climate adaptation, yet it is clear that there are fundamental limitations in our ability to treat the ensemble as an unbiased sample of possible future climate trajectories. There is considerable evidence that models are not independent, and increasing complexity and resolution combined with computational constraints prevent a thorough exploration of parametric uncertainty or internal variability. Although more data than ever is available for calibration, the optimization of each model is influenced by institutional priorities, historical precedent and available resources. The resulting ensemble thus represents a miscellany of climate simulators which defy traditional statistical interpretation. Models are in some cases interdependent, but are sufficiently complex that the degree of interdependency is conditional on the application. Configurations have been updated using available observations to some degree, but not in a consistent or easily identifiable fashion. This means that the ensemble cannot be viewed as a true posterior distribution updated by available data, but nor can observational data alone be used to assess individual model likelihood. We assess recent literature for combining projections from an imperfect ensemble of climate simulators. Beginning with our published methodology for addressing model interdependency and skill in the weighting scheme for the 4th US National Climate Assessment, we consider strategies for incorporating process-based constraints on future response, perturbed parameter experiments and multi-model output into an integrated framework. We focus on a number of guiding questions: Is the traditional framework of confidence in projections inferred from model agreement leading to biased or misleading conclusions? Can the benefits of upweighting skillful models be reconciled with the increased risk of truth lying outside the

  6. Improving Climate Projections Using "Intelligent" Ensembles

    Science.gov (United States)

    Baker, Noel C.; Taylor, Patrick C.

    2015-01-01

    Recent changes in the climate system have led to growing concern, especially in communities which are highly vulnerable to resource shortages and weather extremes. There is an urgent need for better climate information to develop solutions and strategies for adapting to a changing climate. Climate models provide excellent tools for studying the current state of climate and making future projections. However, these models are subject to biases created by structural uncertainties. Performance metrics (the systematic determination of model biases) succinctly quantify aspects of climate model behavior. Efforts to standardize climate model experiments and collect simulation data, such as the Coupled Model Intercomparison Project (CMIP), provide the means to directly compare and assess model performance. Performance metrics have been used to show that some models reproduce present-day climate better than others. Simulation data from multiple models are often used to add value to projections by creating a consensus projection from the model ensemble, in which each model is given an equal weight. It has been shown that the ensemble mean generally outperforms any single model. It is possible to use unequal weights to produce ensemble means, in which models are weighted based on performance (called "intelligent" ensembles). Can performance metrics be used to improve climate projections? Previous work introduced a framework for comparing the utility of model performance metrics, showing that the best metrics are related to the variance of top-of-atmosphere outgoing longwave radiation. These metrics improve present-day climate simulations of Earth's energy budget using the "intelligent" ensemble method. The current project identifies several approaches for testing whether performance metrics can be applied to future simulations to create "intelligent" ensemble-mean climate projections. It is shown that certain performance metrics test key climate processes in the models, and
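
    A minimal sketch of an "intelligent" ensemble mean, assuming inverse-error weights derived from some performance metric (the paper's contribution is identifying which metrics are informative, not this particular weighting rule, and the numbers below are invented):

        import numpy as np

        def intelligent_ensemble_mean(projections, errors):
            # Weight each model by the inverse of its performance-metric error
            # (e.g., RMSE against present-day observations), normalized to one.
            weights = 1.0 / np.asarray(errors)
            weights /= weights.sum()
            return weights @ np.asarray(projections), weights

        projections = [2.1, 3.4, 2.8, 4.0]   # hypothetical warming projections (K)
        errors = [0.8, 1.5, 1.0, 2.5]        # hypothetical present-day metric errors
        mean, w = intelligent_ensemble_mean(projections, errors)
        print(round(mean, 2), np.round(w, 2))  # vs. the equal-weight mean of 3.08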

  7. Paths correlation matrix.

    Science.gov (United States)

    Qian, Weixian; Zhou, Xiaojun; Lu, Yingcheng; Xu, Jiang

    2015-09-15

    Both the Jones and Mueller matrices encounter difficulties when physically modeling mixed materials or rough surfaces due to the complexity of light-matter interactions. To address these issues, we derived a matrix called the paths correlation matrix (PCM), which is a probabilistic mixture of the Jones matrices of every light propagation path. Because the PCM is related to actual light propagation paths, it is well suited for physical modeling. Experiments were performed, and the reflection PCM of a mixture of polypropylene and graphite was measured. The PCM of the mixed sample was accurately decomposed into pure polypropylene's single reflection, pure graphite's single reflection, and depolarization caused by multiple reflections, which is consistent with the theoretical derivation. Reflection parameters of a rough surface can be calculated from the PCM decomposition, and the results fit well with the theoretical calculations provided by the Fresnel equations. These theoretical and experimental analyses verify that the PCM is an efficient way to physically model light-matter interactions.

  8. Ensemble Bayesian forecasting system Part I: Theory and algorithms

    Science.gov (United States)

    Herr, Henry D.; Krzysztofowicz, Roman

    2015-05-01

    The ensemble Bayesian forecasting system (EBFS), whose theory was published in 2001, is developed for the purpose of quantifying the total uncertainty about a discrete-time, continuous-state, non-stationary stochastic process such as a time series of stages, discharges, or volumes at a river gauge. The EBFS is built of three components: an input ensemble forecaster (IEF), which simulates the uncertainty associated with random inputs; a deterministic hydrologic model (of any complexity), which simulates physical processes within a river basin; and a hydrologic uncertainty processor (HUP), which simulates the hydrologic uncertainty (an aggregate of all uncertainties except input). It works as a Monte Carlo simulator: an ensemble of time series of inputs (e.g., precipitation amounts) generated by the IEF is transformed deterministically through a hydrologic model into an ensemble of time series of outputs, which is next transformed stochastically by the HUP into an ensemble of time series of predictands (e.g., river stages). Previous research indicated that in order to attain an acceptable sampling error, the ensemble size must be on the order of hundreds (for probabilistic river stage forecasts and probabilistic flood forecasts) or even thousands (for probabilistic stage transition forecasts). The computing time needed to run the hydrologic model this many times renders the straightforward simulations operationally infeasible. This motivates the development of the ensemble Bayesian forecasting system with randomization (EBFSR), which takes full advantage of the analytic meta-Gaussian HUP and generates multiple ensemble members after each run of the hydrologic model; this auxiliary randomization reduces the required size of the meteorological input ensemble and makes it operationally feasible to generate a Bayesian ensemble forecast of large size. Such a forecast quantifies the total uncertainty, is well calibrated against the prior (climatic) distribution of
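
    The EBFS pipeline can be sketched as a Monte Carlo loop: each input-ensemble member is pushed through the deterministic model and then randomized by the uncertainty processor. Everything below (gamma-distributed inputs, a crude linear-reservoir model, a Gaussian-noise HUP) is a toy stand-in for the actual components, and the extra draws per model run mimic the EBFSR auxiliary randomization:

        import numpy as np

        rng = np.random.default_rng(0)

        def input_ensemble(n_members, n_steps):
            # IEF stand-in: an ensemble of precipitation series (input uncertainty).
            return rng.gamma(shape=2.0, scale=5.0, size=(n_members, n_steps))

        def hydrologic_model(precip):
            # Deterministic-model stand-in: a linear-reservoir response.
            flow = np.zeros_like(precip)
            for t in range(1, len(precip)):
                flow[t] = 0.9 * flow[t - 1] + 0.3 * precip[t]
            return flow

        def hydrologic_uncertainty(flow, n_draws):
            # HUP stand-in: several predictand members per single model run.
            return flow + rng.normal(0.0, 0.5, size=(n_draws, flow.size))

        members = [hydrologic_uncertainty(hydrologic_model(p), n_draws=10)
                   for p in input_ensemble(n_members=20, n_steps=50)]
        ensemble = np.vstack(members)   # 200 predictand series from 20 model runs
        print(ensemble.shape)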

  9. Investigating energy-based pool structure selection in the structure ensemble modeling with experimental distance constraints: The example from a multidomain protein Pub1.

    Science.gov (United States)

    Zhu, Guanhua; Liu, Wei; Bao, Chenglong; Tong, Dudu; Ji, Hui; Shen, Zuowei; Yang, Daiwen; Lu, Lanyuan

    2018-05-01

    The structural variations of multidomain proteins with flexible parts mediate many biological processes, and a structure ensemble can be determined by selecting a weighted combination of representative structures from a simulated structure pool, producing the best fit to experimental constraints such as interatomic distance. In this study, a hybrid structure-based and physics-based atomistic force field with an efficient sampling strategy is adopted to simulate a model di-domain protein against experimental paramagnetic relaxation enhancement (PRE) data that correspond to distance constraints. The molecular dynamics simulations produce a wide range of conformations depicted on a protein energy landscape. Subsequently, a conformational ensemble recovered with low-energy structures and the minimum-size restraint is identified in good agreement with experimental PRE rates, and the result is also supported by chemical shift perturbations and small-angle X-ray scattering data. It is illustrated that the regularizations of energy and ensemble-size prevent an arbitrary interpretation of protein conformations. Moreover, energy is found to serve as a critical control to refine the structure pool and prevent data overfitting, because the absence of energy regularization exposes ensemble construction to the noise from high-energy structures and causes a more ambiguous representation of protein conformations. Finally, we perform structure-ensemble optimizations with a topology-based structure pool, to enhance the understanding on the ensemble results from different sources of pool candidates. © 2018 Wiley Periodicals, Inc.
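
    The recovery step described above amounts to fitting pool weights against distance-type data with an energy penalty. A schematic version, with a synthetic linear forward model and a plain energy term standing in for the paper's exact regularizers (the variable names and the value of lam are assumptions), is:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)
        n_pool, n_data = 30, 12
        A = rng.normal(size=(n_data, n_pool))  # predicted PRE-like data per structure
        d = A[:, :3].mean(axis=1)              # "experimental" data (3-structure truth)
        E = rng.normal(size=n_pool)            # relative energies of pool structures

        def objective(w, lam=0.1):
            # Misfit to the data plus a penalty disfavoring high-energy structures.
            return np.sum((A @ w - d) ** 2) + lam * np.dot(E, w)

        cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
        res = minimize(objective, np.full(n_pool, 1.0 / n_pool),
                       bounds=[(0.0, 1.0)] * n_pool, constraints=cons, method='SLSQP')
        print(np.round(res.x[:5], 3))          # weights of the first five structures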

  10. Online probabilistic learning with an ensemble of forecasts

    Science.gov (United States)

    Thorey, Jean; Mallet, Vivien; Chaussin, Christophe

    2016-04-01

    Our objective is to produce a calibrated weighted ensemble to forecast a univariate time series. In addition to a meteorological ensemble of forecasts, we rely on observations or analyses of the target variable. The celebrated Continuous Ranked Probability Score (CRPS) is used to evaluate the probabilistic forecasts. However, applying the CRPS to weighted empirical distribution functions (deriving from the weighted ensemble) may introduce a bias, in which case minimizing the CRPS does not produce the optimal weights. Thus we propose an unbiased version of the CRPS which relies on clusters of members and is strictly proper. We adapt online learning methods for the minimization of the CRPS. These methods generate the weights associated with the members in the forecasted empirical distribution function. The weights are updated before each forecast step using only past observations and forecasts. Our learning algorithms provide the theoretical guarantee that, in the long run, the CRPS of the weighted forecasts is at least as good as the CRPS of any weighted ensemble with weights constant in time. In particular, the performance of our forecast is better than that of any subset ensemble with uniform weights. A noteworthy advantage of our algorithm is that it does not require any assumption on the distributions of the observations and forecasts, either for the application or for the theoretical guarantee to hold. As an application example, on meteorological forecasts for photovoltaic production integration, we show that our algorithm generates a calibrated probabilistic forecast with significant performance improvements on probabilistic diagnostic tools (the CRPS, the reliability diagram and the rank histogram).
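
    For a weighted ensemble, the CRPS can be evaluated in its kernel (energy) form; the plug-in estimator below is the biased version the authors start from, before their cluster-based, strictly proper correction:

        import numpy as np

        def weighted_crps(members, weights, obs):
            # CRPS of a weighted empirical distribution:
            # E_w|X - y| - 0.5 * E_w|X - X'|.
            x, w = np.asarray(members, float), np.asarray(weights, float)
            w = w / w.sum()
            term1 = np.sum(w * np.abs(x - obs))
            term2 = 0.5 * np.sum(w[:, None] * w[None, :]
                                 * np.abs(x[:, None] - x[None, :]))
            return term1 - term2

        members = [1.8, 2.2, 2.9, 3.5]   # ensemble forecasts
        weights = [0.4, 0.3, 0.2, 0.1]   # learned online in the paper's setting
        print(round(weighted_crps(members, weights, obs=2.5), 4))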

  11. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
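
    SEOD couples the design step to the EnKF analysis it serves. A minimal sketch of a generic stochastic EnKF update with perturbed observations follows (the information metrics SD, DFS and RE that drive the design itself are omitted):

        import numpy as np

        rng = np.random.default_rng(2)

        def enkf_update(X, y, H, R):
            # X: (n_state, n_members) prior ensemble; y: observation vector;
            # H: observation operator; R: observation-error covariance.
            n = X.shape[1]
            Xm = X - X.mean(axis=1, keepdims=True)
            HX = H @ X
            HXm = HX - HX.mean(axis=1, keepdims=True)
            Pxy = Xm @ HXm.T / (n - 1)
            Pyy = HXm @ HXm.T / (n - 1) + R
            K = Pxy @ np.linalg.inv(Pyy)              # Kalman gain
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n).T
            return X + K @ (Y - HX)                   # analysis ensemble

        X = rng.normal(size=(4, 50))                  # 4 states, 50 members
        H = np.array([[1.0, 0.0, 0.0, 0.0]])          # observe the first state only
        Xa = enkf_update(X, y=np.array([0.7]), H=H, R=np.array([[0.1]]))
        print(Xa.mean(axis=1).round(3))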

  12. New technique for ensemble dressing combining Multimodel SuperEnsemble and precipitation PDF

    Science.gov (United States)

    Cane, D.; Milelli, M.

    2009-09-01

    The Multimodel SuperEnsemble technique (Krishnamurti et al., Science 285, 1548-1550, 1999) is a postprocessing method for the estimation of weather forecast parameters that reduces direct model output errors. It differs from other ensemble analysis techniques by its use of an adequate weighting of the input forecast models to obtain a combined estimation of meteorological parameters. Weights are calculated by least-squares minimization of the difference between the model and the observed field during a so-called training period. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed and mean sea level pressure (Cane and Milelli, Meteorologische Zeitschrift, 15, 2, 2006), the Multimodel SuperEnsemble also gives good results when applied to precipitation, a parameter quite difficult to handle with standard post-processing methods. Here we present our methodology for Multimodel precipitation forecasts, applied to a wide spectrum of results over the very dense non-GTS weather station network of Piemonte. We will focus particularly on an accurate statistical method for bias correction and on ensemble dressing in agreement with the observed precipitation forecast-conditioned PDF. Acknowledgement: this work is supported by the Italian Civil Defence Department.
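
    The training-period weight computation reduces to a least-squares regression of observed anomalies on model-forecast anomalies. A minimal sketch with synthetic data (the anomaly formulation follows the SuperEnsemble scheme; the numbers are invented):

        import numpy as np

        def superensemble_weights(train_forecasts, train_obs):
            # Regress observed anomalies on model-forecast anomalies over the
            # training period; the combined forecast adds the weighted model
            # anomalies back to the observed climatology.
            f_mean = train_forecasts.mean(axis=0)
            o_mean = train_obs.mean()
            Fa = train_forecasts - f_mean          # (n_times, n_models) anomalies
            w, *_ = np.linalg.lstsq(Fa, train_obs - o_mean, rcond=None)
            return w, f_mean, o_mean

        rng = np.random.default_rng(3)
        truth = rng.normal(20.0, 3.0, size=200)
        models = truth[:, None] + rng.normal(0.0, [1.0, 2.0, 4.0], size=(200, 3))
        w, f_mean, o_mean = superensemble_weights(models[:150], truth[:150])
        forecast = o_mean + (models[150:] - f_mean) @ w
        print(np.round(w, 2), round(np.abs(forecast - truth[150:]).mean(), 2))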

  13. Modality-Driven Classification and Visualization of Ensemble Variance

    Energy Technology Data Exchange (ETDEWEB)

    Bensema, Kevin; Gosink, Luke; Obermaier, Harald; Joy, Kenneth I.

    2016-10-01

    Advances in computational power now enable domain scientists to address conceptual and parametric uncertainty by running simulations multiple times in order to sufficiently sample the uncertain input space. While this approach helps address conceptual and parametric uncertainties, the ensemble datasets produced by this technique present a special challenge to visualization researchers as the ensemble dataset records a distribution of possible values for each location in the domain. Contemporary visualization approaches that rely solely on summary statistics (e.g., mean and variance) cannot convey the detailed information encoded in ensemble distributions that are paramount to ensemble analysis; summary statistics provide no information about modality classification and modality persistence. To address this problem, we propose a novel technique that classifies high-variance locations based on the modality of the distribution of ensemble predictions. Additionally, we develop a set of confidence metrics to inform the end-user of the quality of fit between the distribution at a given location and its assigned class. We apply a similar method to time-varying ensembles to illustrate the relationship between peak variance and bimodal or multimodal behavior. These classification schemes enable a deeper understanding of the behavior of the ensemble members by distinguishing between distributions that can be described by a single tendency and distributions which reflect divergent trends in the ensemble.
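
    One simple way to classify modality at a given location, in the spirit of the paper though not necessarily its exact classifier, is to fit Gaussian mixtures of increasing order to the ensemble values and select the number of components by BIC:

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def classify_modality(values, max_components=3):
            # Fit 1..max_components Gaussian mixtures and pick the best BIC;
            # the winning component count is the estimated number of modes.
            v = np.asarray(values).reshape(-1, 1)
            bics = [GaussianMixture(n_components=k, random_state=0).fit(v).bic(v)
                    for k in range(1, max_components + 1)]
            return int(np.argmin(bics)) + 1

        rng = np.random.default_rng(4)
        unimodal = rng.normal(0.0, 1.0, 400)
        bimodal = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])
        print(classify_modality(unimodal), classify_modality(bimodal))  # expect 1 2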

  14. Imprinting and recalling cortical ensembles.

    Science.gov (United States)

    Carrillo-Reid, Luis; Yang, Weijian; Bando, Yuki; Peterka, Darcy S; Yuste, Rafael

    2016-08-12

    Neuronal ensembles are coactive groups of neurons that may represent building blocks of cortical circuits. These ensembles could be formed by Hebbian plasticity, whereby synapses between coactive neurons are strengthened. Here we report that repetitive activation with two-photon optogenetics of neuronal populations from ensembles in the visual cortex of awake mice builds neuronal ensembles that recur spontaneously after being imprinted and do not disrupt preexisting ones. Moreover, imprinted ensembles can be recalled by single-cell stimulation and remain coactive on consecutive days. Our results demonstrate the persistent reconfiguration of cortical circuits by two-photon optogenetics into neuronal ensembles that can perform pattern completion. Copyright © 2016, American Association for the Advancement of Science.

  15. Generalized ensemble method applied to study systems with strong first order transitions

    Science.gov (United States)

    Małolepsza, E.; Kim, J.; Keyes, T.

    2015-09-01

    At strong first-order phase transitions, the entropy versus energy or, at constant pressure, enthalpy, exhibits convex behavior, and the statistical temperature curve correspondingly exhibits an S-loop or back-bending. In the canonical and isothermal-isobaric ensembles, with temperature as the control variable, the probability density functions become bimodal, with peaks localized outside of the S-loop region. Inside, states are unstable, and as a result simulation of equilibrium phase coexistence becomes impossible. To overcome this problem, a method was proposed by Kim, Keyes and Straub [1], in which optimally designed generalized ensemble sampling was combined with replica exchange; it is denoted the generalized replica exchange method (gREM). This technique uses parametrized effective sampling weights that lead to a unimodal energy distribution, transforming unstable states into stable ones. In the present study, the gREM, originally developed as a Monte Carlo algorithm, was implemented to work with molecular dynamics in an isobaric ensemble and coded into LAMMPS, a highly optimized open-source molecular simulation package. The method is illustrated in a study of the very strong solid/liquid transition in water.

  16. Evaluation of medium-range ensemble flood forecasting based on calibration strategies and ensemble methods in Lanjiang Basin, Southeast China

    Science.gov (United States)

    Liu, Li; Gao, Chao; Xuan, Weidong; Xu, Yue-Ping

    2017-11-01

    Ensemble flood forecasts by hydrological models using numerical weather prediction products as forcing data are becoming more commonly used in operational flood forecasting applications. In this study, a hydrological ensemble flood forecasting system comprising an automatically calibrated Variable Infiltration Capacity model and quantitative precipitation forecasts from the TIGGE dataset is constructed for Lanjiang Basin, Southeast China. The impacts of calibration strategies and ensemble methods on the performance of the system are then evaluated. The hydrological model is optimized by the parallel-programmed ε-NSGA II multi-objective algorithm. According to the solutions from ε-NSGA II, two differently parameterized models are determined to simulate daily flows and peak flows at each of the three hydrological stations. Then a simple yet effective modular approach is proposed to combine the daily and peak flows at the same station into one composite series. Five ensemble methods and various evaluation metrics are adopted. The results show that ε-NSGA II can provide an objective determination of parameter estimates, and the parallel program permits a more efficient simulation. It is also demonstrated that the forecasts from ECMWF have more favorable skill scores than the other Ensemble Prediction Systems. The multimodel ensembles have advantages over all the single-model ensembles, and the multimodel methods weighted on members and skill scores outperform the other methods. Furthermore, the overall performance at the three stations can be satisfactory up to ten days; however, the hydrological errors can degrade the skill score by approximately 2 days, and the influence persists until a lead time of 10 days with a weakening trend. With respect to peak flows selected by the Peaks Over Threshold approach, the ensemble means from single models or multimodels are generally underestimated, indicating that the ensemble mean can bring overall improvement in forecasting of flows. For

  17. Diversity in random subspacing ensembles

    NARCIS (Netherlands)

    Tsymbal, A.; Pechenizkiy, M.; Cunningham, P.; Kambayashi, Y.; Mohania, M.K.; Wöß, W.

    2004-01-01

    Ensembles of learnt models constitute one of the main current directions in machine learning and data mining. It was shown experimentally and theoretically that in order for an ensemble to be effective, it should consist of classifiers having diversity in their predictions. A number of ways are

  18. PSO-Ensemble Demo Application

    DEFF Research Database (Denmark)

    2004-01-01

    Within the framework of the PSO-Ensemble project (FU2101) a demo application has been created. The application uses ECMWF ensemble forecasts. Two instances of the application are running; one for Nysted Offshore and one for the total production (except Horns Rev) in the Eltra area. The output...

  19. New concept of statistical ensembles

    International Nuclear Information System (INIS)

    Gorenstein, M.I.

    2009-01-01

    An extension of the standard concept of statistical ensembles is suggested. Namely, statistical ensembles with extensive quantities fluctuating according to an externally given distribution are introduced. Applications in the statistical models of multiple hadron production in high energy physics are discussed.

  20. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition: "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math. Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treat

  1. Ensembl 2002: accommodating comparative genomics.

    Science.gov (United States)

    Clamp, M; Andrews, D; Barker, D; Bevan, P; Cameron, G; Chen, Y; Clark, L; Cox, T; Cuff, J; Curwen, V; Down, T; Durbin, R; Eyras, E; Gilbert, J; Hammond, M; Hubbard, T; Kasprzyk, A; Keefe, D; Lehvaslaiho, H; Iyer, V; Melsopp, C; Mongin, E; Pettett, R; Potter, S; Rust, A; Schmidt, E; Searle, S; Slater, G; Smith, J; Spooner, W; Stabenau, A; Stalker, J; Stupka, E; Ureta-Vidal, A; Vastrik, I; Birney, E

    2003-01-01

    The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organise biology around the sequences of large genomes. It is a comprehensive source of stable automatic annotation of human, mouse and other genome sequences, available as either an interactive web site or as flat files. Ensembl also integrates manually annotated gene structures from external sources where available. As well as being one of the leading sources of genome annotation, Ensembl is an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements. These range from sequence analysis to data storage and visualisation, and installations exist around the world in both companies and at academic sites. With both human and mouse genome sequences available and more vertebrate sequences to follow, many of the recent developments in Ensembl have focused on developing automatic comparative genome analysis and visualisation.

  2. Child weight growth trajectory and its determinants in a sample of Iranian children from birth until 2 years of age

    OpenAIRE

    Sayed-Mohsen Hosseini; Mohamad-Reza Maracy; Sheida Sarrafzade; Roya Kelishadi

    2014-01-01

    Background: Growth is one of the most important indices in child health. The best and most effective way to investigate child health is measuring the physical growth indices such as weight, height and head circumference. Among these measures, weight growth is the simplest and the most effective way to determine child growth status. Weight trend at a given age is the result of cumulative growth experience, whereas growth velocity represents what is happening at the time. Methods: This long...

  3. Multivariate localization methods for ensemble Kalman filtering

    KAUST Repository

    Roh, S.

    2015-12-03

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.
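
    Schur-product localization is easy to sketch for a single state variable on a one-dimensional periodic grid; the multivariate construction for co-located variables is exactly what the paper adds and is not reproduced here. The Gaspari-Cohn function used below is the standard distance-dependent correlation choice:

        import numpy as np

        def gaspari_cohn(z):
            # Fifth-order piecewise-rational correlation function of the scaled
            # distance z, compactly supported on z < 2.
            z = np.abs(z)
            c = np.zeros_like(z)
            m = z < 1
            zm = z[m]
            c[m] = ((((-0.25 * zm + 0.5) * zm + 0.625) * zm - 5 / 3) * zm ** 2 + 1)
            m = (z >= 1) & (z < 2)
            zm = z[m]
            c[m] = (((((zm / 12 - 0.5) * zm + 0.625) * zm + 5 / 3) * zm - 5) * zm
                    + 4 - 2 / (3 * zm))
            return c

        n, n_members, L = 40, 10, 10.0     # grid points, ensemble size, length scale
        rng = np.random.default_rng(5)
        P = np.cov(rng.normal(size=(n, n_members)))   # noisy sample covariance
        i = np.arange(n)
        dist = np.minimum(np.abs(i[:, None] - i[None, :]),
                          n - np.abs(i[:, None] - i[None, :]))
        P_loc = gaspari_cohn(dist / L) * P   # Schur (element-wise) localization
        print(P_loc.shape)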

  4. Multivariate localization methods for ensemble Kalman filtering

    KAUST Repository

    Roh, S.

    2015-05-08

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.

  5. Multivariate localization methods for ensemble Kalman filtering

    Science.gov (United States)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-12-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.

  6. Multivariate localization methods for ensemble Kalman filtering

    KAUST Repository

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, Marc G.

    2015-01-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested by assimilating simulated observations experiments into the bivariate Lorenz 95 model with their help.

  7. Determinants of change in body weight and body fat distribution over 5.5 years in a sample of free-living black South African women.

    Science.gov (United States)

    Chantler, Sarah; Dickie, Kasha; Micklesfield, Lisa K; Goedecke, Julia H

    To identify socio-demographic and lifestyle determinants of weight gain in a sample of premenopausal black South African (SA) women. Changes in body composition (dual-energy X-ray absorptiometry, computerised tomography), socio-economic status (SES) and behavioural/lifestyle factors were measured in 64 black SA women at baseline (27 ± 8 years) and after 5.5 years. A lower body mass index (BMI) and nulliparity, together with access to sanitation, were significant determinants of weight gain and change in body fat distribution over 5.5 years. In addition, younger women increased their body weight more than their older counterparts, but this association was not independent of the other determinants. Further research is required to examine the effect of changing SES, as well as the full impact of childbearing, on weight gain over time in younger women with lower BMIs. This information will suggest areas for possible intervention to prevent long-term weight gain in these women.

  8. Relations of thyroid function to body weight: cross-sectional and longitudinal observations in a community-based sample.

    Science.gov (United States)

    Fox, Caroline S; Pencina, Michael J; D'Agostino, Ralph B; Murabito, Joanne M; Seely, Ellen W; Pearce, Elizabeth N; Vasan, Ramachandran S

    2008-03-24

    Overt hypothyroidism and hyperthyroidism may be associated with weight gain and loss. We assessed whether variations in thyroid function within the reference (physiologic) range are associated with body weight. Framingham Offspring Study participants (n=2407) who attended 2 consecutive routine examinations, were not receiving thyroid hormone therapy, and had baseline serum thyrotropin (TSH) concentrations of 0.5 to 5.0 mIU/L and follow-up concentrations of 0.5 to 10.0 mIU/L were included in this study. Baseline TSH concentrations were related to body weight and body weight change during 3.5 years of follow-up. At baseline, adjusted mean weight increased progressively from 64.5 to 70.2 kg in the lowest to highest TSH concentration quartiles in women. During follow-up, weight increased by 1.5 (5.6) kg in women and 1.0 (5.0) kg in men. Baseline TSH concentrations were not associated with weight change during follow-up. However, an increase in TSH concentration at follow-up was positively associated with weight gain in women (0.5-2.3 kg across increasing quartiles of TSH concentration change), and increases in TSH concentration were associated with weight in both sexes. Our findings raise the possibility that modest increases in serum TSH concentrations within the reference range may be associated with weight gain.

  9. Spreading paths in partially observed social networks

    Science.gov (United States)

    Onnela, Jukka-Pekka; Christakis, Nicholas A.

    2012-03-01

    Understanding how and how far information, behaviors, or pathogens spread in social networks is an important problem, having implications for both predicting the size of epidemics, as well as for planning effective interventions. There are, however, two main challenges for inferring spreading paths in real-world networks. One is the practical difficulty of observing a dynamic process on a network, and the other is the typical constraint of only partially observing a network. Using static, structurally realistic social networks as platforms for simulations, we juxtapose three distinct paths: (1) the stochastic path taken by a simulated spreading process from source to target; (2) the topologically shortest path in the fully observed network, and hence the single most likely stochastic path, between the two nodes; and (3) the topologically shortest path in a partially observed network. In a sampled network, how closely does the partially observed shortest path (3) emulate the unobserved spreading path (1)? Although partial observation inflates the length of the shortest path, the stochastic nature of the spreading process also frequently derails the dynamic path from the shortest path. We find that the partially observed shortest path does not necessarily give an inflated estimate of the length of the process path; in fact, partial observation may, counterintuitively, make the path seem shorter than it actually is.

  10. Spreading paths in partially observed social networks.

    Science.gov (United States)

    Onnela, Jukka-Pekka; Christakis, Nicholas A

    2012-03-01

    Understanding how and how far information, behaviors, or pathogens spread in social networks is an important problem, having implications for both predicting the size of epidemics, as well as for planning effective interventions. There are, however, two main challenges for inferring spreading paths in real-world networks. One is the practical difficulty of observing a dynamic process on a network, and the other is the typical constraint of only partially observing a network. Using static, structurally realistic social networks as platforms for simulations, we juxtapose three distinct paths: (1) the stochastic path taken by a simulated spreading process from source to target; (2) the topologically shortest path in the fully observed network, and hence the single most likely stochastic path, between the two nodes; and (3) the topologically shortest path in a partially observed network. In a sampled network, how closely does the partially observed shortest path (3) emulate the unobserved spreading path (1)? Although partial observation inflates the length of the shortest path, the stochastic nature of the spreading process also frequently derails the dynamic path from the shortest path. We find that the partially observed shortest path does not necessarily give an inflated estimate of the length of the process path; in fact, partial observation may, counterintuitively, make the path seem shorter than it actually is.
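
    The three paths can be juxtaposed in a few lines with networkx; a small-world graph and uniform edge removal stand in here for the structurally realistic networks and the sampling designs considered in the paper:

        import random
        import networkx as nx

        def spreading_path_length(G, source, target, beta=0.5, seed=0):
            # Simulate a stochastic SI-type spread from source and return the
            # hop count along the transmission tree to target (path 1).
            rng = random.Random(seed)
            hops, frontier = {source: 0}, [source]
            while frontier and target not in hops:
                nxt = []
                for u in frontier:
                    for v in G.neighbors(u):
                        if v not in hops and rng.random() < beta:
                            hops[v] = hops[u] + 1
                            nxt.append(v)
                frontier = nxt
            return hops.get(target)          # None if the spread died out

        G = nx.watts_strogatz_graph(200, 6, 0.1, seed=1)
        G_obs = G.copy()                     # partially observed network (path 3)
        G_obs.remove_edges_from(random.Random(2).sample(list(G.edges()), 200))
        src, tgt = 0, 100
        print(spreading_path_length(G, src, tgt),       # dynamic spreading path
              nx.shortest_path_length(G, src, tgt),     # true shortest path (path 2)
              nx.shortest_path_length(G_obs, src, tgt)  # observed shortest path
              if nx.has_path(G_obs, src, tgt) else None)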

  11. Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

    KAUST Repository

    Liu, Bo; Ait-El-Fquih, Boujemaa; Hoteit, Ibrahim

    2015-01-01

    ... a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating the EnGMF for any observational operator, we analyze the influence ...

  12. Assortative mating for personality traits, educational level, religious affiliation, height, weight, and body mass index in parents of a Korean twin sample.

    Science.gov (United States)

    Hur, Yoon-Mi

    2003-12-01

    The degree of assortative mating for psychological and physical traits in Asian societies is relatively unknown. The present study examined assortative mating for educational level, personality traits, religious affiliation, height, weight, and body mass index in a Korean sample. Age-adjusted spouse correlations were high for educational level (r = .63) and religious affiliation (r = .67), modest for most personality traits (rs = -.01 to .26), and trivial for height (r = .04), weight (r = .05), and body mass index (r = .11). These results are remarkably similar to those found in Western samples. Implications of the present findings for behavior genetic studies and human mating patterns are briefly discussed.

  13. Contact planarization of ensemble nanowires

    Science.gov (United States)

    Chia, A. C. E.; LaPierre, R. R.

    2011-06-01

    The viability of four organic polymers (S1808, SC200, SU8 and Cyclotene) as filling materials to achieve planarization of ensemble nanowire arrays is reported. Analysis of the porosity, surface roughness and thermal stability of each filling material was performed. Sonication was used as an effective method to remove the tops of the nanowires (NWs) to achieve complete planarization. Ensemble nanowire devices were fully fabricated and I-V measurements confirmed that Cyclotene effectively planarizes the NWs while still serving the role as an insulating layer between the top and bottom contacts. These processes and analysis can be easily implemented into future characterization and fabrication of ensemble NWs for optoelectronic device applications.

  14. The use of low-calorie sweeteners is associated with self-reported prior intent to lose weight in a representative sample of US adults.

    Science.gov (United States)

    Drewnowski, A; Rehm, C D

    2016-03-07

    Low-calorie sweeteners (LCSs) are said to be a risk factor for obesity and diabetes. Reverse causality may be an alternative explanation. Data on LCS use, from a single 24-h dietary recall, for a representative sample of 22 231 adults were obtained from 5 cycles of the National Health and Nutrition Examination Survey (1999-2008 NHANES). Retrospective data on intent to lose or maintain weight during the prior 12-months and 10-year weight history were obtained from the weight history questionnaire. Objectively measured heights and weights were obtained from the examination. Primary analyses evaluated the association between intent to lose/maintain weight and use of LCSs and specific LCS product types using survey-weighted generalized linear models. We further evaluated whether body mass index (BMI) may mediate the association between weight loss intent and use of LCSs. The association between 10-year weight history and current LCS use was evaluated using restricted cubic splines. In cross-sectional analyses, LCS use was associated with a higher prevalence of obesity and diabetes. Adults who tried to lose weight during the previous 12 months were more likely to consume LCS beverages (prevalence ratio=1.64, 95% confidence interval (CI) 1.54-1.75), tabletop LCS (prevalence ratio=1.68, 95% CI 1.47-1.91) and LCS foods (prevalence ratio=1.93, 95% CI 1.60-2.33) as compared with those who did not. In mediation analyses, BMI only partially mediated the association between weight control history and the use of LCS beverages, tabletop LCS, but not LCS foods. Current LCS use was further associated with a history of prior weight change (for example, weight loss and gain). LCS use was associated with self-reported intent to lose weight during the previous 12 months. This association was only partially mediated by differences in BMI. Any inference of causality between attempts at weight control and LCS use is tempered by the cross-sectional nature of these data and retrospective

  15. Prevalence of human papillomavirus in 5,072 consecutive cervical SurePath samples evaluated with the Roche cobas HPV real-time PCR assay

    DEFF Research Database (Denmark)

    Preisler, Sarah; Rebolj, Matejka; Untermann, Anette

    2013-01-01

    The aim of the present study, Horizon, was to assess the prevalence of high-risk HPV infections in an area with a high background risk of cervical cancer, where women aged 23-65 years are targeted for cervical screening. We collected 6,258 consecutive cervical samples from the largest cervical screening laboratory... in women aged 23-29 years and 10% in women aged 60-65 years. The HC2 assay was positive in 20% of samples, and cytology was abnormal (≥ atypical squamous cells of undetermined significance) for 7% of samples. When only samples without recent abnormalities were taken into account, 24% tested positive on cobas, 19% on HC2, and 5...

  16. Path Creation, Path Dependence and Breaking Away from the Path

    OpenAIRE

    Wang, Jens; Hedman, Jonas; Tuunainen, Virpi Kristiina

    2016-01-01

    The explanation of how and why firms succeed or fail is a recurrent research challenge. This is particularly important in the context of technological innovations. We focus on the role of historical events and decisions in explaining such success and failure. Using a case study of Nokia, we develop and extend a multi-layer path dependence framework. We identify four layers of path dependence: technical, strategic and leadership, organizational, and external collaboration. We show how path dep...

  17. Path analysis of risk factors leading to premature birth.

    Science.gov (United States)

    Fields, S J; Livshits, G; Sirotta, L; Merlob, P

    1996-01-01

    The present study tested whether various sociodemographic, anthropometric, behavioral, and medical/physiological factors act in a direct or indirect manner on the risk of prematurity, using path analysis on a sample of Israeli births. The path model shows that medical complications, primarily toxemia, chorioamnionitis, and a previous low birth weight delivery, directly and significantly act on the risk of prematurity, as do low maternal pregnancy weight gain and ethnicity. Other medical complications, including chronic hypertension, preeclampsia, and placental abruption, although significantly correlated with prematurity, act indirectly on prematurity through toxemia. The model further shows that the commonly accepted sociodemographic, anthropometric, and behavioral risk factors act by modifying the development of medical complications that lead to prematurity, as opposed to having a direct effect on premature delivery. Copyright © 1996 Wiley-Liss, Inc.

  18. Ensemble manifold regularization.

    Science.gov (United States)

    Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng

    2012-06-01

    We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and from overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so that it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence of EMR to the deterministic matrix at a root-n rate. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.

  19. Sampling of ore

    International Nuclear Information System (INIS)

    Boehme, R.C.; Nicholas, B.L.

    1987-01-01

    This invention relates to a method of, and apparatus for, ore sampling. The method includes the steps of periodically removing a sample of the output material of a sorting machine; weighing each sample so that each is of the same weight; measuring a characteristic, such as the radioactivity or magnetic response, of each sample; subjecting at least an equal portion of each sample to chemical analysis to determine its mineral content; and comparing the characteristic measurement with the mineral content of the chemically analysed portion to determine the characteristic/mineral ratio of the sample. The apparatus includes an ore sample collector; a deflector for deflecting a sample of ore particles from the output of an ore sorter into the collector; and means for moving the deflector, at predetermined time intervals and for predetermined time periods, from a first position in which it is clear of the particle path from the sorter to a second position in which it is in the particle path, so as to deflect the sample particles into the collector. The apparatus conveniently includes an ore crusher for comminuting the sample particles; a sample hopper; means for weighing the hopper; a detector in the hopper for measuring a characteristic, such as radioactivity or magnetic response, of the particles in the hopper; a discharge outlet from the hopper; and means for feeding the particles from the collector to the crusher and then to the hopper.

  20. The Ensembl genome database project.

    Science.gov (United States)

    Hubbard, T; Barker, D; Birney, E; Cameron, G; Chen, Y; Clark, L; Cox, T; Cuff, J; Curwen, V; Down, T; Durbin, R; Eyras, E; Gilbert, J; Hammond, M; Huminiecki, L; Kasprzyk, A; Lehvaslaiho, H; Lijnzaad, P; Melsopp, C; Mongin, E; Pettett, R; Pocock, M; Potter, S; Rust, A; Schmidt, E; Searle, S; Slater, G; Smith, J; Spooner, W; Stabenau, A; Stalker, J; Stupka, E; Ureta-Vidal, A; Vastrik, I; Clamp, M

    2002-01-01

    The Ensembl (http://www.ensembl.org/) database project provides a bioinformatics framework to organise biology around the sequences of large genomes. It is a comprehensive source of stable automatic annotation of the human genome sequence, with confirmed gene predictions that have been integrated with external data sources, and is available as either an interactive web site or as flat files. It is also an open source software engineering project to develop a portable system able to handle very large genomes and associated requirements, from sequence analysis to data storage and visualisation. The Ensembl site is one of the leading sources of human genome sequence annotation and provided much of the analysis for the international human genome project's publication of the draft genome. The Ensembl system is being installed around the world in both companies and academic sites, on machines ranging from supercomputers to laptops.

  1. Optimal Paths in Gliding Flight

    Science.gov (United States)

    Wolek, Artur

    Underwater gliders are robust and long endurance ocean sampling platforms that are increasingly being deployed in coastal regions. This new environment is characterized by shallow waters and significant currents that can challenge the mobility of these efficient (but traditionally slow moving) vehicles. This dissertation aims to improve the performance of shallow water underwater gliders through path planning. The path planning problem is formulated for a dynamic particle (or "kinematic car") model. The objective is to identify the path which satisfies specified boundary conditions and minimizes a particular cost. Several cost functions are considered. The problem is addressed using optimal control theory. The length scales of interest for path planning are within a few turn radii. First, an approach is developed for planning minimum-time paths, for a fixed speed glider, that are sub-optimal but are guaranteed to be feasible in the presence of unknown time-varying currents. Next the minimum-time problem for a glider with speed controls, that may vary between the stall speed and the maximum speed, is solved. Last, optimal paths that minimize change in depth (equivalently, maximize range) are investigated. Recognizing that path planning alone cannot overcome all of the challenges associated with significant currents and shallow waters, the design of a novel underwater glider with improved capabilities is explored. A glider with a pneumatic buoyancy engine (allowing large, rapid buoyancy changes) and a cylindrical moving mass mechanism (generating large pitch and roll moments) is designed, manufactured, and tested to demonstrate potential improvements in speed and maneuverability.
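
    As a toy illustration of the fixed-speed minimum-time setting, the sketch below solves the special case of a steady, uniform current, where the time-optimal path is a straight line flown at a constant crab angle. This is our own simplification for illustration, not the dissertation's method, which handles unknown, time-varying currents via optimal control and a boundary value problem.

    ```python
    import numpy as np

    def time_optimal_heading(target, speed, current):
        """Minimum-time constant heading from the origin to `target` through a
        steady uniform current: the across-track component of the vehicle's
        velocity must cancel the across-track current."""
        target = np.asarray(target, float)
        current = np.asarray(current, float)
        dist = np.linalg.norm(target)
        u = target / dist                      # unit vector along the track
        n = np.array([-u[1], u[0]])            # unit normal (across-track)
        c_along, c_across = current @ u, current @ n
        if abs(c_across) > speed:
            raise ValueError("current too strong to hold the track")
        crab = np.arcsin(-c_across / speed)    # crab angle relative to the track
        v_along = speed * np.cos(crab) + c_along
        if v_along <= 0:
            raise ValueError("cannot make headway along the track")
        heading = np.arctan2(u[1], u[0]) + crab
        return dist / v_along, heading         # (travel time, absolute heading)

    # 0.5 m/s glider crossing 1 km east through a 0.3 m/s northward current
    print(time_optimal_heading(target=(1000.0, 0.0), speed=0.5, current=(0.0, 0.3)))
    ```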

  2. Kohn-Sham Theory for Ground-State Ensembles

    International Nuclear Information System (INIS)

    Ullrich, C. A.; Kohn, W.

    2001-01-01

    An electron density distribution n(r) which can be represented by that of a single-determinant ground state of noninteracting electrons in an external potential v(r) is called pure-state v -representable (P-VR). Most physical electronic systems are P-VR. Systems which require a weighted sum of several such determinants to represent their density are called ensemble v -representable (E-VR). This paper develops formal Kohn-Sham equations for E-VR physical systems, using the appropriate coupling constant integration. It also derives local density- and generalized gradient approximations, and conditions and corrections specific to ensembles
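
    In compact form (the notation is ours, added for clarity), the distinction quoted above reads: an E-VR density is a weighted sum of single-determinant densities,

    \[
    n(\mathbf{r}) = \sum_i w_i\, n_i(\mathbf{r}), \qquad w_i \ge 0, \qquad \sum_i w_i = 1,
    \]

    where each \(n_i\) is the density of a single-determinant ground state of noninteracting electrons in the common potential \(v(\mathbf{r})\); P-VR is the special case with a single nonzero weight.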

  3. Comprehensive measurements in 4π geometry for radioactive samples having a low β-activity (1962)

    Energy Technology Data Exchange (ETDEWEB)

    Colomer, J; Valentin, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1961-07-01

    The realisation is described of a measurement system with low background noise which uses, in addition to lead-wall shielding, electronic protection in the form of a plastic scintillator placed in anticoincidence with the 4π counter used for the measurements. The apparatus is described and its performance discussed. (authors)

  4. Modeling polydispersive ensembles of diamond nanoparticles

    International Nuclear Information System (INIS)

    Barnard, Amanda S

    2013-01-01

    While significant progress has been made toward production of monodispersed samples of a variety of nanoparticles, in cases such as diamond nanoparticles (nanodiamonds) a significant degree of polydispersivity persists, so scaling-up of laboratory applications to industrial levels has its challenges. In many cases, however, monodispersivity is not essential for reliable application, provided that the inevitable uncertainties are just as predictable as the functional properties. As computational methods of materials design are becoming more widespread, there is a growing need for robust methods for modeling ensembles of nanoparticles, that capture the structural complexity characteristic of real specimens. In this paper we present a simple statistical approach to modeling of ensembles of nanoparticles, and apply it to nanodiamond, based on sets of individual simulations that have been carefully selected to describe specific structural sources that are responsible for scattering of fundamental properties, and that are typically difficult to eliminate experimentally. For the purposes of demonstration we show how scattering in the Fermi energy and the electronic band gap are related to different structural variations (sources), and how these results can be combined strategically to yield statistically significant predictions of the properties of an entire ensemble of nanodiamonds, rather than merely one individual ‘model’ particle or a non-representative sub-set. (paper)
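
    A minimal sketch of the 'strategic combination' step: per-structure predictions are pooled with abundance weights to give ensemble-level statistics. The band-gap values and abundance fractions below are invented placeholders, not results from the paper.

    ```python
    import numpy as np

    def ensemble_statistics(values, fractions):
        """Abundance-weighted mean and scatter of a property over an ensemble
        of structural classes (e.g., band gaps of individual nanodiamonds)."""
        v = np.asarray(values, float)
        w = np.asarray(fractions, float)
        w = w / w.sum()                        # normalize abundances
        mean = np.sum(w * v)
        std = np.sqrt(np.sum(w * (v - mean) ** 2))
        return mean, std

    # hypothetical band gaps (eV) for three structural sources and abundances
    print(ensemble_statistics(values=[5.4, 4.9, 3.8], fractions=[0.6, 0.3, 0.1]))
    ```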

  5. Decimated Input Ensembles for Improved Generalization

    Science.gov (United States)

    Tumer, Kagan; Oza, Nikunj C.; Norvig, Peter (Technical Monitor)

    1999-01-01

    Recently, many researchers have demonstrated that using classifier ensembles (e.g., averaging the outputs of multiple classifiers before reaching a classification decision) leads to improved performance for many difficult generalization problems. However, in many domains there are serious impediments to such "turnkey" improvements in classification accuracy. Most notable among these is the deleterious effect of highly correlated classifiers on ensemble performance. One particular solution to this problem is generating "new" training sets by sampling the original one. However, with a finite number of patterns, this reduces the number of training patterns each classifier sees, often resulting in considerably worsened generalization performance for each individual classifier (particularly in high-dimensional data domains). Generally, this drop in individual classifier accuracy more than offsets any potential gains from combining, unless diversity among the classifiers is actively promoted. In this work, we introduce a method that: (1) reduces the correlation among the classifiers; (2) reduces the dimensionality of the data, thus lessening the impact of the 'curse of dimensionality'; and (3) improves the classification performance of the ensemble.
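
    The sketch below illustrates the general scheme with random feature subsets standing in for the paper's informed input decimation; the data, subset size, and logistic-regression members are illustrative choices of ours.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_decimated_ensemble(X, y, n_members=5, keep=0.5, seed=0):
        """Train each member on a random subset of input features to
        decorrelate the members while reducing dimensionality."""
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        members = []
        for _ in range(n_members):
            idx = rng.choice(d, size=max(1, int(keep * d)), replace=False)
            clf = LogisticRegression(max_iter=1000).fit(X[:, idx], y)
            members.append((idx, clf))
        return members

    def predict_ensemble(members, X):
        """Average member class-probability outputs before deciding."""
        probs = np.mean([clf.predict_proba(X[:, idx]) for idx, clf in members], axis=0)
        return probs.argmax(axis=1)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)      # toy labels
    members = train_decimated_ensemble(X, y)
    print("ensemble training accuracy:", (predict_ensemble(members, X) == y).mean())
    ```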

  6. The canonical ensemble redefined - 1: Formalism

    International Nuclear Information System (INIS)

    Venkataraman, R.

    1984-12-01

    For studying the thermodynamic properties of systems, we propose an ensemble that lies between the familiar canonical and microcanonical ensembles. We point out the transition from the canonical to the microcanonical ensemble and prove, through a comparative study, that these ensembles do not all yield the same results, even in the thermodynamic limit. An investigation of the coupling between two or more systems with these ensembles suggests that the state of thermodynamic equilibrium is a special case of statistical equilibrium. (author)

  7. Feynman's path integrals and Bohm's particle paths

    International Nuclear Information System (INIS)

    Tumulka, Roderich

    2005-01-01

    Both Bohmian mechanics, a version of quantum mechanics with trajectories, and Feynman's path integral formalism have something to do with particle paths in space and time. The question thus arises how the two ideas relate to each other. In short, the answer is, path integrals provide a re-formulation of Schroedinger's equation, which is half of the defining equations of Bohmian mechanics. I try to give a clear and concise description of the various aspects of the situation. (letters and comments)

  8. Path coupling and aggregate path coupling

    CERN Document Server

    Kovchegov, Yevgeniy

    2018-01-01

    This book describes and characterizes an extension to the classical path coupling method applied to statistical mechanical models, referred to as aggregate path coupling. In conjunction with large deviations estimates, the aggregate path coupling method is used to prove rapid mixing of Glauber dynamics for a large class of statistical mechanical models, including models that exhibit discontinuous phase transitions which have traditionally been more difficult to analyze rigorously. The book shows how the parameter regions for rapid mixing for several classes of statistical mechanical models are derived using the aggregate path coupling method.

  9. WE-E-BRE-05: Ensemble of Graphical Models for Predicting Radiation Pneumonitis Risk

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S; Ybarra, N; Jeyaseelan, K; El Naqa, I [McGill University, Montreal, Quebec (Canada); Faria, S; Kopek, N [Montreal General Hospital, Montreal, Quebec (Canada)

    2014-06-15

    Purpose: We propose a prior knowledge-based approach to construct an interaction graph of biological and dosimetric radiation pneumonitis (RP) covariates for the purpose of developing an RP risk classifier. Methods: We recruited 59 NSCLC patients who received curative radiotherapy with a minimum 6-month follow-up. 16 RP events were observed (CTCAE grade ≥2). Blood serum was collected from every patient before (pre-RT) and during RT (mid-RT). From each sample the concentrations of the following five candidate biomarkers were taken as covariates: alpha-2-macroglobulin (α2M), angiotensin converting enzyme (ACE), transforming growth factor β (TGF-β), interleukin-6 (IL-6), and osteopontin (OPN). Dose-volumetric parameters were also included as covariates. The number of biological and dosimetric covariates was reduced by a variable selection scheme implemented by L1-regularized logistic regression (LASSO). The posterior probability distribution of interaction graphs between the selected variables was estimated from the data under literature-based prior knowledge, weighting more heavily the graphs that contain the expected associations. A graph ensemble was formed by averaging the most probable graphs weighted by their posterior, creating a Bayesian Network (BN)-based RP risk classifier. Results: The LASSO selected the following 7 RP covariates: (1) pre-RT concentration level of α2M, (2) α2M level mid-RT/pre-RT, (3) pre-RT IL6 level, (4) IL6 level mid-RT/pre-RT, (5) ACE mid-RT/pre-RT, (6) PTV volume, and (7) mean lung dose (MLD). The ensemble BN model achieved a maximum sensitivity/specificity of 81%/84% and outperformed univariate dosimetric predictors, as shown by larger AUC values (0.78-0.81) compared with MLD (0.61), V20 (0.65) and V30 (0.70). The ensembles obtained by incorporating the prior knowledge improved classification performance for ensemble sizes of 5-50. Conclusion: We demonstrated a probabilistic ensemble method to detect robust associations between
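
    A minimal sketch of the variable-selection step from the Methods, using L1-regularized logistic regression; the covariate names echo the abstract but the data are synthetic, and the Bayesian-network ensemble itself is not reproduced.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    names = ["a2M_pre", "a2M_ratio", "IL6_pre", "IL6_ratio",
             "ACE_ratio", "PTV_vol", "MLD", "noise1", "noise2"]
    X = rng.normal(size=(59, len(names)))        # 59 synthetic 'patients'
    # synthetic outcome driven by three of the covariates plus noise
    y = (X[:, [0, 2, 6]].sum(axis=1) + 0.5 * rng.normal(size=59) > 0).astype(int)

    # LASSO: the L1 penalty drives uninformative coefficients to exactly zero
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
    selected = [n for n, c in zip(names, lasso.coef_.ravel()) if abs(c) > 1e-8]
    print("selected covariates:", selected)
    ```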

  10. SU-F-J-193: Efficient Dose Extinction Method for Water Equivalent Path Length (WEPL) of Real Tissue Samples for Validation of CT HU to Stopping Power Conversion

    International Nuclear Information System (INIS)

    Zhang, R; Baer, E; Jee, K; Sharp, G; Flanz, J; Lu, H

    2016-01-01

    Purpose: For proton therapy, an accurate model of the conversion from CT HU to relative stopping power (RSP) is essential. In current practice, validation of these models relies solely on measurements of tissue substitutes with standard compositions. Validation based on real tissue samples would be much more direct and can address variations between patients. This study intends to develop an efficient and accurate system, based on the concept of dose extinction, to measure WEPL and retrieve RSP for a large number of biological tissue types. Methods: A broad AP proton beam delivering a spread-out Bragg peak (SOBP) is used to irradiate the samples, with a Matrixx detector positioned immediately below. A water tank was placed on top of the samples, with the water level controllable to sub-millimeter precision by a remotely controlled dosing pump. While gradually lowering the water level with the beam on, the transmitted dose was recorded at 1 frame/s. The WEPL was determined as the difference between the known beam range of the delivered SOBP (80%) and the water level corresponding to 80% of the measured dose profiles in time. A Gammex 467 phantom was used to test the system, and various types of biological tissue were measured. Results: RSP values for all Gammex inserts, except the one made of lung-450 material (<2% error), were determined within ±0.5% error. Depending on the WEPL of the investigated phantom, a measurement takes around 10 min, which can be accelerated by a faster pump. Conclusion: Based on the concept of dose extinction, a system was explored to measure WEPL efficiently and accurately for a large number of samples. This allows the validation of CT HU to stopping power conversions based on large numbers of samples and real tissues. It also allows the assessment of beam uncertainties due to variations between patients, an issue that has never been sufficiently studied before.
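
    To make the retrieval step concrete, here is a small sketch (our own simplification, with synthetic numbers) of recovering a sample's WEPL from a dose-extinction scan: interpolate the water level at which the transmitted dose crosses 80% of its plateau, then subtract it from the beam's 80% range.

    ```python
    import numpy as np

    def wepl_from_extinction(water_levels_mm, doses, beam_range80_mm):
        """As the water column above the sample drains, the transmitted dose
        rises; the water level at the 80% crossing, subtracted from the
        beam's 80% range, gives the sample's water-equivalent path length."""
        d = np.asarray(doses, float) / np.max(doses)   # normalize to plateau
        levels = np.asarray(water_levels_mm, float)
        order = np.argsort(d)                          # invert dose -> level
        level_at_80 = np.interp(0.8, d[order], levels[order])
        return beam_range80_mm - level_at_80

    # synthetic scan: dose plateaus once the water level drops below ~42 mm
    levels = np.linspace(80, 0, 81)
    doses = 1 / (1 + np.exp((levels - 42) / 1.5))
    print(wepl_from_extinction(levels, doses, beam_range80_mm=150.0))
    ```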

  11. SU-F-J-193: Efficient Dose Extinction Method for Water Equivalent Path Length (WEPL) of Real Tissue Samples for Validation of CT HU to Stopping Power Conversion

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, R; Baer, E; Jee, K; Sharp, G; Flanz, J; Lu, H [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)

    2016-06-15

    Purpose: For proton therapy, an accurate model of the conversion from CT HU to relative stopping power (RSP) is essential. In current practice, validation of these models relies solely on measurements of tissue substitutes with standard compositions. Validation based on real tissue samples would be much more direct and can address variations between patients. This study intends to develop an efficient and accurate system, based on the concept of dose extinction, to measure WEPL and retrieve RSP for a large number of biological tissue types. Methods: A broad AP proton beam delivering a spread-out Bragg peak (SOBP) is used to irradiate the samples, with a Matrixx detector positioned immediately below. A water tank was placed on top of the samples, with the water level controllable to sub-millimeter precision by a remotely controlled dosing pump. While gradually lowering the water level with the beam on, the transmitted dose was recorded at 1 frame/s. The WEPL was determined as the difference between the known beam range of the delivered SOBP (80%) and the water level corresponding to 80% of the measured dose profiles in time. A Gammex 467 phantom was used to test the system, and various types of biological tissue were measured. Results: RSP values for all Gammex inserts, except the one made of lung-450 material (<2% error), were determined within ±0.5% error. Depending on the WEPL of the investigated phantom, a measurement takes around 10 min, which can be accelerated by a faster pump. Conclusion: Based on the concept of dose extinction, a system was explored to measure WEPL efficiently and accurately for a large number of samples. This allows the validation of CT HU to stopping power conversions based on large numbers of samples and real tissues. It also allows the assessment of beam uncertainties due to variations between patients, an issue that has never been sufficiently studied before.

  12. A multi-model ensemble approach to seabed mapping

    Science.gov (United States)

    Diesing, Markus; Stephens, David

    2015-06-01

    Seabed habitat mapping based on swath acoustic data and ground-truth samples is an emergent and active marine science discipline. Significant progress could be achieved by transferring techniques and approaches that have been successfully developed and employed in such fields as terrestrial land cover mapping. One such promising approach is the multiple classifier system, which aims at improving classification performance by combining the outputs of several classifiers. Here we present results of a multi-model ensemble applied to multibeam acoustic data covering more than 5000 km2 of seabed in the North Sea with the aim to derive accurate spatial predictions of seabed substrate. A suite of six machine learning classifiers (k-Nearest Neighbour, Support Vector Machine, Classification Tree, Random Forest, Neural Network and Naïve Bayes) was trained with ground-truth sample data classified into seabed substrate classes and their prediction accuracy was assessed with an independent set of samples. The three and five best performing models were combined to classifier ensembles. Both ensembles led to increased prediction accuracy as compared to the best performing single classifier. The improvements were however not statistically significant at the 5% level. Although the three-model ensemble did not perform significantly better than its individual component models, we noticed that the five-model ensemble did perform significantly better than three of the five component models. A classifier ensemble might therefore be an effective strategy to improve classification performance. Another advantage is the fact that the agreement in predicted substrate class between the individual models of the ensemble could be used as a measure of confidence. We propose a simple and spatially explicit measure of confidence that is based on model agreement and prediction accuracy.
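
    A minimal sketch of the combination rule and the agreement-based confidence measure proposed above; the substrate codes and predictions are invented, and the paper's additional use of per-model prediction accuracy in its confidence measure is omitted.

    ```python
    import numpy as np

    def ensemble_vote_with_confidence(predictions):
        """Majority vote across classifiers plus a simple per-sample
        confidence: the fraction of members agreeing with the winner."""
        preds = np.asarray(predictions)          # shape (n_models, n_samples)
        n_models, n_samples = preds.shape
        winners, confidence = [], []
        for j in range(n_samples):
            classes, counts = np.unique(preds[:, j], return_counts=True)
            k = counts.argmax()
            winners.append(classes[k])
            confidence.append(counts[k] / n_models)
        return np.array(winners), np.array(confidence)

    # five models labelling four seabed samples with substrate classes 0-2
    preds = np.array([[0, 1, 2, 1],
                      [0, 1, 2, 2],
                      [0, 2, 2, 1],
                      [0, 1, 1, 1],
                      [0, 1, 2, 0]])
    print(ensemble_vote_with_confidence(preds))
    ```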

  13. Characteristics of muscle dysmorphia in male football, weight training, and competitive natural and non-natural bodybuilding samples.

    Science.gov (United States)

    Baghurst, Timothy; Lirgg, Cathy

    2009-06-01

    The purpose of this study was to identify differences in traits associated with muscle dysmorphia between collegiate football players (n=66), weight trainers for physique (n=115), competitive non-natural bodybuilders (n=47), and competitive natural bodybuilders (n=65). All participants completed demographic questionnaires in addition to the Muscle Dysmorphia Inventory (Rhea, Lantz, & Cornelius, 2004). Results revealed a significant main effect for group, and post hoc tests found that the non-natural bodybuilding group did not score significantly higher than the natural bodybuilding group on any subscale except for Pharmacological Use. Both the non-natural and natural bodybuilding groups scored significantly higher than those that weight trained for physique on the Dietary Behavior and Supplement Use subscales. The collegiate football players scored lowest on all subscales of the Muscle Dysmorphia Inventory except for Physique Protection where they scored highest. Findings are discussed with future research expounded.

  14. Forces in Motzkin paths in a wedge

    International Nuclear Information System (INIS)

    Janse van Rensburg, E J

    2006-01-01

    Entropic forces in models of Motzkin paths in a wedge geometry are considered as models of forces in polymers in confined geometries. A Motzkin path in the square lattice is a path from the origin to a point in the line Y = X which never visits sites below this line, and which is constrained to take unit-length steps only in the north and east directions and steps of length √2 in the north-east direction. Motzkin path models may be generalized to ensembles of NE-oriented paths above the line Y = rX, where r > 0 is an irrational number. These are paths taking east, north and north-east steps from the origin in the square lattice, confined to the r-wedge formed by the Y-axis and the line Y = rX. The generating function \(g_r\) of these paths is not known, but if r > 1, then I determine its radius of convergence to be
    \[ t_r = \min_{(r-1)/r \,\le\, y \,\le\, r/(r+1)} \left[ y^{y}\, \bigl(1-r(1-y)\bigr)^{1-r(1-y)}\, \bigl(r(1-y)-y\bigr)^{r(1-y)-y} \right], \]
    and if r ∈ (0, 1), then \(t_r = 1/3\). The entropic force the path exerts on the line Y = rX may be computed from this. An asymptotic expression for the force for large values of r is given by
    \[ F(r) = \frac{\log(2r)}{r^{2}} - \frac{1+2\log(2r)}{2r^{3}} + O\!\left(\frac{\log(2r)}{r^{4}}\right). \]
    In terms of the vertex angle α of the r-wedge, the moment of the force about the origin has leading terms
    \[ F(\alpha) = \log(2/\alpha) - \frac{\alpha}{2}\bigl(1+2\log(2/\alpha)\bigr) + O\!\bigl(\alpha^{2}\log(2/\alpha)\bigr) \quad \text{as } \alpha \to 0^{+}, \]
    and F(α) = 0 if α ∈ [π/4, π/2]. Moreover, numerical integration of the force shows that the total work done by closing the wedge is 1.08507... lattice units. An alternative ensemble of NE-oriented paths may be defined by slightly changing the generating function \(g_r\). In this model, it is possible to determine closed-form expressions for the limiting free energy and the force. The leading term in an asymptotic expansion of this force agrees with the leading term in the asymptotic expansion of the above model, and the subleading term differs only by a factor of 2.

  15. Quantum ensembles of quantum classifiers.

    Science.gov (United States)

    Schuld, Maria; Petruccione, Francesco

    2018-02-09

    Quantum machine learning witnesses an increasing amount of quantum algorithms for data-driven decision making, a problem with potential applications ranging from automated image recognition to medical diagnosis. Many of those algorithms are implementations of quantum classifiers, or models for the classification of data inputs with a quantum computer. Following the success of collective decision making with ensembles in classical machine learning, this paper introduces the concept of quantum ensembles of quantum classifiers. Creating the ensemble corresponds to a state preparation routine, after which the quantum classifiers are evaluated in parallel and their combined decision is accessed by a single-qubit measurement. This framework naturally allows for exponentially large ensembles in which - similar to Bayesian learning - the individual classifiers do not have to be trained. As an example, we analyse an exponentially large quantum ensemble in which each classifier is weighted according to its performance in classifying the training data, leading to new results for quantum as well as classical machine learning.
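
    Since the quantum routine cannot be run here, the following classical analogue sketches the weighting rule the abstract describes: every (untrained) classifier votes, weighted by its accuracy on the training data. The random linear classifiers and all data are stand-ins of our own choosing.

    ```python
    import numpy as np

    def accuracy_weighted_vote(classifiers, X_train, y_train, X):
        """Each classifier (a callable returning labels in {-1, +1}) votes,
        weighted by its accuracy on the training data."""
        weights, votes = [], []
        for clf in classifiers:
            weights.append(np.mean(clf(X_train) == y_train))
            votes.append(clf(X))
        weights, votes = np.asarray(weights), np.asarray(votes)  # (m,), (m, n)
        return np.sign(weights @ votes)

    # ensemble of random linear threshold classifiers (stand-ins for the
    # parametrized quantum circuits of the paper)
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(100, 2))
    y_train = np.sign(X_train[:, 0])
    clfs = [lambda X, w=rng.normal(size=2): np.sign(X @ w) for _ in range(50)]
    print(accuracy_weighted_vote(clfs, X_train, y_train, X_train[:5]))
    ```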

  16. An ensemble method for extracting adverse drug events from social media.

    Science.gov (United States)

    Liu, Jing; Zhao, Songzheng; Zhang, Xiaodi

    2016-06-01

    Because adverse drug events (ADEs) are a serious health problem and a leading cause of death, it is of vital importance to identify them correctly and in a timely manner. With the development of Web 2.0, social media has become a large data source for information on ADEs. The objective of this study is to develop a relation extraction system that uses natural language processing techniques to effectively distinguish between ADEs and non-ADEs in informal text on social media. We develop a feature-based approach that utilizes various lexical, syntactic, and semantic features. Information-gain-based feature selection is performed to address high-dimensional features. Then, we evaluate the effectiveness of four well-known kernel-based approaches (i.e., subset tree kernel, tree kernel, shortest dependency path kernel, and all-paths graph kernel) and several ensembles that are generated by adopting different combination methods (i.e., majority voting, weighted averaging, and stacked generalization). All of the approaches are tested using three data sets: two health-related discussion forums and one general social networking site (i.e., Twitter). When investigating the contribution of each feature subset, the feature-based approach attains the best area under the receiver operating characteristics curve (AUC) values, which are 78.6%, 72.2%, and 79.2% on the three data sets. When individual methods are used, we attain the best AUC values of 82.1%, 73.2%, and 77.0% using the subset tree kernel, shortest dependency path kernel, and feature-based approach on the three data sets, respectively. When using classifier ensembles, we achieve the best AUC values of 84.5%, 77.3%, and 84.5% on the three data sets, outperforming the baselines. Our experimental results indicate that ADE extraction from social media can benefit from feature selection. With respect to the effectiveness of different feature subsets, lexical features and semantic features can enhance the ADE extraction

  17. Special cases of the quadratic shortest path problem

    NARCIS (Netherlands)

    Sotirov, Renata; Hu, Hao

    2017-01-01

    The quadratic shortest path problem (QSPP) is the problem of finding a path with prespecified start vertex s and end vertex t in a digraph such that the sum of weights of arcs and the sum of interaction costs over all pairs of arcs on the path is minimized. We first consider a variant of the QSPP
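
    In symbols (our notation, added for reference), the objective described above is

    \[
    \min_{P \in \mathcal{P}_{st}} \; \sum_{a \in P} w_a \;+ \sum_{\{a,b\} \subseteq P} q_{ab},
    \]

    where \(\mathcal{P}_{st}\) is the set of s-t paths in the digraph, \(w_a\) the arc weights, and \(q_{ab}\) the interaction cost incurred when arcs a and b both lie on the path.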

  18. [Which route leads from chronic back pain to depression? A path analysis on direct and indirect effects using the cognitive mediators catastrophizing and helplessness/hopelessness in a general population sample].

    Science.gov (United States)

    Fahland, R A; Kohlmann, T; Hasenbring, M; Feng, Y-S; Schmidt, C O

    2012-12-01

    Chronic pain and depression are highly comorbid; however, the longitudinal link is only partially understood. This study examined direct and indirect effects of chronic back pain on depression using path analysis in a general population sample, focussing on cognitive mediator variables. Analyses are based on 413 participants (aged 18-75 years) in a population-based postal survey on back pain who reported chronic back pain at baseline. Follow-up data were collected after 1 year. Depression was measured with the Center for Epidemiologic Studies Depression Scale (CES-D). Fear-avoidance-beliefs (FABQ), catastrophizing and helplessness/hopelessness (KRSS) were considered as cognitive mediators. Data were analyzed using path analysis. Chronic back pain had no direct effect on depression at follow-up when controlling for cognitive mediators. A mediating effect emerged for helplessness/hopelessness but not for catastrophizing or fear-avoidance beliefs. These results support the cognitive mediation hypothesis which assumes that psychological variables mediate the association between pain and depression. The importance of helplessness/hopelessness is of relevance for the treatment of patients with chronic back pain.

  19. Ensemble forecasting of species distributions.

    Science.gov (United States)

    Araújo, Miguel B; New, Mark

    2007-01-01

    Concern over implications of climate change for biodiversity has led to the use of bioclimatic models to forecast the range shifts of species under future climate-change scenarios. Recent studies have demonstrated that projections by alternative models can be so variable as to compromise their usefulness for guiding policy decisions. Here, we advocate the use of multiple models within an ensemble forecasting framework and describe alternative approaches to the analysis of bioclimatic ensembles, including bounding box, consensus and probabilistic techniques. We argue that, although improved accuracy can be delivered through the traditional tasks of trying to build better models with improved data, more robust forecasts can also be achieved if ensemble forecasts are produced and analysed appropriately.

  20. Effect of storage conditions on the weight and appearance of dried blood spot samples on various cellulose-based substrates.

    Science.gov (United States)

    Denniff, Philip; Spooner, Neil

    2010-11-01

    Before shipping and storage, dried blood spot (DBS) samples must be dried in order to protect the integrity of the spots. In this article, we examine the time required to dry blood spot samples and the effects of different environmental conditions on their integrity. Under ambient laboratory conditions, DBS samples on Whatman 903®, FTA® and FTA® Elute substrates are dry within 90 min of spotting. An additional 5% of moisture is lost during subsequent storage with desiccant. When exposed to elevated conditions of temperature and relative humidity, the DBS samples absorb moisture. DBS samples on FTA lose this moisture on being returned to ambient conditions. DBS samples on 903 show no visible signs of deterioration when stored at elevated conditions. However, these conditions cause the DBS to diffuse through the FTA Elute substrate. Blood spots are dry within 90 min of spotting. However, the substrates examined behave differently when exposed to conditions of high relative humidity and temperature, in some cases resulting in the integrity of the substrate and DBS sample being compromised. It is recommended that these factors be investigated as part of method development and validation.

  1. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    Science.gov (United States)

    Qiao, Qin; Zhang, Hou-Dao; Huang, Xuhui

    2016-04-01

    Simulated tempering (ST) is a widely used enhanced-sampling method for molecular dynamics simulations. As an expanded ensemble method, ST is a combination of canonical ensembles at different temperatures, and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling over temperature space. However, this uniform distribution in temperature space may not be optimal, since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method, Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformational transitions when optimizing the weights of the different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results for these three systems show that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm will be particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.
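
    To show where the temperature weights enter, here is a minimal sketch (ours, not the paper's code) of the Metropolis test for an ST temperature move at a fixed configuration. The weight g_m plays the role EPSW optimizes; choosing g_m as the dimensionless free energy of ensemble m recovers the uniform-visits scheme the abstract mentions. All numbers are illustrative.

    ```python
    import numpy as np

    def st_accept(E, beta_m, beta_n, g_m, g_n, rng=None):
        """Metropolis acceptance for a simulated-tempering move m -> n at fixed
        configuration energy E, with per-temperature weights g. The joint
        distribution is p(x, m) ~ exp(-beta_m * E(x) + g_m)."""
        rng = np.random.default_rng() if rng is None else rng
        log_ratio = -(beta_n - beta_m) * E + (g_n - g_m)
        return np.log(rng.random()) < min(0.0, log_ratio)

    # illustrative jump from 300 K to 320 K for a configuration with E = -120
    kB = 0.0019872   # kcal/(mol K)
    print(st_accept(E=-120.0, beta_m=1 / (kB * 300), beta_n=1 / (kB * 320),
                    g_m=0.0, g_n=-5.0))
    ```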

  2. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    International Nuclear Information System (INIS)

    Qiao, Qin; Zhang, Hou-Dao; Huang, Xuhui

    2016-01-01

    Simulated tempering (ST) is a widely used enhanced-sampling method for molecular dynamics simulations. As an expanded ensemble method, ST is a combination of canonical ensembles at different temperatures, and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling over temperature space. However, this uniform distribution in temperature space may not be optimal, since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method, Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformational transitions when optimizing the weights of the different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results for these three systems show that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm will be particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.

  3. Enhancing pairwise state-transition weights: A new weighting scheme in simulated tempering that can minimize transition time between a pair of conformational states

    Energy Technology Data Exchange (ETDEWEB)

    Qiao, Qin, E-mail: qqiao@ust.hk; Zhang, Hou-Dao [Department of Chemistry, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon (Hong Kong); Huang, Xuhui, E-mail: xuhuihuang@ust.hk [Department of Chemistry, Division of Biomedical Engineering, Center of Systems Biology and Human Health, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon (Hong Kong); The HKUST Shenzhen Research Institute, Shenzhen (China)

    2016-04-21

    Simulated tempering (ST) is a widely used enhanced-sampling method for molecular dynamics simulations. As an expanded ensemble method, ST is a combination of canonical ensembles at different temperatures, and the acceptance probability of cross-temperature transitions is determined by both the temperature difference and the weights of each temperature. One popular way to obtain the weights is to adopt the free energy of each canonical ensemble, which achieves uniform sampling over temperature space. However, this uniform distribution in temperature space may not be optimal, since high temperatures do not always speed up the conformational transitions of interest, as anti-Arrhenius kinetics are prevalent in protein and RNA folding. Here, we propose a new method, Enhancing Pairwise State-transition Weights (EPSW), to obtain the optimal weights by minimizing the round-trip time for transitions among different metastable states at the temperature of interest in ST. The novelty of the EPSW algorithm lies in explicitly considering the kinetics of conformational transitions when optimizing the weights of the different temperatures. We further demonstrate the power of EPSW in three different systems: a simple two-temperature model, a two-dimensional model for protein folding with anti-Arrhenius kinetics, and the alanine dipeptide. The results for these three systems show that the new algorithm can substantially accelerate the transitions between conformational states of interest in the ST expanded ensemble and further facilitate the convergence of thermodynamics compared to the widely used free energy weights. We anticipate that this algorithm will be particularly useful for studying functional conformational changes of biological systems where the initial and final states are often known from structural biology experiments.

  4. Ensemble method for dengue prediction.

    Science.gov (United States)

    Buczak, Anna L; Baugher, Benjamin; Moniz, Linda J; Bagley, Thomas; Babin, Steven M; Guven, Erhan

    2018-01-01

    In the 2015 NOAA Dengue Challenge, participants made three dengue target predictions for two locations (Iquitos, Peru, and San Juan, Puerto Rico) during four dengue seasons: 1) peak height (i.e., maximum weekly number of cases during a transmission season); 2) peak week (i.e., week in which the maximum weekly number of cases occurred); and 3) total number of cases reported during a transmission season. A dengue transmission season is the 12-month period commencing with the location-specific, historical week with the lowest number of cases. At the beginning of the Dengue Challenge, participants were provided with the same input data for developing the models, with the prediction testing data provided at a later date. Our approach used ensemble models created by combining three disparate types of component models: 1) two-dimensional Method of Analogues models incorporating both dengue and climate data; 2) additive seasonal Holt-Winters models with and without wavelet smoothing; and 3) simple historical models. Of the individual component models created, those with the best performance on the prior four years of data were incorporated into the ensemble models. There were separate ensembles for predicting each of the three targets at each of the two locations. Our ensemble models scored higher for peak height and total dengue case counts reported in a transmission season for Iquitos than all other models submitted to the Dengue Challenge. However, the ensemble models did not do nearly as well when predicting the peak week. The Dengue Challenge organizers scored the dengue predictions of the Challenge participant groups. Our ensemble approach was the best in predicting the total number of dengue cases reported for transmission season and peak height for Iquitos, Peru.

  5. Ensemble method for dengue prediction.

    Directory of Open Access Journals (Sweden)

    Anna L Buczak

    Full Text Available In the 2015 NOAA Dengue Challenge, participants made three dengue target predictions for two locations (Iquitos, Peru, and San Juan, Puerto Rico) during four dengue seasons: 1) peak height (i.e., maximum weekly number of cases during a transmission season); 2) peak week (i.e., week in which the maximum weekly number of cases occurred); and 3) total number of cases reported during a transmission season. A dengue transmission season is the 12-month period commencing with the location-specific, historical week with the lowest number of cases. At the beginning of the Dengue Challenge, participants were provided with the same input data for developing the models, with the prediction testing data provided at a later date. Our approach used ensemble models created by combining three disparate types of component models: 1) two-dimensional Method of Analogues models incorporating both dengue and climate data; 2) additive seasonal Holt-Winters models with and without wavelet smoothing; and 3) simple historical models. Of the individual component models created, those with the best performance on the prior four years of data were incorporated into the ensemble models. There were separate ensembles for predicting each of the three targets at each of the two locations. Our ensemble models scored higher for peak height and total dengue case counts reported in a transmission season for Iquitos than all other models submitted to the Dengue Challenge. However, the ensemble models did not do nearly as well when predicting the peak week. The Dengue Challenge organizers scored the dengue predictions of the Challenge participant groups. Our ensemble approach was the best in predicting the total number of dengue cases reported for transmission season and peak height for Iquitos, Peru.

  6. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP) and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research extends last year's work and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two release cases were studied: a coastal release (SF6) and an inland release (Freon), the latter consisting of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the computing resources required for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where spatial and temporal differences due to interior valley heating led to inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement for the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location and time, or when important local data are assimilated into the simulation. It enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as its ability to attract new customers within the intelligence community.

  7. Self-esteem, body-esteem, emotional intelligence, and social anxiety in a college sample: the moderating role of weight.

    Science.gov (United States)

    Abdollahi, Abbas; Abu Talib, Mansor

    2016-01-01

    To examine the relationships between self-esteem, body-esteem, emotional intelligence, and social anxiety, as well as to examine the moderating role of weight between exogenous variables and social anxiety, 520 university students completed the self-report measures. Structural equation modeling revealed that individuals with low self-esteem, body-esteem, and emotional intelligence were more likely to report social anxiety. The findings indicated that obese and overweight individuals with low body-esteem, emotional intelligence, and self-esteem had higher social anxiety than others. Our results highlight the roles of body-esteem, self-esteem, and emotional intelligence as influencing factors for reducing social anxiety.

  8. Visualization and classification of physiological failure modes in ensemble hemorrhage simulation

    Science.gov (United States)

    Zhang, Song; Pruett, William Andrew; Hester, Robert

    2015-01-01

    In an emergency situation such as hemorrhage, doctors need to predict which patients need immediate treatment and care. This task is difficult because of the diverse responses to hemorrhage across the human population. Ensemble physiological simulations provide a means to sample a diverse range of subjects and may have a better chance of containing the correct solution. However, revealing the patterns and trends in an ensemble simulation is a challenging task. We have developed a visualization framework for ensemble physiological simulations. The visualization helps users identify trends among ensemble members, classify ensemble members into subpopulations for analysis, and provide predictions of future events by matching a new patient's data to existing ensembles. We demonstrated the effectiveness of the visualization on simulated physiological data. The lessons learned here can be applied to clinically collected physiological data in the future.

  9. "Best Practices in Using Large, Complex Samples: The Importance of Using Appropriate Weights and Design Effect Compensation"

    Directory of Open Access Journals (Sweden)

    Jason W. Osborne

    2011-09-01

    Full Text Available Large surveys often use probability sampling in order to obtain representative samples, and these data sets are valuable tools for researchers in all areas of science. Yet many researchers are not formally prepared to utilize these resources appropriately. Indeed, users of one popular dataset were generally found not to have modeled their analyses to take account of the complex sample (Johnson & Elliott, 1998), even when publishing in highly regarded journals. It is well known that failure to appropriately model the complex sample can substantially bias the results of the analysis. Examples presented in this paper highlight the risk of inferential errors and parameter mis-estimation that arises from failing to analyze these data sets appropriately.
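
    To make the stakes concrete, here is a small sketch of two quantities at issue: the survey-weighted mean, and Kish's approximate design effect from unequal weighting, which deflates the nominal sample size to an effective one. The data and weights are invented; real analyses should use the survey's full design information, not this shortcut alone.

    ```python
    import numpy as np

    def weighted_mean_and_deff(y, w):
        """Survey-weighted mean, Kish's approximate design effect due to
        unequal weighting, and the implied effective sample size."""
        y, w = np.asarray(y, float), np.asarray(w, float)
        mean = np.sum(w * y) / np.sum(w)
        n = len(w)
        deff = n * np.sum(w ** 2) / np.sum(w) ** 2   # Kish's deff_w
        return mean, deff, n / deff                  # n_eff = n / deff

    y = np.array([2.0, 3.0, 5.0, 4.0, 1.0])          # toy responses
    w = np.array([1.0, 0.5, 2.0, 2.0, 0.5])          # toy sampling weights
    print(weighted_mean_and_deff(y, w))
    ```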

  10. Improving wave forecasting by integrating ensemble modelling and machine learning

    Science.gov (United States)

    O'Donncha, F.; Zhang, Y.; James, S. C.

    2017-12-01

    Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
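
    A minimal sketch of the learning-aggregation step described above: members are weighted by exponentials of their past squared errors, and the weights combine the current forecasts into one best estimate. The update rule, the learning rate eta, and all numbers are our illustrative assumptions, not the study's exact algorithm.

    ```python
    import numpy as np

    def aggregate_forecasts(past_preds, past_obs, current_preds, eta=0.5):
        """Exponentially weighted aggregation: weight each ensemble member by
        its cumulative past squared error, then combine current forecasts."""
        past_preds = np.asarray(past_preds, float)   # (n_models, n_times)
        losses = ((past_preds - np.asarray(past_obs, float)) ** 2).sum(axis=1)
        w = np.exp(-eta * losses)
        w /= w.sum()                                  # normalize to a convex combination
        return w @ np.asarray(current_preds, float)

    # three members forecasting significant wave height (m)
    past = [[1.2, 1.5, 1.1], [1.0, 1.4, 1.3], [1.6, 1.9, 1.5]]
    obs = [1.1, 1.45, 1.25]
    print(aggregate_forecasts(past, obs, current_preds=[1.3, 1.2, 1.7]))
    ```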

  11. Perceived Child Weight Status, Family Structure and Functioning, and Support for Health Behaviors in a Sample of Bariatric Surgery Patients.

    Science.gov (United States)

    Pratt, Keeley J; Ferriby, Megan; Noria, Sabrena; Skelton, Joseph; Taylor, Christopher; Needleman, Bradley

    2018-01-29

    The purpose of this study is to describe the associations between bariatric surgery patients' perspectives of their child's weight status, family support for eating and exercise behavior change, and family structure and functioning. A cross-sectional descriptive design with pre- and postsurgery (N = 224) patients was used. Demographics, perceptions of child weight status, family support for eating habits and exercise, and family functioning were assessed from patients at a University Bariatric Clinic. Patients who perceived their child to be overweight/obese reported more impaired family functioning, less family exercise participation, and more discouragement for eating habit change in the family compared to patients who did not perceive their child to be overweight/obese. Single parents more often perceived their children to be overweight/obese, and had more impaired family functioning, and less support for changing eating habits and family exercise participation. Patients with impaired family functioning reported less support for changing eating habits and family exercise participation. Bariatric patients who perceived their child to be overweight/obese and identified as single parents reported more impaired family functioning and less support for eating habits and family participation in exercise. Assessing pre- and postsurgery measures from parents and children will allow the further identification of relationship variables that can be targeted to promote positive family changes that benefit parents and children long-term. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  12. Top-quark mass measurement in the 2.1 fb⁻¹ tight lepton and isolated track sample using the neutrino φ weighting method

    International Nuclear Information System (INIS)

    Artikov, A.; Bellettini, G.; Trovato, M.; Budagov, Yu.; Glagolev, V.; Pukhov, O.; Sisakyan, A.; Suslov, I.; Chlachidze, G.; Chokheli, D.; Velev, G.

    2008-01-01

    We report on a measurement of the top quark mass in the tight lepton and isolated track sample using the neutrino φ weighting method. After applying the selection cuts, the data sample, with an integrated luminosity of 2.1 fb⁻¹, yielded 236 events. These events were reconstructed according to the tt̄ hypothesis and fitted as a superposition of signal and combined background. For an expected background of 105.8 ± 12.9 events, we measure the top quark mass to be M_top = 167.7 +4.2/-4.0 (stat.) ± 3.1 (syst.) GeV/c².

  13. Utility of Respondent Driven Sampling to Reach Disadvantaged Emerging Adults for Assessment of Substance Use, Weight, and Sexual Behaviors.

    Science.gov (United States)

    Tucker, Jalie A; Simpson, Cathy A; Chandler, Susan D; Borch, Casey A; Davies, Susan L; Kerbawy, Shatomi J; Lewis, Terri H; Crawford, M Scott; Cheong, JeeWon; Michael, Max

    2016-01-01

    Emerging adulthood often entails heightened risk-taking with potential life-long consequences, and research on risk behaviors is needed to guide prevention programming, particularly in under-served and difficult to reach populations. This study evaluated the utility of Respondent Driven Sampling (RDS), a peer-driven methodology that corrects limitations of snowball sampling, to reach at-risk African American emerging adults from disadvantaged urban communities. Initial "seed" participants from the target group recruited peers, who then recruited their peers in an iterative process (110 males, 234 females; M age = 18.86 years). Structured field interviews assessed common health risk factors, including substance use, overweight/obesity, and sexual behaviors. Established gender-and age-related associations with risk factors were replicated, and sample risk profiles and prevalence estimates compared favorably with matched samples from representative U.S. national surveys. Findings supported the use of RDS as a sampling method and grassroots platform for research and prevention with community-dwelling risk groups.

  14. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    Science.gov (United States)

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

    We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of \(\sum_j (d'_j)^2\), the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
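
    Spelled out in our notation, the invariance prediction follows from allocating the N available samples across the m display items:

    \[
    d'_j = c\sqrt{n_j}, \qquad \sum_{j=1}^{m} n_j = N \;\;\Longrightarrow\;\; \sum_{j=1}^{m} (d'_j)^2 = c^2 N,
    \]

    independent of m. The attention-weighted version replaces the equal split \(n_j = N/m\) with an unequal allocation in which the attention-capturing item receives a disproportionate share of the samples.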

  15. Meal planning is associated with food variety, diet quality and body weight status in a large sample of French adults.

    Science.gov (United States)

    Ducrot, Pauline; Méjean, Caroline; Aroumougame, Vani; Ibanez, Gladys; Allès, Benjamin; Kesse-Guyot, Emmanuelle; Hercberg, Serge; Péneau, Sandrine

    2017-02-02

    Meal planning could be a potential tool to offset time scarcity and therefore encourage home meal preparation, which has been linked with improved diet quality. However, to date, meal planning has received little attention in the scientific literature. The aim of our cross-sectional study was to investigate the association between meal planning and diet quality, including adherence to nutritional guidelines and food variety, as well as weight status. Meal planning, i.e. planning ahead the foods that will be eaten for the next few days, was assessed in 40,554 participants of the web-based observational NutriNet-Santé study. Dietary measurements included intakes of energy, nutrients, and food groups, and adherence to the French nutritional guidelines (mPNNS-GS) estimated through repeated 24-h dietary records. A food variety score was also calculated from a Food Frequency Questionnaire. Weight and height were self-reported. Associations between meal planning and dietary intake were assessed using ANCOVAs, while associations with quartiles of the mPNNS-GS score, quartiles of the food variety score, and weight status categories (overweight, obesity) were evaluated using logistic regression models. A total of 57% of the participants declared that they plan meals at least occasionally. Meal planners were more likely to have a higher mPNNS-GS (OR quartile 4 vs. 1 = 1.13, 95% CI: [1.07-1.20]) and higher overall food variety (OR quartile 4 vs. 1 = 1.25, 95% CI: [1.18-1.32]). In women, meal planning was associated with lower odds of being overweight (OR = 0.92 [0.87-0.98]) or obese (OR = 0.79 [0.73-0.86]). In men, the association was significant for obesity only (OR = 0.81 [0.69-0.94]). Meal planning was associated with a healthier diet and less obesity. Although no causality can be inferred from the reported associations, these data suggest that meal planning could potentially be relevant for obesity prevention.

  16. Generalized-ensemble molecular dynamics and Monte Carlo algorithms beyond the limit of the multicanonical algorithm

    International Nuclear Information System (INIS)

    Okumura, Hisashi

    2010-01-01

    I review two new generalized-ensemble algorithms for molecular dynamics and Monte Carlo simulations of biomolecules, namely the multibaric–multithermal algorithm and the partial multicanonical algorithm. The multibaric–multithermal algorithm realizes two-dimensional random walks not only in potential-energy space but also in volume space, so that both the temperature dependence and the pressure dependence of biomolecules can be studied. The partial multicanonical simulation samples a wide range of only an important part of the potential energy, so that the effort of determining a multicanonical weight factor can be concentrated on the important energy terms. This algorithm has higher sampling efficiency than the multicanonical and canonical algorithms. (review)
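
    A minimal sketch of the acceptance step behind a multibaric–multithermal Monte Carlo move is given below, assuming a precomputed log-weight function over energy and volume that replaces the isothermal-isobaric factor exp(-β(E + PV)); the flat-histogram weight itself would have to be estimated iteratively and is only stubbed here.

```python
import math
import random

def mbt_log_weight(E, V):
    # Stub for the multibaric-multithermal weight factor; in practice it
    # is built iteratively so that the (E, V) histogram becomes flat.
    return -0.1 * E - 0.05 * V  # illustrative log-weight only

def accept_move(E_old, V_old, E_new, V_new):
    """Metropolis criterion for a combined energy/volume trial move
    under the generalized (E, V) ensemble weight."""
    dlogw = mbt_log_weight(E_new, V_new) - mbt_log_weight(E_old, V_old)
    return math.log(random.random()) < dlogw

print(accept_move(E_old=10.0, V_old=1.0, E_new=9.5, V_new=1.1))
```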

  17. Shallow cumuli ensemble statistics for development of a stochastic parameterization

    Science.gov (United States)

    Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs

    2014-05-01

    According to the conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing, and this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of coarse-grained nonlinear systems. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model's horizontal resolution. Starting from Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation (LES) model and a cloud-tracking algorithm, followed by conditional sampling of clouds at the cloud-base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. The stochastic model simulates a compound random process, with the number of convective elements drawn from a
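
    The compound random process described above can be sketched numerically: the number of clouds in a grid box is drawn from a counting distribution (Poisson is assumed here purely for illustration, since the record is truncated before naming it), and each cloud's base mass flux is drawn from an exponential distribution, following the Craig and Cohen picture; the total sub-grid mass flux is their sum. Parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def subgrid_mass_flux(mean_clouds=50, mean_flux=2.0e7, n_draws=10000):
    """Sample the total cloud-base mass flux in a grid box as a compound
    process: cloud number from a Poisson law (an assumption here), per-
    cloud mass flux from an exponential (Boltzmann) distribution."""
    n = rng.poisson(mean_clouds, size=n_draws)  # clouds per grid box
    # The sum of n exponential draws is Gamma(n, scale); guard n == 0.
    return np.where(n > 0, rng.gamma(np.maximum(n, 1), mean_flux), 0.0)

M = subgrid_mass_flux()
# Relative spread narrows as mean_clouds grows, i.e. at coarser resolution.
print(M.mean(), M.std())
```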

  18. NIST-Traceable NMR Method to Determine Quantitative Weight Percentage Purity of Nitrogen Mustard HN-3 Feedstock Samples

    Science.gov (United States)

    2013-08-01

    A check sample such as 0.01% ethylbenzene in deuterated acetone can be analyzed periodically to verify the NMR signal response.

  19. Teaching Strategies for Specialized Ensembles.

    Science.gov (United States)

    Teaching Music, 1999

    1999-01-01

    Provides a strategy, from the book "Strategies for Teaching Specialized Ensembles," that addresses Standard 9A of the National Standards for Music Education. Explains that students will identify and describe the musical and historical characteristics of the classical era in music they perform and in audio examples. (CMK)

  20. Multimodel ensembles of wheat growth

    DEFF Research Database (Denmark)

    Martre, Pierre; Wallach, Daniel; Asseng, Senthold

    2015-01-01

    , but such studies are difficult to organize and have only recently begun. We report on the largest ensemble study to date, of 27 wheat models tested in four contrasting locations for their accuracy in simulating multiple crop growth and yield variables. The relative error averaged over models was 24...

  1. Spectral Diagonal Ensemble Kalman Filters

    Czech Academy of Sciences Publication Activity Database

    Kasanický, Ivan; Mandel, Jan; Vejmelka, Martin

    2015-01-01

    Roč. 22, č. 4 (2015), s. 485-497 ISSN 1023-5809 R&D Projects: GA ČR GA13-34856S Grant - others:NSF(US) DMS-1216481 Institutional support: RVO:67985807 Keywords : data assimilation * ensemble Kalman filter * spectral representation Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 1.321, year: 2015

  2. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    Marquardt algorithm by varying conditions such as inputs, hidden neurons, initialization, training sets and random Gaussian noise injection to ... Several such ensembles formed the population which was evolved to generate the fittest ensemble.

  3. Global Ensemble Forecast System (GEFS) [1 Deg.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Global Ensemble Forecast System (GEFS) is a weather forecast model made up of 21 separate forecasts, or ensemble members. The National Centers for Environmental...

  4. Localization of atomic ensembles via superfluorescence

    International Nuclear Information System (INIS)

    Macovei, Mihai; Evers, Joerg; Keitel, Christoph H.; Zubairy, M. Suhail

    2007-01-01

    The subwavelength localization of an ensemble of atoms concentrated to a small volume in space is investigated. The localization relies on the interaction of the ensemble with a standing wave laser field. The light scattered in the interaction of the standing wave field and the atom ensemble depends on the position of the ensemble relative to the standing wave nodes. This relation can be described by a fluorescence intensity profile, which depends on the standing wave field parameters and the ensemble properties and which is modified due to collective effects in the ensemble of nearby particles. We demonstrate that the intensity profile can be tailored to suit different localization setups. Finally, we apply these results to two localization schemes. First, we show how to localize an ensemble fixed at a certain position in the standing wave field. Second, we discuss localization of an ensemble passing through the standing wave field

  5. Analysis of low birth weight and its co-variants in Bangladesh based on a sub-sample from nationally representative survey.

    Science.gov (United States)

    Khan, Jahidur Rahman; Islam, Md Mazharul; Awan, Nabil; Muurlink, Olav

    2018-03-06

    Low birth weight (LBW) remains a leading global cause of childhood morbidity and mortality. This study leverages a large national survey to determine the current prevalence and the socioeconomic, demographic and health-related factors associated with LBW in Bangladesh. Data from the Multiple Indicator Cluster Survey (MICS) 2012-13 of Bangladesh were analyzed. A total of 2319 women for whom contemporaneous birth weight data were available and who had a live birth in the two years preceding the survey were sampled for this study. However, this analysis was able to use only 29% of the total sample, as birth weight was missing for 71% of newborns. The prevalence of LBW varied regionally, with the lowest rates observed in Rajshahi (11%) and the highest in Rangpur (28%). Education of mothers (adjusted odds ratio [AOR] 0.52, 95% confidence interval [CI] 0.39-0.68 for secondary or higher educated mothers) and poor antenatal care (ANC) (AOR 1.40, 95% CI 1.04-1.90) were associated with LBW after adjusting for mother's age, parity and cluster effects. Mothers from wealthier families were less likely to give birth to an LBW infant. Further indicators that wealth continues to play a role in LBW were that place of delivery, ANC and delivery assistance by quality health workers were significantly associated with LBW. However, there has been a notable fall in LBW prevalence in Bangladesh since the last comparable survey (prevalence 36%), and evidence of a possible elimination of rural/urban disparities. Low birth weight remains associated with key indicators not just of maternal poverty (notably lack of adequate maternal education) but also markers of structural poverty in health care (notably quality ANC). Results based on this sub-sample indicate that LBW is still a public health concern in Bangladesh; an integrated effort from all stakeholders should be continued, and interventions based on the study findings should be devised to further reduce the risk of LBW.

  6. Multicomponent ensemble models to forecast induced seismicity

    Science.gov (United States)

    Király-Proag, E.; Gischig, V.; Zechar, J. D.; Wiemer, S.

    2018-01-01

    In recent years, human-induced seismicity has become a more and more relevant topic due to its economic and social implications. Several models and approaches have been developed to explain the underlying physical processes or to forecast induced seismicity. They range from simple statistical models to coupled numerical models incorporating complex physics. We advocate the need for forecast testing as currently the best method for ascertaining whether or not models are capable of reasonably accounting for the key governing physical processes. Moreover, operational forecast models are of great interest for on-site decision-making in projects entailing induced earthquakes. We previously introduced a standardized framework following the guidelines of the Collaboratory for the Study of Earthquake Predictability, the Induced Seismicity Test Bench, to test, validate, and rank induced seismicity models. In this study, we describe how to construct multicomponent ensemble models based on Bayesian weightings that deliver more accurate forecasts than individual models in the case of the Basel 2006 and Soultz-sous-Forêts 2004 enhanced geothermal stimulation projects. For this, we examine five calibrated variants of two significantly different model groups: (1) Shapiro and Smoothed Seismicity, based on the seismogenic index, a simple modified Omori-law-type seismicity decay, and temporally weighted smoothed seismicity; (2) Hydraulics and Seismicity, based on numerically modelled pore pressure evolution that triggers seismicity using the Mohr-Coulomb failure criterion. We also demonstrate how the individual and ensemble models would perform as part of an operational Adaptive Traffic Light System. Investigating seismicity forecasts based on a range of potential injection scenarios, we use forecast periods of different durations to compute the occurrence probabilities of seismic events M ≥ 3. We show that in the case of the Basel 2006 geothermal stimulation the models forecast hazardous levels
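
    A sketch of the kind of Bayesian weighting described is given below, assuming each model supplies a forecast rate of events per period and that past event counts are scored with a Poisson likelihood; the posterior weights then combine the individual forecasts into the ensemble forecast. All rates and counts below are invented for illustration and are not from the Basel or Soultz datasets.

```python
import numpy as np
from scipy.stats import poisson

# Hypothetical forecast rates (events/period) from five calibrated models
# over two past testing periods.
model_rates = np.array([[3.0, 1.2],
                        [2.5, 1.0],
                        [4.0, 1.5],
                        [3.2, 0.8],
                        [2.8, 1.1]])
observed = np.array([3, 1])  # observed event counts in those periods

# Log-likelihood of each model over the testing periods.
loglik = poisson.logpmf(observed, model_rates).sum(axis=1)

# Posterior model weights under a uniform prior.
w = np.exp(loglik - loglik.max())
w /= w.sum()

# Multicomponent ensemble forecast: weighted sum of the model forecasts.
next_rates = np.array([2.9, 2.6, 3.8, 3.0, 2.7])  # hypothetical next period
print("weights:", np.round(w, 3), "ensemble rate:", float(w @ next_rates))
```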

  7. An alternative path integral for quantum gravity

    Energy Technology Data Exchange (ETDEWEB)

    Krishnan, Chethan; Kumar, K.V. Pavan; Raju, Avinash [Center for High Energy Physics, Indian Institute of Science,Bangalore 560012 (India)

    2016-10-10

    We define a (semi-classical) path integral for gravity with Neumann boundary conditions in D dimensions, and show how to relate this new partition function to the usual picture of Euclidean quantum gravity. We also write down the action in ADM Hamiltonian formulation and use it to reproduce the entropy of black holes and cosmological horizons. A comparison between the (background-subtracted) covariant and Hamiltonian ways of semi-classically evaluating this path integral in flat space reproduces the generalized Smarr formula and the first law. This “Neumann ensemble” perspective on gravitational thermodynamics is parallel to the canonical (Dirichlet) ensemble of Gibbons-Hawking and the microcanonical approach of Brown-York.

  8. Relationship between HCO₃⁻ concentration and the weight of C₆H₆ in environmental isotope ¹⁴C analysis, and its relationship with sampling in the field

    International Nuclear Information System (INIS)

    Satrio; Rasi Prasetio

    2016-01-01

    Groundwater sampling of a deep aquifer in Jakarta and surrounding areas was carried out for environmental isotope ¹⁴C analysis. Sampling was preceded by determining the HCO₃⁻ (bicarbonate ion) concentration through titration in the field. The number of sampling repetitions is determined by the HCO₃⁻ concentration obtained, and these repetitions in turn determine the yield of C₆H₆ (benzene) during the benzene synthesis process. In the field, sampling is done by extracting 60 liters of water to precipitate BaCO₃, and the process is repeated based on the bicarbonate ion concentration data. The purpose of this study was to determine the relationship between the HCO₃⁻ concentration and the weight of C₆H₆ obtained in environmental isotope ¹⁴C analysis, and to evaluate the number of sampling repetitions that should be done. Field titration showed that HCO₃⁻ concentrations ranged between 180 and 600 ppm, with benzene yields between 1.84 and 4.5 grams. There is a strong relationship between the HCO₃⁻ concentration and the C₆H₆ weight obtained in the benzene synthesis process, with a correlation of about 0.900. This correlation can be improved by measuring the HCO₃⁻ concentration in the laboratory beforehand, which tends to be more accurate than field measurement. (author)

  9. Achieving cultural congruency in weight loss interventions: can a spirituality-based program attract and retain an inner-city community sample?

    Science.gov (United States)

    Davis, Chad; Dutton, William Blake; Durant, Taryn; Annunziato, Rachel A; Marcotte, David

    2014-01-01

    Ethnic minorities continue to be disproportionately affected by obesity and are less likely to access healthcare than Caucasians. It is therefore imperative that researchers develop novel methods that will attract these difficult-to-reach groups. The purpose of the present study is to describe characteristics of an urban community sample attracted to a spiritually based, weight loss intervention. Methods. Thirteen participants enrolled in a pilot version of Spiritual Self-Schema Therapy (3S) applied to disordered eating behavior and obesity. Treatment consisted of 12 one-hour sessions in a group therapy format. At baseline, participants were measured for height and weight and completed a battery of self-report measures. Results. The sample was predominantly African-American and Hispanic and a large percentage of the sample was male. Mean baseline scores of the EDE-Q, YFAS, and the CES-D revealed clinically meaningful levels of eating disordered pathology and depression, respectively. The overall attrition rate was quite low for interventions targeting obesity. Discussion. This application of a spiritually centered intervention seemed to attract and retain a predominantly African-American and Hispanic sample. By incorporating a culturally congruent focus, this approach may have been acceptable to individuals who are traditionally more difficult to reach.

  10. Achieving Cultural Congruency in Weight Loss Interventions: Can a Spirituality-Based Program Attract and Retain an Inner-City Community Sample?

    Science.gov (United States)

    Davis, Chad; Dutton, William Blake; Durant, Taryn; Annunziato, Rachel A.; Marcotte, David

    2014-01-01

    Ethnic minorities continue to be disproportionately affected by obesity and are less likely to access healthcare than Caucasians. It is therefore imperative that researchers develop novel methods that will attract these difficult-to-reach groups. The purpose of the present study is to describe characteristics of an urban community sample attracted to a spiritually based, weight loss intervention. Methods. Thirteen participants enrolled in a pilot version of Spiritual Self-Schema Therapy (3S) applied to disordered eating behavior and obesity. Treatment consisted of 12 one-hour sessions in a group therapy format. At baseline, participants were measured for height and weight and completed a battery of self-report measures. Results. The sample was predominantly African-American and Hispanic and a large percentage of the sample was male. Mean baseline scores of the EDE-Q, YFAS, and the CES-D revealed clinically meaningful levels of eating disordered pathology and depression, respectively. The overall attrition rate was quite low for interventions targeting obesity. Discussion. This application of a spiritually centered intervention seemed to attract and retain a predominantly African-American and Hispanic sample. By incorporating a culturally congruent focus, this approach may have been acceptable to individuals who are traditionally more difficult to reach. PMID:24804086

  11. Achieving Cultural Congruency in Weight Loss Interventions: Can a Spirituality-Based Program Attract and Retain an Inner-City Community Sample?

    Directory of Open Access Journals (Sweden)

    Chad Davis

    2014-01-01

    Full Text Available Ethnic minorities continue to be disproportionately affected by obesity and are less likely to access healthcare than Caucasians. It is therefore imperative that researchers develop novel methods that will attract these difficult-to-reach groups. The purpose of the present study is to describe characteristics of an urban community sample attracted to a spiritually based, weight loss intervention. Methods. Thirteen participants enrolled in a pilot version of Spiritual Self-Schema Therapy (3S) applied to disordered eating behavior and obesity. Treatment consisted of 12 one-hour sessions in a group therapy format. At baseline, participants were measured for height and weight and completed a battery of self-report measures. Results. The sample was predominantly African-American and Hispanic and a large percentage of the sample was male. Mean baseline scores of the EDE-Q, YFAS, and the CES-D revealed clinically meaningful levels of eating disordered pathology and depression, respectively. The overall attrition rate was quite low for interventions targeting obesity. Discussion. This application of a spiritually centered intervention seemed to attract and retain a predominantly African-American and Hispanic sample. By incorporating a culturally congruent focus, this approach may have been acceptable to individuals who are traditionally more difficult to reach.

  12. Fractional path planning and path tracking

    International Nuclear Information System (INIS)

    Melchior, P.; Jallouli-Khlif, R.; Metoui, B.

    2011-01-01

    This paper presents the main results of applying a fractional approach to path planning and path tracking. A new robust path-planning design for mobile robots was studied in a dynamic environment. The normalized attractive force applied to the robot is based on a fictitious fractional attractive potential. This method yields robust path planning despite robot mass variation. The danger level of each obstacle is characterized by the fractional order of the repulsive potential of that obstacle. Under these conditions, the robot's dynamic behavior was studied by analyzing its X-Y path planning with a dynamic target or dynamic obstacles; the case of simultaneously mobile obstacles and target is also considered. The influence of robot mass variation is studied, and the robustness analysis of the obtained path shows the improvement due to the non-integer-order properties. A pre-shaping approach is used to reduce system vibration in motion control: desired system inputs are altered so that the system finishes the requested move without residual vibration. This technique, developed by N. C. Singer and W. P. Seering, is used for flexible-structure control, particularly in the aerospace field. In a previous work, this method was extended to explicit fractional-derivative systems and applied to second-generation CRONE control, whose robustness was also studied. CRONE (the French acronym of "Commande Robuste d'Ordre Non Entier") control-system design is a frequency-domain-based methodology using complex fractional integration.
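
    The attractive/repulsive potential idea can be sketched with a simple gradient-descent planner. Here the attractive pull toward the target is combined with obstacle repulsion whose strength falls off with a tunable, possibly non-integer exponent n, standing in for the fractional order that encodes each obstacle's danger level; the paper's actual fractional potentials and CRONE machinery are more involved, so treat this only as an illustration of the general mechanism.

```python
import numpy as np

def plan(start, goal, obstacles, n=1.5, k_att=1.0, k_rep=50.0,
         step=0.05, iters=2000):
    """Gradient-descent path planner on attractive + repulsive potentials.

    `n` is the repulsive fall-off exponent; a non-integer n mimics the
    fractional-order danger-level tuning (illustrative, not CRONE).
    """
    p = np.asarray(start, float)
    goal = np.asarray(goal, float)
    path = [p.copy()]
    for _ in range(iters):
        force = k_att * (goal - p)              # pull toward the target
        for obs in obstacles:
            d = p - obs
            r = np.linalg.norm(d) + 1e-9
            force += k_rep * d / r ** (n + 2)   # push away, order n
        p = p + step * force / (np.linalg.norm(force) + 1e-9)
        path.append(p.copy())
        if np.linalg.norm(goal - p) < 0.1:      # close enough: stop
            break
    return np.array(path)

route = plan(start=(0.0, 0.0), goal=(10.0, 10.0),
             obstacles=[np.array([5.0, 5.0])])
print(len(route), route[-1])
```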

  13. Salivary Cortisol as a Biomarker of Stress in Mothers and their Low Birth Weight Infants and Sample Collecting Challenges

    Directory of Open Access Journals (Sweden)

    Janevski Milica Ranković

    2016-04-01

    Full Text Available Background: Salivary cortisol measurement is a non-invasive method suitable for use in neonatal research. Mother-infant separation after birth represents stress, and skin-to-skin contact (SSC) has numerous benefits. The aim of the study was to measure salivary cortisol in mothers and newborns before and after SSC in order to assess the effect of SSC on mothers' and infants' stress and to estimate the efficacy of collecting small saliva samples in newborns.

  14. Squeezing of Collective Excitations in Spin Ensembles

    DEFF Research Database (Denmark)

    Kraglund Andersen, Christian; Mølmer, Klaus

    2012-01-01

    We analyse the possibility to create two-mode spin squeezed states of two separate spin ensembles by inverting the spins in one ensemble and allowing spin exchange between the ensembles via a near resonant cavity field. We investigate the dynamics of the system using a combination of numerical an...

  15. A Brief Tutorial on the Ensemble Kalman Filter

    OpenAIRE

    Mandel, Jan

    2009-01-01

    The ensemble Kalman filter (EnKF) is a recursive filter suitable for problems with a large number of variables, such as discretizations of partial differential equations in geophysical models. The EnKF originated as a version of the Kalman filter for large problems (essentially, the covariance matrix is replaced by the sample covariance), and it is now an important data assimilation component of ensemble forecasting. EnKF is related to the particle filter (in this context, a particle is the s...

  16. Multimodel hydrological ensemble forecasts for the Baskatong catchment in Canada using the TIGGE database.

    Science.gov (United States)

    Tito Arandia Martinez, Fabian

    2014-05-01

    Adequate uncertainty assessment is an important issue in hydrological modelling. An important issue for hydropower producers is to obtain ensemble forecasts which truly grasp the uncertainty linked to upcoming streamflows. If properly assessed, this uncertainty can lead to optimal reservoir management and energy production (e.g., [1]). The meteorological inputs to the hydrological model account for an important part of the total uncertainty in streamflow forecasting. Since the creation of the THORPEX initiative and the TIGGE database, access to meteorological ensemble forecasts from nine agencies throughout the world has been made available. This allows for hydrological ensemble forecasts based on multiple meteorological ensemble forecasts. Consequently, both the uncertainty linked to the architecture of the meteorological model and the uncertainty linked to the initial condition of the atmosphere can be accounted for. The main objective of this work is to show that a weighted combination of meteorological ensemble forecasts based on different atmospheric models can lead to improved hydrological ensemble forecasts, for horizons from one to ten days. This experiment is performed for the Baskatong watershed, a head subcatchment of the Gatineau watershed in the province of Quebec, Canada. The Baskatong watershed is of great importance for hydropower production, as it comprises the main reservoir for the Gatineau watershed, on which there are six hydropower plants managed by Hydro-Québec. Since the 70's, they have been using pseudo-ensemble forecasts based on deterministic meteorological forecasts to which variability derived from past forecasting errors is added. We use a combination of meteorological ensemble forecasts from different models (precipitation and temperature) as the main inputs for the hydrological model HSAMI [2]. The meteorological ensembles from eight of the nine agencies available through TIGGE are weighted according to their individual performance and
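
    One simple way to weight agencies by past performance, sketched below, is to make each weight inversely proportional to that agency's historical forecast error. This is an illustration of the general idea only; the weighting scheme actually used with HSAMI, and all numbers below, are not from the study.

```python
import numpy as np

# Hypothetical RMSE of past precipitation forecasts for 8 TIGGE agencies.
past_rmse = np.array([2.1, 2.4, 1.9, 3.0, 2.2, 2.8, 2.0, 2.6])

# Inverse-error weights, normalized to sum to one: better past skill,
# larger weight in the combination.
w = 1.0 / past_rmse
w /= w.sum()

# Hypothetical day-1 precipitation forecasts (mm) from each agency's
# ensemble mean; the weighted combination feeds the hydrological model.
forecasts = np.array([12.0, 15.0, 11.0, 20.0, 13.0, 17.0, 12.5, 16.0])
print("weights:", np.round(w, 3))
print("combined forecast:", float(w @ forecasts))
```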

  17. Ensemble Kalman filtering with one-step-ahead smoothing

    KAUST Repository

    Raboudi, Naila F.

    2018-01-11

    The ensemble Kalman filter (EnKF) is widely used for sequential data assimilation. It operates as a succession of forecast and analysis steps. In realistic large-scale applications, EnKFs are implemented with small ensembles and poorly known model error statistics. This limits their representativeness of the background error covariances and, thus, their performance. This work explores the efficiency of the one-step-ahead (OSA) smoothing formulation of the Bayesian filtering problem to enhance the data assimilation performance of EnKFs. Filtering with OSA smoothing introduces an update step with future observations, conditioning the ensemble sampling on more information. This should provide an improved background ensemble in the analysis step, which may help to mitigate the suboptimal character of EnKF-based methods. Here, the authors demonstrate the efficiency of a stochastic EnKF with OSA smoothing for state estimation. They then introduce a deterministic-like EnKF-OSA based on the singular evolutive interpolated ensemble Kalman (SEIK) filter. The authors show that the proposed SEIK-OSA outperforms both SEIK, as it efficiently exploits the data twice, and the stochastic EnKF-OSA, as it avoids observational error undersampling. They present extensive assimilation results from numerical experiments conducted with the Lorenz-96 model to demonstrate SEIK-OSA's capabilities.
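
    For reference, a minimal stochastic EnKF analysis step (without the OSA smoothing extension described above) is sketched below, assuming a linear observation operator H and Gaussian observation noise; EnKF-OSA would add a smoothing pass with the future observation before an update of this kind.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_analysis(X, y, H, R):
    """One stochastic EnKF update with perturbed observations.

    X: (n, N) forecast ensemble, y: (m,) observation,
    H: (m, n) observation operator, R: (m, m) obs-error covariance.
    """
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    HA = H @ A
    # Kalman gain from the sample covariances.
    P_hh = HA @ HA.T / (N - 1) + R
    P_xh = A @ HA.T / (N - 1)
    K = P_xh @ np.linalg.solve(P_hh, np.eye(len(y)))
    # Perturb the observation for each member (stochastic EnKF).
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return X + K @ (Y - H @ X)

# Tiny demo: 3-state system, observe the first component, 20 members.
X = rng.normal(size=(3, 20))
H = np.array([[1.0, 0.0, 0.0]])
print(enkf_analysis(X, np.array([0.5]), H, np.eye(1) * 0.1).shape)
```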

  18. Realizing spaces as path-component spaces

    OpenAIRE

    Banakh, Taras; Brazas, Jeremy

    2018-01-01

    The path component space of a topological space $X$ is the quotient space $\pi_0(X)$ whose points are the path components of $X$. We show that every Tychonoff space $X$ is the path-component space of a Tychonoff space $Y$ of weight $w(Y)=w(X)$ such that the natural quotient map $Y\to \pi_0(Y)=X$ is a perfect map. Hence, many topological properties of $X$ transfer to $Y$. We apply this result to construct a compact space $X\subset \mathbb{R}^3$ for which the fundamental group $\pi_1(X,x_0)$ is...

  19. Exploring diversity in ensemble classification: Applications in large area land cover mapping

    Science.gov (United States)

    Mellor, Andrew; Boukir, Samia

    2017-07-01

    Ensemble classifiers, such as random forests, are now commonly applied in the field of remote sensing, and have been shown to perform better than single classifier systems, resulting in reduced generalisation error. Diversity across the members of ensemble classifiers is known to have a strong influence on classification performance - whereby classifier errors are uncorrelated and more uniformly distributed across ensemble members. The relationship between ensemble diversity and classification performance has not yet been fully explored in the fields of information science and machine learning and has never been examined in the field of remote sensing. This study is a novel exploration of ensemble diversity and its link to classification performance, applied to a multi-class canopy cover classification problem using random forests and multisource remote sensing and ancillary GIS data, across seven million hectares of diverse dry-sclerophyll dominated public forests in Victoria, Australia. A particular emphasis is placed on analysing the relationship between ensemble diversity and ensemble margin - two key concepts in ensemble learning. The main novelty of our work is on boosting diversity by emphasizing the contribution of lower margin instances used in the learning process. Exploring the influence of tree pruning on diversity is also a new empirical analysis that contributes to a better understanding of ensemble performance. Results reveal insights into the trade-off between ensemble classification accuracy and diversity, and through the ensemble margin, demonstrate how inducing diversity by targeting lower margin training samples is a means of achieving better classifier performance for more difficult or rarer classes and reducing information redundancy in classification problems. Our findings inform strategies for collecting training data and designing and parameterising ensemble classifiers, such as random forests. This is particularly important in large area
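
    The ensemble margin mentioned above can be computed directly from member votes. The sketch below uses scikit-learn's random forest on synthetic data and defines the margin of an instance as the vote fraction for the true class minus the largest vote fraction for any other class; low-margin instances are the "more difficult" ones such a study targets. This is one common margin definition, and the paper's exact formulation may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_classes=3, n_informative=6,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Vote fractions per class from the individual trees.
votes = np.stack([t.predict(X) for t in rf.estimators_])   # (trees, samples)
frac = np.stack([(votes == c).mean(axis=0) for c in rf.classes_], axis=1)

# Margin: support for the true class minus the best wrong class.
idx = np.arange(len(y))
true_frac = frac[idx, y]
masked = frac.copy()
masked[idx, y] = -1.0
margin = true_frac - masked.max(axis=1)
print("lowest-margin (hardest) training instances:", np.argsort(margin)[:5])
```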

  20. A brief history of the introduction of generalized ensembles to Markov chain Monte Carlo simulations

    Science.gov (United States)

    Berg, Bernd A.

    2017-03-01

    The most efficient weights for Markov chain Monte Carlo calculations of physical observables are not necessarily those of the canonical ensemble. Generalized ensembles, which do not exist in nature but can be simulated on computers, often lead to much faster convergence. In particular, they have been used for simulations of first-order phase transitions and for simulations of complex systems in which conflicting constraints lead to a rugged free energy landscape. Starting off with the Metropolis algorithm and Hastings' extension, I present a minireview which focuses on the explosive use of generalized ensembles in the early 1990s. Illustrations are given, which range from spin models to peptides.
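
    In code, moving from the canonical to a generalized ensemble only changes the acceptance weight. The sketch below contrasts the canonical Boltzmann factor with a multicanonical weight w(E) = 1/n(E), where n(E) is an estimated density of states (stubbed here), which flattens the energy histogram. This is the generic pattern, not any specific implementation from the review.

```python
import math
import random

beta = 1.0

def log_w_canonical(E):
    return -beta * E                 # Boltzmann weight exp(-beta * E)

def log_w_multicanonical(E, log_n):
    return -log_n(E)                 # 1/n(E): flat energy histogram

def metropolis_step(E_old, E_new, log_w):
    """Accept/reject a proposed move under an arbitrary ensemble weight."""
    return math.log(random.random()) < log_w(E_new) - log_w(E_old)

# Stub density-of-states estimate (illustrative only; in practice it is
# refined iteratively, e.g. by multicanonical recursion).
log_n = lambda E: 0.5 * E
print(metropolis_step(1.0, 2.0, log_w_canonical))
print(metropolis_step(1.0, 2.0, lambda E: log_w_multicanonical(E, log_n)))
```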

  1. Eigenfunction statistics of Wishart Brownian ensembles

    International Nuclear Information System (INIS)

    Shukla, Pragya

    2017-01-01

    We theoretically analyze the eigenfunction fluctuation measures for a Hermitian ensemble which appears as an intermediate state of the perturbation of a stationary ensemble by another stationary ensemble of Wishart (Laguerre) type. Similar to the perturbation by a Gaussian stationary ensemble, the measures undergo a diffusive dynamics in terms of the perturbation parameter but the energy-dependence of the fluctuations is different in the two cases. This may have important consequences for the eigenfunction dynamics as well as phase transition studies in many areas of complexity where Brownian ensembles appear. (paper)

  2. Ensemble Kalman filtering with residual nudging

    KAUST Repository

    Luo, X.

    2012-10-03

    Covariance inflation and localisation are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF) by (in effect) adjusting the sample covariances of the estimates in the state space. In this work, an additional auxiliary technique, called residual nudging, is proposed to monitor and, if necessary, adjust the residual norms of state estimates in the observation space. In an EnKF with residual nudging, if the residual norm of an analysis is larger than a pre-specified value, then the analysis is replaced by a new one whose residual norm is no larger than a pre-specified value. Otherwise, the analysis is considered as a reasonable estimate and no change is made. A rule for choosing the pre-specified value is suggested. Based on this rule, the corresponding new state estimates are explicitly derived in case of linear observations. Numerical experiments in the 40-dimensional Lorenz 96 model show that introducing residual nudging to an EnKF may improve its accuracy and/or enhance its stability against filter divergence, especially in the small ensemble scenario.
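
    The residual-nudging rule can be sketched compactly: after the EnKF analysis, check the observation-space residual norm and, if it exceeds a threshold, pull the analysis mean back toward the observation just enough to meet the threshold. The blending rule below is a simplified stand-in for the explicit estimates derived in the paper for linear observations.

```python
import numpy as np

def residual_nudge(x_analysis, y, H, threshold):
    """Cap the observation-space residual norm of an analysis mean.

    If ||y - H x|| exceeds `threshold`, move the state along the
    pseudo-inverse direction of H until the residual norm equals the
    threshold (simplified illustration)."""
    r = y - H @ x_analysis
    norm = np.linalg.norm(r)
    if norm <= threshold:
        return x_analysis            # reasonable estimate: keep it
    correction = np.linalg.pinv(H) @ r
    alpha = 1.0 - threshold / norm   # just enough to hit the threshold
    return x_analysis + alpha * correction

x = np.zeros(4)
H = np.eye(2, 4)                     # observe the first two components
print(residual_nudge(x, np.array([3.0, 0.0]), H, threshold=1.0))
```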

  3. Deterministic Mean-Field Ensemble Kalman Filtering

    KAUST Repository

    Law, Kody

    2016-05-03

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.

  4. Performance Analysis of Local Ensemble Kalman Filter

    Science.gov (United States)

    Tong, Xin T.

    2018-03-01

    Ensemble Kalman filter (EnKF) is an important data assimilation method for high-dimensional geophysical systems. Efficient implementation of EnKF in practice often involves the localization technique, which updates each component using only information within a local radius. This paper rigorously analyzes the local EnKF (LEnKF) for linear systems and shows that the filter error can be dominated by the ensemble covariance, as long as (1) the sample size exceeds the logarithm of the state dimension and a constant that depends only on the local radius; (2) the forecast covariance matrix admits a stable localized structure. In particular, this indicates that with small system and observation noises, the filter error will be accurate in long time even if the initialization is not. The analysis also reveals an intrinsic inconsistency caused by the localization technique, and a stable localized structure is necessary to control this inconsistency. While this structure is usually taken for granted for the operation of LEnKF, it can also be rigorously proved for linear systems with sparse local observations and weak local interactions. These theoretical results are also validated by numerical implementation of LEnKF on a simple stochastic turbulence in two dynamical regimes.

  5. Ensemble Kalman filtering with residual nudging

    Directory of Open Access Journals (Sweden)

    Xiaodong Luo

    2012-10-01

    Full Text Available Covariance inflation and localisation are two important techniques that are used to improve the performance of the ensemble Kalman filter (EnKF) by (in effect) adjusting the sample covariances of the estimates in the state space. In this work, an additional auxiliary technique, called residual nudging, is proposed to monitor and, if necessary, adjust the residual norms of state estimates in the observation space. In an EnKF with residual nudging, if the residual norm of an analysis is larger than a pre-specified value, then the analysis is replaced by a new one whose residual norm is no larger than a pre-specified value. Otherwise, the analysis is considered as a reasonable estimate and no change is made. A rule for choosing the pre-specified value is suggested. Based on this rule, the corresponding new state estimates are explicitly derived in case of linear observations. Numerical experiments in the 40-dimensional Lorenz 96 model show that introducing residual nudging to an EnKF may improve its accuracy and/or enhance its stability against filter divergence, especially in the small ensemble scenario.

  6. Deterministic Mean-Field Ensemble Kalman Filtering

    KAUST Repository

    Law, Kody; Tembine, Hamidou; Tempone, Raul

    2016-01-01

    The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. A density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence k between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d<2k. The fidelity of approximation of the true distribution is also established using an extension of the total variation metric to random measures. This is limited by a Gaussian bias term arising from nonlinearity/non-Gaussianity of the model, which arises in both deterministic and standard EnKF. Numerical results support and extend the theory.

  7. Nonequilibrium statistical mechanics ensemble method

    CERN Document Server

    Eu, Byung Chan

    1998-01-01

    In this monograph, nonequilibrium statistical mechanics is developed by means of ensemble methods on the basis of the Boltzmann equation, the generic Boltzmann equations for classical and quantum dilute gases, and a generalised Boltzmann equation for dense simple fluids. The theories are developed in forms parallel with the equilibrium Gibbs ensemble theory, in a way fully consistent with the laws of thermodynamics. The generalised hydrodynamics equations are an integral part of the theory and describe the evolution of macroscopic processes in accordance with the laws of thermodynamics for systems far removed from equilibrium. Audience: This book will be of interest to researchers in the fields of statistical mechanics, condensed matter physics, gas dynamics, fluid dynamics, rheology, irreversible thermodynamics and nonequilibrium phenomena.

  8. Statistical Analysis of Protein Ensembles

    Science.gov (United States)

    Máté, Gabriell; Heermann, Dieter

    2014-04-01

    As 3D protein-configuration data piles up, there is an ever-increasing need for well-defined, mathematically rigorous analysis approaches, especially since the vast majority of the currently available methods rely heavily on heuristics. We propose an analysis framework which stems from topology, the field of mathematics which studies properties preserved under continuous deformations. First, we calculate a barcode representation of the molecules employing computational topology algorithms. Bars in this barcode represent different topological features. Molecules are compared through their barcodes by statistically determining the difference in the set of their topological features. As a proof-of-principle application, we analyze a dataset compiled of ensembles of different proteins, obtained from the Ensemble Protein Database. We demonstrate that our approach correctly detects the different protein groupings.

  9. Ensemble methods for handwritten digit recognition

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Liisberg, Christian; Salamon, P.

    1992-01-01

    Neural network ensembles are applied to handwritten digit recognition. The individual networks of the ensemble are combinations of sparse look-up tables (LUTs) with random receptive fields. It is shown that the consensus of a group of networks outperforms the best individual of the ensemble. ... It is further shown that it is possible to estimate the ensemble performance as well as the learning curve on a medium-size database. In addition the authors present preliminary analysis of experiments on a large database and show that state-of-the-art performance can be obtained using the ensemble approach ... by optimizing the receptive fields. It is concluded that it is possible to improve performance significantly by introducing moderate-size ensembles; in particular, a 20-25% improvement has been found. The ensemble random LUTs, when trained on a medium-size database, reach a performance (without rejects) of 94...

  10. Path-dependent functions

    International Nuclear Information System (INIS)

    Khrapko, R.I.

    1985-01-01

    A uniform description of various path-dependent functions is presented with the help of expansions of the Taylor-series type. So-called ''path-integrals'' and ''path-tensors'' are introduced, which are systems of many-component quantities whose values are defined for arbitrary paths in a coordinated region of space in such a way that they contain complete information on the path. These constructions are considered as elementary path-dependent functions and are used instead of power monomials in the usual Taylor series. Coefficients of such an expansion are interpreted as partial derivatives dependent on the order of the differentiations, or else as nonstandard covariant derivatives called two-point derivatives. Some examples of path-dependent functions are presented. A space curvature tensor is considered whose geometrical properties are determined by the (non-transitive) translator of parallel transport of a general type. A covariant operation leading to the ''extension'' of tensor fields is pointed out.

  11. Benchmarking Commercial Conformer Ensemble Generators.

    Science.gov (United States)

    Friedrich, Nils-Ole; de Bruyn Kops, Christina; Flachsenberg, Florian; Sommer, Kai; Rarey, Matthias; Kirchmair, Johannes

    2017-11-27

    We assess and compare the performance of eight commercial conformer ensemble generators (ConfGen, ConfGenX, cxcalc, iCon, MOE LowModeMD, MOE Stochastic, MOE Conformation Import, and OMEGA) and one leading free algorithm, the distance geometry algorithm implemented in RDKit. The comparative study is based on a new version of the Platinum Diverse Dataset, a high-quality benchmarking dataset of 2859 protein-bound ligand conformations extracted from the PDB. Differences in the performance of commercial algorithms are much smaller than those observed for free algorithms in our previous study (J. Chem. Inf. 2017, 57, 529-539). For commercial algorithms, the median minimum root-mean-square deviations measured between protein-bound ligand conformations and ensembles of a maximum of 250 conformers are between 0.46 and 0.61 Å. Commercial conformer ensemble generators are characterized by their high robustness, with at least 99% of all input molecules successfully processed and few or even no substantial geometrical errors detectable in their output conformations. The RDKit distance geometry algorithm (with minimization enabled) appears to be a good free alternative since its performance is comparable to that of the midranked commercial algorithms. Based on a statistical analysis, we elaborate on which algorithms to use and how to parametrize them for best performance in different application scenarios.
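
    A free-alternative workflow along the lines benchmarked here can be reproduced with RDKit's distance geometry implementation. The sketch below generates up to 250 conformers with ETKDG, minimizes them, and reports the best heavy-atom RMSD against one conformer taken as a stand-in reference; a real benchmark would compare against the protein-bound crystal conformation instead, and the example SMILES is illustrative.

```python
from rdkit import Chem
from rdkit.Chem import AllChem, rdMolAlign

# Illustrative ligand (ibuprofen); the benchmark uses PDB-extracted ligands.
mol = Chem.AddHs(Chem.MolFromSmiles("CC(C)Cc1ccc(cc1)C(C)C(=O)O"))

params = AllChem.ETKDGv3()
params.randomSeed = 42
cids = list(AllChem.EmbedMultipleConfs(mol, numConfs=250, params=params))

# Force-field minimization, as in the "minimization enabled" setting.
AllChem.MMFFOptimizeMoleculeConfs(mol)

# RMSD is conventionally computed on heavy atoms.
heavy = Chem.RemoveHs(mol)
ref_id = cids[0]   # stand-in reference conformer
rmsds = [rdMolAlign.GetBestRMS(heavy, heavy, prbId=cid, refId=ref_id)
         for cid in cids[1:]]
print(f"min RMSD to reference over {len(rmsds)} conformers: "
      f"{min(rmsds):.2f} A")
```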

  12. Pulled Motzkin paths

    Energy Technology Data Exchange (ETDEWEB)

    Janse van Rensburg, E J, E-mail: rensburg@yorku.c [Department of Mathematics and Statistics, York University, Toronto, ON, M3J 1P3 (Canada)

    2010-08-20

    In this paper the models of pulled Dyck paths in Janse van Rensburg (2010 J. Phys. A: Math. Theor. 43 215001) are generalized to pulled Motzkin path models. The generating functions of pulled Motzkin paths are determined in terms of series over trinomial coefficients and the elastic response of a Motzkin path pulled at its endpoint (see Orlandini and Whittington (2004 J. Phys. A: Math. Gen. 37 5305-14)) is shown to be R(f) = 0 for forces pushing the endpoint toward the adsorbing line and R(f) = f(1 + 2cosh f)/(2sinh f) → f as f → ∞, for forces pulling the path away from the X-axis. In addition, the elastic response of a Motzkin path pulled at its midpoint is shown to be R(f) = 0 for forces pushing the midpoint toward the adsorbing line and R(f) = f(1 + 2cosh (f/2))/sinh (f/2) → 2f as f → ∞, for forces pulling the path away from the X-axis. Formal combinatorial identities arising from pulled Motzkin path models are also presented. These identities are the generalization of combinatorial identities obtained in directed path models to their natural trinomial counterparts.

  13. Pulled Motzkin paths

    Science.gov (United States)

    Janse van Rensburg, E. J.

    2010-08-01

    In this paper the models of pulled Dyck paths in Janse van Rensburg (2010 J. Phys. A: Math. Theor. 43 215001) are generalized to pulled Motzkin path models. The generating functions of pulled Motzkin paths are determined in terms of series over trinomial coefficients and the elastic response of a Motzkin path pulled at its endpoint (see Orlandini and Whittington (2004 J. Phys. A: Math. Gen. 37 5305-14)) is shown to be R(f) = 0 for forces pushing the endpoint toward the adsorbing line and R(f) = f(1 + 2cosh f)/(2sinh f) → f as f → ∞, for forces pulling the path away from the X-axis. In addition, the elastic response of a Motzkin path pulled at its midpoint is shown to be R(f) = 0 for forces pushing the midpoint toward the adsorbing line and R(f) = f(1 + 2cosh (f/2))/sinh (f/2) → 2f as f → ∞, for forces pulling the path away from the X-axis. Formal combinatorial identities arising from pulled Motzkin path models are also presented. These identities are the generalization of combinatorial identities obtained in directed path models to their natural trinomial counterparts.

  14. Pulled Motzkin paths

    International Nuclear Information System (INIS)

    Janse van Rensburg, E J

    2010-01-01

    In this paper the models of pulled Dyck paths in Janse van Rensburg (2010 J. Phys. A: Math. Theor. 43 215001) are generalized to pulled Motzkin path models. The generating functions of pulled Motzkin paths are determined in terms of series over trinomial coefficients and the elastic response of a Motzkin path pulled at its endpoint (see Orlandini and Whittington (2004 J. Phys. A: Math. Gen. 37 5305-14)) is shown to be R(f) = 0 for forces pushing the endpoint toward the adsorbing line and R(f) = f(1 + 2cosh f)/(2sinh f) → f as f → ∞, for forces pulling the path away from the X-axis. In addition, the elastic response of a Motzkin path pulled at its midpoint is shown to be R(f) = 0 for forces pushing the midpoint toward the adsorbing line and R(f) = f(1 + 2cosh (f/2))/sinh (f/2) → 2f as f → ∞, for forces pulling the path away from the X-axis. Formal combinatorial identities arising from pulled Motzkin path models are also presented. These identities are the generalization of combinatorial identities obtained in directed path models to their natural trinomial counterparts.

  15. Multi-Dimensional Path Queries

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    1998-01-01

    We present the path-relationship model that supports multi-dimensional data modeling and querying. A path-relationship database is composed of sets of paths and sets of relationships. A path is a sequence of related elements (atoms, paths, and sets of paths). A relationship is a binary path used to create nested path structures. We present an SQL-like query language that is based on path expressions and we show how to use it to express multi-dimensional path queries that are suited for advanced data analysis in decision support environments like data warehousing environments.

  16. Efficient methods for including quantum effects in Monte Carlo calculations of large systems: extension of the displaced points path integral method and other effective potential methods to calculate properties and distributions.

    Science.gov (United States)

    Mielke, Steven L; Dinpajooh, Mohammadhasan; Siepmann, J Ilja; Truhlar, Donald G

    2013-01-07

    We present a procedure to calculate ensemble averages, thermodynamic derivatives, and coordinate distributions by effective classical potential methods. In particular, we consider the displaced-points path integral (DPPI) method, which yields exact quantal partition functions and ensemble averages for a harmonic potential and approximate quantal ones for general potentials, and we discuss the implementation of the new procedure in two Monte Carlo simulation codes, one that uses uncorrelated samples to calculate absolute free energies, and another that employs Metropolis sampling to calculate relative free energies. The results of the new DPPI method are compared to those from accurate path integral calculations as well as to results of two other effective classical potential schemes for the case of an isolated water molecule. In addition to the partition function, we consider the heat capacity and expectation values of the energy, the potential energy, the bond angle, and the OH distance. We also consider coordinate distributions. The DPPI scheme performs best among the three effective potential schemes considered and achieves very good accuracy for all of the properties considered. A key advantage of the effective potential schemes is that they display much lower statistical sampling variances than those for accurate path integral calculations. The method presented here shows great promise for including quantum effects in calculations on large systems.

  17. Method for sampling and analysis of volatile biomarkers in process gas from aerobic digestion of poultry carcasses using time-weighted average SPME and GC-MS.

    Science.gov (United States)

    Koziel, Jacek A; Nguyen, Lam T; Glanville, Thomas D; Ahn, Heekwon; Frana, Timothy S; Hans van Leeuwen, J

    2017-10-01

    A passive sampling method, using retracted solid-phase microextraction (SPME) with gas chromatography-mass spectrometry and time-weighted averaging, was developed and validated for tracking marker volatile organic compounds (VOCs) emitted during aerobic digestion of biohazardous animal tissue. The retracted SPME configuration protects the fragile fiber from buffeting by the process gas stream, and it requires less equipment and is potentially more biosecure than conventional active sampling methods. VOC concentrations predicted via a model based on Fick's first law of diffusion were within 6.6-12.3% of experimentally controlled values after accounting for VOC adsorption to the SPME fiber housing. Method detection limits for five marker VOCs ranged from 0.70 to 8.44 ppbv and were statistically equivalent (p>0.05) to those for active sorbent-tube-based sampling. A sampling time of 30 min and a fiber retraction of 5 mm were found to be optimal for the tissue digestion process. Copyright © 2017 Elsevier Ltd. All rights reserved.
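
    The time-weighted-average concentration recovered by a retracted SPME sampler follows from Fick's first law: the mass loaded on the fiber over the sampling time, together with the gas-phase diffusion coefficient, the cross-section of the needle opening, and the retraction depth, gives the average analyte concentration, C = m Z / (D A t). The sketch below implements that relation; the retraction depth and sampling time match the values reported above, while the mass, needle bore, and diffusion coefficient are assumed for illustration and are not the paper's calibration values.

```python
import math

def twa_concentration(mass_ng, retraction_mm, needle_id_mm,
                      diff_coeff_cm2_s, time_min):
    """Time-weighted-average gas concentration from a retracted SPME
    sampler via Fick's first law: C = m * Z / (D * A * t)."""
    Z = retraction_mm / 10.0                         # diffusion path, cm
    A = math.pi * (needle_id_mm / 10.0 / 2.0) ** 2   # opening area, cm^2
    t = time_min * 60.0                              # sampling time, s
    return mass_ng * Z / (diff_coeff_cm2_s * A * t)  # ng / cm^3

# 5 mm retraction and 30 min sampling as in the study; other values assumed.
c = twa_concentration(mass_ng=0.8, retraction_mm=5.0, needle_id_mm=0.6,
                      diff_coeff_cm2_s=0.08, time_min=30.0)
print(f"TWA concentration: {c:.3f} ng/cm^3")
```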

  18. Dynamic principle for ensemble control tools.

    Science.gov (United States)

    Samoletov, A; Vasiev, B

    2017-11-28

    Dynamical equations describing physical systems in contact with a thermal bath are commonly extended by mathematical tools called "thermostats." These tools are designed for sampling ensembles in statistical mechanics. Here we propose a dynamic principle underlying a range of thermostats which is derived using fundamental laws of statistical physics and ensures invariance of the canonical measure. The principle covers both stochastic and deterministic thermostat schemes. Our method has a clear advantage over a range of proposed and widely used thermostat schemes that are based on formal mathematical reasoning. Following the derivation of the proposed principle, we show its generality and illustrate its applications including design of temperature control tools that differ from the Nosé-Hoover-Langevin scheme.
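
    As a concrete example of a thermostat that preserves the canonical measure, the sketch below implements a Langevin (BAOAB-style) integrator for a one-dimensional harmonic oscillator. This is a standard textbook scheme used only to illustrate what such control tools do; it is not the specific dynamic principle proposed in the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(4)

def baoab(x, p, force, dt=0.01, gamma=1.0, kT=1.0, mass=1.0, steps=50000):
    """Langevin (BAOAB) integrator: samples exp(-H/kT) in the long run."""
    c1 = math.exp(-gamma * dt)
    c2 = math.sqrt(kT * mass * (1.0 - c1 * c1))
    xs = np.empty(steps)
    for i in range(steps):
        p += 0.5 * dt * force(x)            # B: half kick
        x += 0.5 * dt * p / mass            # A: half drift
        p = c1 * p + c2 * rng.normal()      # O: thermostat (OU step)
        x += 0.5 * dt * p / mass            # A: half drift
        p += 0.5 * dt * force(x)            # B: half kick
        xs[i] = x
    return xs

xs = baoab(0.0, 0.0, force=lambda x: -x)    # harmonic well with k = 1
print("position variance, should approach kT/k = 1:",
      round(float(xs.var()), 3))
```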

  19. Unique Path Partitions

    DEFF Research Database (Denmark)

    Bessenrodt, Christine; Olsson, Jørn Børling; Sellers, James A.

    2013-01-01

    We give a complete classification of the unique path partitions and study congruence properties of the function which enumerates such partitions.......We give a complete classification of the unique path partitions and study congruence properties of the function which enumerates such partitions....

  20. Hamiltonian path integrals

    International Nuclear Information System (INIS)

    Prokhorov, L.V.

    1982-01-01

    The properties of path integrals associated with the allowance for nonstandard terms reflecting the operator nature of the canonical variables are considered. Rules for treating such terms (''equivalence rules'') are formulated. Problems with a boundary, the behavior of path integrals under canonical transformations, and the problem of quantization of dynamical systems with constraints are considered in the framework of the method

  1. An approach for analyzing the ensemble mean from a dynamic point of view

    OpenAIRE

    Pengfei, Wang

    2014-01-01

    Simultaneous ensemble mean equations (LEMEs) for the Lorenz model are obtained, enabling us to analyze the properties of the ensemble mean from a dynamical point of view. The qualitative analysis of the two-sample and n-sample LEMEs shows that the locations and number of stable points differ from those of the Lorenz equations (LEs), and the results are validated by numerical experiments. The analysis of the eigenmatrix of the stable points of the LEMEs indicates that the stability of these stable point...
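
    The flavor of such a numerical experiment can be reproduced quickly: integrate an ensemble of Lorenz-63 trajectories from perturbed initial conditions and inspect the trajectory of the ensemble mean, whose dynamics need not match any single member. Standard Lorenz parameters are used; everything else (perturbation size, ensemble size) is illustrative.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def integrate(s0, dt=0.01, steps=5000):
    """Integrate one Lorenz-63 trajectory with classical RK4."""
    s = s0.copy()
    traj = np.empty((steps, 3))
    for i in range(steps):
        k1 = lorenz_rhs(s)
        k2 = lorenz_rhs(s + 0.5 * dt * k1)
        k3 = lorenz_rhs(s + 0.5 * dt * k2)
        k4 = lorenz_rhs(s + dt * k3)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = s
    return traj

rng = np.random.default_rng(2)
members = [integrate(np.array([1.0, 1.0, 1.0]) + 0.1 * rng.normal(size=3))
           for _ in range(20)]
ens_mean = np.mean(members, axis=0)   # trajectory of the ensemble mean
print("late-time ensemble mean:", ens_mean[-1].round(2))
```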

  2. A comparison of resampling schemes for estimating model observer performance with small ensembles

    Science.gov (United States)

    Elshahaby, Fatma E. A.; Jha, Abhinav K.; Ghaly, Michael; Frey, Eric C.

    2017-09-01

    In objective assessment of image quality, an ensemble of images is used to compute the 1st and 2nd order statistics of the data. Often, only a finite number of images is available, leading to the issue of statistical variability in numerical observer performance. Resampling-based strategies can help overcome this issue. In this paper, we compared different combinations of resampling schemes (the leave-one-out (LOO) and the half-train/half-test (HT/HT)) and model observers (the conventional channelized Hotelling observer (CHO), channelized linear discriminant (CLD) and channelized quadratic discriminant). Observer performance was quantified by the area under the ROC curve (AUC). For a binary classification task and for each observer, the AUC value for an ensemble size of 2000 samples per class served as a gold standard for that observer. Results indicated that each observer yielded a different performance depending on the ensemble size and the resampling scheme. For a small ensemble size, the combination [CHO, HT/HT] had more accurate rankings than the combination [CHO, LOO]. Using the LOO scheme, the CLD and CHO had similar performance for large ensembles. However, the CLD outperformed the CHO and gave more accurate rankings for smaller ensembles. As the ensemble size decreased, the performance of the [CHO, LOO] combination seriously deteriorated as opposed to the [CLD, LOO] combination. Thus, it might be desirable to use the CLD with the LOO scheme when smaller ensemble size is available.
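
    The two resampling schemes compared can be sketched with any linear observer. Below, scikit-learn's linear discriminant analysis stands in for the channelized observers, and AUC is estimated under a half-train/half-test split versus leave-one-out on a small synthetic two-class ensemble; the data and observer are stand-ins, not the images or channelized Hotelling observer of the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n, d = 60, 10                        # small ensemble, channelized data
X = np.vstack([rng.normal(0.0, 1, (n, d)), rng.normal(0.5, 1, (n, d))])
y = np.r_[np.zeros(n), np.ones(n)]

# Half-train/half-test: fit on one half, score AUC on the other half.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, stratify=y,
                                      random_state=0)
ht = roc_auc_score(yte, LinearDiscriminantAnalysis().fit(Xtr, ytr)
                   .decision_function(Xte))

# Leave-one-out: each sample scored by an observer trained on the rest.
scores = np.array([LinearDiscriminantAnalysis()
                   .fit(np.delete(X, i, 0), np.delete(y, i))
                   .decision_function(X[i:i + 1])[0]
                   for i in range(len(y))])
loo = roc_auc_score(y, scores)
print(f"HT/HT AUC: {ht:.3f}  LOO AUC: {loo:.3f}")
```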

  3. Measuring social interaction in music ensembles.

    Science.gov (United States)

    Volpe, Gualtiero; D'Ausilio, Alessandro; Badino, Leonardo; Camurri, Antonio; Fadiga, Luciano

    2016-05-05

    Music ensembles are an ideal test-bed for quantitative analysis of social interaction. Music is an inherently social activity, and music ensembles offer a broad variety of scenarios which are particularly suitable for investigation. Small ensembles, such as string quartets, are deemed a significant example of self-managed teams, where all musicians contribute equally to a task. In bigger ensembles, such as orchestras, the relationship between a leader (the conductor) and a group of followers (the musicians) clearly emerges. This paper presents an overview of recent research on social interaction in music ensembles with a particular focus on (i) studies from cognitive neuroscience; and (ii) studies adopting a computational approach for carrying out automatic quantitative analysis of ensemble music performances. © 2016 The Author(s).

  4. Data Pre-Analysis and Ensemble of Various Artificial Neural Networks for Monthly Streamflow Forecasting

    Directory of Open Access Journals (Sweden)

    Jianzhong Zhou

    2018-05-01

    Full Text Available This paper introduces three artificial neural network (ANN) architectures for monthly streamflow forecasting: a radial basis function network, an extreme learning machine, and the Elman network. Three ensemble techniques, a simple average ensemble, a weighted average ensemble, and an ANN-based ensemble, were used to combine the outputs of the individual ANN models. The objective was to highlight the performance of the general regression neural network-based ensemble technique (GNE) through an improvement of monthly streamflow forecasting accuracy. Before the construction of an ANN model, data preanalysis techniques, such as empirical wavelet transform (EWT), were exploited to eliminate the oscillations of the streamflow series. Additionally, a theory of chaos phase space reconstruction was used to select the most relevant and important input variables for forecasting. The proposed GNE ensemble model has been applied for the mean monthly streamflow observation data from the Wudongde hydrological station in the Jinsha River Basin, China. Comparisons and analysis of this study have demonstrated that the denoised streamflow time series was less disordered and unsystematic than was suggested by the original time series according to chaos theory. Thus, EWT can be adopted as an effective data preanalysis technique for the prediction of monthly streamflow. Concurrently, the GNE performed better when compared with other ensemble techniques.

  5. Statistical ensembles in quantum mechanics

    International Nuclear Information System (INIS)

    Blokhintsev, D.

    1976-01-01

    The interpretation of quantum mechanics presented in this paper is based on the concept of quantum ensembles. This concept differs essentially from the canonical one in that the intervention of the observer in the state of a microscopic system is given no greater importance than in any other field of physics. Owing to this fact, the laws established by quantum mechanics are no less objective in character than the laws governing classical statistical mechanics. The paradoxical nature of some statements of quantum mechanics, which results from interpreting the wave functions as the observer's notebook, greatly stimulated the development of the idea presented. (Auth.)

  6. Wind Power Prediction using Ensembles

    DEFF Research Database (Denmark)

    Giebel, Gregor; Badger, Jake; Landberg, Lars

    2005-01-01

    offshore wind farm and the whole Jutland/Funen area. The utilities used these forecasts for maintenance planning, fuel consumption estimates and over-the-weekend trading on the Leipzig power exchange. Other notable scientific results include the better accuracy of forecasts made up from a simple...... superposition of two NWP providers (in our case, DMI and DWD), an investigation of the merits of a parameterisation of the turbulent kinetic energy within the delivered wind speed forecasts, and the finding that a “naïve” downscaling of each of the coarse ECMWF ensemble members with higher resolution HIRLAM did

  7. A Path Space Extension for Robust Light Transport Simulation

    DEFF Research Database (Denmark)

    Hachisuka, Toshiya; Pantaleoni, Jacopo; Jensen, Henrik Wann

    2012-01-01

    We present a new sampling space for light transport paths that makes it possible to describe Monte Carlo path integration and photon density estimation in the same framework. A key contribution of our paper is the introduction of vertex perturbations, which extends the space of paths with loosely...

  8. EnsembleGASVR: A novel ensemble method for classifying missense single nucleotide polymorphisms

    KAUST Repository

    Rapakoulia, Trisevgeni; Theofilatos, Konstantinos A.; Kleftogiannis, Dimitrios A.; Likothanasis, Spiridon D.; Tsakalidis, Athanasios K.; Mavroudi, Seferina P.

    2014-01-01

    do not support their predictions with confidence scores. Results: To overcome these limitations, a novel ensemble computational methodology is proposed. EnsembleGASVR facilitates a two-step algorithm, which in its first step applies a novel

  9. Multi-Model Ensemble Wake Vortex Prediction

    Science.gov (United States)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

    Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between National Aeronautics and Space Administration and Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.

  10. Urban runoff forecasting with ensemble weather predictions

    DEFF Research Database (Denmark)

    Pedersen, Jonas Wied; Courdent, Vianney Augustin Thomas; Vezzaro, Luca

    This research shows how ensemble weather forecasts can be used to generate urban runoff forecasts up to 53 hours into the future. The results highlight systematic differences between ensemble members that need to be accounted for when these forecasts are used in practice.

  11. Novel liquid chromatography method based on linear weighted regression for the fast determination of isoprostane isomers in plasma samples using sensitive tandem mass spectrometry detection.

    Science.gov (United States)

    Aszyk, Justyna; Kot, Jacek; Tkachenko, Yurii; Woźniak, Michał; Bogucka-Kocka, Anna; Kot-Wasik, Agata

    2017-04-15

    A simple, fast, sensitive and accurate methodology based on LLE followed by liquid chromatography-tandem mass spectrometry for the simultaneous determination of four regioisomers (8-iso prostaglandin F2α, 8-iso-15(R)-prostaglandin F2α, 11β-prostaglandin F2α, 15(R)-prostaglandin F2α) in routine analysis of human plasma samples was developed. Isoprostanes are stable products of arachidonic acid peroxidation and are regarded as the most reliable markers of oxidative stress in vivo. Validation of the method was performed by evaluation of key analytical parameters such as matrix effect, analytical curve, trueness, precision, limits of detection and limits of quantification. As homoscedasticity was not met for the analytical data, weighted linear regression was applied in order to improve the accuracy at the lower end of the calibration curve. The detection limits (LODs) ranged from 1.0 to 2.1 pg/mL. For plasma samples spiked with the isoprostanes at the level of 50 pg/mL, intra- and inter-day repeatability ranged from 2.1 to 3.5% and 0.1 to 5.1%, respectively. The applicability of the proposed approach has been verified by monitoring isoprostane isomer levels in plasma samples collected from young patients (n=8) subjected to hyperbaric hyperoxia (100% oxygen at 280 kPa(a) for 30 min) in a multiplace hyperbaric chamber. Copyright © 2017 Elsevier B.V. All rights reserved.
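
    The weighting step can be reproduced in outline with a standard weighted least-squares calibration fit; the concentrations, responses and 1/x weighting below are assumptions for illustration, not the paper's data or exact weighting scheme.

```python
import numpy as np

# Hypothetical calibration points: concentration (pg/mL) vs. response
conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 250.0, 500.0])
resp = np.array([0.9, 2.1, 5.2, 9.8, 21.0, 49.5, 103.0])

# Heteroscedastic noise grows with concentration, so give low-end points
# larger weights; 1/x is one common choice in bioanalytical work.
weights = 1.0 / conc
# np.polyfit scales residuals by w, so pass the square root of the weights
slope, intercept = np.polyfit(conc, resp, deg=1, w=np.sqrt(weights))
print(slope, intercept)
```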

  12. A molecular dynamics algorithm for simulation of field theories in the canonical ensemble

    International Nuclear Information System (INIS)

    Kogut, J.B.; Sinclair, D.K.

    1986-01-01

    We add a single scalar degree of freedom (''demon'') to the microcanonical ensemble which converts its molecular dynamics into a simulation method for the canonical ensemble (euclidean path integral) of the underlying field theory. This generalization of the microcanonical molecular dynamics algorithm simulates the field theory at fixed coupling with a completely deterministic procedure. We discuss the finite size effects of the method, the equipartition theorem and ergodicity. The method is applied to the planar model in two dimensions and SU(3) lattice gauge theory with four species of light, dynamical quarks in four dimensions. The method is much less sensitive to its discrete time step than conventional Langevin equation simulations of the canonical ensemble. The method is a straightforward generalization of a procedure introduced by S. Nose for molecular physics. (orig.)

  13. A Simple Ensemble Simulation Technique for Assessment of Future Variations in Specific High-Impact Weather Events

    Science.gov (United States)

    Taniguchi, Kenji

    2018-04-01

    To investigate future variations in high-impact weather events, numerous samples are required. For a detailed assessment in a specific region, a high spatial resolution is also required. A simple ensemble simulation technique is proposed in this paper. In the proposed technique, new ensemble members were generated from one basic state vector and two perturbation vectors, which were obtained by lagged average forecasting simulations. Sensitivity experiments with different numbers of ensemble members, different simulation lengths, and different perturbation magnitudes were performed. An experimental application to a global warming study was also implemented for a typhoon event. Ensemble-mean results and ensemble spreads of total precipitation and atmospheric conditions showed similar characteristics across the sensitivity experiments. The frequencies of the maximum total and hourly precipitation also showed similar distributions. These results indicate the robustness of the proposed technique. On the other hand, considerable ensemble spread was found in each ensemble experiment. In addition, the results of the application to a global warming study showed possible variations in the future. These results indicate that the proposed technique is useful for investigating various meteorological phenomena and the impacts of global warming. The results of the ensemble simulations also enable the stochastic evaluation of differences in high-impact weather events. In addition, the impacts of a spectral nudging technique were examined. The tracks of a typhoon were quite different between the cases with and without spectral nudging; however, the ranges of the tracks among ensemble members were comparable. This indicates that spectral nudging does not necessarily suppress ensemble spread.
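
    The member-generation step can be sketched in a few lines; spreading the combination coefficients on a circle in the plane spanned by the two perturbation vectors is an illustrative choice, not necessarily the paper's exact rule.

```python
import numpy as np

def make_members(base, p1, p2, n_members, amplitude=1.0):
    # Each member is the basic state plus a scaled mix of the two
    # perturbation vectors obtained from lagged average forecasting runs.
    thetas = np.linspace(0.0, 2.0 * np.pi, n_members, endpoint=False)
    return [base + amplitude * (np.cos(t) * p1 + np.sin(t) * p2)
            for t in thetas]

base = np.zeros(4)
p1 = np.array([1.0, 0.0, 0.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0, 0.0])
print(make_members(base, p1, p2, n_members=4))
```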

  14. Enhancing COSMO-DE ensemble forecasts by inexpensive techniques

    Directory of Open Access Journals (Sweden)

    Zied Ben Bouallègue

    2013-02-01

    Full Text Available COSMO-DE-EPS, a convection-permitting ensemble prediction system based on the high-resolution numerical weather prediction model COSMO-DE, has been pre-operational since December 2010, providing probabilistic forecasts that cover Germany. This ensemble system comprises 20 members based on variations of the lateral boundary conditions, the physics parameterizations and the initial conditions. In order to increase the sample size in a computationally inexpensive way, COSMO-DE-EPS is combined with alternative ensemble techniques: the neighborhood method and the time-lagged approach. Their impact on the quality of the resulting probabilistic forecasts is assessed. Objective verification is performed over a six-month period, and scores based on the Brier score and its decomposition are shown for June 2011. The combination of the ensemble system with the alternative approaches improves probabilistic forecasts of precipitation, in particular for high precipitation thresholds. Moreover, combining COSMO-DE-EPS with only the time-lagged approach improves the skill of area probabilities for precipitation and does not deteriorate the skill of 2 m-temperature and wind gust forecasts.

  15. Path integration quantization

    International Nuclear Information System (INIS)

    DeWitt-Morette, C.

    1983-01-01

    Much is expected of path integration as a quantization procedure. Much more is possible if one recognizes that path integration is at the crossroad of stochastic and differential calculus and uses the full power of both stochastic and differential calculus in setting up and computing path integrals. In contrast to differential calculus, stochastic calculus has only comparatively recently become an instrument of thought. It has nevertheless already been used in a variety of challenging problems, for instance in the quantization problem. The author presents some applications of the stochastic scheme. (Auth.)

  16. Two dimensional simplicial paths

    International Nuclear Information System (INIS)

    Piso, M.I.

    1994-07-01

    Paths on the R^3 real Euclidean manifold are defined as 2-dimensional simplicial strips which are orbits of the action of a discrete one-parameter group. It is proven that there exists at least one embedding of R^3 in the free Z-module generated by S^2(x_0). The speed is defined as the simplicial derivative of the path. If mass is attached to the simplex, the free Lagrangian is proportional to the width of the path. In the continuum limit, the relativistic form of the Lagrangian is recovered. (author). 7 refs

  17. Enhanced reconstruction of weighted networks from strengths and degrees

    International Nuclear Information System (INIS)

    Mastrandrea, Rossana; Fagiolo, Giorgio; Squartini, Tiziano; Garlaschelli, Diego

    2014-01-01

    Network topology plays a key role in many phenomena, from the spreading of diseases to that of financial crises. Whenever the whole structure of a network is unknown, one must resort to reconstruction methods that identify the least biased ensemble of networks consistent with the partial information available. A challenging case, frequently encountered due to privacy issues in the analysis of interbank flows and Big Data, is when there is only local (node-specific) aggregate information available. For binary networks, the relevant ensemble is one where the degree (number of links) of each node is constrained to its observed value. However, for weighted networks the problem is much more complicated. While the naïve approach prescribes to constrain the strengths (total link weights) of all nodes, recent counter-intuitive results suggest that in weighted networks the degrees are often more informative than the strengths. This implies that the reconstruction of weighted networks would be significantly enhanced by the specification of both strengths and degrees, a computationally hard and bias-prone procedure. Here we solve this problem by introducing an analytical and unbiased maximum-entropy method that works in the shortest possible time and does not require the explicit generation of reconstructed samples. We consider several real-world examples and show that, while the strengths alone give poor results, the additional knowledge of the degrees yields accurately reconstructed networks. Information-theoretic criteria rigorously confirm that the degree sequence, as soon as it is non-trivial, is irreducible to the strength sequence. Our results have strong implications for the analysis of motifs and communities and whenever the reconstructed ensemble is required as a null model to detect higher-order patterns.

  18. Zero-Slack, Noncritical Paths

    Science.gov (United States)

    Simons, Jacob V., Jr.

    2017-01-01

    The critical path method/program evaluation and review technique (CPM/PERT) approach to project scheduling is based on the importance of managing a project's critical path(s). Although a critical path is the longest path through a network, its location in large projects is facilitated by the computation of activity slack. However, logical fallacies in…

  19. Joys of Community Ensemble Playing: The Case of the Happy Roll Elastic Ensemble in Taiwan

    Science.gov (United States)

    Hsieh, Yuan-Mei; Kao, Kai-Chi

    2012-01-01

    The Happy Roll Elastic Ensemble (HREE) is a community music ensemble supported by Tainan Culture Centre in Taiwan. With enjoyment and friendship as its primary goals, it aims to facilitate the joys of ensemble playing and the spirit of social networking. This article highlights the key aspects of HREE's development in its first two years…

  20. Chinese version of Impact of Weight on Quality of Life for Kids: psychometric properties in a large school-based sample.

    Science.gov (United States)

    He, Jinbo; Zhu, Hong; Luo, Xingwei; Cai, Taisheng; Wu, Siyao; Lu, Yao

    2016-06-01

    The Impact of Weight on Quality of Life for Kids (IWQOL-Kids) is the first self-report questionnaire for assessing weight-related quality of life in youth. However, there was no Chinese version of IWQOL-Kids. Thus, the objective of this research was to translate IWQOL-Kids into Mandarin and evaluate its psychometric properties in a large school-based sample. The total sample included 2282 participants aged 11-18 years, including 1703 non-overweight, 386 overweight and 193 obese students. IWQOL-Kids was translated and culturally adapted following the international guidelines for instrument linguistic validation procedures. The psychometric evaluation included internal consistency, test-retest reliability, exploratory factor analysis (EFA), confirmatory factor analysis (CFA), convergent validity and discriminant validity. Cronbach's α for the Chinese version of IWQOL-Kids (IWQOL-Kids-C) was 0.956 and ranged from 0.891 to 0.927 for subscales. IWQOL-Kids-C showed a test-retest coefficient of 0.937 after 2 weeks, ranging from 0.847 to 0.903 for subscales. The original four-factor model was reproduced by EFA after seven iterations, accounting for 69.28% of the total variance. CFA demonstrated that the four-factor model had good fit indices, with comparative fit index = 0.92, normed fit index = 0.91, goodness of fit index = 0.86, root mean square error of approximation = 0.07 and root mean square residual = 0.03. Convergent validity and discriminant validity were demonstrated by higher correlations between similar constructs and lower correlations between dissimilar constructs of IWQOL-Kids-C and PedsQL™ 4.0. Significant differences were found across the body mass index groups, and IWQOL-Kids-C had higher effect sizes than PedsQL™ 4.0 when comparing non-overweight and obese groups, supporting the sensitivity of IWQOL-Kids-C. IWQOL-Kids-C is a satisfactory, valid and reliable instrument to assess weight-related quality of life for Chinese children and

  1. A Simple Approach to Account for Climate Model Interdependence in Multi-Model Ensembles

    Science.gov (United States)

    Herger, N.; Abramowitz, G.; Angelil, O. M.; Knutti, R.; Sanderson, B.

    2016-12-01

    Multi-model ensembles are an indispensable tool for future climate projection and its uncertainty quantification. Ensembles containing multiple climate models generally have increased skill, consistency and reliability. Due to the lack of agreed-on alternatives, most scientists use the equally-weighted multi-model mean, as they subscribe to model democracy ("one model, one vote"). Different research groups are known to share sections of code, parameterizations in their model, literature, or even whole model components. Therefore, individual model runs do not represent truly independent estimates. Ignoring this dependence structure might lead to a false model consensus and a wrong estimation of uncertainty and of the effective number of independent models. Here, we present a way to partially address this problem by selecting a subset of CMIP5 model runs so that its climatological mean minimizes the RMSE compared to a given observation product. Due to the cancelling out of errors, regional biases in the ensemble mean are reduced significantly. Using a model-as-truth experiment we demonstrate that those regional biases persist into the future and that we are not fitting noise, thus providing improved observationally-constrained projections of the 21st century. The optimally selected ensemble shows significantly higher global mean surface temperature projections than the original ensemble, where all the model runs are considered. Moreover, the spread is decreased well beyond that expected from the decreased ensemble size. Several previous studies have recommended an ensemble selection approach based on performance ranking of the model runs. Here, we show that this approach can perform even worse than randomly selecting ensemble members and can thus be harmful. We suggest that accounting for interdependence in the ensemble selection process is a necessary step for robust projections for use in impact assessments, adaptation and mitigation of climate change.
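
    A greedy forward-selection sketch of the subset idea, assuming flattened climatological fields; the greedy search and the synthetic data stand in for the paper's optimization, whose exact algorithm is not specified in this abstract.

```python
import numpy as np

def select_subset(runs, obs, k):
    # Pick k runs whose ensemble mean minimizes RMSE vs. observations
    chosen, remaining = [], list(range(len(runs)))
    while len(chosen) < k:
        def rmse_with(i):
            mean = runs[chosen + [i]].mean(axis=0)
            return np.sqrt(((mean - obs) ** 2).mean())
        best = min(remaining, key=rmse_with)
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(0)
runs = rng.normal(size=(12, 100))   # 12 hypothetical model runs
obs = rng.normal(size=100)          # hypothetical observation product
print(select_subset(runs, obs, k=4))
```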

  2. Groebner Finite Path Algebras

    OpenAIRE

    Leamer, Micah J.

    2004-01-01

    Let K be a field and Q a finite directed multi-graph. In this paper I classify all path algebras KQ and admissible orders with the property that all of their finitely generated ideals have finite Groebner bases.

  3. Detection of eardrum abnormalities using ensemble deep learning approaches

    Science.gov (United States)

    Senaras, Caglar; Moberly, Aaron C.; Teknos, Theodoros; Essig, Garth; Elmaraghy, Charles; Taj-Schaal, Nazhat; Yua, Lianbo; Gurcan, Metin N.

    2018-02-01

    In this study, we proposed an approach to report the condition of the eardrum as "normal" or "abnormal" by ensembling two different deep learning architectures. In the first network (Network 1), we applied transfer learning to the Inception V3 network by using 409 labeled samples. As the second network (Network 2), we designed a convolutional neural network to take advantage of auto-encoders by using an additional 673 unlabeled eardrum samples. The individual classification accuracies of Network 1 and Network 2 were calculated as 84.4% (±12.1%) and 82.6% (±11.3%), respectively. Only 32% of the errors of the two networks were the same, making it possible to combine the two approaches to achieve better classification accuracy. The proposed ensemble method allows us to achieve robust classification because it has high accuracy (84.4%) with the lowest standard deviation (±10.3%).

  4. Boundary between the thermal and statistical polarization regimes in a nuclear spin ensemble

    International Nuclear Information System (INIS)

    Herzog, B. E.; Cadeddu, D.; Xue, F.; Peddibhotla, P.; Poggio, M.

    2014-01-01

    As the number of spins in an ensemble is reduced, the statistical fluctuations in its polarization eventually exceed the mean thermal polarization. This transition has now been surpassed in a number of recent nuclear magnetic resonance experiments, which achieve nanometer-scale detection volumes. Here, we measure nanometer-scale ensembles of nuclear spins in a KPF6 sample using magnetic resonance force microscopy. In particular, we investigate the transition between regimes dominated by thermal and statistical nuclear polarization. The ratio between the two types of polarization provides a measure of the number of spins in the detected ensemble.
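
    The location of the boundary follows from a standard back-of-the-envelope estimate (not quoted from this record): for N spin-1/2 nuclei the mean thermal polarization scales with N while the statistical fluctuation scales with the square root of N, giving a crossover spin number N*.

```latex
% Thermal vs. statistical polarization (spin-1/2, high-temperature limit)
P_{\mathrm{thermal}} \approx N p, \qquad
p = \tanh\!\left(\tfrac{\mu B}{k_B T}\right) \approx \tfrac{\mu B}{k_B T}, \qquad
\Delta P_{\mathrm{stat}} \approx \sqrt{N}
\quad\Longrightarrow\quad
N^{*} \approx \frac{1}{p^{2}} = \left(\frac{k_B T}{\mu B}\right)^{2}.
```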

  5. Monte-Carlo approach to the generation of adversary paths

    International Nuclear Information System (INIS)

    1977-01-01

    This paper considers the definition of a threat as the sequence of events that might lead to adversary success. A nuclear facility is characterized as a weighted, labeled, directed graph, with critical adversary paths. A discrete-event, Monte-Carlo simulation model is used to estimate the probability of the critical paths. The model was tested for hypothetical facilities, with promising results.
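
    A toy version of such a simulation follows, with an invented facility graph; all node names, edge probabilities and paths are hypothetical, purely to show the estimation mechanics.

```python
import random

# Hypothetical facility graph: each edge is a protection element with a
# probability that the adversary defeats it (names are illustrative).
edges = {
    ("outside", "fence"): 0.9,
    ("fence", "vault_door"): 0.4,
    ("fence", "roof"): 0.6,
    ("roof", "vault_door"): 0.5,
    ("vault_door", "target"): 0.2,
}
paths = [
    ["outside", "fence", "vault_door", "target"],
    ["outside", "fence", "roof", "vault_door", "target"],
]

def simulate(n_trials=100_000):
    wins = 0
    for _ in range(n_trials):
        # Sample success/failure of every protection element once per trial
        up = {e: random.random() < p for e, p in edges.items()}
        if any(all(up[(a, b)] for a, b in zip(path, path[1:]))
               for path in paths):
            wins += 1
    return wins / n_trials

print("Estimated probability of adversary success:", simulate())
```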

  6. Path planning in changeable environments

    NARCIS (Netherlands)

    Nieuwenhuisen, D.

    2007-01-01

    This thesis addresses path planning in changeable environments. In contrast to traditional path planning that deals with static environments, in changeable environments objects are allowed to change their configurations over time. In many cases, path planning algorithms must facilitate quick

  7. Popular Music and the Instrumental Ensemble.

    Science.gov (United States)

    Boespflug, George

    1999-01-01

    Discusses popular music, the role of the musical performer as a creator, and the styles of jazz and popular music. Describes the pop ensemble at the college level, focusing on improvisation, rehearsals, recording, and performance. Argues that pop ensembles be used in junior and senior high school. (CMK)

  8. Layered Ensemble Architecture for Time Series Forecasting.

    Science.gov (United States)

    Rahman, Md Mustafizur; Islam, Md Monirul; Murase, Kazuyuki; Yao, Xin

    2016-01-01

    Time series forecasting (TSF) has been widely used in many application areas such as science, engineering, and finance. The phenomena generating time series are usually unknown, and the information available for forecasting is limited to the past values of the series. It is, therefore, necessary to use an appropriate number of past values, termed the lag, for forecasting. This paper proposes a layered ensemble architecture (LEA) for TSF problems. Our LEA consists of two layers, each of which uses an ensemble of multilayer perceptron (MLP) networks. While the first ensemble layer tries to find an appropriate lag, the second ensemble layer employs the obtained lag for forecasting. Unlike most previous work on TSF, the proposed architecture considers both the accuracy and the diversity of the individual networks in constructing an ensemble. LEA trains different networks in the ensemble by using different training sets, with the aim of maintaining diversity among the networks. However, it uses the appropriate lag and combines the best trained networks to construct the ensemble. This indicates LEA's emphasis on the accuracy of the networks. The proposed architecture has been tested extensively on time series data from the NN3 and NN5 competitions. It has also been tested on several standard benchmark time series data sets. In terms of forecasting accuracy, our experimental results clearly reveal that LEA is better than other ensemble and non-ensemble methods.

  9. Ensemble methods for seasonal limited area forecasts

    DEFF Research Database (Denmark)

    Arritt, Raymond W.; Anderson, Christopher J.; Takle, Eugene S.

    2004-01-01

    The ensemble prediction methods used for seasonal limited area forecasts were examined by comparing methods for generating ensemble simulations of seasonal precipitation. The summer 1993 model over the north-central US was used as a test case. The four methods examined included the lagged-average...

  10. Ensemble models of neutrophil trafficking in severe sepsis.

    Directory of Open Access Journals (Sweden)

    Sang Ok Song

    Full Text Available A hallmark of severe sepsis is systemic inflammation, which activates leukocytes and can result in their misdirection. This leads to both impaired migration to the locus of infection and increased infiltration into healthy tissues. In order to better understand the pathophysiologic mechanisms involved, we developed a coarse-grained phenomenological model of the acute inflammatory response in CLP (cecal ligation and puncture)-induced sepsis in rats. This model incorporates distinct neutrophil kinetic responses to the inflammatory stimulus and the dynamic interactions between components of a compartmentalized inflammatory response. Ensembles of model parameter sets consistent with experimental observations were statistically generated using Markov chain Monte Carlo sampling. Prediction uncertainty in the model states was quantified over the resulting ensemble parameter sets. Forward simulation of the parameter ensembles successfully captured experimental features and predicted that systemically activated circulating neutrophils display impaired migration to the tissue and neutrophil sequestration in the lung, consequently contributing to tissue damage and mortality. Principal component and multiple regression analyses of the parameter ensembles estimated from survivor and non-survivor cohorts provide insight into pathologic mechanisms dictating outcome in sepsis. Furthermore, the model was extended to incorporate hypothetical mechanisms by which immune modulation using extracorporeal blood purification results in improved outcome in septic rats. Simulations identified a sub-population (about 18% of the treated population) that benefited from blood purification. Survivors displayed enhanced neutrophil migration to tissue and reduced sequestration of lung neutrophils, contributing to improved outcome. The model ensemble presented herein provides a platform for generating and testing hypotheses in silico, as well as motivating further experimental

  11. Topological quantization of ensemble averages

    International Nuclear Information System (INIS)

    Prodan, Emil

    2009-01-01

    We define the current of a quantum observable and, under well-defined conditions, we connect its ensemble average to the index of a Fredholm operator. The present work builds on a formalism developed by Kellendonk and Schulz-Baldes (2004 J. Funct. Anal. 209 388) to study the quantization of edge currents for continuous magnetic Schroedinger operators. The generalization given here may be a useful tool to scientists looking for novel manifestations of the topological quantization. As a new application, we show that the differential conductance of atomic wires is given by the index of a certain operator. We also comment on how the formalism can be used to probe the existence of edge states

  12. Characterizing Ensembles of Superconducting Qubits

    Science.gov (United States)

    Sears, Adam; Birenbaum, Jeff; Hover, David; Rosenberg, Danna; Weber, Steven; Yoder, Jonilyn L.; Kerman, Jamie; Gustavsson, Simon; Kamal, Archana; Yan, Fei; Oliver, William

    We investigate ensembles of up to 48 superconducting qubits embedded within a superconducting cavity. Such arrays of qubits have been proposed for the experimental study of Ising Hamiltonians, and efficient methods to characterize and calibrate these types of systems are still under development. Here we leverage high qubit coherence (> 70 μs) to characterize individual devices as well as qubit-qubit interactions, utilizing the common resonator mode for a joint readout. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA) under Air Force Contract No. FA8721-05-C-0002. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, or the US Government.

  13. Electrophoretic extraction of low molecular weight cationic analytes from sodium dodecyl sulfate containing sample matrices for their direct electrospray ionization mass spectrometry.

    Science.gov (United States)

    Kinde, Tristan F; Lopez, Thomas D; Dutta, Debashis

    2015-03-03

    While the use of sodium dodecyl sulfate (SDS) in separation buffers allows efficient analysis of complex mixtures, its presence in the sample matrix is known to severely interfere with the mass-spectrometric characterization of analyte molecules. In this article, we report a microfluidic device that addresses this analytical challenge by enabling inline electrospray ionization mass spectrometry (ESI-MS) of low molecular weight cationic samples prepared in SDS containing matrices. The functionality of this device relies on the continuous extraction of analyte molecules into an SDS-free solvent stream based on the free-flow zone electrophoresis (FFZE) technique prior to their ESI-MS analysis. The reported extraction was accomplished in our current work in a glass channel with microelectrodes fabricated along its sidewalls to realize the desired electric field. Our experiments show that a key challenge to successfully operating such a device is to suppress the electroosmotically driven fluid circulations generated in its extraction channel that otherwise tend to vigorously mix the liquid streams flowing through this duct. A new coating medium, N-(2-triethoxysilylpropyl) formamide, recently demonstrated by our laboratory to nearly eliminate electroosmotic flow in glass microchannels was employed to address this issue. Applying this surface modifier, we were able to efficiently extract two different peptides, human angiotensin I and MRFA, individually from an SDS containing matrix using the FFZE method and detect them at concentrations down to 3.7 and 6.3 μg/mL, respectively, in samples containing as much as 10 mM SDS. Notice that in addition to greatly reducing the amount of SDS entering the MS instrument, the reported approach allows rapid solvent exchange for facilitating efficient analyte ionization desired in ESI-MS analysis.

  14. Ensemble-based evaluation of extreme water levels for the eastern Baltic Sea

    Science.gov (United States)

    Eelsalu, Maris; Soomere, Tarmo

    2016-04-01

    The risks and damages associated with coastal flooding, which naturally increase with the magnitude of extreme storm surges, are among the largest concerns of countries with extensive low-lying nearshore areas. The relevant risks are even more pronounced for semi-enclosed water bodies such as the Baltic Sea, where subtidal (weekly-scale) variations in the water volume of the sea substantially contribute to the water level and lead to a large spread in projections of future extreme water levels. We explore the options for using large ensembles of projections to more reliably evaluate return periods of extreme water levels. Single projections of the ensemble are constructed by fitting several sets of block maxima with various extreme value distributions. The ensemble is based on two simulated data sets produced at the Swedish Meteorological and Hydrological Institute: a hindcast by the Rossby Centre Ocean model sampled with a resolution of 6 h, and a similar hindcast by the circulation model NEMO with a resolution of 1 h. As the annual maxima of water levels in the Baltic Sea are not always uncorrelated, we employ maxima for calendar years and for stormy seasons. As the shape parameter of the Generalised Extreme Value distribution changes its sign and varies substantially in magnitude along the eastern coast of the Baltic Sea, the use of a single distribution for the entire coast is inappropriate. The ensemble involves projections based on the Generalised Extreme Value, Gumbel and Weibull distributions. The parameters of these distributions are evaluated in three different ways: the maximum likelihood method, and the method of moments based on both biased and unbiased estimates. The total number of projections in the ensemble is 40. As some of the resulting estimates contain limited additional information, the members of pairs of projections that are highly correlated are assigned weights of 0.6. A comparison of the ensemble-based projection of
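
    One member of such an ensemble can be produced with a standard maximum-likelihood GEV fit of block maxima; the synthetic maxima below stand in for the RCO/NEMO hindcast data, and note that SciPy's shape parameter c equals minus the ξ commonly used in the extreme-value literature.

```python
import numpy as np
from scipy import stats

# Hypothetical annual water-level maxima (cm) standing in for hindcast data
maxima = stats.genextreme.rvs(c=-0.1, loc=80.0, scale=15.0, size=40,
                              random_state=np.random.default_rng(1))

# Maximum-likelihood fit of the Generalised Extreme Value distribution
shape, loc, scale = stats.genextreme.fit(maxima)

# 100-year return level: exceeded with probability 1/100 in any one year
level_100 = stats.genextreme.isf(1.0 / 100.0, shape, loc, scale)
print(shape, loc, scale, level_100)
```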

  15. Ensemble stacking mitigates biases in inference of synaptic connectivity.

    Science.gov (United States)

    Chambers, Brendan; Levy, Maayan; Dechery, Joseph B; MacLean, Jason N

    2018-01-01

    A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches.
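
    The stacking step reduces to learning a linear combination of the per-pair scores produced by the individual inference algorithms; logistic regression fitted on simulated ground truth is one plausible reading of "learned linear combination", and the synthetic scores below are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Ground-truth connectivity for 500 hypothetical neuron pairs
y = rng.integers(0, 2, size=500)
# Three noisy "inference algorithm" scores per pair (illustrative)
scores = y[:, None] * 0.8 + rng.normal(0.0, 1.0, size=(500, 3))

# The stacked ensemble is a learned linear combination of method outputs
stacker = LogisticRegression().fit(scores, y)
ensemble_score = stacker.predict_proba(scores)[:, 1]
print("learned weights:", stacker.coef_)
```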

  16. Using synchronization in multi-model ensembles to improve prediction

    Science.gov (United States)

    Hiemstra, P.; Selten, F.

    2012-04-01

    In recent decades, many climate models have been developed to understand and predict the behavior of the Earth's climate system. Although these models are all based on the same basic physical principles, they still show different behavior. This is, for example, caused by the choice of how to parametrize sub-grid scale processes. One method to combine these imperfect models is to run a multi-model ensemble. The models are given identical initial conditions and are integrated forward in time. A multi-model estimate can, for example, be a weighted mean of the ensemble members. We propose to go a step further, and try to obtain synchronization between the imperfect models by connecting the multi-model ensemble and exchanging information. The combined multi-model ensemble is also known as a supermodel. The supermodel has learned from observations how to optimally exchange information between the ensemble members. In this study we focused on the density and formulation of the connections within the supermodel. The main question was whether we could obtain synchronization between two climate models when connecting only a subset of their state spaces. Limiting the connected subspace has two advantages: 1) it limits the transfer of data (bytes) between the ensemble members, which can be a limiting factor in large-scale climate models, and 2) learning the optimal connection strategy from observations is easier. To answer the research question, we connected two identical quasi-geostrophic (QG) atmospheric models to each other, where the models have different initial conditions. The QG model is a qualitatively realistic simulation of the winter flow on the Northern Hemisphere; it has three layers and uses a spectral implementation. We connected the models in the original spherical harmonic state space, and in linear combinations of these spherical harmonics, i.e. Empirical Orthogonal Functions (EOFs). We show that when connecting through spherical harmonics, we only need to connect 28% of

  17. Insights into the deterministic skill of air quality ensembles from the analysis of AQMEII data

    Directory of Open Access Journals (Sweden)

    I. Kioutsioukis

    2016-12-01

    Full Text Available Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10). Apart from the unconditional ensemble average, the approach behind the other three methods relies on adding optimum weights to members or constraining the ensemble to those members that meet certain conditions in the time or frequency domain. The two different datasets were created for the first and second phase of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground-level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent we can extract predictable signals from an ensemble with superior skill over the single models and the ensemble mean. Verification statistics show that the deterministic models simulate O3 better than NO2 and PM10, linked to different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill compared to each station's best deterministic model at no more than 60 % of the sites, indicating a combination of members with unbalanced skill difference and error dependence for the rest. The promotion of the right amount of accuracy and diversity within the ensemble results in an average additional skill of up to 31 % compared to using the full ensemble in an unconditional way. The skill improvements were higher for O3 and lower for PM10, associated with the extent of potential changes in the joint
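
    The "optimum weights" idea can be illustrated with an unconstrained least-squares weighting of member hindcasts against observations; operational schemes typically add constraints (e.g. non-negative weights summing to one), which are omitted in this sketch.

```python
import numpy as np

def optimal_weights(members, obs):
    # members: (n_models, n_times) hindcasts; obs: (n_times,)
    # Least-squares weights minimizing the weighted-sum forecast error
    w, *_ = np.linalg.lstsq(members.T, obs, rcond=None)
    return w

rng = np.random.default_rng(3)
truth = np.sin(np.linspace(0.0, 6.0, 200))
members = truth + rng.normal(0.0, 0.3, size=(4, 200))  # 4 hypothetical models
w = optimal_weights(members, truth)
print(w, "vs. unconditional mean weights", np.full(4, 0.25))
```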

  18. The complex Laguerre symplectic ensemble of non-Hermitian matrices

    International Nuclear Information System (INIS)

    Akemann, G.

    2005-01-01

    We solve the complex extension of the chiral Gaussian symplectic ensemble, defined as a Gaussian two-matrix model of chiral non-Hermitian quaternion real matrices. This leads to the appearance of Laguerre polynomials in the complex plane and we prove their orthogonality. Alternatively, a complex eigenvalue representation of this ensemble is given for general weight functions. All k-point correlation functions of complex eigenvalues are given in terms of the corresponding skew orthogonal polynomials in the complex plane for finite-N, where N is the matrix size or number of eigenvalues, respectively. We also allow for an arbitrary number of complex conjugate pairs of characteristic polynomials in the weight function, corresponding to massive quark flavours in applications to field theory. Explicit expressions are given in the large-N limit at both weak and strong non-Hermiticity for the weight of the Gaussian two-matrix model. This model can be mapped to the complex Dirac operator spectrum with non-vanishing chemical potential. It belongs to the symmetry class of either the adjoint representation or two colours in the fundamental representation using staggered lattice fermions

  19. Jarzynski equality in the context of maximum path entropy

    Science.gov (United States)

    González, Diego; Davis, Sergio

    2017-06-01

    In the global framework of finding an axiomatic derivation of nonequilibrium statistical mechanics from fundamental principles, such as maximum path entropy, also known as the Maximum Caliber principle, this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, among others. This equality relates the free energy difference between two equilibrium thermodynamic states to the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality is performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social, financial and ecological systems.
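
    In standard notation the identity in question reads as follows (the well-established form, independent of the particular derivation used in this paper):

```latex
% Jarzynski equality: a path-ensemble average of the nonequilibrium work W
% gives the equilibrium free energy difference \Delta F
\left\langle e^{-\beta W} \right\rangle = e^{-\beta \Delta F},
\qquad \beta = \frac{1}{k_B T}.
```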

  20. Fitting a function to time-dependent ensemble averaged data

    DEFF Research Database (Denmark)

    Fogelmark, Karl; Lomholt, Michael A.; Irbäck, Anders

    2018-01-01

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion...... method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.
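
    Generic generalized least squares with a full error covariance captures the core of fitting correlated ensemble-averaged data; this is a textbook GLS sketch under assumed synthetic data, not the authors' WLS-ICE implementation.

```python
import numpy as np

def gls_fit(design, y, cov):
    # Fit y ≈ design @ beta when the errors of the averaged data points are
    # correlated (cov is generally non-diagonal for time-dependent ensemble
    # averages such as mean squared displacements).
    cinv = np.linalg.inv(cov)
    a = design.T @ cinv @ design
    beta = np.linalg.solve(a, design.T @ cinv @ y)
    return beta, np.linalg.inv(a)  # estimate and parameter covariance

# Example: fit <x^2(t)> = 2*D*t + c (normal diffusion with an offset)
t = np.linspace(0.1, 10.0, 50)
design = np.column_stack([2.0 * t, np.ones_like(t)])
rng = np.random.default_rng(4)
cov = 0.01 * np.exp(-np.abs(t[:, None] - t[None, :]))  # correlated errors
y = design @ np.array([0.5, 1.0]) + rng.multivariate_normal(np.zeros(50), cov)
print(gls_fit(design, y, cov)[0])  # should be close to D=0.5, c=1.0
```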

  1. Psychometric evaluation of the impact of weight on quality of life-lite questionnaire (IWQOL-lite) in a community sample.

    Science.gov (United States)

    Kolotkin, Ronette L; Crosby, Ross D

    2002-03-01

    The short form of impact of weight on quality of life (IWQOL)-Lite is a 31-item, self-report, obesity-specific measure of health-related quality of life (HRQOL) that consists of a total score and scores on each of five scales--physical function, self-esteem, sexual life, public distress, and work--and that exhibits strong psychometric properties. This study was undertaken in order to assess test-retest reliability and discriminant validity in a heterogeneous sample of individuals not in treatment. Individuals were recruited from the community to complete questionnaires that included the IWQOL-Lite, SF-36, Rosenberg self-esteem (RSE) scale, Marlowe-Crowne social desirability scale, global ratings of quality of life, and sexual functioning and public distress ratings. Persons currently enrolled in weight loss programs or with a body mass index (BMI) of less than 18.5 were dropped from the analyses, leaving 341 females and 153 males for analysis, with an average BMI of 27.4. For test-retest reliability, 112 participants completed the IWQOL-Lite again. ANOVA revealed significant main effects for BMI for all IWQOL-Lite scales and total score. Females showed greater impairment than males on all scales except public distress. Internal consistency ranged from 0.816 to 0.944 for IWQOL-Lite scales and was 0.958 for total score. Test-retest reliability ranged from 0.814 to 0.877 for scales and was 0.937 for total score. Internal consistency and test-retest results for overweight/obese subjects were similar to those obtained for the total sample. There was strong evidence for convergent and discriminant validity of the IWQOL-Lite in overweight/obese subjects. As in previous studies conducted on treatment-seeking obese persons, the IWQOL-Lite appears to be a reliable and valid measure of obesity-specific quality of life in overweight/obese persons not seeking treatment.

  2. Quivers of Bound Path Algebras and Bound Path Coalgebras

    Directory of Open Access Journals (Sweden)

    Dr. Intan Muchtadi

    2010-09-01

    Full Text Available Algebras and coalgebras can be represented as quivers (directed graphs), and from a quiver we can construct algebras and coalgebras, called path algebras and path coalgebras. In this paper we show that the quiver of a bound path coalgebra (resp. algebra) is the dual quiver of its bound path algebra (resp. coalgebra).

  3. Reliability of multi-model and structurally different single-model ensembles

    Energy Technology Data Exchange (ETDEWEB)

    Yokohata, Tokuta [National Institute for Environmental Studies, Center for Global Environmental Research, Tsukuba, Ibaraki (Japan); Annan, James D.; Hargreaves, Julia C. [Japan Agency for Marine-Earth Science and Technology, Research Institute for Global Change, Yokohama, Kanagawa (Japan); Collins, Matthew [University of Exeter, College of Engineering, Mathematics and Physical Sciences, Exeter (United Kingdom); Jackson, Charles S.; Tobis, Michael [The University of Texas at Austin, Institute of Geophysics, 10100 Burnet Rd., ROC-196, Mail Code R2200, Austin, TX (United States); Webb, Mark J. [Met Office Hadley Centre, Exeter (United Kingdom)

    2012-08-15

    The performance of several state-of-the-art climate model ensembles, including two multi-model ensembles (MMEs) and four structurally different (perturbed parameter) single-model ensembles (SMEs), is investigated for the first time using the rank histogram approach. In this method, the reliability of a model ensemble is evaluated from the point of view of whether the observations can be regarded as being sampled from the ensemble. Our analysis reveals that, in the MMEs, the climate variables we investigated are broadly reliable on the global scale, with a tendency towards overdispersion. On the other hand, in the SMEs, the reliability differs depending on the ensemble and variable field considered. In general, the mean state and historical trend of surface air temperature, and the mean state of precipitation, are reliable in the SMEs. However, variables such as sea level pressure or top-of-atmosphere clear-sky shortwave radiation do not cover a sufficiently wide range in some of the SMEs. It is not possible to assess whether this is a fundamental feature of SMEs generated with a particular model, or a consequence of the algorithm used to select and perturb the values of the parameters. As under-dispersion is a potentially more serious issue when using ensembles to make projections, we recommend the application of rank histograms to assess reliability when designing and running perturbed-physics SMEs. (orig.)
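
    The rank histogram itself is a few lines of code: count where each observation falls among the ensemble members; a flat histogram suggests the observations are statistically indistinguishable from ensemble draws. Tie handling is ignored in this sketch, and the data are synthetic.

```python
import numpy as np

def rank_histogram(ensemble, obs):
    # ensemble: (n_cases, n_members); obs: (n_cases,)
    ranks = (ensemble < obs[:, None]).sum(axis=1)
    return np.bincount(ranks, minlength=ensemble.shape[1] + 1)

rng = np.random.default_rng(5)
ens = rng.normal(size=(1000, 10))
obs = rng.normal(size=1000)       # drawn from the same distribution
print(rank_histogram(ens, obs))   # roughly flat across the 11 bins
```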

  4. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy.

    Science.gov (United States)

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and how to combine the candidate classifiers are two key issues which dramatically influence the performance of the ensemble system. Random vector functional link networks (RVFL) without direct input-to-output links are suitable base classifiers for ensemble systems because of their fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. When using ARPSO to select the optimal base RVFL, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFL are pruned, and thus a more compact ensemble of RVFL is obtained. Moreover, in this paper, a theoretical analysis and justification of how to prune the base classifiers on classification problems is presented, and a simple and practically feasible strategy for pruning redundant base classifiers on both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single optimization methods. Experimental results on function approximation and classification problems verify that the proposed method can improve convergence accuracy as well as reduce the complexity of the ensemble system.
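
    A bare-bones RVFL of the kind used as a base learner here: random, fixed hidden weights and an output layer solved by ridge-regularized least squares. Hidden size, activation and regularization are illustrative choices, not the paper's settings.

```python
import numpy as np

class SimpleRVFL:
    """RVFL without direct input-to-output links (illustrative sketch)."""

    def __init__(self, n_hidden=50, alpha=1e-3, seed=0):
        self.n_hidden, self.alpha = n_hidden, alpha
        self.rng = np.random.default_rng(seed)

    def fit(self, x, y):
        # Random, fixed hidden layer; only the output weights are learned
        self.w_in = self.rng.normal(size=(x.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        h = np.tanh(x @ self.w_in + self.b)
        reg = self.alpha * np.eye(self.n_hidden)
        self.w_out = np.linalg.solve(h.T @ h + reg, h.T @ y)
        return self

    def predict(self, x):
        return np.tanh(x @ self.w_in + self.b) @ self.w_out

rng = np.random.default_rng(6)
x = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(3.0 * x[:, 0]) + x[:, 1] ** 2
model = SimpleRVFL().fit(x, y)
print(np.abs(model.predict(x) - y).mean())
```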

  5. Ovis: A framework for visual analysis of ocean forecast ensembles

    KAUST Repository

    Hollt, Thomas; Magdy, Ahmed; Zhan, Peng; Chen, Guoning; Gopalakrishnan, Ganesh; Hoteit, Ibrahim; Hansen, Charles D.; Hadwiger, Markus

    2014-01-01

    We present a novel integrated visualization system that enables interactive visual analysis of ensemble simulations of the sea surface height that is used in ocean forecasting. The position of eddies can be derived directly from the sea surface height and our visualization approach enables their interactive exploration and analysis. The behavior of eddies is important in different application settings, of which we present two in this paper. First, we show an application for interactive planning of placement as well as operation of off-shore structures using real-world ensemble simulation data of the Gulf of Mexico. Off-shore structures, such as those used for oil exploration, are vulnerable to hazards caused by eddies, and the oil and gas industry relies on ocean forecasts for efficient operations. We enable analysis of the spatial domain, as well as the temporal evolution, for planning the placement and operation of structures. Eddies are also important for marine life. They transport water over large distances and with it also heat and other physical properties as well as biological organisms. In the second application we present the usefulness of our tool, which could be used for planning the paths of autonomous underwater vehicles, so-called gliders, for marine scientists to study simulation data of the largely unexplored Red Sea. © 1995-2012 IEEE.

  6. Ovis: A Framework for Visual Analysis of Ocean Forecast Ensembles.

    Science.gov (United States)

    Höllt, Thomas; Magdy, Ahmed; Zhan, Peng; Chen, Guoning; Gopalakrishnan, Ganesh; Hoteit, Ibrahim; Hansen, Charles D; Hadwiger, Markus

    2014-08-01

    We present a novel integrated visualization system that enables interactive visual analysis of ensemble simulations of the sea surface height that is used in ocean forecasting. The position of eddies can be derived directly from the sea surface height and our visualization approach enables their interactive exploration and analysis. The behavior of eddies is important in different application settings, of which we present two in this paper. First, we show an application for interactive planning of placement as well as operation of off-shore structures using real-world ensemble simulation data of the Gulf of Mexico. Off-shore structures, such as those used for oil exploration, are vulnerable to hazards caused by eddies, and the oil and gas industry relies on ocean forecasts for efficient operations. We enable analysis of the spatial domain, as well as the temporal evolution, for planning the placement and operation of structures. Eddies are also important for marine life. They transport water over large distances and with it also heat and other physical properties as well as biological organisms. In the second application we present the usefulness of our tool, which could be used for planning the paths of autonomous underwater vehicles, so-called gliders, for marine scientists to study simulation data of the largely unexplored Red Sea.

  7. Ovis: A framework for visual analysis of ocean forecast ensembles

    KAUST Repository

    Hollt, Thomas

    2014-08-01

    We present a novel integrated visualization system that enables interactive visual analysis of ensemble simulations of the sea surface height that is used in ocean forecasting. The position of eddies can be derived directly from the sea surface height and our visualization approach enables their interactive exploration and analysis. The behavior of eddies is important in different application settings, of which we present two in this paper. First, we show an application for interactive planning of placement as well as operation of off-shore structures using real-world ensemble simulation data of the Gulf of Mexico. Off-shore structures, such as those used for oil exploration, are vulnerable to hazards caused by eddies, and the oil and gas industry relies on ocean forecasts for efficient operations. We enable analysis of the spatial domain, as well as the temporal evolution, for planning the placement and operation of structures. Eddies are also important for marine life. They transport water over large distances and with it also heat and other physical properties as well as biological organisms. In the second application we present the usefulness of our tool, which could be used for planning the paths of autonomous underwater vehicles, so-called gliders, for marine scientists to study simulation data of the largely unexplored Red Sea. © 1995-2012 IEEE.

  8. Multi-model ensembles for assessment of flood losses and associated uncertainty

    Science.gov (United States)

    Figueiredo, Rui; Schröter, Kai; Weiss-Motz, Alexander; Martina, Mario L. V.; Kreibich, Heidi

    2018-05-01

    Flood loss modelling is a crucial part of risk assessments. However, it is subject to large uncertainty that is often neglected. Most models available in the literature are deterministic, providing only single point estimates of flood loss, and large disparities tend to exist among them. Adopting any one such model in a risk assessment context is likely to lead to inaccurate loss estimates and sub-optimal decision-making. In this paper, we propose the use of multi-model ensembles to address these issues. This approach, which has been applied successfully in other scientific fields, is based on the combination of different model outputs with the aim of improving the skill and usefulness of predictions. We first propose a model rating framework to support ensemble construction, based on a probability tree of model properties, which establishes relative degrees of belief between candidate models. Using 20 flood loss models in two test cases, we then construct numerous multi-model ensembles, based both on the rating framework and on a stochastic method, differing in terms of participating members, ensemble size and model weights. We evaluate the performance of ensemble means, as well as their probabilistic skill and reliability. Our results demonstrate that well-designed multi-model ensembles represent a pragmatic approach to consistently obtain more accurate flood loss estimates and reliable probability distributions of model uncertainty.

  9. Leavitt path algebras

    CERN Document Server

    Abrams, Gene; Siles Molina, Mercedes

    2017-01-01

    This book offers a comprehensive introduction by three of the leading experts in the field, collecting fundamental results and open problems in a single volume. Since Leavitt path algebras were first defined in 2005, interest in these algebras has grown substantially, with ring theorists as well as researchers working in graph C*-algebras, group theory and symbolic dynamics attracted to the topic. Providing a historical perspective on the subject, the authors review existing arguments, establish new results, and outline the major themes and ring-theoretic concepts, such as the ideal structure, Z-grading and the close link between Leavitt path algebras and graph C*-algebras. The book also presents key lines of current research, including the Algebraic Kirchberg Phillips Question, various additional classification questions, and connections to noncommutative algebraic geometry. Leavitt Path Algebras will appeal to graduate students and researchers working in the field and related areas, such as C*-algebras and...

  10. Top quark mass measurement in the 2.9 fb⁻¹ tight lepton and isolated track sample using the neutrino φ weighting method

    International Nuclear Information System (INIS)

    Bellettini, G.; Trovato, M.; Budagov, Yu.; Glagolev, V.; Sisakyan, A.; Suslov, I.; Chlachidze, G.; Velev, G.

    2008-01-01

    We report on a measurement of the top quark mass with tt bar dilepton events produced in pp bar collisions at the Fermilab Tevatron (√s = 1.96 TeV) and collected by the CDF II detector. Events with a charged muon or electron and an isolated track are selected as tt bar candidates. A sample of 328 events, corresponding to an integrated luminosity of 2.9 fb⁻¹, is obtained after all selection cuts. The top quark mass is reconstructed by minimizing a χ² function under the tt bar dilepton hypothesis. The unconstrained kinematics of dilepton events is taken into account by scanning over the space of possibilities for the azimuthal angles of the neutrinos, and a preferred mass is built for each event. In order to extract the top quark mass, a likelihood fit of the preferred mass distribution in data to a weighted sum of signal and background probability density functions is performed. Using the background-constrained fit with 145.0 ± 17.3 events expected from background, we measure m_t = 165.5 +3.3/−3.4 (stat.) GeV/c². The estimated systematic error is 3.1 GeV/c².

  11. Impulsive noise suppression in color images based on the geodesic digital paths

    Science.gov (United States)

    Smolka, Bogdan; Cyganek, Boguslaw

    2015-02-01

    In this paper a novel filtering design based on the concept of exploring the pixel neighborhood by digital paths is presented. The paths start from the boundary of a filtering window and reach its center. The cost of transitions between adjacent pixels is defined in a hybrid spatial-color space. Then, an optimal path of minimum total cost, leading from the pixels of the window's boundary to its center, is determined. The cost of an optimal path serves as a degree of similarity of the central pixel to the samples from the local processing window. If a pixel is an outlier, then all paths starting from the window's boundary will have high costs, and the minimum one will also be high. The filter output is calculated as a weighted mean of the central pixel and an estimate constructed using the information on the minimum cost assigned to each image pixel. First, the costs of the optimal paths are used to build a smoothed image; in a second step, the minimum cost of the central pixel is utilized to construct the weights of a soft-switching scheme. Experiments performed on a set of standard color images revealed that the efficiency of the proposed algorithm is superior to state-of-the-art filtering techniques in terms of objective restoration quality measures, especially for high noise contamination ratios. The proposed filter, due to its low computational complexity, can be applied to real-time image denoising and also to the enhancement of video streams.
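
    The path-cost computation can be sketched as a Dijkstra search from the window boundary to the center. The cost function below (color distance plus a constant spatial penalty beta) is a simplified stand-in for the paper's hybrid spatial-color cost, and all parameters are illustrative.

```python
import heapq
import numpy as np

def min_path_cost(window, beta=1.0):
    """Minimum total cost of a digital path from the boundary of a square
    color window to its center (Dijkstra over 8-connected pixels)."""
    n = window.shape[0]
    c = n // 2
    dist = np.full((n, n), np.inf)
    heap = []
    # all boundary pixels are sources with zero initial cost
    for i in range(n):
        for j in range(n):
            if i in (0, n - 1) or j in (0, n - 1):
                dist[i, j] = 0.0
                heapq.heappush(heap, (0.0, i, j))
    while heap:
        d, i, j = heapq.heappop(heap)
        if d > dist[i, j]:
            continue
        if (i, j) == (c, c):
            return d
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < n and 0 <= nj < n:
                    # transition cost: color distance plus spatial penalty
                    step = np.linalg.norm(window[ni, nj] - window[i, j]) + beta
                    if d + step < dist[ni, nj]:
                        dist[ni, nj] = d + step
                        heapq.heappush(heap, (d + step, ni, nj))
    return dist[c, c]

# a 5x5 RGB window whose center is an impulsive (outlier) pixel
rng = np.random.default_rng(0)
w = rng.normal(100.0, 5.0, size=(5, 5, 3))
w[2, 2] = (255.0, 0.0, 0.0)
print(min_path_cost(w))   # a high minimum cost flags the center as an outlier
```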

  12. The Thinnest Path Problem

    Science.gov (United States)

    2016-07-22

    [Abstract garbled in extraction. Recoverable fragments indicate that the paper studies the thinnest path (TP) problem, including reductions to TP in higher-dimensional unit disk hypergraphs (UDH), a modification of the 2-D disk hypergraph constructed in the proof of Theorem 1, the transmission range that induces each hyperedge, and a result (Theorem 5) on the covered area of the path.]

  13. Path dependence and creation

    DEFF Research Database (Denmark)

    Garud, Raghu; Karnøe, Peter

    This edited volume stems from a conference held in Copenhagen that the authors ran in August of 1997. The authors, aware of the recent work in evolutionary theory and the science of chaos and complexity, challenge the sometimes deterministic flavour of this work. They are interested in uncovering the place of agency in these theories that take history so seriously. In the end, they are as interested in path creation and destruction as they are in path dependence. The book comprises both theoretical and empirical writing. It examines relatively well-known industries such as the automobile...

  14. Modeling Dynamic Systems with Efficient Ensembles of Process-Based Models.

    Directory of Open Access Journals (Sweden)

    Nikola Simidjievski

    Full Text Available Ensembles are a well-established machine learning paradigm, leading to accurate and robust models, predominantly applied to predictive modeling tasks. Ensemble models comprise a finite set of diverse predictive models whose combined output is expected to yield an improved predictive performance as compared to an individual model. In this paper, we propose a new method for learning ensembles of process-based models of dynamic systems. The process-based modeling paradigm employs domain-specific knowledge to automatically learn models of dynamic systems from time-series observational data. Previous work has shown that ensembles based on sampling the observational data (i.e., bagging and boosting) significantly improve the predictive performance of process-based models. However, this improvement comes at the cost of a substantial increase in the computational time needed for learning. To address this problem, the paper proposes a method that aims at efficiently learning ensembles of process-based models while maintaining their accurate long-term predictive performance. This is achieved by constructing ensembles by sampling domain-specific knowledge instead of sampling data. We apply the proposed method to a set of problems of automated predictive modeling in three lake ecosystems, using a library of process-based knowledge for modeling population dynamics, and evaluate its performance. The experimental results identify the optimal design decisions regarding the learning algorithm. The results also show that the proposed ensembles yield significantly more accurate predictions of population dynamics as compared to individual process-based models. Finally, while their predictive performance is comparable to that of ensembles obtained with the state-of-the-art methods of bagging and boosting, they are substantially more efficient.
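
    A minimal sketch of the knowledge-sampling idea: instead of bootstrap-resampling the data, each ensemble member is assembled from a randomly sampled combination of alternative process formulations drawn from a domain library. The tiny population model, library entries, and parameter values below are invented placeholders.

```python
import random

# Hypothetical library of alternative process formulations for a simple
# population model dN/dt = growth(N) - loss(N); the real method samples
# from a much richer domain-specific library of such modeling choices.
GROWTH = {
    "exponential": lambda N, r=0.5: r * N,
    "logistic":    lambda N, r=0.5, K=100.0: r * N * (1 - N / K),
}
LOSS = {
    "linear":    lambda N, m=0.1: m * N,
    "quadratic": lambda N, m=0.002: m * N * N,
}

def sample_member(rng):
    # one ensemble member = one sampled combination of process alternatives
    return rng.choice(list(GROWTH)), rng.choice(list(LOSS))

def simulate(member, N0=10.0, dt=0.1, steps=200):
    g, l = member
    N, traj = N0, []
    for _ in range(steps):
        N += dt * (GROWTH[g](N) - LOSS[l](N))
        traj.append(N)
    return traj

rng = random.Random(42)
ensemble = [sample_member(rng) for _ in range(10)]   # sampled knowledge, not data
trajs = [simulate(m) for m in ensemble]
prediction = [sum(vals) / len(vals) for vals in zip(*trajs)]  # ensemble mean
print(prediction[-1])
```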

  15. Reparametrization in the path integral

    International Nuclear Information System (INIS)

    Storchak, S.N.

    1983-01-01

    The question of the invariance of a measure in the n-dimensional path integral under path reparametrization is considered. The non-invariance of the measure through the Jacobian is suggested. After the path integral reparametrization, the representation for the Green's function of the Hamilton operator in terms of the path integral with the classical Hamiltonian has been obtained.

  16. Biophysical Characteristics of Chemical Protective Ensemble With and Without Body Armor

    Science.gov (United States)

    2015-07-01

    [Abstract garbled in extraction. Recoverable fragments: chemical protective ensembles must preserve the agility and mobility needed to complete mission-essential tasks, yet their bulk, weight, and encapsulation compromise mobility, agility, situational awareness, and thermoregulation; thermal strain management during military training and operations is a central concern. The remainder of the extracted text consists of reference-list residue.]

  17. Comparison of classical reaction paths and tunneling paths studied with the semiclassical instanton theory.

    Science.gov (United States)

    Meisner, Jan; Markmeyer, Max N; Bohner, Matthias U; Kästner, Johannes

    2017-08-30

    Atom tunneling in the hydrogen atom transfer reaction of the 2,4,6-tri-tert-butylphenyl radical to 3,5-di-tert-butylneophyl, which has a short but strongly curved reaction path, was investigated using instanton theory. We found the tunneling path to deviate qualitatively from the classical intrinsic reaction coordinate, the steepest-descent path in mass-weighted Cartesian coordinates. To perform that comparison, we implemented a new variant of the predictor-corrector algorithm for the calculation of the intrinsic reaction coordinate. We used the reaction force analysis method as a means to decompose the reaction barrier into structural and electronic components. Due to the narrow energy barrier, atom tunneling is important in the abovementioned reaction, even above room temperature. Our calculated rate constants between 350 K and 100 K agree well with experimental values. We found an H/D kinetic isotope effect of almost 10⁶ at 100 K. Tunneling dominates the protium transfer below 400 K and the deuterium transfer below 300 K. We compared the lengths of the tunneling path and the classical path for the hydrogen atom transfer in the reaction HCl + Cl and quantified the corner cutting in this reaction. At low temperature, the tunneling path is about 40% shorter than the classical path.

  18. Derivation of Mayer Series from Canonical Ensemble

    International Nuclear Information System (INIS)

    Wang Xian-Zhi

    2016-01-01

    Mayer derived the Mayer series from both the canonical ensemble and the grand canonical ensemble by use of the cluster expansion method. In 2002, we conjectured a recursion formula of the canonical partition function of a fluid (X.Z. Wang, Phys. Rev. E 66 (2002) 056102). In this paper we give a proof for this formula by developing an appropriate expansion of the integrand of the canonical partition function. We further derive the Mayer series solely from the canonical ensemble by use of this recursion formula. (paper)

  20. A model ensemble for projecting multi‐decadal coastal cliff retreat during the 21st century

    Science.gov (United States)

    Limber, Patrick; Barnard, Patrick; Vitousek, Sean; Erikson, Li

    2018-01-01

    Sea cliff retreat rates are expected to accelerate with rising sea levels during the 21st century. Here we develop an approach for a multi-model ensemble that efficiently projects time-averaged sea cliff retreat over multi-decadal time scales and large (>50 km) spatial scales. The ensemble consists of five simple 1-D models adapted from the literature that relate sea cliff retreat to wave impacts, sea level rise (SLR), historical cliff behavior, and cross-shore profile geometry. Ensemble predictions are based on Monte Carlo simulations of each individual model, which account for the uncertainty of model parameters. The consensus of the individual models also weights uncertainty, such that uncertainty is greater when predictions from different models do not agree. A calibrated, but unvalidated, ensemble was applied to the 475 km-long coastline of Southern California (USA), with 4 SLR scenarios of 0.5, 0.93, 1.5, and 2 m by 2100. Results suggest that future retreat rates could increase relative to mean historical rates by more than two-fold for the higher SLR scenarios, causing an average total land loss of 19 – 41 m by 2100. However, model uncertainty ranges from ±5 – 15 m, reflecting the inherent difficulties of projecting cliff retreat over multiple decades. To enhance ensemble performance, future work could include weighting each model by its skill in matching observations in different morphological settings.
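
    A hedged sketch of the Monte Carlo ensemble logic: a few invented stand-in retreat models are each sampled over an uncertain sensitivity parameter, and the pooled draws yield an ensemble mean and a spread that grows where the models disagree. Model forms and all numbers are illustrative, not the paper's calibrated models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for simple 1-D retreat models: each maps sea level
# rise S (m) and an uncertain sensitivity parameter p to retreat by 2100 (m).
def model_a(S, p): return p * 20.0 * S           # wave-impact style scaling
def model_b(S, p): return p * 15.0 * S**1.3      # nonlinear SLR response
def model_c(S, p): return p * (10.0 + 8.0 * S)   # historical-rate baseline

models = [model_a, model_b, model_c]

def ensemble_retreat(S, n=10_000):
    draws = []
    for m in models:
        p = rng.normal(1.0, 0.2, size=n)   # parameter uncertainty per model
        draws.append(m(S, p))
    draws = np.concatenate(draws)          # pooled Monte Carlo ensemble
    # spread reflects parameter uncertainty *and* inter-model disagreement
    return draws.mean(), draws.std()

for S in (0.5, 0.93, 1.5, 2.0):
    mu, sigma = ensemble_retreat(S)
    print(f"SLR {S:4.2f} m: retreat {mu:5.1f} +/- {sigma:4.1f} m")
```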

  1. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  2. Fluctuations in a quasi-stationary shallow cumulus cloud ensemble

    Directory of Open Access Journals (Sweden)

    M. Sakradzija

    2015-01-01

    Full Text Available We propose an approach to the stochastic parameterisation of shallow cumulus clouds to represent the convective variability and its dependence on the model resolution. To collect information about the individual cloud lifecycles and the cloud ensemble as a whole, we employ a large eddy simulation (LES) model and a cloud tracking algorithm, followed by conditional sampling of clouds at the cloud-base level. In the case of a shallow cumulus ensemble, the cloud-base mass flux distribution is bimodal, due to the different shallow cloud subtypes, active and passive clouds. Each distribution mode can be approximated using a Weibull distribution, which is a generalisation of the exponential distribution that accounts for the change in distribution shape due to the diversity of cloud lifecycles. The exponential distribution of cloud mass flux previously suggested for deep convection parameterisation is a special case of the Weibull distribution, which opens a way towards unification of the statistical convective ensemble formalism of shallow and deep cumulus clouds. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate a shallow convective cloud ensemble. It is formulated as a compound random process, with the number of convective elements drawn from a Poisson distribution and the cloud mass flux sampled from a mixed Weibull distribution. Convective memory is accounted for through the explicit cloud lifecycles, making the model formulation consistent with the choice of the Weibull cloud mass flux distribution function. The memory of individual shallow clouds is required to capture the correct convective variability. The resulting distribution of the subgrid convective states in the considered shallow cumulus case is scale-adaptive – the smaller the grid size, the broader the distribution.
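
    The compound random process can be sketched directly: the number of clouds in a grid box is Poisson, and each cloud-base mass flux is drawn from a two-component (active/passive) Weibull mixture. All shape, scale, and density parameters below are invented placeholders, used only to illustrate the scale adaptivity of the resulting distribution.

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_cloud_ensemble(grid_area_km2, density_per_km2=0.05,
                          p_active=0.4,
                          k_active=0.8, lam_active=2.0e7,     # Weibull shape/scale (kg/s)
                          k_passive=0.7, lam_passive=2.0e6):
    """One realization of the subgrid shallow-cumulus state: a Poisson
    number of clouds, each with a mass flux from an active/passive
    Weibull mixture. All parameter values are illustrative."""
    n = rng.poisson(density_per_km2 * grid_area_km2)
    active = rng.random(n) < p_active
    flux = np.where(active,
                    lam_active * rng.weibull(k_active, n),
                    lam_passive * rng.weibull(k_passive, n))
    return flux.sum()   # total convective mass flux in this grid box

# scale adaptivity: smaller grid boxes -> relatively broader distributions
for area in (100.0, 1600.0, 10000.0):
    total = np.array([sample_cloud_ensemble(area) for _ in range(2000)])
    print(f"{area:8.0f} km^2: coefficient of variation = "
          f"{total.std() / total.mean():.2f}")
```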

  3. Ensemble based system for whole-slide prostate cancer probability mapping using color texture features.

    LENUS (Irish Health Repository)

    DiFranco, Matthew D

    2011-01-01

    We present a tile-based approach for producing clinically relevant probability maps of prostatic carcinoma in histological sections from radical prostatectomy. Our methodology incorporates ensemble learning for feature selection and classification on expert-annotated images. Random forest feature selection performed over varying training sets provides a subset of generalized CIEL*a*b* co-occurrence texture features, while sample selection strategies with minimal constraints reduce training data requirements to achieve reliable results. Ensembles of classifiers are built using expert-annotated tiles from training images, and scores for the probability of cancer presence are calculated from the responses of each classifier in the ensemble. Spatial filtering of tile-based texture features prior to classification results in increased heat-map coherence as well as AUC values of 95% using ensembles of either random forests or support vector machines. Our approach is designed for adaptation to different imaging modalities, image features, and histological decision domains.

  4. MEASURING PATH DEPENDENCY

    Directory of Open Access Journals (Sweden)

    Peter Juhasz

    2017-03-01

    Full Text Available While risk management has gained popularity during the last decades, even some of the basic risk types are still far out of focus. One of these is path dependency, which refers to the uncertainty of how we reach a certain level of total performance over time. While decision makers are careful in assessing how their position will look at the end of a given period, little attention is given to how they will get there through the period. The uncertainty of how a process will develop across a shorter period of time is often "eliminated" by simply choosing a longer planning time interval, which makes path dependency one of the most often overlooked business risk types. After reviewing the origin of the problem, we propose and compare seven risk measures to assess path dependency. Traditional risk measures, like the standard deviation of sub-period cash flows, fail to capture this risk type. We conclude that in most cases considering the distribution of the expected cash flow effect caused by path dependency may offer the best method, but we may need to use several measures at the same time to include all the optimisation limits of the given firm.

  5. Long optical path length spectrophotometry in conventional double-beam spectrophotometers: a simple alternative for investigating samples of very low optical density

    Directory of Open Access Journals (Sweden)

    André Luiz Galo

    2009-01-01

    Full Text Available We describe the design and tests of a set-up mounted in a conventional double-beam spectrophotometer which allows the determination of the optical density of samples confined in a long liquid core waveguide (LCW) capillary. A very long optical path length can be achieved with the capillary cell, allowing measurements of samples with very low optical densities. The device uses a custom optical concentrator optically coupled to the LCW (TEFLON® AF). Optical density measurements, carried out using an LCW of ~45 cm, were in accordance with the Beer-Lambert law. Thus, it was possible to analyze quantitatively samples at concentrations 45-fold lower than those regularly used in spectrophotometric measurements.
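
    The gain from a long path follows directly from the Beer-Lambert law, A = εcl: for a fixed minimum measurable absorbance, the lowest quantifiable concentration scales as 1/l. The molar absorptivity and absorbance floor below are assumed values for illustration.

```python
# Beer-Lambert: A = epsilon * c * l, so c_min = A_min / (epsilon * l).
# A 45 cm liquid-core waveguide therefore lowers the quantifiable
# concentration ~45-fold relative to a standard 1 cm cuvette.
epsilon = 5.0e4   # molar absorptivity, L mol^-1 cm^-1 (assumed)
A_min = 0.01      # smallest reliably measured absorbance (assumed)

for l_cm in (1.0, 45.0):
    c_min = A_min / (epsilon * l_cm)   # mol/L
    print(f"path {l_cm:5.1f} cm -> c_min = {c_min:.2e} M")
```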

  6. Force Sensor Based Tool Condition Monitoring Using a Heterogeneous Ensemble Learning Model

    Directory of Open Access Journals (Sweden)

    Guofeng Wang

    2014-11-01

    Full Text Available Tool condition monitoring (TCM) plays an important role in improving machining efficiency and guaranteeing workpiece quality. In order to realize reliable recognition of the tool condition, a robust classifier needs to be constructed to depict the relationship between tool wear states and sensory information. However, because of the complexity of the machining process and the uncertainty of the tool wear evolution, it is hard for a single classifier to fit all the collected samples without sacrificing generalization ability. In this paper, heterogeneous ensemble learning is proposed to realize tool condition monitoring, in which the support vector machine (SVM), hidden Markov model (HMM) and radial basis function (RBF) network are selected as base classifiers, and a stacking ensemble strategy is further used to reflect the relationship between the outputs of these base classifiers and the tool wear states. Based on the heterogeneous ensemble learning classifier, an online monitoring system is constructed in which harmonic features are extracted from force signals and a minimal redundancy and maximal relevance (mRMR) algorithm is utilized to select the most prominent features. To verify the effectiveness of the proposed method, a titanium alloy milling experiment was carried out and samples with different tool wear states were collected to build the proposed heterogeneous ensemble learning classifier. Moreover, a homogeneous ensemble learning model and a majority voting strategy were also adopted for comparison. The analysis and comparison results show that the proposed heterogeneous ensemble learning classifier performs better in both classification accuracy and stability.
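
    A minimal sketch of the heterogeneous stacking structure, assuming scikit-learn. HMMs and RBF networks are not available there, so two accessible classifiers stand in alongside the SVM, and synthetic features stand in for the harmonic force-signal features; this illustrates only the stacking layout, not the paper's exact system.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Toy stand-in for harmonic force-signal features vs. tool wear states.
X, y = make_classification(n_samples=400, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)

# Heterogeneous base classifiers (SVM plus a stand-in second learner);
# a meta-learner maps their outputs to the wear state, as in stacking.
base = [
    ("svm", SVC(probability=True, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=7)),
]
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5)
stack.fit(X, y)
print("training accuracy:", stack.score(X, y))
```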

  7. Ensemble Machine Learning Methods and Applications

    CERN Document Server

    Ma, Yunqian

    2012-01-01

    It is common wisdom that gathering a variety of views and inputs improves the process of decision making, and, indeed, underpins a democratic society. Dubbed “ensemble learning” by researchers in computational intelligence and machine learning, it is known to improve a decision system’s robustness and accuracy. Now, fresh developments are allowing researchers to unleash the power of ensemble learning in an increasing range of real-world applications. Ensemble learning algorithms such as “boosting” and “random forest” facilitate solutions to key computational issues such as face detection and are now being applied in areas as diverse as object tracking and bioinformatics.   Responding to a shortage of literature dedicated to the topic, this volume offers comprehensive coverage of state-of-the-art ensemble learning techniques, including various contributions from researchers in leading industrial research labs. At once a solid theoretical study and a practical guide, the volume is a windfall for r...

  8. AUC-Maximizing Ensembles through Metalearning.

    Science.gov (United States)

    LeDell, Erin; van der Laan, Mark J; Petersen, Maya

    2016-05-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, out-perform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
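
    A hedged sketch of AUC-maximizing metalearning: ensemble weights over the base learners' cross-validated predictions are found by derivative-free nonlinear optimization of the (non-differentiable) AUC. The level-one data here are synthetic stand-ins, and Nelder-Mead is just one of the many optimizers the paper evaluates.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic cross-validated predictions of 3 hypothetical base learners
# on a binary outcome (stand-ins for the Super Learner's level-one data).
y = rng.integers(0, 2, size=500)
Z = np.column_stack([y * 0.6 + rng.normal(0, 0.5, 500),
                     y * 0.3 + rng.normal(0, 0.5, 500),
                     rng.normal(0, 1, 500)])

def neg_auc(w):
    w = np.abs(w) / np.abs(w).sum()     # project onto a convex combination
    return -roc_auc_score(y, Z @ w)

# Derivative-free nonlinear optimization, since AUC is not differentiable in w.
res = minimize(neg_auc, x0=np.ones(3) / 3, method="Nelder-Mead")
w = np.abs(res.x) / np.abs(res.x).sum()
print("weights:", w.round(3), " CV AUC:", round(-neg_auc(res.x), 4))
```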

  9. Ensemble Prediction Model with Expert Selection for Electricity Price Forecasting

    Directory of Open Access Journals (Sweden)

    Bijay Neupane

    2017-01-01

    Full Text Available Forecasting of electricity prices is important in deregulated electricity markets for all of the stakeholders: energy wholesalers, traders, retailers and consumers. Electricity price forecasting is an inherently difficult problem due to its special characteristics of dynamicity and non-stationarity. In this paper, we present a robust price forecasting mechanism that shows resilience towards the aggregate demand response effect and provides highly accurate forecasted electricity prices to the stakeholders in a dynamic environment. We employ an ensemble prediction model in which a group of different algorithms participates in forecasting the price 1 h ahead for each hour of a day. We propose two different strategies, namely the Fixed Weight Method (FWM) and the Varying Weight Method (VWM), for selecting each hour’s expert algorithm from the set of participating algorithms. In addition, we utilize a carefully engineered set of features selected from a pool of features extracted from past electricity price data, weather data and calendar data. The proposed ensemble model offers better results than the Autoregressive Integrated Moving Average (ARIMA) method, the Pattern Sequence-based Forecasting (PSF) method and our previous work using Artificial Neural Networks (ANN) alone on datasets for the New York, Australian and Spanish electricity markets.

  10. Quark ensembles with infinite correlation length

    OpenAIRE

    Molodtsov, S. V.; Zinovjev, G. M.

    2014-01-01

    By studying quark ensembles with infinite correlation length we formulate the quantum field theory model that, as we show, is exactly integrable and develops an instability of its standard vacuum ensemble (the Dirac sea). We argue such an instability is rooted in high ground state degeneracy (for 'realistic' space-time dimensions) featuring a fairly specific form of energy distribution, and with the cutoff parameter going to infinity this inherent energy distribution becomes infinitely narrow...

  11. Orbital magnetism in ensembles of ballistic billiards

    International Nuclear Information System (INIS)

    Ullmo, D.; Richter, K.; Jalabert, R.A.

    1993-01-01

    The magnetic response of ensembles of small two-dimensional structures at finite temperatures is calculated. Using semiclassical methods and numerical calculation it is demonstrated that only short classical trajectories are relevant. The magnetic susceptibility is enhanced in regular systems, where these trajectories appear in families. For ensembles of squares large paramagnetic susceptibility is obtained, in good agreement with recent measurements in the ballistic regime. (authors). 20 refs., 2 figs

  12. The relative importance of body change strategies, weight perception, perceived social support, and self-esteem on adolescent depressive symptoms: longitudinal findings from a national sample.

    Science.gov (United States)

    Rawana, Jennine S

    2013-07-01

    This study aimed to evaluate the relative importance of body change strategies and weight perception in adolescent depression after accounting for established risk factors for depression, namely low social support across key adolescent contexts. The moderating effect of self-esteem was also examined. Participants (N=4587, 49% female) were selected from the National Longitudinal Study of Adolescent Health. Regression analyses were conducted on the association between well-known depression risk factors (lack of perceived support from parents, peers, and schools), body change strategies, weight perception, and adolescent depressive symptoms one year later. Each well-known risk factor significantly predicted depressive symptoms. Body change strategies related to losing weight and overweight perceptions predicted depressive symptoms above and beyond established risk factors. Self-esteem moderated the relationship between trying to lose weight and depressive symptoms. Maladaptive weight loss strategies and overweight perceptions should be addressed in early identification depression programs.

  13. Towards a GME ensemble forecasting system: Ensemble initialization using the breeding technique

    Directory of Open Access Journals (Sweden)

    Jan D. Keller

    2008-12-01

    Full Text Available The quantitative forecast of precipitation requires a probabilistic background, particularly with regard to forecast lead times of more than 3 days. As only ensemble simulations can provide useful information on the underlying probability density function, we built a new ensemble forecasting system (GME-EFS) based on the GME model of the German Meteorological Service (DWD). For the generation of appropriate initial ensemble perturbations we chose the breeding technique developed by Toth and Kalnay (1993, 1997), which develops perturbations by estimating the regions of largest model-error-induced uncertainty. This method is applied and tested in the framework of quasi-operational forecasts for a three-month period in 2007. The performance of the resulting ensemble forecasts is compared to the operational ensemble prediction systems ECMWF EPS and NCEP GFS by means of the ensemble spread of free-atmosphere parameters (geopotential and temperature) and the ensemble skill of precipitation forecasting. This comparison indicates that the GME ensemble forecasting system (GME-EFS) provides reasonable forecasts, with a spread skill score comparable to that of the NCEP GFS. An analysis with the continuous ranked probability score exhibits a lack of resolution for the GME forecasts compared to the operational ensembles. However, with significant enhancements during the 3-month test period, the first results of our work with the GME-EFS indicate possibilities for further development as well as the potential for later operational usage.
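
    The breeding cycle itself is simple to sketch: integrate a control and a perturbed forecast, take their difference, rescale it to the initial perturbation size, and repeat, so the perturbation aligns with the fastest-growing errors. The Lorenz-63 system below is only a stand-in for the GME model, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def model_step(x, dt=0.01, steps=100):
    """Stand-in nonlinear forecast model (Lorenz-63, forward Euler),
    used here only to illustrate the breeding cycle."""
    s, r, b = 10.0, 28.0, 8.0 / 3.0
    for _ in range(steps):
        dx = s * (x[1] - x[0])
        dy = x[0] * (r - x[2]) - x[1]
        dz = x[0] * x[1] - b * x[2]
        x = x + dt * np.array([dx, dy, dz])
    return x

x_ctrl = np.array([1.0, 1.0, 1.0])
pert = rng.normal(0, 0.1, 3)               # initial random seed perturbation
size0 = np.linalg.norm(pert)

for cycle in range(20):                    # breeding cycles
    x_pert = model_step(x_ctrl + pert)     # perturbed forecast
    x_ctrl = model_step(x_ctrl)            # control forecast
    pert = x_pert - x_ctrl                 # grown difference
    pert *= size0 / np.linalg.norm(pert)   # rescale to the initial size

print("bred vector:", pert)                # points along fastest-growing errors
```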

  14. Fast-food consumption, diet quality and body weight: cross-sectional and prospective associations in a community sample of working adults.

    Science.gov (United States)

    Barnes, Timothy L; French, Simone A; Mitchell, Nathan R; Wolfson, Julian

    2016-04-01

    Objective: To examine the association between fast-food consumption, diet quality and body weight in a community sample of working adults. Design: Cross-sectional and prospective analysis of anthropometric, survey and dietary data from adults recruited to participate in a worksite nutrition intervention. Participants self-reported frequency of fast-food consumption per week. Nutrient intakes and diet quality, using the Healthy Eating Index-2010 (HEI-2010), were computed from dietary recalls collected at baseline and 6 months. Setting: Metropolitan medical complex, Minneapolis, MN, USA. Subjects: Two hundred adults, aged 18-60 years. Results: Cross-sectionally, fast-food consumption was significantly associated with higher daily total energy intake (β=72·5, P=0·005), empty calories (β=0·40, P=0·006) and BMI (β=0·73, P=0·011), and lower HEI-2010 score (β=-1·23, P=0·012), total vegetables (β=-0·14, P=0·004), whole grains (β=-0·39, P=0·005), fibre (β=-0·83, P=0·002), Mg (β=-6·99, P=0·019) and K (β=-57·5, P=0·016). Over 6 months, change in fast-food consumption was not significantly associated with changes in energy intake or BMI, but was significantly inversely associated with total intake of vegetables (β=-0·14, P=0·034). Conclusions: Frequency of fast-food consumption was significantly associated with higher energy intake and poorer diet quality cross-sectionally. Six-month change in fast-food intake was small, and not significantly associated with overall diet quality or BMI.

  15. Graphs and matroids weighted in a bounded incline algebra.

    Science.gov (United States)

    Lu, Ling-Xia; Zhang, Bei

    2014-01-01

    Firstly, for a graph weighted in a bounded incline algebra (also called a dioid), a longest path problem (LPP, for short) is presented, which can be considered a uniform approach to the famous shortest path problem, the widest path problem, and the most reliable path problem. The solutions for the LPP and related algorithms are given. Secondly, for a matroid weighted in a linear matroid, the maximum independent set problem is studied.
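
    The uniform treatment can be sketched as a Bellman-Ford-style relaxation over a generic dioid (⊕, ⊗): instantiating (min, +), (max, min) or (max, ×) recovers the shortest, widest and most reliable path problems, respectively. This is a generic illustration of the semiring viewpoint, not the paper's own algorithm, and the example graph is invented.

```python
def dioid_paths(n, edges, src, oplus, otimes, zero, one):
    """Bellman-Ford-style relaxation of d[v] = oplus_u (d[u] otimes w(u, v));
    valid for any dioid where otimes distributes over oplus and edge
    weights are suitably bounded."""
    d = [zero] * n
    d[src] = one
    for _ in range(n - 1):
        for u, v, w in edges:
            d[v] = oplus(d[v], otimes(d[u], w))
    return d

edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 5.0), (2, 3, 8.0)]
INF = float("inf")

# shortest path: (min, +) with zero = inf, one = 0
print(dioid_paths(4, edges, 0, min, lambda a, b: a + b, INF, 0.0))
# widest (maximum bottleneck) path: (max, min) with zero = 0, one = inf
print(dioid_paths(4, edges, 0, max, min, 0.0, INF))
# most reliable path (weights as probabilities): (max, *) with zero = 0, one = 1
probs = [(u, v, w / 10.0) for u, v, w in edges]
print(dioid_paths(4, probs, 0, max, lambda a, b: a * b, 0.0, 1.0))
```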

  16. Ensemble stacking mitigates biases in inference of synaptic connectivity

    Directory of Open Access Journals (Sweden)

    Brendan Chambers

    2018-03-01

    Full Text Available A promising alternative to directly measuring the anatomical connections in a neuronal population is inferring the connections from the activity. We employ simulated spiking neuronal networks to compare and contrast commonly used inference methods that identify likely excitatory synaptic connections using statistical regularities in spike timing. We find that simple adjustments to standard algorithms improve inference accuracy: A signing procedure improves the power of unsigned mutual-information-based approaches and a correction that accounts for differences in mean and variance of background timing relationships, such as those expected to be induced by heterogeneous firing rates, increases the sensitivity of frequency-based methods. We also find that different inference methods reveal distinct subsets of the synaptic network and each method exhibits different biases in the accurate detection of reciprocity and local clustering. To correct for errors and biases specific to single inference algorithms, we combine methods into an ensemble. Ensemble predictions, generated as a linear combination of multiple inference algorithms, are more sensitive than the best individual measures alone, and are more faithful to ground-truth statistics of connectivity, mitigating biases specific to single inference methods. These weightings generalize across simulated datasets, emphasizing the potential for the broad utility of ensemble-based approaches. Mapping the routing of spikes through local circuitry is crucial for understanding neocortical computation. Under appropriate experimental conditions, these maps can be used to infer likely patterns of synaptic recruitment, linking activity to underlying anatomical connections. Such inferences help to reveal the synaptic implementation of population dynamics and computation. We compare a number of standard functional measures to infer underlying connectivity. We find that regularization impacts measures

  17. Conductor gestures influence evaluations of ensemble performance.

    Science.gov (United States)

    Morrison, Steven J; Price, Harry E; Smedley, Eric M; Meals, Cory D

    2014-01-01

    Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor's gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance: articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and non-majors (N = 285) viewed sixteen 30 s performances and evaluated the quality of the ensemble's articulation, dynamics, technique, and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble's performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity.

  18. Computing Diffeomorphic Paths for Large Motion Interpolation.

    Science.gov (United States)

    Seo, Dohyung; Jeffrey, Ho; Vemuri, Baba C

    2013-06-01

    In this paper, we introduce a novel framework for computing a path of diffeomorphisms between a pair of input diffeomorphisms. Direct computation of a geodesic path on the space of diffeomorphisms Diff(Ω) is difficult, mainly due to the infinite dimensionality of Diff(Ω). Our proposed framework, to some degree, bypasses this difficulty using the quotient map of Diff(Ω) to the quotient space Diff(M)/Diff(M)_μ, obtained by quotienting out the subgroup of volume-preserving diffeomorphisms Diff(M)_μ. This quotient space was recently identified in the mathematics literature as the unit sphere in a Hilbert space, a space with well-known geometric properties. Our framework leverages this recent result by computing the diffeomorphic path in two stages. First, we project the given diffeomorphism pair onto this sphere and then compute the geodesic path between these projected points. Second, we lift the geodesic on the sphere back to the space of diffeomorphisms by solving a quadratic programming problem with bilinear constraints, using the augmented Lagrangian technique with penalty terms. In this way, we can estimate the path of diffeomorphisms while, first, staying in the space of diffeomorphisms and, second, preserving shapes/volumes in the deformed images along the path as much as possible. We have applied our framework to interpolate intermediate frames of frame-subsampled video sequences. In the reported experiments, our approach compares favorably with the popular Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework.

  19. Genetic algorithm based adaptive neural network ensemble and its application in predicting carbon flux

    Science.gov (United States)

    Xue, Y.; Liu, S.; Hu, Y.; Yang, J.; Chen, Q.

    2007-01-01

    To improve the accuracy in prediction, Genetic Algorithm based Adaptive Neural Network Ensemble (GA-ANNE) is presented. Intersections are allowed between different training sets based on the fuzzy clustering analysis, which ensures the diversity as well as the accuracy of individual Neural Networks (NNs). Moreover, to improve the accuracy of the adaptive weights of individual NNs, GA is used to optimize the cluster centers. Empirical results in predicting carbon flux of Duke Forest reveal that GA-ANNE can predict the carbon flux more accurately than Radial Basis Function Neural Network (RBFNN), Bagging NN ensemble, and ANNE.

  20. Ensemble of data-driven prognostic algorithms for robust prediction of remaining useful life

    International Nuclear Information System (INIS)

    Hu Chao; Youn, Byeng D.; Wang Pingfeng; Taek Yoon, Joung

    2012-01-01

    Prognostics aims at determining whether a failure of an engineered system (e.g., a nuclear power plant) is impending and estimating the remaining useful life (RUL) before the failure occurs. The traditional data-driven prognostic approach is to construct multiple candidate algorithms using a training data set, evaluate their respective performance using a testing data set, and select the one with the best performance while discarding all the others. This approach has three shortcomings: (i) the selected standalone algorithm may not be robust; (ii) it wastes the resources for constructing the algorithms that are discarded; (iii) it requires the testing data in addition to the training data. To overcome these drawbacks, this paper proposes an ensemble data-driven prognostic approach which combines multiple member algorithms with a weighted-sum formulation. Three weighting schemes, namely the accuracy-based weighting, diversity-based weighting and optimization-based weighting, are proposed to determine the weights of member algorithms. The k-fold cross validation (CV) is employed to estimate the prediction error required by the weighting schemes. The results obtained from three case studies suggest that the ensemble approach with any weighting scheme gives more accurate RUL predictions compared to any sole algorithm when member algorithms producing diverse RUL predictions have comparable prediction accuracy and that the optimization-based weighting scheme gives the best overall performance among the three weighting schemes.
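
    A minimal sketch of the accuracy-based weighting scheme: member weights are set inversely proportional to their k-fold cross-validation errors, and the ensemble RUL is the weighted sum of the member predictions. The error and prediction numbers below are invented for illustration.

```python
import numpy as np

# k-fold CV errors of three hypothetical member prognostic algorithms;
# accuracy-based weights are inversely proportional to these errors.
cv_error = np.array([12.0, 8.0, 20.0])        # e.g. CV RMSE per member
weights = (1.0 / cv_error) / (1.0 / cv_error).sum()

member_rul = np.array([310.0, 290.0, 350.0])  # members' RUL predictions (cycles)
ensemble_rul = weights @ member_rul           # weighted-sum formulation
print("weights:", weights.round(3), "-> ensemble RUL:", round(ensemble_rul, 1))
```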

  1. Rotationally invariant family of Levy-like random matrix ensembles

    International Nuclear Information System (INIS)

    Choi, Jinmyung; Muttalib, K A

    2009-01-01

    We introduce a family of rotationally invariant random matrix ensembles characterized by a parameter λ. While λ = 1 corresponds to well-known critical ensembles, we show that λ ≠ 1 describes 'Levy-like' ensembles, characterized by power-law eigenvalue densities. For λ > 1 the density is bounded, as in Gaussian ensembles, but λ < 1 describes ensembles characterized by densities with long tails. In particular, the model allows us to evaluate, in terms of a novel family of orthogonal polynomials, the eigenvalue correlations for Levy-like ensembles. These correlations differ qualitatively from those in either the Gaussian or the critical ensembles. (fast track communication)

  2. Geometric integrator for simulations in the canonical ensemble

    Energy Technology Data Exchange (ETDEWEB)

    Tapias, Diego, E-mail: diego.tapias@nucleares.unam.mx [Departamento de Física, Facultad de Ciencias, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de México 04510 (Mexico); Sanders, David P., E-mail: dpsanders@ciencias.unam.mx [Departamento de Física, Facultad de Ciencias, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de México 04510 (Mexico); Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States); Bravetti, Alessandro, E-mail: alessandro.bravetti@iimas.unam.mx [Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad Universitaria, Ciudad de México 04510 (Mexico)

    2016-08-28

    We introduce a geometric integrator for molecular dynamics simulations of physical systems in the canonical ensemble that preserves the invariant distribution in equations arising from the density dynamics algorithm, with any possible type of thermostat. Our integrator thus constitutes a unified framework that allows the study and comparison of different thermostats and of their influence on the equilibrium and non-equilibrium (thermo-)dynamic properties of a system. To show the validity and the generality of the integrator, we implement it with a second-order, time-reversible method and apply it to the simulation of a Lennard-Jones system with three different thermostats, obtaining good conservation of the geometrical properties and recovering the expected thermodynamic results. Moreover, to show the advantage of our geometric integrator over a non-geometric one, we compare the results with those obtained by using the non-geometric Gear integrator, which is frequently used to perform simulations in the canonical ensemble. The non-geometric integrator induces a drift in the invariant quantity, while our integrator has no such drift, thus ensuring that the system is effectively sampling the correct ensemble.

  3. Harmony Search Based Parameter Ensemble Adaptation for Differential Evolution

    Directory of Open Access Journals (Sweden)

    Rammohan Mallipeddi

    2013-01-01

    Full Text Available In the differential evolution (DE) algorithm, depending on the characteristics of the problem at hand and the available computational resources, different strategies combined with different sets of parameters may be effective. In addition, a single, well-tuned combination of strategies and parameters may not guarantee optimal performance, because different strategies combined with different parameter settings can be appropriate during different stages of the evolution. Therefore, various adaptive/self-adaptive techniques have been proposed to adapt the DE strategies and parameters during the course of evolution. In this paper, we propose a new parameter adaptation technique for DE based on an ensemble approach and the harmony search algorithm (HS). In the proposed method, an ensemble of parameters is randomly sampled to form the initial harmony memory. The parameter ensemble evolves during the course of the optimization process via the HS algorithm. Each parameter combination in the harmony memory is evaluated by testing it on the DE population. The performance of the proposed adaptation method is evaluated using two recently proposed strategies (DE/current-to-pbest/bin and DE/current-to-gr_best/bin) as basic DE frameworks. Numerical results demonstrate the effectiveness of the proposed adaptation technique compared to state-of-the-art DE-based algorithms on a set of challenging test problems (CEC 2005).

  5. Collective Dynamics of Specific Gene Ensembles Crucial for Neutrophil Differentiation: The Existence of Genome Vehicles Revealed

    Science.gov (United States)

    Giuliani, Alessandro; Tomita, Masaru

    2010-01-01

    Cell fate decision remarkably generates specific cell differentiation path among the multiple possibilities that can arise through the complex interplay of high-dimensional genome activities. The coordinated action of thousands of genes to switch cell fate decision has indicated the existence of stable attractors guiding the process. However, origins of the intracellular mechanisms that create “cellular attractor” still remain unknown. Here, we examined the collective behavior of genome-wide expressions for neutrophil differentiation through two different stimuli, dimethyl sulfoxide (DMSO) and all-trans-retinoic acid (atRA). To overcome the difficulties of dealing with single gene expression noises, we grouped genes into ensembles and analyzed their expression dynamics in correlation space defined by Pearson correlation and mutual information. The standard deviation of correlation distributions of gene ensembles reduces when the ensemble size is increased following the inverse square root law, for both ensembles chosen randomly from whole genome and ranked according to expression variances across time. Choosing the ensemble size of 200 genes, we show the two probability distributions of correlations of randomly selected genes for atRA and DMSO responses overlapped after 48 hours, defining the neutrophil attractor. Next, tracking the ranked ensembles' trajectories, we noticed that only certain, not all, fall into the attractor in a fractal-like manner. The removal of these genome elements from the whole genomes, for both atRA and DMSO responses, destroys the attractor providing evidence for the existence of specific genome elements (named “genome vehicle”) responsible for the neutrophil attractor. Notably, within the genome vehicles, genes with low or moderate expression changes, which are often considered noisy and insignificant, are essential components for the creation of the neutrophil attractor. Further investigations along with our findings might

  6. A virtual pebble game to ensemble average graph rigidity.

    Science.gov (United States)

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies are sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. The utility of the VPG falls in between the most
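
    Maxwell constraint counting itself reduces to simple arithmetic. The sketch below assumes the standard body-bar convention (6 degrees of freedom per body, one removed per bar, 6 trivial rigid-body motions) and invented counts; because redundant constraints are ignored, the result is only a lower bound on the internal DOF.

```python
# Maxwell constraint counting (MCC) for a body-bar network: a mean-field
# lower bound on the internal degrees of freedom, ignoring redundancy.
def maxwell_internal_dof(n_bodies, n_bars):
    return max(6 * n_bodies - 6 - n_bars, 0)

print(maxwell_internal_dof(100, 520))  # 74 -> globally under-constrained
print(maxwell_internal_dof(100, 700))  # 0  -> possibly over-constrained
```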

  7. Disease-associated mutations that alter the RNA structural ensemble.

    Directory of Open Access Journals (Sweden)

    Matthew Halvorsen

    2010-08-01

    Full Text Available Genome-wide association studies (GWAS) often identify disease-associated mutations in intergenic and non-coding regions of the genome. Given the high percentage of the human genome that is transcribed, we postulate that for some observed associations the disease phenotype is caused by a structural rearrangement in a regulatory region of the RNA transcript. To identify such mutations, we have performed a genome-wide analysis of all known disease-associated Single Nucleotide Polymorphisms (SNPs) from the Human Gene Mutation Database (HGMD) that map to the untranslated regions (UTRs) of a gene. Rather than using minimum free energy approaches (e.g., mFold), we use a partition function calculation that takes into consideration the ensemble of possible RNA conformations for a given sequence. We identified in the human genome disease-associated SNPs that significantly alter the global conformation of the UTR to which they map. For six disease states (Hyperferritinemia Cataract Syndrome, beta-Thalassemia, Cartilage-Hair Hypoplasia, Retinoblastoma, Chronic Obstructive Pulmonary Disease (COPD), and Hypertension), we identified multiple SNPs in UTRs that alter the mRNA structural ensemble of the associated genes. Using a Boltzmann sampling procedure for sub-optimal RNA structures, we are able to characterize and visualize the nature of the conformational changes induced by the disease-associated mutations in the structural ensemble. We observe in several cases (specifically the 5' UTRs of FTL and RB1) SNP-induced conformational changes analogous to those observed in bacterial regulatory riboswitches when specific ligands bind. We propose that the UTR and SNP combinations we identify constitute a "RiboSNitch," that is, a regulatory RNA in which a specific SNP has a structural consequence resulting in a disease phenotype. Our SNPfold algorithm can help identify RiboSNitches by leveraging GWAS data and an analysis of the mRNA structural ensemble.

  8. "Intelligent Ensemble" Projections of Precipitation and Surface Radiation in Support of Agricultural Climate Change Adaptation

    Science.gov (United States)

    Taylor, Patrick C.; Baker, Noel C.

    2015-01-01

    Earth's climate is changing and will continue to change into the foreseeable future. Expected changes in the climatological distribution of precipitation, surface temperature, and surface solar radiation will significantly impact agriculture. Adaptation strategies are, therefore, required to reduce the agricultural impacts of climate change. Climate change projections of precipitation, surface temperature, and surface solar radiation distributions are necessary input for adaptation planning studies. These projections are conventionally constructed from an ensemble of climate model simulations (e.g., the Coupled Model Intercomparison Project 5 (CMIP5)) as an equal-weighted average, one model one vote. Each climate model, however, represents the array of climate-relevant physical processes with varying degrees of fidelity, influencing the projection of individual climate variables differently. Presented here is a new approach, termed the "Intelligent Ensemble," that constructs climate variable projections by weighting each model according to its ability to represent key physical processes, e.g., the precipitation probability distribution. This approach provides added value over the equal-weighted average method. Physical process metrics applied in the "Intelligent Ensemble" method are created using a combination of NASA and NOAA satellite and surface-based cloud, radiation, temperature, and precipitation data sets. The "Intelligent Ensemble" method is applied to the RCP4.5 and RCP8.5 anthropogenic climate forcing simulations within the CMIP5 archive to develop a set of climate change scenarios for precipitation, temperature, and surface solar radiation in each USDA Farm Resource Region for use in climate change adaptation studies.

  9. The clustering of the SDSS-IV extended Baryon Oscillation Spectroscopic Survey DR14 quasar sample: anisotropic Baryon Acoustic Oscillations measurements in Fourier-space with optimal redshift weights

    Science.gov (United States)

    Wang, Dandan; Zhao, Gong-Bo; Wang, Yuting; Percival, Will J.; Ruggeri, Rossana; Zhu, Fangzhou; Tojeiro, Rita; Myers, Adam D.; Chuang, Chia-Hsun; Baumgarten, Falk; Zhao, Cheng; Gil-Marín, Héctor; Ross, Ashley J.; Burtin, Etienne; Zarrouk, Pauline; Bautista, Julian; Brinkmann, Jonathan; Dawson, Kyle; Brownstein, Joel R.; de la Macorra, Axel; Schneider, Donald P.; Shafieloo, Arman

    2018-06-01

    We present a measurement of the anisotropic and isotropic Baryon Acoustic Oscillations (BAO) from the extended Baryon Oscillation Spectroscopic Survey Data Release 14 quasar sample with optimal redshift weights. Applying the redshift weights improves the constraint on the BAO dilation parameter α(z_eff) by 17 per cent. We reconstruct the evolution history of the BAO distance indicators in the redshift range 0.8 < z < 2.2. This paper is part of a set that analyses the eBOSS DR14 quasar sample.

  10. Coupled atmosphere and land-surface assimilation of surface observations with a single column model and ensemble data assimilation

    Science.gov (United States)

    Rostkier-Edelstein, Dorita; Hacker, Joshua P.; Snyder, Chris

    2014-05-01

    Numerical weather prediction and data assimilation models are composed of coupled atmosphere and land-surface (LS) components. If possible, the assimilation procedure should be coupled, so that observed information in one module is used to correct fields in the coupled module. There have been some attempts in this direction using optimal interpolation, nudging and 2/3DVAR data assimilation techniques. Aside from satellite remote-sensed observations, reference-height in-situ observations of temperature and moisture have been used in these studies. Among other problems, difficulties in coupled atmosphere and LS assimilation arise as a result of the different time scales characteristic of each component and the unsteady correlation between these components under varying flow conditions. Ensemble data-assimilation techniques rely on flow-dependent observation-model covariances. Provided that correlations and covariances between land and atmosphere can be adequately simulated and sampled, ensemble data assimilation should enable appropriate assimilation of observations simultaneously into the atmospheric and LS states. Our aim is to explore assimilation of reference-height in-situ temperature and moisture observations into the coupled atmosphere-LS modules (simultaneously) in NCAR's WRF-ARW model using NCAR's DART ensemble data-assimilation system. Observing system simulation experiments (OSSEs) are performed using the single column model (SCM) version of WRF. Numerical experiments during a warm season are centered on an atmospheric and soil column in the Southern Great Plains. Synthetic observations are derived from "truth" WRF-SCM runs for a given date, initialized and forced using North American Regional Reanalyses (NARR). WRF-SCM atmospheric and LS ensembles are created by mixing the atmospheric and soil NARR profile centered on a given date with that from another day (randomly chosen from the same season), with weights drawn from a logit-normal distribution. Three

  11. PATHS groundwater hydrologic model

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.W.; Schur, J.A.

    1980-04-01

    A preliminary evaluation capability for two-dimensional groundwater pollution problems was developed as part of the Transport Modeling Task for the Waste Isolation Safety Assessment Program (WISAP). Our approach was to use the data limitations as a guide in setting the level of modeling detail. The PATHS Groundwater Hydrologic Model is the first-level (simplest) idealized hybrid analytical/numerical model for two-dimensional, saturated groundwater flow and single-component transport in homogeneous geology. This document describes the PATHS groundwater hydrologic model and the preliminary evaluation capability prepared for WISAP, including the enhancements that were made in light of the authors' experience using the earlier capability. Appendixes A through D supplement the report as follows: complete derivations of the background equations are provided in Appendix A. Appendix B is a comprehensive set of instructions for users of PATHS, written for users who have little or no experience with computers. Appendix C is for the programmer; it contains information on how input parameters are passed between programs in the system, along with program listings and a test-case listing. Appendix D is a definition of terms.

  12. Ensemble data assimilation in the Red Sea: sensitivity to ensemble selection and atmospheric forcing

    KAUST Repository

    Toye, Habib

    2017-05-26

    We present our efforts to build an ensemble data assimilation and forecasting system for the Red Sea. The system consists of the high-resolution Massachusetts Institute of Technology general circulation model (MITgcm) to simulate ocean circulation and of the Data Assimilation Research Testbed (DART) for ensemble data assimilation. DART has been configured to integrate all members of an ensemble adjustment Kalman filter (EAKF) in parallel; building on this, we adapted the ensemble operations in DART to use an invariant ensemble, i.e., an ensemble optimal interpolation (EnOI) algorithm. This approach requires only a single forward model integration in the forecast step and therefore saves substantial computational cost. To deal with the strong seasonal variability of the Red Sea, the EnOI ensemble is seasonally selected from a climatology of long-term model outputs. Observations of remote-sensing sea surface height (SSH) and sea surface temperature (SST) are assimilated every 3 days. Real-time atmospheric fields from the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium-Range Weather Forecasts (ECMWF) are used as forcing in different assimilation experiments. We investigate the behaviors of the EAKF and (seasonal) EnOI and compare their performances for assimilating and forecasting the circulation of the Red Sea. We further assess the sensitivity of the assimilation system to various filtering parameters (ensemble size, inflation) and atmospheric forcing.
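    A textbook-form sketch of the EnOI analysis step described above (not the DART implementation; names and shapes are illustrative):

```python
import numpy as np

def enoi_update(xf, X_static, y, H, R, alpha=1.0):
    """EnOI analysis step.

    xf: (n,) single forecast state; X_static: (n, m) invariant (seasonally
    selected) ensemble; y: (p,) observations (e.g., SSH/SST); H: (p, n)
    observation operator; R: (p, p) observation-error covariance;
    alpha: scaling factor for the static covariance.
    """
    A = X_static - X_static.mean(axis=1, keepdims=True)   # ensemble anomalies
    B = alpha * (A @ A.T) / (A.shape[1] - 1)              # static covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)          # Kalman gain
    return xf + K @ (y - H @ xf)
```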

  13. Path probability distribution of stochastic motion of non dissipative systems: a classical analog of Feynman factor of path integral

    International Nuclear Information System (INIS)

    Lin, T.L.; Wang, R.; Bi, W.P.; El Kaabouchi, A.; Pujos, C.; Calvayrac, F.; Wang, Q.A.

    2013-01-01

    We investigate, by numerical simulation, the path probability of non-dissipative mechanical systems undergoing stochastic motion. The aim is to search for the relationship between this probability and the usual mechanical action. The simulation model is a one-dimensional particle subject to a conservative force and Gaussian random displacements. The probability that a sample path between two fixed points is taken is computed from the number of particles moving along this path, an output of the simulation, divided by the total number of particles arriving at the final point. It is found that the path probability decays exponentially with increasing action of the sample paths. The decay rate increases with decreasing randomness. This result supports the existence of a classical analog of the Feynman factor in the path integral formulation of quantum mechanics for Hamiltonian systems.
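    A rough numerical sketch of this kind of experiment, assuming a harmonic conservative force and illustrative noise parameters; it accumulates a discrete Lagrangian action per trajectory and checks whether the log-frequency of actions among paths reaching the end point falls off linearly:

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, dt, steps, n = 1.0, 1.0, 0.01, 100, 200_000   # harmonic force f = -k x
x = np.zeros(n); v = np.zeros(n)
actions = np.zeros(n)
for _ in range(steps):
    v += -k * x / m * dt                    # conservative force
    x += v * dt + rng.normal(0.0, 0.02, n)  # Gaussian random displacement
    actions += (0.5 * m * v**2 - 0.5 * k * x**2) * dt  # discrete action

arrived = np.abs(x - 0.5) < 0.05            # paths reaching the fixed end point
counts, edges = np.histogram(actions[arrived], bins=30)
centers = 0.5 * (edges[:-1] + edges[1:])
good = counts > 0
# Exponential decay of path probability with action shows up as a straight
# line of log(counts) versus action:
slope = np.polyfit(centers[good], np.log(counts[good]), 1)[0]
print("estimated decay rate:", -slope)
```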

  14. Shortest Paths and Vehicle Routing

    DEFF Research Database (Denmark)

    Petersen, Bjørn

    This thesis presents how to parallelize a shortest path labeling algorithm. It is shown how to handle Chvátal-Gomory rank-1 cuts in a column generation context. A Branch-and-Cut algorithm is given for the Elementary Shortest Path Problem with Capacity Constraint. A reformulation of the Vehicle Routing Problem based on partial paths is presented. Finally, a practical application of finding shortest paths in the telecommunication industry is shown.
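    For orientation, the basic labeling idea such algorithms build on, in its plain Dijkstra form (the thesis itself treats resource-constrained and parallel variants):

```python
import heapq

def shortest_path_costs(graph, source):
    """graph: {node: [(neighbor, arc_cost), ...]}; returns best cost per node."""
    best = {source: 0.0}
    heap = [(0.0, source)]                 # labels: (cost so far, node)
    while heap:
        cost, u = heapq.heappop(heap)
        if cost > best.get(u, float("inf")):
            continue                       # dominated label, discard
        for v, w in graph.get(u, []):
            new = cost + w
            if new < best.get(v, float("inf")):
                best[v] = new              # extend the label along arc (u, v)
                heapq.heappush(heap, (new, v))
    return best
```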

  15. A new deterministic Ensemble Kalman Filter with one-step-ahead smoothing for storm surge forecasting

    KAUST Repository

    Raboudi, Naila

    2016-11-01

    The Ensemble Kalman Filter (EnKF) is a popular data assimilation method for state-parameter estimation. Following a sequential assimilation strategy, it breaks the problem into alternating cycles of forecast and analysis steps. In the forecast step, the dynamical model is used to integrate a stochastic sample approximating the state analysis distribution (called the analysis ensemble) to obtain a forecast ensemble. In the analysis step, the forecast ensemble is updated with the incoming observation using a Kalman-like correction, and the result is then used for the next forecast step. In realistic large-scale applications, EnKFs are implemented with limited ensembles and often poorly known model error statistics, leading to a crude approximation of the forecast covariance. This strongly limits the filter performance. Recently, a new EnKF was proposed in [1] following a one-step-ahead smoothing strategy (EnKF-OSA), which involves an OSA smoothing of the state between two successive analyses. At each time step, EnKF-OSA exploits the observation twice. The incoming observation is first used to smooth the ensemble at the previous time step. The resulting smoothed ensemble is then integrated forward to compute a "pseudo forecast" ensemble, which is again updated with the same observation. The idea of constraining the state with future observations is to add more information to the estimation process in order to mitigate the sub-optimal character of EnKF-like methods. The second EnKF-OSA "forecast" is computed from the smoothed ensemble and should therefore provide an improved background. In this work, we propose a deterministic variant of the EnKF-OSA, based on the Singular Evolutive Interpolated Ensemble Kalman (SEIK) filter. The motivation is to avoid the observation perturbations of the EnKF in order to improve the scheme's behavior when assimilating big data sets with small ensembles. The new SEIK-OSA scheme is implemented and its efficiency is demonstrated.
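    For reference, a minimal stochastic-EnKF analysis step with perturbed observations - the very perturbations the deterministic SEIK-based variant proposed here avoids (shapes and names are illustrative):

```python
import numpy as np

def enkf_analysis(Xf, y, H, R, rng):
    """Xf: (n, m) forecast ensemble; y: (p,) observation; H: (p, n); R: (p, p)."""
    n, m = Xf.shape
    A = Xf - Xf.mean(axis=1, keepdims=True)
    Pf = (A @ A.T) / (m - 1)                        # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
    # Perturbed observations, one draw per member:
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return Xf + K @ (Y - H @ Xf)                    # analysis ensemble
```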

  16. The Hydrologic Ensemble Prediction Experiment (HEPEX)

    Science.gov (United States)

    Wood, A. W.; Thielen, J.; Pappenberger, F.; Schaake, J. C.; Hartman, R. K.

    2012-12-01

    The Hydrologic Ensemble Prediction Experiment was established in March 2004 at a workshop hosted by the European Centre for Medium-Range Weather Forecasts (ECMWF). With support from the US National Weather Service (NWS) and the European Commission (EC), the HEPEX goal was to bring the international hydrological and meteorological communities together to advance the understanding and adoption of hydrological ensemble forecasts for decision support in the emergency management and water resources sectors. The strategy to meet this goal includes meetings that connect the user, forecast producer and research communities to exchange ideas, data and methods; the coordination of experiments to address specific challenges; and the formation of testbeds to facilitate shared experimentation. HEPEX has organized about a dozen international workshops, as well as sessions at scientific meetings (including AMS, AGU and EGU) and special issues of scientific journals where workshop results have been published. Today, the HEPEX mission is to demonstrate the added value of hydrological ensemble prediction systems (HEPS) for the emergency management and water resources sectors, to make decisions that have important consequences for the economy, public health, safety, and the environment. HEPEX is now organised around six major themes that represent core elements of a hydrologic ensemble prediction enterprise: input and pre-processing, ensemble techniques, data assimilation, post-processing, verification, and communication and use in decision making. This poster presents an overview of recent and planned HEPEX activities, highlighting case studies that exemplify the focus and objectives of HEPEX.

  17. Using simulation to interpret experimental data in terms of protein conformational ensembles.

    Science.gov (United States)

    Allison, Jane R

    2017-04-01

    In their biological environment, proteins are dynamic molecules, necessitating an ensemble structural description. Molecular dynamics simulations and solution-state experiments provide complementary information, in the form of atomically detailed coordinates and of averages or distributions of structural properties or related quantities. Recently, increases in the temporal and spatial scale of conformational sampling, and comparison of the more diverse conformational ensembles thus generated, have revealed the importance of sampling rare events. Excitingly, new methods based on maximum entropy and Bayesian inference promise to provide a statistically sound mechanism for combining experimental data with molecular dynamics simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Path analysis of the productive traits in Sorghum species

    Directory of Open Access Journals (Sweden)

    Ikanović Jela

    2011-01-01

    Full Text Available This research studied the phenotypic correlation coefficients between three Sorghum species, namely forage sorghum S. bicolor Moench. (c. NS-Džin), Sudan grass S. sudanense L. (c. Zora) and the interspecies hybrid S. bicolor x S. sudanense (c. Siloking). The analyses were performed on plant material samples taken from the first cutting, when plants were in the beginning phase of tasseling. The following morphologic traits were studied: plant height, number of leaves per plant, stem leaf weight and mean stem weight. Additionally, their direct and indirect effects on the dependent variable, green biomass yield, were analyzed, for which path coefficients were calculated. This method enables a fuller and higher-quality insight into the relations existing among the studied traits, a more precise establishment of the cause-effect connections among them, and the separation of the direct from the indirect effects of any particular trait on the dependent variable, here biomass yield. The analysis of phenotypic coefficients revealed differences in the direct and indirect effects of certain traits on the dependent variable. Sudan grass had the tallest stem (2.281 m) and the most leaves per plant (7.917). Forage sorghum had the largest leaf weight per plant (49.05 g), while the interspecies hybrid had the highest mean stem weight (80.798 g). Variations of these morphologic traits among species were found to be significant or highly significant. The morphologic traits stem height and stem weight significantly affected sorghum green biomass yield. Leaf number and leaf portion in total biomass were negatively correlated with yield. Cultivars differed significantly regarding morphologic and productive traits. Sudan grass had the lowest green biomass yield, while forage sorghum and the interspecies hybrid had a significant yield increase.
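    Path coefficients of this kind are standardized partial regression coefficients; a generic sketch (trait and yield arrays are placeholders) of the direct/indirect decomposition:

```python
import numpy as np

def path_coefficients(X, y):
    """X: (n_samples, n_traits) trait matrix; y: (n_samples,) biomass yield."""
    Xs = (X - X.mean(0)) / X.std(0)        # standardize predictors
    ys = (y - y.mean()) / y.std()
    R = np.corrcoef(Xs, rowvar=False)      # trait intercorrelations
    r_xy = Xs.T @ ys / len(ys)             # trait-yield correlations
    direct = np.linalg.solve(R, r_xy)      # direct (path) effects
    indirect = r_xy - direct               # total indirect effect per trait
    return direct, indirect
```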

  19. Bayesian ensemble approach to error estimation of interatomic potentials

    DEFF Research Database (Denmark)

    Frederiksen, Søren Lund; Jacobsen, Karsten Wedel; Brown, K.S.

    2004-01-01

    Using a Bayesian approach, a general method is developed to assess error bars on predictions made by models fitted to data. The error bars are estimated from fluctuations in ensembles of models sampling the model-parameter space with a probability density set by the minimum cost. The method is applied to the development of interatomic potentials for molybdenum using various potential forms and databases based on atomic forces. The calculated error bars on elastic constants, gamma-surface energies, structural energies, and dislocation properties are shown to provide realistic estimates.
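    A sketch of the ensemble idea under simple assumptions: sample parameters with probability ~ exp(-cost/T) by a basic Metropolis walk and read off error bars as fluctuations of a predicted property over the ensemble (`cost` and the prediction function are placeholders for, e.g., a force-matching cost and an elastic-constant calculation):

```python
import numpy as np

def parameter_ensemble(cost, theta0, T, n_steps, step, rng):
    """Metropolis walk sampling parameters with probability ~ exp(-cost/T)."""
    theta = np.array(theta0, dtype=float)
    c = cost(theta)
    samples = []
    for _ in range(n_steps):
        prop = theta + rng.normal(0.0, step, size=theta.shape)
        c_prop = cost(prop)
        if rng.random() < np.exp(-(c_prop - c) / T):   # Metropolis acceptance
            theta, c = prop, c_prop
        samples.append(theta.copy())
    return np.array(samples)

# Error bar on a property = its fluctuation over the ensemble, e.g.:
# ensemble = parameter_ensemble(cost, theta0, T, 10_000, 0.01, rng)
# error_bar = np.std([predict_elastic_constant(th) for th in ensemble])
```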

  20. High-density amorphous ice: A path-integral simulation

    Science.gov (United States)

    Herrero, Carlos P.; Ramírez, Rafael

    2012-09-01

    Structural and thermodynamic properties of high-density amorphous (HDA) ice have been studied by path-integral molecular dynamics simulations in the isothermal-isobaric ensemble. Interatomic interactions were modeled using the effective q-TIP4P/F potential for flexible water. Quantum nuclear motion is found to affect several observable properties of the amorphous solid. At low temperature (T = 50 K) the molar volume of HDA ice is found to increase by 6%, and the intramolecular O-H distance rises by 1.4%, due to quantum motion. Peaks in the radial distribution function of HDA ice are broadened with respect to their classical expectation. The bulk modulus, B, is found to rise linearly with pressure, with a slope ∂B/∂P = 7.1. Our results are compared with those derived earlier from classical and path-integral simulations of HDA ice. We discuss similarities and discrepancies with those earlier simulations.

  1. Influence of muscle strength, physical activity and weight on bone mass in a population-based sample of 1004 elderly women.

    Science.gov (United States)

    Gerdhem, P; Ringsberg, K A M; Akesson, K; Obrant, K J

    2003-09-01

    High physical activity level has been associated with high bone mass and low fracture risk and is therefore recommended to reduce fractures in old age. The aim of this study was to estimate the effect of potentially modifiable variables, such as physical activity, muscle strength, muscle mass and weight, on bone mass in elderly women. The influence of isometric thigh muscle strength, self-estimated activity level, body composition and weight on bone mineral density (dual energy X-ray absorptiometry; DXA) in total body, hip and spine was investigated. Subjects were 1004 women, all 75 years old, taking part in the Malmö Osteoporosis Prospective Risk Assessment (OPRA) study. Physical activity and muscle strength accounted for 1-6% of the variability in bone mass, whereas weight, and its closely associated variables lean mass and fat mass, to a much greater extent explained the bone mass variability. We found current body weight to be the variable with the most substantial influence on the total variability in bone mass (15-32% depending on skeletal site) in a forward stepwise regression model. Our findings suggest that in elderly women, the major fracture-preventive effect of physical activity is unlikely to be mediated through increased bone mass. Retaining or even increasing body weight is likely to be beneficial to the skeleton, but an excess body weight increase may have negative effects on health. Nevertheless, training in elderly women may have advantages by improving balance, co-ordination and mobility and therefore decreasing the risk of fractures.

  2. Rocket Flight Path

    Directory of Open Access Journals (Sweden)

    Jamie Waters

    2014-09-01

    Full Text Available This project uses Newton's Second Law of Motion, Euler's method, basic physics, and basic calculus to model the flight path of a rocket. From this, one can find the height and velocity at any point from launch to the maximum altitude, or apogee. This can then be compared to the actual values to see if the method of estimation is plausible. The rocket used for this project is modeled after Bullistic-1, which was launched by the Society of Aeronautics and Rocketry at the University of South Florida.
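    A bare-bones Euler integration of the kind described, with placeholder thrust, mass and drag values (not Bullistic-1 data):

```python
dt, g = 0.01, 9.81
mass, thrust, burn_time, drag_coeff = 20.0, 600.0, 5.0, 0.002
t, h, v = 0.0, 0.0, 0.0
while v >= 0.0 or t <= burn_time:          # integrate until apogee
    f_thrust = thrust if t <= burn_time else 0.0
    a = f_thrust / mass - g - drag_coeff * v * abs(v) / mass
    h += v * dt                            # Euler step: height, then velocity
    v += a * dt
    t += dt
print(f"apogee ~ {h:.1f} m at t = {t:.2f} s")
```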

  3. JAVA PathFinder

    Science.gov (United States)

    Mehhtz, Peter

    2005-01-01

    JPF is an explicit-state software model checker for Java bytecode. Today, JPF is a swiss army knife for all sorts of runtime-based verification purposes. This basically means JPF is a Java virtual machine that executes your program not just once (like a normal VM), but theoretically in all possible ways, checking for property violations like deadlocks or unhandled exceptions along all potential execution paths. If it finds an error, JPF reports the whole execution that leads to it. Unlike a normal debugger, JPF keeps track of every step of how it got to the defect.

  4. Hamiltonian path integrals

    International Nuclear Information System (INIS)

    Prokhorov, L.V.

    1982-01-01

    Problems related to the treatment of operator noncommutativity in the Hamiltonian path integral (HPI) are considered in this review. Integrals over trajectories in configuration space are investigated (nonrelativistic quantum mechanics). Problems related to trajectory integrals in HPI phase space are discussed: the treatment of operator noncommutativity (the extra-terms problem) and the corresponding equivalence rules; the ambiguity of the usual form of the HPI; and the transition to curvilinear coordinates. The quantization of dynamical systems with constraints is also studied. As in the case of canonical transformations, quantization of systems with constraints of the first kind requires the consideration of extra terms.

  5. Path to Prosperity

    OpenAIRE

    Wolfowitz,Paul

    2006-01-01

    Paul Wolfowitz, President of the World Bank, discussed Singapore's remarkable progress along the road from poverty to prosperity which has also been discovered by many other countries in East Asia and around the world. He spoke of how each country must find its own path for people to pursue the same dreams of the chance to go to school, the security of a good job, and the ability to provide a better future for their children. Throughout the world, and importantly in the developing world, ther...

  6. Spatial Ensemble Postprocessing of Precipitation Forecasts Using High Resolution Analyses

    Science.gov (United States)

    Lang, Moritz N.; Schicker, Irene; Kann, Alexander; Wang, Yong

    2017-04-01

    Ensemble prediction systems are designed to account for errors or uncertainties in the initial and boundary conditions, imperfect parameterizations, etc. However, due to sampling errors and underestimation of the model errors, these ensemble forecasts tend to be underdispersive and to lack both reliability and sharpness. To overcome such limitations, statistical postprocessing methods are commonly applied to these forecasts. In this study, a full-distributional spatial postprocessing method is applied to short-range precipitation forecasts over Austria using Standardized Anomaly Model Output Statistics (SAMOS). Following Stauffer et al. (2016), observation and forecast fields are transformed into standardized anomalies by subtracting a site-specific climatological mean and dividing by the climatological standard deviation. Because only a single regression model needs to be fitted for the whole domain, the SAMOS framework provides a computationally inexpensive method to create operationally calibrated probabilistic forecasts for any arbitrary location or for all grid points in the domain simultaneously. Taking advantage of the INCA system (Integrated Nowcasting through Comprehensive Analysis), high-resolution analyses are used for the computation of the observed climatology and for model training. The INCA system operationally combines station measurements and remote sensing data into real-time objective analysis fields at 1 km horizontal resolution and 1 h temporal resolution. The precipitation forecast used in this study is obtained from a limited-area model ensemble prediction system also operated by ZAMG. The so-called ALADIN-LAEF provides, by applying a multi-physics approach, a 17-member forecast at a horizontal resolution of 10.9 km and a temporal resolution of 1 hour. The performed SAMOS approach statistically combines the in-house developed high-resolution analysis and ensemble prediction system. The station-based validation of 6-hour precipitation sums
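    The standardization step at the heart of SAMOS, following the description above (array names are illustrative):

```python
import numpy as np

def standardized_anomaly(field, clim_mean, clim_sd):
    """field, clim_mean, clim_sd: arrays over grid points (climatology from INCA)."""
    return (field - clim_mean) / clim_sd

# One regression between standardized forecast and observed anomalies serves
# the whole domain; predictions are then back-transformed:
# forecast = clim_mean + clim_sd * predicted_anomaly
```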

  7. Teleconnection Paths via Climate Network Direct Link Detection.

    Science.gov (United States)

    Zhou, Dong; Gozolchiani, Avi; Ashkenazy, Yosef; Havlin, Shlomo

    2015-12-31

    Teleconnections describe remote connections (typically thousands of kilometers) of the climate system. These are of great importance in climate dynamics as they reflect the transportation of energy and climate change on global scales (like the El Niño phenomenon). Yet, the path of influence propagation between such remote regions, and weighting associated with different paths, are only partially known. Here we propose a systematic climate network approach to find and quantify the optimal paths between remotely distant interacting locations. Specifically, we separate the correlations between two grid points into direct and indirect components, where the optimal path is found based on a minimal total cost function of the direct links. We demonstrate our method using near surface air temperature reanalysis data, on identifying cross-latitude teleconnections and their corresponding optimal paths. The proposed method may be used to quantify and improve our understanding regarding the emergence of climate patterns on global scales.
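    A sketch of the path search under an assumed link-cost choice (the paper defines a minimal total cost over direct links; -log|r| below is one illustrative way to make strong direct links cheap):

```python
import heapq
import numpy as np

def optimal_path(r_direct, src, dst):
    """r_direct: (n, n) direct-link correlations between grid points."""
    n = len(r_direct)
    cost = -np.log(np.abs(r_direct) + 1e-12)   # strong direct links are cheap
    dist, prev, heap = {src: 0.0}, {}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, np.inf):
            continue
        for v in range(n):
            if v != u and d + cost[u, v] < dist.get(v, np.inf):
                dist[v] = d + cost[u, v]
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    path, node = [dst], dst
    while node != src:                          # walk back along predecessors
        node = prev[node]
        path.append(node)
    return path[::-1]
```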

  8. Understanding ensemble protein folding at atomic detail

    International Nuclear Information System (INIS)

    Wallin, Stefan; Shakhnovich, Eugene I

    2008-01-01

    Although far from routine, simulating the folding of specific short protein chains on the computer, at a detailed atomic level, is starting to become a reality. This remarkable progress, which has been made over the last decade or so, allows a fundamental aspect of the protein folding process to be addressed, namely its statistical nature. In order to make quantitative comparisons with experimental kinetic data a complete ensemble view of folding must be achieved, with key observables averaged over the large number of microscopically different folding trajectories available to a protein chain. Here we review recent advances in atomic-level protein folding simulations and the new insight provided by them into the protein folding process. An important element in understanding ensemble folding kinetics are methods for analyzing many separate folding trajectories, and we discuss techniques developed to condense the large amount of information contained in an ensemble of trajectories into a manageable picture of the folding process. (topical review)

  9. Lattice gauge theory in the microcanonical ensemble

    International Nuclear Information System (INIS)

    Callaway, D.J.E.; Rahman, A.

    1983-01-01

    The microcanonical-ensemble formulation of lattice gauge theory proposed recently is examined in detail. Expectation values in this new ensemble are determined by solving a large set of coupled ordinary differential equations, after the fashion of a molecular dynamics simulation. Following a brief review of the microcanonical ensemble, calculations are performed for the gauge groups U(1), SU(2), and SU(3). The results are compared and contrasted with standard methods of computation. Several advantages of the new formalism are noted. For example, no random numbers are required to update the system. Also, this update is performed in a simultaneous fashion. Thus the microcanonical method presumably adapts well to parallel processing techniques, especially when the action is highly nonlocal (such as when fermions are included)

  10. Ensemble Network Architecture for Deep Reinforcement Learning

    Directory of Open Access Journals (Sweden)

    Xi-liang Chen

    2018-01-01

    Full Text Available The popular deep Q-learning algorithm is known to be unstable under certain conditions because of oscillations and overestimation of action values. These issues tend to adversely affect performance. In this paper, we develop an ensemble network architecture for deep reinforcement learning which is based on value function approximation. The temporal ensemble stabilizes the training process by reducing the variance of the target approximation error, and the ensemble of target values reduces the overestimation and improves performance by estimating more accurate Q-values. Our results show that this architecture leads to statistically significantly better value evaluation and more stable, better performance on several classical control tasks in the OpenAI Gym environment.
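    A minimal sketch of the target-value ensemble idea (names and shapes are illustrative; `target_nets` stands for several periodically frozen copies of the Q-network):

```python
import numpy as np

def ensemble_target(target_nets, rewards, next_states, gamma, dones):
    """Average the bootstrap target over several target networks.

    target_nets: list of Q-functions, each mapping a batch of states to an
    array of shape (batch, n_actions).
    """
    q_next = np.mean([net(next_states) for net in target_nets], axis=0)
    return rewards + gamma * (1.0 - dones) * q_next.max(axis=1)
```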

  11. Embedded random matrix ensembles in quantum physics

    CERN Document Server

    Kota, V K B

    2014-01-01

    Although used with increasing frequency in many branches of physics, random matrix ensembles are not always sufficiently specific to account for important features of the physical system at hand. One refinement which retains the basic stochastic approach but allows for such features consists in the use of embedded ensembles.  The present text is an exhaustive introduction to and survey of this important field. Starting with an easy-to-read introduction to general random matrix theory, the text then develops the necessary concepts from the beginning, accompanying the reader to the frontiers of present-day research. With some notable exceptions, to date these ensembles have primarily been applied in nuclear spectroscopy. A characteristic example is the use of a random two-body interaction in the framework of the nuclear shell model. Yet, topics in atomic physics, mesoscopic physics, quantum information science and statistical mechanics of isolated finite quantum systems can also be addressed using these ensemb...

  12. A new type of phase-space path integral

    International Nuclear Information System (INIS)

    Marinov, M.S.

    1991-01-01

    Evolution of Wigner's quasi-distribution of a quantum system is represented by means of a path integral in phase space. Instead of the Hamiltonian action, a new functional is present in the integral, and its extrema in the functional space are also given by the classical trajectories. The phase-space paths appear in the integral with real weights, so complex integrals are not necessary. The semiclassical approximation and some applications are discussed briefly. (orig.)

  13. Statistical hadronization and hadronic micro-canonical ensemble II

    International Nuclear Information System (INIS)

    Becattini, F.; Ferroni, L.

    2004-01-01

    We present a Monte Carlo calculation of the micro-canonical ensemble of the ideal hadron-resonance gas including all known states up to a mass of about 1.8 GeV and full quantum statistics. The micro-canonical average multiplicities of the various hadron species are found to converge to the canonical ones for moderately low values of the total energy, around 8 GeV, thus bearing out previous analyses of hadronic multiplicities in the canonical ensemble. The main numerical computing method is an importance sampling Monte Carlo algorithm using the product of Poisson distributions to generate multi-hadronic channels. It is shown that the use of this multi-Poisson distribution allows for an efficient and fast computation of averages, which can be further improved in the limit of very large clusters. We have also studied the fitness of a previously proposed computing method, based on the Metropolis Monte Carlo algorithm, for event generation in the statistical hadronization model. We find that the use of the multi-Poisson distribution as proposal matrix dramatically improves the computation performance. However, due to the correlation of subsequent samples, this method proves to be generally less robust and effective than the importance sampling method. (orig.)

  14. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
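    As one concrete ingredient of the methods reviewed, the replica-exchange swap move in its standard form (a sketch; the review covers both Monte Carlo and molecular dynamics versions):

```python
import numpy as np

def try_swap(energies, betas, i, j, rng):
    """Replica-exchange swap between temperature replicas i and j."""
    delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
    return rng.random() < min(1.0, np.exp(delta))   # Metropolis-like rule

# If accepted, replicas i and j exchange configurations; the induced random
# walk in temperature lets trajectories escape local-minimum-energy states.
```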

  15. Ensemble Kalman methods for inverse problems

    International Nuclear Information System (INIS)

    Iglesias, Marco A; Law, Kody J H; Stuart, Andrew M

    2013-01-01

    The ensemble Kalman filter (EnKF) was introduced by Evensen in 1994 (Evensen 1994 J. Geophys. Res. 99 10143–62) as a novel method for data assimilation: state estimation for noisily observed time-dependent problems. Since that time it has had enormous impact in many application domains because of its robustness and ease of implementation, and numerical evidence of its accuracy. In this paper we propose the application of an iterative ensemble Kalman method for the solution of a wide class of inverse problems. In this context we show that the estimate of the unknown function that we obtain with the ensemble Kalman method lies in a subspace A spanned by the initial ensemble. Hence the resulting error may be bounded above by the error found from the best approximation in this subspace. We provide numerical experiments which compare the error incurred by the ensemble Kalman method for inverse problems with the error of the best approximation in A, and with variants on traditional least-squares approaches, restricted to the subspace A. In so doing we demonstrate that the ensemble Kalman method for inverse problems provides a derivative-free optimization method with comparable accuracy to that achieved by traditional least-squares approaches. Furthermore, we also demonstrate that the accuracy is of the same order of magnitude as that achieved by the best approximation. Three examples are used to demonstrate these assertions: inversion of a compact linear operator; inversion of piezometric head to determine hydraulic conductivity in a Darcy model of groundwater flow; and inversion of Eulerian velocity measurements at positive times to determine the initial condition in an incompressible fluid. (paper)
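    A sketch of one ensemble Kalman iteration for the inverse problem y = G(u) + noise, in the spirit of the method discussed (notation illustrative); note the update keeps the ensemble in the span of the initial ensemble:

```python
import numpy as np

def eki_step(U, G, y, Gamma, rng):
    """U: (d, m) parameter ensemble; G: forward map R^d -> R^p;
    y: (p,) data; Gamma: (p, p) noise covariance."""
    m = U.shape[1]
    GU = np.column_stack([G(U[:, j]) for j in range(m)])
    du = U - U.mean(axis=1, keepdims=True)
    dg = GU - GU.mean(axis=1, keepdims=True)
    Cug = du @ dg.T / (m - 1)                   # parameter-output covariance
    Cgg = dg @ dg.T / (m - 1)                   # output covariance
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), Gamma, m).T
    return U + Cug @ np.linalg.solve(Cgg + Gamma, Y - GU)
```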

  16. An assessment of the relationship of physical activity, obesity, and chronic diseases/conditions between active/obese and sedentary/ normal weight American women in a national sample.

    Science.gov (United States)

    Pharr, J R; Coughenour, C A; Bungum, T J

    2018-03-01

    Obesity and physical inactivity are associated with increased rates of chronic diseases and conditions. However, the 'fit but fat' theory posits that cardiopulmonary fitness (or physical activity) can mitigate risks to health associated with obesity. The purpose of this study was to compare chronic diseases and conditions of highly active/obese women with inactive/normal weight women. This was a cross-sectional study of the 2015 Behavioral Risk Factor Surveillance System data. Weighted descriptive statistics were performed to describe the demographic characteristics of the two groups. We calculated odds ratios and adjusted odds ratios for chronic diseases and conditions comparing highly active/obese women with inactive/normal weight women. Highly active/obese women were more likely to report risk factors (hypertension, high cholesterol, and diabetes) for coronary heart disease (CHD) and cardiovascular disease (CVD) than inactive/normal weight women; however, they did not have increased rates of CVD, CHD, or heart attack and had decreased risk for stroke. Highly active/obese women had increased risk for asthma, arthritis, and depression, but not for cancer, kidney disease, or chronic obstructive pulmonary disease. Highly active/obese women appear to be staving off the actual development of CHD and CVD; however, further research is needed to understand the long-term health benefits of physical activity among obese women. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.

  17. Correlates of appearance and weight satisfaction in a U.S. National Sample: Personality, attachment style, television viewing, self-esteem, and life satisfaction.

    Science.gov (United States)

    Frederick, David A; Sandhu, Gaganjyot; Morse, Patrick J; Swami, Viren

    2016-06-01

    We examined the prevalence and correlates of satisfaction with appearance and weight. Participants (N=12,176) completed an online survey posted on the NBCNews.com and Today.com websites. Few men and women were very to extremely dissatisfied with their physical appearances (6%; 9%), but feeling very to extremely dissatisfied with weight was more common (15%; 20%). Only about one-fourth of men and women felt very to extremely satisfied with their appearances (28%; 26%) and weights (24%; 20%). Men and women with higher body masses reported higher appearance and weight dissatisfaction. Dissatisfied people had higher Neuroticism, more preoccupied and fearful attachment styles, and spent more hours watching television. In contrast, satisfied people had higher Openness, Conscientious, and Extraversion, were more secure in attachment style, and had higher self-esteem and life satisfaction. These findings highlight the high prevalence of body dissatisfaction and the factors linked to dissatisfaction among U.S. adults. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. An ensemble self-training protein interaction article classifier.

    Science.gov (United States)

    Chen, Yifei; Hou, Ping; Manderick, Bernard

    2014-01-01

    Protein-protein interaction (PPI) is essential to understand the fundamental processes governing cell biology. The mining and curation of PPI knowledge are critical for analyzing proteomics data. Hence it is desirable to classify articles automatically as PPI-related or not. In order to build interaction article classification systems, an annotated corpus is needed. However, it is usually the case that only a small number of labeled articles can be obtained manually, while a large number of unlabeled articles are available. By combining ensemble learning and semi-supervised self-training, an ensemble self-training interaction classifier called EST_IACer is designed to classify PPI-related articles based on a small number of labeled articles and a large number of unlabeled articles. A biological-background-based feature weighting strategy is extended using the category information from both labeled and unlabeled data. Moreover, a heuristic constraint is put forward to select optimal instances from the unlabeled data to further improve performance. Experimental results show that EST_IACer can classify PPI-related articles effectively and efficiently.

  19. World nuclear energy paths

    International Nuclear Information System (INIS)

    Connolly, T.J.; Hansen, U.; Jaek, W.; Beckurts, K.H.

    1979-01-01

    In examining the world nuclear energy paths, the following assumptions were adopted: the world economy will grow somewhat more slowly than in the past, leading to reductions in electricity demand growth rates; national and international political impediments to the deployment of nuclear power will gradually disappear over the next few years; further development of nuclear power will proceed steadily, without serious interruption but with realistic lead times for the introduction of advanced technologies. Given these assumptions, this paper attempts a study of possible world nuclear energy developments, disaggregated on a regional and national basis. The scenario technique was used and a few alternative fuel-cycle scenarios were developed. Each is an internally consistent model of technically and economically feasible paths to the further development of nuclear power in an aggregate of individual countries and regions of the world. The main purpose of this modeling exercise was to gain some insight into the probable international locations of reactors and other nuclear facilities, the future requirements for uranium and for fuel-cycle services, and the problems of spent-fuel storage and waste management. The study also presents an assessment of the role that nuclear power might actually play in meeting future world energy demand

  20. Wang-Landau Reaction Ensemble Method: Simulation of Weak Polyelectrolytes and General Acid-Base Reactions.

    Science.gov (United States)

    Landsgesell, Jonas; Holm, Christian; Smiatek, Jens

    2017-02-14

    We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while the accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides a sufficient statistical accuracy such that meaningful estimates for the density of states and the partition sum can be obtained. With regard to these estimates, several thermodynamic observables like the heat capacity or reaction free energies can be calculated. We demonstrate that the computation times for the calculation of titration curves with a high statistical accuracy can be significantly decreased when compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
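    A sketch of the Wang-Landau ingredient on its own, with the reaction-ensemble acceptance factors omitted for brevity (the flatness threshold and refinement factor below are conventional choices, not the paper's):

```python
import numpy as np

def wang_landau(propose, n_states, n_sweeps, f_init=1.0, rng=None):
    """Estimate the log density of states over a discrete reaction coordinate,
    e.g. the number of deprotonated sites of a weak polyelectrolyte."""
    rng = rng or np.random.default_rng()
    log_g = np.zeros(n_states)                 # running log density of states
    hist = np.zeros(n_states)
    state, f = 0, f_init
    for _ in range(n_sweeps):
        new = propose(state, rng)              # e.g. a (de)protonation move
        if rng.random() < np.exp(log_g[state] - log_g[new]):
            state = new                        # accept with bias ~ g(old)/g(new)
        log_g[state] += f                      # update estimate and histogram
        hist[state] += 1
        if hist.min() > 0.8 * hist.mean():     # histogram "flat": refine f
            f *= 0.5
            hist[:] = 0
    return log_g
```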

  1. Engagement with health care providers as a mediator between social capital and quality of life among a sample of people living with HIV in the United States: Path-analysis.

    Science.gov (United States)

    Jong, SoSon; Carrico, Adam; Cooper, Bruce; Thompson, Lisa; Portillo, Carmen

    2017-12-01

    Social capital is "features of social organizations-networks, norms, and as trust that facilitate coordination and cooperation for mutual benefit". People with high social capital have lower mortality and better health outcomes. Although utilization of social networks has grown, social capital continues to be a complex concept in relation to health promotion. This study examined 1) associations between social capital and quality of life (QoL), 2) factors of social capital leading to higher QoL among people living with HIV (PLWH), 3) role of health care providers (HCP) as a mediator between social capital and QoL. This is a secondary analysis of the International Nursing HIV Network for HIV/AIDS Research. This cross-sectional study included 1673 PLWH from 11 research sites in the United States in 2010. Using path analysis, we examined the independent effect of social capital on QoL, and the mediating effect of PLWH engagement with HCP. The majority of participants were male (71.2%), and 45.7% were African American. Eighty-nine percent of the participants were on antiretroviral therapy. Social capital consisted of three factors - social connection, tolerance toward diversity, and community participation - explaining 87% of variance of social capital. Path analysis (RMSEA = 0, CFI = 1) found that social connection, followed by tolerance toward diversity, were the principal domain of social capital leading to better QoL (std. beta = 0.50, std. error = 0.64, p capital was positively associated with QoL ( p capital on QoL was mediated by engagement with HCP ( p capital effectively, interventions should focus on strengthening PLWH's social connections and engagement to HCP.

  2. Cluster ensembles, quantization and the dilogarithm

    DEFF Research Database (Denmark)

    Fock, Vladimir; Goncharov, Alexander B.

    2009-01-01

    A cluster ensemble is a pair (A, X) of positive spaces (i.e. varieties equipped with positive atlases), coming with an action of a symmetry group Γ. The space A is closely related to the spectrum of a cluster algebra [12]. The two spaces are related by a morphism p: A → X. The space A is equipped with a closed 2-form, possibly degenerate, and the space X has a Poisson structure. The map p is compatible with these structures. The dilogarithm, together with its motivic and quantum avatars, plays a central role in the cluster ensemble structure. We define a non-commutative q-deformation of the X-space. When q is a root of unity...

  3. Ensemble computing for the petroleum industry

    International Nuclear Information System (INIS)

    Annaratone, M.; Dossa, D.

    1995-01-01

    Computer downsizing is one of the most often used buzzwords in today's competitive business, and the petroleum industry is at the forefront of this revolution. Ensemble computing provides the key for computer downsizing with its first incarnation, i.e., workstation farms. This paper concerns the importance of increasing the productivity cycle, and not just the execution time, of a job. The authors introduce the concepts of ensemble computing and workstation farms. They then discuss how different computing paradigms can be addressed by workstation farms.

  4. Weight Management

    Science.gov (United States)

    Obesity is a chronic condition that affects more ...

  5. Illness and determinants of health-related quality of life in a cross-sectional sample of schoolchildren in different weight categories

    Directory of Open Access Journals (Sweden)

    Kesztyüs, Dorothea

    2014-01-01

    Full Text Available [english] Aim: To study associations between health-related quality of life (HRQoL), frequency of illness, and weight in primary school children in southern Germany. Methods: Data from baseline measurements of the outcome evaluation of a teacher-based health promotion programme ("Join the Healthy Boat") were analysed. Parents provided information about their children's HRQoL (KINDL, EQ5D-Y Visual Analogue Scale). The number of visits to a physician, children's days of absence because of sickness, and parental days of absence from work due to their children's illness during the last year of school/kindergarten were queried. Children's weight status was determined by body mass index (BMI), and central obesity by a waist-to-height ratio (WHtR) ≥ 0.5. Results: Of 1,888 children (7.1 ± 0.6 years), 7.8% were underweight, 82% had normal weight, 5.7% were overweight and 4.4% were obese. 8.4% of all children were centrally obese. Bivariate analysis showed no significant differences in parental absence and visits to a physician across weight groups classified by BMI, but obese children had more sick days than non-obese children. Centrally obese children differed significantly from the rest in the number of sick days and visits to a physician, but not in the frequency of parental absence. In regression analyses, central obesity correlated significantly with the EQ5D-Y VAS, the KINDL total score and the subscales "psyche", "family" and "friends". BMI weight groups showed no significant associations. Conclusions: Central obesity, but not BMI-derived overweight and obesity, is associated with HRQoL and visits to a physician in primary school children. Future studies should include WHtR. Preventive measures for children should focus on a reduction of, or slowed increase in, waist circumference.

  6. Model dependence and its effect on ensemble projections in CMIP5

    Science.gov (United States)

    Abramowitz, G.; Bishop, C.

    2013-12-01

    Conceptually, the notion of model dependence within climate model ensembles is relatively simple - modelling groups share a literature base, parametrisations, data sets and even model code - so the potential for dependence in sampling different climate futures is clear. How, though, can this conceptual problem inform a practical solution that demonstrably improves the ensemble mean and ensemble variance as an estimate of system uncertainty? While some research has already focused on error correlation or error covariance as a candidate to improve ensemble mean estimates, a complete definition of independence must at least implicitly subscribe to an ensemble interpretation paradigm, such as the 'truth-plus-error', 'indistinguishable', or more recently 'replicate Earth' paradigm. Using a definition of model dependence based on error covariance within the replicate Earth paradigm, this presentation will show that accounting for dependence in surface air temperature gives cooler projections in CMIP5 - by as much as 20% globally in some RCPs - although results differ significantly for each RCP, especially regionally. That accounting for dependence changes projections differently under different RCPs is not an inconsistent result: different numbers of submissions to each RCP by different modelling groups mean that differences in projections between RCPs are not entirely about RCP forcing conditions - they also reflect different sampling strategies.

  7. Occupancy statistics arising from weighted particle rearrangements

    International Nuclear Information System (INIS)

    Huillet, Thierry

    2007-01-01

    The box-occupancy distributions arising from weighted rearrangements of a particle system are investigated. In the grand-canonical ensemble, they are characterized by determinantal joint probability generating functions. For doubly non-negative weight matrices, fractional occupancy statistics, generalizing Fermi-Dirac and Bose-Einstein statistics, can be defined. A spatially extended version of these balls-in-boxes problems is investigated

  8. A class of energy-based ensembles in Tsallis statistics

    International Nuclear Information System (INIS)

    Chandrashekar, R; Naina Mohammed, S S

    2011-01-01

    A comprehensive investigation is carried out on the class of energy-based ensembles. The eight ensembles are divided into two main classes. In the isothermal class of ensembles the individual members are at the same temperature. A unified framework is evolved to describe the four isothermal ensembles using the currently accepted third constraint formalism. The isothermal–isobaric, grand canonical and generalized ensembles are illustrated through a study of the classical nonrelativistic and extreme relativistic ideal gas models. An exact calculation is possible only in the case of the isothermal–isobaric ensemble. The study of the ideal gas models in the grand canonical and the generalized ensembles has been carried out using a perturbative procedure with the nonextensivity parameter (1 − q) as the expansion parameter. Though all the thermodynamic quantities have been computed up to a particular order in (1 − q) the procedure can be extended up to any arbitrary order in the expansion parameter. In the adiabatic class of ensembles the individual members of the ensemble have the same value of the heat function and a unified formulation to described all four ensembles is given. The nonrelativistic and the extreme relativistic ideal gases are studied in the isoenthalpic–isobaric ensemble, the adiabatic ensemble with number fluctuations and the adiabatic ensemble with number and particle fluctuations

  9. Epidemic extinction paths in complex networks

    Science.gov (United States)

    Hindes, Jason; Schwartz, Ira B.

    2017-05-01

    We study the extinction of long-lived epidemics on finite complex networks induced by intrinsic noise. Applying analytical techniques to the stochastic susceptible-infected-susceptible model, we predict the distribution of large fluctuations, the most probable or optimal path through a network that leads to a disease-free state from an endemic state, and the average extinction time in general configurations. Our predictions agree with Monte Carlo simulations on several networks, including synthetic weighted and degree-distributed networks with degree correlations, and an empirical high school contact network. In addition, our approach quantifies characteristic scaling patterns for the optimal path and distribution of large fluctuations, both near and away from the epidemic threshold, in networks with heterogeneous eigenvector centrality and degree distributions.

  10. Transition to collective oscillations in finite Kuramoto ensembles

    Science.gov (United States)

    Peter, Franziska; Pikovsky, Arkady

    2018-03-01

    We present an alternative approach to finite-size effects around the synchronization transition in the standard Kuramoto model. Our main focus lies on the conditions under which a collective oscillatory mode is well defined. For this purpose, the minimal value of the amplitude of the complex Kuramoto order parameter appears as a proper indicator. The dependence of this minimum on coupling strength varies due to sampling variations and correlates with the sample kurtosis of the natural frequency distribution. The skewness of the frequency sample determines the frequency of the resulting collective mode. The effects of kurtosis and skewness hold in the thermodynamic limit of infinite ensembles. We prove this by integrating a self-consistency equation for the complex Kuramoto order parameter for two families of distributions with controlled kurtosis and skewness, respectively.

  11. Paths of Cultural Systems

    Directory of Open Access Journals (Sweden)

    Paul Ballonoff

    2017-12-01

    Full Text Available A theory of cultural structures predicts the objects observed by anthropologists. We here define those which use kinship relationships to define systems. A finite structure we call a partially defined quasigroup (or pdq, as stated by Definition 1 below) on a dictionary (called a natural language) allows prediction of certain anthropological descriptions, using homomorphisms of pdqs onto finite groups. A viable history (defined using pdqs) states how an individual in a population following such a history may perform culturally allowed associations, which allows a viable history to continue to survive. The vector states on sets of viable histories identify demographic observables on descent sequences. Paths of vector states on sets of viable histories may determine which histories can exist empirically.

  12. Propagators and path integrals

    Energy Technology Data Exchange (ETDEWEB)

    Holten, J.W. van

    1995-08-22

    Path-integral expressions for one-particle propagators in scalar and fermionic field theories are derived, for arbitrary mass. This establishes a direct connection between field theory and specific classical point-particle models. The role of world-line reparametrization invariance of the classical action and the implementation of the corresponding BRST-symmetry in the quantum theory are discussed. The presence of classical world-line supersymmetry is shown to lead to an unwanted doubling of states for massive spin-1/2 particles. The origin of this phenomenon is traced to a 'hidden' topological fermionic excitation. A different formulation of the pseudo-classical mechanics using a bosonic representation of γ5 is shown to remove these extra states at the expense of losing manifest supersymmetry. (orig.)

  13. innovation path exploration

    Directory of Open Access Journals (Sweden)

    Li Jian

    2016-01-01

    Full Text Available The world has entered the information age: all kinds of information technologies, such as cloud computing and big data, are developing rapidly, and the "Internet plus" era has arrived. The main purpose of "Internet plus" is to provide an opportunity for the further development of enterprises by combining technology, business and other factors. For enterprises, grasping the impact of "Internet plus" on the market economy will undoubtedly pave the way for their future development. This paper studies the innovation path of enterprise management in the "Internet plus" era, and hopes to put forward some useful opinions and suggestions.

  14. Ensemble hydro-meteorological forecasting for early warning of floods and scheduling of hydropower production

    Science.gov (United States)

    Solvang Johansen, Stian; Steinsland, Ingelin; Engeland, Kolbjørn

    2016-04-01

    Running hydrological models with precipitation and temperature ensemble forcing to generate ensembles of streamflow is a commonly used method in operational hydrology. Evaluations of streamflow ensembles have, however, revealed that the ensembles are biased with respect to both mean and spread; postprocessing of the ensembles is therefore needed in order to improve forecast skill. The aims of this study are (i) to evaluate how postprocessing of streamflow ensembles works for Norwegian catchments within different hydrological regimes and (ii) to demonstrate how postprocessed streamflow ensembles are used operationally by a hydropower producer. These aims were achieved by postprocessing forecasted daily discharge for 10 lead times for 20 catchments in Norway, using EPS forcing from ECMWF applied to the semi-distributed HBV model, dividing each catchment into 10 elevation zones. Statkraft Energi uses forecasts from these catchments for scheduling hydropower production. The catchments represent different hydrological regimes. Some catchments have stable winter conditions with winter low flow and a major flood event during spring or early summer caused by snow melt. Others have a more mixed snow-rain regime, often with a secondary flood season during autumn; in the coastal areas, streamflow is dominated by rain, and the main flood season is autumn and winter. For postprocessing, a Bayesian model averaging (BMA) model close to that of Kleiber et al. (2011) is used. The model creates a predictive PDF that is a weighted average of PDFs centered on the individual bias-corrected forecasts. The weights are here equal, since all ensemble members come from the same model and thus have the same probability. For modeling streamflow, the gamma distribution is chosen as the predictive PDF. The bias-correction parameters and the PDF parameters are estimated using a 30-day sliding-window training period. Preliminary results show that the improvement varies between catchments depending
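    A sketch of the BMA predictive density described above: equal weights over gamma PDFs centered on the bias-corrected members. The member-to-(shape, scale) mapping is an assumption for illustration; in practice the gamma parameters are estimated, like the bias correction, over the 30-day sliding window:

```python
import numpy as np
from scipy import stats

def bma_pdf(q, members_bc, spread):
    """Predictive density at streamflow q.

    members_bc: bias-corrected member forecasts; spread: gamma standard
    deviation (estimated, like the bias correction, over the sliding window).
    """
    w = 1.0 / len(members_bc)                  # equal BMA weights
    dens = 0.0
    for mu in members_bc:
        shape = (mu / spread) ** 2             # gamma with mean mu, sd spread
        scale = spread ** 2 / mu
        dens += w * stats.gamma.pdf(q, a=shape, scale=scale)
    return dens
```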

  15. The Hydrologic Ensemble Prediction Experiment (HEPEX)

    Science.gov (United States)

    Wood, Andy; Wetterhall, Fredrik; Ramos, Maria-Helena

    2015-04-01

    The Hydrologic Ensemble Prediction Experiment was established in March 2004 at a workshop hosted by the European Centre for Medium-Range Weather Forecasts (ECMWF) and co-sponsored by the US National Weather Service (NWS) and the European Commission (EC). The HEPEX goal was to bring the international hydrological and meteorological communities together to advance the understanding and adoption of hydrological ensemble forecasts for decision support. HEPEX pursues this goal through research efforts and practical implementations involving six core elements of a hydrologic ensemble prediction enterprise: input and pre-processing, ensemble techniques, data assimilation, post-processing, verification, and communication and use in decision making. HEPEX has grown through meetings that connect the user, forecast producer and research communities to exchange ideas, data and methods; the coordination of experiments to address specific challenges; and the formation of testbeds to facilitate shared experimentation. In the last decade, HEPEX has organized over a dozen international workshops, as well as sessions at scientific meetings (including AMS, AGU and EGU) and special issues of scientific journals where workshop results have been published. Through these interactions and an active online blog (www.hepex.org), HEPEX has built a strong and active community of nearly 400 researchers and practitioners around the world. This poster presents an overview of recent and planned HEPEX activities, highlighting case studies that exemplify the focus and objectives of HEPEX.

  16. A method for ensemble wildland fire simulation

    Science.gov (United States)

    Mark A. Finney; Isaac C. Grenfell; Charles W. McHugh; Robert C. Seli; Diane Trethewey; Richard D. Stratton; Stuart Brittain

    2011-01-01

    An ensemble simulation system that accounts for uncertainty in long-range weather conditions and two-dimensional wildland fire spread is described. Fuel moisture is expressed based on the energy release component, a US fire danger rating index, and its variation throughout the fire season is modeled using time series analysis of historical weather data. This analysis...

  17. The Phantasmagoria of Competition in School Ensembles

    Science.gov (United States)

    Abramo, Joseph Michael

    2017-01-01

    Participation in competition festivals--where students and ensembles compete against each other for high scores and accolades--is a widespread practice in North American formal music education. In this article, I use Marx's theories of labor, value, and phantasmagoria to suggest a capitalist logic that structures these competitions. Marx's…

  18. Ensembl Genomes 2016: more genomes, more complexity.

    Science.gov (United States)

    Kersey, Paul Julian; Allen, James E; Armean, Irina; Boddu, Sanjay; Bolt, Bruce J; Carvalho-Silva, Denise; Christensen, Mikkel; Davis, Paul; Falin, Lee J; Grabmueller, Christoph; Humphrey, Jay; Kerhornou, Arnaud; Khobova, Julia; Aranganathan, Naveen K; Langridge, Nicholas; Lowy, Ernesto; McDowall, Mark D; Maheswari, Uma; Nuhn, Michael; Ong, Chuang Kee; Overduin, Bert; Paulini, Michael; Pedro, Helder; Perry, Emily; Spudich, Giulietta; Tapanari, Electra; Walts, Brandon; Williams, Gareth; Tello-Ruiz, Marcela; Stein, Joshua; Wei, Sharon; Ware, Doreen; Bolser, Daniel M; Howe, Kevin L; Kulesha, Eugene; Lawson, Daniel; Maslen, Gareth; Staines, Daniel M

    2016-01-04

    Ensembl Genomes (http://www.ensemblgenomes.org) is an integrating resource for genome-scale data from non-vertebrate species, complementing the resources for vertebrate genomics developed in the context of the Ensembl project (http://www.ensembl.org). Together, the two resources provide a consistent set of programmatic and interactive interfaces to a rich range of data including reference sequence, gene models, transcriptional data, genetic variation and comparative analysis. This paper provides an update to the previous publications about the resource, with a focus on recent developments. These include the development of new analyses and views to represent polyploid genomes (of which bread wheat is the primary exemplar); and the continued up-scaling of the resource, which now includes over 23 000 bacterial genomes, 400 fungal genomes and 100 protist genomes, in addition to 55 genomes from invertebrate metazoa and 39 genomes from plants. This dramatic increase in the number of included genomes is one part of a broader effort to automate the integration of archival data (genome sequence, but also associated RNA sequence data and variant calls) within the context of reference genomes and make it available through the Ensembl user interfaces. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  19. NYYD Ensemble and Riho Sibul / Anneli Remme

    Index Scriptorium Estoniae

    Remme, Anneli, 1968-

    2001-01-01

    Gavin Bryars' work "Jesus' Blood Never Failed Me Yet" performed by the NYYD Ensemble and Riho Sibul on 27 December at St. Paul's Church in Tartu and on 28 December at the Swedish St. Michael's Church in Tallinn, with the participation of the Tartu University Chamber Choir (in Tartu) and the chamber choir Voces Musicales (in Tallinn). Artistic director Olari Elts.

  20. Conductor gestures influence evaluations of ensemble performance

    Directory of Open Access Journals (Sweden)

    Steven Morrison

    2014-07-01

    Full Text Available Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor’s gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance, articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and nonmajors (N = 285 viewed sixteen 30-second performances and evaluated the quality of the ensemble’s articulation, dynamics, technique and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble’s performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity.

  1. Genetic Algorithm Optimized Neural Networks Ensemble as ...

    African Journals Online (AJOL)

    NJD

    Improvements in neural network calibration models by a novel approach using neural network ensemble (NNE) for the simultaneous ... process by training a number of neural networks. .... Matlab® version 6.1 was employed for building principal component ... provide a fair simulation of calibration data set with some degree.

  2. A Theoretical Analysis of Why Hybrid Ensembles Work

    Directory of Open Access Journals (Sweden)

    Kuo-Wei Hsu

    2017-01-01

    Full Text Available Inspired by the group decision-making process, ensembles or combinations of classifiers have been found favorable in a wide variety of application domains. Some researchers propose to use a mixture of two different types of classification algorithms to create a hybrid ensemble. Why does such an ensemble work? The question remains open. Following the concept of diversity, which is one of the fundamental elements of the success of ensembles, we conduct a theoretical analysis of why hybrid ensembles work, connecting the use of different algorithms to accuracy gain. We also conduct experiments on the classification performance of hybrid ensembles of classifiers created by the decision tree and naïve Bayes classification algorithms, each of which is a top data mining algorithm often used to create non-hybrid ensembles. Through this paper, we thus provide a complement to the theoretical foundation of creating and using hybrid ensembles.
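
    As a hedged illustration of the kind of hybrid ensemble analyzed in this record, a minimal Python sketch that combines a decision tree and naïve Bayes by soft voting; the dataset, library (scikit-learn), and hyperparameters are placeholders, not the paper's experimental setup:

      from sklearn.datasets import load_breast_cancer
      from sklearn.ensemble import VotingClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB
      from sklearn.tree import DecisionTreeClassifier

      X, y = load_breast_cancer(return_X_y=True)

      # Hybrid ensemble: mixing two algorithm types promotes diversity.
      hybrid = VotingClassifier(
          estimators=[("dt", DecisionTreeClassifier(max_depth=5)),
                      ("nb", GaussianNB())],
          voting="soft",  # average the predicted class probabilities
      )
      print(cross_val_score(hybrid, X, y, cv=5).mean())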

  3. Ensemble of classifiers based network intrusion detection system performance bound

    CSIR Research Space (South Africa)

    Mkuzangwe, Nenekazi NP

    2017-11-01

    Full Text Available This paper provides a performance bound of a network intrusion detection system (NIDS) that uses an ensemble of classifiers. Currently researchers rely on implementing the ensemble of classifiers based NIDS before they can determine the performance...

  4. Global Ensemble Forecast System (GEFS) [2.5 Deg.

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Global Ensemble Forecast System (GEFS) is a weather forecast model made up of 21 separate forecasts, or ensemble members. The National Centers for Environmental...

  5. Eating disorder symptoms and weight and shape concerns in a large web-based convenience sample of women ages 50 and above: results of the Gender and Body Image (GABI) study.

    Science.gov (United States)

    Gagne, Danielle A; Von Holle, Ann; Brownley, Kimberly A; Runfola, Cristin D; Hofmeier, Sara; Branch, Kateland E; Bulik, Cynthia M

    2012-11-01

    Limited research exists on eating disorder symptoms and attitudes and weight and shape concerns in women in midlife to older adulthood. We conducted an online survey to characterize these behaviors and concerns in women ages 50 and above. Participants (n = 1,849) were recruited via the Internet and convenience sampling. Eating disorder symptoms, dieting and body checking behaviors, and weight and shape concerns were widely endorsed. Younger age and higher body mass index (BMI) were associated with greater endorsement of eating disorder symptoms, behaviors, and concerns. Weight and shape concerns and disordered eating behaviors occur in women over 50 and vary by age and BMI. Focused research on disordered eating patterns in this age group is necessary to develop age-appropriate interventions and to meet the developmental needs of an important, growing, and underserved population. Copyright © 2012 Wiley Periodicals, Inc.

  6. Using ensemble forecasting for wind power

    Energy Technology Data Exchange (ETDEWEB)

    Giebel, G.; Landberg, L.; Badger, J. [Risoe National Lab., Roskilde (Denmark); Sattler, K.

    2003-07-01

    Short-term prediction of wind power has a long tradition in Denmark. It is an essential tool for the operators to keep the grid from becoming unstable in a region like Jutland, where more than 27% of the electricity consumption comes from wind power. This means that the minimum load is already lower than the maximum production from wind energy alone. Danish utilities have therefore used short-term prediction of wind energy since the mid-1990s. However, the accuracy is still far from sufficient in the eyes of the utilities, who are accustomed to load forecasts accurate to within 5% on a one-week horizon. The Ensemble project tries to alleviate the dependency of the forecast quality on a single model by using multiple models, and will also investigate the possibility of using the model spread of multiple models, or of dedicated ensemble runs, as a prediction of the uncertainty of the forecast. Usually, short-term forecasting works (especially for horizons beyond 6 hours) by gathering input from a Numerical Weather Prediction (NWP) model. This input data is used together with online data in statistical models (as is the case, e.g., in Zephyr/WPPT) to yield the output of the wind farms or of a whole region for the next 48 hours (limited only by the NWP model horizon). For the accuracy of the final production forecast, the accuracy of the NWP prediction is paramount. While many efforts are underway to increase the accuracy of the NWP forecasts themselves (which are ultimately limited by the amount of computing power available, the lack of a tight observational network on the Atlantic, and limited physics modelling), another approach is to use ensembles of different models or different model runs. This can be either an ensemble of different models' output for the same area, using different data assimilation schemes and different model physics, or a dedicated ensemble run by a large institution, where the same model is run with slight variations in initial conditions and ...
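
    As a hedged sketch of the basic idea of using ensemble spread as an uncertainty estimate for a power forecast, here is a minimal Python illustration; the member wind speeds and the toy power curve are hypothetical placeholders, not part of the project described above:

      import numpy as np

      # Hypothetical wind-speed forecasts (m/s) from several NWP
      # ensemble members, for one site and one lead time.
      wind_members = np.array([7.2, 8.1, 6.5, 9.0, 7.8])

      def power_curve(v, rated=2.0, v_cut_in=3.0, v_rated=12.0):
          """Toy turbine power curve (MW): cubic ramp between cut-in
          and rated wind speed, flat at rated power above that."""
          v = np.clip(v, v_cut_in, v_rated)
          return rated * ((v - v_cut_in) / (v_rated - v_cut_in)) ** 3

      power_members = power_curve(wind_members)  # one forecast per member
      forecast = power_members.mean()            # central forecast
      uncertainty = power_members.std(ddof=1)    # spread as uncertainty proxy
      print(f"forecast {forecast:.2f} MW +/- {uncertainty:.2f} MW")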

  7. Ensemble data assimilation in the Red Sea: sensitivity to ensemble selection and atmospheric forcing

    KAUST Repository

    Toye, Habib; Zhan, Peng; Gopalakrishnan, Ganesh; Kartadikaria, Aditya R.; Huang, Huang; Knio, Omar; Hoteit, Ibrahim

    2017-01-01

    We present our efforts to build an ensemble data assimilation and forecasting system for the Red Sea. The system consists of the high-resolution Massachusetts Institute of Technology general circulation model (MITgcm) to simulate ocean circulation

  8. Robust Ensemble Filtering and Its Relation to Covariance Inflation in the Ensemble Kalman Filter

    KAUST Repository

    Luo, Xiaodong; Hoteit, Ibrahim

    2011-01-01

    A robust ensemble filtering scheme based on the H∞ filtering theory is proposed. The optimal H∞ filter is derived by minimizing the supremum (or maximum) of a predefined cost function, a criterion different from the minimum variance used

  9. Quantum canonical ensemble: A projection operator approach

    Science.gov (United States)

    Magnus, Wim; Lemmens, Lucien; Brosens, Fons

    2017-09-01

    Knowing the exact number of particles N, and taking this knowledge into account, the quantum canonical ensemble imposes a constraint on the occupation number operators. The constraint particularly hampers the systematic calculation of the partition function and any relevant thermodynamic expectation value for arbitrary but fixed N. On the other hand, fixing only the average number of particles, one may remove the above constraint and simply factorize the traces in Fock space into traces over single-particle states. As is well known, that would be the strategy of the grand-canonical ensemble which, however, comes with an additional Lagrange multiplier to impose the average number of particles. The appearance of this multiplier can be avoided by invoking a projection operator that enables a constraint-free computation of the partition function and its derived quantities in the canonical ensemble, at the price of an angular or contour integration. Introduced in the recent past to handle various issues related to particle-number projected statistics, the projection operator approach proves beneficial to a wide variety of problems in condensed matter physics for which the canonical ensemble offers a natural and appropriate environment. In this light, we present a systematic treatment of the canonical ensemble that embeds the projection operator into the formalism of second quantization while explicitly fixing N, the very number of particles rather than the average. Being applicable to both bosonic and fermionic systems in arbitrary dimensions, transparent integral representations are provided for the partition function Z_N and the Helmholtz free energy F_N as well as for two- and four-point correlation functions. The chemical potential is not a Lagrange multiplier regulating the average particle number but can be extracted from F_{N+1} - F_N, as illustrated for a two-dimensional fermion gas.
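
    As a hedged illustration of the projection idea in the simplest textbook case, noninteracting fermions with single-particle energies \varepsilon_k (the paper's formalism is more general), the number-projected partition function is an angular integral:

      \[
      Z_N = \frac{1}{2\pi} \int_0^{2\pi} d\phi \; e^{-iN\phi}
            \prod_k \left( 1 + e^{i\phi} e^{-\beta \varepsilon_k} \right),
      \qquad
      F_N = -\frac{1}{\beta} \ln Z_N .
      \]

    The factor e^{-iN\phi} picks out of the unconstrained trace exactly the contributions with total particle number N; for bosons the product is replaced by \prod_k (1 - e^{i\phi} e^{-\beta \varepsilon_k})^{-1}.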

  10. Tornado intensity estimated from damage path dimensions.

    Directory of Open Access Journals (Sweden)

    James B Elsner

    Full Text Available The Newcastle/Moore and El Reno tornadoes of May 2013 are recent reminders of the destructive power of tornadoes. A direct estimate of a tornado's power is difficult and dangerous to get. An indirect estimate on a categorical scale is available from a post-storm survey of the damage. Wind speed bounds are attached to the scale, but the scale is not adequate for analyzing trends in tornado intensity separate from trends in tornado frequency. Here tornado intensity on a continuum is estimated from damage path length and width, which are measured on continuous scales and correlated to the EF rating. The wind speeds on the EF scale are treated as interval censored data and regressed onto the path dimensions and fatalities. The regression model indicates a 25% increase in expected intensity over a threshold intensity of 29 m s(-1) for a 100 km increase in path length and a 17% increase in expected intensity for a one km increase in path width. The model shows a 43% increase in the expected intensity when fatalities are observed controlling for path dimensions. The estimated wind speeds correlate at a level of .77 (.34, .93) [95% confidence interval] with a small sample of wind speeds estimated independently from a Doppler radar calibration. The estimated wind speeds allow analyses to be done on the tornado database that are not possible with the categorical scale. The modeled intensities can be used in climatology and in environmental and engineering applications. Research is needed to understand the upward trends in path length and width.

  11. Tornado intensity estimated from damage path dimensions.

    Science.gov (United States)

    Elsner, James B; Jagger, Thomas H; Elsner, Ian J

    2014-01-01

    The Newcastle/Moore and El Reno tornadoes of May 2013 are recent reminders of the destructive power of tornadoes. A direct estimate of a tornado's power is difficult and dangerous to get. An indirect estimate on a categorical scale is available from a post-storm survey of the damage. Wind speed bounds are attached to the scale, but the scale is not adequate for analyzing trends in tornado intensity separate from trends in tornado frequency. Here tornado intensity on a continuum is estimated from damage path length and width, which are measured on continuous scales and correlated to the EF rating. The wind speeds on the EF scale are treated as interval censored data and regressed onto the path dimensions and fatalities. The regression model indicates a 25% increase in expected intensity over a threshold intensity of 29 m s(-1) for a 100 km increase in path length and a 17% increase in expected intensity for a one km increase in path width. The model shows a 43% increase in the expected intensity when fatalities are observed controlling for path dimensions. The estimated wind speeds correlate at a level of .77 (.34, .93) [95% confidence interval] with a small sample of wind speeds estimated independently from a Doppler radar calibration. The estimated wind speeds allow analyses to be done on the tornado database that are not possible with the categorical scale. The modeled intensities can be used in climatology and in environmental and engineering applications. Research is needed to understand the upward trends in path length and width.
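
    As a hedged reconstruction of how the quoted multiplicative effects combine (assuming a log-linear model for the expected exceedance above the 29 m/s threshold; the functional form and the baseline exceedance are assumptions for illustration, not the authors' published model), a short Python sketch:

      def expected_intensity(path_length_km, path_width_km, fatal,
                             baseline_exceedance=10.0):
          """Toy expected tornado wind speed (m/s): a 29 m/s threshold
          plus an exceedance scaled by the quoted effects (+25% per
          100 km of path length, +17% per km of path width, +43% if
          fatalities occurred).  baseline_exceedance is hypothetical."""
          factor = (1.25 ** (path_length_km / 100.0)
                    * 1.17 ** path_width_km
                    * (1.43 if fatal else 1.0))
          return 29.0 + baseline_exceedance * factor

      # Toy usage: a 50 km long, 1.5 km wide damage path with fatalities.
      print(f"{expected_intensity(50.0, 1.5, fatal=True):.1f} m/s")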

  12. Path integral in Snyder space

    Energy Technology Data Exchange (ETDEWEB)

    Mignemi, S., E-mail: smignemi@unica.it [Dipartimento di Matematica e Informatica, Università di Cagliari, Viale Merello 92, 09123 Cagliari (Italy); INFN, Sezione di Cagliari, Cittadella Universitaria, 09042 Monserrato (Italy); Štrajn, R. [Dipartimento di Matematica e Informatica, Università di Cagliari, Viale Merello 92, 09123 Cagliari (Italy); INFN, Sezione di Cagliari, Cittadella Universitaria, 09042 Monserrato (Italy)

    2016-04-29

    The definition of path integrals in one- and two-dimensional Snyder space is discussed in detail both in the traditional setting and in the first-order formalism of Faddeev and Jackiw. - Highlights: • The definition of the path integral in Snyder space is discussed using phase space methods. • The same result is obtained in the first-order formalism of Faddeev and Jackiw. • The path integral formulation of the two-dimensional Snyder harmonic oscillator is outlined.

  13. Path integral in Snyder space

    International Nuclear Information System (INIS)

    Mignemi, S.; Štrajn, R.

    2016-01-01

    The definition of path integrals in one- and two-dimensional Snyder space is discussed in detail both in the traditional setting and in the first-order formalism of Faddeev and Jackiw. - Highlights: • The definition of the path integral in Snyder space is discussed using phase space methods. • The same result is obtained in the first-order formalism of Faddeev and Jackiw. • The path integral formulation of the two-dimensional Snyder harmonic oscillator is outlined.

  14. Efficient Completion of Weighted Automata

    Directory of Open Access Journals (Sweden)

    Johannes Waldmann

    2016-09-01

    Full Text Available We consider directed graphs with edge labels from a semiring. We present an algorithm that allows efficient execution of queries for existence and weights of paths, and allows updates of the graph: adding nodes and edges, and changing weights of existing edges. We apply this method in the construction of matchbound certificates for automatically proving termination of string rewriting. We re-implement the decomposition/completion algorithm of Endrullis et al. (2006) in our framework, and achieve comparable performance.
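
    As a hedged illustration of path-weight queries over a semiring, here is a generic all-pairs closure in the style of Floyd-Warshall in Python, shown on the tropical min-plus semiring; this is a batch computation, not the incremental completion algorithm of the paper:

      INF = float("inf")

      def semiring_closure(weights, plus, times):
          """All-pairs best path weights over a semiring (plus, times).
          weights[i][j] is the edge label from i to j (semiring zero,
          e.g. INF for min-plus, when there is no edge).  Assumes the
          closure converges, e.g. no negative cycles for min-plus."""
          n = len(weights)
          d = [row[:] for row in weights]
          for k in range(n):
              for i in range(n):
                  for j in range(n):
                      d[i][j] = plus(d[i][j], times(d[i][k], d[k][j]))
          return d

      graph = [[0, 3, INF],
               [INF, 0, 1],
               [2, INF, 0]]
      dist = semiring_closure(graph, plus=min, times=lambda a, b: a + b)
      print(dist[0][2])  # 4, via the path 0 -> 1 -> 2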

  15. Fitting a function to time-dependent ensemble averaged data.

    Science.gov (United States)

    Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias

    2018-05-03

    Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function-fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general-purpose function-fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error-estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method. We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous-time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide a publicly available WLS-ICE software.
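
    As a hedged sketch of the central point, error estimation for weighted least squares that retains the full covariance of the data, here for a linear model: a generic sandwich estimator written from the description above, not the authors' WLS-ICE code:

      import numpy as np

      def wls_fit_full_cov(A, y, w, C):
          """Weighted least squares for y ~ A @ theta.

          A: (n, p) design matrix        y: (n,) averaged data
          w: (n,) fitting weights (correlations ignored in the fit)
          C: (n, n) full covariance of y, including temporal
             correlations, used only in the error estimate."""
          W = np.diag(w)
          G = np.linalg.inv(A.T @ W @ A)
          theta = G @ A.T @ W @ y
          # Sandwich formula: parameter covariance under the true data
          # covariance C, although the fit itself used diagonal weights.
          cov_theta = G @ A.T @ W @ C @ W @ A @ G
          return theta, cov_theta

      # Toy usage: fit y = c0 + c1 * t under AR(1)-like correlated noise.
      t = np.arange(10.0)
      A = np.column_stack([np.ones_like(t), t])
      C = 0.9 ** np.abs(np.subtract.outer(t, t))
      y = 1.0 + 0.5 * t + np.random.multivariate_normal(np.zeros(10), C)
      theta, cov = wls_fit_full_cov(A, y, 1.0 / np.diag(C), C)
      print(theta, np.sqrt(np.diag(cov)))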

  16. Ensemble forecasting using sequential aggregation for photovoltaic power applications

    International Nuclear Information System (INIS)

    Thorey, Jean

    2017-01-01

    Our main objective is to improve the quality of photovoltaic power forecasts deriving from weather forecasts. Such forecasts are imperfect due to meteorological uncertainties and statistical modeling inaccuracies in the conversion of weather forecasts to power forecasts. First, we gather several weather forecasts; second, we generate multiple photovoltaic power forecasts; and finally, we build linear combinations of the power forecasts. The minimization of the Continuous Ranked Probability Score (CRPS) makes it possible to statistically calibrate the combination of these forecasts, and provides probabilistic forecasts in the form of a weighted empirical distribution function. We investigate the CRPS bias in this context and several properties of scoring rules which can be seen as a sum of quantile-weighted losses or a sum of threshold-weighted losses. The minimization procedure is achieved with online learning techniques. Such techniques come with theoretical guarantees of robustness on the predictive power of the combination of the forecasts. Essentially no assumptions are needed for the theoretical guarantees to hold. The proposed methods are applied to the forecast of solar radiation using satellite data, and the forecast of photovoltaic power based on high-resolution weather forecasts and standard ensembles of forecasts. (author) [fr]
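
    As a hedged illustration of the score being minimized, the CRPS of a weighted empirical forecast distribution with members x_i and weights w_i (summing to one) has the standard closed form CRPS = sum_i w_i |x_i - y| - (1/2) sum_{i,j} w_i w_j |x_i - x_j|. A direct NumPy transcription, with placeholder members and weights:

      import numpy as np

      def crps_weighted_ensemble(x, w, y):
          """CRPS of the weighted empirical distribution (members x,
          weights w summing to 1) against the observation y."""
          x, w = np.asarray(x), np.asarray(w)
          term1 = np.sum(w * np.abs(x - y))
          term2 = 0.5 * np.sum(np.outer(w, w)
                               * np.abs(np.subtract.outer(x, x)))
          return term1 - term2

      # Toy usage: three power forecasts (kW) with learned weights.
      print(crps_weighted_ensemble([90.0, 100.0, 120.0],
                                   [0.2, 0.5, 0.3], y=105.0))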

  17. Multidimensional generalized-ensemble algorithms for complex systems.

    Science.gov (United States)

    Mitsutake, Ayori; Okamoto, Yuko

    2009-06-07

    We give general formulations of the multidimensional multicanonical algorithm, simulated tempering, and replica-exchange method. We generalize the original potential energy function E_0 by adding any physical quantity V of interest as a new energy term. These multidimensional generalized-ensemble algorithms then perform a random walk not only in E_0 space but also in V space. Among the three algorithms, the replica-exchange method is the easiest to perform because the weight factor is just a product of regular Boltzmann-like factors, while the weight factors for the multicanonical algorithm and simulated tempering are not a priori known. We give a simple procedure for obtaining the weight factors for these two latter algorithms, which uses a short replica-exchange simulation and the multiple-histogram reweighting techniques. As an example of applications of these algorithms, we have performed a two-dimensional replica-exchange simulation and a two-dimensional simulated-tempering simulation using an alpha-helical peptide system. From these simulations, we study the helix-coil transitions of the peptide in gas phase and in aqueous solution.
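
    As a hedged sketch of the two-dimensional replica-exchange step described above, with generalized energy E_0 + lambda * V and weight exp(-beta (E_0 + lambda V)): a generic Metropolis swap rule matching that description, not the authors' code:

      import math
      import random

      def accept_exchange(beta_i, lam_i, E0_i, V_i,
                          beta_j, lam_j, E0_j, V_j):
          """Metropolis acceptance for swapping the configurations of
          replicas i and j.  (E0_i, V_i) belong to the configuration
          currently held by replica i, likewise for j; each replica m
          samples exp(-beta_m * (E0 + lam_m * V))."""
          before = (beta_i * (E0_i + lam_i * V_i)
                    + beta_j * (E0_j + lam_j * V_j))
          after = (beta_i * (E0_j + lam_i * V_j)
                   + beta_j * (E0_i + lam_j * V_i))
          delta = after - before
          return delta <= 0.0 or random.random() < math.exp(-delta)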

  18. Relationships between consumption of ultra-processed foods, gestational weight gain and neonatal outcomes in a sample of US pregnant women

    Directory of Open Access Journals (Sweden)

    Karthik W. Rohatgi

    2017-12-01

    Full Text Available Background An increasingly large share of diet comes from ultra-processed foods (UPFs), which are assemblages of food substances designed to create durable, convenient and palatable ready-to-eat products. There is increasing evidence that high UPF consumption is indicative of poor diet and is associated with obesity and metabolic disorders. This study sought to examine the relationship between percent of energy intake from ultra-processed foods (PEI-UPF) during pregnancy and maternal gestational weight gain, maternal lipids and glycemia, and neonatal body composition. We also compared the PEI-UPF indicator against the US government's Healthy Eating Index-2010 (HEI-2010). Methods Data were used from a longitudinal study performed in 2013-2014 at the Women's Health Center and Obstetrics & Gynecology Clinic in St. Louis, MO, USA. Subjects were pregnant women in the normal and obese weight ranges, as well as their newborns (n = 45). PEI-UPF and the Healthy Eating Index-2010 (HEI-2010) were calculated for each subject from a one-month food frequency questionnaire (FFQ). Multiple regression (ANCOVA-like) analysis was used to analyze the relationship between PEI-UPF or HEI-2010 and various clinical outcomes. The ability of these dietary indices to predict clinical outcomes was also compared with the predictive abilities of total energy intake and total fat intake. Results An average of 54.4 ± 13.2% of energy intake was derived from UPFs. A 1-percentage-point increase in PEI-UPF was associated with a 1.33 kg increase in gestational weight gain (p = 0.016). Similarly, a 1-percentage-point increase in PEI-UPF was associated with a 0.22 mm increase in thigh skinfold (p = 0.045), a 0.14 mm increase in subscapular skinfold (p = 0.026), and 0.62 percentage points of total body adiposity (p = 0.037) in the neonate. Discussion PEI-UPF (percent of energy intake from ultra-processed foods) was associated with and may be a useful predictor of increased gestational weight gain and neonatal ...

  19. The classicality and quantumness of a quantum ensemble

    International Nuclear Information System (INIS)

    Zhu Xuanmin; Pang Shengshi; Wu Shengjun; Liu Quanhui

    2011-01-01

    In this Letter, we investigate the classicality and quantumness of a quantum ensemble. We define a quantity called ensemble classicality based on classical cloning strategy (ECCC) to characterize how classical a quantum ensemble is. An ensemble of commuting states has a unit ECCC, while a general ensemble can have an ECCC less than 1. We also study how quantum an ensemble is by defining a related quantity called quantumness. We find that the classicality of an ensemble is closely related to how perfectly the ensemble can be cloned, and that the quantumness of the ensemble used in a quantum key distribution (QKD) protocol is exactly the attainable lower bound of the error rate in the sifted key. - Highlights: → A quantity is defined to characterize how classical a quantum ensemble is. → The classicality of an ensemble is closely related to the cloning performance. → Another quantity is also defined to investigate how quantum an ensemble is. → This quantity gives the lower bound of the error rate in a QKD protocol.

  20. Exploring and Listening to Chinese Classical Ensembles in General Music

    Science.gov (United States)

    Zhang, Wenzhuo

    2017-01-01

    Music diversity is valued in theory, but the extent to which it is efficiently presented in music class remains limited. Within this article, I aim to bridge this gap by introducing four genres of Chinese classical ensembles--Qin and Xiao duets, Jiang Nan bamboo and silk ensembles, Cantonese ensembles, and contemporary Chinese orchestras--into the…