WorldWideScience

Sample records for optimal importance sampling

  1. Monte Carlo importance sampling optimization for system reliability applications

    International Nuclear Information System (INIS)

    Campioni, Luca; Vestrucci, Paolo

    2004-01-01

    This paper focuses on the reliability analysis of multicomponent systems by the importance sampling technique, and, in particular, it tackles the optimization aspect. A methodology based on the minimization of the variance at the component level is proposed for the class of systems consisting of independent components. The claim is that, by means of such a methodology, the optimal biasing could be achieved without resorting to the typical approach by trials
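
    The following minimal sketch (Python, not from the paper) illustrates the general idea of component-level biasing for the reliability of a system of independent components: component failure indicators are drawn under biased probabilities and reweighted by the likelihood ratio. The structure function and the biased probabilities are illustrative assumptions, not the optimal biasing rule derived by the authors.

```python
# Minimal sketch: component-level biasing importance sampling for the failure
# probability of a system of independent components.  Structure function and
# biased probabilities are illustrative, not the paper's optimal biasing rule.
import numpy as np

rng = np.random.default_rng(0)

p = np.array([1e-3, 2e-3, 5e-4])   # true component failure probabilities (assumed)
q = np.array([0.3, 0.3, 0.3])      # biased (importance) failure probabilities

def system_fails(x):
    # Example structure: component 0 in series with the parallel pair (1, 2).
    return x[0] | (x[1] & x[2])

def is_estimate(n):
    total = 0.0
    for _ in range(n):
        x = (rng.random(3) < q).astype(int)                       # sample under biased law
        w = np.prod(np.where(x == 1, p / q, (1 - p) / (1 - q)))   # likelihood ratio
        total += w * system_fails(x)
    return total / n

# Compare with the exact value p[0] + (1 - p[0]) * p[1] * p[2]
print(is_estimate(100_000))
```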

  2. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. The comparison results show that the proposed method is more efficient because it requires only a small number of sample points.

  3. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, referred to as the ‘failure probability function (FPF)’. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology.

  4. AND/OR Importance Sampling

    OpenAIRE

    Gogate, Vibhav; Dechter, Rina

    2012-01-01

    The paper introduces AND/OR importance sampling for probabilistic graphical models. In contrast to importance sampling, AND/OR importance sampling caches samples in the AND/OR space and then extracts a new sample mean from the stored samples. We prove that AND/OR importance sampling may have lower variance than importance sampling; thereby providing a theoretical justification for preferring it over importance sampling. Our empirical evaluation demonstrates that AND/OR importance sampling is ...

  5. The optimal sampling of outsourcing product

    International Nuclear Information System (INIS)

    Yang Chao; Pei Jiacheng

    2014-01-01

    In order to improve quality and reduce cost, c = 0 sampling has been introduced for the inspection of outsourced product. According to the current quality level (p = 0.4%), we determined the optimal sampling plan: Ac = 0, with n = 55 for N ≤ 3000, n = 86 for 3001 ≤ N ≤ 10000, and n = 108 for N ≥ 10001. By analyzing the OC curve, we came to the conclusion that when N ≤ 3000, the optimal sampling plan protects product quality more strongly than the current plan. For the same 'consumer risk', the product quality assured by the optimal sampling plan is superior to that of the current plan. (authors)
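
    As a quick illustrative check of such Ac = 0 plans (not part of the record), the operating characteristic is simply P(accept) = (1 - p)^n under the binomial approximation, since the lot is accepted only when no defectives appear in the sample; the sketch below tabulates it for the quoted sample sizes.

```python
# Sketch of the operating-characteristic (OC) curve for the quoted Ac = 0 plans,
# using the binomial approximation P(accept) = (1 - p)^n.  The extra quality
# levels beyond p = 0.4% are illustrative.
sample_sizes = {"N <= 3000": 55, "3001 <= N <= 10000": 86, "N >= 10001": 108}

for label, n in sample_sizes.items():
    for p in (0.004, 0.01, 0.02):
        p_accept = (1 - p) ** n
        print(f"{label:>20}  n={n:3d}  p={p:.3f}  P(accept)={p_accept:.3f}")
```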

  6. β-NMR sample optimization

    CERN Document Server

    Zakoucka, Eva

    2013-01-01

    During my summer student programme I was working on sample optimization for a new β-NMR project at the ISOLDE facility. The β-NMR technique is well established in solid-state physics and has only recently been introduced for applications in biochemistry and the life sciences. The β-NMR collaboration will be applying to the INTC committee in September for beam time for three nuclei: Cu, Zn and Mg. Sample optimization for Mg was already performed last year during the summer student programme, so sample optimization for Cu and Zn had to be completed as well for the project proposal. My part in the project was to perform thorough literature research on techniques for studying Cu and Zn complexes in native conditions, to search for relevant binding candidates for Cu and Zn applicable to β-NMR, and eventually to evaluate selected binding candidates using UV-VIS spectrometry.

  7. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    KAUST Repository

    Beck, Joakim

    2018-02-19

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values of the method parameters for which the average computational cost is minimized for a specified error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a single-loop Monte Carlo method that uses the Laplace approximation of the return value of the inner loop. The first demonstration example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.
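
    For orientation, the sketch below implements the classical double-loop Monte Carlo baseline mentioned above for an assumed linear-Gaussian toy model; the Laplace-based importance sampling of the inner loop proposed in the paper is not reproduced.

```python
# Minimal double-loop Monte Carlo (DLMC) estimate of expected information gain
# for an illustrative linear-Gaussian model y = g*theta + noise, theta ~ N(0,1).
# The Laplace-based importance sampling of the inner loop is not shown here.
import numpy as np

rng = np.random.default_rng(1)
g, sigma = 2.0, 0.5                     # design parameter and noise std (assumed)

def log_like(y, theta):
    return -0.5 * ((y - g * theta) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def dlmc_eig(n_outer=2000, n_inner=2000):
    total = 0.0
    for _ in range(n_outer):
        theta = rng.standard_normal()                  # prior sample
        y = g * theta + sigma * rng.standard_normal()  # synthetic observation
        inner = rng.standard_normal(n_inner)           # fresh prior samples (inner loop)
        log_evidence = np.log(np.mean(np.exp(log_like(y, inner))))
        total += log_like(y, theta) - log_evidence
    return total / n_outer

# Analytical value for this model: 0.5 * log(1 + (g/sigma)**2)
print(dlmc_eig(), 0.5 * np.log(1 + (g / sigma) ** 2))
```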

  8. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    Science.gov (United States)

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis F. R.; Long, Quan; Tempone, Raúl

    2018-06-01

    In calculating expected information gain in optimal Bayesian experimental design, the computation of the inner loop in the classical double-loop Monte Carlo requires a large number of samples and suffers from underflow if the number of samples is small. These drawbacks can be avoided by using an importance sampling approach. We present a computationally efficient method for optimal Bayesian experimental design that introduces importance sampling based on the Laplace method to the inner loop. We derive the optimal values for the method parameters in which the average computational cost is minimized according to the desired error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical double-loop Monte Carlo, and a more recent single-loop Monte Carlo method that uses the Laplace method as an approximation of the return value of the inner loop. The first example is a scalar problem that is linear in the uncertain parameter. The second example is a nonlinear scalar problem. The third example deals with the optimal sensor placement for an electrical impedance tomography experiment to recover the fiber orientation in laminate composites.

  9. Importance Sampling Simulation of Population Overflow in Two-node Tandem Networks

    NARCIS (Netherlands)

    Nicola, V.F.; Zaburnenko, T.S.; Baier, C; Chiola, G.; Smirni, E.

    2005-01-01

    In this paper we consider the application of importance sampling in simulations of Markovian tandem networks in order to estimate the probability of rare events, such as network population overflow. We propose a heuristic methodology to obtain a good approximation to the 'optimal' state-dependent

  10. Minimum variance Monte Carlo importance sampling with parametric dependence

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.; Halton, J.; Maynard, C.W.

    1981-01-01

    An approach for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining, by proper weighting over a single stage, the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and other results rejected. Numerical calculations for the estimation of integrals are compared to crude Monte Carlo. The results explain the occurrence of effective biases (even though the theoretical bias is zero) and infinite variances which arise in calculations involving severe biasing and a moderate number of histories. Extension to particle transport applications is briefly discussed. The approach constitutes an extension of a theory on the application of Monte Carlo for the calculation of functional dependences, introduced by Frolov and Chentsov, to biasing, or importance sampling calculations; and is a generalization which avoids nonconvergence to the optimal values in some cases of a multistage method for variance reduction introduced by Spanier. (orig.) [de
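
    A minimal sketch of the underlying idea, under illustrative assumptions (a Gaussian mean-shift family as the importance function and a rare-event probability as the target): sweep the importance parameter, estimate the estimator variance at each value, and retain the minimum-variance result.

```python
# Sketch: sweep an importance-function parameter (the mean shift of a Gaussian
# proposal) and keep the value that minimizes the empirical variance of the
# importance sampling estimator of P(X > 3), X ~ N(0, 1).  Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
best = None
for theta in np.linspace(0.0, 5.0, 21):
    x = theta + rng.standard_normal(n)              # proposal N(theta, 1)
    w = np.exp(-theta * x + 0.5 * theta ** 2)       # likelihood ratio N(0,1)/N(theta,1)
    est = w * (x > 3.0)
    mean, var = est.mean(), est.var(ddof=1) / n     # estimate and its variance
    if best is None or var < best[2]:
        best = (theta, mean, var)

theta, mean, var = best
print(f"theta*={theta:.2f}  P_hat={mean:.3e}  std_err={np.sqrt(var):.1e}")
```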

  11. Sampling optimization for printer characterization by direct search.

    Science.gov (United States)

    Bianco, Simone; Schettini, Raimondo

    2012-12-01

    Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.

  12. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large scale sample surveys: biological sample survey of commercial landings (BSCL), experimental fishing sample survey (EFSS), and commercial landings and effort sample survey (CLES).

  13. Optimal sampling schemes applied in geology

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-05-01

    Full Text Available [Presentation outline only: introduction to hyperspectral remote sensing, objectives, study areas, data, methodology, results and conclusions for two studies on optimal sampling schemes applied in geology (Debba, CSIR, 2010).]

  14. Non-parametric adaptive importance sampling for the probability estimation of a launcher impact position

    International Nuclear Information System (INIS)

    Morio, Jerome

    2011-01-01

    Importance sampling (IS) is a useful simulation technique for estimating critical probabilities with better accuracy than Monte Carlo methods. It consists of generating random weighted samples from an auxiliary distribution rather than the distribution of interest. The crucial part of this algorithm is the choice of an efficient auxiliary PDF, which must be able to generate the rare random events of interest more frequently. In practice, the optimisation of this auxiliary distribution is often very difficult. In this article, we propose to approximate the optimal IS auxiliary density with non-parametric adaptive importance sampling (NAIS). We apply this technique to the probability estimation of the spatial launcher impact position, since it has become an increasingly important issue in the field of aeronautics.
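
    A minimal sketch of the non-parametric adaptive idea, with an illustrative two-dimensional Gaussian input and linear limit state (not the launcher application): a Gaussian kernel density fitted to weighted samples in the failure region serves as the auxiliary density for the next iteration.

```python
# Minimal sketch of non-parametric adaptive importance sampling (NAIS) for a
# rare-event probability P(phi(X) > s).  Input distribution, limit state and
# threshold are illustrative assumptions.
import numpy as np
from scipy.stats import gaussian_kde, multivariate_normal

rng = np.random.default_rng(3)
target = multivariate_normal(mean=[0, 0])            # input distribution
phi = lambda x: x.sum(axis=1)                        # performance function
s = 5.0                                              # failure threshold
n = 5_000

# Iteration 0: crude Monte Carlo, fit a kernel proposal on the best 10% of samples.
x = target.rvs(n, random_state=rng)
kde = gaussian_kde(x[np.argsort(phi(x))[-500:]].T)

for _ in range(3):                                   # adaptive refinements
    x = kde.resample(n, seed=rng).T
    w = target.pdf(x) / kde(x.T)                     # importance weights
    fail = phi(x) > s
    p_hat = np.mean(w * fail)
    if fail.any():
        kde = gaussian_kde(x[fail].T, weights=w[fail])   # refit on failure samples

print(f"P_hat = {p_hat:.2e}")   # exact: 1 - Phi(5/sqrt(2)) ~ 2.1e-4
```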

  15. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models with only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of the parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulties of selecting good initial values and becoming trapped in local optima that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.

  16. Optimization of importance factors in inverse planning

    International Nuclear Information System (INIS)

    Xing, L.

    1999-01-01

    Inverse treatment planning starts with a treatment objective and obtains the solution by optimizing an objective function. The clinical objectives are usually multifaceted and potentially incompatible with one another. A set of importance factors is often incorporated in the objective function to parametrize trade-off strategies and to prioritize the dose conformality in different anatomical structures. Whereas the general formalism remains the same, different sets of importance factors characterize plans of obviously different flavour and thus critically determine the final plan. Up to now, the determination of these parameters has been a 'guessing' game based on empirical knowledge, because the final dose distribution depends on the parameters in a complex and implicit way. The influence of these parameters is not known until the plan optimization is completed. In order to compromise properly the conflicting requirements of the target and sensitive structures, the parameters are usually adjusted through a trial-and-error process. In this paper, a method to estimate these parameters computationally is proposed and an iterative computer algorithm is described to determine them numerically. The treatment plan selection is done in two steps. First, a set of importance factors is chosen and the corresponding beam parameters (e.g. beam profiles) are optimized under the guidance of a quadratic objective function using an iterative algorithm reported earlier. The 'optimal' plan is then evaluated by an additional scoring function. The importance factors in the objective function are accordingly adjusted to improve the ranking of the plan. For every change in the importance factors, the beam parameters need to be re-optimized. This process continues in an iterative fashion until the scoring function is saturated. The algorithm was applied to two clinical cases and the results demonstrated that it has the potential to improve significantly the existing method of

  17. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This

  18. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Directory of Open Access Journals (Sweden)

    Jake M Ferguson

    2014-06-01

    Full Text Available The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  19. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  20. Importance sampling the Rayleigh phase function

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall

    2011-01-01

    Rayleigh scattering is used frequently in Monte Carlo simulation of multiple scattering. The Rayleigh phase function is quite simple, and one might expect that it should be simple to importance sample it efficiently. However, there seems to be no one good way of sampling it in the literature. This paper provides the details of several different techniques for importance sampling the Rayleigh phase function, and it includes a comparison of their performance as well as hints toward efficient implementation.
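
    One of the standard techniques such a comparison would include is exact inversion of the cumulative distribution in cos θ, which reduces to a depressed cubic solved by Cardano's formula; the sketch below shows that route (this is one possible technique, not necessarily the paper's recommendation).

```python
# Minimal sketch: sampling cos(theta) from the Rayleigh phase function
# p(mu) = (3/8)(1 + mu^2) by inverting its CDF, which reduces to the depressed
# cubic mu^3 + 3*mu = 8*xi - 4 solved with Cardano's formula.
import numpy as np

def sample_rayleigh_cos_theta(xi):
    """Map uniform random numbers xi in [0, 1) to cos(theta)."""
    q = 4.0 * (2.0 * xi - 1.0)
    d = np.sqrt(q * q / 4.0 + 1.0)
    return np.cbrt(q / 2.0 + d) + np.cbrt(q / 2.0 - d)

rng = np.random.default_rng(4)
mu = sample_rayleigh_cos_theta(rng.random(100_000))
# Sanity check against the analytic moments of p(mu): E[mu] = 0, E[mu^2] = 0.4
print(mu.mean(), (mu ** 2).mean())
```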

  1. The importance of plot size and the number of sampling seasons on capturing macrofungal species richness.

    Science.gov (United States)

    Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E

    2018-07-01

    The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship is necessary in order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies, in particular through a systematic assessment of methodological parameters such as optimal plot size. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m² to 10 000 m² over two sampling seasons. We found that the effect of plot size on recorded species richness significantly differed between land use systems, with the exception of monoculture systems. For both climate zones, the land use system needs to be considered when determining optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  2. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g. in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed considering optimal network segmentation and the modularity index within a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.

  3. Optimal updating magnitude in adaptive flat-distribution sampling.

    Science.gov (United States)

    Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery

    2017-11-07

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
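
    A minimal sketch of the single-bin updating scheme with the inverse-time magnitude discussed above, on an illustrative one-dimensional discrete landscape (the initial Wang-Landau stage and flatness checks are omitted).

```python
# Minimal flat-distribution sampling sketch with single-bin updates whose
# magnitude follows an inverse-time schedule.  The discrete 1-D energy
# landscape is purely illustrative.
import numpy as np

rng = np.random.default_rng(5)
n_states = 20
energy = 0.05 * (np.arange(n_states) - 10.0) ** 2    # toy potential of mean force
bias = np.zeros(n_states)                            # adaptive bias potential
state = 0

for t in range(1, 200_001):
    prop = (state + rng.choice([-1, 1])) % n_states          # local trial move
    d = (energy[prop] + bias[prop]) - (energy[state] + bias[state])
    if d <= 0 or rng.random() < np.exp(-d):
        state = prop
    bias[state] += n_states / t       # single-bin update, inverse-time magnitude

# At convergence energy + bias is flat, so the bias recovers the PMF up to a constant.
print(np.round(bias.max() - bias, 2))
print(np.round(energy - energy.min(), 2))            # true PMF for comparison
```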

  4. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  5. Optimal relaxed causal sampler using sampled-date system theory

    NARCIS (Netherlands)

    Shekhawat, Hanumant; Meinsma, Gjerrit

    This paper studies the design of an optimal relaxed causal sampler using sampled data system theory. A lifted frequency domain approach is used to obtain the existence conditions and the optimal sampler. A state space formulation of the results is also provided. The resulting optimal relaxed causal

  6. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Sample Adaptive Offset Optimization in HEVC

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2014-11-01

    Full Text Available As the next-generation video coding standard, High Efficiency Video Coding (HEVC) adopted many useful tools to improve coding efficiency. Sample Adaptive Offset (SAO) is a technique that reduces sample distortion by providing offsets to pixels in the in-loop filter. In SAO, pixels in a Largest Coding Unit (LCU) are classified into several categories, and then categories and offsets are assigned based on Rate-Distortion Optimization (RDO) of the reconstructed pixels in the LCU. Pixels in an LCU are processed by the same SAO operation; however, the transform and inverse transform make the distortion of pixels at Transform Unit (TU) edges larger than the distortion inside the TU, even after deblocking filtering (DF) and SAO. The SAO categories can also be refined, since they are not appropriate in many cases. This paper proposes a TU edge offset mode and a category refinement for SAO in HEVC. Experimental results show that these two optimizations give gains of -0.13 and -0.2, respectively, compared with the SAO in HEVC. The proposed algorithm using both optimizations achieves a -0.23 BD-rate gain compared with the SAO in HEVC, a 47% increase, with nearly no increase in coding time.

  8. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method in widespread use to solve optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows one to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
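
    To make the mechanism concrete, the sketch below (in Python rather than R, and much simplified relative to spsann) runs spatial simulated annealing on the MSSD criterion: one point is perturbed per iteration, worse configurations are accepted with a temperature-dependent probability, and the maximum perturbation distance shrinks linearly.

```python
# Minimal sketch (Python, not the spsann R package) of spatial simulated
# annealing for the mean squared shortest distance (MSSD) criterion.
import numpy as np

rng = np.random.default_rng(6)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40)), -1).reshape(-1, 2)
pts = rng.random((10, 2))                      # initial sample pattern of 10 points

def mssd(points):
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()               # mean squared shortest distance

energy, temp, step = mssd(pts), 0.01, 0.2
n_iter = 5_000
for it in range(n_iter):
    cand = pts.copy()
    i = rng.integers(len(pts))
    cand[i] = np.clip(cand[i] + step * (rng.random(2) - 0.5), 0, 1)   # perturb one point
    e_new = mssd(cand)
    if e_new < energy or rng.random() < np.exp((energy - e_new) / temp):
        pts, energy = cand, e_new
    temp *= 0.999                              # cooling schedule
    step = 0.2 * (1 - it / n_iter)             # shrink maximum perturbation distance

print(f"final MSSD = {energy:.5f}")
```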

  9. Honest Importance Sampling with Multiple Markov Chains.

    Science.gov (United States)

    Tan, Aixin; Doss, Hani; Hobert, James P

    2015-01-01

    Importance sampling is a classical Monte Carlo technique in which a random sample from one probability density, π1, is used to estimate an expectation with respect to another, π. The importance sampling estimator is strongly consistent and, as long as two simple moment conditions are satisfied, it obeys a central limit theorem (CLT). Moreover, there is a simple consistent estimator for the asymptotic variance in the CLT, which makes for routine computation of standard errors. Importance sampling can also be used in the Markov chain Monte Carlo (MCMC) context. Indeed, if the random sample from π1 is replaced by a Harris ergodic Markov chain with invariant density π1, then the resulting estimator remains strongly consistent. There is a price to be paid however, as the computation of standard errors becomes more complicated. First, the two simple moment conditions that guarantee a CLT in the iid case are not enough in the MCMC context. Second, even when a CLT does hold, the asymptotic variance has a complex form and is difficult to estimate consistently. In this paper, we explain how to use regenerative simulation to overcome these problems. Actually, we consider a more general set up, where we assume that Markov chain samples from several probability densities, π1, …, πk, are available. We construct multiple-chain importance sampling estimators for which we obtain a CLT based on regeneration. We show that if the Markov chains converge to their respective target distributions at a geometric rate, then under moment conditions similar to those required in the iid case, the MCMC-based importance sampling estimator obeys a CLT. Furthermore, because the CLT is based on a regenerative process, there is a simple consistent estimator of the asymptotic variance. We illustrate the method with two applications in Bayesian sensitivity analysis. The first concerns one-way random effects models under different priors. The second involves Bayesian variable
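
    For reference, the plain iid importance sampling estimator that the paper generalizes, with the simple moment-based standard error that is valid in the iid case (the regenerative construction for Markov chain samples is not shown); the densities and test function are illustrative.

```python
# Minimal sketch of the plain iid importance sampling estimator: draws from pi_1
# are reweighted to estimate E_pi[h(X)], with the simple iid standard error.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
pi1 = stats.t(df=5)                 # sampling density pi_1 (heavy-tailed proposal)
pi = stats.norm()                   # target density pi
h = lambda x: x ** 2                # function whose expectation we want

x = pi1.rvs(size=50_000, random_state=rng)
w = pi.pdf(x) / pi1.pdf(x)                        # importance weights
est = np.mean(w * h(x)) / np.mean(w)              # self-normalized estimator
se = np.std(w * (h(x) - est), ddof=1) / (np.mean(w) * np.sqrt(len(x)))
print(f"E[h] ~ {est:.4f} +/- {se:.4f}  (exact: 1.0)")
```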

  10. The importance of personality and parental styles on optimism in adolescents.

    Science.gov (United States)

    Zanon, Cristian; Bastianello, Micheline Roat; Pacico, Juliana Cerentini; Hutz, Claudio Simon

    2014-01-01

    Some studies have suggested that personality factors are important to optimism development. Others have emphasized that family relations are relevant variables to optimism. This study aimed to evaluate the importance of parenting styles to optimism, controlling for the variance accounted for by personality factors. Participants were 344 Brazilian high school students (44% male) with a mean age of 16.2 years (SD = 1) who answered personality, optimism, responsiveness and demandingness scales. Hierarchical regression analyses were conducted with personality factors (in the first step) and maternal and paternal parenting styles, demandingness and responsiveness (in the second step) as predictive variables and optimism as the criterion. Personality factors, especially neuroticism (β = -.34), accounted for substantially more variance in optimism than parental styles (1%). These findings suggest that personality is more important to optimism development than parental styles.

  11. On Optimal, Minimal BRDF Sampling for Reflectance Acquisition

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Jensen, Henrik Wann; Ramamoorthi, Ravi

    2015-01-01

    The bidirectional reflectance distribution function (BRDF) is critical for rendering, and accurate material representation requires data-driven reflectance models. However, isotropic BRDFs are 3D functions, and measuring the reflectance of a flat sample can require a million incident and outgoing direction pairs, making the use of measured BRDFs impractical. In this paper, we address the problem of reconstructing a measured BRDF from a limited number of samples. We present a novel mapping of the BRDF space, allowing for extraction of descriptive principal components from measured databases, such as the MERL BRDF database. We optimize for the best sampling directions, and explicitly provide the optimal set of incident and outgoing directions in the Rusinkiewicz parameterization for n = {1, 2, 5, 10, 20} samples. Based on the principal components, we describe a method for accurately reconstructing BRDF

  12. Importance Sampling for Stochastic Timed Automata

    DEFF Research Database (Denmark)

    Jegourel, Cyrille; Larsen, Kim Guldstrand; Legay, Axel

    2016-01-01

    We present an importance sampling framework that combines symbolic analysis and simulation to estimate the probability of rare reachability properties in stochastic timed automata. By means of symbolic exploration, our framework first identifies states that cannot reach the goal. A state-wise cha...

  13. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.

  14. Adaptive Importance Sampling with a Rapidly Varying Importance Function

    International Nuclear Information System (INIS)

    Booth, Thomas E.

    2000-01-01

    It is well known that zero-variance Monte Carlo solutions are possible if an exact importance function is available to bias the random walks. Monte Carlo can be used to estimate the importance function. This estimated importance function then can be used to bias a subsequent Monte Carlo calculation that estimates an even better importance function; this iterative process is called adaptive importance sampling. To obtain the importance function, one can expand the importance function in a basis such as the Legendre polynomials and make Monte Carlo estimates of the expansion coefficients. For simple problems, Legendre expansions of order 10 to 15 are able to represent the importance function well enough to reduce the error geometrically by ten orders of magnitude or more. More complicated problems are addressed in which the importance function cannot be represented well by Legendre expansions of order 10 to 15. In particular, a problem with a cross-section notch and a problem with a discontinuous cross section are considered
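
    A minimal one-dimensional sketch of the Legendre-expansion idea, with an illustrative integrand in place of a transport problem: expansion coefficients of the importance function are estimated by Monte Carlo, and the truncated expansion is then used as the biasing density of the next stage.

```python
# Minimal sketch of adaptive importance sampling with a Legendre-expanded
# importance function on [-1, 1].  The integrand is purely illustrative.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(8)
f = lambda x: np.exp(-4.0 * (x - 0.5) ** 2)     # integrand; ideal importance is proportional to |f|
order = 10

# Stage 1: Monte Carlo estimates of the coefficients c_k = (2k+1)/2 * integral(f * P_k),
# using uniform samples on [-1, 1] (the integral is 2 * E_U[f * P_k]).
x0 = rng.uniform(-1.0, 1.0, 20_000)
P = lambda k, x: legendre.legval(x, np.eye(order + 1)[k])   # k-th Legendre polynomial
coef = np.array([(2 * k + 1) * np.mean(f(x0) * P(k, x0)) for k in range(order + 1)])

# Stage 2: use the (clipped, normalized) truncated expansion as the biasing density.
xs = np.linspace(-1.0, 1.0, 2001)
pdf = np.clip(legendre.legval(xs, coef), 1e-6, None)
pdf /= np.trapz(pdf, xs)
cdf = np.concatenate(([0.0], np.cumsum((pdf[1:] + pdf[:-1]) / 2.0 * np.diff(xs))))
cdf /= cdf[-1]
x1 = np.interp(rng.random(20_000), cdf, xs)     # inverse-CDF sampling on the grid
w = 1.0 / np.interp(x1, xs, pdf)                # importance weights 1 / p(x)
print("biased estimate:", np.mean(f(x1) * w), " crude estimate:", 2.0 * np.mean(f(x0)))
```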

  15. Using remotely-sensed data for optimal field sampling

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-09-01

    Full Text Available Statistics is the science pertaining to the collection, summary, analysis, interpretation and presentation of data. It is often impractical... studies are: where to sample, what to sample and how many samples to obtain. Conventional sampling techniques are not always suitable in environmental studies and scientists have explored the use of remotely-sensed data as ancillary information to aid...

  16. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  17. Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP

    Science.gov (United States)

    Roshani, E.; Berg, A. A.; Lindsay, J.

    2013-12-01

    The Soil Moisture Active Passive satellite (SMAP) is scheduled for launch in Oct 2014. Global efforts are underway to establish soil moisture monitoring networks for both the pre- and post-launch validation and calibration of the SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6 week period for soil moisture and several other parameters, simultaneously with remotely sensed images of the sampling region. The locations of these sampling sites were mainly selected on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling site selection, they do not guarantee optimal site placement to provide the most efficient representation of the studied area. In this analysis a method for optimization of sampling locations is presented which combines a state-of-the-art multi-objective optimization engine (non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique to minimize the number of sampling sites while simultaneously minimizing the differences between the soil moisture map resulting from the kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, which is a multi-platform open-source GIS. The optimization framework is subject to the following three constraints: A) sampling sites should be accessible to the crew on the ground, B) the number of sites located in a specific soil texture should be greater than or equal to a minimum value, and finally C) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum value. The first constraint is included in the proposed model to keep the approach practical. The second and third constraints are considered to guarantee that the collected samples from each soil texture category

  18. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of accuracy compensation implementation are closely related to the choice of sampling points. Therefore, based on the error similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  19. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.

  20. Importance of sample preparation for molecular diagnosis of lyme borreliosis from urine.

    Science.gov (United States)

    Bergmann, A R; Schmidt, B L; Derler, A-M; Aberer, E

    2002-12-01

    Urine PCR has been used for the diagnosis of Borrelia burgdorferi infection in recent years but has been abandoned because of its low sensitivity and the irreproducibility of the results. Our study aimed to analyze technical details related to sample preparation and detection methods. Crucial for a successful urine PCR were (i) avoidance of the first morning urine sample; (ii) centrifugation at 36,000 x g; and (iii) the extraction method, with only DNAzol of the seven different extraction methods used yielding positive results with patient urine specimens. Furthermore, storage of frozen urine samples at -80 degrees C reduced the sensitivity of a positive urine PCR result obtained with samples from 72 untreated erythema migrans (EM) patients from 85% in the first 3 months to ...; positive samples were verified by hybridization with a GEN-ETI-K-DEIA kit and, for 10 further positive amplicons, by sequencing. By using all of these steps to optimize the urine PCR technique, B. burgdorferi infection could be diagnosed from urine samples of EM patients with a sensitivity (85%) substantially better than that of serological methods (50%). This improved method could be of future importance as an additional laboratory technique for the diagnosis of unclear, unrecognized borrelia infections and diseases possibly related to Lyme borreliosis.

  1. Radioactivity monitoring of export/import samples - an update

    International Nuclear Information System (INIS)

    Shukla, V.K.; Murthy, M.V.R.; Sartandel, S.J.; Negi, B.S.; Sadasivan, S.

    2001-01-01

    137Cs activity was measured in food samples exported from and imported into India during the period from 1993 to 2000. At present, the activity of about 1200 samples is estimated every year, on average. Results showed no 137Cs contamination in samples exported from India. The few samples of dairy products imported into India during 1995 and 1996 showed low levels of 137Cs activity. However, the levels were well within the permissible values of the Atomic Energy Regulatory Board (AERB). (author)

  2. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extremum points of the metamodels and minimum points of a density function. In this way, increasingly accurate metamodels are constructed. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
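
    A minimal sketch of one such sequential cycle, assuming a generic RBF surrogate (scipy's RBFInterpolator) and a toy objective: fit the metamodel on the current samples, add its minimizer (an extremum point) as the next sample, and repeat; the density-function points used by the authors are omitted.

```python
# Minimal sketch of sequential sampling with an RBF metamodel: fit the surrogate,
# add its minimizer as the next (expensively evaluated) sample point, repeat.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

expensive = lambda x: np.sin(3 * x[..., 0]) + 0.5 * (x[..., 0] - 0.3) ** 2   # toy objective

rng = np.random.default_rng(9)
X = rng.uniform(-2, 2, (6, 1))                     # initial design
y = expensive(X)

for _ in range(10):                                # sequential refinement cycles
    model = RBFInterpolator(X, y, kernel="thin_plate_spline")
    res = minimize(lambda x: model(x.reshape(1, -1))[0], x0=np.array([0.0]),
                   bounds=[(-2, 2)])
    x_new = res.x.reshape(1, -1)
    if np.min(np.abs(X - x_new)) < 1e-6:           # avoid duplicate sample points
        x_new = x_new + 1e-3 * rng.standard_normal(x_new.shape)
    X = np.vstack([X, x_new])
    y = np.append(y, expensive(x_new))             # evaluate the true function

print("surrogate minimizer:", X[-1], "true value:", y[-1])
```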

  3. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractive to beetles and repellent to true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when

  4. Optimism is universal: exploring the presence and benefits of optimism in a representative sample of the world.

    Science.gov (United States)

    Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D

    2013-10-01

    Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations. © 2012 Wiley Periodicals, Inc.

  5. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  6. Importance measures and genetic algorithms for designing a risk-informed optimally balanced system

    International Nuclear Information System (INIS)

    Zio, Enrico; Podofillini, Luca

    2007-01-01

    This paper deals with the use of importance measures for the risk-informed optimization of system design and management. An optimization approach is presented in which the information provided by the importance measures is incorporated in the formulation of a multi-objective optimization problem to drive the design towards a solution which, besides being optimal from the points of view of economics and safety, is also 'balanced' in the sense that all components have similar importance values. The approach allows the identification of system designs without bottlenecks or unnecessarily high-performing components, and with test/maintenance activities calibrated according to the components' importance ranking. The approach is first tested on a multi-state system design optimization problem in which off-the-shelf components have to be properly allocated. Then, the more realistic problem of risk-informed optimization of the technical specifications of a safety system of a nuclear power plant is addressed
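
    For context, a minimal sketch of a classical component importance measure (Birnbaum) of the kind such risk-informed balancing relies on: for each component, the difference in system reliability with that component perfectly reliable versus failed. The small series-parallel system and its reliabilities are illustrative, and the genetic algorithm of the paper is not shown.

```python
# Minimal sketch of the Birnbaum importance measure:
# I_B(i) = R_sys(p_i = 1) - R_sys(p_i = 0).  System and reliabilities are illustrative.
import numpy as np

def r_sys(p):
    # Component 0 in series with the parallel pair (1, 2).
    return p[0] * (1 - (1 - p[1]) * (1 - p[2]))

p = np.array([0.95, 0.90, 0.80])
for i in range(3):
    hi, lo = p.copy(), p.copy()
    hi[i], lo[i] = 1.0, 0.0
    print(f"component {i}: Birnbaum importance = {r_sys(hi) - r_sys(lo):.3f}")
```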

  7. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Full Text Available The presentation covers case studies of optimized field sampling schemes: field sampling that represents the overall distribution of a particular mineral, and deriving optimal exploration target zones. Continuum removal for vegetation [13, 27, 46]. The convex hull transform is a method... of normalizing spectra [16, 41]. The convex hull technique is analogous to fitting a rubber band over a spectrum to form a continuum. Figure 5 shows the concept of the convex hull transform. The difference between the hull and the original spectrum...
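    A rough sketch of the continuum-removal (convex hull) idea described above: build the upper convex hull of a reflectance spectrum (the "rubber band") and divide the spectrum by it. This is a generic Python illustration on an invented synthetic spectrum, not code from the presentation.

    ```python
    import numpy as np

    def continuum_removal(wavelength, reflectance):
        """Divide a spectrum by its upper convex hull (the 'rubber band' continuum)."""
        hull = [0]
        for i in range(1, len(wavelength)):
            # Pop hull vertices that would sit below the band (keep only clockwise turns).
            while len(hull) >= 2:
                x1, y1 = wavelength[hull[-2]], reflectance[hull[-2]]
                x2, y2 = wavelength[hull[-1]], reflectance[hull[-1]]
                cross = (x2 - x1) * (reflectance[i] - y1) - (y2 - y1) * (wavelength[i] - x1)
                if cross >= 0:
                    hull.pop()
                else:
                    break
            hull.append(i)
        continuum = np.interp(wavelength, wavelength[hull], reflectance[hull])
        return reflectance / continuum

    # Synthetic spectrum: a gently sloping background with one absorption feature near 2200 nm.
    wl = np.linspace(2000, 2400, 201)
    refl = 0.6 + 1e-4 * (wl - 2000) - 0.15 * np.exp(-0.5 * ((wl - 2200) / 30) ** 2)
    removed = continuum_removal(wl, refl)
    print(f"band depth at the absorption feature: {1 - removed.min():.3f}")
    ```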

  8. The Importance of Supply Chain Management on Financial Optimization

    Directory of Open Access Journals (Sweden)

    Arawati Agus

    2013-01-01

    Full Text Available Many manufacturing companies are facing uncertainties and stiff competition both locally and globally, intensified by increasing needs for sophisticated and high value products from demanding customers. These companies are forced to improve the quality of their supply chain management decisions and products, and reduce their manufacturing costs. With today’s volatile and very challenging global market, many manufacturing companies have started to realize the importance of the proper managing of their supply chains. Supply chain management (SCM) involves practices such as strategic supplier partnership, customer focus, lean production, postpone concept and technology & innovation. This study investigates the importance of SCM on financial optimization. The study measures production or SCM managers’ perceptions regarding SCM and level of performances in their companies. The paper also specifically investigates whether supply chain performance acts as a mediating variable in the relationship between SCM and financial optimization. These associations were analyzed through statistical methods such as Pearson’s correlation and a regression-based mediated analysis. The findings suggest that SCM has significant correlations with supply chain performance and financial optimization. In addition, the result of the regression-based mediated analysis demonstrates that supply chain performance mediates the linkage between SCM and financial optimization. The findings of the study provide a striking demonstration of the importance of SCM in enhancing the performances of Malaysian manufacturing companies. The result indicates that manufacturing companies should emphasize greater management support for SCM implementation and a greater degree of attention for production integration and information flow integration in the manufacturing system in order to maximize profit and minimize cost.

  9. IMPORTANCE OF KINETIC MEASURES IN TRAJECTORY PREDICTION WITH OPTIMAL CONTROL

    Directory of Open Access Journals (Sweden)

    Ömer GÜNDOĞDU

    2001-02-01

    Full Text Available A two-dimensional sagittally symmetric human-body model was established to simulate an optimal trajectory for manual material handling tasks. Nonlinear control techniques and genetic algorithms were utilized in the optimizations to explore optimal lifting patterns. The simulation results were then compared with the experimental data. Since the kinetic measures such as joint reactions and moments are vital parameters in injury determination, the importance of comparing kinetic measures rather than kinematical ones was emphasized.

  10. Optimized preparation of urine samples for two-dimensional electrophoresis and initial application to patient samples

    DEFF Research Database (Denmark)

    Lafitte, Daniel; Dussol, Bertrand; Andersen, Søren

    2002-01-01

    OBJECTIVE: We optimized the preparation of urinary samples to obtain a comprehensive map of the urinary proteins of healthy subjects and then compared this map with the ones obtained from patient samples to show that the pattern was specific to their kidney disease. DESIGN AND METHODS: The urinary...

  11. Optimal experiment design in a filtering context with application to sampled network data

    OpenAIRE

    Singhal, Harsh; Michailidis, George

    2010-01-01

    We examine the problem of optimal design in the context of filtering multiple random walks. Specifically, we define the steady state E-optimal design criterion and show that the underlying optimization problem leads to a second order cone program. The developed methodology is applied to tracking network flow volumes using sampled data, where the design variable corresponds to controlling the sampling rate. The optimal design is numerically compared to a myopic and a naive strategy. Finally, w...

  12. Optimization of protein samples for NMR using thermal shift assays

    International Nuclear Information System (INIS)

    Kozak, Sandra; Lercher, Lukas; Karanth, Megha N.; Meijers, Rob; Carlomagno, Teresa; Boivin, Stephane

    2016-01-01

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.

  13. Optimization of protein samples for NMR using thermal shift assays

    Energy Technology Data Exchange (ETDEWEB)

    Kozak, Sandra [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Lercher, Lukas; Karanth, Megha N. [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Meijers, Rob [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Carlomagno, Teresa, E-mail: teresa.carlomagno@oci.uni-hannover.de [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Boivin, Stephane, E-mail: sboivin77@hotmail.com, E-mail: s.boivin@embl-hamburg.de [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany)

    2016-04-15

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.

  14. Optimizer convergence and local minima errors and their clinical importance

    International Nuclear Information System (INIS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-01-01

    Two of the errors common in the inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing and deterministic gradient method were compared on a clinical example. It was found that for typical optimization the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed leading into optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2) indicating clinical importance of the local minima produced by physical optimization

  15. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses

  16. A support vector density-based importance sampling for reliability assessment

    International Nuclear Information System (INIS)

    Dai, Hongzhe; Zhang, Hao; Wang, Wei

    2012-01-01

    An importance sampling method based on the adaptive Markov chain simulation and support vector density estimation is developed in this paper for efficient structural reliability assessment. The methodology involves the generation of samples that can adaptively populate the important region by the adaptive Metropolis algorithm, and the construction of importance sampling density by support vector density. The use of the adaptive Metropolis algorithm may effectively improve the convergence and stability of the classical Markov chain simulation. The support vector density can approximate the sampling density with fewer samples in comparison to the conventional kernel density estimation. The proposed importance sampling method can effectively reduce the number of structural analysis required for achieving a given accuracy. Examples involving both numerical and practical structural problems are given to illustrate the application and efficiency of the proposed methodology.
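    The record above builds its importance-sampling density adaptively with Markov chain simulation and support vector density estimation; the estimator that density feeds is the standard importance-sampling estimate of a small failure probability. A minimal sketch of that estimator only (not of the adaptive or support-vector machinery), for an invented linear limit state with standard normal inputs and a proposal simply shifted to the design point:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy limit state: failure when g(x) <= 0; exact failure probability is Phi(-beta).
    beta = 3.0
    def g(x):
        return beta - (x[:, 0] + x[:, 1]) / np.sqrt(2.0)

    n = 10_000

    # Crude Monte Carlo with standard normal inputs.
    x_mc = rng.standard_normal((n, 2))
    p_mc = np.mean(g(x_mc) <= 0.0)

    # Importance sampling: centre the sampling density at the design point
    # x* = (beta / sqrt(2)) * (1, 1), keeping unit variances.
    shift = beta / np.sqrt(2.0) * np.ones(2)
    x_is = rng.standard_normal((n, 2)) + shift
    # Likelihood ratio phi(x) / phi(x - shift) for independent standard normal components.
    log_w = -0.5 * np.sum(x_is ** 2, axis=1) + 0.5 * np.sum((x_is - shift) ** 2, axis=1)
    p_is = np.mean(np.exp(log_w) * (g(x_is) <= 0.0))

    print(f"crude MC: {p_mc:.2e}, importance sampling: {p_is:.2e}")  # exact: Phi(-3) ~ 1.35e-3
    ```

    With the same number of structural-model evaluations, the shifted density places most samples near the failure region, which is what makes the importance-sampling estimate far less noisy than crude Monte Carlo for rare failures.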

  17. Optimal sampling plan for clean development mechanism energy efficiency lighting projects

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2013-01-01

    Highlights: • A metering cost minimisation model is built to assist the sampling plan for CDM projects. • The model minimises the total metering cost by the determination of optimal sample size. • The required 90/10 criterion sampling accuracy is maintained. • The proposed metering cost minimisation model is applicable to other CDM projects as well. - Abstract: Clean development mechanism (CDM) project developers are always interested in achieving required measurement accuracies with the least metering cost. In this paper, a metering cost minimisation model is proposed for the sampling plan of a specific CDM energy efficiency lighting project. The problem arises from the particular CDM sampling requirement of 90% confidence and 10% precision for the small-scale CDM energy efficiency projects, which is known as the 90/10 criterion. The 90/10 criterion can be met through solving the metering cost minimisation problem. All the lights in the project are classified into different groups according to uncertainties of the lighting energy consumption, which are characterised by their statistical coefficient of variance (CV). Samples from each group are randomly selected to install power meters. These meters include less expensive ones with less functionality and more expensive ones with greater functionality. The metering cost minimisation model will minimise the total metering cost through the determination of the optimal sample size at each group. The 90/10 criterion is formulated as constraints to the metering cost objective. The optimal solution to the minimisation problem will therefore minimise the metering cost whilst meeting the 90/10 criterion, and this is verified by a case study. Relationships between the optimal metering cost and the population sizes of the groups, CV values and the meter equipment cost are further explored in three simulations. The metering cost minimisation model proposed for lighting systems is applicable to other CDM projects as
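    For orientation, the 90/10 criterion mentioned above is commonly translated into a minimum sample size per group driven by that group's coefficient of variation. The sketch below uses the textbook sample-size formula with a finite-population correction; it is not the paper's metering-cost minimisation model, and the group populations and CV values are invented.

    ```python
    import math

    def sample_size_90_10(cv, population, z=1.645, precision=0.10):
        """Sample size for 90% confidence / 10% precision with a finite-population correction."""
        n0 = (z * cv / precision) ** 2
        return math.ceil(n0 * population / (n0 + population - 1.0))

    # Invented lamp groups: (population size, assumed CV of lighting energy consumption).
    groups = {"residential CFL": (20_000, 0.5), "commercial LED": (5_000, 0.8)}
    for name, (pop, cv) in groups.items():
        print(f"{name}: meter at least {sample_size_90_10(cv, pop)} lights")
    ```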

  18. State-dependent importance sampling for a Jackson tandem network

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2008-01-01

    This paper considers importance sampling as a tool for rare-event simulation. The focus is on estimating the probability of overflow in the downstream queue of a Jacksonian two-node tandem queue – it is known that in this setting ‘traditional’ state-independent importance-sampling distributions

  19. State-dependent importance sampling for a Jackson tandem network

    NARCIS (Netherlands)

    D.I. Miretskiy; W.R.W. Scheinhardt (Werner); M.R.H. Mandjes (Michel)

    2008-01-01

    This paper considers importance sampling as a tool for rare-event simulation. The focus is on estimating the probability of overflow in the downstream queue of a Jacksonian two-node tandem queue – it is known that in this setting ‘traditional’ state-independent importance-sampling

  20. Optimism and self-esteem are related to sleep. Results from a large community-based sample.

    Science.gov (United States)

    Lemola, Sakari; Räikkönen, Katri; Gomez, Veronica; Allemand, Mathias

    2013-12-01

    There is evidence that positive personality characteristics, such as optimism and self-esteem, are important for health. Less is known about possible determinants of positive personality characteristics. To test the relationship of optimism and self-esteem with insomnia symptoms and sleep duration. Sleep parameters, optimism, and self-esteem were assessed by self-report in a community-based sample of 1,805 adults aged between 30 and 84 years in the USA. Moderation of the relation between sleep and positive characteristics by gender and age as well as potential confounding of the association by depressive disorder was tested. Individuals with insomnia symptoms scored lower on optimism and self-esteem largely independent of age and sex, controlling for symptoms of depression and sleep duration. Short sleep duration (<7 h) was related to low optimism and self-esteem when compared to individuals sleeping 7-8 h, controlling for depressive symptoms. Long sleep duration (>9 h) was also related to low optimism and self-esteem independent of age and sex. Good and sufficient sleep is associated with positive personality characteristics. This relationship is independent of the association between poor sleep and depression.

  1. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    Science.gov (United States)

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2018-05-01

    High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98). The proposed sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.

  2. Iterative importance sampling algorithms for parameter estimation

    OpenAIRE

    Morzfeld, Matthias; Day, Marcus S.; Grout, Ray W.; Pau, George Shu Heng; Finsterle, Stefan A.; Bell, John B.

    2016-01-01

    In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov Chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is ...
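    As a concrete illustration of the importance-sampling alternative to MCMC sketched in this abstract, the toy example below estimates a posterior mean by drawing independent samples from a fixed Gaussian proposal and self-normalising the weights; the iterative adaptation of the proposal, which is the subject of the paper, is omitted. The model, data value, and proposal parameters are all invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy problem: scalar parameter theta, prior N(0, 1), one noisy observation y = theta + e.
    y_obs, sigma = 1.2, 0.5

    def log_prior(theta):
        return -0.5 * theta ** 2

    def log_like(theta):
        return -0.5 * ((y_obs - theta) / sigma) ** 2

    # Fixed Gaussian proposal, guessed to sit near the posterior mode (an assumption).
    mu_q, sd_q = 1.0, 0.7
    theta = rng.normal(mu_q, sd_q, size=50_000)
    log_q = -0.5 * ((theta - mu_q) / sd_q) ** 2 - np.log(sd_q)

    log_w = log_prior(theta) + log_like(theta) - log_q
    w = np.exp(log_w - log_w.max())   # stabilise before normalising
    w /= w.sum()                      # self-normalised importance weights

    post_mean = np.sum(w * theta)     # exact posterior mean for this model is 0.96
    ess = 1.0 / np.sum(w ** 2)        # effective sample size, a common weight diagnostic
    print(f"posterior mean ~ {post_mean:.3f}, effective sample size ~ {ess:.0f}")
    ```

    Because every sample is drawn independently, the loop over samples parallelises trivially, which is the scaling advantage over MCMC that the abstract refers to; the difficulty it also mentions is choosing the proposal so that the effective sample size stays large.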

  3. OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Maja Vrkljan

    2004-12-01

    Full Text Available The purpose of this study was to determine an optimal dissolution method for silicate rock samples for further analytical purposes. An analytical FAAS method for determining cobalt, chromium, copper, nickel, lead and zinc content in a gabbro sample and the geochemical standard AGV-1 was applied for verification. Dissolution in mixtures of various inorganic acids was tested, as well as the Na2CO3 fusion technique. The results obtained by the different methods were compared, and dissolution in the mixture of HNO3 + HF is recommended as optimal.

  4. Relationships between depressive symptoms and perceived social support, self-esteem, & optimism in a sample of rural adolescents.

    Science.gov (United States)

    Weber, Scott; Puskar, Kathryn Rose; Ren, Dianxu

    2010-09-01

    Stress, developmental changes and social adjustment problems can be significant in rural teens. Screening for psychosocial problems by teachers and other school personnel is infrequent but can be a useful health promotion strategy. We used a cross-sectional survey descriptive design to examine the inter-relationships between depressive symptoms and perceived social support, self-esteem, and optimism in a sample of rural school-based adolescents. Depressive symptoms were negatively correlated with peer social support, family social support, self-esteem, and optimism. Findings underscore the importance for teachers and other school staff to provide health education. Results can be used as the basis for education to improve optimism, self-esteem, social supports and, thus, depression symptoms of teens.

  5. Some connections between importance sampling and enhanced sampling methods in molecular dynamics.

    Science.gov (United States)

    Lie, H C; Quer, J

    2017-11-21

    In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
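    The connection described above reduces to the importance-sampling identity: equilibrium averages under the target Boltzmann distribution can be recovered by reweighting configurations drawn from a different (biased or "enhanced") ensemble. A minimal one-dimensional sketch follows, using a broad Gaussian as a stand-in for the biased ensemble rather than either of the cited methods; the potential, temperature, and reference density are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Target: Boltzmann density proportional to exp(-U(x)/kT) for a 1D double-well potential.
    kT = 0.3
    def U(x):
        return (x ** 2 - 1.0) ** 2

    # Reference ("enhanced") ensemble: a broad Gaussian that crosses the barrier easily.
    mu, sd = 0.0, 1.5
    x = rng.normal(mu, sd, size=200_000)
    log_q = -0.5 * ((x - mu) / sd) ** 2

    # Self-normalised importance weights: target over reference, up to a constant.
    log_w = -U(x) / kT - log_q
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # Any equilibrium average under the target then follows by reweighting.
    mean_x2 = np.sum(w * x ** 2)
    print(f"<x^2> under the target distribution ~ {mean_x2:.3f}")
    ```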

  6. Optimal sampling schemes for vegetation and geological field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2012-07-01

    Full Text Available The presentation made to Wits Statistics Department was on common classification methods used in the field of remote sensing, and the use of remote sensing to design optimal sampling schemes for field visits with applications in vegetation...

  7. Optimal allocation of industrial PV-storage micro-grid considering important load

    Science.gov (United States)

    He, Shaohua; Ju, Rong; Yang, Yang; Xu, Shuai; Liang, Lei

    2018-03-01

    At present, industrial PV-storage micro-grids are widely used. This paper presents an optimal allocation model of PV-storage micro-grid capacity that considers the important load of industrial users. A multi-objective optimization model is established with the local consumption of PV power generation and the maximum investment income of the enterprise as the objective functions. Particle swarm optimization (PSO) is used to solve a case for a city in Jiangsu Province, and the results are analyzed economically.
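    The abstract relies on particle swarm optimization to size the PV and storage capacities. Below is a minimal, generic PSO sketch applied to an invented two-variable sizing problem; the cost coefficients, load figure, and bounds are placeholders, not the paper's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def pso(objective, bounds, n_particles=30, iters=200, inertia=0.7, c1=1.5, c2=1.5):
        """Minimal particle swarm optimiser for a box-constrained minimisation problem."""
        lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
        x = rng.uniform(lo, hi, size=(n_particles, lo.size))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[np.argmin(pbest_val)].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([objective(p) for p in x])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = x[better], vals[better]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest, pbest_val.min()

    # Invented sizing problem: choose PV capacity (kW) and storage capacity (kWh) to trade
    # off investment cost against a penalty for unserved important load (all numbers made up).
    def cost(z):
        pv, batt = z
        invest = 800.0 * pv + 300.0 * batt
        unserved = max(0.0, 400.0 - 0.8 * pv - 0.8 * batt)   # crude daily energy balance
        return invest + 500.0 * unserved

    best, val = pso(cost, bounds=([0.0, 0.0], [1000.0, 2000.0]))
    print("best (PV kW, storage kWh):", np.round(best, 1), " objective:", round(val, 1))
    ```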

  8. ROLE AND IMPORTANCE OF SEARCH ENGINE OPTIMIZATION

    OpenAIRE

    Gurneet Kaur

    2017-01-01

    Search Engines are an indispensable platform for users all over the globe to search for relevant information online. Search Engine Optimization (SEO) is the exercise of improving the position of a website in search engine rankings, for a chosen set of keywords. SEO is divided into two parts: On-Page and Off-Page SEO. In order to be successful, both areas require equal attention. This paper aims to explain the functioning of the search engines along with the role and importance of search e...

  9. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise-loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  10. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
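    To make the POD concept above concrete, the sketch below simulates hit/miss inspection results and fits a simple logistic POD curve by maximum likelihood, then reads off the 90%-detectable flaw size a90; repeating it for different numbers of test pieces is one way to explore the sample-size question discussed in the report. The "true" parameters, flaw-size range, and sample size are invented, and this is not the ENIQ procedure itself.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)

    # Simulated hit/miss inspections: detection probability rises with flaw size a.
    true_b0, true_b1 = -6.0, 4.0                      # assumed "true" POD parameters
    a = rng.uniform(0.5, 4.0, size=60)                # flaw sizes in the test-piece set
    p_true = 1.0 / (1.0 + np.exp(-(true_b0 + true_b1 * a)))
    hit = rng.random(a.size) < p_true

    def neg_log_lik(beta):
        # Negative log-likelihood of a logistic POD model (log-odds linear in flaw size).
        eta = beta[0] + beta[1] * a
        return -np.sum(hit * eta - np.logaddexp(0.0, eta))

    fit = minimize(neg_log_lik, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
    b0, b1 = fit.x
    a90 = (np.log(0.9 / 0.1) - b0) / b1               # flaw size detected with 90% probability
    print(f"fitted POD: logit(p) = {b0:.2f} + {b1:.2f} a,  a90 ~ {a90:.2f}")
    ```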

  11. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  12. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  13. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    Science.gov (United States)

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the grid ECa data as weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The
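    The MMSD criterion used above lends itself to a compact illustration: perturb one sample location at a time and accept or reject with a simulated-annealing rule so that the mean distance from every prediction point to its nearest sample shrinks. The sketch below works on a plain unit square, without the ECa weighting or kriging-variance criteria and without the MSANOS software; grid size, sample count, and the cooling schedule are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Prediction grid: a unit square discretised into evaluation points.
    g1, g2 = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
    grid = np.column_stack([g1.ravel(), g2.ravel()])

    def mmsd(design):
        """Mean of the shortest distances from every grid point to its nearest sample."""
        d = np.linalg.norm(grid[:, None, :] - design[None, :, :], axis=2)
        return d.min(axis=1).mean()

    n_samples, temp, cooling = 15, 0.05, 0.995
    design = rng.uniform(0, 1, size=(n_samples, 2))
    cur_val = mmsd(design)
    best, best_val = design.copy(), cur_val

    for _ in range(4000):
        cand = design.copy()
        i = rng.integers(n_samples)
        cand[i] = np.clip(cand[i] + rng.normal(0.0, 0.05, 2), 0, 1)   # move one point
        val = mmsd(cand)
        if val < cur_val or rng.random() < np.exp((cur_val - val) / temp):
            design, cur_val = cand, val
            if val < best_val:
                best, best_val = cand.copy(), val
        temp *= cooling

    print(f"optimised MMSD: {best_val:.4f}")
    ```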

  14. Resolution optimization with irregularly sampled Fourier data

    International Nuclear Information System (INIS)

    Ferrara, Matthew; Parker, Jason T; Cheney, Margaret

    2013-01-01

    Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer–Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an ‘interrupted SAR’ dataset representative of in-band interference commonly encountered in very high frequency radar applications. (paper)

  15. Ad-Hoc vs. Standardized and Optimized Arthropod Diversity Sampling

    Directory of Open Access Journals (Sweden)

    Pedro Cardoso

    2009-09-01

    Full Text Available The use of standardized and optimized protocols has been recently advocated for different arthropod taxa instead of ad-hoc sampling or sampling with protocols defined on a case-by-case basis. We present a comparison of both sampling approaches applied for spiders in a natural area of Portugal. Tests were made of their efficiency, over-collection of common species, singleton proportions, species abundance distributions, average specimen size, average taxonomic distinctness, and the behavior of richness estimators. The standardized protocol revealed three main advantages: (1) higher efficiency; (2) more reliable estimation of true richness; and (3) meaningful comparisons between undersampled areas.

  16. Determination of total concentration of chemically labeled metabolites as a means of metabolome sample normalization and sample loading optimization in mass spectrometry-based metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2012-12-18

    For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting materials from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach of determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting materials for MS analysis as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at the detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with (12)C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with (13)C-dansylation. Equal amounts of the (12)C

  17. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    International Nuclear Information System (INIS)

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-01-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected a fraction of interior voxels using a clustering algorithm that creates clusters of voxels that have similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme could be developed that reduces optimization time by more than a factor of 2, without significantly degrading the dose quality.
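    The sampling idea described above (keep all boundary voxels, cluster interior voxels by their influence-matrix signatures, and draw a fixed fraction from each cluster) can be sketched generically as follows. The influence matrix, the boundary flags, the number of clusters, and the 10% rate are placeholders, and plain k-means stands in for whatever clustering algorithm the authors used.

    ```python
    import numpy as np
    from scipy.cluster.vq import kmeans2

    rng = np.random.default_rng(6)

    # Hypothetical influence matrix D: dose per unit beamlet intensity for each voxel.
    n_voxels, n_beamlets = 5000, 80
    D = rng.random((n_voxels, n_beamlets))
    is_boundary = rng.random(n_voxels) < 0.15      # stand-in for organ-boundary voxels

    # Cluster interior voxels by the "signature" of their influence-matrix rows.
    interior = np.where(~is_boundary)[0]
    n_clusters, sample_rate = 50, 0.10
    _, labels = kmeans2(D[interior], n_clusters, minit="points")

    # Keep every boundary voxel plus a fixed fraction sampled from each cluster.
    selected = [np.where(is_boundary)[0]]
    for c in range(n_clusters):
        members = interior[labels == c]
        if members.size == 0:
            continue
        k = max(1, int(round(sample_rate * members.size)))
        selected.append(rng.choice(members, size=k, replace=False))
    selected = np.unique(np.concatenate(selected))

    print(f"kept {selected.size} of {n_voxels} voxels for the optimisation")
    ```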

  18. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for physical and chemical property sets and investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimate variance at these locations. The addition of optimal samples, for specific regions, increased the accuracy up to 2 % for chemical and 1 % for physical properties. The use of a sample grid and medium-scale variogram, as previous information for the conception of additional sampling schemes, was very promising to determine the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.

  19. Optimal CCD readout by digital correlated double sampling

    Science.gov (United States)

    Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.

    2016-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
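    A toy numerical illustration of the digital CDS operation analysed above: oversample the reset and signal plateaus of one pixel, apply a digital filter (here just a flat average, whereas the paper derives optimal weights from the noise model), and take the difference. All pixel values and noise levels are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # One simulated pixel read: a reset (baseline) plateau followed by a signal plateau,
    # each oversampled by the ADC; all numbers are illustrative only.
    n_samp, baseline, signal_e, read_noise = 256, 1000.0, 120.0, 5.0
    reset_plateau  = baseline + rng.normal(0.0, read_noise, n_samp)
    signal_plateau = baseline + signal_e + rng.normal(0.0, read_noise, n_samp)

    # Digital CDS: difference of the digitally filtered plateaus. A flat average is the
    # simplest filter; optimal weights would follow from the time-domain noise model.
    weights = np.ones(n_samp) / n_samp
    estimate = np.dot(weights, signal_plateau) - np.dot(weights, reset_plateau)
    print(f"estimated signal: {estimate:.1f} e- (true value {signal_e:.0f} e-)")
    ```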

  20. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  1. Monte Carlo parametric importance sampling with particle tracks scaling

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.

    1981-01-01

    A method for Monte Carlo importance sampling with parametric dependence is proposed. It depends upon obtaining over a single stage the overall functional dependence of the variance on the importance function parameter over a broad range of its values. Results corresponding to minimum variance are adopted and others rejected. The proposed method is applied to the finite slab penetration problem. When the exponential transformation is used, our method involves scaling of the generated particle tracks, and is a new application of Morton's method of similar trajectories. The method constitutes a generalization of Spanier's multistage importance sampling method, obtained by proper weighting over a single stage the curves he obtains over several stages, and preserves the statistical correlations between histories. It represents an extension of a theory by Frolov and Chentsov on Monte Carlo calculations of smooth curves to surfaces and to importance sampling calculations. By the proposed method, it seems possible to systematically arrive at minimum variance results and to avoid the infinite variances and effective biases sometimes observed in this type of calculation. (orig.) [de
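    A loose analogue of the single-stage idea above, in a toy setting rather than the slab-penetration problem: draw one batch of histories from a pilot exponentially tilted density, reuse those histories to estimate the estimator's second moment (and hence its variance) over a whole range of tilt parameters, and keep the minimising value. The event, densities, and parameter grid are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Rare event: p = P(X > c) for X ~ N(0, 1); importance density q_t = N(t, 1).
    c, n = 4.0, 200_000

    def log_phi(x, m):
        """Unit-variance normal log-density, up to an additive constant."""
        return -0.5 * (x - m) ** 2

    # Single stage: draw once from a pilot density q_{t0}, then reuse the same histories
    # to estimate the estimator's second moment for every candidate parameter t.
    t0 = c
    x = rng.normal(t0, 1.0, n)
    hit = x > c

    thetas = np.linspace(3.0, 6.0, 31)
    second_moment = []
    for t in thetas:
        # E_{q_t}[(1{X>c} * phi/q_t)^2] rewritten as an expectation under q_{t0}.
        log_term = 2.0 * log_phi(x, 0.0) - log_phi(x, t) - log_phi(x, t0)
        second_moment.append(np.mean(hit * np.exp(log_term)))
    t_best = thetas[int(np.argmin(second_moment))]

    # Estimate with the pilot sample; a production run would redraw histories at t_best.
    w = np.exp(log_phi(x, 0.0) - log_phi(x, t0))
    p_hat = np.mean(hit * w)
    print(f"variance-minimising shift ~ {t_best:.2f}, p ~ {p_hat:.2e}")  # exact p ~ 3.17e-5
    ```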

  2. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    Science.gov (United States)

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore, the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to 3 major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces the raster grid artefacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated at low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography perform uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold in that: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6 fold. The results reported in this paper significantly shortened acquisition time and improved quality of FP reconstructions. It may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.

  3. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung

    2013-02-16

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  4. Stochastic approximation Monte Carlo importance sampling for approximating exact conditional probabilities

    KAUST Repository

    Cheon, Sooyoung; Liang, Faming; Chen, Yuguo; Yu, Kai

    2013-01-01

    Importance sampling and Markov chain Monte Carlo methods have been used in exact inference for contingency tables for a long time, however, their performances are not always very satisfactory. In this paper, we propose a stochastic approximation Monte Carlo importance sampling (SAMCIS) method for tackling this problem. SAMCIS is a combination of adaptive Markov chain Monte Carlo and importance sampling, which employs the stochastic approximation Monte Carlo algorithm (Liang et al., J. Am. Stat. Assoc., 102(477):305-320, 2007) to draw samples from an enlarged reference set with a known Markov basis. Compared to the existing importance sampling and Markov chain Monte Carlo methods, SAMCIS has a few advantages, such as fast convergence, ergodicity, and the ability to achieve a desired proportion of valid tables. The numerical results indicate that SAMCIS can outperform the existing importance sampling and Markov chain Monte Carlo methods: It can produce much more accurate estimates in much shorter CPU time than the existing methods, especially for the tables with high degrees of freedom. © 2013 Springer Science+Business Media New York.

  5. An approach to maintenance optimization where safety issues are important

    International Nuclear Information System (INIS)

    Vatn, Jorn; Aven, Terje

    2010-01-01

    The starting point for this paper is a traditional approach to maintenance optimization where an object function is used for optimizing maintenance intervals. The object function reflects maintenance cost, cost of loss of production/services, as well as safety costs, and is based on a classical cost-benefit analysis approach where a value of prevented fatality (VPF) is used to weight the importance of safety. However, the rationale for such an approach could be questioned. What is the meaning of such a VPF figure, and is it sufficient to reflect the importance of safety by calculating the expected fatality loss VPF and potential loss of lives (PLL) as being done in the cost-benefit analyses? Should the VPF be the same number for all type of accidents, or should it be increased in case of multiple fatality accidents to reflect gross accident aversion? In this paper, these issues are discussed. We conclude that we have to see beyond the expected values in situations with high safety impacts. A framework is presented which opens up for a broader decision basis, covering considerations on the potential for gross accidents, the type of uncertainties and lack of knowledge of important risk influencing factors. Decisions with a high safety impact are moved from the maintenance department to the 'Safety Board' for a broader discussion. In this way, we avoid that the object function is used in a mechanical way to optimize the maintenance and important safety-related decisions are made implicit and outside the normal arena for safety decisions, e.g. outside the traditional 'Safety Board'. A case study from the Norwegian railways is used to illustrate the discussions.

  6. An approach to maintenance optimization where safety issues are important

    Energy Technology Data Exchange (ETDEWEB)

    Vatn, Jorn, E-mail: jorn.vatn@ntnu.n [NTNU, Production and Quality Engineering, 7491 Trondheim (Norway); Aven, Terje [University of Stavanger (Norway)

    2010-01-15

    The starting point for this paper is a traditional approach to maintenance optimization where an object function is used for optimizing maintenance intervals. The object function reflects maintenance cost, cost of loss of production/services, as well as safety costs, and is based on a classical cost-benefit analysis approach where a value of prevented fatality (VPF) is used to weight the importance of safety. However, the rationale for such an approach could be questioned. What is the meaning of such a VPF figure, and is it sufficient to reflect the importance of safety by calculating the expected fatality loss VPF and potential loss of lives (PLL) as being done in the cost-benefit analyses? Should the VPF be the same number for all type of accidents, or should it be increased in case of multiple fatality accidents to reflect gross accident aversion? In this paper, these issues are discussed. We conclude that we have to see beyond the expected values in situations with high safety impacts. A framework is presented which opens up for a broader decision basis, covering considerations on the potential for gross accidents, the type of uncertainties and lack of knowledge of important risk influencing factors. Decisions with a high safety impact are moved from the maintenance department to the 'Safety Board' for a broader discussion. In this way, we avoid that the object function is used in a mechanical way to optimize the maintenance and important safety-related decisions are made implicit and outside the normal arena for safety decisions, e.g. outside the traditional 'Safety Board'. A case study from the Norwegian railways is used to illustrate the discussions.

  7. Adaptive Importance Sampling Simulation of Queueing Networks

    NARCIS (Netherlands)

    de Boer, Pieter-Tjerk; Nicola, V.F.; Rubinstein, N.; Rubinstein, Reuven Y.

    2000-01-01

    In this paper, a method is presented for the efficient estimation of rare-event (overflow) probabilities in Jackson queueing networks using importance sampling. The method differs in two ways from methods discussed in most earlier literature: the change of measure is state-dependent, i.e., it is a

  8. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  9. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of the optimization. However, earlier approaches have drawbacks: the optimization loop consists of three phases and relies on empirical parameters. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of problems with constraints without any empirical parameters. It is able to select points located in the feasible region with high model uncertainty as well as points along the boundary of the constraint at the lowest objective value. The mean squared error determines which of the infill sampling criterion and the boundary sampling criterion is dominant. Also, the method guarantees the accuracy of the surrogate model because, unlike in super-EGO, the sample points are not clustered within extremely small regions. The performance of the proposed method, such as the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
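
    A rough sketch of the two ingredients described above is given below: an infill criterion (expected improvement weighted by the probability of feasibility) and a boundary criterion, switched by a simple uncertainty comparison standing in for the paper's mean-squared-error rule. The toy objective, constraint and kernel settings are assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy constrained 1D problem (assumed): minimize f(x) subject to g(x) <= 0.
f = lambda x: (x - 0.65) ** 2
g = lambda x: 0.3 - x                                       # feasible region: x >= 0.3

X = np.array([[0.05], [0.35], [0.7], [0.95]])               # current design points
gp_f = GaussianProcessRegressor(RBF(0.2), alpha=1e-10).fit(X, f(X).ravel())
gp_g = GaussianProcessRegressor(RBF(0.2), alpha=1e-10).fit(X, g(X).ravel())

cand = np.linspace(0.0, 1.0, 501).reshape(-1, 1)
mu_f, sd_f = gp_f.predict(cand, return_std=True)
mu_g, sd_g = gp_g.predict(cand, return_std=True)
sd_f, sd_g = np.maximum(sd_f, 1e-12), np.maximum(sd_g, 1e-12)

feasible = g(X).ravel() <= 0
y_best = f(X).ravel()[feasible].min()

# Infill criterion: expected improvement weighted by the probability of feasibility.
z = (y_best - mu_f) / sd_f
infill = ((y_best - mu_f) * norm.cdf(z) + sd_f * norm.pdf(z)) * norm.cdf(-mu_g / sd_g)
# Boundary criterion: favour points whose constraint prediction is close to zero.
boundary = norm.pdf(mu_g / sd_g)

# Mean predictive uncertainty used as a simple stand-in for the paper's MSE switch.
score = infill if sd_f.mean() >= sd_g.mean() else boundary
print("next sample point:", cand[np.argmax(score)])
```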

  10. Optimization of the two-sample rank Neyman-Pearson detector

    Science.gov (United States)

    Akimov, P. S.; Barashkov, V. M.

    1984-10-01

    The development of optimal algorithms based on rank statistics for finite sample sizes involves considerable mathematical difficulties. The present investigation provides results related to the design and analysis of an optimal rank detector based on the Neyman-Pearson criterion. The detection of a signal in the presence of background noise is considered, taking into account n observations (readings) x1, x2, ..., xn in the experimental communications channel. The rank of an observation is computed on the basis of relations between x and the variable y, which represents the interference. Attention is given to conditions in the absence of a signal, the probability of detecting an arriving signal, details regarding the use of the Neyman-Pearson criterion, the scheme of an optimal rank multichannel incoherent detector, and an analysis of the detector.
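
    The essential mechanics can be sketched as follows: rank the channel readings against reference noise-only readings and threshold the rank-sum statistic at a level fixed by the allowed false-alarm probability. The Gaussian noise model, the sample sizes and the constant-signal alternative are illustrative assumptions rather than the setup of the 1984 paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 16, 64        # observations in the channel, reference (noise-only) readings
alpha = 1e-2         # allowed false-alarm probability (Neyman-Pearson constraint)

def rank_sum(x, y):
    # rank of each observation = number of reference noise values it exceeds
    return np.sum(x[:, None] > y[None, :])

# Threshold from the noise-only (H0) distribution of the statistic, by Monte Carlo.
h0 = np.array([rank_sum(rng.normal(size=n), rng.normal(size=m)) for _ in range(20000)])
threshold = np.quantile(h0, 1.0 - alpha)

# Detection probability for a weak constant signal added to Gaussian noise.
signal = 0.5
h1 = np.array([rank_sum(signal + rng.normal(size=n), rng.normal(size=m))
               for _ in range(5000)])
print("P_fa ~", np.mean(h0 > threshold), "  P_d ~", np.mean(h1 > threshold))
```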

  11. Coalescent: an open-science framework for importance sampling in coalescent theory.

    Science.gov (United States)

    Tewari, Susanta; Spouge, John L

    2015-01-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time following infinite sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only

  12. Coalescent: an open-science framework for importance sampling in coalescent theory

    Directory of Open Access Journals (Sweden)

    Susanta Tewari

    2015-08-01

    Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks on importance sampling, researchers often struggle to translate new sampling schemes computationally or benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time following infinite sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency

  13. Rules of Normalisation and their Importance for Interpretation of Systems of Optimal Taxation

    DEFF Research Database (Denmark)

    Munk, Knud Jørgen

    representation of the general equilibrium conditions the rules of normalisation in standard optimal tax models. This allows us to provide an intuitive explanation of what determines the optimal tax system. Finally, we review a number of examples where lack of precision with respect to normalisation in otherwise...... important contributions to the literature on optimal taxation has given rise to misinterpretations of analytical results....

  14. Unified Importance Sampling Schemes for Efficient Simulation of Outage Capacity over Generalized Fading Channels

    KAUST Repository

    Rached, Nadhir B.; Kammoun, Abla; Alouini, Mohamed-Slim; Tempone, Raul

    2015-01-01

    The outage capacity (OC) is among the most important performance metrics of communication systems operating over fading channels. Of interest in the present paper is the evaluation of the OC at the output of the Equal Gain Combining (EGC) and the Maximum Ratio Combining (MRC) receivers. In this case, it can be seen that this problem turns out to be that of computing the Cumulative Distribution Function (CDF) of the sum of independent random variables. Since finding a closed-form expression for the CDF of the sum distribution is out of reach for a wide class of commonly used distributions, methods based on Monte Carlo (MC) simulations take pride of place. In order to allow for the estimation of the operating range of small outage probabilities, it is of paramount importance to develop fast and efficient estimation methods, as naive MC simulations would require high computational complexity. Along this line, we propose in this work two unified, yet efficient, hazard rate twisting Importance Sampling (IS) based approaches that efficiently estimate the OC of MRC or EGC diversity techniques over generalized independent fading channels. The first estimator is shown to possess the asymptotic optimality criterion and applies for arbitrary fading models, whereas the second one achieves the well-desired bounded relative error property for the majority of the well-known fading variates. Moreover, the second estimator is shown to achieve the asymptotic optimality property under the particular Log-normal environment. Some selected simulation results are finally provided in order to illustrate the substantial computational gain achieved by the proposed IS schemes over naive MC simulations.
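
    The flavour of such estimators can be illustrated on the simplest case: the left tail of a sum of i.i.d. unit-mean exponential branch SNRs. The sketch below uses classical exponential twisting, with the tilt chosen so that the twisted mean of the sum equals the outage threshold; it is a stand-in for, not an implementation of, the hazard-rate twisting schemes proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Outage event: the sum of L i.i.d. unit-mean exponential branch SNRs falls below gamma.
L_branches, gamma, lam = 4, 0.1, 1.0
N = 100_000

# Naive Monte Carlo
x = rng.exponential(1.0 / lam, size=(N, L_branches))
p_naive = np.mean(x.sum(axis=1) <= gamma)

# Exponential twisting: sample each branch from Exp(lam + theta), with theta chosen so
# that the mean of the twisted sum equals the outage threshold gamma.
theta = L_branches / gamma - lam
x_is = rng.exponential(1.0 / (lam + theta), size=(N, L_branches))
log_w = L_branches * np.log(lam / (lam + theta)) + theta * x_is.sum(axis=1)
p_is = np.mean((x_is.sum(axis=1) <= gamma) * np.exp(log_w))

print(f"naive MC: {p_naive:.3e}   importance sampling: {p_is:.3e}")
```

    With these settings the target probability is of the order of 10^-6, so the naive estimator typically returns zero or a handful of hits, while the twisted estimator resolves it from the same number of samples.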

  15. Unified Importance Sampling Schemes for Efficient Simulation of Outage Capacity over Generalized Fading Channels

    KAUST Repository

    Rached, Nadhir B.

    2015-11-13

    The outage capacity (OC) is among the most important performance metrics of communication systems operating over fading channels. Of interest in the present paper is the evaluation of the OC at the output of the Equal Gain Combining (EGC) and the Maximum Ratio Combining (MRC) receivers. In this case, it can be seen that this problem turns out to be that of computing the Cumulative Distribution Function (CDF) of the sum of independent random variables. Since finding a closed-form expression for the CDF of the sum distribution is out of reach for a wide class of commonly used distributions, methods based on Monte Carlo (MC) simulations take pride of place. In order to allow for the estimation of the operating range of small outage probabilities, it is of paramount importance to develop fast and efficient estimation methods, as naive MC simulations would require high computational complexity. Along this line, we propose in this work two unified, yet efficient, hazard rate twisting Importance Sampling (IS) based approaches that efficiently estimate the OC of MRC or EGC diversity techniques over generalized independent fading channels. The first estimator is shown to possess the asymptotic optimality criterion and applies for arbitrary fading models, whereas the second one achieves the well-desired bounded relative error property for the majority of the well-known fading variates. Moreover, the second estimator is shown to achieve the asymptotic optimality property under the particular Log-normal environment. Some selected simulation results are finally provided in order to illustrate the substantial computational gain achieved by the proposed IS schemes over naive MC simulations.

  16. On the Use of Importance Sampling in Particle Transport Problems

    Energy Technology Data Exchange (ETDEWEB)

    Eriksson, B

    1965-06-15

    The idea of importance sampling is applied to the problem of solving integral equations of Fredholm's type. In particular, Boltzmann's neutron transport equation is considered. For the solution of the latter equation, an importance sampling technique is derived from some simple transformations of the original transport equation into a similar equation. Examples are given of transformations which have been used with great success in practice.

  17. On the Use of Importance Sampling in Particle Transport Problems

    International Nuclear Information System (INIS)

    Eriksson, B.

    1965-06-01

    The idea of importance sampling is applied to the problem of solving integral equations of Fredholm's type. In particular, Boltzmann's neutron transport equation is considered. For the solution of the latter equation, an importance sampling technique is derived from some simple transformations of the original transport equation into a similar equation. Examples are given of transformations which have been used with great success in practice.

  18. Improved metamodel-based importance sampling for the performance assessment of radioactive waste repositories

    International Nuclear Information System (INIS)

    Cadini, F.; Gioletta, A.; Zio, E.

    2015-01-01

    In the context of a probabilistic performance assessment of a radioactive waste repository, the estimation of the probability of exceeding the dose threshold set by a regulatory body is a fundamental task. This may become difficult when the probabilities involved are very small, since the classically used sampling-based Monte Carlo methods may become computationally impractical. This issue is further complicated by the fact that the computer codes typically adopted in this context require large computational efforts, both in terms of time and memory. This work proposes an original use of a Monte Carlo-based algorithm for (small) failure probability estimation in the context of the performance assessment of a near-surface radioactive waste repository. The algorithm, developed within the context of structural reliability, makes use of an estimated optimal importance density and a surrogate, kriging-based metamodel approximating the system response. On the basis of an accurate analytic analysis of the algorithm, a modification is proposed which allows a further reduction of the computational effort through a more effective training of the metamodel. - Highlights: • We tackle uncertainty propagation in a radwaste repository performance assessment. • We improve a kriging-based importance sampling for estimating failure probabilities. • We justify the modification by an analytic, comparative analysis of the algorithms. • The probability of exceeding dose thresholds in radwaste repositories is estimated. • The algorithm is further improved by reducing the number of its free parameters.

  19. Adaptive multiple importance sampling for Gaussian processes

    Czech Academy of Sciences Publication Activity Database

    Xiong, X.; Šmídl, Václav; Filippone, M.

    2017-01-01

    Roč. 87, č. 8 (2017), s. 1644-1665 ISSN 0094-9655 R&D Projects: GA MŠk(CZ) 7F14287 Institutional support: RVO:67985556 Keywords : Gaussian Process * Bayesian estimation * Adaptive importance sampling Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 0.757, year: 2016 http://library.utia.cas.cz/separaty/2017/AS/smidl-0469804.pdf

  20. Simulation of a Jackson tandem network using state-dependent importance sampling

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2008-01-01

    This paper considers importance sampling as a tool for rare-event simulation. The focus is on estimating the probability of overflow in the downstream queue of a Jackson two-node tandem queue. It is known that in this setting 'traditional' state-independent importance-sampling distributions perform

  1. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach is practicable for optimizing a soil sampling scheme. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining a sampling configuration and mapping the spatial distribution of soil organic matter at low cost and with high efficiency.

  2. A Hybrid Monte Carlo importance sampling of rare events in Turbulence and in Turbulent Models

    Science.gov (United States)

    Margazoglou, Georgios; Biferale, Luca; Grauer, Rainer; Jansen, Karl; Mesterhazy, David; Rosenow, Tillmann; Tripiccione, Raffaele

    2017-11-01

    Extreme and rare events are a challenging topic in the field of turbulence. Investigating such instances with traditional numerical tools is a notoriously difficult task, as they fail to systematically sample the fluctuations around them. We propose instead that an importance sampling Monte Carlo method can selectively highlight extreme events in remote areas of the phase space and induce their occurrence. We present a new computational approach, based on the path integral formulation of stochastic dynamics, and employ an accelerated Hybrid Monte Carlo (HMC) algorithm for this purpose. Through the paradigm of the stochastic one-dimensional Burgers equation, subject to a random noise that is white in time and power-law correlated in Fourier space, we prove our concept and benchmark our results against standard CFD methods. Furthermore, we present our first results of constrained sampling around saddle-point instanton configurations (optimal fluctuations). The research leading to these results has received funding from the EU Horizon 2020 research and innovation programme under Grant Agreement No. 642069, and from the EU Seventh Framework Programme (FP7/2007-2013) under ERC Grant Agreement No. 339032.

  3. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions and representations of hydrological behavior. This trend is, however, accompanied by growing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE) has been widely used in uncertainty analysis for hydrological models; it builds on Monte Carlo methods coupled with Bayesian estimation. However, the stochastic sampling of prior parameters adopted by GLUE tends to be inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
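
    A compact illustration of the idea, reusing every parameter set evaluated by a heuristic optimizer as the GLUE sample, is sketched below on a two-parameter toy recession model. The model, the Nash-Sutcliffe-based likelihood measure and the behavioural threshold of 0.7 are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Toy two-parameter recession model standing in for a hydrological model: Q(t) = a*exp(-b*t).
t = np.arange(30.0)
def simulate(params):
    a, b = params
    return a * np.exp(-b * t)

obs = simulate([10.0, 0.15]) + rng.normal(0.0, 0.4, t.size)   # synthetic observations

archive = []   # every parameter set the optimizer evaluates, with its likelihood measure

def neg_likelihood(params):
    sim = simulate(params)
    # Nash-Sutcliffe efficiency used as an informal GLUE likelihood measure
    ns = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    archive.append((np.array(params, dtype=float), ns))
    return -ns

# The heuristic search (differential evolution) doubles as the GLUE sampler: all
# evaluated parameter sets are retained, not only the best one.
differential_evolution(neg_likelihood, bounds=[(1.0, 20.0), (0.01, 0.5)],
                       maxiter=40, popsize=20, seed=1, polish=False)

likelihoods = np.array([l for _, l in archive])
params = np.array([p for p, _ in archive])
behavioural = likelihoods > 0.7                       # GLUE acceptance threshold (assumed)
w = likelihoods[behavioural] / likelihoods[behavioural].sum()
sims = np.array([simulate(p) for p in params[behavioural]])
pred = (w[:, None] * sims).sum(axis=0)                # likelihood-weighted prediction
print(f"{behavioural.sum()} behavioural sets, weighted peak flow ~ {pred[0]:.2f}")
```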

  4. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    Directory of Open Access Journals (Sweden)

    D. Ramyachitra

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resulting datasets using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  5. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    Science.gov (United States)

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resulting datasets using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  6. Sequential determination of important ecotoxic radionuclides in nuclear waste samples

    International Nuclear Information System (INIS)

    Bilohuscin, J.

    2016-01-01

    In the dissertation thesis we focused on the development and optimization of a method for the sequential determination of the radionuclides 93Zr, 94Nb, 99Tc and 126Sn, employing the extraction chromatography sorbents TEVA(R) Resin and Anion Exchange Resin, supplied by Eichrom Industries. Prior to testing the sequential separation of these radionuclides from radioactive waste samples, a unique sequential procedure for separating 90Sr, 239Pu and 241Am from urine matrices was tried, using molecular recognition sorbents of the AnaLig(R) series and the extraction chromatography sorbent DGA(R) Resin. In these experiments, four different sorbents were used in sequence for the separation, including the PreFilter Resin sorbent, which removes interfering organic materials present in raw urine. After positive results were obtained with this sequential procedure, experiments followed on the separation of 126Sn using the TEVA(R) Resin and Anion Exchange Resin sorbents. Radiochemical recoveries obtained from samples of radioactive evaporator concentrates and sludge showed a high separation efficiency, while the activities of 126Sn were below the minimum detectable activities (MDA). The activity of 126Sn was determined, after ingrowth of the daughter nuclide 126mSb, on an HPGe gamma detector, with minimal contamination from gamma-interfering radionuclides and decontamination factors (Df) higher than 1400 for 60Co and 47000 for 137Cs. Based on the experience and results of these separation procedures, a complete method for the sequential separation of 93Zr, 94Nb, 99Tc and 126Sn was proposed, which included optimization steps similar to those used in the previous parts of the dissertation work. Application of the sequential separation method with the TEVA(R) Resin and Anion Exchange Resin sorbents to real radioactive waste samples provided satisfactory results and an economical, time-saving and efficient method. (author)

  7. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    Science.gov (United States)

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  8. Simultaneous beam sampling and aperture shape optimization for SPORT.

    Science.gov (United States)

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck and a prostate case

  9. Simultaneous beam sampling and aperture shape optimization for SPORT

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Ye, Yinyu [Department of Management Science and Engineering, Stanford University, Stanford, California 94305 (United States)

    2015-02-15

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  10. Simultaneous beam sampling and aperture shape optimization for SPORT

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu

    2015-01-01

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  11. Optimization of sample preparation variables for wedelolactone from Eclipta alba using Box-Behnken experimental design followed by HPLC identification.

    Science.gov (United States)

    Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S

    2013-07-01

    Coumestan wedelolactone is an important phytocomponent from Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study was to develop and optimize a supercritical carbon dioxide assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. Response surface methodology was employed to optimize the supercritical carbon dioxide based sample preparation of wedelolactone from E. alba (L.) Hassk. The optimization involved investigating the quantitative effects of the sample preparation parameters, viz. operating pressure, temperature, modifier concentration and extraction time, on the yield of wedelolactone using a Box-Behnken design. The wedelolactone content was determined using a validated HPLC methodology. The experimental data were fitted to a second-order polynomial equation using multiple regression analysis and analyzed using the appropriate statistical methods. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44%; and extraction time, 60 min. The optimum extraction conditions gave a wedelolactone yield of 15.37 ± 0.63 mg/100 g E. alba (L.) Hassk, which was in good agreement with the predicted value. Temperature and modifier concentration showed significant effects on the wedelolactone yield. The supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet-assisted extraction method. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
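
    The generic workflow described above (a Box-Behnken design, a fitted second-order polynomial, and optimization of the fitted surface within the experimental region) can be sketched as follows; the coded factors and the synthetic yield values are placeholders, not the reported experimental data.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

# Box-Behnken design for 4 coded factors (pressure, temperature, modifier %, time),
# each scaled to [-1, 1]: edge midpoints of the cube plus centre-point replicates.
def box_behnken(k=4, centre=3):
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1.0, 1.0):
            for b in (-1.0, 1.0):
                row = [0.0] * k
                row[i], row[j] = a, b
                runs.append(row)
    return np.array(runs + [[0.0] * k] * centre)

X = box_behnken()

# Yields for each run would come from the experiments; a synthetic response is used here.
rng = np.random.default_rng(1)
y = (12.0 + 2.0 * X[:, 0] + 1.5 * X[:, 1] + 1.0 * X[:, 2]
     - 1.8 * X[:, 0] ** 2 - 1.2 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 1]
     + rng.normal(0.0, 0.2, len(X)))

# Full second-order model: intercept, linear, two-factor interaction and squared terms.
def quad_terms(x):
    x = np.atleast_2d(x)
    cols = [np.ones(len(x))] + [x[:, i] for i in range(x.shape[1])]
    cols += [x[:, i] * x[:, j] for i, j in combinations(range(x.shape[1]), 2)]
    cols += [x[:, i] ** 2 for i in range(x.shape[1])]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)     # multiple regression fit

# Optimum of the fitted surface, constrained to the experimental (coded) region.
res = minimize(lambda x: -(quad_terms(x) @ beta)[0], x0=np.zeros(4),
               bounds=[(-1.0, 1.0)] * 4)
print("optimal coded settings:", np.round(res.x, 2))
```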

  12. Joint importance sampling of low-order volumetric scattering

    DEFF Research Database (Denmark)

    Georgiev, Iliyan; Křivánek, Jaroslav; Hachisuka, Toshiya

    2013-01-01

    Central to all Monte Carlo-based rendering algorithms is the construction of light transport paths from the light sources to the eye. Existing rendering approaches sample path vertices incrementally when constructing these light transport paths. The resulting probability density is thus a product...... of the conditional densities of each local sampling step, constructed without explicit control over the form of the final joint distribution of the complete path. We analyze why current incremental construction schemes often lead to high variance in the presence of participating media, and reveal...... that such approaches are an unnecessary legacy inherited from traditional surface-based rendering algorithms. We devise joint importance sampling of path vertices in participating media to construct paths that explicitly account for the product of all scattering and geometry terms along a sequence of vertices instead...

  13. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which also depends on the robot position in Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process, modeled through its components: product, process and resource; and by automatically configuring a sampling-based motion-planning problem and the transition-based rapidly exploring random tree algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, revealing its functionality and outlining future potential for optimal motion generation for robotic machining processes.

  14. Testing of a method of importance sampling for use with SYVAC

    International Nuclear Information System (INIS)

    Dalrymple, G.J.; Prust, J.O.; Edwards, H.H.

    1985-10-01

    The Importance Sampling Scheme is designed to concentrate sampling in the high-dose region of the parameter space. A sensitivity analysis of an initial case study is used to roughly define the high-dose and high-risk region of the parameter space. By applying modified distributions to the individual parameter ranges it was possible to concentrate sampling in regions of the parameter range that lead to high doses and risks. Comparison of risk estimates and cumulative distribution functions of dose for an increasing number of runs of the SYVAC model indicated that the risk estimate had converged after 1200 Importance Sampling runs. Examination of a plot of risk in various dose bands supported this conclusion. It was clear that the random sampling had not achieved convergence at 400 runs. (author)

  15. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institue, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was taken as the importance sampling function. A Kriging metamodel was constructed in more detail in the vicinity of the limit state. The failure probability was calculated by importance sampling performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possible change in the calculated failure probability due to the uncertainty of the Kriging metamodel was also calculated.
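
    A toy version of such a pipeline, with a Markov chain pushed toward the limit state, a kernel density fitted to its states as the importance sampling density, and importance sampling of a Gaussian-process (kriging) metamodel, might look as follows. The linear limit state, chain settings and smoothing constant are illustrative assumptions only.

```python
import numpy as np
from scipy import stats
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Assumed limit state in 2D standard-normal space: failure when g(x) <= 0.
def g(x):
    x = np.atleast_2d(x)
    return 3.0 - x[:, 0] - x[:, 1]

# Kriging (Gaussian-process) metamodel trained on a small design of experiments.
X_doe = 2.0 * rng.normal(size=(60, 2))
gp = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-8).fit(X_doe, g(X_doe))

# Markov chain pushed toward the metamodel's failure region; its states feed a kernel density.
def log_target(x):
    smoothed_fail = stats.norm.cdf(-gp.predict(x.reshape(1, -1))[0] / 0.1)
    return stats.multivariate_normal.logpdf(x, mean=[0.0, 0.0]) + np.log(smoothed_fail + 1e-300)

x = np.array([1.5, 1.5])
lp, chain = log_target(x), []
for _ in range(3000):
    prop = x + 0.5 * rng.normal(size=2)
    lp_prop = log_target(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    chain.append(x)
kde = stats.gaussian_kde(np.array(chain[500:]).T)     # importance sampling density q

# Importance sampling of the metamodel with the kernel density.
samples = kde.resample(20000).T
w = stats.multivariate_normal.pdf(samples, mean=[0.0, 0.0]) / kde.pdf(samples.T)
pf = np.mean((gp.predict(samples) <= 0.0) * w)
print(f"estimated failure probability ~ {pf:.2e}")
```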

  16. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    Science.gov (United States)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  17. Optimal sampling period of the digital control system for the nuclear power plant steam generator water level control

    International Nuclear Information System (INIS)

    Hur, Woo Sung; Seong, Poong Hyun

    1995-01-01

    A great effort has been made to improve nuclear plant control systems by use of digital technologies, and a long-term schedule for control system upgrades has been prepared with an aim to implementation in the next generation of nuclear plants. In the case of a digital control system, it is important to decide the sampling period for analysis and design of the system, because the performance and the stability of a digital control system depend on the value of its sampling period. There is, however, currently no systematic method used universally for determining the sampling period of a digital control system. Traditionally, the sampling frequency is commonly selected as 20 to 30 times the bandwidth of the analog control system which has the same system configuration and parameters as the digital one. In this paper, a new method to select the sampling period is suggested which takes into account the performance as well as the stability of the digital control system. By use of Irving's steam generator model, the optimal sampling period of an assumed digital control system for steam generator level control is estimated and then verified in the digital control simulation system for the Kori-2 nuclear power plant steam generator level control. Consequently, we conclude that the optimal sampling period of the digital control system for Kori-2 nuclear power plant steam generator level control is 1 second for all power ranges. 7 figs., 3 tabs., 8 refs. (Author)

  18. Optimized IMAC-IMAC protocol for phosphopeptide recovery from complex biological samples

    DEFF Research Database (Denmark)

    Ye, Juanying; Zhang, Xumin; Young, Clifford

    2010-01-01

    using Fe(III)-NTA IMAC resin and it proved to be highly selective in the phosphopeptide enrichment of a highly diluted standard sample (1:1000) prior to MALDI MS analysis. We also observed that a higher iron purity led to an increased IMAC enrichment efficiency. The optimized method was then adapted...... to phosphoproteome analyses of cell lysates of high protein complexity. From either 20 microg of mouse sample or 50 microg of Drosophila melanogaster sample, more than 1000 phosphorylation sites were identified in each study using IMAC-IMAC and LC-MS/MS. We demonstrate efficient separation of multiply phosphorylated...... characterization of phosphoproteins in functional phosphoproteomics research projects....

  19. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increased intensity, within the same divergence limits, ± 2 ° . This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  20. Optimal LNG (liquefied natural gas) regasification scheduling for import terminals with storage

    International Nuclear Information System (INIS)

    Trotter, Ian M.; Gomes, Marília Fernandes Maciel; Braga, Marcelo José; Brochmann, Bjørn; Lie, Ole Nikolai

    2016-01-01

    We describe a stochastic dynamic programming model for maximising the revenue generated by regasification of LNG (liquefied natural gas) from storage tanks at importation terminals in relation to a natural gas spot market. We present three numerical resolution strategies: a posterior optimal strategy, a rolling intrinsic strategy and a full option strategy based on a least-squares Monte Carlo algorithm. We then compare model simulation results to the observed behaviour of three LNG importation terminals in the UK for the period April 2011 to April 2012, and find that there was low correlation between the observed regasification decisions of the operators and those suggested by the three simulated strategies. However, the actions suggested by the model simulations would have generated significantly higher revenues, suggesting that the facilities might have been operated sub-optimally. A further numerical experiment shows that increasing the storage and regasification capacities of a facility can significantly increase the achievable revenue, even without altering the amount of LNG received, by allowing operators more flexibility to defer regasification. - Highlights: • We present a revenue maximisation model for LNG (liquefied natural gas) storage tanks at import terminals. • Three resolution strategies: posterior optimal, rolling intrinsic and full option. • The full option strategy is based on a least-squares Monte Carlo algorithm. • Model simulations show potential for higher revenue in three UK LNG terminals. • Numerical experiments show how storage and regasification capacities affect revenue.
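
    The 'posterior optimal' benchmark mentioned above has a particularly simple form once the problem is reduced to a single tank with a daily send-out limit and no further cargo arrivals: with perfect price foresight, revenue is maximized by regasifying at capacity on the highest-priced days. The price path and capacities below are invented, and boil-off, minimum heel and new deliveries, which the paper's stochastic dynamic program can handle, are ignored.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily spot prices for one year (EUR/MWh) and facility data.
days = 365
prices = (25.0 + 0.3 * np.cumsum(rng.normal(0.0, 0.6, days))
          + 4.0 * np.sin(2.0 * np.pi * np.arange(days) / 365.0))
inventory = 1_000_000     # MWh of LNG in the tank at the start, no further cargoes assumed
regas_cap = 15_000        # maximum daily send-out, MWh

# Posterior-optimal benchmark: with perfect foresight, regasify at full capacity on the
# highest-priced days until the tank is empty.
schedule = np.zeros(days)
for d in np.argsort(prices)[::-1]:
    if inventory <= 0:
        break
    schedule[d] = min(regas_cap, inventory)
    inventory -= schedule[d]

print(f"posterior-optimal revenue: {schedule @ prices:,.0f} EUR")
```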

  1. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  2. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
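
    A small numerical illustration of the recovery problem, a sparse Legendre PC expansion sampled under its natural uniform measure and recovered from fewer model runs than unknown coefficients, is sketched below. Lasso is used as a convenient convex stand-in for the ℓ1-minimization solver, and the sparse coefficient vector is invented for the example.

```python
import numpy as np
from numpy.polynomial.legendre import legval
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

P = 30                                        # highest polynomial degree (31 unknowns)
c_true = np.zeros(P + 1)
c_true[[0, 2, 7]] = [1.0, 0.5, 0.25]          # assumed sparse "true" PC coefficients

def model(xi):                                # black-box model = sparse Legendre expansion
    return legval(xi, c_true)

def basis(xi):                                # measurement matrix of P_0..P_P at the samples
    return np.stack([legval(xi, np.eye(P + 1)[k]) for k in range(P + 1)], axis=1)

N = 25                                        # fewer model runs than unknown coefficients
xi = rng.uniform(-1.0, 1.0, size=N)           # natural (uniform) sampling measure
A, y = basis(xi), model(xi)

# l1-regularized least squares (Lasso) as a convex stand-in for the l1-minimization step.
c_hat = Lasso(alpha=1e-3, fit_intercept=False, max_iter=200_000).fit(A, y).coef_
print("true support:     ", np.flatnonzero(c_true))
print("largest estimates:", np.argsort(np.abs(c_hat))[::-1][:3])
```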

  3. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.

    Science.gov (United States)

    Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with the urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and its reproducibility. Centrifugation speeds, water volumes and the use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed strong linearity over a concentration range from 10(6) to 10(0) leptospires/mL, with low limits of detection. The optimized protocol for quantifying pathogenic Leptospira in environmental waters (river, pond and sewage) consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden.

  4. The optimal amount and allocation of sampling effort for plant health inspection

    NARCIS (Netherlands)

    Surkov, I.; Oude Lansink, A.G.J.M.; Werf, van der W.

    2009-01-01

    Plant import inspection can prevent the introduction of exotic pests and diseases, thereby averting economic losses. We explore the optimal allocation of a fixed budget, taking into account risk differentials, and the optimal-sized budget to minimise total pest costs. A partial-equilibrium market

  5. Time optimization of 90Sr measurements: Sequential measurement of multiple samples during ingrowth of 90Y

    International Nuclear Information System (INIS)

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-01-01

    The aim of this paper is to contribute to a more rapid determination of a series of samples containing 90Sr by making the Cherenkov measurement of the daughter nuclide 90Y more time efficient. There are many instances when an optimization of the measurement method might be favorable, such as situations requiring rapid results in order to make urgent decisions or, on the other hand, maximizing the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as the individual measurement times for n samples in a series. This work is focused on the measurement of 90Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time will be less compared to using the same measurement time for all samples. It was found that by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, when assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm. - Highlights: • An approach roughly a factor of three more efficient than an un-optimized method. • The optimization gives a more efficient use of instrument time. • The efficiency increase ranges from a factor of three to 10, for 10 to 40 samples.
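
    The scheduling idea, that later samples have had more time for 90Y ingrowth and therefore need shorter Cherenkov counts to reach the same MDA, can be sketched as follows. The counting efficiency, sample volume, initial delay and the Currie-type detection-limit expression are assumptions chosen for illustration, not the values or formulas used in the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Sequential Cherenkov counting of 90Y growing in after Sr/Y separation at t = 0.
lam = np.log(2.0) / 64.0             # 90Y decay constant, 1/h (64 h half-life)
eff, volume = 0.4, 0.5               # counting efficiency and sample volume (L) -- assumed
bkg = 0.8 / 60.0                     # background count rate in cps (~0.8 cpm)
mda_target = 1.0                     # required MDA, Bq/L
n_samples = 10
t_start = 6.0 * 3600.0               # first measurement begins 6 h after separation (s)

def mda(t_count, t0):
    """Currie-type MDA (Bq/L) for a count of length t_count (s) starting at t0 (s).
    The 90Y ingrowth factor is taken at the start of the count (slightly conservative)."""
    ingrowth = 1.0 - np.exp(-lam * t0 / 3600.0)
    ld = 2.71 + 4.65 * np.sqrt(bkg * t_count)         # detection limit in counts
    return ld / (eff * t_count * volume * ingrowth)

total = 0.0
for i in range(n_samples):
    # shortest counting time that still reaches the target MDA for this sample
    t_count = brentq(lambda tc: mda(tc, t_start) - mda_target, 1.0, 1.0e6)
    total += t_count
    t_start += t_count               # later samples keep growing in while earlier ones count
print(f"total counting time for {n_samples} samples ~ {total / 3600.0:.1f} h")
```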

  6. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected-value programming problem without any prior distribution. In the study of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify the competitive individuals in a given population, by which high-quality individuals can obtain a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.

  7. Screen Space Ambient Occlusion Based Multiple Importance Sampling for Real-Time Rendering

    Science.gov (United States)

    Zerari, Abd El Mouméne; Babahenini, Mohamed Chaouki

    2018-03-01

    We propose a new approximation technique for accelerating the Global Illumination algorithm for real-time rendering. The proposed approach is based on the Screen-Space Ambient Occlusion (SSAO) method, which approximates the global illumination for large, fully dynamic scenes at interactive frame rates. Current algorithms that are based on the SSAO method suffer from difficulties due to the large number of samples that are required. In this paper, we propose an improvement to the SSAO technique by integrating it with a Multiple Importance Sampling technique that combines a stratified sampling method with an importance sampling method, with the objective of reducing the number of samples. Experimental evaluation demonstrates that our technique can produce high-quality images in real time and is significantly faster than traditional techniques.

  8. Convergence and Efficiency of Adaptive Importance Sampling Techniques with Partial Biasing

    Science.gov (United States)

    Fort, G.; Jourdain, B.; Lelièvre, T.; Stoltz, G.

    2018-04-01

    We propose a new Monte Carlo method to efficiently sample a multimodal distribution (known up to a normalization constant). We consider a generalization of the discrete-time Self Healing Umbrella Sampling method, which can also be seen as a generalization of well-tempered metadynamics. The dynamics is based on an adaptive importance technique. The importance function relies on the weights (namely the relative probabilities) of disjoint sets which form a partition of the space. These weights are unknown but are learnt on the fly yielding an adaptive algorithm. In the context of computational statistical physics, the logarithm of these weights is, up to an additive constant, the free-energy, and the discrete valued function defining the partition is called the collective variable. The algorithm falls into the general class of Wang-Landau type methods, and is a generalization of the original Self Healing Umbrella Sampling method in two ways: (i) the updating strategy leads to a larger penalization strength of already visited sets in order to escape more quickly from metastable states, and (ii) the target distribution is biased using only a fraction of the free-energy, in order to increase the effective sample size and reduce the variance of importance sampling estimators. We prove the convergence of the algorithm and analyze numerically its efficiency on a toy example.

  9. Optimization of multi-channel neutron focusing guides for extreme sample environments

    International Nuclear Information System (INIS)

    Di Julio, D D; Lelièvre-Berna, E; Andersen, K H; Bentley, P M; Courtois, P

    2014-01-01

    In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.
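
    A minimal sketch of coupling such a figure of merit to a population-based optimizer, here SciPy's differential evolution. The "gain" function is a smooth analytic placeholder for a Monte Carlo ray-tracing run, and the guide parameters and bounds are invented; only the optimization wiring is the point.

```python
import numpy as np
from scipy.optimize import differential_evolution

def neutron_gain(params):
    """Placeholder for a ray-tracing run returning the gain at the sample position.

    params = (channel_width_mm, curvature_m, m_coating); the analytic form below is
    purely illustrative -- in practice each call would launch a ray-tracing simulation.
    """
    width, curvature, m = params
    gain = (m / (1.0 + 0.2 * m ** 2)) * np.exp(-((width - 15.0) / 10.0) ** 2) \
           * np.exp(-((curvature - 60.0) / 40.0) ** 2)
    return -gain                      # minimize the negative gain

bounds = [(5.0, 40.0),    # channel width [mm]
          (20.0, 200.0),  # radius of curvature [m]
          (1.0, 6.0)]     # supermirror m-value

result = differential_evolution(neutron_gain, bounds, seed=1, maxiter=100, tol=1e-6)
print("best parameters:", result.x, "gain:", -result.fun)
```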

  10. FACE Analysis as a Fast and Reliable Methodology to Monitor the Sulfation and Total Amount of Chondroitin Sulfate in Biological Samples of Clinical Importance

    Directory of Open Access Journals (Sweden)

    Evgenia Karousou

    2014-06-01

    Glycosaminoglycans (GAGs), due to their hydrophilic character and high anionic charge densities, play important roles in various (patho)physiological processes. The identification and quantification of GAGs in biological samples and tissues could be useful prognostic and diagnostic tools in pathological conditions. Despite the noteworthy progress in the development of sensitive and accurate methodologies for the determination of GAGs, there is a significant lack in methodologies regarding sample preparation and reliable fast analysis methods enabling the simultaneous analysis of several biological samples. In this report, developed protocols for the isolation of GAGs in biological samples were applied to analyze various sulfated chondroitin sulfate- and hyaluronan-derived disaccharides using fluorophore-assisted carbohydrate electrophoresis (FACE). Applications to biological samples of clinical importance include blood serum, lens capsule tissue and urine. The sample preparation protocol followed by FACE analysis allows quantification with an optimal linearity over the concentration range 1.0–220.0 µg/mL, affording a limit of quantitation of 50 ng of disaccharides. Validation of FACE results was performed by capillary electrophoresis and high performance liquid chromatography techniques.

  11. Performance evaluation of an importance sampling technique in a Jackson network

    Science.gov (United States)

    Mahdipour, Ebrahim; Masoud Rahmani, Amir; Setayeshi, Saeed

    2014-03-01

    Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network settings. Estimating probabilities associated with rare events has been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare event probability in a Jackson network. The article considers strict deadlines in a two-node Jackson network with feedback whose arrival and service rates are modulated by an exogenous finite-state Markov process. We have estimated the probability of network blocking for various sets of parameters, and also the probability of missing the deadline of customers for different loads and deadlines. We have finally shown that the probability of total population overflow may be affected by various deadline values, service rates and arrival rates.
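
    The flavour of such rare-event estimators can be seen on the textbook single-queue case (much simpler than the modulated two-node network analysed in the article): the probability that an M/M/1 queue reaches a high level within a busy cycle is estimated by simulating under interchanged arrival and service rates and correcting each cycle with its likelihood ratio.

```python
import random

def overflow_prob_is(lam, mu, level, n_cycles=20000):
    """P(queue reaches `level` before emptying, starting from 1 customer),
    estimated under the classical 'interchange lambda and mu' change of measure."""
    p_orig = lam / (lam + mu)      # embedded-chain arrival probability, original measure
    p_tilt = mu / (lam + mu)       # tilted measure: arrival and service rates swapped
    total = 0.0
    for _ in range(n_cycles):
        q, lr = 1, 1.0
        while 0 < q < level:
            if random.random() < p_tilt:               # simulate under the tilted measure
                q += 1
                lr *= p_orig / p_tilt                  # likelihood-ratio update (arrival)
            else:
                q -= 1
                lr *= (1 - p_orig) / (1 - p_tilt)      # likelihood-ratio update (departure)
        if q == level:                                 # rare event reached in this cycle
            total += lr
    return total / n_cycles

lam, mu, level = 0.3, 1.0, 20
ratio = mu / lam
exact = (1 - ratio) / (1 - ratio ** level)             # gambler's-ruin benchmark
print(f"IS estimate: {overflow_prob_is(lam, mu, level):.3e}   exact: {exact:.3e}")
```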

  12. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

    The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely......, and subsequently reweight the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained...

  13. Optimal sampling in damage detection of flexural beams by continuous wavelet transform

    International Nuclear Information System (INIS)

    Basu, B; Broderick, B M; Montanari, L; Spagnoli, A

    2015-01-01

    Modern measurement techniques are improving in capability to capture spatial displacement fields occurring in deformed structures with high precision and in a quasi-continuous manner. This in turn has made the use of vibration-based damage identification methods more effective and reliable for real applications. However, practical measurement and data processing issues still present barriers to the application of these methods in identifying several types of structural damage. This paper deals with spatial Continuous Wavelet Transform (CWT) damage identification methods in beam structures with the aim of addressing the following key questions: (i) can the cost of damage detection be reduced by down-sampling? (ii) what is the minimum number of sampling intervals required for optimal damage detection? The first three free vibration modes of a cantilever and a simply supported beam with an edge open crack are numerically simulated. A thorough parametric study is carried out by taking into account the key parameters governing the problem, including level of noise, crack depth and location, mechanical and geometrical parameters of the beam. The results are employed to assess the optimal number of sampling intervals for effective damage detection. (paper)

  14. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, M; Li, R; Xing, L [Stanford UniversitySchool of Medicine, Stanford, CA (United States); Ye, Y [Stanford Univ, Management Science and Engineering, Stanford, Ca (United States); Boyd, S [Stanford University, Electrical Engineering, Stanford, CA (United States)

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system, (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. Algorithm continues by pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provide an effective way to optimize simultaneously the large collection of station parameters and significantly improves

  15. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    International Nuclear Information System (INIS)

    Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S

    2014-01-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system, (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. Algorithm continues by pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provide an effective way to optimize simultaneously the large collection of station parameters and significantly improves

  16. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, these methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated to select lower energy conformations as representative conformation structure replicas can facilitate the convergence of the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower energy conformations for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.

  17. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated, and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 deg C for 20 min. The samples were then injected in a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with flow rate of 1.0 mL min-1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher when compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  18. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    International Nuclear Information System (INIS)

    Oliveira, Karina B. de; Oliveira, Bras H. de

    2013-01-01

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated, and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 deg C for 20 min. The samples were then injected in a system containing a C 18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with flow rate of 1.0 mL min−1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher when compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  19. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N∗^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

    A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented as used for anatomy based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV. Quantities such as the Conformity Index (COIN) and COIN integrals are derived from the DVHs. This is achieved by using piecewise uniformly distributed sampling points, with a density in each region obtained from a survey of the gradients or the variance of the dose distribution in these regions. The shape of the sampling regions is adapted to the patient anatomy and the shape and size of the implant. For the application of this method a single preprocessing step is necessary which requires only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using 5-10 times fewer sampling points than with uniformly distributed points
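
    The core idea, sampling each region with a density matched to its dose variability, can be sketched with a Neyman-type allocation on a synthetic one-dimensional dose profile; the dose model, strata and threshold below are placeholders, not the clinical implants of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
dose = lambda r: 1.0 / r ** 2            # synthetic point-source dose profile (illustrative)

# Two strata along the radial coordinate: a thin high-gradient shell and a large far region.
strata = [(0.1, 1.0), (1.0, 10.0)]
lengths = np.array([hi - lo for lo, hi in strata])

# Pilot samples estimate the dose variability in each stratum.
pilot_sd = np.array([dose(rng.uniform(lo, hi, 200)).std() for lo, hi in strata])

# Neyman-type allocation: points proportional to (stratum size) x (dose std deviation),
# so the steep region near the source gets a higher sampling density.
n_total = 2000
alloc = np.maximum((lengths * pilot_sd / (lengths * pilot_sd).sum() * n_total).astype(int), 1)

# Stratified estimate of the volume fraction receiving more than a threshold dose.
threshold = 0.5
fractions = [np.mean(dose(rng.uniform(lo, hi, n)) > threshold)
             for (lo, hi), n in zip(strata, alloc)]
v_frac = np.sum(lengths / lengths.sum() * np.array(fractions))
print("points per stratum:", alloc, " fraction of volume with dose > 0.5:", v_frac)
```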

  1. Optimal grade control sampling practice in open-pit mining

    DEFF Research Database (Denmark)

    Engström, Karin; Esbensen, Kim Harry

    2017-01-01

    Misclassification of ore grades results in lost revenues, and the need for representative sampling procedures in open pit mining is increasingly important in all mining industries. This study evaluated possible improvements in sampling representativity with the use of Reverse Circulation (RC) drill...... sampling compared to manual Blast Hole (BH) sampling in the Leveäniemi open pit mine, northern Sweden. The variographic experiment results showed that sampling variability was lower for RC than for BH sampling. However, the total costs for RC drill sampling are significantly exceeding current costs...... for manual BH sampling, which needs to be compensated for by other benefits to motivate introduction of RC drilling. The main conclusion is that manual BH sampling can be fit-for-purpose in the studied open pit mine. However, with so many mineral commodities and mining methods in use globally...

  2. Adaptive control of theophylline therapy: importance of blood sampling times.

    Science.gov (United States)

    D'Argenio, D Z; Khakmahd, K

    1983-10-01

    A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.

  3. The study of importance sampling in Monte-carlo calculation of blocking dips

    International Nuclear Information System (INIS)

    Pan Zhengying; Zhou Peng

    1988-01-01

    Angular blocking dips around the axis in an Al single crystal of α-particles of about 2 MeV produced at a depth of 0.2 μm are calculated by a Monte-Carlo simulation. The influence of the small solid angle emission of particles and the importance sampling in the solid angle emission have been investigated. By means of importance sampling, more reasonable results with high accuracy are obtained.
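
    To illustrate the mechanics of importance sampling over a small emission cone (the blocking-dip physics itself is not modelled), the sketch below draws emission directions preferentially near the axis and re-weights them so that weighted estimates agree with uniform emission over the cone; the cone half-angle and cut-off are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(11)

THETA_MAX = np.radians(2.0)          # half-angle of the emission cone
A = 1.0 - np.cos(THETA_MAX)          # solid-angle "width" in terms of cos(theta)
N = 50000

# Uniform emission over the cone: cos(theta) ~ Uniform(1 - A, 1), every history weight 1.
cos_uniform = 1.0 - A * rng.random(N)

# Importance sampling: push cos(theta) toward the axis with cos = 1 - A*u^2 (u ~ U(0,1)),
# and weight each history by (uniform density) / (proposal density).
u = rng.random(N)
cos_is = 1.0 - A * u ** 2
weights = 2.0 * np.sqrt((1.0 - cos_is) / A)      # analytic density ratio for this proposal

# Compare histories landing within 0.2 degrees of the axis (where the dip is resolved).
near_axis = np.cos(np.radians(0.2))
frac_uniform = np.mean(cos_uniform > near_axis)
frac_is_weighted = np.mean((cos_is > near_axis) * weights)
print(f"uniform sampling : fraction near axis {frac_uniform:.4f} "
      f"({np.sum(cos_uniform > near_axis)} histories)")
print(f"importance sampl.: weighted fraction  {frac_is_weighted:.4f} "
      f"({np.sum(cos_is > near_axis)} histories contributing)")
```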

  4. The importance of functional form in optimal control solutions of problems in population dynamics

    Science.gov (United States)

    Runge, M.C.; Johnson, F.A.

    2002-01-01

    Optimal control theory is finding increased application in both theoretical and applied ecology, and it is a central element of adaptive resource management. One of the steps in an adaptive management process is to develop alternative models of system dynamics, models that are all reasonable in light of available data, but that differ substantially in their implications for optimal control of the resource. We explored how the form of the recruitment and survival functions in a general population model for ducks affected the patterns in the optimal harvest strategy, using a combination of analytical, numerical, and simulation techniques. We compared three relationships between recruitment and population density (linear, exponential, and hyperbolic) and three relationships between survival during the nonharvest season and population density (constant, logistic, and one related to the compensatory harvest mortality hypothesis). We found that the form of the component functions had a dramatic influence on the optimal harvest strategy and the ultimate equilibrium state of the system. For instance, while it is commonly assumed that a compensatory hypothesis leads to higher optimal harvest rates than an additive hypothesis, we found this to depend on the form of the recruitment function, in part because of differences in the optimal steady-state population density. This work has strong direct consequences for those developing alternative models to describe harvested systems, but it is relevant to a larger class of problems applying optimal control at the population level. Often, different functional forms will not be statistically distinguishable in the range of the data. Nevertheless, differences between the functions outside the range of the data can have an important impact on the optimal harvest strategy. Thus, development of alternative models by identifying a single functional form, then choosing different parameter combinations from extremes on the likelihood

  5. Hybrid algorithm of ensemble transform and importance sampling for assimilation of non-Gaussian observations

    Directory of Open Access Journals (Sweden)

    Shin'ya Nakano

    2014-05-01

    A hybrid algorithm that combines the ensemble transform Kalman filter (ETKF) and the importance sampling approach is proposed. Since the ETKF assumes a linear Gaussian observation model, the estimate obtained by the ETKF can be biased in cases with nonlinear or non-Gaussian observations. The particle filter (PF) is based on the importance sampling technique, and is applicable to problems with nonlinear or non-Gaussian observations. However, the PF usually requires an unrealistically large sample size in order to achieve a good estimation, and thus it is computationally prohibitive. In the proposed hybrid algorithm, we obtain a proposal distribution similar to the posterior distribution by using the ETKF. A large number of samples are then drawn from the proposal distribution, and these samples are weighted to approximate the posterior distribution according to the importance sampling principle. Since the importance sampling provides an estimate of the probability density function (PDF) without assuming linearity or Gaussianity, we can resolve the bias due to the nonlinear or non-Gaussian observations. Finally, in the next forecast step, we reduce the sample size to achieve computational efficiency based on the Gaussian assumption, while we use a relatively large number of samples in the importance sampling in order to consider the non-Gaussian features of the posterior PDF. The use of the ETKF is also beneficial in terms of the computational simplicity of generating a number of random samples from the proposal distribution and in weighting each of the samples. The proposed algorithm is not necessarily effective in cases where the ensemble is located far from the true state. However, monitoring the effective sample size and tuning the factor for covariance inflation could resolve this problem. In this paper, the proposed hybrid algorithm is introduced and its performance is evaluated through experiments with non-Gaussian observations.
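
    A stripped-down illustration of the weighting step only (the ETKF update itself is not reproduced): samples drawn from a Gaussian proposal built from an analysis ensemble are re-weighted by a non-Gaussian (here Laplace) observation likelihood, and the effective sample size indicates whether the weights have degenerated. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Proposal approximating the posterior: a Gaussian built from an (ETKF-like) analysis ensemble.
proposal_mean, proposal_sd = 1.2, 0.6
n_samples = 5000
x = rng.normal(proposal_mean, proposal_sd, n_samples)

# Non-Gaussian observation model: Laplace (double-exponential) errors around y_obs.
y_obs, b = 1.0, 0.3
log_likelihood = lambda s: -np.abs(y_obs - s) / b

# Prior implied by the forecast ensemble (Gaussian here for simplicity).
prior_mean, prior_sd = 0.8, 1.0
log_prior = lambda s: -0.5 * ((s - prior_mean) / prior_sd) ** 2
log_proposal = lambda s: -0.5 * ((s - proposal_mean) / proposal_sd) ** 2

# Importance weights: (prior x likelihood) / proposal, up to constants that cancel
# once the weights are normalized.
log_w = log_prior(x) + log_likelihood(x) - log_proposal(x)
w = np.exp(log_w - log_w.max())
w /= w.sum()

posterior_mean = np.sum(w * x)
ess = 1.0 / np.sum(w ** 2)          # effective sample size: monitor for weight degeneracy
print(f"weighted posterior mean: {posterior_mean:.3f}   ESS: {ess:.0f} / {n_samples}")
```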

  6. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Vol. 19, No. 30 (2012), pp. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords: Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  7. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be generally applied to sampling rare events efficiently while avoiding being trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.

  8. Is It Feasible for China to Optimize Oil Import Source Diversification?

    Directory of Open Access Journals (Sweden)

    Jian Xu

    2014-11-01

    In 2013, China imported 282 million tons of crude oil with an external dependence of 58.1%, surpassing the USA as the world’s largest net oil importer. An import source diversification strategy has been adopted by China to ensure oil supply security and to prevent oil supply disruption. However, the strategy is restricted by the imbalance of oil reserves. What is the reasonable and clear objective of the diversification strategy under an imbalanced environment? How do we assess the natural imbalance? This paper analyzes the oil import diversification of China and the USA, as well as the oil production of oil-exporting countries, by the oil import source diversification index (OISDI). Our results are as follows: the distribution of oil import sources for China tends to coincide with the oil production distribution of oil exporters in the world. Compared with the USA, China has more diversified import sources. The Chinese government paid much attention to import sources in the past. In the future, China will adjust the distributions of regional sources rather than focus on the number of sources to further optimize the structure of imported regions in the course of implementing the import source diversification strategy.

  9. Sample preparation optimization in fecal metabolic profiling.

    Science.gov (United States)

    Deda, Olga; Chatziioannou, Anastasia Chrysovalantou; Fasoula, Stella; Palachanis, Dimitris; Raikos, Nicolaos; Theodoridis, Georgios A; Gika, Helen G

    2017-03-15

    Metabolomic analysis of feces can provide useful insight on the metabolic status, the health/disease state of the human/animal and the symbiosis with the gut microbiome. As a result, there has recently been increased interest in the application of holistic analysis of feces for biomarker discovery. For metabolomics applications, the sample preparation process used prior to the analysis of fecal samples is of high importance, as it greatly affects the obtained metabolic profile, especially since feces, as a matrix, vary widely in their physicochemical characteristics and molecular content. However, there is still little information in the literature and a lack of a universal approach on sample treatment for fecal metabolic profiling. The scope of the present work was to study the conditions for sample preparation of rat feces with the ultimate goal of the acquisition of comprehensive metabolic profiles either untargeted by NMR spectroscopy and GC-MS or targeted by HILIC-MS/MS. A fecal sample pooled from male and female Wistar rats was extracted under various conditions by modifying the pH value, the nature of the organic solvent and the sample weight to solvent volume ratio. It was found that the 1/2 (wf/vs) ratio provided the highest number of metabolites under neutral and basic conditions in both untargeted profiling techniques. Concerning LC-MS profiles, neutral acetonitrile and propanol provided higher signals and wide metabolite coverage, though extraction efficiency is metabolite dependent. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Optimization of linear consecutive-k-out-of-n system with a Birnbaum importance-based genetic algorithm

    International Nuclear Information System (INIS)

    Cai, Zhiqiang; Si, Shubin; Sun, Shudong; Li, Caitao

    2016-01-01

    The optimization of a linear consecutive-k-out-of-n (Lin/Con/k/n) system is to find an optimal component arrangement where n components are assigned to n positions to maximize the system reliability. With the interchangeability of components in practical systems, the optimization of Lin/Con/k/n systems is becoming widely applied in engineering practice, and it is a typical component assignment problem of concern to many researchers. This paper proposes a Birnbaum importance-based genetic algorithm (BIGA) to search for the near-global optimal solution for Lin/Con/k/n systems. First, the operation procedures and corresponding execution methods of BIGA are described in detail. Then, comprehensive simulation experiments are implemented on both small and large systems to evaluate the performance of the BIGA by comparing with the Birnbaum importance-based two-stage approach and Birnbaum importance-based genetic local search algorithm. Thirdly, further experiments are provided to discuss the applicability of BIGA for Lin/Con/k/n systems with different k and n. Finally, a case study on an oil transportation system is implemented to demonstrate the application of BIGA in the optimization of a Lin/Con/k/n system. - Highlights: • BIGA integrates BI and GA to solve the Lin/Con/k/n systems optimization problems. • The experiment results show that the BIGA performs well in most conditions. • Suggestions are given for the application of BIGA and BITA with different k and n. • The application procedure of BIGA is demonstrated by the oil transportation system.
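
    To make the component-assignment problem concrete, the sketch below evaluates a Lin/Con/k/n:F system (which fails if and only if k consecutive components fail) by brute-force enumeration and improves the arrangement with random swaps. It is a simple stand-in for the BIGA, usable only for small n; the component reliabilities are invented.

```python
import itertools, random

def system_reliability(p, k):
    """Reliability of a Lin/Con/k/n:F system (fails iff k consecutive components fail).
    Brute-force enumeration over component states -- fine for small n only."""
    n = len(p)
    rel = 0.0
    for states in itertools.product([0, 1], repeat=n):     # 1 = working, 0 = failed
        # Skip patterns containing a run of k consecutive failures.
        run, failed_run = 0, False
        for s in states:
            run = run + 1 if s == 0 else 0
            if run >= k:
                failed_run = True
                break
        if failed_run:
            continue
        prob = 1.0
        for s, pi in zip(states, p):
            prob *= pi if s == 1 else (1.0 - pi)
        rel += prob
    return rel

def optimize_arrangement(p, k, n_iter=2000, seed=1):
    """Random-swap search over component positions (a stand-in for the BIGA)."""
    random.seed(seed)
    best = list(p)
    best_rel = system_reliability(best, k)
    for _ in range(n_iter):
        i, j = random.sample(range(len(p)), 2)
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]
        cand_rel = system_reliability(cand, k)
        if cand_rel > best_rel:
            best, best_rel = cand, cand_rel
    return best, best_rel

components = [0.70, 0.75, 0.80, 0.85, 0.90, 0.95, 0.99, 0.60]
arrangement, rel = optimize_arrangement(components, k=2)
print("best arrangement:", arrangement, " system reliability:", round(rel, 4))
```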

  11. Evaluation of sample preparation methods and optimization of nickel determination in vegetable tissues

    Directory of Open Access Journals (Sweden)

    Rodrigo Fernando dos Santos Salazar

    2011-02-01

    Nickel, although essential to plants, may be toxic to plants and animals. It is mainly assimilated by food ingestion. However, information about the average levels of elements (including Ni) in edible vegetables from different regions is still scarce in Brazil. The objectives of this study were to: (a) evaluate and optimize a method for preparation of vegetable tissue samples for Ni determination; (b) optimize the analytical procedures for determination by Flame Atomic Absorption Spectrometry (FAAS) and by Electrothermal Atomic Absorption (ETAAS) in vegetable samples and (c) determine the Ni concentration in vegetables consumed in the cities of Lorena and Taubaté in the Vale do Paraíba, State of São Paulo, Brazil. By means of the analytical technique for determination by ETAAS or FAAS, the results were validated by the test of analyte addition and recovery. The most viable method tested for quantification of this element was HClO4-HNO3 wet digestion. All samples but carrot tissue collected in Lorena contained Ni levels above the limits permitted by the Brazilian Ministry of Health. The most disturbing results, requiring more detailed studies, were the Ni concentrations measured in carrot samples from Taubaté, where levels were five times higher than permitted by Brazilian regulations.

  12. Neuro-genetic system for optimization of GMI samples sensitivity.

    Science.gov (United States)

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages and problems get easily out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results turned out to be promising.

  14. The Proteome of Ulcerative Colitis in Colon Biopsies from Adults - Optimized Sample Preparation and Comparison with Healthy Controls.

    Science.gov (United States)

    Schniers, Armin; Anderssen, Endre; Fenton, Christopher Graham; Goll, Rasmus; Pasing, Yvonne; Paulssen, Ruth Hracky; Florholmen, Jon; Hansen, Terkel

    2017-12-01

    The purpose of the study was to optimize the sample preparation and to further use an improved sample preparation to identify proteome differences between inflamed ulcerative colitis tissue from untreated adults and healthy controls. To optimize the sample preparation, we studied the effect of adding different detergents to a urea containing lysis buffer for a Lys-C/trypsin tandem digestion. With the optimized method, we prepared clinical samples from six ulcerative colitis patients and six healthy controls and analysed them by LC-MS/MS. We examined the acquired data to identify differences between the states. We improved the protein extraction and protein identification number by utilizing a urea and sodium deoxycholate containing buffer. Comparing ulcerative colitis and healthy tissue, we found 168 of 2366 identified proteins differently abundant. Inflammatory proteins are higher abundant in ulcerative colitis, proteins related to anion-transport and mucus production are lower abundant. A high proportion of S100 proteins is differently abundant, notably with both up-regulated and down-regulated proteins. The optimized sample preparation method will improve future proteomic studies on colon mucosa. The observed protein abundance changes and their enrichment in various groups improve our understanding of ulcerative colitis on protein level. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Screening of variable importance for optimizing electrodialytic remediation of heavy metals from polluted harbour sediments

    DEFF Research Database (Denmark)

    Pedersen, Kristine B.; Lejon, Tore; Ottosen, Lisbeth M.

    2015-01-01

    Using multivariate design and modelling, the optimal conditions for electrodialytic remediation (EDR) of heavy metals were determined for polluted harbour sediments from Hammerfest harbour located in the geographic Arctic region of Norway. The comparative importance of the variables, current......) was computed and variable importance in the projection was used to assess the influence of the experimental variables. Current density and remediation time proved to have the highest influence on the remediation of the heavy metals Cr, Cu, Ni, Pb and Zn in the studied experimental domain. In addition......, it was shown that excluding the acidification time improved the PLS model, indicating the importance of applying a limited experimental domain that covers the removal phases of each heavy metal in the specific sediment. Based on PLS modelling, the optimal conditions for remediating the Hammerfest sediment were...

  16. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information about the variables is extracted with some pre-sampling of points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with the combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
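
    The adaptive recipe described here can be illustrated on a toy limit-state function: failure points found in a crude pre-sampling stage are used to centre a Gaussian importance density, from which a weighted estimate of the failure probability is computed. The limit state and input distributions below are illustrative, not the AP1000 model.

```python
import numpy as np

rng = np.random.default_rng(0)

def limit_state(x):
    """g <= 0 defines failure; a toy stand-in for the passive-system response model."""
    return 4.0 - x[:, 0] - x[:, 1]

# Nominal input distribution: independent standard normals (log-density up to a constant).
def nominal_logpdf(x):
    return -0.5 * np.sum(x ** 2, axis=1)

# Stage 1: crude pre-sampling to locate points in the failure region.
pre = rng.standard_normal((200000, 2))
fail_pts = pre[limit_state(pre) <= 0.0]

# Stage 2: build the importance density from the failure-region sample
# (Gaussian centred on the failure points' mean, with their empirical covariance).
mu = fail_pts.mean(axis=0)
cov = np.cov(fail_pts.T) + 1e-6 * np.eye(2)
cov_inv = np.linalg.inv(cov)
logdet = np.log(np.linalg.det(cov))

def is_logpdf(x):
    d = x - mu
    return -0.5 * (np.einsum('ij,jk,ik->i', d, cov_inv, d) + logdet)

# Stage 3: importance-sampling estimate of the failure probability.
n = 20000
xs = rng.multivariate_normal(mu, cov, size=n)
w = np.exp(nominal_logpdf(xs) - is_logpdf(xs))
p_fail = np.mean((limit_state(xs) <= 0.0) * w)
print(f"estimated failure probability: {p_fail:.2e}")
```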

  17. On the optimal sampling of bandpass measurement signals through data acquisition systems

    International Nuclear Information System (INIS)

    Angrisani, L; Vadursi, M

    2008-01-01

    Data acquisition systems (DAS) play a fundamental role in a lot of modern measurement solutions. One of the parameters characterizing a DAS is its maximum sample rate, which imposes constraints on the signals that can be alias-free digitized. Bandpass sampling theory singles out separated ranges of admissible sample rates, which can be significantly lower than carrier frequency. But, how to choose the most convenient sample rate according to the purpose at hand? The paper proposes a method for the automatic selection of the optimal sample rate in measurement applications involving bandpass signals; the effects of sample clock instability and limited resolution are also taken into account. The method allows the user to choose the location of spectral replicas of the sampled signal in terms of normalized frequency, and the minimum guard band between replicas, thus introducing a feature that no DAS currently available on the market seems to offer. A number of experimental tests on bandpass digitally modulated signals are carried out to assess the concurrence of the obtained central frequency with the expected one
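
    The admissible rates follow from the classical uniform bandpass sampling condition 2*f_H/n <= f_s <= 2*f_L/(n-1). The helper below lists the valid ranges and picks the lowest rate that leaves a requested guard margin; the selection rule is a simplified stand-in for the replica-placement criterion proposed in the paper, and the example signal is invented.

```python
import math

def valid_bandpass_rates(f_low, f_high):
    """Admissible uniform-sampling rate ranges for a signal occupying [f_low, f_high]:
    2*f_high/n <= fs <= 2*f_low/(n-1), for n = 1 .. floor(f_high / bandwidth)."""
    bandwidth = f_high - f_low
    n_max = math.floor(f_high / bandwidth)
    ranges = []
    for n in range(1, n_max + 1):
        lo = 2.0 * f_high / n
        hi = 2.0 * f_low / (n - 1) if n > 1 else float("inf")
        if lo <= hi:
            ranges.append((n, lo, hi))
    return ranges

def pick_rate(f_low, f_high, guard_fraction=0.1):
    """Pick the lowest admissible rate after shrinking each range by a guard margin
    (a simplified stand-in for the replica-placement criterion in the paper)."""
    guard = guard_fraction * (f_high - f_low)
    candidates = []
    for n, lo, hi in valid_bandpass_rates(f_low, f_high):
        lo_g = lo + 2 * guard
        hi_g = hi - 2 * guard if math.isfinite(hi) else hi
        if lo_g <= hi_g:
            candidates.append(lo_g)
    return min(candidates)

# Example: a 5 MHz-wide signal centred at 70 MHz.
f_low, f_high = 67.5e6, 72.5e6
for n, lo, hi in valid_bandpass_rates(f_low, f_high):
    print(f"zone n={n}: {lo/1e6:.2f} - {hi/1e6:.2f} MHz")
print("selected sample rate:", pick_rate(f_low, f_high) / 1e6, "MHz")
```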

  18. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.

  19. Optimization of Decision-Making for Spatial Sampling in the North China Plain, Based on Remote-Sensing a Priori Knowledge

    Science.gov (United States)

    Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.

    2012-07-01

    In this paper, MODIS remote sensing data, characterized by low cost, high timeliness and moderate/low spatial resolution, were first used in the North China Plain (NCP) study region to carry out mixed-pixel spectral decomposition and extract a useful regionalized indicator parameter (RIP) (i.e., the fraction/percentage of winter wheat planting area in each pixel, used as a regionalized indicator variable (RIV) for spatial sampling) from the initially selected indicators. The RIV values were then analyzed spatially to obtain the spatial structure characteristics (i.e., spatial correlation and variation) of the NCP, which were further processed to yield scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, based on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge, spatial sampling models and design schemes and their optimization and optimal selection were developed, providing a scientific basis for improving and optimizing existing spatial sampling schemes for large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, the optimal local spatial prediction and the gridded system of extrapolation results implement an adaptive reporting pattern of spatial sampling in accordance with report-covering units, in order to satisfy the actual needs of sampling surveys.

  20. Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples

    Directory of Open Access Journals (Sweden)

    Hyunok Oh

    2003-05-01

    In multimedia and graphics applications, data samples of nonprimitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model with the given execution order of nodes. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store the pointers to the global buffer entries. The proposed algorithm reduces memory by 67% for a JPEG encoder and by 40% for an H.263 encoder compared with unshared versions, and by 22% compared with the previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable with manually optimized code in terms of memory requirement.

  1. Towards an optimal sampling strategy for assessing genetic variation within and among white clover (Trifolium repens L. cultivars using AFLP

    Directory of Open Access Journals (Sweden)

    Khosro Mehdi Khanlou

    2011-01-01

    Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants, such as the aforementioned, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using fewer than 15 samples per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of total intra-cultivar genetic variation was covered. Based on AMOVA, a 20-cultivar sample was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.

  2. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    Science.gov (United States)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iteration is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one
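
    A compact sketch of one hit-and-run step for a near-optimal region: pick a random direction, find feasible points along that line (here by a simple grid scan against box bounds and the near-optimality constraint, rather than the paper's null-space transformation and slice sampling), and jump to a random one of them. The objective, bounds and tolerance are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(x):
    """Toy non-convex objective to be minimized (placeholder for a water-systems model)."""
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2 + 0.5 * np.sin(3 * x[0]) * np.sin(3 * x[1])

LOWER, UPPER = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
F_OPT = objective(np.array([1.0, 2.0]))            # approximate optimum (illustrative)
TOL = 1.25                                         # near-optimal tolerance on the objective

def feasible(x):
    return np.all(x >= LOWER) and np.all(x <= UPPER) and objective(x) <= F_OPT + TOL

def hit_and_run(x0, n_alternatives=500, n_grid=200):
    """Generate near-optimal alternatives by hit-and-run starting from a feasible point."""
    alternatives, x = [], np.array(x0, dtype=float)
    for _ in range(n_alternatives):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)                     # random direction on the unit sphere
        # Scan the line x + t*d on a fine grid and keep the feasible points on it.
        ts = np.linspace(-20.0, 20.0, n_grid)
        feas_ts = [t for t in ts if feasible(x + t * d)]
        if feas_ts:
            x = x + rng.choice(feas_ts) * d        # random feasible point on the line
        alternatives.append(x.copy())
    return np.array(alternatives)

samples = hit_and_run([1.0, 2.0])
print("generated", len(samples), "near-optimal alternatives; objective range:",
      round(min(objective(s) for s in samples), 3), "-",
      round(max(objective(s) for s in samples), 3))
```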

  3. Food and feed safety assessment: the importance of proper sampling.

    Science.gov (United States)

    Kuiper, Harry A; Paoletti, Claudia

    2015-01-01

    The general principles for safety and nutritional evaluation of foods and feed and the potential health risks associated with hazardous compounds are described as developed by the Food and Agriculture Organization (FAO) and the World Health Organization (WHO) and further elaborated in the European Union-funded project Safe Foods. We underline the crucial role of sampling in foods/feed safety assessment. High quality sampling should always be applied to ensure the use of adequate and representative samples as test materials for hazard identification, toxicological and nutritional characterization of identified hazards, as well as for estimating quantitative and reliable exposure levels of foods/feed or related compounds of concern for humans and animals. The importance of representative sampling is emphasized through examples of risk analyses in different areas of foods/feed production. The Theory of Sampling (TOS) is recognized as the only framework within which to ensure accuracy and precision of all sampling steps involved in the field-to-fork continuum, which is crucial to monitor foods and feed safety. Therefore, TOS must be integrated in the well-established FAO/WHO risk assessment approach in order to guarantee a transparent and correct frame for the risk assessment and decision making process.

  4. Development and optimization of the determination of pharmaceuticals in water samples by SPE and HPLC with diode-array detection.

    Science.gov (United States)

    Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra

    2013-09-01

    This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE and extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges to extract the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, elution solvent, and sorbent mass were optimized. In addition to the optimization of the SPE procedure, the optimal HPLC column was selected from columns with different stationary phases from different manufacturers. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases, except for ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good precision (intra- and inter-day) with RSD below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Improved importance sampling technique for efficient simulation of digital communication systems

    Science.gov (United States)

    Lu, Dingqing; Yao, Kung

    1988-01-01

    A new, improved importance sampling (IIS) approach to simulation is considered. Some basic concepts of IS are introduced, and detailed evaluations of the simulation estimation variances for Monte Carlo (MC) and IS simulations are given. The general results obtained from these evaluations are applied to the specific previously known conventional importance sampling (CIS) technique and the new IIS technique. The derivation for a linear system with no signal random memory is considered in some detail. For the CIS technique, the optimum input scaling parameter is found, while for the IIS technique, the optimum translation parameter is found. The results are generalized to a linear system with memory and signals. Specific numerical and simulation results are given which show the advantages of CIS over MC and IIS over CIS for simulations of digital communications systems.
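
    A minimal sketch contrasting plain Monte Carlo with translation-based biasing of the kind discussed above, for a small tail (error) probability; the Gaussian noise model and threshold are illustrative, not the systems analysed in the record:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      thr, n = 5.0, 100_000                       # decision threshold and sample size
      true_p = stats.norm.sf(thr)                 # exact tail probability, for reference

      # plain Monte Carlo: very few samples fall beyond the threshold
      x_mc = rng.normal(size=n)
      p_mc = np.mean(x_mc > thr)

      # importance sampling: translate the simulation density to the threshold
      x_is = rng.normal(loc=thr, size=n)
      w = stats.norm.pdf(x_is) / stats.norm.pdf(x_is, loc=thr)   # likelihood ratios
      p_is = np.mean((x_is > thr) * w)

      print(true_p, p_mc, p_is)

    With a translated density essentially every sample contributes information about the rare event, whereas the plain Monte Carlo estimate is usually zero at this sample size.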

  6. [Sampling optimization for tropical invertebrates: an example using dung beetles (Coleoptera: Scarabaeinae) in Venezuela].

    Science.gov (United States)

    Ferrer-Paris, José Rafael; Sánchez-Mercado, Ada; Rodríguez, Jon Paul

    2013-03-01

    The development of efficient sampling protocols is an essential prerequisite to evaluate and identify priority conservation areas. There are few protocols for fauna inventory and monitoring at wide geographical scales in the tropics, where the complexity of communities and high biodiversity levels make the implementation of efficient protocols more difficult. We propose here a simple strategy to optimize the capture of dung beetles, applied to sampling with baited traps and generalizable to other sampling methods. We analyzed data from eight transects sampled between 2006-2008 with the aim of developing a uniform sampling design that allows confident estimation of species richness, abundance and composition at wide geographical scales. We examined four characteristics of any sampling design that affect the effectiveness of the sampling effort: the number of traps, sampling duration, type and proportion of bait, and spatial arrangement of the traps along transects. We used species accumulation curves, rank-abundance plots, indicator species analysis, and multivariate correlograms. We captured 40 337 individuals (115 species/morphospecies of 23 genera). Most species were attracted by both dung and carrion, but two thirds had greater relative abundance in traps baited with human dung. Different aspects of the sampling design influenced each diversity attribute in different ways. To obtain reliable richness estimates, the number of traps was the most important aspect. Accurate abundance estimates were obtained when the sampling period was increased, while the spatial arrangement of traps was determinant to capture the species composition pattern. An optimum sampling strategy for accurate estimates of richness, abundance and diversity should: (1) set 50-70 traps to maximize the number of species detected, (2) get samples during 48-72 hours and set trap groups along the transect to reliably estimate species abundance, (3) set traps in groups of at least 10 traps to

  7. Wildlife Conservation Planning Using Stochastic Optimization and Importance Sampling

    Science.gov (United States)

    Robert G. Haight; Laurel E. Travis

    1997-01-01

    Formulations for determining conservation plans for sensitive wildlife species must account for economic costs of habitat protection and uncertainties about how wildlife populations will respond. This paper describes such a formulation and addresses the computational challenge of solving it. The problem is to determine the cost-efficient level of habitat protection...

  8. An Importance Sampling Simulation Method for Bayesian Decision Feedback Equalizers

    OpenAIRE

    Chen, S.; Hanzo, L.

    2000-01-01

    An importance sampling (IS) simulation technique is presented for evaluating the lower-bound bit error rate (BER) of the Bayesian decision feedback equalizer (DFE) under the assumption of correct decisions being fed back. A design procedure is developed, which chooses appropriate bias vectors for the simulation density to ensure asymptotic efficiency of the IS simulation.

  9. An Optimal Sample Data Usage Strategy to Minimize Overfitting and Underfitting Effects in Regression Tree Models Based on Remotely-Sensed Data

    Directory of Open Access Journals (Sweden)

    Yingxin Gu

    2016-11-01

    Full Text Available Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, value from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MADtraining = 2.5 and MADtesting = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
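
    A minimal sketch of the replication idea described above, using scikit-learn's DecisionTreeRegressor (with a leaf-count cap standing in for the rule-count constraint) and synthetic data; none of the values are from the study:

      import numpy as np
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)
      X = rng.uniform(size=(2000, 6))                                   # stand-in predictor bands
      y = 100 + 50 * X[:, 0] - 30 * X[:, 1] + rng.normal(0, 5, 2000)    # stand-in scaled NDVI

      results = {}
      for frac in (0.2, 0.5, 0.8):                      # model development data sizes
          for leaves in (2, 6, 20, 100):                # complexity cap (proxy for rule count)
              mads = []
              for rep in range(20):                     # randomized replications
                  Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=frac, random_state=rep)
                  tree = DecisionTreeRegressor(max_leaf_nodes=leaves, random_state=rep).fit(Xtr, ytr)
                  mads.append(np.mean(np.abs(tree.predict(Xte) - yte)))
              results[(frac, leaves)] = (np.mean(mads), np.std(mads))   # accuracy and stability

      best = min(results, key=lambda k: results[k][0])
      print(best, results[best])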

  10. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  11. Adaptive importance sampling for probabilistic validation of advanced driver assistance systems

    NARCIS (Netherlands)

    Gietelink, O.J.; Schutter, B. de; Verhaegen, M.

    2006-01-01

    We present an approach for validation of advanced driver assistance systems, based on randomized algorithms. The new method consists of an iterative randomized simulation using adaptive importance sampling. The randomized algorithm is more efficient than conventional simulation techniques. The
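
    A minimal sketch of an iterative, adaptive importance sampling loop in the spirit of the approach above, using a cross-entropy-style update of a Gaussian proposal; the scalar scenario parameter, nominal distribution, and failure threshold are illustrative stand-ins for the driver-assistance scenarios:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(3)
      mu, sigma, n = 0.0, 1.0, 5000            # nominal scenario distribution N(0, 1)
      fail_thr = 4.0                           # "failure" if the parameter exceeds this

      prop_mu = mu
      for it in range(5):                      # adapt the proposal mean toward the failure region
          x = rng.normal(prop_mu, sigma, n)
          w = stats.norm.pdf(x, mu, sigma) / stats.norm.pdf(x, prop_mu, sigma)
          gamma = min(np.quantile(x, 0.9), fail_thr)       # elite level, capped at the threshold
          elite = x >= gamma
          prop_mu = np.average(x[elite], weights=w[elite])

      x = rng.normal(prop_mu, sigma, n)        # final estimation run with the adapted proposal
      w = stats.norm.pdf(x, mu, sigma) / stats.norm.pdf(x, prop_mu, sigma)
      p_fail = np.mean((x >= fail_thr) * w)
      print(prop_mu, p_fail, stats.norm.sf(fail_thr, mu, sigma))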

  12. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of Random Forests' features: variable importance, outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about a half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
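
    A minimal sketch of the two Random Forests features mentioned above, assuming scikit-learn; the synthetic composite bands and labels are illustrative, and out-of-bag disagreement is used here as a simple proxy for the outlier screening applied when transferring samples between years:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(4)
      X = rng.normal(size=(1000, 24))                     # e.g. 24 time-series composite variables
      y = (X[:, 3] + 0.5 * X[:, 10] > 0).astype(int)      # labels driven by a few variables

      rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0).fit(X, y)

      # 1) variable importance: keep the smallest subset covering ~90% of total importance
      order = np.argsort(rf.feature_importances_)[::-1]
      cum = np.cumsum(rf.feature_importances_[order])
      subset = order[: np.searchsorted(cum, 0.9) + 1]
      print("selected variables:", subset)

      # 2) sample screening for transfer: drop samples whose out-of-bag prediction
      #    disagrees with their (transferred) label
      keep = rf.oob_decision_function_.argmax(axis=1) == y
      print("samples kept for transfer:", keep.sum(), "of", len(y))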

  13. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    Directory of Open Access Journals (Sweden)

    Arnaud Chapon

    Full Text Available Searching for low-energy β contamination in industrial environments requires Liquid Scintillation Counting. This indirect measurement method requires fine control from sampling through to the measurement itself. Thus, in this paper, we focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, optimization of counting parameters and definition of energy windows, using the maximization of a Figure of Merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except for those relating to PerkinElmer-specific parameters, most of the results presented here can be extended to other counters. Keywords: Liquid Scintillation Counting (LSC), PerkinElmer, Tri-Carb, Smear, Swipe
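
    A minimal sketch of choosing a counting window by maximising a Figure of Merit of the usual FOM = E²/B form (efficiency squared over background); the spectra below are synthetic placeholders, not Tri-Carb data:

      import numpy as np

      channels = np.arange(0, 2000)                        # energy channels (arbitrary units)
      signal = np.exp(-((channels - 150) / 60.0) ** 2)     # illustrative source response
      background = 0.02 + 0.00001 * channels               # illustrative slowly varying background

      best = None
      for lo in range(0, 1900, 25):
          for hi in range(lo + 25, 2000, 25):
              eff = signal[lo:hi].sum() / signal.sum()     # counting efficiency in the window
              bkg = background[lo:hi].sum()                # background in the window
              fom = eff ** 2 / bkg
              if best is None or fom > best[0]:
                  best = (fom, lo, hi)

      print("optimal window:", best[1], "-", best[2], "FOM =", round(best[0], 2))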

  14. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external coefficient or the internal coefficient has a negative influence on the sampling level, the rate of change of the potential market has no significant influence, and repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives an overall picture of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when the parameters are known only imprecisely and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
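
    A minimal sketch of searching for a free-sample level with a plain Bass-type diffusion model standing in for the record's model; the external/internal coefficients, market size, horizon, margin, and cost below are all illustrative assumptions:

      import numpy as np

      p_ext, q_int = 0.005, 0.5        # external (innovation) / internal (imitation) coefficients
      M, T = 100_000, 6                # potential market and planning horizon (periods)
      margin, unit_cost = 5.0, 1.0     # profit per paying adopter, cost of one free sample

      def profit(n_samples):
          adopters = float(n_samples)                  # free samples seed the initial adopters
          for _ in range(T):
              new = (p_ext + q_int * adopters / M) * (M - adopters)
              adopters += new
          return margin * (adopters - n_samples) - unit_cost * n_samples

      levels = np.arange(0, 50_001, 1_000)
      best = max(levels, key=profit)
      print("sampling level:", best, "expected profit:", round(profit(best)))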

  15. Adaptive importance sampling of random walks on continuous state spaces

    International Nuclear Information System (INIS)

    Baggerly, K.; Cox, D.; Picard, R.

    1998-01-01

    The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material

  16. Optimal sample size for predicting viability of cabbage and radish seeds based on near infrared spectra of single seeds

    DEFF Research Database (Denmark)

    Shetty, Nisha; Min, Tai-Gi; Gislum, René

    2011-01-01

    The effects of the number of seeds in a training sample set on the ability to predict the viability of cabbage or radish seeds are presented and discussed. The supervised classification method extended canonical variates analysis (ECVA) was used to develop a classification model. Calibration sub......-sets of different sizes were chosen randomly with several iterations and using the spectral-based sample selection algorithms DUPLEX and CADEX. An independent test set was used to validate the developed classification models. The results showed that 200 seeds were optimal in a calibration set for both cabbage...... using all 600 seeds in the calibration set. Thus, the number of seeds in the calibration set can be reduced by up to 67% without significant loss of classification accuracy, which will effectively enhance the cost-effectiveness of NIR spectral analysis. Wavelength regions important...

  17. Numerically Accelerated Importance Sampling for Nonlinear Non-Gaussian State Space Models

    NARCIS (Netherlands)

    Koopman, S.J.; Lucas, A.; Scharth, M.

    2015-01-01

    We propose a general likelihood evaluation method for nonlinear non-Gaussian state-space models using the simulation-based method of efficient importance sampling. We minimize the simulation effort by replacing some key steps of the likelihood estimation procedure by numerical integration. We refer

  18. Sampling high-altitude and stratified mating flights of red imported fire ant.

    Science.gov (United States)

    Fritz, Gary N; Fritz, Ann H; Vander Meer, Robert K

    2011-05-01

    With the exception of an airplane equipped with nets, no method has been developed that successfully samples red imported fire ant, Solenopsis invicta Buren, sexuals in mating/dispersal flights throughout their potential altitudinal trajectories. We developed and tested a method for sampling queens and males during mating flights at altitudinal intervals reaching as high as approximately 140 m. Our trapping system uses an electric winch and a 1.2-m spindle bolted to a swiveling platform. The winch dispenses up to 183 m of Kevlar-core, nylon rope and the spindle stores 10 panels (0.9 by 4.6 m each) of nylon tulle impregnated with Tangle-Trap. The panels can be attached to the rope at various intervals and hoisted into the air by using a 3-m-diameter, helium-filled balloon. Raising or lowering all 10 panels takes approximately 15-20 min. This trap also should be useful for altitudinal sampling of other insects of medical importance.

  19. Optimal coal import strategy

    International Nuclear Information System (INIS)

    Chen, C.Y.; Shih, L.H.

    1992-01-01

    Recently, the main power company in Taiwan has shifted the primary energy resource from oil to coal and tried to diversify the coal supply from various sources. The company wants to have the imported coal meet the environmental standards and operation requirements as well as to have high heating value. In order to achieve these objectives, establishment of a coal blending system for Taiwan is necessary. A mathematical model using mixed integer programming technique is used to model the import strategy and the blending system. 6 refs., 1 tab
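
    A minimal sketch of a blending formulation in the spirit of the record, posed as a plain linear program (scipy) rather than the full mixed integer model; the coal sources, costs, heating values, sulphur contents and limits are illustrative assumptions:

      import numpy as np
      from scipy.optimize import linprog

      # three candidate sources: cost ($/t), heating value (kcal/kg), sulphur (%)
      cost = np.array([52.0, 48.0, 55.0])
      heat = np.array([6200.0, 5800.0, 6500.0])
      sulphur = np.array([0.8, 1.2, 0.6])

      # minimise the cost of a 1-tonne blend subject to quality constraints
      A_ub = np.vstack([-heat, sulphur])     # blended heat >= 6000 kcal/kg, sulphur <= 1.0 %
      b_ub = np.array([-6000.0, 1.0])
      A_eq = np.ones((1, 3))                 # blend fractions sum to one
      b_eq = np.array([1.0])

      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 3)
      print(res.x, res.fun)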

  20. 40 CFR 80.335 - What gasoline sample retention requirements apply to refiners and importers?

    Science.gov (United States)

    2010-07-01

    Title 40 (Protection of Environment), vol. 16, revised as of 2010-07-01, Environmental Protection Agency, Air Programs, Regulation of Fuels and Fuel Additives: Gasoline Sulfur Sampling, Testing and Retention Requirements for Refiners and Importers, § 80.335, What gasoline sample retention requirements apply to refiners and importers?

  1. The use of importance sampling in a trial assessment to obtain converged estimates of radiological risk

    International Nuclear Information System (INIS)

    Johnson, K.; Lucas, R.

    1986-12-01

    In developing a methodology for assessing potential sites for the disposal of radioactive wastes, the Department of the Environment has conducted a series of trial assessment exercises. In order to produce converged estimates of radiological risk using the SYVAC A/C simulation system, an efficient sampling procedure is required. Previous work has demonstrated that importance sampling can substantially increase sampling efficiency. This study used importance sampling to produce converged estimates of risk for the first DoE trial assessment. Four major nuclide chains were analysed. In each case importance sampling produced converged risk estimates with between 10 and 170 times fewer runs of the SYVAC A/C model. This increase in sampling efficiency can reduce the total elapsed time required to obtain a converged estimate of risk from one nuclide chain by a factor of 20. The results of this study suggest that the use of importance sampling could reduce the elapsed time required to perform a risk assessment of a potential site by a factor of ten. (author)

  2. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Tijssen, Rob H.N. [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Senneville, Baudouin D. de [Imaging Division, University Medical Center Utrecht, Utrecht (Netherlands); L' Institut de Mathématiques de Bordeaux, Unité Mixte de Recherche 5251, Centre National de la Recherche Scientifique/University of Bordeaux, Bordeaux (France); Heerkens, Hanne D.; Vulpen, Marco van; Lagendijk, Jan J.W.; Berg, Cornelis A.T. van den [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands)

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  3. Method optimization for non-equilibrium solid phase microextraction sampling of HAPs for GC/MS analysis

    Science.gov (United States)

    Zawadowicz, M. A.; Del Negro, L. A.

    2010-12-01

    Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv-level, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money and time that often is prohibitive to primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of ambient gaseous matrix, which is a cost-effective technique of selective VOC extraction, accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and laminar air flow velocity around the fiber were optimized to give highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.

  4. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    Science.gov (United States)

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Whereas many existing studies use only acidic extraction, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions, while antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  5. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
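
    A minimal sketch of the two initial-design choices compared above: Latin hypercube samples built from random points versus interval midpoints (before any space-filling optimization step, which is omitted here); the sample size, dimension, and the minimum-distance check are illustrative:

      import numpy as np

      def lhs(n, d, midpoint=False, rng=np.random.default_rng(5)):
          u = 0.5 * np.ones((n, d)) if midpoint else rng.uniform(size=(n, d))
          design = np.empty((n, d))
          for j in range(d):
              perm = rng.permutation(n)
              design[:, j] = (perm + u[:, j]) / n     # one stratum per point, per dimension
          return design

      random_lhs = lhs(20, 2, midpoint=False)
      midpoint_lhs = lhs(20, 2, midpoint=True)

      def min_dist(x):
          # crude space-filling indicator: minimum pairwise distance (larger is better)
          d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
          return d[np.triu_indices(len(x), k=1)].min()

      print(min_dist(random_lhs), min_dist(midpoint_lhs))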

  6. Optimization of sampling parameters for standardized exhaled breath sampling.

    Science.gov (United States)

    Doran, Sophie; Romano, Andrea; Hanna, George B

    2017-09-05

    The lack of standardization of breath sampling is a major contributing factor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies, whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000 ml) and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acid and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. Increasing the sample volume improved the levels of VOCs detected. However, the influence of the fraction of exhaled breath and the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between the whole and lower airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes the contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are a 500 ml sample volume

  7. Optimal sample to tracer ratio for isotope dilution mass spectrometry: the polyisotopic case

    International Nuclear Information System (INIS)

    Laszlo, G.; Ridder, P. de; Goldman, A.; Cappis, J.; Bievre, P. de

    1991-01-01

    The Isotope Dilution Mass Spectrometry (IDMS) measurement technique provides a means for determining the unknown amount of various isotopes of an element in a sample solution of known mass. The sample solution is mixed with an auxiliary solution, or tracer, containing a known amount of the same element having the same isotopes but of different relative abundances or isotopic composition and the induced change in the isotopic composition measured by isotope mass spectrometry. The technique involves the measurement of the abundance ratio of each isotope to a (same) reference isotope in the sample solution, in the tracer solution and in the blend of the sample and tracer solution. These isotope ratio measurements, the known element amount in the tracer and the known mass of sample solution are used to calculate the unknown amount of one isotope in the sample solution. Subsequently the unknown amount of element is determined. The purpose of this paper is to examine the optimization of the ratio of the estimated unknown amount of element in the sample solution to the known amount of element in the tracer solution in order to minimize the relative uncertainty in the determination of the unknown amount of element
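
    A simplified single-ratio form of the relation behind this optimization, in generic notation that is not the paper's: with n_s and n_t the amounts of the reference isotope contributed by the sample and the tracer, and R_s, R_t, R_b the measured ratios of the spike-enriched isotope to the reference isotope in the sample, tracer and blend, the isotope dilution relation reads

      n_s = n_t (R_t - R_b) / (R_b - R_s)

    Error propagation through this expression is smallest when the blend ratio R_b lies close to the geometric mean sqrt(R_s R_t), which is essentially the target of choosing an optimal sample-to-tracer ratio.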

  8. Optimizing the protein switch: altering nuclear import and export signals, and ligand binding domain

    Science.gov (United States)

    Kakar, Mudit; Davis, James R.; Kern, Steve E.; Lim, Carol S.

    2007-01-01

    Ligand regulated localization controllable protein constructs were optimized in this study. Several constructs were made from a classical nuclear export signal (HIV-rev, MAPKK, or progesterone receptor) in combination with a SV40 T-antigen type nuclear import signal. Different ligand binding domains (LBDs from glucocorticoid receptor or progesterone receptor) were also tested for their ability to impart control over localization of proteins. This study was designed to create constructs which are cytoplasmic in the absence of ligand and nuclear in the presence of ligand, and also to regulate the amount of protein translocating to the nucleus on ligand induction. The balance between the strengths of import and export signals was critical for overall localization of proteins. The amount of protein entering the nucleus was also affected by the dose of ligand (10-100nM). However, the overall import characteristics were determined by the strengths of localization signals and the inherent localization properties of the LBD used. This study established that the amount of protein present in a particular compartment can be regulated by the use of localization signals of various strengths. These optimized localization controllable protein constructs can be used to correct for diseases due to aberrant localization of proteins. PMID:17574289

  9. In-well time-of-travel approach to evaluate optimal purge duration during low-flow sampling of monitoring wells

    Science.gov (United States)

    Harte, Philip T.

    2017-01-01

    A common assumption with groundwater sampling is that low (low-flow) pumping rates draw in water from the formation adjacent to the pump intake; an important control on what is actually captured is the time until inflow from the high hydraulic conductivity part of the screened formation can travel vertically in the well to the pump intake. Therefore, the length of the time needed for adequate purging prior to sample collection (called optimal purge duration) is controlled by the in-well, vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time–volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that the computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), which was based on vertical travel times in the well, compared favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of arrival time of formation water, which has been postulated, then in-well, vertical flow may be an important factor at wells where low-flow sampling is the sample method of choice.

  10. Design and sampling plan optimization for RT-qPCR experiments in plants: a case study in blueberry

    Directory of Open Access Journals (Sweden)

    Jose V Die

    2016-03-01

    Full Text Available The qPCR assay has become a routine technology in plant biotechnology and agricultural research. It is unlikely to be technically improved, but there are still challenges which center around minimizing the variability in results and transparency when reporting technical data in support of the conclusions of a study. There are a number of aspects of the pre- and post-assay workflow that contribute to variability of results. Here, through the study of the introduction of error in qPCR measurements at different stages of the workflow, we describe the most important causes of technical variability in a case study using blueberry. In this study, we found that the stage for which increasing the number of replicates would be the most beneficial depends on the tissue used. For example, we would recommend the use of more RT replicates when working with leaf tissue, while the use of more sampling (RNA extraction) replicates would be recommended when working with stems or fruits to obtain optimal results. The use of more qPCR replicates provides the least benefit as it is the most reproducible step. By knowing the distribution of error over an entire experiment and the costs at each step, we have developed a script to identify the optimal sampling plan within the limits of a given budget. These findings should help plant scientists improve the design of qPCR experiments and refine their laboratory practices in order to conduct qPCR assays in a more reliable manner and produce more consistent and reproducible data.
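
    A minimal sketch of a budget-constrained search over nested replicate allocations of the kind the record's script performs; the per-step costs and variance components below are illustrative numbers, not the blueberry estimates:

      import itertools
      import numpy as np

      cost = {"sampling": 10.0, "rt": 4.0, "qpcr": 1.0}     # cost per replicate at each stage
      var = {"sampling": 0.30, "rt": 0.15, "qpcr": 0.05}    # variance introduced at each stage
      budget = 60.0

      best = None
      for ns, nr, nq in itertools.product(range(1, 7), repeat=3):
          total = ns * cost["sampling"] + ns * nr * cost["rt"] + ns * nr * nq * cost["qpcr"]
          if total > budget:
              continue
          # variance of the experiment mean under a nested (sampling > RT > qPCR) design
          v = var["sampling"] / ns + var["rt"] / (ns * nr) + var["qpcr"] / (ns * nr * nq)
          if best is None or v < best[0]:
              best = (v, ns, nr, nq, total)

      print("variance %.4f with %d sampling x %d RT x %d qPCR replicates (cost %.0f)" % best)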

  11. Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation

    International Nuclear Information System (INIS)

    Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A.; Bouquerel, Hélène

    2016-01-01

    Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air to liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved by scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L⁻¹ and 10% for 10 mBq L⁻¹. While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L⁻¹, a conservative experimental estimate is rather 5 mBq L⁻¹, corresponding to 0.14 fg g⁻¹. The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot be easily measured by other methods. Counting of scintillation flasks can be done in remote locations in the absence of an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported. - Highlights: • Radium-226 concentration measured with optimized accumulation in a container. • Radon-222 in air measured precisely with scintillation flasks and long countings. • Method tested by repetition tests, dilution experiments, and successful blind tests. • Estimated conservative detection limit without pre-concentration is 5 mBq L⁻¹. • Method is portable, cost

  12. Isolation and identification of phytase-producing strains from soil samples and optimization of production parameters

    Directory of Open Access Journals (Sweden)

    Masoud Mohammadi

    2017-09-01

    Discussion and conclusion: Penicillium sp. isolated from a soil sample near Qazvin was able to produce highly active phytase under optimized environmental conditions, and could be a suitable candidate for commercial production of phytase to be used as a supplement in the poultry feed industry.

  13. Importance Sampling for Failure Probabilities in Computing and Data Transmission

    DEFF Research Database (Denmark)

    Asmussen, Søren

    We study efficient simulation algorithms for estimating P(Χ > χ), where Χ is the total time of a job with ideal time T that needs to be restarted after a failure. The main tool is importance sampling where one tries to identify a good importance distribution via an asymptotic description...... of the conditional distribution of T given Χ > χ. If T ≡ t is constant, the problem reduces to the efficient simulation of geometric sums, and a standard algorithm involving a Cramér type root  γ(t) is available. However, we also discuss  an algorithm avoiding the rootfinding. If T is random, particular attention...... is given to T having either a gamma-like tail or a regularly varying tail, and to failures at Poisson times. Different type of conditional limits occur, in particular exponentially tilted Gumbel distributions and Pareto distributions. The algorithms based upon importance distributions for T using...

  14. Importance sampling for failure probabilities in computing and data transmission

    DEFF Research Database (Denmark)

    Asmussen, Søren

    2009-01-01

    In this paper we study efficient simulation algorithms for estimating P(X > x), where X is the total time of a job with ideal time T that needs to be restarted after a failure. The main tool is importance sampling, where a good importance distribution is identified via an asymptotic description...... of the conditional distribution of T given X > x. If T≡t is constant, the problem reduces to the efficient simulation of geometric sums, and a standard algorithm involving a Cramér-type root, γ(t), is available. However, we also discuss an algorithm that avoids finding the root. If T is random, particular attention...... is given to T having either a gamma-like tail or a regularly varying tail, and to failures at Poisson times. Different types of conditional limit occur, in particular exponentially tilted Gumbel distributions and Pareto distributions. The algorithms based upon importance distributions for T using......

  15. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    Science.gov (United States)

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  16. Optimized sample preparation for two-dimensional gel electrophoresis of soluble proteins from chicken bursa of Fabricius

    Directory of Open Access Journals (Sweden)

    Zheng Xiaojuan

    2009-10-01

    Full Text Available Abstract Background Two-dimensional gel electrophoresis (2-DE) is a powerful method to study protein expression and function in living organisms and diseases. This technique, however, has not been applied to the avian bursa of Fabricius (BF), a central immune organ. Here, optimized 2-DE sample preparation methodologies were constructed for chicken BF tissue. Using the optimized protocol, we performed further 2-DE analysis on a soluble protein extract from the BF of chickens infected with virulent avibirnavirus. To demonstrate the quality of the extracted proteins, several differentially expressed protein spots selected were cut from 2-DE gels and identified by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS). Results An extraction buffer containing 7 M urea, 2 M thiourea, 2% (w/v) 3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate (CHAPS), 50 mM dithiothreitol (DTT), 0.2% Bio-Lyte 3/10, 1 mM phenylmethylsulfonyl fluoride (PMSF), 20 U/ml Deoxyribonuclease I (DNase I), and 0.25 mg/ml Ribonuclease A (RNase A), combined with sonication and vortexing, yielded the best 2-DE data. Relative to non-frozen immobilized pH gradient (IPG) strips, frozen IPG strips did not result in significant changes in the 2-DE patterns after isoelectric focusing (IEF). When the optimized protocol was used to analyze the spleen and thymus, as well as avibirnavirus-infected bursa, high quality 2-DE protein expression profiles were obtained. 2-DE maps of the BF of chickens infected with virulent avibirnavirus were visibly different and many differentially expressed proteins were found. Conclusion These results showed that method C, in concert with extraction buffer IV, was the most favorable for preparing samples for IEF and subsequent protein separation and yielded the best quality 2-DE patterns. The optimized protocol is a useful sample preparation method for comparative proteomics analysis of chicken BF tissues.

  17. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    Science.gov (United States)

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.

  18. The importance of hydration thermodynamics in fragment-to-lead optimization.

    Science.gov (United States)

    Ichihara, Osamu; Shimada, Yuzo; Yoshidome, Daisuke

    2014-12-01

    Using a computational approach to assess changes in solvation thermodynamics upon ligand binding, we investigated the effects of water molecules on the binding energetics of over 20 fragment hits and their corresponding optimized lead compounds. Binding activity and X-ray crystallographic data of published fragment-to-lead optimization studies from various therapeutically relevant targets were studied. The analysis reveals a distinct difference between the thermodynamic profile of water molecules displaced by fragment hits and those displaced by the corresponding optimized lead compounds. Specifically, fragment hits tend to displace water molecules with notably unfavorable excess entropies-configurationally constrained water molecules-relative to those displaced by the newly added moieties of the lead compound during the course of fragment-to-lead optimization. Herein we describe the details of this analysis with the goal of providing practical guidelines for exploiting thermodynamic signatures of binding site water molecules in the context of fragment-to-lead optimization. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Optimized Analytical Method to Determine Gallic and Picric Acids in Pyrotechnic Samples by Using HPLC/UV (Reverse Phase)

    International Nuclear Information System (INIS)

    Garcia Alonso, S.; Perez Pastor, R. M.

    2013-01-01

    A study on the optimization and development of a chromatographic method for the determination of gallic and picric acids in pyrotechnic samples is presented. In order to achieve this, both the analytical conditions for HPLC with diode detection and the extraction step for a selected sample were studied. (Author)

  20. Optimizing headspace sampling temperature and time for analysis of volatile oxidation products in fish oil

    DEFF Research Database (Denmark)

    Rørbæk, Karen; Jensen, Benny

    1997-01-01

    Headspace-gas chromatography (HS-GC), based on adsorption to Tenax GR(R), thermal desorption and GC, has been used for analysis of volatiles in fish oil. To optimize sampling conditions, the effect of heating the fish oil at various temperatures and times was evaluated from anisidine values (AV

  1. Optimization of sampling for the determination of the mean Radium-226 concentration in surface soil

    International Nuclear Information System (INIS)

    Williams, L.R.; Leggett, R.W.; Espegren, M.L.; Little, C.A.

    1987-08-01

    This report describes a field experiment that identifies an optimal method for determination of compliance with the US Environmental Protection Agency's Ra-226 guidelines for soil. The primary goals were to establish practical levels of accuracy and precision in estimating the mean Ra-226 concentration of surface soil in a small contaminated region; to obtain empirical information on composite vs. individual soil sampling and on random vs. uniformly spaced sampling; and to examine the practicality of using gamma measurements in predicting the average surface radium concentration and in estimating the number of soil samples required to obtain a given level of accuracy and precision. Numerous soil samples were collected at each of six sites known to be contaminated with uranium mill tailings. Three types of samples were collected on each site: 10-composite samples, 20-composite samples, and individual or post hole samples; 10-composite sampling is the method of choice because it yields a given level of accuracy and precision for the least cost. Gamma measurements can be used to reduce surface soil sampling on some sites. 2 refs., 5 figs., 7 tabs

  2. Subsolutions of an Isaacs Equation and Efficient Schemes for Importance Sampling: Convergence Analysis

    National Research Council Canada - National Science Library

    Dupuis, Paul; Wang, Hui

    2005-01-01

    Previous papers by the authors establish the connection between importance sampling algorithms for estimating rare-event probabilities, two-person zero-sum differential games, and the associated Isaacs equation...

  3. Role of importance of X-ray fluorescence analysis of forensic samples

    International Nuclear Information System (INIS)

    Jha, Shailendra; Sharma, M.

    2009-01-01

    Full text: In the field of forensic science, it is very important to investigate the evidential samples obtained at various crime scenes. X-ray fluorescence (XRF) is used widely in forensic science [1]. Its main strength is its non-destructive nature, thus preserving evidence [2, 3]. In this paper, we report the application of XRF to examine evidence such as the purity of gold and silver jewelry (Indian ornaments), remnants of glass pieces, and paint chips recovered from crime scenes. The experimental measurements on these samples were made using an X-ray fluorescence spectrometer (LAB Center XRF-1800) procured from Shimadzu Scientific Instruments, USA. The results are explained in terms of quantitative/qualitative analysis of trace elements. (author)

  4. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Energy Technology Data Exchange (ETDEWEB)

    Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it [Sapienza Università di Roma, Dipartimento di Ingegneria Civile, Edile e Ambientale (Italy); Alfonso, L. [Hydroinformatics Chair Group, UNESCO-IHE, Delft (Netherlands); Di Baldassarre, G. [Department of Earth Sciences, Program for Air, Water and Landscape Sciences, Uppsala University (Sweden)

    2016-06-08

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  5. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    International Nuclear Information System (INIS)

    Ridolfi, E.; Napolitano, F.; Alfonso, L.; Di Baldassarre, G.

    2016-01-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.
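
    A minimal sketch of a greedy "maximum information, minimum redundancy" selection over candidate cross sections, in the spirit of the two records above; the synthetic water-level series, the discretisation, and the simple entropy-minus-redundancy score are illustrative choices, not the paper's exact formulation:

      import numpy as np
      from sklearn.metrics import mutual_info_score

      rng = np.random.default_rng(8)
      n_sections, n_times = 30, 500
      base = rng.normal(size=n_times)
      levels = base[None, :] + 0.3 * rng.normal(size=(n_sections, n_times))  # correlated series
      edges = np.quantile(levels, np.linspace(0, 1, 11)[1:-1])
      binned = np.digitize(levels, edges)                                    # discretised levels

      def entropy(z):
          counts = np.bincount(z)
          p = counts[counts > 0] / len(z)
          return -(p * np.log(p)).sum()

      selected = [int(np.argmax([entropy(b) for b in binned]))]   # start with the most informative
      while len(selected) < 5:
          scores = []
          for j in range(n_sections):
              if j in selected:
                  scores.append(-np.inf)
                  continue
              redundancy = np.mean([mutual_info_score(binned[j], binned[s]) for s in selected])
              scores.append(entropy(binned[j]) - redundancy)
          selected.append(int(np.argmax(scores)))

      print("selected cross sections:", selected)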

  6. A method of language sampling

    DEFF Research Database (Denmark)

    Rijkhoff, Jan; Bakker, Dik; Hengeveld, Kees

    1993-01-01

    In recent years more attention has been paid to the quality of language samples in typological work. Without an adequate sampling strategy, samples may suffer from various kinds of bias. In this article we propose a sampling method in which the genetic criterion is taken as the most important: samples...... created with this method will optimally reflect the diversity of the languages of the world. On the basis of the internal structure of each genetic language tree a measure is computed that reflects the linguistic diversity in the language families represented by these trees. This measure is used...... to determine how many languages from each phylum should be selected, given any required sample size....

  7. Estimating cross-validatory predictive p-values with integrated importance sampling for disease mapping models.

    Science.gov (United States)

    Li, Longhai; Feng, Cindy X; Qiu, Shi

    2017-06-30

    An important statistical task in disease mapping problems is to identify divergent regions with unusually high or low risk of disease. Leave-one-out cross-validatory (LOOCV) model assessment is the gold standard for estimating predictive p-values that can flag such divergent regions. However, actual LOOCV is time-consuming because one needs to rerun a Markov chain Monte Carlo analysis for each posterior distribution in which an observation is held out as a test case. This paper introduces a new method, called integrated importance sampling (iIS), for estimating LOOCV predictive p-values with only Markov chain samples drawn from the posterior based on a full data set. The key step in iIS is that we integrate away the latent variables associated with the test observation with respect to their conditional distribution without reference to the actual observation. By following the general theory for importance sampling, the formula used by iIS can be proved to be equivalent to the LOOCV predictive p-value. We compare iIS and three other existing methods in the literature with two disease mapping datasets. Our empirical results show that the predictive p-values estimated with iIS are almost identical to the predictive p-values estimated with actual LOOCV and outperform those given by the existing three methods, namely, the posterior predictive checking, the ordinary importance sampling, and the ghosting method by Marshall and Spiegelhalter (2003). Copyright © 2017 John Wiley & Sons, Ltd.

  8. Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2015-01-01

    Roč. 52, č. 2 (2015), s. 419-440 ISSN 0021-9002 Grant - others:GA AV ČR(CZ) 171396 Institutional support: RVO:67985556 Keywords : Dominated Convergence theorem for the expected average criterion * Discrepancy function * Kolmogorov inequality * Innovations * Strong sample-path optimality Subject RIV: BC - Control Systems Theory Impact factor: 0.665, year: 2015 http://library.utia.cas.cz/separaty/2015/E/sladky-0449029.pdf

  9. Optimization of a radiochemistry method for plutonium determination in biological samples

    International Nuclear Information System (INIS)

    Cerchetti, Maria L.; Arguelles, Maria G.

    2005-01-01

    Plutonium has been widely used for civilian and military activities. Nevertheless, the methods to control occupational exposure have not evolved in the same way, and this remains one of the major challenges for radiological protection practice. Due to the low acceptable incorporation limit, the usual determination is based on indirect methods in urine samples. Our main objective was to optimize a technique used to monitor internal contamination of workers exposed to Plutonium isotopes. Different parameters were modified and their influence on the three steps of the method was evaluated. Those which gave the highest yield and feasibility were selected. The method involves: 1-) Sample concentration (coprecipitation); 2-) Plutonium purification; and 3-) Source preparation by electrodeposition. In the coprecipitation step, changes in temperature and in the concentration of the carrier were evaluated. In the ion-exchange separation, changes in the type of resin, the elution solution for hydroxylamine (concentration and volume), column length and column recycling were evaluated. Finally, in the electrodeposition step, we modified the electrolytic solution, pH and time. Measurements were made by liquid scintillation counting and alpha spectrometry (PIPS). We obtained the following yields: 88% for coprecipitation (at 60 °C with 2 ml of CaHPO4), 71% for ion exchange (AG 1x8 Cl⁻ resin, 100-200 mesh, hydroxylamine 0.1 N in HCl 0.2 N as eluent, column length between 4.5 and 8 cm), and 93% for electrodeposition (H2SO4-NH4OH, 100 minutes, pH from 2 to 2.8). The expanded uncertainty was 30% (95% confidence level), the decision threshold (Lc) was 0.102 Bq/L and the minimum detectable activity was 0.218 Bq/L of urine. We obtained an optimized method to screen workers exposed to Plutonium. (author)

  10. Stochastic global optimization as a filtering problem

    International Nuclear Information System (INIS)

    Stinis, Panos

    2012-01-01

    We present a reformulation of stochastic global optimization as a filtering problem. The motivation behind this reformulation comes from the fact that for many optimization problems we cannot evaluate exactly the objective function to be optimized. Similarly, we may not be able to evaluate exactly the functions involved in iterative optimization algorithms. For example, we may only have access to noisy measurements of the functions or statistical estimates provided through Monte Carlo sampling. This makes iterative optimization algorithms behave like stochastic maps. Naive global optimization amounts to evolving a collection of realizations of this stochastic map and picking the realization with the best properties. This motivates the use of filtering techniques to allow focusing on realizations that are more promising than others. In particular, we present a filtering reformulation of global optimization in terms of a special case of sequential importance sampling methods called particle filters. The increasing popularity of particle filters is based on the simplicity of their implementation and their flexibility. We utilize the flexibility of particle filters to construct a stochastic global optimization algorithm which can converge to the optimal solution appreciably faster than naive global optimization. Several examples of parametric exponential density estimation are provided to demonstrate the efficiency of the approach.
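
    The filtering view described above can be illustrated with a toy particle filter applied to a noisy one-dimensional objective. The sketch below is a minimal illustration under assumed settings (Gaussian perturbations, exponential fitness weights, multinomial resampling); it is not the authors' algorithm or their parametric density estimation examples.

```python
import numpy as np

# Minimal sketch of global optimization viewed as filtering: realizations of a
# stochastic map are weighted by how promising they look and then resampled.
rng = np.random.default_rng(0)

def noisy_objective(x):
    # Hypothetical noisy objective: true minimum at x = 2.
    return (x - 2.0) ** 2 + 0.1 * rng.standard_normal(x.shape)

n_particles, n_iters, step = 200, 50, 0.3
particles = rng.uniform(-10, 10, n_particles)

for _ in range(n_iters):
    # Stochastic map: perturb each realization (prediction step).
    particles = particles + step * rng.standard_normal(n_particles)
    # Weight realizations by the noisy objective (update step).
    f = noisy_objective(particles)
    w = np.exp(-(f - f.min()))          # larger weight for smaller objective
    w /= w.sum()
    # Resample so effort focuses on the more promising realizations.
    idx = rng.choice(n_particles, size=n_particles, p=w)
    particles = particles[idx]

print("estimated minimizer:", particles.mean())
```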

  11. Optimization of Sample Preparation for the Identification and Quantification of Saxitoxin in Proficiency Test Mussel Sample using Liquid Chromatography-Tandem Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Kirsi Harju

    2015-11-01

    Full Text Available Saxitoxin (STX and some selected paralytic shellfish poisoning (PSP analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS. Sample extraction and purification methods of mussel sample were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk. Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within the z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD.

  12. Importance sampling and histogrammic representations of reactivity functions and product distributions in Monte Carlo quasiclassical trajectory calculations

    International Nuclear Information System (INIS)

    Faist, M.B.; Muckerman, J.T.; Schubert, F.E.

    1978-01-01

    The application of importance sampling as a variance reduction technique in Monte Carlo quasiclassical trajectory calculations is discussed. Two measures are proposed which quantify the quality of the importance sampling used, and indicate whether further improvements may be obtained by some other choice of importance sampling function. A general procedure for constructing standardized histogrammic representations of differential functions which integrate to the appropriate integral value obtained from a trajectory calculation is presented. Two criteria for ''optimum'' binning of these histogrammic representations of differential functions are suggested. These are (1) that each bin makes an equal contribution to the integral value, and (2) each bin has the same relative error. Numerical examples illustrating these sampling and binning concepts are provided
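
    The basic variance-reduction mechanism referred to above, sampling from a density shaped like the integrand and weighting by the ratio of integrand to density, can be demonstrated on a one-dimensional integral. The sketch below uses an invented integrand and sampling density chosen so the exact answer is known; it is not connected to the trajectory calculations of the paper.

```python
import numpy as np

# Hedged sketch of importance sampling as a variance-reduction device for the
# Monte Carlo estimate of I = integral of f(x) on [0, 1]; f and g are
# illustrative choices with exact integral 1/5.
rng = np.random.default_rng(1)
f = lambda x: x ** 4
n = 100_000

# Crude Monte Carlo: uniform sampling.
x = rng.uniform(size=n)
crude = f(x)

# Importance sampling: draw from g(x) = 5 x^4 (shaped like the integrand) by
# inverse transform, and weight by f/g. Here f/g is constant, so the variance
# of the weighted estimator collapses to zero in this contrived case.
u = rng.uniform(size=n)
y = u ** (1 / 5)                        # samples from g(x) = 5 x^4
weighted = f(y) / (5 * y ** 4)

for name, est in [("crude", crude), ("importance", weighted)]:
    print(name, est.mean(), est.std(ddof=1) / np.sqrt(n))
```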

  13. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

    Modern technologies such as the Internet, corporate intranets, data warehouses, ERPs, satellites, digital sensors, embedded systems and mobile networks are all generating such a massive amount of data that it is getting very difficult to analyze and understand all these data, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution. Using a fraction of the computing resources, sampling can often provide the same level of accuracy. The process of sampling requires much care because there are many factors involved in the determination of the correct sample size. The approach proposed in this paper tries to find a solution to this problem. Based on a statistical formula, after setting some parameters, it returns a sample size called the sufficient sample size, which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
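
    The paper's exact statistical formula is not reproduced in this record, so the sketch below uses the classical Cochran formula with a finite-population correction as a stand-in to show how a "sufficient sample size" calculation of this type typically looks; the confidence level, margin of error and population size are illustrative assumptions.

```python
import math

# Hedged sketch of a sufficient-sample-size calculation (Cochran-style
# stand-in, not the paper's formula).
def sufficient_sample_size(population, confidence_z=1.96, margin=0.05, p=0.5):
    n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2    # infinite population
    return math.ceil(n0 / (1 + (n0 - 1) / population))      # finite correction

print(sufficient_sample_size(1_000_000))   # roughly 385 records
```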

  14. Importance of design optimization of gamma processing plants

    International Nuclear Information System (INIS)

    George, Jain Reji

    2014-01-01

    Radiation processing of food commodities using ionizing radiation is well established worldwide. In India too, novel designs are coming up for food irradiation as well as for multiproduct irradiation. It has been observed that, although the product movement systems are increasingly well designed, some designs fail to achieve the actual purpose for which they were made. In such situations it is difficult to achieve an effective dose delivery by controlling the process parameters, or even by modifying the source activity distribution, without compromising other aspects such as throughput. It is therefore essential to optimize all components of an irradiator system, such as the radiation source geometry, the source-product geometry and the protective barriers. Optimization of the various parameters can be done by modeling and analysis of the design.

  15. Optimization of a method based on micro-matrix solid-phase dispersion (micro-MSPD) for the determination of PCBs in mussel samples

    Directory of Open Access Journals (Sweden)

    Nieves Carro

    2017-03-01

    Full Text Available This paper reports the development and optimization of micro-matrix solid-phase dispersion (micro-MSPD) of nine polychlorinated biphenyls (PCBs) in mussel samples (Mytilus galloprovincialis) by using a two-level factorial design. Four variables (amount of sample, anhydrous sodium sulphate, Florisil and solvent volume) were considered as factors in the optimization process. The results suggested that only the interaction between the amount of anhydrous sodium sulphate and the solvent volume was statistically significant for the overall recovery of a trichlorinated compound, CB 28. Generally, most of the considered species exhibited similar behaviour: the sample and Florisil amounts had a positive effect on PCB extraction, whereas the solvent volume and sulphate amount had a negative effect. The analytical determination and confirmation of PCBs were carried out by using GC-ECD and GC-MS/MS, respectively. The method was validated, having satisfactory precision and accuracy with RSD values below 6% and recoveries between 81 and 116% for all congeners. The optimized method was applied to the extraction of real mussel samples from two Galician Rías.
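
    As an illustration of how main effects and interactions are read off a two-level factorial design like the one described, the sketch below builds a 2^4 coded design matrix and a mock recovery response (invented so that only the sulphate x solvent interaction matters) and computes the effect contrasts; none of the numbers come from the study.

```python
import itertools
import numpy as np

# Hedged sketch of effect estimation in a two-level factorial design with four
# coded factors at -1/+1 levels; the recovery values are mock data.
rng = np.random.default_rng(6)
design = np.array(list(itertools.product([-1, 1], repeat=4)))     # 2^4 runs
# Mock response built so that only the sulphate x solvent interaction matters.
recovery = 95.0 - 2.0 * design[:, 1] * design[:, 3] + rng.normal(0, 1, 16)

factors = ["sample", "Na2SO4", "Florisil", "solvent"]
for j, name in enumerate(factors):
    effect = recovery[design[:, j] == 1].mean() - recovery[design[:, j] == -1].mean()
    print(f"main effect of {name:8s}: {effect:+.2f}")
interaction = 2.0 * np.mean(recovery * design[:, 1] * design[:, 3])
print(f"Na2SO4 x solvent interaction: {interaction:+.2f}")   # about -4
```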

  16. Importance of sampling frequency when collecting diatoms

    KAUST Repository

    Wu, Naicheng

    2016-11-14

    There has been increasing interest in diatom-based bio-assessment but we still lack a comprehensive understanding of how to capture diatoms’ temporal dynamics with an appropriate sampling frequency (ASF). To cover this research gap, we collected and analyzed daily riverine diatom samples over a 1-year period (25 April 2013–30 April 2014) at the outlet of a German lowland river. The samples were classified into five clusters (1–5) by a Kohonen Self-Organizing Map (SOM) method based on similarity between species compositions over time. ASFs were determined to be 25 days at Cluster 2 (June-July 2013) and 13 days at Cluster 5 (February-April 2014), whereas no specific ASFs were found at Cluster 1 (April-May 2013), 3 (August-November 2013) (>30 days) and Cluster 4 (December 2013 - January 2014) (<1 day). ASFs showed dramatic seasonality and were negatively related to hydrological wetness conditions, suggesting that sampling interval should be reduced with increasing catchment wetness. A key implication of our findings for freshwater management is that long-term bio-monitoring protocols should be developed with the knowledge of tracking algal temporal dynamics with an appropriate sampling frequency.

  17. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    International Nuclear Information System (INIS)

    Carpy, R; Picker, G; Amann, B; Ranebo, H; Vincent-Bonnieu, S; Minster, O; Winter, J; Dettmann, J; Castiglione, L; Höhler, R; Langevin, D

    2011-01-01

    End of 2009 and early 2010 a sealed cell, for foam generation and observation, has been designed and manufactured at Astrium Friedrichshafen facilities. With the use of this cell, different sample compositions of 'wet foams' have been optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities of the Experiment Container FOAM-C for operation in the ISS Fluid Science Laboratory (ISS). The sample cell supports multiple observation methods such as Diffusing-Wave and Diffuse Transmission Spectrometry, Time Resolved Correlation Spectroscopy and microscope observation, all of which are applied in the cell with a relatively small experiment volume. These units will be on-orbit replaceable sets that will allow the processing of multiple sample compositions (in the range of >40).

  18. The optimally sampled galaxy-wide stellar initial mass function. Observational tests and the publicly available GalIMF code

    Science.gov (United States)

    Yan, Zhiqiang; Jerabkova, Tereza; Kroupa, Pavel

    2017-11-01

    Here we present a full description of the integrated galaxy-wide initial mass function (IGIMF) theory in terms of optimal sampling and compare it with available observations. Optimal sampling is the method we use to discretize the IMF deterministically into stellar masses. Evidence indicates that nature may be closer to deterministic sampling, as observations suggest a smaller scatter of various relevant observables than random sampling would give, which may result from a high level of self-regulation during the star formation process. We document the variation of IGIMFs under various assumptions. The results of the IGIMF theory are consistent with the empirical relation between the total mass of a star cluster and the mass of its most massive star, and the empirical relation between the star formation rate (SFR) of a galaxy and the mass of its most massive cluster. In particular, we note a natural agreement with the empirical relation between the IMF power-law index and the SFR of a galaxy. The IGIMF also results in a relation between the SFR of a galaxy and the mass of its most massive star such that, if there were no binaries, galaxies with SFR … For the first time, we show optimally sampled galaxy-wide IMFs (OSGIMF) that mimic the IGIMF with an additional serrated feature. Finally, a Python module, GalIMF, is provided allowing the calculation of the IGIMF and OSGIMF dependent on the galaxy-wide SFR and metallicity. A copy of the Python code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A126
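
    The idea of discretizing an IMF deterministically rather than randomly can be illustrated with a toy quantile-based scheme for a single power-law IMF. The sketch below is not the GalIMF/IGIMF machinery: the slope, mass limits, star count and the midpoint-quantile placement rule are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of deterministic ("optimal-like") sampling of a single Salpeter
# power-law IMF: the i-th star is placed at the mass quantile (i - 1/2)/N, so
# every realization of N stars is identical and free of Poisson scatter.
alpha, m_low, m_up, n_stars = 2.35, 0.08, 120.0, 1000

def imf_quantile(q):
    # Inverse CDF of xi(m) proportional to m^-alpha on [m_low, m_up].
    a, b = m_low ** (1 - alpha), m_up ** (1 - alpha)
    return (a + q * (b - a)) ** (1 / (1 - alpha))

q = (np.arange(n_stars) + 0.5) / n_stars
masses = imf_quantile(q)
print("most massive star: %.1f Msun, total mass: %.0f Msun"
      % (masses.max(), masses.sum()))
```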

  19. Importance of optimizing chromatographic conditions and mass spectrometric parameters for supercritical fluid chromatography/mass spectrometry.

    Science.gov (United States)

    Fujito, Yuka; Hayakawa, Yoshihiro; Izumi, Yoshihiro; Bamba, Takeshi

    2017-07-28

    Supercritical fluid chromatography/mass spectrometry (SFC/MS) has great potential for high-throughput and simultaneous analysis of a wide variety of compounds, and it has been widely used in recent years. The use of MS for detection provides the advantages of high sensitivity and high selectivity. However, the sensitivity of MS detection depends on the chromatographic conditions and MS parameters. Thus, optimization of MS parameters corresponding to the SFC conditions is mandatory for maximizing performance when connecting SFC to MS. The aim of this study was to reveal a way to decide the optimum composition of the mobile phase and the flow rate of the make-up solvent for MS detection across a wide range of compounds. Additionally, we also show the basic concept for determining the optimum values of the MS parameters, focusing on the MS detection sensitivity in SFC/MS analysis. To verify the versatility of these findings, a total of 441 pesticides with a wide range of polarity (logPow from -4.21 to 7.70) and pKa (acidic, neutral and basic) were examined. In this study, a new SFC-MS interface was used, which can transfer the entire volume of eluate into the MS by directly coupling the SFC with the MS. This enabled us to compare the sensitivity and optimum MS parameters between LC/MS and SFC/MS for the same sample volume introduced into the MS. As a result, it was found that the optimum values of some MS parameters were completely different from those of LC/MS, and that SFC/MS-specific optimization of the analytical conditions is required. Lastly, we evaluated the sensitivity of SFC/MS using fully optimized analytical conditions. As a result, we confirmed that SFC/MS showed much higher sensitivity than LC/MS when the analytical conditions were fully optimized for SFC/MS; the high sensitivity also increases the number of compounds that can be detected with good repeatability in real sample analysis. This result indicates that SFC/MS has potential for

  20. Triangular Geometrized Sampling Heuristics for Fast Optimal Motion Planning

    Directory of Open Access Journals (Sweden)

    Ahmed Hussain Qureshi

    2015-02-01

    Full Text Available Rapidly-exploring Random Tree (RRT)-based algorithms have become increasingly popular due to their lower computational complexity as compared with other path planning algorithms. The recently presented RRT* motion planning algorithm improves upon the original RRT algorithm by providing optimal path solutions. While RRT determines an initial collision-free path fairly quickly, RRT* guarantees almost certain convergence to an optimal, obstacle-free path from the start to the goal points for any given geometrical environment. However, the main limitations of RRT* include its slow processing rate and high memory consumption, due to the large number of iterations required for calculating the optimal path. In order to overcome these limitations, we present another improvement, i.e., the Triangular Geometrized-RRT* (TG-RRT*) algorithm, which utilizes triangular geometrical methods to improve the performance of the RRT* algorithm in terms of the processing time and a decreased number of iterations required for an optimal path solution. Simulations comparing the performance results of the improved TG-RRT* with RRT* are presented to demonstrate the overall improvement in performance and optimal path detection.
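
    For orientation, the core loop shared by RRT-style planners (sample, find the nearest node, steer, add to the tree) can be sketched in a few lines. The sketch below is a plain RRT on an obstacle-free square with invented start, goal and step size; the triangular geometrical heuristics and the rewiring that make TG-RRT*/RRT* asymptotically optimal are deliberately omitted.

```python
import math
import random

# Minimal RRT sketch on an obstacle-free 2D square, for orientation only.
random.seed(0)
start, goal, step = (0.0, 0.0), (9.0, 9.0), 0.5
nodes, parent = [start], {start: None}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

for _ in range(5000):
    # Sample a random configuration, biased towards the goal 5% of the time.
    q = goal if random.random() < 0.05 else (random.uniform(0, 10), random.uniform(0, 10))
    near = min(nodes, key=lambda n: dist(n, q))            # nearest tree node
    d = dist(near, q)
    if d < 1e-9:
        continue
    # Steer from the nearest node towards the sample by at most one step.
    new = q if d <= step else (near[0] + step * (q[0] - near[0]) / d,
                               near[1] + step * (q[1] - near[1]) / d)
    nodes.append(new)
    parent[new] = near
    if dist(new, goal) < step:                             # goal region reached
        break

path, node = [], nodes[-1]
while node is not None:                                    # walk back to the start
    path.append(node)
    node = parent[node]
print("tree size:", len(nodes), " path nodes:", len(path))
```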

  1. Optimal sample size for probability of detection curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2013-01-01

    Highlights: • We investigate sample size requirements to develop probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes the manufacturing of test pieces with representative flaws, in sufficient numbers so as to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of the NDE reliability is necessary to quantify risk reduction after inspection, and probability of detection (POD) curves provide such a metric. The Joint Research Centre, Petten, The Netherlands, supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier, manufacturing of test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is the appropriate sample size. Not

  2. AMORE-HX: a multidimensional optimization of radial enhanced NMR-sampled hydrogen exchange

    International Nuclear Information System (INIS)

    Gledhill, John M.; Walters, Benjamin T.; Wand, A. Joshua

    2009-01-01

    The Cartesian sampled three-dimensional HNCO experiment is inherently limited in time resolution and sensitivity for the real-time measurement of protein hydrogen exchange. This is largely overcome by use of the radial HNCO experiment, which employs optimized sampling angles. The significant practical limitation presented by the use of three-dimensional data, namely the large data storage and processing requirements, is largely overcome by taking advantage of the inherent capability of the 2D-FT to process selective frequency space without artifact or limitation. Decomposition of angle spectra into positive and negative ridge components provides increased resolution and allows statistical averaging of intensity and therefore increased precision. Strategies for averaging ridge cross sections within and between angle spectra are developed to allow further statistical approaches for increasing the precision of measured hydrogen occupancy. Intensity artifacts potentially introduced by over-pulsing are effectively eliminated by use of the BEST approach

  3. 40 CFR 80.330 - What are the sampling and testing requirements for refiners and importers?

    Science.gov (United States)

    2010-07-01

    ... and importers shall collect a representative sample from each batch of gasoline produced or imported... January 1, 2004, any refiner who produces gasoline using computer-controlled in-line blending equipment is... listed in § 80.46(a)(3) to measure the sulfur content of gasoline they produce or import. (2) Except as...

  4. Optimal sampling plan for clean development mechanism lighting projects with lamp population decay

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2014-01-01

    Highlights: • A metering cost minimisation model is built with the lamp population decay to optimise the sampling plan of CDM lighting projects. • The model minimises the total metering cost and optimises the annual sample size during the crediting period. • The required 90/10 criterion sampling accuracy is satisfied for each CDM monitoring report. - Abstract: This paper proposes a metering cost minimisation model that minimises metering cost under the constraints of the sampling accuracy requirement for clean development mechanism (CDM) energy efficiency (EE) lighting projects. Usually, small scale (SSC) CDM EE lighting projects expect a crediting period of 10 years, given that the lighting population will decay as time goes by. The SSC CDM sampling guideline requires that the monitored key parameters for the carbon emission reduction quantification must satisfy the sampling accuracy of 90% confidence and 10% precision, known as the 90/10 criterion. For the existing registered CDM lighting projects, sample sizes are either decided by professional judgment or by rule of thumb, without considering any optimisation. Lighting samples are randomly selected and their energy consumptions are monitored continuously by power meters. In this study, the sample size determination problem is formulated as a metering cost minimisation model by incorporating a linear lighting decay model as given by the CDM guideline AMS-II.J. The 90/10 criterion is formulated as a constraint to the metering cost minimisation problem. Optimal solutions to the problem minimise the metering cost whilst satisfying the 90/10 criterion for each reporting period. The proposed metering cost minimisation model is applicable to other CDM lighting projects with different population decay characteristics as well
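
    A minimal version of the sampling-accuracy constraint can be sketched as a sample size calculation that satisfies the 90/10 criterion for a mean under simple random sampling. The sketch below assumes a guessed coefficient of variation and a hypothetical decaying lamp population; the paper's metering cost model and the AMS-II.J decay model are not reproduced.

```python
import math

# Hedged sketch: minimum sample size meeting 90% confidence / 10% relative
# precision for a mean, given an assumed coefficient of variation (CV) of the
# metered lamp consumption and simple random sampling.
def sample_size_90_10(cv, population, z90=1.645, precision=0.10):
    n0 = (z90 * cv / precision) ** 2
    return math.ceil(n0 / (1 + n0 / population))   # finite-population correction

for year, lamps in enumerate([10000, 9000, 8100], start=1):  # hypothetical decay
    print(f"year {year}: monitor {sample_size_90_10(0.5, lamps)} lamps")
```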

  5. Categorical Inputs, Sensitivity Analysis, Optimization and Importance Tempering with tgp Version 2, an R Package for Treed Gaussian Process Models

    Directory of Open Access Journals (Sweden)

    Robert B. Gramacy

    2010-02-01

    Full Text Available This document describes the new features in version 2.x of the tgp package for R, implementing treed Gaussian process (GP) models. The topics covered include methods for dealing with categorical inputs and excluding inputs from the tree or GP part of the model; fully Bayesian sensitivity analysis for inputs/covariates; sequential optimization of black-box functions; and a new Monte Carlo method for inference in multi-modal posterior distributions that combines simulated tempering and importance sampling. These additions extend the functionality of tgp across all models in the hierarchy: from Bayesian linear models, to classification and regression trees (CART), to treed Gaussian processes with jumps to the limiting linear model. It is assumed that the reader is familiar with the baseline functionality of the package, outlined in the first vignette (Gramacy 2007).

  6. Simultaneous assay of multiple antibiotics in human plasma by LC-MS/MS: importance of optimizing formic acid concentration.

    Science.gov (United States)

    Chen, Feng; Hu, Zhe-Yi; Laizure, S Casey; Hudson, Joanna Q

    2017-03-01

    Optimal dosing of antibiotics in critically ill patients is complicated by the development of resistant organisms requiring treatment with multiple antibiotics and alterations in systemic exposure due to diseases and extracorporeal drug removal. Developing guidelines for optimal antibiotic dosing is an important therapeutic goal requiring robust analytical methods to simultaneously measure multiple antibiotics. An LC-MS/MS assay using protein precipitation for cleanup followed by a 6-min gradient separation was developed to simultaneously determine five antibiotics in human plasma. The precision and accuracy were within the 15% acceptance range. The formic acid concentration was an important determinant of signal intensity, peak shape and matrix effects. The method was designed to be simple and successfully applied to a clinical pharmacokinetic study.

  7. 'Intelligent' approach to radioimmunoassay sample counting employing a microprocessor controlled sample counter

    International Nuclear Information System (INIS)

    Ekins, R.P.; Sufi, S.; Malan, P.G.

    1977-01-01

    The enormous impact on medical science in the last two decades of microanalytical techniques employing radioisotopic labels has, in turn, generated a large demand for automatic radioisotopic sample counters. Such instruments frequently comprise the most important item of capital equipment required in the use of radioimmunoassay and related techniques and often form a principal bottleneck in the flow of samples through a busy laboratory. It is therefore particularly imperative that such instruments should be used 'intelligently' and in an optimal fashion to avoid both the very large capital expenditure involved in the unnecessary proliferation of instruments and the time delays arising from their sub-optimal use. The majority of the current generation of radioactive sample counters nevertheless rely on primitive control mechanisms based on a simplistic statistical theory of radioactive sample counting which preclude their efficient and rational use. The fundamental principle upon which this approach is based is that it is useless to continue counting a radioactive sample for a time longer than that required to yield a significant increase in precision of the measurement. Thus, since substantial experimental errors occur during sample preparation, these errors should be assessed and must be related to the counting errors for that sample. It is the objective of this presentation to demonstrate that the combination of a realistic statistical assessment of radioactive sample measurement, together with the more sophisticated control mechanisms that modern microprocessor technology makes possible, may often enable savings in counter usage of the order of 5-10 fold to be made. (orig.) [de]
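
    The stopping rule described above, stop counting once the Poisson counting error is small relative to the sample-preparation error, can be sketched directly. The target ratio, preparation error and count rate in the sketch below are illustrative assumptions, not values from the presentation.

```python
import math

# Hedged sketch of an "intelligent" counting stop rule: accumulate counts only
# until the counting CV (1/sqrt(N)) is a small fraction of the assumed
# sample-preparation CV, since further counting no longer improves precision.
def required_counts(prep_cv, target_ratio=0.3):
    return math.ceil(1.0 / (target_ratio * prep_cv) ** 2)

def counting_time_seconds(count_rate_cps, prep_cv):
    return required_counts(prep_cv) / count_rate_cps

print(counting_time_seconds(count_rate_cps=50.0, prep_cv=0.02))  # per sample
```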

  8. Subsolutions of an Isaacs Equation and Efficient Schemes for Importance Sampling: Examples and Numerics

    National Research Council Canada - National Science Library

    Dupuis, Paul; Wang, Hui

    2005-01-01

    It has been established that importance sampling algorithms for estimating rare-event probabilities are intimately connected with two-person zero-sum differential games and the associated Isaacs equation...

  9. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    Science.gov (United States)

    Carpy, R.; Picker, G.; Amann, B.; Ranebo, H.; Vincent-Bonnieu, S.; Minster, O.; Winter, J.; Dettmann, J.; Castiglione, L.; Höhler, R.; Langevin, D.

    2011-12-01

    End of 2009 and early 2010 a sealed cell, for foam generation and observation, has been designed and manufactured at Astrium Friedrichshafen facilities. With the use of this cell, different sample compositions of "wet foams" have been optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities of the Experiment Container FOAM-C for operation in the ISS Fluid Science Laboratory (ISS). The sample cell supports multiple observation methods such as: Diffusing-Wave and Diffuse Transmission Spectrometry, Time Resolved Correlation Spectroscopy [1] and microscope observation, all of these methods are applied in the cell with a relatively small experiment volume 40).

  10. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    International Nuclear Information System (INIS)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles

    2014-01-01

    Graphical abstract: -- Highlights: • We optimized sample preparation for MALDI TOF poly(styrene-co-pentafluorostyrene) copolymers. • Influence of matrix choice was investigated. • Influence of matrix/analyte ratio was examined. • Influence of analyte/salt ratio (for Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that the individual styrene chain length distributions are more sensitive to a change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution

  11. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    Energy Technology Data Exchange (ETDEWEB)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles, E-mail: cwilkins@uark.edu

    2014-01-15

    Graphical abstract: -- Highlights: • We optimized sample preparation for MALDI TOF poly(styrene-co-pentafluorostyrene) copolymers. • Influence of matrix choice was investigated. • Influence of matrix/analyte ratio was examined. • Influence of analyte/salt ratio (for Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that the individual styrene chain length distributions are more sensitive to a change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  12. Training set optimization under population structure in genomic selection.

    Science.gov (United States)

    Isidro, Julio; Jannink, Jean-Luc; Akdemir, Deniz; Poland, Jesse; Heslot, Nicolas; Sorrells, Mark E

    2015-01-01

    Population structure must be evaluated before optimization of the training set population. Maximizing the phenotypic variance captured by the training set is important for optimal performance. The optimization of the training set (TRS) in genomic selection has received much interest in both animal and plant breeding, because it is critical to the accuracy of the prediction models. In this study, five different TRS sampling algorithms, stratified sampling, mean of the coefficient of determination (CDmean), mean of predictor error variance (PEVmean), stratified CDmean (StratCDmean) and random sampling, were evaluated for prediction accuracy in the presence of different levels of population structure. In the presence of population structure, it is desirable for the sampling method to capture as much of the phenotypic variation in the TRS as possible. The wheat dataset showed mild population structure, and the CDmean and stratified CDmean methods showed the highest accuracies for all the traits except for test weight and heading date. The rice dataset had strong population structure and the approach based on stratified sampling showed the highest accuracies for all traits. In general, CDmean minimized the relationship between genotypes in the TRS, maximizing the relationship between the TRS and the test set. This makes it suitable as an optimization criterion for long-term selection. Our results indicated that the best selection criterion used to optimize the TRS seems to depend on the interaction of trait architecture and population structure.
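
    Of the algorithms compared above, stratified sampling is the simplest to sketch: given cluster labels describing the population structure (assumed to come from an earlier PCA or clustering step), the training set is drawn proportionally from each stratum. The labels, sizes and allocation rule below are illustrative; the CDmean and PEVmean optimisations are not shown.

```python
import numpy as np

# Hedged sketch of stratified training-set sampling from pre-computed
# population-structure cluster labels (hypothetical data).
rng = np.random.default_rng(2)
labels = rng.integers(0, 3, size=300)          # mock cluster assignments
train_size = 60

train_idx = []
for k in np.unique(labels):
    members = np.flatnonzero(labels == k)
    n_k = round(train_size * len(members) / len(labels))   # proportional allocation
    train_idx.extend(rng.choice(members, size=n_k, replace=False))

print(len(train_idx), "genotypes selected for the training set")
```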

  13. Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.

    Science.gov (United States)

    Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh

    2012-02-28

    Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.

  14. The stock selection problem: Is the stock selection approach more important than the optimization method? Evidence from the Danish stock market

    OpenAIRE

    Grobys, Klaus

    2011-01-01

    Passive investment strategies basically aim to replicate an underlying benchmark. Thereby, the management usually selects a subset of stocks being employed in the optimization procedure. Apart from the optimization procedure, the stock selection approach determines the stock portfolios' out-of-sample performance. The empirical study here takes into account the Danish stock market from 2000-2010 and gives evidence that stock portfolios including small companies' stocks being estimated via coin...

  15. Some advances in importance sampling of reliability models based on zero variance approximation

    NARCIS (Netherlands)

    Reijsbergen, D.P.; de Boer, Pieter-Tjerk; Scheinhardt, Willem R.W.; Juneja, Sandeep

    We are interested in estimating, through simulation, the probability of entering a rare failure state before a regeneration state. Since this probability is typically small, we apply importance sampling. The method that we use is based on finding the most likely paths to failure. We present an

  16. Importance sampling of rare events in chaotic systems

    DEFF Research Database (Denmark)

    Leitão, Jorge C.; Parente Lopes, João M.Viana; Altmann, Eduardo G.

    2017-01-01

    Finding and sampling rare trajectories in dynamical systems is a difficult computational task underlying numerous problems and applications. In this paper we show how to construct Metropolis-Hastings Monte-Carlo methods that can efficiently sample rare trajectories in the (extremely rough) phase space of chaotic systems. As examples of our general framework we compute the distribution of finite-time Lyapunov exponents (in different chaotic maps) and the distribution of escape times (in transient-chaos problems). Our methods sample exponentially rare states in a polynomial number of samples (in both low- and high-dimensional systems). An open-source software library that implements our algorithms and reproduces our results can be found in reference [J. Leitao, A library to sample chaotic systems, 2017, https://github.com/jorgecarleitao/chaospp].

  17. Sworn testimony of the model evidence: Gaussian Mixture Importance (GAME) sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-07-01

    What is the "best" model? The answer to this question lies in part in the eyes of the beholder; nevertheless a good model must blend rigorous theory with redeeming qualities such as parsimony and quality of fit. Model selection is used to make inferences, via weighted averaging, from a set of K candidate models, Mk, k = 1, …, K, and to help identify which model is most supported by the observed data, Ỹ = (ỹ1, …, ỹn). Here, we introduce a new and robust estimator of the model evidence, p(Ỹ|Mk), which acts as a normalizing constant in the denominator of Bayes' theorem and provides a single quantitative measure of relative support for each hypothesis that integrates model accuracy, uncertainty, and complexity. However, p(Ỹ|Mk) is analytically intractable for most practical modeling problems. Our method, coined GAussian Mixture importancE (GAME) sampling, uses bridge sampling of a mixture distribution fitted to samples of the posterior model parameter distribution derived from MCMC simulation. We benchmark the accuracy and reliability of GAME sampling by application to a diverse set of multivariate target distributions (up to 100 dimensions) with known values of p(Ỹ|Mk) and to hypothesis testing using numerical modeling of the rainfall-runoff transformation of the Leaf River watershed in Mississippi, USA. These case studies demonstrate that GAME sampling provides robust and unbiased estimates of the evidence at a relatively small computational cost, outperforming commonly used estimators. The GAME sampler is implemented in the MATLAB package of DREAM and simplifies considerably scientific inquiry through hypothesis testing and model selection.
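
    A stripped-down relative of GAME sampling can be sketched with plain importance sampling: fit a Gaussian mixture to the posterior draws, use it as the proposal, and average the ratio of the unnormalised posterior to the proposal density. The sketch below is an assumption-laden illustration (it omits the bridge sampling refinement), and the toy target with known evidence is invented.

```python
import numpy as np
from scipy.special import logsumexp
from sklearn.mixture import GaussianMixture

# Hedged sketch of evidence estimation by Gaussian-mixture importance sampling.
# `posterior_samples` and `log_unnorm_posterior` (log likelihood + log prior)
# are assumed to come from a previous MCMC run of the model under test.
def log_evidence(posterior_samples, log_unnorm_posterior, n_draws=20_000, k=3):
    q = GaussianMixture(n_components=k).fit(posterior_samples)   # proposal q
    theta, _ = q.sample(n_draws)
    log_w = log_unnorm_posterior(theta) - q.score_samples(theta)  # log(p*/q)
    return logsumexp(log_w) - np.log(n_draws)    # log of the IS average

# Toy check with a known answer: the unnormalised posterior is a standard
# normal density, so the true log-evidence is 0.
rng = np.random.default_rng(3)
samples = rng.standard_normal((5000, 1))
log_p = lambda t: -0.5 * (t[:, 0] ** 2) - 0.5 * np.log(2 * np.pi)
print(log_evidence(samples, log_p))   # close to 0.0
```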

  18. Acceleration of intensity-modulated radiotherapy dose calculation by importance sampling of the calculation matrices

    International Nuclear Information System (INIS)

    Thieke, Christian; Nill, Simeon; Oelfke, Uwe; Bortfeld, Thomas

    2002-01-01

    In inverse planning for intensity-modulated radiotherapy, the dose calculation is a crucial element limiting both the maximum achievable plan quality and the speed of the optimization process. One way to integrate accurate dose calculation algorithms into inverse planning is to precalculate the dose contribution of each beam element to each voxel for unit fluence. These precalculated values are stored in a big dose calculation matrix. Then the dose calculation during the iterative optimization process consists merely of matrix look-up and multiplication with the actual fluence values. However, because the dose calculation matrix can become very large, this ansatz requires a lot of computer memory and is still very time consuming, making it not practical for clinical routine without further modifications. In this work we present a new method to significantly reduce the number of entries in the dose calculation matrix. The method utilizes the fact that a photon pencil beam has a rapid radial dose falloff, and has very small dose values for the most part. In this low-dose part of the pencil beam, the dose contribution to a voxel is only integrated into the dose calculation matrix with a certain probability. Normalization with the reciprocal of this probability preserves the total energy, even though many matrix elements are omitted. Three probability distributions were tested to find the most accurate one for a given memory size. The sampling method is compared with the use of a fully filled matrix and with the well-known method of just cutting off the pencil beam at a certain lateral distance. A clinical example of a head and neck case is presented. It turns out that a sampled dose calculation matrix with only 1/3 of the entries of the fully filled matrix does not sacrifice the quality of the resulting plans, whereas the cutoff method results in a suboptimal treatment plan
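
    The reweighting trick described above, keep a small matrix entry only with probability p and divide kept entries by p so the expected dose is unchanged, can be sketched directly. The mock dose matrix and the particular probability model below are illustrative choices, not one of the three distributions tested in the paper.

```python
import numpy as np

# Minimal sketch of importance-sampled sparsification of a dose matrix:
# kept entries are reweighted by 1/p so the expected dose is preserved.
rng = np.random.default_rng(4)
D = rng.exponential(scale=1.0, size=(2000, 500)) ** 3      # mock dose matrix
D /= D.max()

p = np.clip(D / 0.05, 0.0, 1.0)            # entries above 5% of the max always kept
keep = rng.uniform(size=D.shape) < p
D_sparse = np.where(keep, D / np.maximum(p, 1e-12), 0.0)   # reweight kept entries

fluence = rng.uniform(size=D.shape[1])
print("kept fraction     :", round(keep.mean(), 3))
print("dose sum, full    :", round(float((D @ fluence).sum()), 1))
print("dose sum, sampled :", round(float((D_sparse @ fluence).sum()), 1))
```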

  19. Optimal sampling theory and population modelling - Application to determination of the influence of the microgravity environment on drug distribution and elimination

    Science.gov (United States)

    Drusano, George L.

    1991-01-01

    The optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with NONMEM (Sheiner et al., 1977) approach, in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.

  20. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmet Demir

    2017-01-01

    Full Text Available In the fields which require finding the most appropriate value, optimization became a vital approach to employ effective solutions. With the use of optimization techniques, many different fields in modern life have found solutions to their real-world based problems. In this context, classical optimization techniques have had an important popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an important role in providing software related techniques to improve the associated literature. Today, intelligent optimization techniques based on Artificial Intelligence are widely used for optimization problems. The objective of this paper is to provide a comparative study on the employment of classical optimization solutions and Artificial Intelligence solutions for enabling readers to have an idea about the potential of intelligent optimization techniques. At this point, two recently developed intelligent optimization algorithms, the Vortex Optimization Algorithm (VOA) and the Cognitive Development Optimization Algorithm (CoDOA), have been used to solve some multidisciplinary optimization problems provided in the source book Thomas' Calculus 11th Edition and the obtained results have been compared with classical optimization solutions.

  1. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2017-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 17-19, 2016. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, health, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  2. Modeling and Optimization : Theory and Applications Conference

    CERN Document Server

    Terlaky, Tamás

    2015-01-01

    This volume contains a selection of contributions that were presented at the Modeling and Optimization: Theory and Applications Conference (MOPTA) held at Lehigh University in Bethlehem, Pennsylvania, USA on August 13-15, 2014. The conference brought together a diverse group of researchers and practitioners, working on both theoretical and practical aspects of continuous or discrete optimization. Topics presented included algorithms for solving convex, network, mixed-integer, nonlinear, and global optimization problems, and addressed the application of deterministic and stochastic optimization techniques in energy, finance, logistics, analytics, healthcare, and other important fields. The contributions contained in this volume represent a sample of these topics and applications and illustrate the broad diversity of ideas discussed at the meeting.

  3. Sleep and optimism: A longitudinal study of bidirectional causal relationship and its mediating and moderating variables in a Chinese student sample.

    Science.gov (United States)

    Lau, Esther Yuet Ying; Hui, C Harry; Lam, Jasmine; Cheung, Shu-Fai

    2017-01-01

    While both sleep and optimism have been found to be predictive of well-being, few studies have examined their relationship with each other. Neither do we know much about the mediators and moderators of the relationship. This study investigated (1) the causal relationship between sleep quality and optimism in a college student sample, (2) the role of symptoms of depression, anxiety, and stress as mediators, and (3) how circadian preference might moderate the relationship. Internet survey data were collected from 1,684 full-time university students (67.6% female, mean age = 20.9 years, SD = 2.66) at three time-points, spanning about 19 months. Measures included the Attributional Style Questionnaire, the Pittsburgh Sleep Quality Index, the Composite Scale of Morningness, and the Depression Anxiety Stress Scale-21. Moderate correlations were found among sleep quality, depressive mood, stress symptoms, anxiety symptoms, and optimism. Cross-lagged analyses showed a bidirectional effect between optimism and sleep quality. Moreover, path analyses demonstrated that anxiety and stress symptoms partially mediated the influence of optimism on sleep quality, while depressive mood partially mediated the influence of sleep quality on optimism. In support of our hypothesis, sleep quality affects mood symptoms and optimism differently for different circadian preferences. Poor sleep results in depressive mood and thus pessimism in non-morning persons only. In contrast, the aggregated (direct and indirect) effects of optimism on sleep quality were invariant of circadian preference. Taken together, people who are pessimistic generally have more anxious mood and stress symptoms, which adversely affect sleep while morningness seems to have a specific protective effect countering the potential damage poor sleep has on optimism. In conclusion, optimism and sleep quality were both cause and effect of each other. Depressive mood partially explained the effect of sleep quality on optimism

  4. An 'intelligent' approach to radioimmunoassay sample counting employing a microprocessor-controlled sample counter

    International Nuclear Information System (INIS)

    Ekins, R.P.; Sufi, S.; Malan, P.G.

    1978-01-01

    The enormous impact on medical science in the last two decades of microanalytical techniques employing radioisotopic labels has, in turn, generated a large demand for automatic radioisotopic sample counters. Such instruments frequently comprise the most important item of capital equipment required in the use of radioimmunoassay and related techniques and often form a principal bottleneck in the flow of samples through a busy laboratory. It is therefore imperative that such instruments should be used 'intelligently' and in an optimal fashion to avoid both the very large capital expenditure involved in the unnecessary proliferation of instruments and the time delays arising from their sub-optimal use. Most of the current generation of radioactive sample counters nevertheless rely on primitive control mechanisms based on a simplistic statistical theory of radioactive sample counting which preclude their efficient and rational use. The fundamental principle upon which this approach is based is that it is useless to continue counting a radioactive sample for a time longer than that required to yield a significant increase in precision of the measurement. Thus, since substantial experimental errors occur during sample preparation, these errors should be assessed and must be related to the counting errors for that sample. The objective of the paper is to demonstrate that the combination of a realistic statistical assessment of radioactive sample measurement, together with the more sophisticated control mechanisms that modern microprocessor technology makes possible, may often enable savings in counter usage of the order of 5- to 10-fold to be made. (author)

  5. Optimizing the data acquisition rate for a remotely controllable structural monitoring system with parallel operation and self-adaptive sampling

    International Nuclear Information System (INIS)

    Sheng, Wenjuan; Guo, Aihuang; Liu, Yang; Azmi, Asrul Izam; Peng, Gang-Ding

    2011-01-01

    We present a novel technique that optimizes the real-time remote monitoring and control of dispersed civil infrastructures. The monitoring system is based on fiber Bragg grating (FBG) sensors and transfers data via Ethernet. This technique combines parallel operation and self-adaptive sampling to increase the data acquisition rate in remotely controllable structural monitoring systems. The compact parallel operation mode is highly efficient at achieving the highest possible data acquisition rate for the FBG sensor based local data acquisition system. Self-adaptive sampling is introduced to continuously coordinate local acquisition and remote control for data acquisition rate optimization. Key issues which impact the operation of the whole system, such as the real-time data acquisition rate, data processing capability, and buffer usage, are investigated. The results show that, by introducing parallel operation and self-adaptive sampling, the data acquisition rate can be increased by several times without affecting the system operating performance on both local data acquisition and remote process control

  6. Monte Carlo importance sampling for the MCNP trademark general source

    International Nuclear Information System (INIS)

    Lichtenstein, H.

    1996-01-01

    Research was performed to develop an importance sampling procedure for a radiation source. The procedure was developed for the MCNP radiation transport code, but the approach itself is general and can be adapted to other Monte Carlo codes. The procedure, as adapted to MCNP, relies entirely on existing MCNP capabilities. It has been tested for very complex descriptions of a general source, in the context of the design of spent-reactor-fuel storage casks. Dramatic improvements in calculation efficiency have been observed in some test cases. In addition, the procedure has been found to provide an acceleration to acceptable convergence, as well as the benefit of quickly identifying user specified variance-reduction in the transport that effects unstable convergence

  7. Representative sampling of animal feed and mixtures in the Danish agricultural sector

    DEFF Research Database (Denmark)

    Petersen, Lars; Esbensen, Kim Harry

    2005-01-01

    Sampling of grain, animal feeds (solid & liquid) including important mineral mixtures in the Danish agricultural sector is subject to an ongoing investigation with the objective of improving existing (sub-optimal) sampling procedures. Results from the first 6 months are presented here; the projec...

  8. Calculation of parameter failure probability of thermodynamic system by response surface and importance sampling method

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Chen Lisheng; Zhang Yangwei

    2012-01-01

    In this paper, the combined method of response surface and importance sampling was applied to the calculation of the parameter failure probability of a thermodynamic system. A mathematical model was presented for parameter failure of the physical process in the thermodynamic system, from which the combined computational model of response surface and importance sampling was established; the performance degradation model of the components and the simulation process of parameter failure in the physical process of the thermodynamic system were also presented. The parameter failure probability of the water purification system in a nuclear reactor was obtained by the combined method. The results show that the combined method is effective for calculating the parameter failure probability of a thermodynamic system with high dimensionality and non-linear characteristics: it achieves satisfactory precision with less computing time than the direct sampling method, while avoiding the drawbacks of the response surface method. (authors)
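
    The importance sampling step of such a combined method can be sketched by replacing the fitted response surface with an analytic limit-state function in standard normal space, so the exact failure probability is known. The limit state, design point and sample size below are illustrative assumptions, not the paper's water purification system model.

```python
import numpy as np
from scipy.stats import norm

# Hedged sketch of importance sampling for a failure probability: an analytic
# limit-state function g(u) stands in for the fitted response surface, and the
# sampling density is a unit normal shifted to the design point.
rng = np.random.default_rng(5)
beta = 3.0
g = lambda u: beta - (u[:, 0] + u[:, 1]) / np.sqrt(2)    # failure if g <= 0
u_star = np.array([beta, beta]) / np.sqrt(2)             # design point

n = 20_000
u = rng.standard_normal((n, 2)) + u_star                  # draws from shifted normal
log_w = -0.5 * (u ** 2).sum(1) + 0.5 * ((u - u_star) ** 2).sum(1)
pf = np.mean((g(u) <= 0) * np.exp(log_w))

print("IS estimate:", pf, " exact:", norm.cdf(-beta))     # both near 1.35e-3
```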

  9. Neutron activation analysis for the optimal sampling and extraction of extractable organohalogens in human hair

    International Nuclear Information System (INIS)

    Zhang, H.; Chai, Z.F.; Sun, H.B.; Xu, H.F.

    2005-01-01

    Many persistent organohalogen compounds such as DDTs and polychlorinated biphenyls have caused serious environmental pollution problems that now involve all life. It is known that neutron activation analysis (NAA) is a very convenient method for halogen analysis and is also the only method currently available for simultaneously determining organic chlorine, bromine and iodine in one extract. Human hair is a convenient material with which to evaluate the burden of such compounds in the human body and can be easily collected from people over wide ranges of age, sex, residential area, eating habits and working environment. To effectively extract organohalogen compounds from human hair, in the present work the optimal Soxhlet extraction times of extractable organohalogen (EOX) and extractable persistent organohalogen (EPOX) from hair of different lengths were studied by NAA. The results indicated that the optimal Soxhlet extraction time of EOX and EPOX from human hair was 8-11 h, and the highest EOX and EPOX contents were observed in the hair powder extract. The concentrations of both EOX and EPOX in different hair sections were in the order hair powder ≥ 2 mm > 5 mm, which indicates that milling hair samples into powder or cutting them into very short sections gives not only a more homogeneous hair sample but also the best extraction efficiency.

  10. Dielectric sample with two-layer charge distribution for space charge calibration purposes

    DEFF Research Database (Denmark)

    Holbøll, Joachim; Henriksen, Mogens; Rasmussen, C.

    2002-01-01

    The present paper describes a dielectric test sample with two very narrow concentrations of bulk charge, achieved by two internal electrodes that do not affect the acoustical properties of the sample, a fact important for optimal application of most space charge measuring systems. Space charge...

  11. Catching Stardust and Bringing it Home: The Astronomical Importance of Sample Return

    Science.gov (United States)

    Brownlee, D.

    2002-12-01

    orbit of Mars will provide important insight into the materials, environments and processes that occurred from the central regions to the outer fringes of the solar nebula. One of the most exciting aspects of the January 2006 return of comet samples will be the synergistic linking of data on real comet and interstellar dust samples with the vast amount of astronomical data on these materials and on analogous particles that orbit other stars. Stardust is a NASA Discovery mission that has successfully traveled over 2.5 billion kilometers.

  12. Optimal design of constant-stress accelerated degradation tests using the M-optimality criterion

    International Nuclear Information System (INIS)

    Wang, Han; Zhao, Yu; Ma, Xiaobing; Wang, Hongyu

    2017-01-01

    In this paper, we propose the M-optimality criterion for designing constant-stress accelerated degradation tests (ADTs). The newly proposed criterion concentrates on the degradation mechanism equivalence rather than evaluation precision or prediction accuracy which is usually considered in traditional optimization criteria. Subject to the constraints of total sample number, test termination time as well as the stress region, an optimum constant-stress ADT plan is derived by determining the combination of stress levels and the number of samples allocated to each stress level, when the degradation path comes from inverse Gaussian (IG) process model with covariates and random effects. A numerical example is presented to verify the robustness of our proposed optimum plan and compare its efficiency with other test plans. Results show that, with a slightly relaxed requirement of evaluation precision and prediction accuracy, our proposed optimum plan reduces the dispersion of the estimated acceleration factor between the usage stress level and a higher accelerated stress level, which makes an important contribution to reliability demonstration and assessment tests. - Highlights: • We establish the necessary conditions for degradation mechanism equivalence of ADTs. • We propose the M-optimality criterion for designing constant-stress ADT plans. • The M-optimality plan reduces the dispersion of the estimated accelerated factors. • An electrical connector with its stress relaxation data is used for illustration.

  13. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    Energy Technology Data Exchange (ETDEWEB)

    Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)

    2016-01-15

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality due to different interpolation schemes with emphasis on ELF mesh optimization procedures and we argue that the optimal fitting should be based on preliminary log–log scaling data transforms by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
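
    As a concrete illustration of the log–log fitting idea (a minimal sketch, not the authors' C++ implementation), the Python snippet below log-transforms synthetic ELF samples and interpolates them with SciPy's PCHIP interpolant, a local monotonicity-preserving cubic used here as a stand-in for the Steffen spline; the energies and ELF values are invented.

```python
# Sketch of log-log interpolation of a sampled energy-loss function (ELF).
# Assumption: SciPy's PCHIP interpolant stands in for the Steffen spline; both
# are local, monotonicity-preserving cubics with continuous first derivatives.
import numpy as np
from scipy.interpolate import PchipInterpolator

def fit_elf_loglog(energy_ev, elf_values):
    """Return a callable ELF(E) built on log-log transformed data."""
    x = np.log(energy_ev)          # log-transform abscissa (photon energy)
    y = np.log(elf_values)         # log-transform ordinate (ELF samples)
    spline = PchipInterpolator(x, y, extrapolate=False)
    return lambda e: np.exp(spline(np.log(e)))

# Usage on synthetic, positive ELF samples (values invented for illustration)
energy = np.array([1.0, 2.0, 5.0, 10.0, 50.0, 200.0, 1000.0])   # eV
elf    = np.array([0.02, 0.15, 0.9, 0.4, 0.05, 0.008, 0.001])
elf_fn = fit_elf_loglog(energy, elf)
print(elf_fn(np.array([3.0, 20.0, 500.0])))
```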

  14. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    International Nuclear Information System (INIS)

    Maglevanny, I.I.; Smolar, V.A.

    2016-01-01

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality due to different interpolation schemes with emphasis on ELF mesh optimization procedures and we argue that the optimal fitting should be based on preliminary log–log scaling data transforms by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.

  15. Multiple response optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry with sample injection as detergent emulsion

    International Nuclear Information System (INIS)

    Brum, Daniel M.; Lima, Claudio F.; Robaina, Nicolle F.; Fonseca, Teresa Cristina O.; Cassella, Ricardo J.

    2011-01-01

    The present paper reports the optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS) employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized in relation to the experimental conditions for the emulsion formation and taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as a dependent variable. The three factors related to the optimization process were: the concentration of HNO3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in aqueous solution used to emulsify the sample and the volume of solution. At optimum conditions, it was possible to obtain satisfactory results with an emulsion formed by mixing 4 mL of the samples with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO3 medium. The resulting emulsion was stable for 250 min, at least, and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure and recovery rates, in the range of 88-105%; 94-118% and 95-120%, were verified for Cu, Fe and Pb, respectively.

  16. Multiple response optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry with sample injection as detergent emulsion

    Energy Technology Data Exchange (ETDEWEB)

    Brum, Daniel M.; Lima, Claudio F. [Departamento de Quimica, Universidade Federal de Vicosa, A. Peter Henry Rolfs s/n, Vicosa/MG, 36570-000 (Brazil); Robaina, Nicolle F. [Departamento de Quimica Analitica, Universidade Federal Fluminense, Outeiro de S.J. Batista s/n, Centro, Niteroi/RJ, 24020-141 (Brazil); Fonseca, Teresa Cristina O. [Petrobras, Cenpes/PDEDS/QM, Av. Horacio Macedo 950, Ilha do Fundao, Rio de Janeiro/RJ, 21941-915 (Brazil); Cassella, Ricardo J., E-mail: cassella@vm.uff.br [Departamento de Quimica Analitica, Universidade Federal Fluminense, Outeiro de S.J. Batista s/n, Centro, Niteroi/RJ, 24020-141 (Brazil)

    2011-05-15

    The present paper reports the optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS) employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized in relation to the experimental conditions for the emulsion formation and taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as a dependent variable. The three factors related to the optimization process were: the concentration of HNO3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in aqueous solution used to emulsify the sample and the volume of solution. At optimum conditions, it was possible to obtain satisfactory results with an emulsion formed by mixing 4 mL of the samples with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO3 medium. The resulting emulsion was stable for 250 min, at least, and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure and recovery rates, in the range of 88-105%; 94-118% and 95-120%, were verified for Cu, Fe and Pb, respectively.

  17. Focusing light through dynamical samples using fast continuous wavefront optimization.

    Science.gov (United States)

    Blochet, B; Bourdieu, L; Gigan, S

    2017-12-01

    We describe a fast continuous optimization wavefront shaping system able to focus light through dynamic scattering media. A micro-electro-mechanical system-based spatial light modulator, a fast photodetector, and field programmable gate array electronics are combined to implement a continuous optimization of a wavefront with a single-mode optimization rate of 4.1 kHz. The system performances are demonstrated by focusing light through colloidal solutions of TiO2 particles in glycerol with tunable temporal stability.
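
    The mode-by-mode optimization loop can be illustrated with a toy model in which the scattering sample is a random complex transmission vector and the feedback signal is the intensity at the target focus. This Python sketch is only illustrative: the 64-mode modulator, the 8 test phases per mode and the repeated sweeps (mimicking continuous re-optimization) are assumptions, not the authors' hardware parameters.

```python
# Toy sketch of sequential (mode-by-mode) wavefront optimization for focusing
# through a scattering medium. The medium is modeled by a random complex
# transmission vector t; the detected intensity at the target focus is
# |sum_k t_k * exp(i*phi_k)|^2.
import numpy as np

rng = np.random.default_rng(0)
n_modes = 64
t = (rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)) / np.sqrt(2)

phi = np.zeros(n_modes)                      # current SLM phase pattern

def focus_intensity(phases):
    return np.abs(np.sum(t * np.exp(1j * phases))) ** 2

for sweep in range(3):                       # keep cycling to track slow drift
    for k in range(n_modes):
        test_phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)
        intensities = []
        for p in test_phases:
            trial = phi.copy()
            trial[k] = p                     # test one phase value on mode k
            intensities.append(focus_intensity(trial))
        phi[k] = test_phases[int(np.argmax(intensities))]
    enhancement = focus_intensity(phi) / np.sum(np.abs(t) ** 2)
    print(f"sweep {sweep}: enhancement over unoptimized mean = {enhancement:.1f}")
```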

  18. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    Science.gov (United States)

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost
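
    For the sample-based setting, a minimal baseline (distinct from the feature-function algorithms developed in the thesis) is to solve the discrete L2 optimal transport between two equal-size sample sets as a linear assignment problem; the Python sketch below uses SciPy and synthetic Gaussian samples.

```python
# Minimal sketch of the sample-based L2 optimal transport cost between two
# equal-size point clouds, solved as a linear assignment problem. This is a
# generic baseline, not the feature-function algorithm developed in the thesis.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def l2_ot_cost(x, y):
    """x, y: (n, d) arrays of samples from the two distributions."""
    cost = cdist(x, y, metric="sqeuclidean")      # pairwise squared distances
    rows, cols = linear_sum_assignment(cost)      # optimal one-to-one matching
    return cost[rows, cols].mean(), cols          # mean cost and the sample map

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=(200, 2))           # synthetic source samples
y = rng.normal(2.0, 1.0, size=(200, 2))           # synthetic target samples
cost, mapping = l2_ot_cost(x, y)
print(f"empirical squared-Wasserstein cost ≈ {cost:.2f}")
```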

  19. Efficiency of analytical and sampling-based uncertainty propagation in intensity-modulated proton therapy

    Science.gov (United States)

    Wahl, N.; Hennig, P.; Wieser, H. P.; Bangert, M.

    2017-07-01

    The sensitivity of intensity-modulated proton therapy (IMPT) treatment plans to uncertainties can be quantified and mitigated with robust/min-max and stochastic/probabilistic treatment analysis and optimization techniques. Those methods usually rely on sparse random, importance, or worst-case sampling. Inevitably, this imposes a trade-off between computational speed and accuracy of the uncertainty propagation. Here, we investigate analytical probabilistic modeling (APM) as an alternative for uncertainty propagation and minimization in IMPT that does not rely on scenario sampling. APM propagates probability distributions over range and setup uncertainties via a Gaussian pencil-beam approximation into moments of the probability distributions over the resulting dose in closed form. It supports arbitrary correlation models and allows for efficient incorporation of fractionation effects regarding random and systematic errors. We evaluate the trade-off between run-time and accuracy of APM uncertainty computations on three patient datasets. Results are compared against reference computations facilitating importance and random sampling. Two approximation techniques to accelerate uncertainty propagation and minimization based on probabilistic treatment plan optimization are presented. Runtimes are measured on CPU and GPU platforms, dosimetric accuracy is quantified in comparison to a sampling-based benchmark (5000 random samples). APM accurately propagates range and setup uncertainties into dose uncertainties at competitive run-times (GPU: ≤ 5 min). The resulting standard deviation (expectation value) of dose shows average global γ(3%/3 mm) pass rates between 94.2% and 99.9% (98.4% and 100.0%). All investigated importance sampling strategies provided less accuracy at higher run-times considering only a single fraction. Considering fractionation, APM uncertainty propagation and treatment plan optimization was proven to be possible at constant time complexity
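
    The sampling-based benchmark referred to above can be illustrated, in heavily simplified form, by plain Monte Carlo propagation of Gaussian setup and range errors through a one-dimensional, Gaussian-shaped depth-dose stand-in; the dose model and error magnitudes in this Python sketch are purely illustrative assumptions.

```python
# Toy sketch of sampling-based uncertainty propagation: propagate Gaussian
# setup (lateral shift) and range errors through a 1D depth-dose profile by
# plain Monte Carlo, yielding per-voxel expectation and standard deviation of
# dose. The dose model and error magnitudes are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)
z = np.linspace(0.0, 100.0, 200)                 # depth grid in mm

def dose_profile(z, range_mm=70.0, width_mm=8.0):
    # crude stand-in for a proton depth-dose curve (Gaussian "peak")
    return np.exp(-0.5 * ((z - range_mm) / width_mm) ** 2)

n_samples = 5000
setup_sigma, range_sigma = 2.0, 3.0              # assumed 1-sigma errors in mm
doses = np.empty((n_samples, z.size))
for i in range(n_samples):
    shift = rng.normal(0.0, setup_sigma)         # setup error (beam shift)
    d_range = rng.normal(0.0, range_sigma)       # range uncertainty
    doses[i] = dose_profile(z - shift, range_mm=70.0 + d_range)

expected_dose = doses.mean(axis=0)               # per-voxel expectation value
dose_std = doses.std(axis=0)                     # per-voxel standard deviation
print(expected_dose.max(), dose_std.max())
```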

  20. Optimism, Social Support, and Adjustment in African American Women with Breast Cancer

    Science.gov (United States)

    Shelby, Rebecca A.; Crespin, Tim R.; Wells-Di Gregorio, Sharla M.; Lamdan, Ruth M.; Siegel, Jamie E.; Taylor, Kathryn L.

    2013-01-01

    Past studies show that optimism and social support are associated with better adjustment following breast cancer treatment. Most studies have examined these relationships in predominantly non-Hispanic White samples. The present study included 77 African American women treated for nonmetastatic breast cancer. Women completed measures of optimism, social support, and adjustment within 10-months of surgical treatment. In contrast to past studies, social support did not mediate the relationship between optimism and adjustment in this sample. Instead, social support was a moderator of the optimism-adjustment relationship, as it buffered the negative impact of low optimism on psychological distress, well-being, and psychosocial functioning. Women with high levels of social support experienced better adjustment even when optimism was low. In contrast, among women with high levels of optimism, increasing social support did not provide an added benefit. These data suggest that perceived social support is an important resource for women with low optimism. PMID:18712591

  1. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.

    Science.gov (United States)

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung

    2017-04-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, and successfully translates these natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to equally allocate computational effort among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
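
    The core of the OCBA idea is the asymptotic allocation rule that distributes extra replications according to each design's noise-to-gap ratio. The Python sketch below implements that classical rule for a minimization problem with hypothetical means and standard deviations; the full coupling to the PSO update loop described in the paper is not reproduced.

```python
# Sketch of the classical OCBA allocation rule: given current sample means and
# standard deviations of each candidate's fitness, distribute an additional
# simulation budget so that the probability of correctly selecting the best
# candidate is (asymptotically) maximized. Inputs below are invented.
import numpy as np

def ocba_allocation(means, stds, budget):
    """Approximate integer replication counts per design (total ≈ budget).
    Assumes minimization, distinct means, and strictly positive stds."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = int(np.argmin(means))                       # current best design
    others = [i for i in range(len(means)) if i != b]
    ref = others[0]                                 # reference non-best design
    delta = means - means[b]                        # optimality gaps
    ratio = np.zeros_like(means)
    for i in others:                                # N_i / N_ref = (s_i*d_ref)^2/(s_ref*d_i)^2
        ratio[i] = (stds[i] * delta[ref]) ** 2 / (stds[ref] * delta[i]) ** 2
    # N_b = s_b * sqrt(sum over i != b of N_i^2 / s_i^2)
    ratio[b] = stds[b] * np.sqrt(np.sum((ratio[others] / stds[others]) ** 2))
    alloc = budget * ratio / ratio.sum()
    return np.maximum(np.round(alloc).astype(int), 1)

print(ocba_allocation([1.0, 1.2, 2.0, 3.5], [0.5, 0.6, 0.4, 0.7], budget=100))
```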

  2. A simple optimized microwave digestion method for multielement monitoring in mussel samples

    International Nuclear Information System (INIS)

    Saavedra, Y.; Gonzalez, A.; Fernandez, P.; Blanco, J.

    2004-01-01

    With the aim of obtaining a set of common decomposition conditions allowing the determination of several metals in mussel tissue (Hg by cold vapour atomic absorption spectrometry; Cu and Zn by flame atomic absorption spectrometry; and Cd, Pb, Cr, Ni, As and Ag by electrothermal atomic absorption spectrometry), a factorial experiment was carried out using as factors the sample weight, digestion time and acid addition. It was found that the optimal conditions were 0.5 g of freeze-dried and triturated sample with 6 ml of nitric acid, subjected to microwave heating for 20 min at 180 psi. This pre-treatment, using only one step and one oxidative reagent, was suitable for determining the nine metals studied with no subsequent handling of the digest. It was possible to carry out the atomic absorption determinations using calibrations with aqueous standards and matrix modifiers for cadmium, lead, chromium, arsenic and silver. The accuracy of the procedure was checked using oyster tissue (SRM 1566b) and mussel tissue (CRM 278R) certified reference materials. The method is now used routinely to monitor these metals in wild and cultivated mussels, and found to be good

  3. Optimal river monitoring network using optimal partition analysis: a case study of Hun River, Northeast China.

    Science.gov (United States)

    Wang, Hui; Liu, Chunyue; Rong, Luge; Wang, Xiaoxu; Sun, Lina; Luo, Qing; Wu, Hao

    2018-01-09

    River monitoring networks play an important role in water environmental management and assessment, and it is critical to develop an appropriate method to optimize the monitoring network. In this study, an effective method was proposed based on the attainment rate of National Grade III water quality, optimal partition analysis and Euclidean distance, and Hun River was taken as a method validation case. There were 7 sampling sites in the monitoring network of the Hun River, and 17 monitoring items were analyzed once a month during January 2009 to December 2010. The results showed that the main monitoring items in the surface water of Hun River were ammonia nitrogen (NH 4 + -N), chemical oxygen demand, and biochemical oxygen demand. After optimization, the required number of monitoring sites was reduced from seven to three, and 57% of the cost was saved. In addition, there were no significant differences between non-optimized and optimized monitoring networks, and the optimized monitoring networks could correctly represent the original monitoring network. The duplicate setting degree of monitoring sites decreased after optimization, and the rationality of the monitoring network was improved. Therefore, the optimal method was identified as feasible, efficient, and economic.

  4. Optimizing sampling approaches along ecological gradients

    DEFF Research Database (Denmark)

    Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel

    2016-01-01

    1. Natural scientists and especially ecologists use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying...... patterns in a statistically solid and reproducible manner, given the normal restrictions in labour, time and money. However, a technical guideline about an adequate sampling design to maximize prediction success under restricted resources is lacking. This study aims at developing such a solid...... and reproducible guideline for sampling along gradients in all fields of ecology and science in general. 2. We conducted simulations with artificial data for five common response types known in ecology, each represented by a simple function (no response, linear, exponential, symmetric unimodal and asymmetric...

  5. Optimized cryo-focused ion beam sample preparation aimed at in situ structural studies of membrane proteins.

    Science.gov (United States)

    Schaffer, Miroslava; Mahamid, Julia; Engel, Benjamin D; Laugks, Tim; Baumeister, Wolfgang; Plitzko, Jürgen M

    2017-02-01

    While cryo-electron tomography (cryo-ET) can reveal biological structures in their native state within the cellular environment, it requires the production of high-quality frozen-hydrated sections that are thinner than 300 nm. Sample requirements are even more stringent for the visualization of membrane-bound protein complexes within dense cellular regions. Focused ion beam (FIB) sample preparation for transmission electron microscopy (TEM) is a well-established technique in material science, but there are only a few examples of biological samples exhibiting sufficient quality for high-resolution in situ investigation by cryo-ET. In this work, we present a comprehensive description of a cryo-sample preparation workflow incorporating additional conductive-coating procedures. These coating steps eliminate the adverse effects of sample charging on imaging with the Volta phase plate, allowing data acquisition with improved contrast. We discuss optimized FIB milling strategies adapted from material science and each critical step required to produce homogeneously thin, non-charging FIB lamellas that make large areas of unperturbed HeLa and Chlamydomonas cells accessible for cryo-ET at molecular resolution. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Transmission characteristics and optimal diagnostic samples to detect an FMDV infection in vaccinated and non-vaccinated sheep

    NARCIS (Netherlands)

    Eble, P.L.; Orsel, K.; Kluitenberg-van Hemert, F.; Dekker, A.

    2015-01-01

    We wanted to quantify transmission of FMDV Asia-1 in sheep and to evaluate which samples would be optimal for detection of an FMDV infection in sheep. For this, we used 6 groups of 4 non-vaccinated and 6 groups of 4 vaccinated sheep. In each group 2 sheep were inoculated and contact exposed to 2

  7. Application of importance sampling method in sliding failure simulation of caisson breakwaters

    Science.gov (United States)

    Wang, Yu-chi; Wang, Yuan-zhan; Li, Qing-mei; Chen, Liang-zhi

    2016-06-01

    It is assumed that the storm wave takes place once a year during the design period, and N histories of storm waves are generated on the basis of the wave spectrum corresponding to the N-year design period. The responses of the breakwater to the N histories of storm waves in the N-year design period are calculated with a mass-spring-dashpot model and taken as a set of samples. The failure probability of caisson breakwaters during the design period of N years is obtained by the statistical analysis of many sets of samples. The key issue is to improve the efficiency of the common Monte Carlo simulation method for estimating the failure probability of caisson breakwaters over the complete life cycle. In this paper, the kernel method of importance sampling, which can greatly increase the efficiency of failure probability calculation of caisson breakwaters, is proposed to estimate the failure probability of caisson breakwaters in the complete life cycle. The effectiveness of the kernel method is investigated by an example. It is indicated that the calculation efficiency of the kernel method is over 10 times that of the common Monte Carlo simulation method.
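
    The variance-reduction principle behind such schemes can be shown with a generic importance-sampling estimator of a small failure probability, using a Gaussian proposal shifted toward the failure region; the limit-state function and the shift in this Python sketch are invented for illustration and do not represent the paper's kernel-based proposal or the breakwater response model.

```python
# Generic sketch of importance sampling for a small failure probability
# P(g(X) < 0) with standard-normal inputs, using a Gaussian proposal shifted
# toward the failure region.
import numpy as np

rng = np.random.default_rng(7)

def limit_state(x):
    # toy limit-state function: failure when g < 0 (assumption for illustration)
    return 4.5 - x[:, 0] - 0.5 * x[:, 1]

n = 20_000
shift = np.array([3.0, 1.5])                 # proposal mean near the failure region
x = rng.normal(size=(n, 2)) + shift          # samples from the proposal N(shift, I)
fail = limit_state(x) < 0.0
# likelihood ratio p(x)/q(x) for target N(0, I) vs proposal N(shift, I)
log_w = -x @ shift + 0.5 * shift @ shift
weights = fail * np.exp(log_w)
p_fail = weights.mean()
std_err = weights.std() / np.sqrt(n)
print(f"IS estimate: {p_fail:.3e} ± {std_err:.1e}")
```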

  8. A Counterexample on Sample-Path Optimality in Stable Markov Decision Chains with the Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2014-01-01

    Roč. 163, č. 2 (2014), s. 674-684 ISSN 0022-3239 Grant - others:PSF Organization(US) 012/300/02; CONACYT (México) and ASCR (Czech Republic)(MX) 171396 Institutional support: RVO:67985556 Keywords : Strong sample-path optimality * Lyapunov function condition * Stationary policy * Expected average reward criterion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.509, year: 2014 http://library.utia.cas.cz/separaty/2014/E/sladky-0432661.pdf

  9. Solid Phase Microextraction (SPME) in Determination of Pesticide Residues in Soil Samples

    Directory of Open Access Journals (Sweden)

    Rada Đurović

    2011-01-01

    Full Text Available The basic principles and application possibilities of the methods based on solid phase microextraction (SPME) in the analysis of pesticide residues in soil samples are presented in the paper. The most important experimental parameters which affect SPME efficacy in pesticide determination (type and thickness of the microextraction fiber, duration of microextraction, temperature at which it is conducted, effect of the addition of salts (the salting-out effect), temperature and time of desorption, the choice of the optimal solvent for pesticide extraction from the soil and the optimal number of extraction steps), as well as general guidelines for their optimization, are also shown. In the end, current applications of SPME methods in the analysis of pesticide residues in soil samples are presented.

  10. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

    This presentation describes essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); problems associated with weld crown varieties, RPV shell inner radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is described too. 7 pictures

  11. Population pharmacokinetic analysis of clopidogrel in healthy Jordanian subjects with emphasis on optimal sampling strategy.

    Science.gov (United States)

    Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A

    2013-05-01

    Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or to a lesser extent an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM(®) to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse phase high performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. A two-compartment model with EHC provided the best fit and reduced bias in C(max) (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model, AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.
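
    A structurally similar (but deliberately simplified) two-compartment model with first-order absorption and a lag-time can be simulated directly, for example with SciPy; the rate constants and volume below are invented, the enterohepatic-recycling component is omitted, and the evaluation times simply echo the seven-sample OSS quoted above.

```python
# Illustrative two-compartment model with first-order absorption and an
# absorption lag-time, integrated with SciPy. Parameter values are invented;
# the enterohepatic-recycling extension described in the paper is omitted.
import numpy as np
from scipy.integrate import solve_ivp

ka, ke, k12, k21 = 1.5, 0.3, 0.8, 0.4    # 1/h, hypothetical rate constants
tlag, dose, vc = 0.5, 75.0, 50.0         # h, mg, L (hypothetical)

def rhs(t, y):
    gut, central, peripheral = y
    absorbing = t >= tlag                           # absorption starts after lag
    dgut = -ka * gut if absorbing else 0.0
    dcen = (ka * gut if absorbing else 0.0) \
           - (ke + k12) * central + k21 * peripheral
    dper = k12 * central - k21 * peripheral
    return [dgut, dcen, dper]

t_obs = np.array([0.25, 1, 2, 4, 5, 6, 12])        # sparse sampling times (h)
sol = solve_ivp(rhs, (0.0, 24.0), [dose, 0.0, 0.0], t_eval=t_obs, max_step=0.1)
conc = sol.y[1] / vc                               # plasma concentration, mg/L
print(dict(zip(t_obs.tolist(), np.round(conc, 3))))
```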

  12. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
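
    The estimation core around which such a sequential design loop is wrapped is the EnKF analysis step. The Python sketch below shows a stochastic EnKF update of a parameter ensemble using sample covariances and perturbed observations; the linear toy forward model and all dimensions are assumptions for illustration only.

```python
# Minimal sketch of a stochastic ensemble Kalman filter (EnKF) analysis step:
# the ensemble of parameter vectors is updated toward perturbed observations
# using sample covariances. All names and dimensions are generic placeholders.
import numpy as np

def enkf_update(ensemble, sim_obs, obs, obs_err_std, rng):
    """ensemble: (n_ens, n_par); sim_obs: (n_ens, n_obs); obs: (n_obs,)."""
    n_ens = ensemble.shape[0]
    a = ensemble - ensemble.mean(axis=0)            # parameter anomalies
    d = sim_obs - sim_obs.mean(axis=0)              # predicted-data anomalies
    c_xy = a.T @ d / (n_ens - 1)                    # cross-covariance
    c_yy = d.T @ d / (n_ens - 1)                    # data covariance
    r = np.diag(np.full(obs.size, obs_err_std ** 2))
    gain = c_xy @ np.linalg.inv(c_yy + r)           # Kalman gain
    perturbed = obs + rng.normal(0.0, obs_err_std, size=sim_obs.shape)
    return ensemble + (perturbed - sim_obs) @ gain.T

rng = np.random.default_rng(3)
ens = rng.normal(0.0, 1.0, size=(100, 2))           # prior ensemble, 2 parameters
sim = ens @ np.array([[1.0, 0.5], [0.2, 1.0]]).T    # linear toy "forward model"
updated = enkf_update(ens, sim, obs=np.array([1.2, 0.4]), obs_err_std=0.1, rng=rng)
print(updated.mean(axis=0))                         # posterior parameter estimate
```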

  13. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    Science.gov (United States)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher-computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data is often invisible for linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically

  14. Demonstration and Optimization of BNFL's Pulsed Jet Mixing and RFD Sampling Systems Using NCAW Simulant

    International Nuclear Information System (INIS)

    Bontha, J.R.; Golcar, G.R.; Hannigan, N.

    2000-01-01

    The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and the RFD sampling devices represent very simple and efficient methods to mix and sample slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes it very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems is yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of the testing of BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft ID and 15-ft height dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: Demonstrate the effectiveness of the Pulsed Jet mixing system to thoroughly homogenize Hanford-type slurries over a range of solids loading; Minimize/optimize air usage by changing sequencing of the Pulsed Jet mixers or by altering cycle times; and Demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%

  15. Optimized pre-thinning procedures of ion-beam thinning for TEM sample preparation by magnetorheological polishing.

    Science.gov (United States)

    Luo, Hu; Yin, Shaohui; Zhang, Guanhua; Liu, Chunhui; Tang, Qingchun; Guo, Meijian

    2017-10-01

    Ion-beam-thinning is a well-established sample preparation technique for transmission electron microscopy (TEM), but tedious procedures and labor consuming pre-thinning could seriously reduce its efficiency. In this work, we present a simple pre-thinning technique by using magnetorheological (MR) polishing to replace manual lapping and dimpling, and demonstrate the successful preparation of electron-transparent single crystal silicon samples after MR polishing and single-sided ion milling. Dimples pre-thinned to less than 30 microns and with little mechanical surface damage were repeatedly produced under optimized MR polishing conditions. Samples pre-thinned by both MR polishing and traditional technique were ion-beam thinned from the rear side until perforation, and then observed by optical microscopy and TEM. The results show that the specimen pre-thinned by MR technique was free from dimpling related defects, which were still residual in sample pre-thinned by conventional technique. Nice high-resolution TEM images could be acquired after MR polishing and one side ion-thinning. MR polishing promises to be an adaptable and efficient method for pre-thinning in preparation of TEM specimens, especially for brittle ceramics. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Exploring structural variability in X-ray crystallographic models using protein local optimization by torsion-angle sampling

    International Nuclear Information System (INIS)

    Knight, Jennifer L.; Zhou, Zhiyong; Gallicchio, Emilio; Himmel, Daniel M.; Friesner, Richard A.; Arnold, Eddy; Levy, Ronald M.

    2008-01-01

    Torsion-angle sampling, as implemented in the Protein Local Optimization Program (PLOP), is used to generate multiple structurally variable single-conformer models which are in good agreement with X-ray data. An ensemble-refinement approach to differentiate between positional uncertainty and conformational heterogeneity is proposed. Modeling structural variability is critical for understanding protein function and for modeling reliable targets for in silico docking experiments. Because of the time-intensive nature of manual X-ray crystallographic refinement, automated refinement methods that thoroughly explore conformational space are essential for the systematic construction of structurally variable models. Using five proteins spanning resolutions of 1.0–2.8 Å, it is demonstrated how torsion-angle sampling of backbone and side-chain libraries with filtering against both the chemical energy, using a modern effective potential, and the electron density, coupled with minimization of a reciprocal-space X-ray target function, can generate multiple structurally variable models which fit the X-ray data well. Torsion-angle sampling as implemented in the Protein Local Optimization Program (PLOP) has been used in this work. Models with the lowest R free values are obtained when electrostatic and implicit solvation terms are included in the effective potential. HIV-1 protease, calmodulin and SUMO-conjugating enzyme illustrate how variability in the ensemble of structures captures structural variability that is observed across multiple crystal structures and is linked to functional flexibility at hinge regions and binding interfaces. An ensemble-refinement procedure is proposed to differentiate between variability that is a consequence of physical conformational heterogeneity and that which reflects uncertainty in the atomic coordinates

  17. Optimization of loop-mediated isothermal amplification (LAMP) assays for the detection of Leishmania DNA in human blood samples.

    Science.gov (United States)

    Abbasi, Ibrahim; Kirstein, Oscar D; Hailu, Asrat; Warburg, Alon

    2016-10-01

    Visceral leishmaniasis (VL), one of the most important neglected tropical diseases, is caused by Leishmania donovani, a eukaryotic protozoan parasite of the genus Leishmania; the disease is prevalent mainly in the Indian sub-continent, East Africa and Brazil. VL can be diagnosed by PCR amplifying ITS1 and/or kDNA genes. The current study involved the optimization of loop-mediated isothermal amplification (LAMP) for the detection of Leishmania DNA in human blood or tissue samples. Three LAMP systems were developed; in two of those the primers were designed based on shared regions of the ITS1 gene among different Leishmania species, while the primers for the third LAMP system were derived from a newly identified repeated region in the Leishmania genome. The LAMP tests were shown to be sufficiently sensitive to detect 0.1 pg of DNA from most Leishmania species. The green nucleic acid stain SYTO 16 was used here for the first time to allow real-time monitoring of LAMP amplification. The advantage of real time-LAMP using SYTO 16 over end-point LAMP product detection is discussed. The efficacy of the real time-LAMP tests for detecting Leishmania DNA in dried blood samples from volunteers living in endemic areas was compared with that of qRT-kDNA PCR. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  18. Simple and efficient importance sampling scheme for a tandem queue with server slow-down

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, W.R.W.; Mandjes, M.R.H.

    2008-01-01

    This paper considers importance sampling as a tool for rare-event simulation. The system at hand is a so-called tandem queue with slow-down, which essentially means that the server of the first queue (or: upstream queue) switches to a lower speed when the second queue (downstream queue) exceeds some
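
    The textbook building block behind such schemes — not the tandem-queue-with-slow-down construction itself — is the exponential change of measure for a single M/M/1 queue, where arrival and service rates are swapped and each sampled path is reweighted by its likelihood ratio. The Python sketch below estimates the probability that the queue reaches a high level before emptying and compares it with the gambler's-ruin formula.

```python
# Sketch of rare-event importance sampling for a single M/M/1 queue: estimate
# the probability that the queue level reaches N before returning to 0
# (starting from 1) by swapping arrival and service rates and reweighting each
# path with its likelihood ratio. Rates and N are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(11)
lam, mu, N = 0.3, 1.0, 25                    # arrival rate, service rate, level
p = lam / (lam + mu)                         # prob. of an "up" step (original measure)
p_is = mu / (lam + mu)                       # IS measure: swap the rates

def one_run():
    level, log_lr = 1, 0.0
    while 0 < level < N:
        if rng.random() < p_is:              # up step under the IS measure
            level += 1
            log_lr += np.log(p / p_is)
        else:                                # down step
            level -= 1
            log_lr += np.log((1 - p) / (1 - p_is))
    return np.exp(log_lr) if level == N else 0.0

est = np.array([one_run() for _ in range(20_000)])
exact = (1 - (mu / lam)) / (1 - (mu / lam) ** N)   # gambler's-ruin formula
print(f"IS estimate {est.mean():.3e}  (exact {exact:.3e})")
```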

  19. A Simulation Approach to Statistical Estimation of Multiperiod Optimal Portfolios

    Directory of Open Access Journals (Sweden)

    Hiroshi Shiraishi

    2012-01-01

    Full Text Available This paper discusses a simulation-based method for solving discrete-time multiperiod portfolio choice problems under an AR(1) process. The method is applicable even if the distributions of return processes are unknown. We first generate simulation sample paths of the random returns by using AR bootstrap. Then, for each sample path and each investment time, we obtain an optimal portfolio estimator, which optimizes a constant relative risk aversion (CRRA) utility function. When an investor considers an optimal investment strategy with portfolio rebalancing, it is convenient to introduce a value function. The most important difference between single-period portfolio choice problems and multiperiod ones is that the value function is time dependent. Our method takes care of the time dependency by using bootstrapped sample paths. Numerical studies are provided to examine the validity of our method. The result shows the necessity to take care of the time dependency of the value function.
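
    The path-generation step can be sketched as an AR(1) residual bootstrap: fit the autoregression by least squares, recentre the residuals, and resample them with replacement to build simulated return paths; the Python code below uses a synthetic return history and leaves the portfolio optimization itself aside.

```python
# Sketch of the AR(1) residual bootstrap used to generate simulated return
# paths when the innovation distribution is unknown: fit AR(1) coefficients by
# least squares, then resample the centred residuals to build new sample paths.
import numpy as np

def ar1_bootstrap_paths(returns, horizon, n_paths, rng):
    r = np.asarray(returns, float)
    x, y = r[:-1], r[1:]
    phi1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)     # OLS slope
    phi0 = y.mean() - phi1 * x.mean()                    # intercept
    resid = y - (phi0 + phi1 * x)
    resid -= resid.mean()                                # centre the residuals
    paths = np.empty((n_paths, horizon))
    last = np.full(n_paths, r[-1])
    for t in range(horizon):
        eps = rng.choice(resid, size=n_paths, replace=True)
        last = phi0 + phi1 * last + eps
        paths[:, t] = last
    return paths

rng = np.random.default_rng(5)
hist = rng.normal(0.002, 0.02, size=500)                 # synthetic return history
paths = ar1_bootstrap_paths(hist, horizon=12, n_paths=1000, rng=rng)
print(paths.shape, paths.mean())
```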

  20. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve cap emissivity from the Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  1. Importance of sampling frequency when collecting diatoms

    KAUST Repository

    Wu, Naicheng; Faber, Claas; Sun, Xiuming; Qu, Yueming; Wang, Chao; Ivetic, Snjezana; Riis, Tenna; Ulrich, Uta; Fohrer, Nicola

    2016-01-01

    There has been increasing interest in diatom-based bio-assessment but we still lack a comprehensive understanding of how to capture diatoms’ temporal dynamics with an appropriate sampling frequency (ASF). To cover this research gap, we collected

  2. Global Optimization using Interval Analysis : Interval Optimization for Aerospace Applications

    NARCIS (Netherlands)

    Van Kampen, E.

    2010-01-01

    Optimization is an important element in aerospace related research. It is encountered for example in trajectory optimization problems, such as: satellite formation flying, spacecraft re-entry optimization and airport approach and departure optimization; in control optimization, for example in

  3. Importance analysis for reconfigurable systems

    International Nuclear Information System (INIS)

    Si, Shubin; Levitin, Gregory; Dui, Hongyan; Sun, Shudong

    2014-01-01

    Importance measures are used in reliability engineering to rank the system components according to their contributions to proper functioning of the entire system and to find the most effective ways of reliability enhancement. Traditionally, the importance measures do not consider the possible change of system structure with the improvement of specific component reliability. However, if a component's reliability changes, the optimal system structure/configuration may also change and the importance of the corresponding component will depend on the chosen structure. When the most promising component reliability improvement is determined, the component importance should be taken into account with respect to the possible structure changes. This paper studies the component reliability importance indices with respect to the changes of the optimal component sequencing. This importance measure indicates the critical components in providing the system reliability enhancement by both enhancing the component's reliability and reconfiguring the system. Examples of linear consecutive-k-out-of-n: F and G systems are considered to demonstrate the change of the component Birnbaum importance with the optimal system reconfiguration. The results show that the change of the importance index corresponds to the change of the system optimal configuration and the importance index can change non-monotonically with the variation of the component reliability
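
    The Birnbaum importance that the paper builds on is I_B(i) = h(1_i, p) − h(0_i, p), the difference in system reliability with component i forced to work versus forced to fail. A brute-force Python sketch for a small linear consecutive-2-out-of-5:F system (with invented component reliabilities, and without the reconfiguration step studied in the paper) is given below.

```python
# Sketch of the Birnbaum importance I_B(i) = h(1_i, p) - h(0_i, p) for a linear
# consecutive-2-out-of-5:F system (the system fails when two adjacent
# components fail), computed by brute-force enumeration of component states.
from itertools import product
import numpy as np

p = np.array([0.90, 0.85, 0.95, 0.80, 0.88])      # illustrative reliabilities

def system_works(states):
    # states[i] == 1 means component i works; two adjacent failures fail the system
    return all(states[i] or states[i + 1] for i in range(len(states) - 1))

def reliability(p, pinned=None):
    """System reliability; `pinned = (i, s)` optionally fixes component i to state s."""
    rel = 0.0
    for states in product((0, 1), repeat=len(p)):
        if pinned is not None and states[pinned[0]] != pinned[1]:
            continue
        prob = np.prod([pi if s else 1 - pi for pi, s in zip(p, states)])
        if pinned is not None:                     # condition on the pinned component
            prob /= p[pinned[0]] if pinned[1] else 1 - p[pinned[0]]
        rel += prob * system_works(states)
    return rel

birnbaum = [reliability(p, (i, 1)) - reliability(p, (i, 0)) for i in range(len(p))]
print(np.round(birnbaum, 4))
```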

  4. Sterile Reverse Osmosis Water Combined with Friction Are Optimal for Channel and Lever Cavity Sample Collection of Flexible Duodenoscopes

    Directory of Open Access Journals (Sweden)

    Michelle J. Alfa

    2017-11-01

    Full Text Available Introduction: A simulated-use buildup biofilm (BBF) model was used to assess various extraction fluids and friction methods to determine the optimal sample collection method for polytetrafluorethylene channels. In addition, simulated-use testing was performed for the channel and lever cavity of duodenoscopes. Materials and methods: BBF was formed in polytetrafluorethylene channels using Enterococcus faecalis, Escherichia coli, and Pseudomonas aeruginosa. Sterile reverse osmosis (RO) water, and phosphate-buffered saline with and without Tween80, as well as two neutralizing broths (Letheen and Dey–Engley), were each assessed with and without friction. Neutralizer was added immediately after sample collection and samples were concentrated using centrifugation. Simulated-use testing was done using TJF-Q180V and JF-140F Olympus duodenoscopes. Results: Despite variability in the bacterial CFU in the BBF model, none of the extraction fluids tested were significantly better than RO. Borescope examination showed far less residual material when friction was part of the extraction protocol. The RO for flush-brush-flush (FBF) extraction provided significantly better recovery of E. coli (p = 0.02) from duodenoscope lever cavities compared to the CDC flush method. Discussion and conclusion: We recommend RO with friction for FBF extraction of the channel and lever cavity of duodenoscopes. Neutralizer and sample concentration optimize recovery of viable bacteria on culture.

  5. Plasma treatment of bulk niobium surface for superconducting rf cavities: Optimization of the experimental conditions on flat samples

    Directory of Open Access Journals (Sweden)

    M. Rašković

    2010-11-01

    Full Text Available Accelerator performance, in particular the average accelerating field and the cavity quality factor, depends on the physical and chemical characteristics of the superconducting radio-frequency (SRF) cavity surface. Plasma based surface modification provides an excellent opportunity to eliminate nonsuperconductive pollutants in the penetration depth region and to remove the mechanically damaged surface layer, which improves the surface roughness. Here we show that the plasma treatment of bulk niobium (Nb) presents an alternative surface preparation method to the commonly used buffered chemical polishing and electropolishing methods. We have optimized the experimental conditions in the microwave glow discharge system and their influence on the Nb removal rate on flat samples. We have achieved an etching rate of 1.7 μm/min using only 3% chlorine in the reactive mixture. Combining a fast etching step with a moderate one, we have improved the surface roughness without exposing the sample surface to the environment. We intend to apply the optimized experimental conditions to the preparation of single cell cavities, pursuing the improvement of their rf performance.

  6. Special nuclear material inventory sampling plans

    International Nuclear Information System (INIS)

    Vaccaro, H.S.; Goldman, A.S.

    1987-01-01

    This paper presents improved procedures for obtaining statistically valid sampling plans for nuclear facilities. The double sampling concept and methods for developing optimal double sampling plans are described. An algorithm is described that is satisfactory for finding optimal double sampling plans and choosing appropriate detection and false alarm probabilities
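
    The double sampling concept can be made concrete through its operating characteristic: accept on the first sample if at most c1 defectives are found, reject if more than c2, and otherwise decide on the combined count from a second sample. The Python sketch below computes the acceptance probability for an illustrative (n1, c1, c2, n2) plan; the plan parameters are not taken from the paper.

```python
# Sketch of the operating characteristic (acceptance probability) of a double
# sampling plan (n1, c1, c2, n2): accept if the first sample has at most c1
# defectives, reject if more than c2, otherwise draw a second sample and accept
# if the combined count is at most c2. Plan parameters below are illustrative.
from scipy.stats import binom

def accept_probability(p_defect, n1, c1, c2, n2):
    pa = binom.cdf(c1, n1, p_defect)                     # accept on first sample
    for d1 in range(c1 + 1, c2 + 1):                     # undecided region
        pa += binom.pmf(d1, n1, p_defect) * binom.cdf(c2 - d1, n2, p_defect)
    return pa

for p in (0.01, 0.02, 0.05, 0.10):
    print(f"p = {p:.2f}  P(accept) = {accept_probability(p, 50, 1, 3, 50):.3f}")
```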

  7. The importance of sound methodology in environmental DNA sampling

    Science.gov (United States)

    T. M. Wilcox; K. J. Carim; M. K. Young; K. S. McKelvey; T. W. Franklin; M. K. Schwartz

    2018-01-01

    Environmental DNA (eDNA) sampling - which enables inferences of species’ presence from genetic material in the environment - is a powerful tool for sampling rare fishes. Numerous studies have demonstrated that eDNA sampling generally provides greater probabilities of detection than traditional techniques (e.g., Thomsen et al. 2012; McKelvey et al. 2016; Valentini et al...

  8. Optimal design of sampling and mapping schemes in the radiometric exploration of Chipilapa, El Salvador (Geo-statistics)

    International Nuclear Information System (INIS)

    Balcazar G, M.; Flores R, J.H.

    1992-01-01

    As part of the knowledge gained from the radiometric surface exploration carried out in the geothermal field of Chipilapa, El Salvador, the geo-statistical parameters were considered, starting from the variogram calculated from the field data. The maximum correlation distance of the 'radon' samples in the different observation directions (N-S, E-W, NW-SE, NE-SW) was 121 m, to be used for the monitoring grid in future prospecting in the same area. From this, an optimization (minimum cost) of the spacing of the field samples was derived by means of geo-statistical techniques, without losing detection of the anomaly. (Author)
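
    The variogram-based reasoning can be illustrated with an isotropic empirical semivariogram computed from scattered readings: pairs of samples are binned by separation distance and the mean half squared difference per bin is reported, from which the correlation range is read off. The coordinates and 'radon' values in this Python sketch are synthetic placeholders.

```python
# Sketch of an isotropic empirical semivariogram from scattered readings, used
# to read off the range (distance beyond which samples decorrelate) that sets
# the sampling grid spacing. Coordinates and values are synthetic placeholders.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
coords = rng.uniform(0, 500, size=(150, 2))              # sample locations (m)
values = np.sin(coords[:, 0] / 80.0) + rng.normal(0, 0.3, 150)   # "radon" field

h = pdist(coords)                                        # pairwise distances
g = 0.5 * pdist(values[:, None], metric="sqeuclidean")   # 0.5 * (z_i - z_j)^2

bins = np.linspace(0, 300, 16)
idx = np.digitize(h, bins)
for b in range(1, len(bins)):
    mask = idx == b
    if mask.any():
        print(f"lag {bins[b - 1]:5.0f}-{bins[b]:5.0f} m  "
              f"gamma = {g[mask].mean():.3f}  (n = {mask.sum()})")
```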

  9. Optimizing health system response to patient's needs: an argument for the importance of functioning information.

    Science.gov (United States)

    Hopfe, Maren; Prodinger, Birgit; Bickenbach, Jerome E; Stucki, Gerold

    2017-06-06

    Current health systems are increasingly challenged to meet the needs of a growing number of patients living with chronic and often multiple health conditions. The primary outcome of care, it is argued, is not merely curing disease but also optimizing functioning over a person's life span. According to the World Health Organization, functioning can serve as a foundation for a comprehensive picture of health and augment the biomedical perspective with a broader and more comprehensive picture of health as it plays out in people's lives. The crucial importance of information about patients' functioning for a well-performing health system, however, has yet to be sufficiently appreciated. This paper argues that functioning information is fundamental in all components of health systems and enhances the capacity of health systems to optimize patients' health and health-related needs. Beyond making sense of biomedical disease patterns, health systems can profit from using functioning information to improve interprofessional collaboration and achieve cross-cutting disease treatment outcomes. Implications for rehabilitation: Functioning is a key health outcome for rehabilitation within health systems. Information on restoring, maintaining, and optimizing human functioning can strengthen health system response to patients' health and rehabilitative needs. Functioning information guides health systems to achieve cross-cutting health outcomes that respond to the needs of the growing number of individuals living with chronic and multiple health conditions. Accounting for individuals' functioning helps to overcome fragmentation of care and to improve interprofessional collaboration across settings.

  10. Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    OpenAIRE

    Ahmet Demir; Utku kose

    2017-01-01

    In fields that require finding the most appropriate value, optimization has become a vital approach for developing effective solutions. With the use of optimization techniques, many different fields of modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Science took an...

  11. Sampling plans for pest mites on physic nut.

    Science.gov (United States)

    Rosado, Jander F; Sarmento, Renato A; Pedro-Neto, Marçal; Galdino, Tarcísio V S; Marques, Renata V; Erasmo, Eduardo A L; Picanço, Marcelo C

    2014-08-01

    The starting point for generating a pest control decision-making system is a conventional sampling plan. Because the mites Polyphagotarsonemus latus and Tetranychus bastosi are among the most important pests of the physic nut (Jatropha curcas), in the present study, we aimed to establish sampling plans for these mite species on physic nut. Mite densities were monitored in 12 physic nut crops. Based on the obtained results, sampling of P. latus and T. bastosi should be performed by assessing the number of mites per cm² in 160 samples using a handheld 20× magnifying glass. The optimal sampling region for T. bastosi is the abaxial surface of the 4th most apical leaf on the branch of the middle third of the canopy. On the abaxial surface, T. bastosi should then be observed on the side parts of the middle portion of the leaf, near its edge. As for P. latus, the optimal sampling region is the abaxial surface of the 4th most apical leaf on the branch of the apical third of the canopy. Polyphagotarsonemus latus should then be assessed on the side parts of the leaf's petiole insertion. Each sampling procedure requires 4 h and costs US$ 7.31.

  12. Sampling frequency of ciliated protozoan microfauna for seasonal distribution research in marine ecosystems.

    Science.gov (United States)

    Xu, Henglong; Yong, Jiang; Xu, Guangjian

    2015-12-30

    Sampling frequency is important to obtain sufficient information for temporal research of microfauna. To determine an optimal strategy for exploring the seasonal variation in ciliated protozoa, a dataset from the Yellow Sea, northern China, was studied. Samples were collected with 24 (biweekly), 12 (monthly), 8 (bimonthly per season) and 4 (seasonally) sampling events. Compared to the 24 samplings (100%), the 12-, 8- and 4-samplings recovered 94%, 94%, and 78% of the total species, respectively. To reveal the seasonal distribution, the 8-sampling regime may capture >75% of the seasonal variance, while the traditional 4-sampling may explain considerably less. With increasing sampling frequency, the biotic data showed stronger correlations with seasonal variables (e.g., temperature, salinity) in combination with nutrients. It is suggested that 8 sampling events per year may be an optimal sampling strategy for ciliated protozoan seasonal research in marine ecosystems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Burnout and Engagement: Relative Importance of Predictors and Outcomes in Two Health Care Worker Samples.

    Science.gov (United States)

    Fragoso, Zachary L; Holcombe, Kyla J; McCluney, Courtney L; Fisher, Gwenith G; McGonagle, Alyssa K; Friebe, Susan J

    2016-06-09

    This study's purpose was twofold: first, to examine the relative importance of job demands and resources as predictors of burnout and engagement, and second, the relative importance of engagement and burnout related to health, depressive symptoms, work ability, organizational commitment, and turnover intentions in two samples of health care workers. Nurse leaders (n = 162) and licensed emergency medical technicians (EMTs; n = 102) completed surveys. In both samples, job demands predicted burnout more strongly than job resources, and job resources predicted engagement more strongly than job demands. Engagement held more weight than burnout for predicting commitment, and burnout held more weight for predicting health outcomes, depressive symptoms, and work ability. Results have implications for the design, evaluation, and effectiveness of workplace interventions to reduce burnout and improve engagement among health care workers. Actionable recommendations for increasing engagement and decreasing burnout in health care organizations are provided. © 2016 The Author(s).

  14. 40 CFR 80.583 - What alternative sampling and testing requirements apply to importers who transport motor vehicle...

    Science.gov (United States)

    2010-07-01

    ... requirements apply to importers who transport motor vehicle diesel fuel, NRLM diesel fuel, or ECA marine fuel... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Motor Vehicle Diesel Fuel... alternative sampling and testing requirements apply to importers who transport motor vehicle diesel fuel, NRLM...

  15. Blue-noise remeshing with farthest point optimization

    KAUST Repository

    Yan, Dongming

    2014-08-01

    In this paper, we present a novel method for surface sampling and remeshing with good blue-noise properties. Our approach is based on the farthest point optimization (FPO), a relaxation technique that generates high quality blue-noise point sets in 2D. We propose two important generalizations of the original FPO framework: adaptive sampling and sampling on surfaces. A simple and efficient algorithm for accelerating the FPO framework is also proposed. Experimental results show that the generalized FPO generates point sets with excellent blue-noise properties for adaptive and surface sampling. Furthermore, we demonstrate that our remeshing quality is superior to the current state-of-the art approaches. © 2014 The Eurographics Association and John Wiley & Sons Ltd.

  16. Blue-noise remeshing with farthest point optimization

    KAUST Repository

    Yan, Dongming; Guo, Jianwei; Jia, Xiaohong; Zhang, Xiaopeng; Wonka, Peter

    2014-01-01

    In this paper, we present a novel method for surface sampling and remeshing with good blue-noise properties. Our approach is based on the farthest point optimization (FPO), a relaxation technique that generates high quality blue-noise point sets in 2D. We propose two important generalizations of the original FPO framework: adaptive sampling and sampling on surfaces. A simple and efficient algorithm for accelerating the FPO framework is also proposed. Experimental results show that the generalized FPO generates point sets with excellent blue-noise properties for adaptive and surface sampling. Furthermore, we demonstrate that our remeshing quality is superior to the current state-of-the art approaches. © 2014 The Eurographics Association and John Wiley & Sons Ltd.

  17. Optimization of the Extraction of the Volatile Fraction from Honey Samples by SPME-GC-MS, Experimental Design, and Multivariate Target Functions

    Directory of Open Access Journals (Sweden)

    Elisa Robotti

    2017-01-01

    Full Text Available Head space (HS) solid phase microextraction (SPME) followed by gas chromatography with mass spectrometry detection (GC-MS) is the most widespread technique to study the volatile profile of honey samples. In this paper, the experimental SPME conditions were optimized by a multivariate strategy. Both sensitivity and repeatability were optimized by experimental design techniques considering three factors: extraction temperature (from 50°C to 70°C), time of exposure of the fiber (from 20 min to 60 min), and amount of salt added (from 0 to 27.50%). Each experiment was evaluated by Principal Component Analysis (PCA), which allows all the analytes to be taken into consideration at the same time, preserving the information about their different characteristics. Optimal extraction conditions were identified independently for signal intensity (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) and repeatability (extraction temperature: 50°C; extraction time: 60 min; salt percentage: 27.50% w/w), and a final global compromise (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) was also reached. Considerations about the choice of the best internal standards were also drawn. The whole optimized procedure was then applied to the analysis of a multiflower honey sample, and more than 100 compounds were identified.

  18. SamplingStrata: An R Package for the Optimization of Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Giulio Barcaroli

    2014-11-01

    Full Text Available When designing a sampling survey, constraints are usually set on the desired precision levels regarding one or more target estimates (the Ys). If a sampling frame is available, containing auxiliary information related to each unit (the Xs), it is possible to adopt a stratified sample design. For any given stratification of the frame, in the multivariate case it is possible to solve the problem of the best allocation of units in strata, by minimizing a cost function subject to precision constraints (or, conversely, by maximizing the precision of the estimates under a given budget). The problem is to determine the best stratification in the frame, i.e., the one that ensures the overall minimal cost of the sample necessary to satisfy precision constraints. The Xs can be categorical or continuous; continuous ones can be transformed into categorical ones. The most detailed stratification is given by the Cartesian product of the Xs (the atomic strata). A way to determine the best stratification is to explore exhaustively the set of all possible partitions derivable from the set of atomic strata, evaluating each one by calculating the corresponding cost in terms of the sample required to satisfy precision constraints. This is unaffordable in practical situations, where the dimension of the space of the partitions can be very high. Another possible way is to explore the space of partitions with an algorithm that is particularly suitable in such situations: the genetic algorithm. The R package SamplingStrata, based on the use of a genetic algorithm, makes it possible to determine the best stratification for a population frame, i.e., the one that ensures the minimum sample cost necessary to satisfy precision constraints, in a multivariate and multi-domain case.
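
    The allocation step that such a package automates can be illustrated outside of R. Below is a hedged Python sketch (not the SamplingStrata API) that, for one candidate stratification, computes the minimum Neyman-allocated sample size meeting a target coefficient of variation for a single target variable; a genetic algorithm would then search over stratifications to minimize this size.

    ```python
    # Hedged illustration (not the SamplingStrata package, which is written in R):
    # given one candidate stratification, compute the minimum total sample size under
    # Neyman allocation that meets a target coefficient of variation (CV) for the
    # estimated population mean of a single target variable Y.
    import numpy as np

    def neyman_sample_size(strata_sizes, strata_sds, y_mean, target_cv):
        """Minimum n (and per-stratum allocation) so that Var(mean) <= (target_cv * y_mean)**2."""
        N = strata_sizes.sum()
        W = strata_sizes / N                        # stratum weights W_h = N_h / N
        V = (target_cv * y_mean) ** 2               # allowed variance of the estimator
        num = (W * strata_sds).sum() ** 2           # (sum_h W_h S_h)^2
        den = V + (W * strata_sds ** 2).sum() / N   # finite-population correction term
        n = int(np.ceil(num / den))
        n_h = np.ceil(n * W * strata_sds / (W * strata_sds).sum()).astype(int)
        return n, n_h

    # toy frame with three atomic strata (sizes and within-stratum SDs are assumptions)
    sizes = np.array([5000.0, 3000.0, 2000.0])
    sds = np.array([12.0, 30.0, 55.0])
    print(neyman_sample_size(sizes, sds, y_mean=100.0, target_cv=0.05))
    ```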

  19. Application of Nontraditional Optimization Techniques for Airfoil Shape Optimization

    Directory of Open Access Journals (Sweden)

    R. Mukesh

    2012-01-01

    Full Text Available The choice of optimization algorithm is one of the most important factors that strongly influence the fidelity of the solution in an aerodynamic shape optimization problem. Nowadays, various optimization methods, such as the genetic algorithm (GA), simulated annealing (SA), and particle swarm optimization (PSO), are widely employed to solve aerodynamic shape optimization problems. In addition to the optimization method, the geometry parameterization is an important factor to be considered during the aerodynamic shape optimization process. The objective of this work is to describe general airfoil geometry using twelve parameters by representing its shape as a polynomial function, and to couple this approach with a flow solver and optimization algorithms. An aerodynamic shape optimization problem is formulated for the NACA 0012 airfoil and solved using simulated annealing and a genetic algorithm for a 5.0 deg angle of attack. The results show that the simulated annealing optimization scheme is more effective in finding the optimum solution among the various possible solutions. It is also found that SA shows more exploitation characteristics, whereas the GA is considered to be the more effective explorer.
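
    For readers unfamiliar with the method, a minimal simulated-annealing loop is sketched below. It is not the authors' implementation: the 12-parameter quadratic objective is only a stand-in for the airfoil polynomial coupled to a flow solver, and the cooling schedule is an assumption.

    ```python
    # Minimal simulated-annealing sketch: Metropolis acceptance with geometric cooling.
    import math, random

    def objective(x):                   # stand-in for a drag coefficient from a CFD run
        return sum((xi - 0.1 * i) ** 2 for i, xi in enumerate(x))

    def simulated_annealing(n_params=12, t0=1.0, t_min=1e-4, alpha=0.95, steps_per_t=50):
        x = [random.uniform(-1.0, 1.0) for _ in range(n_params)]
        f = objective(x)
        best, best_f, t = list(x), f, t0
        while t > t_min:
            for _ in range(steps_per_t):
                cand = [xi + random.gauss(0.0, 0.1) for xi in x]          # random perturbation
                fc = objective(cand)
                if fc < f or random.random() < math.exp((f - fc) / t):   # Metropolis rule
                    x, f = cand, fc
                    if f < best_f:
                        best, best_f = list(x), f
            t *= alpha                                                    # geometric cooling
        return best, best_f

    print(simulated_annealing()[1])
    ```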

  20. Evaluation and optimization of DNA extraction and purification procedures for soil and sediment samples.

    Science.gov (United States)

    Miller, D N; Bryant, J E; Madsen, E L; Ghiorse, W C

    1999-11-01

    We compared and statistically evaluated the effectiveness of nine DNA extraction procedures by using frozen and dried samples of two silt loam soils and a silt loam wetland sediment with different organic matter contents. The effects of different chemical extractants (sodium dodecyl sulfate [SDS], chloroform, phenol, Chelex 100, and guanidinium isothiocyanate), different physical disruption methods (bead mill homogenization and freeze-thaw lysis), and lysozyme digestion were evaluated based on the yield and molecular size of the recovered DNA. Pairwise comparisons of the nine extraction procedures revealed that bead mill homogenization with SDS combined with either chloroform or phenol optimized both the amount of DNA extracted and the molecular size of the DNA (maximum size, 16 to 20 kb). Neither lysozyme digestion before SDS treatment nor guanidine isothiocyanate treatment nor addition of Chelex 100 resin improved the DNA yields. Bead mill homogenization in a lysis mixture containing chloroform, SDS, NaCl, and phosphate-Tris buffer (pH 8) was found to be the best physical lysis technique when DNA yield and cell lysis efficiency were used as criteria. The bead mill homogenization conditions were also optimized for speed and duration with two different homogenizers. Recovery of high-molecular-weight DNA was greatest when we used lower speeds and shorter times (30 to 120 s). We evaluated four different DNA purification methods (silica-based DNA binding, agarose gel electrophoresis, ammonium acetate precipitation, and Sephadex G-200 gel filtration) for DNA recovery and removal of PCR inhibitors from crude extracts. Sephadex G-200 spin column purification was found to be the best method for removing PCR-inhibiting substances while minimizing DNA loss during purification. Our results indicate that for these types of samples, optimum DNA recovery requires brief, low-speed bead mill homogenization in the presence of a phosphate-buffered SDS-chloroform mixture, followed by Sephadex G-200 spin column purification of the crude extract.

  1. Importance Sampling for a Markov Modulated Queuing Network with Customer Impatience until the End of Service

    Directory of Open Access Journals (Sweden)

    Ebrahim MAHDIPOUR

    2009-01-01

    Full Text Available For more than two decades, there has been growing interest in fast simulation techniques for estimating the probabilities of rare events in queuing networks. Importance sampling is a variance reduction method for simulating rare events. The present paper adds strict deadlines to the model of Dupuis et al. for a two-node tandem network with feedback whose arrival and service rates are modulated by an exogenous finite-state Markov process. We derive a closed-form solution for the probability of missing deadlines. We then employ these results in an importance sampling technique to estimate the probability of total population overflow, which is a rare event. We also show that the probability of this rare event may be affected by various deadline values.
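
    The core importance-sampling idea — sample from a biased distribution under which the rare event is common, then reweight by the likelihood ratio — can be shown on a much simpler problem than the Markov-modulated tandem queue above. The sketch below is a generic illustration only: it estimates a small Gaussian tail probability by shifting the sampling mean.

    ```python
    # Generic importance-sampling sketch: estimate the rare-event probability
    # P(X > a) for X ~ N(0, 1) by sampling from the shifted density N(a, 1)
    # and reweighting each draw by the likelihood ratio p(y)/q(y) = exp(-a*y + a^2/2).
    import numpy as np

    rng = np.random.default_rng(0)
    a, n = 5.0, 100_000

    # naive Monte Carlo: the event is almost never observed
    x = rng.standard_normal(n)
    p_mc = np.mean(x > a)

    # importance sampling from the shifted density
    y = rng.normal(loc=a, scale=1.0, size=n)
    est = (y > a) * np.exp(-a * y + 0.5 * a**2)
    p_is, se_is = est.mean(), est.std(ddof=1) / np.sqrt(n)

    print(p_mc, p_is, se_is)            # exact value is about 2.87e-7
    ```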

  2. The influence of optimism, social support and anxiety on aggression in a sample of dermatology patients: an analysis of cross-sectional data.

    Science.gov (United States)

    Coneo, A M C; Thompson, A R; Lavda, A

    2017-05-01

    Individuals with visible skin conditions often experience stigmatization and discrimination. This may trigger maladaptive responses such as feelings of anger and hostility, with negative consequences to social interactions and relationships. To identify psychosocial factors contributing to aggression levels in dermatology patients. Data were obtained from 91 participants recruited from outpatient clinics in the north of England, U.K. This study used dermatology-specific data extracted from a large U.K. database of medical conditions collected by The Appearance Research Collaboration. This study looked at the impact of optimism, perceptions of social support and social acceptance, fear of negative evaluation, appearance concern, appearance discrepancy, social comparison and well-being on aggression levels in a sample of dermatology patients. In order to assess the relationship between variables, a hierarchical regression analysis was performed. Dispositional style (optimism) was shown to have a strong negative relationship with aggression (β = -0·37, t = -2·97, P = 0·004). Higher levels of perceived social support were significantly associated with lower levels of aggression (β = -0·26, t = -2·26, P = 0·02). Anxiety was also found to have a significant positive relationship with aggression (β = 0·36, t = 2·56, P = 0·01). This study provides evidence for the importance of perceived social support and optimism in psychological adjustment to skin conditions. Psychosocial interventions provided to dermatology patients might need to address aggression levels and seek to enhance social support and the ability to be optimistic. © 2016 British Association of Dermatologists.

  3. Application of Chitosan-Zinc Oxide Nanoparticles for Lead Extraction From Water Samples by Combining Ant Colony Optimization with Artificial Neural Network

    Science.gov (United States)

    Khajeh, M.; Pourkarami, A.; Arefnejad, E.; Bohlooli, M.; Khatibi, A.; Ghaffari-Moghaddam, M.; Zareian-Jahromi, S.

    2017-09-01

    Chitosan-zinc oxide nanoparticles (CZPs) were developed for solid-phase extraction. Combined artificial neural network-ant colony optimization (ANN-ACO) was used for the simultaneous preconcentration and determination of lead (Pb2+) ions in water samples prior to graphite furnace atomic absorption spectrometry (GF AAS). The solution pH, mass of adsorbent CZPs, amount of 1-(2-pyridylazo)-2-naphthol (PAN), which was used as a complexing agent, eluent volume, eluent concentration, and flow rates of sample and eluent were used as input parameters of the ANN model, and the percentage of extracted Pb2+ ions was used as the output variable of the model. A multilayer perceptron network with a back-propagation learning algorithm was used to fit the experimental data. The optimum conditions were obtained based on the ACO. Under the optimized conditions, the limit of detection for Pb2+ ions was found to be 0.078 μg/L. This procedure was also successfully used to determine the amounts of Pb2+ ions in various natural water samples.

  4. Specific determination of clinical and toxicological important substances in biological samples by LC-MS

    International Nuclear Information System (INIS)

    Mitulovic, G.

    2001-02-01

    The topic of this dissertation is the specific determination of clinically and toxicologically important substances in biological samples by LC-MS. Nicotine was determined in serum after application of a nicotine plaster and a nicotine nasal spray by HPLC-ESI-MS. Cotinine was determined directly in urine by HPLC-ESI-MS. Short-acting anesthetics were determined in blood, and cytostatics were determined in cerebrospinal fluid, by HPLC-ESI-MS. (botek)

  5. Modeling Optimal Cutoffs for the Brazilian Household Food Insecurity Measurement Scale in a Nationwide Representative Sample.

    Science.gov (United States)

    Interlenghi, Gabriela S; Reichenheim, Michael E; Segall-Corrêa, Ana M; Pérez-Escamilla, Rafael; Moraes, Claudia L; Salles-Costa, Rosana

    2017-07-01

    Background: This is the second part of a model-based approach to examine the suitability of the current cutoffs applied to the raw score of the Brazilian Household Food Insecurity Measurement Scale [Escala Brasileira de Insegurança Alimentar (EBIA)]. The approach allows identification of homogeneous groups who correspond to severity levels of food insecurity (FI) and, by extension, discriminant cutoffs able to accurately distinguish these groups. Objective: This study aims to examine whether the model-based approach for identifying optimal cutoffs first implemented in a local sample is replicated in a countrywide representative sample. Methods: Data were derived from the Brazilian National Household Sample Survey of 2013 (n = 116,543 households). Latent class factor analysis (LCFA) models from 2 to 5 classes were applied to the scale's items to identify the number of underlying FI latent classes. Next, identification of optimal cutoffs on the overall raw score was ascertained from these identified classes. Analyses were conducted in the aggregate data and by macroregions. Finally, model-based classifications (latent classes and groupings identified thereafter) were contrasted to the traditionally used classification. Results: LCFA identified 4 homogeneous groups with a very high degree of class separation (entropy = 0.934-0.975). The following cutoffs were identified in the aggregate data for households with children and/or adolescents: between 1 and 2 (1/2), between 5 and 6 (5/6), and between 10 and 11 (10/11); this set of cutoffs emerged consistently in all analyses. Conclusions: Nationwide findings corroborate previous local evidence that households with an overall score of 1 are more akin to those scoring negative on all items. These results may contribute to guide experts' and policymakers' decisions on the most appropriate EBIA cutoffs. © 2017 American Society for Nutrition.

  6. Optimization of microwave-assisted extraction with saponification (MAES) for the determination of polybrominated flame retardants in aquaculture samples.

    Science.gov (United States)

    Fajar, N M; Carro, A M; Lorenzo, R A; Fernandez, F; Cela, R

    2008-08-01

    The efficiency of microwave-assisted extraction with saponification (MAES) for the determination of seven polybrominated flame retardants (polybrominated biphenyls, PBBs; and polybrominated diphenyl ethers, PBDEs) in aquaculture samples is described and compared with microwave-assisted extraction (MAE). Chemometric techniques based on experimental designs and desirability functions were used for simultaneous optimization of the operational parameters used in both MAES and MAE processes. Application of MAES to this group of contaminants in aquaculture samples, a technique which had not previously been applied to this type of analyte, was shown to be superior to MAE in terms of extraction efficiency, extraction time and lipid content extracted from complex matrices (0.7% as against 18.0% for MAE extracts). PBBs and PBDEs were determined by gas chromatography with micro-electron capture detection (GC-μECD). The quantification limits for the analytes were 40-750 pg g(-1) (except for BB-15, which was 1.43 ng g(-1)). Precision for MAES-GC-μECD (%RSD < 11%) was significantly better than for MAE-GC-μECD (%RSD < 20%). The accuracy of both optimized methods was satisfactorily demonstrated by analysis of appropriate certified reference material (CRM), WMF-01.

  7. Gas chromatographic-mass spectrometric analysis of urinary volatile organic metabolites: Optimization of the HS-SPME procedure and sample storage conditions.

    Science.gov (United States)

    Živković Semren, Tanja; Brčić Karačonji, Irena; Safner, Toni; Brajenović, Nataša; Tariba Lovaković, Blanka; Pizent, Alica

    2018-01-01

    Non-targeted metabolomics research of the human volatile urinary metabolome can be used to identify potential biomarkers associated with the changes in metabolism related to various health disorders. To ensure reliable analysis of urinary volatile organic metabolites (VOMs) by gas chromatography-mass spectrometry (GC-MS), parameters affecting the headspace-solid phase microextraction (HS-SPME) procedure have been evaluated and optimized. The influence of incubation and extraction temperatures and times, coating fibre material and salt addition on SPME efficiency was investigated by multivariate optimization methods using reduced factorial and Doehlert matrix designs. The results showed optimum values for temperature to be 60°C, extraction time 50 min, and incubation time 35 min. The proposed conditions were applied to investigate urine samples' stability regarding different storage conditions and freeze-thaw processes. The sum of peak areas of urine samples stored at 4°C, -20°C, and -80°C up to six months showed a time-dependent decrease, although storage at -80°C resulted in a slight non-significant reduction compared to the fresh sample. However, due to the volatile nature of the analysed compounds, more than two cycles of freezing/thawing of the sample stored for six months at -80°C should be avoided whenever possible. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Quality of omeprazole purchased via the Internet and personally imported into Japan: comparison with products sampled in other Asian countries.

    Science.gov (United States)

    Rahman, Mohammad Sofiqur; Yoshida, Naoko; Sugiura, Sakura; Tsuboi, Hirohito; Keila, Tep; Kiet, Heng Bun; Zin, Theingi; Tanimoto, Tsuyoshi; Kimura, Kazuko

    2018-03-01

    To evaluate the quality of omeprazole personally imported into Japan via the Internet and to compare the quality of these samples with previously collected samples from two other Asian countries. The samples were evaluated by observation, authenticity investigation and pharmacopoeial quality analysis. Quality comparison of some selected samples was carried out by dissolution profiling, Raman spectroscopy and principal component analysis (PCA). Observation of the Internet sites and samples revealed some discrepancies including the delivery of a wrong sample and the selling of omeprazole without a prescription, although it is a prescription medicine. Among the 28 samples analysed, all passed the identification test, 26 (93%) passed the quantity and content uniformity tests and all passed the dissolution test. Dissolution profiling confirmed that all the personally imported omeprazole samples remained intact in the acid medium. On the other hand, six samples from two of the same manufacturers, previously collected during surveys in Cambodia and Myanmar, frequently showed premature omeprazole release in acid. Raman spectroscopy and PCA showed significant variation between omeprazole formulations in personally imported samples and the samples from Cambodia and Myanmar. Our results indicate that the pharmaceutical quality of omeprazole purchased through the Internet was sufficient, as determined by pharmacopeial tests. However, omeprazole formulations distributed in different market segments by the same manufacturers were of diverse quality. Measures are needed to ensure consistent quality of products and to prevent entry of substandard products into the legitimate supply chain. © 2018 The Authors. Tropical Medicine & International Health Published by John Wiley & Sons Ltd.

  9. Optimising import phytosanitary inspection

    NARCIS (Netherlands)

    Surkov, I.

    2007-01-01

    Keywords: quarantine pest, plant health policy, optimization, import phytosanitary inspection, ‘reduced checks’, optimal allocation of resources, multinomial logistic regression, the Netherlands. World trade is a major vector of spread of quarantine plant pests. Border phytosanitary inspection

  10. BRAIN Journal - Solving Optimization Problems via Vortex Optimization Algorithm and Cognitive Development Optimization Algorithm

    OpenAIRE

    Ahmet Demir; Utku Kose

    2016-01-01

    ABSTRACT In fields that require finding the most appropriate value, optimization has become a vital approach for obtaining effective solutions. With the use of optimization techniques, many different fields of modern life have found solutions to their real-world problems. In this context, classical optimization techniques have enjoyed considerable popularity. But after a while, more advanced optimization problems required the use of more effective techniques. At this point, Computer Sc...

  11. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems

    International Nuclear Information System (INIS)

    Zio, E.; Pedroni, N.

    2010-01-01

    The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined and a number of conditional one-dimensional problems are solved along such direction; this allows for a significant reduction of the variance of the failure probability estimator, with respect, for example, to standard random sampling. Two issues are still open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve the computational efficiency by reducing the variance of the failure probability estimator, no evidence has been given yet that accurate and precise failure probability estimates can be obtained with a number of samples reduced to below a few hundreds, which may be required in case of long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated to the
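
    As a generic illustration of the LS idea (not the passive-system application itself), the sketch below replaces the long-running T-H code with a cheap toy limit-state function, assumes the important direction is already known, and solves a one-dimensional root-finding problem along each sampled line.

    ```python
    # Line Sampling sketch on a toy limit state in standard normal space:
    # failure when g(u) < 0; the important direction alpha is assumed known.
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import brentq

    def g(u):                                    # toy limit-state function (stand-in for the T-H code)
        return 3.0 - u[0] - 0.05 * u[1] ** 2

    alpha = np.array([1.0, 0.0])                 # assumed unit important direction
    rng = np.random.default_rng(1)
    contributions = []
    for _ in range(200):                         # one 1-D sub-problem per sampled line
        u = rng.standard_normal(2)
        u_perp = u - np.dot(u, alpha) * alpha    # component orthogonal to alpha
        c = brentq(lambda c: g(u_perp + c * alpha), -10.0, 10.0)
        contributions.append(norm.cdf(-c))       # each line contributes Phi(-c)

    pf = np.mean(contributions)
    se = np.std(contributions, ddof=1) / np.sqrt(len(contributions))
    print(pf, se)
    ```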

  12. Optimization of a Pre-MEKC Separation SPE Procedure for Steroid Molecules in Human Urine Samples

    Directory of Open Access Journals (Sweden)

    Ilona Olędzka

    2013-11-01

    Full Text Available Many steroid hormones can be considered as potential biomarkers and their determination in body fluids can create opportunities for the rapid diagnosis of many diseases and disorders of the human body. Most existing methods for the determination of steroids are usually time- and labor-consuming and quite costly. Therefore, the aim of analytical laboratories is to develop a new, relatively low-cost and rapid implementation methodology for their determination in biological samples. Due to the fact that there is little literature data on concentrations of steroid hormones in urine samples, we have made attempts at the electrophoretic determination of these compounds. For this purpose, an extraction procedure for the optimized separation and simultaneous determination of seven steroid hormones in urine samples has been investigated. The isolation of analytes from biological samples was performed by liquid-liquid extraction (LLE) with dichloromethane and compared to solid phase extraction (SPE) with C18 and hydrophilic-lipophilic balance (HLB) columns. To separate all the analytes, a micellar electrokinetic capillary chromatography (MEKC) technique was employed. For full separation of all the analytes, a running buffer (pH 9.2), composed of 10 mM sodium tetraborate decahydrate (borax), 50 mM sodium dodecyl sulfate (SDS), and 10% methanol, was selected. The methodology developed in this work for the determination of steroid hormones meets all the requirements of analytical methods. The applicability of the method has been confirmed for the analysis of urine samples collected from volunteers—both men and women (students, amateur bodybuilders), using and not applying steroid doping. The data obtained during this work can be successfully used for further research on the determination of steroid hormones in urine samples.

  13. Optimizing detectability

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    HPLC is useful for trace and ultratrace analyses of a variety of compounds. For most applications, HPLC is useful for determinations in the nanogram-to-microgram range; however, detection limits of a picogram or less have been demonstrated in certain cases. These determinations require state-of-the-art capability; several examples of such determinations are provided in this chapter. As mentioned before, to detect and/or analyze low quantities of a given analyte at submicrogram or ultratrace levels, it is necessary to optimize the whole separation system, including the quantity and type of sample, sample preparation, HPLC equipment, chromatographic conditions (including column), choice of detector, and quantitation techniques. A limited discussion is provided here for optimization based on theoretical considerations, chromatographic conditions, detector selection, and miscellaneous approaches to detectability optimization. 59 refs

  14. An antithetic variate to facilitate upper-stem height measurements for critical height sampling with importance sampling

    Science.gov (United States)

    Thomas B. Lynch; Jeffrey H. Gove

    2013-01-01

    Critical height sampling (CHS) estimates cubic volume per unit area by multiplying the sum of critical heights measured on trees tallied in a horizontal point sample (HPS) by the HPS basal area factor. One of the barriers to practical application of CHS is the fact that trees near the field location of the point-sampling sample point have critical heights that occur...
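
    The antithetic-variate idea named in the title can be stated generically: for a monotone integrand, pairing each uniform draw u with its mirror 1 - u produces negatively correlated estimates and therefore a lower-variance average. The sketch below is a toy illustration of that principle, not the CHS estimator itself.

    ```python
    # Antithetic-variate sketch: same expectation, smaller variance for a monotone integrand.
    import numpy as np

    def f(u):                                   # monotone stand-in for an upper-stem integrand
        return np.exp(-2.0 * u)

    rng = np.random.default_rng(2)
    u = rng.uniform(size=10_000)

    plain = f(u)                                # ordinary Monte Carlo estimates
    anti = 0.5 * (f(u) + f(1.0 - u))            # antithetic pairs reuse the same draws

    print(plain.mean(), plain.var(ddof=1))
    print(anti.mean(), anti.var(ddof=1))
    ```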

  15. Avoiding misdiagnosis of imported malaria: screening of emergency department samples with thrombocytopenia detects clinically unsuspected cases

    NARCIS (Netherlands)

    Hänscheid, Thomas; Melo-Cristino, José; Grobusch, Martin P.; Pinto, Bernardino G.

    2003-01-01

    BACKGROUND: Misdiagnosis of imported malaria is not uncommon and even abnormal routine laboratory tests may not trigger malaria smears. However, blind screening of all thrombocytopenic samples might be a possible way to detect clinically unsuspected malaria cases in the accident and emergency

  16. Application of D-optimal experimental design method to optimize the formulation of O/W cosmetic emulsions.

    Science.gov (United States)

    Djuris, J; Vasiljevic, D; Jokic, S; Ibric, S

    2014-02-01

    This study investigates the application of D-optimal mixture experimental design in optimization of O/W cosmetic emulsions. Cetearyl glucoside was used as a natural, biodegradable non-ionic emulsifier in a relatively low concentration (1%), and a mixture of co-emulsifiers (stearic acid, cetyl alcohol, stearyl alcohol and glyceryl stearate) was used to stabilize the formulations. To determine the optimal composition of the co-emulsifier mixture, D-optimal mixture experimental design was used. Prepared emulsions were characterized with rheological measurements, a centrifugation test, and specific conductivity and pH value measurements. All prepared samples appeared as white and homogeneous creams, except for one homogeneous and viscous lotion co-stabilized by stearic acid alone. Centrifugation testing revealed some phase separation only in the case of the sample co-stabilized using glyceryl stearate alone. The obtained pH values indicated that all samples showed mildly acidic values acceptable for cosmetic preparations. Specific conductivity values are attributed to multiple-phase O/W emulsions with high percentages of fixed water. Results of the rheological measurements showed that the investigated samples exhibited non-Newtonian thixotropic behaviour. To determine the influence of each of the co-emulsifiers on emulsion properties, the obtained results were evaluated by means of statistical analysis (ANOVA test). On the basis of comparison of statistical parameters for each of the studied responses, the mixture reduced quadratic model was selected over the linear model, implying that interactions between co-emulsifiers play a significant role in the overall influence of co-emulsifiers on emulsion properties. Glyceryl stearate was found to be the dominant co-emulsifier affecting emulsion properties. Interactions between glyceryl stearate and the other co-emulsifiers were also found to significantly influence emulsion properties. These findings are especially important

  17. Population Pharmacokinetics of Gemcitabine and dFdU in Pancreatic Cancer Patients Using an Optimal Design, Sparse Sampling Approach.

    Science.gov (United States)

    Serdjebi, Cindy; Gattacceca, Florence; Seitz, Jean-François; Fein, Francine; Gagnière, Johan; François, Eric; Abakar-Mahamat, Abakar; Deplanque, Gael; Rachid, Madani; Lacarelle, Bruno; Ciccolini, Joseph; Dahan, Laetitia

    2017-06-01

    Gemcitabine remains a pillar in pancreatic cancer treatment. However, toxicities are frequently observed. Dose adjustment based on therapeutic drug monitoring might help decrease the occurrence of toxicities. In this context, this work aims at describing the pharmacokinetics (PK) of gemcitabine and its metabolite dFdU in pancreatic cancer patients and at identifying the main sources of their PK variability using a population PK approach, despite a sparsely sampled population and heterogeneous administration and sampling protocols. Data from 38 patients were included in the analysis. The 3 optimal sampling times were determined using KineticPro and the population PK analysis was performed on Monolix. Available patient characteristics, including cytidine deaminase (CDA) status, were tested as covariates. Correlation between PK parameters and occurrence of severe hematological toxicities was also investigated. A two-compartment model best fitted the gemcitabine and dFdU PK data (volume of distribution and clearance for gemcitabine: V1 = 45 L and CL1 = 4.03 L/min; for dFdU: V2 = 36 L and CL2 = 0.226 L/min). Renal function was found to influence gemcitabine clearance, and body surface area to impact the volume of distribution of dFdU. However, neither CDA status nor the occurrence of toxicities was correlated to PK parameters. Despite sparse sampling and heterogeneous administration and sampling protocols, population and individual PK parameters of gemcitabine and dFdU were successfully estimated using Monolix population PK software. The estimated parameters were consistent with previously published results. Surprisingly, CDA activity did not influence gemcitabine PK, which was explained by the absence of CDA-deficient patients enrolled in the study. This work suggests that even sparse data are valuable to estimate population and individual PK parameters in patients, which will be usable to individualize the dose for an optimized benefit to risk ratio.
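
    To make the reported structure concrete, the sketch below simulates a simplified parent-metabolite two-compartment system using the point estimates quoted above. The 1800 mg dose, the 30-min infusion, and the assumption of complete conversion of gemcitabine to dFdU are illustrative assumptions, not the published population model.

    ```python
    # Illustrative parent-metabolite simulation with the quoted point estimates:
    # V1 = 45 L, CL1 = 4.03 L/min (gemcitabine); V2 = 36 L, CL2 = 0.226 L/min (dFdU).
    import numpy as np
    from scipy.integrate import solve_ivp

    V1, CL1 = 45.0, 4.03
    V2, CL2 = 36.0, 0.226
    dose, t_inf = 1800.0, 30.0                       # mg, infusion length in minutes (assumed)

    def rhs(t, a):
        a1, a2 = a                                   # drug amounts (mg) in each compartment
        rate_in = dose / t_inf if t <= t_inf else 0.0
        da1 = rate_in - (CL1 / V1) * a1              # parent elimination
        da2 = (CL1 / V1) * a1 - (CL2 / V2) * a2      # full conversion to metabolite (assumption)
        return [da1, da2]

    sol = solve_ivp(rhs, (0.0, 360.0), [0.0, 0.0], max_step=1.0)
    c_gem, c_dfdu = sol.y[0] / V1, sol.y[1] / V2     # concentrations in mg/L
    print(f"peak gemcitabine {c_gem.max():.2f} mg/L, peak dFdU {c_dfdu.max():.2f} mg/L")
    ```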

  18. Performance of an Optimized Paper-Based Test for Rapid Visual Measurement of Alanine Aminotransferase (ALT) in Fingerstick and Venipuncture Samples.

    Directory of Open Access Journals (Sweden)

    Sidhartha Jain

    Full Text Available A paper-based, multiplexed, microfluidic assay has been developed to visually measure alanine aminotransferase (ALT) in a fingerstick sample, generating rapid, semi-quantitative results. Prior studies indicated a need for improved accuracy; the device was subsequently optimized using an FDA-approved automated platform (Abaxis Piccolo Xpress) as a comparator. Here, we evaluated the performance of the optimized paper test for measurement of ALT in fingerstick blood and serum, as compared to Abaxis and Roche/Hitachi platforms. To evaluate feasibility of remote results interpretation, we also compared reading cell phone camera images of completed tests to reading the device in real time. 96 ambulatory patients with varied baseline ALT concentration underwent fingerstick testing using the paper device; cell phone images of completed devices were taken and texted to a blinded off-site reader. Venipuncture serum was obtained from 93/96 participants for routine clinical testing (Roche/Hitachi); subsequently, 88/93 serum samples were captured and applied to paper and Abaxis platforms. Paper test and reference standard results were compared by Bland-Altman analysis. For serum, there was excellent agreement between paper test and Abaxis results, with negligible bias (+4.5 U/L). Abaxis results were systematically 8.6% lower than Roche/Hitachi results. ALT values in fingerstick samples tested on paper were systematically lower than values in paired serum tested on paper (bias -23.6 U/L) or Abaxis (bias -18.4 U/L); a correction factor was developed for the paper device to match fingerstick blood to serum. Visual reads of cell phone images closely matched reads made in real time (bias +5.5 U/L). The paper ALT test is highly accurate for serum testing, matching the reference method against which it was optimized better than the reference methods matched each other. A systematic difference exists between ALT values in fingerstick and paired serum samples, and can be
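
    For reference, the Bland-Altman comparison used in the record reduces to a bias (mean difference between methods) and 95% limits of agreement. The sketch below shows the computation on made-up ALT values, not the study data.

    ```python
    # Bland-Altman sketch: bias and 95% limits of agreement between two methods.
    import numpy as np

    paper = np.array([32.0, 55.0, 41.0, 88.0, 120.0, 26.0])      # hypothetical ALT values (U/L)
    reference = np.array([30.0, 49.0, 45.0, 80.0, 110.0, 29.0])

    diff = paper - reference
    bias = diff.mean()
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
    print(f"bias = {bias:.1f} U/L, limits of agreement = {loa[0]:.1f} to {loa[1]:.1f} U/L")
    ```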

  19. Sampling bee communities using pan traps: alternative methods increase sample size

    Science.gov (United States)

    Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...

  20. 76 FR 65165 - Importation of Plants for Planting; Risk-Based Sampling and Inspection Approach and Propagative...

    Science.gov (United States)

    2011-10-20

    ..., this 14th day of October 2011. Kevin Shea, Acting Administrator, Animal and Plant Health Inspection... DEPARTMENT OF AGRICULTURE Animal and Plant Health Inspection Service [Docket No. APHIS-2011-0092] Importation of Plants for Planting; Risk-Based Sampling and Inspection Approach and Propagative Monitoring and...

  1. Important aspects of residue sampling in drilling dikes; Aspectos importantes para a amostragem de residuos em diques de perfuracao

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Gilvan Ferreira da [PETROBRAS, Rio de Janeiro (Brazil). Centro de Pesquisas. Div. de Explotacao

    1990-12-31

    This paper describes the importance of sampling in the evaluation of physical and chemical properties of residues found in drilling dikes, considering the later selection of treatment methods or the discarding of these residues. We present the fundamental concepts of applied statistics, which are essential to the elaboration of sampling plans, with a view to obtaining accurate and precise results. Other types of samples are also presented, as well as sampling equipment and methods for storage and preservation of the samples. As a conclusion, we present an example of the implementation of a sampling plan. (author) 3 refs., 9 figs., 3 tabs.

  2. Important aspects of residue sampling in drilling dikes; Aspectos importantes para a amostragem de residuos em diques de perfuracao

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Gilvan Ferreira da [PETROBRAS, Rio de Janeiro (Brazil). Centro de Pesquisas. Div. de Explotacao

    1989-12-31

    This paper describes the importance of sampling in the evaluation of physical and chemical properties of residues found in drilling dikes, considering the later selection of treatment methods or the discarding of these residues. We present the fundamental concepts of applied statistics, which are essential to the elaboration of sampling plans, with a view to obtaining accurate and precise results. Other types of samples are also presented, as well as sampling equipment and methods for storage and preservation of the samples. As a conclusion, we present an example of the implementation of a sampling plan. (author) 3 refs., 9 figs., 3 tabs.

  3. Stepwise multi-criteria optimization for robotic radiosurgery

    International Nuclear Information System (INIS)

    Schlaefer, A.; Schweikard, A.

    2008-01-01

    Achieving good conformality and a steep dose gradient around the target volume remains a key aspect of radiosurgery. Clearly, this involves a trade-off between target coverage, conformality of the dose distribution, and sparing of critical structures. Yet, image guidance and robotic beam placement have extended highly conformal dose delivery to extracranial and moving targets. Therefore, the multi-criteria nature of the optimization problem becomes even more apparent, as multiple conflicting clinical goals need to be considered in a coordinated way to obtain an optimal treatment plan. Typically, planning for robotic radiosurgery is based on constrained optimization, namely linear programming. An extension of that approach is presented, such that each of the clinical goals can be addressed separately and in any sequential order. For a set of common clinical goals the mapping to a mathematical objective and a corresponding constraint is defined. The trade-off among the clinical goals is explored by modifying the constraints and optimizing a simple objective, while retaining feasibility of the solution. Moreover, it becomes immediately obvious whether a desired goal can be achieved and where a trade-off is possible. No importance factors or predefined prioritizations of clinical goals are necessary. The presented framework forms the basis for interactive and automated planning procedures. It is demonstrated for a sample case that the linear programming formulation is suitable to search for a clinically optimal treatment, and that the optimization steps can be performed quickly to establish that a Pareto-efficient solution has been found. Furthermore, it is demonstrated how the stepwise approach is preferable compared to modifying importance factors
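
    The stepwise idea — take one clinical goal as the objective while holding the others as constraints, then fix the level just achieved and move to the next goal — can be written as a sequence of small linear programs. The sketch below is only an illustration of that formulation with toy dose matrices, not the planning system described in the record.

    ```python
    # Stepwise constrained optimization: coverage first, then beam-on time,
    # with the previously achieved critical-structure sparing kept as a constraint.
    import numpy as np
    from scipy.optimize import linprog

    D_target = np.array([[1.0, 0.6, 0.3],        # dose per unit beam weight, target voxels
                         [0.8, 0.9, 0.4]])
    D_oar = np.array([[0.5, 0.2, 0.6],           # dose per unit beam weight, critical structure
                      [0.1, 0.4, 0.7]])
    presc = 60.0                                 # prescribed minimum target dose (toy units)

    # Step 1: meet target coverage while minimizing total critical-structure dose.
    c1 = D_oar.sum(axis=0)
    res1 = linprog(c1, A_ub=-D_target, b_ub=-presc * np.ones(2), bounds=[(0, None)] * 3)
    oar_level = c1 @ res1.x

    # Step 2: keep coverage and the achieved sparing, then minimize total beam weight.
    A_ub = np.vstack([-D_target, c1])
    b_ub = np.concatenate([-presc * np.ones(2), [1.01 * oar_level]])   # small tolerance
    res2 = linprog(np.ones(3), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print(res1.x, res2.x)
    ```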

  4. The Importance of Parameter Estimates for Stock-REIT-Bond Optimal Asset Allocation

    OpenAIRE

    Lin, Lisa; Lo, Jonathan

    2012-01-01

    This study is an extension of the research done by Waggle & Agrrawal (2006), which assesses the marginal effects of changes in optimal portfolio weights with respect to changes in the REIT-stock risk premium and correlation coefficients under a three-asset setting. We also consider two time periods, from 1988-2011 and from 2000-2011. The results show that the sensitivity of optimal portfolio weights to changes in the REIT-stock risk premium is significantly higher than the effect of change...

  5. Dispositional optimism and sleep quality: a test of mediating pathways.

    Science.gov (United States)

    Uchino, Bert N; Cribbet, Matthew; de Grey, Robert G Kent; Cronan, Sierra; Trettevik, Ryan; Smith, Timothy W

    2017-04-01

    Dispositional optimism has been related to beneficial influences on physical health outcomes. However, its links to global sleep quality and the psychological mediators responsible for such associations are less studied. This study thus examined if trait optimism predicted global sleep quality, and if measures of subjective well-being were statistical mediators of such links. A community sample of 175 participants (93 men, 82 women) completed measures of trait optimism, depression, and life satisfaction. Global sleep quality was assessed using the Pittsburgh Sleep Quality Index. Results indicated that trait optimism was a strong predictor of better PSQI global sleep quality. Moreover, this association was mediated by depression and life satisfaction in both single and multiple mediator models. These results highlight the importance of optimism for the restorative process of sleep, as well as the utility of multiple mediator models in testing distinct psychological pathways.

  6. Search engine optimization

    OpenAIRE

    Marolt, Klemen

    2013-01-01

    Search engine optimization techniques, often shortened to “SEO,” should lead to first positions in organic search results. Some optimization techniques do not change over time, yet still form the basis for SEO. However, as the Internet and web design evolves dynamically, new optimization techniques flourish and flop. Thus, we looked at the most important factors that can help to improve positioning in search results. It is important to emphasize that none of the techniques can guarantee high ...

  7. Primary studies on particle recovery of swipe samples for nuclear safeguards

    International Nuclear Information System (INIS)

    Fan Wang; Yan Chen; Yong-gang Zhao; Yan Zhang; Tong-xing Wang; Jing-huai Li; Zhi-yuan Chang; Hai-ping Cui

    2013-01-01

    Environmental sampling plays a significant role in nuclear safeguards. Isotopic ratios in uranium-bearing particles from swipe samples provide important information for detecting undeclared activities. Particle recovery, which is the primary step of particle analysis, affects the subsequent analysis. The particle recovery efficiencies of ultrasonication recovery and vacuum suction-impact recovery were measured by alpha spectrometry with standard particles produced via the aerosol spray pyrolysis method. The conditions of ultrasonication were optimized and both recovery methods were evaluated. Finally, a procedure of particle recovery for unknown swipe samples was set up. (author)

  8. Accounting for sampling patterns reverses the relative importance of trade and climate for the global sharing of exotic plants

    Science.gov (United States)

    Sofaer, Helen R.; Jarnevich, Catherine S.

    2017-01-01

    Aim: The distributions of exotic species reflect patterns of human-mediated dispersal, species climatic tolerances and a suite of other biotic and abiotic factors. The relative importance of each of these factors will shape how the spread of exotic species is affected by ongoing economic globalization and climate change. However, patterns of trade may be correlated with variation in scientific sampling effort globally, potentially confounding studies that do not account for sampling patterns. Location: Global. Time period: Museum records, generally from the 1800s up to 2015. Major taxa studied: Plant species exotic to the United States. Methods: We used data from the Global Biodiversity Information Facility (GBIF) to summarize the number of plant species with exotic occurrences in the United States that also occur in each other country world-wide. We assessed the relative importance of trade and climatic similarity for explaining variation in the number of shared species while evaluating several methods to account for variation in sampling effort among countries. Results: Accounting for variation in sampling effort reversed the relative importance of trade and climate for explaining numbers of shared species. Trade was strongly correlated with numbers of shared U.S. exotic plants between the United States and other countries before, but not after, accounting for sampling variation among countries. Conversely, accounting for sampling effort strengthened the relationship between climatic similarity and species sharing. Using the number of records as a measure of sampling effort provided a straightforward approach for the analysis of occurrence data, whereas species richness estimators and rarefaction were less effective at removing sampling bias. Main conclusions: Our work provides support for broad-scale climatic limitation on the distributions of exotic species, illustrates the need to account for variation in sampling effort in large biodiversity databases, and highlights the

  9. Estimation variance bounds of importance sampling simulations in digital communication systems

    Science.gov (United States)

    Lu, D.; Yao, K.

    1991-01-01

    In practical applications of importance sampling (IS) simulation, two basic problems are encountered, that of determining the estimation variance and that of evaluating the proper IS parameters needed in the simulations. The authors derive new upper and lower bounds on the estimation variance which are applicable to IS techniques. The upper bound is simple to evaluate and may be minimized by the proper selection of the IS parameter. Thus, lower and upper bounds on the improvement ratio of various IS techniques relative to the direct Monte Carlo simulation are also available. These bounds are shown to be useful and computationally simple to obtain. Based on the proposed technique, one can readily find practical suboptimum IS parameters. Numerical results indicate that these bounding techniques are useful for IS simulations of linear and nonlinear communication systems with intersymbol interference in which bit error rate and IS estimation variances cannot be obtained readily using prior techniques.

  10. An Optimized DNA Analysis Workflow for the Sampling, Extraction, and Concentration of DNA obtained from Archived Latent Fingerprints.

    Science.gov (United States)

    Solomon, April D; Hytinen, Madison E; McClain, Aryn M; Miller, Marilyn T; Dawson Cruz, Tracey

    2018-01-01

    DNA profiles have been obtained from fingerprints, but there is limited knowledge regarding DNA analysis from archived latent fingerprints-touch DNA "sandwiched" between adhesive and paper. Thus, this study sought to comparatively analyze a variety of collection and analytical methods in an effort to seek an optimized workflow for this specific sample type. Untreated and treated archived latent fingerprints were utilized to compare different biological sampling techniques, swab diluents, DNA extraction systems, DNA concentration practices, and post-amplification purification methods. Archived latent fingerprints disassembled and sampled via direct cutting, followed by DNA extracted using the QIAamp® DNA Investigator Kit, and concentration with Centri-Sep™ columns increased the odds of obtaining an STR profile. Using the recommended DNA workflow, 9 of the 10 samples provided STR profiles, which included 7-100% of the expected STR alleles and two full profiles. Thus, with carefully selected procedures, archived latent fingerprints can be a viable DNA source for criminal investigations including cold/postconviction cases. © 2017 American Academy of Forensic Sciences.

  11. Study on Design Optimization of Centrifugal Compressors Considering Efficiency and Weight

    International Nuclear Information System (INIS)

    Lee, Younghwan; Kang, Shinhyoung; Ha, Kyunggu

    2015-01-01

    Various centrifugal compressors are currently used extensively in industrial fields, where the design requirements are more complicated. This makes it more difficult to determine the optimal design point of a centrifugal compressor. Traditionally, the efficiency is an important factor for optimization. In this study, the weight of the compressor was also considered. The aim of this study was to present the design tendency considering the stage efficiency and weight. In addition, this study suggested the possibility of selecting compressor design objectives at an early design stage based on the optimization results. Only a vaneless diffuser was considered in this case. The Kriging method was used with sample points from 1D design program data. The optimal points were determined in a substitute design space.

  12. Study on Design Optimization of Centrifugal Compressors Considering Efficiency and Weight

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Younghwan; Kang, Shinhyoung [Seoul National University, Seoul (Korea, Republic of); Ha, Kyunggu [Hyundai Motor Group, Ulsan (Korea, Republic of)

    2015-04-15

    Various centrifugal compressors are currently used extensively in industrial fields, where the design requirements are more complicated. This makes it more difficult to determine the optimal design point of a centrifugal compressor. Traditionally, the efficiency is an important factor for optimization. In this study, the weight of the compressor was also considered. The aim of this study was to present the design tendency considering the stage efficiency and weight. In addition, this study suggested the possibility of selecting compressor design objectives at an early design stage based on the optimization results. Only a vaneless diffuser was considered in this case. The Kriging method was used with sample points from 1D design program data. The optimal points were determined in a substitute design space.
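
    As a generic illustration of the surrogate-based step (not the compressor design code), the sketch below fits a Gaussian-process (Kriging) model to a handful of sampled design points and then searches the cheap surrogate for a promising design; the one-dimensional "efficiency" function is a toy stand-in for the 1D design program.

    ```python
    # Kriging-surrogate sketch: fit a Gaussian process to sampled design points,
    # then locate the optimum of the surrogate on a dense grid.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def efficiency(x):                            # toy objective standing in for the design code
        return np.sin(3.0 * x) * np.exp(-0.3 * x)

    X_train = np.linspace(0.0, 3.0, 8).reshape(-1, 1)
    y_train = efficiency(X_train).ravel()

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                                  normalize_y=True)
    gp.fit(X_train, y_train)

    X_grid = np.linspace(0.0, 3.0, 500).reshape(-1, 1)
    mean, std = gp.predict(X_grid, return_std=True)
    best = X_grid[np.argmax(mean)]                # candidate to verify with the true model
    print(best[0], mean.max())
    ```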

  13. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions. II. A simplified implementation.

    Science.gov (United States)

    Tao, Guohua; Miller, William H

    2012-09-28

    An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points as well as their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor-which is computationally expensive, especially for large systems-is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H(2) system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.

  14. Data Assimilation with Optimal Maps

    Science.gov (United States)

    El Moselhy, T.; Marzouk, Y.

    2012-12-01

    We present a new approach to Bayesian inference that entirely avoids Markov chain simulation and sequential importance resampling, by constructing a map that pushes forward the prior measure to the posterior measure. Existence and uniqueness of a suitable measure-preserving map is established by formulating the problem in the context of optimal transport theory. The map is written as a multivariate polynomial expansion and computed efficiently through the solution of a stochastic optimization problem. While our previous work [1] focused on static Bayesian inference problems, we now extend the map-based approach to sequential data assimilation, i.e., nonlinear filtering and smoothing. One scheme involves pushing forward a fixed reference measure to each filtered state distribution, while an alternative scheme computes maps that push forward the filtering distribution from one stage to the other. We compare the performance of these schemes and extend the former to problems of smoothing, using a map implementation of the forward-backward smoothing formula. Advantages of a map-based representation of the filtering and smoothing distributions include analytical expressions for posterior moments and the ability to generate arbitrary numbers of independent uniformly-weighted posterior samples without additional evaluations of the dynamical model. Perhaps the main advantage, however, is that the map approach inherently avoids issues of sample impoverishment, since it explicitly represents the posterior as the pushforward of a reference measure, rather than with a particular set of samples. The computational complexity of our algorithm is comparable to state-of-the-art particle filters. Moreover, the accuracy of the approach is controlled via the convergence criterion of the underlying optimization problem. We demonstrate the efficiency and accuracy of the map approach via data assimilation in

  15. Importance sampling large deviations in nonequilibrium steady states. I

    Science.gov (United States)

    Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.

    2018-03-01

    Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.

  17. Performance Optimization of the ATLAS Detector Simulation

    CERN Document Server

    AUTHOR|(CDS)2091018

    In the thesis at hand the current performance of the ATLAS detector simulation, part of the Athena framework, is analyzed and possible optimizations are examined. For this purpose the event based sampling profiler VTune Amplifier by Intel is utilized. As the most important metric to measure improvements, the total execution time of the simulation of $t\bar{t}$ events is also considered. All efforts are focused on structural changes, which do not influence the simulation output and can be attributed to CPU specific issues, especially front-end stalls and vectorization. The most promising change is the activation of profile guided optimization for Geant4, which is a critical external dependency of the simulation. Profile guided optimization gives an average improvement of $8.9\%$ and $10.0\%$ for the two considered cases at the cost of one additional compilation (instrumented binaries) and execution (training to obtain profiling data) at build time.

  18. Immunosuppressant therapeutic drug monitoring by LC-MS/MS: workflow optimization through automated processing of whole blood samples.

    Science.gov (United States)

    Marinova, Mariela; Artusi, Carlo; Brugnolo, Laura; Antonelli, Giorgia; Zaninotto, Martina; Plebani, Mario

    2013-11-01

    Although, due to its high specificity and sensitivity, LC-MS/MS is an efficient technique for the routine determination of immunosuppressants in whole blood, it involves time-consuming manual sample preparation. The aim of the present study was therefore to develop an automated sample-preparation protocol for the quantification of sirolimus, everolimus and tacrolimus by LC-MS/MS using a liquid handling platform. Six-level commercially available blood calibrators were used for assay development, while four quality control materials and three blood samples from patients under immunosuppressant treatment were employed for the evaluation of imprecision. Barcode reading, sample re-suspension, transfer of whole blood samples into 96-well plates, addition of internal standard solution, mixing, and protein precipitation were performed with a liquid handling platform. After plate filtration, the deproteinised supernatants were submitted for SPE on-line. The only manual steps in the entire process were de-capping of the tubes, and transfer of the well plates to the HPLC autosampler. Calibration curves were linear throughout the selected ranges. The imprecision and accuracy data for all analytes were highly satisfactory. The agreement between the results obtained with manual and those obtained with automated sample preparation was optimal (n=390, r=0.96). In daily routine (100 patient samples) the typical overall total turnaround time was less than 6h. Our findings indicate that the proposed analytical system is suitable for routine analysis, since it is straightforward and precise. Furthermore, it incurs less manual workload and less risk of error in the quantification of whole blood immunosuppressant concentrations than conventional methods. © 2013.

  19. Sampling efficacy for the red imported fire ant Solenopsis invicta (Hymenoptera: Formicidae).

    Science.gov (United States)

    Stringer, Lloyd D; Suckling, David Maxwell; Baird, David; Vander Meer, Robert K; Christian, Sheree J; Lester, Philip J

    2011-10-01

    Cost-effective detection of invasive ant colonies before establishment in new ranges is imperative for the protection of national borders and reducing their global impact. We examined the sampling efficiency of food-baits and pitfall traps (baited and nonbaited) in detecting isolated red imported fire ant (Solenopsis invicta Buren) nests in multiple environments in Gainesville, FL. Fire ants demonstrated a significantly higher preference for a mixed protein food type (hotdog or ground meat combined with sweet peanut butter) than for the sugar or water baits offered. Foraging distance success was a function of colony size, detection trap used, and surveillance duration. Colony gyne number did not influence detection success. Workers from small nests (0- to 15-cm mound diameter) traveled no >3 m to a food source, whereas large colonies (>30-cm mound diameter) traveled up to 17 m. Baited pitfall traps performed best at detecting incipient ant colonies followed by nonbaited pitfall traps then food baits, whereas food baits performed well when trying to detect large colonies. These results were used to create an interactive model in Microsoft Excel, whereby surveillance managers can alter trap type, density, and duration parameters to estimate the probability of detecting specified or unknown S. invicta colony sizes. This model will support decision makers who need to balance the sampling cost and risk of failure to detect fire ant colonies.

  20. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  1. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
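
The combination sketched in this record, expected shortfall as the risk measure plus an L2 penalty on the weights, can be written directly as a convex program using the standard Rockafellar-Uryasev sample formulation. The snippet below does this with synthetic returns and the cvxpy modelling library; the confidence level, penalty strength, and data are illustrative assumptions only.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
T, n = 250, 10                                    # observations, assets
returns = rng.normal(0.0005, 0.01, size=(T, n))   # synthetic return sample

beta = 0.95      # expected-shortfall confidence level
lam = 0.1        # L2 regularization strength ("diversification pressure")

w = cp.Variable(n)            # portfolio weights
alpha = cp.Variable()         # value-at-risk auxiliary variable
losses = -returns @ w         # per-period portfolio losses

# Rockafellar-Uryasev sample estimate of expected shortfall, plus the L2 regularizer.
expected_shortfall = alpha + cp.sum(cp.pos(losses - alpha)) / (T * (1 - beta))
problem = cp.Problem(cp.Minimize(expected_shortfall + lam * cp.sum_squares(w)),
                     [cp.sum(w) == 1])            # budget constraint
problem.solve()

print("optimal weights:", np.round(w.value, 3))
```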

  2. A review of electroanalytical determinations of some important elements (Zn, Se, As) in environmental samples

    International Nuclear Information System (INIS)

    Lichiang; James, B.D.; Magee, R.J.

    1991-01-01

    This review covers electroanalytical methods reported in the literature for the determination of zinc, cadmium, selenium and arsenic in environmental and biological samples. A comprehensive survey of electroanalytical techniques used for the determination of these four important elements, i.e. zinc, cadmium, selenium and arsenic, is reported herein, with 322 references up to 1990. (Orig./A.B.)

  3. Determination of Optimal Double Sampling Plan using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sampath Sundaram

    2012-03-01

    Full Text Available Designing a double sampling plan requires identification of sample sizes and acceptance numbers. In this paper a genetic algorithm has been designed for the selection of optimal acceptance numbers and sample sizes for the specified producer’s risk and consumer’s risk. Implementation of the algorithm has been illustrated numerically for different choices of quantities involved in a double sampling plan.
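
As a rough illustration of the search the abstract describes, the sketch below encodes a double sampling plan as (n1, c1, n2, c2), scores it by average sample number with a penalty for violating the producer's and consumer's risk constraints, and evolves a small population with a plain genetic algorithm. The quality levels, risk limits, and GA settings are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import binom

AQL, LTPD = 0.01, 0.05        # acceptable / limiting quality levels (assumed)
ALPHA, BETA = 0.05, 0.10      # producer's and consumer's risks (assumed)
rng = np.random.default_rng(0)

def accept_prob(plan, p):
    # Lot acceptance probability of a double sampling plan at lot quality p.
    n1, c1, n2, c2 = plan
    pa = binom.cdf(c1, n1, p)
    for d1 in range(c1 + 1, c2 + 1):
        pa += binom.pmf(d1, n1, p) * binom.cdf(c2 - d1, n2, p)
    return pa

def fitness(plan):
    n1, c1, n2, c2 = plan
    if not (0 <= c1 < c2 and n1 > c1 and n2 > 0):
        return 1e9                                 # structurally invalid plan
    asn = n1 + n2 * (binom.cdf(c2, n1, AQL) - binom.cdf(c1, n1, AQL))
    penalty = 1e4 * max(0.0, (1 - ALPHA) - accept_prob(plan, AQL))   # producer's risk
    penalty += 1e4 * max(0.0, accept_prob(plan, LTPD) - BETA)        # consumer's risk
    return asn + penalty

def random_plan():
    return np.array([rng.integers(20, 301), rng.integers(0, 6),
                     rng.integers(20, 301), rng.integers(1, 11)])

pop = [random_plan() for _ in range(60)]
for _ in range(200):                               # generations
    pop.sort(key=fitness)
    parents, children = pop[:20], []
    while len(children) < 40:
        i, j = rng.choice(len(parents), 2, replace=False)
        mask = rng.integers(0, 2, 4).astype(bool)
        child = np.where(mask, parents[i], parents[j])   # uniform crossover
        if rng.random() < 0.3:                           # single-gene mutation
            k = rng.integers(0, 4)
            child[k] = random_plan()[k]
        children.append(child)
    pop = parents + children

best = min(pop, key=fitness)
print("best plan (n1, c1, n2, c2):", best, " fitness:", round(float(fitness(best)), 2))
```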

  1. Coupled Low-thrust Trajectory and System Optimization via Multi-Objective Hybrid Optimal Control

    Science.gov (United States)

    Vavrina, Matthew A.; Englander, Jacob Aldo; Ghosh, Alexander R.

    2015-01-01

    The optimization of low-thrust trajectories is tightly coupled with the spacecraft hardware. Trading trajectory characteristics with system parameters to identify viable solutions and determine mission sensitivities across discrete hardware configurations is labor intensive. Local independent optimization runs can sample the design space, but a global exploration that resolves the relationships between the system variables across multiple objectives enables a full mapping of the optimal solution space. A multi-objective, hybrid optimal control algorithm is formulated using a multi-objective genetic algorithm as an outer-loop systems optimizer around a global trajectory optimizer. The coupled problem is solved simultaneously to generate Pareto-optimal solutions in a single execution. The automated approach is demonstrated on two boulder return missions.

  2. System performance optimization

    International Nuclear Information System (INIS)

    Bednarz, R.J.

    1978-01-01

    System Performance Optimization has become an important and difficult field for large scientific computer centres: important because the centres must satisfy increasing user demands at the lowest possible cost, and difficult because it requires a deep understanding of hardware, software and workload. The optimization is a dynamic process depending on the changes in hardware configuration, the current level of the operating system and the user-generated workload. With the increasing complication of the computer system and software, the field for optimization manoeuvres broadens. The hardware of two manufacturers, IBM and CDC, is discussed. Four IBM and two CDC operating systems are described. The description concentrates on the organization of the operating systems, job scheduling and I/O handling. The performance definitions, workload specification and tools for system simulation are given. The measurement tools for System Performance Optimization are described. The results of the measurements and the various methods used for operating system tuning are discussed. (Auth.)

  3. Determination of Ergot Alkaloids: Purity and Stability Assessment of Standards and Optimization of Extraction Conditions for Cereal Samples

    DEFF Research Database (Denmark)

    Krska, R.; Berthiller, F.; Schuhmacher, R.

    2008-01-01

    as those that are the most common and physiologically active. The purity of the standards was investigated by means of liquid chromatography with diode array detection, electrospray ionization, and time-of-flight mass spectrometry (LC-DAD-ESI-TOF-MS). All of the standards assessed showed purity levels...... (PSA) before LC/MS/MS. Based on the results obtained from these optimization studies, a mixture of acetonitrile with ammonium carbonate buffer was used as extraction solvent, as recoveries for all analyzed ergot alkaloids were significantly higher than those with the other solvents. Different sample...

  4. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hojin; Li Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing Lei [Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States) and Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Ehwa University, Seoul 158-710 (Korea, Republic of); Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States); Department of Statistics, Stanford University, Stanford, California 94305-4065 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5304 (United States)

    2012-07-15

    Purpose: A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the
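
At its core the planning problem described above is a fluence-map optimization with a total-variation penalty, which for small cases can be posed directly as a convex program. The toy sketch below uses cvxpy with a random dose-influence matrix as a stand-in for the clinical data; it only illustrates the TV-regularized formulation and makes no attempt to reproduce TFOCS or clinical-scale problem sizes.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 200, 60
A = rng.uniform(0.0, 1.0, size=(n_voxels, n_beamlets))   # toy dose-influence matrix
d = np.ones(n_voxels)                                     # prescribed dose (arbitrary units)

x = cp.Variable(n_beamlets, nonneg=True)                  # beamlet fluences
lam = 0.5                                                 # TV weight (assumed)

# Least-squares dose fidelity plus 1-D total variation across adjacent beamlets,
# which favors piecewise-constant fluence profiles with few deliverable segments.
objective = cp.Minimize(cp.sum_squares(A @ x - d) + lam * cp.tv(x))
cp.Problem(objective).solve()

print("distinct fluence levels:", np.unique(np.round(x.value, 2)))
```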

  5. Sampling soils for 137Cs using various field-sampling volumes

    International Nuclear Information System (INIS)

    Nyhan, J.W.; Schofield, T.G.; White, G.C.; Trujillo, G.

    1981-10-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500-, and 12 500-cm3 field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils ranged from 0.38 to 0.57. Spatial variance components of 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2 to 4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.

  6. Optimal model-based sensorless adaptive optics for epifluorescence microscopy.

    Science.gov (United States)

    Pozzi, Paolo; Soloviev, Oleg; Wilding, Dean; Vdovin, Gleb; Verhaegen, Michel

    2018-01-01

    We report on a universal sample-independent sensorless adaptive optics method, based on modal optimization of the second moment of the fluorescence emission from a point-like excitation. Our method employs a sample-independent precalibration, performed only once for the particular system, to establish the direct relation between the image quality and the aberration. The method is potentially applicable to any form of microscopy with epifluorescence detection, including the practically important case of incoherent fluorescence emission from a three dimensional object, through minor hardware modifications. We have applied the technique successfully to a widefield epifluorescence microscope and to a multiaperture confocal microscope.

  7. Fluoroquinolone antibiotics in environmental waters: sample preparation and determination.

    Science.gov (United States)

    Speltini, Andrea; Sturini, Michela; Maraschi, Federica; Profumo, Antonella

    2010-04-01

    The aim of this review is to provide a general overview of the analytical methods proposed in the last decade for trace fluoroquinolone (FQ) determination in environmental waters. A large number of studies have addressed this topic because of the importance of FQ monitoring in studies of environmental mobility and potential degradation pathways. Every step of the analysis has been carefully considered, with particular attention to sample preparation and the problems involved in the analysis of real matrices. The different strategies to minimise interference from organic matter and to achieve optimal sensitivity, especially important in samples with low FQ concentrations, are also highlighted. Results and progress in this field are described and critically commented on. Moreover, a worldwide overview of the presence of FQs in environmental waters is reported.

  8. Energy Preserved Sampling for Compressed Sensing MRI

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2014-01-01

    Full Text Available The sampling patterns, cost functions, and reconstruction algorithms play important roles in optimizing compressed sensing magnetic resonance imaging (CS-MRI). Simple random sampling patterns do not take into account the energy distribution in k-space and result in suboptimal reconstruction of MR images. Therefore, a variety of variable density (VD) based sampling patterns have been developed. To further improve on these, we propose a novel energy preserving sampling (ePRESS) method. In addition, we improve the cost function by introducing phase correction and a region-of-support matrix, and we propose an iterative thresholding algorithm (ITA) to solve the improved cost function. We evaluate the proposed ePRESS sampling method, improved cost function, and ITA reconstruction algorithm on a 2D digital phantom and 2D in vivo MR brain images of healthy volunteers. These assessments demonstrate that the proposed ePRESS method performs better than VD, POWER, and BKO; the improved cost function can achieve better reconstruction quality than the conventional cost function; and the ITA is faster than SISTA and is competitive with FISTA in terms of computation time.
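
As a minimal illustration of the reconstruction side of compressed sensing (not the ePRESS pattern or the improved cost function above), the sketch below recovers a sparse signal from randomly undersampled Fourier measurements with plain iterative soft-thresholding; the mask, threshold, and problem sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 256, 8, 80                       # signal length, sparsity, measurements

# Sparse ground truth and an undersampled (orthonormal) DFT measurement operator.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
rows = rng.choice(n, m, replace=False)     # the k-space locations actually sampled
F = np.fft.fft(np.eye(n), norm="ortho")[rows]
y = F @ x_true

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Iterative soft-thresholding (ISTA): gradient step on ||Fx - y||^2, then shrinkage.
x = np.zeros(n, dtype=complex)
step, lam = 1.0, 0.01                      # orthonormal rows => unit step is safe
for _ in range(300):
    x = x + step * F.conj().T @ (y - F @ x)
    x = soft(x.real, lam * step) + 1j * soft(x.imag, lam * step)

print("relative reconstruction error:",
      np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true))
```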

  9. Optimization of well field management

    DEFF Research Database (Denmark)

    Hansen, Annette Kirstine

    Groundwater is a limited but important resource for fresh water supply. Different conflicting objectives are important when operating a well field. This study investigates how the management of a well field can be improved with respect to different objectives simultaneously. A framework...... for optimizing well field management using multi-objective optimization is developed. The optimization uses the Strength Pareto Evolutionary Algorithm 2 (SPEA2) to find the Pareto front be- tween the conflicting objectives. The Pareto front is a set of non-inferior optimal points and provides an important tool...... for the decision-makers. The optimization framework is tested on two case studies. Both abstract around 20,000 cubic meter of water per day, but are otherwise rather different. The first case study concerns the management of Hardhof waterworks, Switzerland, where artificial infiltration of river water...

  10. Optimism, well-being, and perceived stigma in individuals living with HIV.

    Science.gov (United States)

    Ammirati, Rachel J; Lamis, Dorian A; Campos, Peter E; Farber, Eugene W

    2015-01-01

    Given the significant psychological challenges posed by HIV-related stigma for individuals living with HIV, investigating psychological resource factors for coping with HIV-related stigma is important. Optimism, which refers to generalized expectations regarding favorable outcomes, has been associated with enhanced psychological adaptation to health conditions, including HIV. Therefore, this cross-sectional study investigated associations among optimism, psychological well-being, and HIV stigma in a sample of 116 adults living with HIV and seeking mental health services. Consistent with study hypotheses, optimism was positively associated with psychological well-being, and psychological well-being was negatively associated with HIV-related stigma. Moreover, results of a full structural equation model suggested a mediation pattern such that as optimism increases, psychological well-being increases, and perceived HIV-related stigma decreases. The implications of these findings for clinical interventions and future research are discussed.

  11. Bionic optimization in structural design stochastically based methods to improve the performance of parts and assemblies

    CERN Document Server

    Gekeler, Simon

    2016-01-01

    The book provides suggestions on how to start using bionic optimization methods, including pseudo-code examples of each of the important approaches and outlines of how to improve them. The most efficient methods for accelerating the studies are discussed. These include the selection of size and generations of a study’s parameters, modification of these driving parameters, switching to gradient methods when approaching local maxima, and the use of parallel working hardware. Bionic Optimization means finding the best solution to a problem using methods found in nature. As Evolutionary Strategies and Particle Swarm Optimization seem to be the most important methods for structural optimization, we primarily focus on them. Other methods such as neural nets or ant colonies are more suited to control or process studies, so their basic ideas are outlined in order to motivate readers to start using them. A set of sample applications shows how Bionic Optimization works in practice. From academic studies on simple fra...
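
Since particle swarm optimization is one of the two approaches the book emphasizes, a bare-bones version helps fix the idea: each particle blends its own best position and the swarm's best position into its velocity update. The test function and coefficients below are generic textbook choices, not values from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # Shifted sphere function; any structural response surrogate could replace it.
    return np.sum((x - 1.5) ** 2, axis=-1)

n_particles, dim, iters = 30, 4, 200
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration coefficients

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best point:", np.round(gbest, 3), " objective:", float(pbest_val.min()))
```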

  12. Novel synthesis of nanocomposite for the extraction of Sildenafil Citrate (Viagra) from water and urine samples: Process screening and optimization.

    Science.gov (United States)

    Asfaram, Arash; Ghaedi, Mehrorang; Purkait, Mihir Kumar

    2017-09-01

    A sensitive analytical method is investigated to concentrate and determine trace levels of Sildenafil Citrate (SLC) present in water and urine samples. The method is based on a sample treatment using dispersive solid-phase micro-extraction (DSPME) with a laboratory-made Mn@CuS/ZnS nanocomposite loaded on activated carbon (Mn@CuS/ZnS-NCs-AC) as a sorbent for the target analyte. The efficiency was enhanced by ultrasound assistance, giving ultrasound-assisted dispersive nanocomposite solid-phase micro-extraction (UA-DNSPME). Four significant variables affecting SLC recovery, namely pH, eluent volume, sonication time and adsorbent mass, were selected by Plackett-Burman design (PBD) experiments. These selected factors were then optimized by a central composite design (CCD) to maximize extraction of SLC. The results showed that the optimum conditions for maximizing extraction of SLC were pH 6.0, 300 μL eluent (acetonitrile) volume, 10 mg of adsorbent and 6 min sonication time. Under optimized conditions, good linearity for SLC was obtained from 30 to 4000 ng mL⁻¹ with R² of 0.99. The limit of detection (LOD) was 2.50 ng mL⁻¹ and the recoveries at two spiked levels ranged from 97.37 to 103.21% with a relative standard deviation (RSD) of less than 4.50% (n=15). The enhancement factor (EF) was 81.91. The results show that the combination of UAE with DNSPME is a suitable method for the determination of SLC in water and urine samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Characteristics of psychiatric patients for whom financial considerations affect optimal treatment provision.

    Science.gov (United States)

    West, Joyce C; Pingitore, David; Zarin, Deborah A

    2002-12-01

    This study assessed characteristics of psychiatric patients for whom financial considerations affected the provision of "optimal" treatment. Psychiatrists reported that for 33.8 percent of 1,228 patients from a national sample, financial considerations such as managed care limitations, the patient's personal finances, and limitations inherent in the public care system adversely affected the provision of optimal treatment. Patients were more likely to have their treatment adversely affected by financial considerations if they were more severely ill, had more than one behavioral health disorder or a psychosocial problem, or were receiving treatment under managed care arrangements. Patients for whom financial considerations affect the provision of optimal treatment represent a population for whom access to treatment may be particularly important.

  14. Subdivision, Sampling, and Initialization Strategies for Simplicial Branch and Bound in Global Optimization

    DEFF Research Database (Denmark)

    Clausen, Jens; Zilinskas, A,

    2002-01-01

    We consider the problem of optimizing a Lipschitzian function. The branch and bound technique is a well-known solution method, and the key components for this are the subdivision scheme, the bound calculation scheme, and the initialization. For Lipschitzian optimization, the bound calculations are...
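
A one-dimensional sketch of the three components the abstract names (initialization, subdivision, and bound calculation) for a Lipschitzian objective: every interval carries the lower bound f(midpoint) - L*width/2, the interval with the smallest bound is bisected next, and the search stops once no interval can improve on the incumbent. The objective, Lipschitz constant, and tolerance are illustrative.

```python
import heapq
import math

L = 14.0                                   # a (generous) Lipschitz constant for f below

def f(x):
    return math.sin(3.0 * x) + 0.5 * x     # multimodal test objective on [0, 6]

def lower_bound(a, b):
    m = 0.5 * (a + b)
    return f(m) - L * (b - a) / 2.0, m     # valid since |f(x) - f(m)| <= L |x - m|

# Initialization: a single interval covering the whole domain.
a0, b0 = 0.0, 6.0
lb, m = lower_bound(a0, b0)
heap = [(lb, a0, b0)]                      # priority queue ordered by lower bound
best_x, best_f = m, f(m)                   # incumbent

# Branch and bound: always subdivide the interval with the smallest lower bound.
while heap:
    lb, a, b = heapq.heappop(heap)
    if lb > best_f - 1e-6:                 # nothing left that could improve: done
        break
    mid = 0.5 * (a + b)
    for lo, hi in ((a, mid), (mid, b)):
        child_lb, child_m = lower_bound(lo, hi)
        if f(child_m) < best_f:
            best_x, best_f = child_m, f(child_m)
        if child_lb < best_f - 1e-6:       # keep only intervals that might improve
            heapq.heappush(heap, (child_lb, lo, hi))

print(f"approximate minimizer x = {best_x:.4f}, f(x) = {best_f:.4f}")
```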

  15. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables)/standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The more the precision required, the greater is the required sample size. Sampling Techniques: The probability sampling techniques applied for health related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are more recommended than the nonprobability sampling techniques, because the results of the study can be generalized to the target population.
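
The factors listed above map directly onto the standard single-proportion sample-size formula, n = z^2 p(1-p) / d^2, optionally corrected for a finite study population. The helper below is a straightforward sketch of that calculation; the example numbers are arbitrary.

```python
from math import ceil
from scipy.stats import norm

def sample_size_proportion(p, margin, confidence=0.95, population=None):
    """Sample size for estimating a proportion p to within +/- margin."""
    z = norm.ppf(1 - (1 - confidence) / 2)       # e.g. 1.96 for 95% confidence
    n = (z ** 2) * p * (1 - p) / margin ** 2
    if population is not None:                   # finite population correction
        n = n / (1 + (n - 1) / population)
    return ceil(n)

# Expected prevalence 30%, +/- 5% precision, 95% confidence.
print(sample_size_proportion(0.30, 0.05))                   # infinite population: 323
print(sample_size_proportion(0.30, 0.05, population=5000))  # finite population of 5000
```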

  16. Nonlinear optimization

    CERN Document Server

    Ruszczynski, Andrzej

    2011-01-01

    Optimization is one of the most important areas of modern applied mathematics, with applications in fields from engineering and economics to finance, statistics, management science, and medicine. While many books have addressed its various aspects, Nonlinear Optimization is the first comprehensive treatment that will allow graduate students and researchers to understand its modern ideas, principles, and methods within a reasonable time, but without sacrificing mathematical precision. Andrzej Ruszczynski, a leading expert in the optimization of nonlinear stochastic systems, integrates the theory and the methods of nonlinear optimization in a unified, clear, and mathematically rigorous fashion, with detailed and easy-to-follow proofs illustrated by numerous examples and figures. The book covers convex analysis, the theory of optimality conditions, duality theory, and numerical methods for solving unconstrained and constrained optimization problems. It addresses not only classical material but also modern top...

  17. A hybrid algorithm for reliability analysis combining Kriging and subset simulation importance sampling

    International Nuclear Information System (INIS)

    Tong, Cao; Sun, Zhili; Zhao, Qianli; Wang, Qibin; Wang, Shuang

    2015-01-01

    To solve the problem of large computation when failure probability with time-consuming numerical model is calculated, we propose an improved active learning reliability method called AK-SSIS based on AK-IS algorithm. First, an improved iterative stopping criterion in active learning is presented so that iterations decrease dramatically. Second, the proposed method introduces Subset simulation importance sampling (SSIS) into the active learning reliability calculation, and then a learning function suitable for SSIS is proposed. Finally, the efficiency of AK-SSIS is proved by two academic examples from the literature. The results show that AK-SSIS requires fewer calls to the performance function than AK-IS, and the failure probability obtained from AK-SSIS is very robust and accurate. Then this method is applied on a spur gear pair for tooth contact fatigue reliability analysis.

  18. A hybrid algorithm for reliability analysis combining Kriging and subset simulation importance sampling

    Energy Technology Data Exchange (ETDEWEB)

    Tong, Cao; Sun, Zhili; Zhao, Qianli; Wang, Qibin [Northeastern University, Shenyang (China); Wang, Shuang [Jiangxi University of Science and Technology, Ganzhou (China)

    2015-08-15

    To solve the problem of large computation when failure probability with time-consuming numerical model is calculated, we propose an improved active learning reliability method called AK-SSIS based on AK-IS algorithm. First, an improved iterative stopping criterion in active learning is presented so that iterations decrease dramatically. Second, the proposed method introduces Subset simulation importance sampling (SSIS) into the active learning reliability calculation, and then a learning function suitable for SSIS is proposed. Finally, the efficiency of AK-SSIS is proved by two academic examples from the literature. The results show that AK-SSIS requires fewer calls to the performance function than AK-IS, and the failure probability obtained from AK-SSIS is very robust and accurate. Then this method is applied on a spur gear pair for tooth contact fatigue reliability analysis.
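
A rough, self-contained sketch of the AK-IS-style ingredients these two records build on (not the AK-SSIS algorithm itself): a Gaussian-process surrogate of the performance function, an importance-sampling population centered on an approximate most-probable failure point, the U learning function to decide where to call the true model next, and a reweighted failure-probability estimate. The kernel, sample sizes, stopping rule, and toy limit state are all illustrative.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def g(x):                                    # toy limit state; failure when g(x) <= 0
    return 3.0 - (x[:, 0] + x[:, 1]) / np.sqrt(2.0)

mpp = np.array([3.0, 3.0]) / np.sqrt(2.0)    # approximate most-probable failure point
phi = multivariate_normal(mean=[0, 0], cov=np.eye(2))    # true input density
h = multivariate_normal(mean=mpp, cov=np.eye(2))         # importance-sampling density

S = h.rvs(size=5000, random_state=1)         # importance-sampling population
weights = phi.pdf(S) / h.pdf(S)

idx = rng.choice(len(S), 12, replace=False)  # small initial design of experiments
X, y = S[idx], g(S[idx])
gp = GaussianProcessRegressor(ConstantKernel(1.0) * RBF(1.0), normalize_y=True)

for _ in range(50):                          # budget of additional true-model calls
    gp.fit(X, y)
    mu, sd = gp.predict(S, return_std=True)
    U = np.abs(mu) / np.maximum(sd, 1e-12)   # U learning function
    if U.min() > 2.0:                        # sign of g is trusted everywhere: stop
        break
    j = int(np.argmin(U))
    X, y = np.vstack([X, S[j]]), np.append(y, g(S[j][None, :]))

pf = np.mean((mu <= 0) * weights)            # reweighted failure-probability estimate
print(f"estimated P_f = {pf:.2e} (exact for this toy case: 1.35e-03), model calls = {len(y)}")
```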

  19. Comparison of optimal design methods in inverse problems

    International Nuclear Information System (INIS)

    Banks, H T; Holm, K; Kappel, F

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst–Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667–77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136–68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979–90)

  20. Comparison of optimal design methods in inverse problems

    Science.gov (United States)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
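
To make the Fisher-information-based criteria concrete, the sketch below builds the FIM for the Verhulst-Pearl logistic model mentioned in these records from finite-difference sensitivities, scores a candidate set of sampling times with the D-optimality criterion (log-determinant of the FIM), and compares a uniform grid against a crude random search over designs. The parameter values, noise model, and search strategy are illustrative, not those of the papers.

```python
import numpy as np

K, r, x0, sigma = 17.5, 0.7, 0.1, 0.5        # nominal parameters and noise level (assumed)
theta = np.array([K, r, x0])

def logistic(t, th):
    K_, r_, x0_ = th
    e = np.exp(r_ * t)
    return K_ * x0_ * e / (K_ + x0_ * (e - 1.0))

def fim(times, th, eps=1e-6):
    # Fisher information for iid Gaussian observation noise, from finite-difference
    # sensitivities d x(t) / d theta at the nominal parameter values.
    sens = np.empty((len(times), len(th)))
    for j in range(len(th)):
        d = np.zeros_like(th)
        d[j] = eps * max(abs(th[j]), 1.0)
        sens[:, j] = (logistic(times, th + d) - logistic(times, th - d)) / (2 * d[j])
    return sens.T @ sens / sigma ** 2

def d_criterion(times):
    sign, logdet = np.linalg.slogdet(fim(times, theta))
    return logdet if sign > 0 else -np.inf

rng = np.random.default_rng(0)
uniform = np.linspace(0.5, 25.0, 8)          # naive design: 8 equally spaced times
best, best_score = uniform, d_criterion(uniform)
for _ in range(5000):                        # crude random search over 8-point designs
    cand = np.sort(rng.uniform(0.5, 25.0, 8))
    score = d_criterion(cand)
    if score > best_score:
        best, best_score = cand, score

print("uniform design, log det FIM:", round(d_criterion(uniform), 2))
print("searched design:", np.round(best, 2), " log det FIM:", round(best_score, 2))
```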

  1. Validation of dipslides as a tool for environmental sampling in a real-life hospital setting

    DEFF Research Database (Denmark)

    Ibfelt, T; Foged, Charlotte Bernhardt Laiho; Andersen, L P

    2014-01-01

    Environmental sampling in hospitals is becoming increasingly important because of the rise in nosocomial infections. In order to monitor and track these infections and optimize cleaning and disinfection, we need to be able to locate the fomites with the highest amount of microorganisms, but the o...

  2. Simultaneous Optimization of Tallies in Difficult Shielding Problems

    International Nuclear Information System (INIS)

    Peplow, Douglas E.; Evans, Thomas M.; Wagner, John C.

    2008-01-01

    Monte Carlo is quite useful for calculating specific quantities in complex transport problems. Many variance reduction strategies have been developed that accelerate Monte Carlo calculations for specific tallies. However, when trying to calculate multiple tallies or a mesh tally, users have had to accept different levels of relative uncertainty among the tallies or run separate calculations optimized for each individual tally. To address this limitation, an extension of the CADIS (Consistent Adjoint Driven Importance Sampling) method, which is used for difficult source/detector problems, has been developed to optimize several tallies or the cells of a mesh tally simultaneously. The basis for this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. This method utilizes the results of a forward discrete ordinates solution, which may be based on a quick, coarse-mesh calculation, to develop a forward-weighted source for the adjoint calculation. The importance map and the biased source computed from the adjoint flux are then used in the forward Monte Carlo calculation to obtain approximately uniform relative uncertainties for the desired tallies. This extension is called forward-weighted CADIS, or FW-CADIS

  3. Optimization and approximation

    CERN Document Server

    Pedregal, Pablo

    2017-01-01

    This book provides a basic, initial resource, introducing science and engineering students to the field of optimization. It covers three main areas: mathematical programming, calculus of variations and optimal control, highlighting the ideas and concepts and offering insights into the importance of optimality conditions in each area. It also systematically presents affordable approximation methods. Exercises at various levels have been included to support the learning process.

  4. Technical Note: Comparison of storage strategies of sea surface microlayer samples

    Directory of Open Access Journals (Sweden)

    K. Schneider-Zapp

    2013-07-01

    Full Text Available The sea surface microlayer (SML) is an important biogeochemical system whose physico-chemical analysis often necessitates some degree of sample storage. However, many SML components degrade with time so the development of optimal storage protocols is paramount. We here briefly review some commonly used treatment and storage protocols. Using freshwater and saline SML samples from a river estuary, we investigated temporal changes in surfactant activity (SA) and the absorbance and fluorescence of chromophoric dissolved organic matter (CDOM) over four weeks, following selected sample treatment and storage protocols. Some variability in the effectiveness of individual protocols most likely reflects sample provenance. None of the various protocols examined performed any better than dark storage at 4 °C without pre-treatment. We therefore recommend storing samples refrigerated in the dark.

  5. Characterizing lentic freshwater fish assemblages using multiple sampling methods

    Science.gov (United States)

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    Characterizing fish assemblages in lentic ecosystems is difficult, and multiple sampling methods are almost always necessary to gain reliable estimates of indices such as species richness. However, most research focused on lentic fish sampling methodology has targeted recreationally important species, and little to no information is available regarding the influence of multiple methods and timing (i.e., temporal variation) on characterizing entire fish assemblages. Therefore, six lakes and impoundments (48–1,557 ha surface area) were sampled seasonally with seven gear types to evaluate the combined influence of sampling methods and timing on the number of species and individuals sampled. Probabilities of detection for species indicated strong selectivities and seasonal trends that provide guidance on optimal seasons to use gears when targeting multiple species. The evaluation of species richness and number of individuals sampled using multiple gear combinations demonstrated that appreciable benefits over relatively few gears (e.g., to four) used in optimal seasons were not present. Specifically, over 90 % of the species encountered with all gear types and season combinations (N = 19) from six lakes and reservoirs were sampled with nighttime boat electrofishing in the fall and benthic trawling, modified-fyke, and mini-fyke netting during the summer. Our results indicated that the characterization of lentic fish assemblages was highly influenced by the selection of sampling gears and seasons, but did not appear to be influenced by waterbody type (i.e., natural lake, impoundment). The standardization of data collected with multiple methods and seasons to account for bias is imperative to monitoring of lentic ecosystems and will provide researchers with increased reliability in their interpretations and decisions made using information on lentic fish assemblages.

  6. Optimization of pressurized liquid extraction (PLE) of dioxin-furans and dioxin-like PCBs from environmental samples.

    Science.gov (United States)

    Antunes, Pedro; Viana, Paula; Vinhas, Tereza; Capelo, J L; Rivera, J; Gaspar, Elvira M S M

    2008-05-30

    Pressurized liquid extraction (PLE), applying three extraction cycles, temperature and pressure, improved the efficiency of solvent extraction when compared with classical Soxhlet extraction. Polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs) and dioxin-like PCBs (coplanar polychlorinated biphenyls, Co-PCBs) in two Certified Reference Materials [DX-1 (sediment) and BCR 529 (soil)] and in two contaminated environmental samples (sediment and soil) were extracted by ASE and Soxhlet methods. Unlike data previously reported by other authors, the results demonstrated that ASE using n-hexane as solvent with three extraction cycles, 12.4 MPa (1800 psi) and 150 degrees C achieves recovery results similar to classical Soxhlet extraction for PCDFs and Co-PCBs, and better recovery results for PCDDs. ASE, performed in less time and with less solvent, proved to be, under optimized conditions, an excellent extraction technique for the simultaneous analysis of PCDD/PCDFs and Co-PCBs from environmental samples. Such a fast analytical methodology, with the best cost-efficiency ratio, will improve monitoring and provide more information about the occurrence of dioxins and their toxicity, and will thereby contribute to the protection of human health.

  7. The importance of community building for establishing data management and curation practices for physical samples

    Science.gov (United States)

    Ramdeen, S.; Hangsterfer, A.; Stanley, V. L.

    2017-12-01

    There is growing enthusiasm for curation of physical samples in the Earth Science community (see sessions at AGU, GSA, ESIP). Multiple federally funded efforts aim to develop best practices for curation of physical samples; however, these efforts have not yet been consolidated. Harmonizing these concurrent efforts would enable the community as a whole to build the necessary tools and community standards to move forward together. Preliminary research indicate the various groups focused on this topic are working in isolation, and the development of standards needs to come from the broadest view of `community'. We will investigate the gaps between communities by collecting information about preservation policies and practices from curators, who can provide a diverse cross-section of the grand challenges to the overall community. We will look at existing reports and study results to identify example cases, then develop a survey to gather large scale data to reinforce or clarify the example cases. We will be targeting the various community groups which are working on similar issues, and use the survey to improve the visibility of developed best practices. Given that preservation and digital collection management for physical samples are both important and difficult at present (GMRWG, 2015; NRC, 2002), barriers to both need to be addressed in order to achieve open science goals for the entire community. To address these challenges, EarthCube's iSamples, a research coordination network established to advance discoverability, access, and curation of physical samples using cyberinfrastructure, has formed a working group to collect use cases to examine the breadth of earth scientists' work with physical samples. This research team includes curators of state survey and oceanographic geological collections, and a researcher from information science. In our presentation, we will share our research and the design of the proposed survey. Our goal is to engage the audience in a

  8. Purification and concentration of lead samples in biological monitoring of occupational exposures

    Directory of Open Access Journals (Sweden)

    A Rahimi-Froushani

    2006-04-01

    Full Text Available Background and Aims: Lead is an important environmental constituent widely used in industrial processes for the production of synthetic materials and therefore can be released into the environment, causing public exposure especially around industrial residence areas. For evaluation of human exposure to the trace toxic metal Pb (II), environmental and biological monitoring are essential processes, in which preparation of such samples is one of the most time-consuming and error-prone aspects prior to analysis. The use of solid-phase extraction (SPE) has grown and is a fertile technique of sample preparation as it provides better results than those produced by liquid-liquid extraction (LLE). The aim of this study was to investigate factors influencing sample pretreatment for trace analysis of lead in biological samples for evaluation of occupational exposure. Method: To evaluate factors influencing the quantitative analysis scheme of lead, solid-phase extraction using mini columns filled with XAD-4 resin was optimized with regard to sample pH, ligand concentration, loading flow rate, elution solvent, sample volume (up to 500 ml), elution volume, amount of resin, and sample matrix interferences. Results: Lead was retained on the solid sorbent and eluted, followed by simple determination of analytes by flame atomic absorption spectrometry. Obtained recoveries of the metal ion were more than 92%. The amount of the analyte detected after simultaneous pre-concentration was basically in agreement with the added amounts. The optimized procedure was also validated with three different pools of spiked urine samples and showed good reproducibility over six consecutive days as well as six within-day experiments. The developed method promised to be applicable for evaluation of other metal ions present in different environmental and occupational samples, as suitable results were obtained for relative standard deviation (less than 10%). Conclusion: This optimized method can be considered to be

  9. BWROPT: A multi-cycle BWR fuel cycle optimization code

    Energy Technology Data Exchange (ETDEWEB)

    Ottinger, Keith E.; Maldonado, G. Ivan, E-mail: Ivan.Maldonado@utk.edu

    2015-09-15

    Highlights: • A multi-cycle BWR fuel cycle optimization algorithm is presented. • New fuel inventory and core loading pattern determination. • The parallel simulated annealing algorithm was used for the optimization. • Variable sampling probabilities were compared to constant sampling probabilities. - Abstract: A new computer code for performing BWR in-core and out-of-core fuel cycle optimization for multiple cycles simultaneously has been developed. Parallel simulated annealing (PSA) is used to optimize the new fuel inventory and placement of new and reload fuel for each cycle considered. Several algorithm improvements were implemented and evaluated. The most significant of these are variable sampling probabilities and sampling new fuel types from an ordered array. A heuristic control rod pattern (CRP) search algorithm was also implemented, which is useful for single CRP determinations, however, this feature requires significant computational resources and is currently not practical for use in a full multi-cycle optimization. The PSA algorithm was demonstrated to be capable of significant objective function reduction and finding candidate loading patterns without constraint violations. The use of variable sampling probabilities was shown to reduce runtime while producing better results compared to using constant sampling probabilities. Sampling new fuel types from an ordered array was shown to have a mixed effect compared to random new fuel type sampling, whereby using both random and ordered sampling produced better results but required longer runtimes.
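
The "variable sampling probabilities" idea, adapting how often each move type is proposed as the annealing run progresses, can be shown with a generic simulated-annealing loop on a toy assignment problem. Nothing below is BWR-specific; the move types, adaptation rule, and objective are stand-ins that only illustrate the mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.integers(0, 10, 20)                 # hidden "optimal loading" (toy problem)

def cost(x):
    return float(np.sum((x - target) ** 2))      # stand-in objective function

def swap_move(x):
    y = x.copy()
    i, j = rng.choice(len(x), 2, replace=False)
    y[i], y[j] = y[j], y[i]
    return y

def replace_move(x):
    y = x.copy()
    y[rng.integers(len(x))] = rng.integers(0, 10)
    return y

moves = [swap_move, replace_move]
probs = np.array([0.5, 0.5])                     # sampling probability per move type
successes = np.ones(2)                           # accepted-move counters for adaptation

x = rng.integers(0, 10, 20)
cur, T = cost(x), 50.0
for step in range(20000):
    k = rng.choice(2, p=probs)
    y = moves[k](x)
    new = cost(y)
    if new < cur or rng.random() < np.exp((cur - new) / T):   # Metropolis acceptance
        x, cur = y, new
        successes[k] += 1
    if step % 500 == 499:                        # periodically re-balance move sampling
        probs = 0.1 + 0.8 * successes / successes.sum()
        probs /= probs.sum()
        successes[:] = 1.0
    T *= 0.9997                                  # geometric cooling schedule

print("final cost:", cur, " move sampling probabilities:", np.round(probs, 2))
```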

  10. Optimization and application of octadecyl-modified monolithic silica for solid-phase extraction of drugs in whole blood samples.

    Science.gov (United States)

    Namera, Akira; Saito, Takeshi; Ota, Shigenori; Miyazaki, Shota; Oikawa, Hiroshi; Murata, Kazuhiro; Nagao, Masataka

    2017-09-29

    Monolithic silica in MonoSpin for solid-phase extraction of drugs from whole blood samples was developed to facilitate high-throughput analysis. Monolithic silica of various pore sizes and octadecyl contents were synthesized, and their effects on recovery rates were evaluated. The silica monolith M18-200 (20 μm through-pore size, 10.4 nm mesopore size, and 17.3% carbon content) achieved the best recovery of the target analytes in whole blood samples. The extraction proceeded with centrifugal force at 1000 rpm for 2 min, and the eluate was directly injected into the liquid chromatography-mass spectrometry system without any tedious steps such as evaporation of extraction solvents. Under the optimized condition, low detection limits of 0.5-2.0 ng mL⁻¹ and calibration ranges up to 1000 ng mL⁻¹ were obtained. The recoveries of the target drugs in the whole blood were 76-108% with relative standard deviation of less than 14.3%. These results indicate that the developed method based on monolithic silica is convenient, highly efficient, and applicable for detecting drugs in whole blood samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. The multi-objective optimization of the horizontal-axis marine current turbine based on NSGA-II algorithm

    International Nuclear Information System (INIS)

    Zhu, G J; Guo, P C; Luo, X Q; Feng, J J

    2012-01-01

    The present paper describes a hydrodynamic optimization technique for a horizontal-axis marine current turbine. The pitch angle distribution is important to the performance of a marine current turbine. In this paper, the pitch angle distribution curve is parameterized by four control points using the Bezier curve method. The coordinates of the four control points are chosen as optimization variables, and the sample space is structured according to the Box-Behnken experimental design method (BBD). The power capture coefficient and axial thrust coefficient at the design tip-speed ratio are then obtained for all the elements in the sample space by CFD numerical simulation. The power capture coefficient and axial thrust are chosen as objective functions, and quadratic polynomial regression equations are constructed to fit the relationship between the optimization variables and each objective function according to a response surface model. With the obtained quadratic polynomial regression equations as the performance prediction model, the marine current turbine is optimized using the NSGA-II multi-objective genetic algorithm, which finally yields an improved marine current turbine design.
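
The workflow in this record, sample the design space with an experimental design, fit quadratic response surfaces for the two objectives, then search the surrogates for trade-off solutions, can be miniaturized as below. Two design variables stand in for the four Bezier control points, the "CFD" results are a synthetic analytic stand-in, and the Pareto set is extracted by brute-force non-dominated filtering rather than NSGA-II.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfd_stand_in(x):
    # Synthetic stand-in for CFD results: power coefficient (to maximize) and
    # axial thrust coefficient (to minimize) as smooth functions of two variables.
    cp_ = 0.45 - 0.3 * (x[:, 0] - 0.4) ** 2 - 0.2 * (x[:, 1] - 0.6) ** 2
    ct_ = 0.60 + 0.5 * x[:, 0] + 0.3 * x[:, 1] ** 2
    return cp_, ct_

def quad_features(x):
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones(len(x)), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

# "Experiments" at sampled design points, then least-squares quadratic response surfaces.
X = rng.uniform(0.0, 1.0, (30, 2))
cp_obs, ct_obs = cfd_stand_in(X)
beta_cp, *_ = np.linalg.lstsq(quad_features(X), cp_obs, rcond=None)
beta_ct, *_ = np.linalg.lstsq(quad_features(X), ct_obs, rcond=None)

# Evaluate the surrogates on a dense grid and keep the non-dominated designs.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 80), np.linspace(0, 1, 80)), -1).reshape(-1, 2)
cp_hat = quad_features(grid) @ beta_cp
ct_hat = quad_features(grid) @ beta_ct
pareto = []
for i in range(len(grid)):
    dominated = np.any((cp_hat >= cp_hat[i]) & (ct_hat <= ct_hat[i]) &
                       ((cp_hat > cp_hat[i]) | (ct_hat < ct_hat[i])))
    if not dominated:
        pareto.append(i)

print(f"{len(pareto)} non-dominated designs; best power coefficient on the front:",
      round(float(cp_hat[pareto].max()), 3))
```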

  12. Optimal protocols and optimal transport in stochastic thermodynamics.

    Science.gov (United States)

    Aurell, Erik; Mejía-Monasterio, Carlos; Muratore-Ginanneschi, Paolo

    2011-06-24

    Thermodynamics of small systems has become an important field of statistical physics. Such systems are driven out of equilibrium by a control, and the question is naturally posed how such a control can be optimized. We show that optimization problems in small system thermodynamics are solved by (deterministic) optimal transport, for which very efficient numerical methods have been developed, and of which there are applications in cosmology, fluid mechanics, logistics, and many other fields. We show, in particular, that minimizing expected heat released or work done during a nonequilibrium transition in finite time is solved by the Burgers equation and mass transport by the Burgers velocity field. Our contribution hence considerably extends the range of solvable optimization problems in small system thermodynamics.

  13. Optimization Extracting Technology of Cynomorium songaricum Rupr. Saponins by Ultrasonic and Determination of Saponins Content in Samples with Different Source

    OpenAIRE

    Xiaoli Wang; Qingwei Wei; Xinqiang Zhu; Chunmei Wang; Yonggang Wang; Peng Lin; Lin Yang

    2015-01-01

    The extraction process was optimized by single-factor and orthogonal experiments (L9(3^4)). Moreover, the methodology for content determination was studied. The optimum ultrasonic extraction conditions were: ethanol concentration of 75%, ultrasonic power of 420 W, solid-liquid ratio of 1:15, extraction duration of 45 min, extraction temperature of 90°C and two extraction cycles. The saponins content in the Guazhou samples was significantly higher than that in the Xinjiang and Inner Mongolia samples. Meanwhile, G...

  14. Optimizing the magnetization-prepared rapid gradient-echo (MP-RAGE) sequence.

    Directory of Open Access Journals (Sweden)

    Jinghua Wang

    Full Text Available The three-dimensional (3D) magnetization-prepared rapid gradient-echo (MP-RAGE) sequence is one of the most popular sequences for structural brain imaging in clinical and research settings. The sequence captures high tissue contrast and provides high spatial resolution with whole brain coverage in a short scan time. In this paper, we first computed the optimal k-space sampling by optimizing the contrast of simulated images acquired with the MP-RAGE sequence at 3.0 Tesla using computer simulations. Because the software of our scanner has only limited settings for k-space sampling, we then determined the optimal k-space sampling for settings that can be realized on our scanner. Subsequently we optimized several major imaging parameters to maximize normal brain tissue contrasts under the optimal k-space sampling. The optimal parameters are flip angle of 12°, effective inversion time within 900 to 1100 ms, and delay time of 0 ms. In vivo experiments showed that the quality of images acquired with our optimal protocol was significantly higher than that of images obtained using recommended protocols in prior publications. The optimization of k-space sampling and imaging parameters significantly improved the quality and detection sensitivity of brain images acquired with MP-RAGE.

  15. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
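
    The following Python sketch illustrates the general idea of progressive sampling for model selection (successively larger subsamples, with weak candidates dropped at each round); it is not the authors' Bayesian optimization implementation, and the candidate grid and data are invented for illustration.

```python
# Hedged sketch of progressive sampling: score candidate configurations on
# growing subsamples so that full-data training is only paid for promising ones.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, random_state=0)
candidates = [{"n_estimators": n, "max_depth": d}
              for n in (50, 200) for d in (3, 10)]

rng = np.random.default_rng(0)
for frac in (0.1, 0.3, 1.0):
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    scores = [cross_val_score(RandomForestClassifier(**p, random_state=0),
                              X[idx], y[idx], cv=3).mean() for p in candidates]
    order = np.argsort(scores)[::-1]
    candidates = [candidates[i] for i in order[:max(1, len(order) // 2)]]

print("selected configuration:", candidates[0])
```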

  16. Parallel importance sampling in conditional linear Gaussian networks

    DEFF Research Database (Denmark)

    Salmerón, Antonio; Ramos-López, Darío; Borchani, Hanen

    2015-01-01

    In this paper we analyse the problem of probabilistic inference in conditional linear Gaussian (CLG) networks when evidence comes in streams. In such situations, fast and scalable algorithms, able to provide accurate responses in a short time, are required. We consider the instantiation of variational inference and importance ...
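
    Since importance sampling recurs throughout these records, a minimal generic estimator is sketched below; it is not the streaming CLG algorithm of this record, and the target, proposal and tail event are chosen purely for illustration.

```python
# Hedged sketch: estimate P(X > 3) for X ~ N(0, 1) by sampling from a shifted
# proposal N(3, 1) and reweighting with the density ratio (importance weights).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000
target = stats.norm(loc=0.0, scale=1.0)
proposal = stats.norm(loc=3.0, scale=1.0)

x = proposal.rvs(size=n, random_state=rng)
weights = target.pdf(x) / proposal.pdf(x)
estimate = np.mean((x > 3.0) * weights)

print(estimate, 1.0 - target.cdf(3.0))  # close to the exact tail probability
```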

  17. Optimization the composition of sand-lime products modified of diabase aggregate

    Science.gov (United States)

    Komisarczyk, K.; Stępień, A.

    2017-10-01

    The problem of optimizing the composition of building materials is currently of great importance due to increasing competitiveness and technological development in the construction industry. This also applies to sand-lime products. The appropriate arrangement of individual components, or their equivalents, and their linkage to the main parameters of the mixture composition, i.e. the lime/sand/water ratio, should lead to the intended result. Introducing diabase aggregate into sand-lime products has a positive effect on the final products. The paper presents the results of optimization with the addition of diabase aggregate. The amount of water was kept constant and the mass of the dry ingredients was varied. The experimental program covered 6 series of silicates produced under industrial conditions. The final samples were tested for mechanical and physico-chemical properties, with the analysis extended by mercury intrusion porosimetry, SEM and XRD. The results show differences depending on the aggregate content. For the sample containing 10% diabase aggregate, the compressive strength was higher than that of the reference sample, while the modified samples absorbed less water.

  18. A microscale protein NMR sample screening pipeline

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, Paolo; Swapna, G. V. T.; Huang, Yuanpeng J.; Aramini, James M. [State University of New Jersey, Center for Advanced Biotechnology and Medicine, Department of Molecular Biology and Biochemistry, Rutgers (United States); Anklin, Clemens [Bruker Biospin Corporation (United States); Conover, Kenith; Hamilton, Keith; Xiao, Rong; Acton, Thomas B.; Ertekin, Asli; Everett, John K.; Montelione, Gaetano T., E-mail: guy@cabm.rutgers.ed [State University of New Jersey, Center for Advanced Biotechnology and Medicine, Department of Molecular Biology and Biochemistry, Rutgers (United States)

    2010-01-15

    As part of efforts to develop improved methods for NMR protein sample preparation and structure determination, the Northeast Structural Genomics Consortium (NESG) has implemented an NMR screening pipeline for protein target selection, construct optimization, and buffer optimization, incorporating efficient microscale NMR screening of proteins using a micro-cryoprobe. The process is feasible because the newest generation probe requires only small amounts of protein, typically 30-200 μg in 8-35 μl volume. Extensive automation has been made possible by the combination of database tools, mechanization of key process steps, and the use of a micro-cryoprobe that gives excellent data while requiring little optimization and manual setup. In this perspective, we describe the overall process used by the NESG for screening NMR samples as part of a sample optimization process, assessing optimal construct design and solution conditions, as well as for determining protein rotational correlation times in order to assess protein oligomerization states. Database infrastructure has been developed to allow for flexible implementation of new screening protocols and harvesting of the resulting output. The NESG micro NMR screening pipeline has also been used for detergent screening of membrane proteins. Descriptions of the individual steps in the NESG NMR sample design, production, and screening pipeline are presented in the format of a standard operating procedure.

  19. Dual-mode nested search method for categorical uncertain multi-objective optimization

    Science.gov (United States)

    Tang, Long; Wang, Hu

    2016-10-01

    Categorical multi-objective optimization is an important issue involved in many matching design problems. Non-numerical variables and their uncertainty are the major challenges of such optimizations. Therefore, this article proposes a dual-mode nested search (DMNS) method. In the outer layer, kriging metamodels are established using standard regular simplex mapping (SRSM) from categorical candidates to numerical values. Assisted by the metamodels, a k-cluster-based intelligent sampling strategy is developed to search Pareto frontier points. The inner layer uses an interval number method to model the uncertainty of categorical candidates. To improve the efficiency, a multi-feature convergent optimization via most-promising-area stochastic search (MFCOMPASS) is proposed to determine the bounds of objectives. Finally, typical numerical examples are employed to demonstrate the effectiveness of the proposed DMNS method.

  20. The effect of an optimized imaging flow cytometry analysis template on sample throughput in the reduced culture cytokinesis-block micronucleus assay

    International Nuclear Information System (INIS)

    Rodrigues, M.A.; Beaton-Green, L.A.; Wilkins, R.C.; Probst, C.E.

    2016-01-01

    In cases of overexposure to ionizing radiation, the cytokinesis-block micronucleus (CBMN) assay can be performed in order to estimate the dose of radiation to an exposed individual. However, in the event of a large-scale radiation accident with many potentially exposed casualties, the assay must be able to generate accurate dose estimates to within ±0.5 Gy as quickly as possible. The assay has been adapted to, validated and optimized on the ImageStream X imaging flow cytometer. The ease of running this automated version of the CBMN assay allowed investigation into the accuracy of dose estimates after reducing the volume of whole blood cultured to 200 μl and reducing the culture time to 48 h. The data analysis template used to identify binucleated lymphocyte cells (BNCs) and micronuclei (MN) has since been optimized to improve the sensitivity and specificity of BNC and MN detection. This paper presents a re-analysis of existing data using this optimized analysis template to demonstrate that dose estimations from blinded samples can be obtained to the same level of accuracy in a shorter data collection time. Here, we show that dose estimates from blinded samples were obtained to within ±0.5 Gy of the delivered dose when data collection time was reduced by 30 min at standard culture conditions and by 15 min at reduced culture conditions. Reducing data collection time while retaining the same level of accuracy in our imaging flow cytometry-based version of the CBMN assay results in higher throughput and further increases the relevancy of the CBMN assay as a radiation biodosimeter. (authors)

  1. Is patient size important in dose determination and optimization in cardiology?

    International Nuclear Information System (INIS)

    Reay, J; Chapple, C L; Kotre, C J

    2003-01-01

    Patient dose determination and optimization have become more topical in recent years with the implementation of the Medical Exposures Directive into national legislation, the Ionising Radiation (Medical Exposure) Regulations. This legislation incorporates a requirement for new equipment to provide a means of displaying a measure of patient exposure and introduces the concept of diagnostic reference levels. It is normally assumed that patient dose is governed largely by patient size; however, in cardiology, where procedures are often very complex, the significance of patient size is less well understood. This study considers over 9000 cardiology procedures, undertaken throughout the north of England, and investigates the relationship between patient size and dose. It uses simple linear regression to calculate both correlation coefficients and significance levels for data sorted by both room and individual clinician for the four most common examinations: left ventricular and/or coronary angiography, single vessel stent insertion and single vessel angioplasty. This paper concludes that the correlation between patient size and dose is weak for the procedures considered. It also illustrates the use of an existing method for removing the effect of patient size from dose survey data. This allows typical doses and, therefore, reference levels to be defined for the purposes of dose optimization

  2. Optimization of time characteristics in activation analysis

    International Nuclear Information System (INIS)

    Gurvich, L.G.; Umaraliev, A.T.

    2006-01-01

    Full text: The methods currently developed for optimizing the temporal characteristics of activation analysis aim at determining optimal values of three important parameters - irradiation time, cooling time and measurement time. Previous works, especially [1-5], describe the activation analysis processes, derive the optimal parameter values from the equations solved, and give computational results for these parameters for a number of elements. However, the equations presented in [2] were inaccurate, did not allow the optimization parameters to be calculated for single-element content determinations, and did not take into account the time dependence of the background. We therefore propose modified equations for determining the optimal temporal parameters, together with iterative processes for their solution. It is well known that the activity of the studied sample does not change significantly during measurement, i.e. the measurement time is much shorter than the half-life, so the processes involved can be described by the Poisson probability distribution and, in the general case, by the binomial distribution. The equations and iterative processes used in this research cover both probability distributions. As expected, the cooling-time iteration expressions obtained for the single-element case are similar for both distribution types, as the optimized time values turned out to be of the same order as the half-life values, whereas the cooling time, as we observed, depends on the ratio of the studied sample's peak to the background peak and can be significantly larger than the half-life. This pattern is general and can be derived from the optimized time expressions, which is supported by experimental data on short-lived isotopes [3,4]. For isotopes with long half-lives, up to years, such as cobalt-60, the cooling times given in the above-mentioned works amount to months, which, apparently

  3. m-AAA Complexes Are Not Crucial for the Survival of Arabidopsis Under Optimal Growth Conditions Despite Their Importance for Mitochondrial Translation.

    Science.gov (United States)

    Kolodziejczak, Marta; Skibior-Blaszczyk, Renata; Janska, Hanna

    2018-05-01

    For optimal mitochondrial activity, the mitochondrial proteome must be properly maintained or altered in response to developmental and environmental stimuli. Based on studies of yeast and humans, key players in this control are the m-AAA proteases, mitochondrial inner membrane-bound ATP-dependent metalloenzymes. This study focuses on the importance of m-AAA proteases in plant mitochondria, providing their first experimentally proven physiological substrate. We found that the Arabidopsis m-AAA complexes composed of AtFTSH3 and/or AtFTSH10 are involved in the proteolytic maturation of ribosomal subunit L32. Consequently, in the double Arabidopsis ftsh3/10 mutant, mitoribosome biogenesis, mitochondrial translation and functionality of OXPHOS (oxidative phosphorylation) complexes are impaired. However, in contrast to their mammalian or yeast counterparts, plant m-AAA complexes are not critical for the survival of Arabidopsis under optimal conditions; ftsh3/10 plants are only slightly smaller in size at the early developmental stage compared with plants containing m-AAA complexes. Our data suggest that the lack of significant visible morphological alterations under optimal growth conditions involves mechanisms which rely on existing functional redundancy and induced functional compensation in Arabidopsis mitochondria.

  4. Determination of Pu in soil samples

    International Nuclear Information System (INIS)

    Torres C, C. O.; Hernandez M, H.; Romero G, E. T.; Vega C, H. R.

    2016-10-01

    The irreversible consequences of accidents occurring in nuclear plants and in nuclear fuel reprocessing sites are mainly the distribution of different radionuclides in different matrices such as the soil. The distribution in the superficial soil is related to the internal and external radiation exposure of the affected population. Internal contamination with radionuclides such as Pu is of great relevance to nuclear forensic science, where it is important to know the chemical and isotopic compositions of nuclear materials. The objective of this work is to optimize the radiochemical separation of plutonium (Pu) from soil samples and to determine its concentration. The soil samples were prepared using microwave-assisted acid digestion; purification of Pu was carried out with AG1X8 resin using ion exchange chromatography. Pu isotopes were measured using ICP-SFMS. In order to reduce the interference due to the presence of ²³⁸UH⁺ in the samples, a solvent removal system (Apex) was used. In addition, the limits of detection and quantification of Pu were determined. It was found that the recovery efficiency of Pu in soil samples ranges from 70 to 93%. (Author)

  5. Future xenon system operational parameter optimization

    International Nuclear Information System (INIS)

    Lowrey, J.D.; Eslinger, P.W.; Miley, H.S.

    2016-01-01

    Any atmospheric monitoring network will have practical limitations in the density of its sampling stations. The classical approach to network optimization has been to have 12 or 24-h integration of air samples at the highest station density possible to improve minimum detectable concentrations. The authors present here considerations on optimizing sampler integration time to make the best use of any network and maximize the likelihood of collecting quality samples at any given location. In particular, this work makes the case that shorter duration sample integration (i.e. <12 h) enhances critical isotopic information and improves the source location capability of a radionuclide network, or even just one station. (author)

  6. Convergent evolution of vascular optimization in kelp (Laminariales).

    Science.gov (United States)

    Drobnitch, Sarah Tepler; Jensen, Kaare H; Prentice, Paige; Pittermann, Jarmila

    2015-10-07

    Terrestrial plants and mammals, although separated by a great evolutionary distance, have each arrived at a highly conserved body plan in which universal allometric scaling relationships govern the anatomy of vascular networks and key functional metabolic traits. The universality of allometric scaling suggests that these phyla have each evolved an 'optimal' transport strategy that has been overwhelmingly adopted by extant species. To truly evaluate the dominance and universality of vascular optimization, however, it is critical to examine other, lesser-known, vascularized phyla. The brown algae (Phaeophyceae) are one such group--as distantly related to plants as mammals, they have convergently evolved a plant-like body plan and a specialized phloem-like transport network. To evaluate possible scaling and optimization in the kelp vascular system, we developed a model of optimized transport anatomy and tested it with measurements of the giant kelp, Macrocystis pyrifera, which is among the largest and most successful of macroalgae. We also evaluated three classical allometric relationships pertaining to plant vascular tissues with a diverse sampling of kelp species. Macrocystis pyrifera displays strong scaling relationships between all tested vascular parameters and agrees with our model; other species within the Laminariales display weak or inconsistent vascular allometries. The lack of universal scaling in the kelps and the presence of optimized transport anatomy in M. pyrifera raises important questions about the evolution of optimization and the possible competitive advantage conferred by optimized vascular systems to multicellular phyla. © 2015 The Author(s).

  7. Optimization theory with applications

    CERN Document Server

    Pierre, Donald A

    1987-01-01

    Optimization principles are of undisputed importance in modern design and system operation. They can be used for many purposes: optimal design of systems, optimal operation of systems, determination of performance limitations of systems, or simply the solution of sets of equations. While most books on optimization are limited to essentially one approach, this volume offers a broad spectrum of approaches, with emphasis on basic techniques from both classical and modern work.After an introductory chapter introducing those system concepts that prevail throughout optimization problems of all typ

  8. Optimization Methods in Emotion Recognition System

    Directory of Open Access Journals (Sweden)

    L. Povoda

    2016-09-01

    Full Text Available Emotions play a big role in our everyday communication and contain important information. This work describes a novel method of automatic emotion recognition from textual data. The method is based on well-known data mining techniques, a novel approach based on a parallel run of SVM (Support Vector Machine) classifiers, text preprocessing, and 3 optimization methods: sequential elimination of attributes, parameter optimization based on token groups, and a method of extending the training data sets during practical testing and final tuning for the production release. We outperformed current state-of-the-art methods, and the results were validated on bigger data sets (3346 manually labelled samples), which is less prone to overfitting when compared to related works. The accuracy achieved in this work is 86.89% for recognition of 5 emotional classes. The experiments were performed in a real-world helpdesk environment processing the Czech language, but the proposed methodology is general and can be applied to many different languages.

  9. Optimization and preliminary characterization of venom isolated from 3 medically important jellyfish: the box (Chironex fleckeri), Irukandji (Carukia barnesi), and blubber (Catostylus mosaicus) jellyfish.

    Science.gov (United States)

    Wiltshire, C J; Sutherland, S K; Fenner, P J; Young, A R

    2000-01-01

    To optimize venom extraction and to undertake preliminary biochemical studies of venom from the box jellyfish (Chironex fleckeri), the Irukandji jellyfish (Carukia barnesi), and the blubber jellyfish (Catostylus mosaicus). Lyophilized crude venoms from box jellyfish tentacles and whole Irukandji jellyfish were prepared in water by homogenization, sonication, and rapid freeze thawing. A second technique, consisting of grinding samples with a glass mortar and pestle and using phosphate-buffered saline, was used to prepare crude venom from isolated nematocysts of the box jellyfish, the bells of Irukandji jellyfish, and the oral lobes of blubber jellyfish. Venoms were compared by use of sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and Western blot test. Toxicity of some venoms was determined by intravenous median lethal dose assay in mice. Different venom extraction techniques produced significantly different crude venoms for both box and Irukandji jellyfish. Irukandji and blubber venom SDS-PAGE protein profiles were established for the first time. Analysis of Western blot tests revealed that box jellyfish antivenin reacted specifically with the venom of each jellyfish. Toxicity was found in Irukandji jellyfish venom derived by use of the mortar-and-pestle method, but not in the lyophilized venom. Glass mortar-and-pestle grinding and use of an appropriate buffer was found to be a simple and suitable method for the preparation of venom from each jellyfish species studied. This study contributes to biochemical investigations of jellyfish venoms, particularly the venom of the Irukandji jellyfish, for which there are, to our knowledge, no published studies. It also highlights the importance of optimizing venom extraction as the first step toward understanding the complex biological effects of jellyfish venoms.

  10. Fast Bayesian experimental design: Laplace-based importance sampling for the expected information gain

    KAUST Repository

    Beck, Joakim; Dia, Ben Mansour; Espath, Luis; Long, Quan; Tempone, Raul

    2018-01-01

    derive the optimal values for the method parameters in which the average computational cost is minimized for a specified error tolerance. We use three numerical examples to demonstrate the computational efficiency of our method compared with the classical

  11. Analysis of Product Sampling for New Product Diffusion Incorporating Multiple-Unit Ownership

    Directory of Open Access Journals (Sweden)

    Zhineng Hu

    2014-01-01

    Full Text Available Multiple-unit ownership of nondurable products is an important component of sales in many product categories. Based on the Bass model, this paper develops a new model that treats multiple-unit adoption as a diffusion process under the influence of product sampling. The analysis determines the optimal dynamic sampling effort for a firm; the results demonstrate that experience sampling can accelerate the diffusion process and that the best time to send free samples is just before the product is launched. Multiple-unit purchasing behavior increases sales and thus profit for the firm, but it requires more samples to make the product better known. A local sensitivity analysis shows that increasing either the external or the internal coefficient reduces the optimal sampling level, whereas the internal influence on subsequent multiple-unit adoptions has little effect on the sampling. Using logistic regression together with linear regression, a global sensitivity analysis of the interaction of all factors shows that the external influence and the multiple-unit purchase rate are the two most important factors affecting the sampling level and the net present value of the new product, and a two-stage method is presented to determine the sampling level.
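
    For background, the classical Bass diffusion model on which the record builds can be simulated in a few lines; the coefficients below are illustrative defaults, and the paper's sampling and multiple-unit extensions are not reproduced.

```python
# Hedged sketch of the Bass diffusion model: adoption driven by an external
# (innovation) coefficient p and an internal (imitation) coefficient q.
p, q, M = 0.03, 0.38, 1_000_000   # assumed coefficients and market size
N = 0.0                            # cumulative adopters
adopters = []
for t in range(24):                # monthly time steps
    dN = (p + q * N / M) * (M - N)
    N += dN
    adopters.append(N)

print(round(adopters[-1]))         # cumulative adopters after 24 periods
```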

  12. Optimization of throughput in semipreparative chiral liquid chromatography using stacked injection.

    Science.gov (United States)

    Taheri, Mohammadreza; Fotovati, Mohsen; Hosseini, Seyed-Kiumars; Ghassempour, Alireza

    2017-10-01

    An interesting mode of chromatography for the preparation of pure enantiomers from pure samples is stacked injection, a pseudocontinuous procedure. Maximum throughput and minimal production costs can be achieved by using the total chiral column length in this mode of chromatography. When sample loading is maximized, touching bands of the two enantiomers are often obtained automatically. Conventional equations show a direct correlation between touching-band loadability and the selectivity factor of the two enantiomers. The important question for one who wants to obtain the highest throughput is "How to optimize different factors, including selectivity, resolution, run time, and sample loading, in order to save time without missing the touching-band resolution?" To answer this question, tramadol and propranolol were separated on cellulose 3,5-dimethyl phenyl carbamate as two pure racemic mixtures with low and high solubility in the mobile phase, respectively. The mobile phase consisted of n-hexane with an alcohol modifier and diethylamine as the additive. A response surface methodology based on a central composite design was used to optimize the separation factors against the main responses. According to the stacked-injection properties, two processes were investigated for maximizing throughput: one with a poorly soluble and another with a highly soluble racemic mixture. For each case, different optimization possibilities were inspected. It was revealed that resolution is a crucial response for separations of this kind. Peak area and run time are two critical parameters in the optimization of stacked injection for binary mixtures that have low solubility in the mobile phase. © 2017 Wiley Periodicals, Inc.

  13. k-Means: Random Sampling Procedure

    Indian Academy of Sciences (India)

    Optimal 1-Mean is approximated by the centroid of a random sample (Inaba et al.): if S is a random sample of size O(1/ε), then the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
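
    The sampling fact quoted above can be checked numerically; the sketch below is a rough illustration with an arbitrary ε and Gaussian data, not a proof of the Inaba et al. bound.

```python
# Hedged sketch: the centroid of a small random sample S approximates the
# centroid of the full point set P, i.e. the optimal 1-mean.
import numpy as np

rng = np.random.default_rng(42)
P = rng.normal(size=(100_000, 2))                 # full point set
eps = 0.05                                        # illustrative epsilon
S = P[rng.choice(len(P), size=int(10 / eps), replace=False)]  # O(1/eps) sample

full_cost = np.mean(np.sum((P - P.mean(axis=0)) ** 2, axis=1))
sample_cost = np.mean(np.sum((P - S.mean(axis=0)) ** 2, axis=1))
print(sample_cost / full_cost)                    # typically below 1 + eps
```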

  14. Introduction to Continuous Optimization

    DEFF Research Database (Denmark)

    Andreasson, Niclas; Evgrafov, Anton; Patriksson, Michael

    optimal solutions for continuous optimization models. The main part of the mathematical material therefore concerns the analysis and linear algebra that underlie the workings of convexity and duality, and necessary/sufficient local/global optimality conditions for continuous optimization problems. Natural...... algorithms are then developed from these optimality conditions, and their most important convergence characteristics are analyzed. The book answers many more questions of the form “Why?” and “Why not?” than “How?”. We use only elementary mathematics in the development of the book, yet are rigorous throughout...

  15. Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm

    Science.gov (United States)

    Zhang, Jian; Gan, Yang

    2018-04-01

    The paper presents a multi-objective optimal configuration model for an independent micro-grid with the aims of economy and environmental protection. The Pareto solution set can be obtained by solving the multi-objective optimization configuration model of the micro-grid with the improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for the multi-objective model is verified, which provides an important reference for multi-objective optimization of independent micro-grids.

  16. Optimization of Inventory

    OpenAIRE

    PROKOPOVÁ, Nikola

    2017-01-01

    The subject of this thesis is the optimization of inventory in a selected organization. Inventory optimization is a very important topic in every organization because it reduces storage costs. At the beginning, the inventory theory is presented. It covers the meaning and types of inventory, inventory control, and different methods and models of inventory control. Inventory optimization in an enterprise can be achieved by using models of inventory control. In the second part the company on which is...

  17. Systematic approach to optimize a pretreatment method for ultrasensitive liquid chromatography with tandem mass spectrometry analysis of multiple target compounds in biological samples.

    Science.gov (United States)

    Togashi, Kazutaka; Mutaguchi, Kuninori; Komuro, Setsuko; Kataoka, Makoto; Yamazaki, Hiroshi; Yamashita, Shinji

    2016-08-01

    In current approaches for new drug development, highly sensitive and robust analytical methods for the determination of test compounds in biological samples are essential. These analytical methods should be optimized for every target compound. However, for biological samples that contain multiple compounds as new drug candidates obtained by cassette dosing tests, it would be preferable to develop a single method that allows the determination of all compounds at once. This study aims to establish a systematic approach that enables a selection of the most appropriate pretreatment method for multiple target compounds without the use of their chemical information. We investigated the retention times of 27 known compounds under different mobile phase conditions and determined the required pretreatment of human plasma samples using several solid-phase and liquid-liquid extractions. From the relationship between retention time and recovery in a principal component analysis, appropriate pretreatments were categorized into several types. Based on the category, we have optimized a pretreatment method for the identification of three calcium channel blockers in human plasma. Plasma concentrations of these drugs in a cassette-dose clinical study at microdose level were successfully determined with a lower limit of quantitation of 0.2 pg/mL for diltiazem, 1 pg/mL for nicardipine, and 2 pg/mL for nifedipine. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. An Optimization-Based Reconfigurable Design for a 6-Bit 11-MHz Parallel Pipeline ADC with Double-Sampling S&H

    Directory of Open Access Journals (Sweden)

    Wilmar Carvajal

    2012-01-01

    Full Text Available This paper presents a 6 bit, 11 MS/s time-interleaved pipeline A/D converter design. The specification process, from block level to elementary circuits, is gradually covered to draw a design methodology. Both power consumption and mismatch between the parallel chain elements are intended to be reduced by using techniques such as double and bottom-plate sampling, fully differential circuits, RSD digital correction, and geometric programming (GP) optimization of the elementary analog circuit (OTA and comparator) designs. Prelayout simulations of the complete ADC are presented to characterize the designed converter, which consumes 12 mW while sampling a 500 kHz input signal. Moreover, the block inside the ADC with the most stringent requirements in power, speed, and precision was sent to fabrication in a CMOS 0.35 μm AMS technology, and some postlayout results are shown.

  19. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)

  20. Development and optimization of a novel sample preparation method cored on functionalized nanofibers mat-solid-phase extraction for the simultaneous efficient extraction of illegal anionic and cationic dyes in foods.

    Science.gov (United States)

    Qi, Feifei; Jian, Ningge; Qian, Liangliang; Cao, Weixin; Xu, Qian; Li, Jian

    2017-09-01

    A simple and efficient three-step sample preparation method was developed and optimized for the simultaneous analysis of illegal anionic and cationic dyes (acid orange 7, metanil yellow, auramine-O, and chrysoidine) in food samples. A novel solid-phase extraction (SPE) procedure based on nanofibers mat (NFsM) was proposed after solvent extraction and freeze-salting out purification. The preferred SPE sorbent was selected from five functionalized NFsMs by orthogonal experimental design, and the optimization of SPE parameters was achieved through response surface methodology (RSM) based on the Box-Behnken design (BBD). Under the optimal conditions, the target analytes could be completely adsorbed by polypyrrole-functionalized polyacrylonitrile NFsM (PPy/PAN NFsM), and the eluent was directly analyzed by high-performance liquid chromatography-diode array detection (HPLC-DAD). The limits of detection (LODs) were between 0.002 and 0.01 mg kg⁻¹, and satisfactory linearity with correlation coefficients (R > 0.99) for each dye in all samples was achieved. Compared with the Chinese standard method and the published methods, the proposed method was simplified greatly with a much lower requirement of sorbent (5.0 mg) and organic solvent (2.8 mL) and higher sample preparation speed (10 min/sample), while higher recovery (83.6-116.5%) and precision (RSDs < 7.1%) were obtained. With this developed method, we have successfully detected illegal ionic dyes in three common representative foods: yellow croaker, soybean products, and chili seasonings. Graphical abstract: Schematic representation of the three-step sample preparation process.

  1. Survey of statistical and sampling needs for environmental monitoring of commercial low-level radioactive waste disposal facilities

    International Nuclear Information System (INIS)

    Eberhardt, L.L.; Thomas, J.M.

    1986-07-01

    This project was designed to develop guidance for implementing 10 CFR Part 61 and to determine the overall needs for sampling and statistical work in characterizing, surveying, monitoring, and closing commercial low-level waste sites. When cost-effectiveness and statistical reliability are of prime importance, then double sampling, compositing, and stratification (with optimal allocation) are identified as key issues. If the principal concern is avoiding questionable statistical practice, then the applicability of kriging (for assessing spatial pattern), methods for routine monitoring, and use of standard textbook formulae in reporting monitoring results should be reevaluated. Other important issues identified include sampling for estimating model parameters and the use of data from left-censored (less than detectable limits) distributions

  2. An online method for lithium-ion battery remaining useful life estimation using importance sampling and neural networks

    International Nuclear Information System (INIS)

    Wu, Ji; Zhang, Chenbin; Chen, Zonghai

    2016-01-01

    Highlights: • An online RUL estimation method for lithium-ion battery is proposed. • RUL is described by the difference among battery terminal voltage curves. • A feed forward neural network is employed for RUL estimation. • Importance sampling is utilized to select feed forward neural network inputs. - Abstract: An accurate battery remaining useful life (RUL) estimation can facilitate the design of a reliable battery system as well as the safety and reliability of actual operation. A reasonable definition and an effective prediction algorithm are indispensable for the achievement of an accurate RUL estimation result. In this paper, the analysis of battery terminal voltage curves under different cycle numbers during charge process is utilized for RUL definition. Moreover, the relationship between RUL and charge curve is simulated by feed forward neural network (FFNN) for its simplicity and effectiveness. Considering the nonlinearity of lithium-ion charge curve, importance sampling (IS) is employed for FFNN input selection. Based on these results, an online approach using FFNN and IS is presented to estimate lithium-ion battery RUL in this paper. Experiments and numerical comparisons are conducted to validate the proposed method. The results show that the FFNN with IS is an accurate estimation method for actual operation.

  3. Optimal control for chemical engineers

    CERN Document Server

    Upreti, Simant Ranjan

    2013-01-01

    Optimal Control for Chemical Engineers gives a detailed treatment of optimal control theory that enables readers to formulate and solve optimal control problems. With a strong emphasis on problem solving, the book provides all the necessary mathematical analyses and derivations of important results, including multiplier theorems and Pontryagin's principle.The text begins by introducing various examples of optimal control, such as batch distillation and chemotherapy, and the basic concepts of optimal control, including functionals and differentials. It then analyzes the notion of optimality, de

  4. Do women's voices provide cues of the likelihood of ovulation? The importance of sampling regime.

    Directory of Open Access Journals (Sweden)

    Julia Fischer

    Full Text Available The human voice provides a rich source of information about individual attributes such as body size, developmental stability and emotional state. Moreover, there is evidence that female voice characteristics change across the menstrual cycle. A previous study reported that women speak with higher fundamental frequency (F0) in the high-fertility compared to the low-fertility phase. To gain further insights into the mechanisms underlying this variation in perceived attractiveness and the relationship between vocal quality and the timing of ovulation, we combined hormone measurements and acoustic analyses to characterize voice changes on a day-to-day basis throughout the menstrual cycle. Voice characteristics were measured from free speech as well as sustained vowels. In addition, we asked men to rate vocal attractiveness from selected samples. The free speech samples revealed marginally significant variation in F0, with an increase prior to and a distinct drop during ovulation. Overall variation throughout the cycle, however, precluded unequivocal identification of the period with the highest conception risk. The analysis of vowel samples revealed a significant increase in degree of unvoiceness and noise-to-harmonic ratio during menstruation, possibly related to an increase in tissue water content. Neither estrogen nor progestogen levels predicted the observed changes in acoustic characteristics. The perceptual experiments revealed a preference by males for voice samples recorded during the pre-ovulatory period compared to other periods in the cycle. While overall we confirm earlier findings in that women speak with a higher and more variable fundamental frequency just prior to ovulation, the present study highlights the importance of taking the full range of variation into account before drawing conclusions about the value of these cues for the detection of ovulation.

  5. On the Sampling

    OpenAIRE

    Güleda Doğan

    2017-01-01

    This editorial is on statistical sampling, which is one of the two most important reasons for editorial rejection from our journal, Turkish Librarianship. The stages of quantitative research, the stage at which sampling takes place, the importance of sampling for research, deciding on sample size, and sampling methods are summarised briefly.

  6. Respiratory motion sampling in 4DCT reconstruction for radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Chi Yuwei; Liang Jian; Qin Xu; Yan Di [Department of Radiation Oncology, Columbia University, New York, New York 10032 (United States); Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, Michigan 48073 (United States)

    2012-04-15

    Purpose: Phase-based and amplitude-based sorting techniques are commonly used in four-dimensional CT (4DCT) reconstruction. However, the effect of these sorting techniques on 4D dose calculation has not been explored. In this study, the authors investigated a candidate 4DCT sorting technique by comparing its 4D dose calculation accuracy with that of phase-based and amplitude-based sorting techniques. Method: An optimization model was formed using the organ motion probability density function (PDF) in the 4D dose convolution. The objective function for optimization was defined as the maximum difference between the expected 4D dose in the organ of interest and the 4D dose calculated using a 4DCT sorted by a candidate sampling method. Sorting samples, as optimization variables, were selected on the respiratory motion PDF assessed during the CT scanning. Breathing curves obtained from patients' 4DCT scanning, as well as 3D dose distributions from treatment planning, were used in the study. Given the objective function, a residual error analysis was performed, and k-means clustering was found to be an effective sampling scheme that improves the 4D dose calculation accuracy and is independent of the patient-specific dose distribution. Results: Patient data analysis demonstrated that the k-means sampling was superior to the conventional phase-based and amplitude-based sorting and comparable to the optimal sampling results. For phase-based sorting, the residual error in 4D dose calculations may not be further reduced to an acceptable accuracy after a certain number of phases, while for amplitude-based sorting, k-means sampling, and the optimal sampling, the residual error in 4D dose calculations decreased rapidly as the number of 4DCT phases increased to 6. Conclusion: An innovative phase sorting method (the k-means method) is presented in this study. The method depends only on the tumor motion PDF. It could provide a way to refine the phase sorting in 4DCT reconstruction and is effective
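
    The k-means phase-sampling idea can be illustrated on a synthetic breathing trace as below; the trace, sampling rate and number of clusters are assumptions, and the sketch omits the dose-convolution objective used in the study.

```python
# Hedged sketch: cluster a breathing-amplitude trace with k-means so that the
# cluster centres serve as representative respiratory states, instead of
# equally spaced phase bins.
import numpy as np
from sklearn.cluster import KMeans

t = np.linspace(0.0, 60.0, 6000)                   # 1 min trace at 100 Hz (assumed)
amplitude = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.randn(t.size)

km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(amplitude.reshape(-1, 1))
print(np.sort(km.cluster_centers_.ravel()))        # 6 representative amplitudes
```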

  7. Optimism on quality of life in Portuguese chronic patients: moderator/mediator?

    Science.gov (United States)

    Vilhena, Estela; Pais-Ribeiro, José; Silva, Isabel; Pedro, Luísa; Meneses, Rute F; Cardoso, Helena; Silva, António Martins da; Mendonça, Denisa

    2014-07-01

    Optimism is an important variable that has consistently been shown to affect adjustment to quality of life in chronic diseases. This study aims to clarify whether dispositional optimism exerts a moderating or a mediating influence on the association between personality traits and quality of life in Portuguese chronic patients. Multiple regression models were used to test the moderation and mediation effects of dispositional optimism on quality of life. A sample of 729 patients was recruited in Portugal's main hospitals and completed self-reported questionnaires assessing socio-demographic and clinical variables, personality, dispositional optimism, quality of life (QoL) and subjective well-being (SWB). The results of the regression models showed that dispositional optimism did not moderate the relationships between personality traits and quality of life. After controlling for gender, age, education level and severity of disease perception, the effects of personality traits on QoL and SWB were mediated by dispositional optimism (partially and completely), except for the links between neuroticism/openness to experience and physical health. Dispositional optimism is more likely to play a mediating, rather than a moderating, role in the personality traits-quality of life pathway in Portuguese chronic patients, suggesting that "the expectation that good things will happen" contributes to a better quality of life and subjective well-being.

  8. Stiffened Composite Fuselage Barrel Optimization

    Science.gov (United States)

    Movva, R. G.; Mittal, A.; Agrawal, K.; Upadhyay, C. S.

    2012-07-01

    In a typical commercial transport aircraft, stiffened skin panels and frames contribute around 40% of the fuselage weight. In the current study, a stiffened composite fuselage skin panel optimization engine is developed for optimizing the layups of composite panels and stringers using a Genetic Algorithm (GA). The skin and stringers of the fuselage section are optimized for the strength and stability requirements. The GA parameters used for the optimization are arrived at by performing case studies on selected problems. The optimization engine facilitates carrying out trade studies for selection of the optimum ply layup and material combination for the configuration being analyzed. The optimization process is applied to a sample model and the results are presented.

  9. Optimization of the solvent-based dissolution method to sample volatile organic compound vapors for compound-specific isotope analysis.

    Science.gov (United States)

    Bouchard, Daniel; Wanner, Philipp; Luo, Hong; McLoughlin, Patrick W; Henderson, James K; Pirkle, Robert J; Hunkeler, Daniel

    2017-10-20

    The methodology of the solvent-based dissolution method used to sample gas phase volatile organic compounds (VOC) for compound-specific isotope analysis (CSIA) was optimized to lower the method detection limits for TCE and benzene. The sampling methodology previously evaluated by [1] consists in pulling the air through a solvent to dissolve and accumulate the gaseous VOC. After the sampling process, the solvent can then be treated similarly to groundwater samples to perform routine CSIA by diluting an aliquot of the solvent into water to reach the required concentration of the targeted contaminant. Among the solvents tested, tetraethylene glycol dimethyl ether (TGDE) showed the best aptitude for the method. TGDE has a great affinity with TCE and benzene, hence efficiently dissolving the compounds during their transition through the solvent. The method detection limit for TCE (5±1 μg/m³) and benzene (1.7±0.5 μg/m³) is lower when using TGDE compared to methanol, which was previously used (385 μg/m³ for TCE and 130 μg/m³ for benzene) [2]. The method detection limit refers to the minimal gas phase concentration in ambient air required to load sufficient VOC mass into TGDE to perform δ¹³C analysis. Due to a different analytical procedure, the method detection limit associated with δ³⁷Cl analysis was found to be 156±6 μg/m³ for TCE. Furthermore, the experimental results validated the relationship between the gas phase TCE and the progressive accumulation of dissolved TCE in the solvent during the sampling process. Accordingly, based on the air-solvent partitioning coefficient, the sampling methodology (e.g. sampling rate, sampling duration, amount of solvent) and the final TCE concentration in the solvent, the concentration of TCE in the gas phase prevailing during the sampling event can be determined. Moreover, the possibility to analyse for TCE concentration in the solvent after sampling (or other targeted VOCs) allows the field deployment of the sampling

  10. Optimal Design of Gravity Pipeline Systems Using Genetic Algorithm and Mathematical Optimization

    Directory of Open Access Journals (Sweden)

    maryam rohani

    2015-03-01

    Full Text Available In recent years, the optimal design of pipeline systems has become increasingly important in the water industry. In this study, the two methods of genetic algorithm and mathematical optimization were employed for the optimal design of pipeline systems with the objective of avoiding the water hammer effect caused by valve closure. The problem of optimal design of a pipeline system is a constrained one which should be converted to an unconstrained optimization problem using an external penalty function approach in the mathematical programming method. The quality of the optimal solution greatly depends on the value of the penalty factor that is calculated by the iterative method during the optimization procedure such that the computational effort is simultaneously minimized. The results obtained were used to compare the GA and mathematical optimization methods employed to determine their efficiency and capabilities for the problem under consideration. It was found that the mathematical optimization method exhibited a slightly better performance compared to the GA method.
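
    The external penalty approach mentioned in this record can be sketched generically as follows; the toy objective and constraint are invented, and the schedule of penalty factors is only one plausible choice.

```python
# Hedged sketch of the external (exterior) penalty method: a constrained
# problem min f(x) s.t. g(x) <= 0 is solved as a sequence of unconstrained
# problems with an increasing penalty factor r.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2   # objective
g = lambda x: x[0] + x[1] - 2.0                        # constraint g(x) <= 0

x = np.zeros(2)
for r in (1.0, 10.0, 100.0, 1000.0):                   # growing penalty factor
    penalized = lambda x, r=r: f(x) + r * max(0.0, g(x)) ** 2
    x = minimize(penalized, x, method="Nelder-Mead").x

print(x)   # approaches the constrained optimum (1.5, 0.5)
```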

  11. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs

  12. Standardization and optimization of arthropod inventories-the case of Iberian spiders

    DEFF Research Database (Denmark)

    Bondoso Cardoso, Pedro Miguel

    2009-01-01

    and optimization of sampling protocols, especially for mega-diverse arthropod taxa. This study had two objectives: (1) propose guidelines and statistical methods to improve the standardization and optimization of arthropod inventories, and (2) to propose a standardized and optimized protocol for Iberian spiders......, by finding common results between the optimal options for the different sites. The steps listed were successfully followed in the determination of a sampling protocol for Iberian spiders. A protocol with three sub-protocols of varying degrees of effort (24, 96 and 320 h of sampling) is proposed. I also...

  13. Large portfolio risk management and optimal portfolio allocation with dynamic elliptical copulas

    Directory of Open Access Journals (Sweden)

    Jin Xisong

    2018-02-01

    Full Text Available Previous research has focused on the importance of modeling the multivariate distribution for optimal portfolio allocation and active risk management. However, existing dynamic models are not easily applied to high-dimensional problems due to the curse of dimensionality. In this paper, we extend the framework of the Dynamic Conditional Correlation/Equicorrelation and an extreme value approach into a series of Dynamic Conditional Elliptical Copulas. We investigate risk measures such as Value at Risk (VaR) and Expected Shortfall (ES) for passive portfolios and dynamic optimal portfolios using Mean-Variance and ES criteria for a sample of US stocks over a period of 10 years. Our results suggest that (1) modeling the marginal distribution is important for dynamic high-dimensional multivariate models; (2) neglecting the dynamic dependence in the copula causes over-aggressive risk management; (3) the DCC/DECO Gaussian copula and t-copula work very well for both VaR and ES; (4) grouped t-copulas and t-copulas with dynamic degrees of freedom further match the fat tail; (5) correctly modeling the dependence structure makes an improvement in portfolio optimization with respect to tail risk; (6) models driven by multivariate t innovations with exogenously given degrees of freedom provide a flexible and applicable alternative for optimal portfolio risk management.
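
    For readers unfamiliar with the two risk measures, the sketch below computes historical-style VaR and ES from simulated Student-t returns; it does not implement the dynamic copula models of the record.

```python
# Hedged sketch: Value at Risk and Expected Shortfall at the 99% level,
# estimated from simulated fat-tailed returns (placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
returns = stats.t(df=4).rvs(size=100_000, random_state=rng) * 0.01

alpha = 0.99
var = -np.quantile(returns, 1.0 - alpha)     # 99% Value at Risk (as a positive loss)
es = -returns[returns <= -var].mean()        # Expected Shortfall beyond the VaR
print(var, es)
```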

  14. MCMC-ODPR: Primer design optimization using Markov Chain Monte Carlo sampling

    Directory of Open Access Journals (Sweden)

    Kitchen James L

    2012-11-01

    Full Text Available Background: Next generation sequencing technologies often require numerous primer designs with good target coverage, which can be financially costly. We aimed to develop a system that would implement primer reuse to design degenerate primers around SNPs, thus finding the fewest necessary primers and the lowest cost whilst maintaining an acceptable coverage, providing a cost-effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. Results: After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST and achieved a lower primer cost per amplicon base covered of 0.21, 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. Conclusions: MCMC-ODPR is a useful tool for designing primers at various melting temperatures at good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.

  15. MCMC-ODPR: primer design optimization using Markov Chain Monte Carlo sampling.

    Science.gov (United States)

    Kitchen, James L; Moore, Jonathan D; Palmer, Sarah A; Allaby, Robin G

    2012-11-05

    Next generation sequencing technologies often require numerous primer designs that require good target coverage that can be financially costly. We aimed to develop a system that would implement primer reuse to design degenerate primers that could be designed around SNPs, thus find the fewest necessary primers and the lowest cost whilst maintaining an acceptable coverage and provide a cost effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with single sequence primer design programs Primer3 and Primer-BLAST and achieved a lower primer cost per amplicon base covered of 0.21 and 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. MCMC-ODPR is a useful tool for designing primers at various melting temperatures at good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.
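
    A generic Metropolis-Hastings search of the kind underlying the MCMC-ODPR records above can be sketched as follows; the cost function is a toy stand-in for primer-set cost, and the temperature and move set are assumptions rather than the published algorithm.

```python
# Hedged sketch: Metropolis-Hastings over a discrete design, favouring designs
# that reuse few distinct elements (a toy analogue of reusing primers).
import math
import random

random.seed(0)

def cost(design):
    return len(set(design))          # toy cost: number of distinct "primers" used

design = [random.randrange(20) for _ in range(30)]
best = design
temperature = 0.1                    # low temperature -> near-greedy acceptance
for _ in range(20_000):
    proposal = design[:]
    proposal[random.randrange(len(proposal))] = random.randrange(20)
    delta = cost(proposal) - cost(design)
    if delta <= 0 or random.random() < math.exp(-delta / temperature):
        design = proposal
        if cost(design) < cost(best):
            best = design

print(cost(best))                    # typically far below the random start
```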

  16. An evaluation of soil sampling for 137Cs using various field-sampling volumes.

    Science.gov (United States)

    Nyhan, J W; White, G C; Schofield, T G; Trujillo, G

    1983-05-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500- and 12,500-cm³ field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils were observed from 0.38 to 0.57. Spatial variance components of 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2-4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.

  17. Accurate EPR radiosensitivity calibration using small sample masses

    Science.gov (United States)

    Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.

    2000-03-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.

  18. Accurate EPR radiosensitivity calibration using small sample masses

    International Nuclear Information System (INIS)

    Hayes, R.B.; Haskell, E.H.; Barrus, J.K.; Kenner, G.H.; Romanyukha, A.A.

    2000-01-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed

  19. Risk-Based Sampling: I Don't Want to Weight in Vain.

    Science.gov (United States)

    Powell, Mark R

    2015-12-01

    Recently, there has been considerable interest in developing risk-based sampling for food safety and animal and plant health for efficient allocation of inspection and surveillance resources. The problem of risk-based sampling allocation presents a challenge similar to financial portfolio analysis. Markowitz (1952) laid the foundation for modern portfolio theory based on mean-variance optimization. However, a persistent challenge in implementing portfolio optimization is the problem of estimation error, leading to false "optimal" portfolios and unstable asset weights. In some cases, portfolio diversification based on simple heuristics (e.g., equal allocation) has better out-of-sample performance than complex portfolio optimization methods due to estimation uncertainty. Even for portfolios with a modest number of assets, the estimation window required for true optimization may imply an implausibly long stationary period. The implications for risk-based sampling are illustrated by a simple simulation model of lot inspection for a small, heterogeneous group of producers. © 2015 Society for Risk Analysis.
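
    The portfolio analogy can be made concrete in a few lines of code. The sketch below builds an unconstrained mean-variance allocation from a short, noisy estimation window of synthetic returns and compares it with the equal-allocation heuristic; all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic risk/return data for a small group of assets (or producers):
# a short estimation window, so sample estimates are noisy.
n_assets, n_obs = 5, 24
true_mean = np.array([0.08, 0.06, 0.05, 0.055, 0.07])
true_cov = 0.02 * (0.3 * np.ones((n_assets, n_assets)) + 0.7 * np.eye(n_assets))
returns = rng.multivariate_normal(true_mean, true_cov, size=n_obs)

mu_hat = returns.mean(axis=0)
sigma_hat = np.cov(returns, rowvar=False)

# Classic mean-variance weights (proportional to Sigma^-1 mu), rescaled to sum
# to one; no short-sale constraint in this sketch.
w_mv = np.linalg.solve(sigma_hat, mu_hat)
w_mv = w_mv / w_mv.sum()

w_eq = np.full(n_assets, 1.0 / n_assets)      # the simple 1/N heuristic

for name, w in [("mean-variance", w_mv), ("equal allocation", w_eq)]:
    print(f"{name:>16}: weights {np.round(w, 2)},"
          f" true mean {w @ true_mean:.4f}, true variance {w @ true_cov @ w:.5f}")
```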

  20. Some optimizations of the animal code

    International Nuclear Information System (INIS)

    Fletcher, W.T.

    1975-01-01

    Optimizing techniques were performed on a version of the ANIMAL code (MALAD1B) at the source-code (FORTRAN) level. Sample optimizing techniques and operations used in MALADOP--the optimized version of the code--are presented, along with a critique of some standard CDC 7600 optimizing techniques. The statistical analysis of total CPU time required for MALADOP and MALAD1B shows a run-time saving of 174 msec (almost 3 percent) in the code MALADOP during one time step

  1. Simulative Investigation on Spectral Efficiency of Unipolar Codes based OCDMA System using Importance Sampling Technique

    Science.gov (United States)

    Farhat, A.; Menif, M.; Rezig, H.

    2013-09-01

    This paper analyses the spectral efficiency of an Optical Code Division Multiple Access (OCDMA) system using the Importance Sampling (IS) technique. We consider three configurations of the OCDMA system, namely Direct Sequence (DS), Spectral Amplitude Coding (SAC) and Fast Frequency Hopping (FFH), which exploits the Fiber Bragg Grating (FBG) based encoder/decoder. We evaluate the spectral efficiency of the considered system by taking into consideration the effect of different families of unipolar codes for both coherent and incoherent sources. The results show that the spectral efficiency of the OCDMA system with a coherent source is higher than in the incoherent case. We also demonstrate that DS-OCDMA outperforms the other two configurations in terms of spectral efficiency under all conditions.

  2. Different minimally important clinical difference (MCID) scores lead to different clinical prediction rules for the Oswestry disability index for the same sample of patients.

    Science.gov (United States)

    Schwind, Julie; Learman, Kenneth; O'Halloran, Bryan; Showalter, Christopher; Cook, Chad

    2013-05-01

    Minimal clinically important difference (MCID) scores for outcome measures are frequently used evidence-based guides to gage meaningful changes. There are numerous outcome instruments used for analyzing pain, disability, and dysfunction of the low back; perhaps the most common of these is the Oswestry disability index (ODI). A single agreed-upon MCID score for the ODI has yet to be established. What is also unknown is whether selected baseline variables will be universal predictors regardless of the MCID used for a particular outcome measure. To explore the relationship between predictive models and the MCID cutpoint on the ODI. Data were collected from 16 outpatient physical therapy clinics in 10 states. Secondary database analysis using backward stepwise deletion logistic regression of data from a randomized controlled trial (RCT) to create prognostic clinical prediction rules (CPR). One hundred and forty-nine patients with low back pain (LBP) were enrolled in the RCT. All were treated with manual therapy, with a majority also receiving spine-strengthening exercises. The resultant predictive models were dependent upon the MCID used and baseline sample characteristics. All CPR were statistically significant (P < .001). All six MCID cutpoints used resulted in completely different significant predictor variables, with no predictor significant across all models. The primary limitations include sub-optimal sample size and study design. There is extreme variability among predictive models created using different MCIDs on the ODI within the same patient population. Our findings highlight the instability of predictive modeling, as these models are significantly affected by population baseline characteristics along with the MCID used. Clinicians must be aware of the fragility of CPR prior to applying each in clinical practice.

  3. Optimizing sample pretreatment for compound-specific stable carbon isotopic analysis of amino sugars in marine sediment

    Science.gov (United States)

    Zhu, R.; Lin, Y.-S.; Lipp, J. S.; Meador, T. B.; Hinrichs, K.-U.

    2014-09-01

    Amino sugars are quantitatively significant constituents of soil and marine sediment, but their sources and turnover in environmental samples remain poorly understood. The stable carbon isotopic composition of amino sugars can provide information on the lifestyles of their source organisms and can be monitored during incubations with labeled substrates to estimate the turnover rates of microbial populations. However, until now, such investigation has been carried out only with soil samples, partly because of the much lower abundance of amino sugars in marine environments. We therefore optimized a procedure for compound-specific isotopic analysis of amino sugars in marine sediment, employing gas chromatography-isotope ratio mass spectrometry. The whole procedure consisted of hydrolysis, neutralization, enrichment, and derivatization of amino sugars. Except for the derivatization step, the protocol introduced negligible isotopic fractionation, and the minimum requirement of amino sugar for isotopic analysis was 20 ng, i.e., equivalent to ~8 ng of amino sugar carbon. Compound-specific stable carbon isotopic analysis of amino sugars obtained from marine sediment extracts indicated that glucosamine and galactosamine were mainly derived from organic detritus, whereas muramic acid showed isotopic imprints from indigenous bacterial activities. The δ13C analysis of amino sugars provides a valuable addition to the biomarker-based characterization of microbial metabolism in the deep marine biosphere, which so far has been lipid oriented and biased towards the detection of archaeal signals.

  4. Optimization of HPV DNA detection in urine by improving collection, storage, and extraction.

    Science.gov (United States)

    Vorsters, A; Van den Bergh, J; Micalessi, I; Biesmans, S; Bogers, J; Hens, A; De Coster, I; Ieven, M; Van Damme, P

    2014-11-01

    The benefits of using urine for the detection of human papillomavirus (HPV) DNA have been evaluated in disease surveillance, epidemiological studies, and screening for cervical cancers in specific subgroups. HPV DNA testing in urine is being considered for important purposes, notably the monitoring of HPV vaccination in adolescent girls and young women who do not wish to have a vaginal examination. The need to optimize and standardize sampling, storage, and processing has been reported. In this paper, we examined the impact of a DNA-conservation buffer, the extraction method, and urine sampling on the detection of HPV DNA and human DNA in urine provided by 44 women with a cytologically normal but HPV DNA-positive cervical sample. Ten women provided first-void and midstream urine samples. DNA analysis was performed using real-time PCR to allow quantification of HPV and human DNA. The results showed that an optimized method for HPV DNA detection in urine should (a) prevent DNA degradation during extraction and storage, (b) recover cell-free HPV DNA in addition to cell-associated DNA, (c) process a sufficient volume of urine, and (d) use a first-void sample. In addition, we found that detectable human DNA in urine may not be a good internal control for sample validity. HPV prevalence data that are based on urine samples collected, stored, and/or processed under suboptimal conditions may underestimate infection rates.

  5. Optimization control of LNG regasification plant using Model Predictive Control

    Science.gov (United States)

    Wahid, A.; Adicandra, F. F.

    2018-03-01

    Optimization of a liquefied natural gas (LNG) regasification plant is important to minimize costs, especially operational costs. It is therefore important to choose an optimum LNG regasification plant design and to maintain optimum operating conditions through the implementation of model predictive control (MPC). Optimal tuning parameters for MPC, namely P (prediction horizon), M (control horizon) and T (sampling time), are found by a fine-tuning method. The optimality criterion for the design is the minimum amount of energy used, and for control it is the integral of squared error (ISE). As a result, the optimum design is scheme 2, developed by Devold, with an energy saving of 40%. To maintain the optimum conditions, MPC is required with P, M and T as follows: tank storage pressure: 90, 2, 1; product pressure: 95, 2, 1; vaporizer temperature: 65, 2, 2; and heater temperature: 35, 6, 5, with ISE values for set-point tracking of 0.99, 1792.78, 34.89 and 7.54, respectively, i.e. improvements in control performance of 4.6%, 63.5%, 3.1% and 58.2% compared to PI controller performance. The energy saving that the MPC controllers can achieve when there is a disturbance of a 1°C rise in sea water temperature is 0.02 MW.
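
    The tuning criterion used in this record, the integral of squared error (ISE), is easy to reproduce. The sketch below ranks candidate controller tunings by ISE on an invented first-order plant with a proportional controller; the MPC horizons P and M and the sampling time T in the paper play the same role as the gain swept here.

```python
import numpy as np

def simulate_first_order(kp, setpoint=1.0, tau=10.0, dt=1.0, n_steps=100):
    """Discrete simulation of a first-order lag under proportional control."""
    y = 0.0
    errors = []
    for _ in range(n_steps):
        e = setpoint - y
        u = kp * e
        y += dt / tau * (u - y)          # first-order plant driven by the control input
        errors.append(e)
    return np.array(errors)

def ise(errors, dt=1.0):
    """Integral of squared error, the tuning criterion mentioned above."""
    return float(np.sum(errors ** 2) * dt)

for kp in (0.5, 2.0, 5.0):
    print(f"Kp = {kp}: ISE = {ise(simulate_first_order(kp)):.2f}")
```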

  6. Optimization conditions of samples saponification for tocopherol analysis.

    Science.gov (United States)

    Souza, Aloisio Henrique Pereira; Gohara, Aline Kirie; Rodrigues, Ângela Claudia; Ströher, Gisely Luzia; Silva, Danielle Cristina; Visentainer, Jesuí Vergílio; Souza, Nilson Evelázio; Matsushita, Makoto

    2014-09-01

    A full 2² factorial design (two factors at two levels) with duplicates was performed to investigate the influence of the factors agitation time (2 and 4 h) and percentage of KOH (60% and 80% w/v) in the saponification of samples for the determination of α-, β- and γ+δ-tocopherols. The study used samples of peanuts (cultivar armadillo), produced and marketed in Maringá, PR. The factors % KOH and agitation time were significant, and an increase in their values contributed negatively to the responses. The interaction effect was not significant for the response δ-tocopherol, and the contribution of this effect to the other responses was positive, but less than 10%. The ANOVA and response surface analysis showed that the most efficient saponification procedure was obtained using a 60% (w/v) solution of KOH and an agitation time of 2 h. Copyright © 2014 Elsevier Ltd. All rights reserved.
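
    Estimating main and interaction effects from a duplicated 2² design is a short calculation. The sketch below uses coded factor levels and invented tocopherol responses, not the study's data.

```python
import numpy as np

# Coded factor levels for a 2^2 design (KOH %: -1 = 60%, +1 = 80%;
# agitation time: -1 = 2 h, +1 = 4 h), each run in duplicate.
design = np.array([[-1, -1], [+1, -1], [-1, +1], [+1, +1]])
# Hypothetical alpha-tocopherol responses (two replicates per run).
response = np.array([[10.2, 10.5], [8.1, 8.4], [8.9, 9.1], [7.0, 7.3]])

y_mean = response.mean(axis=1)
effect_koh = y_mean[design[:, 0] == +1].mean() - y_mean[design[:, 0] == -1].mean()
effect_time = y_mean[design[:, 1] == +1].mean() - y_mean[design[:, 1] == -1].mean()
interaction = (y_mean[design.prod(axis=1) == +1].mean()
               - y_mean[design.prod(axis=1) == -1].mean())

print("KOH main effect:", round(effect_koh, 2))
print("agitation-time main effect:", round(effect_time, 2))
print("interaction effect:", round(interaction, 2))
```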

  7. Quality assurance for high dose rate brachytherapy treatment planning optimization: using a simple optimization to verify a complex optimization

    International Nuclear Information System (INIS)

    Deufel, Christopher L; Furutani, Keith M

    2014-01-01

    As dose optimization for high dose rate brachytherapy becomes more complex, it becomes increasingly important to have a means of verifying that optimization results are reasonable. A method is presented for using a simple optimization as quality assurance for the more complex optimization algorithms typically found in commercial brachytherapy treatment planning systems. Quality assurance tests may be performed during commissioning, at regular intervals, and/or on a patient specific basis. A simple optimization method is provided that optimizes conformal target coverage using an exact, variance-based, algebraic approach. Metrics such as dose volume histogram, conformality index, and total reference air kerma agree closely between simple and complex optimizations for breast, cervix, prostate, and planar applicators. The simple optimization is shown to be a sensitive measure for identifying failures in a commercial treatment planning system that are possibly due to operator error or weaknesses in planning system optimization algorithms. Results from the simple optimization are surprisingly similar to the results from a more complex, commercial optimization for several clinical applications. This suggests that there are only modest gains to be made from making brachytherapy optimization more complex. The improvements expected from sophisticated linear optimizations, such as PARETO methods, will largely be in making systems more user friendly and efficient, rather than in finding dramatically better source strength distributions. (paper)

  8. Determination of As, Cd, and Pb in Tap Water and Bottled Water Samples by Using Optimized GFAAS System with Pd-Mg and Ni as Matrix Modifiers

    Directory of Open Access Journals (Sweden)

    Sezgin Bakırdere

    2013-01-01

    Arsenic, lead, and cadmium were determined at trace levels in tap and bottled water samples consumed in the western part of Turkey. Graphite furnace atomic absorption spectrometry (GFAAS) was used in all detections. All of the system parameters for each element were optimized to increase sensitivity. A Pd-Mg mixture was selected as the best matrix modifier for As, while the highest signals for Pb and Cd were obtained with Ni as the matrix modifier. Detection limits for As, Cd, and Pb were found to be 2.0, 0.036, and 0.25 ng/mL, respectively. 78 tap water and 17 different brands of bottled water samples were analyzed for their As, Cd, and Pb contents under the optimized conditions. In all water samples, the concentration of cadmium was found to be below the detection limit. Lead concentrations in the samples analyzed varied between N.D. and 12.66 ± 0.68 ng/mL. The highest concentration of arsenic was determined as 11.54 ± 2.79 ng/mL. The accuracy of the method was verified using a certified reference material, Trace Elements in Water 1643e. The results found for As, Cd, and Pb in the reference material were in satisfactory agreement with the certified values.

  9. Workshop on Computational Optimization

    CERN Document Server

    2015-01-01

    Our everyday life is unthinkable without optimization. We try to minimize our effort and to maximize the achieved profit. Many real-world and industrial problems arising in engineering, economics, medicine and other domains can be formulated as optimization tasks. This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2013. It presents recent advances in computational optimization. The volume includes important real-life problems like parameter settings for controlling processes in a bioreactor, resource-constrained project scheduling, problems arising in transport services, error-correcting codes, optimal system performance and energy consumption and so on. It shows how to develop algorithms for them based on new metaheuristic methods like evolutionary computation, ant colony optimization, constraint programming and others.

  10. Use of Instrumental Neutron Activation Analysis for Determination of Some Trace Elements of Biological Importance in Different Jute(Corchorus Capsularis) Seed Samples

    International Nuclear Information System (INIS)

    Metwally, E.; Abd-El-Khalik, H.; El-Sweify, F.H.; El-Sweify, A.H.H.

    2004-01-01

    The instrumental neutron activation analysis technique was used to determine some trace elements in seeds of jute (Corchorus capsularis). The seed samples were obtained from the Agricultural Research Center (ARC), Giza, Egypt. The analyzed seed samples were produced from the cultivation of three different strains, namely St. DC 1105, St. JRC 7447 and St. PADMA. These strains were imported from Bangladesh. The jute plants were cultivated in sandy soil at the Ismailaya research station farm in May over two seasons, 1999 and 2000. The plants were irrigated with water from the Ismailaya canal. The study was carried out to compare the influence of applying different kinds of fertilizers at different rates, i.e. mineral fertilizer and biofertilizer, on the uptake of some biologically important trace elements and to determine their concentrations in the analyzed jute seed samples. These elements were Co, Cr, Fe and Zn; eight other elements were also analyzed quantitatively.

  11. Active load sharing technique for on-line efficiency optimization in DC microgrids

    DEFF Research Database (Denmark)

    Sanseverino, E. Riva; Zizzo, G.; Boscaino, V.

    2017-01-01

    Recently, DC power distribution is gaining more and more importance over its AC counterpart, achieving increased efficiency, greater flexibility, reduced volumes and capital cost. In this paper, a 24-120-325 V two-level DC distribution system for home appliances, each including three parallel DC-DC converters, is modeled. An active load sharing technique is proposed for the on-line optimization of the global efficiency of the DC distribution network. The algorithm aims at the instantaneous efficiency optimization of the whole DC network, based on the on-line load current sampling. A look-up table is created to store the real efficiencies of the converters, taking into account component tolerances. A MATLAB/Simulink model of the DC distribution network has been set up and a Genetic Algorithm has been employed for the global efficiency optimization. Simulation results are shown to validate the proposed...
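
    The core idea, choosing how to split the load current among parallel converters so that total losses are minimized using measured efficiency curves, can be illustrated with a brute-force search. The paper itself uses a genetic algorithm and measured look-up tables, whereas the efficiency curves and load below are invented placeholders.

```python
import numpy as np
from itertools import product

# Hypothetical efficiency look-up tables for three parallel DC-DC converters:
# efficiency as a function of per-converter output current (A).
currents = np.linspace(0.5, 10.0, 20)

def eff_curve(peak, i_peak):
    return peak - 0.004 * (currents - i_peak) ** 2      # simple concave efficiency curve

lut = [eff_curve(0.95, 4.0), eff_curve(0.93, 6.0), eff_curve(0.94, 5.0)]

def total_loss(shares, load):
    """Total converter loss for a given split of the load current."""
    loss = 0.0
    for share, table in zip(shares, lut):
        i = share * load
        eta = np.interp(i, currents, table)
        loss += i * (1.0 / eta - 1.0)                    # proportional to per-converter loss
    return loss

load = 12.0  # total load current (A), as sampled on-line
steps = np.linspace(0.0, 1.0, 21)
splits = [s for s in product(steps, repeat=3) if abs(sum(s) - 1.0) < 1e-6]
best = min(splits, key=lambda s: total_loss(s, load))
print("best current split:", tuple(round(float(x), 2) for x in best))
```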

  12. Radar Doppler Processing with Nonuniform Sampling.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    Conventional signal processing to estimate radar Doppler frequency often assumes uniform pulse/sample spacing. This is for the convenience of the processing. More recent performance enhancements in processor capability allow optimal processing of nonuniform pulse/sample spacing, thereby overcoming some of the baggage that attends uniform sampling, such as Doppler ambiguity and SNR losses due to sidelobe control measures.
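
    A common way to estimate a tone frequency from nonuniformly spaced samples is the Lomb-Scargle periodogram; the sketch below is a generic illustration of that idea, not the report's processing chain, and the signal parameters are arbitrary.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)

# Nonuniform (irregular) sample times and a noisy real tone at 37 Hz.
f_true = 37.0
t = np.sort(rng.uniform(0.0, 1.0, 200))                 # irregular sample times (s)
x = np.cos(2 * np.pi * f_true * t) + 0.3 * rng.standard_normal(t.size)

freqs = np.linspace(1.0, 100.0, 2000)                   # candidate frequencies (Hz)
pgram = lombscargle(t, x - x.mean(), 2 * np.pi * freqs) # lombscargle expects angular freqs

print("estimated frequency:", freqs[np.argmax(pgram)], "Hz")
```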

  13. Determination of copper in powdered chocolate samples by slurry-sampling flame atomic-absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Walter N.L. dos; Silva, Erik G.P. da; Fernandes, Marcelo S.; Araujo, Rennan G.O.; Costa, Antônio C.S.; Ferreira, Sergio L.C. [Nucleo de Excelencia em Quimica Analitica da Bahia, Universidade Federal da Bahia, Instituto de Quimica, Salvador, Bahia (Brazil); Vale, M.G.R. [Instituto de Quimica, Universidade Federal do Rio Grande do Sul, Porto Alegre, Rio Grande do Sul (Brazil)

    2005-06-01

    Chocolate is a complex sample with a high content of organic compounds and its analysis generally involves digestion procedures that might include the risk of losses and/or contamination. The determination of copper in chocolate is important because copper compounds are extensively used as fungicides in the farming of cocoa. In this paper, a slurry-sampling flame atomic-absorption spectrometric method is proposed for determination of copper in powdered chocolate samples. Optimization was carried out using univariate methodology involving the variables nature and concentration of the acid solution for slurry preparation, sonication time, and sample mass. The recommended conditions include a sample mass of 0.2 g, 2.0 mol L⁻¹ hydrochloric acid solution, and a sonication time of 15 min. The calibration curve was prepared using aqueous copper standards in 2.0 mol L⁻¹ hydrochloric acid. This method allowed determination of copper in chocolate with a detection limit of 0.4 μg g⁻¹ and precision, expressed as relative standard deviation (RSD), of 2.5% (n=10) for a copper content of approximately 30 μg g⁻¹, using a chocolate mass of 0.2 g. The accuracy was confirmed by analyzing the certified reference materials NIST SRM 1568a rice flour and NIES CRM 10-b rice flour. The proposed method was used for determination of copper in three powdered chocolate samples, the copper content of which varied between 26.6 and 31.5 μg g⁻¹. The results showed no significant differences with those obtained after complete digestion, using a t-test for comparison. (orig.)

  14. Microporous Carbon Spheres Solid Phase Membrane Tip Extraction for the Analysis of Nitrosamines in Water Samples

    International Nuclear Information System (INIS)

    Mohammed Salisu Musa; Wan Aini Wan Ibrahim

    2015-01-01

    A simple solid phase membrane tip extraction (SPMTE) method utilizing microporous carbon spheres (MCS) was developed for the analysis of nitrosamines in aqueous samples. The method, termed MCS-SPMTE, was optimized for various important extraction parameters, namely the conditioning organic solvent, extraction time, effects of salt addition and pH change, desorption time, desorption solvent and sample volume. Under the optimized conditions, the method indicated good linearity in the range of 10-100 μg/L with coefficients of determination r² ≥ 0.9984. The method also demonstrated good reproducibility, with RSD values ranging from 2.2 to 8.9% (n = 3). The limit of detection (LOD) and limit of quantification (LOQ) for the method ranged from 3.2 to 4.8 μg/L and 10.9 to 15.9 μg/L, respectively. Recoveries for both tap water and lake water samples spiked at 10 μg/L were in the range of 83.2-107.5%. (author)

  15. Improved orientation sampling for indexing diffraction patterns of polycrystalline materials

    DEFF Research Database (Denmark)

    Larsen, Peter Mahler; Schmidt, Søren

    2017-01-01

    Orientation mapping is a widely used technique for revealing the microstructure of a polycrystalline sample. The crystalline orientation at each point in the sample is determined by analysis of the diffraction pattern, a process known as pattern indexing. A recent development in pattern indexing... in the presence of noise, it has very high computational requirements. In this article, the computational burden is reduced by developing a method for nearly optimal sampling of orientations. By using the quaternion representation of orientations, it is shown that the optimal sampling problem is equivalent to that of optimally distributing points on a four-dimensional sphere. In doing so, the number of orientation samples needed to achieve a desired indexing accuracy is significantly reduced. Orientation sets at a range of sizes are generated in this way for all Laue groups and are made available online for easy use.

  16. Exploring the importance of different items as reasons for leaving emergency medical services between fully compensated, partially compensated, and non-compensated/volunteer samples.

    Science.gov (United States)

    Blau, Gary; Chapman, Susan; Gibson, Gregory; Bentley, Melissa A

    2011-01-01

    The purpose of our study was to investigate the importance of different items as reasons for leaving the Emergency Medical Service (EMS) profession. An exit survey was returned by three distinct EMS samples: 127 fully compensated, 45 partially compensated and 72 non-compensated/volunteer respondents, who rated the importance of 17 different items for affecting their decision to leave EMS. Unfortunately, there was a high percentage of "not applicable" responses for 10 items. We focused on the seven items that had a majority of useable responses across the three samples. Results showed that the desire for better pay and benefits was a more important reason for leaving EMS for the partially compensated versus fully compensated respondents. Perceived lack of advancement opportunity was a more important reason for leaving for the partially compensated and volunteer groups versus the fully compensated group. Study limitations are discussed and suggestions for future research offered.

  17. An optimal guarding scheme for thermal conductivity measurement using a guarded cut-bar technique, part 1 experimental study

    International Nuclear Information System (INIS)

    Xing, Changhu

    2014-01-01

    In the guarded cut-bar technique, a guard surrounding the measured sample and reference (meter) bars is temperature controlled to carefully regulate heat losses from the sample and reference bars. Guarding is typically carried out by matching the temperature profiles between the guard and the test stack of sample and meter bars. Problems arise in matching the profiles, especially when the thermal conductivities of the meter bars and of the sample differ, as is usually the case. In a previous numerical study, the applied guarding condition (guard temperature profile) was found to be an important factor in measurement accuracy. In contrast to the linear-matched or isothermal schemes recommended in the literature, the optimal guarding condition depends on the system geometry and the thermal conductivity ratio of sample to meter bar. To validate the numerical results, an experimental study was performed to investigate the resulting error under different guarding conditions using stainless steel 304 as both the sample and the meter bars. The optimal guarding condition was further verified on a certified reference material, Pyroceram 9606, and on 99.95% pure iron, whose thermal conductivities are much smaller and much larger, respectively, than that of the stainless steel meter bars. Additionally, measurements were performed using three different inert gases to show the effect of the insulation's effective thermal conductivity on measurement error, revealing that the low-conductivity argon gas gives the lowest error sensitivity when deviating from the optimal condition. The results of this study provide a general guideline for this specific measurement method and for methods requiring optimal guarding or insulation.

  18. Optimal sample size of signs for classification of radiational and oily soils

    International Nuclear Information System (INIS)

    Babayev, M.P.; Iskenderov, S.M.; Aghayev, R.A.

    2012-01-01

    Full text: This article concerns the classification of radiational and oily soils, which should in essence be a compact intelligence system containing maximum information on the classes of soil objects in the accepted feature space. Accumulated experience shows that the set of the most informative soil indicators comprises at most 7-8 indexes. In our opinion, a more correct approach to selecting the most informative (most important) indexes is the method of trial and error, i.e. the experimental method, which allows the broad experience and intuition of the researcher, or group of researchers, engaged for many years in soil science to be used. At this operational stage of the formal apparatus of soil classification, more specifically in the section assessing the informativeness of soil indicators, the approach is, in our opinion, purely mathematized and in some cases does not even reflect the true picture. In this case, 21 pairwise correlation coefficients between the selected soil indicators will be calculated as a measure of linear association. The size of the correlation set will be limited to 6, as increasing it can sharply increase the computational volume. It is pertinent to note that this is the first attempt to create correlation matrices of the most important indicators of radiational and oily soils.

  19. Information sampling behavior with explicit sampling costs

    Science.gov (United States)

    Juni, Mordechai Z.; Gureckis, Todd M.; Maloney, Laurence T.

    2015-01-01

    The decision to gather information should take into account both the value of information and its accrual costs in time, energy and money. Here we explore how people balance the monetary costs and benefits of gathering additional information in a perceptual-motor estimation task. Participants were rewarded for touching a hidden circular target on a touch-screen display. The target’s center coincided with the mean of a circular Gaussian distribution from which participants could sample repeatedly. Each “cue” — sampled one at a time — was plotted as a dot on the display. Participants had to repeatedly decide, after sampling each cue, whether to stop sampling and attempt to touch the hidden target or continue sampling. Each additional cue increased the participants’ probability of successfully touching the hidden target but reduced their potential reward. Two experimental conditions differed in the initial reward associated with touching the hidden target and the fixed cost per cue. For each condition we computed the optimal number of cues that participants should sample, before taking action, to maximize expected gain. Contrary to recent claims that people gather less information than they objectively should before taking action, we found that participants over-sampled in one experimental condition, and did not significantly under- or over-sample in the other. Additionally, while the ideal observer model ignores the current sample dispersion, we found that participants used it to decide whether to stop sampling and take action or continue sampling, a possible consequence of imperfect learning of the underlying population dispersion across trials. PMID:27429991

  20. Laser-induced breakdown spectroscopy for detection of heavy metals in environmental samples

    Science.gov (United States)

    Wisbrun, Richard W.; Schechter, Israel; Niessner, Reinhard; Schroeder, Hartmut

    1993-03-01

    The application of LIBS technology as a sensor for heavy metals in solid environmental samples has been studied. This specific application introduces some new problems into the LIBS analysis. Some of them are related to the particular distribution of contaminants in the grained samples. Other problems are related to the mechanical properties of the samples and to general matrix effects, such as the water and organic fiber content of the sample. An attempt has been made to optimize the experimental set-up with respect to the various parameters involved. The understanding of these factors has enabled the adjustment of the technique to the substrates of interest. The special importance of the grain size and of the laser-induced aerosol production is pointed out. Calibration plots for the analysis of heavy metals in diverse sand and soil samples have been obtained. The detection limits are shown to be usually below the concentration limits set by recent regulations.

  1. MEMS resonant load cells for micro-mechanical test frames: feasibility study and optimal design

    Science.gov (United States)

    Torrents, A.; Azgin, K.; Godfrey, S. W.; Topalli, E. S.; Akin, T.; Valdevit, L.

    2010-12-01

    This paper presents the design, optimization and manufacturing of a novel micro-fabricated load cell based on a double-ended tuning fork. The device geometry and operating voltages are optimized for maximum force resolution and range, subject to a number of manufacturing and electromechanical constraints. All optimizations are enabled by analytical modeling (verified by selected finite elements analyses) coupled with an efficient C++ code based on the particle swarm optimization algorithm. This assessment indicates that force resolutions of ~0.5-10 nN are feasible in vacuum (~1-50 mTorr), with force ranges as large as 1 N. Importantly, the optimal design for vacuum operation is independent of the desired range, ensuring versatility. Experimental verifications on a sub-optimal device fabricated using silicon-on-glass technology demonstrate a resolution of ~23 nN at a vacuum level of ~50 mTorr. The device demonstrated in this article will be integrated in a hybrid micro-mechanical test frame for unprecedented combinations of force resolution and range, displacement resolution and range, optical (or SEM) access to the sample, versatility and cost.
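
    A compact particle swarm optimizer of the kind referred to can be written in a few dozen lines; this sketch is generic (not the authors' C++ implementation), and the two-variable objective stands in for the force-resolution metric, with the manufacturing and electromechanical constraints folded into the bounds.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, n_iter=200, seed=0):
    """Minimal particle swarm optimizer (inertia + cognitive + social terms)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                       # keep designs inside the bounds
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Stand-in design objective: two geometric variables with a mildly rough landscape.
def objective(p):
    a, b = p
    return (a - 2.0) ** 2 + (b - 0.5) ** 2 + 0.1 * np.sin(5 * a) * np.sin(5 * b)

best, best_f = pso_minimize(objective, bounds=[(0.0, 5.0), (0.0, 2.0)])
print("best design:", np.round(best, 3), "objective:", round(best_f, 4))
```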

  2. The Importance of Contamination Knowledge in Curation - Insights into Mars Sample Return

    Science.gov (United States)

    Harrington, A. D.; Calaway, M. J.; Regberg, A. B.; Mitchell, J. L.; Fries, M. D.; Zeigler, R. A.; McCubbin, F. M.

    2018-01-01

    The Astromaterials Acquisition and Curation Office at NASA Johnson Space Center (JSC), in Houston, TX (henceforth Curation Office) manages the curation of extraterrestrial samples returned by NASA missions and shared collections from international partners, preserving their integrity for future scientific study while providing the samples to the international community in a fair and unbiased way. The Curation Office also curates flight and non-flight reference materials and other materials from spacecraft assembly (e.g., lubricants, paints and gases) of sample return missions that would have the potential to cross-contaminate a present or future NASA astromaterials collection.

  3. Optimal sampling designs for estimation of Plasmodium falciparum clearance rates in patients treated with artemisinin derivatives

    Science.gov (United States)

    2013-01-01

    .7-12.9) hours. Schedule A1 consistently performed the best, and schedule A4 the worst, both for the individual patient estimates and for the populations generated with the bootstrapping algorithm. In both cases, the differences between the reference and alternative schedules decreased as half-life increased. In the simulation study, 24-hourly sampling performed the worst, and six-hourly sampling the best. The simulation study confirmed that more dense parasite sampling schedules are required to accurately estimate half-life for profiles with short half-life (≤three hours) and/or low initial parasite density (≤10,000 per μL). Among schedules in the simulation study with six or fewer measurements in the first 48 hours, a schedule with measurements at times (time windows) of 0 (0–2), 6 (4–8), 12 (10–14), 24 (22–26), 36 (34–36) and 48 (46–50) hours, or at times 6, 7 (two samples in time window 5–8), 24, 25 (two samples during time 23–26), and 48, 49 (two samples during time 47–50) hours, until negative most accurately estimated the “true” half-life. For a given schedule, continuing sampling after two days had little effect on the estimation of half-life, provided that adequate sampling was performed in the first two days and the half-life was less than three hours. If the measured parasitaemia at two days exceeded 1,000 per μL, continued sampling for at least once a day was needed for accurate half-life estimates. Conclusions This study has revealed important insights on sampling schedules for accurate and reliable estimation of Plasmodium falciparum half-life following treatment with an artemisinin derivative (alone or in combination with a partner drug). Accurate measurement of short half-lives (rapid clearance) requires more dense sampling schedules (with more than twice daily sampling). A more intensive sampling schedule is, therefore, recommended in locations where P. falciparum susceptibility to artemisinins is not known and the necessary

  4. Application and optimization of microwave-assisted extraction and dispersive liquid-liquid microextraction followed by high-performance liquid chromatography for sensitive determination of polyamines in turkey breast meat samples.

    Science.gov (United States)

    Bashiry, Moein; Mohammadi, Abdorreza; Hosseini, Hedayat; Kamankesh, Marzieh; Aeenehvand, Saeed; Mohammadi, Zaniar

    2016-01-01

    A novel method based on microwave-assisted extraction and dispersive liquid-liquid microextraction (MAE-DLLME) followed by high-performance liquid chromatography (HPLC) was developed for the determination of three polyamines in turkey breast meat samples. Response surface methodology (RSM) based on a central composite design (CCD) was used to optimize the effective factors in the DLLME process. The optimum microextraction efficiency was obtained under the optimized conditions. The calibration graphs of the proposed method were linear in the range of 20-200 ng g⁻¹, with coefficients of determination (R²) higher than 0.9914. The relative standard deviations were 6.72-7.30% (n = 7). The limits of detection were in the range of 0.8-1.4 ng g⁻¹. The recoveries of these compounds in spiked turkey breast meat samples were from 95% to 105%. The increased sensitivity of the MAE-DLLME-HPLC-UV method has been demonstrated. Compared with previous methods, the proposed method is an accurate, rapid and reliable sample-pretreatment method. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Optimization of microwave assisted digestion procedure for the determination of zinc, copper and nickel in tea samples employing flame atomic absorption spectrometry

    International Nuclear Information System (INIS)

    Soylak, Mustafa; Tuzen, Mustafa; Souza, Anderson Santos; Korn, Maria das Gracas Andrade; Ferreira, Sergio Luis Costa

    2007-01-01

    The present paper describes the development of a microwave-assisted digestion procedure for the determination of zinc, copper and nickel in tea samples employing flame atomic absorption spectrometry (FAAS). The optimization step was performed using a full factorial design (2³) involving the factors composition of the acid mixture (CMA), microwave power (MP) and radiation time (RT). The experiments of this factorial design were carried out using a certified reference material of tea, GBW 07605, furnished by the National Research Centre for Certified Reference Materials, China, with the metal recoveries considered as the response. The relative standard deviations of the method were found to be below 8% for the three elements. The proposed procedure was used for the determination of copper, zinc and nickel in several samples of tea from Turkey. For the 10 tea samples analyzed, the concentrations found for copper, zinc and nickel varied over 6.4-13.1, 7.0-16.5 and 3.1-5.7 μg g⁻¹, respectively.

  6. Model Risk in Portfolio Optimization

    Directory of Open Access Journals (Sweden)

    David Stefanovits

    2014-08-01

    We consider a one-period portfolio optimization problem under model uncertainty. For this purpose, we introduce a measure of model risk. We derive analytical results for this measure of model risk in the mean-variance problem assuming we have observations drawn from a normal variance mixture model. This model allows for heavy tails, tail dependence and leptokurtosis of marginals. The results show that mean-variance optimization is seriously compromised by model uncertainty, in particular, for non-Gaussian data and small sample sizes. To mitigate these shortcomings, we propose a method to adjust the sample covariance matrix in order to reduce model risk.

  7. Optimism on quality of life in Portuguese chronic patients: moderator/mediator?

    Directory of Open Access Journals (Sweden)

    Estela Vilhena

    2014-07-01

    Objective: Optimism is an important variable that has consistently been shown to affect adjustment to quality of life in chronic diseases. This study aims to clarify whether dispositional optimism exerts a moderating or a mediating influence on the personality traits-quality of life association in Portuguese chronic patients. Methods: Multiple regression models were used to test the moderation and mediation effects of dispositional optimism on quality of life. A sample of 729 patients was recruited in Portugal's main hospitals and completed self-reported questionnaires assessing socio-demographic and clinical variables, personality, dispositional optimism, quality of life (QoL) and subjective well-being (SWB). Results: The results of the regression models showed that dispositional optimism did not moderate the relationships between personality traits and quality of life. After controlling for gender, age, education level and severity of disease perception, the effects of personality traits on QoL and on SWB were mediated by dispositional optimism (partially and completely), except for the links between neuroticism/openness to experience and physical health. Conclusion: Dispositional optimism is more likely to play a mediating, rather than a moderating, role in the personality traits-quality of life pathway in Portuguese chronic patients, suggesting that "the expectation that good things will happen" contributes to a better quality of life and subjective well-being.

  8. Grasp Algorithms For Optotactile Robotic Sample Acquisition, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Robotic sample acquisition is basically grasping. Multi-finger robot sample grasping devices are controlled to securely pick up samples. While optimal grasps for...

  9. Truss systems and shape optimization

    Science.gov (United States)

    Pricop, Mihai Victor; Bunea, Marian; Nedelcu, Roxana

    2017-07-01

    Structure optimization is an important topic because of its benefits and wide range of applicability, from civil engineering to the aerospace and automotive industries, contributing to a greener industry and way of life. Truss finite elements are still in use in many research/industrial codes for their simple stiffness matrix and naturally match the requirements of cellular materials, especially considering various 3D printing technologies. Optimality Criteria combined with Solid Isotropic Material with Penalization is the optimization method of choice, particularized for truss systems. Global locked structures are obtained using locally locked lattice organization, corresponding to structured or unstructured meshes. Post-processing is important for downstream application of the method, to make a faster link to CAD systems. To export the optimal structure to CATIA, a CATScript file is automatically generated. Results, findings and conclusions are given for two- and three-dimensional cases.

  10. Determination of selenium in urine by inductively coupled plasma mass spectrometry: interferences and optimization

    DEFF Research Database (Denmark)

    Gammelgaard, Bente; Jons, O.

    1999-01-01

    when the nebulizer gas flow rate was optimized for each solute. The influences of sample uptake rate, nebulizer flow rate and rf power were examined in multivariate experiments. The nebulizer gas flow rate and rf power were found to be interdependent, but the sample pump flow rate was independent......, ethanol, propanol, butanol, glycerol, acetonitrile and acetic acid) were examined for their sensitivity enhancement effect. Enhancement factors up to six were obtained and were dependent on the nebulizer gas flow and rf power. There was no important difference in the enhancement effects of these solutes...

  11. Ultrasound assisted extraction of Maxilon Red GRL dye from water samples using cobalt ferrite nanoparticles loaded on activated carbon as sorbent: Optimization and modeling.

    Science.gov (United States)

    Mehrabi, Fatemeh; Vafaei, Azam; Ghaedi, Mehrorang; Ghaedi, Abdol Mohammad; Alipanahpour Dil, Ebrahim; Asfaram, Arash

    2017-09-01

    In this research, a selective, simple and rapid ultrasound assisted dispersive solid-phase micro-microextraction (UA-DSPME) was developed using cobalt ferrite nanoparticles loaded on activated carbon (CoFe₂O₄-NPs-AC) as an efficient sorbent for the preconcentration and determination of Maxilon Red GRL (MR-GRL) dye. The properties of the sorbent are characterized by X-ray diffraction (XRD), Transmission Electron Microscopy (TEM), Vibrating Sample Magnetometry (VSM), Fourier transform infrared spectroscopy (FTIR), Particle Size Distribution (PSD) and Scanning Electron Microscopy (SEM) techniques. The factors affecting the determination of MR-GRL dye were investigated and optimized by central composite design (CCD) and artificial neural networks based on a genetic algorithm (ANN-GA). CCD and ANN-GA were used for optimization. Using ANN-GA, optimum conditions were set at 6.70, 1.2 mg, 5.5 min and 174 μL for pH, sorbent amount, sonication time and volume of eluent, respectively. Under the optimized conditions obtained from ANN-GA, the method exhibited a linear dynamic range of 30-3000 ng mL⁻¹ with a detection limit of 5.70 ng mL⁻¹. The preconcentration factor and enrichment factor were 57.47 and 93.54, respectively, with relative standard deviations (RSDs) less than 4.0% (N=6). The interference effect of some ions and dyes was also investigated and the results show a good selectivity for this method. Finally, the method was successfully applied to the preconcentration and determination of Maxilon Red GRL in water and wastewater samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin which means that failure is a small probability event. Such a probability level is difficult to assess efficiently. Second, the structure mechanical behaviour is modelled numerically in an attempt to reproduce the real response and numerical model tends to be more and more time-demanding as its complexity is increased to improve accuracy and to consider particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered in order to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS for active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145–54]. It associates the Kriging metamodel and its advantageous stochastic property with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only a very few mechanical model computations. The efficiency of the method is, first, proved on two academic applications. It is then conducted for assessing the reliability of a challenging aerospace case study submitted to fatigue.
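
    The importance-sampling step that AK-IS builds on can be shown in isolation: draw samples from a standard normal density recentred at an approximate design point, and reweight by the density ratio. The limit-state function and design point below are toy stand-ins, and the Kriging surrogate and active-learning loop are omitted.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy limit-state in standard normal space: failure when g(u) <= 0.
# beta is chosen so the failure probability is small (on the order of 1e-5).
beta = 4.2
def g(u):
    return beta - u[:, 0] + 0.05 * u[:, 1] ** 2

# Importance density: standard normal shifted to the (approximate) design point.
u_star = np.array([beta, 0.0])
n = 20_000
u = rng.standard_normal((n, 2)) + u_star

# Weights = original density / importance density, evaluated per sample.
log_w = (stats.norm.logpdf(u).sum(axis=1)
         - stats.norm.logpdf(u, loc=u_star).sum(axis=1))
indicator = (g(u) <= 0.0).astype(float)
p_f = np.mean(indicator * np.exp(log_w))
print(f"importance-sampling estimate of Pf: {p_f:.2e}")
```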

  13. Sampling optimization trade-offs for long-term monitoring of gamma dose rates

    NARCIS (Netherlands)

    Melles, S.J.; Heuvelink, G.B.M.; Twenhöfel, C.J.W.; Stöhlker, U.

    2008-01-01

    This paper applies a recently developed optimization method to examine the design of networks that monitor radiation under routine conditions. Annual gamma dose rates were modelled by combining regression with interpolation of the regression residuals using spatially exhaustive predictors and an

  14. Optimal experimental design with R

    CERN Document Server

    Rasch, Dieter; Verdooren, L R; Gebhardt, Albrecht

    2011-01-01

    Experimental design is often overlooked in the literature of applied and mathematical statistics: statistics is taught and understood as merely a collection of methods for analyzing data. Consequently, experimenters seldom think about optimal design, including prerequisites such as the necessary sample size needed for a precise answer for an experimental question. Providing a concise introduction to experimental design theory, Optimal Experimental Design with R: Introduces the philosophy of experimental design Provides an easy process for constructing experimental designs and calculating necessary sample size using R programs Teaches by example using a custom made R program package: OPDOE Consisting of detailed, data-rich examples, this book introduces experimenters to the philosophy of experimentation, experimental design, and data collection. It gives researchers and statisticians guidance in the construction of optimum experimental designs using R programs, including sample size calculations, hypothesis te...

  15. Optimized sampling of hydroperoxides and investigations of the water vapour dependence of hydroperoxide formation during ozonolysis of alkenes; Optimierung der Probenahme von Hydroperoxiden und Untersuchungen zur Wasserdampfabhaengigkeit der Bildung von Hydroperoxiden bei der Ozonolyse von Alkenen

    Energy Technology Data Exchange (ETDEWEB)

    Becker, K.H.; Plagens, H.

    1997-06-01

    There are several sampling methods for hydroperoxides, none of which is particularly reliable. The authors therefore tested three new methods in order to optimize hydroperoxide sampling and, using the optimized sampling procedure, to investigate the water vapour dependence of hydroperoxide formation during ozonolysis of alkenes. (orig.)

  16. A Mathematical Optimization Problem in Bioinformatics

    Science.gov (United States)

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
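
    The optimal global alignment described here is computed by the Needleman-Wunsch dynamic program. The sketch below uses a simple match/mismatch/gap scoring scheme; the scores and the example sequences are arbitrary.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global sequence alignment by dynamic programming (Needleman-Wunsch)."""
    n, m = len(a), len(b)
    # score[i][j] = best score for aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)

    # Traceback to recover one optimal alignment.
    out_a, out_b = [], []
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (
                match if a[i - 1] == b[j - 1] else mismatch):
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j - 1]); j -= 1
    return score[n][m], "".join(reversed(out_a)), "".join(reversed(out_b))

print(needleman_wunsch("GATTACA", "GCATGCU"))
```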

  17. Supporting Sampling and Sample Preparation Tools for Isotope and Nuclear Analysis

    International Nuclear Information System (INIS)

    2016-03-01

    Nuclear and related techniques can help develop climate-smart agricultural practices by optimizing water and nutrient use efficiency, assessing organic carbon sequestration in soil, and assisting in the evaluation of soil erosion control measures. Knowledge on the behaviour of radioactive materials in soil, water and foodstuffs is also essential in enhancing nuclear emergency preparedness and response. Appropriate sampling and sample preparation are the first steps to ensure the quality and effective use of the measurements and this publication provides comprehensive detail on the necessary steps

  18. Dynamic Optimization of UV Flash Processes

    DEFF Research Database (Denmark)

    Ritschel, Tobias Kasper Skovborg; Capolei, Andrea; Jørgensen, John Bagterp

    2017-01-01

    UV flash processes, also referred to as isoenergetic-isochoric flash processes, occur for dynamic simulation and optimization of vapor-liquid equilibrium processes. Dynamic optimization and nonlinear model predictive control of distillation columns, certain two-phase flow problems, as well as oil reser... that the optimization solver, the compiler, and high-performance linear algebra software are all important for efficient dynamic optimization of UV flash processes.

  19. Workshop on Computational Optimization

    CERN Document Server

    2016-01-01

    This volume is a comprehensive collection of extended contributions from the Workshop on Computational Optimization 2014, held in Warsaw, Poland, September 7-10, 2014. The book presents recent advances in computational optimization. The volume includes important real-life problems such as parameter settings for controlling processes in a bioreactor and other processes, resource-constrained project scheduling, infection distribution, molecule distance geometry, quantum computing, real-time management and optimal control, bin packing, medical image processing, localization of an abrupt atmospheric contamination source and so on. It shows how to develop algorithms for them based on new metaheuristic methods like evolutionary computation, ant colony optimization, constraint programming and others. This research demonstrates how some real-world problems arising in engineering, economics, medicine and other domains can be formulated as optimization tasks.

  20. The importance and realization of values in relation to the subjective emotional well-being in the Slovenian and British sample

    Directory of Open Access Journals (Sweden)

    Jana Strniša

    2007-07-01

    In this study we examined the relationship between the importance and realization of values and the subjective emotional well-being of Slovenian and British subjects. The overall results were in concordance with the telic and hedonistic theories of subjective emotional well-being within both samples. Also, the correlations between subjective emotional well-being and fulfilled value orientation were in both samples substantially higher than the correlations between subjective emotional well-being and value orientation itself. The finding of profound similarities in the relation between subjective emotional well-being and the realization of general value orientation in the Slovenian and British samples is interesting and deserves special attention and further research. The fulfillment of hedonic or dionisic values, respectively, was found to be the strongest predictor of the subjective emotional well-being of the Slovenian and British subjects.

  1. Robust Portfolio Optimization Using Pseudodistances

    Science.gov (United States)

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948

  2. Robust Portfolio Optimization Using Pseudodistances.

    Science.gov (United States)

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature.
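    A minimal sketch of the general idea, assuming nothing beyond the abstract: robust location/scatter estimates are computed first and then plugged into a standard mean-variance (here, minimum-variance) portfolio. The trimmed-moment estimator below is an illustrative stand-in, not the pseudodistance-based estimator of the paper, and all data are synthetic.

```python
# Sketch: plugging robust(ish) mean/covariance estimates into a
# minimum-variance portfolio. The trimming rule is a crude substitute for the
# pseudodistance-based estimators described in the abstract.
import numpy as np

def trimmed_moments(returns, trim=0.05):
    """Drop the observations farthest from the componentwise median, then estimate."""
    med = np.median(returns, axis=0)
    dist = np.linalg.norm(returns - med, axis=1)
    keep = dist <= np.quantile(dist, 1.0 - trim)
    clean = returns[keep]
    return clean.mean(axis=0), np.cov(clean, rowvar=False)

def min_variance_weights(cov):
    """Closed-form global minimum-variance portfolio: w proportional to inv(Sigma) 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

rng = np.random.default_rng(0)
returns = rng.normal(0.001, 0.02, size=(500, 4))
returns[::50] += 0.3                      # inject a few outlying return days
mu_r, cov_r = trimmed_moments(returns)    # robust(ish) inputs to the optimizer
w = min_variance_weights(cov_r)
print("weights:", np.round(w, 3), "sum:", round(w.sum(), 6))
```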

  3. Optimization of a neutron transmission beamline applied to materials science for the CAB linear accelerator

    International Nuclear Information System (INIS)

    Ramirez, S; Santisteban, J.R

    2009-01-01

    The Neutrons and Reactors Laboratory (NYR) of the CAB (Centro Atomico Bariloche) is equipped with a linear electron accelerator (LINAC). This LINAC is used as a neutron source from which two beams are extracted to perform neutron transmission and scattering experiments. Through these experiments, structural and dynamic properties of materials can be studied. In a neutron transmission experiment, a collimated neutron beam interacts with a sample and is recorded by a detector placed behind the sample. Important information about the microstructural characteristics of the material can be obtained by comparing the neutron spectra before and after the interaction with the sample. In the NYR Laboratory, cylindrical samples of one inch in diameter have traditionally been studied. Nonetheless, there is strong motivation for systematic research on smaller samples with different geometries, particularly sheets and tensile-test specimens. Hence, the NYR Laboratory has considered the possibility of incorporating a neutron guide into the existing transmission line. Accordingly, the main objective of this work was to optimize the neutron optics of the transmission flight tube. This optimization not only improved the existing line but also provided a selection criterion for the acquisition of the neutron guide. [es

  4. RANKED SET SAMPLING FOR ECOLOGICAL RESEARCH: ACCOUNTING FOR THE TOTAL COSTS OF SAMPLING

    Science.gov (United States)

    Researchers aim to design environmental studies that optimize precision and allow for generalization of results, while keeping the costs of associated field and laboratory work at a reasonable level. Ranked set sampling is one method to potentially increase precision and reduce ...

  5. Feasibility of Stochastic Voltage/VAr Optimization Considering Renewable Energy Resources for Smart Grid

    Science.gov (United States)

    Momoh, James A.; Salkuti, Surender Reddy

    2016-06-01

    This paper proposes a stochastic optimization technique for solving the Voltage/VAr control problem that accounts for variation in load demand and Renewable Energy Resources (RERs). RERs introduce stochastic behavior into the system inputs. Voltage/VAr control is one of the important challenges in handling power system complexity and reliability, and is therefore a fundamental requirement for all utility companies. A robust and efficient Voltage/VAr optimization technique is needed to meet peak demand and reduce system losses. Voltages beyond their limits may damage costly substation devices as well as equipment at the consumer end. Moreover, RERs introduce additional disturbances, and some RERs are not capable of meeting the VAr demand. Therefore, there is a strong need for Voltage/VAr control in an RER environment. This paper aims at the development of an optimal scheme for Voltage/VAr control involving RERs. Latin Hypercube Sampling (LHS) is used to cover the full range of variables while maximally satisfying the marginal distributions. A backward scenario reduction technique is then used to reduce the number of scenarios while largely retaining the fitting accuracy of the samples. The developed optimization scheme is tested on the IEEE 24-bus Reliability Test System (RTS) considering load demand and RER variation.
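    As a rough illustration of the sampling step described above, the sketch below draws Latin Hypercube samples over three hypothetical uncertain inputs (load, wind, solar) and then applies a crude distance-based reduction as a stand-in for backward scenario reduction; none of the ranges, dimensions, or scenario counts are taken from the paper.

```python
# Minimal Latin Hypercube Sampling sketch for a stochastic Voltage/VAr study.
# The "reduction" step is only a toy stand-in for backward scenario reduction.
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """One uniform draw per stratum in each dimension, independently permuted."""
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
    for d in range(n_dims):
        u[:, d] = rng.permutation(u[:, d])
    return u  # points in [0, 1)^n_dims

rng = np.random.default_rng(42)
# hypothetical uncertain inputs: load multiplier, wind output, solar output
samples = latin_hypercube(200, 3, rng)
lo, hi = np.array([0.8, 0.0, 0.0]), np.array([1.2, 1.0, 1.0])  # illustrative ranges (p.u.)
scenarios = lo + samples * (hi - lo)

# toy reduction: keep the 10 scenarios nearest the ensemble mean
dist = np.linalg.norm(scenarios - scenarios.mean(axis=0), axis=1)
reduced = scenarios[np.argsort(dist)[:10]]
print(reduced.shape)
```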

  6. The clustering of the SDSS-IV extended Baryon Oscillation Spectroscopic Survey DR14 quasar sample: anisotropic Baryon Acoustic Oscillations measurements in Fourier-space with optimal redshift weights

    Science.gov (United States)

    Wang, Dandan; Zhao, Gong-Bo; Wang, Yuting; Percival, Will J.; Ruggeri, Rossana; Zhu, Fangzhou; Tojeiro, Rita; Myers, Adam D.; Chuang, Chia-Hsun; Baumgarten, Falk; Zhao, Cheng; Gil-Marín, Héctor; Ross, Ashley J.; Burtin, Etienne; Zarrouk, Pauline; Bautista, Julian; Brinkmann, Jonathan; Dawson, Kyle; Brownstein, Joel R.; de la Macorra, Axel; Schneider, Donald P.; Shafieloo, Arman

    2018-06-01

    We present a measurement of the anisotropic and isotropic Baryon Acoustic Oscillations (BAO) from the extended Baryon Oscillation Spectroscopic Survey Data Release 14 quasar sample with optimal redshift weights. Applying the redshift weights improves the constraint on the BAO dilation parameter α(zeff) by 17 per cent. We reconstruct the evolution history of the BAO distance indicators in the redshift range of 0.8 < z < 2.2. This paper is part of a set that analyses the eBOSS DR14 quasar sample.

  7. Optimization and analysis of a quantitative real-time PCR-based technique to determine microRNA expression in formalin-fixed paraffin-embedded samples

    Directory of Open Access Journals (Sweden)

    Reis Patricia P

    2010-06-01

    Full Text Available Abstract Background: MicroRNAs (miRs) are non-coding RNA molecules involved in post-transcriptional regulation, with diverse functions in tissue development, differentiation, cell proliferation and apoptosis. miRs may be less prone to degradation during formalin fixation, facilitating miR expression studies in formalin-fixed paraffin-embedded (FFPE) tissue. Results: Our study demonstrates that the TaqMan Human MicroRNA Array v1.0 (Early Access) platform is suitable for miR expression analysis in FFPE tissue with high reproducibility (correlation coefficients of 0.95 between duplicates, p 35, we show that reproducibility between technical replicates, equivalent dilutions, and FFPE vs. frozen samples is best in the high-abundance stratum. We also demonstrate that the miR expression profiles of FFPE samples are comparable to those of fresh-frozen samples, with a correlation of up to 0.87 (p Conclusion: Our study thus demonstrates the utility, reproducibility, and optimization steps needed in miR expression studies using FFPE samples on a high-throughput quantitative PCR-based miR platform, opening up a realm of research possibilities for retrospective studies.

  8. Primal and dual approaches to adjustable robust optimization

    NARCIS (Netherlands)

    de Ruiter, Frans

    2018-01-01

    Robust optimization has become an important paradigm to deal with optimization under uncertainty. Adjustable robust optimization is an extension that deals with multistage problems. This thesis starts with a short but comprehensive introduction to adjustable robust optimization. Then the two

  9. Feasible sampling plan for Bemisia tabaci control decision-making in watermelon fields.

    Science.gov (United States)

    Lima, Carlos Ho; Sarmento, Renato A; Pereira, Poliana S; Galdino, Tarcísio Vs; Santos, Fábio A; Silva, Joedna; Picanço, Marcelo C

    2017-11-01

    The silverleaf whitefly Bemisia tabaci is one of the most important pests of watermelon fields worldwide. Conventional sampling plans are the starting point for the generation of decision-making systems of integrated pest management programs. The aim of this study was to determine a conventional sampling plan for B. tabaci in watermelon fields. The optimal leaf for B. tabaci adult sampling was the 6th most apical leaf. Direct counting was the best pest sampling technique. Crop pest densities fitted the negative binomial distribution and had a common aggregation parameter (K common). The sampling plan consisted of evaluating 103 samples per plot. This sampling plan took 56 min, cost US$ 2.22 per sampling, and had a 10% maximum evaluation error. The sampling plan determined in this study can be adopted by farmers because it enables the adequate evaluation of B. tabaci populations in watermelon fields (10% maximum evaluation error) and is a low-cost (US$ 2.22 per sampling), fast (56 min per sampling) and feasible (because it may be used in a standardized way throughout the crop cycle) technique. © 2017 Society of Chemical Industry.
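    For readers interested in how such sample numbers are typically derived, the sketch below uses a standard enumerative sample-size formula for negative binomially distributed counts with a common aggregation parameter, n = (t/D)^2 (1/m + 1/k). The paper does not state that it used this exact expression, and the densities below are illustrative only.

```python
# Hedged sketch: fixed-precision sample size for negative binomial counts with
# a common aggregation parameter k and relative precision D (evaluation error).
import math

def negbin_sample_size(mean_density, k_common, rel_error=0.10, t=1.96):
    """Samples needed to estimate the mean density with relative precision rel_error."""
    return math.ceil((t / rel_error) ** 2 * (1.0 / mean_density + 1.0 / k_common))

# illustrative numbers only (not taken from the paper)
print(negbin_sample_size(mean_density=2.5, k_common=1.2))
```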

  10. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection.

    Directory of Open Access Journals (Sweden)

    Thadeous J Kacmarczyk

    Full Text Available Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial consideration or experimental convenience, with limited understanding on the effects on the experimental results. Here we set to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane for sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal.

  11. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection

    Science.gov (United States)

    Kacmarczyk, Thadeous J.; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial consideration or experimental convenience, with limited understanding on the effects on the experimental results. Here we set to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane for sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal. PMID:26066343

  12. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection.

    Science.gov (United States)

    Kacmarczyk, Thadeous J; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial consideration or experimental convenience, with limited understanding on the effects on the experimental results. Here we set to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane for sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal.
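    A small sketch of the kind of comparison these records describe: the fraction of peaks called on full-depth data that are recovered by peaks called on a down-sampled (multiplexed-depth) library. Peak calling itself is assumed to have been done elsewhere; the coordinates below are made up.

```python
# Peak recovery by interval overlap: inputs are plain (chrom, start, end) tuples.
def recovery_rate(full_peaks, subsampled_peaks):
    """Fraction of full-depth peaks overlapped by at least one subsampled peak."""
    by_chrom = {}
    for c, s, e in subsampled_peaks:
        by_chrom.setdefault(c, []).append((s, e))
    hits = 0
    for c, s, e in full_peaks:
        if any(s < e2 and s2 < e for s2, e2 in by_chrom.get(c, [])):
            hits += 1
    return hits / len(full_peaks) if full_peaks else 0.0

full = [("chr1", 100, 500), ("chr1", 1000, 1400), ("chr2", 50, 300)]
sub  = [("chr1", 120, 480), ("chr2", 60, 250)]
print(f"recovered: {recovery_rate(full, sub):.0%}")  # 67%
```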

  13. Efficient AUC optimization for classification

    NARCIS (Netherlands)

    Calders, T.; Jaroszewicz, S.; Kok, J.N.; Koronacki, J.; Lopez de Mantaras, R.; Matwin, S.; Mladenic, D.; Skowron, A.

    2007-01-01

    In this paper we show an efficient method for inducing classifiers that directly optimize the area under the ROC curve. Recently, AUC gained importance in the classification community as a means to compare the performance of classifiers. Because most classification methods do not optimize this
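    For context, the quantity being optimized can be computed directly from classifier scores via the Mann-Whitney statistic; the sketch below shows that computation only and does not reproduce the induction method of the paper.

```python
# AUC as the probability that a random positive is scored above a random
# negative (Mann-Whitney U divided by n_pos * n_neg); ties count as one half.
import numpy as np

def auc(scores, labels):
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))  # 0.75
```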

  14. A new hybrid model optimized by an intelligent optimization algorithm for wind speed forecasting

    International Nuclear Information System (INIS)

    Su, Zhongyue; Wang, Jianzhou; Lu, Haiyan; Zhao, Ge

    2014-01-01

    Highlights: • A new hybrid model is developed for wind speed forecasting. • The model is based on the Kalman filter and the ARIMA. • An intelligent optimization method is employed in the hybrid model. • The new hybrid model has good performance in western China. - Abstract: Forecasting the wind speed is indispensable in wind-related engineering studies and is important in the management of wind farms. As a technique essential for the future of clean energy systems, reducing the forecasting errors related to wind speed has always been an important research subject. In this paper, an optimized hybrid method based on the Autoregressive Integrated Moving Average (ARIMA) and Kalman filter is proposed to forecast the daily mean wind speed in western China. This approach employs Particle Swarm Optimization (PSO) as an intelligent optimization algorithm to optimize the parameters of the ARIMA model, which develops a hybrid model that is best adapted to the data set, increasing the fitting accuracy and avoiding over-fitting. The proposed method is subsequently examined on the wind farms of western China, where the proposed hybrid model is shown to perform effectively and steadily
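    The sketch below is a minimal particle swarm optimizer of the kind that could tune such model parameters; in the hybrid method described, the objective would be the fitting error of the ARIMA/Kalman model, for which a toy quadratic stands in here.

```python
# Minimal PSO sketch. Replace the toy quadratic objective with, e.g., the
# residual sum of squares of an ARIMA/Kalman model as a function of its parameters.
import numpy as np

def pso(objective, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# toy objective with known minimum at (1.0, -0.5)
best, err = pso(lambda p: ((p - np.array([1.0, -0.5])) ** 2).sum(),
                bounds=([-2, -2], [2, 2]))
print(best, err)
```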

  15. Optimization of Sample Preparation processes of Bone Material for Raman Spectroscopy.

    Science.gov (United States)

    Chikhani, Madelen; Wuhrer, Richard; Green, Hayley

    2018-03-30

    Raman spectroscopy has recently been investigated for use in the calculation of postmortem interval from skeletal material. The fluorescence generated by samples, which affects the interpretation of Raman data, is a major limitation. This study compares the effectiveness of two sample preparation techniques, chemical bleaching and scraping, in the reduction of fluorescence from bone samples during testing with Raman spectroscopy. Visual assessment of Raman spectra obtained at 1064 nm excitation following the preparation protocols indicates an overall reduction in fluorescence. Results demonstrate that scraping is more effective at resolving fluorescence than chemical bleaching. The scraping of skeletonized remains prior to Raman analysis is a less destructive method and allows for the preservation of a bone sample in a state closest to its original form, which is beneficial in forensic investigations. It is recommended that bone scraping supersedes chemical bleaching as the preferred method for sample preparation prior to Raman spectroscopy. © 2018 American Academy of Forensic Sciences.

  16. Optimal estimation and control in nuclear power plants

    International Nuclear Information System (INIS)

    Purviance, J.E.; Tylee, J.L.

    1982-08-01

    Optimal estimation and control theories offer the potential for more precise control and diagnosis of nuclear power plants. The important element of these theories is that a mathematical plant model is used in conjunction with the actual plant data to optimize some performance criteria. These criteria involve important plant variables and incorporate a sense of the desired plant performance. Several applications of optimal estimation and control to nuclear systems are discussed
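    As a minimal illustration of the estimation side of these theories, the sketch below runs a scalar Kalman filter that combines a simple plant model with noisy measurements; the actual nuclear-plant models and performance criteria of the paper are not reproduced.

```python
# Scalar Kalman filter: model-based prediction followed by measurement correction.
def kalman_1d(z_measurements, a=1.0, q=1e-3, r=1e-2, x0=0.0, p0=1.0):
    """Model: x_{k+1} = a*x_k + w (var q); measurement: z_k = x_k + v (var r)."""
    x, p, estimates = x0, p0, []
    for z in z_measurements:
        x, p = a * x, a * a * p + q          # predict with the plant model
        k = p / (p + r)                       # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p   # correct with the measurement
        estimates.append(x)
    return estimates

print(kalman_1d([1.1, 0.9, 1.05, 1.0])[-1])
```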

  17. Determination and optimization of spatial samples for distributed measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Huo, Xiaoming (Georgia Institute of Technology, Atlanta, GA); Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong (Georgia Institute of Technology, Atlanta, GA)

    2010-10-01

    There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.
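    Purely as an illustration of sequential measurement-point selection, the sketch below uses a greedy maximin (space-filling) rule over a candidate grid; it is not the wavelet-based or adaptive method developed in the report.

```python
# Greedy maximin selection: repeatedly pick the candidate farthest from all
# points chosen so far, giving a simple space-filling measurement layout.
import numpy as np

def greedy_maximin(candidates, n_points, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    chosen = [candidates[rng.integers(len(candidates))]]
    while len(chosen) < n_points:
        d = np.min(np.linalg.norm(candidates[:, None, :] - np.array(chosen)[None, :, :], axis=2), axis=1)
        chosen.append(candidates[int(np.argmax(d))])
    return np.array(chosen)

grid = np.array([(x, y) for x in np.linspace(0, 1, 20) for y in np.linspace(0, 1, 20)])
print(greedy_maximin(grid, 5))
```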

  18. Protein import into isolated pea root leucoplasts

    OpenAIRE

    Chu, Chiung-Chih; Li, Hsou-min

    2015-01-01

    Leucoplasts are important organelles for the synthesis and storage of starch, lipids and proteins. However, the molecular mechanism of protein import into leucoplasts, and how it differs from import into chloroplasts, remains unknown. We used pea seedlings for both chloroplast and leucoplast isolations to compare within the same species. We further optimized the isolation and import conditions to improve import efficiency and to permit a quantitative comparison between the two plastid types....

  19. Serum Dried Samples to Detect Dengue Antibodies: A Field Study

    Directory of Open Access Journals (Sweden)

    Angelica Maldonado-Rodríguez

    2017-01-01

    Full Text Available Background. Dried blood and serum samples are useful resources for detecting antiviral antibodies. The conditions for elution of the sample need to be optimized for each disease. Dengue is a widespread disease in Mexico which requires continuous surveillance. In this study, we standardized and validated a protocol for the specific detection of dengue antibodies from dried serum spots (DSSs). Methods. Paired serum and DSS samples from 66 suspected cases of dengue were collected in a clinic in Veracruz, Mexico. Samples were sent to our laboratory, where the conditions for optimal elution of DSSs were established. The presence of anti-dengue antibodies was determined in the paired samples. Results. DSS elution conditions were standardized as follows: 1 h at 4°C in 200 µl of DNase-, RNase-, and protease-free PBS (1x). The optimal volume of DSS eluate to be used in the IgG assay was 40 µl. Sensitivity of 94%, specificity of 93.3%, and kappa concordance of 0.87 were obtained when comparing the anti-dengue reactivity between DSSs and serum samples. Conclusion. DSS samples are useful for detecting anti-dengue IgG antibodies in the field.
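    The reported sensitivity, specificity and kappa are the standard 2x2-table statistics; the sketch below shows how they are computed, with made-up counts rather than the study's data.

```python
# Sensitivity, specificity and Cohen's kappa from a 2x2 table of
# DSS result (rows) versus serum reference result (columns).
def diagnostics(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    po = (tp + tn) / n                                              # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    return sens, spec, kappa

# illustrative counts only
print(diagnostics(tp=32, fp=2, fn=2, tn=30))
```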

  20. Optimized methods for total nucleic acid extraction and quantification of the bat white-nose syndrome fungus, Pseudogymnoascus destructans, from swab and environmental samples.

    Science.gov (United States)

    Verant, Michelle L; Bohuski, Elizabeth A; Lorch, Jeffery M; Blehert, David S

    2016-03-01

    The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America has prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid from P. destructans is dependent on effective and standardized methods for extracting nucleic acid from various relevant sample types. We describe optimized methodologies for extracting fungal nucleic acids from sediment, guano, and swab-based samples using commercial kits together with a combination of chemical, enzymatic, and mechanical modifications. Additionally, we define modifications to a previously published intergenic spacer-based qPCR test for P. destructans to refine quantification capabilities of this assay. © 2016 The Author(s).
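    As a generic illustration of qPCR-based quantification (not the assay-specific details of this test), the sketch below fits a standard curve of Cq versus log10 quantity and back-calculates an unknown; all values are illustrative and the curve must be fit from a laboratory's own standards.

```python
# Standard-curve quantification: Cq is linear in log10(quantity); the slope
# also gives the amplification efficiency.
import numpy as np

def fit_standard_curve(log10_quantities, cq_values):
    slope, intercept = np.polyfit(log10_quantities, cq_values, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0    # amplification efficiency
    return slope, intercept, efficiency

def quantify(cq, slope, intercept):
    return 10 ** ((cq - intercept) / slope)    # back-calculated quantity

slope, intercept, eff = fit_standard_curve([5, 4, 3, 2, 1], [18.1, 21.5, 24.9, 28.2, 31.6])
print(f"efficiency ~ {eff:.2f}, quantity at Cq 26 ~ {quantify(26.0, slope, intercept):.1f}")
```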