WorldWideScience

Sample records for optimal sampling intervals

  1. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA-Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal in a Fisherian sense, is given. The solution is investigated by a simulation study. It is shown that if the experimental length T1 is fixed it may be useful to sample the record at a high sampling rate, since more measurements from the system are then collected; no optimal sampling interval exists. But if the total number of sample points N is fixed, an optimal sampling interval exists. In that case it is far worse to use too large a sampling interval than too small a one, since the information losses increase rapidly as the sampling interval increases beyond the optimal value.
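
    The fixed-N trade-off described above can be illustrated numerically. The sketch below is an illustration only, not Kirkegaard's analytical solution: it simulates a white-noise-driven single degree-of-freedom oscillator, subsamples its response at several intervals while keeping N fixed, fits a crude AR(2) model and recovers the natural frequency from the model poles. The scatter of the estimates over repeated records is typically smallest at an intermediate sampling interval. All parameter values are arbitrary.

```python
import numpy as np

def simulate_sdof(omega_n=2 * np.pi, zeta=0.05, dt=1e-3, n_steps=40_000, seed=0):
    """Euler-Maruyama integration of x'' + 2*zeta*omega_n*x' + omega_n**2*x = w(t)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    xi = vi = 0.0
    for k in range(n_steps):
        acc = -2 * zeta * omega_n * vi - omega_n**2 * xi + rng.normal() / np.sqrt(dt)
        vi += acc * dt
        xi += vi * dt
        x[k] = xi
    return x

def omega_from_ar2(y, delta):
    """Least-squares AR(2) fit; map a discrete pole back to a continuous-time one."""
    A = np.column_stack([y[1:-1], y[:-2]])
    a1, a2 = np.linalg.lstsq(A, y[2:], rcond=None)[0]
    pole = np.roots([1.0, -a1, -a2]).astype(complex)[0]
    return abs(np.log(pole) / delta)      # |lambda| equals omega_n for an underdamped system

N, dt = 200, 1e-3
for step in (5, 20, 80, 160):             # sampling interval = step * dt seconds, N fixed
    estimates = [omega_from_ar2(simulate_sdof(dt=dt, seed=s)[::step][:N], step * dt)
                 for s in range(10)]
    print(f"interval {step * dt:5.3f} s  std of omega_n estimates {np.std(estimates):.3f}")
```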

  3. Estimation of the optimal statistical quality control sampling time intervals using a residual risk measure.

    Directory of Open Access Journals (Sweden)

    Aristides T Hatjimihail

    Full Text Available BACKGROUND: An open problem in clinical chemistry is the estimation of the optimal sampling time intervals for the application of statistical quality control (QC) procedures that are based on the measurement of control materials. This is a probabilistic risk assessment problem that requires reliability analysis of the analytical system and the estimation of the risk caused by the measurement error. METHODOLOGY/PRINCIPAL FINDINGS: Assuming that the states of the analytical system are the reliability state, the maintenance state, the critical-failure modes and their combinations, we can define risk functions based on the mean time of the states, their measurement error and the medically acceptable measurement error. Consequently, a residual risk measure rr can be defined for each sampling time interval. The rr depends on the state probability vectors of the analytical system, the state transition probability matrices before and after each application of the QC procedure, and the state mean time matrices. The optimal sampling time intervals can then be defined as those that minimize a QC-related cost measure while keeping the rr acceptable. I developed an algorithm that estimates the rr for any QC sampling time interval of a QC procedure applied to analytical systems with an arbitrary number of critical-failure modes, assuming any failure time and measurement error probability density function for each mode. Furthermore, given the acceptable rr, it can estimate the optimal QC sampling time intervals. CONCLUSIONS/SIGNIFICANCE: It is possible to rationally estimate the optimal QC sampling time intervals of an analytical system to sustain an acceptable residual risk with the minimum QC-related cost. For the optimization, the reliability analysis of the analytical system and the risk analysis of the measurement error are needed.
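
    To make the selection logic concrete, the toy sketch below picks the widest QC interval whose residual risk stays below an acceptable bound, using a deliberately simplified one-failure-mode risk proxy (exponential failure times, a fixed probability that a failure produces a medically unacceptable error). It is only an illustration, not the algorithm described in the article, and all numbers are made up.

```python
import numpy as np

def residual_risk(interval_h, failure_rate_per_h, p_unacceptable_error):
    """Toy proxy: expected fraction of the interval spent in an undetected failure
    state (exponential failure law) times the chance the error is unacceptable."""
    lam, T = failure_rate_per_h, interval_h
    mean_undetected_time = T - (1.0 - np.exp(-lam * T)) / lam   # E[(T - X)+], X ~ Exp(lam)
    return p_unacceptable_error * mean_undetected_time / T

def optimal_qc_interval(candidates_h, acceptable_rr, **risk_kwargs):
    """Largest acceptable interval: fewest QC runs, hence lowest QC-related cost."""
    feasible = [T for T in candidates_h if residual_risk(T, **risk_kwargs) <= acceptable_rr]
    return max(feasible) if feasible else None

print(optimal_qc_interval([2, 4, 8, 12, 24], acceptable_rr=1e-3,
                          failure_rate_per_h=1e-3, p_unacceptable_error=0.3))  # -> 4 (hours)
```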

  4. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    Science.gov (United States)

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies within the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), as well as the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  5. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    Directory of Open Access Journals (Sweden)

    D. Ramyachitra

    2015-09-01

    Full Text Available Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies within the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), as well as the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  6. Global Optimization using Interval Analysis: Interval Optimization for Aerospace Applications

    NARCIS (Netherlands)

    Van Kampen, E.

    2010-01-01

    Optimization is an important element in aerospace related research. It is encountered for example in trajectory optimization problems, such as: satellite formation flying, spacecraft re-entry optimization and airport approach and departure optimization; in control optimization, for example in adapti

  7. An Optimization-Based Approach to Calculate Confidence Interval on Mean Value with Interval Data

    Directory of Open Access Journals (Sweden)

    Kais Zaman

    2014-01-01

    Full Text Available In this paper, we propose a methodology for the construction of confidence intervals on mean values with interval data for input variables in uncertainty analysis and design optimization problems. The construction of a confidence interval with interval data is known as a combinatorial optimization problem. Finding confidence bounds on the mean with interval data has generally been considered an NP-hard problem, because it includes a search among the combinations of multiple values of the variables, including interval endpoints. In this paper, we present efficient algorithms based on continuous optimization to find the confidence interval on mean values with interval data. With numerical experimentation, we show that the proposed confidence bound algorithms are scalable in polynomial time with respect to an increasing number of intervals. Several sets of interval data with different numbers of intervals and types of overlap are presented to demonstrate the proposed methods. In contrast to the current practice for design optimization with interval data, which typically implements the constraints on interval variables through the computation of bounds on mean values from the sampled data, the proposed construction of confidence intervals enables a more complete implementation of design optimization under interval uncertainty.
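
    The continuous-optimization idea can be sketched as follows for a z-based interval on the mean (a simplified stand-in for the authors' algorithms; the paper's treatment of the quantile and the combinatorial structure is more involved). Each observation is only known to lie in [lower[i], upper[i]], and outer bounds of the confidence interval are obtained by optimizing the endpoint expressions over those boxes.

```python
import numpy as np
from scipy.optimize import minimize

def ci_bounds_interval_data(lower, upper, z=1.96):
    """Outer bounds of the z-interval on the mean when each x[i] lies in [lower[i], upper[i]]."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = len(lower)
    box = list(zip(lower, upper))
    x0 = 0.5 * (lower + upper)                      # start from the interval midpoints

    def lo_endpoint(x):
        return x.mean() - z * x.std(ddof=1) / np.sqrt(n)

    def hi_endpoint(x):
        return x.mean() + z * x.std(ddof=1) / np.sqrt(n)

    # single-start local optimization; the exact bound may require endpoint enumeration
    lo = minimize(lo_endpoint, x0, bounds=box).fun                  # smallest possible lower endpoint
    hi = -minimize(lambda x: -hi_endpoint(x), x0, bounds=box).fun   # largest possible upper endpoint
    return lo, hi

print(ci_bounds_interval_data([1.0, 2.0, 2.5], [1.5, 3.0, 4.0]))
```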

  8. Intervals in evolutionary algorithms for global optimization

    Energy Technology Data Exchange (ETDEWEB)

    Patil, R.B.

    1995-05-01

    Optimization is of central concern to a number of disciplines. Interval arithmetic methods for global optimization provide us with (guaranteed) verified results. These methods are mainly restricted to classes of objective functions that are twice differentiable, and they use a simple strategy of eliminating and splitting larger regions of the search space in the global optimization process. An approach that combines this efficient strategy from interval global optimization methods with the robustness of evolutionary algorithms is proposed. In the proposed approach, the search begins with randomly created interval vectors with interval widths equal to the whole domain. Before the beginning of the evolutionary process, the fitness of these interval parameter vectors is defined by evaluating the objective function at the center of the initial interval vectors. In the subsequent evolutionary process, the local optimization process returns an estimate of the bounds of the objective function over the interval vectors. Though these bounds may not be correct at the beginning, due to large interval widths and complicated function properties, the process of reducing interval widths over time and a selection approach similar to simulated annealing help in estimating reasonably correct bounds as the population evolves. The interval parameter vectors at these estimated bounds (local optima) are then subjected to crossover and mutation operators. This evolutionary process continues for a predetermined number of generations in the search for the global optimum.

  9. Determination of the Optimal Sampling Interval for Cyclostratigraphic Analysis by Using Sampling Theorem and Accumulation Rates

    Institute of Scientific and Technical Information of China (English)

    赵庆乐; 吴怀春; 李海燕; 张世红

    2011-01-01

    In recent years, cyclostratigraphy has been successfully applied to dating strata and to recognizing the possible astronomical forcing of major geological events. Sampling is one of the most important steps in cyclostratigraphic analysis, and geophysical or geochemical paleoclimate proxies are mostly used. If the sampling frequency is too high, the measurement and computation workload increases greatly and random noise or other non-climatic noise is introduced; if the sampling frequency is too low, the Milankovitch cycles contained in the succession may not be recognized. In order to identify an optimal sampling interval, we estimated power spectra for the theoretical daily insolation curve of the 80-100 Ma time interval and for two measured sections at three sampling intervals (a dense sampling interval, and intervals approximately equal to one quarter and one half of the sediment thickness deposited during one precession cycle), and then compared the corresponding spectral results. The comparison shows that, provided the sampling theorem is satisfied, a sampling interval of about half the thickness deposited during one precession cycle recovers all the Milankovitch signals with the least workload, and is therefore the optimal sampling interval for cyclostratigraphic analysis. In practice, this optimal sampling interval should be determined from the mean accumulation rate.
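
    A worked example of the recommended rule follows; the accumulation rate below is hypothetical, and only the half-precession-cycle rule comes from the abstract.

```python
# Optimal sampling interval = half the sediment thickness deposited in one precession cycle.
precession_period_kyr = 20.0          # ~20 kyr precession cycle (rounded textbook value)
accumulation_rate_cm_per_kyr = 5.0    # hypothetical mean accumulation rate of the section

thickness_per_cycle_cm = precession_period_kyr * accumulation_rate_cm_per_kyr   # 100 cm
optimal_interval_cm = 0.5 * thickness_per_cycle_cm                              # 50 cm
samples_per_precession_cycle = thickness_per_cycle_cm / optimal_interval_cm     # 2, the Nyquist minimum
print(optimal_interval_cm, samples_per_precession_cycle)
```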

  10. A novel algorithm for spectral interval combination optimization.

    Science.gov (United States)

    Song, Xiangzhong; Huang, Yue; Yan, Hong; Xiong, Yanmei; Min, Shungeng

    2016-12-15

    In this study, a new wavelength interval selection algorithm, named interval combination optimization (ICO), is proposed under the framework of model population analysis (MPA). In this method, the full spectrum is first divided into a fixed number of equal-width intervals. The optimal interval combination is then searched iteratively under the guidance of MPA in a soft-shrinkage manner, with weighted bootstrap sampling (WBS) employed as the random sampling method. Finally, a local search is conducted to optimize the widths of the selected intervals. Three NIR datasets were used to validate the performance of the ICO algorithm. The results show that ICO selects fewer wavelengths with better prediction performance when compared with four other wavelength selection methods, including VISSA, VISSA-iPLS, iVISSA and GA-iPLS. In addition, the computational cost of ICO is modest, benefiting from few tuning parameters and fast convergence. Copyright © 2016 Elsevier B.V. All rights reserved.
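
    The selection loop can be sketched as below. This is an illustrative skeleton only, not the authors' implementation: it uses weighted random sampling of interval combinations and soft shrinkage of the inclusion weights, omits the final local optimization of interval widths, and assumes scikit-learn for the PLS model and cross-validation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def ico_like_selection(X, y, n_intervals=40, n_models=100, n_iter=10,
                       keep_ratio=0.1, max_lv=5, seed=0):
    """Soft-shrinkage search over equal-width interval combinations (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    intervals = np.array_split(np.arange(X.shape[1]), n_intervals)
    weights = np.full(n_intervals, 0.5)                    # inclusion probability of each interval

    for _ in range(n_iter):
        pop = rng.random((n_models, n_intervals)) < weights   # weighted sampling of combinations
        pop[~pop.any(axis=1)] = True                          # avoid empty combinations
        rmsecv = np.empty(n_models)
        for i, mask in enumerate(pop):
            cols = np.concatenate([intervals[j] for j in np.flatnonzero(mask)])
            pls = PLSRegression(n_components=min(max_lv, len(cols)))
            scores = cross_val_score(pls, X[:, cols], y, cv=5,
                                     scoring="neg_root_mean_squared_error")
            rmsecv[i] = -scores.mean()
        keep = max(1, int(keep_ratio * n_models))
        best = pop[np.argsort(rmsecv)[:keep]]
        weights = best.mean(axis=0)                        # shrink softly toward informative intervals
    return np.flatnonzero(weights > 0.5), weights
```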

  11. Better Confidence Intervals for Importance Sampling

    OpenAIRE

    HALIS SAK; WOLFGANG HÖRMANN; JOSEF LEYDOLD

    2010-01-01

    It is well known that for highly skewed distributions the standard method of using the t statistic for the confidence interval of the mean does not give robust results. This is an important problem for importance sampling (IS) as its final distribution is often skewed due to a heavy tailed weight distribution. In this paper, we first explain Hall's transformation and its variants to correct the confidence interval of the mean and then evaluate the performance of these methods for two numerica...

  12. Optimal prediction intervals of wind power generation

    OpenAIRE

    Wan, Can; Wu, Zhao; Pinson, Pierre; Dong, Zhao Yang; Wong, Kit Po

    2014-01-01

    Accurate and reliable wind power forecasting is essential to power system operation. Given significant uncertainties involved in wind generation, probabilistic interval forecasting provides a unique solution to estimate and quantify the potential impacts and risks facing system operation with wind penetration beforehand. This paper proposes a novel hybrid intelligent algorithm approach to directly formulate optimal prediction intervals of wind power generation based on extreme learning machin...

  13. Optimal Approximation of Quadratic Interval Functions

    Science.gov (United States)

    Koshelev, Misha; Taillibert, Patrick

    1997-01-01

    Measurements are never absolutely accurate; as a result, after each measurement, we do not get the exact value of the measured quantity; at best, we get an interval of its possible values. For dynamically changing quantities x, the additional problem is that we cannot measure them continuously; we can only measure them at certain discrete moments of time t_1, t_2, ... If we know that the value x(t_j) at the moment t_j of the last measurement was in the interval [x^-(t_j), x^+(t_j)], and if we know the upper bound D on the rate with which x changes, then, for any given moment of time t, we can conclude that x(t) belongs to the interval [x^-(t_j) - D (t - t_j), x^+(t_j) + D (t - t_j)]. This interval changes linearly with time and is, therefore, called a linear interval function. When we process these intervals, we get an expression that is quadratic and of higher order w.r.t. time t. Such "quadratic" intervals are difficult to process and, therefore, it is necessary to approximate them by linear ones. In this paper, we describe an algorithm that gives the optimal approximation of quadratic interval functions by linear ones.
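
    A minimal helper for the linear interval function defined above; the optimal quadratic-to-linear approximation algorithm itself is not reproduced here.

```python
def linear_interval(t, t_j, x_lo_j, x_hi_j, D):
    """Guaranteed enclosure of x(t) given x(t_j) in [x_lo_j, x_hi_j] and |dx/dt| <= D."""
    dt = t - t_j          # assumes t >= t_j, i.e. time elapsed since the last measurement
    return x_lo_j - D * dt, x_hi_j + D * dt

print(linear_interval(t=2.5, t_j=2.0, x_lo_j=9.8, x_hi_j=10.2, D=0.4))   # (9.6, 10.4)
```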

  14. Optimal prediction intervals of wind power generation

    DEFF Research Database (Denmark)

    Wan, Can; Wu, Zhao; Pinson, Pierre

    2014-01-01

    Accurate and reliable wind power forecasting is essential to power system operation. Given the significant uncertainties involved in wind generation, probabilistic interval forecasting provides a unique solution to estimate and quantify beforehand the potential impacts and risks facing system operation with wind penetration. This paper proposes a novel hybrid intelligent algorithm approach to directly formulate optimal prediction intervals of wind power generation based on extreme learning machine and particle swarm optimization. Prediction intervals with associated confidence levels are generated through ... conducted. Compared with the benchmarks applied, experimental results demonstrate the high efficiency and reliability of the developed approach. The proposed method thus provides a new generalized framework for probabilistic wind power forecasting with high reliability and flexibility ...

  15. On the concept of optimality interval

    Directory of Open Access Journals (Sweden)

    Lluís Bibiloni

    2002-01-01

    best approximations to the numbers they converge to in two ways known as the first and the second kind. This property of continued fractions provides a solution to Gosper's problem of the batting average: if the batting average of a baseball player is 0.334, what is the minimum number of times he has been at bat? In this paper, we tackle somehow the inverse question: given a rational number P/Q, what is the set of all numbers for which P/Q is a best approximation of one or the other kind? We prove that in both cases these optimality sets are intervals and we give a precise description of their endpoints.

  16. DYNAMIC OPTIMIZATION FOR UNCERTAIN STRUCTURES USING INTERVAL METHOD

    Institute of Scientific and Technical Information of China (English)

    Chen Su-Huan; Wu Jie; Liu Chun

    2003-01-01

    An interval optimization method for the dynamic response of structures with interval parameters is presented. The matrices of structures with interval parameters are given. Combining the interval extension with the perturbation method, a method for interval dynamic response analysis is derived. The interval optimization problem is transformed into a corresponding deterministic one. Because the mean values and the uncertainties of the interval parameters can be selected as design variables, more information about the optimization results can be obtained by the present method than by the deterministic one. The present method is implemented for a truss structure. The numerical results show that the method is effective.

  17. Sampling Theorem in Terms of the Bandwidth and Sampling Interval

    Science.gov (United States)

    Dean, Bruce H.

    2011-01-01

    An approach has been developed for interpolating non-uniformly sampled data, with applications in signal and image reconstruction. This innovation generalizes the Whittaker-Shannon sampling theorem by emphasizing two assumptions explicitly (the definition of a band-limited function and construction by periodic extension). The Whittaker-Shannon sampling theorem is thus expressed in terms of two fundamental length scales that are derived from these assumptions. The result is more general than what is usually reported, and contains the Whittaker-Shannon form as a special case corresponding to Nyquist-sampled data. The approach also shows that the preferred basis set for interpolation is found by varying the frequency component of the basis functions in an optimal way.
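
    For reference, the special case mentioned above (classical Whittaker-Shannon reconstruction from uniform, Nyquist-rate samples) can be sketched as follows; the generalized, non-uniform version of the report is not reproduced.

```python
import numpy as np

def sinc_interpolate(samples, dt, t_query):
    """Whittaker-Shannon reconstruction x(t) = sum_n x[n] * sinc((t - n*dt)/dt)."""
    n = np.arange(len(samples))
    t_query = np.atleast_1d(t_query).astype(float)
    # np.sinc(u) = sin(pi*u)/(pi*u), so the pi factor is already included
    return np.sum(samples * np.sinc((t_query[:, None] - n[None, :] * dt) / dt), axis=1)

dt = 0.1                                   # sampling interval in seconds
t_s = np.arange(0, 1, dt)
x_s = np.sin(2 * np.pi * 2.0 * t_s)        # 2 Hz tone, well below the 5 Hz Nyquist limit
print(sinc_interpolate(x_s, dt, [0.05, 0.123]))
```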

  18. Sampling-interval-dependent stability for linear sampled-data systems with non-uniform sampling

    Science.gov (United States)

    Shao, Hanyong; Lam, James; Feng, Zhiguang

    2016-09-01

    This paper is concerned with the sampling-interval-dependent stability of linear sampled-data systems with non-uniform sampling. A new Lyapunov-like functional is constructed to derive sampling-interval-dependent stability results. The Lyapunov-like functional has three features. First, it depends on time explicitly. Second, it may be discontinuous at the sampling instants. Third, it is not required to be positive definite between sampling instants. Moreover, the new Lyapunov-like functional can make full use of the information of the sampled-data system, including that at both ends of the sampling interval. By establishing a new proposition for the Lyapunov-like functional, a sampling-interval-dependent stability criterion with reduced conservatism is derived. The new sampling-interval-dependent stability criterion is further extended to linear sampled-data systems with polytopic uncertainties. Finally, examples are given to illustrate the reduced conservatism of the stability criteria.

  19. Design of optimized Interval Arithmetic Multiplier

    Directory of Open Access Journals (Sweden)

    Rajashekar B.Shettar

    2011-07-01

    Full Text Available Many DSP and control applications require the user to know how various numerical errors (uncertainty) affect the result. This uncertainty is eliminated by replacing non-interval values with intervals. Since most DSPs operate in real-time environments, fast processors are required to implement interval arithmetic. The goal is to develop a platform in which interval arithmetic operations are performed at the same computational speed as present-day signal processors. We have therefore proposed the design and implementation of an interval arithmetic multiplier, which operates on IEEE 754 numbers. The proposed unit consists of a floating-point CSD multiplier and an interval operation selector. This architecture implements an algorithm which is faster than the conventional interval multiplier algorithm. The cost overhead of the proposed unit is 30% with respect to a conventional floating-point multiplier. The performance of the proposed architecture is better than that of a conventional CSD floating-point multiplier, as it can perform both interval multiplication and floating-point multiplication as well as interval comparisons.
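
    The underlying interval product such a unit computes is the standard endpoint rule shown below (a plain software sketch; a hardware IEEE 754 implementation would additionally apply outward, i.e. directed, rounding to keep the enclosure guaranteed).

```python
def interval_mul(a, b):
    """[a1, a2] * [b1, b2] = [min of the four endpoint products, max of them]."""
    a1, a2 = a
    b1, b2 = b
    products = (a1 * b1, a1 * b2, a2 * b1, a2 * b2)
    return min(products), max(products)

print(interval_mul((-2.0, 3.0), (4.0, 5.0)))   # (-10.0, 15.0)
```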

  20. Discrete-time optimal control and games on large intervals

    CERN Document Server

    Zaslavski, Alexander J

    2017-01-01

    Devoted to the structure of approximate solutions of discrete-time optimal control problems and approximate solutions of dynamic discrete-time two-player zero-sum games, this book presents results on properties of approximate solutions that are independent of the length of the interval, for all sufficiently large intervals. Results concerning the so-called turnpike property of optimal control problems and zero-sum games in the regions close to the endpoints of the time intervals are the main focus of this book. The description of the structure of approximate solutions on sufficiently large intervals and its stability will interest graduate students and mathematicians in optimal control and game theory, engineering, and economics. This book begins with a brief overview and moves on to analyze the structure of approximate solutions of autonomous nonconcave discrete-time optimal control Lagrange problems. Next, the structures of approximate solutions of autonomous discrete-time optimal control problems that are discret...

  1. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
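
    For readers unfamiliar with the three recording methods being simulated, the sketch below scores a Boolean event record with each of them and compares the results with the true proportion of time the event occupied. It is a minimal re-implementation of the standard definitions, not the authors' simulation program.

```python
import numpy as np

def score_interval_methods(event, interval_len):
    """event: boolean array with one entry per time unit of the observation period."""
    n_int = len(event) // interval_len
    chunks = event[: n_int * interval_len].reshape(n_int, interval_len)
    return {
        "true proportion": event.mean(),
        "partial-interval": chunks.any(axis=1).mean(),    # scored if the event occurs at any point
        "whole-interval": chunks.all(axis=1).mean(),      # scored only if the event fills the interval
        "momentary time sampling": chunks[:, -1].mean(),  # scored at the last instant of each interval
    }

rng = np.random.default_rng(0)
event = rng.random(3600) < 0.2                        # a 1-hour period, event active ~20% of the time
print(score_interval_methods(event, interval_len=10))  # PIR overestimates, WIR underestimates
```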

  2. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors was common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  3. Estimation of individual reference intervals in small sample sizes

    DEFF Research Database (Denmark)

    Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz

    2007-01-01

    ... of that order of magnitude for all topics in question. Therefore, new methods to estimate reference intervals for small sample sizes are needed. We present an alternative method based on variance component models. The models are based on data from 37 men and 84 women, taking into account biological variation ... presented in this study. The presented method enables occupational health researchers to calculate reference intervals for specific groups, i.e. smokers versus non-smokers, etc. In conclusion, the variance component models provide an appropriate tool to estimate reference intervals based on small sample ...

  4. Optimal sampling of paid content

    OpenAIRE

    Halbheer, Daniel; Stahl, Florian; Koenigsberg, Oded; Lehmann, Donald R

    2011-01-01

    This paper analyzes optimal sampling and pricing of paid content for publishers of news websites. Publishers offer free content samples both to disclose journalistic quality to consumers and to generate online advertising revenues. We examine sampling where the publisher sets the number of free sample articles and consumers select the articles of their choice. Consumers learn from the free samples in a Bayesian fashion and base their subscription decisions on posterior quality expectations. We...

  5. An uncertain multidisciplinary design optimization method using interval convex models

    Science.gov (United States)

    Li, Fangyi; Luo, Zhen; Sun, Guangyong; Zhang, Nong

    2013-06-01

    This article proposes an uncertain multi-objective multidisciplinary design optimization methodology, which employs the interval model to represent the uncertainties of uncertain-but-bounded parameters. The interval number programming method is applied to transform each uncertain objective function into two deterministic objective functions, and a satisfaction degree of intervals is used to convert both the uncertain inequality and equality constraints to deterministic inequality constraints. In doing so, an unconstrained deterministic optimization problem will be constructed in association with the penalty function method. The design will be finally formulated as a nested three-loop optimization, a class of highly challenging problems in the area of engineering design optimization. An advanced hierarchical optimization scheme is developed to solve the proposed optimization problem based on the multidisciplinary feasible strategy, which is a well-studied method able to reduce the dimensions of multidisciplinary design optimization problems by using the design variables as independent optimization variables. In the hierarchical optimization system, the non-dominated sorting genetic algorithm II, sequential quadratic programming method and Gauss-Seidel iterative approach are applied to the outer, middle and inner loops of the optimization problem, respectively. Typical numerical examples are used to demonstrate the effectiveness of the proposed methodology.
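
    One common interval-number-programming transformation consistent with the description above (stated here generically; the article combines it with a satisfaction degree of intervals for the constraints and a penalty function) replaces an interval-valued objective with its midpoint and radius:

```latex
% f(x,U) is only known to lie in [f^L(x), f^R(x)] for uncertain-but-bounded U
\min_{x}\;\bigl(m(x),\,w(x)\bigr),\qquad
m(x)=\tfrac{1}{2}\bigl(f^{L}(x)+f^{R}(x)\bigr),\qquad
w(x)=\tfrac{1}{2}\bigl(f^{R}(x)-f^{L}(x)\bigr).
```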

  6. Relativistic rise measurements with very fine sampling intervals

    Energy Technology Data Exchange (ETDEWEB)

    Ludlam, T.; Platner, E.D.; Polychronakos, V.A.; Lindenbaum, S.J.; Kramer, M.A.; Teramoto, Y.

    1980-01-01

    The motivation of this work was to determine whether the technique of charged particle identification via the relativistic rise in the ionization loss can be significantly improved by virtue of very small sampling intervals. A fast-sampling ADC and a longitudinal drift geometry were used to provide a large number of samples from a single drift chamber gap, achieving sampling intervals roughly 10 times smaller than any previous study. A single layer drift chamber was used, and tracks of 1 meter length were simulated by combining together samples from many identified particles in this detector. These data were used to study the resolving power for particle identification as a function of sample size, averaging technique, and the number of discrimination levels (ADC bits) used for pulse height measurements.

  7. Process control and optimization with simple interval calculation method

    DEFF Research Database (Denmark)

    Pomerantsev, A.; Rodionova, O.; Høskuldsson, Agnar

    2006-01-01

    Methods of process control and optimization are presented and illustrated with a real world example. The optimization methods are based on the PLS block modeling as well as on the simple interval calculation methods of interval prediction and object status classification. It is proposed to employ the series of expanding PLS/SIC models in order to support the on-line process improvements. This method helps to predict the effect of planned actions on the product quality and thus enables passive quality control. We have also considered an optimization approach that proposes the correcting actions for the quality improvement in the course of production. The latter is an active quality optimization, which takes into account the actual history of the process. The advocated approach is allied to the conventional method of multivariate statistical process control (MSPC) as it also employs the historical process...

  8. β-NMR sample optimization

    CERN Document Server

    Zakoucka, Eva

    2013-01-01

    During my summer student programme I was working on sample optimization for a new β-NMR project at the ISOLDE facility. The β-NMR technique is well-established in solid-state physics and just recently it is being introduced for applications in biochemistry and life sciences. The β-NMR collaboration will be applying for beam time to the INTC committee in September for three nuclei: Cu, Zn and Mg. Sample optimization for Mg was already performed last year during the summer student programme. Therefore sample optimization for Cu and Zn had to be completed as well for the project proposal. My part in the project was to perform thorough literature research on techniques studying Cu and Zn complexes in native conditions, search for relevant binding candidates for Cu and Zn applicable for β-NMR and eventually evaluate selected binding candidates using UV-VIS spectrometry.

  9. A new method for wavelength interval selection that intelligently optimizes the locations, widths and combinations of the intervals.

    Science.gov (United States)

    Deng, Bai-Chuan; Yun, Yong-Huan; Ma, Pan; Lin, Chen-Chen; Ren, Da-Bing; Liang, Yi-Zeng

    2015-03-21

    In this study, a new algorithm for wavelength interval selection, known as interval variable iterative space shrinkage approach (iVISSA), is proposed based on the VISSA algorithm. It combines global and local searches to iteratively and intelligently optimize the locations, widths and combinations of the spectral intervals. In the global search procedure, it inherits the merit of soft shrinkage from VISSA to search the locations and combinations of informative wavelengths, whereas in the local search procedure, it utilizes the information of continuity in spectroscopic data to determine the widths of wavelength intervals. The global and local search procedures are carried out alternatively to realize wavelength interval selection. This method was tested using three near infrared (NIR) datasets. Some high-performing wavelength selection methods, such as synergy interval partial least squares (siPLS), moving window partial least squares (MW-PLS), competitive adaptive reweighted sampling (CARS), genetic algorithm PLS (GA-PLS) and interval random frog (iRF), were used for comparison. The results show that the proposed method is very promising with good results both on prediction capability and stability. The MATLAB codes for implementing iVISSA are freely available on the website: .

  10. Binomial Distribution Sample Confidence Intervals Estimation 6. Excess Risk

    Directory of Open Access Journals (Sweden)

    Sorana BOLBOACĂ

    2004-02-01

    Full Text Available We present the problem of confidence interval estimation for the excess risk (the Y/n - X/m fraction), a parameter which allows evaluating the specificity of an association between predisposing or causal factors and disease in medical studies. The parameter is computed based on a 2x2 contingency table and qualitative variables. The aim of this paper is to introduce new methods of computing confidence intervals for excess risk, called DAC, DAs, DAsC, DBinomial, and DBinomialC, and to compare their performance with the asymptotic method called here DWald. In order to assess the methods, the PHP programming language was used and a PHP program was created. The performance of each method for different sample sizes and different values of the binomial variables was assessed using a set of criteria. First, the upper and lower boundaries for a given X, Y and a specified sample size were computed for the chosen methods. Second, the average and standard deviation of the experimental errors, and the deviation relative to the imposed significance level α = 5%, were assessed. The methods were assessed on random values of the binomial variables and on sample sizes from 4 to 1000. The experiments show that the DAC method performs well in confidence interval estimation for excess risk.
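
    For orientation, the asymptotic (Wald-type) interval that serves as the baseline above has the textbook form below for a difference of two proportions. This is the generic formula, offered as an assumption about what the DWald baseline denotes, not code from the paper.

```python
import math

def wald_ci_excess_risk(x, m, y, n, z=1.96):
    """Asymptotic (Wald) confidence interval for the excess risk Y/n - X/m."""
    p1, p2 = y / n, x / m
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / m)
    return diff - z * se, diff + z * se

print(wald_ci_excess_risk(x=12, m=100, y=30, n=100))   # roughly (0.07, 0.29)
```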

  11. Optimal interval for major maintenance actions in electricity distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Louit, Darko; Pascual, Rodrigo [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna MacKenna, 4860 Santiago (Chile); Banjevic, Dragan [Centre for Maintenance Optimization and Reliability Engineering, University of Toronto, 5 King' s College Rd., Toronto, Ontario (Canada)

    2009-09-15

    Many systems require the periodic undertaking of major (preventive) maintenance actions (MMAs) such as overhauls in mechanical equipment, reconditioning of train lines, resurfacing of roads, etc. In the long term, these actions contribute to achieving a lower rate of occurrence of failures, though in many cases they increase the intensity of the failure process shortly after performed, resulting in a non-monotonic trend for failure intensity. Also, in the special case of distributed assets such as communications and energy networks, pipelines, etc., it is likely that the maintenance action takes place sequentially over an extended period of time, implying that different sections of the network underwent the MMAs at different periods. This forces the development of a model based on a relative time scale (i.e. time since last major maintenance event) and the combination of data from different sections of a grid, under a normalization scheme. Additionally, extended maintenance times and sequential execution of the MMAs make it difficult to identify failures occurring before and after the preventive maintenance action. This results in the loss of important information for the characterization of the failure process. A simple model is introduced to determine the optimal MMA interval considering such restrictions. Furthermore, a case study illustrates the optimal tree trimming interval around an electricity distribution network. (author)

  12. The influence of sampling interval on the accuracy of trail impact assessment

    Science.gov (United States)

    Leung, Y.-F.; Marion, J.L.

    1999-01-01

    Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The responses of accuracy loss on lineal extent estimates to increasing sampling intervals varied across different impact types, while the responses on frequency of occurrence estimates were consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sample intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question rather than the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing efforts in data collection.

  13. Flood frequency analysis using multi-objective optimization based interval estimation approach

    Science.gov (United States)

    Kasiviswanathan, K. S.; He, Jianxun; Tay, Joo-Hwa

    2017-02-01

    Flood frequency analysis (FFA) is a necessary tool for water resources management and water infrastructure design. Owing to the existence of variability in sample representation, distribution selection, and distribution parameter estimation, flood quantile estimation is subjected to various levels of uncertainty, which is not negligible and avoidable. Hence, alternative methods to the conventional approach of FFA are desired for quantifying the uncertainty such as in the form of prediction interval. The primary focus of the paper was to develop a novel approach to quantify and optimize the prediction interval resulted from the non-stationarity of data set, which is reflected in the distribution parameters estimated, in FFA. This paper proposed the combination of the multi-objective optimization approach and the ensemble simulation technique to determine the optimal perturbations of distribution parameters for constructing the prediction interval of flood quantiles in FFA. To demonstrate the proposed approach, annual maximum daily flow data collected from two gauge stations on the Bow River, Alberta, Canada, were used. The results suggest that the proposed method can successfully capture the uncertainty in quantile estimates qualitatively using the prediction interval, as the number of observations falling within the constructed prediction interval is approximately maximized while the prediction interval is minimized.

  14. Optimization of Allowed Outage Time and Surveillance Test Intervals

    Energy Technology Data Exchange (ETDEWEB)

    Al-Dheeb, Mujahed; Kang, Sunkoo; Kim, Jonghyun [KEPCO international nuclear graduate school, Ulsan (Korea, Republic of)

    2015-10-15

    The primary purpose of surveillance testing is to assure that the components of standby safety systems will be operable when they are needed in an accident. By testing these components, failures can be detected that may have occurred since the last test or since the time when the equipment was last known to be operational. The probability that a system or system component performs a specified function or mission under given conditions at a prescribed time is called availability (A). Unavailability (U), as a risk measure, is just the complementary probability to A(t). An increase of U means the risk is increased as well. D and T have an important impact on component or system unavailability. The extension of D impacts the maintenance duration distributions for at-power operations, making them longer; this, in turn, increases the unavailability due to maintenance in the systems analysis. As for T, overly frequent surveillances can result in high system unavailability, because the system may be taken out of service often due to the surveillance itself and due to the repair of test-caused failures of the component. The test-caused failures include those incurred by wear and tear of the component due to the surveillances. On the other hand, as the surveillance interval increases, the component's unavailability will grow because of increased occurrences of time-dependent random failures. In that situation, the component cannot be relied upon, and accordingly the system unavailability will increase. Thus, there should be an optimal component surveillance interval in terms of the corresponding system availability. This paper aims at finding the optimal T and D which result in minimum unavailability, which in turn reduces the risk. The methodology in Section 2 is applied to find the optimal values of T and D for two components, i.e., the safety injection pump (SIP) and the turbine-driven auxiliary feedwater pump (TDAFP). Section 4 addresses the interaction between D and T. In general
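
    The qualitative trade-off for T described above is often captured by the textbook approximation for a periodically tested standby component, U(T) ≈ λT/2 + τ/T, whose minimum gives a closed-form optimal test interval. This is a generic illustration only (the abstract's model also involves D and test-caused failures), and the numbers are made up.

```python
import math

failure_rate = 1e-4      # lambda: standby failure rate per hour (assumed value)
test_outage = 4.0        # tau: hours the component is unavailable per surveillance test (assumed)

def mean_unavailability(T):
    """U(T) ~ lambda*T/2 (undetected random failures) + tau/T (test-caused outage)."""
    return failure_rate * T / 2.0 + test_outage / T

T_opt = math.sqrt(2.0 * test_outage / failure_rate)    # from dU/dT = 0
print(f"T_opt = {T_opt:.0f} h, U(T_opt) = {mean_unavailability(T_opt):.3e}")   # ~283 h, ~2.8e-2
```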

  15. Sample Size for the "Z" Test and Its Confidence Interval

    Science.gov (United States)

    Liu, Xiaofeng Steven

    2012-01-01

    The statistical power of a significance test is closely related to the length of the confidence interval (i.e. estimate precision). In the case of a "Z" test, the length of the confidence interval can be expressed as a function of the statistical power. (Contains 1 figure and 1 table.)
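
    The relation alluded to above can be written out for the simplest setting, a one-sample two-sided z test with known σ and sample size chosen for power 1-β against an effect size δ (this concrete setting is an assumption; the article states the general relation):

```latex
n=\left(\frac{(z_{1-\alpha/2}+z_{1-\beta})\,\sigma}{\delta}\right)^{2}
\quad\Longrightarrow\quad
\text{CI length}=\frac{2\,z_{1-\alpha/2}\,\sigma}{\sqrt{n}}
=\frac{2\,z_{1-\alpha/2}\,\delta}{z_{1-\alpha/2}+z_{1-\beta}} .
```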

  17. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    Full Text Available The aim of the paper was to present the usefulness of the binomial distribution in the study of contingency tables and the problems of approximation to normality of the binomial distribution (the limits, advantages, and disadvantages). The classification of the medical key parameters reported in the medical literature, and their expression in terms of contingency table units based on their mathematical expressions, restricts the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information starting from the computed confidence interval for a specified method, information like confidence interval boundaries, percentages of the experimental errors, the standard deviation of the experimental errors and the deviation relative to the significance level, was solved through the implementation of original algorithms in the PHP programming language. The cases of expressions that contain two binomial variables were treated separately. An original method of computing the confidence interval for the case of a two-variable expression was proposed and implemented. The graphical representation of an expression of two binomial variables, for which the variation domain of one variable depends on the other variable, was a real problem, because most software uses interpolation for graphical representation and the surface maps were quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP in order to represent the triangular surface plots graphically. All the implementations described above were used in computing the confidence intervals and estimating their performance for binomial distribution sample sizes and variables.

  18. On selection of the optimal data time interval for real-time hydrological forecasting

    Directory of Open Access Journals (Sweden)

    J. Liu

    2013-09-01

    Full Text Available With the advancement in modern telemetry and communication technologies, hydrological data can be collected with an increasingly higher sampling rate. An important issue deserving attention from the hydrological community is which suitable time interval of the model input data should be chosen in hydrological forecasting. Such a problem has long been recognised in the control engineering community but is a largely ignored topic in operational applications of hydrological forecasting. In this study, the intrinsic properties of rainfall–runoff data with different time intervals are first investigated from the perspectives of the sampling theorem and the information loss using the discrete wavelet transform tool. It is found that rainfall signals with very high sampling rates may not always improve the accuracy of rainfall–runoff modelling due to the catchment low-pass-filtering effect. To further investigate the impact of a data time interval in real-time forecasting, a real-time forecasting system is constructed by incorporating the probability distributed model (PDM with a real-time updating scheme, the autoregressive moving-average (ARMA model. Case studies are then carried out on four UK catchments with different concentration times for real-time flow forecasting using data with different time intervals of 15, 30, 45, 60, 90 and 120 min. A positive relation is found between the forecast lead time and the optimal choice of the data time interval, which is also highly dependent on the catchment concentration time. Finally, based on the conclusions from the case studies, a hypothetical pattern is proposed in three-dimensional coordinates to describe the general impact of the data time interval and to provide implications of the selection of the optimal time interval in real-time hydrological forecasting. Although nowadays most operational hydrological systems still have low data sampling rates (daily or hourly, the future is that higher

  19. On selection of the optimal data time interval for real-time hydrological forecasting

    Directory of Open Access Journals (Sweden)

    J. Liu

    2012-09-01

    Full Text Available With the advancement in modern telemetry and communication technologies, hydrological data can be collected with an increasingly higher sampling rate. An important issue deserving attention from the hydrological community is what suitable time interval of the model input data should be chosen in hydrological forecasting. Such a problem has long been recognised in the control engineering community but is a largely ignored topic in operational applications of hydrological forecasting. In this study, the intrinsic properties of rainfall-runoff data with different time intervals are first investigated from the perspectives of the sampling theorem and the information loss using the discrete wavelet decomposition tool. It is found that rainfall signals with very high sampling rates may not always improve the accuracy of rainfall-runoff modelling due to the catchment low-pass filtering effect. To further investigate the impact of data time interval in real-time forecasting, a real-time forecasting system is constructed by incorporating the Probability Distributed Model (PDM with a real-time updating scheme, the autoregressive-moving average (ARMA model. Case studies are then carried out on four UK catchments with different concentration times for real-time flow forecasting using data with different time intervals of 15 min, 30 min, 45 min, 60 min, 90 min and 120 min. A positive relation is found between the forecast lead time and the optimal choice of the data time interval, which is also highly dependent on the catchment concentration time. Finally, based on the conclusions from the case studies, a hypothetical pattern is proposed in three-dimensional coordinates to describe the general impact of the data time interval and to provide implications on the selection of the optimal time interval in real-time hydrological forecasting. Although nowadays most operational hydrological systems still have low data sampling rates (daily or hourly, the trend in

  20. Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt; Sørensen, Michael

    Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale estimating functions under which estimators are consistent, rate optimal, and efficient under high frequency (in-fill) asymptotics. The asymptotic distributions of the estimators are shown to be normal variance-mixtures, where the mixing distribution generally depends on the full sample path of the diffusion...

  1. Sampling scheme optimization from hyperspectral data

    NARCIS (Netherlands)

    Debba, P.

    2006-01-01

    This thesis presents statistical sampling scheme optimization for geo-environmental purposes on the basis of hyperspectral data. It integrates derived products of the hyperspectral remote sensing data into individual sampling schemes. Five different issues are being dealt with. First, the optimized...

  3. Optimizing calibration intervals for specific applications to reduce maintenance costs

    Energy Technology Data Exchange (ETDEWEB)

    Collier, Steve; Holland, Jack [Servomex Group, Crowborough (United Kingdom)

    2009-11-01

    The introduction of the Servomex MultiExact 5400 analyzer has presented an opportunity to review the cost of ownership and how improvements to an analyzer's performance may be used to reduce this. Until now, gas analyzer manufacturers have taken a conservative approach to calibration intervals based on site practices and experience covering a wide range of applications. However, if specific applications are considered, then there is an opportunity to reduce costs by increasing calibration intervals. This paper demonstrates how maintenance costs may be reduced by increasing calibration intervals for those gas analyzers used for monitoring Air Separation Units (ASUs) without detracting from their performance.(author)

  4. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  5. Field sampling scheme optimization using simulated annealing

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-10-01

    Full Text Available ... to derive optimal sampling schemes. In the study of electro-magnetic physics, when energy in the form of light interacts with a material, part of the energy at certain wavelengths is absorbed, transmitted, or emitted...

  6. Towards an Optimal Interval for Prostate Cancer Screening

    NARCIS (Netherlands)

    van Leeuwen, Pim J.; Roobol, Monique J.; Kranse, Ries; Zappa, Marco; Carlsson, Sigrid; Bul, Meelan; Zhu, Xiaoye; Bangma, Chris H.; Schroder, Fritz H.; Hugosson, Jonas

    2012-01-01

    Background: The rate of decrease in advanced cancers is an estimate for determining prostate cancer (PCa) screening program effectiveness. Objective: Assess the effectiveness of PCa screening programs using a 2- or 4-yr screening interval. Design, setting, and participants: Men aged 55-64 yr were pa

  7. A Variable Sampling Interval Synthetic Xbar Chart for the Process Mean.

    Directory of Open Access Journals (Sweden)

    Lei Yong Lee

    Full Text Available The usual practice of using a control chart to monitor a process is to take samples from the process with a fixed sampling interval (FSI). In this paper, a synthetic Xbar control chart with the variable sampling interval (VSI) feature is proposed for monitoring changes in the process mean. The VSI synthetic Xbar chart integrates the VSI Xbar chart and the VSI conforming run length (CRL) chart. The proposed VSI synthetic Xbar chart is evaluated using the average time to signal (ATS) criterion. The optimal charting parameters of the proposed chart are obtained by minimizing the out-of-control ATS for a desired shift. Comparisons between the VSI synthetic Xbar chart and the existing Xbar, synthetic Xbar, VSI Xbar and EWMA Xbar charts, in terms of ATS, are made. The ATS results show that the VSI synthetic Xbar chart outperforms the other Xbar-type charts for detecting moderate and large shifts. An illustrative example is also presented to explain the application of the VSI synthetic Xbar chart.

  8. Effects of Spatial Sampling Interval on Roughness Parameters and Microwave Backscatter over Agricultural Soil Surfaces

    Directory of Open Access Journals (Sweden)

    Matías Ernesto Barber

    2016-06-01

    Full Text Available The spatial sampling interval, as related to the ability to digitize a soil profile with a certain number of features per unit length, depends on the profiling technique itself. From a variety of profiling techniques, roughness parameters are estimated at different sampling intervals. Since soil profiles have continuous spectral components, it is clear that roughness parameters are influenced by the sampling interval of the measurement device employed. In this work, we contribute to answering at which sampling interval profiles need to be measured to accurately account for the microwave response of agricultural surfaces. For this purpose, a 2-D laser profiler was built and used to measure surface soil roughness at field scale over agricultural sites in Argentina. Sampling intervals ranged from large (50 mm) to small ones (1 mm), with several intermediate values. Large- and intermediate-sampling-interval profiles were synthetically derived from nominal, 1 mm ones. With these data, the effect of sampling-interval-dependent roughness parameters on the backscatter response was assessed using the theoretical backscatter model IEM2M. Simulations demonstrated that variations of roughness parameters depended on the working wavelength and were less important at L-band than at C- or X-band. In any case, an underestimation of the backscattering coefficient of about 1-4 dB was observed at larger sampling intervals. As a general rule, a sampling interval of 15 mm can be recommended for L-band and 5 mm for C-band.
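
    A minimal sketch of how the two usual roughness parameters (RMS height s and correlation length l) can be recomputed after decimating a measured profile to a coarser sampling interval. This is generic post-processing, not the authors' processing chain, and the synthetic profile is made up.

```python
import numpy as np

def roughness_params(profile, dx):
    """RMS height and 1/e correlation length of a detrended 1-D height profile (length units of dx)."""
    z = profile - profile.mean()
    s = z.std(ddof=1)
    acf = np.correlate(z, z, mode="full")[len(z) - 1:]
    acf /= acf[0]
    below = np.flatnonzero(acf < 1.0 / np.e)
    corr_len = dx * below[0] if below.size else np.nan   # first lag where the ACF drops below 1/e
    return s, corr_len

rng = np.random.default_rng(1)
profile_1mm = np.cumsum(rng.normal(0, 0.05, 2000))       # synthetic 2 m profile at 1 mm spacing
for k in (1, 5, 15, 50):                                 # decimate to 1, 5, 15 and 50 mm intervals
    s, l = roughness_params(profile_1mm[::k], dx=1.0 * k)
    print(f"interval {k:2d} mm: s = {s:.3f}, l = {l:.0f} mm")
```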

  9. Interval Analysis: Contributions to static and dynamic optimization

    NARCIS (Netherlands)

    De Weerdt, E.

    2010-01-01

    The field of global optimization has been an active one for many years. By far the most applied methods are gradient-based and evolutionary algorithms. The main drawback of those types of methods is that one cannot guarantee that the global solution is found within finite time. Moreover, i

  10. RF power consumption emulation optimized with interval valued homotopies

    DEFF Research Database (Denmark)

    Musiige, Deogratius; Anton, François; Yatskevich, Vital

    2011-01-01

    This paper presents a methodology towards the emulation of the electrical power consumption of the RF device during the cellular phone/handset transmission mode using the LTE technology. The emulation methodology takes the physical environmental variables and the logical interface between the baseband and the RF system as inputs to compute the emulated power dissipation of the RF device. The emulated power, in between the measured points corresponding to the discrete values of the logical interface parameters, is computed as a polynomial interpolation using polynomial basis functions. The evaluation of polynomial and spline curve fitting models showed a respective divergence (test error) of 8% and 0.02% from the physically measured power consumption. The precisions of the instruments used for the physical measurements have been modeled as intervals. We have been able to model the power...

  11. A parallel optimization method for product configuration and supplier selection based on interval

    Science.gov (United States)

    Zheng, Jian; Zhang, Meng; Li, Guoxi

    2017-06-01

    In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine the product configuration and supplier selection, and express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions to the interval multiobjective optimization model.

  12. Analysis and optimization of weighted ensemble sampling

    CERN Document Server

    Aristoff, David

    2016-01-01

    We give a mathematical framework for weighted ensemble (WE) sampling, a binning and resampling technique for efficiently computing probabilities in molecular dynamics. We prove that WE sampling is unbiased in a very general setting that includes adaptive binning. We show that when WE is used for stationary calculations in tandem with a Markov state model (MSM), the MSM can be used to optimize the allocation of replicas in the bins.
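
    As background to the framework analysed here, the sketch below shows one simplified form of the WE resampling step: walkers in each occupied bin are split or merged to a fixed number of replicas while the bin's total statistical weight is preserved. The bin edges, replica count and walker data are assumptions chosen only for illustration.

```python
# Minimal sketch of one weighted-ensemble (WE) resampling step. This is a
# simplified illustration, not the general framework analysed in the paper.
import numpy as np

def we_resample(positions, weights, bin_edges, replicas_per_bin=4, seed=0):
    rng = np.random.default_rng(seed)
    bins = np.digitize(positions, bin_edges)
    new_pos, new_w = [], []
    for b in np.unique(bins):
        idx = np.where(bins == b)[0]
        w_bin = weights[idx].sum()
        # pick replicas_per_bin walkers proportional to their weight,
        # then give each an equal share of the bin's total weight
        chosen = rng.choice(idx, size=replicas_per_bin, replace=True,
                            p=weights[idx] / w_bin)
        new_pos.extend(positions[chosen])
        new_w.extend([w_bin / replicas_per_bin] * replicas_per_bin)
    return np.array(new_pos), np.array(new_w)

pos = np.random.default_rng(1).normal(0, 1, 20)
w = np.full(20, 1 / 20)
edges = np.linspace(-2, 2, 5)
pos2, w2 = we_resample(pos, w, edges)
print("total weight before/after:", w.sum(), w2.sum())  # both 1.0
```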

  13. Optimality Condition and Wolfe Duality for Invex Interval-Valued Nonlinear Programming Problems

    Directory of Open Access Journals (Sweden)

    Jianke Zhang

    2013-01-01

    Full Text Available The concepts of preinvex and invex are extended to the interval-valued functions. Under the assumption of invexity, the Karush-Kuhn-Tucker optimality sufficient and necessary conditions for interval-valued nonlinear programming problems are derived. Based on the concepts of having no duality gap in weak and strong sense, the Wolfe duality theorems for the invex interval-valued nonlinear programming problems are proposed in this paper.
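
    For reference, the classical real-valued Karush-Kuhn-Tucker conditions that such interval-valued results generalize are recalled below; in the interval-valued setting, conditions of this kind are typically imposed on the lower and upper end-point functions of the objective under (pre)invexity. The notation is generic and is not taken from the paper.

```latex
% Classical KKT conditions for  min f(x)  s.t.  g_i(x) <= 0, i = 1,...,m,
% stated for a real-valued objective f; interval-valued versions replace f by
% the lower and upper end-point functions of the interval-valued objective.
\begin{align*}
  &\nabla f(x^{*}) + \sum_{i=1}^{m} \mu_i \,\nabla g_i(x^{*}) = 0, \\
  &\mu_i \, g_i(x^{*}) = 0, \quad \mu_i \ge 0, \quad g_i(x^{*}) \le 0,
  \qquad i = 1,\dots,m .
\end{align*}
```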

  14. Identification of optimal inspection interval via delay-time concept

    Directory of Open Access Journals (Sweden)

    Glauco Ricardo Simões Gomes

    2016-06-01

    Full Text Available This paper presents an application of mathematical modeling aimed at managing maintenance based on the delay-time concept. The study scenario was the manufacturing sector of an industrial unit, which operates 24 hours a day in a continuous flow of production. The main idea was to use the concepts of this approach to determine the optimal time of preventive action by the maintenance department in order to ensure the greatest availability of equipment and facilities at appropriate maintenance costs. After a brief introduction of the subject, the article presents topics that illustrate the importance of mathematical modeling in maintenance management and the delay-time concept. It also describes the characteristics of the company where the study was conducted, as well as the data related to the production process and maintenance actions. Finally, the results obtained after applying the delay-time concept are presented and discussed, as well as the limitations of the article and the proposals for future research.
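
    As background, the classic single-component delay-time inspection model (usually credited to Christer and Waller) expresses the expected downtime per unit time as a function of the inspection interval T. The textbook form below is shown only to illustrate the kind of trade-off the study exploits; it is not the exact model fitted to the plant data.

```latex
% Defects arise at rate k per unit time, the delay time h (from first visible
% defect to failure) has density f(h), each inspection takes downtime d, and a
% breakdown repair takes downtime d_b. Expected downtime per unit time:
\begin{align*}
  D(T) &= \frac{k\,T\,b(T)\,d_b + d}{T + d},
  \qquad
  b(T) = \int_{0}^{T} \frac{T-h}{T}\, f(h)\, \mathrm{d}h ,
\end{align*}
% where b(T) is the probability that a defect arising within an inspection
% interval causes a breakdown before the next inspection; the optimal interval
% minimizes D(T).
```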

  15. AN EMPIRICAL ANALYSIS OF SAMPLING INTERVAL FOR EXCHANGE RATE FORECASTING WITH NEURAL NETWORKS

    Institute of Scientific and Technical Information of China (English)

    HUANG Wei; K. K. Lai; Y. Nakamori; WANG Shouyang

    2003-01-01

    Artificial neural networks (ANNs) have been widely used as a promising alternative approach for forecasting tasks because of several distinguishing features. In this paper, we investigate the effect of different sampling intervals on the predictive performance of ANNs in forecasting exchange rate time series. It is shown that selection of an appropriate sampling interval permits the neural network to adequately model the financial time series. Too short or too long a sampling interval does not provide good forecasting accuracy. In addition, we discuss the effect of forecasting horizons and input nodes on the prediction performance of neural networks.
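
    The preprocessing step that the sampling interval controls can be sketched as follows; the synthetic series, the four input nodes and the one-step horizon are assumptions for illustration, not the settings of the paper.

```python
# Sketch of turning a daily exchange-rate series into supervised (input, target)
# pairs at different sampling intervals, the quantity varied in the study.
import numpy as np

def make_dataset(series, sampling_interval, n_inputs=4, horizon=1):
    """Lagged inputs/targets from a series resampled every `sampling_interval` steps."""
    s = series[::sampling_interval]
    X, y = [], []
    for t in range(n_inputs, len(s) - horizon + 1):
        X.append(s[t - n_inputs:t])
        y.append(s[t + horizon - 1])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
daily = np.cumsum(rng.normal(0, 0.005, 2000)) + 1.0   # synthetic exchange rate

for interval in (1, 5, 20):                           # roughly daily, weekly, monthly
    X, y = make_dataset(daily, interval)
    print(f"sampling interval {interval:2d} days -> {X.shape[0]} training patterns")
```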

  16. Thompson Sampling: An Optimal Finite Time Analysis

    CERN Document Server

    Kaufmann, Emilie; Munos, Rémi

    2012-01-01

    The question of the optimality of Thompson Sampling for solving the stochastic multi-armed bandit problem had been open since 1933. In this paper we answer it positively for the case of Bernoulli rewards by providing the first finite-time analysis that matches the asymptotic rate given in the Lai and Robbins lower bound for the cumulative regret. The proof is accompanied by a numerical comparison with other optimal policies, experiments that have been lacking in the literature until now for the Bernoulli case.
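
    A minimal sketch of the algorithm analysed in the paper, for readers unfamiliar with it: Thompson Sampling for Bernoulli bandits keeps a Beta posterior per arm, draws one sample from each posterior and pulls the arm with the largest draw. The arm probabilities and horizon below are assumptions for illustration.

```python
# Thompson Sampling for Bernoulli bandits with Beta(1,1) priors.
import numpy as np

def thompson(p_true, horizon=10000, seed=0):
    rng = np.random.default_rng(seed)
    k = len(p_true)
    alpha, beta = np.ones(k), np.ones(k)          # Beta posteriors per arm
    regret = 0.0
    for _ in range(horizon):
        theta = rng.beta(alpha, beta)             # one posterior draw per arm
        arm = int(np.argmax(theta))
        reward = rng.random() < p_true[arm]       # Bernoulli reward
        alpha[arm] += reward
        beta[arm] += 1 - reward
        regret += max(p_true) - p_true[arm]
    return regret

print("cumulative regret:", round(thompson([0.45, 0.55]), 1))
```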

  17. Effects of sampling interval on spatial patterns and statistics of watershed nitrogen concentration

    Science.gov (United States)

    Wu, S.-S.D.; Usery, E.L.; Finn, M.P.; Bosch, D.D.

    2009-01-01

    This study investigates how spatial patterns and statistics of a 30 m resolution, model-simulated, watershed nitrogen concentration surface change with sampling intervals from 30 m to 600 m, in 30 m increments, for the Little River Watershed (Georgia, USA). The results indicate that the mean, standard deviation, and variogram sills do not have consistent trends with increasing sampling intervals, whereas the variogram ranges remain constant. A sampling interval smaller than or equal to 90 m is necessary to build a representative variogram. The interpolation accuracy, clustering level, and total hot spot areas show decreasing trends approximating a logarithmic function. The trends correspond to the nitrogen variogram and start to level off at a sampling interval of 360 m, which is therefore regarded as a critical spatial scale of the Little River Watershed. Copyright © 2009 by Bellwether Publishing, Ltd. All rights reserved.
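
    The kind of scale analysis described above can be sketched with an empirical semivariogram computed at several sampling intervals; the correlated random field below is an assumption standing in for the simulated nitrogen surface.

```python
# Sketch: estimate an empirical semivariogram of a gridded field at several
# sampling intervals and inspect how the sill responds. Illustrative data only.
import numpy as np
from scipy.signal import convolve2d

def semivariogram_1d(values, dx, max_lag):
    """Empirical semivariogram along the rows of a grid."""
    lags, gamma = [], []
    for k in range(1, max_lag + 1):
        diffs = values[:, k:] - values[:, :-k]
        lags.append(k * dx)
        gamma.append(0.5 * np.mean(diffs**2))
    return np.array(lags), np.array(gamma)

rng = np.random.default_rng(0)
field = rng.normal(size=(200, 200))
# impose spatial correlation with a simple moving-average smoother (assumed)
field = convolve2d(field, np.ones((5, 5)) / 25, mode="same")

for step in (1, 3, 6, 12):        # 30, 90, 180, 360 m if one cell = 30 m
    lags, gam = semivariogram_1d(field[::step, ::step], 30 * step, max_lag=10)
    print(f"sampling interval {30*step:3d} m  sill ~ {gam[-1]:.3f}")
```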

  18. Monotonicity in the Sample Size of the Length of Classical Confidence Intervals

    CERN Document Server

    Kagan, Abram M

    2012-01-01

    It is proved that the average length of standard confidence intervals for parameters of gamma and normal distributions monotonically decrease with the sample size. The proofs are based on fine properties of the classical gamma function.

  19. Non-linear Global Optimization using Interval Arithmetic and Constraint Propagation

    DEFF Research Database (Denmark)

    Kjøller, Steffen; Kozine, Pavel; Madsen, Kaj;

    2006-01-01

    In this Chapter a new branch-and-bound method for global optimization is presented. The method combines the classical interval global optimization method with constraint propagation techniques. The latter is used for including solutions of the necessary condition f'(x)=0. The constraint propagation...

  20. The Gas Sampling Interval Effect on V˙O2peak Is Independent of Exercise Protocol.

    Science.gov (United States)

    Scheadler, Cory M; Garver, Matthew J; Hanson, Nicholas J

    2017-09-01

    There is a plethora of gas sampling intervals available during cardiopulmonary exercise testing to measure peak oxygen consumption (V˙O2peak). Different intervals can lead to altered V˙O2peak. Whether differences are affected by the exercise protocol or subject sample is not clear. The purpose of this investigation was to determine whether V˙O2peak differed because of the manipulation of sampling intervals and whether differences were independent of the protocol and subject sample. The first subject sample (24 ± 3 yr; V˙O2peak via 15-breath moving averages: 56.2 ± 6.8 mL·kg⁻¹·min⁻¹) completed the Bruce and the self-paced V˙O2max protocols. The second subject sample (21.9 ± 2.7 yr; V˙O2peak via 15-breath moving averages: 54.2 ± 8.0 mL·kg⁻¹·min⁻¹) completed the Bruce and the modified Astrand protocols. V˙O2peak was identified using five sampling intervals: 15-s block averages, 30-s block averages, 15-breath block averages, 15-breath moving averages, and 30-s block averages aligned to the end of exercise. Differences in V˙O2peak between intervals were determined using repeated-measures ANOVAs. The influence of subject sample on the sampling effect was determined using independent t-tests. There was a significant main effect of sampling interval on V˙O2peak for each protocol in both subject samples. Differences in V˙O2peak between sampling intervals followed a similar pattern for each protocol and subject sample, with the 15-breath moving average presenting the highest V˙O2peak. The effect of manipulating gas sampling intervals on V˙O2peak appears to be protocol and sample independent. These findings highlight our recommendation that the clinical and scientific community request and report the sampling interval whenever metabolic data are presented. The standardization of reporting would assist in the comparison of V˙O2peak.

  1. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.

  2. Optimization of Eosine Analyses in Water Samples

    Science.gov (United States)

    Kola, Liljana

    2010-01-01

    The fluorescence ability of Eosine enables its use as an artificial tracer in water system studies. The fluorescence intensity of fluorescent dyes in water samples depends on their physical and chemical properties, such as pH, temperature, presence of oxidants, etc. This paper presents the experience of the Center of Applied Nuclear Physics, Tirana, in this field. The problem is dealt with in relation to applying Eosine to trace and determine water movements within the karstic system and underground waters. We used standard solutions of Eosine for this study. The method we have elaborated for this purpose made it possible to optimize the procedures used to analyze samples for the presence of Eosine and to measure its content, even at trace levels, by means of a Perkin Elmer LS 55 Luminescence Spectrometer.

  3. Life cycle cost optimization of biofuel supply chains under uncertainties based on interval linear programming

    DEFF Research Database (Denmark)

    Ren, Jingzheng; Dong, Liang; Sun, Lu

    2015-01-01

    The aim of this work was to develop a model for optimizing the life cycle cost of biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed...

  4. Life cycle cost optimization of biofuel supply chains under uncertainties based on interval linear programming.

    Science.gov (United States)

    Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun

    2015-01-01

    The aim of this work was to develop a model for optimizing the life cycle cost of biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chain under uncertainties.
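
    The core idea of interval linear programming can be illustrated by bounding the optimum with two crisp LPs, one built from the optimistic and one from the pessimistic ends of the interval coefficients. The tiny transport example below is invented for illustration and is not the case study or the exact solution method of the paper.

```python
# Bounding the optimal cost of an LP whose cost and demand data are intervals.
import numpy as np
from scipy.optimize import linprog

# transport cost per unit from two agriculture zones to one biofuel plant,
# given as intervals [lower, upper] (assumed numbers)
c_lo = np.array([2.0, 3.0])
c_hi = np.array([2.8, 3.6])
demand_lo, demand_hi = 80.0, 100.0       # market demand interval
supply = np.array([70.0, 60.0])          # zone capacities

def solve(c, demand):
    # minimize c @ x  subject to  x1 + x2 >= demand,  0 <= xi <= supply_i
    res = linprog(c, A_ub=[[-1.0, -1.0]], b_ub=[-demand],
                  bounds=list(zip([0, 0], supply)), method="highs")
    return res.fun

best = solve(c_lo, demand_lo)            # optimistic bound of the cost
worst = solve(c_hi, demand_hi)           # pessimistic bound of the cost
print(f"optimal cost lies in the interval [{best:.1f}, {worst:.1f}]")
```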

  5. Numerical solution of optimal control problems using multiple-interval integral Gegenbauer pseudospectral methods

    Science.gov (United States)

    Tang, Xiaojun

    2016-04-01

    The main purpose of this work is to provide multiple-interval integral Gegenbauer pseudospectral methods for solving optimal control problems. The latest developed single-interval integral Gauss/(flipped Radau) pseudospectral methods can be viewed as special cases of the proposed methods. We present an exact and efficient approach to compute the mesh pseudospectral integration matrices for the Gegenbauer-Gauss and flipped Gegenbauer-Gauss-Radau points. Numerical results on benchmark optimal control problems confirm the ability of the proposed methods to obtain highly accurate solutions.

  6. Brake squeal reduction of vehicle disc brake system with interval parameters by uncertain optimization

    Science.gov (United States)

    Lü, Hui; Yu, Dejie

    2014-12-01

    An uncertain optimization method for brake squeal reduction of vehicle disc brake system with interval parameters is presented in this paper. In the proposed method, the parameters of frictional coefficient, material properties and the thicknesses of wearing components are treated as uncertain parameters, which are described as interval variables. Attention is focused on the stability analysis of a brake system in squeal, and the stability of brake system is investigated via the complex eigenvalue analysis (CEA) method. The dominant unstable mode is extracted by performing CEA based on a linear finite element (FE) model, and the negative damping ratio corresponding to the dominant unstable mode is selected as the indicator of instability. The response surface method (RSM) is applied to approximate the implicit relationship between the unstable mode and the system parameters. A reliability-based optimization model for improving the stability of the vehicle disc brake system with interval parameters is constructed based on RSM, interval analysis and reliability analysis. The Genetic Algorithm is used to get the optimal values of design parameters from the optimization model. The stability analysis and optimization of a disc brake system are carried out, and the results show that brake squeal propensity can be reduced by using stiffer back plates. The proposed approach can be used to improve the stability of the vehicle disc brake system with uncertain parameters effectively.

  7. Simultaneous parameter and tolerance optimization of structures via probability-interval mixed reliability model

    DEFF Research Database (Denmark)

    Luo, Yangjun; Wu, Xiaoxiang; Zhou, Mingdong

    2015-01-01

    Both structural sizes and dimensional tolerances strongly influence the manufacturing cost and the functional performance of a practical product. This paper presents an optimization method to simultaneously find the optimal combination of structural sizes and dimensional tolerances. Based on a probability-interval mixed reliability model, the imprecision of design parameters is modeled as interval uncertainties fluctuating within allowable tolerance bounds. The optimization model is defined as to minimize the total manufacturing cost under mixed reliability index constraints, which are further transformed into their equivalent formulations by using the performance measure approach. The optimization problem is then solved with the sequential approximate programming. Meanwhile, a numerically stable algorithm based on the trust region method is proposed to efficiently update the target performance...

  8. Scatter factor confidence interval estimate of least square maximum entropy quantile function for small samples

    Institute of Scientific and Technical Information of China (English)

    Wu Fuxian; Wen Weidong

    2016-01-01

    Classic maximum entropy quantile function method (CMEQFM) based on the probability weighted moments (PWMs) can accurately estimate the quantile function of a random variable on small samples, but inaccurately on very small samples. To overcome this weakness, the least square maximum entropy quantile function method (LSMEQFM) and that with a constraint condition (LSMEQFMCC) are proposed. To improve the confidence level of quantile function estimation, the scatter factor method is combined with the maximum entropy method to estimate the confidence interval of the quantile function. From comparisons of these methods on two common probability distributions and one engineering application, it is shown that CMEQFM can estimate the quantile function accurately on small samples but inaccurately on very small samples (10 samples); LSMEQFM and LSMEQFMCC can be successfully applied to very small samples; with consideration of the constraint condition on the quantile function, LSMEQFMCC is more stable and computationally accurate than LSMEQFM; the scatter factor confidence interval estimation method based on LSMEQFM or LSMEQFMCC has good estimation accuracy on the confidence interval of the quantile function, and that based on LSMEQFMCC is the most stable and accurate method on very small samples (10 samples).

  9. Computing interval-valued reliability measures: application of optimal control methods

    DEFF Research Database (Denmark)

    Kozin, Igor; Krymsky, Victor

    2017-01-01

    The paper describes an approach to deriving interval-valued reliability measures given partial statistical information on the occurrence of failures. We apply methods of optimal control theory, in particular, Pontryagin’s principle of maximum to solve the non-linear optimisation problem and derive...

  10. Examination of histological samples from submerged carrion to aid in the determination of postmortem submersion interval.

    Science.gov (United States)

    Humphreys, Michael Keith; Panacek, Edward; Green, William; Albers, Elizabeth

    2013-03-01

    The use of histology as a tool for estimating postmortem intervals has rarely been explored, but it has the potential to offer medical examiners an additional means for estimating the postmortem submersion interval (PMSI) during a death investigation. This study used perinatal piglets as human analogs, which were submerged in freshwater for various time intervals. Each piglet was extracted from the water and underwent a necropsy examination during which histological samples were collected. The samples revealed that the necrotic tissue decomposed relatively predictably over time and that this decompositional progression may have the potential to be used, via a scoring system, to determine or aid in determining the PMSI. This method for calculating PMSI allows for normalization between piglets of various mass and body types. It also prevents contamination of the remains via algae growth and animal activity from exacerbating and possibly exaggerating the PMSI calculation.

  11. Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size

    Science.gov (United States)

    Shieh, Gwowen

    2015-01-01

    Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…

  12. Interval estimation and optimal design for the within-subject coefficient of variation for continuous and binary variables.

    Science.gov (United States)

    Shoukri, Mohamed M; Elkum, Nasser; Walter, Stephen D

    2006-05-10

    In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables and based on its maximum likelihood estimation we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. For the continuous variable we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of the confidence interval on the within-subject coefficient of variation. The maximum likelihood estimation and the sample size estimation based on a pre-specified width of the confidence interval are novel contributions to the literature for the binary variable. Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary.
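
    For a concrete reference, the sketch below computes the within-subject coefficient of variation from a balanced one-way random-effects layout as the square root of the within-subject mean square divided by the grand mean; the replicate data are invented for illustration.

```python
# Within-subject coefficient of variation (WSCV) from repeated measurements.
import numpy as np

def wscv(data):
    """data: 2-D array, rows = subjects, columns = repeated measurements."""
    grand_mean = data.mean()
    # within-subject mean square = pooled variance of replicates around subject means
    ms_within = np.mean(data.var(axis=1, ddof=1))
    return np.sqrt(ms_within) / grand_mean

rng = np.random.default_rng(0)
subject_means = rng.normal(100, 10, size=30)               # 30 subjects
data = subject_means[:, None] + rng.normal(0, 5, (30, 3))  # 3 replicates each
print(f"WSCV ~ {wscv(data):.3f}")                           # about 0.05 here
```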

  13. Interval estimation and optimal design for the within-subject coefficient of variation for continuous and binary variables

    Directory of Open Access Journals (Sweden)

    Elkum Nasser

    2006-05-01

    Full Text Available Abstract Background In this paper we propose the use of the within-subject coefficient of variation as an index of a measurement's reliability. For continuous variables and based on its maximum likelihood estimation we derive a variance-stabilizing transformation and discuss confidence interval construction within the framework of a one-way random effects model. We investigate sample size requirements for the within-subject coefficient of variation for continuous and binary variables. Methods We investigate the validity of the approximate normal confidence interval by Monte Carlo simulations. In designing a reliability study, a crucial issue is the balance between the number of subjects to be recruited and the number of repeated measurements per subject. We discuss efficiency of estimation and cost considerations for the optimal allocation of the sample resources. The approach is illustrated by an example on Magnetic Resonance Imaging (MRI). We also discuss the issue of sample size estimation for dichotomous responses with two examples. Results For the continuous variable we found that the variance-stabilizing transformation improves the asymptotic coverage probabilities of the confidence interval on the within-subject coefficient of variation. The maximum likelihood estimation and the sample size estimation based on a pre-specified width of the confidence interval are novel contributions to the literature for the binary variable. Conclusion Using the sample size formulas, we hope to help clinical epidemiologists and practicing statisticians to efficiently design reliability studies using the within-subject coefficient of variation, whether the variable of interest is continuous or binary.

  14. AV interval optimization using pressure volume loops in dual chamber pacemaker patients with maintained systolic left ventricular function.

    Science.gov (United States)

    Eberhardt, Frank; Hanke, Thorsten; Fitschen, Joern; Heringlake, Matthias; Bode, Frank; Schunkert, Heribert; Wiegand, Uwe K H

    2012-08-01

    Atrioventricular (AV) interval optimization is often deemed too time-consuming in dual-chamber pacemaker patients with maintained LV function. Thus the majority of patients are left at their default AV interval. To quantify the magnitude of hemodynamic improvement following AV interval optimization in chronically paced dual chamber pacemaker patients, a pressure volume catheter was placed in the left ventricle of 19 patients with chronic dual chamber pacing and an ejection fraction >45% undergoing elective coronary angiography. The AV interval was varied in 10 ms steps from 80 to 300 ms, and pressure volume loops were recorded during breath hold. The average optimal AV interval was 152 ± 39 ms compared to 155 ± 8 ms for the average default AV interval (range 100-240 ms). The average improvement in stroke work following AV interval optimization was 935 ± 760 mmHg/ml (range 0-2,908). Varying the programmed AV interval away from its optimum changes the average stroke work by 207 ± 162 mmHg/ml. AV interval optimization also leads to improved systolic dyssynchrony indices (17.7 ± 7.0 vs. 19.4 ± 7.1 %; p = 0.01). The overall hemodynamic effect of AV interval optimization in patients with maintained LV function is in the same range as for patients undergoing cardiac resynchronization therapy for several parameters. The positive effect of AV interval optimization also applies to patients who have been chronically paced for years.

  15. Adaptive Kalman Filter Based on Adjustable Sampling Interval in Burst Detection for Water Distribution System

    Directory of Open Access Journals (Sweden)

    Doo Yong Choi

    2016-04-01

    Full Text Available Rapid detection of bursts and leaks in water distribution systems (WDSs) can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA) systems and the establishment of district meter areas (DMAs). Nonetheless, no consideration has been given to how frequently a flow meter measures and transmits data for predicting breaks and leaks in pipes. This paper analyzes the effect of sampling interval when an adaptive Kalman filter is used for detecting bursts in a WDS. A new sampling algorithm is presented that adjusts the sampling interval depending on the normalized residuals of flow after filtering. The proposed algorithm is applied to a virtual sinusoidal flow curve and real DMA flow data obtained from Jeongeup city in South Korea. The simulation results prove that the self-adjusting algorithm for determining the sampling interval is efficient and maintains reasonable accuracy in burst detection. The proposed sampling method has a significant potential for water utilities to build and operate real-time DMA monitoring systems combined with smart customer metering systems.
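
    The adjustable-sampling idea can be sketched with a scalar Kalman filter: the next sampling interval is shortened when the normalized residual of the filtered flow grows and relaxed otherwise. The thresholds, noise levels and synthetic flow series below are assumptions, not the values used for the Jeongeup DMA data.

```python
# Scalar Kalman filter on a flow signal with an adjustable sampling interval.
import numpy as np

def adaptive_monitoring(flow, q=0.01, r=0.25, short=1, long=4):
    x, p = flow[0], 1.0          # state estimate and its variance
    t, intervals = 0, []
    while t < len(flow) - long:
        p += q                                  # predict (random-walk model)
        innov = flow[t] - x                     # innovation of the new sample
        s = p + r                               # innovation variance
        k = p / s
        x += k * innov                          # update state
        p *= (1 - k)
        norm_resid = abs(innov) / np.sqrt(s)    # normalized residual
        step = short if norm_resid > 2.0 else long
        intervals.append(step)
        t += step
    return intervals

rng = np.random.default_rng(0)
flow = 10 + np.sin(np.linspace(0, 8 * np.pi, 400)) + rng.normal(0, 0.3, 400)
flow[250:] += 2.0                                # simulated burst
steps = adaptive_monitoring(flow)
print("mean sampling interval:", round(np.mean(steps), 2))
```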

  16. Binomial distribution sample confidence intervals estimation for positive and negative likelihood ratio medical key parameters.

    Science.gov (United States)

    Bolboacă, Sorana; Jäntschi, Lorentz

    2005-01-01

    Likelihood ratio medical key parameters calculated on categorical results from diagnostic tests are usually expressed together with their confidence intervals, computed using the normal distribution approximation of the binomial distribution. The approximation creates known anomalies, especially for limit cases. In order to improve the quality of estimation, four new methods (called here RPAC, RPAC0, RPAC1, and RPAC2) were developed and compared with the classical method (called here RPWald), using an exact probability calculation algorithm. Computer implementations of the methods use the PHP language. We defined and implemented the functions of the four new methods and the five criteria of confidence interval assessment. The experiments were run with random binomial variables for sample sizes varying in the 14-34 and 90-100 ranges, in order to assess the confidence intervals for positive and negative likelihood ratios.

  17. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

    Full Text Available Assessment of a controlled clinical trial supposes interpreting some key parameters, such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat, when the effects of the treatment are dichotomous variables. Defined as the difference in the event rate between treatment and control groups, the absolute risk reduction is the parameter that allows computing the number needed to treat. The absolute risk reduction is computed when the experimental treatment reduces the risk for an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even if it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. Methods comparison uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance of computing confidence intervals for the absolute risk reduction.
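
    As a worked example of the quantities discussed above, the sketch below computes the absolute risk reduction, the number needed to treat and the asymptotic (Wald) confidence interval that the paper criticizes; the event counts are invented.

```python
# Absolute risk reduction (ARR), number needed to treat (NNT) and Wald CI.
import math

def arr_wald_ci(events_ctrl, n_ctrl, events_trt, n_trt, z=1.96):
    cer = events_ctrl / n_ctrl          # control event rate
    eer = events_trt / n_trt            # experimental event rate
    arr = cer - eer                     # absolute risk reduction
    se = math.sqrt(cer * (1 - cer) / n_ctrl + eer * (1 - eer) / n_trt)
    ci = (arr - z * se, arr + z * se)   # asymptotic (Wald) confidence interval
    nnt = 1 / arr if arr != 0 else float("inf")
    return arr, ci, nnt

arr, ci, nnt = arr_wald_ci(events_ctrl=30, n_ctrl=100, events_trt=18, n_trt=100)
print(f"ARR = {arr:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), NNT ~ {nnt:.1f}")
```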

  18. Low Carbon-Oriented Optimal Reliability Design with Interval Product Failure Analysis and Grey Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Yixiong Feng

    2017-03-01

    Full Text Available The problem of large amounts of carbon emissions causes wide concern across the world, and it has become a serious threat to the sustainable development of the manufacturing industry. The intensive research into technologies and methodologies for green product design has significant theoretical meaning and practical value in reducing the emissions of the manufacturing industry. Therefore, a low carbon-oriented product reliability optimal design model is proposed in this paper: (1) The related expert evaluation information was prepared in interval numbers; (2) An improved product failure analysis considering the uncertain carbon emissions of the subsystem was performed to obtain the subsystem weight taking the carbon emissions into consideration. The interval grey correlation analysis was conducted to obtain the subsystem weight taking the uncertain correlations inside the product into consideration. Using the above two kinds of subsystem weights and different caution indicators of the decision maker, a series of product reliability design schemes is available; (3) The interval-valued intuitionistic fuzzy sets (IVIFSs) were employed to select the optimal reliability and optimal design scheme based on three attributes, namely, low carbon, correlation and functions, and economic cost. The case study of a vertical CNC lathe proves the superiority and rationality of the proposed method.

  19. Utilization of Electrocardiographic P-wave Duration for AV Interval Optimization in Dual-Chamber Pacemakers.

    Science.gov (United States)

    Sorajja, Dan; Bhakta, Mayurkumar D; Scott, Luis Rp; Altemose, Gregory T; Srivathsan, Komandoor

    2010-09-05

    Empiric programming of the atrio-ventricular (AV) delay is commonly performed during pacemaker implantation. Transmitral flow assessment by Doppler echocardiography can be used to find the optimal AV delay by Ritter's method, but this cannot easily be performed during pacemaker implantation. We sought to determine a non-invasive surrogate for this assessment. Since electrocardiographic P-wave duration estimates atrial activation time, we hypothesized this measurement may provide a more appropriate basis for programming AV intervals. A total of 19 patients were examined at the time of dual chamber pacemaker implantation, 13 (68%) being male with a mean age of 77. Each patient had the optimal AV interval determined by Ritter's method. The P-wave duration was measured independently on electrocardiograms using MUSE® Cardiology Information System (version 7.1.1). The relationship between P-wave duration and the optimal AV interval was analyzed. The P-wave duration and optimal AV delay were related by a correlation coefficient of 0.815 and a correction factor of 1.26. The mean BMI was 27. The presence of hypertension, atrial fibrillation, and valvular heart disease was 13 (68%), 3 (16%), and 2 (11%) respectively. Mean echocardiographic parameters included an ejection fraction of 58%, left atrial index of 32 ml/m(2), and diastolic dysfunction grade 1 (out of 4). In patients with dual chamber pacemakers in AV sequentially paced mode and normal EF, electrocardiographic P-wave duration correlates to the optimal AV delay by Ritter's method by a factor of 1.26.

  20. Digital redesign of uncertain interval systems based on time-response resemblance via particle swarm optimization.

    Science.gov (United States)

    Hsu, Chen-Chien; Lin, Geng-Yu

    2009-07-01

    In this paper, a particle swarm optimization (PSO) based approach is proposed to derive an optimal digital controller for redesigned digital systems having an interval plant based on time-response resemblance of the closed-loop systems. Because of difficulties in obtaining time-response envelopes for interval systems, the design problem is formulated as an optimization problem of a cost function in terms of aggregated deviation between the step responses corresponding to extremal energies of the redesigned digital system and those of their continuous counterpart. A proposed evolutionary framework incorporating three PSOs is subsequently presented to minimize the cost function to derive an optimal set of parameters for the digital controller, so that step response sequences corresponding to the extremal sequence energy of the redesigned digital system suitably approximate those of their continuous counterpart under the perturbation of the uncertain plant parameters. Computer simulations have shown that redesigned digital systems incorporating the PSO-derived digital controllers have better system performance than those using conventional open-loop discretization methods.
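
    A generic PSO loop of the kind used here is sketched below; the quadratic toy cost stands in for the step-response deviation cost of the redesigned digital system, and all tuning constants are assumptions.

```python
# Generic particle swarm optimization (PSO) sketch for tuning a parameter vector
# by minimizing a cost function. Illustrative only; not the paper's framework.
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))      # positions (controller params)
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        costs = np.array([cost(p) for p in x])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# toy cost: distance of the parameter vector from an "ideal" controller (assumed)
best, best_cost = pso(lambda p: np.sum((p - np.array([1.0, -2.0, 0.5]))**2), dim=3)
print("best parameters:", np.round(best, 3), "cost:", round(best_cost, 6))
```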

  1. Biodiversity optimal sampling: an algorithmic solution

    Directory of Open Access Journals (Sweden)

    Alessandro Ferrarini

    2012-03-01

    Full Text Available Biodiversity sampling is a very serious task. When biodiversity sampling is not representative of the biodiversity spatial pattern, due to few data or incorrect sampling point locations, successive analyses, models and simulations are inevitably biased. In this work, I propose a new solution to the problem of biodiversity sampling. The proposed approach is proficient for habitats, plant and animal species; in addition, it is able to answer the two pivotal questions of biodiversity sampling: 1) how many sampling points and 2) where the sampling points should be located.

  2. The impact of different sampling rates and calculation time intervals on ROTI values

    Directory of Open Access Journals (Sweden)

    Jacobsen Knut Stanley

    2014-01-01

    Full Text Available The ROTI (Rate of TEC index) is a commonly used measure of the ionospheric irregularity level. The algorithm to calculate ROTI is easily implemented, and is the same from paper to paper. However, the sample rate of the GNSS data used, and the time interval over which a value of ROTI is calculated, varies from paper to paper. When comparing ROTI values from different studies, this must be taken into account. This paper aims to show what these differences are, to increase the awareness of this issue. We have investigated the effect of different parameters for the calculation of ROTI values, using one year of data from 8 receivers at latitudes ranging from 59° N to 79° N. We have found that the ROTI values calculated using different parameter choices are strongly positively correlated. However, the ROTI values are quite different. The effect of a lower sample rate is to lower the ROTI value, due to the loss of high-frequency parts of the ROT spectrum, while the effect of a longer calculation time interval is to remove or reduce short-lived peaks due to the inherent smoothing effect. The ratio of ROTI values based on data of different sampling rate is examined in relation to the ROT power spectrum. Of relevance to statistical studies, we find that the median level of ROTI depends strongly on sample rate, strongly on latitude at auroral latitudes, and weakly on time interval. Thus, a baseline “quiet” or “noisy” level for one location or choice of parameters may not be valid for another location or choice of parameters.
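
    For readers who want to reproduce the parameter dependence, the sketch below implements the standard ROT/ROTI definitions (ROT as the time derivative of TEC in TECU/min, ROTI as its standard deviation over a calculation window) on a synthetic TEC series; the sample rates, the 5-minute window and the noise level are assumptions.

```python
# ROTI from a TEC series at different sample rates and a fixed window length.
import numpy as np

def roti(tec, dt_seconds, window_seconds=300):
    """ROTI series from a TEC series sampled every dt_seconds."""
    rot = np.diff(tec) / (dt_seconds / 60.0)          # TECU per minute
    n = int(window_seconds / dt_seconds)              # samples per ROTI window
    usable = (len(rot) // n) * n
    windows = rot[:usable].reshape(-1, n)
    return windows.std(axis=1)

rng = np.random.default_rng(0)
t = np.arange(0, 3600, 1.0)                           # one hour at 1 Hz
tec = 20 + 0.001 * t + 0.3 * rng.normal(size=t.size)  # trend + irregularities

for dt in (1, 30, 60):                                # 1 s, 30 s, 60 s sampling
    r = roti(tec[::dt], dt)
    print(f"sample rate {dt:2d} s  median ROTI = {np.median(r):.2f} TECU/min")
```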

  3. Time Intervals for Maintenance of Offshore Structures Based on Multiobjective Optimization

    Directory of Open Access Journals (Sweden)

    Dante Tolentino

    2013-01-01

    Full Text Available With the aim of establishing adequate time intervals for maintenance of offshore structures, an approach based on multiobjective optimization for making decisions is proposed. The formulation takes into account the degradation of the mechanical properties of the structures and its influence over time on both the structural capacity and the structural demand, given a maximum wave height. The set of time intervals for maintenance corresponds to a balance between three objectives: (a) structural reliability, (b) damage index, and (c) expected cumulative total cost. Structural reliability is expressed in terms of confidence factors as functions of time by means of closed-form mathematical expressions which consider structural deterioration. The multiobjective optimization is solved using an evolutionary genetic algorithm. The approach is applied to an offshore platform located at Campeche Bay in the Gulf of Mexico. The optimization criterion includes the reconstruction of the platform. Results indicate that if the first maintenance action is made 5 years after installing the structure, the second repair action should be made in the following 7 to 10 years; however, if the first maintenance action is made 6 years after installing the structure, then the second should be made in the following 5 to 8 years.

  4. Optimal Machine Tools Selection Using Interval-Valued Data FCM Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Yupeng Xin

    2014-01-01

    Full Text Available Machine tool selection directly affects production rates, accuracy, and flexibility. In order to quickly and accurately select the appropriate machine tools in machining process planning, this paper proposes an optimal machine tools selection method based on an interval-valued data fuzzy C-means (FCM) clustering algorithm. We define the machining capability meta (MAE) as the smallest unit to describe the machining capacity of machine tools and establish an MAE library based on the MAE information model. According to the manufacturing process requirements, the MAEs can be queried from the MAE library. Subsequently, the interval-valued data FCM algorithm is used to select the appropriate machine tools for the manufacturing process. Through computing the matching degree between the manufacturing process machining constraints and the MAEs, we get the most appropriate MAEs and the corresponding machine tools. Finally, a case study of an exhaust duct part of an aeroengine is presented to demonstrate the applicability of the proposed method.

  5. A Parallel Interval Computation Model for Global Optimization with Automatic Load Balancing

    Institute of Scientific and Technical Information of China (English)

    Yong Wu; Arun Kumar

    2012-01-01

    In this paper, we propose a decentralized parallel computation model for global optimization using interval analysis. The model is adaptive to any number of processors and the workload is automatically and evenly distributed among all processors by alternative message passing. The problems received by each processor are processed based on their local dominance properties, which avoids unnecessary interval evaluations. Further, the problem is treated as a whole at the beginning of computation so that no initial decomposition scheme is required. Numerical experiments indicate that the model works well and is stable with different numbers of parallel processors, distributes the load evenly among the processors, and provides an impressive speedup, especially when the problem is time-consuming to solve.

  6. Using simulated noise to define optimal QT intervals for computer analysis of ambulatory ECG.

    Science.gov (United States)

    Tikkanen, P E; Sellin, L C; Kinnunen, H O; Huikuri, H V

    1999-01-01

    The ambulatory electrocardiogram (ECG) is an important medical tool, not only for the diagnosis of adverse cardiac events, but also to predict the risk of such events occurring. The 24-hour ambulatory ECG has certain problems and drawbacks because the signal is corrupted by noise from various sources and also by several other conditions which may alter the ECG morphology. We have developed a Windows-based program for the computer analysis of ambulatory ECG which attempts to address these problems. The software includes options for importing ECG data, different methods of waveform analysis, data viewing, and exporting the extracted time series. In addition, the modular structure allows for flexible maintenance and expansion of the software. The ECG was recorded using a Holter device and oversampled to enhance the fidelity of the low sampling rate of the ambulatory ECG. The influence of different sampling rates on the interval variability was studied. The noise sensitivity of the implemented algorithm was tested with several types of simulated noise and the precision of the interval measurement was reported with SD values. Our simulations showed that, in most of the cases, defining the end of the QT interval at the maximum of the T wave gave the most precise measurement. The definition of the onset of the ventricular repolarization duration is most precisely made on the maximum or descending maximal slope of the R wave. We also analyzed some examples of time series from patients using power spectrum estimates in order to validate the low-level QT interval variability.

  7. Determining the optimal surveillance interval after a colonoscopic polypectomy for the Korean population?

    Science.gov (United States)

    Lee, Jung Lok; Lee, Hye Min; Jeon, Jung Won; Kwak, Min Seob; Yoon, Jin Young; Shin, Hyun Phil; Joo, Kwang Ro; Lee, Joung Il; Park, Dong Il

    2017-01-01

    Background/Aims Western surveillance strategies cannot be directly adapted to the Korean population. The aim of this study was to estimate the risk of metachronous neoplasia and the optimal surveillance interval in the Korean population. Methods Clinical and pathological data from patients who underwent an index colonoscopy between June 2006 and July 2008 and who had surveillance colonoscopies up to May 2015 were compared between the low- and high-risk adenoma (LRA and HRA) groups. The 3- and 5-year cumulative risks of metachronous colorectal neoplasia in both groups were compared. Results Among 895 eligible patients, surveillance colonoscopy was performed in 399 (44.6%). Most (83.3%) patients with LRA had a surveillance colonoscopy within 5 years and 70.2% of patients with HRA had a surveillance colonoscopy within 3 years. The cumulative risk of metachronous advanced adenoma was 3.2% within 5 years in the LRA group and only 1.7% within 3 years in the HRA group. The risk of metachronous neoplasia was similar between the surveillance intervals of <5 and ≥5 years in the LRA group; however, it was slightly higher at a surveillance interval of ≥3 than <3 years in the HRA group (9.4% vs. 2.4%). In multivariate analysis, age and the ≥3-year surveillance interval were significant independent risk factors for metachronous advanced adenoma (P=0.024 and P=0.030, respectively). Conclusions Patients had a surveillance colonoscopy before the recommended guidelines despite a low risk of metachronous neoplasia. However, the risk of metachronous advanced adenoma was increased in elderly patients and those with a ≥3-year surveillance interval. PMID:28239321

  8. On the Sampling Interpretation of Confidence Intervals and Hypothesis Tests in the Context of Conditional Maximum Likelihood Estimation.

    Science.gov (United States)

    Maris, E.

    1998-01-01

    The sampling interpretation of confidence intervals and hypothesis tests is discussed in the context of conditional maximum likelihood estimation. Three different interpretations are discussed, and it is shown that confidence intervals constructed from the asymptotic distribution under the third sampling scheme discussed are valid for the first…

  9. Clinical feasibility of exercise-based A-V interval optimization for cardiac resynchronization: a pilot study.

    Science.gov (United States)

    Choudhuri, Indrajit; MacCarter, Dean; Shaw, Rachael; Anderson, Steve; St Cyr, John; Niazi, Imran

    2014-11-01

    One-third of eligible patients fail to respond to cardiac resynchronization therapy (CRT). Current methods to "optimize" the atrio-ventricular (A-V) interval are performed at rest, which may limit its efficacy during daily activities. We hypothesized that low-intensity cardiopulmonary exercise testing (CPX) could identify the most favorable physiologic combination of specific gas exchange parameters reflecting pulmonary blood flow or cardiac output, stroke volume, and left atrial pressure to guide determination of the optimal A-V interval. We assessed relative feasibility of determining the optimal A-V interval by three methods in 17 patients who underwent optimization of CRT: (1) resting echocardiographic optimization (the Ritter method), (2) resting electrical optimization (intrinsic A-V interval and QRS duration), and (3) during low-intensity, steady-state CPX. Five sequential, incremental A-V intervals were programmed in each method. Assessment of cardiopulmonary stability and potential influence on the CPX-based method were assessed. CPX and determination of a physiological optimal A-V interval was successfully completed in 94.1% of patients, slightly higher than the resting echo-based approach (88.2%). There was a wide variation in the optimal A-V delay determined by each method. There was no observed cardiopulmonary instability or impact of the implant procedure that affected determination of the CPX-based optimized A-V interval. Determining optimized A-V intervals by CPX is feasible. Proposed mechanisms explaining this finding and long-term impact require further study. ©2014 Wiley Periodicals, Inc.

  10. Optimizing sampling approaches along ecological gradients

    DEFF Research Database (Denmark)

    Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel

    2016-01-01

    1. Natural scientists and especially ecologists use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying...

  11. Does a 4–6 Week Shoeing Interval Promote Optimal Foot Balance in the Working Equine?

    Directory of Open Access Journals (Sweden)

    Kirsty Leśniak

    2017-03-01

    Full Text Available Variation in equine hoof conformation between farriery interventions lacks research, despite associations with distal limb injuries. This study aimed to determine linear and angular hoof variations pre- and post-farriery within a four to six week shoeing/trimming interval. Seventeen hoof and distal limb measurements were drawn from lateral and anterior digital photographs from 26 horses pre- and post-farriery. Most lateral view variables changed significantly. Reductions of the dorsal wall, and weight bearing and coronary band lengths resulted in an increased vertical orientation of the hoof. The increased dorsal hoof wall angle, heel angle, and heel height illustrated this further, improving dorsopalmar alignment. Mediolateral measurements of coronary band and weight bearing lengths reduced, whilst medial and lateral wall lengths from the 2D images increased, indicating an increased vertical hoof alignment. Additionally, dorsopalmar balance improved. However, the results demonstrated that a four to six week interval is sufficient for a palmar shift in the centre of pressure, increasing the loading on acutely inclined heels, altering DIP angulation, and increasing the load on susceptible structures (e.g., DDFT). Mediolateral variable asymmetries suit the lateral hoof landing and unrollment pattern of the foot during landing. The results support regular (four to six week) farriery intervals for the optimal prevention of excess loading of palmar limb structures, reducing long-term injury risks through cumulative, excessive loading.

  12. Suboptimal and optimal order policies for fixed and varying replenishment interval with declining market

    Science.gov (United States)

    Yu, Jonas C. P.; Wee, H. M.; Yang, P. C.; Wu, Simon

    2016-06-01

    One of the supply chain risks for hi-tech products stems from rapid technological innovation, which results in a significant decline in the selling price and demand after the initial launch period. Hi-tech products include computers and communication consumer products. From a practical standpoint, a more realistic replenishment policy is needed to consider the impact of risks, especially when some portion of shortages is lost. In this paper, suboptimal and optimal order policies with partial backordering are developed for a buyer when the component cost, the selling price, and the demand rate decline at a continuous rate. Two mathematical models are derived and discussed: one model has the suboptimal solution with a fixed replenishment interval and a simpler computational process; the other one has the optimal solution with a varying replenishment interval and a more complicated computational process. The second model results in more profit. Numerical examples are provided to illustrate the two replenishment models. Sensitivity analysis is carried out to investigate the relationship between the parameters and the net profit.

  13. Optimizing sampling approaches along ecological gradients

    DEFF Research Database (Denmark)

    Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel

    2016-01-01

    1. Natural scientists and especially ecologists use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying...... patterns in a statistically solid and reproducible manner, given the normal restrictions in labour, time and money. However, a technical guideline about an adequate sampling design to maximize prediction success under restricted resources is lacking. This study aims at developing such a solid...... and reproducible guideline for sampling along gradients in all fields of ecology and science in general. 2. We conducted simulations with artificial data for five common response types known in ecology, each represented by a simple function (no response, linear, exponential, symmetric unimodal and asymmetric...

  14. Optimization and Sampling for NLP from a Unified Viewpoint

    NARCIS (Netherlands)

    Dymetman, M.; Bouchard, G.; Carter, S.; Bhattacharyya, B.; Ekbal, A.; Saha, S.; Johnson, M.; Molla-Aliod, D.; Dras, M.

    2012-01-01

    The OS* algorithm is a unified approach to exact optimization and sampling, based on incremental refinements of a functional upper bound, which combines ideas of adaptive rejection sampling and of A* optimization search. We first give a detailed description of OS*. We then explain how it can be

  15. Decision-aided sampling frequency offset compensation for reduced-guard-interval coherent optical OFDM systems.

    Science.gov (United States)

    Wang, Wei; Zhuge, Qunbi; Morsy-Osman, Mohamed; Gao, Yuliang; Xu, Xian; Chagnon, Mathieu; Qiu, Meng; Hoang, Minh Thang; Zhang, Fangyuan; Li, Rui; Plant, David V

    2014-11-03

    We propose a decision-aided algorithm to compensate the sampling frequency offset (SFO) between the transmitter and receiver for reduced-guard-interval (RGI) coherent optical (CO) OFDM systems. In this paper, we first derive the cyclic prefix (CP) requirement for preventing OFDM symbols from SFO induced inter-symbol interference (ISI). Then we propose a new decision-aided SFO compensation (DA-SFOC) algorithm, which shows a high SFO tolerance and reduces the CP requirement. The performance of DA-SFOC is numerically investigated for various situations. Finally, the proposed algorithm is verified in a single channel 28 Gbaud polarization division multiplexing (PDM) RGI CO-OFDM experiment with QPSK, 8 QAM and 16 QAM modulation formats, respectively. Both numerical and experimental results show that the proposed DA-SFOC method is highly robust against the standard SFO in optical fiber transmission.

  16. Economic Statistical Design of Variable Sampling Interval X̄ Control Chart Based on Surrogate Variable Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Lee Tae-Hoon

    2016-12-01

    Full Text Available In many cases, an X̄ control chart based on a performance variable is used in industrial fields. Typically, the control chart monitors the measurements of the performance variable itself. However, if the performance variable is too costly or impossible to measure, and a less expensive surrogate variable is available, the process may be more efficiently controlled using surrogate variables. In this paper, we present a model for the economic statistical design of a variable sampling interval (VSI) X̄ control chart using a surrogate variable that is linearly correlated with the performance variable. We derive the total average profit model from an economic viewpoint, apply the model to a Very High Temperature Reactor (VHTR) nuclear fuel measurement system, and derive the optimal result using genetic algorithms. Compared with the control chart based on the performance variable, the proposed model gives a larger expected net income per unit of time in the long run if the correlation between the performance variable and the surrogate variable is relatively high. The proposed model was confined to the sample mean control chart under the assumption that a single assignable cause occurs according to a Poisson process. However, the model may also be extended to other types of control charts using single or multiple assignable cause assumptions, such as the variable sample size (VSS) X̄ control chart, EWMA, and CUSUM charts.

  17. Non-equal-interval direct optimizing Verhulst model that x(n) be taken as initial value and its application

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    To overcome the deficiencies of the existing Verhulst GM(1,1) model, and building on existing grey theory, a non-equal-interval direct optimized Verhulst GM(1,1) model is constructed that takes a modified n-th component x(n) of X(0) as the initial condition of the grey differential model. It optimizes a modified β value and the background value, and performs two rounds of fitting optimization. The new model extends equal intervals to non-equal intervals and is suitable for general data modelling and estimating parameters...

  18. The cognitive mechanisms of optimal sampling.

    Science.gov (United States)

    Lea, Stephen E G; McLaren, Ian P L; Dow, Susan M; Graft, Donald A

    2012-02-01

    How can animals learn the prey densities available in an environment that changes unpredictably from day to day, and how much effort should they devote to doing so, rather than exploiting what they already know? Using a two-armed bandit situation, we simulated several processes that might explain the trade-off between exploring and exploiting. They included an optimising model, dynamic backward sampling; a dynamic version of the matching law; the Rescorla-Wagner model; a neural network model; and ɛ-greedy and rule of thumb models derived from the study of reinforcement learning in artificial intelligence. Under conditions like those used in published studies of birds' performance under two-armed bandit conditions, all models usually identified the more profitable source of reward, and did so more quickly when the reward probability differential was greater. Only the dynamic programming model switched from exploring to exploiting more quickly when available time in the situation was less. With sessions of equal length presented in blocks, a session-length effect was induced in some of the models by allowing motivational, but not memory, carry-over from one session to the next. The rule of thumb model was the most successful overall, though the neural network model also performed better than the remaining models. Copyright © 2011 Elsevier B.V. All rights reserved.
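
    Several of the strategies compared above, notably the ɛ-greedy rule borrowed from reinforcement learning, are easy to simulate. The sketch below implements a plain ɛ-greedy learner on a two-armed bandit with Bernoulli rewards; the reward probabilities, ɛ and trial count are illustrative, and the dynamic programming, matching-law and neural network models of the study are not reproduced here.

    ```python
    import random

    def epsilon_greedy_bandit(p_rewards=(0.2, 0.5), epsilon=0.1,
                              trials=500, seed=0):
        """Simulate an epsilon-greedy learner on a two-armed bandit.

        With probability epsilon the agent explores a random arm; otherwise
        it exploits the arm with the highest running mean reward.
        """
        rng = random.Random(seed)
        counts = [0, 0]
        means = [0.0, 0.0]
        total = 0
        for _ in range(trials):
            if rng.random() < epsilon or counts[0] == 0 or counts[1] == 0:
                arm = rng.randrange(2)                      # explore
            else:
                arm = 0 if means[0] >= means[1] else 1      # exploit
            reward = 1 if rng.random() < p_rewards[arm] else 0
            counts[arm] += 1
            means[arm] += (reward - means[arm]) / counts[arm]   # running mean
            total += reward
        return means, counts, total

    print(epsilon_greedy_bandit())
    ```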

  19. On the Influence of the Data Sampling Interval on Computer-Derived K-Indices

    Directory of Open Access Journals (Sweden)

    A Bernard

    2011-06-01

    Full Text Available The K index was devised by Bartels et al. (1939) to provide an objective monitoring of irregular geomagnetic activity. The K index was then routinely used to monitor the magnetic activity at permanent magnetic observatories as well as at temporary stations. The increasing number of digital and sometimes unmanned observatories and the creation of INTERMAGNET put the question of computer production of K at the centre of the debate. Four algorithms were selected during the Vienna meeting (1991) and endorsed by IAGA for the computer production of K indices. We used one of them (the FMI algorithm) to investigate the impact of the geomagnetic data sampling interval on computer-produced K values, by comparing the computer-derived K values for the period January 1st, 2009 to May 31st, 2010 at the Port-aux-Francais magnetic observatory using magnetic data series with different sampling rates (the smallest: 1 second; the largest: 1 minute). The impact is investigated on both 3-hour range values and K-index data series, as a function of the activity level for low and moderate geomagnetic activity.

  20. Optimizing sparse sampling for 2D electronic spectroscopy

    Science.gov (United States)

    Roeding, Sebastian; Klimovich, Nikita; Brixner, Tobias

    2017-02-01

    We present a new data acquisition concept using optimized non-uniform sampling and compressed sensing reconstruction in order to substantially decrease the acquisition times in action-based multidimensional electronic spectroscopy. For this we acquire a regularly sampled reference data set at a fixed population time and use a genetic algorithm to optimize a reduced non-uniform sampling pattern. We then apply the optimal sampling for data acquisition at all other population times. Furthermore, we show how to transform two-dimensional (2D) spectra into a joint 4D time-frequency von Neumann representation. This leads to increased sparsity compared to the Fourier domain and to improved reconstruction. We demonstrate this approach by recovering transient dynamics in the 2D spectrum of a cresyl violet sample using just 25% of the originally sampled data points.
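
    The pattern-optimization step described above, a genetic algorithm searching for a reduced non-uniform sampling mask that still permits good reconstruction, can be sketched generically. In the toy code below the compressed-sensing reconstruction is abstracted into a user-supplied error function, and the evolutionary operators are reduced to truncation selection plus a single swap mutation; the population size, generation count and the toy error function are all illustrative.

    ```python
    import random

    def ga_sampling_mask(n_points, keep, error_fn, pop=30, gens=60, seed=0):
        """Evolve a set of `keep` sample indices (out of `n_points`) that
        minimizes a user-supplied reconstruction error.

        `error_fn(mask)` should return the reconstruction error obtained
        when only the masked points are acquired, e.g. a compressed-sensing
        reconstruction compared against a fully sampled reference data set.
        """
        rng = random.Random(seed)

        def random_mask():
            return frozenset(rng.sample(range(n_points), keep))

        def mutate(mask):
            # swap one kept index for one currently unsampled index
            drop = rng.choice(list(mask))
            add = rng.choice([i for i in range(n_points) if i not in mask])
            return frozenset(mask - {drop} | {add})

        population = [random_mask() for _ in range(pop)]
        for _ in range(gens):
            parents = sorted(population, key=error_fn)[: pop // 2]
            children = [mutate(rng.choice(parents)) for _ in range(pop - len(parents))]
            population = parents + children
        return min(population, key=error_fn)

    # toy error function: penalize masks that miss points of a known support
    support = set(range(0, 100, 7))
    best = ga_sampling_mask(n_points=100, keep=25,
                            error_fn=lambda mask: len(support - mask))
    print(sorted(best))
    ```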

  1. Study on Different Crossover Mechanisms of Genetic Algorithm for Test Interval Optimization for Nuclear Power Plants

    Directory of Open Access Journals (Sweden)

    Molly Mehra

    2013-12-01

    Full Text Available Surveillance tests are performed periodically on standby systems of a Nuclear Power Plant (NPP), as they improve the systems' availability on demand. High availability of safety-critical systems is essential to NPP safety; hence, careful analysis is required to schedule the surveillance activities for such systems in a cost-effective way without compromising plant safety. This forms an optimization problem wherein two different cases can be formulated for deciding the value of the surveillance test interval. In one case, cost is the objective function to be minimized while unavailability is constrained to a given level; in the other, unavailability is minimized for a given cost level. Here, optimization is done using a Genetic Algorithm (GA), and real encoding has been employed as it caters well to the requirements of this problem. A detailed procedure for the GA formulation is described in this paper. Two different crossover methods, arithmetic crossover and blend crossover, are explored and compared in this study to arrive at the most suitable crossover method for such problems.
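
    The two crossover operators compared in the study are standard real-coded GA operators. A minimal sketch of both is given below; the parent vectors stand in for (scaled) surveillance test intervals, and the α parameter of the blend (BLX-α) crossover is illustrative.

    ```python
    import random

    def arithmetic_crossover(p1, p2, rng):
        """Children are convex combinations of the two parents."""
        a = rng.random()
        c1 = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
        c2 = [(1 - a) * x + a * y for x, y in zip(p1, p2)]
        return c1, c2

    def blend_crossover(p1, p2, rng, alpha=0.5):
        """BLX-alpha: each child gene is drawn uniformly from an interval
        extended alpha times the parental gap beyond both parents."""
        child = []
        for x, y in zip(p1, p2):
            lo, hi = min(x, y), max(x, y)
            span = hi - lo
            child.append(rng.uniform(lo - alpha * span, hi + alpha * span))
        return child

    rng = random.Random(42)
    parent1 = [0.25, 0.50, 0.75]   # e.g. scaled surveillance test intervals
    parent2 = [0.40, 0.30, 0.90]
    print(arithmetic_crossover(parent1, parent2, rng))
    print(blend_crossover(parent1, parent2, rng))
    ```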

  2. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Full Text Available In this presentation, the author discussed a statistical method for deriving optimal spatial sampling schemes, focusing first on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting...

  3. Optimal sampling schemes for vegetation and geological field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2012-07-01

    Full Text Available The presentation made to Wits Statistics Department was on common classification methods used in the field of remote sensing, and the use of remote sensing to design optimal sampling schemes for field visits with applications in vegetation...

  4. Optimal parallel algorithm for shortest-paths problem on interval graphs

    Institute of Scientific and Technical Information of China (English)

    MISHRA P.K.

    2004-01-01

    This paper presents an efficient parallel algorithm for the shortest-path problem in interval graphs: it computes shortest paths in a weighted interval graph in O(n) time, where n is the number of intervals in the graph. A linear-processor CRCW algorithm for determining shortest paths in an interval graph is given.

  5. Smart AMS : Optimizing the measurement procedure for small radiocarbon samples

    NARCIS (Netherlands)

    Vries de, Hendrik

    2010-01-01

    Abstract In order to improve the measurement efficiency of radiocarbon samples, particularly small samples (< 300 µg C), the measurement procedure was optimized using Smart AMS, which is the name of the new control system of the AMS. The system gives the

  6. Wrapped Progressive Sampling Search for Optimizing Learning Algorithm Parameters

    NARCIS (Netherlands)

    Bosch, Antal van den

    2005-01-01

    We present a heuristic meta-learning search method for finding a set of optimized algorithmic parameters for a range of machine learning algo- rithms. The method, wrapped progressive sampling, is a combination of classifier wrapping and progressive sampling of training data. A series of experiments

  8. Observer Error when Measuring Safety-Related Behavior: Momentary Time Sampling versus Whole-Interval Recording

    Science.gov (United States)

    Taylor, Matthew A.; Skourides, Andreas; Alvero, Alicia M.

    2012-01-01

    Interval recording procedures are used by persons who collect data through observation to estimate the cumulative occurrence and nonoccurrence of behavior/events. Although interval recording procedures can increase the efficiency of observational data collection, they can also induce error from the observer. In the present study, 50 observers were…

  10. On Optimal, Minimal BRDF Sampling for Reflectance Acquisition

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Jensen, Henrik Wann; Ramamoorthi, Ravi

    2015-01-01

    The bidirectional reflectance distribution function (BRDF) is critical for rendering, and accurate material representation requires data-driven reflectance models. However, isotropic BRDFs are 3D functions, and measuring the reflectance of a flat sample can require a million incident and outgoing...... such as the MERL BRDF database. We optimize for the best sampling directions, and explicitly provide the optimal set of incident and outgoing directions in the Rusinkiewicz parameterization for n = {1, 2, 5, 10, 20} samples. Based on the principal components, we describe a method for accurately reconstructing BRDF...... demonstrate how this method can be used to find optimal sampling directions when imaging a sphere of a homogeneous material; in this case, only two images are often adequate for high accuracy.
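
    The core idea of the record above, reconstructing a full reflectance data set from a handful of measured directions via principal components, can be sketched generically: learn a PCA basis from training materials, then fit the basis coefficients to the few sampled values by least squares. The sketch below uses synthetic data and plain linear PCA; the paper's optimized choice of directions, its BRDF parameterization and mapping details are not reproduced.

    ```python
    import numpy as np

    def fit_pca_basis(training, k):
        """Mean vector and top-k principal directions of a
        (materials x directions) training matrix, via SVD of the centered data."""
        mean = training.mean(axis=0)
        _, _, vt = np.linalg.svd(training - mean, full_matrices=False)
        return mean, vt[:k]                               # basis: k x directions

    def reconstruct_from_samples(mean, basis, sample_idx, sample_vals, ridge=1e-6):
        """Estimate the full measurement vector of a new material from a few
        sampled directions by least-squares fitting the PCA coefficients
        (a small ridge term keeps the solve stable)."""
        A = basis[:, sample_idx].T                        # observations x k
        b = sample_vals - mean[sample_idx]
        coeffs = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ b)
        return mean + coeffs @ basis

    # toy usage with synthetic "materials" living in a rank-3 space
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(50, 3))
    modes = rng.normal(size=(3, 200))
    data = 1.0 + latent @ modes                           # 50 materials, 200 directions
    mean, basis = fit_pca_basis(data[:-1], k=3)
    truth = data[-1]
    idx = [5, 60, 120, 180]                               # the few sampled directions
    est = reconstruct_from_samples(mean, basis, idx, truth[idx])
    print(float(np.linalg.norm(est - truth) / np.linalg.norm(truth)))
    ```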

  11. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  12. An Optimization-Based Sampling Scheme for Phylogenetic Trees

    Science.gov (United States)

    Misra, Navodit; Blelloch, Guy; Ravi, R.; Schwartz, Russell

    Much modern work in phylogenetics depends on statistical sampling approaches to phylogeny construction to estimate probability distributions of possible trees for any given input data set. Our theoretical understanding of sampling approaches to phylogenetics remains far less developed than that for optimization approaches, however, particularly with regard to the number of sampling steps needed to produce accurate samples of tree partition functions. Despite the many advantages in principle of being able to sample trees from sophisticated probabilistic models, we have little theoretical basis for concluding that the prevailing sampling approaches do in fact yield accurate samples from those models within realistic numbers of steps. We propose a novel approach to phylogenetic sampling intended to be both efficient in practice and more amenable to theoretical analysis than the prevailing methods. The method depends on replacing the standard tree rearrangement moves with an alternative Markov model in which one solves a theoretically hard but practically tractable optimization problem on each step of sampling. The resulting method can be applied to a broad range of standard probability models, yielding practical algorithms for efficient sampling and rigorous proofs of accurate sampling for some important special cases. We demonstrate the efficiency and versatility of the method in an analysis of uncertainty in tree inference over varying input sizes. In addition to providing a new practical method for phylogenetic sampling, the technique is likely to prove applicable to many similar problems involving sampling over combinatorial objects weighted by a likelihood model.

  13. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  14. Using remotely-sensed data for optimal field sampling

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-09-01

    Full Text Available Statistics is the science pertaining to the collection, summary, analysis, interpretation and presentation of data. It is often impractical... ...to carry out a fieldwork sample is an important issue, as it avoids subjective judgement and can save on time and costs in the field. Statistical sampling, using data obtained from remote sensing, finds application in a variety of fields...

  15. Optimal linear shrinkage corrections of sample LMMSE and MVDR estimators

    OpenAIRE

    2012-01-01

    This master thesis proposes optimal shrinkage estimators that counteract the performance degradation of the sample LMMSE and sample MVDR methods in the regime where the sample size is small compared to the observation dimension.

  16. Quantitative investigation of resolution increase of free-flow electrophoresis via simple interval sample injection and separation.

    Science.gov (United States)

    Shao, Jing; Fan, Liu-Yin; Cao, Cheng-Xi; Huang, Xian-Qing; Xu, Yu-Quan

    2012-07-01

    Interval free-flow zone electrophoresis (FFZE) has been used to suppress the sample band broadening that greatly hinders the development of free-flow electrophoresis (FFE). However, there has still been no quantitative study on the resolution increase offered by interval FFZE. Herein, we compare bandwidths in the interval and continuous FFZE modes. A commercial dye containing methyl green and crystal violet was chosen to visualize the bandwidth. The comparative experiments were conducted under the same sample loadings of the model dye (viz. 3.49, 1.75, 1.17, and 0.88 mg/h), the same running times (viz. 5, 10, 15, and 20 min), and the same flux ratio between sample and background buffer (= 10.64 × 10⁻³). Under the given conditions, the experiments demonstrated that (i) the band broadening in continuous mode was evidently caused by hydrodynamic factors, and (ii) the interval mode could clearly eliminate the hydrodynamic broadening existing in continuous mode, greatly increasing the resolution of the dye separation. Finally, interval FFZE was successfully used for the complete separation of two model antibiotics (pyoluteorin and phenazine-1-carboxylic acid coexisting in the fermentation broth of a new strain, Pseudomonas aeruginosa M18), demonstrating the feasibility of the interval FFZE mode for the separation of biomolecules.

  17. Optimization of enrichment processes of pentachlorophenol (PCP) from water samples

    Institute of Scientific and Technical Information of China (English)

    LI Ping; LIU Jun-xin

    2004-01-01

    The method of enriching PCP (pentachlorophenol) from the aquatic environment by solid-phase extraction (SPE) was studied. Several factors affecting the recovery of PCP, including sample pH, eluting solvent, eluting volume and flow rate of the water sample, were optimized by orthogonal array design (OAD). The optimized conditions were: sample pH 4; eluting solvent, 100% methanol; eluting solvent volume, 2 mL; and flow rate of the water sample, 4 mL/min. A comparison was made between the SPE and liquid-liquid extraction (LLE) methods. The recoveries of PCP were in the range of 87.6%-133.6% and 79%-120.3% for SPE and LLE, respectively. Important advantages of SPE compared with LLE include the short extraction time and reduced consumption of organic solvents. SPE can replace LLE for isolating and concentrating PCP from water samples.

  18. Stability of optimal-wave-front-sample coupling under sample translation and rotation

    CERN Document Server

    Anderson, Benjamin R; Eilers, Hergen

    2015-01-01

    The method of wavefront shaping to control the optical properties of opaque media is a promising technique for authentication applications. One of the main challenges of this technique is the sensitivity of the wavefront-sample coupling to translation and/or rotation. To better understand how translation and rotation affect the wavefront-sample coupling, we perform experiments in which we first optimize reflection from an opaque surface, to obtain an optimal wavefront, and then translate or rotate the surface and measure the new reflected intensity pattern. By using the correlation between the optimized and translated or rotated patterns we determine how sensitive the wavefront-sample coupling is. These experiments are performed for different spatial-light-modulator (SLM) bin sizes, beam-spot sizes, and nanoparticle concentrations. We find that all three parameters affect the different positional changes, implying that an optimization scheme can be used to maximize the stability of the wavefront-sample coupling. ...

  19. Statistical Inference for the Parameter of Rayleigh Distribution Based on Progressively Type-I Interval Censored Sample

    Institute of Scientific and Technical Information of China (English)

    Abdalroof M S; Zhao Zhi-wen; Wang De-hui

    2015-01-01

    In this paper, the estimation of parameters based on a progressively type-I interval censored sample from a Rayleigh distribution is studied. Different methods of estimation are discussed. They include the mid-point approximation estimator, the maximum likelihood estimator, the moment estimator, the Bayes estimator, the sampling adjustment moment estimator, the sampling adjustment maximum likelihood estimator and an estimator based on percentiles. The estimation procedures are discussed in detail and compared via Monte Carlo simulations in terms of their biases.

  20. Sample Size Planning for the Squared Multiple Correlation Coefficient: Accuracy in Parameter Estimation via Narrow Confidence Intervals.

    Science.gov (United States)

    Kelley, Ken

    2008-01-01

    Methods of sample size planning are developed from the accuracy in parameter estimation approach in the multiple regression context in order to obtain a sufficiently narrow confidence interval for the population squared multiple correlation coefficient when regressors are random. Approximate and exact methods are developed that provide the necessary sample size so that the expected width of the confidence interval will be sufficiently narrow. Modifications of these methods are then developed so that the necessary sample size will lead to sufficiently narrow confidence intervals with no less than some desired degree of assurance. Computer routines have been developed and are included within the MBESS R package so that the methods discussed in the article can be implemented. The methods and computer routines are demonstrated using an empirical example linking innovation in the health services industry with previous innovation, personality factors, and group climate characteristics.

  1. Optimizing the atmospheric sampling sites using fuzzy mathematic methods

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    A new approach applying fuzzy mathematics theorems, including the Primary Matrix Element Theorem and the Fisher Classification Method, was established to solve the optimization problem of atmospheric environmental sampling sites. On this basis, an application to the optimization of sampling sites in atmospheric environmental monitoring was discussed. The method was proven to be suitable and effective. The results were accepted and applied by the Environmental Protection Bureaus (EPB) of many cities in China. A set of computer software implementing this approach was also compiled and used.

  2. Variance optimal sampling based estimation of subset sums

    CERN Document Server

    Cohen, Edith; Kaplan, Haim; Lund, Carsten; Thorup, Mikkel

    2008-01-01

    From a high volume stream of weighted items, we want to maintain a generic sample of a certain limited size $k$ that we can later use to estimate the total weight of arbitrary subsets. This is the classic context of on-line reservoir sampling, thinking of the generic sample as a reservoir. We present a reservoir sampling scheme providing variance optimal estimation of subset sums. More precisely, if we have seen $n$ items of the stream, then for any subset size $m$, our scheme based on $k$ samples minimizes the average variance over all subsets of size $m$. In fact, the optimality is against any off-line sampling scheme tailored for the concrete set of items seen: no off-line scheme based on $k$ samples can perform better than our on-line scheme when it comes to average variance over any subset size. Our scheme has no positive covariances between any pair of item estimates. Also, our scheme can handle each new item of the stream in $O(\\log k)$ time, which is optimal even on the word RAM.
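
    For readers unfamiliar with the reservoir-sampling setting referred to above, the classic Algorithm R baseline is shown below: it maintains a uniform sample of k items from a stream of unknown length in a single pass. The variance-optimal subset-sum scheme of the record additionally tracks item weights and inclusion probabilities, which this baseline sketch omits.

    ```python
    import random

    def reservoir_sample(stream, k, seed=0):
        """Classic Algorithm R: keep a uniform random sample of k items
        from a stream of unknown length in one pass."""
        rng = random.Random(seed)
        reservoir = []
        for i, item in enumerate(stream):
            if i < k:
                reservoir.append(item)          # fill the reservoir first
            else:
                j = rng.randrange(i + 1)        # replace with probability k/(i+1)
                if j < k:
                    reservoir[j] = item
        return reservoir

    print(reservoir_sample(range(10_000), k=5))
    ```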

  3. Bootstrapping and Conditional Simulation in Kriging : Better Confidence Intervals and Optimization (Replaced by CentER DP 2014-076)

    NARCIS (Netherlands)

    Mehdad, E.; Kleijnen, Jack P.C.

    2013-01-01

    Abstract: This paper investigates two related questions: (1) How to derive a confidence interval for the output of a combination of simulation inputs not yet simulated? (2) How to select the next combination to be simulated when searching for the optimal combination? To answer these questions, the

  4. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences, mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
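
    The spatial simulated annealing loop at the heart of spsann can be illustrated with the MSSD criterion: repeatedly jitter one sample location, recompute the mean squared distance from a grid of prediction nodes to their nearest sample, and accept worse configurations with a probability that shrinks as the temperature cools. The sketch below is a bare-bones Python version on the unit square with illustrative tuning constants, not the R package itself.

    ```python
    import math
    import random

    def mssd(samples, nodes):
        """Mean squared distance from each prediction node to its nearest
        sample point (the MSSD criterion used for spatial interpolation)."""
        total = 0.0
        for nx, ny in nodes:
            total += min((nx - sx) ** 2 + (ny - sy) ** 2 for sx, sy in samples)
        return total / len(nodes)

    def ssa_optimize(n_samples, nodes, iters=2000, t0=1.0, cooling=0.995,
                     max_shift=0.2, seed=0):
        """Spatial simulated annealing: jitter one sample location at a time
        and accept worse configurations with a shrinking probability."""
        rng = random.Random(seed)
        samples = [(rng.random(), rng.random()) for _ in range(n_samples)]
        energy, temp = mssd(samples, nodes), t0
        for _ in range(iters):
            i = rng.randrange(n_samples)
            x, y = samples[i]
            cand = (min(1.0, max(0.0, x + rng.uniform(-max_shift, max_shift))),
                    min(1.0, max(0.0, y + rng.uniform(-max_shift, max_shift))))
            trial = samples[:i] + [cand] + samples[i + 1:]
            e = mssd(trial, nodes)
            if e < energy or rng.random() < math.exp((energy - e) / temp):
                samples, energy = trial, e
            temp *= cooling
        return samples, energy

    # prediction nodes on a regular 20 x 20 grid over the unit square
    grid = [(i / 19, j / 19) for i in range(20) for j in range(20)]
    pattern, score = ssa_optimize(10, grid)
    print(round(score, 4))
    ```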

  5. Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP

    Science.gov (United States)

    Roshani, E.; Berg, A. A.; Lindsay, J.

    2013-12-01

    The Soil Moisture Active Passive (SMAP) satellite is scheduled for launch in October 2014. Global efforts are underway to establish soil moisture monitoring networks for both the pre- and post-launch validation and calibration of SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6-week period for soil moisture and several other parameters, simultaneously with remotely sensed images of the sampling region. The locations of these sampling sites were mainly selected on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling site selection, they do not guarantee optimal site placement to provide the most efficient representation of the studied area. In this analysis a method for the optimization of sampling locations is presented which combines a state-of-the-art multi-objective optimization engine (the non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique to minimize the number of sampling sites while simultaneously minimizing the differences between the soil moisture map resulting from the kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, which is a multi-platform open-source GIS. The optimization framework is subject to the following three constraints: (A) sampling sites should be accessible to the crew on the ground; (B) the number of sites located in a specific soil texture should be greater than or equal to a minimum value; and (C) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum constraint. The first constraint is included in the proposed model to keep the approach practical. The second and third constraints are considered to guarantee that the collected samples from each soil texture categories

  6. Variation Of The Tully-Fisher Relation As A Function Of The Magnitude Interval Of A Sample Of Galaxies

    CERN Document Server

    Ruelas-Mayorga, A; Trujillo-Lara, M; Nigoche-Netro, A; Echevarría, J; García, A M; Ramírez-Vélez, J

    2016-01-01

    In this paper we carry out a preliminary study of the dependence of the Tully-Fisher Relation (TFR) on the width and intensity level of the absolute magnitude interval of a limited sample of 2411 galaxies taken from Mathewson & Ford (1996). The galaxies in this sample do not differ significantly in morphological type, and are distributed over an $\sim11$-magnitude interval ($-24.4 < I < -13.0$). We take as guides the papers by Nigoche-Netro et al. (2008, 2009, 2010), in which they study the dependence of the Kormendy (KR), Fundamental Plane (FPR) and Faber-Jackson (FJR) relations on the magnitude interval within which the observed galaxies used to derive these relations are contained. We were able to characterise the behaviour of the TFR coefficients $(\alpha, \beta)$ with respect to the width of the magnitude interval as well as with the brightness of the galaxies within this magnitude interval. We concluded that the TFR for this specific sample of galaxies depends on observational ...

  7. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases, during which a limited amount of sampling will be done at each site to inform sampling designs and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a shortgrass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300-meter transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6-m sub-transects running perpendicular to the 300-m transect. Clip harvest plots were co-located 4 m from the corresponding LAI transects and had dimensions of 0.1 m by 2 m. We conducted regression analyses

  8. Ad-Hoc vs. Standardized and Optimized Arthropod Diversity Sampling

    Directory of Open Access Journals (Sweden)

    Pedro Cardoso

    2009-09-01

    Full Text Available The use of standardized and optimized protocols has recently been advocated for different arthropod taxa instead of ad-hoc sampling or sampling with protocols defined on a case-by-case basis. We present a comparison of both sampling approaches applied to spiders in a natural area of Portugal. Tests were made of their efficiency, over-collection of common species, singleton proportions, species abundance distributions, average specimen size, average taxonomic distinctness and behavior of richness estimators. The standardized protocol revealed three main advantages: (1) higher efficiency; (2) more reliable estimation of true richness; and (3) meaningful comparisons between undersampled areas.

  9. Population Validity and Cross-Validity: Applications of Distribution Theory for Testing Hypotheses, Setting Confidence Intervals, and Determining Sample Size

    Science.gov (United States)

    Algina, James; Keselman, H. J.

    2008-01-01

    Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)

  10. A Comparison of Momentary Time Sampling and Partial-Interval Recording for Assessment of Effects of Social Skills Training

    Science.gov (United States)

    Radley, Keith C.; O'Handley, Roderick D.; Labrot, Zachary C.

    2015-01-01

    Assessment in social skills training often utilizes procedures such as partial-interval recording (PIR) and momentary time sampling (MTS) to estimate changes in duration in social engagements due to intervention. Although previous research suggests PIR to be more inaccurate than MTS in estimating levels of behavior, treatment analysis decisions…

  11. Statistical Inference for the Parameter of Pareto Distribution Based on Progressively Type-I Interval Censored Sample

    Institute of Scientific and Technical Information of China (English)

    Abdalroof M.S.; Zhao Zhi-wen; Wang De-hui

    2014-01-01

    In this paper, the estimation of parameters based on a progressively type-I interval censored sample from a Pareto distribution is studied. Different methods of estimation are discussed, including the mid-point approximation estimator, the maximum likelihood estimator and the moment estimator. The estimation procedures are discussed in detail and compared via Monte Carlo simulations in terms of their biases.

  12. Contrasting Perspectives of Anesthesiologists and Gastroenterologists on the Optimal Time Interval between Bowel Preparation and Endoscopic Sedation

    Directory of Open Access Journals (Sweden)

    Deepak Agrawal

    2015-01-01

    Full Text Available Background. The optimal time interval between the last ingestion of bowel prep and sedation for colonoscopy remains controversial, despite guidelines that sedation can be administered 2 hours after consumption of clear liquids. Objective. To determine current practice patterns among anesthesiologists and gastroenterologists regarding the optimal time interval for sedation after last ingestion of bowel prep and to understand the rationale underlying their beliefs. Design. Questionnaire survey of anesthesiologists and gastroenterologists in the USA. The questions were focused on the preferred time interval of endoscopy after a polyethylene glycol based preparation in routine cases and select conditions. Results. Responses were received from 109 anesthesiologists and 112 gastroenterologists. 96% of anesthesiologists recommended waiting longer than 2 hours until sedation, in contrast to only 26% of gastroenterologists. The main reason for waiting >2 hours was that PEG was not considered a clear liquid. Most anesthesiologists, but not gastroenterologists, waited longer in patients with history of diabetes or reflux. Conclusions. Anesthesiologists and gastroenterologists do not agree on the optimal interval for sedation after last drink of bowel prep. Most anesthesiologists prefer to wait longer than the recommended 2 hours for clear liquids. The data suggest a need for clearer guidelines on this issue.

  13. Optimal minimax designs over a prespecified interval in a heteroscedastic polynomial model.

    Science.gov (United States)

    Chen, Ray-Bing; Wong, Weng Kee; Li, Kun-Yu

    2008-09-15

    Minimax optimal designs can be useful for estimating a response surface, but they are notoriously difficult to study analytically. We provide formulae for three types of minimax optimal designs over a user-specified region. We focus on polynomial models with various types of heteroscedastic errors, but the design strategy is applicable to other types of linear models and optimality criteria. Relationships among the three types of minimax optimal designs are discussed.

  14. Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research.

    Science.gov (United States)

    Duan, Naihua; Bhaumik, Dulal K; Palinkas, Lawrence A; Hoagwood, Kimberly

    2015-09-01

    Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research.

  15. Optimal allocation of point-count sampling effort

    Science.gov (United States)

    Barker, R.J.; Sauer, J.R.; Link, W.A.

    1993-01-01

    Both unlimited- and fixed-radius point counts only provide indices to population size. Because longer count durations lead to counting a higher proportion of individuals at the point, proper design of these surveys must incorporate both count duration and the sampling characteristics of population size. Using information about the relationship between the proportion of individuals detected at a point and count duration, we present a method of optimizing a point-count survey given a fixed total time for surveying and travelling between count points. The optimization can be based on several quantities that measure precision, accuracy, or power of tests based on counts, including (1) mean-square error of estimated population change; (2) mean-square error of average count; (3) maximum expected total count; or (4) power of a test for differences in average counts. Optimal solutions depend on a function that relates count duration at a point to the proportion of animals detected. We model this function using exponential and Weibull distributions, and use numerical techniques to conduct the optimization. We provide an example of the procedure in which the function is estimated from data of the cumulative number of individual birds seen for different count durations for three species of Hawaiian forest birds. In the example, optimal count duration at a point can differ greatly depending on the quantities that are optimized. Optimization of the mean-square error or of tests based on average counts generally requires longer count durations than does estimation of population change. A clear formulation of the goals of the study is a critical step in the optimization process.
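
    One of the criteria listed above, the maximum expected total count, admits a very small worked example: with an exponential detection function p(t) = 1 - exp(-λt), a fixed travel time between points and a fixed total survey time, the expected total count can simply be scanned over candidate count durations. The detection rate, travel time and survey time below are illustrative.

    ```python
    import math

    def optimal_count_duration(total_time, travel_time, lam, durations=None):
        """Pick the per-point count duration that maximizes the expected
        relative total count, assuming p(t) = 1 - exp(-lam * t) and a fixed
        travel time between points."""
        if durations is None:
            durations = [m / 10 for m in range(1, 301)]   # 0.1 .. 30 min
        best = None
        for t in durations:
            points = total_time / (t + travel_time)       # points visited
            expected = points * (1.0 - math.exp(-lam * t))
            if best is None or expected > best[1]:
                best = (t, expected)
        return best

    # e.g. 480 min of field time, 5 min travel between points,
    # detection rate lam = 0.3 per minute
    t_opt, count = optimal_count_duration(480, 5, 0.3)
    print(f"{t_opt:.1f} min per point -> expected relative count {count:.1f}")
    ```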

  16. [Optimized sample preparation for metabolome studies on Streptomyces coelicolor].

    Science.gov (United States)

    Li, Yihong; Li, Shanshan; Ai, Guomin; Wang, Weishan; Zhang, Buchang; Yang, Keqian

    2014-04-01

    Streptomycetes produce many antibiotics and are important model microorganisms for scientific research and antibiotic production. Metabolomics is an emerging technological platform to analyze low-molecular-weight metabolites in a given organism qualitatively and quantitatively. Compared to other omics platforms, metabolomics has a greater advantage in monitoring metabolic flux distribution and thus identifying key metabolites related to a target metabolic pathway. The present work aims at establishing a rapid, accurate sample preparation protocol for metabolomics analysis in streptomycetes. Several sample preparation steps, including cell quenching time, cell separation method, conditions for metabolite extraction and metabolite derivatization, were optimized. Then, the metabolic profiles of Streptomyces coelicolor during different growth stages were analyzed by GC-MS. The optimal sample preparation conditions were as follows: time of low-temperature quenching, 4 min; cell separation by fast filtration; time of freeze-thaw, 45 s/3 min; and metabolite derivatization at 40 degrees C for 90 min. By using this optimized protocol, 103 metabolites were finally identified from a sample of S. coelicolor, which are distributed across central metabolic pathways (glycolysis, pentose phosphate pathway and citrate cycle) and amino acid, fatty acid and nucleotide metabolic pathways, etc. By comparing the temporal profiles of these metabolites, the amino acid and fatty acid metabolic pathways were found to stay at a high level during stationary phase; therefore, these pathways may play an important role during the transition between primary and secondary metabolism. An optimized protocol of sample preparation was established and applied to metabolomics analysis of S. coelicolor, and 103 metabolites were identified. The temporal profiles of the metabolites reveal that the amino acid and fatty acid metabolic pathways may play an important role in the transition from primary to

  17. Validation of genetic algorithm-based optimal sampling for ocean data assimilation

    Science.gov (United States)

    Heaney, Kevin D.; Lermusiaux, Pierre F. J.; Duda, Timothy F.; Haley, Patrick J.

    2016-08-01

    Regional ocean models are capable of forecasting conditions for usefully long intervals of time (days) provided that initial and ongoing conditions can be measured. In resource-limited circumstances, the placement of sensors in optimal locations is essential. Here, a nonlinear optimization approach to determine optimal adaptive sampling that uses the genetic algorithm (GA) method is presented. The method determines sampling strategies that minimize a user-defined physics-based cost function. The method is evaluated using identical twin experiments, comparing hindcasts from an ensemble of simulations that assimilate data selected using the GA adaptive sampling and other methods. For skill metrics, we employ the reduction of the ensemble root mean square error (RMSE) between the "true" data-assimilative ocean simulation and the different ensembles of data-assimilative hindcasts. A five-glider optimal sampling study is set up for a 400 km × 400 km domain in the Middle Atlantic Bight region, along the New Jersey shelf-break. Results are compared for several ocean and atmospheric forcing conditions.

  18. Optimal image reconstruction intervals for non-invasive coronary angiography with 64-slice CT

    Energy Technology Data Exchange (ETDEWEB)

    Leschka, Sebastian; Husmann, Lars; Desbiolles, Lotus M.; Boehm, Thomas; Marincek, Borut; Alkadhi, Hatem [University Hospital Zurich, Institute of Diagnostic Radiology, Zurich (Switzerland); Gaemperli, Oliver; Schepis, Tiziano; Koepfli, Pascal [University Hospital Zurich, Cardiovascular Center, Zurich (Switzerland); Kaufmann, Philipp A. [University Hospital Zurich, Cardiovascular Center, Zurich (Switzerland); University of Zurich, Center for Integrative Human Physiology, Zurich (Switzerland)

    2006-09-15

    The reconstruction intervals providing the best image quality for non-invasive coronary angiography with 64-slice computed tomography (CT) were evaluated. Contrast-enhanced, retrospectively electrocardiography (ECG)-gated 64-slice CT coronary angiography was performed in 80 patients (47 male, 33 female; mean age 62.1±10.6 years). Thirteen data sets were reconstructed in 5% increments from 20 to 80% of the R-R interval. Depending on the average heart rate during scanning, patients were grouped as <65 bpm (n=49) and ≥65 bpm (n=31). Two blinded and independent readers assessed the image quality of each coronary segment with a diameter ≥1.5 mm using the following scores: 1, no motion artifacts; 2, minor artifacts; 3, moderate artifacts; 4, severe artifacts; and 5, not evaluative. The average heart rate was 63.3±13.1 bpm (range 38-102). Acceptable image quality (scores 1-3) was achieved in 99.1% of all coronary segments (1,162/1,172; mean image quality score 1.55±0.77) in the best reconstruction interval. Best image quality was found at 60% and 65% of the R-R interval for all patients and for each heart rate subgroup, whereas motion artifacts occurred significantly more often (P<0.01) at other reconstruction intervals. At heart rates <65 bpm, acceptable image quality was found in all coronary segments at 60%. At heart rates ≥65 bpm, the whole coronary artery tree could be visualized with acceptable image quality in 87% (27/31) of the patients at 60%, while ten segments in four patients were rated as non-diagnostic (scores 4-5) at any reconstruction interval. In conclusion, 64-slice CT coronary angiography provides the best overall image quality in mid-diastole. At heart rates <65 bpm, diagnostic image quality of all coronary segments can be obtained at a single reconstruction interval of 60%. (orig.)

  19. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
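
    The filtering step described above, using a Bayesian classifier to screen candidate designs before spending expensive objective evaluations on them, can be sketched with a tiny hand-rolled Bernoulli naive Bayes model standing in for the report's Bayesian network classifier. The evaluation budget, batch size, labeling rule and toy objective below are all illustrative.

    ```python
    import math
    import random

    def fit_bernoulli_nb(designs, labels, s=1.0):
        """Tiny Bernoulli naive Bayes over binary design vectors."""
        model = {}
        for cls in set(labels):
            rows = [d for d, y in zip(designs, labels) if y == cls]
            prior = len(rows) / len(designs)
            probs = [(sum(col) + s) / (len(rows) + 2 * s) for col in zip(*rows)]
            model[cls] = (prior, probs)
        return model

    def p_promising(model, design):
        """Posterior probability that `design` belongs to class 1 ('promising')."""
        logp = {}
        for cls, (prior, probs) in model.items():
            logp[cls] = math.log(prior) + sum(
                math.log(p if bit else 1.0 - p) for bit, p in zip(design, probs))
        m = max(logp.values())
        return math.exp(logp[1] - m) / sum(math.exp(v - m) for v in logp.values())

    def cgs_minimize(objective, n_bits, budget=60, batch=40, keep=0.5, seed=0):
        """Spend half the budget on random designs, then let the classifier
        pick which new candidates deserve the remaining expensive evaluations."""
        rng = random.Random(seed)

        def random_design():
            return tuple(rng.randrange(2) for _ in range(n_bits))

        evaluated = {d: objective(d) for d in {random_design() for _ in range(budget // 2)}}
        while len(evaluated) < budget:
            ranked = sorted(evaluated, key=evaluated.get)
            good = set(ranked[: max(1, int(len(ranked) * keep))])
            model = fit_bernoulli_nb(ranked, [1 if d in good else 0 for d in ranked])
            candidates = [random_design() for _ in range(batch)]
            best_new = max(candidates, key=lambda d: p_promising(model, d))
            if best_new not in evaluated:
                evaluated[best_new] = objective(best_new)
        return min(evaluated, key=evaluated.get)

    # toy objective: minimize the number of design bits that are switched on
    print(cgs_minimize(lambda d: sum(d), n_bits=12))
    ```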

  20. Optimal tests for the two-sample spherical location problem

    CERN Document Server

    Ley, Christophe; Verdebout, Thomas

    2012-01-01

    We tackle the classical two-sample spherical location problem for directional data by having recourse to the Le Cam methodology, habitually used in classical "linear" multivariate analysis. More precisely we construct locally and asymptotically optimal (in the maximin sense) parametric tests, which we then turn into semi-parametric ones in two distinct ways. First, by using a studentization argument; this leads to so-called pseudo-FvML tests. Second, by resorting to the invariance principle; this leads to efficient rank-based tests. Within each construction, the semi-parametric tests inherit optimality under a given distribution (the FvML in the first case, any rotationally symmetric one in the second) from their parametric counterparts and also improve on the latter by being valid under the whole class of rotationally symmetric distributions. Asymptotic relative efficiencies are calculated and the finite-sample behavior of the proposed tests is investigated by means of a Monte Carlo simulation.

  1. ENVELOPING THEORY BASED METHOD FOR THE DETERMINATION OF PATH INTERVAL AND TOOL PATH OPTIMIZATION FOR SURFACE MACHINING

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    An enveloping theory based method for the determination of the path interval in three-axis NC machining of free-form surfaces is presented, and a practical algorithm and measures for improving its computational efficiency are given. Not only can the given algorithm be used for ball-end, flat-end, torus and drum cutters, but the proposed method can also be extended to arbitrary milling cutters. Thus, the problem of how to rigorously calculate the path interval in three-axis NC machining of free-form surfaces with non-ball-end cutters is resolved effectively. On this basis, the factors that affect the path interval are analyzed, and methods for optimizing the tool path are explored.
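
    For the simplest configuration treated by path-interval methods, a ball-end cutter on a locally flat surface, the path interval follows directly from the cusp geometry; the general free-form and non-ball-end cases are exactly what the enveloping-theory analysis above addresses. The sketch below implements only the textbook flat-surface formula, with illustrative numbers.

    ```python
    import math

    def path_interval_ball_end(radius, scallop):
        """Path interval for a ball-end cutter on a (locally) flat surface:
        adjacent passes of a sphere of radius R leave a cusp of height h when
        the passes are L = 2*sqrt(h*(2R - h)) apart.  Curved surfaces and
        other cutter shapes require the enveloping-theory analysis instead."""
        if not 0 < scallop < radius:
            raise ValueError("scallop height must lie between 0 and the cutter radius")
        return 2.0 * math.sqrt(scallop * (2.0 * radius - scallop))

    # 10 mm diameter ball-end cutter (R = 5 mm), 0.01 mm allowed scallop height
    print(round(path_interval_ball_end(5.0, 0.01), 3), "mm")
    ```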

  2. Efficient infill sampling for unconstrained robust optimization problems

    Science.gov (United States)

    Rehman, Samee Ur; Langelaar, Matthijs

    2016-08-01

    A novel infill sampling criterion is proposed for efficient estimation of the global robust optimum of problems based on expensive computer simulations. The algorithm is especially geared towards addressing problems that are affected by uncertainties in design variables and problem parameters. The method is based on constructing metamodels using Kriging and adaptively sampling the response surface via a principle of expected improvement adapted for robust optimization. Several numerical examples and an engineering case study are used to demonstrate the ability of the algorithm to estimate the global robust optimum using a limited number of expensive function evaluations.
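
    The expected-improvement principle mentioned above has a closed form under a Gaussian (Kriging) prediction. The sketch below implements the standard single-point EI for minimization; the robust adaptation used in the paper, which works with worst-case rather than nominal quantities, is not reproduced, and the numbers are illustrative.

    ```python
    import math

    def expected_improvement(mu, sigma, f_best):
        """Expected improvement of a candidate under a Gaussian prediction
        with mean `mu` and standard deviation `sigma`, relative to the best
        objective value found so far (minimization)."""
        if sigma <= 0.0:
            return max(f_best - mu, 0.0)
        z = (f_best - mu) / sigma
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
        cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
        return (f_best - mu) * cdf + sigma * pdf

    # candidate predicted at 1.2 +/- 0.4 when the incumbent optimum is 1.0
    print(round(expected_improvement(1.2, 0.4, 1.0), 4))
    ```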

  3. A multiobjective interval programming model for wind-hydrothermal power system dispatching using 2-step optimization algorithm.

    Science.gov (United States)

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear; therefore, achieving an accurate solution to such a complex problem is a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision.

  4. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Kun Ren

    2014-01-01

    Full Text Available Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is nonlinear; therefore, achieving an accurate solution to such a complex problem is a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision.

  5. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Science.gov (United States)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP)—each with multiple Total Maximum Daily Load (TMDL) targets—were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CPs were met with the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million—marginally higher than, but approximately equal to, that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.

  6. A Risk Explicit Interval Linear Programming Model for Uncertainty-Based Environmental Economic Optimization in the Lake Fuxian Watershed, China

    Directory of Open Access Journals (Sweden)

    Xiaoling Zhang

    2013-01-01

    Full Text Available The conflict of water environment protection and economic development has brought severe water pollution and restricted the sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve an integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of optimization schemes. Decision makers’ preferences for risk levels can be expressed through inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. Through balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of “low risk and high return efficiency” in the trade-off curve. The representative schemes at the turning points of two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental economic optimization scheme in integrated watershed management.

  7. A risk explicit interval linear programming model for uncertainty-based environmental economic optimization in the Lake Fuxian watershed, China.

    Science.gov (United States)

    Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

    The conflict of water environment protection and economic development has brought severe water pollution and restricted the sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve an integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of optimization schemes. Decision makers' preferences for risk levels can be expressed through inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. Through balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental economic optimization scheme in integrated watershed management.
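
    The interval linear programming step can be pictured as two ordinary LP sub-models built from the favourable and unfavourable ends of the interval coefficients. The sketch below is a generic illustration with invented numbers for a two-industry example; it is not the Lake Fuxian model or its data.

        import numpy as np
        from scipy.optimize import linprog

        # Toy example: maximize interval net benefit subject to an interval pollutant-load limit.
        c_lo, c_hi = np.array([3.0, 5.0]), np.array([4.0, 6.0])      # benefit per unit output
        a_lo, a_hi = np.array([[1.0, 2.0]]), np.array([[1.5, 2.5]])  # load coefficients
        b_lo, b_hi = np.array([40.0]), np.array([50.0])              # allowable load interval

        def solve(c, a, b):
            # linprog minimizes, so negate c to maximize the benefit
            res = linprog(-c, A_ub=a, b_ub=b, bounds=[(0, None), (0, None)], method="highs")
            return res.x, -res.fun

        # Optimistic sub-model: high benefits, low load coefficients, loose limit.
        x_opt, f_opt = solve(c_hi, a_lo, b_hi)
        # Pessimistic sub-model: low benefits, high load coefficients, tight limit.
        x_pes, f_pes = solve(c_lo, a_hi, b_lo)
        print("benefit interval:", (f_pes, f_opt))

    The REILP extension then trades off how far a decision leans toward the optimistic bound against the risk of violating the pessimistic constraints.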

  8. Simultaneous beam sampling and aperture shape optimization for SPORT

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Ye, Yinyu [Department of Management Science and Engineering, Stanford University, Stanford, California 94305 (United States)

    2015-02-15

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  9. Optimum sampling interval for evaluating ferromanganese nodule resources in the central Indian Ocean

    Digital Repository Service at National Institute of Oceanography (India)

    Jauhari, P.; Kodagali, V.N.; Sankar, S.J.

    by progressively reducing the grid spacing. Sampling the corners of the 1 degree survey block (approximately 110-km spacing), i.e., four stations with 5-7 free-fall operations (sampling locations) in each case, indicated a nodule abundance of 3.50 kg/m²...

  10. Optimal reference interval for homeostasis model assessment of insulin resistance in a Japanese population.

    Science.gov (United States)

    Yamada, Chizumi; Mitsuhashi, Toshitake; Hiratsuka, Noboru; Inabe, Fumiyo; Araida, Nami; Takahashi, Eiko

    2011-10-07

    The aim of the present study was to establish a reference interval for homeostasis model assessment of insulin resistance (HOMA-IR) in a Japanese population based on the C28-A3 document from the Clinical and Laboratory Standards Institute (CLSI). We selected healthy subjects aged 20-79 years, with fasting plasma glucose reference limits of HOMA-IR. We selected 2173 subjects as reference individuals, and 2153 subjects were used for analysis. The reference interval for HOMA-IR was established as between 0.4 and 2.4. This represents the first reference interval study for HOMA-IR that applies the stringent CLSI C28-A3 document. HOMA-IR ≥ 2.5 should be considered a reasonable indicator of insulin resistance in Japanese. (J Diabetes Invest, doi: 10.1111/j.2040-1124.2011.00113.x, 2011).
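
    The nonparametric reference-interval calculation recommended by CLSI C28-A3 reduces, once exclusions and outlier handling are done, to taking the central 95% of the reference distribution. A small sketch with simulated values standing in for the reference individuals; the log-normal shape and seed are assumptions for illustration only.

        import numpy as np

        # Simulated HOMA-IR values standing in for the ~2000 reference individuals.
        rng = np.random.default_rng(0)
        homa_ir = rng.lognormal(mean=0.0, sigma=0.45, size=2153)

        # Nonparametric reference interval: the 2.5th and 97.5th percentiles of the
        # reference distribution, as recommended when n >= 120.
        lower, upper = np.percentile(homa_ir, [2.5, 97.5])
        print(f"reference interval: {lower:.1f} - {upper:.1f}")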

  11. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    Science.gov (United States)

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  12. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    Science.gov (United States)

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  13. Interval sampling of end-expiratory hydrogen (H2) concentrations to quantify carbohydrate malabsorption by means of lactulose standards

    DEFF Research Database (Denmark)

    Rumessen, J J; Hamberg, O; Gudmand-Høyer, E

    1990-01-01

    Lactulose H2 breath tests are widely used for quantifying carbohydrate malabsorption, but the validity of the commonly used technique (interval sampling of H2 concentrations) has not been systematically investigated. In eight healthy adults we studied the reproducibility of the technique and the ... (-60%, interquartile range). This corresponded to the deviation in reproducibility of the standard dose. We suggest that individual estimates of carbohydrate malabsorption by means of H2 breath tests should be interpreted with caution if tests of reproducibility are not incorporated. Both areas under curves and peak H...

  14. Optimal time intervals between preoperative radiotherapy or chemoradiotherapy and surgery in rectal cancer?

    Directory of Open Access Journals (Sweden)

    Bengt eGlimelius

    2014-04-01

    Full Text Available Background: In rectal cancer therapy, radiotherapy or chemoradiotherapy (RT/CRT) is extensively used preoperatively to (i) decrease local recurrence risks, (ii) allow radical surgery in non-resectable tumours, (iii) increase the chances of sphincter-saving surgery or (iv) organ preservation. There is a growing interest among clinicians and scientists to prolong the interval from the RT/CRT to surgery to achieve maximal tumour regression and to diminish complications during surgery. Methods: The pros and cons of delaying surgery depending upon the aim of the preoperative RT/CRT are critically evaluated. Results: Depending upon the clinical situation, the need for a time interval prior to surgery to allow tumour regression varies. In the first and most common situation (i), no regression is needed and any delay beyond what is needed for the acute radiation reaction in surrounding tissues to wash out can potentially only be deleterious. After short-course RT (5Gyx5) with immediate surgery, the ideal time between the last radiation fraction and surgery is 2-5 days, since a slightly longer interval appears to increase surgical complications. A delay beyond 4 weeks appears safe; it results in tumour regression, including pathologic complete responses, but is not yet fully evaluated concerning oncologic outcome. Surgical complications do not appear to be influenced by the CRT-surgery interval within reasonable limits (about 4-12 weeks), but this has not been sufficiently explored. Maximum tumour regression may not be seen in rectal adenocarcinomas until after several months; thus, a longer than usual delay may be of benefit in well responding tumours if limited or no surgery is planned, as in (iii) or (iv), otherwise not. Conclusions: A longer time interval is undoubtedly of benefit in some clinical situations but may be counterproductive in most situations.

  15. Near-Optimal Random Walk Sampling in Distributed Networks

    CERN Document Server

    Sarma, Atish Das; Pandurangan, Gopal

    2012-01-01

    Performing random walks in networks is a fundamental primitive that has found numerous applications in communication networks such as token management, load balancing, network topology discovery and construction, search, and peer-to-peer membership management. While several such algorithms are ubiquitous, and use numerous random walk samples, the walks themselves have always been performed naively. In this paper, we focus on the problem of performing random walk sampling efficiently in a distributed network. Given bandwidth constraints, the goal is to minimize the number of rounds and messages required to obtain several random walk samples in a continuous online fashion. We present the first round and message optimal distributed algorithms that present a significant improvement on all previous approaches. The theoretical analysis and comprehensive experimental evaluation of our algorithms show that they perform very well in different types of networks of differing topologies. In particular, our results show h...
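
    For orientation, the naive one-step-at-a-time walk that the paper improves upon can be written in a few lines. The adjacency dictionary below is an invented toy network; in the distributed setting described above, each step would cost one message between neighbouring machines, which is exactly the cost the paper's algorithms reduce.

        import random

        def random_walk_sample(adj, start, length, seed=0):
            """Plain (naive) random walk on an undirected graph given as an adjacency dict."""
            random.seed(seed)
            node = start
            for _ in range(length):
                node = random.choice(adj[node])   # one message per step in a distributed network
            return node

        # Tiny illustrative network.
        adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
        print([random_walk_sample(adj, start=0, length=10, seed=s) for s in range(5)])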

  16. Optimal sampling frequency in recording of resistance training exercises.

    Science.gov (United States)

    Bardella, Paolo; Carrasquilla García, Irene; Pozzo, Marco; Tous-Fajardo, Julio; Saez de Villareal, Eduardo; Suarez-Arrones, Luis

    2017-03-01

    The purpose of this study was to analyse the raw lifting speed collected during four different resistance training exercises to assess the optimal sampling frequency. Eight physically active participants performed sets of Squat Jumps, Countermovement Jumps, Squats and Bench Presses at a maximal lifting speed. A linear encoder was used to measure the instantaneous speed at a 200 Hz sampling rate. Subsequently, the power spectrum of the signal was computed by evaluating its Discrete Fourier Transform. The sampling frequency needed to reconstruct the signals with an error of less than 0.1% was f99.9 = 11.615 ± 2.680 Hz for the exercise exhibiting the largest bandwidth, with the absolute highest individual value being 17.467 Hz. There was no difference between sets in any of the exercises. Using the closest integer sampling frequency value (25 Hz) yielded a reconstruction of the signal up to 99.975 ± 0.025% of its total in the worst case. In conclusion, a sampling rate of 25 Hz or above is more than adequate to record raw speed data and compute power during resistance training exercises, even under the most extreme circumstances during explosive exercises. Higher sampling frequencies provide no increase in the recording precision and may instead have adverse effects on the overall data quality.
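
    The analysis described above amounts to locating the frequency below which a chosen fraction (here 99.9%) of the spectral power lies, and then keeping the sampling rate comfortably above twice that value. A sketch with a synthetic lifting-speed trace; the signal, its components and the threshold are assumptions, not the study data.

        import numpy as np

        def f_containing(signal, fs, fraction=0.999):
            """Lowest frequency below which `fraction` of the total spectral power lies."""
            spectrum = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
            freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
            cumulative = np.cumsum(spectrum) / np.sum(spectrum)
            return freqs[np.searchsorted(cumulative, fraction)]

        # Synthetic lifting-speed trace sampled at 200 Hz: slow movement plus a small fast component.
        fs = 200.0
        t = np.arange(0, 2.0, 1.0 / fs)
        speed = 0.8 * np.sin(2 * np.pi * 1.5 * t) + 0.05 * np.sin(2 * np.pi * 9.0 * t)

        f999 = f_containing(speed, fs)
        print("f99.9 =", f999, "Hz; suggested sampling rate >=", 2 * f999, "Hz")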

  17. Optimal Point-to-Point Motion Planning of Flexible Parallel Manipulator with Multi-Interval Radau Pseudospectral Method

    Directory of Open Access Journals (Sweden)

    Kong Minxiu

    2016-01-01

    Full Text Available Optimal point-to-point motion planning of a flexible parallel manipulator was investigated in this paper, with the 3RRR parallel manipulator taken as the object. First, an optimal point-to-point motion planning problem was constructed with consideration of the rigid-flexible coupling dynamic model and actuator dynamics. Then, the multi-interval Legendre–Gauss–Radau (LGR) pseudospectral method was introduced to transform the optimal control problem into a Nonlinear Programming (NLP) problem. Finally, the simulation and experiment were carried out on the flexible parallel manipulator. Compared with the line motion of quintic polynomial planning, the proposed method could constrain the flexible displacement amplitude and suppress the residual vibration.

  18. Optimal inter-stimulus interval for interpolated twitch technique when using double pulse stimulation

    OpenAIRE

    Karimpour, Rana

    2013-01-01

    Interpolated twitch technique is a method frequently used to assess voluntary activation. This method uses electrically evoked twitch superimposed on the voluntary activity and its comparison with the twitch in rested muscle i.e. control twitch, to evaluate completeness of muscle activation. The purpose of this study was to investigate the effect of interval in paired stimulation on control twitch in young and elderly individuals with bent and flexed knee positions. Supramaximal electri...

  19. Optimal Design and Tuning of PID-Type Interval Type-2 Fuzzy Logic Controllers for Delta Parallel Robots

    Directory of Open Access Journals (Sweden)

    Xingguo Lu

    2016-05-01

    Full Text Available In this work, we propose a new method for the optimal design and tuning of a Proportional-Integral-Derivative type (PID-type) interval type-2 fuzzy logic controller (IT2 FLC) for Delta parallel robot trajectory tracking control. The presented methodology starts with an optimal design problem of IT2 FLC. A group of IT2 FLCs are obtained by blurring the membership functions using a variable called blurring degree. By comparing the performance of the controllers, the optimal structure of IT2 FLC is obtained. Then, a multi-objective optimization problem is formulated to tune the scaling factors of the PID-type IT2 FLC. The Non-dominated Sorting Genetic Algorithm (NSGA-II) is adopted to solve the constrained nonlinear multi-objective optimization problem. Simulation results of the optimized controller are presented and discussed regarding application in the Delta parallel robot. The proposed method provides an effective way to design and tune the PID-type IT2 FLC with a desired control performance.

  20. Optimal Design and Tuning of PID-type Interval Type-2 Fuzzy Logic Controllers for Delta Parallel Robots

    Directory of Open Access Journals (Sweden)

    Xingguo Lu

    2016-05-01

    Full Text Available In this work, we propose a new method for the optimal design and tuning of a Proportional Integral-Derivative type (PID-type) interval type-2 fuzzy logic controller (IT2 FLC) for Delta parallel robot trajectory tracking control. The presented methodology starts with an optimal design problem of IT2 FLC. A group of IT2 FLCs are obtained by blurring the membership functions using a variable called blurring degree. By comparing the performance of the controllers, the optimal structure of IT2 FLC is obtained. Then, a multi-objective optimization problem is formulated to tune the scaling factors of the PID-type IT2 FLC. The Non-dominated Sorting Genetic Algorithm (NSGA-II) is adopted to solve the constrained nonlinear multi-objective optimization problem. Simulation results of the optimized controller are presented and discussed regarding application in the Delta parallel robot. The proposed method provides an effective way to design and tune the PID-type IT2 FLC with a desired control performance.

  1. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractant for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solution we suggest ethylene glycol as a suitable sampling solution when

  2. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Science.gov (United States)

    Gossner, Martin M; Struwe, Jan-Frederic; Sturm, Sarah; Max, Simeon; McCutcheon, Michelle; Weisser, Wolfgang W; Zytynska, Sharon E

    2016-01-01

    There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractant for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by Ethanol-containing sampling solution we suggest ethylene glycol as a suitable sampling solution when genetic analysis

  3. Sampling-based Algorithms for Optimal Motion Planning

    CERN Document Server

    Karaman, Sertac

    2011-01-01

    During the last decade, sampling-based path planning algorithms, such as Probabilistic RoadMaps (PRM) and Rapidly-exploring Random Trees (RRT), have been shown to work well in practice and possess theoretical guarantees such as probabilistic completeness. However, little effort has been devoted to the formal analysis of the quality of the solution returned by such algorithms, e.g., as a function of the number of samples. The purpose of this paper is to fill this gap, by rigorously analyzing the asymptotic behavior of the cost of the solution returned by stochastic sampling-based algorithms as the number of samples increases. A number of negative results are provided, characterizing existing algorithms, e.g., showing that, under mild technical conditions, the cost of the solution returned by broadly used sampling-based algorithms converges almost surely to a non-optimal value. The main contribution of the paper is the introduction of new algorithms, namely, PRM* and RRT*, which are provably asymptotically opti...

  4. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    Science.gov (United States)

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between

  5. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization.

    Science.gov (United States)

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between
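
    The resampling comparison at the core of this design question can be illustrated with a simple rarefaction of synthetic capture records; the assemblage structure below (an early-night and a partly different late-night species pool) is invented purely to show the mechanics, not drawn from the study's datasets.

        import numpy as np

        rng = np.random.default_rng(11)

        # Synthetic capture records: hour of night and species id per capture.
        hours = rng.integers(0, 12, size=2000)
        species = np.where(hours < 6,
                           rng.integers(0, 40, size=2000),      # early-night assemblage
                           rng.integers(20, 60, size=2000))     # partly different late-night assemblage

        def rarefied_richness(mask, n_draws=500, sample_size=300):
            """Mean species count in random subsamples of the records selected by `mask`."""
            idx = np.where(mask)[0]
            return np.mean([len(np.unique(species[rng.choice(idx, size=sample_size)]))
                            for _ in range(n_draws)])

        print("first six hours:", rarefied_richness(hours < 6))
        print("whole night:    ", rarefied_richness(np.ones_like(hours, dtype=bool)))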

  6. Optimal CCD readout by digital correlated double sampling

    CERN Document Server

    Alessandri, Cristobal; Guzman, Dani; Passalacqua, Ignacio; Alvarez-Fontecilla, Enrique; Guarini, Marcelo

    2015-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-dom...
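
    In its simplest form, digital correlated double sampling averages the oversampled reset (reference) level and the signal level and takes their difference; the equal-weight filter and the synthetic pixel readout below are illustrative assumptions, whereas the paper analyses more general filter weightings and their noise trade-offs.

        import numpy as np

        def dcds_estimate(samples, n_ref, n_sig):
            """Digital CDS with a plain averaging filter over reset and signal windows."""
            reference = np.mean(samples[:n_ref])
            signal = np.mean(samples[-n_sig:])
            return signal - reference

        # Synthetic pixel readout: reset level 1000 ADU, signal level 1250 ADU, white noise.
        rng = np.random.default_rng(3)
        pixel = np.concatenate([1000 + rng.normal(0, 5, 64), 1250 + rng.normal(0, 5, 64)])
        print(dcds_estimate(pixel, n_ref=64, n_sig=64))   # approximately 250 ADU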

  7. The Duration of Uncertain Times: Audiovisual Information about Intervals Is Integrated in a Statistically Optimal Fashion

    Science.gov (United States)

    Hartcher-O'Brien, Jess; Di Luca, Massimiliano; Ernst, Marc O.

    2014-01-01

    Often multisensory information is integrated in a statistically optimal fashion where each sensory source is weighted according to its precision. This integration scheme is statistically optimal because it theoretically results in unbiased perceptual estimates with the highest precision possible. There is a current lack of consensus about how the nervous system processes multiple sensory cues to elapsed time. In order to shed light upon this, we adopt a computational approach to pinpoint the integration strategy underlying duration estimation of audio/visual stimuli. One of the assumptions of our computational approach is that the multisensory signals redundantly specify the same stimulus property. Our results clearly show that despite claims to the contrary, perceived duration is the result of an optimal weighting process, similar to that adopted for estimates of space. That is, participants weight the audio and visual information to arrive at the most precise, single duration estimate possible. The work also disentangles how different integration strategies – i.e. considering the time of onset/offset of signals - might alter the final estimate. As such we provide the first concrete evidence of an optimal integration strategy in human duration estimates. PMID:24594578
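
    The statistically optimal (maximum-likelihood) integration rule referred to above weights each cue by its precision, that is, the inverse of its variance. A minimal sketch with invented auditory and visual duration estimates; the numbers are illustrative only.

        import numpy as np

        def integrate(estimates, sigmas):
            """Precision-weighted fusion of independent Gaussian cues (maximum-likelihood rule)."""
            sigmas = np.asarray(sigmas, dtype=float)
            w = 1.0 / sigmas ** 2                     # weight = reliability = 1/variance
            w /= w.sum()
            fused = np.dot(w, estimates)
            fused_sigma = np.sqrt(1.0 / np.sum(1.0 / sigmas ** 2))
            return fused, fused_sigma

        # Illustrative auditory and visual duration estimates (ms) with their standard deviations.
        duration, sd = integrate([620.0, 580.0], [40.0, 80.0])
        print(duration, sd)   # the fused estimate lies closer to the more reliable (auditory) cue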

  8. Determining the Bayesian optimal sampling strategy in a hierarchical system.

    Energy Technology Data Exchange (ETDEWEB)

    Grace, Matthew D.; Ringland, James T.; Boggs, Paul T.; Pebay, Philippe Pierre

    2010-09-01

    Consider a classic hierarchy tree as a basic model of a 'system-of-systems' network, where each node represents a component system (which may itself consist of a set of sub-systems). For this general composite system, we present a technique for computing the optimal testing strategy, which is based on Bayesian decision analysis. In previous work, we developed a Bayesian approach for computing the distribution of the reliability of a system-of-systems structure that uses test data and prior information. This allows for the determination of both an estimate of the reliability and a quantification of confidence in the estimate. Improving the accuracy of the reliability estimate and increasing the corresponding confidence require the collection of additional data. However, testing all possible sub-systems may not be cost-effective, feasible, or even necessary to achieve an improvement in the reliability estimate. To address this sampling issue, we formulate a Bayesian methodology that systematically determines the optimal sampling strategy under specified constraints and costs that will maximally improve the reliability estimate of the composite system, e.g., by reducing the variance of the reliability distribution. This methodology involves calculating the 'Bayes risk of a decision rule' for each available sampling strategy, where risk quantifies the relative effect that each sampling strategy could have on the reliability estimate. A general numerical algorithm is developed and tested using an example multicomponent system. The results show that the procedure scales linearly with the number of components available for testing.
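
    The flavour of the Bayes-risk comparison can be conveyed by a Monte Carlo estimate of how much a batch of additional tests is expected to shrink the posterior variance of a component's reliability under a Beta prior. The priors and test budget below are invented, and the paper's full method evaluates the composite system rather than single components in isolation.

        import numpy as np
        from scipy import stats

        def expected_posterior_var(a, b, n_tests, n_mc=20_000, seed=0):
            """Monte Carlo estimate of the expected posterior variance of a Beta(a, b)
            reliability after n_tests additional pass/fail tests."""
            rng = np.random.default_rng(seed)
            p = rng.beta(a, b, size=n_mc)                    # draw "true" reliabilities from the prior
            k = rng.binomial(n_tests, p)                     # simulate test outcomes
            post_a, post_b = a + k, b + (n_tests - k)
            return np.mean(post_a * post_b / ((post_a + post_b) ** 2 * (post_a + post_b + 1)))

        # Which of two hypothetical sub-systems should receive a budget of 10 extra tests?
        priors = {"subsystem A": (2, 1), "subsystem B": (20, 2)}
        for name, (a, b) in priors.items():
            gain = stats.beta(a, b).var() - expected_posterior_var(a, b, 10)
            print(name, "expected variance reduction:", round(gain, 4))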

  9. Evaluation of data loggers, sampling intervals, and editing techniques for measuring the lying behavior of dairy cattle.

    Science.gov (United States)

    Ledgerwood, D N; Winckler, C; Tucker, C B

    2010-11-01

    Lying behavior in dairy cattle can provide insight into how cows interact with their environment. Although lying behavior is a useful indicator of cow comfort, it can be time consuming to measure. In response to these time constraints, using data loggers to automate behavioral recording has become increasingly common. We tested the accuracy of the Onset Pendant G data logger (Onset Computer Corporation, Bourne, MA) for measuring lying behavior in dairy cattle (n=24 cows; 12 in each of 2 experiments). Cows wore the logger on the lateral (experiment 1) or medial (experiment 2) side of the hind leg above the metatarsophalangeal joint. Loggers recorded behavior at 4 sampling intervals (6, 30, 60, and 300 s) for at least 1.5 d. Data were smoothed using 3 editing methods to examine the effects of short, potentially erroneous readings. For this purpose, Microsoft Excel macros (Microsoft Corp., Redmond, WA) converted readings (i.e., lying events bordered by standing or vice versa) occurring singly or in consecutive runs of ≤2 or ≤6. Behavior was simultaneously recorded with digital video equipment. The logger accurately measured lying and standing. For example, predictability, sensitivity, and specificity were >99% using 30-s sampling and the single-event filter compared with continuously scored video recordings. The 6- and 30-s sampling intervals were comparable for all aspects of lying behavior when short events were filtered from the data set. Estimates of lying time generated from the 300-s interval unfiltered regimen were positively related (R(2) ≥ 0.99) to estimates of lying time from video, but this sampling regimen overestimated the number of lying bouts. This is likely because short standing and lying bouts were missed (12 and 34% of lying and standing bouts were <300 s in experiment 1 and 2, respectively). In summary, the data logger accurately measured all aspects of lying behavior when the sampling interval was ≤30 s and when short readings of lying and
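
    The editing step described above, filtering out single or short runs of potentially erroneous readings, can be expressed as a small run-length filter. The threshold and example sequence are assumptions for illustration, not the study's Excel macros.

        from itertools import groupby

        def filter_short_bouts(states, max_len=1):
            """Absorb runs of identical 0/1 logger readings no longer than `max_len`
            into the preceding bout (1 = lying, 0 = standing)."""
            runs = [(k, len(list(g))) for k, g in groupby(states)]
            out = []
            for i, (state, length) in enumerate(runs):
                if length <= max_len and i > 0:             # short run bordered by another state
                    out.extend([runs[i - 1][0]] * length)   # absorb it into the preceding bout
                else:
                    out.extend([state] * length)
            return out

        readings = [1, 1, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0]
        print(filter_short_bouts(readings, max_len=1))
        # -> [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]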

  10. Neuro-genetic system for optimization of GMI samples sensitivity.

    Science.gov (United States)

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities.

  11. Low Carbon-Oriented Optimal Reliability Design with Interval Product Failure Analysis and Grey Correlation Analysis

    OpenAIRE

    Yixiong Feng; Zhaoxi Hong; Jin Cheng; Likai Jia; Jianrong Tan

    2017-01-01

    The problem of large amounts of carbon emissions causes wide concern across the world, and it has become a serious threat to the sustainable development of the manufacturing industry. The intensive research into technologies and methodologies for green product design has significant theoretical meaning and practical value in reducing the emissions of the manufacturing industry. Therefore, a low carbon-oriented product reliability optimal design model is proposed in this paper: (1) The related...

  12. Optimal thyrotropin level: normal ranges and reference intervals are not equivalent.

    Science.gov (United States)

    Dickey, Richard A; Wartofsky, Leonard; Feld, Stanley

    2005-09-01

    This paper marshals arguments in support of a narrower, optimal or true normal range for thyrotropin (TSH) of 0.4 to 2.5 mIU/L, based on clinical results and recent information on the relatively stable and narrow range of values in patients without thyroid disease. The terminology used for TSH results is clarified in an attempt to help physicians interpret, explain, and respond to TSH test results for their patients.

  13. Online Doppler Effect Elimination Based on Unequal Time Interval Sampling for Wayside Acoustic Bearing Fault Detecting System.

    Science.gov (United States)

    Ouyang, Kesai; Lu, Siliang; Zhang, Shangbin; Zhang, Haibin; He, Qingbo; Kong, Fanrang

    2015-08-27

    The railway occupies a fairly important position in transportation due to its high speed and strong transportation capability. As a consequence, it is a key issue to guarantee continuous running and transportation safety of trains. Meanwhile, time consumption of the diagnosis procedure is of extreme importance for the detecting system. However, most of the current adopted techniques in the wayside acoustic defective bearing detector system (ADBD) are offline strategies, which means that the signal is analyzed after the sampling process. This would result in unavoidable time latency. Besides, the acquired acoustic signal would be corrupted by the Doppler effect because of high relative speed between the train and the data acquisition system (DAS). Thus, it is difficult to effectively diagnose the bearing defects immediately. In this paper, a new strategy called online Doppler effect elimination (ODEE) is proposed to remove the Doppler distortion online by the introduced unequal interval sampling scheme. The steps of proposed strategy are as follows: The essential parameters are acquired in advance. Then, the introduced unequal time interval sampling strategy is used to restore the Doppler distortion signal, and the amplitude of the signal is demodulated as well. Thus, the restored Doppler-free signal is obtained online. The proposed ODEE method has been employed in simulation analysis. Ultimately, the ODEE method is implemented in the embedded system for fault diagnosis of the train bearing. The results are in good accordance with the bearing defects, which verifies the good performance of the proposed strategy.

  14. Online Doppler Effect Elimination Based on Unequal Time Interval Sampling for Wayside Acoustic Bearing Fault Detecting System

    Directory of Open Access Journals (Sweden)

    Kesai Ouyang

    2015-08-01

    Full Text Available The railway occupies a fairly important position in transportation due to its high speed and strong transportation capability. As a consequence, it is a key issue to guarantee continuous running and transportation safety of trains. Meanwhile, time consumption of the diagnosis procedure is of extreme importance for the detecting system. However, most of the current adopted techniques in the wayside acoustic defective bearing detector system (ADBD) are offline strategies, which means that the signal is analyzed after the sampling process. This would result in unavoidable time latency. Besides, the acquired acoustic signal would be corrupted by the Doppler effect because of high relative speed between the train and the data acquisition system (DAS). Thus, it is difficult to effectively diagnose the bearing defects immediately. In this paper, a new strategy called online Doppler effect elimination (ODEE) is proposed to remove the Doppler distortion online by the introduced unequal interval sampling scheme. The steps of proposed strategy are as follows: The essential parameters are acquired in advance. Then, the introduced unequal time interval sampling strategy is used to restore the Doppler distortion signal, and the amplitude of the signal is demodulated as well. Thus, the restored Doppler-free signal is obtained online. The proposed ODEE method has been employed in simulation analysis. Ultimately, the ODEE method is implemented in the embedded system for fault diagnosis of the train bearing. The results are in good accordance with the bearing defects, which verifies the good performance of the proposed strategy.

  15. Global fuel consumption optimization of an open-time terminal rendezvous and docking with large-eccentricity elliptic-orbit by the method of interval analysis

    Science.gov (United States)

    Ma, Hongliang; Xu, Shijie

    2016-11-01

    By defining two open-time impulse points, the optimization of a two-impulse, open-time terminal rendezvous and docking with target spacecraft on a large-eccentricity elliptical orbit is proposed in this paper. The purpose of optimization is to minimize the velocity increment for a terminal elliptic-reference-orbit rendezvous and docking. Current methods for solving this type of optimization problem include, for example, genetic algorithms and gradient based optimization. Unlike these methods, interval methods can guarantee that the globally best solution is found for a given parameterization of the input. The non-linear Tschauner-Hempel (TH) equations of the state transitions for a terminal elliptic target orbit are transformed from the time domain to the target orbital true anomaly domain. Their homogeneous solutions and approximate state transition matrix for the control with a short true anomaly interval can be used to avoid interval integration. The interval branch and bound optimization algorithm is introduced for solving the presented rendezvous and docking optimization problem and optimizing two open-time impulse points and thruster pulse amplitudes, which systematically eliminates parts of the control and open-time input spaces that do not satisfy the path and final time state constraints. Several numerical examples are undertaken to validate the interval optimization algorithm. The results indicate that sufficiently narrow spaces containing the global optimization solution for the open-time two-impulse terminal rendezvous and docking with target spacecraft on a large-eccentricity elliptical orbit can be obtained by the interval algorithm (IA). Combining the gradient-based method, the global optimization solution for the discontinuous nonconvex optimization problem in the specifically remained search space can be found. Interval analysis is shown to be a useful tool and preponderant in the discontinuous nonconvex optimization problem of the terminal rendezvous and
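
    The guarantee offered by interval methods comes from bounding the objective over whole boxes of the input space and discarding boxes that provably cannot contain the global minimum. The one-dimensional toy below, with a hand-written natural interval extension of a simple quadratic rather than the rendezvous dynamics, shows the branch-and-bound skeleton; everything in it is an illustrative assumption.

        def isqr(lo, hi):
            """Interval extension of x**2 (exact range)."""
            cands = (lo * lo, hi * hi)
            return (0.0 if lo <= 0.0 <= hi else min(cands)), max(cands)

        def f_interval(lo, hi):
            """Natural interval extension of f(x) = x**2 - 2x over [lo, hi]."""
            s_lo, s_hi = isqr(lo, hi)
            return s_lo - 2.0 * hi, s_hi - 2.0 * lo

        def f(x):
            return x * x - 2.0 * x

        def branch_and_bound(lo, hi, tol=1e-4):
            best = f((lo + hi) / 2.0)                 # incumbent upper bound on the minimum
            boxes = [(lo, hi)]
            while boxes:
                a, b = boxes.pop()
                flo, _ = f_interval(a, b)
                if flo > best:                        # the whole box cannot beat the incumbent
                    continue
                m = (a + b) / 2.0
                best = min(best, f(m))                # tighten the incumbent at the midpoint
                if b - a > tol:                       # bisect and keep searching
                    boxes += [(a, m), (m, b)]
            return best

        print(branch_and_bound(-2.0, 3.0))            # true minimum is f(1) = -1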

  16. Advanced Computational Methods for Optimization of Non Periodic Inspection Intervals for Aging Infrastructure

    Science.gov (United States)

    2017-01-05

    [Only front-matter fragments were indexed for this record: Table 4-1, Statistical results for different numbers of samples (non-periodic, true value); Table 5-1, Inspection schedule of the non-periodic scheme by conditional probability (true value); Fig. 5-1, Reliability for the non-periodic inspection by conditional probability. Distribution statement: DISTRIBUTION A, approved for public release; distribution unlimited.]

  17. New version of Optimal Homotopy Asymptotic Method for the solution of nonlinear boundary value problems in finite and infinite intervals

    Directory of Open Access Journals (Sweden)

    Liaqat Ali

    2016-09-01

    Full Text Available In this research work a new version of the Optimal Homotopy Asymptotic Method is applied to solve nonlinear boundary value problems (BVPs) in finite and infinite intervals. It comprises an initial guess, auxiliary functions (containing unknown convergence-controlling parameters) and a homotopy. The said method is applied to solve nonlinear Riccati equations and a nonlinear BVP of order two for thin film flow of a third grade fluid on a moving belt. It is also used to solve a nonlinear BVP of order three achieved by Mostafa et al. for hydro-magnetic boundary layer and micro-polar fluid flow over a stretching surface embedded in a non-Darcian porous medium with radiation. The obtained results are compared with the existing results of Runge-Kutta (RK-4) and the Optimal Homotopy Asymptotic Method (OHAM-1). The outcomes achieved by this method are in excellent concurrence with the exact solution and hence it is proved that this method is easy and effective.

  18. A New Genetic Algorithm Methodology for Design Optimization of Truss Structures: Bipopulation-Based Genetic Algorithm with Enhanced Interval Search

    Directory of Open Access Journals (Sweden)

    Tugrul Talaslioglu

    2009-01-01

    Full Text Available A new genetic algorithm (GA) methodology, Bipopulation-Based Genetic Algorithm with Enhanced Interval Search (BGAwEIS), is introduced and used to optimize the design of truss structures with various complexities. The results of BGAwEIS are compared with those obtained by the sequential genetic algorithm (SGA) utilizing a single population, a multipopulation-based genetic algorithm (MPGA) proposed for this study, and other existing approaches presented in the literature. This study has two goals: outlining BGAwEIS's fundamentals and evaluating the performances of BGAwEIS and MPGA. Consequently, it is demonstrated that MPGA shows a better performance than SGA by taking advantage of multiple populations, but BGAwEIS explores promising solution regions more efficiently than MPGA by exploiting the feasible solutions. The performance of BGAwEIS is confirmed by the better quality of its optimal designs compared to those of the algorithms proposed here and described in the literature.

  19. Sample size matters: Investigating the optimal sample size for a logistic regression debris flow susceptibility model

    Science.gov (United States)

    Heckmann, Tobias; Gegg, Katharina; Becht, Michael

    2013-04-01

    Statistical approaches to landslide susceptibility modelling on the catchment and regional scale are used very frequently compared to heuristic and physically based approaches. In the present study, we deal with the problem of the optimal sample size for a logistic regression model. More specifically, a stepwise approach has been chosen in order to select those independent variables (from a number of derivatives of a digital elevation model and landcover data) that explain best the spatial distribution of debris flow initiation zones in two neighbouring central alpine catchments in Austria (used mutually for model calculation and validation). In order to minimise problems arising from spatial autocorrelation, we sample a single raster cell from each debris flow initiation zone within an inventory. In addition, as suggested by previous work using the "rare events logistic regression" approach, we take a sample of the remaining "non-event" raster cells. The recommendations given in the literature on the size of this sample appear to be motivated by practical considerations, e.g. the time and cost of acquiring data for non-event cases, which do not apply to the case of spatial data. In our study, we aim at finding empirically an "optimal" sample size in order to avoid two problems: First, a sample too large will violate the independent sample assumption as the independent variables are spatially autocorrelated; hence, a variogram analysis leads to a sample size threshold above which the average distance between sampled cells falls below the autocorrelation range of the independent variables. Second, if the sample is too small, repeated sampling will lead to very different results, i.e. the independent variables and hence the result of a single model calculation will be extremely dependent on the choice of non-event cells. Using a Monte-Carlo analysis with stepwise logistic regression, 1000 models are calculated for a wide range of sample sizes. For each sample size
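
    The Monte Carlo idea sketched in this record, repeatedly drawing non-event samples of a given size and checking how stable the fitted coefficients are, can be illustrated on synthetic raster-like data. The data-generating model, candidate sample sizes and number of repetitions below are assumptions, not the study's setup.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(42)

        # Synthetic stand-in for the raster data: two "terrain" predictors and a rare event
        # generated from a known logistic model (coefficients 2.0 and -1.0, intercept -4.0).
        n_cells = 50_000
        X = rng.normal(size=(n_cells, 2))
        p = 1.0 / (1.0 + np.exp(-(-4.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1])))
        y = rng.binomial(1, p)
        events, non_events = np.where(y == 1)[0], np.where(y == 0)[0]

        # For each candidate non-event sample size, refit many times and record the spread
        # of the first coefficient: a large spread indicates the sample is too small.
        for m in (100, 500, 2000):
            coefs = []
            for _ in range(50):
                idx = np.concatenate([events, rng.choice(non_events, size=m, replace=False)])
                model = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
                coefs.append(model.coef_[0, 0])
            print(m, "non-events -> coefficient sd:", round(float(np.std(coefs)), 3))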

  20. A two-stage method to determine optimal product sampling considering dynamic potential market.

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase of the external coefficient or the internal coefficient has a negative influence on the sampling level. The changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, which yields a two-stage method to estimate the impact of the relevant parameters when they cannot be measured accurately and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level.
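
    A rough sketch of the kind of first-stage calculation involved: simulate a Bass-type diffusion in which free samples seed early adopters and scan a grid of sampling levels for the most profitable one. Every parameter name, value and the profit expression below are illustrative assumptions, not the paper's calibration.

        import numpy as np

        def diffusion(sampling_level, M0=100_000, growth=0.002, p=0.01, q=0.35,
                      repeat=0.1, periods=52):
            """Toy Bass-type diffusion in which free samples seed early adopters.

            p, q: external (innovation) and internal (imitation) coefficients;
            growth: per-period change rate of the potential market; repeat: repeat-purchase rate.
            """
            adopters = sampling_level * M0          # sampled consumers counted as adopters
            sales = 0.0
            M = M0
            for _ in range(periods):
                M *= 1.0 + growth                   # dynamic potential market
                new = (p + q * adopters / M) * max(M - adopters, 0.0)
                sales += new + repeat * adopters
                adopters += new
            return sales

        # Coarse grid search over the sampling level (fraction of the market given free samples).
        levels = np.linspace(0.0, 0.2, 21)
        profits = [5.0 * diffusion(s) - 2.0 * s * 100_000 for s in levels]  # margin 5, sample cost 2
        print("best sampling level:", levels[int(np.argmax(profits))])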

  1. Automatic, optimized interface placement in forward flux sampling simulations

    CERN Document Server

    Kratzer, Kai; Allen, Rosalind J

    2013-01-01

    Forward flux sampling (FFS) provides a convenient and efficient way to simulate rare events in equilibrium or non-equilibrium systems. FFS ratchets the system from an initial state to a final state via a series of interfaces in phase space. The efficiency of FFS depends sensitively on the positions of the interfaces. We present two alternative methods for placing interfaces automatically and adaptively in their optimal locations, on-the-fly as an FFS simulation progresses, without prior knowledge or user intervention. These methods allow the FFS simulation to advance efficiently through bottlenecks in phase space by placing more interfaces where the probability of advancement is lower. The methods are demonstrated both for a single-particle test problem and for the crystallization of Yukawa particles. By removing the need for manual interface placement, our methods both facilitate the setting up of FFS simulations and improve their performance, especially for rare events which involve complex trajectories thr...

  2. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

    The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely, and subsequently reweighting the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained by taking the position-dependent diffusion coefficient into account, thus placing greater emphasis on regions diffusing slowly. Although some promising examples of applications of this approach exist, the practical usefulness of the method has been hindered by the difficulty in obtaining sufficiently

  3. A multi-objective optimization model for hub network design under uncertainty: An inexact rough-interval fuzzy approach

    Science.gov (United States)

    Niakan, F.; Vahdani, B.; Mohammadi, M.

    2015-12-01

    This article proposes a multi-objective mixed-integer model to optimize the location of hubs within a hub network design problem under uncertainty. The considered objectives include minimizing the maximum accumulated travel time, minimizing the total costs including transportation, fuel consumption and greenhouse emissions costs, and finally maximizing the minimum service reliability. In the proposed model, it is assumed that for connecting two nodes, there are several types of arcs, which differ in capacity, transportation mode, travel time, and transportation and construction costs. Moreover, in this model, determining the capacity of the hubs is part of the decision-making procedure and balancing requirements are imposed on the network. To solve the model, a hybrid solution approach is utilized based on inexact programming, interval-valued fuzzy programming and rough interval programming. Furthermore, a hybrid multi-objective metaheuristic algorithm, namely multi-objective invasive weed optimization (MOIWO), is developed for the given problem. Finally, various computational experiments are carried out to assess the proposed model and solution approaches.

  4. Color Tissue Doppler to Analyze Fetal Cardiac Time Intervals: Normal Values and Influence of Sample Gate Size.

    Science.gov (United States)

    Willruth, A M; Steinhard, J; Enzensberger, C; Axt-Fliedner, R; Gembruch, U; Doelle, A; Dimitriou, I; Fimmers, R; Bahlmann, F

    2016-02-04

    Purpose: To assess the time intervals of the cardiac cycle in healthy fetuses in the second and third trimester using color tissue Doppler imaging (cTDI) and to evaluate the influence of different sizes of sample gates on time interval values. Materials and Methods: Time intervals were measured from the cTDI-derived Doppler waveform using a small and large region of interest (ROI) in healthy fetuses. Results: 40 fetuses were included. The median gestational age at examination was 26 + 1 (range: 20 + 5 - 34 + 5) weeks. The median frame rate was 116/s (100 - 161/s) and the median heart rate 143 (range: 125 - 158) beats per minute (bpm). Using small and large ROIs, the second trimester right ventricular (RV) mean isovolumetric contraction times (ICTs) were 39.8 and 41.4 ms (p = 0.17), the mean ejection times (ETs) were 170.2 and 164.6 ms (p < 0.001), the mean isovolumetric relaxation times (IRTs) were 52.8 and 55.3 ms (p = 0.08), respectively. The left ventricular (LV) mean ICTs were 36.2 and 39.4 ms (p = 0.05), the mean ETs were 167.4 and 164.5 ms (p = 0.013), the mean IRTs were 53.9 and 57.1 ms (p = 0.05), respectively. The third trimester RV mean ICTs were 50.7 and 50.4 ms (p = 0.75), the mean ETs were 172.3 and 181.4 ms (p = 0.49), the mean IRTs were 50.2 and 54.6 ms (p = 0.03); the LV mean ICTs were 45.1 and 46.2 ms (p = 0.35), the mean ETs were 175.2 vs. 172.9 ms (p = 0.29), the mean IRTs were 47.1 and 50.0 ms (p = 0.01), respectively. Conclusion: Isovolumetric time intervals can be analyzed precisely and relatively independent of ROI size. In the near future, automatic time interval measurement using ultrasound systems will be feasible and the analysis of fetal myocardial function can become part of the clinical routine.

  5. Optimization for Peptide Sample Preparation for Urine Peptidomics

    Energy Technology Data Exchange (ETDEWEB)

    Sigdel, Tara K.; Nicora, Carrie D.; Hsieh, Szu-Chuan; Dai, Hong; Qian, Weijun; Camp, David G.; Sarwal, Minnie M.

    2014-02-25

    when utilizing the conventional SPE method. In conclusion, the mSPE method was found to be superior to the conventional, standard SPE method for urine peptide sample preparation when applying LC-MS peptidomics analysis due to the optimized sample clean up that provided improved experimental inference from the confidently identified peptides.

  6. Hemodynamic device-based optimization in cardiac resynchronization therapy: concordance with systematic echocardiographic assessment of AV and VV intervals

    Directory of Open Access Journals (Sweden)

    Oliveira MM

    2015-08-01

    Full Text Available Mário M Oliveira, Luisa M Branco, Ana Galrinho, Nogueira da Silva, Pedro S Cunha, Bruno Valente, Joana Feliciano, Ricardo Pimenta, Ana Sofia Delgado, Rui Cruz Ferreira Santa Marta Hospital, Lisbon, Portugal Background: Inappropriate settings of atrioventricular (AV) and ventriculo-ventricular (VV) intervals can be one of the factors impacting response to cardiac resynchronization therapy (CRT). Optimal concordance of AV and VV intervals between echocardiographic-based assessment and device-based automatic programming with a hemodynamic sensor was investigated, together with left ventricle (LV) reverse remodeling after 6 months of regular automatic device-based optimization. Methods: We blindly evaluated 30 systematic echocardiographic examinations over 6 months in 17 patients (12 men, 64±10 years) in sinus rhythm and New York Heart Association class III; 76% with non-ischemic dilated cardiomyopathy, LV ejection fraction [LVEF] <35%, QRS 130 milliseconds and LV dyssynchrony, implanted with the SonRtip lead and a cardioverter-defibrillator device. Dyssynchrony (AV, VV, or intraventricular) was evaluated by an experienced operator blinded to the device programming, using conventional echocardiography, tissue synchronization imaging, tissue Doppler imaging, radial strain, and 3D echocardiography. Results: Either no AV or VV dyssynchrony (n=11; 36.7%) or a slight septal or lateral delay (n=13; 43.3%) was found in most echocardiography examinations (80%). AV or VV dyssynchrony requiring further optimization was identified in one-fifth of the examinations (20%). At 6 months, 76.5% of patients were responders with LV reverse remodeling, of which 69% were super-responders (LVEF >40%). A statistically significant increase in LVEF was observed between baseline and 6 months post implant (P<0.01). One patient died from non-cardiac causes. Conclusion: Concordance between echocardiographic methods and device-based hemodynamic sensor optimization was found in most

  7. An optimization based sampling approach for multiple metrics uncertainty analysis using generalized likelihood uncertainty estimation

    Science.gov (United States)

    Zhou, Rurui; Li, Yu; Lu, Di; Liu, Haixing; Zhou, Huicheng

    2016-09-01

    This paper investigates the use of an epsilon-dominance non-dominated sorted genetic algorithm II (ɛ-NSGAII) as a sampling approach with the aim of improving sampling efficiency for multiple metrics uncertainty analysis using Generalized Likelihood Uncertainty Estimation (GLUE). The effectiveness of ɛ-NSGAII based sampling is demonstrated in comparison with Latin hypercube sampling (LHS) by analyzing sampling efficiency, multiple metrics performance, parameter uncertainty and flood forecasting uncertainty, with a case study of flood forecasting uncertainty evaluation based on the Xinanjiang model (XAJ) for Qing River reservoir, China. The results demonstrate the following advantages of the ɛ-NSGAII based sampling approach in comparison to LHS: (1) The former is more effective and efficient than LHS; for example, the simulation time required to generate 1000 behavioral parameter sets is roughly nine times shorter; (2) The Pareto tradeoffs between metrics are demonstrated clearly with the solutions from ɛ-NSGAII based sampling, and their Pareto optimal values are better than those of LHS, indicating better forecasting accuracy of the ɛ-NSGAII parameter sets; (3) The parameter posterior distributions from ɛ-NSGAII based sampling are concentrated in the appropriate ranges rather than uniform, which accords with their physical significance, and parameter uncertainties are reduced significantly; (4) The forecasted floods are close to the observations as evaluated by three measures: the normalized total flow outside the uncertainty intervals (FOUI), average relative band-width (RB) and average deviation amplitude (D). The flood forecasting uncertainty is also reduced considerably with ɛ-NSGAII based sampling. This study provides a new sampling approach to improve multiple metrics uncertainty analysis under the framework of GLUE, and could be used to reveal the underlying mechanisms of parameter sets under multiple conflicting metrics in the uncertainty analysis process.
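
    As a point of reference for the comparison above, the following minimal Python sketch shows the Latin hypercube baseline combined with a GLUE-style behavioural threshold on a toy model; the two-parameter model, the Nash-Sutcliffe likelihood and all numerical values are illustrative assumptions, not the XAJ setup used in the study.

      import numpy as np

      def latin_hypercube(n_samples, bounds, rng):
          """One stratified draw per equal-probability bin in every dimension."""
          d = len(bounds)
          samples = np.empty((n_samples, d))
          for j, (lo, hi) in enumerate(bounds):
              # jitter within each stratum, then shuffle the strata order
              strata = (rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
              samples[:, j] = lo + strata * (hi - lo)
          return samples

      def nash_sutcliffe(sim, obs):
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 6.0, 50)
      obs = np.sin(t) + 0.1 * rng.standard_normal(50)                 # synthetic "observations"
      params = latin_hypercube(1000, [(0.5, 2.0), (-0.5, 0.5)], rng)  # (amplitude, offset)
      sims = params[:, 0:1] * np.sin(t) + params[:, 1:2]
      likelihood = np.array([nash_sutcliffe(s, obs) for s in sims])
      behavioural = params[likelihood > 0.7]                          # GLUE keeps behavioural sets only
      print(len(behavioural), "behavioural parameter sets out of", len(params))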

  8. Random Sampling with Interspike-Intervals of the Exponential Integrate and Fire Neuron: A Computational Interpretation of UP-States.

    Directory of Open Access Journals (Sweden)

    Andreas Steimer

    Full Text Available Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, surprisingly few quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample, whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron that is missing such an additional current boost performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP-states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing
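
    To make the sampling mechanism concrete, here is a minimal Euler-Maruyama sketch of an exponential integrate-and-fire neuron that collects interspike intervals as the "samples"; every parameter value is illustrative and not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      dt, T = 1e-4, 20.0                         # time step and duration (s)
      tau, v_rest, delta_t, v_t = 0.02, -65e-3, 2e-3, -50e-3
      v_reset, v_spike = -65e-3, -30e-3
      mu, sigma = 16e-3, 4e-3                    # mean drive and noise amplitude (V)

      v, t_last, isis = v_rest, 0.0, []
      for step in range(int(T / dt)):
          # leak plus the exponential sodium term that activates near threshold
          dv = (-(v - v_rest) + delta_t * np.exp((v - v_t) / delta_t) + mu) / tau
          v += dv * dt + sigma * np.sqrt(dt / tau) * rng.standard_normal()
          if v >= v_spike:                       # spike: record the ISI and reset
              t = (step + 1) * dt
              isis.append(t - t_last)
              t_last, v = t, v_reset
      print(f"{len(isis)} spikes, mean ISI = {np.mean(isis):.4f} s")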

  9. Geostatistical sampling optimization and waste characterization of contaminated premises

    Energy Technology Data Exchange (ETDEWEB)

    Desnoyers, Y.; Jeannee, N. [GEOVARIANCES, 49bis avenue Franklin Roosevelt, BP91, Avon, 77212 (France); Chiles, J.P. [Centre de geostatistique, Ecole des Mines de Paris (France); Dubot, D. [CEA DSV/FAR/USLT/SPRE/SAS (France); Lamadie, F. [CEA DEN/VRH/DTEC/SDTC/LTM (France)

    2009-06-15

    At the end of process equipment dismantling, the complete decontamination of nuclear facilities requires a radiological assessment of the building structure residual activity. From this point of view, the set up of an appropriate evaluation methodology is of crucial importance. The radiological characterization of contaminated premises can be divided into three steps. First, the most exhaustive facility analysis provides historical and qualitative information. Then, a systematic (exhaustive) control of the emergent signal is commonly performed using in situ measurement methods such as surface controls combined with in situ gamma spectrometry. Finally, in order to assess the contamination depth, samples are collected at several locations within the premises and analyzed. Combined with historical information and emergent signal maps, such data allow the definition of a preliminary waste zoning. The exhaustive control of the emergent signal with surface measurements usually leads to inaccurate estimates, because of several factors: varying position of the measuring device, subtraction of an estimate of the background signal, etc. In order to provide reliable estimates while avoiding supplementary investigation costs, there is therefore a crucial need for sampling optimization methods together with appropriate data processing techniques. The initial activity usually presents a spatial continuity within the premises, with preferential contamination of specific areas or existence of activity gradients. Taking into account this spatial continuity is essential to avoid bias while setting up the sampling plan. In such a case, Geostatistics provides methods that integrate the contamination spatial structure. After the characterization of this spatial structure, most probable estimates of the surface activity at un-sampled locations can be derived using kriging techniques. Variants of these techniques also give access to estimates of the uncertainty associated to the spatial
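
    The kriging step mentioned above can be sketched compactly. The following Python fragment performs ordinary kriging of a surface-activity value at one unsampled location, assuming an exponential variogram whose nugget, sill and range, like the sample coordinates and measurements, are purely illustrative.

      import numpy as np

      def exp_variogram(h, nugget=0.05, sill=1.0, rng_len=5.0):
          return np.where(h > 0, nugget + sill * (1.0 - np.exp(-h / rng_len)), 0.0)

      def ordinary_kriging(xy, z, x0):
          n = len(z)
          d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = exp_variogram(d)
          A[n, n] = 0.0
          b = np.ones(n + 1)
          b[:n] = exp_variogram(np.linalg.norm(xy - x0, axis=1))
          w = np.linalg.solve(A, b)              # kriging weights plus Lagrange multiplier
          return w[:n] @ z, w @ b                # estimate and kriging variance at x0

      xy = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 3.0], [4.0, 4.0]])   # sampled locations (m)
      z = np.array([10.0, 12.0, 9.0, 15.0])                             # measured activity
      est, var = ordinary_kriging(xy, z, np.array([2.0, 2.0]))
      print(f"estimate {est:.2f}, kriging variance {var:.3f}")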

  10. Optimization and validation of sample preparation for metagenomic sequencing of viruses in clinical samples.

    Science.gov (United States)

    Lewandowska, Dagmara W; Zagordi, Osvaldo; Geissberger, Fabienne-Desirée; Kufner, Verena; Schmutz, Stefan; Böni, Jürg; Metzner, Karin J; Trkola, Alexandra; Huber, Michael

    2017-08-08

    Sequence-specific PCR is the most common approach for virus identification in diagnostic laboratories. However, as specific PCR only detects pre-defined targets, novel virus strains or viruses not included in routine test panels will be missed. Recently, advances in high-throughput sequencing allow for virus-sequence-independent identification of entire virus populations in clinical samples, yet standardized protocols are needed to allow broad application in clinical diagnostics. Here, we describe a comprehensive sample preparation protocol for high-throughput metagenomic virus sequencing using random amplification of total nucleic acids from clinical samples. In order to optimize metagenomic sequencing for application in virus diagnostics, we tested different enrichment and amplification procedures on plasma samples spiked with RNA and DNA viruses. A protocol including filtration, nuclease digestion, and random amplification of RNA and DNA in separate reactions provided the best results, allowing reliable recovery of viral genomes and a good correlation of the relative number of sequencing reads with the virus input. We further validated our method by sequencing a multiplexed viral pathogen reagent containing a range of human viruses from different virus families. Our method proved successful in detecting the majority of the included viruses with high read numbers and compared well to other protocols in the field validated against the same reference reagent. Our sequencing protocol does work not only with plasma but also with other clinical samples such as urine and throat swabs. The workflow for virus metagenomic sequencing that we established proved successful in detecting a variety of viruses in different clinical samples. Our protocol supplements existing virus-specific detection strategies providing opportunities to identify atypical and novel viruses commonly not accounted for in routine diagnostic panels.

  11. Interval sampling of end-expiratory hydrogen (H2) concentrations to quantify carbohydrate malabsorption by means of lactulose standards

    DEFF Research Database (Denmark)

    Rumessen, J J; Hamberg, O; Gudmand-Høyer, E

    1990-01-01

    Lactulose H2 breath tests are widely used for quantifying carbohydrate malabsorption, but the validity of the commonly used technique (interval sampling of H2 concentrations) has not been systematically investigated. In eight healthy adults we studied the reproducibility of the technique......-60%, interquartile range). This corresponded to the deviation in reproducibility of the standard dose. We suggest that individual estimates of carbohydrate malabsorption by means of H2 breath tests should be interpreted with caution if tests of reproducibility are not incorporated. Both areas under curves and peak H...... and the accuracy with which 5 g and 20 g doses of lactulose could be calculated from the H2 excretion after their ingestion by means of a 10 g lactulose standard. The influence of different lengths of the test period, different definitions of the baseline and the significance of standard meals and peak H2...

  12. Confidence intervals for similarity values determined for clonedSSU rRNA genes from environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Fields, M.W.; Schryver, J.C.; Brandt, C.C.; Yan, T.; Zhou, J.Z.; Palumbo, A.V.

    2007-04-02

    The goal of this research was to investigate the influence of the error rate of sequence determination on the differentiation of cloned SSU rRNA gene sequences for assessment of community structure. SSU rRNA cloned sequences from groundwater samples that represent different bacterial divisions were sequenced multiple times with the same sequencing primer. From comparison of sequence alignments with unedited data, confidence intervals were obtained from both a double binomial model of sequence comparison and by non-parametric methods. The results indicated that similarity values below 0.9946 are likely derived from dissimilar sequences at a confidence level of 0.95, and not sequencing errors. The results confirmed that screening by direct sequence determination could be reliably used to differentiate at the species level. However, given sequencing errors comparable to those seen in this study, sequences with similarities above 0.9946 should be treated as the same sequence if a 95 percent confidence is desired.
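
    The binomial reasoning behind such a similarity cut-off can be sketched in a few lines: if two reads of the same template each carry independent per-base errors, an aligned position disagrees with probability roughly 2e(1-e), and the cut-off is a tail quantile of the resulting mismatch count. The error rate and read length below are illustrative, not the values estimated in the study.

      from scipy.stats import binom

      e, L = 0.0015, 1400                      # assumed per-base error rate and alignment length
      p_mismatch = 2 * e * (1 - e)             # one read (but not both) wrong at a given position
      max_err_mismatches = binom.ppf(0.95, L, p_mismatch)
      threshold = 1.0 - max_err_mismatches / L
      print(f"similarities below {threshold:.4f} are unlikely to arise from sequencing error alone")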

  13. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our ongoing research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve caps emissivity from the Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  14. A novel interval type-2 fractional order fuzzy PID controller: Design, performance evaluation, and its optimal time domain tuning.

    Science.gov (United States)

    Kumar, Anupam; Kumar, Vijay

    2017-05-01

    In this paper, a novel concept of an interval type-2 fractional order fuzzy PID (IT2FO-FPID) controller, which requires a fractional order integrator and a fractional order differentiator, is proposed. The incorporation of a Takagi-Sugeno-Kang (TSK) type interval type-2 fuzzy logic controller (IT2FLC) with a fractional PID-type controller is investigated with respect to time response measures for both unit step response and unit load disturbance. The resulting IT2FO-FPID controller is examined on different delayed linear and nonlinear benchmark plants, followed by robustness analysis. To design this controller, the fractional order integrator-differentiator operators are treated as design variables, together with the input-output scaling factors. A new hybrid algorithm, the artificial bee colony-genetic algorithm (ABC-GA), is used to optimize the parameters of the controller while minimizing a weighted sum of the integral of time absolute error (ITAE) and the integral of square of control output (ISCO). To assess the comparative performance of the IT2FO-FPID, the authors compared it against existing controllers, i.e., interval type-2 fuzzy PID (IT2-FPID), type-1 fractional order fuzzy PID (T1FO-FPID), type-1 fuzzy PID (T1-FPID), and conventional PID controllers. Furthermore, to show the effectiveness of the proposed controller, perturbed processes with larger dead times are tested. Moreover, the proposed controllers are also implemented on a multi-input multi-output (MIMO), coupled, and highly complex nonlinear two-link robot manipulator system in the presence of un-modeled dynamics. Finally, the simulation results explicitly indicate that the performance of the proposed IT2FO-FPID controller is superior to its conventional counterparts in most of the cases. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
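
    The objective that the ABC-GA is said to minimize can be made concrete with a small sketch: the weighted sum of ITAE and ISCO evaluated for a plain (non-fuzzy, integer-order) PID acting on a first-order-plus-dead-time plant. The plant, gains and weights are illustrative stand-ins, not the benchmark systems or the IT2FO-FPID structure of the paper.

      import numpy as np

      def weighted_cost(kp, ki, kd, w1=1.0, w2=0.01, dt=0.01, T=20.0, K=1.0, tau=2.0, theta=0.5):
          """Return w1*ITAE + w2*ISCO for a unit-step reference on a FOPDT plant."""
          n, delay = int(T / dt), int(theta / dt)
          y, e_int, e_prev = 0.0, 0.0, 0.0
          u_hist = [0.0] * (delay + 1)                        # past control values for the dead time
          itae = isco = 0.0
          for k in range(n):
              e = 1.0 - y
              e_int += e * dt
              u = kp * e + ki * e_int + kd * (e - e_prev) / dt
              e_prev = e
              u_hist.append(u)
              y += dt * (-y + K * u_hist[-delay - 1]) / tau   # tau*dy/dt = -y + K*u(t - theta)
              itae += (k * dt) * abs(e) * dt                  # integral of t*|e|
              isco += u * u * dt                              # integral of u^2
          return w1 * itae + w2 * isco

      print(weighted_cost(kp=2.0, ki=0.8, kd=0.5))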

  15. Sample-Path Optimization of Buffer Allocations in a Tandem Queue - Part I : Theoretical Issues

    NARCIS (Netherlands)

    Gürkan, G.; Ozge, A.Y.

    1996-01-01

    This is the first of two papers dealing with the optimal buffer allocation problem in tandem manufacturing lines with unreliable machines. We address the theoretical issues that arise when using sample-path optimization, a simulation-based optimization method, to solve this problem. Sample-path optimiz

  16. Triangular Geometrized Sampling Heuristics for Fast Optimal Motion Planning

    Directory of Open Access Journals (Sweden)

    Ahmed Hussain Qureshi

    2015-02-01

    Full Text Available Rapidly-exploring Random Tree (RRT)-based algorithms have become increasingly popular due to their lower computational complexity as compared with other path planning algorithms. The recently presented RRT* motion planning algorithm improves upon the original RRT algorithm by providing optimal path solutions. While RRT determines an initial collision-free path fairly quickly, RRT* guarantees almost certain convergence to an optimal, obstacle-free path from the start to the goal points for any given geometrical environment. However, the main limitations of RRT* include its slow processing rate and high memory consumption, due to the large number of iterations required for calculating the optimal path. In order to overcome these limitations, we present another improvement, i.e., the Triangular Geometrized-RRT* (TG-RRT*) algorithm, which utilizes triangular geometrical methods to improve the performance of the RRT* algorithm in terms of the processing time and a decreased number of iterations required for an optimal path solution. Simulations comparing the performance results of the improved TG-RRT* with RRT* are presented to demonstrate the overall improvement in performance and optimal path detection.

  17. Triangular Geometrized Sampling Heuristics for Fast Optimal Motion Planning

    Directory of Open Access Journals (Sweden)

    Ahmed Hussain Qureshi

    2015-02-01

    Full Text Available Rapidly-exploring Random Tree (RRT)-based algorithms have become increasingly popular due to their lower computational complexity as compared with other path planning algorithms. The recently presented RRT* motion planning algorithm improves upon the original RRT algorithm by providing optimal path solutions. While RRT determines an initial collision-free path fairly quickly, RRT* guarantees almost certain convergence to an optimal, obstacle-free path from the start to the goal points for any given geometrical environment. However, the main limitations of RRT* include its slow processing rate and high memory consumption, due to the large number of iterations required for calculating the optimal path. In order to overcome these limitations, we present another improvement, i.e., the Triangular Geometrized-RRT* (TG-RRT*) algorithm, which utilizes triangular geometrical methods to improve the performance of the RRT* algorithm in terms of the processing time and a decreased number of iterations required for an optimal path solution. Simulations comparing the performance results of the improved TG-RRT* with RRT* are presented to demonstrate the overall improvement in performance and optimal path detection.
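
    For orientation, the plain RRT loop that RRT* and TG-RRT* refine looks roughly as follows; obstacle checking, the rewiring that makes RRT* asymptotically optimal, and the triangular sampling heuristics are all omitted, and every numeric value is an illustrative assumption.

      import math, random

      random.seed(0)
      start, goal = (0.0, 0.0), (9.0, 9.0)
      step, goal_radius = 0.5, 0.5
      nodes, parent = [start], {0: None}

      def dist(a, b):
          return math.hypot(a[0] - b[0], a[1] - b[1])

      for _ in range(5000):
          # sample a random point, occasionally biasing toward the goal
          sample = goal if random.random() < 0.05 else (random.uniform(0, 10), random.uniform(0, 10))
          i_near = min(range(len(nodes)), key=lambda i: dist(nodes[i], sample))
          near = nodes[i_near]
          d = dist(near, sample)
          new = sample if d <= step else (near[0] + step * (sample[0] - near[0]) / d,
                                          near[1] + step * (sample[1] - near[1]) / d)
          parent[len(nodes)] = i_near
          nodes.append(new)
          if dist(new, goal) < goal_radius:
              break

      path, i = [], len(nodes) - 1              # walk parent pointers back to the start
      while i is not None:
          path.append(nodes[i])
          i = parent[i]
      print(f"tree size {len(nodes)}, path has {len(path)} waypoints")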

  18. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This

  19. Sampled-Data and Discrete-Time H2 Optimal Control

    NARCIS (Netherlands)

    Trentelman, H.L.; Stoorvogel, A.A.

    1995-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This

  20. Detection of antibody responses against Streptococcus pneumoniae, Haemophilus influenzae, and Moraxella catarrhalis proteins in children with community-acquired pneumonia: effects of combining pneumococcal antigens, pre-existing antibody levels, sampling interval, age, and duration of illness.

    Science.gov (United States)

    Borges, I C; Andrade, D C; Vilas-Boas, A-L; Fontoura, M-S H; Laitinen, H; Ekström, N; Adrian, P V; Meinke, A; Cardoso, M-R A; Barral, A; Ruuskanen, O; Käyhty, H; Nascimento-Carvalho, C M

    2015-08-01

    We evaluated the effects of combining different numbers of pneumococcal antigens, pre-existing antibody levels, sampling interval, age, and duration of illness on the detection of IgG responses against eight Streptococcus pneumoniae proteins, three Haemophilus influenzae proteins, and five Moraxella catarrhalis proteins in 690 children with community-acquired pneumonia. Serological tests were performed on acute and convalescent serum samples with a multiplexed bead-based immunoassay. The median sampling interval was 19 days, the median age was 26.7 months, and the median duration of illness was 5 days. The rate of antibody responses was 15.4 % for at least one pneumococcal antigen, 5.8 % for H. influenzae, and 2.3 % for M. catarrhalis. The rate of antibody responses against each pneumococcal antigen varied from 3.5 to 7.1 %. By multivariate analysis, pre-existing antibody levels showed a negative association with the detection of antibody responses against pneumococcal and H. influenzae antigens; the sampling interval was positively associated with the detection of antibody responses against pneumococcal and H. influenzae antigens. A sampling interval of 3 weeks was the optimal cut-off for the detection of antibody responses against pneumococcal and H. influenzae proteins. Duration of illness was negatively associated with antibody responses against PspA. Age did not influence antibody responses against the investigated antigens. In conclusion, serological assays using combinations of different pneumococcal proteins detect a higher rate of antibody responses against S. pneumoniae compared to assays using a single pneumococcal protein. Pre-existing antibody levels and sampling interval influence the detection of antibody responses against pneumococcal and H. influenzae proteins. These factors should be considered when determining pneumonia etiology by serological methods in children.

  1. Near-Optimal Detection in MIMO Systems using Gibbs Sampling

    DEFF Research Database (Denmark)

    Hansen, Morten; Hassibi, Babak; Dimakis, Georgios Alexandros

    2009-01-01

    In this paper we study a Markov Chain Monte Carlo (MCMC) Gibbs sampler for solving the integer least-squares problem. In digital communication the problem is equivalent to performing Maximum Likelihood (ML) detection in Multiple-Input Multiple-Output (MIMO) systems. While the use of MCMC methods...... for such problems has already been proposed, our method is novel in that we optimize the "temperature" parameter so that in steady state, i.e., after the Markov chain has mixed, there is only polynomially (rather than exponentially) small probability of encountering the optimal solution. More precisely, we obtain...
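
    A minimal sketch of the kind of Gibbs sampler the abstract refers to is given below for the integer least-squares problem min_x ||y - Hx||^2 with x in {-1, +1}^n; the fixed temperature alpha is the quantity the paper optimizes, and its value here, like the problem size and noise level, is an arbitrary assumption.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 8
      H = rng.standard_normal((n, n))
      x_true = rng.choice([-1.0, 1.0], size=n)
      y = H @ x_true + 0.1 * rng.standard_normal(n)

      alpha = 0.1                                # "temperature": smaller means greedier sampling
      x = rng.choice([-1.0, 1.0], size=n)
      best, best_cost = x.copy(), np.sum((y - H @ x) ** 2)
      for sweep in range(200):
          for i in range(n):
              costs = {}
              for s in (-1.0, 1.0):              # residual cost for each value of x_i given the rest
                  x[i] = s
                  costs[s] = np.sum((y - H @ x) ** 2)
              # conditional P(x_i = +1) under the Gibbs distribution exp(-cost/alpha)
              p_plus = 1.0 / (1.0 + np.exp(np.clip((costs[1.0] - costs[-1.0]) / alpha, -50, 50)))
              x[i] = 1.0 if rng.random() < p_plus else -1.0
              if costs[float(x[i])] < best_cost:
                  best, best_cost = x.copy(), costs[float(x[i])]
      print("recovered the true vector:", bool(np.array_equal(best, x_true)))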

  2. Uncertainty Optimization of Vehicle Brakes Based on Interval Analysis

    Institute of Scientific and Technical Information of China (English)

    吕辉; 于德介

    2015-01-01

    To suppress the noise of vehicle brakes with uncertain parameters, an optimization scheme that reduces the negative damping ratio of the complex modes of the brake system, and thereby improves brake stability, is presented based on interval analysis theory, combining the response surface method with optimization techniques. The scheme adopts a Latin hypercube design of experiments to sample the mixed space formed by the design variables and the uncertain parameters, and builds a response surface approximation model for the negative damping ratio of the complex modes of the brake system with uncertain parameters. With the structural parameters of the brake system as design variables and minimization of the negative damping ratio of its complex modes as the optimization objective, the response surface approximation model is optimized with an uncertainty optimization scheme based on interval analysis. The results for the floating caliper disc brake of a vehicle show that optimizing the brake with this scheme can effectively reduce the negative damping ratio of the unstable modes of the brake system over the entire service life, thereby improving brake stability.
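
    The two ingredients described above, a response surface fitted over samples of the mixed design/uncertainty space and a worst-case search over the uncertain interval, can be sketched as follows. The toy objective standing in for the negative damping ratio, the variable bounds and the sample counts are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(3)

      def true_response(d, u):                   # hypothetical stand-in for the negative damping ratio
          return 0.05 * (d - 1.2) ** 2 + 0.02 * u * d - 0.03

      # sample the mixed space of design variable d and uncertain parameter u
      d_s = rng.uniform(0.5, 2.0, 40)
      u_s = rng.uniform(-1.0, 1.0, 40)
      z = true_response(d_s, u_s)

      # fit a full quadratic response surface by least squares
      X = np.column_stack([np.ones_like(d_s), d_s, u_s, d_s ** 2, u_s ** 2, d_s * u_s])
      coef, *_ = np.linalg.lstsq(X, z, rcond=None)

      def surrogate(d, u):
          return coef @ np.array([1.0, d, u, d * d, u * u, d * u])

      # interval-based robust optimum: minimize the worst case over u in [-1, 1]
      d_grid = np.linspace(0.5, 2.0, 151)
      worst = [max(surrogate(d, u) for u in np.linspace(-1.0, 1.0, 21)) for d in d_grid]
      print("robust design value:", round(float(d_grid[int(np.argmin(worst))]), 3))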

  3. Optimizing 4D cone beam computed tomography acquisition by varying the gantry velocity and projection time interval

    Science.gov (United States)

    O'Brien, Ricky T.; Cooper, Benjamin J.; Keall, Paul J.

    2013-03-01

    Four dimensional cone beam computed tomography (4DCBCT) is an emerging clinical image guidance strategy for tumour sites affected by respiratory motion. In current generation 4DCBCT techniques, both the gantry rotation speed and imaging frequency are constant and independent of the patient’s breathing which can lead to projection clustering. We present a mixed integer quadratic programming (MIQP) model for respiratory motion guided-4DCBCT (RMG-4DCBCT) which regulates the gantry velocity and projection time interval, in response to the patient’s respiratory signal, so that a full set of evenly spaced projections can be taken in a number of phase, or displacement, bins during the respiratory cycle. In each respiratory bin, an image can be reconstructed from the projections to give a 4D view of the patient’s anatomy so that the motion of the lungs, and tumour, can be observed during the breathing cycle. A solution to the full MIQP model in a practical amount of time, 10 s, is not possible with the leading commercial MIQP solvers, so a heuristic method is presented. Using parameter settings typically used on current generation 4DCBCT systems (4 min image acquisition, 1200 projections, 10 respiratory bins) and a sinusoidal breathing trace with a 4 s period, we show that the root mean square (RMS) of the angular separation between projections with displacement binning is 2.7° using existing constant gantry speed systems and 0.6° using RMG-4DCBCT. For phase based binning the RMS is 2.7° using constant gantry speed systems and 2.5° using RMG-4DCBCT. The optimization algorithm presented is a critical step on the path to developing a system for RMG-4DCBCT.

  4. Determination of Optimal Double Sampling Plan using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sampath Sundaram

    2012-03-01

    Full Text Available Designing a double sampling plan requires identification of sample sizes and acceptance numbers. In this paper a genetic algorithm has been designed for the selection of optimal acceptance numbers and sample sizes for the specified producer’s risk and consumer’s risk. Implementation of the algorithm has been illustrated numerically for different choices of the quantities involved in a double sampling plan.
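
    The fitness a genetic algorithm would evaluate for a candidate plan (n1, c1, n2, c2) reduces to the acceptance probability of the double sampling plan at the quality levels tied to the producer's and consumer's risks. A sketch of that calculation, with purely illustrative plan parameters and quality levels, follows.

      from scipy.stats import binom

      def p_accept(p, n1, c1, n2, c2):
          """Acceptance probability of a double sampling plan at lot fraction defective p."""
          pa = binom.cdf(c1, n1, p)                          # accepted on the first sample
          for d1 in range(c1 + 1, c2 + 1):                   # second sample needed
              pa += binom.pmf(d1, n1, p) * binom.cdf(c2 - d1, n2, p)
          return pa

      n1, c1, n2, c2 = 50, 1, 100, 4
      print("producer's risk:", 1 - p_accept(0.01, n1, c1, n2, c2))   # rejecting a good lot (1% defective)
      print("consumer's risk:", p_accept(0.06, n1, c1, n2, c2))       # accepting a poor lot (6% defective)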

  1. SamplingStrata: An R Package for the Optimization of Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Giulio Barcaroli

    2014-11-01

    Full Text Available When designing a sampling survey, usually constraints are set on the desired precision levels regarding one or more target estimates (the Ys). If a sampling frame is available, containing auxiliary information related to each unit (the Xs), it is possible to adopt a stratified sample design. For any given stratification of the frame, in the multivariate case it is possible to solve the problem of the best allocation of units in strata, by minimizing a cost function subject to precision constraints (or, conversely, by maximizing the precision of the estimates under a given budget). The problem is to determine the best stratification in the frame, i.e., the one that ensures the overall minimal cost of the sample necessary to satisfy precision constraints. The Xs can be categorical or continuous; continuous ones can be transformed into categorical ones. The most detailed stratification is given by the Cartesian product of the Xs (the atomic strata). A way to determine the best stratification is to explore exhaustively the set of all possible partitions derivable by the set of atomic strata, evaluating each one by calculating the corresponding cost in terms of the sample required to satisfy precision constraints. This is unaffordable in practical situations, where the dimension of the space of the partitions can be very high. Another possible way is to explore the space of partitions with an algorithm that is particularly suitable in such situations: the genetic algorithm. The R package SamplingStrata, based on the use of a genetic algorithm, allows one to determine the best stratification for a population frame, i.e., the one that ensures the minimum sample cost necessary to satisfy precision constraints, in a multivariate and multi-domain case.
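
    Inside any such stratification search sits the allocation step. For a fixed stratification, Neyman allocation gives the per-stratum sample sizes that minimize the variance of the estimated mean for a given total sample size; the population and variability figures below are illustrative.

      import numpy as np

      N_h = np.array([5000, 3000, 1500, 500])    # stratum population sizes
      S_h = np.array([1.2, 2.5, 4.0, 9.0])       # stratum standard deviations of the target variable
      n_total = 400
      n_h = n_total * (N_h * S_h) / np.sum(N_h * S_h)
      print(np.round(n_h).astype(int))           # larger, more variable strata get more sample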

  2. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increased intensity, within the same divergence limits, ± 2 ° . This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  3. Optimal land use management for soil erosion control by using an interval-parameter fuzzy two-stage stochastic programming approach.

    Science.gov (United States)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10(9) $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.

  4. Optimal two-phase sampling design for comparing accuracies of two binary classification rules.

    Science.gov (United States)

    Xu, Huiping; Hui, Siu L; Grannis, Shaun

    2014-02-10

    In this paper, we consider the design for comparing the performance of two binary classification rules, for example, two record linkage algorithms or two screening tests. Statistical methods are well developed for comparing these accuracy measures when the gold standard is available for every unit in the sample, or in a two-phase study when the gold standard is ascertained only in the second phase in a subsample using a fixed sampling scheme. However, these methods do not attempt to optimize the sampling scheme to minimize the variance of the estimators of interest. In comparing the performance of two classification rules, the parameters of primary interest are the difference in sensitivities, specificities, and positive predictive values. We derived the analytic variance formulas for these parameter estimates and used them to obtain the optimal sampling design. The efficiency of the optimal sampling design is evaluated through an empirical investigation that compares the optimal sampling with simple random sampling and with proportional allocation. Results of the empirical study show that the optimal sampling design is similar for estimating the difference in sensitivities and in specificities, and both achieve a substantial amount of variance reduction with an over-sample of subjects with discordant results and under-sample of subjects with concordant results. A heuristic rule is recommended when there is no prior knowledge of individual sensitivities and specificities, or the prevalence of the true positive findings in the study population. The optimal sampling is applied to a real-world example in record linkage to evaluate the difference in classification accuracy of two matching algorithms. Copyright © 2013 John Wiley & Sons, Ltd.

  5. Optimization of techniques for multiple platform testing in small, precious samples such as human chorionic villus sampling.

    Science.gov (United States)

    Pisarska, Margareta D; Akhlaghpour, Marzieh; Lee, Bora; Barlow, Gillian M; Xu, Ning; Wang, Erica T; Mackey, Aaron J; Farber, Charles R; Rich, Stephen S; Rotter, Jerome I; Chen, Yii-der I; Goodarzi, Mark O; Guller, Seth; Williams, John

    2016-11-01

    Multiple testing to understand global changes in gene expression based on genetic and epigenetic modifications is evolving. Chorionic villi, obtained for prenatal testing, is limited, but can be used to understand ongoing human pregnancies. However, optimal storage, processing and utilization of CVS for multiple platform testing have not been established. Leftover CVS samples were flash-frozen or preserved in RNAlater. Modifications to standard isolation kits were performed to isolate quality DNA and RNA from samples as small as 2-5 mg. RNAlater samples had significantly higher RNA yields and quality and were successfully used in microarray and RNA-sequencing (RNA-seq). RNA-seq libraries generated using 200 versus 800-ng RNA showed similar biological coefficients of variation. RNAlater samples had lower DNA yields and quality, which improved by heating the elution buffer to 70 °C. Purification of DNA was not necessary for bisulfite-conversion and genome-wide methylation profiling. CVS cells were propagated and continue to express genes found in freshly isolated chorionic villi. CVS samples preserved in RNAlater are superior. Our optimized techniques provide specimens for genetic, epigenetic and gene expression studies from a single small sample which can be used to develop diagnostics and treatments using a systems biology approach in the prenatal period. © 2016 John Wiley & Sons, Ltd. © 2016 John Wiley & Sons, Ltd.

  6. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, M; Li, R; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Ye, Y [Stanford Univ, Management Science and Engineering, Stanford, CA (United States); Boyd, S [Stanford University, Electrical Engineering, Stanford, CA (United States)

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system, (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. Algorithm continues by pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provide an effective way to optimize simultaneously the large collection of station parameters and significantly improves
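
    Of the three techniques named above, pattern search is the simplest to sketch: poll the objective at plus/minus one step along each coordinate, move when that improves the cost, and shrink the step otherwise. The quadratic objective below is a toy stand-in for a treatment-plan cost, not the SPORT objective.

      import numpy as np

      def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
          x = np.asarray(x0, dtype=float)
          fx = f(x)
          for _ in range(max_iter):
              improved = False
              for i in range(len(x)):
                  for s in (step, -step):        # poll both directions along coordinate i
                      cand = x.copy()
                      cand[i] += s
                      fc = f(cand)
                      if fc < fx:
                          x, fx, improved = cand, fc, True
              if not improved:
                  step *= 0.5                    # no improving poll point: shrink the pattern
                  if step < tol:
                      break
          return x, fx

      f = lambda v: (v[0] - 3.0) ** 2 + 2.0 * (v[1] + 1.0) ** 2
      print(pattern_search(f, [0.0, 0.0]))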

  7. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  8. Factorial Based Response Surface Modeling with Confidence Intervals for Optimizing Thermal Optical Transmission Analysis of Atmospheric Black Carbon

    Science.gov (United States)

    We demonstrate how thermal-optical transmission analysis (TOT) for refractory light-absorbing carbon in atmospheric particulate matter was optimized with empirical response surface modeling. TOT employs pyrolysis to distinguish the mass of black carbon (BC) from organic carbon (...

  9. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for physical and chemical property sets and investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimate variance at these locations. The addition of optimal samples, for specific regions, increased the accuracy up to 2 % for chemical and 1 % for physical properties. The use of a sample grid and medium-scale variogram, as previous information for the conception of additional sampling schemes, was very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.
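
    The spatial simulated annealing loop itself is compact. The sketch below perturbs one sampling point at a time and accepts moves by the Metropolis rule, using the mean distance from a prediction grid to the nearest sample as a cheap proxy for the kriging variance the study actually minimizes; the domain, point count and cooling schedule are illustrative.

      import numpy as np

      rng = np.random.default_rng(4)
      grid = np.stack(np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30)), -1).reshape(-1, 2)

      def criterion(pts):
          d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=-1)
          return d.min(axis=1).mean()            # mean distance to the nearest sampling point

      pts = rng.random((15, 2))                  # initial sampling scheme
      cost, temp = criterion(pts), 0.05
      for _ in range(3000):
          cand = pts.copy()
          i = rng.integers(len(pts))
          cand[i] = np.clip(cand[i] + 0.1 * rng.standard_normal(2), 0.0, 1.0)
          c = criterion(cand)
          if c < cost or rng.random() < np.exp(-(c - cost) / temp):   # Metropolis acceptance
              pts, cost = cand, c
          temp *= 0.999                          # geometric cooling
      print(f"final mean nearest-sample distance: {cost:.4f}")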

  10. A prospective, comparative trial to optimize sampling techniques in EUS-guided FNA of solid pancreatic masses.

    Science.gov (United States)

    Lee, Jun Kyu; Choi, Jong Hak; Lee, Kwang Hyuck; Kim, Kwang Min; Shin, Jae Uk; Lee, Jong Kyun; Lee, Kyu Taek; Jang, Kee-Taek

    2013-05-01

    There is no standardization of the use of suction during puncturing of a target in pancreatic EUS-guided FNA (EUS-FNA). It is also debatable whether expressing aspirates from the needle by the traditional method of reinserting the stylet is more effective than by air flushing, which is easier and safer. To optimize sampling techniques in pancreatic EUS-FNA. Prospective, comparative trial. Tertiary-care referral center. Eighty-one consecutive patients with solid pancreatic masses. Four punctures were performed for each mass in random order by a 2 × 2 factorial design. Sample quality and diagnostic yield were compared between samples with suction (S+) versus no suction (S-) and expressed by reinserting the stylet (RS) versus air flushing (AF). Sample quality by the number of diagnostic samples, cellularity, bloodiness, and air-drying artifact; diagnostic yield by accuracy, sensitivity, and specificity. The number of diagnostic samples (72.8% vs 58.6%; P = .001), cellularity (odds ratio [OR] 2.12; 95% confidence interval [CI], 1.37-3.30; P techniques. ( NCT01354795.). Copyright © 2013 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.

  11. A generalization of the Whittaker-Kotel'nikov-Shannon sampling theorem for continuous functions on a closed interval

    Science.gov (United States)

    Trynin, Alexandr Yu

    2009-12-01

    Classes of functions in the space of continuous functions $f$ defined on the interval $[0,\pi]$ and vanishing at its end-points are described for which there is pointwise and approximate uniform convergence of the Lagrange-type operators $S_\lambda(f,x)=\sum_{k=0}^n\frac{y(x,\lambda)}{y'(x_{k,\lambda})(x-x_{k,\lambda})}f(x_{k,\lambda})$. These operators involve the solutions $y(x,\lambda)$ of the Cauchy problem for the equation $y''+(\lambda-q_\lambda(x))y=0$, where $q_\lambda\in V_{\rho_\lambda}[0,\pi]$ (here $V_{\rho_\lambda}[0,\pi]$ is the ball of radius $\rho_\lambda=o(\sqrt{\lambda}/\ln\lambda)$ in the space of functions of bounded variation vanishing at the origin, and $y(x_{k,\lambda})=0$). Several modifications of this operator are proposed, which allow an arbitrary continuous function on $[0,\pi]$ to be approximated uniformly. Bibliography: 40 titles.

  12. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    Science.gov (United States)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in samples sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.

  13. Optimal and maximin sample sizes for multicentre cost-effectiveness trials.

    Science.gov (United States)

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2015-10-01

    This paper deals with the optimal sample sizes for a multicentre trial in which the cost-effectiveness of two treatments in terms of net monetary benefit is studied. A bivariate random-effects model, with the treatment-by-centre interaction effect being random and the main effect of centres fixed or random, is assumed to describe both costs and effects. The optimal sample sizes concern the number of centres and the number of individuals per centre in each of the treatment conditions. These numbers maximize the efficiency or power for given research costs or minimize the research costs at a desired level of efficiency or power. Information on model parameters and sampling costs are required to calculate these optimal sample sizes. In case of limited information on relevant model parameters, sample size formulas are derived for so-called maximin sample sizes which guarantee a power level at the lowest study costs. Four different maximin sample sizes are derived based on the signs of the lower bounds of two model parameters, with one case being worst compared to others. We numerically evaluate the efficiency of the worst case instead of using others. Finally, an expression is derived for calculating optimal and maximin sample sizes that yield sufficient power to test the cost-effectiveness of two treatments. © The Author(s) 2015.

  14. An Approximate Optimal Relationship in the Sampling Plan with Inspection Errors

    Institute of Scientific and Technical Information of China (English)

    YANG Ji-ping; QIU Wan-hua; Martin NEWBY

    2001-01-01

    The paper presents and proves an approximate optimal relationship between sample size n and acceptance number c in the sampling plans under imperfect inspection which minimize the Bayesian risk. The conclusion generalizes the result obtained by A. Hald on the assumption that the inspection is perfect.

  15. A normative inference approach for optimal sample sizes in decisions from experience.

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    "Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE.

  16. A normative inference approach for optimal sample sizes in decisions from experience

    Directory of Open Access Journals (Sweden)

    Dirk eOstwald

    2015-09-01

    Full Text Available Decisions from experience (DFE refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experienced-based choice is the sampling paradigm, which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the optimal sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical manuscript, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for decisions from experience. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE.

  17. An improved adaptive sampling and experiment design method for aerodynamic optimization

    Institute of Scientific and Technical Information of China (English)

    Huang Jiangtao; Gao Zhenghong; Zhou Zhu; Zhao Ke

    2015-01-01

    The experiment design method is key to constructing a highly reliable surrogate model for numerical optimization in large-scale projects. Within the method, the experimental design criterion directly affects the accuracy of the surrogate model and the optimization efficiency. To address the shortcomings of traditional experimental design, an improved adaptive sampling method is proposed in this paper. The surrogate model is first constructed from basic sparse samples. Then the supplementary sampling position is detected according to the specified criteria, which introduce energy-function and curvature sampling criteria based on a radial basis function (RBF) network. The sampling detection criteria consider both the uniformity of the sample distribution and the description of hypersurface curvature, so as to significantly improve the prediction accuracy of the surrogate model with far fewer samples. For a surrogate model constructed with sparse samples, sample uniformity is an important factor for the interpolation accuracy in the initial stage of adaptive sampling and surrogate model training. As uniformity improves, the curvature description of the objective function surface gradually becomes more important. In consideration of these issues, a crowdness enhance function and a root mean square error (RMSE) feedback function are introduced in the C criterion expression. Thus, a new sampling method called RMSE and crowdness enhance (RCE) adaptive sampling is established. The validity of the RCE adaptive sampling method is studied first through typical test functions and then on the airfoil/wing aerodynamic optimization design problem, which has a high-dimensional design space. The results show that the RCE adaptive sampling method not only reduces the required number of samples, but also effectively improves the prediction accuracy of the surrogate model, which gives it broad prospects for applications.
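
    A stripped-down version of such an adaptive loop on a one-dimensional test function is shown below: rebuild an RBF surrogate from the current samples, then add the candidate farthest from the existing samples, a crude space-filling criterion standing in for the paper's RCE criterion; the test function and sample counts are illustrative.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def test_fn(x):
          return np.sin(3.0 * x) + 0.3 * x ** 2

      x = np.linspace(-2.0, 2.0, 6)[:, None]               # initial sparse samples
      cand = np.linspace(-2.0, 2.0, 201)[:, None]          # candidate pool
      for _ in range(10):
          surrogate = RBFInterpolator(x, test_fn(x[:, 0]))
          crowd = np.min(np.abs(cand - x.T), axis=1)       # distance to the nearest existing sample
          x = np.vstack([x, cand[np.argmax(crowd)]])       # add the least "crowded" candidate
      surrogate = RBFInterpolator(x, test_fn(x[:, 0]))
      rmse = np.sqrt(np.mean((surrogate(cand) - test_fn(cand[:, 0])) ** 2))
      print(f"{len(x)} samples, surrogate RMSE on the grid: {rmse:.4f}")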

  18. Approximate Optimal Control of Affine Nonlinear Continuous-Time Systems Using Event-Sampled Neurodynamic Programming.

    Science.gov (United States)

    Sahoo, Avimanyu; Xu, Hao; Jagannathan, Sarangapani

    2017-03-01

    This paper presents an approximate optimal control of nonlinear continuous-time systems in affine form by using the adaptive dynamic programming (ADP) with event-sampled state and input vectors. The knowledge of the system dynamics is relaxed by using a neural network (NN) identifier with event-sampled inputs. The value function, which becomes an approximate solution to the Hamilton-Jacobi-Bellman equation, is generated by using an event-sampled NN approximator. Subsequently, the NN identifier and the approximated value function are utilized to obtain the optimal control policy. Both the identifier and value function approximator weights are tuned only at the event-sampled instants, leading to an aperiodic update scheme. A novel adaptive event sampling condition is designed to determine the sampling instants such that the approximation accuracy and the stability are maintained. A positive lower bound on the minimum inter-sample time is guaranteed to avoid an accumulation point, and the dependence of the inter-sample time upon the NN weight estimates is analyzed. Local ultimate boundedness of the resulting nonlinear impulsive dynamical closed-loop system is shown. Finally, a numerical example is utilized to evaluate the performance of the near-optimal design. The net result is the design of an event-sampled ADP-based controller for nonlinear continuous-time systems.

  19. Confronting the ironies of optimal design: Nonoptimal sampling designs with desirable properties

    Science.gov (United States)

    Casman, Elizabeth A.; Naiman, Daniel Q.; Chamberlin, Charles E.

    1988-03-01

    Two sampling designs are developed for the improvement of parameter estimate precision in nonlinear regression, one for when there is uncertainty in the parameter values, and the other for when the correct model formulation is unknown. Although based on concepts of optimal design theory, the design criteria emphasize efficiency rather than optimality. The development is illustrated using a Streeter-Phelps dissolved oxygen-biochemical oxygen demand model.

  20. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Full Text Available Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which is also dependent on the robot positioning in the Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process modeled on its components: product, process and resource; and by automatically configuring a sample-based motion problem and the transition-based rapidly-exploring random tree algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, revealing its functionality and outlining future potential for optimal motion generation for robotic machining processes.

  1. Successive RR Interval Analysis of PVC With Sinus Rhythm Using Fractal Dimension, Poincaré Plot and Sample Entropy Method

    Directory of Open Access Journals (Sweden)

    Md. Maksudul Hasan

    2013-02-01

    Full Text Available Premature ventricular contractions (PVC) are premature heartbeats originating from the ventricles of the heart. These heartbeats occur before the regular heartbeat. In fractal analysis, most mathematical models produce intractable solutions; nevertheless, some studies have applied the fractal dimension (FD) to quantify cardiac abnormality. Based on changes in FD, different abnormalities present in the electrocardiogram (ECG) can be identified. This work presents the use of Poincaré plot indexes and sample entropy (SE) analysis of heart rate variability (HRV) from short-term ECG recordings as a screening tool for PVC. The Poincaré plot indexes and the SE measure are used to analyze the variability and complexity of HRV. A clear reduction of the standard deviation (SD) projections in the Poincaré plot pattern was observed, with a significant difference in SD between healthy subjects and PVC subjects. Finally, a comparison of the FD, SE and Poincaré plot parameters is presented.
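
    The two HRV descriptors named in this abstract are easy to state concretely. The sketch below computes the Poincaré plot widths SD1/SD2 and the sample entropy of an RR-interval series; the template length m = 2, tolerance r = 0.2·SD and the synthetic RR series are conventional illustrative choices, not values taken from the study.

```python
import numpy as np

def poincare_sd(rr):
    """SD1/SD2 descriptors of the Poincaré plot of successive RR intervals."""
    rr = np.asarray(rr, dtype=float)
    diff = np.diff(rr)
    sd1 = np.std(diff, ddof=1) / np.sqrt(2.0)
    sd2 = np.sqrt(max(2.0 * np.var(rr, ddof=1) - sd1 ** 2, 0.0))
    return sd1, sd2

def sample_entropy(rr, m=2, r=None):
    """Sample entropy of an RR series (template length m, tolerance r)."""
    rr = np.asarray(rr, dtype=float)
    n = len(rr)
    if r is None:
        r = 0.2 * rr.std(ddof=1)
    def matches(length):
        # use n - m templates for both lengths so the counts are comparable
        tpl = np.array([rr[i:i + length] for i in range(n - m)])
        total = 0
        for i in range(len(tpl)):
            dist = np.max(np.abs(tpl - tpl[i]), axis=1)   # Chebyshev distance
            total += np.count_nonzero(dist <= r) - 1      # drop the self-match
        return total
    b, a = matches(m), matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

# toy RR series (seconds): a regular rhythm with occasional premature beats
rng = np.random.default_rng(6)
rr = 0.8 + 0.02 * rng.standard_normal(300)
rr[::25] -= 0.25                                  # crude stand-in for premature beats
print("SD1 = %.4f s, SD2 = %.4f s, SampEn = %.3f" % (*poincare_sd(rr), sample_entropy(rr)))
```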

  2. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  3. The Role of Vertex Consistency in Sampling-based Algorithms for Optimal Motion Planning

    CERN Document Server

    Arslan, Oktay

    2012-01-01

    Motion planning problems have been studied by both the robotics and the controls research communities for a long time, and many algorithms have been developed for their solution. Among them, incremental sampling-based motion planning algorithms, such as the Rapidly-exploring Random Trees (RRTs), and the Probabilistic Road Maps (PRMs) have become very popular recently, owing to their implementation simplicity and their advantages in handling high-dimensional problems. Although these algorithms work very well in practice, the quality of the computed solution is often not good, i.e., the solution can be far from the optimal one. A recent variation of RRT, namely the RRT* algorithm, bypasses this drawback of the traditional RRT algorithm, by ensuring asymptotic optimality as the number of samples tends to infinity. Nonetheless, the convergence rate to the optimal solution may still be slow. This paper presents a new incremental sampling-based motion planning algorithm based on Rapidly-exploring Random Graphs (RRG...

  4. Obesity and cell-free DNA "no calls": is there an optimal gestational age at time of sampling?

    Science.gov (United States)

    Livergood, Mary C; LeChien, Kay A; Trudell, Amanda S

    2017-04-01

    Cell-free DNA screen failures or "no calls" occur in 1-12% of samples and are frustrating for both clinician and patient. The rate of "no calls" has been shown to have an inverse relationship with gestational age. Recent studies have shown an increased risk for "no calls" among obese women. We sought to determine the optimal gestational age for cell-free DNA among obese women. We performed a retrospective cohort study of women who underwent cell-free DNA at a single tertiary care center from 2011 through 2016. Adjusted odds ratios with 95% confidence intervals for a "no call" were determined for each weight class and compared to normal-weight women. The predicted probability of a "no call" with 95% confidence intervals were determined for each week of gestation for normal-weight and obese women and compared. Among 2385 patients meeting inclusion criteria, 105 (4.4%) had a "no call". Compared to normal-weight women, the adjusted odds ratio of a "no call" increased with increasing weight class from overweight to obesity class III (respectively: adjusted odds ratio, 2.31; 95% confidence interval, 1.21-4.42 to adjusted odds ratio, 8.55; 95% confidence interval, 4.16-17.56). A cut point at 21 weeks was identified for obesity class II/III women at which there is no longer a significant difference in the probability of a "no call" for obese women compared to normal weight women. From 8-16 weeks, there is a 4.5% reduction in the probability of a "no call" for obesity class II/III women (respectively: 14.9%; 95% confidence interval, 8.95-20.78 and 10.4%; 95% confidence interval, 7.20-13.61; Ptrend DNA limits reproductive choices. However, a progressive fall in the probability of a "no call" with advancing gestational age suggests that delaying cell-free DNA for obese women is a reasonable strategy to reduce the probability of a "no call". Copyright © 2017 Elsevier Inc. All rights reserved.

  5. OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Maja Vrkljan

    2004-12-01

    Full Text Available The purpose of this study was to determine an optimal dissolution method for silicate rock samples for further analytical purposes. An analytical FAAS method for determining the cobalt, chromium, copper, nickel, lead and zinc content in a gabbro sample and the geochemical standard AGV-1 was applied for verification. Dissolution in mixtures of various inorganic acids was tested, as well as the Na2CO3 fusion technique. The results obtained by the different methods were compared, and dissolution in the mixture of HNO3 + HF is recommended as optimal.

  6. Optimization of Proteomic Sample Preparation Procedures for Comprehensive Protein Characterization of Pathogenic Systems

    Science.gov (United States)

    Mottaz-Brewer, Heather M.; Norbeck, Angela D.; Adkins, Joshua N.; Manes, Nathan P.; Ansong, Charles; Shi, Liang; Rikihisa, Yasuko; Kikuchi, Takane; Wong, Scott W.; Estep, Ryan D.; Heffron, Fred; Pasa-Tolic, Ljiljana; Smith, Richard D.

    2008-01-01

    Mass spectrometry-based proteomics is a powerful analytical tool for investigating pathogens and their interactions within a host. The sensitivity of such analyses provides broad proteome characterization, but the sample-handling procedures must first be optimized to ensure compatibility with the technique and to maximize the dynamic range of detection. The decision-making process for determining optimal growth conditions, preparation methods, sample analysis methods, and data analysis techniques in our laboratory is discussed herein with consideration of the balance in sensitivity, specificity, and biomass losses during analysis of host-pathogen systems. PMID:19183792

  7. Sample Subset Optimization Techniques for Imbalanced and Ensemble Learning Problems in Bioinformatics Applications.

    Science.gov (United States)

    Yang, Pengyi; Yoo, Paul D; Fernando, Juanita; Zhou, Bing B; Zhang, Zili; Zomaya, Albert Y

    2014-03-01

    Data sampling is a widely used technique in a broad range of machine learning problems. Traditional sampling approaches generally rely on random resampling from a given dataset. However, these approaches do not take into consideration additional information, such as sample quality and usefulness. We recently proposed a data sampling technique, called sample subset optimization (SSO). The SSO technique relies on a cross-validation procedure for identifying and selecting the most useful samples as subsets. In this paper, we describe the application of SSO techniques to imbalanced and ensemble learning problems, respectively. For imbalanced learning, the SSO technique is employed as an under-sampling technique for identifying a subset of highly discriminative samples in the majority class. In ensemble learning, the SSO technique is utilized as a generic ensemble technique where multiple optimized subsets of samples from each class are selected for building an ensemble classifier. We demonstrate the utilities and advantages of the proposed techniques on a variety of bioinformatics applications where class imbalance, small sample size, and noisy data are prevalent.
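
    A minimal sketch of the under-sampling idea described above (not the authors' SSO code): majority-class samples are scored by the validation balanced accuracy of many randomly drawn training subsets that contain them, and only the top-scoring ones are kept. The nearest-centroid stand-in classifier, subset sizes and number of rounds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_predict(Xtr, ytr, Xte):
    """Tiny stand-in classifier: assign each test point to the nearest class centroid."""
    classes = np.unique(ytr)
    C = np.stack([Xtr[ytr == c].mean(axis=0) for c in classes])
    d = ((Xte[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return classes[d.argmin(axis=1)]

def balanced_accuracy(y_true, y_pred):
    return np.mean([np.mean(y_pred[y_true == c] == c) for c in np.unique(y_true)])

def sso_undersample(X, y, majority=0, subset_size=30, n_rounds=300, n_keep=50, val_frac=0.3):
    """Score majority samples by the validation accuracy of random subsets containing them,
    then keep the n_keep best ones together with the whole minority class."""
    n = len(y)
    score, counts = np.zeros(n), np.zeros(n)
    for _ in range(n_rounds):
        val = rng.choice(n, size=int(val_frac * n), replace=False)
        val_mask = np.zeros(n, dtype=bool); val_mask[val] = True
        if len(np.unique(y[val])) < 2:
            continue
        maj_pool = np.flatnonzero((y == majority) & ~val_mask)
        min_pool = np.flatnonzero((y != majority) & ~val_mask)
        pick = rng.choice(maj_pool, size=min(subset_size, len(maj_pool)), replace=False)
        tr = np.concatenate([pick, min_pool])
        acc = balanced_accuracy(y[val], nearest_centroid_predict(X[tr], y[tr], X[val]))
        score[pick] += acc
        counts[pick] += 1
    maj_idx = np.flatnonzero(y == majority)
    avg = score[maj_idx] / np.maximum(counts[maj_idx], 1)
    keep = maj_idx[np.argsort(avg)[::-1][:n_keep]]
    return np.concatenate([keep, np.flatnonzero(y != majority)])

# toy imbalanced data set: 300 majority vs 40 minority samples in 5 dimensions
X = np.vstack([rng.normal(0.0, 1.0, (300, 5)), rng.normal(1.5, 1.0, (40, 5))])
y = np.array([0] * 300 + [1] * 40)
idx = sso_undersample(X, y, n_keep=40)
print("balanced training set size:", len(idx))
```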

  8. XAFSmass: a program for calculating the optimal mass of XAFS samples

    Science.gov (United States)

    Klementiev, K.; Chernikov, R.

    2016-05-01

    We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements as compared to the old Windows based program XAFSmass: 1) it is truly platform independent, as provided by Python language, 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.

  9. An Optimal Dimensionality Sampling Scheme on the Sphere for Antipodal Signals In Diffusion Magnetic Resonance Imaging

    CERN Document Server

    Bates, Alice P; Kennedy, Rodney A

    2015-01-01

    We propose a sampling scheme on the sphere and develop a corresponding spherical harmonic transform (SHT) for the accurate reconstruction of the diffusion signal in diffusion magnetic resonance imaging (dMRI). By exploiting the antipodal symmetry, we design a sampling scheme that requires the optimal number of samples on the sphere, equal to the degrees of freedom required to represent the antipodally symmetric band-limited diffusion signal in the spectral (spherical harmonic) domain. Compared with existing sampling schemes on the sphere that allow for the accurate reconstruction of the diffusion signal, the proposed sampling scheme reduces the number of samples required by a factor of two or more. We analyse the numerical accuracy of the proposed SHT and show through experiments that the proposed sampling allows for the accurate and rotationally invariant computation of the SHT to near machine precision accuracy.

  10. Algorithms for integration of stochastic differential equations using parallel optimized sampling in the Stratonovich calculus

    Science.gov (United States)

    Kiesewetter, Simon; Drummond, Peter D.

    2017-03-01

    A variance reduction method for stochastic integration of Fokker-Planck equations is derived. This unifies the cumulant hierarchy and stochastic equation approaches to obtaining moments, giving a performance superior to either. We show that the brute force method of reducing sampling error by just using more trajectories in a sampled stochastic equation is not the best approach. The alternative of using a hierarchy of moment equations is also not optimal, as it may converge to erroneous answers. Instead, through Bayesian conditioning of the stochastic noise on the requirement that moment equations are satisfied, we obtain improved results with reduced sampling errors for a given number of stochastic trajectories. The method used here converges faster in time-step than Ito-Euler algorithms. This parallel optimized sampling (POS) algorithm is illustrated by several examples, including a bistable nonlinear oscillator case where moment hierarchies fail to converge.

  11. Spatial Prediction and Optimized Sampling Design for Sodium Concentration in Groundwater.

    Science.gov (United States)

    Zahid, Erum; Hussain, Ijaz; Spöck, Gunter; Faisal, Muhammad; Shabbir, Javid; M AbdEl-Salam, Nasser; Hussain, Tajammal

    Sodium is an integral part of water, and its excessive amount in drinking water causes high blood pressure and hypertension. In the present paper, the spatial distribution of sodium concentration in drinking water is modeled, and optimized sampling designs for selecting sampling locations are calculated for three divisions in Punjab, Pakistan. Universal kriging and Bayesian universal kriging are used to predict the sodium concentrations. Spatial simulated annealing is used to generate optimized sampling designs. Different estimation methods (i.e., maximum likelihood, restricted maximum likelihood, ordinary least squares, and weighted least squares) are used to estimate the parameters of the variogram model (i.e., exponential, Gaussian, spherical and cubic). It is concluded that Bayesian universal kriging fits better than universal kriging. It is also observed that the universal kriging predictor provides the minimum mean universal kriging variance for both adding and deleting locations during sampling design.
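
    Spatial simulated annealing can be sketched compactly. The example below perturbs one sampling site at a time and accepts moves with the Metropolis rule; for brevity it minimizes a space-filling criterion (the mean distance from prediction-grid nodes to the nearest sample) rather than the mean universal kriging variance used in the paper, so the objective function here is only a stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)

# fine grid of prediction locations over the study region (unit square here)
gx, gy = np.meshgrid(np.linspace(0, 1, 40), np.linspace(0, 1, 40))
grid = np.column_stack([gx.ravel(), gy.ravel()])

def mmsd(design):
    """Mean of shortest distances from every grid node to the nearest sampling site."""
    d = np.sqrt(((grid[:, None, :] - design[None, :, :]) ** 2).sum(-1))
    return d.min(axis=1).mean()

def spatial_simulated_annealing(n_sites=20, n_iter=3000, t0=0.05, cooling=0.999):
    design = rng.random((n_sites, 2))
    best, f = design.copy(), mmsd(design)
    f_best, t = f, t0
    for _ in range(n_iter):
        cand = design.copy()
        i = rng.integers(n_sites)
        cand[i] = np.clip(cand[i] + rng.normal(scale=0.05, size=2), 0, 1)  # perturb one site
        fc = mmsd(cand)
        if fc < f or rng.random() < np.exp(-(fc - f) / t):                 # Metropolis acceptance
            design, f = cand, fc
            if f < f_best:
                best, f_best = design.copy(), f
        t *= cooling                                                       # cooling schedule
    return best, f_best

design, crit = spatial_simulated_annealing()
print(f"optimized space-filling criterion: {crit:.4f}")
```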

  12. Subdivision, Sampling, and Initialization Strategies for Simplical Branch and Bound in Global Optimization

    DEFF Research Database (Denmark)

    Clausen, Jens; Zilinskas, A.

    2002-01-01

    We consider the problem of optimizing a Lipschitzian function. The branch and bound technique is a well-known solution method, and the key components for this are the subdivision scheme, the bound calculation scheme, and the initialization. For Lipschitzian optimization, the bound calculations are based on the sampling of function values. We propose a branch and bound algorithm based on regular simplexes. Initially, the domain in question is covered with regular simplexes, and our subdivision scheme maintains this property. The bound calculation becomes both simple and efficient, and we describe two schemes for sampling points of the function: midpoint sampling and vertex sampling. The convergence of the algorithm is proved, and numerical results are presented for the two-dimensional case, for which a special initial covering is also presented. (C) 2002 Elsevier Science Ltd. All rights reserved.

  13. An Optimal Spatial Sampling Design for Intra-Urban Population Exposure Assessment.

    Science.gov (United States)

    Kumar, Naresh

    2009-02-01

    This article offers an optimal spatial sampling design that captures maximum variance with the minimum sample size. The proposed sampling design addresses the weaknesses of the sampling design that Kanaroglou et al. (2005) used for identifying 100 sites for capturing population exposure to NO(2) in Toronto, Canada. Their sampling design suffers from a number of weaknesses and fails to capture the spatial variability in NO(2) effectively. The demand surface they used is spatially autocorrelated and weighted by the population size, which leads to the selection of redundant sites. The location-allocation model (LAM) available with commercial software packages, which they used to identify their sample sites, is not designed to solve spatial sampling problems using spatially autocorrelated data. A computer application (written in C++) that utilizes a spatial search algorithm was developed to implement the proposed sampling design. This design was implemented in three different urban environments - namely Cleveland, OH; Delhi, India; and Iowa City, IA - to identify optimal sample sites for monitoring airborne particulates.

  14. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for the determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected into a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min-1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher when compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  15. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of accuracy compensation implementation are closely related to the choice of sampling points. Therefore, based on the error similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  16. On the Effectiveness of Sampling for Evolutionary Optimization in Noisy Environments.

    Science.gov (United States)

    Qian, Chao; Yu, Yang; Tang, Ke; Jin, Yaochu; Yao, Xin; Zhou, Zhi-Hua

    2016-12-16

    In real-world optimization tasks, the objective (i.e., fitness) function evaluation is often disturbed by noise due to a wide range of uncertainties. Evolutionary algorithms are often employed in noisy optimization, where reducing the negative effect of noise is a crucial issue. Sampling is a popular strategy for dealing with noise: to estimate the fitness of a solution, it evaluates the fitness multiple (k) times independently and then uses the sample average to approximate the true fitness. Obviously, sampling can make the fitness estimation closer to the true value, but it also increases the estimation cost. Previous studies mainly focused on empirical analysis and design of efficient sampling strategies, while the impact of sampling is unclear from a theoretical viewpoint. In this paper, we show that sampling can speed up noisy evolutionary optimization exponentially via rigorous running time analysis. For the (1+1)-EA solving the OneMax and the LeadingOnes problems under prior (e.g., one-bit) or posterior (e.g., additive Gaussian) noise, we prove that, under a high noise level, the running time can be reduced from exponential to polynomial by sampling. The analysis also shows that a gap of one in the value of k for sampling can lead to an exponential difference in the expected running time, cautioning that k must be selected carefully. We further prove, using two illustrative examples, that sampling can be more effective for noise handling than parent populations and threshold selection, two strategies that have been shown to be robust to noise. Finally, we also show that sampling can be ineffective when noise does not bring a negative impact.
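
    The setting analyzed here is simple enough to simulate directly. The sketch below runs a (1+1)-EA on OneMax under one-bit prior noise and compares solutions by the mean of k independent noisy evaluations; the noise probability, problem size n and sample size k are arbitrary demo values, and the run only illustrates the mechanism rather than the paper's runtime bounds.

```python
import numpy as np

rng = np.random.default_rng(2)

def noisy_onemax(x, p_noise=0.9):
    """One-bit prior noise: with probability p_noise a uniformly chosen bit is flipped
    before the OneMax value is computed (the stored solution itself is unchanged)."""
    y = x.copy()
    if rng.random() < p_noise:
        y[rng.integers(len(x))] ^= 1
    return y.sum()

def sampled_fitness(x, k):
    """Sample average of k independent noisy evaluations."""
    return np.mean([noisy_onemax(x) for _ in range(k)])

def one_plus_one_ea(n=50, k=10, max_evals=200_000):
    x = rng.integers(0, 2, n)
    evals = 0
    # the true-fitness check below is only used to stop the demo, not by the algorithm
    while x.sum() < n and evals < max_evals:
        y = x.copy()
        flip = rng.random(n) < 1.0 / n          # standard bit-flip mutation (at least one bit)
        if not flip.any():
            flip[rng.integers(n)] = True
        y[flip] ^= 1
        if sampled_fitness(y, k) >= sampled_fitness(x, k):
            x = y
        evals += 2 * k
    return x, evals

x, evals = one_plus_one_ea()
print("optimum reached:", x.sum() == len(x), " noisy evaluations used:", evals)
```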

  17. 'Adaptive Importance Sampling for Performance Evaluation and Parameter Optimization of Communication Systems'

    NARCIS (Netherlands)

    Remondo Bueno, D.; Srinivasan, R.; Nicola, V.F.; van Etten, Wim; Tattje, H.E.P.

    2000-01-01

    We present new adaptive importance sampling techniques based on stochastic Newton recursions. Their applicability to the performance evaluation of communication systems is studied. Besides bit-error rate (BER) estimation, the techniques are used for system parameter optimization. Two system models

  18. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of the optimization. However, earlier studies have drawbacks because there are three phases in the optimization loop and empirical parameters. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of problems with constraints without any empirical parameters. It is able to select points located in the feasible region with high model uncertainty as well as points along the boundary of the constraint at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. Also, the method guarantees the accuracy of the surrogate model because the sample points are not located within extremely small regions as in super-EGO. The performance of the proposed method, such as the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
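
    The sketch below illustrates the flavor of such a criterion switch, assuming a surrogate already provides a predictive mean and standard deviation for the objective and the constraint at candidate points: expected improvement is used over predicted-feasible points while the surrogate is still inaccurate, and a boundary-seeking score takes over afterwards. The blending rule and all numbers are hypothetical, not the authors' exact formulation.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Standard EI for minimization, given surrogate mean/std at candidate points."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def boundary_criterion(mu_obj, mu_g, sigma_g):
    """Favor candidates whose constraint prediction is near g(x) = 0 (the boundary)
    and whose predicted objective is low."""
    near_boundary = norm.pdf(mu_g / np.maximum(sigma_g, 1e-12))
    return near_boundary * (mu_obj.max() - mu_obj)

def united_criterion(mu_obj, sig_obj, mu_g, sig_g, f_best, mse_threshold=0.05):
    """Hypothetical blend: while the surrogate is inaccurate (large mean predictive
    variance), emphasize the feasible-region infill criterion; otherwise emphasize
    the boundary criterion."""
    feasible = mu_g <= 0.0                              # predicted-feasible candidates
    ei = expected_improvement(mu_obj, sig_obj, f_best) * feasible
    bc = boundary_criterion(mu_obj, mu_g, sig_g)
    return ei if np.mean(sig_obj ** 2) > mse_threshold else bc

# toy usage with made-up surrogate predictions at 5 candidate points
mu_obj = np.array([1.2, 0.8, 0.5, 0.9, 1.5])
sig_obj = np.array([0.3, 0.1, 0.4, 0.2, 0.05])
mu_g = np.array([-0.5, 0.1, -0.05, -1.0, 0.4])          # g(x) <= 0 means feasible
sig_g = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
scores = united_criterion(mu_obj, sig_obj, mu_g, sig_g, f_best=0.9)
print("next sample index:", int(np.argmax(scores)))
```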

  19. Study on the Sampling Interval in the Spectral Domain Prony's Method for Microstrip Circuits

    Institute of Scientific and Technical Information of China (English)

    高立新; 龚主前; 李元新

    2014-01-01

    The improved spectral domain Prony's method for calculating the S-parameters of microstrip circuits is investigated. The mode-port excitation technique and the wave-port excitation technique in the finite-difference time-domain (FDTD) method are analyzed and compared, and a sampling interval selection criterion is proposed. Several practical engineering examples are provided to demonstrate the performance of the improved spectral domain Prony's method. Numerical results show that the phase constant and S-parameters can still be accurately calculated under very small sampling interval conditions with the improved spectral domain Prony's method.
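
    For readers unfamiliar with the underlying tool, a generic least-squares Prony fit is shown below: complex exponentials are fitted to field samples taken at a uniform spacing dz, and the propagation (phase) constants are recovered from the estimated poles. This is the textbook Prony step, not the paper's improved spectral-domain variant, and the synthetic two-mode signal is only a self-check.

```python
import numpy as np

def prony(y, p, dz):
    """Least-squares Prony fit of y[n] ~ sum_k A_k * exp(-gamma_k * n * dz).
    Returns complex propagation constants gamma_k and amplitudes A_k."""
    y = np.asarray(y, dtype=complex)
    N = len(y)
    # linear-prediction step: y[n] = -sum_{m=1..p} a_m * y[n-m]
    Y = np.column_stack([y[p - m:N - m] for m in range(1, p + 1)])
    a, *_ = np.linalg.lstsq(Y, -y[p:N], rcond=None)
    poles = np.roots(np.concatenate(([1.0], a)))
    gammas = -np.log(poles) / dz            # valid while |Im(gamma)*dz| < pi (no aliasing)
    # amplitude step: Vandermonde least squares
    V = np.vander(poles, N, increasing=True).T
    A, *_ = np.linalg.lstsq(V, y, rcond=None)
    return gammas, A

# synthetic self-check: two modes sampled every dz = 0.2 mm along the line
dz = 0.2e-3
n = np.arange(60)
g_true = np.array([50 + 1j * 900, 20 + 1j * 400])       # Np/m + j rad/m (made-up values)
y = 1.0 * np.exp(-g_true[0] * n * dz) + 0.5 * np.exp(-g_true[1] * n * dz)
gammas, A = prony(y, p=2, dz=dz)
print("recovered phase constants (rad/m):", np.sort(np.abs(gammas.imag)))
```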

  20. Optimization of groundwater sampling approach under various hydrogeological conditions using a numerical simulation model

    Science.gov (United States)

    Qi, Shengqi; Hou, Deyi; Luo, Jian

    2017-09-01

    This study presents a numerical model based on field data to simulate groundwater flow in both the aquifer and the well-bore for the low-flow sampling method and the well-volume sampling method. The numerical model was calibrated to match well with field drawdown, and calculated flow regime in the well was used to predict the variation of dissolved oxygen (DO) concentration during the purging period. The model was then used to analyze sampling representativeness and sampling time. Site characteristics, such as aquifer hydraulic conductivity, and sampling choices, such as purging rate and screen length, were found to be significant determinants of sampling representativeness and required sampling time. Results demonstrated that: (1) DO was the most useful water quality indicator in ensuring groundwater sampling representativeness in comparison with turbidity, pH, specific conductance, oxidation reduction potential (ORP) and temperature; (2) it is not necessary to maintain a drawdown of less than 0.1 m when conducting low flow purging. However, a high purging rate in a low permeability aquifer may result in a dramatic decrease in sampling representativeness after an initial peak; (3) the presence of a short screen length may result in greater drawdown and a longer sampling time for low-flow purging. Overall, the present study suggests that this new numerical model is suitable for describing groundwater flow during the sampling process, and can be used to optimize sampling strategies under various hydrogeological conditions.

  1. Optimal water interval and yield of soybean cultivars

    Directory of Open Access Journals (Sweden)

    Amauri N. Beutler

    2006-09-01

    Full Text Available The objective of this study was to determine the optimal water interval and its relationship with soybean yield. For this purpose, an experiment was carried out in Jaboticabal, São Paulo State, Brazil, in a medium-textured Haplustox soil. The experimental design was completely randomized, in split plots (six compaction levels and four soybean cultivars), with four replications. The soil compaction levels were: T0 = 0, T1* = 1, T1 = 1, T2 = 2, T4 = 4 and T6 = 6 passes of an 11 ton tractor over the same place, side by side, covering the whole soil surface. In treatment T1* the compaction was applied when the soil was drier. In December 2003 the soybean (Glycine max) cultivars IAC Foscarin 31, MG/BR 46 (Conquista), BRS/MG 68 (Vencedora) and IAC 8-2 were sown. After sowing, undisturbed soil samples were collected from the 0.03-0.06, 0.08-0.11, 0.15-0.18 and 0.22-0.25 m layers to determine the water retention curve, the soil penetration resistance and the optimal water interval (OWI). According to the OWI model, the critical bulk density (Dsc) for soybean yield varies among cultivars from 1.56 to 1.64 Mg m-3. The bulk density above which the yield of the soybean cultivars decreased was higher than the Dsc.

  2. A Simplified Approach for Two-Dimensional Optimal Controlled Sampling Designs

    Directory of Open Access Journals (Sweden)

    Neeraj Tiwari

    2014-01-01

    Full Text Available Controlled sampling is a unique method of sample selection that minimizes the probability of selecting nondesirable combinations of units. Extending the concept of linear programming with an effective distance measure, we propose a simple method for two-dimensional optimal controlled selection that ensures zero probability to nondesired samples. Alternative estimators for population total and its variance have also been suggested. Some numerical examples have been considered to demonstrate the utility of the proposed procedure in comparison to the existing procedures.

  3. Optimizing headspace sampling temperature and time for analysis of volatile oxidation products in fish oil

    DEFF Research Database (Denmark)

    Rørbæk, Karen; Jensen, Benny

    1997-01-01

    Headspace gas chromatography (HS-GC), based on adsorption to Tenax GR(R), thermal desorption and GC, has been used for analysis of volatiles in fish oil. To optimize the sampling conditions, the effect of heating the fish oil at various temperatures and times was evaluated from anisidine values (AV) and HS-GC. AV indicated sample degradation at 90 degrees C but only small alterations between 60 and 75 degrees C. HS-GC showed increasing response with temperature and time. Purging at 75 degrees C for 45 min was selected as the preferred sampling condition for oxidized fish oil.

  4. Optimized methods for high-throughput analysis of hair samples for American black bears (Ursus americanus

    Directory of Open Access Journals (Sweden)

    Thea V Kristensen

    2011-06-01

    Full Text Available Noninvasive sampling has revolutionized the study of species that are difficult or dangerous to study using traditional methods. Early studies were often confined to small populations as genotyping large numbers of samples was prohibitively costly and labor intensive. Here we describe optimized protocols designed to reduce the costs and effort required for microsatellite genotyping and sex determination for American black bears (Ursus americanus. We redesigned primers for six microsatellite loci, designed novel primers for the amelogenin gene for genetic determination of sex, and optimized conditions for a nine-locus multiplex PCR. Our high-throughput methods will enable researchers to include larger sample sizes in studies of black bears, providing data in a timely fashion that can be used to inform population management.

  5. Optimal staggered-grid finite-difference schemes by combining Taylor-series expansion and sampling approximation for wave equation modeling

    Science.gov (United States)

    Yan, Hongyong; Yang, Lei; Li, Xiang-Yang

    2016-12-01

    High-order staggered-grid finite-difference (SFD) schemes have been universally used to improve the accuracy of wave equation modeling. However, the high-order SFD coefficients on spatial derivatives are usually determined by the Taylor-series expansion (TE) method, which just leads to great accuracy at small wavenumbers for wave equation modeling. Some conventional optimization methods can achieve high accuracy at large wavenumbers, but they hardly guarantee the small numerical dispersion error at small wavenumbers. In this paper, we develop new optimal explicit SFD (ESFD) and implicit SFD (ISFD) schemes for wave equation modeling. We first derive the optimal ESFD and ISFD coefficients for the first-order spatial derivatives by applying the combination of the TE and the sampling approximation to the dispersion relation, and then analyze their numerical accuracy. Finally, we perform elastic wave modeling with the ESFD and ISFD schemes based on the TE method and the optimal method, respectively. When the appropriate number and interval for the sampling points are chosen, these optimal schemes have extremely high accuracy at small wavenumbers, and can also guarantee small numerical dispersion error at large wavenumbers. Numerical accuracy analyses and modeling results demonstrate the optimal ESFD and ISFD schemes can efficiently suppress the numerical dispersion and significantly improve the modeling accuracy compared to the TE-based ESFD and ISFD schemes.
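
    As a point of reference for the TE-based schemes being improved upon here, the sketch below computes the conventional Taylor-expansion staggered-grid coefficients for the first derivative and evaluates the resulting numerical-dispersion ratio; the optimal ESFD/ISFD coefficients of the paper would be obtained differently, so this is only the baseline.

```python
import numpy as np

def taylor_sfd_coefficients(M):
    """Taylor-expansion-based coefficients c_1..c_M for the staggered first-derivative stencil
        f'(x) ~ (1/h) * sum_m c_m * [f(x + (m-1/2)h) - f(x - (m-1/2)h)],
    obtained by cancelling all odd Taylor terms up to order 2M-1."""
    m = np.arange(1, M + 1)
    j = np.arange(M)
    A = (m - 0.5) ** (2 * j[:, None] + 1)   # A[j, m-1] = (m - 1/2)^(2j+1)
    b = np.zeros(M)
    b[0] = 0.5                              # matches the f'(x) term; higher odd orders vanish
    return np.linalg.solve(A, b)

def dispersion(c, kh):
    """Ratio of numerical to exact wavenumber for the staggered operator;
    values close to 1 mean little numerical dispersion."""
    m = np.arange(1, len(c) + 1)
    return (2.0 / kh) * np.sum(c[:, None] * np.sin((m[:, None] - 0.5) * kh), axis=0)

c = taylor_sfd_coefficients(4)              # 8th-order TE-based coefficients (9/8, -1/24 for M=2, etc.)
print("coefficients:", np.round(c, 6))
kh = np.linspace(0.05, 0.9 * np.pi, 5)
print("k_num/k at kh =", np.round(kh, 2), ":", np.round(dispersion(c, kh), 4))
```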

  6. Optimization of sampling and counting times for gamma-ray spectrometric measurements of short-lived gamma-ray emitters in aqueous samples.

    Science.gov (United States)

    Korun, M

    2008-01-01

    A method to determine the optimal sampling and counting regimes for water monitoring is presented. It is assumed that samples are collected at a constant rate. The collection time is followed by a sample preparation time that is proportional to the sample quantity collected, and then by the counting time. In the optimal regime these times are chosen in such a way that the minimum detectable concentration is the lowest. Two cases are presented: the case when the background originates from the spectrometer background, which is constant in time and independent of the sample properties, and the case when the background originates from the radioactivity present in the sample.

  7. Sampling scheme optimization for diffuse optical tomography based on data and image space rankings

    Science.gov (United States)

    Sabir, Sohail; Kim, Changhwan; Cho, Sanghoon; Heo, Duchang; Kim, Kee Hyun; Ye, Jong Chul; Cho, Seungryong

    2016-10-01

    We present a methodology for the optimization of sampling schemes in diffuse optical tomography (DOT). The proposed method exploits singular value decomposition (SVD) of the sensitivity matrix, or weight matrix, in DOT. Two mathematical metrics are introduced to assess and determine the optimum source-detector measurement configuration in terms of data correlation and image space resolution. The key idea of the work is to weight each data measurement, or row in the sensitivity matrix, and similarly to weight each unknown image basis, or column in the sensitivity matrix, according to their contribution to the rank of the sensitivity matrix. The proposed metrics offer a perspective on the data sampling and provide an efficient way of optimizing the sampling schemes in DOT. We evaluated various acquisition geometries often used in DOT by use of the proposed metrics. By iteratively selecting an optimal sparse set of data measurements, we showed that one can design a DOT scanning protocol that provides essentially the same image quality with a much reduced number of samples.
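
    A compact way to experiment with this idea is to score rows (measurements) and columns (image bases) of a sensitivity matrix by their energy in the leading singular subspace, and to grow a measurement subset greedily by an information-volume metric. The sketch below does this on a random toy matrix; the scoring and selection rules are plausible stand-ins, not the paper's exact metrics.

```python
import numpy as np

rng = np.random.default_rng(3)

def rank_measurements_and_bases(W, k=None):
    """Score each measurement (row) and each image basis (column) of the sensitivity
    matrix W by its energy in the leading k-dimensional singular subspace."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    if k is None:
        k = int(np.sum(s > 1e-8 * s[0]))                  # numerical rank
    row_score = np.sum((U[:, :k] * s[:k]) ** 2, axis=1)
    col_score = np.sum((Vt[:k, :].T * s[:k]) ** 2, axis=1)
    return row_score, col_score

def greedy_measurement_subset(W, n_keep):
    """Greedily keep the measurements that most increase a volume-like information
    metric (sum of log singular values) of the reduced sensitivity matrix."""
    chosen, remaining = [], list(range(W.shape[0]))
    for _ in range(n_keep):
        best, best_val = None, -np.inf
        for i in remaining:
            s = np.linalg.svd(W[chosen + [i]], compute_uv=False)
            val = np.sum(np.log(s + 1e-12))
            if val > best_val:
                best, best_val = i, val
        chosen.append(best)
        remaining.remove(best)
    return chosen

W = rng.random((40, 200))                                  # toy sensitivity matrix
rows, cols = rank_measurements_and_bases(W)
print("5 most informative measurements:", np.argsort(rows)[::-1][:5])
print("greedy 8-measurement subset:", greedy_measurement_subset(W, 8))
```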

  8. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^{1/2}) or O(N*^{1/2}). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
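
    The square-root scaling can be reproduced numerically with a toy version of such a utility: 2n patients are randomized between two arms, the apparent winner is then given to the remaining N - 2n patients, and the treatment effect has a normal prior. The prior and variance values below are arbitrary; the point is only that the maximizing n grows roughly like sqrt(N).

```python
import numpy as np
from scipy.stats import norm

def expected_gain(n, N, sigma=1.0, tau=0.3):
    """Expected total benefit when 2n patients are randomized between two arms and the
    apparent winner is given to the remaining N - 2n patients; the treatment effect
    delta has a N(0, tau^2) prior and responses have standard deviation sigma."""
    delta = np.linspace(-4.0 * tau, 4.0 * tau, 801)          # quadrature grid over the prior
    prior = norm.pdf(delta, scale=tau)
    p_pick = norm.cdf(delta * np.sqrt(n) / (sigma * np.sqrt(2.0)))   # P(pick the new arm | delta)
    gain = n * delta + (N - 2.0 * n) * delta * p_pick
    return float(np.sum(gain * prior) * (delta[1] - delta[0]))

for N in (10_000, 100_000, 1_000_000):
    step = max(10, N // 20_000)
    n_grid = np.arange(10, N // 2, step)
    n_opt = n_grid[np.argmax([expected_gain(n, N) for n in n_grid])]
    print(f"N = {N:>9,d}  optimal n per arm ~ {n_opt:>6d}  n_opt/sqrt(N) = {n_opt / np.sqrt(N):.2f}")
```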

  9. Time optimization of (90)Sr measurements: Sequential measurement of multiple samples during ingrowth of (90)Y.

    Science.gov (United States)

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-04-01

    The aim of this paper is to contribute to a more rapid determination of a series of samples containing (90)Sr by making the Cherenkov measurement of the daughter nuclide (90)Y more time efficient. There are many instances when an optimization of the measurement method might be favorable, such as: situations requiring rapid results in order to make urgent decisions or, on the other hand, to maximize the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work is focused on the measurement of (90)Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time will be less compared to using the same measurement time for all samples. It was found that by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, when assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm.

  10. Average Sample-path Optimality for Continuous-time Markov Decision Processes in Polish Spaces

    Institute of Scientific and Technical Information of China (English)

    Quan-xin ZHU

    2011-01-01

    In this paper we study the average sample-path cost (ASPC) problem for continuous-time Markov decision processes in Polish spaces. To the best of our knowledge, this paper is a first attempt to study the ASPC criterion on continuous-time MDPs with Polish state and action spaces. The corresponding transition rates are allowed to be unbounded, and the cost rates may have neither upper nor lower bounds. Under some mild hypotheses, we prove the existence of ε (ε ≥ 0)-ASPC optimal stationary policies based on two different approaches: one is the “optimality equation” approach and the other is the “two optimality inequalities” approach.

  11. Optimizing Diagnostic Yield for EUS-Guided Sampling of Solid Pancreatic Lesions: A Technical Review

    Science.gov (United States)

    Weston, Brian R.

    2013-01-01

    Endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) has a higher diagnostic accuracy for pancreatic cancer than other techniques. This article will review the current advances and considerations for optimizing diagnostic yield for EUS-guided sampling of solid pancreatic lesions. Preprocedural considerations include patient history, confirmation of appropriate indication, review of imaging, method of sedation, experience required by the endoscopist, and access to rapid on-site cytologic evaluation. New EUS imaging techniques that may assist with differential diagnoses include contrast-enhanced harmonic EUS, EUS elastography, and EUS spectrum analysis. FNA techniques vary, and multiple FNA needles are now commercially available; however, neither techniques nor available FNA needles have been definitively compared. The need for suction depends on the lesion, and the need for a stylet is equivocal. No definitive endosonographic finding can predict the optimal number of passes for diagnostic yield. Preparation of good smears and communication with the cytopathologist are essential to optimize yield. PMID:23935542

  12. Small-Sample Interval Estimation for Non-normal Populations

    Institute of Scientific and Technical Information of China (English)

    杭国明; 祝国强

    2013-01-01

    When the population is non-normal and the sample is small, methods such as exact probability calculation, the Fisher normal approximation and the Chebyshev inequality can be used to determine the confidence interval of the unknown parameters, according to the different conditions of the population.

  13. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on i-band absolute magnitude (Mi), or, for a small subset of our sample, Mi and color (NUV - i). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to Mi and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (Re), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  14. Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples

    Directory of Open Access Journals (Sweden)

    Hyunok Oh

    2003-05-01

    Full Text Available In multimedia and graphics applications, data samples of nonprimitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model with a given execution order of nodes. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store the pointers to the global buffer entries. The proposed algorithm reduces memory by 67% for a JPEG encoder and by 40% for an H.263 encoder compared with unshared versions, and by 22% compared with the previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable to manually optimized code in terms of memory requirement.
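
    The benefit of a shared global buffer pool over per-edge buffers can be seen on a toy SDF graph: simulate an admissible schedule, track token counts per edge, and compare the sum of per-edge maxima (unshared buffers) with the peak total number of live tokens (a shared pool). The graph, rates and schedule below are invented for illustration and do not reproduce the paper's JPEG/H.263 benchmarks.

```python
# Toy SDF graph: edges map (producer, consumer) -> (tokens produced, tokens consumed) per firing
edges = {
    ("src", "dct"): (1, 8),     # src emits 1 block per firing, dct consumes 8 at once
    ("dct", "q"):   (8, 8),
    ("q", "vlc"):   (8, 1),
}
# a schedule consistent with the rates above (assumed, for illustration only)
schedule = ["src"] * 8 + ["dct"] + ["q"] + ["vlc"] * 8

def buffer_requirements(edges, schedule):
    level = {e: 0 for e in edges}          # current tokens on each edge
    per_edge_max = {e: 0 for e in edges}   # per-edge peak (unshared buffer size)
    shared_max = 0                         # peak of total live tokens (shared pool size)
    for actor in schedule:
        for (p, c), (prod, cons) in edges.items():
            if actor == c:                 # consume inputs first
                assert level[(p, c)] >= cons, "schedule is not admissible"
                level[(p, c)] -= cons
        for (p, c), (prod, cons) in edges.items():
            if actor == p:                 # then produce outputs
                level[(p, c)] += prod
                per_edge_max[(p, c)] = max(per_edge_max[(p, c)], level[(p, c)])
        shared_max = max(shared_max, sum(level.values()))
    return sum(per_edge_max.values()), shared_max

unshared, shared = buffer_requirements(edges, schedule)
print(f"unshared buffers: {unshared} samples,  shared global pool: {shared} samples")
```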

  15. Optimal adaptive group sequential design with flexible timing of sample size determination.

    Science.gov (United States)

    Cui, Lu; Zhang, Lanju; Yang, Bo

    2017-04-26

    Flexible sample size designs, including group sequential and sample size re-estimation designs, have been used as alternatives to fixed sample size designs to achieve more robust statistical power and better trial efficiency. In this work, a new representation of the sample size re-estimation design suggested by Cui et al. [5,6] is introduced as an adaptive group sequential design with flexible timing of sample size determination. This generalized adaptive group sequential design allows one-time sample size determination either before the start of or in the mid-course of a clinical study. The new approach leads to possible design optimization on an expanded space of design parameters. Its equivalence to the sample size re-estimation design proposed by Cui et al. provides further insight into re-estimation design and helps to address common confusions and misunderstandings. Issues in designing flexible sample size trials, including design objective, performance evaluation and implementation, are touched upon with an example to illustrate. Copyright © 2017. Published by Elsevier Inc.

  16. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    Energy Technology Data Exchange (ETDEWEB)

    Tiwari, P; Xie, Y; Chen, Y [Washington University in Saint Louis, Saint Louis, Missouri (United States); Deasy, J [Memorial Sloan Kettering Cancer Center, NY, NY (United States)

    2014-06-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected a fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of the inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2-3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading the dose quality.
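
    A minimal version of the sampling step can be written as follows: keep every boundary voxel, cluster the interior voxels by their influence-matrix rows with plain k-means, and draw a fixed fraction from each cluster. The toy influence matrix, cluster count and 10% rate are assumptions for illustration, not the clinical data or the exact clustering used in the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)

def kmeans(X, k, n_iter=50):
    """Plain k-means on the rows of X (here: influence-matrix signatures of voxels)."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def sample_voxels(influence, boundary_mask, interior_rate=0.10, n_clusters=50):
    """Keep all boundary voxels plus a fraction of interior voxels drawn from clusters
    of voxels with similar influence-matrix rows."""
    interior = np.flatnonzero(~boundary_mask)
    labels = kmeans(influence[interior], n_clusters)
    keep = []
    for j in range(n_clusters):
        members = interior[labels == j]
        n_pick = max(1, int(round(interior_rate * len(members))))
        keep.extend(rng.choice(members, size=min(n_pick, len(members)), replace=False))
    return np.concatenate([np.flatnonzero(boundary_mask), np.array(keep, dtype=int)])

# toy problem: 2000 voxels, 60 beamlets, 10% of voxels on the organ boundary
influence = rng.random((2000, 60))
boundary = np.zeros(2000, dtype=bool)
boundary[rng.choice(2000, size=200, replace=False)] = True
selected = sample_voxels(influence, boundary)
print(f"voxels kept in the optimization: {len(selected)} of 2000")
```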

  17. Vehicle Ride Comfort Optimization Based on Interval Analysis

    Institute of Scientific and Technical Information of China (English)

    谢慧超; 姜潮; 张智罡; 于盛

    2014-01-01

    An uncertainty optimization model for the ride comfort of a vehicle suspension is built based on the interval analysis technique. With the suspension spring stiffness and shock absorber damping as design variables, minimizing the root mean square of the car body acceleration as the objective, and the stiffness and natural frequency of the suspension as constraints, and by means of a tolerance indicator and the possibility degree of intervals, the uncertainty optimization model is transformed into a deterministic one, which is then solved with SQP and NSGA-II. On the premise of assuring the ride comfort objective, the symmetric tolerance of the design variables is maximized so that manufacturing cost is reduced. Finally, the proposed scheme is applied to the ride comfort optimization of suspension vibration systems with both a 2-DOF quarter car body model and a 7-DOF whole car body model.

  18. Evaluation of sample preparation methods and optimization of nickel determination in vegetable tissues

    Directory of Open Access Journals (Sweden)

    Rodrigo Fernando dos Santos Salazar

    2011-02-01

    Full Text Available Nickel, although essential to plants, may be toxic to plants and animals. It is mainly assimilated by food ingestion. However, information about the average levels of elements (including Ni) in edible vegetables from different regions is still scarce in Brazil. The objectives of this study were to: (a) evaluate and optimize a method for preparation of vegetable tissue samples for Ni determination; (b) optimize the analytical procedures for determination by Flame Atomic Absorption Spectrometry (FAAS) and by Electrothermal Atomic Absorption Spectrometry (ETAAS) in vegetable samples; and (c) determine the Ni concentration in vegetables consumed in the cities of Lorena and Taubaté in the Vale do Paraíba, State of São Paulo, Brazil. By means of the analytical technique for determination by ETAAS or FAAS, the results were validated by analyte addition and recovery tests. The most viable method tested for quantification of this element was HClO4-HNO3 wet digestion. All samples but the carrot tissue collected in Lorena contained Ni levels above those permitted by the Brazilian Ministry of Health. The most disturbing results, requiring more detailed studies, were the Ni concentrations measured in carrot samples from Taubaté, where levels were five times higher than permitted by Brazilian regulations.

  19. Optimized sample preparation of endoscopic collected pancreatic fluid for SDS-PAGE analysis.

    Science.gov (United States)

    Paulo, Joao A; Lee, Linda S; Wu, Bechien; Repas, Kathryn; Banks, Peter A; Conwell, Darwin L; Steen, Hanno

    2010-07-01

    The standardization of methods for human body fluid protein isolation is a critical initial step for proteomic analyses aimed to discover clinically relevant biomarkers. Several caveats have hindered pancreatic fluid proteomics, including the heterogeneity of samples and protein degradation. We aim to optimize sample handling of pancreatic fluid that has been collected using a safe and effective endoscopic collection method (endoscopic pancreatic function test). Using SDS-PAGE protein profiling, we investigate (i) precipitation techniques to maximize protein extraction, (ii) auto-digestion of pancreatic fluid following prolonged exposure to a range of temperatures, (iii) effects of multiple freeze-thaw cycles on protein stability, and (iv) the utility of protease inhibitors. Our experiments revealed that TCA precipitation resulted in the most efficient extraction of protein from pancreatic fluid of the eight methods we investigated. In addition, our data reveal that although auto-digestion of proteins is prevalent at 23 and 37 degrees C, incubation on ice significantly slows such degradation. Similarly, when the sample is maintained on ice, proteolysis is minimal during multiple freeze-thaw cycles. We have also determined the addition of protease inhibitors to be assay-dependent. Our optimized sample preparation strategy can be applied to future proteomic analyses of pancreatic fluid.

  20. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the study of the algorithm, two lower bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify those competitive individuals in a given population, by which high-quality individuals can obtain a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. The comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.

  1. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    Science.gov (United States)

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  2. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil.

    Science.gov (United States)

    Silvestri, Erin E; Feldhake, David; Griffin, Dale; Lisle, John; Nichols, Tonya L; Shah, Sanjiv R; Pemberton, Adin; Schaefer, Frank W

    2016-11-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries. Copyright © 2016. Published by Elsevier B.V.

  3. Dynamics of hepatitis C under optimal therapy and sampling based analysis

    Science.gov (United States)

    Pachpute, Gaurav; Chakrabarty, Siddhartha P.

    2013-08-01

    We examine two models for hepatitis C viral (HCV) dynamics, one for monotherapy with interferon (IFN) and the other for combination therapy with IFN and ribavirin. Optimal therapy for both the models is determined using the steepest gradient method, by defining an objective functional which minimizes infected hepatocyte levels, virion population and side-effects of the drug(s). The optimal therapies for both the models show an initial period of high efficacy, followed by a gradual decline. The period of high efficacy coincides with a significant decrease in the viral load, whereas the efficacy drops after hepatocyte levels are restored. We use the Latin hypercube sampling technique to randomly generate a large number of patient scenarios and study the dynamics of each set under the optimal therapy already determined. Results show an increase in the percentage of responders (indicated by drop in viral load below detection levels) in case of combination therapy (72%) as compared to monotherapy (57%). Statistical tests performed to study correlations between sample parameters and time required for the viral load to fall below detection level, show a strong monotonic correlation with the death rate of infected hepatocytes, identifying it to be an important factor in deciding individual drug regimens.
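
    The Latin hypercube step used to generate patient scenarios can be sketched as follows (a generic illustration, not the authors' code; the two parameter names and ranges are hypothetical placeholders).

        import numpy as np

        def latin_hypercube(n_samples, bounds, rng=None):
            """Generate n_samples points; each parameter range is split into n_samples
            equal strata and every stratum is sampled exactly once (then shuffled)."""
            rng = np.random.default_rng(rng)
            n_params = len(bounds)
            u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
            for j in range(n_params):                 # decouple the columns
                rng.shuffle(u[:, j])
            lo = np.array([b[0] for b in bounds])
            hi = np.array([b[1] for b in bounds])
            return lo + u * (hi - lo)

        # Hypothetical HCV model parameters: infected-hepatocyte death rate and virion clearance.
        scenarios = latin_hypercube(1000, bounds=[(0.1, 1.0), (2.0, 10.0)], rng=0)
        print(scenarios.shape)        # (1000, 2)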

  4. Model reduction algorithms for optimal control and importance sampling of diffusions

    Science.gov (United States)

    Hartmann, Carsten; Schütte, Christof; Zhang, Wei

    2016-08-01

    We propose numerical algorithms for solving optimal control and importance sampling problems based on simplified models. The algorithms combine model reduction techniques for multiscale diffusions and stochastic optimization tools, with the aim of reducing the original, possibly high-dimensional problem to a lower dimensional representation of the dynamics, in which only a few relevant degrees of freedom are controlled or biased. Specifically, we study situations in which either a reaction coordinate onto which the dynamics can be projected is known, or situations in which the dynamics shows strongly localized behavior in the small noise regime. No explicit assumptions about small parameters or scale separation have to be made. We illustrate the approach with simple, but paradigmatic numerical examples.
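
    As a generic illustration of the importance-sampling ingredient (not the authors' model-reduction algorithm), the sketch below estimates a small tail probability by sampling from an exponentially tilted proposal and reweighting with the likelihood ratio, the same change-of-measure idea that biasing the dynamics exploits; the target, tilt, and sample size are assumptions.

        import numpy as np

        def tail_prob_importance(a=4.0, n=100_000, seed=0):
            """Estimate P(X > a) for X ~ N(0,1) by sampling from N(a,1) (the biased
            proposal) and correcting with the likelihood ratio dP/dQ."""
            rng = np.random.default_rng(seed)
            x = rng.normal(loc=a, scale=1.0, size=n)          # biased proposal
            weights = np.exp(-a * x + 0.5 * a**2)             # N(0,1)-density / N(a,1)-density
            return np.mean((x > a) * weights)

        print(tail_prob_importance())      # ~3.17e-5; naive Monte Carlo rarely sees the event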

  5. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Science.gov (United States)

    Ridolfi, E.; Alfonso, L.; Di Baldassarre, G.; Napolitano, F.

    2016-06-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.
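
    A minimal sketch of the "maximum information, minimum redundancy" selection idea (not the authors' implementation) is shown below: candidate cross sections are represented by simulated series, entropies and mutual informations are estimated from histograms, and locations are added greedily. The toy data, bin count, and scoring rule are assumptions.

        import numpy as np

        def hist_entropy(x, bins=10):
            p, _ = np.histogram(x, bins=bins)
            p = p[p > 0] / p.sum()
            return -np.sum(p * np.log2(p))

        def mutual_info(x, y, bins=10):
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

        def greedy_select(signals, k, bins=10):
            """signals: array (n_locations, n_scenarios), e.g. simulated water levels.
            Pick k locations with high entropy and low redundancy."""
            n = signals.shape[0]
            selected = [int(np.argmax([hist_entropy(s, bins) for s in signals]))]
            while len(selected) < k:
                scores = []
                for i in range(n):
                    if i in selected:
                        scores.append(-np.inf)
                        continue
                    redundancy = max(mutual_info(signals[i], signals[j], bins) for j in selected)
                    scores.append(hist_entropy(signals[i], bins) - redundancy)
                selected.append(int(np.argmax(scores)))
            return selected

        # Hypothetical example: 50 candidate cross sections, 200 flow scenarios.
        rng = np.random.default_rng(1)
        demo = rng.normal(size=(50, 200)).cumsum(axis=0)      # spatially correlated toy data
        print(greedy_select(demo, k=5))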

  6. Direct Interval Forecasting of Wind Power

    DEFF Research Database (Denmark)

    Wan, Can; Xu, Zhao; Pinson, Pierre

    2013-01-01

    This letter proposes a novel approach to directly formulate the prediction intervals of wind power generation based on extreme learning machine and particle swarm optimization, where prediction intervals are generated through direct optimization of both the coverage probability and sharpness...

  7. Reference Ranges of Amniotic Fluid Index in Late Third Trimester of Pregnancy: What Should the Optimal Interval between Two Ultrasound Examinations Be?

    Directory of Open Access Journals (Sweden)

    Shripad Hebbar

    2015-01-01

    Full Text Available Background. Amniotic fluid index (AFI) is one of the major and deciding components of the fetal biophysical profile and by itself it can predict pregnancy outcome. Very low values are associated with intrauterine growth restriction and renal anomalies of the fetus, whereas high values may indicate fetal GI anomalies, maternal diabetes mellitus, and so forth. However, before deciding the cut-off standards for abnormal values for a local population, what constitutes a normal range for a specific gestational age and the ideal interval of testing should be defined. Objectives. To establish reference standards for AFI for the local population after 34 weeks of pregnancy and to decide an optimal scan interval for AFI estimation in the third trimester in low risk antenatal women. Materials and Methods. A prospective estimation of AFI was done in 50 healthy pregnant women from 34 to 40 weeks at weekly intervals. The trend of amniotic fluid volume was studied with advancing gestational age. Only low risk singleton pregnancies with accurately established gestational age who were available for all weekly scans from 34 to 40 weeks were included in the study. Women with gestational or overt diabetes mellitus, hypertensive disorders of the pregnancy, prelabour rupture of membranes, and congenital anomalies in the foetus and those who delivered before 40 completed weeks were excluded from the study. For the purpose of AFI measurement, the uterine cavity was arbitrarily divided into four quadrants by a vertical and a horizontal line running through the umbilicus. A linear array transabdominal probe was used to measure the largest vertical pocket (in cm) in a plane perpendicular to the abdominal skin in each quadrant. The amniotic fluid index was obtained by adding these four measurements. Statistical analysis was done using SPSS software (Version 16, Chicago, IL). Percentile curves (5th, 50th, and 95th centiles) were constructed for comparison with other studies. Cohen’s d coefficient was used

  8. An S/H circuit with parasitics optimized for IF-sampling

    Science.gov (United States)

    Xuqiang, Zheng; Fule, Li; Zhijun, Wang; Weitao, Li; Wen, Jia; Zhihua, Wang; Shigang, Yue

    2016-06-01

    An IF-sampling S/H is presented, which adopts a flip-around structure, a bottom-plate sampling technique and improved input bootstrapped switches. To achieve high sampling linearity over a wide input frequency range, the floating well technique is utilized to optimize the input switches. Besides, techniques of transistor load linearization and layout improvement are proposed to further reduce and linearize the parasitic capacitance. The S/H circuit has been fabricated in a 0.18-μm CMOS process as the front-end of a 14 bit, 250 MS/s pipeline ADC. For a 30 MHz input, the measured SFDR/SNDR of the ADC is 94.7 dB/68.5 dB, which remains over 84.3 dB/65.4 dB for input frequencies up to 400 MHz. The ADC presents excellent dynamic performance at high input frequency, which is mainly attributed to the parasitics-optimized S/H circuit. Project supported by the Shenzhen Project (No. JSGG20150512162029307).

  9. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Tijssen, Rob H.N. [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Senneville, Baudouin D. de [Imaging Division, University Medical Center Utrecht, Utrecht (Netherlands); L' Institut de Mathématiques de Bordeaux, Unité Mixte de Recherche 5251, Centre National de la Recherche Scientifique/University of Bordeaux, Bordeaux (France); Heerkens, Hanne D.; Vulpen, Marco van; Lagendijk, Jan J.W.; Berg, Cornelis A.T. van den [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands)

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  10. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    Science.gov (United States)

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore, the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to 3 major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artefacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated at the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold in that: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper significantly shorten acquisition time and improve the quality of FP reconstructions. It may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.

  11. Advanced overlay: sampling and modeling for optimized run-to-run control

    Science.gov (United States)

    Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.

    2016-03-01

    In recent years overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. As a result, this brings up new challenges to be addressed for effective run-to-run OVL control. This work addresses two of these challenges by new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) bias-variance tradeoff in modeling. The first challenge in a high order OVL control strategy is to optimize the number of measurements and the locations on the wafer, so that the "sample plan" of measurements provides high quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this tradeoff between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer to wafer and within wafer measurement noise caused by metrology, scanner or process. This sort of sampling scheme, combined with an advanced field by field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process or metrology induced noise. This is also known as the bias-variance trade-off. A high order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have a higher variation from wafer to wafer or lot to lot, that is unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, lot-to-lot and wafer-to-wafer model term monitoring to
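
    The bias-variance trade-off mentioned above can be illustrated independently of overlay-modeling details with a toy polynomial fit whose order is chosen by cross-validation (a generic sketch, not the authors' APC models; the "overlay signature" data below are synthetic).

        import numpy as np

        def fit_predict(x_train, y_train, x_test, order):
            coeffs = np.polyfit(x_train, y_train, order)
            return np.polyval(coeffs, x_test)

        def cv_error(x, y, order, k=5, seed=0):
            """k-fold cross-validation error for a polynomial 'overlay model' of given order."""
            rng = np.random.default_rng(seed)
            idx = rng.permutation(len(x))
            folds = np.array_split(idx, k)
            errs = []
            for f in folds:
                train = np.setdiff1d(idx, f)
                pred = fit_predict(x[train], y[train], x[f], order)
                errs.append(np.mean((pred - y[f]) ** 2))
            return np.mean(errs)

        # Toy wafer "overlay signature": smooth trend plus metrology noise (hypothetical data).
        rng = np.random.default_rng(2)
        x = np.linspace(-1, 1, 60)                     # normalized wafer radius
        y = 2.0 * x - 1.5 * x**3 + rng.normal(0, 0.2, x.size)

        for order in (1, 3, 5, 9, 15):
            print(order, round(cv_error(x, y, order), 4))   # error rises again for high orders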

  12. Pigeons Show Near-Optimal Win-Stay/Lose-Shift Performance on a Simultaneous-Discrimination, Midsession Reversal Task with Short Intertrial Intervals

    Science.gov (United States)

    Rayburn-Reeves, Rebecca M.; Laude, Jennifer R.; Zentall, Thomas R.

    2013-01-01

    Discrimination reversal tasks have been used as a measure of species flexibility in dealing with changes in reinforcement contingency. The simultaneous-discrimination, midsession reversal task is one in which one stimulus (S1) is correct for the first 40 trials of an 80-trial session and the other stimulus (S2) is correct for the remaining trials. After many sessions of training with this task, pigeons show a curious pattern of choices. They begin to respond to S2 well before the reversal point (they make anticipatory errors) and they continue to respond to S1 well after the reversal (they make perseverative errors). That is, they appear to be using the passage of time or number of trials into the session as a cue to reverse. We tested the hypothesis that these errors resulted in part from a memory deficit (the inability to remember over the intertrial interval, ITI, both the choice on the preceding trial and the outcome of that choice) by manipulating the duration of the ITI (1.5, 5, and 10 s). We found support for the hypothesis as pigeons with a short 1.5-s ITI showed close to optimal win-stay/lose-shift accuracy. PMID:23123672

  13. Analysis and Optimization of Bulk DNA Sampling with Binary Scoring for Germplasm Characterization

    Science.gov (United States)

    Reyes-Valdés, M. Humberto; Santacruz-Varela, Amalio; Martínez, Octavio; Simpson, June; Hayano-Kanashiro, Corina; Cortés-Romero, Celso

    2013-01-01

    The strategy of bulk DNA sampling has been a valuable method for studying large numbers of individuals through genetic markers. The application of this strategy for discrimination among germplasm sources was analyzed through information theory, considering the case of polymorphic alleles scored binarily for their presence or absence in DNA pools. We defined the informativeness of a set of marker loci in bulks as the mutual information between genotype and population identity, composed by two terms: diversity and noise. The first term is the entropy of bulk genotypes, whereas the noise term is measured through the conditional entropy of bulk genotypes given germplasm sources. Thus, optimizing marker information implies increasing diversity and reducing noise. Simple formulas were devised to estimate marker information per allele from a set of estimated allele frequencies across populations. As an example, they allowed optimization of bulk size for SSR genotyping in maize, from allele frequencies estimated in a sample of 56 maize populations. It was found that a sample of 30 plants from a random mating population is adequate for maize germplasm SSR characterization. We analyzed the use of divided bulks to overcome the allele dilution problem in DNA pools, and concluded that samples of 30 plants divided into three bulks of 10 plants are efficient to characterize maize germplasm sources through SSR with a good control of the dilution problem. We estimated the informativeness of 30 SSR loci from the estimated allele frequencies in maize populations, and found a wide variation of marker informativeness, which positively correlated with the number of alleles per locus. PMID:24260321
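
    The diversity-minus-noise decomposition can be sketched numerically as follows (an illustration consistent with the abstract, not the authors' code). The per-population detection probability of an allele in a bulk of n diploid plants is assumed here to be 1 - (1 - p)**(2n), and the allele frequencies are hypothetical.

        import numpy as np

        def entropy(p):
            p = np.clip(np.asarray(p, dtype=float), 1e-12, 1 - 1e-12)
            return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))   # binary entropy, bits

        def marker_information(allele_freqs, n_plants):
            """allele_freqs: frequencies of one allele in each germplasm source.
            Presence of the allele in a bulk of n diploid plants is scored as a binary trait;
            detection probability per population is assumed to be 1 - (1 - p)**(2 n)."""
            q = 1.0 - (1.0 - np.asarray(allele_freqs)) ** (2 * n_plants)
            prior = np.full(len(q), 1.0 / len(q))        # equally likely germplasm sources
            diversity = entropy(np.sum(prior * q))       # H(bulk genotype)
            noise = np.sum(prior * entropy(q))           # H(bulk genotype | source)
            return float(diversity - noise)              # mutual information, bits

        # Hypothetical SSR allele frequencies in four maize populations.
        freqs = [0.05, 0.40, 0.80, 0.10]
        for n in (5, 10, 30, 100):
            print(n, round(marker_information(freqs, n), 3))   # very large bulks dilute information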

  14. Analysis and optimization of bulk DNA sampling with binary scoring for germplasm characterization.

    Directory of Open Access Journals (Sweden)

    M Humberto Reyes-Valdés

    Full Text Available The strategy of bulk DNA sampling has been a valuable method for studying large numbers of individuals through genetic markers. The application of this strategy for discrimination among germplasm sources was analyzed through information theory, considering the case of polymorphic alleles scored binarily for their presence or absence in DNA pools. We defined the informativeness of a set of marker loci in bulks as the mutual information between genotype and population identity, composed by two terms: diversity and noise. The first term is the entropy of bulk genotypes, whereas the noise term is measured through the conditional entropy of bulk genotypes given germplasm sources. Thus, optimizing marker information implies increasing diversity and reducing noise. Simple formulas were devised to estimate marker information per allele from a set of estimated allele frequencies across populations. As an example, they allowed optimization of bulk size for SSR genotyping in maize, from allele frequencies estimated in a sample of 56 maize populations. It was found that a sample of 30 plants from a random mating population is adequate for maize germplasm SSR characterization. We analyzed the use of divided bulks to overcome the allele dilution problem in DNA pools, and concluded that samples of 30 plants divided into three bulks of 10 plants are efficient to characterize maize germplasm sources through SSR with a good control of the dilution problem. We estimated the informativeness of 30 SSR loci from the estimated allele frequencies in maize populations, and found a wide variation of marker informativeness, which positively correlated with the number of alleles per locus.

  15. Influence of sampling, storage, processing and optimal experimental conditions on adenylate energy charge in penaeid shrimp

    Directory of Open Access Journals (Sweden)

    Robles-Romo Arlett

    2014-01-01

    Full Text Available Adenylate energy charge (AEC) has been used as a practical index of physiological status and health in several disciplines, such as ecotoxicology and aquaculture. This study standardizes several procedures for AEC determination in penaeid shrimp, which are very sensitive to sampling. We concluded that shrimp can be frozen in liquid nitrogen and then stored at -76°C for up to two years for further analysis, or freshly dissected and immediately homogenized in acid. Other cooling procedures, such as immersion in cold water or placing shrimp on ice for 15 min, resulted in 50% and 73% decreases in ATP levels, and 9-fold and 10-fold increases in IMP levels, respectively. Optimal values of AEC (0.9) were obtained in shrimp recently transferred from ponds to indoor conditions, but decreased to 0.77 after one month in indoor tanks when stocked at high densities; the AEC was re-established at 0.85 when the shrimp were transferred to optimal conditions (lower density and dark tanks). While the levels of arginine phosphate followed the same pattern, they did not fully recover. Comparison of different devices for sample homogenization indicated that a cryogenic ball mill mixer is the most suitable procedure.
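
    For reference, the adenylate energy charge used throughout this record is conventionally computed from the three adenine nucleotide concentrations (a standard definition, not restated in the abstract):

        \mathrm{AEC} = \frac{[\mathrm{ATP}] + \tfrac{1}{2}\,[\mathrm{ADP}]}{[\mathrm{ATP}] + [\mathrm{ADP}] + [\mathrm{AMP}]}

    so AEC ranges from 0 (all AMP) to 1 (all ATP), and the optimal values near 0.9 reported above correspond to a largely phosphorylated adenylate pool.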

  16. Optimization of a miniaturized DBD plasma chip for mercury detection in water samples.

    Science.gov (United States)

    Abdul-Majeed, Wameath S; Parada, Jaime H Lozano; Zimmerman, William B

    2011-11-01

    In this work, an optimization study was conducted to investigate the performance of a custom-designed miniaturized dielectric barrier discharge (DBD) microplasma chip to be utilized as a radiation source for mercury determination in water samples. The experimental work was implemented using experimental design, and the results were assessed by applying statistical techniques. The proposed DBD chip was designed and fabricated in a simple way by using a few microscope glass slides aligned together and held by a Perspex chip holder, which proved useful for miniaturization purposes. Argon gas at 75-180 mL/min was used in the experiments as a discharge gas, while AC power in the range 75-175 W at 38 kHz was supplied to the load from a custom-made power source. A UV-visible spectrometer was used, and the spectroscopic parameters were optimized thoroughly and applied in the later analysis. Plasma characteristics were determined theoretically by analysing the recorded spectroscopic data. The estimated electron temperature (T(e) = 0.849 eV) was found to be higher than the excitation temperature (T(exc) = 0.55 eV) and the rotational temperature (T(rot) = 0.064 eV), which indicates that non-thermal plasma is generated in the proposed chip. Mercury cold vapour generation experiments were conducted according to the experimental plan by examining four parameters (HCl and SnCl(2) concentrations, argon flow rate, and the applied power) and considering the recorded intensity of the mercury line (253.65 nm) as the objective function. Furthermore, an optimization technique and statistical approaches were applied to investigate the individual and interaction effects of the tested parameters on the system performance. The calculated analytical figures of merit (LOD = 2.8 μg/L and RSD = 3.5%) indicate reasonable precision and support adopting the system as the basis for a miniaturized portable device for mercury detection in water samples.

  17. Optimization of a Pre-MEKC Separation SPE Procedure for Steroid Molecules in Human Urine Samples

    Directory of Open Access Journals (Sweden)

    Ilona Olędzka

    2013-11-01

    Full Text Available Many steroid hormones can be considered as potential biomarkers and their determination in body fluids can create opportunities for the rapid diagnosis of many diseases and disorders of the human body. Most existing methods for the determination of steroids are usually time- and labor-consuming and quite costly. Therefore, the aim of analytical laboratories is to develop a new, relatively low-cost and rapid methodology for their determination in biological samples. Because there is little literature data on concentrations of steroid hormones in urine samples, we have made attempts at the electrophoretic determination of these compounds. For this purpose, an extraction procedure for the optimized separation and simultaneous determination of seven steroid hormones in urine samples has been investigated. The isolation of analytes from biological samples was performed by liquid-liquid extraction (LLE) with dichloromethane and compared to solid phase extraction (SPE) with C18 and hydrophilic-lipophilic balance (HLB) columns. To separate all the analytes a micellar electrokinetic capillary chromatography (MEKC) technique was employed. For full separation of all the analytes a running buffer (pH 9.2) composed of 10 mM sodium tetraborate decahydrate (borax), 50 mM sodium dodecyl sulfate (SDS), and 10% methanol was selected. The methodology developed in this work for the determination of steroid hormones meets all the requirements of analytical methods. The applicability of the method has been confirmed for the analysis of urine samples collected from volunteers—both men and women (students, amateur bodybuilders), using and not using steroid doping. The data obtained during this work can be successfully used for further research on the determination of steroid hormones in urine samples.

  18. Automation of sample preparation for mass cytometry barcoding in support of clinical research: protocol optimization.

    Science.gov (United States)

    Nassar, Ala F; Wisnewski, Adam V; Raddassi, Khadir

    2017-03-01

    Analysis of multiplexed assays is highly important for clinical diagnostics and other analytical applications. Mass cytometry enables multi-dimensional, single-cell analysis of cell type and state. In mass cytometry, the rare earth metals used as reporters on antibodies allow determination of marker expression in individual cells. Barcode-based bioassays for CyTOF are able to encode and decode for different experimental conditions or samples within the same experiment, facilitating progress in producing straightforward and consistent results. Herein, an integrated protocol for automated sample preparation for barcoding used in conjunction with mass cytometry for clinical bioanalysis samples is described; we offer results of our work with barcoding protocol optimization. In addition, we present some points to be considered in order to minimize the variability of quantitative mass cytometry measurements. For example, we discuss the importance of having multiple populations during titration of the antibodies and effect of storage and shipping of labelled samples on the stability of staining for purposes of CyTOF analysis. Data quality is not affected when labelled samples are stored either frozen or at 4 °C and used within 10 days; we observed that cell loss is greater if cells are washed with deionized water prior to shipment or are shipped in lower concentration. Once the labelled samples for CyTOF are suspended in deionized water, the analysis should be performed expeditiously, preferably within the first hour. Damage can be minimized if the cells are resuspended in phosphate-buffered saline (PBS) rather than deionized water while waiting for data acquisition.

  19. Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation.

    Science.gov (United States)

    Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A; Bouquerel, Hélène

    2016-06-01

    Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air to liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved by scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L⁻¹ and 10% for 10 mBq L⁻¹. While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L⁻¹, a conservative experimental estimate is rather 5 mBq L⁻¹, corresponding to 0.14 fg g⁻¹. The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot be easily measured by other methods. Counting of scintillation flasks can be done in remote locations in the absence of an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported.

  20. Optimizing EUS-guided liver biopsy sampling: comprehensive assessment of needle types and tissue acquisition techniques.

    Science.gov (United States)

    Schulman, Allison R; Thompson, Christopher C; Odze, Robert; Chan, Walter W; Ryou, Marvin

    2017-02-01

    EUS-guided liver biopsy sampling using FNA and, more recently, fine-needle biopsy (FNB) needles has been reported with discrepant diagnostic accuracy, in part due to differences in methodology. We aimed to compare liver histologic yields of 4 EUS-based needles and 2 percutaneous needles to identify the optimal number of needle passes and suction. Six needle types were tested on human cadaveric tissue: one 19G FNA needle, one existing 19G FNB needle, one novel 19G FNB needle, one 22G FNB needle, and two 18G percutaneous needles (18G1 and 18G2). Two needle excursion patterns (1 vs 3 fanning passes) were performed on all EUS needles. Primary outcome was number of portal tracts. Secondary outcomes were degree of fragmentation and specimen adequacy. Pairwise comparisons were performed using t tests with 2-sided P values. In total, 288 samplings (48 per needle type) were performed. The novel 19G FNB needle had significantly increased mean portal tracts compared with all needle types. The 22G FNB needle had significantly increased portal tracts compared with the 18G1 needle (3.8 vs 2.5, P sampling. Investigations are underway to determine whether these results can be replicated in a clinical setting. Copyright © 2017 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.

  1. Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate

    Directory of Open Access Journals (Sweden)

    Davide Brunelli

    2015-07-01

    Full Text Available Compressive sensing (CS) is a new technology in digital signal processing capable of high-resolution capture of physical signals from few measurements, which promises impressive improvements in the field of wireless sensor networks (WSNs). In this work, we extensively investigate the effectiveness of compressive sensing (CS) when real COTS resource-constrained sensor nodes are used for compression, evaluating how the different parameters can affect the energy consumption and the lifetime of the device. Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate. The quality of recovery is addressed, and several algorithms are used for reconstruction exploiting the intra- and inter-signal correlation structures. We finally define an optimal under-sampling ratio and reconstruction algorithm capable of achieving the best reconstruction at the minimum energy spent for the compression. The results are verified against a set of different kinds of sensors on several nodes used for environmental monitoring.
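
    A minimal sparse-recovery sketch in the spirit of the reconstruction experiments (not the paper's specific algorithms or dataset) is shown below, using orthogonal matching pursuit to recover a synthetic sparse signal from far fewer random measurements than its length; the sizes and sensing matrix are assumptions.

        import numpy as np

        def omp(A, y, sparsity):
            """Orthogonal matching pursuit: recover a sparse x from y = A @ x."""
            residual, support = y.copy(), []
            x = np.zeros(A.shape[1])
            for _ in range(sparsity):
                support.append(int(np.argmax(np.abs(A.T @ residual))))
                coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coeffs
            x[support] = coeffs
            return x

        rng = np.random.default_rng(0)
        n, m, k = 256, 64, 5                         # signal length, measurements, sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
        A = rng.normal(size=(m, n)) / np.sqrt(m)     # random sensing matrix (sub-Nyquist: m << n)
        x_hat = omp(A, A @ x_true, k)
        print(np.allclose(x_hat, x_true, atol=1e-6))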

  2. Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate.

    Science.gov (United States)

    Brunelli, Davide; Caione, Carlo

    2015-07-10

    Compressive sensing (CS) is a new technology in digital signal processing capable of high-resolution capture of physical signals from few measurements, which promises impressive improvements in the field of wireless sensor networks (WSNs). In this work, we extensively investigate the effectiveness of compressive sensing (CS) when real COTS resource-constrained sensor nodes are used for compression, evaluating how the different parameters can affect the energy consumption and the lifetime of the device. Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate. The quality of recovery is addressed, and several algorithms are used for reconstruction exploiting the intra- and inter-signal correlation structures. We finally define an optimal under-sampling ratio and reconstruction algorithm capable of achieving the best reconstruction at the minimum energy spent for the compression. The results are verified against a set of different kinds of sensors on several nodes used for environmental monitoring.

  3. An efficient self-optimized sampling method for rare events in nonequilibrium systems

    Institute of Scientific and Technical Information of China (English)

    JIANG HuiJun; PU MingFeng; HOU ZhongHuai

    2014-01-01

    Rare events such as nucleation processes are of ubiquitous importance in real systems. The most popular method for nonequilibrium systems, forward flux sampling (FFS), samples rare events by using interfaces to partition the whole transition process into a sequence of steps along an order parameter connecting the initial and final states. FFS usually suffers from two main difficulties: low computational efficiency due to bad interface locations, and even inapplicability when the system becomes trapped in unknown intermediate metastable states. In the present work, we propose an approach to overcome these difficulties by self-adaptively locating the interfaces on the fly in an optimized manner. Contrary to conventional FFS, which sets the interfaces with equal spacing of the order parameter, our approach determines the interfaces with equal transition probability, which is shown to satisfy the optimization condition. This is done by first running long local trajectories starting from the current interface i to obtain the conditional probability distribution Pc(>i|i), and then determining interface i+1 by equating Pc(i+1|i) to a given value p0. With these optimized interfaces, FFS can be run in a much more efficient way. In addition, our approach can conveniently find the intermediate metastable states by monitoring special long trajectories that neither end at the initial state nor reach the next interface, the number of which increases sharply from zero if such metastable states are encountered. We apply our approach to a two-state model system and a two-dimensional lattice gas Ising model. Our approach is shown to be much more efficient than the conventional FFS method without losing accuracy, and it also reproduces the two-step nucleation scenario of the Ising model with easy identification of the intermediate metastable state.
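
    The "equal transition probability" placement of interfaces can be sketched on a toy one-dimensional system (an illustration of the idea, not the authors' implementation): short trajectories are launched from the current interface, the maximum order parameter reached before falling back is recorded, and the next interface is placed at the value exceeded by a fraction p0 of the trajectories. The double-well potential, time step, temperature, and trajectory counts below are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        def force(x):
            # Double-well potential U(x) = (x**2 - 1)**2; the order parameter is x itself.
            return -4.0 * x * (x * x - 1.0)

        def run_until_return(x0, lam_a=-0.9, dt=1e-3, beta=3.0, max_steps=5000):
            """Overdamped Langevin trajectory from x0; return the maximum order parameter
            reached before falling back below the initial-state boundary lam_a (or timing out)."""
            x, x_max = x0, x0
            noise = np.sqrt(2.0 * dt / beta)
            for _ in range(max_steps):
                x += force(x) * dt + noise * rng.normal()
                x_max = max(x_max, x)
                if x < lam_a:
                    break
            return x_max

        def next_interface(lam_i, p0=0.3, n_traj=200):
            """Place interface i+1 so that an estimated fraction p0 of trajectories
            started at lam_i reach it before returning to the initial state.
            (Full FFS would start from stored crossing configurations; a single start
            point keeps the sketch short.)"""
            maxima = np.array([run_until_return(lam_i) for _ in range(n_traj)])
            return float(np.quantile(maxima, 1.0 - p0))

        lam = -0.8
        for i in range(4):
            lam = next_interface(lam)
            print(f"interface {i + 1}: lambda = {lam:.3f}")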

  4. Dynamic simulation tools for the analysis and optimization of novel collection, filtration and sample preparation systems

    Energy Technology Data Exchange (ETDEWEB)

    Clague, D; Weisgraber, T; Rockway, J; McBride, K

    2006-02-12

    The focus of the research effort described here is to develop novel simulation tools to address design and optimization needs in the general class of problems that involve species and fluid (liquid and gas phases) transport through sieving media. This was primarily motivated by the heightened attention on Chem/Bio early detection systems, which, among other needs, require high-efficiency filtration, collection and sample preparation systems. Hence, the goal was to develop the computational analysis tools necessary to optimize these critical operations. This new capability is designed to characterize system efficiencies based on the details of the microstructure and environmental effects. To accomplish this, new lattice Boltzmann simulation capabilities were developed to include detailed microstructure descriptions, the relevant surface forces that mediate species capture and release, and temperature effects for both liquid and gas phase systems. While developing the capability, actual demonstration and model systems (and subsystems) of national and programmatic interest were targeted to demonstrate the capability. As a result, where possible, experimental verification of the computational capability was performed either directly, using Digital Particle Image Velocimetry, or against published results.

  5. An optimal survey geometry of weak lensing survey: minimizing super-sample covariance

    CERN Document Server

    Takahashi, Ryuichi; Takada, Masahiro; Kayo, Issha

    2014-01-01

    Upcoming wide-area weak lensing surveys are expensive both in time and cost and require an optimal survey design in order to attain maximum scientific returns from a fixed amount of available telescope time. The super-sample covariance (SSC), which arises from unobservable modes that are larger than the survey size, significantly degrades the statistical precision of weak lensing power spectrum measurement even for a wide-area survey. Using 1000 mock realizations of the log-normal model, which approximates the weak lensing field for a Λ-dominated cold dark matter model, we study an optimal survey geometry to minimize the impact of SSC contamination. For a continuous survey geometry with a fixed survey area, a more elongated geometry such as a rectangular shape of 1:400 side-length ratio reduces the SSC effect and allows for a factor 2 improvement in the cumulative signal-to-noise ratio (S/N) of power spectrum measurement up to ℓ_max ≃ a few 10^3, compared to compact geometries ...

  6. Fitting in a complex chi^2 landscape using an optimized hypersurface sampling

    CERN Document Server

    Pardo, L C; Busch, S; Moulin, J -F; Tamarit, J Ll

    2011-01-01

    Fitting a data set with a parametrized model can be seen geometrically as finding the global minimum of the chi^2 hypersurface, depending on a set of parameters {P_i}. This is usually done using the Levenberg-Marquardt algorithm. The main drawback of this algorithm is that, despite its fast convergence, it can get stuck if the parameters are not initialized close to the final solution. We propose a modification of the Metropolis algorithm introducing a parameter step tuning that optimizes the sampling of parameter space. The ability of the parameter tuning algorithm, together with simulated annealing, to find the global chi^2 hypersurface minimum, jumping across chi^2({P_i}) barriers when necessary, is demonstrated with synthetic functions and with real data.
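
    A minimal sketch of Metropolis sampling of a chi^2 surface with on-the-fly step tuning and simulated annealing (not the authors' exact scheme) is given below; the sine model, cooling schedule, and target acceptance rate are assumptions.

        import numpy as np

        rng = np.random.default_rng(4)

        def chi2(params, x, y, sigma):
            a, b = params                                 # toy model: y = a * sin(b * x)
            return np.sum(((y - a * np.sin(b * x)) / sigma) ** 2)

        def anneal_fit(x, y, sigma, p0, n_steps=20000, target_acc=0.3):
            p, best = np.array(p0, float), np.array(p0, float)
            step = np.full(len(p0), 0.5)                  # per-parameter step sizes, tuned on the fly
            c_cur = c_best = chi2(p, x, y, sigma)
            accepted = np.zeros(len(p0))
            for i in range(n_steps):
                temp = max(1e-3, 1.0 * (1.0 - i / n_steps))   # simple linear cooling schedule
                j = i % len(p0)                           # update one parameter at a time
                trial = p.copy()
                trial[j] += step[j] * rng.normal()
                c_trial = chi2(trial, x, y, sigma)
                if c_trial < c_cur or rng.random() < np.exp((c_cur - c_trial) / temp):
                    p, c_cur = trial, c_trial
                    accepted[j] += 1
                    if c_cur < c_best:
                        best, c_best = p.copy(), c_cur
                if (i + 1) % 500 == 0:                    # tune steps toward the target acceptance
                    rate = accepted * len(p0) / 500
                    step *= np.where(rate > target_acc, 1.2, 0.8)
                    accepted[:] = 0
            return best, c_best

        x = np.linspace(0, 10, 100)
        y = 2.0 * np.sin(1.3 * x) + rng.normal(0, 0.3, x.size)
        print(anneal_fit(x, y, 0.3, p0=[1.0, 1.0]))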

  7. Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models

    Science.gov (United States)

    Thon, Ingo

    One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximative methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution which is normally prohibitively slow.

  8. Spatially-Optimized Sequential Sampling Plan for Cabbage Aphids Brevicoryne brassicae L. (Hemiptera: Aphididae) in Canola Fields.

    Science.gov (United States)

    Severtson, Dustin; Flower, Ken; Nansen, Christian

    2016-08-01

    The cabbage aphid is a significant pest worldwide in brassica crops, including canola. This pest has shown considerable ability to develop resistance to insecticides, so these should only be applied on a "when and where needed" basis. Thus, optimized sampling plans to accurately assess cabbage aphid densities are critically important to determine the potential need for pesticide applications. In this study, we developed a spatially optimized binomial sequential sampling plan for cabbage aphids in canola fields. Based on five sampled canola fields, sampling plans were developed using 0.1, 0.2, and 0.3 proportions of plants infested as action thresholds. Average sample numbers required to make a decision ranged from 10 to 25 plants. Decreasing acceptable error from 10 to 5% was not considered practically feasible, as it substantially increased the number of samples required to reach a decision. We determined the relationship between the proportions of canola plants infested and cabbage aphid densities per plant, and proposed a spatially optimized sequential sampling plan for cabbage aphids in canola fields, in which spatial features (i.e., edge effects) and optimization of sampling effort (i.e., sequential sampling) are combined. Two forms of stratification were performed to reduce spatial variability caused by edge effects and large field sizes. Spatially optimized sampling, starting at the edge of fields, reduced spatial variability and therefore increased the accuracy of infested plant density estimates. The proposed spatially optimized sampling plan may be used to spatially target insecticide applications, resulting in cost savings, insecticide resistance mitigation, conservation of natural enemies, and reduced environmental impact.
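
    A binomial sequential sampling decision of the kind described can be sketched with Wald's sequential probability ratio test around an action threshold (a generic illustration, not the published sampling plan; the thresholds, error rates, and observation sequence below are hypothetical).

        import math

        def sprt_decision(observations, p0=0.15, p1=0.25, alpha=0.10, beta=0.10):
            """Wald SPRT on the proportion of infested plants.
            observations: iterable of 0/1 (plant not infested / infested).
            Returns a treat / no-treat decision, or a request to keep sampling."""
            upper = math.log((1 - beta) / alpha)          # accept H1 (above threshold): treat
            lower = math.log(beta / (1 - alpha))          # accept H0 (below threshold): no treatment
            llr = 0.0
            for n, infested in enumerate(observations, start=1):
                if infested:
                    llr += math.log(p1 / p0)
                else:
                    llr += math.log((1 - p1) / (1 - p0))
                if llr >= upper:
                    return f"treat (decided after {n} plants)"
                if llr <= lower:
                    return f"no treatment needed (decided after {n} plants)"
            return "continue sampling"

        # Hypothetical edge-to-centre sample of 20 plants (1 = infested).
        print(sprt_decision([0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]))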

  9. interval functions

    Directory of Open Access Journals (Sweden)

    J. A. Chatfield

    1978-01-01

    Full Text Available Suppose N is a Banach space with norm |·| and R is the set of real numbers. All integrals used are of the subdivision-refinement type. The main theorem [Theorem 3] gives a representation of TH, where H is a function from R×R to N such that H(p+,p+), H(p,p+), H(p−,p−), and H(p−,p) each exist for each p, and T is a bounded linear operator on the space of all such functions H. In particular we show that TH = (I)∫_a^b f_H dα + Σ_{i=1}^∞ [H(x_{i−1}, x_{i−1}+) − H(x_{i−1}+, x_{i−1}+)] β(x_{i−1}) + Σ_{i=1}^∞ [H(x_i−, x_i) − H(x_i−, x_i−)] Θ(x_{i−1}, x_i), where each of α, β, and Θ depends only on T, α is of bounded variation, β and Θ are 0 except at a countable number of points, f_H is a function from R to N depending on H, and {x_i}_{i=1}^∞ denotes the points p in [a,b] for which [H(p,p+) − H(p+,p+)] ≠ 0 or [H(p−,p) − H(p−,p−)] ≠ 0. We also define an interior interval function integral and give a relationship between it and the standard interval function integral.

  10. Data-Driven Sampling Matrix Boolean Optimization for Energy-Efficient Biomedical Signal Acquisition by Compressive Sensing.

    Science.gov (United States)

    Wang, Yuhao; Li, Xin; Xu, Kai; Ren, Fengbo; Yu, Hao

    2017-04-01

    Compressive sensing is widely used in biomedical applications, and the sampling matrix plays a critical role on both quality and power consumption of signal acquisition. It projects a high-dimensional vector of data into a low-dimensional subspace by matrix-vector multiplication. An optimal sampling matrix can ensure accurate data reconstruction and/or high compression ratio. Most existing optimization methods can only produce real-valued embedding matrices that result in large energy consumption during data acquisition. In this paper, we propose an efficient method that finds an optimal Boolean sampling matrix in order to reduce the energy consumption. Compared to random Boolean embedding, our data-driven Boolean sampling matrix can improve the image recovery quality by 9 dB. Moreover, in terms of sampling hardware complexity, it reduces the energy consumption by 4.6× and the silicon area by 1.9× over the data-driven real-valued embedding.
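
    A toy data-driven illustration (not the paper's optimization method): random Boolean sampling matrices are scored by their mutual coherence with a sparsifying basis learned from synthetic training signals, and the least coherent one is kept. The training data, matrix sizes, and search budget are assumptions.

        import numpy as np

        def coherence(A, D):
            """Mutual coherence of the effective dictionary A @ D."""
            G = A @ D
            G = G / np.linalg.norm(G, axis=0, keepdims=True)
            gram = np.abs(G.T @ G)
            np.fill_diagonal(gram, 0.0)
            return gram.max()

        rng = np.random.default_rng(5)
        n, m = 128, 32                                   # signal length, number of measurements

        # "Data-driven" sparsifying basis: principal components of synthetic training signals.
        train = np.cumsum(rng.normal(size=(500, n)), axis=1)     # smooth toy biosignals
        _, _, Vt = np.linalg.svd(train - train.mean(axis=0), full_matrices=False)
        D = Vt.T                                         # columns = learned dictionary atoms

        best, best_mu = None, np.inf
        for _ in range(200):                             # random search over Boolean matrices
            A = rng.integers(0, 2, size=(m, n)).astype(float)
            mu = coherence(A, D)
            if mu < best_mu:
                best, best_mu = A, mu
        print("best coherence:", round(best_mu, 3))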

  11. Optimization of {sup 210}Po estimation in environmental samples using an improved deposition unit

    Energy Technology Data Exchange (ETDEWEB)

    Dubey, Jay Singh; Sahoo, Sunil Kumar; Mohapatra, Swagatika; Lenka, Pradyumna; Patra, Aditi Chakravarty; Thakur, Virender Kumar; Ravi, Pazhayath Mana; Tripathi, Raj Mangal [Bhabha Atomic Research Centre, Trombay, Mumbai (India). Health Physics Div.

    2015-06-01

    Measurement of ²¹⁰Po in environmental matrices is important because of its very high specific activity; it is present in every compartment of the environment as a daughter product of uranium (²³⁸U), and it is accumulative and highly toxic. The conventional method for ²¹⁰Po estimation is auto-deposition onto both sides of a silver disc followed by alpha spectrometry of both sides. A new deposition unit, with a holder for the silver disc and a magnetic stirring bar, has been designed and fabricated for ²¹⁰Po estimation in which only one side is counted. In the conventional method the total activity is distributed over both sides of the silver disc and more counting time is required, whereas in the improved deposition unit one side contains all the activity, so a single counting suffices with better statistical significance. The same has been observed in spike recovery and water sample assessment. The tracer recovery in the conventional method was 72%-88% and 70%-85%, whereas for the new deposition unit the recovery is 87%-99% and 78%-94% for the spike recovery study and environmental samples, respectively. Certified tracers were analysed to assure the reliability of the method, and the results were in good agreement with the recommended values with a relative error <20%. The MDA of the method is 1.5 mBq for the estimation of ²¹⁰Po at the 3σ confidence level, 86400 s counting time and 100 ml of water sample, taking the detector efficiency and chemical yield into consideration. The results obtained from both methods were compared statistically: a χ² test, repeatability parameters, relative bias measurement and a linearity test were performed for both methods. The percentage difference between the two methods in terms of linearity is 0.2%. From the χ² test it can be concluded that the data measured by the two methods fall within the 99% confidence interval. The modified deposition unit enhances the statistical significance and reduces

  12. Does the time interval between antimüllerian hormone serum sampling and initiation of ovarian stimulation affect its predictive ability in in vitro fertilization-intracytoplasmic sperm injection cycles with a gonadotropin-releasing hormone antagonist?

    DEFF Research Database (Denmark)

    Polyzos, Nikolaos P; Nelson, Scott M; Stoop, Dominic

    2013-01-01

    To investigate whether the time interval between serum antimüllerian hormone (AMH) sampling and initiation of ovarian stimulation for in vitro fertilization-intracytoplasmic sperm injection (IVF-ICSI) may affect the predictive ability of the marker for low and excessive ovarian response.

  13. Intrapartum and interval tubal sterilization: characteristics correlated with the procedure and regret in a sample of women from a public hospital

    Directory of Open Access Journals (Sweden)

    Arlete Maria dos Santos Fernandes

    2006-10-01

    ... was cesarean. No difference was detected between the groups in the rates of satisfaction and regret after the procedure. BACKGROUND: Brazil is a country with a high prevalence of tubal ligation, which is frequently performed at the time of delivery. In recent years, an increase in tubal reversal has been noticed, primarily among young women. OBJECTIVES: To study characteristics correlated with the procedure, determine the frequency of intrapartum tubal ligation, and measure patient satisfaction rates and tubal sterilization regret, in a sample of post-tubal patients. METHODS: Three hundred and thirty-five women underwent tubal ligation. The variables studied were related to the procedure: age at tubal ligation, whether ligation was performed intrapartum (vaginal or cesarean section) or after an interval (other than the intrapartum and puerperal period), health service performing the sterilization, medical expenses paid for the procedure, reason stated for choosing the method, and causes related to satisfaction/regret: desire to become pregnant after sterilization, search for treatment, and performance of tubal ligation reversal. The women were divided into two groups, a group undergoing ligation in the intrapartum period and a second group ligated after an interval, to evaluate the association between variables by using Fisher's exact test and chi-squared calculation with Yates' correction. The study was approved by the Ethics Committee of the institution. RESULTS: There was a predominance of Caucasian women over 35 years of age, married, and with a low level of education, of whom 43.5% had undergone sterilization before 30 years of age. Two hundred and forty-five women underwent intrapartum tubal ligation, 91.2% of them had cesarean delivery and 44.6% vaginal delivery. In both groups undergoing intrapartum tubal ligation and ligation after an interval, 82.0% and 80.8% reported satisfaction with the method. Although 14.6% expressed a desire to become pregnant at some time after

  14. A Second-Order Interval Optimization Method for Uncertain Structures Based on the Epsilon Algorithm

    Institute of Scientific and Technical Information of China (English)

    麻凯; 李鹏; 刘巧伶

    2013-01-01

    This paper presents a second-order interval optimization method for the modes of uncertain structures. First, the constrained modal optimization problem is transformed into an unconstrained one by the Lagrange multiplier method, and an interval second-order Taylor expansion is built to approximately describe the modal interval of a structure with interval parameters. Because the second-order term of this expansion, the Hessian matrix, is laborious to compute by common methods, the DFP (Davidon-Fletcher-Powell) method is used to approximate it iteratively, while the structural parameters, and their uncertainty intervals, that satisfy the constraints and the optimization objective are computed from the interval function. The Epsilon algorithm is employed in the structural reanalysis, which saves computation time while preserving accuracy. A numerical example of a stiffened plate-shell structure shows that the method is effective for the interval optimization of uncertain structures.

  15. Minimax confidence intervals in geomagnetism

    Science.gov (United States)

    Stark, Philip B.

    1992-01-01

    The present paper uses theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.

  16. Optimizing stream water mercury sampling for calculation of fish bioaccumulation factors

    Science.gov (United States)

    Riva-Murray, Karen; Bradley, Paul M.; Journey, Celeste A.; Brigham, Mark E.; Scudder Eikenberry, Barbara C.; Knightes, Christopher; Button, Daniel T.

    2013-01-01

    Mercury (Hg) bioaccumulation factors (BAFs) for game fishes are widely employed for monitoring, assessment, and regulatory purposes. Mercury BAFs are calculated as the fish Hg concentration (Hgfish) divided by the water Hg concentration (Hgwater) and, consequently, are sensitive to sampling and analysis artifacts for fish and water. We evaluated the influence of water sample timing, filtration, and mercury species on the modeled relation between game fish and water mercury concentrations across 11 streams and rivers in five states in order to identify optimum Hgwater sampling approaches. Each model included fish trophic position, to account for a wide range of species collected among sites, and flow-weighted Hgwater estimates. Models were evaluated for parsimony, using Akaike’s Information Criterion. Better models included filtered water methylmercury (FMeHg) or unfiltered water methylmercury (UMeHg), whereas filtered total mercury did not meet parsimony requirements. Models including mean annual FMeHg were superior to those with mean FMeHg calculated over shorter time periods throughout the year. FMeHg models including metrics of high concentrations (80th percentile and above) observed during the year performed better, in general. These higher concentrations occurred most often during the growing season at all sites. Streamflow was significantly related to the probability of achieving higher concentrations during the growing season at six sites, but the direction of influence varied among sites. These findings indicate that streamwater Hg collection can be optimized by evaluating site-specific FMeHg - UMeHg relations, intra-annual temporal variation in their concentrations, and streamflow-Hg dynamics.

  17. Optimal time interval between a single course of antenatal corticosteroids and delivery for reduction of respiratory distress syndrome in preterm twins.

    Science.gov (United States)

    Kuk, Jin-Yi; An, Jung-Ju; Cha, Hyun-Hwa; Choi, Suk-Joo; Vargas, Juan E; Oh, Soo-young; Roh, Cheong-Rae; Kim, Jong-Hwa

    2013-09-01

    The objective of the study was to investigate the effect of a single course of antenatal corticosteroid (ACS) therapy on the incidence of respiratory distress syndrome (RDS) in preterm twins according to the time interval between ACS administration and delivery. We performed a retrospective cohort study of twins born between 24 and 34 weeks of gestation from November 1995 to May 2011. Subjects were grouped on the basis of the time interval between the first ACS dose and delivery: the ACS-to-delivery interval of less than 2 days (n = 166), 2-7 days (n = 114), and more than 7 days (n = 66). Pregnancy and neonatal outcomes of each group were compared with a control group of twins who were not exposed to ACS (n = 122). Multiple logistic regression analysis was used to examine the association between the ACS-to-delivery interval and the incidence of RDS after adjusting for potential confounding variables. Compared with the ACS nonexposure group, the incidence of RDS in the group with an ACS-to-delivery interval of less than 2 days was not significantly different (adjusted odds ratio [aOR], 1.089; 95% confidence interval [CI], 0.524-2.262; P = .819). RDS occurred significantly less frequently when the ACS-to-delivery interval was between 2 and 7 days (aOR, 0.419; 95% CI, 0.181-0.968; P = .042). However, there was no significant reduction in the incidence of RDS when the ACS-to-delivery interval exceeded 7 days (aOR, 2.205; 95% CI, 0.773-6.292; P = .139). In twin pregnancies, a single course of ACS treatment was associated with a decreased rate of RDS only when the ACS-to-delivery interval was between 2 and 7 days. Copyright © 2013 Mosby, Inc. All rights reserved.

  18. 商业银行效益风险协调区间研究%The Estimation and Optimization of Returns and Risks of Commercial Bank Coordination Interval

    Institute of Scientific and Technical Information of China (English)

    高艳平; 李立新

    2015-01-01

    Based on data for 16 listed banks from 2002 to 2013, a dynamic panel instrumental-variable model, and a multi-objective optimization method, this paper studies the coordination between the risks and returns of China's commercial banks. Firstly, through a comprehensive measure of commercial bank returns and risk, using a comprehensive evaluation method and global principal component analysis of the time series, the paper derives returns and risks indices and establishes index models for risks and returns. Secondly, the paper controls the intrinsic factors of the returns and risks models to obtain the upper limit of the risks and the lower limit of the returns. Then, external macroeconomic and financial environment factors are added to re-estimate the returns and risks models; on the basis of these two models, the returns under the constraint of risks and the risks under the constraint of returns are obtained. Finally, the paper measures the corresponding optimal returns and the coordination interval of commercial bank returns and risks in China, and reaches the following conclusion: the relative efficiency range, risks included, is [0.86, 1.15]; outside this range, additional returns may only be obtained at the price of increased risk.

  19. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    Science.gov (United States)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed were performed on intact cocaine contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  20. Rapid parameter optimization of low signal-to-noise samples in NMR spectroscopy using rapid CPMG pulsing during acquisition: application to recycle delays.

    Science.gov (United States)

    Farooq, Hashim; Courtier-Murias, Denis; Soong, Ronald; Masoom, Hussain; Maas, Werner; Fey, Michael; Kumar, Rajeev; Monette, Martine; Stronks, Henry; Simpson, Myrna J; Simpson, André J

    2013-03-01

    A method is presented that combines Carr-Purcell-Meiboom-Gill (CPMG) during acquisition with either selective or nonselective excitation to produce a considerable intensity enhancement and a simultaneous loss in chemical shift information. A range of parameters can theoretically be optimized very rapidly on the basis of the signal from the entire sample (hard excitation) or spectral subregion (soft excitation) and should prove useful for biological, environmental, and polymer samples that often exhibit highly dispersed and broad spectral profiles. To demonstrate the concept, we focus on the application of our method to T(1) determination, specifically for the slowest relaxing components in a sample, which ultimately determines the optimal recycle delay in quantitative NMR. The traditional inversion recovery (IR) pulse program is combined with a CPMG sequence during acquisition. The slowest relaxing components are selected with a shaped pulse, and then, low-power CPMG echoes are applied during acquisition with intervals shorter than chemical shift evolution (RCPMG) thus producing a single peak with an SNR commensurate with the sum of the signal integrals in the selected region. A traditional (13)C IR experiment is compared with the selective (13)C IR-RCPMG sequence and yields the same T(1) values for samples of lysozyme and riverine dissolved organic matter within error. For lysozyme, the RCPMG approach is ~70 times faster, and in the case of dissolved organic matter is over 600 times faster. This approach can be adapted for the optimization of a host of parameters where chemical shift information is not necessary, such as cross-polarization/mixing times and pulse lengths.

  1. Organ sample generator for expected treatment dose construction and adaptive inverse planning optimization

    Energy Technology Data Exchange (ETDEWEB)

    Nie Xiaobo; Liang Jian; Yan Di [Department of Radiation Oncology, Beaumont Health System, Royal Oak, Michigan 48073 (United States)

    2012-12-15

    Purpose: To create an organ sample generator (OSG) for expected treatment dose construction and adaptive inverse planning optimization. The OSG generates random samples of organs of interest from a distribution obeying the patient specific organ variation probability density function (PDF) during the course of adaptive radiotherapy. Methods: Principal component analysis (PCA) and a time-varying least-squares regression (LSR) method were used on patient specific geometric variations of organs of interest manifested on multiple daily volumetric images obtained during the treatment course. The construction of the OSG includes the determination of eigenvectors of the organ variation using PCA, and the determination of the corresponding coefficients using time-varying LSR. The coefficients can be either random variables or random functions of the elapsed treatment days depending on the characteristics of organ variation as a stationary or a nonstationary random process. The LSR method with time-varying weighting parameters was applied to the precollected daily volumetric images to determine the function form of the coefficients. Eleven head-and-neck cancer patients with 30 daily cone beam CT images each were included in the evaluation of the OSG. The evaluation was performed using a total of 18 organs of interest, including 15 organs at risk and 3 targets. Results: Geometric variations of organs of interest during head-and-neck cancer radiotherapy can be represented using the first 3 to 4 eigenvectors. These eigenvectors were variable during treatment, and need to be updated using new daily images obtained during the treatment course. The OSG generates random samples of organs of interest from the estimated organ variation PDF of the individual. The accuracy of the estimated PDF can be improved recursively using extra daily image feedback during the treatment course. The average deviations in the estimation of the mean and standard deviation of the organ variation PDF for ...
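
    The following sketch illustrates the PCA-plus-random-coefficient idea described above, under simplifying assumptions (Gaussian coefficients, invented array sizes); it is not the authors' implementation and omits the time-varying least-squares regression step.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # 30 daily organ geometries, each flattened to 300 degrees of freedom (invented sizes).
    daily_shapes = rng.random((30, 300))
    mean_shape = daily_shapes.mean(axis=0)
    centered = daily_shapes - mean_shape

    # Eigenvectors of the organ variation via SVD (equivalent to PCA here); keep 4 modes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:4]
    coeffs = centered @ modes.T                     # per-day coefficients of each mode

    # Draw one random organ sample from the estimated coefficient distribution.
    sampled_coeffs = rng.normal(coeffs.mean(axis=0), coeffs.std(axis=0))
    random_organ = mean_shape + sampled_coeffs @ modes
    ```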

  2. Quality analysis of salmon calcitonin in a polymeric bioadhesive pharmaceutical formulation: sample preparation optimization by DOE.

    Science.gov (United States)

    D'Hondt, Matthias; Van Dorpe, Sylvia; Mehuys, Els; Deforce, Dieter; DeSpiegeleer, Bart

    2010-12-01

    A sensitive and selective HPLC method for the assay and degradation of salmon calcitonin, a 32-amino acid peptide drug, formulated at low concentrations (400 ppm m/m) in a bioadhesive nasal powder containing polymers, was developed and validated. The sample preparation step was optimized using Plackett-Burman and Onion experimental designs. The response functions evaluated were calcitonin recovery and analytical stability. The best results were obtained by treating the sample with 0.45% (v/v) trifluoroacetic acid at 60 degrees C for 40 min. These extraction conditions did not yield any observable degradation, while a maximum recovery for salmon calcitonin of 99.6% was obtained. The HPLC-UV/MS methods used a reversed-phase C(18) Vydac Everest column, with a gradient system based on aqueous acid and acetonitrile. UV detection, using trifluoroacetic acid in the mobile phase, was used for the assay of calcitonin and related degradants. Electrospray ionization (ESI) ion trap mass spectrometry, using formic acid in the mobile phase, was implemented for the confirmatory identification of degradation products. Validation results showed that the methodology was fit for the intended use, with accuracy of 97.4+/-4.3% for the assay and detection limits for degradants ranging between 0.5 and 2.4%. Pilot stability tests of the bioadhesive powder under different storage conditions showed a temperature-dependent decrease in salmon calcitonin assay value, with no equivalent increase in degradation products, explained by the chemical interaction between salmon calcitonin and the carbomer polymer.

  3. Towards an optimal sampling of peculiar velocity surveys for Wiener Filter reconstructions

    Science.gov (United States)

    Sorce, Jenny G.; Hoffman, Yehuda; Gottlöber, Stefan

    2017-06-01

    The Wiener Filter (WF) technique enables the reconstruction of density and velocity fields from observed radial peculiar velocities. This paper aims at identifying the optimal design of peculiar velocity surveys within the WF framework. The prime goal is to test the dependence of the reconstruction quality on the distribution and nature of data points. Mock data sets, extending to 250 h-1 Mpc, are drawn from a constrained simulation that mimics the local Universe to produce realistic mock catalogues. Reconstructed fields obtained with these mocks are compared to the reference simulation. Comparisons, including residual distributions, cell-to-cell and bulk velocities, imply that the presence of field data points is essential to properly measure the flows. The fields reconstructed from mocks that consist only of galaxy cluster data points exhibit poor-quality bulk velocities. In addition, the reconstruction quality depends strongly on the grouping of individual data points into single points to suppress virial motions in high-density regions. Conversely, the presence of a Zone of Avoidance hardly affects the reconstruction. For a given number of data points, a uniform sample does not score any better than a sample with decreasing number of data points with the distance. The best reconstructions are obtained with a grouped survey containing field galaxies: assuming no error, they differ from the simulated field by less than 100 km s-1 up to the extreme edge of the catalogues or up to a distance of three times the mean distance of data points for non-uniform catalogues. The overall conclusions hold when errors are added.

  4. Optimal satellite sampling to resolve global-scale dynamics in the I-T system

    Science.gov (United States)

    Rowland, D. E.; Zesta, E.; Connor, H. K.; Pfaff, R. F., Jr.

    2016-12-01

    The recent Decadal Survey highlighted the need for multipoint measurements of ion-neutral coupling processes to study the pathways by which solar wind energy drives dynamics in the I-T system. The emphasis in the Decadal Survey is on global-scale dynamics and processes, and in particular, mission concepts making use of multiple identical spacecraft in low earth orbit were considered for the GDC and DYNAMIC missions. This presentation will provide quantitative assessments of the optimal spacecraft sampling needed to significantly advance our knowledge of I-T dynamics on the global scale. We will examine storm time and quiet time conditions as simulated by global circulation models, and determine how well various candidate satellite constellations and satellite schemes can quantify the plasma and neutral convection patterns and global-scale distributions of plasma density, neutral density, and composition, and their response to changes in the IMF. While the global circulation models are data-starved, and do not contain all the physics that we might expect to observe with a global-scale constellation mission, they are nonetheless an excellent "starting point" for discussions of the implementation of such a mission. The result will be of great utility for the design of future missions, such as GDC, to study the global-scale dynamics of the I-T system.

  5. Statistical intervals a guide for practitioners

    CERN Document Server

    Hahn, Gerald J

    2011-01-01

    Presents a detailed exposition of statistical intervals and emphasizes applications in industry. The discussion differentiates at an elementary level among different kinds of statistical intervals and gives instruction with numerous examples and simple math on how to construct such intervals from sample data. This includes confidence intervals to contain a population percentile, confidence intervals on the probability of meeting a specified threshold value, and prediction intervals to include an observation in a future sample. Also has an appendix containing computer subroutines for nonparametric statistical intervals.

  6. Optimized Sample Selection in SVM Classification by Combining with DMSP-OLS, Landsat NDVI and GlobeLand30 Products for Extracting Urban Built-Up Areas

    Directory of Open Access Journals (Sweden)

    Xiaolong Ma

    2017-03-01

    The accuracy of training samples used for data classification methods, such as support vector machines (SVMs), has had a considerable positive impact on the results of urban area extractions. To improve the accuracy of urban built-up area extractions, this paper presents a sample-optimized approach for classifying urban area data using a combination of the Defense Meteorological Satellite Program-Operational Linescan System (DMSP-OLS) for nighttime light data, Landsat images, and GlobeLand30, which is a 30-m global land cover data product. The proposed approach consists of three main components: (1) initial sample generation and data classification into built-up and non-urban built-up areas based on the maximum and minimum intervals of digital numbers from the DMSP-OLS data, respectively; (2) refined sample selection and optimization by the probability threshold of each pixel based on vegetation cover, using the Landsat-derived normalized differential vegetation index (NDVI) and artificial surfaces extracted from the GlobeLand30 product as the constraints; (3) iterative classification and urban built-up area data extraction using the relationship between these three aspects of data collection together with the training sets. Experiments were conducted for several cities in western China using this proposed approach for the extraction of built-up areas, which were classified using urban construction statistical yearbooks and Landsat images and were compared with data obtained from traditional data collection methods, such as the threshold dichotomy method and the improved neighborhood focal statistics method. An analysis of the empirical results indicated that (1) the sample training process was improved using the proposed method, and the overall accuracy (OA) increased from 89% to 96% for both the optimized and non-optimized sample selection; (2) the proposed method had a relative error of less than 10%, as calculated by an accuracy assessment; (3) the ...

  7. Reliability-Based and Cost-Oriented Product Optimization Integrating Fuzzy Reasoning Petri Nets, Interval Expert Evaluation and Cultural-Based DMOPSO Using Crowding Distance Sorting

    OpenAIRE

    Zhaoxi Hong; Yixiong Feng; Zhongkai Li; Guangdong Tian; Jianrong Tan

    2017-01-01

    In reliability-based and cost-oriented product optimization, the target product reliability is apportioned to subsystems or components to achieve the maximum reliability and minimum cost. Main challenges to conducting such optimization design lie in how to simultaneously consider subsystem division, uncertain evaluation provided by experts for essential factors, and dynamic propagation of product failure. To overcome these problems, a reliability-based and cost-oriented product optimization m...

  8. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treatment ...

  9. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, these methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated to select lower energy conformations as representative conformation structure replicas can facilitate the convergence of the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower energy conformations for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
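
    A hedged sketch of the Pareto-front selection step mentioned above (identifying non-dominated replicas when all objectives are minimized); the replica-exchange bookkeeping and RosettaLigand scoring are not reproduced, and the objective values are invented.

    ```python
    import numpy as np

    def pareto_front(points):
        """Return indices of non-dominated points (smaller is better in every objective)."""
        front = []
        for i, p in enumerate(points):
            dominated = any(
                np.all(q <= p) and np.any(q < p)
                for j, q in enumerate(points) if j != i
            )
            if not dominated:
                front.append(i)
        return front

    # Each row: (interface energy, ligand RMSD) for one replica -- invented numbers.
    objectives = np.array([[-950.2, 3.1], [-940.8, 2.7], [-962.5, 3.4], [-948.0, 3.6]])
    print(pareto_front(objectives))   # replicas kept as candidates for exchange
    ```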

  10. Hyphenation of optimized microfluidic sample preparation with nano liquid chromatography for faster and greener alkaloid analysis

    NARCIS (Netherlands)

    Shen, Y.; Beek, van T.A.; Zuilhof, H.; Chen, B.

    2013-01-01

    A glass liquid–liquid extraction (LLE) microchip with three parallel 3.5 cm long and 100 µm wide interconnecting channels was optimized in terms of more environmentally friendly (greener) solvents and extraction efficiency. In addition, the optimized chip was successfully hyphenated with nano-liquid

  12. Geochemical sampling scheme optimization on mine wastes based on hyperspectral data

    CSIR Research Space (South Africa)

    Zhao, T

    2008-07-01

    Simulated annealing uses the Weighted Means Shortest Distance (WMSD) criterion between sampling points. The scaled weight function intensively samples areas where an abundance of weathering mine waste occurs. A threshold is defined to constrain the sampling points ...

  13. CLSI-based transference of the CALIPER database of pediatric reference intervals from Abbott to Beckman, Ortho, Roche and Siemens Clinical Chemistry Assays: direct validation using reference samples from the CALIPER cohort.

    Science.gov (United States)

    Estey, Mathew P; Cohen, Ashley H; Colantonio, David A; Chan, Man Khun; Marvasti, Tina Binesh; Randell, Edward; Delvin, Edgard; Cousineau, Jocelyne; Grey, Vijaylaxmi; Greenway, Donald; Meng, Qing H; Jung, Benjamin; Bhuiyan, Jalaluddin; Seccombe, David; Adeli, Khosrow

    2013-09-01

    The CALIPER program recently established a comprehensive database of age- and sex-stratified pediatric reference intervals for 40 biochemical markers. However, this database was only directly applicable for Abbott ARCHITECT assays. We therefore sought to expand the scope of this database to biochemical assays from other major manufacturers, allowing for a much wider application of the CALIPER database. Based on CLSI C28-A3 and EP9-A2 guidelines, CALIPER reference intervals were transferred (using specific statistical criteria) to assays performed on four other commonly used clinical chemistry platforms including Beckman Coulter DxC800, Ortho Vitros 5600, Roche Cobas 6000, and Siemens Vista 1500. The resulting reference intervals were subjected to a thorough validation using 100 reference specimens (healthy community children and adolescents) from the CALIPER bio-bank, and all testing centers participated in an external quality assessment (EQA) evaluation. In general, the transferred pediatric reference intervals were similar to those established in our previous study. However, assay-specific differences in reference limits were observed for many analytes, and in some instances were considerable. The results of the EQA evaluation generally mimicked the similarities and differences in reference limits among the five manufacturers' assays. In addition, the majority of transferred reference intervals were validated through the analysis of CALIPER reference samples. This study greatly extends the utility of the CALIPER reference interval database which is now directly applicable for assays performed on five major analytical platforms in clinical use, and should permit the worldwide application of CALIPER pediatric reference intervals. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  14. Optimized Optical Rectification and Electro-optic Sampling in ZnTe Crystals with Chirped Femtosecond Laser Pulses

    DEFF Research Database (Denmark)

    Erschens, Dines Nøddegaard; Turchinovich, Dmitry; Jepsen, Peter Uhd

    2011-01-01

    We report on optimization of the intensity of THz signals generated and detected by optical rectification and electro-optic sampling in dispersive, nonlinear media. Addition of a negative prechirp to the femtosecond laser pulses used in the THz generation and detection processes in 1-mm thick ZnTe ...

  15. Interval methods: An introduction

    DEFF Research Database (Denmark)

    Achenie, L.E.K.; Kreinovich, V.; Madsen, Kaj

    2006-01-01

    High-performance computing (HPC) offers the potential for solving increasingly difficult computational problems. However, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention; a main concern of HPC is the development of software that optimizes the performance of a given computer. An important characteristic of computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. However, in many practical situations, we do not know the probability distributions of the different measurement, estimation, and/or roundoff errors; we only know estimates of the upper bounds on the corresponding errors, i.e., we only know an interval of possible values of each such error. The papers in the following chapter describe the corresponding 'interval computation' ...
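
    A tiny illustration of the interval-computation idea sketched above: when only bounds on each error are known, quantities are propagated as intervals rather than point values. This is generic interval arithmetic, not code from the cited papers.

    ```python
    # Closed intervals represented as (lower, upper) tuples.
    def interval_add(a, b):
        return (a[0] + b[0], a[1] + b[1])

    def interval_mul(a, b):
        products = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
        return (min(products), max(products))

    x = (1.9, 2.1)    # a measurement known only to within +/- 0.1
    y = (-0.3, 0.4)   # an error term with known bounds
    print(interval_add(x, y))   # (1.6, 2.5)
    print(interval_mul(x, y))   # (-0.63, 0.84)
    ```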

  16. Digitally available interval-specific rock-sample data compiled from historical records, Nevada National Security Site and vicinity, Nye County, Nevada

    Science.gov (United States)

    Wood, David B.

    2007-11-01

    Between 1951 and 1992, 828 underground tests were conducted on the Nevada National Security Site, Nye County, Nevada. Prior to and following these nuclear tests, holes were drilled and mined to collect rock samples. These samples are organized and stored by depth of borehole or drift at the U.S. Geological Survey Core Library and Data Center at Mercury, Nevada, on the Nevada National Security Site. From these rock samples, rock properties were analyzed and interpreted and compiled into project files and in published reports that are maintained at the Core Library and at the U.S. Geological Survey office in Henderson, Nevada. These rock-sample data include lithologic descriptions, physical and mechanical properties, and fracture characteristics. Hydraulic properties also were compiled from holes completed in the water table. Rock samples are irreplaceable because pre-test, in-place conditions cannot be recreated and samples can not be recollected from the many holes destroyed by testing. Documenting these data in a published report will ensure availability for future investigators.

  17. Digitally Available Interval-Specific Rock-Sample Data Compiled from Historical Records, Nevada Test Site and Vicinity, Nye County, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    David B. Wood

    2009-10-08

    Between 1951 and 1992, underground nuclear weapons testing was conducted at 828 sites on the Nevada Test Site, Nye County, Nevada. Prior to and following these nuclear tests, holes were drilled and mined to collect rock samples. These samples are organized and stored by depth of borehole or drift at the U.S. Geological Survey Core Library and Data Center at Mercury, Nevada, on the Nevada Test Site. From these rock samples, rock properties were analyzed and interpreted and compiled into project files and in published reports that are maintained at the Core Library and at the U.S. Geological Survey office in Henderson, Nevada. These rock-sample data include lithologic descriptions, physical and mechanical properties, and fracture characteristics. Hydraulic properties also were compiled from holes completed in the water table. Rock samples are irreplaceable because pre-test, in-place conditions cannot be recreated and samples cannot be recollected from the many holes destroyed by testing. Documenting these data in a published report will ensure availability for future investigators.

  18. Digitally Available Interval-Specific Rock-Sample Data Compiled from Historical Records, Nevada Test Site and Vicinity, Nye County, Nevada.

    Energy Technology Data Exchange (ETDEWEB)

    David B. Wood

    2007-10-24

    Between 1951 and 1992, 828 underground tests were conducted on the Nevada Test Site, Nye County, Nevada. Prior to and following these nuclear tests, holes were drilled and mined to collect rock samples. These samples are organized and stored by depth of borehole or drift at the U.S. Geological Survey Core Library and Data Center at Mercury, Nevada, on the Nevada Test Site. From these rock samples, rock properties were analyzed and interpreted and compiled into project files and in published reports that are maintained at the Core Library and at the U.S. Geological Survey office in Henderson, Nevada. These rock-sample data include lithologic descriptions, physical and mechanical properties, and fracture characteristics. Hydraulic properties also were compiled from holes completed in the water table. Rock samples are irreplaceable because pre-test, in-place conditions cannot be recreated and samples cannot be recollected from the many holes destroyed by testing. Documenting these data in a published report will ensure availability for future investigators.

  19. Parametric optimization of selective laser melting for forming Ti6Al4V samples by Taguchi method

    Science.gov (United States)

    Sun, Jianfeng; Yang, Yongqiang; Wang, Di

    2013-07-01

    In this study, a selective laser melting experiment was carried out with Ti6Al4V alloy powders. To produce samples with maximum density, the selective laser melting parameters of laser power, scanning speed, powder thickness, hatching space and scanning strategy were carefully selected. As a statistical design-of-experiments technique, the Taguchi method was used to optimize the selected parameters. The results were analyzed using analysis of variance (ANOVA) and signal-to-noise (S/N) ratios in Design-Expert software to determine the optimal parameters, and a regression model was established. The regression equation revealed a linear relationship among density, laser power, scanning speed, powder thickness and scanning strategy. From the experiments, a sample with a density higher than 95% was obtained. The microstructure of the obtained sample was mainly composed of acicular martensite, α phase and β phase. The micro-hardness was 492 HV0.2.
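
    For readers unfamiliar with the Taguchi analysis mentioned above, the sketch below computes the standard larger-the-better signal-to-noise ratio for one hypothetical parameter setting; the density replicates are invented and are not the study's data.

    ```python
    import numpy as np

    # Relative density (%) measured on three replicate samples for one parameter setting.
    density = np.array([95.2, 96.1, 94.8])

    # Larger-the-better S/N ratio: -10 * log10( mean(1 / y^2) ).
    sn_ratio = -10 * np.log10(np.mean(1.0 / density**2))
    print(f"S/N = {sn_ratio:.2f} dB")
    ```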

  20. Sparse Recovery Optimization in Wireless Sensor Networks with a Sub-Nyquist Sampling Rate

    National Research Council Canada - National Science Library

    Brunelli, Davide; Caione, Carlo

    2015-01-01

    .... Using data from a real dataset, we compare an implementation of CS using dense encoding matrices, where samples are gathered at a Nyquist rate, with the reconstruction of signals sampled at a sub-Nyquist rate...

  1. Characterizing the optimal flux space of genome-scale metabolic reconstructions through modified latin-hypercube sampling.

    Science.gov (United States)

    Chaudhary, Neha; Tøndel, Kristin; Bhatnagar, Rakesh; dos Santos, Vítor A P Martins; Puchałka, Jacek

    2016-03-01

    Genome-Scale Metabolic Reconstructions (GSMRs), along with optimization-based methods, predominantly Flux Balance Analysis (FBA) and its derivatives, are widely applied for assessing and predicting the behavior of metabolic networks upon perturbation, thereby enabling identification of potential novel drug targets and biotechnologically relevant pathways. The abundance of alternate flux profiles has led to the evolution of methods to explore the complete solution space aiming to increase the accuracy of predictions. Herein we present a novel, generic algorithm to characterize the entire flux space of a GSMR upon application of FBA, leading to the optimal value of the objective (the optimal flux space). Our method employs Modified Latin-Hypercube Sampling (LHS) to effectively border the optimal space, followed by Principal Component Analysis (PCA) to identify and explain the major sources of variability within it. The approach was validated with the elementary mode analysis of a smaller network of Saccharomyces cerevisiae and applied to the GSMR of Pseudomonas aeruginosa PAO1 (iMO1086). It is shown to surpass the commonly used Monte Carlo Sampling (MCS) in providing more uniform coverage of a much larger network with fewer samples. Results show that although many fluxes are identified as variable upon fixing the objective value, the majority of the variability can be reduced to several main patterns arising from a few alternative pathways. In iMO1086, the initial variability of 211 reactions could almost entirely be explained by 7 alternative pathway groups. These findings imply that the possibilities to reroute greater portions of flux may be limited within metabolic networks of bacteria. Furthermore, the optimal flux space is subject to change with environmental conditions. Our method may be a useful device to validate the predictions made by FBA-based tools, by describing the optimal flux space associated with these predictions, and thus to improve them.
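
    A plain Latin-hypercube sampler in the unit hypercube is sketched below as a minimal stand-in for the modified LHS described above; the modifications that confine samples to the optimal flux space are omitted, and the dimensions are invented.

    ```python
    import numpy as np

    def latin_hypercube(n_samples, n_dims, seed=0):
        """Stratify each dimension into n_samples equal bins, draw one point per bin,
        then shuffle the bins independently per dimension."""
        rng = np.random.default_rng(seed)
        u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
        for d in range(n_dims):
            rng.shuffle(u[:, d])      # in-place shuffle of one column
        return u

    samples = latin_hypercube(10, 3)  # 10 samples in a 3-dimensional unit cube
    print(samples.shape)
    ```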

  2. Non-uniform sampling in EPR--optimizing data acquisition for HYSCORE spectroscopy.

    Science.gov (United States)

    Nakka, K K; Tesiram, Y A; Brereton, I M; Mobli, M; Harmer, J R

    2014-08-21

    Non-uniform sampling combined with maximum entropy reconstruction is a powerful technique used in multi-dimensional NMR spectroscopy to reduce sample measurement time. We adapted this technique to the pulse EPR experiment hyperfine sublevel correlation (HYSCORE) and show that experimental times can be shortened by approximately an order of magnitude as compared to conventional linear sampling with negligible loss of information.

  3. Optimizing human semen cryopreservation by reducing test vial volume and repetitive test vial sampling

    DEFF Research Database (Denmark)

    Jensen, Christian F S; Ohl, Dana A; Parker, Walter R

    2015-01-01

    OBJECTIVE: To investigate optimal test vial (TV) volume, utility and reliability of TVs, intermediate temperature exposure (-88°C to -93°C) before cryostorage, cryostorage in nitrogen vapor (VN2) and liquid nitrogen (LN2), and long-term stability of VN2 cryostorage of human semen. DESIGN: Prospective ...

  4. Optimizing the interval between G-CSF therapy and F-18 FDG PET imaging in children and young adults receiving chemotherapy for sarcoma

    Energy Technology Data Exchange (ETDEWEB)

    Trout, Andrew T.; Sharp, Susan E.; Gelfand, Michael J. [Cincinnati Children's Hospital Medical Center, Department of Radiology, Cincinnati, OH (United States); Turpin, Brian K. [Cincinnati Children's Hospital Medical Center, Cancer and Blood Diseases Institute, Division of Oncology, Cincinnati, OH (United States); Zhang, Bin [Cincinnati Children's Hospital Medical Center, Division of Biostatistics and Epidemiology, Cincinnati, OH (United States)

    2015-07-15

    Granulocyte colony-stimulating factors (G-CSF) speed recovery from chemotherapy-induced myelosuppression but the marrow stimulation they cause can interfere with interpretation of F-18 fluorodeoxyglucose positron emission tomography (F-18 FDG PET) exams. To assess the frequency of interfering G-CSF-induced bone marrow activity on FDG PET imaging in children and young adults with Ewing sarcoma and rhabdomyosarcoma and to define an interval between G-CSF administration and FDG PET imaging that limits marrow interference. Blinded, retrospective review of FDG PET exams performed in patients treated with long-acting G-CSF as part of their chemotherapeutic regimen. Exams were subjectively scored by two reviewers (R1 and R2) who assessed the level of marrow uptake of FDG and measured standardized uptake values in the marrow, liver, spleen and blood pool. FDG PET findings were correlated with time since G-CSF administration and with blood cell counts. Thirty-eight FDG PET exams performed in 17 patients were reviewed with 47.4% (18/38) of exams having marrow uptake of FDG sufficient to interfere with image interpretation. Primary predictors of marrow uptake of FDG were patient age (P = 0.0037) and time since G-CSF exposure (P = 0.0028 for subjective marrow uptake of FDG, P = 0.008 [R1] and P = 0.004 [R2] for measured maximum standardized uptake value (SUVmax)). The median interval between G-CSF administration and PET imaging in cases with marrow activity considered normal or not likely to interfere was 19.5 days (range: 7-55 days). In pediatric and young adult patients with Ewing sarcoma and rhabdomyosarcoma, an interval of 20 days between administration of the long-acting form of G-CSF and FDG PET imaging should limit interference by stimulated marrow. (orig.)

  5. Sensitivity Analysis Based Approaches for Mitigating the Effects of Reducible Interval Input Uncertainty on Single- and Multi-Disciplinary Systems Using Multi-Objective Optimization

    Science.gov (United States)

    2010-01-01

    ... Du and Choi, 2006; Apley et al., 2006; Li and Azarm, 2008] and Reliability-Based Design Optimization (RBDO) [Gunawan and Papalambros, 2006; Choi et ...] assist designers faced with these types of decisions about reducible uncertainty. RBDO approaches have been developed that allow for the inclusion ... [Youn and Wang, 2008]. Other approaches combine uncertainty reduction mechanisms with RBDO techniques, either through sequentially performing ...

  6. Injury modality, survival interval, and sample region are critical determinants of qRT-PCR reference gene selection during long-term recovery from brain trauma.

    Science.gov (United States)

    Harris, Janna L; Reeves, Thomas M; Phillips, Linda L

    2009-10-01

    In the present study we examined expression of four real-time quantitative RT-PCR reference genes commonly applied to rodent models of brain injury. Transcripts for beta-actin, cyclophilin A, GAPDH, and 18S rRNA were assessed at 2-15 days post-injury, focusing on the period of synaptic recovery. Diffuse moderate central fluid percussion injury (FPI) was contrasted with unilateral entorhinal cortex lesion (UEC), a model of targeted deafferentation. Expression in UEC hippocampus, as well as in FPI hippocampus and parietotemporal cortex was analyzed by qRT-PCR. Within-group variability of gene expression was assessed and change in expression relative to paired controls was determined. None of the four common reference genes tested was invariant across brain region, survival time, and type of injury. Cyclophilin A appeared appropriate as a reference gene in UEC hippocampus, while beta-actin was most stable for the hippocampus subjected to FPI. However, each gene may fail as a suitable reference with certain test genes whose RNA expression is targeted for measurement. In FPI cortex, all reference genes were significantly altered over time, compromising their utility for time-course studies. Despite such temporal variability, certain genes may be appropriate references if limited to single survival times. These data provide an extended baseline for identification of appropriate reference genes in rodent studies of recovery from brain injury. In this context, we outline additional considerations for selecting a qRT-PCR normalization strategy in such studies. As previously concluded for acute post-injury intervals, we stress the importance of reference gene validation for each brain injury paradigm and each set of experimental conditions.

  7. Optimization of Sample Preparation for the Identification and Quantification of Saxitoxin in Proficiency Test Mussel Sample using Liquid Chromatography-Tandem Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Kirsi Harju

    2015-11-01

    Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods for the mussel samples were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of the homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards into toxic mussel extracts. The results were within a z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD).

  8. Counting, enumerating and sampling of execution plans in a cost-based query optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    1999-01-01

    textabstractTesting an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on

  9. Counting, Enumerating and Sampling of Execution Plans in a Cost-Based Query Optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    2000-01-01

    textabstractTesting an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on the query

  10. Relationships between depressive symptoms and perceived social support, self-esteem, & optimism in a sample of rural adolescents.

    Science.gov (United States)

    Weber, Scott; Puskar, Kathryn Rose; Ren, Dianxu

    2010-09-01

    Stress, developmental changes and social adjustment problems can be significant in rural teens. Screening for psychosocial problems by teachers and other school personnel is infrequent but can be a useful health promotion strategy. We used a cross-sectional survey descriptive design to examine the inter-relationships between depressive symptoms and perceived social support, self-esteem, and optimism in a sample of rural school-based adolescents. Depressive symptoms were negatively correlated with peer social support, family social support, self-esteem, and optimism. Findings underscore the importance for teachers and other school staff to provide health education. Results can be used as the basis for education to improve optimism, self-esteem, social supports and, thus, depression symptoms of teens.

  11. Reference Intervals in Neonatal Hematology.

    Science.gov (United States)

    Henry, Erick; Christensen, Robert D

    2015-09-01

    The various blood cell counts of neonates must be interpreted in accordance with high-quality reference intervals based on gestational and postnatal age. Using very large sample sizes, we generated neonatal reference intervals for each element of the complete blood count (CBC). Knowledge of whether a patient has CBC values that are too high (above the upper reference interval) or too low (below the lower reference interval) provides important insights into the specific disorder involved and in many instances suggests a treatment plan. Copyright © 2015 Elsevier Inc. All rights reserved.
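
    As a generic illustration of how a nonparametric reference interval (the central 95% of a healthy reference population) can be computed, using synthetic data rather than actual neonatal CBC values:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "healthy reference" values; not real neonatal CBC data.
    values = rng.lognormal(mean=2.0, sigma=0.3, size=500)

    # Nonparametric reference interval: 2.5th and 97.5th percentiles.
    lower, upper = np.percentile(values, [2.5, 97.5])
    print(f"reference interval: {lower:.2f} to {upper:.2f}")
    ```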

  12. Application of Interval Algorithm in Rural Power Network Planning

    Institute of Scientific and Technical Information of China (English)

    GU Zhuomu; ZHAO Yulin

    2009-01-01

    Rural power network planning is a complicated nonlinear combinatorial optimization problem based on load-forecasting results, and the actual load is affected by many uncertain factors that influence the optimization results of rural power network planning. To solve these problems, an interval algorithm was used to modify the initial search method of the uncertain-load mathematical model in rural network planning. Meanwhile, a combined genetic/tabu search algorithm was adopted to optimize the initialized network. The sample analysis results showed that, compared with deterministic planning, the improved method is suitable for urban medium-voltage distribution network planning with consideration of load uncertainty, and the planning results conform to reality.

  13. Optimal protein extraction methods from diverse sample types for protein profiling by using Two-Dimensional Electrophoresis (2DE).

    Science.gov (United States)

    Tan, A A; Azman, S N; Abdul Rani, N R; Kua, B C; Sasidharan, S; Kiew, L V; Othman, N; Noordin, R; Chen, Y

    2011-12-01

    There is a great diversity of protein sample types and origins; therefore, the optimal procedure for each sample type must be determined empirically. In order to obtain a reproducible and complete sample presentation that displays as many proteins as possible on the desired 2DE gel, it is critical to perform additional sample preparation steps to improve the quality of the final results, yet without selectively losing proteins. To address this, we developed a general method that is suitable for diverse sample types, based on the phenol-chloroform extraction method (represented by TRI reagent). This method was found to yield good results when used to analyze a human breast cancer cell line (MCF-7), Vibrio cholerae, Cryptocaryon irritans cysts and liver abscess fat tissue. These types represent cell line, bacteria, parasite cyst and pus, respectively. For each type of sample, several attempts were made to methodically compare protein isolation methods using the TRI-reagent Kit, EasyBlue Kit, PRO-PREP™ Protein Extraction Solution and lysis buffer. The most useful protocol allows the extraction and separation of a wide diversity of protein samples and is reproducible among repeated experiments. Our results demonstrated that the modified TRI-reagent Kit had the highest protein yield as well as the greatest total protein spot count for all types of samples. Distinctive differences in spot patterns were also observed in the 2DE gels of the different extraction methods used for each type of sample.

  14. Development of optimal liquid based cytology sample processing methods for HPV testing: minimising the 'inadequate' test result.

    Science.gov (United States)

    Peevor, R; Jones, J; Fiander, A N; Hibbitts, S

    2011-05-01

    Incorporation of HPV testing into cervical screening is anticipated and robust methods for DNA extraction from liquid based cytology (LBC) samples are required. This study compared QIAamp extraction with Proteinase K digestion and developed methods to address DNA extraction failure (β-globin PCR negative) from clinical specimens. Proteinase K and QIAamp extraction methods in paired LBC samples were comparable with adequate DNA retrieved from 93.3% of clinical specimens. An HPV prevalence cohort (n=10,000) found 7% (n=676) LBC samples tested negative for β-globin, and were classified as inadequate. This 'failure' rate is unsuitable for population screening, particularly as the sampling method is intrusive. 379/676 samples were assessed to determine the cause of test failure. Re-testing confirmed adequate DNA in 21.6% of the original extracts; re-extraction from stored material identified 56.2% samples contained adequate material; dilution to overcome sample inhibition (1:10) resolved 51.7% cases in original extracts and 28% in new extracts. A standardised approach to HPV testing with an optimal DNA concentration input rather than standard volume input is recommended. Samples failing initial DNA extraction should be repeat extracted and assessed for sample inhibition to reduce the 7% of HPV tests being reported as inadequate and reduce the need for retesting of those women to <1%.

  15. Determination of Ergot Alkaloids: Purity and Stability Assessment of Standards and Optimization of Extraction Conditions for Cereal Samples

    DEFF Research Database (Denmark)

    Krska, R.; Berthiller, F.; Schuhmacher, R.

    2008-01-01

    Results obtained from a purity study on standards of the 6 major ergot alkaloids ergometrine, ergotamine, ergosine, ergocristine, ergocryptine, and ergocornine and their corresponding epimers are discussed. The 6 ergot alkaloids studied have been defined by the European Food Safety Authority ... Purities were considerably above 98%, apart from ergocristinine (94%), ergosine (96%), and ergosinine (95%). Also discussed is the optimization of the extraction conditions presented in a recently published method for the quantitation of ergot alkaloids in food samples using solid-phase extraction with primary secondary amine (PSA) before LC/MS/MS. Based on the results obtained from these optimization studies, a mixture of acetonitrile with ammonium carbonate buffer was used as extraction solvent, as recoveries for all analyzed ergot alkaloids were significantly higher than those with the other solvents. Different sample ...

  16. Optimized nested Markov chain Monte Carlo sampling: application to the liquid nitrogen Hugoniot using density functional theory

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, Milton Sam [Los Alamos National Laboratory; Coe, Joshua D [Los Alamos National Laboratory; Sewell, Thomas D [UNIV OF MISSOURI-COLUMBIA

    2009-01-01

    An optimized version of the Nested Markov Chain Monte Carlo sampling method is applied to the calculation of the Hugoniot for liquid nitrogen. The 'full' system of interest is calculated using density functional theory (DFT) with a 6-31 G* basis set for the configurational energies. The 'reference' system is given by a model potential fit to the anisotropic pair interaction of two nitrogen molecules from DFT calculations. The EOS is sampled in the isobaric-isothermal (NPT) ensemble with a trial move constructed from many Monte Carlo steps in the reference system. The trial move is then accepted with a probability chosen to give the full system distribution. The P's and T's of the reference and full systems are chosen separately to optimize the computational time required to produce the full system EOS. The method is numerically very efficient and predicts a Hugoniot in excellent agreement with experimental data.
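
    The core of nested Markov chain Monte Carlo sampling is the correction step in which a trial configuration, pre-sampled with the cheap reference potential, is accepted into the expensive (here DFT-level) chain. The sketch below shows that acceptance test in a simplified constant-temperature form; the NPT-ensemble details and the actual energy evaluations of the paper are not reproduced, and all names are placeholders.

    ```python
    import numpy as np

    def nested_accept(e_full_new, e_full_old, e_ref_new, e_ref_old, beta, rng):
        """Accept a reference-chain trial move into the full-potential chain with
        probability min(1, exp(-beta * (dE_full - dE_ref)))."""
        d_full = e_full_new - e_full_old
        d_ref = e_ref_new - e_ref_old
        return rng.random() < min(1.0, np.exp(-beta * (d_full - d_ref)))

    rng = np.random.default_rng(0)
    print(nested_accept(-120.4, -120.1, -118.9, -118.8, beta=1.0, rng=rng))
    ```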

  17. Classic Kriging versus Kriging with Bootstrapping or Conditional Simulation: Classic Kriging's Robust Confidence Intervals and Optimization (Revised version of CentER DP 2013-038)

    OpenAIRE

    Mehdad, E.; Kleijnen, Jack P.C.

    2014-01-01

    Kriging is a popular method for estimating the global optimum of a simulated system. Kriging approximates the input/output function of the simulation model. Kriging also estimates the variances of the predictions of outputs for input combinations not yet simulated. These predictions and their variances are used by "efficient global optimization" (EGO) to balance local and global search. This article focuses on two related questions: (1) How to select the next combination to be simulated when s...
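
    The expected-improvement criterion that EGO typically uses to balance local and global search can be written down compactly; the sketch below assumes the Kriging predictor supplies a mean mu and a positive standard deviation sigma at a candidate point, and is not code from the article.

    ```python
    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_min):
        """EI(x) = (f_min - mu) * Phi(z) + sigma * phi(z), with z = (f_min - mu) / sigma,
        for minimization and sigma > 0."""
        z = (f_min - mu) / sigma
        return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    # Candidate with predicted mean 1.2, predictive std 0.4, best observed value 1.0.
    print(expected_improvement(mu=1.2, sigma=0.4, f_min=1.0))
    ```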

  18. Spatio-temporal optimization of sampling for bluetongue vectors (Culicoides) near grazing livestock

    DEFF Research Database (Denmark)

    Kirkeby, Carsten; Stockmarr, Anders; Bødker, Rene

    2013-01-01

    traps to sample specimens from the Culicoides obsoletus species complex on a 14 hectare field during 16 nights in 2009. FINDINGS: The large number of traps and catch nights enabled us to simulate a series of samples consisting of different numbers of traps (1-15) on each night. We also varied the number...

  19. Calculation and optimization of sample identification by laser induced breakdown spectroscopy via correlation analysis

    NARCIS (Netherlands)

    Lentjes, M.; Dickmann, K.; Meijer, J.

    2007-01-01

    Linear correlation analysis may be used as a technique for the identification of samples with a very similar chemical composition by laser induced breakdown spectroscopy. The spectrum of the “unknown” sample is correlated with a library of reference spectra. The probability of identification by

  20. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    Science.gov (United States)

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction as in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions, whereas antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  1. Optimal decision-making model of spatial sampling for survey of China's land with remotely sensed data

    Institute of Scientific and Technical Information of China (English)

    LI Lianfa; WANG Jinfeng; LIU Jiyuan

    2005-01-01

    In the remote sensing survey of national land, cost and accuracy are conflicting goals, and spatial sampling is a preferable solution that aims at an optimal balance between economic input and accuracy of results, in other words, higher accuracy at lower cost. To address drawbacks of previous application models, e.g. the lack of comprehensive quantitative comparison, an optimal decision-making model of spatial sampling is proposed. The model first derives the possible accuracy-cost diagrams of multiple schemes through initial spatial exploration, then regresses and standardizes them into a unified reference frame, and finally produces the relatively optimal sampling scheme by applying a discrete decision-making function (constructed in this paper) and comparing the schemes in combination with the diagrams. According to test results from a survey of arable land using remotely sensed data, the Sandwich model, when applied to the survey of thin-feature and cultivated land areas with aerial photos, best realizes the goal of balancing investment and accuracy. This case and others show that the optimal decision-making model of spatial sampling is a good choice in remote sensing surveys of farm areas, with the distinguished benefit of higher precision at less cost or vice versa. In order to apply the model extensively in surveys of natural resources, including arable farm areas, this paper proposes a development prototype based on component technology, which could considerably improve analysis efficiency by embedding program components within a GIS and RS software environment.

  2. Optimization of left adrenal vein sampling in primary aldosteronism: Coping with asymmetrical cortisol secretion.

    Science.gov (United States)

    Kishino, Mitsuhiro; Yoshimoto, Takanobu; Nakadate, Masashi; Katada, Yoshiaki; Kanda, Eiichiro; Nakaminato, Shuichiro; Saida, Yukihisa; Ogawa, Yoshihiro; Tateishi, Ukihide

    2017-03-31

    We evaluated the influence of catheter sampling position and size on left adrenal venous sampling (AVS) in patients with primary aldosteronism (PA) and analyzed their relationship to cortisol secretion. This retrospective study included 111 patients with a diagnosis of primary aldosteronism who underwent tetracosactide-stimulated AVS. Left AVS was obtained from two catheter positions - the central adrenal vein (CAV) and the common trunk. For common trunk sampling, 5-French catheters were used in 51 patients, and microcatheters were used in 60 patients. Autonomous cortisol secretion was evaluated with a low-dose dexamethasone suppression test in 87 patients. The adrenal/inferior vena cava cortisol concentration ratio [selectivity index (SI)] was significantly lower in samples from the left common trunk than those of the left CAV and right adrenal veins, but this difference was reduced when a microcatheter was used for common trunk sampling. Sample dilution in the common trunk of the left adrenal vein can be decreased by limiting sampling speed with the use of a microcatheter. Meanwhile, there was no significant difference in SI between the left CAV and right adrenal veins. Laterality, determined according to aldosterone/cortisol ratio (A/C ratio) based criteria, showed good reproducibility regardless of sampling position, unlike the absolute aldosterone value based criteria. However, in 11 cases with autonomous cortisol co-secretion, the cortisol hypersecreting side tended to be underestimated when using A/C ratio based criteria. Left CAV sampling enables symmetrical sampling, and may be essential when using absolute aldosterone value based criteria in cases where symmetrical cortisol secretion is uncertain.
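
    For orientation, the two ratios discussed above can be computed as in the sketch below; the concentrations are invented, no study-specific cutoff values are implied, and the lateralization index shown is only the common A/C-ratio-based form.

    ```python
    # Cortisol and aldosterone concentrations (invented numbers, arbitrary units).
    cortisol_ivc = 20.0
    aldo_left, cort_left = 5500.0, 650.0
    aldo_right, cort_right = 300.0, 700.0

    # Selectivity index (SI): adrenal cortisol / inferior vena cava cortisol.
    si_left = cort_left / cortisol_ivc
    si_right = cort_right / cortisol_ivc

    # Aldosterone/cortisol (A/C) ratios and a simple lateralization index.
    ac_left = aldo_left / cort_left
    ac_right = aldo_right / cort_right
    lateralization_index = max(ac_left, ac_right) / min(ac_left, ac_right)
    print(si_left, si_right, round(lateralization_index, 1))
    ```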

  3. Determination of zinc in environmental samples by solid phase spectrophotometry: optimization and validation study

    OpenAIRE

    Molina, María Francisca; Nechar, Mounir; Bosque-Sendra, Juan M.

    1998-01-01

    A simple and specific solid-phase spectrophotometric (SPS) determination of zinc at the μg dm-3 level has been developed based on the reaction of Zn(II) with 4-(2-pyridylazo)resorcinol (PAR) in the presence of potassium iodide; the product was then fixed on an anionic exchanger. The absorbance of the gel, packed in a 1 mm cell, is measured directly. PAR and KI concentrations were optimized simultaneously using response surface methodology (RSM) from sequential experimental Doehlert designs. The ...

  4. Building Extraction Based on an Optimized Stacked Sparse Autoencoder of Structure and Training Samples Using LIDAR DSM and Optical Images.

    Science.gov (United States)

    Yan, Yiming; Tan, Zhichao; Su, Nan; Zhao, Chunhui

    2017-08-24

    In this paper, a building extraction method is proposed based on a stacked sparse autoencoder with an optimized structure and training samples. Building extraction plays an important role in urban construction and planning. However, some negative effects reduce the accuracy of extraction, such as exceeding resolution, bad correction and terrain influence. Data collected by multiple sensors, such as light detection and ranging (LIDAR) and optical sensors, are used to improve the extraction. Using the digital surface model (DSM) obtained from LIDAR data together with optical images, traditional methods can improve the extraction to a certain extent, but there are some defects in feature extraction. Since a stacked sparse autoencoder (SSAE) neural network can learn the essential characteristics of the data in depth, an SSAE was employed to extract buildings from the combined DSM data and optical images. A better strategy for setting the SSAE network structure is given, and an approach to setting the number and proportion of training samples for better training of the SSAE is presented. The optical data and DSM were combined as input to the optimized SSAE and, after training with the optimized samples, the appropriate network structure can extract buildings with high accuracy and good robustness.
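
    The sketch below is a minimal single-layer sparse autoencoder in PyTorch with a KL-divergence sparsity penalty, standing in for one layer of the SSAE described above; the feature dimension, sparsity target, and training data are placeholder assumptions, and the paper's actual structure- and sample-optimization strategy is not reproduced.

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical input: a flattened patch of DSM + optical bands (dimension assumed).
    n_in, n_hidden, rho, beta = 64, 32, 0.05, 1.0

    class SparseAE(nn.Module):
        def __init__(self, n_in, n_hidden):
            super().__init__()
            self.enc = nn.Linear(n_in, n_hidden)
            self.dec = nn.Linear(n_hidden, n_in)
        def forward(self, x):
            h = torch.sigmoid(self.enc(x))       # hidden code, kept sparse via the KL term
            return self.dec(h), h

    def kl_sparsity(h, rho):
        """KL divergence between target activation rho and mean hidden activation."""
        rho_hat = h.mean(dim=0).clamp(1e-6, 1 - 1e-6)
        return torch.sum(rho * torch.log(rho / rho_hat)
                         + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat)))

    x = torch.rand(512, n_in)                    # placeholder for real DSM/optical features
    model = SparseAE(n_in, n_hidden)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(200):
        recon, h = model(x)
        loss = nn.functional.mse_loss(recon, x) + beta * kl_sparsity(h, rho)
        opt.zero_grad(); loss.backward(); opt.step()
    # Stacking: train a second autoencoder on h.detach(), then fine-tune the whole
    # stack with a supervised building / non-building output layer.
    ```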

  5. Varieties of Confidence Intervals.

    Science.gov (United States)

    Cousineau, Denis

    2017-01-01

    Error bars are useful to understand data and their interrelations. Here, it is shown that confidence intervals of the mean (CIMs) can be adjusted based on whether the objective is to highlight differences between measures or not and based on the experimental design (within- or between-group designs). Confidence intervals (CIs) can also be adjusted to take into account the sampling mechanisms and the population size (if not infinite). Names are proposed to distinguish the various types of CIs and the assumptions underlying them, and how to assess their validity is explained. The various CIs presented here are easily obtained from a succession of multiplicative adjustments to the basic (unadjusted) CI width. All summary results should present a measure of precision, such as CIs, as this information is complementary to effect sizes.
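
    Two of the multiplicative width adjustments mentioned above can be illustrated with a short sketch: a difference adjustment (×√2) for comparing two independent means, and a finite-population correction. The sample values and population size are assumptions, and the within-subject and cluster adjustments discussed by the author are not shown.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(50, 10, size=25)                      # illustrative sample
    n, m, s = len(x), x.mean(), x.std(ddof=1)

    half = stats.t.ppf(0.975, n - 1) * s / np.sqrt(n)    # basic (unadjusted) 95% CI half-width

    diff_adj = np.sqrt(2)                                # adjustment when comparing two means
    N_pop = 200                                          # hypothetical finite population size
    fpc = np.sqrt((N_pop - n) / (N_pop - 1))             # finite-population correction

    print(f"basic CI:            {m:.2f} +/- {half:.2f}")
    print(f"difference-adjusted: {m:.2f} +/- {half * diff_adj:.2f}")
    print(f"finite-population:   {m:.2f} +/- {half * fpc:.2f}")
    ```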

  6. Optimized Clinical Use of RNALater and FFPE Samples for Quantitative Proteomics

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Kastaniegaard, Kenneth; Padurariu, Simona

    Introduction and Objectives The availability of patient samples is essential for clinical proteomic research. Biobanks worldwide store mainly samples stabilized in RNAlater as well as formalin-fixed and paraffin-embedded (FFPE) biopsies. Biobank material is a potential source for clinical...... we compare to FFPE and frozen samples, the latter being the control. Methods From the sigmoideum of two healthy participants, twenty-four biopsies were extracted using endoscopy. The biopsies were stabilized either by direct freezing, RNAlater, or FFPE, or were incubated for 30 min at room temperature prior to FFPE...

  7. COARSE: Convex Optimization based autonomous control for Asteroid Rendezvous and Sample Exploration Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Sample return missions, by nature, require high levels of spacecraft autonomy. Developments in hardware avionics have led to more capable real-time onboard computing...

  8. Using Maximum Entropy Modeling for Optimal Selection of Sampling Sites for Monitoring Networks

    Directory of Open Access Journals (Sweden)

    Paul H. Evangelista

    2011-05-01

    Full Text Available Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
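
    The study selects sites with maximum entropy modeling; as a simplified stand-in for that iterative "most dissimilar site" step, the sketch below greedily picks sites that are farthest, in standardized environmental space, from those already chosen. The synthetic site matrix and the farthest-point heuristic are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Each row = one candidate 20 km x 20 km site summarized by four standardized
    # environmental variables (temperature, precipitation, elevation, vegetation score).
    sites = rng.normal(size=(500, 4))                 # synthetic stand-in for the real grid

    def greedy_dissimilar(sites, k):
        """Pick k sites, each maximizing its distance to the nearest already-selected site."""
        # Start from the site farthest from the environmental centroid.
        chosen = [int(np.argmax(np.linalg.norm(sites - sites.mean(0), axis=1)))]
        for _ in range(k - 1):
            d = np.min(np.linalg.norm(sites[:, None, :] - sites[chosen][None, :, :], axis=2),
                       axis=1)                        # distance to nearest chosen site
            chosen.append(int(np.argmax(d)))
        return chosen

    print("selected site indices:", greedy_dissimilar(sites, 8))
    ```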

  9. Optimization of sample preparation for accurate results in quantitative NMR spectroscopy

    Science.gov (United States)

    Yamazaki, Taichi; Nakamura, Satoe; Saito, Takeshi

    2017-04-01

    Quantitative nuclear magnetic resonance (qNMR) spectroscopy has received high marks as an excellent measurement tool that does not require the same reference standard as the analyte. Measurement parameters have been discussed in detail and high-resolution balances have been used for sample preparation. However, the high-resolution balances, such as an ultra-microbalance, are not general-purpose analytical tools and many analysts may find those balances difficult to use, thereby hindering accurate sample preparation for qNMR measurement. In this study, we examined the relationship between the resolution of the balance and the amount of sample weighed during sample preparation. We were able to confirm the accuracy of the assay results for samples weighed on a high-resolution balance, such as the ultra-microbalance. Furthermore, when an appropriate tare and amount of sample was weighed on a given balance, accurate assay results were obtained with another high-resolution balance. Although this is a fundamental result, it offers important evidence that would enhance the versatility of the qNMR method.

  10. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.

  11. Optimism

    Science.gov (United States)

    Carver, Charles S.; Scheier, Michael F.; Segerstrom, Suzanne C.

    2010-01-01

    Optimism is an individual difference variable that reflects the extent to which people hold generalized favorable expectancies for their future. Higher levels of optimism have been related prospectively to better subjective well-being in times of adversity or difficulty (i.e., controlling for previous well-being). Consistent with such findings, optimism has been linked to higher levels of engagement coping and lower levels of avoidance, or disengagement, coping. There is evidence that optimism is associated with taking proactive steps to protect one's health, whereas pessimism is associated with health-damaging behaviors. Consistent with such findings, optimism is also related to indicators of better physical health. The energetic, task-focused approach that optimists take to goals also relates to benefits in the socioeconomic world. Some evidence suggests that optimism relates to more persistence in educational efforts and to higher later income. Optimists also appear to fare better than pessimists in relationships. Although there are instances in which optimism fails to convey an advantage, and instances in which it may convey a disadvantage, those instances are relatively rare. In sum, the behavioral patterns of optimists appear to provide models of living for others to learn from. PMID:20170998

  12. MCMC-ODPR: Primer design optimization using Markov Chain Monte Carlo sampling

    Directory of Open Access Journals (Sweden)

    Kitchen James L

    2012-11-01

    Full Text Available Abstract Background Next generation sequencing technologies often require numerous primer designs that require good target coverage that can be financially costly. We aimed to develop a system that would implement primer reuse to design degenerate primers that could be designed around SNPs, thus find the fewest necessary primers and the lowest cost whilst maintaining an acceptable coverage and provide a cost effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. Results After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with single sequence primer design programs Primer3 and Primer-BLAST and achieved a lower primer cost per amplicon base covered of 0.21 and 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. Conclusions MCMC-ODPR is a useful tool for designing primers at various melting temperatures at good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.

  13. MCMC-ODPR: primer design optimization using Markov Chain Monte Carlo sampling.

    Science.gov (United States)

    Kitchen, James L; Moore, Jonathan D; Palmer, Sarah A; Allaby, Robin G

    2012-11-05

    Next generation sequencing technologies often require numerous primer designs that require good target coverage that can be financially costly. We aimed to develop a system that would implement primer reuse to design degenerate primers that could be designed around SNPs, thus find the fewest necessary primers and the lowest cost whilst maintaining an acceptable coverage and provide a cost effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with single sequence primer design programs Primer3 and Primer-BLAST and achieved a lower primer cost per amplicon base covered of 0.21 and 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. MCMC-ODPR is a useful tool for designing primers at various melting temperatures at good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.
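
    The sketch below is not the MCMC-ODPR algorithm itself, but a toy Metropolis-Hastings search over primer-to-amplicon assignments whose cost is the number of distinct primers used, illustrating how MCMC can favour primer reuse; the candidate sets, temperature, and cost function are invented.

    ```python
    import math
    import random

    random.seed(1)
    # Toy candidate sets: primer ids that could amplify each target amplicon (invented).
    candidates = {
        "amp1": ["p1", "p2", "p3"],
        "amp2": ["p2", "p4"],
        "amp3": ["p2", "p5", "p6"],
        "amp4": ["p1", "p4"],
    }

    def cost(assign):
        return len(set(assign.values()))          # fewer distinct primers = cheaper

    assign = {a: random.choice(c) for a, c in candidates.items()}
    cur = cost(assign)
    best, best_cost, T = dict(assign), cur, 0.5   # T is an arbitrary "temperature"

    for step in range(5000):
        a = random.choice(list(candidates))       # proposal: reassign one amplicon's primer
        old = assign[a]
        assign[a] = random.choice(candidates[a])
        new = cost(assign)
        if new <= cur or random.random() < math.exp(-(new - cur) / T):
            cur = new
            if cur < best_cost:
                best, best_cost = dict(assign), cur
        else:
            assign[a] = old                       # reject: restore previous assignment

    print("fewest distinct primers:", best_cost, best)
    ```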

  14. Optimizing Design Parameters for Sets of Concentric Tube Robots using Sampling-based Motion Planning

    OpenAIRE

    Baykal, Cenk; Torres, Luis G.; Alterovitz, Ron

    2015-01-01

    Concentric tube robots are tentacle-like medical robots that can bend around anatomical obstacles to access hard-to-reach clinical targets. The component tubes of these robots can be swapped prior to performing a task in order to customize the robot’s behavior and reachable workspace. Optimizing a robot’s design by appropriately selecting tube parameters can improve the robot’s effectiveness on a procedure-and patient-specific basis. In this paper, we present an algorithm that generates sets ...

  15. Optimized IMAC-IMAC protocol for phosphopeptide recovery from complex biological samples

    DEFF Research Database (Denmark)

    Ye, Juanying; Zhang, Xumin; Young, Clifford

    2010-01-01

    under three different conditions. Fe(III)-nitrilotriacetic acid (NTA) IMAC resin was chosen due to its superior performance in all tests. We further investigated the solution ionization efficiency change of the phosphoryl group and carboxylic group in different acetonitrile-water solutions and observed...... that the ionization efficiencies of the phosphoryl group and carboxylic group changed differently when the acetonitrile concentration was increased. A magnified difference was achieved in high acetonitrile content solutions. On the basis of this concept, an optimized phosphopeptide enrichment protocol was established...

  16. Optimized sample preparation for two-dimensional gel electrophoresis of soluble proteins from chicken bursa of Fabricius

    Directory of Open Access Journals (Sweden)

    Zheng Xiaojuan

    2009-10-01

    Full Text Available Abstract Background Two-dimensional gel electrophoresis (2-DE is a powerful method to study protein expression and function in living organisms and diseases. This technique, however, has not been applied to avian bursa of Fabricius (BF, a central immune organ. Here, optimized 2-DE sample preparation methodologies were constructed for the chicken BF tissue. Using the optimized protocol, we performed further 2-DE analysis on a soluble protein extract from the BF of chickens infected with virulent avibirnavirus. To demonstrate the quality of the extracted proteins, several differentially expressed protein spots selected were cut from 2-DE gels and identified by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS. Results An extraction buffer containing 7 M urea, 2 M thiourea, 2% (w/v 3-[(3-cholamidopropyl-dimethylammonio]-1-propanesulfonate (CHAPS, 50 mM dithiothreitol (DTT, 0.2% Bio-Lyte 3/10, 1 mM phenylmethylsulfonyl fluoride (PMSF, 20 U/ml Deoxyribonuclease I (DNase I, and 0.25 mg/ml Ribonuclease A (RNase A, combined with sonication and vortex, yielded the best 2-DE data. Relative to non-frozen immobilized pH gradient (IPG strips, frozen IPG strips did not result in significant changes in the 2-DE patterns after isoelectric focusing (IEF. When the optimized protocol was used to analyze the spleen and thymus, as well as avibirnavirus-infected bursa, high quality 2-DE protein expression profiles were obtained. 2-DE maps of BF of chickens infected with virulent avibirnavirus were visibly different and many differentially expressed proteins were found. Conclusion These results showed that method C, in concert extraction buffer IV, was the most favorable for preparing samples for IEF and subsequent protein separation and yielded the best quality 2-DE patterns. The optimized protocol is a useful sample preparation method for comparative proteomics analysis of chicken BF tissues.

  17. An accurate metalloprotein-specific scoring function and molecular docking program devised by a dynamic sampling and iteration optimization strategy.

    Science.gov (United States)

    Bai, Fang; Liao, Sha; Gu, Junfeng; Jiang, Hualiang; Wang, Xicheng; Li, Honglin

    2015-04-27

    Metalloproteins, particularly zinc metalloproteins, are promising therapeutic targets, and recent efforts have focused on the identification of potent and selective inhibitors of these proteins. However, the ability of current drug discovery and design technologies, such as molecular docking and molecular dynamics simulations, to probe metal-ligand interactions remains limited because of their complicated coordination geometries and rough treatment in current force fields. Herein we introduce a robust, multiobjective optimization algorithm-driven metalloprotein-specific docking program named MpSDock, which runs on a scheme similar to consensus scoring consisting of a force-field-based scoring function and a knowledge-based scoring function. For this purpose, in this study, an effective knowledge-based zinc metalloprotein-specific scoring function based on the inverse Boltzmann law was designed and optimized using a dynamic sampling and iteration optimization strategy. This optimization strategy can dynamically sample and regenerate decoy poses used in each iteration step of refining the scoring function, thus dramatically improving both the effectiveness of the exploration of the binding conformational space and the sensitivity of the ranking of the native binding poses. To validate the zinc metalloprotein-specific scoring function and its special built-in docking program, denoted MpSDockZn, an extensive comparison was performed against six universal, popular docking programs: Glide XP mode, Glide SP mode, Gold, AutoDock, AutoDock4Zn, and EADock DSS. The zinc metalloprotein-specific knowledge-based scoring function exhibited prominent performance in accurately describing the geometries and interactions of the coordination bonds between the zinc ions and chelating agents of the ligands. In addition, MpSDockZn had a competitive ability to sample and identify native binding poses with a higher success rate than the other six docking programs.
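
    The inverse Boltzmann idea underlying the knowledge-based score can be sketched as E(r) = -kT ln(p_obs(r)/p_ref(r)); the code below builds such a distance-dependent term from observed versus reference (decoy) zinc-ligand distances. The distance range, kT value, pseudo-count, and synthetic data are assumptions and do not reproduce the MpSDockZn scoring function.

    ```python
    import numpy as np

    def inverse_boltzmann(obs_dist, ref_dist, bins=40, kT=0.593):
        """Knowledge-based pair potential: E(r) = -kT * ln(p_obs(r) / p_ref(r)).
        obs_dist: Zn-ligand distances from native-like poses; ref_dist: from decoy poses.
        kT ~ 0.593 kcal/mol at 298 K (assumed); eps is a pseudo-count to avoid log(0)."""
        p_obs, edges = np.histogram(obs_dist, bins=bins, range=(1.5, 6.0), density=True)
        p_ref, _ = np.histogram(ref_dist, bins=edges, density=True)
        eps = 1e-6
        return -kT * np.log((p_obs + eps) / (p_ref + eps)), edges

    # Synthetic illustration: "native" Zn-ligand distances cluster near 2.1 A, decoys are diffuse.
    rng = np.random.default_rng(1)
    E, edges = inverse_boltzmann(rng.normal(2.1, 0.15, 2000), rng.uniform(1.5, 6.0, 2000))
    print("minimum of the potential near:", round(float(edges[np.argmin(E)]), 2), "A")
    ```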

  18. Demonstration and Optimization of BNFL's Pulsed Jet Mixing and RFD Sampling Systems Using NCAW Simulant

    Energy Technology Data Exchange (ETDEWEB)

    JR Bontha; GR Golcar; N Hannigan

    2000-08-29

    The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and the RFD sampling devices represent very simple and efficient methods to mix and sample slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes them very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems is yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of the testing of BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft ID and 15-ft height dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: Demonstrate the effectiveness of the Pulsed Jet mixing system to thoroughly homogenize Hanford-type slurries over a range of solids loading; Minimize/optimize air usage by changing sequencing of the Pulsed Jet mixers or by altering cycle times; and Demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%.

  19. Optimal design of near-Earth asteroid sample-return trajectories in the Sun-Earth-Moon system

    Science.gov (United States)

    He, Shengmao; Zhu, Zhengfan; Peng, Chao; Ma, Jian; Zhu, Xiaolong; Gao, Yang

    2016-08-01

    In the 6th edition of the Chinese Space Trajectory Design Competition held in 2014, a near-Earth asteroid sample-return trajectory design problem was released, in which the motion of the spacecraft is modeled in multi-body dynamics, considering the gravitational forces of the Sun, Earth, and Moon. It is proposed that an electric-propulsion spacecraft initially parking in a circular 200-km-altitude low Earth orbit is expected to rendezvous with an asteroid and carry as much sample as possible back to the Earth in a 10-year time frame. The team from the Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences has reported a solution with an asteroid sample mass of 328 tons, which is ranked first in the competition. In this article, we will present our design and optimization methods, primarily including overall analysis, target selection, escape from and capture by the Earth-Moon system, and optimization of impulsive and low-thrust trajectories that are modeled in multi-body dynamics. The orbital resonance concept and lunar gravity assists are considered key techniques employed for trajectory design. The reported solution, preliminarily revealing the feasibility of returning a hundreds-of-tons asteroid or asteroid sample, envisions future space missions relating to near-Earth asteroid exploration.

  20. Optimal design of near-Earth asteroid sample-return trajectories in the Sun-Earth-Moon system

    Institute of Scientific and Technical Information of China (English)

    Shengmao He; Zhengfan Zhu; Chao Peng; Jian Ma; Xiaolong Zhu; Yang Gao

    2016-01-01

    In the 6th edition of the Chinese Space Trajectory Design Competition held in 2014, a near-Earth asteroid sample-return trajectory design problem was released, in which the motion of the spacecraft is modeled in multi-body dynamics, considering the gravitational forces of the Sun, Earth, and Moon. It is proposed that an electric-propulsion spacecraft initially parking in a circular 200-km-altitude low Earth orbit is expected to rendezvous with an asteroid and carry as much sample as possible back to the Earth in a 10-year time frame. The team from the Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences has reported a solution with an asteroid sample mass of 328 tons, which is ranked first in the competition. In this article, we will present our design and optimization methods, primarily including overall analysis, target selection, escape from and capture by the Earth–Moon system, and optimization of impulsive and low-thrust trajectories that are modeled in multi-body dynamics. The orbital resonance concept and lunar gravity assists are considered key techniques employed for trajectory design. The reported solution, preliminarily revealing the feasibility of returning a hundreds-of-tons asteroid or asteroid sample, envisions future space missions relating to near-Earth asteroid exploration.

  1. Fast Determination of Manganese in Milk and Similar Infant Food Samples Using Multivariate Optimization and GF AAS

    Directory of Open Access Journals (Sweden)

    Flávia Regina de Amorim

    2011-01-01

    Full Text Available Manganese is an essential element, but high levels in foods can be toxic mainly for children. A fast and efficient method to determine Mn in milk and other infant foods using slurries and liquid samples is presented. Slurries were prepared in ultrapure water with 10 minutes of sonication. Liquid samples were diluted in ultrapure water when necessary. Multivariate optimization was used to establish the optimal analytical parameters through a fractional factorial design and a central composite design. Slurried and diluted samples were analyzed directly by GF AAS. The method presented a limit of detection of (0.98±0.04) μg L−1, characteristic mass of (2.9±0.3) pg (recommended value 2 pg), RSD of 2.3% (n=5), and linear range from 0.98 to 20.0 μg L−1 using iridium as permanent modifier. The accuracy was evaluated by analyzing two certified reference materials: nonfat milk powder (SRM1549) and whole milk powder (SRM8435). The concentrations in the powdered samples were between 0.210 and 26.3 μg g−1.

  2. Mate choice and optimal search behavior: fitness returns under the fixed sample and sequential search strategies.

    Science.gov (United States)

    Wiegmann, Daniel D; Seubert, Steven M; Wade, Gordon A

    2010-02-21

    The behavior of a female in search of a mate determines the likelihood that she encounters a high-quality male in the search process. The fixed sample (best-of-n) search strategy and the sequential search (fixed threshold) strategy are two prominent models of search behavior. The sequential search strategy dominates the former strategy--yields an equal or higher expected net fitness return to searchers--when search costs are nontrivial and the distribution of quality among prospective mates is uniform or truncated normal. In this paper our objective is to determine whether there are any search costs or distributions of male quality for which the sequential search strategy is inferior to the fixed sample search strategy. The two search strategies are derived under general conditions in which females evaluate encountered males by inspection of an indicator character that has some functional relationship to male quality. The solutions are identical to the original models when the inspected male attribute is itself male quality. The sequential search strategy is shown to dominate the fixed sample search strategy for all search costs and distributions of male quality. Low search costs have been implicated to explain empirical observations that are consistent with the use of a fixed sample search strategy, but under conditions in which the original models were derived there is no search cost or distribution of male quality that favors the fixed sample search strategy. Plausible alternative explanations for the apparent use of this search strategy are discussed.
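
    A quick Monte Carlo comparison of the two strategies, under the simplifying assumptions of uniform male quality on [0, 1], a fixed per-inspection cost, and payoff equal to accepted quality minus total search cost, is sketched below; parameter values are illustrative and not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    c, trials = 0.02, 20_000            # per-inspection search cost (assumed); quality ~ U(0, 1)

    def best_of_n(n):
        """Fixed sample (best-of-n): inspect n males, pay n*c, accept the best."""
        q = rng.random((trials, n))
        return np.mean(q.max(axis=1) - c * n)

    def sequential(threshold, max_encounters=200):
        """Sequential search: accept the first male whose quality meets the threshold."""
        q = rng.random((trials, max_encounters))
        hit = q >= threshold
        first = hit.argmax(axis=1)                  # index of first acceptable male
        found = hit.any(axis=1)                     # thresholds < 1 make misses vanishingly rare
        payoff = np.where(found,
                          q[np.arange(trials), first] - c * (first + 1),
                          q[:, -1] - c * max_encounters)
        return payoff.mean()

    print("best-of-3 expected payoff   :", round(best_of_n(3), 3))
    print("threshold-0.8 expected payoff:", round(sequential(0.8), 3))
    ```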

  3. Dynamically optimized Wang-Landau sampling with adaptive trial moves and modification factors.

    Science.gov (United States)

    Koh, Yang Wei; Lee, Hwee Kuan; Okabe, Yutaka

    2013-11-01

    The density of states of continuous models is known to span many orders of magnitudes at different energies due to the small volume of phase space near the ground state. Consequently, the traditional Wang-Landau sampling which uses the same trial move for all energies faces difficulties sampling the low-entropic states. We developed an adaptive variant of the Wang-Landau algorithm that very effectively samples the density of states of continuous models across the entire energy range. By extending the acceptance ratio method of Bouzida, Kumar, and Swendsen such that the step size of the trial move and acceptance rate are adapted in an energy-dependent fashion, the random walker efficiently adapts its sampling according to the local phase space structure. The Wang-Landau modification factor is also made energy dependent in accordance with the step size, enhancing the accumulation of the density of states. Numerical simulations show that our proposed method performs much better than the traditional Wang-Landau sampling.
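
    For reference, a minimal standard Wang-Landau loop on a small 2D Ising lattice is sketched below (fixed trial move, flatness-triggered halving of the modification factor); it illustrates the baseline algorithm that the adaptive, energy-dependent variant described above improves upon. The lattice size, flatness criterion, and stopping threshold are arbitrary choices, and the pure-Python loop takes a minute or two.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L = 8
    spins = rng.choice([-1, 1], size=(L, L))

    def total_energy(s):
        # Periodic nearest-neighbour Ising energy with J = 1: each bond counted once.
        return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

    N = L * L
    energies = np.arange(-2 * N, 2 * N + 1, 4)     # reachable energy levels (steps of 4)
    index = {int(e): i for i, e in enumerate(energies)}
    ln_g = np.zeros(len(energies))                 # running estimate of ln(density of states)
    hist = np.zeros(len(energies))
    ln_f = 1.0                                     # modification factor, halved when flat

    E = int(total_energy(spins))
    while ln_f > 1e-3:
        for _ in range(10000):
            i, j = rng.integers(L, size=2)
            nb = spins[(i+1) % L, j] + spins[(i-1) % L, j] + spins[i, (j+1) % L] + spins[i, (j-1) % L]
            new_E = E + 2 * int(spins[i, j]) * int(nb)
            # Accept with probability min(1, g(E_old) / g(E_new)).
            if np.log(rng.random()) < ln_g[index[E]] - ln_g[index[new_E]]:
                spins[i, j] *= -1
                E = new_E
            ln_g[index[E]] += ln_f
            hist[index[E]] += 1
        visited = hist[hist > 0]
        if visited.min() > 0.8 * visited.mean():   # simple flatness criterion
            hist[:] = 0
            ln_f /= 2.0

    print("ln g(E) estimated over", (ln_g > 0).sum(), "energy levels")
    ```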

  4. Optimization

    CERN Document Server

    Pearce, Charles

    2009-01-01

    Focuses on mathematical structure, and on real-world applications. This book includes developments in several optimization-related topics such as decision theory, linear programming, turnpike theory, duality theory, convex analysis, and queuing theory.

  5. Optimization of an "in situ" subtidal rocky-shore sampling strategy for monitoring purposes.

    Science.gov (United States)

    Gallon, R K; Ysnel, F; Feunteun, E

    2013-09-15

    This study compared 2 standardized protocols to monitor subtidal rocky shores. We tested 2 sampling methods (temporal unit and quadrat) to assess the efficiency of extracting biota parameters (diversity, abundance, and biomass) of macroalgae, Mollusca, and Porifera with respect to time-cost and the number of sampling units. Species richness and occurrence of rocky subtidal habitats were better described by visual censuses than by quadrats. The same estimated richness was provided by the 2 methods. The association of a visual census and a quadrat was the most efficient way for responding to the requirements. A minimum of 5 sampling units per discrete area is recommended for accurately describing habitats. Then, we tested the sensitivity of the proposed protocol on the Bizeux Islet to study the variations of community structures according to depth and station. Based on the results, recommendations for monitoring purposes have been proposed according to European directives.

  6. An external sub-milliprobe optimized for PIXE analysis of archaeological samples

    Energy Technology Data Exchange (ETDEWEB)

    Torkiha, M., E-mail: m-torkiha@ipm.i [School of Physics, Institute for Studies in Theoretical Physics and Mathematics (IPM), P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of); Lamehi-Rachti, M.; Kakuee, O.R.; Fathollahi, V. [Van de Graaff Laboratory, Nuclear Science Research School, NSTRI, P.O. Box 14395-836, Tehran (Iran, Islamic Republic of)

    2010-05-01

    A simple and compact electrostatic quadrupole triplet lens has been designed and fabricated as part of the dedicated beam line for analysis of archaeological samples. A Fortran based ion optics program has been developed to simulate the beam line and lens parameters to achieve a focused sub-millimeter beam spot. The results of simulations are utilized to design and fabricate beam-line elements. The beam spot was measured by wire scanning method to be 0.3 mm for the object-slit width of 1 mm at a distance of 15 mm from the exit window. The improved Ion Beam Analysis setup allows accelerated PIXE analysis of samples whose details are comparable with the beam probe in size. The PIXE spectrum obtained by external analysis of a historical enameled ceramic sample with a sub-millimeter beam is compared with that obtained by in-vacuum standard PIXE analysis.

  7. Optimizing sampling design to deal with mist-net avoidance in Amazonian birds and bats.

    Directory of Open Access Journals (Sweden)

    João Tiago Marques

    Full Text Available Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day it was no longer advantageous to move the nets frequently. In bird surveys that could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey species present then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas.

  8. Optimizing sampling design to deal with mist-net avoidance in Amazonian birds and bats.

    Science.gov (United States)

    Marques, João Tiago; Ramos Pereira, Maria J; Marques, Tiago A; Santos, Carlos David; Santana, Joana; Beja, Pedro; Palmeirim, Jorge M

    2013-01-01

    Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day it was no longer advantageous to move the nets frequently. In bird surveys that could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey species present then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas.

  9. Cadmium and lead determination by ICPMS: Method optimization and application in carabao milk samples

    Directory of Open Access Journals (Sweden)

    Riza A. Magbitang

    2012-06-01

    Full Text Available A method utilizing inductively coupled plasma mass spectrometry (ICPMS) as the element-selective detector with microwave-assisted nitric acid digestion as the sample pre-treatment technique was developed for the simultaneous determination of cadmium (Cd) and lead (Pb) in milk samples. The estimated detection limits were 0.09 μg kg-1 and 0.33 μg kg-1 for Cd and Pb, respectively. The method was linear in the concentration range 0.01 to 500 μg kg-1 with correlation coefficients of 0.999 for both analytes. The method was validated using certified reference material BCR 150 and the determined values for Cd and Pb were 18.24 ± 0.18 μg kg-1 and 807.57 ± 7.07 μg kg-1, respectively. Further validation using another certified reference material, NIST 1643e, resulted in determined concentrations of 6.48 ± 0.10 μg L-1 for Cd and 21.96 ± 0.87 μg L-1 for Pb. These determined values agree well with the certified values of the reference materials. The method was applied to processed and raw carabao milk samples collected in Nueva Ecija, Philippines. The Cd levels determined in the samples were in the range 0.11 ± 0.07 to 5.17 ± 0.13 μg kg-1 for the processed milk samples, and 0.11 ± 0.07 to 0.45 ± 0.09 μg kg-1 for the raw milk samples. The concentrations of Pb were in the range 0.49 ± 0.21 to 5.82 ± 0.17 μg kg-1 for the processed milk samples, and 0.72 ± 0.18 to 6.79 ± 0.20 μg kg-1 for the raw milk samples.

  10. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    Energy Technology Data Exchange (ETDEWEB)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles, E-mail: cwilkins@uark.edu

    2014-01-15

    Highlights: •We optimized sample preparation for MALDI TOF poly(styrene-co-pentafluorostyrene) copolymers. •Influence of matrix choice was investigated. •Influence of matrix/analyte ratio was examined. •Influence of analyte/salt ratio (for Ag+ salt) was studied. Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to the change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  11. A systematic random sampling scheme optimized to detect the proportion of rare synapses in the neuropil.

    Science.gov (United States)

    da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C

    2009-05-30

    Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
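
    A back-of-the-envelope binomial calculation, not taken from the paper, indicates the precision such a design can reach: with an assumed proportion of labeled synapses and an assumed number of synapses scored per disector site, the coefficient of error of the estimated proportion follows directly.

    ```python
    import numpy as np

    p = 0.002            # assumed true proportion of labeled (e.g. thalamic) synapses
    n_sites = 1000       # number of disector sampling sites, as in the abstract
    syn_per_site = 25    # assumed synapses scored per disector site (illustrative)

    n_syn = n_sites * syn_per_site
    se = np.sqrt(p * (1 - p) / n_syn)        # binomial standard error of the proportion
    print(f"expected labeled synapses: {p * n_syn:.0f}")
    print(f"coefficient of error (SE/p): {se / p:.2f}")
    ```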

  12. Optimized design and analysis of sparse-sampling fMRI experiments

    Directory of Open Access Journals (Sweden)

    Tyler K Perrachione

    2013-04-01

    Full Text Available Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional timeseries. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically-informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to improve

  13. Statistical inference optimized with respect to the observed sample for single or multiple hypotheses

    CERN Document Server

    Bickel, David R

    2010-01-01

    The normalized maximum likelihood (NML) is a recent penalized likelihood that has properties that justify defining the amount of discrimination information (DI) in the data supporting an alternative hypothesis over a null hypothesis as the logarithm of an NML ratio, namely, the alternative hypothesis NML divided by the null hypothesis NML. The resulting DI, like the Bayes factor but unlike the p-value, measures the strength of evidence for an alternative hypothesis over a null hypothesis such that the probability of misleading evidence vanishes asymptotically under weak regularity conditions and such that evidence can support a simple null hypothesis. Unlike the Bayes factor, the DI does not require a prior distribution and is minimax optimal in a sense that does not involve averaging over outcomes that did not occur. Replacing a (possibly pseudo-) likelihood function with its weighted counterpart extends the scope of the DI to models for which the unweighted NML is undefined. The likelihood weights leverage ...
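
    Using notation of my own choosing (and assuming θ̂ denotes the maximum likelihood estimate under each model), the quantities described above can be written roughly as:

    ```latex
    % Rough notation (mine, not the author's): NML of model M at data x, and the
    % discrimination information (DI) of alternative M1 over null M0.
    \mathrm{NML}(x;\,\mathcal{M}) \;=\;
        \frac{p\!\left(x \mid \hat{\theta}_{\mathcal{M}}(x)\right)}
             {\displaystyle \int p\!\left(y \mid \hat{\theta}_{\mathcal{M}}(y)\right)\, dy},
    \qquad
    \mathrm{DI}(x) \;=\; \log \frac{\mathrm{NML}(x;\,\mathcal{M}_{1})}{\mathrm{NML}(x;\,\mathcal{M}_{0})}.
    ```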

  14. Determining the Optimal Spectral Sampling Frequency and Uncertainty Thresholds for Hyperspectral Remote Sensing of Ocean Color

    Science.gov (United States)

    Vandermeulen, Ryan A.; Mannino, Antonio; Neeley, Aimee; Werdell, Jeremy; Arnone, Robert

    2017-01-01

    Using a modified geostatistical technique, empirical variograms were constructed from the first derivative of several diverse remote sensing reflectance and phytoplankton absorbance spectra to describe how data points are correlated with distance across the spectra. The maximum rate of information gain is measured as a function of the kurtosis associated with the Gaussian structure of the output, and is determined for discrete segments of spectra obtained from a variety of water types (turbid river filaments, coastal waters, shelf waters, a dense Microcystis bloom, and oligotrophic waters), as well as individual and mixed phytoplankton functional types (PFTs; diatoms, chlorophytes, cyanobacteria, coccolithophores). Results show that a continuous spectrum of 5 to 7 nm spectral resolution is optimal to resolve the variability across mixed reflectance and absorbance spectra. In addition, the impact of uncertainty on subsequent derivative analysis is assessed, showing that a limit of 3 Gaussian noise (SNR 66) is tolerated without smoothing the spectrum, and 13 (SNR 15) noise is tolerated with smoothing.
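
    A minimal sketch of the underlying computation, an empirical variogram of the first-derivative spectrum, is given below; the synthetic spectrum, 1 nm sampling, and lag range are assumptions, and the modified geostatistical details of the study are not reproduced.

    ```python
    import numpy as np

    def derivative_variogram(reflectance, wavelengths, max_lag_nm=30):
        """Empirical variogram of the first-derivative spectrum (assumes a uniform wavelength step)."""
        step = wavelengths[1] - wavelengths[0]
        d = np.gradient(reflectance, step)                       # first derivative
        lags = np.arange(1, int(max_lag_nm / step) + 1)
        gamma = np.array([0.5 * np.mean((d[h:] - d[:-h]) ** 2) for h in lags])
        return lags * step, gamma

    # Synthetic reflectance spectrum sampled every 1 nm between 400 and 700 nm (illustrative).
    wl = np.arange(400, 701, 1.0)
    refl = 0.02 + 0.01 * np.sin(wl / 25.0) + np.random.default_rng(0).normal(0, 5e-4, wl.size)
    lag_nm, gamma = derivative_variogram(refl, wl)
    print("lag (nm) of steepest variogram rise:", lag_nm[np.argmax(np.diff(gamma))])
    ```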

  15. Role of over-sampled data in superresolution processing and a progressive up-sampling scheme for optimized implementations of iterative restoration algorithms

    Science.gov (United States)

    Sundareshan, Malur K.; Zegers, Pablo

    1999-07-01

    Super-resolution algorithms are often needed to enhance the resolution of diffraction-limited imagery acquired from certain sensors, particularly those operating in the millimeter-wave range. While several powerful iterative procedures for image superresolution are currently being developed, some practical implementation considerations become important in order to reduce the computational complexity and improve the convergence rate in deploying these algorithms in applications where real-time performance is of critical importance. Issues of particular interest are representation of the acquired imagery data on appropriate sample grids and the availability of oversampled data prior to super-resolution processing. Sampling at the Nyquist rate corresponds to an optimal spacing of detector elements or a scan rate that provides the largest dwell time (for scan- type focal plane imaging arrays), thus ensuring an increased SNR in the acquired image. However, super-resolution processing of this data could produce aliasing of the spectral components, leading not only to inaccurate estimates of the frequencies beyond the sensor cutoff frequency but also corruption of the passband itself, in turn resulting in a restored image that is poorer than the original. Obtaining sampled image data at a rate higher than the Nyquist rate can be accomplished either during data collection by modifying the acquisition hardware or as a post-acquisition signal processing step. If the ultimate goal in obtaining the oversampled image is to perform super- resolution, however, upsampling operations implemented as part of the overall signal processing software can offer several important benefits compared to acquiring oversampled data by hardware methods (such as by increasing number of detector elements in the sensor array or by microscanning). In this paper, we shall give a mathematical characterization of the process of image representation on a sample grid and establish the role of

  16. Optimized 3D-NMR sampling for resonance assignment of partially unfolded proteins.

    Science.gov (United States)

    Pannetier, Nicolas; Houben, Klaartje; Blanchard, Laurence; Marion, Dominique

    2007-05-01

    Resonance assignment of NMR spectra of unstructured proteins is made difficult by severe overlap due to the lack of secondary structure. Fortunately, this drawback is partially counterbalanced by the narrow line-widths due to the internal flexibility. Alternate sampling schemes can be used to achieve better resolution in less experimental time. Deterministic schemes (such as radial sampling) suffer however from the presence of systematic artifacts. Random acquisition patterns can alleviate this problem by randomizing the artifacts. We show in this communication that quantitative well-resolved spectra can be obtained, provided that the data points are properly weighted before FT. These weights can be evaluated using the concept of Voronoi cells associated with the data points. The introduced artifacts do not affect the direct surrounding of the peaks and thus do not alter the amplitude and frequency of the signals. This procedure is illustrated on 60-residue viral protein, which lacks any persistent secondary structure and thus exhibits major signal overlap.

  17. H-ATLAS High-Z Sources: An Optimal Sample for Cross-Correlation Analyses

    CERN Document Server

    González-Nuevo, J; Bianchini, F

    2014-01-01

    We report a highly significant (> 10σ) spatial correlation between galaxies with S350μm ≥ 30 mJy detected in the equatorial fields of the Herschel Astrophysical Terahertz Large Area Survey (H-ATLAS) with estimated redshifts ≳ 1.5, and SDSS or GAMA galaxies at 0.2 ≤ z ≤ 0.6. The significance of the cross-correlation is much higher than those reported so far for samples with non-overlapping redshift distributions selected in other wavebands.

  18. Optimized Routing of Intelligent, Mobile Sensors for Dynamic, Data-Driven Sampling

    Science.gov (United States)

    2016-09-27

    ...estimation and control, to design coordinated sampling trajectories that yield the most informative measurements of estimated dynamical and stochastic...the DDDAS concept in which measurement data is used to update the model description and the updated model is used to guide subsequent measurements.

  19. Optimal sample storage and extraction protocols for reliable multilocus genotyping of the human parasite Schistosoma mansoni.

    Science.gov (United States)

    Van den Broeck, F; Geldof, S; Polman, K; Volckaert, F A M; Huyse, T

    2011-08-01

    Genotyping individual larval stages and eggs of natural parasite populations is complicated by the difficulty of obtaining reliable genotypes from low quantity DNA template. A suitable storage and extraction protocol, together with a thorough quantification of genotyping errors are therefore crucial for molecular epidemiological studies. Here we test the robustness, handling time, ease of use, cost effectiveness and success rate of various fixation (Whatman FTA(®) Classic and Elute Cards, 70% EtOH and RNAlater(®)) and subsequent DNA extraction methods (commercial kits and proteinase K protocol). None of these methods require a cooling chain and are therefore suitable for field collection. Based on a multiplex microsatellite PCR with nine loci the success and reliability of each technique is evaluated by the proportion of samples with at least eight scored loci and the proportion of genotyping errors. If only the former is taken into account, FTA(®) Elute is recommended (83% success; 44% genotyping error; 0.2 €/sample; 1h 20 m handling time). However, when also considering the genotyping errors, handling time and ease of use, we opt for 70% EtOH with the 96-well plate technology followed by a simple proteinase K extraction (73% success; 0% genotyping error; 0.2 €/sample; 15m handling time). For eggs we suggest (1) to pool all eggs per person in 1.5 ml tubes filled with 70% EtOH for transport and (2) to identify each egg to species level prior to genotyping. To this end we extended the Rapid diagnostic PCR developed by Webster et al. (2010) with a S. mansoni-specific primer to discriminate between S. mansoni, S. haematobium and S. bovis in a single PCR reaction. The success rate of genotyping eggs was 75% (0% genotyping error). This is the first study to incorporate genotyping errors through re-amplification for the evaluation of schistosome sampling protocols and the identification of error-prone loci.

  20. Optimal sampling efficiency in Monte Carlo simulation with an approximate potential.

    Science.gov (United States)

    Coe, Joshua D; Sewell, Thomas D; Shaw, M Sam

    2009-04-28

    Building on the work of Iftimie et al. [J. Chem. Phys. 113, 4852 (2000)] and Gelb [J. Chem. Phys. 118, 7747 (2003)], Boltzmann sampling of an approximate potential (the "reference" system) is used to build a Markov chain in the isothermal-isobaric ensemble. At the end points of the chain, the energy is evaluated at a more accurate level (the "full" system) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. For reference system chains of sufficient length, consecutive full energies are statistically decorrelated and thus far fewer are required to build ensemble averages with a given variance. Without modifying the original algorithm, however, the maximum reference chain length is too short to decorrelate full configurations without dramatically lowering the acceptance probability of the composite move. This difficulty stems from the fact that the reference and full potentials sample different statistical distributions. By manipulating the thermodynamic variables characterizing the reference system (pressure and temperature, in this case), we maximize the average acceptance probability of composite moves, lengthening significantly the random walk between consecutive full energy evaluations. In this manner, the number of full energy evaluations needed to precisely characterize equilibrium properties is dramatically reduced. The method is applied to a model fluid, but implications for sampling high-dimensional systems with ab initio or density functional theory potentials are discussed.
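
    The composite-move idea can be sketched as follows: many cheap Metropolis steps under a reference potential serve as the proposal, and a single accept/reject using the difference potential U_full − U_ref corrects the chain to the full distribution. The one-dimensional double-well "full" potential and the cheap "reference" approximation are toy stand-ins, and the isothermal-isobaric and thermodynamic-retuning aspects of the paper are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    beta = 1.0
    U_full = lambda x: (x**2 - 1.0)**2              # "full" (expensive) potential: double well
    U_ref  = lambda x: 2.0 * (abs(x) - 1.0)**2      # cheap "reference" approximation

    def ref_chain(x, nsteps=20, step=0.3):
        """Ordinary Metropolis walk that samples exp(-beta * U_ref); its end point is the proposal."""
        for _ in range(nsteps):
            y = x + rng.normal(0.0, step)
            if np.log(rng.random()) < -beta * (U_ref(y) - U_ref(x)):
                x = y
        return x

    x, samples = 0.0, []
    for sweep in range(20000):
        y = ref_chain(x)                            # composite proposal: many cheap steps
        dU = (U_full(y) - U_ref(y)) - (U_full(x) - U_ref(x))
        if np.log(rng.random()) < -beta * dU:       # correction with the expensive potential
            x = y
        samples.append(x)

    print("mean |x| under the full potential:", round(float(np.mean(np.abs(samples))), 3))
    ```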

  1. Evaluation of dissolved oxygen in water by artificial neural network and sample optimization

    Institute of Scientific and Technical Information of China (English)

    陈丽华; 李丽

    2008-01-01

    Three important factors that directly influence the dissolved oxygen (DO) of a river, namely the outflow, the water temperature and the pH, were used as input parameters to set up a BP neural network based on the Levenberg-Marquardt algorithm. The neural network model was proposed to evaluate DO in water. The model contains two parts: first, the learning samples are unified; second, the neural network is trained on the unified samples to determine the best number of hidden-layer nodes. The proposed model is applied to assessing the DO concentration of the Yellow River in Lanzhou city. The evaluation results are compared with those of the neural network method and with the results reported for Lanzhou city. The comparison indicates that the performance of the neural network model is practically feasible for the assessment of DO. At the same time, the linear interpolation method can increase the number of learning samples and thereby improve the prediction precision of the network.
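
    As a rough stand-in for the described model (which uses Levenberg-Marquardt training, not available in scikit-learn), the sketch below normalizes three synthetic predictors, namely outflow, water temperature, and pH, and fits a small multilayer perceptron to predict DO; the data-generating relationship, value ranges, and network size are assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 300
    # Synthetic stand-ins for the three predictors (assumed ranges).
    X = np.column_stack([rng.uniform(200, 1200, n),    # outflow (m3/s)
                         rng.uniform(2, 28, n),        # water temperature (deg C)
                         rng.uniform(6.5, 9.0, n)])    # pH
    # Assumed toy relationship: DO falls with temperature, rises slightly with flow and pH.
    y = 12 - 0.25 * X[:, 1] + 0.001 * X[:, 0] + 0.3 * (X[:, 2] - 7.5) + rng.normal(0, 0.3, n)

    model = make_pipeline(StandardScaler(),            # the "unification" (scaling) step
                          MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0))
    model.fit(X[:250], y[:250])
    print("held-out R^2:", round(model.score(X[250:], y[250:]), 3))
    ```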

  2. Population Pharmacokinetics of Gemcitabine and dFdU in Pancreatic Cancer Patients Using an Optimal Design, Sparse Sampling Approach.

    Science.gov (United States)

    Serdjebi, Cindy; Gattacceca, Florence; Seitz, Jean-François; Fein, Francine; Gagnière, Johan; François, Eric; Abakar-Mahamat, Abakar; Deplanque, Gael; Rachid, Madani; Lacarelle, Bruno; Ciccolini, Joseph; Dahan, Laetitia

    2017-06-01

    Gemcitabine remains a pillar in pancreatic cancer treatment. However, toxicities are frequently observed. Dose adjustment based on therapeutic drug monitoring might help decrease the occurrence of toxicities. In this context, this work aims at describing the pharmacokinetics (PK) of gemcitabine and its metabolite dFdU in pancreatic cancer patients and at identifying the main sources of their PK variability using a population PK approach, despite a sparsely sampled population and heterogeneous administration and sampling protocols. Data from 38 patients were included in the analysis. The 3 optimal sampling times were determined using KineticPro and the population PK analysis was performed on Monolix. Available patient characteristics, including cytidine deaminase (CDA) status, were tested as covariates. Correlation between PK parameters and occurrence of severe hematological toxicities was also investigated. A two-compartment model best fitted the gemcitabine and dFdU PK data (volume of distribution and clearance for gemcitabine: V1 = 45 L and CL1 = 4.03 L/min; for dFdU: V2 = 36 L and CL2 = 0.226 L/min). Renal function was found to influence gemcitabine clearance, and body surface area to impact the volume of distribution of dFdU. However, neither CDA status nor the occurrence of toxicities was correlated to PK parameters. Despite sparse sampling and heterogeneous administration and sampling protocols, population and individual PK parameters of gemcitabine and dFdU were successfully estimated using Monolix population PK software. The estimated parameters were consistent with previously published results. Surprisingly, CDA activity did not influence gemcitabine PK, which was explained by the absence of CDA-deficient patients enrolled in the study. This work suggests that even sparse data are valuable to estimate population and individual PK parameters in patients, which will be usable to individualize the dose for an optimized benefit to risk ratio.

  3. Optimal Feature Extraction for Discriminating Raman Spectra of Different Skin Samples using Statistical Methods and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Zohreh Dehghani Bidgoli

    2011-06-01

    Introduction: Raman spectroscopy, a spectroscopic technique based on inelastic scattering of monochromatic light, can provide valuable information about molecular vibrations, so it can be used to study molecular changes in a sample. Material and Methods: In this research, 153 Raman spectra were obtained from normal and dried skin samples. Baseline and electrical noise were eliminated in the preprocessing stage, with subsequent normalization of the Raman spectra. Then, using statistical analysis and a genetic algorithm, optimal features for discrimination between the two classes were sought. In the statistical analysis, the T test, the Bhattacharyya distance and the entropy between the two classes were calculated as criteria for choosing optimal features; since the T test discriminated the two classes best, it was used for selecting features. A genetic algorithm was also used to select optimal features. Finally, using the selected features and classifiers such as LDA, KNN, SVM and a neural network, the two classes were discriminated. Results: Comparing the classifier results under the various feature-selection strategies and classifiers, the best results were obtained with the combination of genetic-algorithm feature selection and SVM classification. Using this combination, normal and dried skin samples were discriminated with an accuracy of 90%, a sensitivity of 89% and a specificity of 91%. Discussion and Conclusion: According to the results, the genetic algorithm performs better than statistical analysis in selecting discriminating features of Raman spectra. In addition, the results illustrate the potential of Raman spectroscopy for studying the effects of different materials on skin and skin diseases related to skin dehydration.
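
    The feature-selection-plus-classification pipeline in the abstract can be sketched as a simple genetic algorithm over binary feature masks scored by cross-validated SVM accuracy. The data below are a synthetic stand-in for the 153 spectra, and the population size, mutation rate and linear-kernel SVC are arbitrary assumptions rather than the study's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Stand-in data: 153 "spectra" x 200 "wavenumber" features, two classes (normal vs dried skin).
X, y = make_classification(n_samples=153, n_features=200, n_informative=15,
                           n_redundant=10, random_state=1)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="linear"), X[:, mask.astype(bool)], y, cv=5).mean()

# Simple GA over binary feature masks.
pop_size, n_gen, p_mut = 30, 25, 0.02
pop = rng.random((pop_size, X.shape[1])) < 0.1          # sparse initial masks
for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[:pop_size // 2]]                 # truncation selection
    children = []
    while len(children) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
        child ^= rng.random(X.shape[1]) < p_mut          # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", int(best.sum()), "CV accuracy: %.3f" % fitness(best))
```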

  4. Soil moisture optimal sampling strategy for Sentinel 1 validation super-sites in Poland

    Science.gov (United States)

    Usowicz, Boguslaw; Lukowski, Mateusz; Marczewski, Wojciech; Lipiec, Jerzy; Usowicz, Jerzy; Rojek, Edyta; Slominska, Ewa; Slominski, Jan

    2014-05-01

    Soil moisture (SM) exhibits high temporal and spatial variability that depends not only on the rainfall distribution, but also on the topography of the area, the physical properties of the soil and the vegetation characteristics. This large variability does not allow reliable estimation of SM in the surface layer from ground point measurements alone, especially at large spatial scales. Remote sensing measurements allow the spatial distribution of SM in the surface layer of the Earth to be estimated better than point measurements, but they require validation. This study attempts to characterize the SM distribution by determining its spatial variability in relation to the number and location of ground point measurements. The strategy takes into account gravimetric and TDR measurements with different sampling steps, abundance and distribution of measuring points at the scales of an arable field, a wetland and a commune (areas of 0.01, 1 and 140 km2, respectively), under different SM conditions. Mean values of SM were only slightly sensitive to changes in the number and arrangement of sampling points, whereas the parameters describing the dispersion responded more significantly. Spatial analysis showed autocorrelations of the SM whose lengths depended on the number and the distribution of points within the adopted grids. Directional analysis revealed a differentiated anisotropy of SM for different grids and numbers of measuring points. It can therefore be concluded that both the number of samples and their layout over the experimental area were reflected in the parameters characterizing the SM distribution. This suggests the need for at least two sampling variants, differing in the number and positioning of the measurement points, with at least 20 points in each. This is due to the standard error and the range of spatial variability, which change little as the number of samples increases above this figure. Gravimetric method

  5. Evaluating the interaction of faecal pellet deposition rates and DNA degradation rates to optimize sampling design for DNA-based mark-recapture analysis of Sonoran pronghorn.

    Science.gov (United States)

    Woodruff, S P; Johnson, T R; Waits, L P

    2015-07-01

    Knowledge of population demographics is important for species management but can be challenging in low-density, wide-ranging species. Population monitoring of the endangered Sonoran pronghorn (Antilocapra americana sonoriensis) is critical for assessing the success of recovery efforts, and noninvasive DNA sampling (NDS) could be more cost-effective and less intrusive than traditional methods. We evaluated faecal pellet deposition rates and faecal DNA degradation rates to maximize sampling efficiency for DNA-based mark-recapture analyses. Deposition data were collected at five watering holes using sampling intervals of 1-7 days and averaged one pellet pile per pronghorn per day. To evaluate nuclear DNA (nDNA) degradation, 20 faecal samples were exposed to local environmental conditions and sampled at eight time points from one to 124 days. Average amplification success rates for six nDNA microsatellite loci were 81% for samples on day one, 63% by day seven, 2% by day 14 and 0% by day 60. We evaluated the efficiency of different sampling intervals (1-10 days) by estimating the number of successful samples, the success rate of individual identification and the laboratory costs per successful sample. Cost per successful sample increased and success and efficiency declined as the sampling interval increased. Results indicate NDS of faecal pellets is a feasible method for individual identification, population estimation and demographic monitoring of Sonoran pronghorn. We recommend that collecting samples at a sampling interval of four to seven days in summer conditions (i.e., extreme heat and exposure to UV light) will achieve desired sample sizes for mark-recapture analysis while also maximizing efficiency. © 2014 John Wiley & Sons Ltd.
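
    The interval-versus-efficiency trade-off quantified above can be illustrated with a back-of-the-envelope calculation that combines the reported deposition rate (about one pellet pile per pronghorn per day) and the reported amplification success rates. Linear interpolation between the reported time points and the per-sample laboratory cost are assumptions, not values from the study.

```python
import numpy as np

# Reported nDNA amplification success vs. faecal sample age (days), taken from the abstract.
age_days = np.array([1, 7, 14, 60])
success  = np.array([0.81, 0.63, 0.02, 0.0])

cost_per_sample = 25.0    # assumed lab cost per sample analysed (arbitrary units)
piles_per_day   = 1.0     # roughly one pellet pile per pronghorn per day (abstract)

for interval in range(1, 11):
    # Samples collected at the end of the interval have ages of roughly 1..interval days.
    ages = np.arange(1, interval + 1)
    p_success = np.interp(ages, age_days, success)          # interpolated success by age
    expected_success = piles_per_day * p_success.sum()      # per pronghorn per visit
    collected = piles_per_day * interval
    cost_per_success = cost_per_sample * collected / expected_success
    print(f"interval {interval:2d} d: expected successes/visit = "
          f"{expected_success:4.2f}, cost per success = {cost_per_success:6.1f}")
```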

  6. Arsenic speciation by hydride generation-quartz furnace atomic absorption spectrometry. Optimization of analytical parameters and application to environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Molenat, N.; Astruc, A.; Holeman, M.; Pinel, R. [Laboratoire de Chimie Analytique Bioinorganique et Environnement, Dept. de Chimie, Faculte des Sciences et Techniques, 64 - Pau (France); Maury, G. [Montpellier-2 Univ., 34 (France). Dept. de Chimie Organique Fine

    1999-11-01

    Analytical parameters of hydride generation, trapping, gas chromatography and atomic absorption spectrometry detection in a quartz cell furnace (HG/GC/QFAAS) device have been optimized in order to develop an efficient and sensitive method for arsenic compound speciation. Good performance was obtained, with absolute detection limits in the range of 0.1-0.5 ng for arsenite, arsenate, mono-methyl-arsonic acid (MMAA), dimethyl-arsinic acid (DMAA) and trimethyl-arsine oxide (TMAO). A pH-selective reduction for inorganic arsenic speciation was successfully reported. Application to the accurate determination of arsenic compounds in different environmental samples was performed. (authors)

  7. Adaptive multi-sample-based photoacoustic tomography with imaging quality optimization

    Institute of Scientific and Technical Information of China (English)

    Yuxin Wang; Jie Yuan; Sidan Du; Xiaojun Liu; Guan Xu; Xueding Wang

    2015-01-01

    The energy of light exposed on human skin is strictly limited for safety reasons, which affects the power of the photoacoustic (PA) signal and its signal-to-noise ratio (SNR) level. Thus, the final reconstructed PA image quality is degraded. This Letter proposes an adaptive multi-sample-based approach to enhance the SNR of PA signals; in addition, detailed information in rebuilt PA images that used to be buried in the noise can be distinguished. Both ex vivo and in vivo experiments are conducted to validate the effectiveness of our proposed method, which demonstrates its potential value in clinical trials.

  8. Research on the Parameter Optimization Method for the Interval Grey Number Prediction Model Based on Cramer's Rule

    Institute of Scientific and Technical Information of China (English)

    曾波; 石娟娟; 周雪玉

    2015-01-01

    The parameter optimization method of the interval grey number prediction model is studied in order to improve its simulation and prediction performance. Cramer's rule is applied to derive a new unbiased parameter-estimation method for the general form of the GM(1,1) model built on the kernel sequence of interval grey numbers, and the unbiasedness of the new method in simulating non-homogeneous exponential kernel sequences is proven theoretically. On this basis, a new interval grey number prediction model is constructed. Comparison of its simulation accuracy with that of the interval grey number prediction model without parameter optimization shows that the new model has better simulation and prediction performance. These findings enrich the methodology of grey prediction models and extend the range of application of grey models.
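
    As a point of reference, the classical least-squares GM(1,1) estimation on a kernel sequence can be written with its 2x2 normal equations solved by Cramer's rule, as sketched below; this is the common textbook form, not the paper's unbiased variant, and the example kernel sequence is invented.

```python
import numpy as np

def gm11_cramer(x0):
    """Fit a GM(1,1) model x0(k) + a*z1(k) = b, solving the 2x2 normal
    equations with Cramer's rule, and return fitted values plus a 1-step forecast."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                # accumulated (AGO) series
    z1 = 0.5 * (x1[1:] + x1[:-1])                     # background values
    y  = x0[1:]

    # Least squares for [a, b]: minimise ||y - (-a*z1 + b)||^2.
    # Normal equations: [sum z1^2, -sum z1; -sum z1, n] [a, b]^T = [-sum z1*y, sum y]^T
    n = len(y)
    A11, A12 = np.sum(z1 ** 2), -np.sum(z1)
    A21, A22 = -np.sum(z1), float(n)
    b1, b2 = -np.sum(z1 * y), np.sum(y)
    det = A11 * A22 - A12 * A21                       # Cramer's rule
    a = (b1 * A22 - A12 * b2) / det
    b = (A11 * b2 - b1 * A21) / det

    k = np.arange(len(x0) + 1)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])       # inverse AGO
    x0_hat[0] = x0[0]
    return a, b, x0_hat                               # last element = 1-step forecast

# Example on a short, made-up "kernel" sequence of interval grey numbers.
kernel = [2.87, 3.28, 3.34, 3.82, 4.25, 4.58]
a, b, fit = gm11_cramer(kernel)
print("a = %.4f, b = %.4f" % (a, b))
print("fitted + forecast:", np.round(fit, 3))
```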

  9. Multiple sampling in one day to optimize smear microscopy in children with tuberculosis in Yemen.

    Directory of Open Access Journals (Sweden)

    Nasher Al-Aghbari

    BACKGROUND AND AIM: The diagnosis of pulmonary tuberculosis (TB) in children is difficult and often requires hospitalization. We explored whether the yield of specimens collected for smear microscopy from different anatomical sites in one visit is comparable to the yield of specimens collected from a single anatomical site over several days. METHODOLOGY AND PRINCIPAL FINDINGS: Children with signs/symptoms of pulmonary TB attending a reference hospital in Sana'a, Yemen, underwent one nasopharyngeal aspirate (NPA) on the first day of consultation and three gastric aspirates (GA) plus three expectorated/induced sputa over 3 consecutive days. Specimens were examined using smear microscopy (Ziehl-Neelsen) and cultured in solid media (Ogawa). Two hundred and thirteen children (aged 2 months-15 years) were enrolled. One hundred and ninety-seven (93%) underwent nasopharyngeal aspirates, 196 (92%) GA, 122 (57%) expectorated sputum and 88 induced sputum. A total of 1309 specimens were collected, requiring 237 hospitalization days. In total, 29 (13.6%) children were confirmed by culture and 18 (8.5%) by smear microscopy. The NPA identified 10 of the 18 smear-positives; three consecutive GA identified 10, and induced/expectorated sputa identified 13 (6 by induced sputum, 8 by expectorated sputum and one positive by both). In comparison, 22 (3.7%) of 602 specimens obtained on the first day were smear-positive and identified 14 (6.6%) smear-positive children. CONCLUSION/SIGNIFICANCE: The examination of multiple tests on the first day of consultation identified a similar proportion of smear-positive children as specimens collected over several days; it would require half the number of tests and significantly less hospitalization. Optimized smear microscopy approaches for children should be explored further.

  10. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas

    Directory of Open Access Journals (Sweden)

    Joel Adu-Brimpong

    2017-03-01

    Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments' representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (mean = 29.4 ± 6.9) and overall neighborhood quality scores 172–475 (mean = 352.3 ± 63.6). Walk Scores® ranged 0–91 (mean = 46.7 ± 26.3). Street segment combinations' correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating the impact of specific built environment features on health behaviors and outcomes.

  11. Optimization of FRAM precision for isotopic measurements on large samples of low-burnup PuO2

    Energy Technology Data Exchange (ETDEWEB)

    Vo, Duc T [Los Alamos National Laboratory; Wenz, Tracy R [Los Alamos National Laboratory; Sampson, Thomas E [Los Alamos National Laboratory

    2009-01-01

    The gamma-ray spectrum of plutonium contains measurable gamma rays ranging in energy from 60 keV to above 1 MeV. The FRAM gamma-ray isotopic analysis code can analyze data from all types of HPGe detectors in this energy range, typically using planar detectors in the energy range 60-210 keV or 120-451 keV and coaxial detectors in the energy ranges 120-451 keV or 200-1001 keV. The statistical measurement precision depends upon the detector/energy-range combination as well as the characteristics of the sample and any additional filters. In this paper we carry out the optimization of measurement precision for the important case of a multi-kg sample of low-burnup PuO2 contained in a DOE 3013 Standard-compatible long-term storage container.

  12. Optimization of the treatment of wheat samples for the determination of phytic acid by HPLC with refractive index detection.

    Science.gov (United States)

    Amaro, Rosa; Murillo, Miguel; González, Zurima; Escalona, Andrés; Hernández, Luís

    2009-01-01

    The treatment of wheat samples was optimized before the determination of phytic acid by high-performance liquid chromatography with refractive index detection. Drying by lyophilization and oven drying were studied; drying by lyophilization gave better results, confirming that this step is critical in preventing significant loss of analyte. In the extraction step, washing of the residue and collection of this water before retention of the phytates in the NH2 Sep-Pak cartridge were important. The retention of phytates in the NH2 Sep-Pak cartridge and elimination of the HCl did not produce significant loss (P = 0.05) in the phytic acid content of the sample. Recoveries of phytic acid averaged 91%, which is a substantial improvement with respect to values reported by others using this methodology.

  13. Competitive Comparison of Optimal Designs of Experiments for Sampling-based Sensitivity Analysis

    CERN Document Server

    Janouchova, Eliska

    2012-01-01

    Nowadays, the numerical models of real-world structures are more precise, more complex and, of course, more time-consuming. Despite the growth in computational power, the exploration of model behaviour remains a complex task. Sensitivity analysis is a basic tool for investigating the sensitivity of a model to its inputs. One widely used strategy to assess the sensitivity is based on a finite set of simulations for given sets of input parameters, i.e. points in the design space. An estimate of the sensitivity can then be obtained by computing correlations between the input parameters and the chosen response of the model. The accuracy of the sensitivity prediction depends on the choice of design points, called the design of experiments. The aim of the presented paper is to review and compare available criteria determining the quality of the design of experiments suitable for sampling-based sensitivity analysis.

  14. Forecasting by an Interval Type-2 Fuzzy Logic System Optimized with the QPSO Algorithm

    Institute of Scientific and Technical Information of China (English)

    陈阳; 王大志; 宁武

    2016-01-01

    A kind of interval type-2 fuzzy logic system was designed to investigate forecasting problems based on historical data. In the design of the interval type-2 fuzzy logic system, the primary membership functions of the antecedent, consequent and input-measurement interval type-2 fuzzy sets were all Gaussian type-2 membership functions with uncertain standard deviation. The quantum-behaved particle swarm optimization (QPSO) algorithm was used to tune the parameters of the designed interval type-2 fuzzy logic system. Part of the load-competition data of the European Network on Intelligent Technologies (EUNITE) and the West Texas Intermediate (WTI) crude oil price data were used to test the proposed fuzzy-logic-system forecasting method. A comprehensive evaluation error sum was defined as the forecasting performance index of the fuzzy logic system. Simulation studies showed that the proposed interval type-2 fuzzy logic system forecasting method outperforms the corresponding type-1 fuzzy logic system in convergence and stability.
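
    A minimal quantum-behaved PSO (QPSO) of the kind used to tune the membership-function parameters can be sketched as follows. The objective here is a toy function standing in for the fuzzy system's forecasting error, and the contraction-expansion coefficient, swarm size and bounds are assumptions.

```python
import numpy as np

def qpso(objective, dim, n_particles=30, n_iter=200, bounds=(-5.0, 5.0),
         beta=0.75, seed=0):
    """Minimal quantum-behaved PSO: particles move around attractors drawn between
    their personal best and the global best, with a jump length scaled by the
    distance to the mean best position."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    pbest = x.copy()
    pval = np.array([objective(p) for p in pbest])
    gbest = pbest[np.argmin(pval)].copy()

    for _ in range(n_iter):
        mbest = pbest.mean(axis=0)                       # mean best position
        phi = rng.random((n_particles, dim))
        attractor = phi * pbest + (1 - phi) * gbest      # local attractors
        u = 1.0 - rng.random((n_particles, dim))         # u in (0, 1]
        sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
        x = attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        x = np.clip(x, lo, hi)
        fx = np.array([objective(p) for p in x])
        improved = fx < pval
        pbest[improved] = x[improved]
        pval[improved] = fx[improved]
        gbest = pbest[np.argmin(pval)].copy()
    return gbest, pval.min()

# Toy objective standing in for the fuzzy system's forecasting error
# (the Gaussian membership-function centres/deviations would form the decision vector).
sphere = lambda p: float(np.sum(p ** 2))
best, err = qpso(sphere, dim=8)
print("best error:", err)
```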

  15. An Assets and Liabilities Optimization Model for Risk Control Based on Nonlinear Interval Numbers

    Institute of Scientific and Technical Information of China (English)

    冯宝军; 闰达文; 迟国泰

    2012-01-01

    Existing studies treat deposit and lending rates as constants, so the optimized asset allocation cannot adapt to future changes in market interest rates. The asset-liability management optimization model in this paper constructs an interval-type interest rate risk immunization condition through the duration gap of the interval numbers of assets and liabilities, so that the optimal asset allocation remains immune to interest rate risk when the yields of assets and liabilities change. The study shows that the bias-selection parameter γ of the duration-gap interval determines whether the reserved gap makes or loses money. When γ is 0.5, the absolute values of both endpoints of the gap interval are at their minimum; the more γ exceeds 0.5, the larger the positive gap and the more money is earned when interest rates decline; the more γ falls below 0.5, the larger the negative gap and the more money is earned when interest rates rise. The interval-length selection parameter determines the magnitude of the gains and losses; in an active interest-rate risk management strategy, choosing a smaller λ yields a larger risk return. In addition, the paper builds a functional expression for nonlinear interval-type portfolio risk through correlation-coefficient-combined semi-absolute deviation, correcting the drawback of existing linear interval-type algorithms that simply weight individual loan risks linearly and thereby overstate portfolio credit risk.

  16. Plasma treatment of bulk niobium surface for superconducting rf cavities: Optimization of the experimental conditions on flat samples

    Directory of Open Access Journals (Sweden)

    M. Rašković

    2010-11-01

    Accelerator performance, in particular the average accelerating field and the cavity quality factor, depends on the physical and chemical characteristics of the superconducting radio-frequency (SRF) cavity surface. Plasma-based surface modification provides an excellent opportunity to eliminate non-superconductive pollutants in the penetration-depth region and to remove the mechanically damaged surface layer, which improves the surface roughness. Here we show that plasma treatment of bulk niobium (Nb) presents an alternative surface preparation method to the commonly used buffered chemical polishing and electropolishing methods. We have optimized the experimental conditions in the microwave glow discharge system and their influence on the Nb removal rate on flat samples. We have achieved an etching rate of 1.7 μm/min using only 3% chlorine in the reactive mixture. Combining a fast etching step with a moderate one, we have improved the surface roughness without exposing the sample surface to the environment. We intend to apply the optimized experimental conditions to the preparation of single-cell cavities, pursuing the improvement of their rf performance.

  17. Optimization of microwave-assisted extraction with saponification (MAES) for the determination of polybrominated flame retardants in aquaculture samples.

    Science.gov (United States)

    Fajar, N M; Carro, A M; Lorenzo, R A; Fernandez, F; Cela, R

    2008-08-01

    The efficiency of microwave-assisted extraction with saponification (MAES) for the determination of seven polybrominated flame retardants (polybrominated biphenyls, PBBs; and polybrominated diphenyl ethers, PBDEs) in aquaculture samples is described and compared with microwave-assisted extraction (MAE). Chemometric techniques based on experimental designs and desirability functions were used for simultaneous optimization of the operational parameters used in both MAES and MAE processes. Application of MAES to this group of contaminants in aquaculture samples, which had not been previously applied to this type of analytes, was shown to be superior to MAE in terms of extraction efficiency, extraction time and lipid content extracted from complex matrices (0.7% as against 18.0% for MAE extracts). PBBs and PBDEs were determined by gas chromatography with micro-electron capture detection (GC-μECD). The quantification limits for the analytes were 40-750 pg g⁻¹ (except for BB-15, which was 1.43 ng g⁻¹). Precision for MAES-GC-μECD (%RSD < 11%) was significantly better than for MAE-GC-μECD (%RSD < 20%). The accuracy of both optimized methods was satisfactorily demonstrated by analysis of appropriate certified reference material (CRM), WMF-01.

  18. Optimized Field Sampling and Monitoring of Airborne Hazardous Transport Plumes; A Geostatistical Simulation Approach

    Energy Technology Data Exchange (ETDEWEB)

    Chen, DI-WEN

    2001-11-21

    Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often are days post-incident and rely on hazardous manned air-vehicle measurements. Before this happens, computational plume migration models are the only source of information on the plume characteristics, constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry light-weight detectors and weather instrumentation to measure the conditions during and after plume release. This requires a cooperative, computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives in an emergency situation in restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program proceeds is the collection of data that provides statistically significant information, collected in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic or geostatistical simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images, based on existing "hard" experimental data and "soft" preliminary transport modeling results of the Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian

  19. Interval arithmetic in calculations

    Science.gov (United States)

    Bairbekova, Gaziza; Mazakov, Talgat; Djomartova, Sholpan; Nugmanova, Salima

    2016-10-01

    Interval arithmetic is the mathematical structure which, for real intervals, defines operations analogous to ordinary arithmetic ones. This field of mathematics is also called interval analysis or interval calculations. The given mathematical model is convenient for investigating various applied objects: quantities whose approximate values are known; quantities obtained during calculations whose values are not exact because of rounding errors; and random quantities. As a whole, the idea of interval calculations is the use of intervals as basic data objects. In this paper, we considered the definition of interval mathematics, investigated its properties, proved a theorem, and showed the efficiency of the new interval arithmetic. Besides, we briefly reviewed the works devoted to interval analysis and observed basic tendencies in the development of interval analysis and interval calculations.
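
    The basic idea of using intervals as data objects can be shown with a small interval type implementing the four arithmetic operations; this is a generic textbook sketch, not the specific arithmetic proposed in the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

    def __truediv__(self, other):
        if other.lo <= 0.0 <= other.hi:
            raise ZeroDivisionError("divisor interval contains zero")
        return self * Interval(1.0 / other.hi, 1.0 / other.lo)

# A measured quantity 2.0 +/- 0.1 combined with one known only to lie in [0.9, 1.1]:
a, b = Interval(1.9, 2.1), Interval(0.9, 1.1)
print(a + b, a - b, a * b, a / b)
```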

  20. Stop-and-Stare: Optimal Sampling Algorithms for Viral Marketing in Billion-scale Networks

    CERN Document Server

    Nguyen, Hung T; Dinh, Thang N

    2016-01-01

    Influence Maximization (IM), that seeks a small set of key users who spread the influence widely into the network, is a core problem in multiple domains. It finds applications in viral marketing, epidemic control, and assessing cascading failures within complex systems. Despite the huge amount of effort, IM in billion-scale networks such as Facebook, Twitter, and World Wide Web has not been satisfactorily solved. Even the state-of-the-art methods such as TIM+ and IMM may take days on those networks. In this paper, we propose SSA and D-SSA, two novel sampling frameworks for IM-based viral marketing problems. SSA and D-SSA are up to 1200 times faster than the SIGMOD 15 best method, IMM, while providing the same $(1- 1/e-\\epsilon)$ approximation guarantee. Underlying our frameworks is an innovative Stop-and-Stare strategy in which they stop at exponential check points to verify (stare) if there is adequate statistical evidence on the solution quality. Theoretically, we prove that SSA and D-SSA are the first appr...

  1. Optimization of multiple muco-cutaneous site sampling method for screening MRSA colonization in ICU

    Directory of Open Access Journals (Sweden)

    Priya Datta

    2013-01-01

    Aims: Active screening for methicillin-resistant Staphylococcus aureus (MRSA) carriers remains a vital component of infection control policy in any health-care setting. The relative advantage of multiple anatomical site screening for detecting MRSA carriers is well recognized. However, this leads to an increase in the financial and logistical load in a developing-world scenario. The objective of our study was to determine the sensitivity of MRSA screening of the nose, throat, axilla, groin, perineum and the site of catheterization (central line catheter) individually among intensive care unit patients and to compare it with the sensitivity of multiple-site screening. Materials and Methods: Active surveillance of 400 patients was done to detect MRSA colonization; six sites (nose, throat, axilla, perineum, groin and site of catheter) were swabbed. Results and Discussion: The throat swab alone was able to detect the maximum number of MRSA carriers (76/90), with a sensitivity of 84.4%. Next in order of sensitivity was the nasal swab, which detected 77.7% of MRSA-colonized patients. When multiple sites were screened, the sensitivity for MRSA detection increased to 95%. Conclusions: We found that though the throat represents the most common site of MRSA colonization, the nose or groin must also be sampled simultaneously to attain a higher sensitivity.

  2. Optimization of a Novel Non-invasive Oral Sampling Technique for Zoonotic Pathogen Surveillance in Nonhuman Primates.

    Science.gov (United States)

    Smiley Evans, Tierra; Barry, Peter A; Gilardi, Kirsten V; Goldstein, Tracey; Deere, Jesse D; Fike, Joseph; Yee, JoAnn; Ssebide, Benard J; Karmacharya, Dibesh; Cranfield, Michael R; Wolking, David; Smith, Brett; Mazet, Jonna A K; Johnson, Christine K

    2015-01-01

    Free-ranging nonhuman primates are frequent sources of zoonotic pathogens due to their physiologic similarity to humans and, in many tropical regions, their close contact with humans. Many high-risk disease transmission interfaces have not been monitored for zoonotic pathogens due to difficulties inherent to invasive sampling of free-ranging wildlife. Non-invasive surveillance of nonhuman primates for pathogens with high potential for spillover into humans is therefore critical for understanding the disease ecology of existing zoonotic pathogen burdens and identifying communities where zoonotic diseases are likely to emerge in the future. We developed a non-invasive oral sampling technique using ropes distributed to nonhuman primates to target viruses shed in the oral cavity, which through bite wounds and discarded food could be transmitted to people. Optimization was performed by testing paired rope and oral swabs from laboratory colony rhesus macaques for rhesus cytomegalovirus (RhCMV) and simian foamy virus (SFV) and implementing the technique with free-ranging terrestrial and arboreal nonhuman primate species in Uganda and Nepal. Both ubiquitous DNA and RNA viruses, RhCMV and SFV, were detected in oral samples collected from ropes distributed to laboratory colony macaques, and SFV was detected in free-ranging macaques and olive baboons. Our study describes a technique that can be used for disease surveillance in free-ranging nonhuman primates and, potentially, other wildlife species when invasive sampling techniques may not be feasible.

  3. Evaluation of the pre-posterior distribution of optimized sampling times for the design of pharmacokinetic studies.

    Science.gov (United States)

    Duffull, Stephen B; Graham, Gordon; Mengersen, Kerrie; Eccleston, John

    2012-01-01

    Information theoretic methods are often used to design studies that aim to learn about pharmacokinetic and linked pharmacokinetic-pharmacodynamic systems. These design techniques, such as D-optimality, provide the optimum experimental conditions. The performance of the optimum design will depend on the ability of the investigator to comply with the proposed study conditions. However, in clinical settings it is not possible to comply exactly with the optimum design and hence some degree of unplanned suboptimality occurs due to error in the execution of the study. In addition, due to the nonlinear relationship of the parameters of these models to the data, the designs are also locally dependent on an arbitrary choice of a nominal set of parameter values. A design that is robust to both study conditions and uncertainty in the nominal set of parameter values is likely to be of use clinically. We propose an adaptive design strategy to account for both execution error and uncertainty in the parameter values. In this study we investigate designs for a one-compartment first-order pharmacokinetic model. We do this in a Bayesian framework using Markov-chain Monte Carlo (MCMC) methods. We consider log-normal prior distributions on the parameters and investigate several prior distributions on the sampling times. An adaptive design was used to find the sampling window for the current sampling time conditional on the actual times of all previous samples.
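
    The study itself uses a Bayesian adaptive design evaluated with MCMC; as a simpler building block, a local D-optimal three-point design for a one-compartment model with first-order absorption can be computed by brute force as sketched below, with the nominal parameters, error level and candidate time grid all assumed for illustration.

```python
import numpy as np
from itertools import combinations

def conc(t, ka, CL, V, dose=100.0):
    """One-compartment model with first-order absorption and elimination."""
    ke = CL / V
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def fim(times, theta, sigma=0.1, h=1e-5):
    """Local Fisher information for an additive-error model, by finite differences."""
    times = np.asarray(times, float)
    grads = []
    for i in range(len(theta)):
        up, dn = list(theta), list(theta)
        up[i] += h
        dn[i] -= h
        grads.append((conc(times, *up) - conc(times, *dn)) / (2 * h))
    J = np.column_stack(grads)          # n_times x n_params sensitivity matrix
    return J.T @ J / sigma ** 2

# Nominal parameter values, assumed for illustration only.
theta0 = (1.2, 4.0, 40.0)               # ka (1/h), CL (L/h), V (L)
candidates = np.arange(0.5, 24.5, 0.5)  # candidate sampling times (h)

# Exhaustive search over 3-point designs for the locally D-optimal one.
best = max(combinations(candidates, 3),
           key=lambda d: np.linalg.det(fim(d, theta0)))
print("locally D-optimal 3-point design (h):", best)
```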

  4. Optimization of crude enzyme preparation methods for analysis of glutamine synthetase activity in phytoplankton and field samples

    Institute of Scientific and Technical Information of China (English)

    WANG Yujue; WANG Dazhi; HONG Huasheng

    2009-01-01

    Glutamine synthetase (GS) is an important enzyme involved in nitrogen assimilation and metabolism in marine phytoplankton. However, little work has been done in situ due to the limitations of crude enzyme preparation methods. In this study, three enzyme preparation methods, high-speed centrifugation (HC, <10 000 g), ultracentrifugation (UC, 70 000 g), and ultrafiltration (UF) with a 100 kDa molecular weight cutoff, were compared using two diatom species (Asterionellopsis glacialis and Thalassiosira weissflogii) and two dinoflagellate species (Alexandrium catenella and Prorocentrum donghaiense) as experimental materials, together with field samples collected from Xiamen Harbor, China. The results showed that HC is the best method to prepare crude enzymes for glutamine synthetase activity (GSA) in diatom species and diatom-dominant samples, while UF is the best method to extract GS from dinoflagellate species and dinoflagellate-dominant samples. For the HC method, the optimal centrifugal speed and time were 10 000 g and 35 min, respectively, and under these conditions the highest GSA was obtained in all samples. This study indicates that both methods (HC and UF) overcome the limitation of centrifugal speed and could be applied to in situ GSA analysis, especially at sea.

  5. Evaluation and Optimization of Blood Micro-Sampling Methods: Serial Sampling in a Cross-Over Design from an Individual Mouse.

    Science.gov (United States)

    Patel, Nita J; Wickremsinhe, Enaksha; Hui, Yu-Hua; Barr, Alexandar; Masterson, Nicholas; Ruterbories, Kenneth; Weller, Jennifer; Hanes, Jennifer; Kern, Tom; Perkins, Everett

    Current practices applied to mouse pharmacokinetic (PK) studies often use large numbers of animals with sporadic or composite sampling that inadequately describe PK profiles. The purpose of this work was to evaluate and optimize blood microsampling techniques coupled with dried blood spot (DBS) and LC-MS/MS analysis to generate reliable PK data in mice. In addition, the feasibility of cross-over designs was assessed and recommendations are presented. The work describes a comprehensive evaluation of five blood microsampling techniques (tail clip, tail vein with needle hub, submandibular, retro-orbital, and saphenous bleeding) in CD-1 mice. The feasibility of blood sampling was evaluated based on animal observations, ease of bleeding, and ability to collect serial samples. Methotrexate, gemfibrozil and glipizide were used as test compounds and were dosed either orally or intravenously, followed by DBS collection and LC-MS/MS analysis to compare PK between the bleeding methods. Submandibular and retro-orbital methods that required non-serial blood collections did not allow for inter-animal variability assessments and resulted in poorly described absorption and distribution kinetics. The submandibular and tail vein with needle-hub methods were the least favorable from a technical feasibility perspective. Serial bleeding was possible with cannulated animals or saphenous bleeding in non-cannulated animals. Of the methods that allowed serial sampling, the saphenous method, when executed as described in this report, was the most practical and reproducible and provided for assessment of inter-animal variability. It enabled the collection of complete exposure profiles from a single mouse and the conduct of an intravenous/oral cross-over study design. This methodology can be used routinely; it promotes the 3Rs principles by achieving reductions in the number of animals used, decreased restraints and animal stress, and improved the quality of data obtained in mouse

  6. An Optimal Sampling Design for Observing and Validating Long-Term Leaf Area Index with Temporal Variations in Spatial Heterogeneities

    Directory of Open Access Journals (Sweden)

    Yelu Zeng

    2015-01-01

    A sampling strategy to define elementary sampling units (ESUs) for an entire site at the kilometer scale is an important step in the validation process for moderate-resolution leaf area index (LAI) products. Current LAI sampling strategies are unable to account for seasonal changes in vegetation and are better suited for single-day LAI product validation, whereas the increasingly used wireless sensor network for LAI measurement (LAINet) requires an optimal sampling strategy across both spatial and temporal scales. In this study, we developed an efficient and robust LAI Sampling strategy based on Multi-temporal Prior knowledge (SMP) for long-term, fixed-position LAI observations. The SMP approach employed multi-temporal vegetation index (VI) maps and the vegetation classification map as a priori knowledge. The SMP approach minimized the multi-temporal bias of the VI frequency histogram between the ESUs and the entire site and maximized the nearest-neighbor index to ensure that ESUs were dispersed in geographical space. The SMP approach was compared with four sampling strategies, including random sampling, systematic sampling, sampling based on the land-cover map and a sampling strategy based on vegetation index prior knowledge, using a PROSAIL model-based simulation analysis in the Heihe River basin. The results indicate that the ESUs selected using the SMP method spread more evenly in both the multi-temporal feature space and geographical space over the vegetation cycle. By considering the temporal changes in heterogeneity, the average root-mean-square error (RMSE) of the LAI reference maps can be reduced from 0.12 to 0.05, and the relative error can be reduced from 6.1% to 2.2%. The SMP technique was applied to assign the LAINet ESU locations at the Huailai Remote Sensing Experimental Station in Beijing, China, from 4 July to 28 August 2013, to validate three MODIS C5 LAI products. The results suggest that the average R2, RMSE, bias and relative

  7. Optimization of dynamic headspace extraction system for measurement of halogenated volatile organic compounds in liquid or viscous samples

    Science.gov (United States)

    Taniai, G.; Oda, H.; Kurihara, M.; Hashimoto, S.

    2010-12-01

    Halogenated volatile organic compounds (HVOCs) produced in the marine environment are thought to play a key role in atmospheric reactions, particularly those involved in the global radiation budget and the depletion of tropospheric and stratospheric ozone. To evaluate HVOC concentrations in various natural samples, we developed an automated dynamic headspace extraction method for the determination of 15 HVOCs: chloromethane, bromomethane, bromoethane, iodomethane, iodoethane, bromochloromethane, 1-iodopropane, 2-iodopropane, dibromomethane, bromodichloromethane, chloroiodomethane, chlorodibromomethane, bromoiodomethane, tribromomethane, and diiodomethane. A dynamic headspace system (GERSTEL DHS) was used to purge the gas phase above the samples and to trap the HVOCs from the purge gas on an adsorbent column. We measured the HVOC concentrations on the adsorbent column with a gas chromatograph (Agilent 6890N) coupled to a mass spectrometer (Agilent 5975C). In the dynamic headspace system, a glass tube containing Tenax TA or Tenax GR was used as the adsorbent column for the collection of the 15 HVOCs. The parameters for purge-and-trap extraction, such as purge flow rate (ml/min), purge volume (ml), incubation time (min), and agitator speed (rpm), were optimized. The detection limits of the HVOCs in water samples were 1270 pM (chloromethane), 103 pM (bromomethane), 42.1 pM (iodomethane), and 1.4 to 10.2 pM (other HVOCs). The repeatability (relative standard deviation) for the 15 HVOCs was < 9% except for chloromethane (16.2%) and bromomethane (11.0%). On the basis of measurements of various samples, we conclude that this analytical method is useful for the determination of a wide range of HVOCs with boiling points between -24°C (chloromethane) and 181°C (diiodomethane) in liquid or viscous samples.

  8. Optimization of the preventive maintenance (PM) interval based on a total maintenance cost model, with an application

    Institute of Scientific and Technical Information of China (English)

    傅晨曦; 郑永前

    2014-01-01

    Based on a total maintenance cost model, a neural network is built from the logged historical data of the studied equipment system and used to predict the influence of the product process parameters on the service life of each subsystem. A genetic algorithm (GA) is then used to search for the preventive maintenance (PM) interval that minimizes the total maintenance cost, and the resulting plan is adjusted according to actual operating conditions. Applying the optimized PM-interval decision reduced the total maintenance cost of the studied system by about 34%.
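
    In the paper the failure behaviour is predicted by a neural network from logged process parameters and the search is done with a GA; the sketch below substitutes a closed-form Weibull minimal-repair cost rate and a bounded scalar optimizer simply to show the shape of the cost-versus-interval decision. All cost and reliability figures are placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Assumed cost and reliability parameters (placeholders, not the paper's data).
C_pm, C_fail = 1_000.0, 12_000.0        # cost of one PM action vs. one failure repair
beta, eta = 2.5, 400.0                  # Weibull shape / scale (h) of the failure process

def cost_rate(T):
    """Expected total maintenance cost per operating hour for PM every T hours,
    assuming minimal repair between PMs (power-law expected failure count)."""
    expected_failures = (T / eta) ** beta
    return (C_pm + C_fail * expected_failures) / T

res = minimize_scalar(cost_rate, bounds=(10.0, 2000.0), method="bounded")
print(f"optimal PM interval ~ {res.x:.0f} h, cost rate = {res.fun:.2f} per h")
```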

  9. Optimization of separation and online sample concentration of N,N-dimethyltryptamine and related compounds using MEKC.

    Science.gov (United States)

    Wang, Man-Juing; Tsai, Chih-Hsin; Hsu, Wei-Ya; Liu, Ju-Tsung; Lin, Cheng-Huang

    2009-02-01

    The optimal separation conditions and online sample concentration for N,N-dimethyltryptamine (DMT) and related compounds, including alpha-methyltryptamine (AMT), 5-methoxy-AMT (5-MeO-AMT), N,N-diethyltryptamine (DET), N,N-dipropyltryptamine (DPT), N,N-dibutyltryptamine (DBT), N,N-diisopropyltryptamine (DiPT), 5-methoxy-DMT (5-MeO-DMT), and 5-methoxy-N,N-DiPT (5-MeO-DiPT), using micellar EKC (MEKC) with UV-absorbance detection are described. The LODs (S/N = 3) for MEKC ranged from 1.0 to 1.8 μg/mL. Use of online sample concentration methods, including sweeping-MEKC and cation-selective exhaustive injection-sweep-MEKC (CSEI-sweep-MEKC), improved the LODs to 2.2-8.0 ng/mL and 1.3-2.7 ng/mL, respectively. In addition, the order of migration of the nine tryptamines was investigated. A urine sample, obtained by spiking urine collected from a human volunteer with DMT, was also successfully examined.

  10. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to use these time-series MODIS datasets efficiently for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
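
    The variable-importance-based subset selection can be sketched with scikit-learn on synthetic stand-in data for a season of composites; the number of variables, the subset size and the classifier settings are arbitrary, and the outlier-based sample-transfer step of the paper is not shown.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for a season of 10-day MODIS composites: 36 "variables" (dates x bands).
X, y = make_classification(n_samples=600, n_features=36, n_informative=8,
                           random_state=0)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, y)

# Rank variables by importance and keep the top half.
order = np.argsort(rf.feature_importances_)[::-1]
subset = order[:18]

full_acc = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
subset_acc = cross_val_score(RandomForestClassifier(random_state=0), X[:, subset], y,
                             cv=5).mean()
print(f"OOB score (full): {rf.oob_score_:.3f}")
print(f"CV accuracy full set: {full_acc:.3f}  |  top-18 subset: {subset_acc:.3f}")
```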

  11. Optimizing sample pretreatment for compound-specific stable carbon isotopic analysis of amino sugars in marine sediment

    Directory of Open Access Journals (Sweden)

    R. Zhu

    2014-01-01

    Amino sugars are quantitatively significant constituents of soil and marine sediment, but their sources and turnover in environmental samples remain poorly understood. The stable carbon isotopic composition of amino sugars can provide information on the lifestyles of their source organisms and can be monitored during incubations with labeled substrates to estimate the turnover rates of microbial populations. However, until now, such investigation has been carried out only with soil samples, partly because of the much lower abundance of amino sugars in marine environments. We therefore optimized a procedure for compound-specific isotopic analysis of amino sugars in marine sediment employing gas chromatography-isotope ratio mass spectrometry. The whole procedure consisted of hydrolysis, neutralization, enrichment, and derivatization of amino sugars. Except for the derivatization step, the protocol introduced negligible isotopic fractionation, and the minimum requirement of amino sugar for isotopic analysis was 20 ng, i.e. equivalent to ~ 8 ng of amino sugar carbon. Our results obtained from δ13C analysis of amino sugars in selected marine sediment samples showed that muramic acid had isotopic imprints from indigenous bacterial activities, whereas glucosamine and galactosamine were mainly derived from organic detritus. The analysis of stable carbon isotopic compositions of amino sugars opens a promising window for the investigation of microbial metabolisms in marine sediments and the deep marine biosphere.

  12. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    Science.gov (United States)

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.

  13. An Optimized Adsorbent Sampling Combined to Thermal Desorption GC-MS Method for Trimethylsilanol in Industrial Environments

    Directory of Open Access Journals (Sweden)

    Jae Hwan Lee

    2012-01-01

    Trimethylsilanol (TMSOH) can cause damage to surfaces of scanner lenses in the semiconductor industry, and there is a critical need to measure and control airborne TMSOH concentrations. This study develops a thermal desorption (TD)-gas chromatography (GC)-mass spectrometry (MS) method for measuring trace-level TMSOH in occupational indoor air. Laboratory method optimization obtained the best performance when using a dual-bed tube configuration (100 mg of Tenax TA followed by 100 mg of Carboxen 569), n-decane as a solvent, and a TD temperature of 300°C. The optimized method demonstrated high recovery (87%), satisfactory precision (<15% for spiked amounts exceeding 1 ng), good linearity (R2 = 0.9999), a wide dynamic mass range (up to 500 ng), a low method detection limit (2.8 ng m−3 for a 20-L sample), and negligible losses for 3-4-day storage. The field study showed performance comparable to that in the laboratory and yielded the first measurements of TMSOH, ranging from 1.02 to 27.30 μg/m3, in the semiconductor industry. We suggest future development of real-time monitoring techniques for TMSOH and other siloxanes for better maintenance and control of scanner lenses in semiconductor wafer manufacturing.

  14. Separation optimization of long porous-layer open-tubular columns for nano-LC-MS of limited proteomic samples.

    Science.gov (United States)

    Rogeberg, Magnus; Vehus, Tore; Grutle, Lene; Greibrokk, Tyge; Wilson, Steven Ray; Lundanes, Elsa

    2013-09-01

    The single-run resolving power of current 10 μm id porous-layer open-tubular (PLOT) columns has been optimized. The columns studied had a poly(styrene-co-divinylbenzene) porous layer (~0.75 μm thickness). In contrast to many previous studies that have employed complex plumbing or compromising set-ups, SPE-PLOT-LC-MS was assembled without the use of additional hardware/noncommercial parts, additional valves or sample splitting. A comprehensive study of various flow rates, gradient times, and column length combinations was undertaken. Maximum resolution for LC conditions or long silica monolith nanocolumns. Nearly 500 proteins (1958 peptides) could be identified in just one single injection of an extract corresponding to 1000 BxPC3 beta catenin (-/-) cells, and ~1200 and 2500 proteins in extracts of 10,000 and 100,000 cells, respectively, allowing detection of central members and regulators of the Wnt signaling pathway.

  15. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn; Zhu, Weiliang, E-mail: wlzhu@mail.shcnc.ac.cn [ACS Key Laboratory of Receptor Research, Drug Discovery and Design Center, Shanghai Institute of Materia Medica, Chinese Academy of Sciences, 555 Zuchongzhi Road, Shanghai 201203 (China)]; Shi, Jiye, E-mail: Jiye.Shi@ucb.com [UCB Pharma, 216 Bath Road, Slough SL1 4EN (United Kingdom)]

    2015-03-28

    The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the hope of reducing the temperature (replica) number on the premise of maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives an accurate evaluation of the structural and thermodynamic properties of the conformational transition, which are in good agreement with the standard REMD simulation. Therefore, the hybrid REMD could greatly increase the computational efficiency and thus expand the application of REMD simulation to larger protein systems.

  16. Design and sampling plan optimization for RT-qPCR experiments in plants: a case study in blueberry

    Directory of Open Access Journals (Sweden)

    Jose V Die

    2016-03-01

    The qPCR assay has become a routine technology in plant biotechnology and agricultural research. It is unlikely to be technically improved, but there are still challenges which center around minimizing the variability in results and transparency when reporting technical data in support of the conclusions of a study. There are a number of aspects of the pre- and post-assay workflow that contribute to variability of results. Here, through the study of the introduction of error in qPCR measurements at different stages of the workflow, we describe the most important causes of technical variability in a case study using blueberry. In this study, we found that the stage for which increasing the number of replicates would be the most beneficial depends on the tissue used. For example, we would recommend the use of more RT replicates when working with leaf tissue, while the use of more sampling (RNA extraction) replicates would be recommended when working with stems or fruits to obtain the most optimal results. The use of more qPCR replicates provides the least benefit as it is the most reproducible step. By knowing the distribution of error over an entire experiment and the costs at each step, we have developed a script to identify the optimal sampling plan within the limits of a given budget. These findings should help plant scientists improve the design of qPCR experiments and refine their laboratory practices in order to conduct qPCR assays in a more reliable manner to produce more consistent and reproducible data.

  17. Design and Sampling Plan Optimization for RT-qPCR Experiments in Plants: A Case Study in Blueberry.

    Science.gov (United States)

    Die, Jose V; Roman, Belen; Flores, Fernando; Rowland, Lisa J

    2016-01-01

    The qPCR assay has become a routine technology in plant biotechnology and agricultural research. It is unlikely to be technically improved, but there are still challenges which center around minimizing the variability in results and transparency when reporting technical data in support of the conclusions of a study. There are a number of aspects of the pre- and post-assay workflow that contribute to variability of results. Here, through the study of the introduction of error in qPCR measurements at different stages of the workflow, we describe the most important causes of technical variability in a case study using blueberry. In this study, we found that the stage for which increasing the number of replicates would be the most beneficial depends on the tissue used. For example, we would recommend the use of more RT replicates when working with leaf tissue, while the use of more sampling (RNA extraction) replicates would be recommended when working with stems or fruits to obtain the most optimal results. The use of more qPCR replicates provides the least benefit as it is the most reproducible step. By knowing the distribution of error over an entire experiment and the costs at each step, we have developed a script to identify the optimal sampling plan within the limits of a given budget. These findings should help plant scientists improve the design of qPCR experiments and refine their laboratory practices in order to conduct qPCR assays in a more reliable manner to produce more consistent and reproducible data.
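
    The budget-constrained sampling-plan search described in the two records above can be sketched as a small enumeration over replicate numbers, given per-step variance components and costs; the numbers used here are placeholders, not the blueberry study's estimates, and the nested-variance formula is a standard assumption rather than the authors' script.

```python
from itertools import product

# Assumed variance components (squared SD in Cq units) and per-replicate costs.
var_sampling, var_rt, var_qpcr = 0.20, 0.10, 0.02
cost_sampling, cost_rt, cost_qpcr = 12.0, 4.0, 1.5
budget = 150.0

def variance_of_mean(n_s, n_rt, n_q):
    # Nested design: RT replicates within each biological sample,
    # qPCR replicates within each RT reaction.
    return (var_sampling / n_s
            + var_rt / (n_s * n_rt)
            + var_qpcr / (n_s * n_rt * n_q))

def total_cost(n_s, n_rt, n_q):
    return n_s * (cost_sampling + n_rt * (cost_rt + n_q * cost_qpcr))

plans = [(n_s, n_rt, n_q)
         for n_s, n_rt, n_q in product(range(1, 13), range(1, 7), range(1, 5))
         if total_cost(n_s, n_rt, n_q) <= budget]
best = min(plans, key=lambda p: variance_of_mean(*p))
print("best plan (samples, RT, qPCR):", best,
      "| cost:", total_cost(*best),
      "| variance of mean: %.4f" % variance_of_mean(*best))
```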

  18. Haemostatic reference intervals in pregnancy

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna;

    2010-01-01

    Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals for coagulation tests during normal pregnancy. Eight hundred one women with expected normal pregnancies were included in the study. Of these women, 391 had no complications during pregnancy, vaginal delivery, or the postpartum period. Plasma samples were obtained at gestational weeks 13-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period using only the uncomplicated pregnancies were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, free protein S
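
    A minimal sketch of how gestational-age-specific reference intervals can be computed non-parametrically (2.5th-97.5th percentiles per gestational window) is shown below on invented data; the study's actual data and statistical procedure are not reproduced.

```python
import numpy as np
import pandas as pd

# Hypothetical long-format data: one row per sample, gestational window and aPTT (s).
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "window": rng.choice(["13-20", "21-28", "29-34", "35-42"], size=400),
    "aptt":   rng.normal(28.0, 2.5, size=400),
})

# Non-parametric 95% reference interval (2.5th-97.5th percentiles) per gestational window.
ri = df.groupby("window")["aptt"].quantile([0.025, 0.975]).unstack()
ri.columns = ["lower", "upper"]
print(ri.round(1))
```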

  19. Big Boss Interval Games

    NARCIS (Netherlands)

    Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.

    2008-01-01

    In this paper big boss interval games are introduced and various characterizations are given. The structure of the core of a big boss interval game is explicitly described and plays an important role relative to interval-type bi-monotonic allocation schemes for such games. Specifically, each element

  20. Optimization of Process Parameters for Cracking Prevention of UHSS in Hot Stamping Based on Hammersley Sequence Sampling and Back Propagation Neural Network-Genetic Algorithm Mixed Methods

    Institute of Scientific and Technical Information of China (English)

    Menghan Wang; Zongmin Yue; Lie Meng

    2016-01-01

    In order to prevent cracking of the workpiece during the hot stamping operation, this paper proposes a hybrid optimization method based on Hammersley sequence sampling (HSS), finite element analysis, back-propagation (BP) neural network and genetic algorithm (GA). The mechanical properties of high strength boron steel are characterized on the basis of uniaxial tensile tests at elevated temperatures. The samples of process parameters are chosen via HSS, which encourages exploration throughout the design space and hence achieves better discovery of the possible global optimum in the solution space. Meanwhile, numerical simulation is carried out to predict the forming quality for the optimized design. A BP neural network model is developed to obtain the mathematical relationship between the optimization goal and the design variables, and the genetic algorithm is used to optimize the process parameters. Finally, the results of numerical simulation are compared with those of a production experiment to demonstrate that the optimization strategy proposed in the paper is feasible.
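
    The Hammersley sequence sampling step mentioned above is straightforward to reproduce. The sketch below generates a low-discrepancy HSS design over a hypothetical hot-stamping parameter box; the variable names and ranges are placeholders, not the paper's values.

```python
def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley(n, bounds, bases=(2, 3, 5, 7)):
    """n points in a hyper-rectangle; first coordinate is i/n, the rest are radical inverses."""
    dim = len(bounds)
    points = []
    for i in range(n):
        unit = [i / n] + [radical_inverse(i, bases[d]) for d in range(dim - 1)]
        points.append([lo + u * (hi - lo) for u, (lo, hi) in zip(unit, bounds)])
    return points

# Hypothetical process variables: blank temperature (C), die temperature (C), stamping speed (mm/s).
samples = hammersley(20, [(650.0, 850.0), (20.0, 200.0), (10.0, 100.0)])
for s in samples[:5]:
    print([round(v, 1) for v in s])
```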

  1. Rapid, sensitive and reproducible method for point-of-collection screening of liquid milk for adulterants using a portable Raman spectrometer with novel optimized sample well

    Science.gov (United States)

    Nieuwoudt, Michel K.; Holroyd, Steve E.; McGoverin, Cushla M.; Simpson, M. Cather; Williams, David E.

    2017-02-01

    Point-of-care diagnostics are of interest in the medical, security and food industry, the latter particularly for screening food adulterated for economic gain. Milk adulteration continues to be a major problem worldwide and different methods to detect fraudulent additives have been investigated for over a century. Laboratory based methods are limited in their application to point-of-collection diagnosis and also require expensive instrumentation, chemicals and skilled technicians. This has encouraged exploration of spectroscopic methods as more rapid and inexpensive alternatives. Raman spectroscopy has excellent potential for screening of milk because of the rich complexity inherent in its signals. The rapid advances in photonic technologies and fabrication methods are enabling increasingly sensitive portable mini-Raman systems to be placed on the market that are both affordable and feasible for both point-of-care and point-of-collection applications. We have developed a powerful spectroscopic method for rapidly screening liquid milk for sucrose and four nitrogen-rich adulterants (dicyandiamide (DCD), ammonium sulphate, melamine, urea), using a combined system: a small, portable Raman spectrometer with focusing fibre optic probe and optimized reflective focusing wells, simply fabricated in aluminium. The reliable sample presentation of this system enabled high reproducibility of 8% RSD (relative standard deviation) within four minutes. Limit of detection intervals for PLS calibrations ranged between 140 - 520 ppm for the four N-rich compounds and between 0.7 - 3.6 % for sucrose. The portability of the system and the reliability and reproducibility of this technique open opportunities for general, reagentless adulteration screening of biological fluids as well as milk, at point-of-collection.

  2. Optimization of loop-mediated isothermal amplification (LAMP) assays for the detection of Leishmania DNA in human blood samples.

    Science.gov (United States)

    Abbasi, Ibrahim; Kirstein, Oscar D; Hailu, Asrat; Warburg, Alon

    2016-10-01

    Visceral leishmaniasis (VL), one of the most important neglected tropical diseases, is caused by Leishmania donovani, a eukaryotic protozoan parasite of the genus Leishmania. The disease is prevalent mainly in the Indian sub-continent, East Africa and Brazil. VL can be diagnosed by PCR amplifying ITS1 and/or kDNA genes. The current study involved the optimization of loop-mediated isothermal amplification (LAMP) for the detection of Leishmania DNA in human blood or tissue samples. Three LAMP systems were developed; in two of those the primers were designed based on shared regions of the ITS1 gene among different Leishmania species, while the primers for the third LAMP system were derived from a newly identified repeated region in the Leishmania genome. The LAMP tests were shown to be sufficiently sensitive to detect 0.1 pg of DNA from most Leishmania species. The green nucleic acid stain SYTO 16 was used here for the first time to allow real-time monitoring of LAMP amplification. The advantage of real time-LAMP using SYTO 16 over end-point LAMP product detection is discussed. The efficacy of the real time-LAMP tests for detecting Leishmania DNA in dried blood samples from volunteers living in endemic areas was compared with that of qRT-kDNA PCR.

  3. Analysis of azole fungicides in fish muscle tissues: Multi-factor optimization and application to environmental samples.

    Science.gov (United States)

    Zhong, Yuanhong; Chen, Zhi-Feng; Liu, Shuang-Shuang; Dai, Xiaoxin; Zhu, Xinping; Zheng, Guangming; Liu, Shugui; Liu, Guoguang; Cai, Zongwei

    2017-02-15

    Azole fungicides have been reported to accumulate in fish tissue. In this study, a sensitive and robust method using high-performance liquid chromatography-tandem mass spectrometry combined with ultrasonic extraction, solid-liquid clean-up, liquid-liquid extraction and solid-phase extraction (SPE) for enrichment and purification has been proposed for determination of azole fungicides in fish muscle samples. According to the results of non-statistical analysis and statistical analysis, ethyl acetate, primary secondary amine (PSA) and mixed-mode cation exchange cartridge (MCX) were confirmed as the best extraction solvent, clean-up sorbent and SPE cartridge, respectively. Satisfactory recoveries (81.7-104%) and matrix effects (-6.34 to 7.16%), both corrected by internal standards, were obtained in various species of fish muscle matrices. Method quantification limits of all azoles were in the range of 0.07-2.83 ng/g. This optimized method was successfully applied for determination of the target analytes in muscle samples of field fish from Beijiang River and its tributaries. Three azole fungicides including climbazole, clotrimazole and carbendazim were detected at ppb levels in fish muscle tissues. Therefore, this analytical method is practical and suitable for further clarifying the contamination profiles of azole fungicides in wild fish species. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Detection of the Inflammation Biomarker C-Reactive Protein in Serum Samples: Towards an Optimal Biosensor Formula

    Directory of Open Access Journals (Sweden)

    Wellington M. Fakanya

    2014-10-01

    Full Text Available The development of an electrochemical immunosensor for the biomarker C-reactive protein (CRP) is reported in this work. CRP has been used to assess inflammation and is also used in a multi-biomarker system as a predictive biomarker for cardiovascular disease risk. A gold-based working electrode sensor was developed, and the types of electrode printing inks and ink curing techniques were then optimized. The electrodes with the best performance parameters were then employed for the construction of an immunosensor for CRP by immobilizing anti-human CRP antibody on the working electrode surface. A sandwich enzyme-linked immunosorbent assay (ELISA) was then constructed after sample addition by using anti-human CRP antibody labelled with horseradish peroxidase (HRP). The signal was generated by the addition of a mediator/substrate system comprised of 3,3',5,5'-tetramethylbenzidine dihydrochloride (TMB) and hydrogen peroxide (H2O2). Measurements were conducted using chronoamperometry at −200 mV against an integrated Ag/AgCl reference electrode. A CRP limit of detection (LOD) of 2.2 ng·mL(-1) was achieved in spiked serum samples, and performance agreement was obtained with reference to a commercial ELISA kit. The developed CRP immunosensor was able to detect a diagnostically relevant range of the biomarker in serum without the need for signal amplification using nanoparticles, paving the way for future development of a cardiac panel electrochemical point-of-care diagnostic device.

  5. Optimization of pressurized liquid extraction (PLE) of dioxin-furans and dioxin-like PCBs from environmental samples.

    Science.gov (United States)

    Antunes, Pedro; Viana, Paula; Vinhas, Tereza; Capelo, J L; Rivera, J; Gaspar, Elvira M S M

    2008-05-30

    Pressurized liquid extraction (PLE) applying three extraction cycles, temperature and pressure improved the efficiency of solvent extraction when compared with the classical Soxhlet extraction. Polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs) and dioxin-like PCBs (coplanar polychlorinated biphenyls (Co-PCBs)) in two Certified Reference Materials [DX-1 (sediment) and BCR 529 (soil)] and in two contaminated environmental samples (sediment and soil) were extracted by PLE (also termed accelerated solvent extraction, ASE) and Soxhlet methods. Unlike data previously reported by other authors, results demonstrated that ASE using n-hexane as solvent and three extraction cycles, 12.4 MPa (1800 psi) and 150 degrees C achieves recovery results similar to those of the classical Soxhlet extraction for PCDFs and Co-PCBs, and better recovery results for PCDDs. ASE extraction, performed in less time and with less solvent, proved to be, under optimized conditions, an excellent extraction technique for the simultaneous analysis of PCDD/PCDFs and Co-PCBs from environmental samples. Such a fast analytical methodology, having the best cost-efficiency ratio, will improve control and provide more information about the occurrence of dioxins and their levels of toxicity, and will thereby contribute to protecting human health.

  6. Numerical calculation of economic uncertainty by intervals and fuzzy numbers

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2010-01-01

    This paper emphasizes that numerically correct calculation of economic uncertainty with intervals and fuzzy numbers requires implementation of global optimization techniques in contrast to straightforward application of interval arithmetic. This is demonstrated by both a simple case from managerial...

  7. Evaluation of retinol levels in human colostrum in two samples collected at an interval of 24 hours

    Directory of Open Access Journals (Sweden)

    Karla D. S. Ribeiro

    2007-08-01

    Full Text Available OBJECTIVE: To evaluate retinol concentration in colostrum samples collected with a 24 hour interval. METHODS: Colostrum was collected from 24 recently-delivered mothers at two points in time, 0 hours (T0) and 24 hours later (T24), and a pooled sample of colostrum from T0 and T24 was also analyzed. Fat content was determined by creamatocrit, and retinol assayed by high performance liquid chromatography. RESULTS: When expressed in terms of volume of milk (µg/dL), retinol levels varied across T0, T24 and the pooled sample: 94.9±58.9, 129±78.6 and 111.9±60.4 µg/dL, respectively. However, when expressed with relation to fat content (µg/g), no significant difference was observed. CONCLUSIONS: Retinol assayed in colostrum from a single sample should not be used as an indicator of vitamin A nutritional status, due to the great variation between samples collected at different times. It is suggested that results be expressed per gram of fat, in order to minimize variations resulting from the volume of milk.

  8. Simultaneous spectrophotometric determination of synthetic dyes in food samples after cloud point extraction using multiple response optimizations.

    Science.gov (United States)

    Heidarizadi, Elham; Tabaraki, Reza

    2016-01-01

    A sensitive cloud point extraction method for simultaneous determination of trace amounts of sunset yellow (SY), allura red (AR) and brilliant blue (BB) by spectrophotometry was developed. Experimental parameters such as Triton X-100 concentration, KCl concentration and initial pH on extraction efficiency of dyes were optimized using response surface methodology (RSM) with a Doehlert design. Experimental data were evaluated by applying RSM integrating a desirability function approach. The optimum conditions for simultaneous extraction of SY, AR and BB were: Triton X-100 concentration 0.0635 mol L(-1), KCl concentration 0.11 mol L(-1) and pH 4, with maximum overall desirability D of 0.95. Correspondingly, the maximum extraction efficiencies of SY, AR and BB were 100%, 92.23% and 95.69%, respectively. At optimal conditions, extraction efficiencies were 99.8%, 92.48% and 95.96% for SY, AR and BB, respectively. These values were only 0.2%, 0.25% and 0.27% different from the predicted values, suggesting that the desirability function approach with RSM was a useful technique for simultaneous dye extraction. Linear calibration curves were obtained in the range of 0.02-4 μg mL(-1) for SY, 0.025-2.5 μg mL(-1) for AR and 0.02-4 μg mL(-1) for BB under optimum conditions. Detection limit based on three times the standard deviation of the blank (3Sb) was 0.009, 0.01 and 0.007 μg mL(-1) (n=10) for SY, AR and BB, respectively. The method was successfully used for the simultaneous determination of the dyes in different food samples.
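
    The desirability-function step used above to merge the three responses can be sketched as follows: individual desirabilities are combined through their geometric mean to give the overall desirability D. The acceptability ranges and predicted efficiencies below are illustrative placeholders, not the fitted RSM values.

```python
import numpy as np

def desirability_larger_is_better(y, y_min, y_max, weight=1.0):
    """Derringer-type individual desirability for a response to be maximized."""
    d = (y - y_min) / (y_max - y_min)
    return float(np.clip(d, 0.0, 1.0) ** weight)

# Hypothetical predicted extraction efficiencies (%) at one candidate setting of
# Triton X-100, KCl and pH; the acceptability ranges are placeholders.
responses = {"SY": 99.8, "AR": 92.5, "BB": 96.0}
ranges = {"SY": (80.0, 100.0), "AR": (80.0, 100.0), "BB": (80.0, 100.0)}

d_values = [desirability_larger_is_better(responses[k], *ranges[k]) for k in responses]
overall_D = float(np.prod(d_values) ** (1.0 / len(d_values)))   # geometric mean
print("individual d:", [round(d, 3) for d in d_values], "overall D:", round(overall_D, 3))
```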

  9. Optimization of the RNA extraction method for transcriptome studies of Salmonella inoculated on commercial raw chicken breast samples

    Directory of Open Access Journals (Sweden)

    Muthaiyan Arunachalam

    2011-03-01

    Full Text Available Abstract Background There has been increased interest in the study of molecular survival mechanisms expressed by foodborne pathogens present on food surfaces. Determining genomic responses of these pathogens to antimicrobials is of particular interest since this helps to understand antimicrobial effects at the molecular level. Assessment of bacterial gene expression by transcriptomic analysis in response to these antimicrobials would aid prediction of the phenotypic behavior of the bacteria in the presence of antimicrobials. However, before transcriptional profiling approaches can be implemented routinely, it is important to develop an optimal method to consistently recover pathogens from the food surface and ensure optimal quality RNA so that the corresponding gene expression analysis represents the current response of the organism. Another consideration is to confirm that there is no interference from the "background" food or meat matrix that could mask the bacterial response. Findings Our study involved developing a food model system using chicken breast meat inoculated with mid-log Salmonella cells. First, we tested the optimum number of Salmonella cells required on the poultry meat in order to extract high quality RNA. This was analyzed by inoculating 10-fold dilutions of Salmonella on the chicken samples followed by RNA extraction. Secondly, we tested the effect of two different bacterial cell recovery solutions, namely 0.1% peptone water and RNAprotect (Qiagen Inc.), on the RNA yield and purity. In addition, we compared the efficiency of sonication and bead beater methods to break the cells for RNA extraction. To check chicken nucleic acid interference on downstream Salmonella microarray experiments both chicken and Salmonella cDNA labeled with different fluorescent dyes were mixed together and hybridized on a single Salmonella array. Results of this experiment did not show any cross-hybridization signal from the chicken nucleic acids. In

  10. Haemostatic reference intervals in pregnancy

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna

    2010-01-01

    Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals for coagulation tests during normal pregnancy. Eight hundred one women with expected normal pregnancies were included in the study. Of these women, 391 had no complications during pregnancy, vaginal delivery, or postpartum period. Plasma samples were obtained at gestational weeks 13-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period using only the uncomplicated pregnancies were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, free protein S...

  11. Optimization of LC-Orbitrap-HRMS acquisition and MZmine 2 data processing for nontarget screening of environmental samples using design of experiments.

    Science.gov (United States)

    Hu, Meng; Krauss, Martin; Brack, Werner; Schulze, Tobias

    2016-11-01

    Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a well-established technique for nontarget screening of contaminants in complex environmental samples. Automatic peak detection is essential, but its performance has only rarely been assessed and optimized so far. With the aim to fill this gap, we used pristine water extracts spiked with 78 contaminants as a test case to evaluate and optimize chromatogram and spectral data processing. To assess whether data acquisition strategies have a significant impact on peak detection, three values of MS cycle time (CT) of an LTQ Orbitrap instrument were tested. Furthermore, the key parameter settings of the data processing software MZmine 2 were optimized to detect the maximum number of target peaks from the samples by the design of experiments (DoE) approach and compared to a manual evaluation. The results indicate that short CT significantly improves the quality of automatic peak detection, which means that full scan acquisition without additional MS(2) experiments is suggested for nontarget screening. MZmine 2 detected 75-100 % of the peaks compared to manual peak detection at an intensity level of 10(5) in a validation dataset on both spiked and real water samples under optimal parameter settings. Finally, we provide an optimization workflow of MZmine 2 for LC-HRMS data processing that is applicable for environmental samples for nontarget screening. The results also show that the DoE approach is useful and effort-saving for optimizing data processing parameters.

  12. The interval ordering problem

    CERN Document Server

    Dürr, Christoph; Spieksma, Frits C R; Nobibon, Fabrice Talla; Woeginger, Gerhard J

    2011-01-01

    For a given set of intervals on the real line, we consider the problem of ordering the intervals with the goal of minimizing an objective function that depends on the exposed interval pieces (that is, the pieces that are not covered by earlier intervals in the ordering). This problem is motivated by an application in molecular biology that concerns the determination of the structure of the backbone of a protein. We present polynomial-time algorithms for several natural special cases of the problem that cover the situation where the interval boundaries are agreeably ordered and the situation where the interval set is laminar. Also the bottleneck variant of the problem is shown to be solvable in polynomial time. Finally we prove that the general problem is NP-hard, and that the existence of a constant-factor-approximation algorithm is unlikely.

  13. Optimizing Frozen Sample Preparation for Laser Microdissection: Assessment of CryoJane Tape-Transfer System®.

    Directory of Open Access Journals (Sweden)

    Yelena G Golubeva

    Full Text Available Laser microdissection is an invaluable tool in medical research that facilitates collecting specific cell populations for molecular analysis. Diversity of research targets (e.g., cancerous and precancerous lesions in clinical and animal research, cell pellets, rodent embryos, etc.) and varied scientific objectives, however, present challenges toward establishing standard laser microdissection protocols. Sample preparation is crucial for quality RNA, DNA and protein retrieval, where it often determines the feasibility of a laser microdissection project. The majority of microdissection studies in clinical and animal model research are conducted on frozen tissues containing native nucleic acids, unmodified by fixation. However, the variable morphological quality of frozen sections from tissues containing fat, collagen or delicate cell structures can limit or prevent successful harvest of the desired cell population via laser dissection. The CryoJane Tape-Transfer System®, a commercial device that improves cryosectioning outcomes on glass slides, has been reported superior for slide preparation and isolation of high quality osteocyte RNA (frozen bone) during laser dissection. Considering the reported advantages of CryoJane for laser dissection on glass slides, we asked whether the system could also work with the plastic membrane slides used by UV laser based microdissection instruments, as these are better suited for collection of larger target areas. In an attempt to optimize laser microdissection slide preparation for tissues of different RNA stability and cryosectioning difficulty, we evaluated the CryoJane system for use with both glass (laser capture microdissection) and membrane (laser cutting microdissection) slides. We have established a sample preparation protocol for glass and membrane slides including manual coating of membrane slides with CryoJane solutions, cryosectioning, slide staining and dissection procedure, lysis and RNA extraction

  14. Bacterial screening of platelet concentrates on day 2 and 3 with flow cytometry: the optimal sampling time point?

    Science.gov (United States)

    Vollmer, Tanja; Schottstedt, Volkmar; Bux, Juergen; Walther-Wenke, Gabriele; Knabbe, Cornelius; Dreier, Jens

    2014-07-01

    There is growing concern about the residual risk of bacterial contamination of platelet concentrates in Germany, despite the reduction of the shelf-life of these concentrates and the introduction of bacterial screening. In this study, the applicability of the BactiFlow flow cytometric assay for bacterial screening of platelet concentrates on day 2 or 3 of their shelf-life was assessed in two German blood services. The results were used to evaluate currently implemented or newly discussed screening strategies. Two thousand and ten apheresis platelet concentrates were tested on day 2 or day 3 after donation using BactiFlow flow cytometry. Reactive samples were confirmed by the BacT/Alert culture system. Twenty-four of the 2,100 platelet concentrates tested were reactive in the first test by BactiFlow. Of these 24 platelet concentrates, 12 were false-positive and the other 12 were initially reactive. None of the microbiological cultures of the initially reactive samples was positive. Parallel examination of 1,026 platelet concentrates by culture revealed three positive platelet concentrates with bacteria detected only in the anaerobic culture bottle and identified as Staphylococcus species. Two platelet concentrates were confirmed positive for Staphylococcus epidermidis by culture. Retrospective analysis of the growth kinetics of the bacteria indicated that the bacterial titres were most likely below the diagnostic sensitivity of the BactiFlow assay (screening of platelet concentrates independently of the testing day and the screening strategy. Although the optimal screening strategy could not be defined, this study provides further data to help achieve this goal.

  15. Optimal water resources planning based on interval-parameter two-stage stochastic programming

    Institute of Scientific and Technical Information of China (English)

    付银环; 郭萍; 方世奇; 李茉

    2014-01-01

    Studies on water resources allocation in irrigation areas under uncertainty are important for increasing water use efficiency, reducing agricultural irrigation water use and establishing a water-saving society, especially for the arid and semi-arid areas of China. In this study, two models were established based on uncertainty theory in order to make plans for efficient water resources management. One of the models was an interval-parameter two-stage stochastic optimization model developed for dispatching the groundwater and surface water systems of the Xiying, Qingyuan and Yongchuan irrigation areas (China) under conditions of uncertainty and complexity. In that model, the minimal operation cost of the multi-district, multi-source conjunctive dispatching system was taken as the objective function, probability distributions and interval parameters were used to express the uncertainty of water supply, and groundwater and surface water were allocated among the different districts. The second model then took the allocation results as input data and, based on crop water production functions over the whole growth period, built a nonlinear interval-uncertainty optimization model for the irrigation quotas of different crops, distributing the allocated water to the typical crops of each irrigation area. Both models give their optimal allocation results in interval form, providing decision makers with a more accurate decision space and reflecting actual water resources allocation more realistically.
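
    A toy illustration of the interval-parameter two-stage stochastic idea is given below: a first-stage surface-water allocation, a groundwater recourse decision per demand scenario, and a groundwater cost expressed as an interval whose two bounds are solved separately. All numbers are hypothetical and the model is far smaller than the one in the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical scenario probabilities, demands, costs and surface-water cap.
p = np.array([0.3, 0.5, 0.2])            # low / medium / high demand scenarios
demand = np.array([60.0, 80.0, 100.0])   # 10^6 m^3
c_surface = 1.0                          # cost per unit of surface water (first stage)
c_ground_interval = (2.0, 3.5)           # interval-valued groundwater cost (recourse)
surface_cap = 90.0

def solve(c_ground):
    # Decision vector: [x, y_1, y_2, y_3] = first-stage allocation and per-scenario recourse.
    c = np.concatenate(([c_surface], p * c_ground))
    A_ub = np.hstack([-np.ones((3, 1)), -np.eye(3)])    # -x - y_s <= -demand_s
    b_ub = -demand
    bounds = [(0, surface_cap)] + [(0, None)] * 3
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun, res.x

for c_g in c_ground_interval:
    cost, x = solve(c_g)
    print(f"groundwater cost {c_g}: expected total cost {cost:.1f}, "
          f"first-stage surface allocation {x[0]:.1f}")
```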

  16. Feasibility of 3-dimensional sampling perfection with application optimized contrast sequence in the evaluation of patients with hydrocephalus.

    Science.gov (United States)

    Kartal, Merve Gulbiz; Ocakoglu, Gokhan; Algin, Oktay

    2015-01-01

    This study aimed to investigate the effectiveness and additive value of T2W 3-dimensional sampling perfection with application optimized contrast (3D-SPACE) with variant flip-angle mode in imaging of all types of hydrocephalus. Our secondary objective was to assess the reliability of 3D-SPACE sequence and correspondence of the results with phase-contrast magnetic resonance imaging (PC-MRI)-based data. Forty-one patients with hydrocephalus have undergone 3-T MRI. T2W 3D-SPACE sequence has been obtained in addition to routine hydrocephalus protocol. Cerebrospinal fluid circulation, presence/type/etiology of hydrocephalus, obstruction level scores, and diagnostic levels of confidence were evaluated separately by 2 radiologists. In the first session, routine sequences with PC-MRI were evaluated, and in another session, only 3D-SPACE and 3-dimensional magnetization prepared rapid acquisition gradient echo sequences were evaluated. Results obtained in these sessions were compared with each other and those obtained in consensus session. Agreement values were very good for both 3D-SPACE and PC-MRI sequences (P technique providing extensive multiplanar reformatted images with a lower specific absorption rate. These advantages over PC-MRI make 3D-SPACE sequence a promising tool in management of patients with hydrocephalus.

  17. Optimized sampling strategy of Wireless sensor network for validation of remote sensing products over heterogeneous coarse-resolution pixel

    Science.gov (United States)

    Peng, J.; Liu, Q.; Wen, J.; Fan, W.; Dou, B.

    2015-12-01

    Coarse-resolution satellite albedo products are increasingly applied in geographical research because of their capability to characterize the spatio-temporal patterns of land surface parameters. In the long-term validation of coarse-resolution satellite products with ground measurements, the scale effect, i.e., the mismatch between point measurement and pixel observation, becomes the main challenge, particularly over heterogeneous land surfaces. Recent advances in Wireless Sensor Network (WSN) technologies offer an opportunity for validation using multi-point observations instead of a single-point observation. The difficulty is to ensure the representativeness of the WSN in heterogeneous areas with limited nodes. In this study, the objective is to develop a ground-based spatial sampling strategy through consideration of historical prior knowledge and avoidance of information redundancy between different sensor nodes. Taking albedo as an example, we first derive monthly local maps of albedo from 30-m HJ CCD images over a 3-year period. Second, we pick out candidate points from the areas with higher temporal stability, which helps to avoid transition or boundary areas. Then, the representativeness (r) of each candidate point is evaluated through correlational analysis between the point-specific and area-average time sequence albedo vectors. The point with the highest r is noted as the new sensor point. Before electing a new point, the vector components of the already selected points are taken out from the vectors in the following correlational analysis. The selection procedure is stopped once the integral representativeness (R) meets the accuracy requirement. Here, the sampling method is adapted to both single-parameter and multi-parameter situations. Finally, it is shown that this sampling method has worked effectively in the optimized layout of the Huailai remote sensing station in China. The coarse resolution pixel covering this station could be
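
    A simplified greedy variant of the representativeness-based selection can be sketched as follows; it adds the pixel whose inclusion makes the selected-node mean best track the area-average series, rather than removing vector components as in the procedure above, and the albedo series are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical albedo time series: 36 monthly values over a 20 x 20 pixel footprint
# (in the real workflow these would come from the 30-m HJ CCD retrievals).
n_months, n_pixels = 36, 400
season = np.sin(np.linspace(0, 6 * np.pi, n_months))[:, None]
series = 0.2 + 0.05 * season + 0.01 * rng.standard_normal((n_months, n_pixels))

def greedy_select(series, max_nodes=5, r_target=0.99):
    """Greedily add candidate pixels whose mean best tracks the area-average series."""
    area_mean = series.mean(axis=1)
    selected = []
    for _ in range(max_nodes):
        best_pixel, best_r = None, -np.inf
        for j in range(series.shape[1]):
            if j in selected:
                continue
            candidate_mean = series[:, selected + [j]].mean(axis=1)
            r = np.corrcoef(candidate_mean, area_mean)[0, 1]
            if r > best_r:
                best_pixel, best_r = j, r
        selected.append(best_pixel)
        if best_r >= r_target:          # integral representativeness meets the requirement
            break
    return selected, best_r

nodes, r = greedy_select(series)
print("selected pixel indices:", nodes, "integral representativeness R =", round(r, 4))
```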

  18. Synthesis of zinc oxide nanoparticles-chitosan for extraction of methyl orange from water samples: cuckoo optimization algorithm-artificial neural network.

    Science.gov (United States)

    Khajeh, Mostafa; Golzary, Ali Reza

    2014-10-15

    In this work, zinc nanoparticles-chitosan based solid phase extraction has been developed for separation and preconcentration of trace amounts of methyl orange from water samples. An artificial neural network-cuckoo optimization algorithm has been employed to develop the model for simulation and optimization of this method. The pH, volume of elution solvent, mass of zinc oxide nanoparticles-chitosan, and flow rates of sample and elution solvent were the input variables, while recovery of methyl orange was the output. The optimum conditions were obtained by the cuckoo optimization algorithm. At the optimum conditions, a limit of detection of 0.7 μg L(-1) was obtained for methyl orange. The developed procedure was then applied to the separation and preconcentration of methyl orange from water samples.

  19. Synthesis of zinc oxide nanoparticles-chitosan for extraction of methyl orange from water samples: Cuckoo optimization algorithm-artificial neural network

    Science.gov (United States)

    Khajeh, Mostafa; Golzary, Ali Reza

    2014-10-01

    In this work, zinc nanoparticles-chitosan based solid phase extraction has been developed for separation and preconcentration of trace amounts of methyl orange from water samples. An artificial neural network-cuckoo optimization algorithm has been employed to develop the model for simulation and optimization of this method. The pH, volume of elution solvent, mass of zinc oxide nanoparticles-chitosan, and flow rates of sample and elution solvent were the input variables, while recovery of methyl orange was the output. The optimum conditions were obtained by the cuckoo optimization algorithm. At the optimum conditions, a limit of detection of 0.7 μg L-1 was obtained for methyl orange. The developed procedure was then applied to the separation and preconcentration of methyl orange from water samples.

  20. Interval Scheduling: A Survey

    NARCIS (Netherlands)

    Kolen, A.W.J.; Lenstra, J.K.; Papadimitriou, C.H.; Spieksma, F.C.R.

    2007-01-01

    In interval scheduling, not only the processing times of the jobs but also their starting times are given. This article surveys the area of interval scheduling and presents proofs of results that have been known within the community for some time. We first review the complexity and approximability o

  1. Estimating duration intervals

    NARCIS (Netherlands)

    Ph.H.B.F. Franses (Philip Hans); B.L.K. Vroomen (Björn)

    2003-01-01

    textabstractDuration intervals measure the dynamic impact of advertising on sales. More precise, the p per cent duration interval measures the time lag between the advertising impulse and the moment that p per cent of its effect has decayed. In this paper, we derive an expression for the duration

  2. Simultaneous Interval Graphs

    CERN Document Server

    Jampani, Krishnam Raju

    2010-01-01

    In a recent paper, we introduced the simultaneous representation problem (defined for any graph class C) and studied the problem for chordal, comparability and permutation graphs. For interval graphs, the problem is defined as follows. Two interval graphs G_1 and G_2, sharing some vertices I (and the corresponding induced edges), are said to be `simultaneous interval graphs' if there exist interval representations R_1 and R_2 of G_1 and G_2, such that any vertex of I is mapped to the same interval in both R_1 and R_2. Equivalently, G_1 and G_2 are simultaneous interval graphs if there exist edges E' between G_1-I and G_2-I such that G_1 \\cup G_2 \\cup E' is an interval graph. Simultaneous representation problems are related to simultaneous planar embeddings, and have applications in any situation where it is desirable to consistently represent two related graphs, for example: interval graphs capturing overlaps of DNA fragments of two similar organisms; or graphs connected in time, where one is an updated versi...

  3. Multi-target tracking algorithm based on adaptive sampling interval in wireless sensor networks

    Institute of Scientific and Technical Information of China (English)

    王建平; 赵高丽; 胡孟杰; 陈伟

    2014-01-01

    Multi-target tracking is a hot topic of current research on wireless sensor networks (WSN). Based on an adaptive sampling interval, we propose a multi-target tracking algorithm in order to save energy and prevent tracking loss in WSN. We construct the target motion model by using the position metadata, and predict the target motion state based on the extended Kalman filter (EKF). We adopt the probability density function (PDF) of the estimated targets to establish the tracking cluster. By defining the tracking center, we use the Mahalanobis distance to quantify the election process of the main node (MN). We compute the target impact strength from the target importance and its distance to the MN node, and then use it to build the tracking algorithm. We carried out simulation experiments in MATLAB, and the results show that the proposed algorithm can accurately predict the trajectories of the targets and adjust the sampling interval adaptively while the targets are moving. Analysis of the experimental data shows that the proposed algorithm can improve the tracking precision and significantly reduce the energy consumption of the WSN.
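
    A toy one-dimensional sketch of the predict-update-adapt loop is given below. It uses a standard constant-velocity Kalman filter rather than the EKF over sensor positions described above, and shortens the sampling interval when the innovation grows; all noise levels, gains and the target trajectory are illustrative.

```python
import numpy as np

def kf_predict(x, P, dt, q=0.1):
    """Constant-velocity Kalman prediction over a sampling interval dt (1-D for brevity)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, r=0.25):
    """Position-only measurement update; returns the updated state and the innovation size."""
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + r
    K = P @ H.T / S
    innovation = z - (H @ x)[0]
    return x + (K * innovation).ravel(), (np.eye(2) - K @ H) @ P, abs(innovation)

def adaptive_interval(innovation, dt_min=0.5, dt_max=4.0, scale=1.0):
    """Shorten the interval when the target manoeuvres (large innovation), lengthen it otherwise."""
    return float(np.clip(dt_max / (1.0 + scale * innovation), dt_min, dt_max))

rng = np.random.default_rng(2)
x, P, dt, t = np.array([0.0, 1.0]), np.eye(2), 2.0, 0.0
for step in range(8):
    t += dt
    truth = 0.5 * t + (2.0 if t > 8 else 0.0)          # target jumps ahead after t = 8
    z = truth + rng.normal(0, 0.5)                     # noisy position measurement
    x, P = kf_predict(x, P, dt)
    x, P, innov = kf_update(x, P, z)
    dt = adaptive_interval(innov)
    print(f"t={t:5.1f}  estimate={x[0]:6.2f}  next interval={dt:.2f}")
```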

  4. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    Science.gov (United States)

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2017-09-16

    High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m(2) by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R (2) = 0.98; p strategy promises to achieve the target area under the curve as part of precision dosing.
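
    For orientation, the sketch below simulates a generic two-compartment infusion model and reads off concentrations at the reported optimal post-infusion sampling windows. The parameter values and dose are placeholders, not the published population estimates, and the actual workflow uses Bayesian estimation of the individual area under the curve rather than interpolation on a simulated profile.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative two-compartment IV-infusion model (hypothetical parameters).
CL, V1, Q, V2 = 28.0, 18.0, 30.0, 15.0        # clearances (L/h) and volumes (L)
dose, t_inf = 280.0, 0.5                      # mg, infusion duration (h)

def rates(t, y):
    a1, a2 = y                                # drug amounts in central / peripheral compartments
    rate_in = dose / t_inf if t <= t_inf else 0.0
    c1, c2 = a1 / V1, a2 / V2
    return [rate_in - CL * c1 - Q * (c1 - c2), Q * (c1 - c2)]

t_grid = np.linspace(0, 8, 801)
sol = solve_ivp(rates, (0, 8), [0.0, 0.0], t_eval=t_grid, max_step=0.01)
conc = sol.y[0] / V1                          # central concentration (mg/L)

auc_dense = float(np.sum((conc[1:] + conc[:-1]) / 2 * np.diff(t_grid)))   # trapezoidal AUC
sparse_times = np.array([0.08, 0.61, 2.0, 4.0]) + t_inf                   # post-infusion windows
sparse_conc = np.interp(sparse_times, t_grid, conc)

print("AUC(0-8 h) from the dense grid:", round(auc_dense, 1), "mg*h/L")
print("concentrations at the four sparse times (mg/L):", np.round(sparse_conc, 2))
```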

  5. Sleep and optimism: A longitudinal study of bidirectional causal relationship and its mediating and moderating variables in a Chinese student sample.

    Science.gov (United States)

    Lau, Esther Yuet Ying; Hui, C Harry; Lam, Jasmine; Cheung, Shu-Fai

    2017-01-01

    While both sleep and optimism have been found to be predictive of well-being, few studies have examined their relationship with each other. Neither do we know much about the mediators and moderators of the relationship. This study investigated (1) the causal relationship between sleep quality and optimism in a college student sample, (2) the role of symptoms of depression, anxiety, and stress as mediators, and (3) how circadian preference might moderate the relationship. Internet survey data were collected from 1,684 full-time university students (67.6% female, mean age = 20.9 years, SD = 2.66) at three time-points, spanning about 19 months. Measures included the Attributional Style Questionnaire, the Pittsburgh Sleep Quality Index, the Composite Scale of Morningness, and the Depression Anxiety Stress Scale-21. Moderate correlations were found among sleep quality, depressive mood, stress symptoms, anxiety symptoms, and optimism. Cross-lagged analyses showed a bidirectional effect between optimism and sleep quality. Moreover, path analyses demonstrated that anxiety and stress symptoms partially mediated the influence of optimism on sleep quality, while depressive mood partially mediated the influence of sleep quality on optimism. In support of our hypothesis, sleep quality affects mood symptoms and optimism differently for different circadian preferences. Poor sleep results in depressive mood and thus pessimism in non-morning persons only. In contrast, the aggregated (direct and indirect) effects of optimism on sleep quality were invariant of circadian preference. Taken together, people who are pessimistic generally have more anxious mood and stress symptoms, which adversely affect sleep while morningness seems to have a specific protective effect countering the potential damage poor sleep has on optimism. In conclusion, optimism and sleep quality were both cause and effect of each other. Depressive mood partially explained the effect of sleep quality on optimism

  6. Optimal sampling theory and population modelling - Application to determination of the influence of the microgravity environment on drug distribution and elimination

    Science.gov (United States)

    Drusano, George L.

    1991-01-01

    The optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with NONMEM (Sheiner et al., 1977) approach, in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.

  7. Sampling design optimization of a wireless sensor network for monitoring ecohydrological processes in the Babao River basin, China

    NARCIS (Netherlands)

    Ge, Y.; Wang, J.H.; Heuvelink, G.B.M.; Jin, R.; Li, X.; Wang, J.F.

    2015-01-01

    Optimal selection of observation locations is an essential task in designing an effective ecohydrological process monitoring network, which provides information on ecohydrological variables by capturing their spatial variation and distribution. This article presents a geostatistical method for mu

  8. Characterizing the optimal flux space of genome-scale metabolic reconstructions through modified latin-hypercube sampling

    NARCIS (Netherlands)

    Chaudhary, N.; Tøndel, K.; Bhatnagar, R.; Martins dos Santos, V.A.P.; Puchalka, J.

    2016-01-01

    Genome-Scale Metabolic Reconstructions (GSMRs), along with optimization-based methods, predominantly Flux Balance Analysis (FBA) and its derivatives, are widely applied for assessing and predicting the behavior of metabolic networks upon perturbation, thereby enabling identification of potential nov

  9. The fallacy of placing confidence in confidence intervals

    NARCIS (Netherlands)

    Morey, Richard D.; Hoekstra, Rink; Rouder, Jeffrey N.; Lee, Michael D.; Wagenmakers, Eric-Jan

    2016-01-01

    Interval estimates – estimates of parameters that include an allowance for sampling uncertainty – have long been touted as a key component of statistical analyses. There are several kinds of interval estimates, but the most popular are confidence intervals (CIs): intervals that contain the true

  10. The fallacy of placing confidence in confidence intervals

    NARCIS (Netherlands)

    Morey, R.D.; Hoekstra, R.; Rouder, J.N.; Lee, M.D.; Wagenmakers, E.-J.

    Interval estimates – estimates of parameters that include an allowance for sampling uncertainty – have long been touted as a key component of statistical analyses. There are several kinds of interval estimates, but the most popular are confidence intervals (CIs): intervals that contain the true

  11. Optimized method for atmospheric signal reduction in irregular sampled InSAR time series assisted by external atmospheric information

    Science.gov (United States)

    Gong, W.; Meyer, F. J.

    2013-12-01

    It is well known that spatio-temporal tropospheric phase signatures complicate the interpretation and detection of smaller magnitude deformation signals or unstudied motion fields. Several advanced time-series InSAR techniques were developed in the last decade that make assumptions about the stochastic properties of the signal components in interferometric phases to reduce atmospheric delay effects on surface deformation estimates. However, their need for large datasets to successfully separate the different phase contributions limits their performance if data are scarce and irregularly sampled. Limited SAR data coverage is true for many areas affected by geophysical deformation. This is either due to their low priority in mission programming, unfavorable ground coverage conditions, or turbulent seasonal weather effects. In this paper, we present new adaptive atmospheric phase filtering algorithms that are specifically designed to reconstruct surface deformation signals from atmosphere-affected and irregularly sampled InSAR time series. The filters take advantage of auxiliary atmospheric delay information that is extracted from various sources, e.g. atmospheric weather models. They are embedded into a model-free Persistent Scatterer Interferometry (PSI) approach that was selected to accommodate non-linear deformation patterns that are often observed near volcanoes and earthquake zones. Two types of adaptive phase filters were developed that operate in the time dimension and separate atmosphere from deformation based on their different temporal correlation properties. Both filter types use the fact that atmospheric models can reliably predict the spatial statistics and signal power of atmospheric phase delay fields in order to automatically optimize the filter's shape parameters. In essence, both filter types will attempt to maximize the linear correlation between a-priori and the extracted atmospheric phase information. Topography-related phase components, orbit
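
    The correlation-driven tuning of a temporal filter can be sketched as follows for a single, regularly sampled scatterer (the algorithms described above additionally handle irregular sampling and spatial statistics); the phase series and the a-priori atmospheric series are simulated placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(3)

# Simulated phase series at one persistent scatterer (radians): slow deformation plus
# temporally uncorrelated atmosphere; "atmos_model" mimics an imperfect external prediction.
t = np.arange(25) * 24.0                      # acquisition days (regular spacing for brevity)
deformation = 0.005 * t                       # slow, temporally correlated signal
atmosphere = rng.normal(0, 1.0, t.size)       # turbulent delay, uncorrelated in time
phase = deformation + atmosphere
atmos_model = atmosphere + rng.normal(0, 0.4, t.size)

def split(series, sigma):
    """Low-pass (deformation-like) and high-pass (atmosphere-like) parts of the series."""
    low = gaussian_filter1d(series, sigma)
    return low, series - low

# Tune the filter width so the high-pass residual best matches the a-priori atmosphere.
sigmas = np.arange(0.5, 6.0, 0.25)
corrs = [np.corrcoef(split(phase, s)[1], atmos_model)[0, 1] for s in sigmas]
best_sigma = float(sigmas[int(np.argmax(corrs))])
print(f"selected filter width sigma = {best_sigma} samples, "
      f"correlation with model = {max(corrs):.2f}")
```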

  12. Review of the critical limits of the optimal hydric interval

    Directory of Open Access Journals (Sweden)

    Miguel Angel Pilatti

    2012-07-01

    Full Text Available The Optimal Hydric Interval (IHO) is the interval of easily available soil water for the crops, during which soil resistance and aeration do not limit root growth. In this paper, the upper and lower limits of the IHO are discussed. The upper limit is θCC (soil water content at field capacity) when air capacity (θa) is not restrictive for root respiration; otherwise, the limit is θa. The lower limit is determined by the variable of greater value between θRP (soil water content at which soil resistance reduces root elongation) and θFU (easily available soil water content below which water stress begins). The validity of the limits and the methodological difficulties involved in their determination are analyzed and discussed. IHO values obtained by other authors, who used different limits, are compared with those calculated with the critical limits proposed here. Each agronomic situation (combination of soil, climate and crop) requires particular IHO values that must be determined for each region. For the northern Pampean Region (Argentina) and its usual crops we propose the following critical values: θCC = water content at -10 kPa; θa = 15%; θRP = 2.5 to 6 MPa (depending on clay content); and θFU = -0.17 MPa.

  13. BIRTH INTERVAL AMONG NOMAD WOMEN

    Directory of Open Access Journals (Sweden)

    E.Keyvan

    1976-06-01

    Full Text Available This study was carried out to get an idea of the relation between the length of the birth interval, lactation and birth control programmes. The material for the analysis was the fertility histories of nomad women in their reproductive period (15-44 years), gathered through a health survey. The main sample was composed of 2,165 qualified women, of whom 49 were excluded because of previous or current use of contraceptive methods and 10 for lack of sufficient data. The purpose of the analysis was to relate the number of live births and pregnancies to the total duration of married life (in other words, the total months during which the women were at risk of pregnancy). The 2,106 women whose fertility histories were analyzed had a total of 272,502 months of married life. During this time 8,520 live births occurred, which gives a birth interval of 32 months. As a pregnancy may end in a live birth, a still birth or an abortion (induced or spontaneous), adding these together gives the number of pregnancies that occurred during this period (8,520 + 124 + 328 = 8,972), with an average interpregnancy interval of 30.3 months. The components of the birth interval are: postpartum amenorrhoea, which depends upon lactation; anovulatory cycles (2 months); ovulatory exposure in the absence of contraceptive methods (5 months); and pregnancy (9 months). The difference between the length of the birth interval and the sum of these periods except the first component (2 + 5 + 9 = 16) gives the duration of postpartum amenorrhoea (32 - 16 = 16 months), in other words the duration of breast feeding among nomad women. It was also found that, in order to reduce births by 50%, a contraceptive method with 87% effectiveness would be needed.

  14. Optimization of perforation interval for fire flood in thick heavy oil reservoirs

    Institute of Scientific and Technical Information of China (English)

    张方礼

    2013-01-01

    In situ combustion (fire flood) has become one of the major substitute techniques for heavy oil recovery after cyclic steam stimulation in the Liaohe oilfield, where it is mainly applied to thick massive conventional heavy oil reservoirs, thin interbedded conventional heavy oil reservoirs and ultra heavy oil reservoirs. Thick reservoirs exhibit severe fire front override during the fire flooding process, which affects the development result. This research focuses on the perforation technique for injection and production wells, which may restrict fire flood response. The phenomenon of secondary combustion in fire flooding of thick reservoirs has been understood through laboratory physical simulations; the areal and vertical producing degree of fire flood in thick reservoirs has been understood through field tests and numerical simulations; the perforation intervals of injection and production wells in conventional and gravity fire floods for thick reservoirs have been optimized by employing numerical simulation and reservoir engineering methods; and optimum perforation plans have been proposed for the different fire flood schemes. This research offers a technical reference for fire flood development of thick heavy oil reservoirs.

  15. Multiple response optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry with sample injection as detergent emulsion

    Science.gov (United States)

    Brum, Daniel M.; Lima, Claudio F.; Robaina, Nicolle F.; Fonseca, Teresa Cristina O.; Cassella, Ricardo J.

    2011-05-01

    The present paper reports the optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS) employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized in relation to the experimental conditions for the emulsion formation and taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as a dependent variable. The three factors related to the optimization process were: the concentration of HNO 3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in aqueous solution used to emulsify the sample and the volume of solution. At optimum conditions, it was possible to obtain satisfactory results with an emulsion formed by mixing 4 mL of the samples with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO 3 medium. The resulting emulsion was stable for 250 min, at least, and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure and recovery rates, in the range of 88-105%; 94-118% and 95-120%, were verified for Cu, Fe and Pb, respectively.

  16. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    Science.gov (United States)

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24 % in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
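
    A Neyman-type allocation in the spirit of this design is sketched below: phase-two selection probabilities proportional to an assumed conditional standard deviation of Y given W divided by the square root of the measurement cost, scaled to an expected budget. The working model, costs and budget are hypothetical, and this is not the paper's exact optimality formula.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical phase-one data: cheap auxiliary W measured on everyone, expensive Y on a subset.
n = 1000
W = rng.normal(0, 1, n)

def sd_y_given_w(w):
    # Assumed working model for SD(Y | W); in practice this would be estimated from pilot data.
    return 1.0 + 0.5 * np.abs(w)

cost_y = 10.0                 # cost of one phase-two (Y) measurement
budget = 2000.0               # expected phase-two budget

# Neyman-type allocation: probability proportional to SD(Y|W) / sqrt(cost), scaled to the budget.
raw = sd_y_given_w(W) / np.sqrt(cost_y)
pi = np.clip(raw * (budget / cost_y) / raw.sum(), 0.0, 1.0)

selected = rng.uniform(size=n) < pi
print(f"expected phase-two sample size: {pi.sum():.0f}, realised: {selected.sum()}")
```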

  17. Optimal Auxiliary-Covariate Based Two-Phase Sampling Design for Semiparametric Efficient Estimation of a Mean or Mean Difference, with Application to Clinical Trials

    Science.gov (United States)

    Gilbert, Peter B.; Yu, Xuesong; Rotnitzky, Andrea

    2014-01-01

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semi-parametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. Simulations are performed to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. Proofs and R code are provided. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean “importance-weighted” breadth (Y) of the T cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y, and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y∣W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. PMID:24123289

  18. Multiple response optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry with sample injection as detergent emulsion

    Energy Technology Data Exchange (ETDEWEB)

    Brum, Daniel M.; Lima, Claudio F. [Departamento de Quimica, Universidade Federal de Vicosa, A. Peter Henry Rolfs s/n, Vicosa/MG, 36570-000 (Brazil); Robaina, Nicolle F. [Departamento de Quimica Analitica, Universidade Federal Fluminense, Outeiro de S.J. Batista s/n, Centro, Niteroi/RJ, 24020-141 (Brazil); Fonseca, Teresa Cristina O. [Petrobras, Cenpes/PDEDS/QM, Av. Horacio Macedo 950, Ilha do Fundao, Rio de Janeiro/RJ, 21941-915 (Brazil); Cassella, Ricardo J., E-mail: cassella@vm.uff.br [Departamento de Quimica Analitica, Universidade Federal Fluminense, Outeiro de S.J. Batista s/n, Centro, Niteroi/RJ, 24020-141 (Brazil)

    2011-05-15

    The present paper reports the optimization of Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS) employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized in relation to the experimental conditions for emulsion formation, taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as the dependent variable. The three factors considered in the optimization were: the concentration of HNO₃, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in the aqueous solution used to emulsify the sample, and the volume of that solution. Under optimum conditions, it was possible to obtain satisfactory results with an emulsion formed by mixing 4 mL of sample with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO₃ medium. The resulting emulsion was stable for at least 250 min and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure, and recovery rates in the ranges of 88-105%, 94-118% and 95-120% were obtained for Cu, Fe and Pb, respectively.
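
    The "global response" used as the dependent variable in such multiple-response optimizations is typically a desirability-style combination of the individually normalized analyte responses. The Python sketch below illustrates that idea only; the data and the exact transformation are hypothetical and not the authors'.

    import numpy as np

    def global_response(responses):
        """Combine individual analyte responses into one global response.

        responses: dict mapping analyte -> responses over the design runs.
        Each response is scaled to [0, 1] over the runs, and the geometric
        mean of the scaled values is returned (a desirability-style
        combination). Illustrative only; the paper's weighting may differ.
        """
        scaled = []
        for values in responses.values():
            v = np.asarray(values, dtype=float)
            scaled.append((v - v.min()) / (v.max() - v.min()))
        return np.exp(np.mean(np.log(np.vstack(scaled) + 1e-12), axis=0))

    # Example: absorbance-like responses for Cu, Fe and Pb over five design runs
    runs = {
        "Cu": [0.10, 0.15, 0.22, 0.18, 0.25],
        "Fe": [0.30, 0.28, 0.35, 0.40, 0.33],
        "Pb": [0.05, 0.09, 0.08, 0.12, 0.11],
    }
    print(global_response(runs))  # the highest value marks the best compromise run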

  19. Product interval automata

    Indian Academy of Sciences (India)

    Deepak D’Souza; P S Thiagarajan

    2002-04-01

    We identify a subclass of timed automata called product interval automata and develop its theory. These automata consist of a network of timed agents with the key restriction being that there is just one clock for each agent and the way the clocks are read and reset is determined by the distribution of shared actions across the agents. We show that the resulting automata admit a clean theory in both logical and language theoretic terms. We also show that product interval automata are expressive enough to model the timed behaviour of asynchronous digital circuits.

  20. Optimization of a method based on micro-matrix solid-phase dispersion (micro-MSPD) for the determination of PCBs in mussel samples

    Directory of Open Access Journals (Sweden)

    Nieves Carro

    2017-03-01

    This paper reports the development and optimization of micro-matrix solid-phase dispersion (micro-MSPD) of nine polychlorinated biphenyls (PCBs) in mussel samples (Mytilus galloprovincialis) by using a two-level factorial design. Four variables (amount of sample, anhydrous sodium sulphate, Florisil and solvent volume) were considered as factors in the optimization process. The results suggested that only the interaction between the amount of anhydrous sodium sulphate and the solvent volume was statistically significant for the overall recovery of a trichlorinated compound, CB 28. In general, most of the considered species exhibited similar behaviour: the sample and Florisil amounts had a positive effect on PCB extraction, whereas the solvent volume and sulphate amount had a negative effect. The analytical determination and confirmation of PCBs were carried out by GC-ECD and GC-MS/MS, respectively. The method was validated, showing satisfactory precision and accuracy, with RSD values below 6% and recoveries between 81 and 116% for all congeners. The optimized method was applied to the extraction of real mussel samples from two Galician Rías.
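
    In a two-level factorial design such as this one, main effects and interactions are estimated by contrasting mean recoveries at the high and low coded (+1/-1) levels of each factor or factor product. The Python sketch below uses simulated recoveries; the data and factor coding are hypothetical and purely illustrative.

    import itertools
    import numpy as np

    # Full 2^4 design in coded units (-1/+1) for the four factors in this record
    factors = ["sample", "Na2SO4", "Florisil", "solvent"]
    design = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)

    # Hypothetical recoveries (%) for CB 28, one per design run (illustrative only)
    rng = np.random.default_rng(0)
    recovery = 95 + 3 * design[:, 1] * design[:, 3] + rng.normal(0, 1, len(design))

    def effect(column, response):
        """Effect = mean response at the +1 level minus mean response at -1."""
        return response[column > 0].mean() - response[column < 0].mean()

    for name, col in zip(factors, design.T):
        print(f"main effect of {name:9s}: {effect(col, recovery):+.2f}")
    # Two-factor interaction, e.g. sulphate x solvent (the significant one reported)
    print("Na2SO4 x solvent interaction:",
          f"{effect(design[:, 1] * design[:, 3], recovery):+.2f}")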

  1. Optimization of diagnostic RT-PCR protocols and sampling procedures for the reliable and cost-effective detection of Cassava brown streak virus.

    Science.gov (United States)

    Abarshi, M M; Mohammed, I U; Wasswa, P; Hillocks, R J; Holt, J; Legg, J P; Seal, S E; Maruthi, M N

    2010-02-01

    Sampling procedures and diagnostic protocols were optimized for accurate diagnosis of Cassava brown streak virus (CBSV) (genus Ipomovirus, family Potyviridae). A cetyl trimethyl ammonium bromide (CTAB) method was optimized for sample preparation from infected cassava plants and compared with the RNeasy plant mini kit (Qiagen) for sensitivity, reproducibility and costs. CBSV was readily detectable in total RNAs extracted using either method. The major difference between the two methods was in the cost of consumables, with the CTAB method 10x cheaper (0.53 pounds sterling=US$0.80 per sample) than the RNeasy method (5.91 pounds sterling=US$8.86 per sample). A two-step RT-PCR (1.34 pounds sterling=US$2.01 per sample), although less sensitive, was at least 3-times cheaper than a one-step RT-PCR (4.48 pounds sterling=US$6.72). The two RT-PCR tests consistently revealed the presence of CBSV in both symptomatic and asymptomatic leaves and indicated that asymptomatic leaves can be used reliably for virus diagnosis. Depending on the accuracy required, sampling 100-400 plants per field is an appropriate recommendation for CBSD diagnosis, giving a 99.9% probability of detecting a disease incidence of 6.7-1.7%, respectively. CBSV was detected at 10^-4-fold dilutions in composite sampling, indicating that the most efficient way to index many samples for CBSV will be to screen pooled samples. The diagnostic protocols described here are reliable and the most cost-effective methods currently available for detecting CBSV.
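
    The quoted sampling recommendation follows from the probability of including at least one infected plant in a simple random sample, 1 - (1 - p)^n. The short Python check below reproduces the figures in this record under the simplifying assumptions of simple random sampling and perfect test sensitivity.

    import math

    def detection_probability(n, incidence):
        """Probability that at least one of n sampled plants is infected."""
        return 1 - (1 - incidence) ** n

    def required_sample_size(incidence, confidence=0.999):
        """Smallest n giving the requested probability of detecting the disease."""
        return math.ceil(math.log(1 - confidence) / math.log(1 - incidence))

    # Figures quoted in the abstract: 100-400 plants for incidences of 6.7-1.7%
    print(detection_probability(100, 0.067))   # ~0.999
    print(detection_probability(400, 0.017))   # ~0.999
    print(required_sample_size(0.067))         # 100
    print(required_sample_size(0.017))         # 403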

  2. Optimization of the Extraction of the Volatile Fraction from Honey Samples by SPME-GC-MS, Experimental Design, and Multivariate Target Functions

    Directory of Open Access Journals (Sweden)

    Elisa Robotti

    2017-01-01

    Head space (HS) solid phase microextraction (SPME) followed by gas chromatography with mass spectrometry detection (GC-MS) is the most widespread technique to study the volatile profile of honey samples. In this paper, the experimental SPME conditions were optimized by a multivariate strategy. Both sensitivity and repeatability were optimized by experimental design techniques considering three factors: extraction temperature (from 50°C to 70°C), time of exposition of the fiber (from 20 min to 60 min), and amount of salt added (from 0 to 27.50%). Each experiment was evaluated by Principal Component Analysis (PCA), which allows all the analytes to be taken into consideration at the same time while preserving the information about their different characteristics. Optimal extraction conditions were identified independently for signal intensity (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) and repeatability (extraction temperature: 50°C; extraction time: 60 min; salt percentage: 27.50% w/w), and a final global compromise (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) was also reached. Considerations about the choice of the best internal standards were also drawn. The whole optimized procedure was then applied to the analysis of a multiflower honey sample, and more than 100 compounds were identified.

  3. Interval Solution for Nonlinear Programming of Maximizing the Fatigue Life of V-Belt under Polymorphic Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Zhong Wan

    2013-01-01

    In accordance with practical engineering design conditions, a nonlinear programming model is constructed for maximizing the fatigue life of a V-belt drive in which some polymorphic uncertainties are incorporated. For a given satisfaction level and a confidence level, an equivalent formulation of this uncertain optimization model is obtained in which only interval parameters are involved. Based on the concepts of maximal and minimal range inequalities for describing interval inequality, the interval parameter model is decomposed into two standard nonlinear programming problems, and an algorithm, called the two-step based sampling algorithm, is developed to find an interval optimal solution for the original problem. A case study is employed to demonstrate the validity and practicability of the constructed model and the algorithm.
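
    The maximal and minimal range inequalities behind the decomposition can be pictured on a single constraint with one interval coefficient: the constraint is required to hold for every admissible coefficient value (maximal range) or merely for some value (minimal range). The toy Python sketch below illustrates that distinction with hypothetical numbers; it is not the V-belt model itself.

    def interval_mul(lo, hi, x):
        """Range of a*x when a is only known to lie in [lo, hi]."""
        return (lo * x, hi * x) if x >= 0 else (hi * x, lo * x)

    def holds_for_all(coeff_interval, x, bound):
        """Maximal-range inequality: a*x <= bound for every a in the interval."""
        return max(interval_mul(*coeff_interval, x)) <= bound

    def holds_for_some(coeff_interval, x, bound):
        """Minimal-range inequality: a*x <= bound for at least one a in the interval."""
        return min(interval_mul(*coeff_interval, x)) <= bound

    # Hypothetical design constraint with an uncertain coefficient a in [2, 3]
    a = (2.0, 3.0)
    print(holds_for_all(a, x=4.0, bound=10.0))   # False: 3*4 = 12 > 10
    print(holds_for_some(a, x=4.0, bound=10.0))  # True:  2*4 =  8 <= 10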

  4. Magnetic Resonance Fingerprinting with short relaxation intervals.

    Science.gov (United States)

    Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter

    2017-09-01

    The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially resolved
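
    Whatever the relaxation interval, the final T1/T2 maps in MRF are obtained by matching each measured signal evolution to the closest dictionary entry, usually via the magnitude of the normalized inner product. The compact Python sketch below shows only this matching step; the dictionary is random and purely illustrative, and the stationary-fingerprint computation of the paper is not reproduced.

    import numpy as np

    def match_fingerprints(signals, dictionary, t1_values, t2_values):
        """Assign each voxel the (T1, T2) of its best-matching dictionary entry.

        signals    : (n_voxels, n_timepoints) measured signal evolutions
        dictionary : (n_entries, n_timepoints) simulated fingerprints
        t1_values, t2_values : (n_entries,) parameters behind each entry
        Matching uses the magnitude of the normalized inner product.
        """
        d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
        s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
        best = np.argmax(np.abs(s @ d.conj().T), axis=1)
        return t1_values[best], t2_values[best]

    # Tiny synthetic example: 2 voxels, 3 dictionary entries, 50 time points
    rng = np.random.default_rng(1)
    dictionary = rng.standard_normal((3, 50))
    t1 = np.array([800.0, 1000.0, 1200.0])   # ms, hypothetical
    t2 = np.array([60.0, 80.0, 100.0])       # ms, hypothetical
    voxels = dictionary[[2, 0]] + 0.05 * rng.standard_normal((2, 50))
    print(match_fingerprints(voxels, dictionary, t1, t2))  # T1s ~[1200, 800], T2s ~[100, 60]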

  5. Fast simulated annealing and adaptive Monte Carlo sampling based parameter optimization for dense optical-flow deformable image registration of 4DCT lung anatomy

    Science.gov (United States)

    Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.

    2016-03-01

    Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical to achieve the best registration performance with a specific algorithm. In this paper, we investigated a parameter optimization strategy for optical-flow-based DIR of 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling algorithm (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The registration error for a given parameter set was computed using the landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time of the parameter optimization process, a GPU-based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses of the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. The results showed that the proposed method efficiently estimated the optimum parameters for optical flow and closely matched the best registration parameters obtained using an exhaustive parameter search method.
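
    Stripped of the adaptive Monte Carlo sampling and GPU specifics, the search loop is a simulated-annealing minimization of the landmark-based mTRE over the registration parameters. The generic Python sketch below illustrates that loop only; the objective is a hypothetical stand-in, not an actual registration error, and this is not the authors' FSA-AMC implementation.

    import math
    import random

    def simulated_annealing(objective, x0, step, n_iter=2000, t0=1.0, cooling=0.995):
        """Generic simulated annealing minimizer (illustrative stand-in).

        objective: maps a parameter vector to a registration error (e.g. mTRE)
        x0       : initial parameter vector
        step     : proposal step size per parameter
        """
        x, fx = list(x0), objective(x0)
        best, fbest = list(x), fx
        t = t0
        for _ in range(n_iter):
            candidate = [xi + random.gauss(0, s) for xi, s in zip(x, step)]
            fc = objective(candidate)
            # Accept downhill moves always, uphill moves with Boltzmann probability
            if fc < fx or random.random() < math.exp((fx - fc) / max(t, 1e-12)):
                x, fx = candidate, fc
                if fc < fbest:
                    best, fbest = list(candidate), fc
            t *= cooling   # geometric cooling schedule
        return best, fbest

    # Stand-in objective: non-convex surrogate for mTRE over two parameters
    def fake_mtre(p):
        return (p[0] - 1.5) ** 2 + 0.5 * math.sin(5 * p[0]) ** 2 + (p[1] + 0.7) ** 2

    print(simulated_annealing(fake_mtre, x0=[0.0, 0.0], step=[0.2, 0.2]))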

  6. Standardization and optimization of core sampling procedure for carbon isotope analysis in eucalyptus and variation in carbon isotope ratios across species and growth conditions

    CSIR Research Space (South Africa)

    Raju, M

    2011-11-01

    Cores were taken from periphery to pith in 5-year-old trees of Eucalyptus for carbon isotope analysis; five half-sib families of Eucalyptus grandis and E. urophylla were used, and the cores were further subdivided into five fragments...

  7. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  8. Evaluation of optimal number of soil samples for detail reconstruction of initial field of 137Cs fallout in Chernobyl affected areas

    Directory of Open Access Journals (Sweden)

    Maxim Ivanov

    2015-10-01

    The Chernobyl-derived 137Cs fallout was associated with only one or two rainfall events. Because of this, the vast areas of Europe affected by Chernobyl-derived fallout are characterized by a highly non-uniform field of radionuclide contamination. Detailed field investigations within a few river basins of Central Russia, located in areas with different levels of Chernobyl contamination, showed that the existing maps of radionuclide contamination composed during the last two decades are not detailed enough to assess the transformation of the initial contamination field by lateral migration of Chernobyl-derived 137Cs. This problem can be overcome if additional soil sampling is undertaken at reference locations to correct the existing radionuclide contamination maps. However, it is necessary to evaluate the optimal number of bulk samples that should be taken at each sampling point to obtain statistically sound estimates of radionuclide concentration. A dedicated investigation was undertaken in a few catchments (S = 2-50 km2) of Central Russia, located in areas with different levels of initial Chernobyl contamination, to evaluate the optimal number of samples to be taken at each sampling point so that the error in the determined Cs-137 concentration does not exceed 30% at the 95% confidence level.
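
    Under an approximately normal sample mean, the "error not exceeding 30% at the 95% confidence level" criterion leads to the standard requirement n >= (z * CV / relative error)^2, where CV is the coefficient of variation of 137Cs activity among bulk samples at a point. The small Python sketch below uses hypothetical CV values and the textbook formula, which is not necessarily the authors' exact procedure.

    import math

    def samples_per_point(cv, rel_error=0.30, z=1.96):
        """Bulk samples needed so the mean activity is within rel_error of the
        true value at ~95% confidence, assuming an approximately normal mean.

        cv: coefficient of variation of 137Cs activity among samples at one point.
        """
        return math.ceil((z * cv / rel_error) ** 2)

    # Illustrative within-point variability levels (hypothetical CVs)
    for cv in (0.2, 0.4, 0.6):
        print(f"CV = {cv:.1f}  ->  {samples_per_point(cv)} samples per point")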

  9. Optimization and clinical validation of a Real-Time PCR protocol for direct detection of Trichomonas vaginalis in pooled urine samples

    Directory of Open Access Journals (Sweden)

    WHA Zandijk

    2009-12-01

    Background and Objectives: A new Real-Time PCR protocol for the detection of Trichomonas vaginalis in pooled urine samples has been optimized and validated. Materials and Methods: The amplification protocol, targeting a 2 kb repeated gene in the T. vaginalis genome, was optimized by varying PCR parameters. As a reference method, a Real-Time PCR protocol targeting the beta-tubulin gene (Y. Versluis et al., 2006, Int J STD AIDS 17:642) was used. Clinical validation was performed with pooled urine samples obtained from patients of the sexually transmitted diseases clinic of a university hospital (n=963; from February to June 2007). Results: The rate of positive samples with the new optimized technique was 1.1% (n=10), while the beta-tubulin real-time PCR method generated four positives (0.3%). Conclusion: The new RT-PCR protocol is a sensitive (1.000) and specific (0.993) procedure to detect and identify T. vaginalis in urine samples.

  10. MetSizeR: selecting the optimal sample size for metabolomic studies using an analysis based approach

    Science.gov (United States)

    2013-01-01

    Background Determining sample sizes for metabolomic experiments is important, but due to the complexity of these experiments there are currently no standard methods for sample size estimation in metabolomics. Since pilot studies are rarely done in metabolomics, currently existing sample size estimation approaches which rely on pilot data cannot be applied. Results In this article, an analysis-based approach called MetSizeR is developed to estimate sample size for metabolomic experiments even when experimental pilot data are not available. The key motivation for MetSizeR is that it considers the type of analysis the researcher intends to use for data analysis when estimating sample size. MetSizeR uses information about the data analysis technique and prior expert knowledge of the metabolomic experiment to simulate pilot data from a statistical model. Permutation-based techniques are then applied to the simulated pilot data to estimate the required sample size. Conclusions The MetSizeR methodology, and a publicly available software package which implements the approach, are illustrated through real metabolomic applications. Sample size estimates, informed by the intended statistical analysis technique, and the associated uncertainty are provided. PMID:24261687
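
    The underlying idea, simulating data from prior assumptions, applying the intended analysis, and increasing n until a target operating characteristic is met, can be sketched generically. The Python example below uses a two-sample t-test and power as the criterion; it illustrates the principle only and is not the MetSizeR algorithm, which works with simulated metabolomic data and permutation-based FDR estimates.

    import numpy as np
    from scipy import stats

    def simulated_power(n_per_group, effect_size, n_sim=2000, alpha=0.05, seed=0):
        """Power of a two-sample t-test on data simulated from prior assumptions."""
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_sim):
            a = rng.normal(0.0, 1.0, n_per_group)
            b = rng.normal(effect_size, 1.0, n_per_group)
            if stats.ttest_ind(a, b).pvalue < alpha:
                hits += 1
        return hits / n_sim

    def smallest_n(effect_size, target_power=0.8):
        """Increase n until the simulated power reaches the target."""
        n = 2
        while simulated_power(n, effect_size) < target_power:
            n += 1
        return n

    print(smallest_n(effect_size=1.0))  # roughly 17 per group for d = 1.0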

  11. Applications of interval computations

    CERN Document Server

    Kreinovich, Vladik

    1996-01-01

    Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: