WorldWideScience

Sample records for optimal sampling intervals

  1. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA-Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  2. Surveillance test interval optimization

    International Nuclear Information System (INIS)

    Cepin, M.; Mavko, B.

    1995-01-01

    Technical specifications have been developed on the basis of deterministic analyses, engineering judgment, and expert opinion. This paper introduces our risk-based approach to surveillance test interval (STI) optimization. This approach consists of three main levels. The first level is the component level, which serves as a rough estimation of the optimal STI and can be calculated analytically by differentiating an equation for mean unavailability. The second and third levels give more representative results. They take into account the results of probabilistic risk assessment (PRA) calculated by a personal computer (PC) based code and are based on system unavailability at the system level and on core damage frequency at the plant level.
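
    A minimal sketch (not taken from the paper) of the component-level idea described above: for a standby component with constant failure rate and a fixed test duration, mean unavailability over a test interval T is approximately U(T) ≈ λT/2 + t_test/T, and setting dU/dT = 0 gives the optimal interval. The failure rate and test duration below are assumed illustrative values.

      # Hedged sketch: component-level optimal surveillance test interval (STI).
      # Assumes U(T) ~ lambda*T/2 + t_test/T, a standard approximation rather
      # than the paper's exact model; lambda_f and t_test values are made up.
      import math

      lambda_f = 1.0e-5   # failures per hour (assumed)
      t_test = 2.0        # hours the component is unavailable during a test (assumed)

      def mean_unavailability(T):
          """Approximate mean unavailability over a test interval T (hours)."""
          return lambda_f * T / 2.0 + t_test / T

      # dU/dT = lambda_f/2 - t_test/T**2 = 0 gives the analytic optimum.
      T_opt = math.sqrt(2.0 * t_test / lambda_f)
      print(f"analytic optimum T* = {T_opt:.0f} h, U(T*) = {mean_unavailability(T_opt):.2e}")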

  3. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    Directory of Open Access Journals (Sweden)

    D. Ramyachitra

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies within the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples. Thus the difficulty is that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN), Interval Valued Classification (IVC) and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  4. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    Science.gov (United States)

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies within the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples. Thus the difficulty is that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using existing algorithms such as Support Vector Machine (SVM), K-nearest neighbor (KNN), Interval Valued Classification (IVC) and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  5. Global Optimization using Interval Analysis : Interval Optimization for Aerospace Applications

    NARCIS (Netherlands)

    Van Kampen, E.

    2010-01-01

    Optimization is an important element in aerospace related research. It is encountered for example in trajectory optimization problems, such as: satellite formation flying, spacecraft re-entry optimization and airport approach and departure optimization; in control optimization, for example in

  6. Optimal Data Interval for Estimating Advertising Response

    OpenAIRE

    Gerard J. Tellis; Philip Hans Franses

    2006-01-01

    The abundance of highly disaggregate data (e.g., at five-second intervals) raises the question of the optimal data interval to estimate advertising carryover. The literature assumes that (1) the optimal data interval is the interpurchase time, (2) too disaggregate data causes a disaggregation bias, and (3) recovery of true parameters requires assumption of the underlying advertising process. In contrast, we show that (1) the optimal data interval is what we call , (2) too disaggregate data do...

  7. Estimation of individual reference intervals in small sample sizes

    DEFF Research Database (Denmark)

    Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz

    2007-01-01

    In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires...... from various variables such as gender, age, BMI, alcohol, smoking, and menopause. The reference intervals were compared to reference intervals calculated using IFCC recommendations. Where comparable, the IFCC calculated reference intervals had a wider range compared to the variance component models...

  8. An Interval Bound Algorithm of optimizing reactor core loading pattern by using reactivity interval schema

    International Nuclear Information System (INIS)

    Gong Zhaohu; Wang Kan; Yao Dong

    2011-01-01

    Highlights: → We present a new Loading Pattern Optimization method - Interval Bound Algorithm (IBA). → IBA directly uses the reactivity of fuel assemblies and burnable poison. → IBA can optimize fuel assembly orientation in a coupled way. → Numerical experiment shows that IBA outperforms genetic algorithm and engineers. → We devise DDWF technique to deal with multiple objectives and constraints. - Abstract: In order to optimize the core loading pattern in Nuclear Power Plants, the paper presents a new optimization method - Interval Bound Algorithm (IBA). Similar to typical population-based algorithms, e.g. genetic algorithm, IBA maintains a population of solutions and evolves them during the optimization process. IBA acquires the solution by statistical learning and sampling the control variable intervals of the population in each iteration. The control variables are the transforms of the reactivity of fuel assemblies or the worth of burnable poisons, which are the crucial heuristic information for loading pattern optimization problems. IBA can deal with the relationship between the dependent variables by defining the control variables. Based on the IBA algorithm, a parallel Loading Pattern Optimization code, named IBALPO, has been developed. To deal with multiple objectives and constraints, Dynamic Discontinuous Weight Factors (DDWF) for the fitness function have been used in IBALPO. Finally, the code system has been used to solve a realistic reloading problem, and a better pattern has been obtained than those found by engineers and the genetic algorithm, thus demonstrating the performance of the code.

  9. Optimization of Spacecraft Rendezvous and Docking using Interval Analysis

    NARCIS (Netherlands)

    Van Kampen, E.; Chu, Q.P.; Mulder, J.A.

    2010-01-01

    This paper applies interval optimization to the fixed-time multiple impulse rendezvous and docking problem. Current methods for solving this type of optimization problem include for example genetic algorithms and gradient based optimization. Unlike these methods, interval methods can guarantee that

  10. Application of the entropic coefficient for interval number optimization during interval assessment

    Directory of Open Access Journals (Sweden)

    Tynynyka A. N.

    2017-06-01

    In solving many statistical problems, the distribution law of a random variable must be chosen as precisely as possible from the observed sample. This choice requires the construction of an interval series; therefore, the problem arises of assigning an optimal number of intervals, and this study proposes a number of formulas for solving it. Which of these formulas solves the problem more accurately? In [9], this question is investigated using the Pearson criterion. This article describes the procedure and, on its basis, evaluates formulas available in the literature together with newly proposed formulas using the entropy coefficient. A comparison is made with the previously published results of applying Pearson's goodness-of-fit criterion for these purposes. Differences in the estimates of the accuracy of the formulas are found. The proposed new formulas for calculating the number of intervals showed the best results. Calculations have been made to compare the performance of the same formulas for sample data distributed according to the normal law and the Rayleigh law.
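
    As a rough illustration of choosing the number of intervals for a sample, the sketch below compares a few textbook bin-count formulas using a normalized entropy of the binned data; this entropy score is a stand-in for comparison purposes and is not necessarily the entropy coefficient defined in the paper.

      # Hedged sketch: comparing candidate numbers of histogram intervals.
      # The normalized-entropy score is an illustrative stand-in, not the
      # paper's entropy coefficient; the sample is simulated.
      import numpy as np

      rng = np.random.default_rng(0)
      sample = rng.normal(size=500)           # assumed normally distributed sample
      n = sample.size

      candidates = {
          "Sturges": int(np.ceil(np.log2(n) + 1)),
          "Rice":    int(np.ceil(2 * n ** (1 / 3))),
          "sqrt":    int(np.ceil(np.sqrt(n))),
      }

      for name, k in candidates.items():
          counts, _ = np.histogram(sample, bins=k)
          p = counts[counts > 0] / n
          entropy = -np.sum(p * np.log(p))    # Shannon entropy of the binned sample
          score = entropy / np.log(k)         # normalized against the uniform maximum
          print(f"{name:8s} k={k:3d}  normalized entropy = {score:.3f}")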

  11. Optimal prediction intervals of wind power generation

    DEFF Research Database (Denmark)

    Wan, Can; Wu, Zhao; Pinson, Pierre

    2014-01-01

    direct optimization of both the coverage probability and sharpness to ensure the quality. The proposed method does not involve the statistical inference or distribution assumption of forecasting errors needed in most existing methods. Case studies using real wind farm data from Australia have been...

  12. Discrete-time optimal control and games on large intervals

    CERN Document Server

    Zaslavski, Alexander J

    2017-01-01

    Devoted to the structure of approximate solutions of discrete-time optimal control problems and approximate solutions of dynamic discrete-time two-player zero-sum games, this book presents results on properties of approximate solutions on intervals that are independent of their length, for all sufficiently large intervals. Results concerning the so-called turnpike property of optimal control problems and zero-sum games in the regions close to the endpoints of the time intervals are the main focus of this book. The description of the structure of approximate solutions on sufficiently large intervals and its stability will interest graduate students and mathematicians in optimal control and game theory, engineering, and economics. This book begins with a brief overview and moves on to analyze the structure of approximate solutions of autonomous nonconcave discrete-time optimal control Lagrange problems. Next the structures of approximate solutions of autonomous discrete-time optimal control problems that are discret...

  13. Interval sampling methods and measurement error: a computer simulation.

    Science.gov (United States)

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
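
    A minimal simulation in the spirit of the study described above, assuming made-up event parameters: it scores one observation period with momentary time sampling (MTS), partial-interval recording (PIR), and whole-interval recording (WIR) and compares each estimate with the true proportion of time the behavior occurred.

      # Hedged sketch of an interval-sampling error simulation; all parameters
      # (period, interval, number and duration of events) are assumptions.
      import numpy as np

      rng = np.random.default_rng(1)
      period = 3600            # observation period in seconds (assumed)
      interval = 30            # sampling interval in seconds (assumed)
      state = np.zeros(period, dtype=bool)

      # Randomly place 20 events of 40 s each (assumed parameters).
      for start in rng.integers(0, period - 40, size=20):
          state[start:start + 40] = True

      true_prop = state.mean()
      edges = np.arange(0, period, interval)
      mts = np.mean([state[min(e + interval, period) - 1] for e in edges])  # last moment of interval
      pir = np.mean([state[e:e + interval].any() for e in edges])
      wir = np.mean([state[e:e + interval].all() for e in edges])

      print(f"true proportion  {true_prop:.3f}")
      print(f"MTS estimate     {mts:.3f}")
      print(f"PIR estimate     {pir:.3f}  (tends to overestimate)")
      print(f"WIR estimate     {wir:.3f}  (tends to underestimate)")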

  14. Trajectory Optimization Based on Multi-Interval Mesh Refinement Method

    Directory of Open Access Journals (Sweden)

    Ningbo Li

    2017-01-01

    In order to improve the optimization accuracy and convergence rate for trajectory optimization of the air-to-air missile, a multi-interval mesh refinement Radau pseudospectral method was introduced. This method made the mesh endpoints converge to the practical nonsmooth points and decreased the overall number of collocation points to improve convergence rate and computational efficiency. The trajectory was divided into four phases according to the working time of the engine and the handover of midcourse and terminal guidance, and then the optimization model was built. The multi-interval mesh refinement Radau pseudospectral method with different collocation points in each mesh interval was used to solve the trajectory optimization model. Moreover, this method was compared with the traditional h method. Simulation results show that this method can decrease the dimensionality of the nonlinear programming (NLP) problem and therefore improve the efficiency of pseudospectral methods for solving trajectory optimization problems.

  15. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.

  16. Process control and optimization with simple interval calculation method

    DEFF Research Database (Denmark)

    Pomerantsev, A.; Rodionova, O.; Høskuldsson, Agnar

    2006-01-01

    for the quality improvement in the course of production. The latter is an active quality optimization, which takes into account the actual history of the process. The advocated approach is allied to the conventional method of multivariate statistical process control (MSPC) as it also employs the historical process......Methods of process control and optimization are presented and illustrated with a real-world example. The optimization methods are based on the PLS block modeling as well as on the simple interval calculation methods of interval prediction and object status classification. It is proposed to employ...... the series of expanding PLS/SIC models in order to support the on-line process improvements. This method helps to predict the effect of planned actions on the product quality and thus enables passive quality control. We have also considered an optimization approach that proposes the correcting actions...

  17. Relativistic rise measurements with very fine sampling intervals

    International Nuclear Information System (INIS)

    Ludlam, T.; Platner, E.D.; Polychronakos, V.A.; Lindenbaum, S.J.; Kramer, M.A.; Teramoto, Y.

    1980-01-01

    The motivation of this work was to determine whether the technique of charged particle identification via the relativistic rise in the ionization loss can be significantly improved by virtue of very small sampling intervals. A fast-sampling ADC and a longitudinal drift geometry were used to provide a large number of samples from a single drift chamber gap, achieving sampling intervals roughly 10 times smaller than any previous study. A single layer drift chamber was used, and tracks of 1 meter length were simulated by combining together samples from many identified particles in this detector. These data were used to study the resolving power for particle identification as a function of sample size, averaging technique, and the number of discrimination levels (ADC bits) used for pulse height measurements

  18. Symbol interval optimization for molecular communication with drift.

    Science.gov (United States)

    Kim, Na-Rae; Eckford, Andrew W; Chae, Chan-Byoung

    2014-09-01

    In this paper, we propose a symbol interval optimization algorithm for molecular communication with drift. Proper symbol intervals are important in practical communication systems since information needs to be sent as fast as possible with low error rates. There is a trade-off, however, between symbol intervals and inter-symbol interference (ISI) from Brownian motion. Thus, we find proper symbol interval values considering the ISI inside two kinds of blood vessels, and also suggest an ISI-free system for strong drift models. Finally, an isomer-based molecule shift keying (IMoSK) is applied to calculate achievable data transmission rates (achievable rates, hereafter). Normalized achievable rates are also obtained and compared in one-symbol ISI and no-ISI systems.

  19. β-NMR sample optimization

    CERN Document Server

    Zakoucka, Eva

    2013-01-01

    During my summer student programme I was working on sample optimization for a new β-NMR project at the ISOLDE facility. The β-NMR technique is well-established in solid-state physics and just recently it is being introduced for applications in biochemistry and life sciences. The β-NMR collaboration will be applying for beam time to the INTC committee in September for three nuclei: Cu, Zn and Mg. Sample optimization for Mg was already performed last year during the summer student programme. Therefore sample optimization for Cu and Zn had to be completed as well for the project proposal. My part in the project was to perform thorough literature research on techniques studying Cu and Zn complexes in native conditions, search for relevant binding candidates for Cu and Zn applicable for β-NMR and eventually evaluate selected binding candidates using UV-VIS spectrometry.

  20. An Improvement to Interval Estimation for Small Samples

    Directory of Open Access Journals (Sweden)

    SUN Hui-Ling

    2017-02-01

    Because it is difficult to determine the probability distribution of a small sample, traditional probability theory is ill-suited to parameter estimation for small samples, and the Bayes Bootstrap method is commonly used in practice. The Bayes Bootstrap method has its own limitations, however, so this article presents an improvement to it. The improved method extends the amount of samples by numerical simulation without changing the character of the original small sample, and it can give accurate interval estimates for small samples. Finally, Monte Carlo simulation is used to model specific small-sample problems. The effectiveness and practicability of the improved Bootstrap method are demonstrated.
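
    For contrast with the improved method, the sketch below shows the plain Bayesian bootstrap interval (Rubin-style Dirichlet weights) that the paper takes as its starting point; the data values are made up.

      # Hedged sketch: Bayesian bootstrap interval for the mean of a small sample.
      # This is the baseline method, not the paper's improved procedure.
      import numpy as np

      rng = np.random.default_rng(2)
      sample = np.array([9.8, 10.4, 10.1, 9.6, 10.9])   # assumed small sample
      B = 10000

      means = np.empty(B)
      for b in range(B):
          w = rng.dirichlet(np.ones(sample.size))       # Dirichlet(1,...,1) weights
          means[b] = np.dot(w, sample)

      lo, hi = np.percentile(means, [2.5, 97.5])
      print(f"95% Bayesian-bootstrap interval for the mean: [{lo:.2f}, {hi:.2f}]")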

  1. Optimal time interval for induction of immunologic adaptive response

    International Nuclear Information System (INIS)

    Ju Guizhi; Song Chunhua; Liu Shuzheng

    1994-01-01

    The optimal time interval between prior dose (D1) and challenge dose (D2) for the induction of immunologic adaptive response was investigated. Kunming mice were exposed to 75 mGy X-rays at a dose rate of 12.5 mGy/min. 3, 6, 12, 24 or 60 h after the prior irradiation, the mice were challenged with a dose of 1.5 Gy at a dose rate of 0.33 Gy/min. 18 h after D2, the mice were sacrificed for examination of immunological parameters. The results showed that with an interval of 6 h between D1 and D2, the adaptive response of the reaction of splenocytes to LPS was induced, and with an interval of 12 h the adaptive responses of spontaneous incorporation of 3H-TdR into thymocytes and the reaction of splenocytes to Con A and LPS were induced with 75 mGy prior irradiation. The data suggested that the optimal time intervals between D1 and D2 for the induction of immunologic adaptive response were 6 h and 12 h with a D1 of 75 mGy and a D2 of 1.5 Gy. The mechanism of immunologic adaptation following low dose radiation is discussed.

  2. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem by maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from becoming stuck in local optima, as usually accompanies conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
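
    The underlying objective can be illustrated with a toy design problem: choose sampling times that maximize the Fisher information (and so minimize the variance of the parameter estimate) for a one-parameter exponential decay. The brute-force search below stands in for the paper's quantum-inspired evolutionary algorithm; the model and all numbers are assumptions.

      # Hedged sketch of optimal sampling-time selection for a toy model
      # y(t) = exp(-k*t) with Gaussian noise; not the paper's algorithm.
      import itertools
      import numpy as np

      k_nominal = 0.5                          # assumed nominal rate constant
      candidate_times = np.linspace(0.5, 10.0, 20)

      def fisher_information(times, k=k_nominal, sigma=0.05):
          times = np.asarray(times)
          sens = -times * np.exp(-k * times)   # sensitivity dy/dk at each time
          return float(np.sum(sens ** 2)) / sigma ** 2

      best = max(itertools.combinations(candidate_times, 3), key=fisher_information)
      print("D-optimal 3-point design (toy model):", [round(t, 2) for t in best])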

  3. Number of core samples: Mean concentrations and confidence intervals

    International Nuclear Information System (INIS)

    Jensen, L.; Cromar, R.D.; Wilmarth, S.R.; Heasler, P.G.

    1995-01-01

    This document provides estimates of how well the mean concentrations of analytes are known as a function of the number of core samples, composite samples, and replicate analyses. The estimates are based upon core composite data from nine recently sampled single-shell tanks. The results can be used when determining the number of core samples needed to ''characterize'' the waste from similar single-shell tanks. A standard way of expressing uncertainty in the estimate of a mean is with a 95% confidence interval (CI). The authors investigate how the width of a 95% CI on the mean concentration decreases as the number of observations increases. Specifically, the tables and figures show how the relative half-width (RHW) of a 95% CI decreases as the number of core samples increases. The RHW of a CI is a unit-less measure of uncertainty. The general conclusions are as follows: (1) the RHW decreases dramatically as the number of core samples is increased; the decrease is much smaller when the number of composited samples or the number of replicate analyses is increased; (2) if the mean concentration of an analyte needs to be estimated with a small RHW, then a large number of core samples is required. The estimated numbers of core samples given in the tables and figures were determined by specifying different sizes of the RHW. Four nominal sizes were examined: 10%, 25%, 50%, and 100% of the observed mean concentration. For a majority of analytes the number of core samples required to achieve an accuracy within 10% of the mean concentration is extremely large. In many cases, however, two or three core samples are sufficient to achieve a RHW of approximately 50 to 100%. Because many of the analytes in the data have small concentrations, this level of accuracy may be satisfactory for some applications.
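
    The qualitative conclusion above follows from the usual t-based confidence interval: the relative half-width falls roughly as 1/sqrt(n). A small sketch, with an assumed relative standard deviation for a single core sample:

      # Hedged sketch: relative half-width (RHW) of a 95% CI on the mean versus
      # the number of core samples; the 50% relative standard deviation is assumed.
      import math
      from scipy import stats

      rsd = 0.5   # assumed relative standard deviation of a single core sample

      for n in (2, 3, 5, 10, 20, 50):
          t = stats.t.ppf(0.975, df=n - 1)      # two-sided 95% t quantile
          rhw = t * rsd / math.sqrt(n)          # half-width relative to the mean
          print(f"n = {n:3d}   RHW ≈ {100 * rhw:5.1f}% of the mean")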

  4. Optimal interval for major maintenance actions in electricity distribution networks

    Energy Technology Data Exchange (ETDEWEB)

    Louit, Darko; Pascual, Rodrigo [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna MacKenna, 4860 Santiago (Chile); Banjevic, Dragan [Centre for Maintenance Optimization and Reliability Engineering, University of Toronto, 5 King's College Rd., Toronto, Ontario (Canada)

    2009-09-15

    Many systems require the periodic undertaking of major (preventive) maintenance actions (MMAs) such as overhauls in mechanical equipment, reconditioning of train lines, resurfacing of roads, etc. In the long term, these actions contribute to achieving a lower rate of occurrence of failures, though in many cases they increase the intensity of the failure process shortly after they are performed, resulting in a non-monotonic trend for failure intensity. Also, in the special case of distributed assets such as communications and energy networks, pipelines, etc., it is likely that the maintenance action takes place sequentially over an extended period of time, implying that different sections of the network underwent the MMAs at different periods. This forces the development of a model based on a relative time scale (i.e. time since last major maintenance event) and the combination of data from different sections of a grid, under a normalization scheme. Additionally, extended maintenance times and sequential execution of the MMAs make it difficult to identify failures occurring before and after the preventive maintenance action. This results in the loss of important information for the characterization of the failure process. A simple model is introduced to determine the optimal MMA interval considering such restrictions. Furthermore, a case study illustrates the optimal tree trimming interval around an electricity distribution network. (author)

  5. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend this to two-stage sampling, in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.

  6. Optimal Wind Power Uncertainty Intervals for Electricity Market Operation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Ying; Zhou, Zhi; Botterud, Audun; Zhang, Kaifeng

    2018-01-01

    It is important to select an appropriate uncertainty level of the wind power forecast for power system scheduling and electricity market operation. Traditional methods hedge against a predefined level of wind power uncertainty, such as a specific confidence interval or uncertainty set, which leaves open the question of how best to select the appropriate uncertainty level. To bridge this gap, this paper proposes a model to optimize the forecast uncertainty intervals of wind power for power system scheduling problems, with the aim of achieving the best trade-off between economics and reliability. We then reformulate and linearize the models into a mixed integer linear programming (MILP) problem without strong assumptions on the shape of the probability distribution. In order to investigate the impacts on cost, reliability, and prices in an electricity market, we apply the proposed model to a two-settlement electricity market based on a six-bus test system and on a power system representing the U.S. state of Illinois. The results show that the proposed method can not only help to balance the economics and reliability of power system scheduling, but also help to stabilize the energy prices in electricity market operation.

  7. The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-12-01

    With given levels of confidence and system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers' demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system; the corresponding prediction error samples are obtained by the price stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (Radial Basis Function) neural network methods, forecast the intervals of the soybean meal and non-GMO (Genetically Modified Organism) soybean continuous futures closing prices and implement unconditional coverage, independence and conditional coverage tests for the simulation results. The empirical results are compared using various interval evaluation indicators, different levels of noise, several target confidence levels and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability with the reduction of system entropy; the hierarchical estimation error method can obtain higher accuracy and better interval estimation than the non-hierarchical method in a stable system.

  8. Optimization of Allowed Outage Time and Surveillance Test Intervals

    Energy Technology Data Exchange (ETDEWEB)

    Al-Dheeb, Mujahed; Kang, Sunkoo; Kim, Jonghyun [KEPCO international nuclear graduate school, Ulsan (Korea, Republic of)

    2015-10-15

    The primary purpose of surveillance testing is to assure that the components of standby safety systems will be operable when they are needed in an accident. By testing these components, failures can be detected that may have occurred since the last test or the time when the equipment was last known to be operational. The probability that a system or system component performs a specified function or mission under given conditions at a prescribed time is called availability (A). Unavailability (U) as a risk measure is just the complementary probability to A(t). An increase of U means the risk is increased as well. The allowed outage time (D) and surveillance test interval (T) have an important impact on component, or system, unavailability. The extension of D impacts the maintenance duration distributions for at-power operations, making them longer. This, in turn, increases the unavailability due to maintenance in the systems analysis. As for T, overly frequent surveillances can result in high system unavailability. This is because the system may be taken out of service often due to the surveillance itself and due to the repair of test-caused failures of the component. The test-caused failures include those incurred by wear and tear of the component due to the surveillances. On the other hand, as the surveillance interval increases, the component's unavailability will grow because of increased occurrences of time-dependent random failures. In that situation, the component cannot be relied upon, and accordingly the system unavailability will increase. Thus, there should be an optimal component surveillance interval in terms of the corresponding system availability. This paper aims at finding the optimal T and D which result in minimum unavailability, which in turn reduces the risk. The methodology in section 2 is applied to find the values of optimal T and D for two components, i.e., the safety injection pump (SIP) and the turbine-driven auxiliary feedwater pump (TDAFP). Section 4 addresses the interaction between D and T. In general

  9. Optimization of Allowed Outage Time and Surveillance Test Intervals

    International Nuclear Information System (INIS)

    Al-Dheeb, Mujahed; Kang, Sunkoo; Kim, Jonghyun

    2015-01-01

    The primary purpose of surveillance testing is to assure that the components of standby safety systems will be operable when they are needed in an accident. By testing these components, failures can be detected that may have occurred since the last test or the time when the equipment was last known to be operational. The probability that a system or system component performs a specified function or mission under given conditions at a prescribed time is called availability (A). Unavailability (U) as a risk measure is just the complementary probability to A(t). An increase of U means the risk is increased as well. The allowed outage time (D) and surveillance test interval (T) have an important impact on component, or system, unavailability. The extension of D impacts the maintenance duration distributions for at-power operations, making them longer. This, in turn, increases the unavailability due to maintenance in the systems analysis. As for T, overly frequent surveillances can result in high system unavailability. This is because the system may be taken out of service often due to the surveillance itself and due to the repair of test-caused failures of the component. The test-caused failures include those incurred by wear and tear of the component due to the surveillances. On the other hand, as the surveillance interval increases, the component's unavailability will grow because of increased occurrences of time-dependent random failures. In that situation, the component cannot be relied upon, and accordingly the system unavailability will increase. Thus, there should be an optimal component surveillance interval in terms of the corresponding system availability. This paper aims at finding the optimal T and D which result in minimum unavailability, which in turn reduces the risk. The methodology in section 2 is applied to find the values of optimal T and D for two components, i.e., the safety injection pump (SIP) and the turbine-driven auxiliary feedwater pump (TDAFP). Section 4 addresses the interaction between D and T. In general

  10. An optimal dynamic interval preventive maintenance scheduling for series systems

    International Nuclear Information System (INIS)

    Gao, Yicong; Feng, Yixiong; Zhang, Zixian; Tan, Jianrong

    2015-01-01

    This paper studies preventive maintenance (PM) with a dynamic interval for a multi-component system. Instead of an equal interval, the PM period in the proposed dynamic interval model is not a fixed constant; it varies between an interval lower bound and an interval upper bound. Compared with a periodic PM scheme, controlling the equipment maintenance frequency in this way helps reduce the outage loss from frequently repaired parts and avoid under-maintenance of the equipment. According to the definition of the dynamic interval, the reliability of the system is analyzed from the failure mechanisms of its components and the different effects of non-periodic PM actions on the reliability of the components. Following the proposed reliability model, a novel framework for solving the non-periodic PM schedule with dynamic interval based on the multi-objective genetic algorithm is proposed. The framework includes updating, deleting, inserting, and moving strategies, which are used to correct invalid population individuals in the algorithm. The values of the dynamic interval and the selections of PM actions for the components at every PM stage are determined by achieving a certain level of system availability with the minimum total PM-related cost. Finally, a typical rotary table system of an NC machine tool is used as an example to describe the proposed method. - Highlights: • A non-periodic preventive maintenance scheduling model is proposed. • A framework for solving the non-periodical PM schedule problem is developed. • The interval of non-periodic PM is flexible and the schedule can be better adjusted. • Dynamic interval leads to more efficient solutions than fixed interval does

  11. Binomial Distribution Sample Confidence Intervals Estimation 1. Sampling and Medical Key Parameters Calculation

    Directory of Open Access Journals (Sweden)

    Tudor DRUGAN

    2003-08-01

    The aim of the paper was to present the usefulness of the binomial distribution in studying contingency tables and the problems of approximating the binomial distribution to normality (the limits, advantages, and disadvantages). Classifying the medical key parameters reported in the medical literature and expressing them in contingency table units based on their mathematical expressions restricts the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information starting from the computed confidence interval for a specified method - information such as confidence interval boundaries, percentages of experimental errors, the standard deviation of the experimental errors and the deviation relative to the significance level - was solved through the implementation of original algorithms in the PHP programming language. Expressions containing two binomial variables were treated separately. An original method of computing the confidence interval for the case of a two-variable expression was proposed and implemented. The graphical representation of an expression of two binomial variables, for which the variation domain of one variable depends on the other, was a real problem, because most software uses interpolation for graphical representation and the surface maps were quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP in order to represent the triangular surface plots graphically. All of the implementation described above was used in computing the confidence intervals and estimating their performance for binomial distributions at various sample sizes and variable values.
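
    Two standard, generic ways of putting a 95% confidence interval on a binomial proportion are sketched below (Wilson score and exact Clopper-Pearson); this is textbook material for orientation, not the paper's PHP implementation or its two-variable method.

      # Hedged sketch: binomial proportion confidence intervals.
      from scipy import stats

      def wilson(x, n, z=1.959964):
          p = x / n
          denom = 1 + z * z / n
          centre = (p + z * z / (2 * n)) / denom
          half = z * ((p * (1 - p) / n + z * z / (4 * n * n)) ** 0.5) / denom
          return centre - half, centre + half

      def clopper_pearson(x, n, alpha=0.05):
          lo = 0.0 if x == 0 else stats.beta.ppf(alpha / 2, x, n - x + 1)
          hi = 1.0 if x == n else stats.beta.ppf(1 - alpha / 2, x + 1, n - x)
          return lo, hi

      print("Wilson         :", wilson(3, 20))
      print("Clopper-Pearson:", clopper_pearson(3, 20))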

  12. The optimal sampling of outsourcing product

    International Nuclear Information System (INIS)

    Yang Chao; Pei Jiacheng

    2014-01-01

    In order to improve quality and reduce cost, c = 0 sampling has been introduced for the inspection of outsourced product. According to the current quality level (p = 0.4%), we confirmed the optimal sampling plan: Ac = 0; if N ≤ 3000, n = 55; if 3001 ≤ N ≤ 10000, n = 86; if N ≥ 10001, n = 108. Through analysis of the OC curve, we came to the conclusion that when N ≤ 3000, the protective ability of the optimal sampling plan for product quality is stronger than that of the current sampling plan. Corresponding to the same 'consumer risk', the product quality under the optimal sampling plan is superior to that under the current sampling plan. (authors)
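
    The OC-curve reasoning can be sketched directly: for an Ac = 0 plan a lot is accepted only when the sample contains no defectives, so the acceptance probability is approximately (1 - p)^n (binomial approximation). The sample sizes below are those quoted in the record.

      # Hedged sketch: OC points for the c = 0 plans quoted above.
      def prob_accept(p, n):
          return (1.0 - p) ** n

      for n in (55, 86, 108):
          pa = prob_accept(0.004, n)        # current quality level p = 0.4%
          print(f"n = {n:3d}: P(accept lot at p = 0.4%) = {pa:.3f}")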

  13. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  14. RF power consumption emulation optimized with interval valued homotopies

    DEFF Research Database (Denmark)

    Musiige, Deogratius; Anton, François; Yatskevich, Vital

    2011-01-01

    This paper presents a methodology towards the emulation of the electrical power consumption of the RF device during the cellular phone/handset transmission mode using the LTE technology. The emulation methodology takes the physical environmental variables and the logical interface between...... the baseband and the RF system as inputs to compute the emulated power dissipation of the RF device. The emulated power, in between the measured points corresponding to the discrete values of the logical interface parameters is computed as a polynomial interpolation using polynomial basis functions....... The evaluation of polynomial and spline curve fitting models showed a respective divergence (test error) of 8% and 0.02% from the physically measured power consumption. The precisions of the instruments used for the physical measurements have been modeled as intervals. We have been able to model the power...

  15. Effects of Spatial Sampling Interval on Roughness Parameters and Microwave Backscatter over Agricultural Soil Surfaces

    Directory of Open Access Journals (Sweden)

    Matías Ernesto Barber

    2016-06-01

    The spatial sampling interval, as related to the ability to digitize a soil profile with a certain number of features per unit length, depends on the profiling technique itself. From a variety of profiling techniques, roughness parameters are estimated at different sampling intervals. Since soil profiles have continuous spectral components, it is clear that roughness parameters are influenced by the sampling interval of the measurement device employed. In this work, we contribute to answering the question of which sampling interval profiles need to be measured at to accurately account for the microwave response of agricultural surfaces. For this purpose, a 2-D laser profiler was built and used to measure surface soil roughness at field scale over agricultural sites in Argentina. Sampling intervals ranged from large (50 mm) to small (1 mm), with several intermediate values. Large- and intermediate-sampling-interval profiles were synthetically derived from nominal 1 mm ones. With these data, the effect of sampling-interval-dependent roughness parameters on backscatter response was assessed using the theoretical backscatter model IEM2M. Simulations demonstrated that variations of the roughness parameters depended on the working wavelength and were less important at L-band than at C- or X-band. In any case, an underestimation of the backscattering coefficient of about 1-4 dB was observed at larger sampling intervals. As a general rule, a sampling interval of 15 mm can be recommended for L-band and 5 mm for C-band.
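
    A minimal sketch of the sampling-interval effect on the two usual roughness parameters (RMS height and correlation length), using a synthetic profile in place of the measured laser profiles; the profile model and all numbers are assumptions.

      # Hedged sketch: subsample a synthetic 1 mm profile and recompute roughness.
      import numpy as np

      rng = np.random.default_rng(3)
      profile = np.convolve(rng.normal(size=2000),        # 2 m profile at 1 mm spacing (assumed)
                            np.ones(25) / 25, mode="same")

      def roughness(z):
          z = z - z.mean()
          s = z.std()                                      # RMS height
          acf = np.correlate(z, z, mode="full")[z.size - 1:]
          acf = acf / acf[0]
          l = np.argmax(acf < 1 / np.e)                    # first lag below 1/e
          return s, l

      for step in (1, 5, 15, 50):                          # sampling intervals in mm
          s, l = roughness(profile[::step])
          print(f"interval {step:2d} mm: RMS height = {s:.3f}, corr. length ≈ {l * step} mm")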

  16. Multi-objective reliability redundancy allocation in an interval environment using particle swarm optimization

    International Nuclear Information System (INIS)

    Zhang, Enze; Chen, Qingwei

    2016-01-01

    Most of the existing works addressing reliability redundancy allocation problems are based on the assumption of fixed reliabilities of components. In real-life situations, however, the reliabilities of individual components may be imprecise, most often given as intervals, under different operating or environmental conditions. This paper deals with reliability redundancy allocation problems modeled in an interval environment. An interval multi-objective optimization problem is formulated from the original crisp one, where system reliability and cost are simultaneously considered. To render the multi-objective particle swarm optimization (MOPSO) algorithm capable of dealing with interval multi-objective optimization problems, a dominance relation for interval-valued functions is defined with the help of our newly proposed order relations of interval-valued numbers. Then, the crowding distance is extended to the multi-objective interval-valued case. Finally, the effectiveness of the proposed approach has been demonstrated through two numerical examples and a case study of supervisory control and data acquisition (SCADA) system in water resource management. - Highlights: • We model the reliability redundancy allocation problem in an interval environment. • We apply the particle swarm optimization directly on the interval values. • A dominance relation for interval-valued multi-objective functions is defined. • The crowding distance metric is extended to handle imprecise objective functions.
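
    One common way to order interval-valued objectives, used here only as an illustration of the kind of relation the paper defines, is to compare midpoints and fall back to half-widths; dominance then follows the usual Pareto definition.

      # Hedged sketch: an interval order relation and Pareto dominance for
      # interval-valued minimization objectives (a convention, not necessarily
      # the paper's exact definition).
      from dataclasses import dataclass

      @dataclass(frozen=True)
      class Interval:
          lo: float
          hi: float

          @property
          def mid(self):
              return 0.5 * (self.lo + self.hi)

          @property
          def width(self):
              return 0.5 * (self.hi - self.lo)

      def interval_leq(a, b):
          """a is at least as good as b for a minimization objective."""
          if a.mid != b.mid:
              return a.mid < b.mid
          return a.width <= b.width        # prefer the less uncertain interval

      def dominates(fa, fb):
          """fa, fb: lists of interval-valued objectives (minimization)."""
          at_least = all(interval_leq(x, y) for x, y in zip(fa, fb))
          strictly = any(interval_leq(x, y) and not interval_leq(y, x)
                         for x, y in zip(fa, fb))
          return at_least and strictly

      print(dominates([Interval(0.1, 0.3), Interval(2, 4)],
                      [Interval(0.2, 0.5), Interval(3, 5)]))   # True under this convention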

  17. A parallel optimization method for product configuration and supplier selection based on interval

    Science.gov (United States)

    Zheng, Jian; Zhang, Meng; Li, Guoxi

    2017-06-01

    In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine the product configuration and supplier selection, and express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions to the interval multiobjective optimization model.

  18. Identification of optimal inspection interval via delay-time concept

    Directory of Open Access Journals (Sweden)

    Glauco Ricardo Simões Gomes

    2016-06-01

    Full Text Available This paper presents an application of mathematical modeling aimed at managing maintenance based on the delay-time concept. The study scenario was the manufacturing sector of an industrial unit, which operates 24 hours a day in a continuous flow of production. The main idea was to use the concepts of this approach to determine the optimal time of preventive action by the maintenance department in order to ensure the greatest availability of equipment and facilities at appropriate maintenance costs. After a brief introduction of the subject, the article presents topics that illustrate the importance of mathematical modeling in maintenance management and the delay-time concept. It also describes the characteristics of the company where the study was conducted, as well as the data related to the production process and maintenance actions. Finally, the results obtained after applying the delay-time concept are presented and discussed, as well as the limitations of the article and the proposals for future research.

  19. Optimal parallel algorithms for problems modeled by a family of intervals

    Science.gov (United States)

    Olariu, Stephan; Schwing, James L.; Zhang, Jingyuan

    1992-01-01

    A family of intervals on the real line provides a natural model for a vast number of scheduling and VLSI problems. Recently, a number of parallel algorithms to solve a variety of practical problems on such a family of intervals have been proposed in the literature. Computational tools are developed, and it is shown how they can be used for the purpose of devising cost-optimal parallel algorithms for a number of interval-related problems including finding a largest subset of pairwise nonoverlapping intervals, a minimum dominating subset of intervals, along with algorithms to compute the shortest path between a pair of intervals and, based on the shortest path, a parallel algorithm to find the center of the family of intervals. More precisely, with an arbitrary family of n intervals as input, all algorithms run in O(log n) time using O(n) processors in the EREW-PRAM model of computation.
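
    For orientation, one of the listed problems has a well-known sequential solution: a largest subset of pairwise non-overlapping intervals can be found greedily by sorting on right endpoints. The sketch below shows that sequential greedy version only; the paper's contribution is the cost-optimal EREW-PRAM parallel formulation, which is not reproduced here.

      # Hedged sketch: sequential greedy maximum set of non-overlapping intervals.
      def max_nonoverlapping(intervals):
          chosen, last_end = [], float("-inf")
          for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
              if lo >= last_end:            # treat intervals as half-open [lo, hi)
                  chosen.append((lo, hi))
                  last_end = hi
          return chosen

      print(max_nonoverlapping([(1, 4), (2, 3), (3, 5), (6, 8), (5, 7)]))
      # -> [(2, 3), (3, 5), (5, 7)]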

  20. Optimal test intervals for shutdown systems for the Cernavoda nuclear power station

    International Nuclear Information System (INIS)

    Negut, Gh.; Laslau, F.

    1993-01-01

    The Cernavoda nuclear power station required a complete PSA study. As a part of this study, an important goal to enhance the effectiveness of plant operation is to establish optimal test intervals for the important engineered safety systems. The paper briefly presents the current methods for optimizing test intervals. Vesely's method was used to establish optimal test intervals, and the FRANTIC code was used to survey the influence of the test intervals on system availability. The applications were done on Shutdown System No. 1, a shutdown system provided with solid rods, and on Shutdown System No. 2, provided with poison injection. The shutdown systems receive nine totally independent scram signals that dictate the test interval. Fault trees for both safety systems were developed. For the fault tree solutions an original code developed in our Institute was used. The results, intended to be implemented in the technical specifications for testing and operation of the Cernavoda NPS, are presented.

  1. Interval estimation methods of the mean in small sample situation and the results' comparison

    International Nuclear Information System (INIS)

    Wu Changli; Guo Chunying; Jiang Meng; Lin Yuangen

    2009-01-01

    The methods of interval estimation of the sample mean, namely the classical method, the Bootstrap method, the Bayesian Bootstrap method, the Jackknife method and the spread method of the empirical characteristic distribution function, are described. Numerical calculations of the intervals for the sample mean are carried out for sample sizes of 4, 5, and 6. The results indicate that the Bootstrap method and the Bayesian Bootstrap method are much more appropriate than the others in small-sample situations. (authors)

  2. Technical note: Instantaneous sampling intervals validated from continuous video observation for behavioral recording of feedlot lambs.

    Science.gov (United States)

    Pullin, A N; Pairis-Garcia, M D; Campbell, B J; Campler, M R; Proudfoot, K L

    2017-11-01

    When considering methodologies for collecting behavioral data, continuous sampling provides the most complete and accurate data set whereas instantaneous sampling can provide similar results and also increase the efficiency of data collection. However, instantaneous time intervals require validation to ensure accurate estimation of the data. Therefore, the objective of this study was to validate scan sampling intervals for lambs housed in a feedlot environment. Feeding, lying, standing, drinking, locomotion, and oral manipulation were measured on 18 crossbred lambs housed in an indoor feedlot facility for 14 h (0600-2000 h). Data from continuous sampling were compared with data from instantaneous scan sampling intervals of 5, 10, 15, and 20 min using a linear regression analysis. Three criteria determined if a time interval accurately estimated behaviors: 1) R2 ≥ 0.90, 2) slope not statistically different from 1 (P > 0.05), and 3) intercept not statistically different from 0 (P > 0.05). Estimations for lying behavior were accurate up to 20-min intervals, whereas feeding and standing behaviors were accurate only at 5-min intervals (i.e., met all 3 regression criteria). Drinking, locomotion, and oral manipulation demonstrated poor associations for all tested intervals. The results from this study suggest that a 5-min instantaneous sampling interval will accurately estimate lying, feeding, and standing behaviors for lambs housed in a feedlot, whereas continuous sampling is recommended for the remaining behaviors. This methodology will contribute toward the efficiency, accuracy, and transparency of future behavioral data collection in lamb behavior research.
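
    The validation logic can be sketched as a regression of interval-based estimates on the continuous ("true") values, checked against the three criteria; the data below are simulated stand-ins, not the lamb observations.

      # Hedged sketch: regression check of an instantaneous sampling interval
      # against continuous observation (simulated data).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      true_prop = rng.uniform(0.1, 0.9, size=18)               # 18 animals (assumed)
      scan_prop = true_prop + rng.normal(0, 0.03, size=18)     # scan-sampled estimates

      res = stats.linregress(true_prop, scan_prop)
      ok = (res.rvalue ** 2 >= 0.90
            and abs(res.slope - 1) / res.stderr < 1.96             # slope ~ 1
            and abs(res.intercept) / res.intercept_stderr < 1.96)  # intercept ~ 0
      print(f"R2 = {res.rvalue ** 2:.3f}, slope = {res.slope:.3f}, "
            f"intercept = {res.intercept:.3f}, interval acceptable: {ok}")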

  3. Optimal sampling schemes applied in geology

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-05-01

    Presentation slides (Debba, CSIR, 2010): Optimal Sampling Schemes applied in Geology. Outline: introduction to hyperspectral remote sensing; objective of Study 1; study area; data used; methodology; results; background and research question for Study 2; study area and data; methodology; results; conclusions.

  4. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework including a multi-objective interval optimization model and an evidential reasoning (ER) approach to solve the unit sizing problem of small-scale integrated energy systems, with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. Aiming at simultaneously considering the cost and risk of a business investment, the average and deviation of life cycle cost (LCC) of the integrated energy system are formulated. In order to solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with multiple interests of an investor at the business decision making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports on the simulation results obtained using a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.

  5. The Gas Sampling Interval Effect on V˙O2peak Is Independent of Exercise Protocol.

    Science.gov (United States)

    Scheadler, Cory M; Garver, Matthew J; Hanson, Nicholas J

    2017-09-01

    There is a plethora of gas sampling intervals available during cardiopulmonary exercise testing to measure peak oxygen consumption (V˙O2peak). Different intervals can lead to altered V˙O2peak. Whether differences are affected by the exercise protocol or subject sample is not clear. The purpose of this investigation was to determine whether V˙O2peak differed because of the manipulation of sampling intervals and whether differences were independent of the protocol and subject sample. The first subject sample (24 ± 3 yr; V˙O2peak via 15-breath moving averages: 56.2 ± 6.8 mL·kg-1·min-1) completed the Bruce and the self-paced V˙O2max protocols. The second subject sample (21.9 ± 2.7 yr; V˙O2peak via 15-breath moving averages: 54.2 ± 8.0 mL·kg-1·min-1) completed the Bruce and the modified Astrand protocols. V˙O2peak was identified using five sampling intervals: 15-s block averages, 30-s block averages, 15-breath block averages, 15-breath moving averages, and 30-s block averages aligned to the end of exercise. Differences in V˙O2peak between intervals were determined using repeated-measures ANOVAs. The influence of subject sample on the sampling effect was determined using independent t-tests. There was a significant main effect of sampling interval on V˙O2peak for both protocols in each subject sample (Bruce and self-paced V˙O2max; Bruce and modified Astrand). V˙O2peak from the five sampling intervals followed a similar pattern for each protocol and subject sample, with the 15-breath moving average presenting the highest V˙O2peak. The effect of manipulating gas sampling intervals on V˙O2peak appears to be protocol and sample independent. These findings highlight our recommendation that the clinical and scientific community request and report the sampling interval whenever metabolic data are presented. The standardization of reporting would assist in the comparison of V˙O2peak.
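
    The different summary intervals can be applied to one breath-by-breath series to see why the choice matters; the synthetic ramp below and the breaths-per-15-s equivalence are assumptions, not the study's data.

      # Hedged sketch: VO2peak from block versus moving averages of a simulated test.
      import numpy as np

      rng = np.random.default_rng(5)
      t = np.arange(0, 600, 2.5)                              # one breath every 2.5 s (assumed)
      vo2 = 15 + 40 * t / t[-1] + rng.normal(0, 2, t.size)    # ramping VO2 (mL/kg/min) + noise

      def block_average(values, block):
          trimmed = values[: values.size // block * block]
          return trimmed.reshape(-1, block).mean(axis=1)

      def moving_average(values, window):
          return np.convolve(values, np.ones(window) / window, mode="valid")

      breaths_per_15s = 6                                     # rough equivalence (assumed)
      peaks = {
          "15-breath moving avg": moving_average(vo2, 15).max(),
          "15-breath block avg":  block_average(vo2, 15).max(),
          "30-s block avg":       block_average(vo2, 2 * breaths_per_15s).max(),
      }
      for name, peak in peaks.items():
          print(f"{name:22s}: VO2peak = {peak:.1f} mL/kg/min")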

  6. Optimize the Coverage Probability of Prediction Interval for Anomaly Detection of Sensor-Based Monitoring Series

    Directory of Open Access Journals (Sweden)

    Jingyue Pang

    2018-03-01

    Full Text Available Effective anomaly detection of sensing data is essential for identifying potential system failures. Because they require no prior knowledge or accumulated labels, and provide uncertainty presentation, the probability prediction methods (e.g., Gaussian process regression (GPR and relevance vector machine (RVM are especially adaptable to perform anomaly detection for sensing series. Generally, one key parameter of prediction models is coverage probability (CP, which controls the judging threshold of the testing sample and is generally set to a default value (e.g., 90% or 95%. There are few criteria to determine the optimal CP for anomaly detection. Therefore, this paper designs a graphic indicator of the receiver operating characteristic curve of prediction interval (ROC-PI based on the definition of the ROC curve which can depict the trade-off between the PI width and PI coverage probability across a series of cut-off points. Furthermore, the Youden index is modified to assess the performance of different CPs, by the minimization of which the optimal CP is derived by the simulated annealing (SA algorithm. Experiments conducted on two simulation datasets demonstrate the validity of the proposed method. Especially, an actual case study on sensing series from an on-orbit satellite illustrates its significant performance in practical application.
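
    A minimal sketch of the underlying idea (not the authors' implementation; the width penalty and weights are illustrative): sweep candidate coverage probabilities, form Gaussian prediction intervals from a probabilistic model's predictive mean and standard deviation, and score each CP with a Youden-style index penalized by interval width. A simple grid search stands in for the simulated annealing step.

    ```python
    import numpy as np
    from scipy.stats import norm

    def best_coverage_probability(mu, sigma, y, is_anomaly, cps=None, width_weight=0.1):
        """Pick the CP whose prediction interval best separates anomalies from normal points.

        mu, sigma, y : arrays of predictive means, predictive std devs and observations
        is_anomaly   : boolean array of anomaly labels for the same points
        """
        cps = np.linspace(0.80, 0.999, 100) if cps is None else cps
        best = (None, -np.inf)
        for cp in cps:
            z = norm.ppf(0.5 + cp / 2.0)                   # two-sided normal quantile
            outside = (y < mu - z * sigma) | (y > mu + z * sigma)
            tpr = outside[is_anomaly].mean() if is_anomaly.any() else 0.0
            fpr = outside[~is_anomaly].mean() if (~is_anomaly).any() else 0.0
            width = 2.0 * z * sigma.mean() / (y.std() + 1e-9)
            score = tpr - fpr - width_weight * width       # modified Youden-style index
            if score > best[1]:
                best = (cp, score)
        return best[0]
    ```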

  7. Optimal Testing Intervals in the Squatting Test to Determine Baroreflex Sensitivity

    OpenAIRE

    Ishitsuka, S.; Kusuyama, N.; Tanaka, M.

    2014-01-01

    The recently introduced “squatting test” (ST) utilizes a simple postural change to perturb the blood pressure and to assess baroreflex sensitivity (BRS). In our study, we estimated the reproducibility of and the optimal testing interval between the STs in healthy volunteers. Thirty-four subjects free of cardiovascular disorders and taking no medication were instructed to perform the repeated ST at 30-sec, 1-min, and 3-min intervals in duplicate in a random sequence, while the systolic blood p...

  8. Optimal test intervals of standby components based on actual plant-specific data

    International Nuclear Information System (INIS)

    Jones, R.B.; Bickel, J.H.

    1987-01-01

    Based on standard reliability analysis techniques, both under-testing and over-testing affect the availability of standby components. If tests are performed too often, unavailability is increased since the equipment is used excessively. Conversely, if testing is performed too infrequently, the likelihood of component unavailability is also increased due to the formation of rust, heat or radiation damage, dirt infiltration, etc. Thus, from a physical perspective, an optimal test interval should exist which minimizes unavailability. This paper illustrates the application of an unavailability model that calculates optimal testing intervals for components with a failure database. (orig./HSCH)
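
    The trade-off can be made concrete with the standard textbook approximation (not necessarily the exact model used in the paper): for a standby component with constant failure rate λ and downtime τ per test, the time-averaged unavailability is roughly λT/2 + τ/T, which is minimized at T* = sqrt(2τ/λ).

    ```python
    import math

    def mean_unavailability(T, lam, tau):
        """Textbook approximation: unavailability of a periodically tested standby component."""
        return lam * T / 2.0 + tau / T      # undetected-failure term + testing-downtime term

    lam = 1.0e-4   # hypothetical failure rate (per hour)
    tau = 2.0      # hypothetical downtime per test (hours)
    T_opt = math.sqrt(2.0 * tau / lam)      # analytical minimiser of the approximation
    print(f"optimal test interval ~ {T_opt:.0f} h, "
          f"unavailability ~ {mean_unavailability(T_opt, lam, tau):.2e}")
    ```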

  9. Sample Adaptive Offset Optimization in HEVC

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2014-11-01

    Full Text Available As the next generation of video coding standard, High Efficiency Video Coding (HEVC) adopted many useful tools to improve coding efficiency. Sample Adaptive Offset (SAO) is a technique that reduces sample distortion by providing offsets to pixels in the in-loop filter. In SAO, the pixels in a Largest Coding Unit (LCU) are classified into several categories, and then categories and offsets are assigned based on Rate-Distortion Optimization (RDO) of the reconstructed pixels in the LCU. All pixels in an LCU undergo the same SAO process; however, the transform and inverse transform make the distortion of pixels at Transform Unit (TU) edges larger than the distortion inside the TU, even after deblocking filtering (DF) and SAO. The SAO categories can also be refined, since they are not appropriate in many cases. This paper proposes a TU edge offset mode and a category refinement for SAO in HEVC. Experimental results show that these two optimizations achieve gains of -0.13 and -0.2, respectively, compared with the SAO in HEVC. The proposed algorithm using both optimizations achieves a -0.23 gain on BD-rate compared with the SAO in HEVC, which is a 47% increase, with nearly no increase in coding time.

  10. Human error considerations and annunciator effects in determining optimal test intervals for periodically inspected standby systems

    International Nuclear Information System (INIS)

    McWilliams, T.P.; Martz, H.F.

    1981-01-01

    This paper incorporates the effects of four types of human error in a model for determining the optimal time between periodic inspections which maximizes the steady-state availability of standby safety systems. Such safety systems are characteristic of nuclear power plant operations. The system is modeled by means of an infinite state-space Markov chain. The purpose of the paper is to demonstrate techniques for computing the steady-state availability A and the optimal periodic inspection interval tau* for the system. The model can be used to investigate the effects of human error probabilities on optimal availability, to study the benefits of annunciating the standby system, and to determine optimal inspection intervals. Several examples which are representative of nuclear power plant applications are presented.

  11. The effects of varying sampling intervals on the growth and survival ...

    African Journals Online (AJOL)

    Four different sampling intervals were investigated during a six-week outdoor nursery management of Heterobranchus longifilis (Valenciennes, 1840) fry in outdoor concrete tanks in order to determine the most suitable sampling regime for maximum productivity in terms of optimum growth and survival of hatchlings and ...

  12. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.

  13. A novel non-probabilistic approach using interval analysis for robust design optimization

    International Nuclear Information System (INIS)

    Sun, Wei; Dong, Rongmei; Xu, Huanwei

    2009-01-01

    A technique for formulation of the objective and constraint functions with uncertainty plays a crucial role in robust design optimization. This paper presents the first application of interval methods for reformulating the robust optimization problem. Based on interval mathematics, the original real-valued objective and constraint functions are replaced with the interval-valued functions, which directly represent the upper and lower bounds of the new functions under uncertainty. The single objective function is converted into two objective functions for minimizing the mean value and the variation, and the constraint functions are reformulated with the acceptable robustness level, resulting in a bi-level mathematical model. Compared with other methods, this method is efficient and does not require presumed probability distribution of uncertain factors or gradient or continuous information of constraints. Two numerical examples are used to illustrate the validity and feasibility of the presented method
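
    A minimal sketch of the reformulation idea, assuming a toy response function and grid sampling in place of rigorous interval arithmetic: the interval bounds of the objective under uncertainty are converted into the two objectives (mean value and variation) mentioned above.

    ```python
    import itertools
    import numpy as np

    def interval_eval(f, boxes, n_grid=21):
        """Approximate lower/upper bounds of f over interval-valued inputs by grid sampling
        (a crude stand-in for true interval arithmetic)."""
        grids = [np.linspace(lo, hi, n_grid) for lo, hi in boxes]
        vals = np.array([f(*x) for x in itertools.product(*grids)])
        return vals.min(), vals.max()

    def robust_objectives(f, boxes):
        """Split one interval-valued objective into (mean value, variation) to be minimised."""
        lo, hi = interval_eval(f, boxes)
        return (lo + hi) / 2.0, (hi - lo) / 2.0

    # hypothetical design response with two uncertain parameters
    performance = lambda a, b: (a - 1.2) ** 2 + 0.5 * b
    print(robust_objectives(performance, [(1.0, 2.0), (0.5, 1.5)]))
    ```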

  14. Resolution optimization with irregularly sampled Fourier data

    International Nuclear Information System (INIS)

    Ferrara, Matthew; Parker, Jason T; Cheney, Margaret

    2013-01-01

    Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer–Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an ‘interrupted SAR’ dataset representative of in-band interference commonly encountered in very high frequency radar applications. (paper)

  15. Estimating fluvial wood discharge from timelapse photography with varying sampling intervals

    Science.gov (United States)

    Anderson, N. K.

    2013-12-01

    There is recent focus on calculating wood budgets for streams and rivers to help inform management decisions, ecological studies and carbon/nutrient cycling models. Most work has measured in situ wood in temporary storage along stream banks or estimated wood inputs from banks. Little effort has been devoted to monitoring and quantifying wood in transport during high flows. This paper outlines a procedure for estimating total seasonal wood loads using non-continuous coarse interval sampling and examines differences in estimation between sampling at 1, 5, 10 and 15 minutes. Analysis is performed on wood transport for the Slave River in Northwest Territories, Canada. Relative to the 1-minute dataset, precision decreased by 23%, 46% and 60% for the 5-, 10- and 15-minute datasets, respectively. Five- and 10-minute sampling intervals provided unbiased, equal-variance estimates of 1-minute sampling, whereas 15-minute intervals were biased towards underestimation by 6%. Stratifying estimates by day and by discharge increased precision over non-stratification by 4% and 3%, respectively. Not including wood transported during ice break-up, the total minimum wood load estimated at this site is 3300 ± 800 m3 for the 2012 runoff season. The vast majority of the imprecision in total wood volumes came from variance in estimating average volume per log. (Figure caption: comparison of proportions and variance across sample intervals using bootstrap sampling to achieve equal n, with each trial sampled at n = 100 and repeated 10,000 times and averaged; dashed lines represent values from the one-minute dataset.)
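
    The effect of coarsening the sampling interval can be reproduced with synthetic data (illustrative only, not the Slave River dataset): subsample a one-minute count series at 5-, 10- and 15-minute intervals and compare the spread of the scaled-up totals across all possible sampling phases.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    minutes = 60 * 24 * 30                                  # a hypothetical month of record
    counts = rng.poisson(lam=2.0, size=minutes)             # wood pieces per one-minute image

    def total_estimate(series, interval, start):
        """Scale the mean of the sampled minutes up to the full record length."""
        return series[start::interval].mean() * len(series)

    for k in (1, 5, 10, 15):
        ests = [total_estimate(counts, k, s) for s in range(k)]   # every possible phase
        print(f"{k:>2}-min sampling: mean={np.mean(ests):9.0f}  sd={np.std(ests):8.1f}")
    ```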

  16. Influence of sampling interval and number of projections on the quality of SR-XFMT reconstruction

    International Nuclear Information System (INIS)

    Deng Biao; Yu Xiaohan; Xu Hongjie

    2007-01-01

    Synchrotron Radiation based X-ray Fluorescent Microtomography (SR-XFMT) is a nondestructive technique for detecting elemental composition and distribution inside a specimen with high spatial resolution and sensitivity. In this paper, computer simulation of SR-XFMT experiment is performed. The influence of the sampling interval and the number of projections on the quality of SR-XFMT image reconstruction is analyzed. It is found that the sampling interval has greater effect on the quality of reconstruction than the number of projections. (authors)

  17. A Hybrid Interval-Robust Optimization Model for Water Quality Management.

    Science.gov (United States)

    Xu, Jieyu; Li, Yongping; Huang, Guohe

    2013-05-01

    In water quality management problems, uncertainties may exist in many system components and pollution-related processes (i.e., random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval-robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements.

  18. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    Science.gov (United States)

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing. PMID:29385048

  19. Networked control systems with communication constraints :tradeoffs between sampling intervals, delays and performance

    NARCIS (Netherlands)

    Heemels, W.P.M.H.; Teel, A.R.; Wouw, van de N.; Nesic, D.

    2010-01-01

    There are many communication imperfections in networked control systems (NCS) such as varying transmission delays, varying sampling/transmission intervals, packet loss, communication constraints and quantization effects. Most of the available literature on NCS focuses on only some of these aspects; this work studies the tradeoffs between sampling intervals, delays and performance.

  20. Estimation of reference intervals from small samples: an example using canine plasma creatinine.

    Science.gov (United States)

    Geffré, A; Braun, J P; Trumel, C; Concordet, D

    2009-12-01

    According to international recommendations, reference intervals should be determined from at least 120 reference individuals, which often are impossible to achieve in veterinary clinical pathology, especially for wild animals. When only a small number of reference subjects is available, the possible bias cannot be known and the normality of the distribution cannot be evaluated. A comparison of reference intervals estimated by different methods could be helpful. The purpose of this study was to compare reference limits determined from a large set of canine plasma creatinine reference values, and large subsets of this data, with estimates obtained from small samples selected randomly. Twenty sets each of 120 and 27 samples were randomly selected from a set of 1439 plasma creatinine results obtained from healthy dogs in another study. Reference intervals for the whole sample and for the large samples were determined by a nonparametric method. The estimated reference limits for the small samples were minimum and maximum, mean +/- 2 SD of native and Box-Cox-transformed values, 2.5th and 97.5th percentiles by a robust method on native and Box-Cox-transformed values, and estimates from diagrams of cumulative distribution functions. The whole sample had a heavily skewed distribution, which approached Gaussian after Box-Cox transformation. The reference limits estimated from small samples were highly variable. The closest estimates to the 1439-result reference interval for 27-result subsamples were obtained by both parametric and robust methods after Box-Cox transformation but were grossly erroneous in some cases. For small samples, it is recommended that all values be reported graphically in a dot plot or histogram and that estimates of the reference limits be compared using different methods.
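
    A minimal sketch of two of the compared estimators (nonparametric percentiles, and mean ± 2 SD after Box-Cox transformation), using a hypothetical small reference sample; it is not the study's code.

    ```python
    import numpy as np
    from scipy import stats
    from scipy.special import inv_boxcox

    def reference_interval(values, method="nonparametric"):
        """95% reference interval from a reference sample (values must be positive for Box-Cox)."""
        x = np.asarray(values, float)
        if method == "nonparametric":              # recommended for >= 120 reference subjects
            return tuple(np.percentile(x, [2.5, 97.5]))
        if method == "parametric_boxcox":          # mean +/- 2 SD on Box-Cox transformed data
            xt, lam = stats.boxcox(x)
            lo = xt.mean() - 2 * xt.std(ddof=1)
            hi = xt.mean() + 2 * xt.std(ddof=1)
            return tuple(inv_boxcox(np.array([lo, hi]), lam))
        raise ValueError(f"unknown method: {method}")

    # hypothetical right-skewed "creatinine-like" values from 27 reference animals
    rng = np.random.default_rng(0)
    small_sample = rng.lognormal(mean=4.4, sigma=0.25, size=27)
    print(reference_interval(small_sample, "nonparametric"))
    print(reference_interval(small_sample, "parametric_boxcox"))
    ```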

  1. Two sample Bayesian prediction intervals for order statistics based on the inverse exponential-type distributions using right censored sample

    Directory of Open Access Journals (Sweden)

    M.M. Mohie El-Din

    2011-10-01

    Full Text Available In this paper, two sample Bayesian prediction intervals for order statistics (OS) are obtained. This prediction is based on a certain class of the inverse exponential-type distributions using a right censored sample. A general class of prior density functions is used and the predictive cumulative function is obtained in the two-sample case. The class of the inverse exponential-type distributions includes several important distributions such as the inverse Weibull distribution, the inverse Burr distribution, the loglogistic distribution, the inverse Pareto distribution and the inverse paralogistic distribution. Special cases of the inverse Weibull model such as the inverse exponential model and the inverse Rayleigh model are considered.

  2. Test interval optimization of safety systems of nuclear power plant using fuzzy-genetic approach

    International Nuclear Information System (INIS)

    Durga Rao, K.; Gopika, V.; Kushwaha, H.S.; Verma, A.K.; Srividya, A.

    2007-01-01

    Probabilistic safety assessment (PSA) is the most effective and efficient tool for safety and risk management in nuclear power plants (NPP). PSA studies not only evaluate risk/safety of systems but also their results are very useful in safe, economical and effective design and operation of NPPs. The latter application is popularly known as 'Risk-Informed Decision Making'. Evaluation of technical specifications is one such important application of Risk-Informed decision making. Deciding test interval (TI), one of the important technical specifications, with the given resources and risk effectiveness is an optimization problem. Uncertainty is inherently present in the availability parameters such as failure rate and repair time due to the limitation in assessing these parameters precisely. This paper presents a solution to test interval optimization problem with uncertain parameters in the model with fuzzy-genetic approach along with a case of application from a safety system of Indian pressurized heavy water reactor (PHWR)

  3. Life cycle cost optimization of biofuel supply chains under uncertainties based on interval linear programming.

    Science.gov (United States)

    Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun

    2015-01-01

    The aim of this work was to develop a model for optimizing the life cycle cost of biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chain under uncertainties. Copyright © 2015 Elsevier Ltd. All rights reserved.
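
    The interval linear programming idea can be sketched by solving an optimistic and a pessimistic sub-model over the interval-valued coefficients; the toy transport problem and numbers below are hypothetical, not the paper's supply-chain model.

    ```python
    from scipy.optimize import linprog

    def interval_lp(cost, demand, capacity=80.0):
        """Minimise total cost of shipping from two hypothetical sources to meet demand."""
        res = linprog(c=cost,
                      A_ub=[[-1.0, -1.0]], b_ub=[-demand],     # x1 + x2 >= demand
                      bounds=[(0.0, capacity)] * 2)
        return res.fun

    # interval-valued unit costs and demand (hypothetical numbers)
    best  = interval_lp(cost=[2.0, 3.0], demand=90.0)    # optimistic sub-model
    worst = interval_lp(cost=[2.5, 3.8], demand=110.0)   # pessimistic sub-model
    print(f"optimal cost lies in the interval [{best:.1f}, {worst:.1f}]")
    ```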

  4. Adaptive Kalman Filter Based on Adjustable Sampling Interval in Burst Detection for Water Distribution System

    Directory of Open Access Journals (Sweden)

    Doo Yong Choi

    2016-04-01

    Full Text Available Rapid detection of bursts and leaks in water distribution systems (WDSs can reduce the social and economic costs incurred through direct loss of water into the ground, additional energy demand for water supply, and service interruptions. Many real-time burst detection models have been developed in accordance with the use of supervisory control and data acquisition (SCADA systems and the establishment of district meter areas (DMAs. Nonetheless, no consideration has been given to how frequently a flow meter measures and transmits data for predicting breaks and leaks in pipes. This paper analyzes the effect of sampling interval when an adaptive Kalman filter is used for detecting bursts in a WDS. A new sampling algorithm is presented that adjusts the sampling interval depending on the normalized residuals of flow after filtering. The proposed algorithm is applied to a virtual sinusoidal flow curve and real DMA flow data obtained from Jeongeup city in South Korea. The simulation results prove that the self-adjusting algorithm for determining the sampling interval is efficient and maintains reasonable accuracy in burst detection. The proposed sampling method has a significant potential for water utilities to build and operate real-time DMA monitoring systems combined with smart customer metering systems.
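
    A minimal sketch of the two ingredients (not the authors' implementation, with illustrative noise parameters): a scalar Kalman filter producing a normalized residual for each flow measurement, and a rule that shortens the next sampling interval when that residual is large.

    ```python
    import numpy as np

    def kalman_step(x, P, z, Q=0.5, R=4.0):
        """One predict/update step of a scalar random-walk Kalman filter for DMA flow."""
        x_pred, P_pred = x, P + Q                 # prediction (random-walk model)
        S = P_pred + R                            # innovation variance
        resid = (z - x_pred) / np.sqrt(S)         # normalised residual used for burst detection
        K = P_pred / S                            # Kalman gain
        return x_pred + K * (z - x_pred), (1.0 - K) * P_pred, resid

    def next_sampling_interval(resid, normal=15, alert=1, threshold=2.5):
        """Shorten the metering interval (minutes) when the residual suggests a possible burst."""
        return alert if abs(resid) > threshold else normal
    ```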

  5. Low Carbon-Oriented Optimal Reliability Design with Interval Product Failure Analysis and Grey Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Yixiong Feng

    2017-03-01

    Full Text Available The problem of large amounts of carbon emissions causes wide concern across the world, and it has become a serious threat to the sustainable development of the manufacturing industry. The intensive research into technologies and methodologies for green product design has significant theoretical meaning and practical value in reducing the emissions of the manufacturing industry. Therefore, a low carbon-oriented product reliability optimal design model is proposed in this paper: (1) The related expert evaluation information was prepared in interval numbers; (2) An improved product failure analysis considering the uncertain carbon emissions of the subsystem was performed to obtain the subsystem weight taking the carbon emissions into consideration. The interval grey correlation analysis was conducted to obtain the subsystem weight taking the uncertain correlations inside the product into consideration. Using the above two kinds of subsystem weights and different caution indicators of the decision maker, a series of product reliability design schemes is available; (3) The interval-valued intuitionistic fuzzy sets (IVIFSs) were employed to select the optimal reliability and optimal design scheme based on three attributes, namely, low carbon, correlation and functions, and economic cost. The case study of a vertical CNC lathe proves the superiority and rationality of the proposed method.

  6. Optimizing structure of complex technical system by heterogeneous vector criterion in interval form

    Science.gov (United States)

    Lysenko, A. V.; Kochegarov, I. I.; Yurkov, N. K.; Grishko, A. K.

    2018-05-01

    The article examines the methods of development and multi-criteria choice of the preferred structural variant of a complex technical system at the early stages of its life cycle, in the absence of sufficient knowledge of the parameters and variables for optimizing this structure. The suggested method takes into consideration the various fuzzy input data connected with the heterogeneous quality criteria of the designed system and the parameters set by their variation range. The suggested approach is based on the combined use of methods of interval analysis, fuzzy set theory, and decision-making theory. As a result, a method for normalizing heterogeneous quality criteria has been developed on the basis of establishing preference relations in interval form. The method of building preference relations in interval form on the basis of the vector of heterogeneous quality criteria suggests the use of membership functions instead of coefficients weighting the criteria values. The former show the degree of proximity of the realization of the designed system to the efficient or Pareto-optimal variants. The study analyzes an example of choosing the optimal variant for a complex system using heterogeneous quality criteria.

  7. Dual-source CT coronary imaging in heart transplant recipients: image quality and optimal reconstruction interval

    International Nuclear Information System (INIS)

    Bastarrika, Gorka; Arraiza, Maria; Pueyo, Jesus C.; Cecco, Carlo N. de; Ubilla, Matias; Mastrobuoni, Stefano; Rabago, Gregorio

    2008-01-01

    The image quality and optimal reconstruction interval for coronary arteries in heart transplant recipients undergoing non-invasive dual-source computed tomography (DSCT) coronary angiography were evaluated. Twenty consecutive heart transplant recipients who underwent DSCT coronary angiography were included (19 male, one female; mean age 63.1±10.7 years). Data sets were reconstructed in 5% steps from 30% to 80% of the R-R interval. Two blinded independent observers assessed the image quality of each coronary segment using a five-point scale (from 0 = not evaluative to 4 = excellent quality). A total of 289 coronary segments in 20 heart transplant recipients were evaluated. Mean heart rate during the scan was 89.1±10.4 bpm. At the best reconstruction interval, diagnostic image quality (score ≥2) was obtained in 93.4% of the coronary segments (270/289), with a mean image quality score of 3.04±0.63. Systolic reconstruction intervals provided better image quality scores than diastolic reconstruction intervals (overall mean quality scores obtained with the systolic and diastolic reconstructions 3.03±1.06 and 2.73±1.11, respectively; P<0.001). Different systolic reconstruction intervals (35%, 40%, 45% of the R-R interval) did not yield significant differences in image quality scores for the coronary segments (P=0.74). Reconstructions obtained at the systolic phase of the cardiac cycle provided coronary angiograms of excellent diagnostic image quality in heart transplant recipients undergoing DSCT coronary angiography. (orig.)

  8. Binomial Distribution Sample Confidence Intervals Estimation 7. Absolute Risk Reduction and ARR-like Expressions

    Directory of Open Access Journals (Sweden)

    Andrei ACHIMAŞ CADARIU

    2004-08-01

    Full Text Available Assessments of a controlled clinical trial involve interpreting key parameters such as the control event rate, experimental event rate, relative risk, absolute risk reduction, relative risk reduction, and number needed to treat when the effect of the treatment is a dichotomous variable. Defined as the difference in the event rate between treatment and control groups, the absolute risk reduction is the parameter from which the number needed to treat is computed. The absolute risk reduction is computed when the experimental treatment reduces the risk of an undesirable outcome/event. In the medical literature, when the absolute risk reduction is reported with its confidence intervals, the method used is the asymptotic one, even though it is well known that it may be inadequate. The aim of this paper is to introduce and assess nine methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions. Computer implementations of the methods use the PHP language. Methods comparison uses the experimental errors, the standard deviations, and the deviation relative to the imposed significance level for specified sample sizes. Six methods of computing confidence intervals for the absolute risk reduction and absolute risk reduction-like functions were assessed using random binomial variables and random sample sizes. The experiments show that the ADAC and ADAC1 methods obtain the best overall performance in computing confidence intervals for the absolute risk reduction.
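
    For reference, the commonly used asymptotic (Wald) interval the abstract refers to is straightforward to compute; the sketch below uses hypothetical trial counts and is a generic illustration, not one of the paper's implementations.

    ```python
    import math

    def arr_with_wald_ci(events_ctrl, n_ctrl, events_trt, n_trt, z=1.96):
        """Absolute risk reduction with the asymptotic (Wald) confidence interval."""
        cer = events_ctrl / n_ctrl                      # control event rate
        eer = events_trt / n_trt                        # experimental event rate
        arr = cer - eer
        se = math.sqrt(cer * (1 - cer) / n_ctrl + eer * (1 - eer) / n_trt)
        nnt = 1.0 / arr if arr != 0 else math.inf       # number needed to treat
        return arr, (arr - z * se, arr + z * se), nnt

    print(arr_with_wald_ci(events_ctrl=30, n_ctrl=100, events_trt=18, n_trt=100))
    ```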

  9. Determining optimal preventive maintenance interval for component of Well Barrier Element in an Oil & Gas Company

    Science.gov (United States)

    Siswanto, A.; Kurniati, N.

    2018-04-01

    An oil and gas company has 2,268 oil and gas wells. A Well Barrier Element (WBE) is installed in a well to protect people, prevent asset damage and minimize harm to the environment. The primary WBE component is the Surface Controlled Subsurface Safety Valve (SCSSV). The secondary WBE component is the Christmas Tree Valves, which consist of four valves, i.e. the Lower Master Valve (LMV), Upper Master Valve (UMV), Swab Valve (SV) and Wing Valve (WV). Current practice for the WBE Preventive Maintenance (PM) program is to follow the schedule suggested in the manual. A Corrective Maintenance (CM) program is conducted when a component fails unexpectedly. Both PM and CM incur cost and may cause production loss. This paper attempts to analyze the failure data and reliability based on historical data. The optimal PM interval is determined in order to minimize the total cost of maintenance per unit time. The optimal PM interval for the SCSSV is 730 days, for the LMV 985 days, for the UMV 910 days, for the SV 900 days and for the WV 780 days. Averaged over all components, the cost reduction from implementing the suggested intervals is 52%, while reliability is improved by 4% and availability is increased by 5%.
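
    One standard way to formalize "minimize the total cost of maintenance per unit time" is the age-replacement cost-rate model sketched below; this is a generic textbook formulation with hypothetical Weibull parameters, not the company's actual failure data or model.

    ```python
    import numpy as np

    def cost_per_day(T, beta, eta, c_pm, c_cm, n=2000):
        """Expected cost per unit time of an age-replacement policy with Weibull failures."""
        t = np.linspace(0.0, T, n)
        R = np.exp(-(t / eta) ** beta)                     # reliability (survival) function
        cycle_length = np.mean(R) * T                      # approximates the integral of R on [0, T]
        cycle_cost = c_pm * R[-1] + c_cm * (1.0 - R[-1])   # PM if it survives to T, CM otherwise
        return cycle_cost / cycle_length

    # hypothetical valve parameters: wear-out failures (beta > 1), CM five times as costly as PM
    candidates = np.arange(100, 2001, 10)
    rates = [cost_per_day(T, beta=2.0, eta=1500.0, c_pm=1.0, c_cm=5.0) for T in candidates]
    print("optimal PM interval (days):", candidates[int(np.argmin(rates))])
    ```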

  10. A Note on Confidence Interval for the Power of the One Sample Test

    OpenAIRE

    A. Wong

    2010-01-01

    In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for...

  11. Optimization of the Reconstruction Interval in Neurovascular 4D-CTA Imaging

    Science.gov (United States)

    Hoogenboom, T.C.H.; van Beurden, R.M.J.; van Teylingen, B.; Schenk, B.; Willems, P.W.A.

    2012-01-01

    Time-resolved whole brain CT angiography (4D-CTA) is a novel imaging technology providing information regarding blood flow. One of the factors that influence the diagnostic value of this examination is the temporal resolution, which is affected by the gantry rotation speed during acquisition and the reconstruction interval during post-processing. Post-processing determines the time spacing between two reconstructed volumes and, unlike rotation speed, does not affect radiation burden. The data sets of six patients who underwent a cranial 4D-CTA were used for this study. Raw data was acquired using a 320-slice scanner with a rotation speed of 2 Hz. The arterial to venous passage of an intravenous contrast bolus was captured during a 15 s continuous scan. The raw data was reconstructed using four different reconstruction intervals: 0.2, 0.3, 0.5 and 1.0 s. The results were rated by two observers using a standardized score sheet. The appearance of each lesion was rated correctly in all readings. Scoring for quality of temporal resolution revealed a stepwise improvement from the 1.0 s interval to the 0.3 s interval, while no discernible improvement was noted between the 0.3 s and 0.2 s intervals. An increase in temporal resolution may improve the diagnostic quality of cranial 4D-CTA. Using a rotation time of 0.5 s, the optimal reconstruction interval appears to be 0.3 s, beyond which changes can no longer be discerned. PMID:23217631

  12. Constrained optimization of test intervals using a steady-state genetic algorithm

    International Nuclear Information System (INIS)

    Martorell, S.; Carlos, S.; Sanchez, A.; Serradell, V.

    2000-01-01

    There is a growing interest from both the regulatory authorities and the nuclear industry to stimulate the use of Probabilistic Risk Analysis (PRA) for risk-informed applications at Nuclear Power Plants (NPPs). Nowadays, special attention is being paid to analyzing plant-specific changes to Test Intervals (TIs) within the Technical Specifications (TSs) of NPPs, and there seems to be a consensus on the need to make these requirements more risk-effective and less costly. Resource versus risk-control effectiveness principles enter formally into such optimization problems. This paper presents an approach for using PRA models in conducting the constrained optimization of TIs based on a steady-state genetic algorithm (SSGA), where the cost or the burden is to be minimized while the risk or performance is constrained to be at a given level, or vice versa. The paper begins with the problem formulation, where the objective function and constraints that apply in the constrained optimization of TIs based on risk and cost models at the system level are derived. Next, the foundation of the optimizer is given, which is derived by customizing an SSGA in order to allow optimizing TIs under constraints. A case study is also performed using this approach, which shows the benefits of adopting both PRA models and genetic algorithms, in particular for the constrained optimization of TIs; a great benefit is also expected from using this approach to solve other engineering optimization problems. However, care must be taken in using genetic algorithms in constrained optimization problems, as is concluded in this paper.

  13. An optimal design of cluster spacing intervals for staged fracturing in horizontal shale gas wells based on the optimal SRVs

    Directory of Open Access Journals (Sweden)

    Lan Ren

    2017-09-01

    Full Text Available When horizontal well staged cluster fracturing is applied in shale gas reservoirs, the cluster spacing is essential to fracturing performance. If the cluster spacing is too small, the stimulated areas between major fractures will overlap and the efficiency of fracturing stimulation will decrease. If the cluster spacing is too large, the area between major fractures cannot be stimulated completely and the extent of reservoir recovery will be adversely impacted. At present, cluster spacing design is mainly based on a static model that takes the potential reservoir stimulation area as the target, and there is no cluster spacing design method that follows the actual fracturing process and targets the dynamic stimulated reservoir volume (SRV). In this paper, a dynamic SRV calculation model for cluster fracture propagation was established by analyzing the coupling mechanisms among fracture propagation, fracturing fluid loss and stress. The cluster spacing was then optimized to reach the target of the optimal SRVs. This model was applied for validation on site in the Jiaoshiba shale gasfield in the Fuling area of the Sichuan Basin. The key geological and engineering parameters influencing the optimal cluster spacing intervals were analyzed. Reference charts for the optimal cluster spacing design were prepared based on the geological characteristics of the south and north blocks of the Jiaoshiba shale gasfield. It is concluded that the optimal cluster spacing design method proposed in this paper is of great significance in overcoming the blindness of current cluster perforation design and in guiding the optimal design of volume fracturing in shale gas reservoirs. Keywords: Shale gas, Horizontal well, Staged fracturing, Cluster spacing, Reservoir, Stimulated reservoir volume (SRV), Mathematical model, Optimal method, Sichuan basin, Jiaoshiba shale gasfield

  14. Computing interval-valued reliability measures: application of optimal control methods

    DEFF Research Database (Denmark)

    Kozin, Igor; Krymsky, Victor

    2017-01-01

    The paper describes an approach to deriving interval-valued reliability measures given partial statistical information on the occurrence of failures. We apply methods of optimal control theory, in particular Pontryagin's maximum principle, to solve the non-linear optimisation problem and derive the probabilistic interval-valued quantities of interest. It is proven that the optimisation problem can be translated into another problem statement that can be solved on the class of piecewise continuous probability density functions (pdfs). This class often consists of piecewise exponential pdfs, which appear as soon as the constraints include bounds on the failure rate of the component under consideration. Finding the number of switching points of the piecewise continuous pdfs and their values becomes the focus of the approach described in the paper. Examples are provided.

  15. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    Directory of Open Access Journals (Sweden)

    Ying Yan

    2017-01-01

    Full Text Available Due to the complexity of the system and a lack of expertise, epistemic uncertainties may be present in the experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in the expert comments. Based on the ideas of evidence theory, the various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed based on the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty and possesses good compatibility. It avoids both the difficulty of effectively fusing high-conflict group decision-making information and the large information loss after fusion. Original expert judgments are retained rather objectively throughout the processing procedure. The cumulative probability function construction and random sampling processes do not require any human intervention or judgment, and they can be implemented easily by computer programs, giving the method an apparent advantage in the evaluation of fairly large index systems.

  16. Optimization of sampling parameters for standardized exhaled breath sampling.

    Science.gov (United States)

    Doran, Sophie; Romano, Andrea; Hanna, George B

    2017-09-05

    The lack of standardization of breath sampling is a major contributing factor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies, whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000 ml) and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acid and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. Increasing the sample volume improved the levels of VOCs detected. However, the influence of the fraction of exhaled breath and of the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between whole and lower airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes the contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are a 500 ml sample volume

  17. A Note on Confidence Interval for the Power of the One Sample Test

    Directory of Open Access Journals (Sweden)

    A. Wong

    2010-01-01

    Full Text Available In introductory statistics texts, the power of the test of a one-sample mean when the variance is known is widely discussed. However, when the variance is unknown, the power of the Student's t-test is seldom mentioned. In this note, a general methodology for obtaining inference concerning a scalar parameter of interest of any exponential family model is proposed. The method is then applied to the one-sample mean problem with unknown variance to obtain a (1 − α)100% confidence interval for the power of the Student's t-test that detects the difference (μ − μ0). The calculations require only the density and the cumulative distribution functions of the standard normal distribution. In addition, the methodology presented can also be applied to determine the required sample size when the effect size and the power of a size α test of the mean are given.
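
    For comparison, the power itself (the quantity whose confidence interval is being constructed) can be computed exactly from the noncentral t distribution; the sketch below is a generic illustration with hypothetical inputs, not the paper's approximation method.

    ```python
    from scipy import stats

    def one_sample_t_power(n, delta, sigma, alpha=0.05):
        """Exact power of the two-sided one-sample t-test via the noncentral t distribution."""
        df = n - 1
        nc = delta / (sigma / n ** 0.5)                    # noncentrality parameter
        t_crit = stats.t.ppf(1.0 - alpha / 2.0, df)
        return (1.0 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

    print(one_sample_t_power(n=25, delta=0.5, sigma=1.0))  # hypothetical effect size
    ```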

  18. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large-scale sample surveys: the biological sample survey of commercial landings (BSCL), the experimental fishing sample survey (EFSS), and the commercial landings and effort sample survey (CLES).

  19. Rescaled Range Analysis and Detrended Fluctuation Analysis: Finite Sample Properties and Confidence Intervals

    Czech Academy of Sciences Publication Activity Database

    Krištoufek, Ladislav

    4/2010, no. 3 (2010), pp. 236-250 ISSN 1802-4696 R&D Projects: GA ČR GD402/09/H045; GA ČR GA402/09/0965 Grant - others: GA UK(CZ) 118310 Institutional research plan: CEZ:AV0Z10750506 Keywords: rescaled range analysis * detrended fluctuation analysis * Hurst exponent * long-range dependence Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2010/E/kristoufek-rescaled range analysis and detrended fluctuation analysis finite sample properties and confidence intervals.pdf

  20. Suboptimal and optimal order policies for fixed and varying replenishment interval with declining market

    Science.gov (United States)

    Yu, Jonas C. P.; Wee, H. M.; Yang, P. C.; Wu, Simon

    2016-06-01

    One of the supply chain risks for hi-tech products results from rapid technological innovation, which causes a significant decline in the selling price and demand after the initial launch period. Hi-tech products include computers and consumer communication products. From a practical standpoint, a more realistic replenishment policy is needed to consider the impact of these risks, especially when some portion of the shortages is lost. In this paper, suboptimal and optimal order policies with partial backordering are developed for a buyer when the component cost, the selling price, and the demand rate decline at a continuous rate. Two mathematical models are derived and discussed: one model yields a suboptimal solution with a fixed replenishment interval and a simpler computational process; the other yields the optimal solution with a varying replenishment interval and a more complicated computational process. The second model results in more profit. Numerical examples are provided to illustrate the two replenishment models. Sensitivity analysis is carried out to investigate the relationship between the parameters and the net profit.

  1. Does a 4–6 Week Shoeing Interval Promote Optimal Foot Balance in the Working Equine?

    Directory of Open Access Journals (Sweden)

    Kirsty Leśniak

    2017-03-01

    Full Text Available Variation in equine hoof conformation between farriery interventions lacks research, despite associations with distal limb injuries. This study aimed to determine linear and angular hoof variations pre- and post-farriery within a four to six week shoeing/trimming interval. Seventeen hoof and distal limb measurements were drawn from lateral and anterior digital photographs of 26 horses pre- and post-farriery. Most lateral view variables changed significantly. Reductions of the dorsal wall, and of the weight-bearing and coronary band lengths, resulted in an increased vertical orientation of the hoof. The increased dorsal hoof wall angle, heel angle, and heel height illustrated this further, improving dorsopalmar alignment. Mediolateral measurements of coronary band and weight-bearing lengths reduced, whilst medial and lateral wall lengths from the 2D images increased, indicating an increased vertical hoof alignment. Additionally, dorsopalmar balance improved. However, the results demonstrated that a four to six week interval is sufficient for a palmar shift in the centre of pressure, increasing the loading on acutely inclined heels, altering DIP angulation, and increasing the load on susceptible structures (e.g., the DDFT). Mediolateral variable asymmetries suit the lateral hoof landing and unrollment pattern of the foot during landing. The results support regular (four to six week) farriery intervals for the optimal prevention of excess loading of palmar limb structures, reducing long-term injury risks through cumulative, excessive loading.

  2. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

    Latest technologies like the Internet, corporate intranets, data warehouses, ERPs, satellites, digital sensors, embedded systems and mobile networks are all generating such massive amounts of data that it is getting very difficult to analyze and understand all these data, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution. Using a fraction of the computing resources, sampling can often provide the same level of accuracy. The process of sampling requires much care because there are many factors involved in the determination of the correct sample size. The approach proposed in this paper tries to find a solution to this problem. Based on a statistical formula, after setting some parameters, it returns a sample size called the sufficient sample size, which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
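
    The paper's statistical formula is not reproduced here, but a standard stand-in for a "sufficient sample size" is Cochran's formula with a finite-population correction, sketched below with hypothetical parameters.

    ```python
    import math

    def sufficient_sample_size(population, margin=0.02, z=1.96, p=0.5):
        """Cochran-style sample size with finite-population correction."""
        n0 = z ** 2 * p * (1.0 - p) / margin ** 2          # infinite-population sample size
        return math.ceil(n0 / (1.0 + (n0 - 1.0) / population))

    print(sufficient_sample_size(population=5_000_000))    # roughly 2,400 records
    ```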

  3. Sample preparation optimization in fecal metabolic profiling.

    Science.gov (United States)

    Deda, Olga; Chatziioannou, Anastasia Chrysovalantou; Fasoula, Stella; Palachanis, Dimitris; Raikos, Νicolaos; Theodoridis, Georgios A; Gika, Helen G

    2017-03-15

    Metabolomic analysis of feces can provide useful insight into the metabolic status, the health/disease state of the human/animal and the symbiosis with the gut microbiome. As a result, there has recently been increased interest in the application of holistic analysis of feces for biomarker discovery. For metabolomics applications, the sample preparation process used prior to the analysis of fecal samples is of high importance, as it greatly affects the obtained metabolic profile, especially since feces, as a matrix, are diverse in their physicochemical characteristics and molecular content. However, there is still little information in the literature and a lack of a universal approach to sample treatment for fecal metabolic profiling. The scope of the present work was to study the conditions for sample preparation of rat feces with the ultimate goal of the acquisition of comprehensive metabolic profiles, either untargeted by NMR spectroscopy and GC-MS or targeted by HILIC-MS/MS. A fecal sample pooled from male and female Wistar rats was extracted under various conditions by modifying the pH value, the nature of the organic solvent and the sample weight to solvent volume ratio. It was found that the 1/2 (wf/vs) ratio provided the highest number of metabolites under neutral and basic conditions in both untargeted profiling techniques. Concerning the LC-MS profiles, neutral acetonitrile and propanol provided higher signals and wide metabolite coverage, though extraction efficiency is metabolite dependent. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Optimal relaxed causal sampler using sampled-data system theory

    NARCIS (Netherlands)

    Shekhawat, Hanumant; Meinsma, Gjerrit

    This paper studies the design of an optimal relaxed causal sampler using sampled data system theory. A lifted frequency domain approach is used to obtain the existence conditions and the optimal sampler. A state space formulation of the results is also provided. The resulting optimal relaxed causal

  5. Determining the optimal screening interval for type 2 diabetes mellitus using a risk prediction model.

    Directory of Open Access Journals (Sweden)

    Andrei Brateanu

    Full Text Available Progression to diabetes mellitus (DM) is variable and the screening time interval is not well defined. The American Diabetes Association and US Preventive Services Task Force suggest screening every 3 years, but evidence is limited. The objective of the study was to develop a model to predict the probability of developing DM and suggest a risk-based screening interval. We included non-diabetic adult patients screened for DM in the Cleveland Clinic Health System if they had at least two measurements of glycated hemoglobin (HbA1c): an initial one less than 6.5% (48 mmol/mol) in 2008, and another between January 2009 and December 2013. Cox proportional hazards models were created. The primary outcome was DM, defined as HbA1c greater than 6.4% (46 mmol/mol). The optimal rescreening interval was chosen based on the predicted probability of developing DM. Of 5084 participants, 100 (4.4%) of the 2281 patients with normal HbA1c and 772 (27.5%) of the 2803 patients with prediabetes developed DM within 5 years. Factors associated with developing DM included HbA1c (HR per 0.1 unit increase 1.20; 95% CI, 1.13-1.27), family history (HR 1.31; 95% CI, 1.13-1.51), smoking (HR 1.18; 95% CI, 1.03-1.35), triglycerides (HR 1.01; 95% CI, 1.00-1.03), alanine aminotransferase (HR 1.07; 95% CI, 1.03-1.11), body mass index (HR 1.06; 95% CI, 1.01-1.11), age (HR 0.95; 95% CI, 0.91-0.99) and high-density lipoproteins (HR 0.93; 95% CI, 0.90-0.95). Five percent of patients in the highest risk tertile developed DM within 8 months, while it took 35 months for 5% of the middle tertile to develop DM. Only 2.4% of the patients in the lowest tertile developed DM within 5 years. A risk prediction model employing commonly available data can be used to guide screening intervals. Based on equal intervals for equal risk, patients in the highest risk category could be rescreened after 8 months, while those in the intermediate and lowest risk categories could be rescreened after 3 and 5 years, respectively.
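
    The reported hazard ratios can be combined into a relative risk score and mapped to the suggested rescreening intervals. The sketch below is purely illustrative: the baseline hazard, covariate coding and tertile cut-offs of the published model are not reported here, so those parts are hypothetical.

    ```python
    import numpy as np

    # Hazard ratios quoted in the abstract (per the units described there)
    HAZARD_RATIOS = {"hba1c_per_0.1": 1.20, "family_history": 1.31, "smoking": 1.18,
                     "triglycerides": 1.01, "alt": 1.07, "bmi": 1.06, "age": 0.95, "hdl": 0.93}

    def relative_risk(covariates):
        """Relative risk score = product of HR**x over covariates x coded against a reference;
        only the ranking is meaningful because the baseline hazard is not reported."""
        return float(np.prod([HAZARD_RATIOS[k] ** v for k, v in covariates.items()]))

    def rescreen_months(score, tertile_cuts):
        """Map a risk score to the intervals suggested in the abstract (8 mo / 3 y / 5 y)."""
        if score >= tertile_cuts[1]:
            return 8
        return 36 if score >= tertile_cuts[0] else 60
    ```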

  6. Optimizing sampling approaches along ecological gradients

    DEFF Research Database (Denmark)

    Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel

    2016-01-01

    1. Natural scientists and especially ecologists use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying...... patterns in a statistically solid and reproducible manner, given the normal restrictions in labour, time and money. However, a technical guideline about an adequate sampling design to maximize prediction success under restricted resources is lacking. This study aims at developing such a solid...... and reproducible guideline for sampling along gradients in all fields of ecology and science in general. 2. We conducted simulations with artificial data for five common response types known in ecology, each represented by a simple function (no response, linear, exponential, symmetric unimodal and asymmetric...

  7. An Optimized Prediction Intervals Approach for Short Term PV Power Forecasting

    Directory of Open Access Journals (Sweden)

    Qiang Ni

    2017-10-01

    Full Text Available High-quality photovoltaic (PV) power prediction intervals (PIs) are essential to power system operation and planning. To improve the reliability and sharpness of PIs, a new method is proposed in this paper which accounts for both model uncertainties and noise uncertainties and constructs PIs with a two-step formulation. In the first step, the variance of the model uncertainties is obtained by using an extreme learning machine (ELM) to make deterministic forecasts of PV power. In the second step, an innovative PI-based cost function is developed to optimize the parameters of the ELM, and the noise uncertainties are quantified in terms of variance. The performance of the proposed approach is examined using PV power and meteorological data measured from a 1 kW rooftop DC micro-grid system. The validity of the proposed method is verified by comparing the experimental analysis with other benchmarking methods, and the results exhibit a superior performance.

  8. Carbohydrate-Restriction with High-Intensity Interval Training: An Optimal Combination for Treating Metabolic Diseases?

    Directory of Open Access Journals (Sweden)

    Monique E. Francois

    2017-10-01

    Full Text Available Lifestyle interventions incorporating both diet and exercise strategies remain cornerstone therapies for treating metabolic disease. Carbohydrate-restriction and high-intensity interval training (HIIT have independently been shown to improve cardiovascular and metabolic health. Carbohydrate-restriction reduces postprandial hyperglycemia, thereby limiting potential deleterious metabolic and cardiovascular consequences of excessive glucose excursions. Additionally, carbohydrate-restriction has been shown to improve body composition and blood lipids. The benefits of exercise for improving insulin sensitivity are well known. In this regard, HIIT has been shown to rapidly improve glucose control, endothelial function, and cardiorespiratory fitness. Here, we report the available evidence for each strategy and speculate that the combination of carbohydrate-restriction and HIIT will synergistically maximize the benefits of both approaches. We hypothesize that this lifestyle strategy represents an optimal intervention to treat metabolic disease; however, further research is warranted in order to harness the potential benefits of carbohydrate-restriction and HIIT for improving cardiometabolic health.

  9. Carbohydrate-Restriction with High-Intensity Interval Training: An Optimal Combination for Treating Metabolic Diseases?

    Science.gov (United States)

    Francois, Monique E; Gillen, Jenna B; Little, Jonathan P

    2017-01-01

    Lifestyle interventions incorporating both diet and exercise strategies remain cornerstone therapies for treating metabolic disease. Carbohydrate-restriction and high-intensity interval training (HIIT) have independently been shown to improve cardiovascular and metabolic health. Carbohydrate-restriction reduces postprandial hyperglycemia, thereby limiting potential deleterious metabolic and cardiovascular consequences of excessive glucose excursions. Additionally, carbohydrate-restriction has been shown to improve body composition and blood lipids. The benefits of exercise for improving insulin sensitivity are well known. In this regard, HIIT has been shown to rapidly improve glucose control, endothelial function, and cardiorespiratory fitness. Here, we report the available evidence for each strategy and speculate that the combination of carbohydrate-restriction and HIIT will synergistically maximize the benefits of both approaches. We hypothesize that this lifestyle strategy represents an optimal intervention to treat metabolic disease; however, further research is warranted in order to harness the potential benefits of carbohydrate-restriction and HIIT for improving cardiometabolic health.

  10. A model for calculating the optimal replacement interval of computer systems

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1981-08-01

    A mathematical model for calculating the optimal replacement interval of computer systems is described. The model estimates the most economical replacement interval when the computing demand and the cost and performance of the computers are known. The computing demand is assumed to increase monotonically every year. Four kinds of models are described. In model 1, a computer system is represented by only a central processing unit (CPU) and all the computing demand must be processed on the present computer until the next replacement. In model 2, by contrast, excess demand is allowed and may be transferred to another computing center and processed there at a cost. In model 3, the computer system is represented by a CPU, memories (MEM) and input/output devices (I/O) and must process all the demand. Model 4 is the same as model 3, but excess demand is allowed to be processed in another center. (1) Computing demand at JAERI, (2) the conformity of Grosch's law for recent computers, and (3) the replacement cost of computer systems are also described. (author)
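
    A toy version of the model 1 idea, choosing the replacement interval that minimizes the average yearly cost when all demand must run on the present machine and its relative running cost grows with age, is sketched below. All figures are invented for illustration; the actual models also treat memory, I/O and the transfer of excess demand to another center.

```python
# Toy replacement-interval calculation in the spirit of model 1 (all values invented).
PURCHASE_COST = 100.0   # cost of a new computer system (arbitrary units)
BASE_DEMAND = 10.0      # computing demand in the first year after replacement
DEMAND_GROWTH = 1.25    # demand increases monotonically, 25% per year
UNIT_COST_NEW = 0.5     # running cost per demand unit on a brand-new machine
AGEING_FACTOR = 1.15    # relative running-cost penalty per year of machine age

def average_annual_cost(interval_years: int) -> float:
    """Average yearly cost when the system is replaced every `interval_years` years."""
    running = sum(UNIT_COST_NEW * (AGEING_FACTOR ** t) * BASE_DEMAND * (DEMAND_GROWTH ** t)
                  for t in range(interval_years))
    return (PURCHASE_COST + running) / interval_years

best = min(range(1, 16), key=average_annual_cost)
print(f"optimal replacement interval: {best} years (avg cost {average_annual_cost(best):.1f})")
```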

  11. An Optimization Model for Kardeh Reservoir Operation Using Interval-Parameter, Multi-stage, Stochastic Programming

    Directory of Open Access Journals (Sweden)

    Fatemeh Rastegaripour

    2010-09-01

    Full Text Available The present study investigates water allocation of Kardeh Reservoir to domestic and agricultural users using Interval Parameter, Multi-stage, Stochastic Programming (IMSLP) under uncertainty. The advantages of the method include its dynamic nature, the use of a pre-defined policy in its optimization process, and the use of interval parameters and probabilities under uncertainty. Additionally, it offers different decision-making alternatives for different scenarios of water shortage. The required data were collected from Khorasan Razavi Regional Water Organization and from the Water and Wastewater Co. for the period 1988-2007. Results showed that, under the worst conditions, the water deficits expected to occur in each of the next 3 years will be 1.9, 2.55, and 3.11 million cubic meters for domestic use and 0.22, 0.32, and 0.75 million cubic meters for irrigation. Approximate reductions of 0.5, 0.7, and 1 million cubic meters in the monthly consumption of the urban community and enhanced irrigation efficiencies of about 6, 11, and 20% in the agricultural sector are recommended as approaches for combating the water shortage over the next 3 years.

  12. V–V delay interval optimization in CRT using echocardiography compared to QRS width in surface ECG

    Directory of Open Access Journals (Sweden)

    Amr Nawar

    2012-09-01

    Conclusion: Significant correlation appeared to exist during optimization of CRT between VV programming based on the shortest QRS interval at 12-lead ECG pacing and that based on highest LVOT VTI by echocardiography. A combined ECG and echocardiographic approach could be a more convenient solution in performing V–V optimization.

  13. Economic Statistical Design of Variable Sampling Interval X̄ Control Chart Based on Surrogate Variable Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Lee Tae-Hoon

    2016-12-01

    Full Text Available In many cases, an X̄ control chart based on a performance variable is used in industrial fields. Typically, the control chart monitors the measurements of the performance variable itself. However, if the performance variable is too costly or impossible to measure, and a less expensive surrogate variable is available, the process may be controlled more efficiently using the surrogate variable. In this paper, we present a model for the economic statistical design of a VSI (Variable Sampling Interval) X̄ control chart using a surrogate variable that is linearly correlated with the performance variable. We derive the total average profit model from an economic viewpoint, apply the model to a Very High Temperature Reactor (VHTR) nuclear fuel measurement system, and derive the optimal result using genetic algorithms. Compared with the control chart based on the performance variable, the proposed model gives a larger expected net income per unit of time in the long run if the correlation between the performance variable and the surrogate variable is relatively high. The proposed model is confined to the sample-mean control chart under the assumption that a single assignable cause occurs according to a Poisson process. However, the model may also be extended to other types of control charts with single or multiple assignable-cause assumptions, such as the VSS (Variable Sample Size) X̄ control chart, EWMA and CUSUM charts, and so on.
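
    The core of a VSI chart is that the time to the next sample depends on where the current statistic falls: a long interval when the (surrogate-based) standardized sample mean lies near the centre line, a short one in the warning region, and an out-of-control signal beyond the control limits. A minimal sketch of that decision rule follows; the limits and interval lengths are placeholders, not the economically optimized values obtained with the genetic algorithm in the paper.

```python
def next_sampling_interval(z, warning=1.0, control=3.0,
                           long_interval=2.0, short_interval=0.25):
    """VSI rule for a standardized sample mean z of the surrogate variable.

    Returns the time (hours) until the next sample, or None on an out-of-control signal.
    """
    if abs(z) > control:
        return None                      # signal: search for the assignable cause
    return long_interval if abs(z) <= warning else short_interval

for z in (0.3, 1.7, 3.4):
    print(z, next_sampling_interval(z))
```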

  14. Impact of sampling interval in training data acquisition on intrafractional predictive accuracy of indirect dynamic tumor-tracking radiotherapy.

    Science.gov (United States)

    Mukumoto, Nobutaka; Nakamura, Mitsuhiro; Akimoto, Mami; Miyabe, Yuki; Yokota, Kenji; Matsuo, Yukinori; Mizowaki, Takashi; Hiraoka, Masahiro

    2017-08-01

    To explore the effect of the sampling interval of training data acquisition on the intrafractional prediction error of surrogate signal-based dynamic tumor-tracking using a gimbal-mounted linac. Twenty pairs of respiratory motions were acquired from 20 patients (ten lung, five liver, and five pancreatic cancer patients) who underwent dynamic tumor-tracking with the Vero4DRT. First, respiratory motions were acquired as training data for an initial construction of the prediction model before the irradiation. Next, additional respiratory motions were acquired for an update of the prediction model due to the change of the respiratory pattern during the irradiation. The time elapsed prior to the second acquisition of the respiratory motion was 12.6 ± 3.1 min. A four-axis moving phantom reproduced patients' three-dimensional (3D) target motions and one-dimensional surrogate motions. To predict the future internal target motion from the external surrogate motion, prediction models were constructed by minimizing residual prediction errors for training data acquired at 80 and 320 ms sampling intervals for 20 s, and at 500, 1,000, and 2,000 ms sampling intervals for 60 s, using orthogonal kV x-ray imaging systems. The accuracies of prediction models trained with various sampling intervals were estimated based on training data with each sampling interval during the training process. The intrafractional prediction errors for the various prediction models were then calculated on intrafractional monitoring images taken for 30 s at a constant sampling interval of 500 ms, to fairly evaluate the prediction accuracy for the same motion pattern. In addition, the first respiratory motion was used for the training and the second respiratory motion was used for the evaluation of the intrafractional prediction errors for the changed respiratory motion, to evaluate the robustness of the prediction models. The training error of the prediction model was 1.7 ± 0.7 mm in 3D for all sampling

  15. On the Influence of the Data Sampling Interval on Computer-Derived K-Indices

    Directory of Open Access Journals (Sweden)

    A Bernard

    2011-06-01

    Full Text Available The K index was devised by Bartels et al. (1939) to provide objective monitoring of irregular geomagnetic activity. The K index was then routinely used to monitor the magnetic activity at permanent magnetic observatories as well as at temporary stations. The increasing number of digital and sometimes unmanned observatories and the creation of INTERMAGNET put the question of computer production of K at the centre of the debate. Four algorithms were selected during the Vienna meeting (1991) and endorsed by IAGA for the computer production of K indices. We used one of them (the FMI algorithm) to investigate the impact of the geomagnetic data sampling interval on computer-produced K values, through the comparison of the computer-derived K values for the period from January 1st, 2009 to May 31st, 2010 at the Port-aux-Francais magnetic observatory, using magnetic data series with different sampling rates (the smaller: 1 second; the larger: 1 minute). The impact is investigated on both 3-hour range values and K indices data series, as a function of the activity level for low and moderate geomagnetic activity.

  16. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. Planning the spatial distribution and number of pressure observations is known as sampling design, and it has traditionally been addressed in the context of model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, detect anomalies and bursts, guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and leakage management, has been addressed through optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly on the basis of network topology and of weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.

  17. Optimal time interval between capecitabine intake and radiotherapy in preoperative chemoradiation for locally advanced rectal cancer

    International Nuclear Information System (INIS)

    Yu, Chang Sik; Kim, Tae Won; Kim, Jong Hoon; Choi, Won Sik; Kim, Hee Cheol; Chang, Heung Moon; Ryu, Min Hee; Jang, Se Jin; Ahn, Seung Do; Lee, Sang-wook; Shin, Seong Soo; Choi, Eun Kyung; Kim, Jin Cheon

    2007-01-01

    Purpose: Capecitabine and its metabolites reach peak plasma concentrations 1 to 2 hours after a single oral administration, and concentrations rapidly decrease thereafter. We performed a retrospective analysis to find the optimal time interval between capecitabine administration and radiotherapy for rectal cancer. Methods and Materials: The time interval between capecitabine intake and radiotherapy was measured in patients who were treated with preoperative radiotherapy and concurrent capecitabine for rectal cancer. Patients were classified into the following groups. Group A1 included patients who took capecitabine 1 hour before radiotherapy, and Group B1 included all other patients. Group B1 was then subdivided into Group A2 (patients who took capecitabine 2 hours before radiotherapy) and Group B2. Group B2 was further divided into Group A3 and Group B3 using the same method. Total mesorectal excision was performed 6 weeks after completion of chemoradiation and the pathologic response was evaluated. Results: A total of 200 patients were enrolled in this study. Pathologic examination showed that Group A1 had higher rates of complete regression of primary tumors in the rectum (23.5% vs. 9.6%, p = 0.01), good response (44.7% vs. 25.2%, p = 0.006), and lower T stages (p = 0.021) compared with Group B1; however, Groups A2 and A3 did not show any improvement compared with Groups B2 and B3. Multivariate analysis showed that complete regression of primary tumors in the rectum and good response were significant only when capecitabine was administered 1 hour before radiotherapy. Conclusion: In preoperative chemoradiotherapy for rectal cancer, the pathologic response could be improved by administering capecitabine 1 hour before radiotherapy.

  18. Using remotely-sensed data for optimal field sampling

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-09-01

    Full Text Available Statistics is the science pertaining to the collection, summary, analysis, interpretation and presentation of data. It is often impractical... studies are: where to sample, what to sample and how many samples to obtain. Conventional sampling techniques are not always suitable in environmental studies and scientists have explored the use of remotely-sensed data as ancillary information to aid...

  19. Optimal sampling schemes for vegetation and geological field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2012-07-01

    Full Text Available The presentation made to Wits Statistics Department was on common classification methods used in the field of remote sensing, and the use of remote sensing to design optimal sampling schemes for field visits with applications in vegetation...

  20. Optimally decoding the input rate from an observation of the interspike intervals

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [COGS, University of Sussex at Brighton (United Kingdom) and Computational Neuroscience Laboratory, Babraham Institute, Cambridge (United Kingdom)]. E-mail: jf218@cam.ac.uk

    2001-09-21

    A neuron extensively receives both inhibitory and excitatory inputs. What is the ratio r between these two types of input so that the neuron can most accurately read out input information (rate)? We explore the issue in this paper provided that the neuron is an ideal observer - decoding the input information with the attainment of the Cramer-Rao inequality bound. It is found that, in general, adding certain amounts of inhibitory inputs to a neuron improves its capability of accurately decoding the input information. By calculating the Fisher information of an integrate-and-fire neuron, we determine the optimal ratio r for decoding the input information from an observation of the efferent interspike intervals. Surprisingly, the Fisher information can be zero for certain values of the ratio, seemingly implying that it is impossible to read out the encoded information at these values. By analysing the maximum likelihood estimate of the input information, it is concluded that the input information is in fact most easily estimated at the points where the Fisher information vanishes. (author)

  1. Retrieval interval mapping, a tool to optimize the spectral retrieval range in differential optical absorption spectroscopy

    Science.gov (United States)

    Vogel, L.; Sihler, H.; Lampel, J.; Wagner, T.; Platt, U.

    2012-06-01

    Remote sensing via differential optical absorption spectroscopy (DOAS) has become a standard technique to identify and quantify trace gases in the atmosphere. The technique is applied in a variety of configurations, commonly classified into active and passive instruments using artificial and natural light sources, respectively. Platforms range from ground-based to satellite instruments, and trace gases are studied in all kinds of different environments. Due to the wide range of measurement conditions, atmospheric compositions and instruments used, a specific challenge of a DOAS retrieval is to optimize the parameters for each specific case and particular trace gas of interest. This becomes especially important when measuring close to the detection limit. A well-chosen evaluation wavelength range is crucial to the DOAS technique. It should encompass strong absorption bands of the trace gas of interest in order to maximize the sensitivity of the retrieval, while at the same time minimizing absorption structures of other trace gases and thus potential interferences. Also, instrumental limitations and wavelength-dependent sources of error (e.g. insufficient corrections for the Ring effect and cross correlations between trace gas cross sections) need to be taken into account. Most often, not all of these requirements can be fulfilled simultaneously and a compromise needs to be found depending on the conditions at hand. Although for many trace gases the overall dependence of common DOAS retrievals on the evaluation wavelength interval is known, a systematic approach to finding the optimal retrieval wavelength range, and to assessing it qualitatively, is missing. Here we present a novel tool to determine the optimal evaluation wavelength range. It is based on mapping retrieved values in the retrieval wavelength space, thus visualizing the consequences of different choices of retrieval spectral ranges, e.g. those caused by slightly erroneous absorption cross sections, cross correlations and

  2. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Full Text Available Case studies include optimized field sampling representing the overall distribution of a particular mineral and deriving optimal exploration target zones. Continuum removal is applied for vegetation [13, 27, 46]. The convex hull transform is a method... of normalizing spectra [16, 41]. The convex hull technique is analogous to fitting a rubber band over a spectrum to form a continuum. Figure 5 shows the concept of the convex hull transform. The difference between the hull and the original spectrum...

  3. Sampling optimization for printer characterization by direct search.

    Science.gov (United States)

    Bianco, Simone; Schettini, Raimondo

    2012-12-01

    Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.

  4. Life cycle cost optimization of biofuel supply chains under uncertainties based on interval linear programming

    DEFF Research Database (Denmark)

    Ren, Jingzheng; Dong, Liang; Sun, Lu

    2015-01-01

    in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed...
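
    A common way to handle such an interval linear program is to solve a best-case and a worst-case ordinary LP, taking the favourable and unfavourable ends of each interval coefficient, so that the optimal objective is itself reported as an interval. The tiny sketch below illustrates that two-sub-model idea on a made-up cost-minimization problem using scipy; it is not the biofuel supply-chain model of the paper.

```python
from scipy.optimize import linprog

# Minimize c·x subject to A_ub x <= b_ub and bounds, where the cost coefficients
# (e.g. resource prices) are interval numbers [c_lo, c_hi].  Demand: x1 + x2 >= 100.
c_lo, c_hi = [2.0, 3.0], [2.5, 3.8]   # lower and upper ends of the interval prices
A_ub = [[-1.0, -1.0]]                 # -(x1 + x2) <= -100 encodes the demand constraint
b_ub = [-100.0]
bounds = [(0, 60), (0, 80)]           # capacity limits on the two feedstocks

best = linprog(c_lo, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
worst = linprog(c_hi, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(f"optimal cost interval: [{best.fun:.1f}, {worst.fun:.1f}]")
```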

  5. Efficient Estimation for Diffusions Sampled at High Frequency Over a Fixed Time Interval

    DEFF Research Database (Denmark)

    Jakobsen, Nina Munkholt; Sørensen, Michael

    Parametric estimation for diffusion processes is considered for high frequency observations over a fixed time interval. The processes solve stochastic differential equations with an unknown parameter in the diffusion coefficient. We find easily verified conditions on approximate martingale...

  6. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Directory of Open Access Journals (Sweden)

    Jake M Ferguson

    2014-06-01

    Full Text Available The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  7. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  8. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  9. Monte Carlo importance sampling optimization for system reliability applications

    International Nuclear Information System (INIS)

    Campioni, Luca; Vestrucci, Paolo

    2004-01-01

    This paper focuses on the reliability analysis of multicomponent systems by the importance sampling technique, and, in particular, it tackles the optimization aspect. A methodology based on the minimization of the variance at the component level is proposed for the class of systems consisting of independent components. The claim is that, by means of such a methodology, the optimal biasing could be achieved without resorting to the typical approach by trials
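
    The component-level biasing idea can be illustrated with a small Monte Carlo sketch: each component's failure probability is inflated when sampling, and every realization is re-weighted by the likelihood ratio so that the estimator remains unbiased. The system structure, probabilities and biasing values below are arbitrary; the variance-minimizing choice of bias derived in the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
p = np.array([1e-3, 2e-3, 5e-4])   # true component failure probabilities
q = np.array([5e-2, 5e-2, 5e-2])   # biased (inflated) sampling probabilities

def system_fails(states):
    """Example structure: component 0 in series with the parallel pair (1, 2)."""
    return states[0] or (states[1] and states[2])

n = 200_000
estimates = np.empty(n)
for i in range(n):
    states = rng.random(3) < q                        # sample component failures under q
    # Likelihood ratio of the sampled realization under p versus q.
    w = np.prod(np.where(states, p / q, (1 - p) / (1 - q)))
    estimates[i] = w * system_fails(states)

std_err = estimates.std(ddof=1) / np.sqrt(n)
print(f"P(system failure) ~ {estimates.mean():.2e} (std err {std_err:.1e})")
```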

  10. Allocation of ESS by interval optimization method considering impact of ship swinging on hybrid PV/diesel ship power system

    International Nuclear Information System (INIS)

    Wen, Shuli; Lan, Hai; Hong, Ying-Yi; Yu, David C.; Zhang, Lijun; Cheng, Peng

    2016-01-01

    Highlights: • An uncertainty model of PV generation on board is developed based on the experiments. • The moving and swinging of the ship are considered in the optimal ESS sizing problem. • The optimal size of the ESS in a hybrid PV/diesel/ESS ship power system is obtained by the interval optimization method. • Different cases were studied to show the significance of the proposed method considering the swinging effects on the cost. - Abstract: Owing to the low efficiency of traditional ships and the serious environmental pollution that they cause, the use of solar energy and an energy storage system (ESS) in a ship's power system is increasingly attracting attention. However, the swinging of a ship raises crucial challenges, associated with uncertainties in solar energy, in designing an optimal system for a large oil tanker. In this study, a series of experiments are performed to investigate the characteristics of a photovoltaic (PV) system on a moving ship. Based on the experimental results, an interval uncertainty model of on-board PV generation is established, which considers the effect of the swinging of the ship. Through the power balance equations, the outputs of the diesel generator and the ESS on a large oil tanker are also modeled using interval variables. An interval optimization method is developed to determine the optimal size of the ESS in this hybrid ship power system so as to reduce the fuel cost, the capital cost of the ESS, and emissions of greenhouse gases. Variations of the ship load are analyzed using a new method, taking five operating conditions into account. Several cases are compared in detail to demonstrate the effectiveness of the proposed algorithm.

  11. Contrasting Perspectives of Anesthesiologists and Gastroenterologists on the Optimal Time Interval between Bowel Preparation and Endoscopic Sedation

    Directory of Open Access Journals (Sweden)

    Deepak Agrawal

    2015-01-01

    Full Text Available Background. The optimal time interval between the last ingestion of bowel prep and sedation for colonoscopy remains controversial, despite guidelines that sedation can be administered 2 hours after consumption of clear liquids. Objective. To determine current practice patterns among anesthesiologists and gastroenterologists regarding the optimal time interval for sedation after last ingestion of bowel prep and to understand the rationale underlying their beliefs. Design. Questionnaire survey of anesthesiologists and gastroenterologists in the USA. The questions were focused on the preferred time interval of endoscopy after a polyethylene glycol based preparation in routine cases and select conditions. Results. Responses were received from 109 anesthesiologists and 112 gastroenterologists. 96% of anesthesiologists recommended waiting longer than 2 hours until sedation, in contrast to only 26% of gastroenterologists. The main reason for waiting >2 hours was that PEG was not considered a clear liquid. Most anesthesiologists, but not gastroenterologists, waited longer in patients with history of diabetes or reflux. Conclusions. Anesthesiologists and gastroenterologists do not agree on the optimal interval for sedation after last drink of bowel prep. Most anesthesiologists prefer to wait longer than the recommended 2 hours for clear liquids. The data suggest a need for clearer guidelines on this issue.

  12. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Science.gov (United States)

    Fung, Tak; Keenan, Kevin

    2014-01-01

    The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of > 30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.
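
    One way to see the finite-population effect is to invert the hypergeometric sampling distribution directly: for an observed number of allele copies in the sample, keep every candidate population allele count that is not rejected by a two-tailed test at the chosen level. The sketch below does this for a single allele; it is a simple illustration of the finite-population idea and not the authors' construction, which handles multiple alleles and joint confidence regions.

```python
from scipy.stats import hypergeom

def allele_frequency_ci(pop_size, sample_size, observed_copies, conf=0.95):
    """Confidence interval for the frequency of one allele in a finite diploid population,
    obtained by inverting a two-tailed hypergeometric test.

    pop_size and sample_size are numbers of diploid individuals; allele copies are
    counted out of 2*pop_size and 2*sample_size gene copies, respectively.
    """
    total_copies, drawn = 2 * pop_size, 2 * sample_size
    alpha = 1 - conf
    kept = []
    for k in range(total_copies + 1):                  # candidate population allele counts
        dist = hypergeom(total_copies, k, drawn)
        lower_tail = dist.cdf(observed_copies)
        upper_tail = dist.sf(observed_copies - 1)
        if min(lower_tail, upper_tail) > alpha / 2:    # k is not rejected at level alpha
            kept.append(k)
    return kept[0] / total_copies, kept[-1] / total_copies

# Example: 30 sampled individuals, allele observed in 18 of 60 copies, population of 500.
print(allele_frequency_ci(pop_size=500, sample_size=30, observed_copies=18))
```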

  13. Confidence intervals for population allele frequencies: the general case of sampling from a finite diploid population of any size.

    Directory of Open Access Journals (Sweden)

    Tak Fung

    Full Text Available The estimation of population allele frequencies using sample data forms a central component of studies in population genetics. These estimates can be used to test hypotheses on the evolutionary processes governing changes in genetic variation among populations. However, existing studies frequently do not account for sampling uncertainty in these estimates, thus compromising their utility. Incorporation of this uncertainty has been hindered by the lack of a method for constructing confidence intervals containing the population allele frequencies, for the general case of sampling from a finite diploid population of any size. In this study, we address this important knowledge gap by presenting a rigorous mathematical method to construct such confidence intervals. For a range of scenarios, the method is used to demonstrate that for a particular allele, in order to obtain accurate estimates within 0.05 of the population allele frequency with high probability (≥95%), a sample size of > 30 is often required. This analysis is augmented by an application of the method to empirical sample allele frequency data for two populations of the checkerspot butterfly (Melitaea cinxia L.), occupying meadows in Finland. For each population, the method is used to derive ≥98.3% confidence intervals for the population frequencies of three alleles. These intervals are then used to construct two joint ≥95% confidence regions, one for the set of three frequencies for each population. These regions are then used to derive a ≥95% confidence interval for Jost's D, a measure of genetic differentiation between the two populations. Overall, the results demonstrate the practical utility of the method with respect to informing sampling design and accounting for sampling uncertainty in studies of population genetics, important for scientific hypothesis-testing and also for risk-based natural resource management.

  14. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences. This is mainly due to its robustness against local optima and ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows one to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
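
    As a bare-bones illustration of the spatial simulated annealing loop behind such packages (not spsann's actual implementation), the sketch below perturbs one candidate sampling point at a time, accepts worse configurations with a temperature-dependent probability, and uses the mean squared shortest distance (MSSD) from a grid of prediction nodes as the objective; the grid, cooling schedule and perturbation settings are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
grid = np.array([(i, j) for i in range(20) for j in range(20)], dtype=float)  # prediction nodes

def mssd(points):
    """Mean squared shortest distance from every grid node to its nearest sample point."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

points = rng.uniform(0, 19, size=(10, 2))     # initial random sample pattern of 10 points
energy, temperature, max_shift = mssd(points), 1.0, 5.0

for _ in range(2000):
    cand = points.copy()
    k = rng.integers(len(cand))
    cand[k] = np.clip(cand[k] + rng.uniform(-max_shift, max_shift, 2), 0, 19)  # move one point
    e_new = mssd(cand)
    # Always accept improvements; accept worse states with a Boltzmann-like probability.
    if e_new < energy or rng.random() < np.exp((energy - e_new) / temperature):
        points, energy = cand, e_new
    temperature *= 0.999                      # cooling schedule
    max_shift = max(0.5, max_shift * 0.999)   # shrink the maximum perturbation distance

print(f"final MSSD: {energy:.2f}")
```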

  15. Optimization of protein samples for NMR using thermal shift assays

    International Nuclear Information System (INIS)

    Kozak, Sandra; Lercher, Lukas; Karanth, Megha N.; Meijers, Rob; Carlomagno, Teresa; Boivin, Stephane

    2016-01-01

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.

  16. Optimization of protein samples for NMR using thermal shift assays

    Energy Technology Data Exchange (ETDEWEB)

    Kozak, Sandra [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Lercher, Lukas; Karanth, Megha N. [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Meijers, Rob [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Carlomagno, Teresa, E-mail: teresa.carlomagno@oci.uni-hannover.de [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Boivin, Stephane, E-mail: sboivin77@hotmail.com, E-mail: s.boivin@embl-hamburg.de [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany)

    2016-04-15

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.

  17. On Optimal, Minimal BRDF Sampling for Reflectance Acquisition

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Jensen, Henrik Wann; Ramamoorthi, Ravi

    2015-01-01

    The bidirectional reflectance distribution function (BRDF) is critical for rendering, and accurate material representation requires data-driven reflectance models. However, isotropic BRDFs are 3D functions, and measuring the reflectance of a flat sample can require a million incident and outgoing...... direction pairs, making the use of measured BRDFs impractical. In this paper, we address the problem of reconstructing a measured BRDF from a limited number of samples. We present a novel mapping of the BRDF space, allowing for extraction of descriptive principal components from measured databases......, such as the MERL BRDF database. We optimize for the best sampling directions, and explicitly provide the optimal set of incident and outgoing directions in the Rusinkiewicz parameterization for n = {1, 2, 5, 10, 20} samples. Based on the principal components, we describe a method for accurately reconstructing BRDF...

  18. Optimal updating magnitude in adaptive flat-distribution sampling.

    Science.gov (United States)

    Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery

    2017-11-07

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
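
    A bare-bones version of the single-bin (Wang-Landau-style) updating with the inverse-time schedule is sketched here for a one-dimensional discrete state space with a toy potential; the proposal move, bin layout and switch-over rule are simplified placeholders rather than the schemes analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n_bins = 20
energy = 0.5 * (np.arange(n_bins) - 10) ** 2 / 10.0  # toy potential of mean force
bias = np.zeros(n_bins)                               # adaptive bias offsetting the potential
hist = np.zeros(n_bins)
f = 1.0                                               # updating magnitude
state = 0

for t in range(1, 100_001):
    proposal = (state + rng.choice([-1, 1])) % n_bins
    # Metropolis acceptance under the biased weight exp(-E(x) - b(x)).
    log_acc = -(energy[proposal] - energy[state]) - (bias[proposal] - bias[state])
    if np.log(rng.random()) < log_acc:
        state = proposal
    bias[state] += f                                  # single-bin update of the bias
    hist[state] += 1
    f = min(f, n_bins / t)                            # inverse-time schedule, f ~ N/t

print("histogram flatness (min/max visits):", round(hist.min() / hist.max(), 2))
print("estimated free energy profile:", np.round(-bias + bias.max(), 2))
```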

  19. Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP

    Science.gov (United States)

    Roshani, E.; Berg, A. A.; Lindsay, J.

    2013-12-01

    The Soil Moisture Active Passive (SMAP) satellite is scheduled for launch in October 2014. Global efforts are underway to establish soil moisture monitoring networks for both the pre- and post-launch validation and calibration of the SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6-week period for soil moisture and several other parameters, simultaneously with remotely sensed images of the sampling region. The locations of these sampling sites were mainly selected on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling site selection, they do not guarantee optimal site placement to provide the most efficient representation of the studied area. In this analysis a method for optimization of sampling locations is presented which combines a state-of-the-art multi-objective optimization engine (the non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique to minimize the number of sampling sites while simultaneously minimizing the differences between the soil moisture map resulting from the kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, which is a multi-platform open-source GIS. The optimization framework is subject to the following three constraints: A) sampling sites should be accessible to the crew on the ground, B) the number of sites located in a specific soil texture should be greater than or equal to a minimum value, and finally C) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum constraint. The first constraint is included in the proposed model to keep the approach practical. The second and third constraints are considered to guarantee that the collected samples from each soil texture category

  20. An efficient sampling approach for variance-based sensitivity analysis based on the law of total variance in the successive intervals without overlapping

    Science.gov (United States)

    Yun, Wanying; Lu, Zhenzhou; Jiang, Xian

    2018-06-01

    To efficiently execute variance-based global sensitivity analysis, the law of total variance over successive, non-overlapping intervals is first proved, and an efficient space-partition sampling-based approach is then proposed on this basis in this paper. By partitioning the sample points of the output into different subsets according to the different inputs, the proposed approach can efficiently evaluate all the main effects concurrently from one group of sample points. In addition, there is no need to optimize the partition scheme in the proposed approach. The maximum length of the subintervals decreases as the number of sample points of the model input variables increases, which ensures the convergence condition of the space-partition approach. Furthermore, a new interpretation of the partitioning idea is given from the perspective of the variance ratio function. Finally, three test examples and one engineering application are employed to demonstrate the accuracy, efficiency and robustness of the proposed approach.
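
    The partitioning idea (one set of sample points, with outputs grouped into successive non-overlapping bins of each input) can be illustrated with the short estimator below, which recovers the first-order (main-effect) indices of the standard Ishigami test function. The bin count and sample size are arbitrary choices, not the recommendations of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_bins = 100_000, 50

# Ishigami test function (a=7, b=0.1) with inputs uniform on [-pi, pi].
x = rng.uniform(-np.pi, np.pi, size=(n, 3))
y = np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])
var_y = y.var()

def main_effect(xi):
    """First-order index from the law of total variance over successive, non-overlapping
    bins of one input: S_i = Var(E[Y | bin of X_i]) / Var(Y)."""
    edges = np.quantile(xi, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, xi, side="right") - 1, 0, n_bins - 1)
    counts = np.bincount(bins, minlength=n_bins)
    means = np.bincount(bins, weights=y, minlength=n_bins) / counts
    return np.sum(counts * (means - y.mean()) ** 2) / (n * var_y)

# All three main effects are evaluated from the same single group of sample points.
print([round(main_effect(x[:, i]), 3) for i in range(3)])
```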

  1. Simultaneous parameter and tolerance optimization of structures via probability-interval mixed reliability model

    DEFF Research Database (Denmark)

    Luo, Yangjun; Wu, Xiaoxiang; Zhou, Mingdong

    2015-01-01

    Both structural sizes and dimensional tolerances strongly influence the manufacturing cost and the functional performance of a practical product. This paper presents an optimization method to simultaneously find the optimal combination of structural sizes and dimensional tolerances. Based...... transformed into their equivalent formulations by using the performance measure approach. The optimization problem is then solved with the sequential approximate programming. Meanwhile, a numerically stable algorithm based on the trust region method is proposed to efficiently update the target performance...

  2. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    OpenAIRE

    Yan, Ying; Suo, Bin

    2017-01-01

    Due to the complexity of system and lack of expertise, epistemic uncertainties may present in the experts’ judgment on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the idea of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. Similarity matrix b...

  3. What is the optimal interval between successive home blood pressure readings using an automated oscillometric device?

    Science.gov (United States)

    Eguchi, Kazuo; Kuruvilla, Sujith; Ogedegbe, Gbenga; Gerin, William; Schwartz, Joseph E; Pickering, Thomas G

    2009-06-01

    To clarify whether a shorter interval between three successive home blood pressure (HBP) readings (10 s vs. 1 min), taken twice a day, gives a better prediction of the average 24-h BP and better patient compliance. We enrolled 56 patients from a hypertension clinic (mean age: 60 ± 14 years; 54% female). The study consisted of three clinic visits, with two 4-week periods of self-monitoring of HBP between them, and 24-h ambulatory BP monitoring at the second visit. Using a crossover design, with order randomized, the oscillometric HBP device (HEM-5001) could be programmed to take three consecutive readings at either 10-s or 1-min intervals, each of which was done for 4 weeks. Patients were asked to measure three HBP readings in the morning and evening. All the readings were stored in the memory of the monitors. The analyses were performed using the second and third HBP readings. The average systolic BP/diastolic BP for the 10-s and 1-min intervals at home was 136.1 ± 15.8/77.5 ± 9.5 and 133.2 ± 15.5/76.9 ± 9.3 mmHg, respectively (P = 0.001/0.19 for the differences in systolic BP and diastolic BP). The 1-min BP readings were significantly closer to the average awake ambulatory BP (131 ± 14/79 ± 10 mmHg) than the 10-s interval readings. There was no significant difference in patients' compliance in taking adequate numbers of readings at the different time intervals. The 1-min interval between HBP readings gave closer agreement with the daytime average BP than the 10-s interval.

  4. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  5. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. We conducted regression analyses

  6. Optimized preparation of urine samples for two-dimensional electrophoresis and initial application to patient samples

    DEFF Research Database (Denmark)

    Lafitte, Daniel; Dussol, Bertrand; Andersen, Søren

    2002-01-01

    OBJECTIVE: We optimized the preparation of urinary samples to obtain a comprehensive map of the urinary proteins of healthy subjects and then compared this map with those obtained from patient samples to show that the pattern was specific to their kidney disease. DESIGN AND METHODS: The urinary...

  7. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    Science.gov (United States)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practices (BMPs) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CP were met with the lowest possible BMP implementation cost. The Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.

  8. Ad-Hoc vs. Standardized and Optimized Arthropod Diversity Sampling

    Directory of Open Access Journals (Sweden)

    Pedro Cardoso

    2009-09-01

    Full Text Available The use of standardized and optimized protocols has recently been advocated for different arthropod taxa instead of ad-hoc sampling or sampling with protocols defined on a case-by-case basis. We present a comparison of both sampling approaches applied to spiders in a natural area of Portugal. Tests were made of their efficiency, over-collection of common species, singleton proportions, species abundance distributions, average specimen size, average taxonomic distinctness and the behavior of richness estimators. The standardized protocol revealed three main advantages: (1) higher efficiency; (2) more reliable estimations of true richness; and (3) meaningful comparisons between undersampled areas.

  9. A risk explicit interval linear programming model for uncertainty-based environmental economic optimization in the Lake Fuxian watershed, China.

    Science.gov (United States)

    Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan

    2013-01-01

    The conflict of water environment protection and economic development has brought severe water pollution and restricted the sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve an integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of optimization schemes. Decision makers' preferences for risk levels can be expressed through inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. Through balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental economic optimization scheme in integrated watershed management.

  10. A Risk Explicit Interval Linear Programming Model for Uncertainty-Based Environmental Economic Optimization in the Lake Fuxian Watershed, China

    Directory of Open Access Journals (Sweden)

    Xiaoling Zhang

    2013-01-01

    Full Text Available The conflict of water environment protection and economic development has brought severe water pollution and restricted the sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve an integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of Lake Fuxian watershed, China. Scenario analysis was introduced into the model solution process to ensure the practicality and operability of optimization schemes. Decision makers’ preferences for risk levels can be expressed through inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. Through balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of “low risk and high return efficiency” in the trade-off curve. The representative schemes at the turning points of two scenarios were interpreted and compared to identify a preferable planning alternative, which has relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, REILP, for decision makers to develop an effective environmental economic optimization scheme in integrated watershed management.

  11. A Multiobjective Interval Programming Model for Wind-Hydrothermal Power System Dispatching Using 2-Step Optimization Algorithm

    Science.gov (United States)

    Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be predicted accurately, and the complex multiobjective scheduling model is nonlinear; achieving an accurate solution to such a problem is therefore a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve multiobjective dispatching. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663

  12. A multiobjective interval programming model for wind-hydrothermal power system dispatching using 2-step optimization algorithm.

    Science.gov (United States)

    Ren, Kun; Jihong, Qu

    2014-01-01

    Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be predicted accurately, and the complex multiobjective scheduling model is nonlinear; achieving an accurate solution to such a problem is therefore a very difficult task. This paper presents an interval programming model with a 2-step optimization algorithm to solve multiobjective dispatching. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision.

  13. Optimizing Time Intervals of Meteorological Data Used with Atmospheric Dose Modeling at SRS

    International Nuclear Information System (INIS)

    Simpkins, A.A.

    1999-01-01

    Measured tritium oxide concentrations in air have been compared with calculated values using routine release Gaussian plume models for different time intervals of meteorological data. These comparisons determined an optimum time interval of meteorological data used with atmospheric dose models at the Savannah River Site (SRS). Meteorological data of varying time intervals (1-yr to 10-yr) were used for the comparison. Insignificant differences are seen in using a one-year database as opposed to a five-year database. Use of a ten-year database results in slightly more conservative results. For meteorological databases of length one to five years the mean ratio of predicted to measured tritium oxide concentrations is approximately 1.25 whereas for the ten-year meteorological database the ratio is closer to 1.35. Currently at the Savannah River Site a meteorological database of five years' duration is used for all dose models. This study suggests no substantially improved accuracy from using meteorological files of shorter or longer time intervals.

  14. Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models

    Science.gov (United States)

    Doebler, Anna; Doebler, Philipp; Holling, Heinz

    2013-01-01

    The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…

  15. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  16. Determination of the Optimal Component Replacement Time Interval Based on the Opportunity-Based Age Replacement Model

    OpenAIRE

    Giatman, Muhammad

    2008-01-01

    A poor maintenance system, and poor replacement practice in particular, can cause substantial losses for a company. These losses occur when the production process is disturbed by unexpected or unscheduled replacements. They are especially severe for factories of the continuous flow shop type, because replacing a component that requires a machine shutdown stops every machine in the production line. To anticipate the losses caused by replacement activity, this research searches for the interval of op...

  17. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
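
The filtering idea behind CGS can be illustrated with a toy sketch. A naive Bayes classifier from scikit-learn stands in for the Bayesian network classifier described in the report, and the objective function is a cheap placeholder for an expensive simulation:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_vars = 20

def objective(x):
    # Placeholder for an expensive objective over discrete design variables.
    return float(np.sum(x * np.arange(1, n_vars + 1)) - 3.0 * np.sum(x))

# Evaluate an initial random population and label designs as promising or not.
X = rng.integers(0, 2, size=(200, n_vars))
values = np.array([objective(x) for x in X])
labels = (values > np.median(values)).astype(int)
clf = GaussianNB().fit(X, labels)

# Generate new candidates, but only pay for evaluations on designs the
# classifier considers promising (posterior probability above a threshold).
candidates = rng.integers(0, 2, size=(1000, n_vars))
p_promising = clf.predict_proba(candidates)[:, 1]
selected = candidates[p_promising > 0.7]
evaluated = [objective(x) for x in selected]
best = max(evaluated) if evaluated else None
print(f"evaluated {len(evaluated)} of {len(candidates)} candidates; best so far: {best}")
```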

  18. Simultaneous beam sampling and aperture shape optimization for SPORT

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu

    2015-01-01

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  19. Simultaneous beam sampling and aperture shape optimization for SPORT

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Ye, Yinyu [Department of Management Science and Engineering, Stanford University, Stanford, California 94305 (United States)

    2015-02-15

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  20. Simultaneous beam sampling and aperture shape optimization for SPORT.

    Science.gov (United States)

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck and a prostate case

  1. Optimal Design and Tuning of PID-Type Interval Type-2 Fuzzy Logic Controllers for Delta Parallel Robots

    Directory of Open Access Journals (Sweden)

    Xingguo Lu

    2016-05-01

    Full Text Available In this work, we propose a new method for the optimal design and tuning of a Proportional-Integral-Derivative type (PID-type) interval type-2 fuzzy logic controller (IT2 FLC) for Delta parallel robot trajectory tracking control. The presented methodology starts with an optimal design problem of IT2 FLC. A group of IT2 FLCs are obtained by blurring the membership functions using a variable called blurring degree. By comparing the performance of the controllers, the optimal structure of IT2 FLC is obtained. Then, a multi-objective optimization problem is formulated to tune the scaling factors of the PID-type IT2 FLC. The Non-dominated Sorting Genetic Algorithm (NSGA-II) is adopted to solve the constrained nonlinear multi-objective optimization problem. Simulation results of the optimized controller are presented and discussed regarding application in the Delta parallel robot. The proposed method provides an effective way to design and tune the PID-type IT2 FLC with a desired control performance.

  2. The duration of uncertain times: audiovisual information about intervals is integrated in a statistically optimal fashion.

    Directory of Open Access Journals (Sweden)

    Jess Hartcher-O'Brien

    Full Text Available Often multisensory information is integrated in a statistically optimal fashion where each sensory source is weighted according to its precision. This integration scheme is statistically optimal because it theoretically results in unbiased perceptual estimates with the highest precision possible. There is a current lack of consensus about how the nervous system processes multiple sensory cues to elapsed time. In order to shed light upon this, we adopt a computational approach to pinpoint the integration strategy underlying duration estimation of audio/visual stimuli. One of the assumptions of our computational approach is that the multisensory signals redundantly specify the same stimulus property. Our results clearly show that despite claims to the contrary, perceived duration is the result of an optimal weighting process, similar to that adopted for estimates of space. That is, participants weight the audio and visual information to arrive at the most precise, single duration estimate possible. The work also disentangles how different integration strategies - i.e. considering the time of onset/offset of signals - might alter the final estimate. As such we provide the first concrete evidence of an optimal integration strategy in human duration estimates.

  3. Biostratigraphic analysis of core samples from wells drilled in the Devonian shale interval of the Appalachian and Illinois Basins

    Energy Technology Data Exchange (ETDEWEB)

    Martin, S.J.; Zielinski, R.E.

    1978-07-14

    A palynological investigation was performed on 55 samples of core material from four wells drilled in the Devonian Shale interval of the Appalachian and Illinois Basins. Using a combination of spores and acritarchs, it was possible to divide the Middle Devonian from the Upper Devonian and to make subdivisions within the Middle and Upper Devonian. The age of the palynomorphs encountered in this study is Upper Devonian.

  4. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
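
The block-level step described above models PSNR as a quadratic function of quantization bit-depth and takes the optimum where the derivative vanishes. A small sketch of that calculation, with invented calibration points:

```python
import numpy as np

# Hypothetical (bit-depth, measured PSNR in dB) pairs for one video block.
depths = np.array([4, 5, 6, 7, 8, 9, 10], dtype=float)
psnr = np.array([28.1, 31.9, 34.8, 36.6, 37.5, 37.6, 37.2])

# Fit PSNR(b) = a*b^2 + c*b + d and take the stationary point b* = -c / (2a).
a, c, d = np.polyfit(depths, psnr, deg=2)
b_star = -c / (2.0 * a)

# In a rate-constrained codec the bit-depth must be an integer in a legal range.
b_opt = int(np.clip(round(b_star), depths.min(), depths.max()))
print(f"unconstrained optimum {b_star:.2f} bits -> chosen bit-depth {b_opt}")
```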

  5. Optimal debulking targets in women with advanced stage ovarian cancer: a retrospective study of immediate versus interval debulking surgery.

    Science.gov (United States)

    Altman, Alon D; Nelson, Gregg; Chu, Pamela; Nation, Jill; Ghatage, Prafull

    2012-06-01

    The objective of this study was to examine both overall and disease-free survival of patients with advanced stage ovarian cancer after immediate or interval debulking surgery based on residual disease. We performed a retrospective chart review at the Tom Baker Cancer Centre in Calgary, Alberta of patients with pathologically confirmed stage III or IV ovarian cancer, fallopian tube cancer, or primary peritoneal cancer between 2003 and 2007. We collected data on the dates of diagnosis, recurrence, and death; cancer stage and grade; patients' age; surgery performed; and residual disease. One hundred ninety-two patients were included in the final analysis. The optimal debulking rate with immediate surgery was 64.8%, and with interval surgery it was 85.9%. There were improved overall and disease-free survival rates for optimally debulked disease. In patients with advanced stage ovarian cancer, the goal of surgery should be resection of disease to microscopic residual at the initial procedure; this results in better overall survival than lesser degrees of resection. Further studies are required to determine optimal surgical management.

  6. Optimal sampling in damage detection of flexural beams by continuous wavelet transform

    International Nuclear Information System (INIS)

    Basu, B; Broderick, B M; Montanari, L; Spagnoli, A

    2015-01-01

    Modern measurement techniques are improving in capability to capture spatial displacement fields occurring in deformed structures with high precision and in a quasi-continuous manner. This in turn has made the use of vibration-based damage identification methods more effective and reliable for real applications. However, practical measurement and data processing issues still present barriers to the application of these methods in identifying several types of structural damage. This paper deals with spatial Continuous Wavelet Transform (CWT) damage identification methods in beam structures with the aim of addressing the following key questions: (i) can the cost of damage detection be reduced by down-sampling? (ii) what is the minimum number of sampling intervals required for optimal damage detection? The first three free vibration modes of a cantilever and a simple supported beam with an edge open crack are numerically simulated. A thorough parametric study is carried out by taking into account the key parameters governing the problem, including level of noise, crack depth and location, mechanical and geometrical parameters of the beam. The results are employed to assess the optimal number of sampling intervals for effective damage detection. (paper)
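
As a rough illustration of the spatial CWT idea, and of how the number of sampling intervals limits localization, the sketch below applies a Mexican-hat wavelet to a sampled deflection signal containing a small slope discontinuity. The damage location, signal model, noise level and scale are invented, and the smooth mode-shape background is omitted for clarity; the paper's parametric study is far more thorough.

```python
import numpy as np

def ricker(u):
    # Mexican-hat (Ricker) wavelet: zero mean and zero first moment, so purely
    # linear parts of the signal give (near-)zero coefficients.
    return (1.0 - u**2) * np.exp(-0.5 * u**2)

def locate_kink(n_points, kink=0.37, scale=0.025, noise=1e-4, seed=1):
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_points)
    dx = x[1] - x[0]
    # Damage signature: a slope discontinuity in the sampled deflection field.
    signal = 0.05 * np.abs(x - kink) + noise * rng.standard_normal(n_points)
    coeffs = np.array([np.sum(signal * ricker((x - b) / scale)) * dx / np.sqrt(scale)
                       for b in x])
    margin = int(4 * scale / dx) + 1       # discard edge-affected coefficients
    inner = slice(margin, n_points - margin)
    return x[inner][np.argmax(np.abs(coeffs[inner]))]

# Coarser spatial sampling limits how precisely the damage can be located.
for n in (81, 201, 801):
    print(f"{n:4d} sampling points -> estimated damage location x ~ {locate_kink(n):.3f}")
```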

  7. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

    The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely......, and subsequently reweight the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained...

  8. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractive to beetles and repellent to true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species from ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when

  9. Optimal CCD readout by digital correlated double sampling

    Science.gov (United States)

    Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.

    2016-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
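
The core of DCDS, averaging many digitized samples of the reset and video levels and differencing the two averages, can be sketched as follows. The pixel values, noise levels and sample counts are illustrative, and only white noise is modelled, whereas the paper's analysis also covers the correlated noise and filter design that make the real trade-offs non-trivial.

```python
import numpy as np

rng = np.random.default_rng(42)

def read_pixel(signal_e, n_samples, read_noise, reset_offset):
    """One DCDS pixel read: average n_samples of the reset level and of the
    video (signal) level, then difference the two averages."""
    reset = reset_offset + read_noise * rng.standard_normal(n_samples)
    video = reset_offset - signal_e + read_noise * rng.standard_normal(n_samples)
    return np.mean(reset) - np.mean(video)          # the common offset cancels

true_signal = 500.0          # electrons (illustrative)
read_noise = 8.0             # white noise per raw ADC sample, electrons rms
offset = 12000.0             # arbitrary reset baseline

for n in (1, 4, 16, 64):
    estimates = np.array([read_pixel(true_signal, n, read_noise, offset)
                          for _ in range(2000)])
    print(f"{n:3d} samples/level -> rms error {estimates.std():6.2f} e- "
          f"(white-noise prediction {read_noise * np.sqrt(2.0 / n):6.2f} e-)")
```

For purely white noise the error of the differenced averages falls as sqrt(2/N); in a real CCD chain the 1/f component sets a floor, which is exactly why the closed-form analysis in the paper is useful.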

  10. Neuro-genetic system for optimization of GMI samples sensitivity.

    Science.gov (United States)

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in either the external coefficient or the internal coefficient has a negative influence on the sampling level. The changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchasing has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a whole analysis of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when parameter values are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
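
The external and internal coefficients mentioned above are the usual parameters of a Bass-type diffusion model. The sketch below screens candidate sampling levels against such a model; the diffusion parameters, cost figures and profit rule are invented and much simpler than the model in the paper.

```python
import numpy as np

def cumulative_adopters(p, q, market, seed_adopters, periods):
    """Discrete-time Bass-type diffusion: p is the external (innovation)
    coefficient, q the internal (imitation) coefficient. Free samples are
    treated as seeding the initial adopter pool."""
    n = float(seed_adopters)
    for _ in range(periods):
        n += (p + q * n / market) * (market - n)
    return n

# Illustrative parameters only -- not taken from the paper.
p, q = 0.03, 0.4
market, margin, unit_sample_cost = 100000, 12.0, 3.0

best = None
for sampling_level in np.linspace(0.0, 0.10, 21):   # fraction of market sampled
    seeded = sampling_level * market
    adopters = cumulative_adopters(p, q, market, seeded, periods=12)
    profit = margin * (adopters - seeded) - unit_sample_cost * seeded
    if best is None or profit > best[1]:
        best = (sampling_level, profit)
print(f"best sampling level ~ {best[0]:.1%}, profit ~ {best[1]:,.0f}")
```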

  12. Optimization of the test intervals of a nuclear safety system by genetic algorithms, solution clustering and fuzzy preference assignment

    International Nuclear Information System (INIS)

    Zio, E.; Bazzo, R.

    2010-01-01

    In this paper, a procedure is developed for identifying a number of representative solutions manageable for decision-making in a multiobjective optimization problem concerning the test intervals of the components of a safety system of a nuclear power plant. Pareto Front solutions are identified by a genetic algorithm and then clustered by subtractive clustering into 'families'. On the basis of the decision maker's preferences, each family is then synthetically represented by a 'head of the family' solution. This is done by introducing a scoring system that ranks the solutions with respect to the different objectives: a fuzzy preference assignment is employed to this purpose. Level Diagrams are then used to represent, analyze and interpret the Pareto Fronts reduced to the head-of-the-family solutions

  13. A novel interval type-2 fractional order fuzzy PID controller: Design, performance evaluation, and its optimal time domain tuning.

    Science.gov (United States)

    Kumar, Anupam; Kumar, Vijay

    2017-05-01

    In this paper, a novel concept of an interval type-2 fractional order fuzzy PID (IT2FO-FPID) controller, which requires a fractional order integrator and a fractional order differentiator, is proposed. The incorporation of a Takagi-Sugeno-Kang (TSK) type interval type-2 fuzzy logic controller (IT2FLC) with a fractional controller of PID-type is investigated for time response measures due to both unit step response and unit load disturbance. The resulting IT2FO-FPID controller is examined on different delayed linear and nonlinear benchmark plants, followed by robustness analysis. In order to design this controller, fractional order integrator-differentiator operators are considered as design variables, along with the input-output scaling factors. A new hybridized algorithm named artificial bee colony-genetic algorithm (ABC-GA) is used to optimize the parameters of the controller while minimizing a weighted sum of the integral of time absolute error (ITAE) and the integral of the square of the control output (ISCO). To assess the comparative performance of the IT2FO-FPID, the authors compared it against existing controllers, i.e., interval type-2 fuzzy PID (IT2-FPID), type-1 fractional order fuzzy PID (T1FO-FPID), type-1 fuzzy PID (T1-FPID), and conventional PID controllers. Furthermore, to show the effectiveness of the proposed controller, perturbed processes along with larger dead times are tested. Moreover, the proposed controllers are also implemented on a multi-input multi-output (MIMO), coupled, and highly complex nonlinear two-link robot manipulator system in the presence of un-modeled dynamics. Finally, the simulation results explicitly indicate that the performance of the proposed IT2FO-FPID controller is superior to its conventional counterparts in most of the cases. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
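
The tuning objective, a weighted sum of ITAE and ISCO, is straightforward to evaluate numerically for any candidate controller. A sketch using a conventional PID on a placeholder plant; the plant, gains and weights are assumptions, not values from the paper.

```python
def itae_isco_cost(kp, ki, kd, w1=1.0, w2=0.05, dt=0.001, t_end=10.0):
    """Unit-step response of a simple loop with a conventional PID standing in
    for the IT2FO-FPID; returns the weighted objective w1*ITAE + w2*ISCO."""
    y = v = integ = prev_y = 0.0
    itae = isco = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        e = 1.0 - y                                   # unit step reference
        integ += e * dt
        u = kp * e + ki * integ - kd * (y - prev_y) / dt   # derivative on measurement
        prev_y = y
        # Placeholder plant G(s) = 1 / (s (0.5 s + 1)), integrated by forward Euler.
        v += dt * (u - v) / 0.5
        y += dt * v
        itae += t * abs(e) * dt                       # integral of time-weighted |error|
        isco += u * u * dt                            # integral of squared control output
    return w1 * itae + w2 * isco

print(f"cost(kp=2.0, ki=0.5, kd=0.3) = {itae_isco_cost(2.0, 0.5, 0.3):.3f}")
```

A hybrid optimizer such as the paper's ABC-GA would simply call a cost function of this form for each candidate parameter set.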

  14. Interval Optimization Model Considering Terrestrial Ecological Impacts for Water Rights Transfer from Agriculture to Industry in Ningxia, China.

    Science.gov (United States)

    Sun, Lian; Li, Chunhui; Cai, Yanpeng; Wang, Xuan

    2017-06-14

    In this study, an interval optimization model is developed to maximize the benefits of a water rights transfer system that comprises industry and agriculture sectors in the Ningxia Hui Autonomous Region in China. The model is subjected to a number of constraints including water saving potential from agriculture and ecological groundwater levels. Ecological groundwater levels serve as performance indicators of terrestrial ecology. The interval method is applied to present the uncertainty of parameters in the model. Two scenarios regarding dual industrial development targets (planned and unplanned ones) are used to investigate the difference in potential benefits of water rights transfer. Runoff of the Yellow River as the source of water rights fluctuates significantly in different years. Thus, compensation fees for agriculture are calculated to reflect the influence of differences in the runoff. Results show that there are more available water rights to transfer for industrial development. The benefits are considerable but unbalanced between buyers and sellers. The government should establish a water market that is freer and promote the interest of agriculture and farmers. Though there has been some success of water rights transfer, the ecological impacts and the relationship between sellers and buyers require additional studies.

  15. Random Sampling with Interspike-Intervals of the Exponential Integrate and Fire Neuron: A Computational Interpretation of UP-States.

    Directory of Open Access Journals (Sweden)

    Andreas Steimer

    Full Text Available Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have been conducted that deal with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample, whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, that kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron that is missing such an additional current boost performs consistently worse than the EIF and does not improve when voltage baseline is increased. For the EIF in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP-states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing

  16. Random Sampling with Interspike-Intervals of the Exponential Integrate and Fire Neuron: A Computational Interpretation of UP-States.

    Science.gov (United States)

    Steimer, Andreas; Schindler, Kaspar

    2015-01-01

    Oscillations between high and low values of the membrane potential (UP and DOWN states respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have been conducted that deal with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample, whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, that kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron that is missing such an additional current boost performs consistently worse than the EIF and does not improve when voltage baseline is increased. For the EIF in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP-states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational
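
A minimal simulation of the EIF neuron driven by noisy current, collecting ISIs as the "samples" of the theory; the parameter values are generic textbook choices rather than those used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def eif_isis(i_mean, i_std, t_max=20.0, dt=1e-4):
    """Euler-Maruyama simulation of an exponential integrate-and-fire neuron;
    returns the interspike intervals (seconds). Parameters are illustrative."""
    tau_m, e_l, v_t, delta_t = 0.02, -65e-3, -50e-3, 2e-3     # s, V, V, V
    v_peak, v_reset, r_m = 0.0, -68e-3, 1e7                   # V, V, Ohm
    v, t_last, isis = e_l, 0.0, []
    for k in range(int(t_max / dt)):
        i_t = i_mean + i_std * rng.standard_normal() / np.sqrt(dt)   # white-noise current
        dv = (-(v - e_l) + delta_t * np.exp((v - v_t) / delta_t) + r_m * i_t) / tau_m
        v += dv * dt
        if v >= v_peak:                     # spike: record the ISI and reset
            isis.append(k * dt - t_last)
            t_last, v = k * dt, v_reset
    return np.array(isis)

isis = eif_isis(i_mean=1.45e-9, i_std=0.05e-9)
print(f"{isis.size} spikes, mean ISI = {isis.mean() * 1e3:.1f} ms, "
      f"CV = {isis.std() / isis.mean():.2f}")
```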

  17. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
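
For reference, a common quantitative model for hit/miss inspection data is a logistic curve in log flaw size; the sketch below fits such a curve to simulated outcomes and reads off a point estimate of a90. The data are synthetic and the model choice is illustrative rather than prescribed by the ENIQ report, which also discusses confidence bounds that this sketch omits.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Hypothetical hit/miss data: flaw sizes (mm) and binary detection outcomes.
sizes = rng.uniform(0.5, 10.0, 60)
true_pod = 1.0 / (1.0 + np.exp(-(np.log(sizes) - np.log(3.0)) / 0.25))
hits = (rng.uniform(size=sizes.size) < true_pod).astype(int)

# Classical hit/miss POD model: logistic regression on log flaw size.
model = LogisticRegression().fit(np.log(sizes).reshape(-1, 1), hits)

def pod(a):
    return model.predict_proba(np.log(np.atleast_1d(a)).reshape(-1, 1))[:, 1]

# a90: flaw size detected with 90% probability (point estimate only).
grid = np.linspace(0.5, 10.0, 2000)
idx = int(np.searchsorted(pod(grid), 0.9))
a90 = grid[min(idx, grid.size - 1)]
print(f"estimated a90 ~ {a90:.2f} mm")
```

How tightly such an estimate is pinned down by 20, 40 or 60 test pieces is exactly the sample-size question the record addresses.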

  18. A predictive score for optimal cytoreduction at interval debulking surgery in epithelial ovarian cancer: a two-centers experience.

    Science.gov (United States)

    Ghisoni, Eleonora; Katsaros, Dionyssios; Maggiorotto, Furio; Aglietta, Massimo; Vaira, Marco; De Simone, Michele; Mittica, Gloria; Giannone, Gaia; Robella, Manuela; Genta, Sofia; Lucchino, Fabiola; Marocco, Francesco; Borella, Fulvio; Valabrega, Giorgio; Ponzone, Riccardo

    2018-05-30

    Optimal cytoreduction (macroscopic Residual Tumor, RT = 0) is the best predictor of survival in epithelial ovarian cancer (EOC). There are no consolidated criteria to predict optimal surgical resection at interval debulking surgery (IDS). The aim of this study is to develop a predictive model of complete cytoreduction at IDS. We retrospectively analyzed 93 out of 432 patients with advanced EOC who underwent neoadjuvant chemotherapy (NACT) and IDS from January 2010 to December 2016 in two referral cancer centers. The correlation between clinical-pathological variables and residual disease at IDS was investigated with univariate and multivariate analysis. A predictive score of cytoreduction (PSC) was created by combining all significant variables. The performance of each single variable and of the PSC is reported, and the correlation of all significant variables with progression-free survival (PFS) was assessed. At IDS, 65 patients (69.8%) had complete cytoreduction with no residual disease (R = 0). Three criteria independently predicted R > 0: age ≥ 60 years (p = 0.014), CA-125 before NACT > 550 UI/dl (p = 0.044), and Peritoneal Cancer Index (PCI) > 16. A PCI > 16, a PSC ≥ 3 and the presence of R > 0 after IDS were all significantly associated with shorter PFS. The PSC should be prospectively validated in a larger series of EOC patients undergoing NACT-IDS.

  19. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform self-consistent atmospheric corrections necessary to retrieve caps emissivity from the Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  20. An optimal cut-off point for the calving interval may be used as an indicator of bovine abortions.

    Science.gov (United States)

    Bronner, Anne; Morignat, Eric; Gay, Emilie; Calavas, Didier

    2015-10-01

    The bovine abortion surveillance system in France aims to detect as early as possible any resurgence of bovine brucellosis, a disease of which the country has been declared free since 2005. It relies on the mandatory notification and testing of each aborting cow, but under-reporting is high. This research uses a new and simple approach which considers the calving interval (CI) as a "diagnostic test" to determine optimal cut-off point c and estimate diagnostic performance of the CI to identify aborting cows, and herds with multiple abortions (i.e. three or more aborting cows per calving season). The period between two artificial inseminations (AI) was considered as a "gold standard". During the 2006-2010 calving seasons, the mean optimal CI cut-off point for identifying aborting cows was 691 days for dairy cows and 703 days for beef cows. Depending on the calving season, production type and scale at which c was computed (individual or herd), the average sensitivity of the CI varied from 42.6% to 64.4%; its average specificity from 96.7% to 99.7%; its average positive predictive value from 27.6% to 65.4%; and its average negative predictive value from 98.7% to 99.8%. When applied to the French bovine population as a whole, this indicator identified 2-3% of cows suspected to have aborted, and 10-15% of herds suspected of multiple abortions. The optimal cut-off point and CI performance were consistent over calving seasons. By applying an optimal CI cut-off point to the cattle demographics database, it becomes possible to identify herds with multiple abortions, carry out retrospective investigations to find the cause of these abortions and monitor a posteriori compliance of farmers with their obligation to report abortions for brucellosis surveillance needs. Therefore, the CI could be used as an indicator of abortions to help improve the current mandatory notification surveillance system. Copyright © 2015 Elsevier B.V. All rights reserved.
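
The calving-interval-as-diagnostic-test idea can be sketched numerically. Here the cut-off is chosen by maximizing Youden's J (sensitivity + specificity - 1), which is one common criterion and not necessarily the one used in the study, and the data are simulated rather than drawn from the French cattle demographics database.

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated calving intervals (days): cows that aborted tend to have much
# longer intervals than cows that calved normally. Values are illustrative.
normal = rng.normal(390, 40, 5000)
aborted = rng.normal(760, 90, 250)
ci = np.concatenate([normal, aborted])
truth = np.concatenate([np.zeros(normal.size, bool), np.ones(aborted.size, bool)])

best = None
for cutoff in np.arange(450, 900, 5):
    positive = ci >= cutoff
    sens = np.mean(positive[truth])          # aborting cows correctly flagged
    spec = np.mean(~positive[~truth])        # normal cows correctly cleared
    j = sens + spec - 1.0
    if best is None or j > best[0]:
        best = (j, cutoff, sens, spec)

j, cutoff, sens, spec = best
ppv = np.sum((ci >= cutoff) & truth) / np.sum(ci >= cutoff)
print(f"optimal cut-off ~ {cutoff:.0f} d, sensitivity {sens:.1%}, "
      f"specificity {spec:.1%}, PPV {ppv:.1%}")
```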

  1. Conducting an acute intense interval exercise session during the Ramadan fasting month: what is the optimal time of the day?

    Science.gov (United States)

    Aziz, Abdul Rashid; Chia, Michael Yong Hwa; Low, Chee Yong; Slater, Gary John; Png, Weileen; Teh, Kong Chuan

    2012-10-01

    This study examines the effects of Ramadan fasting on performance during an intense exercise session performed at three different times of the day, i.e., 08:00, 18:00, and 21:00 h. The purpose was to determine the optimal time of the day to perform an acute high-intensity interval exercise during the Ramadan fasting month. After familiarization, nine trained athletes performed six 30-s Wingate anaerobic test (WAnT) cycle bouts followed by a time-to-exhaustion (T(exh)) cycle on six separate randomized and counterbalanced occasions. The three time-of-day nonfasting (control, CON) exercise sessions were performed before the Ramadan month, and the three corresponding time-of-day Ramadan fasting (RAM) exercise sessions were performed during the Ramadan month. Note that the 21:00 h session during Ramadan month was conducted in the nonfasted state after the breaking of the day's fast. Total work (TW) completed during the six WAnT bouts was significantly lower during RAM compared to CON for the 08:00 and 18:00 h sessions (effect size [d] = .55 [small] and .39 [small], respectively), but not for the 21:00 h session (p = .03, d = .18 [trivial]). The T(exh) cycle duration was significantly shorter during RAM than CON in the 18:00 h session. Overall, Ramadan fasting had a small to moderate negative impact on quality of performance during an acute high-intensity exercise session, particularly during the period of the daytime fast. The optimal time to conduct an acute high-intensity exercise session during the Ramadan fasting month is in the evening, after the breaking of the day's fast.

  2. Focusing light through dynamical samples using fast continuous wavefront optimization.

    Science.gov (United States)

    Blochet, B; Bourdieu, L; Gigan, S

    2017-12-01

    We describe a fast continuous optimization wavefront shaping system able to focus light through dynamic scattering media. A micro-electro-mechanical system-based spatial light modulator, a fast photodetector, and field programmable gate array electronics are combined to implement a continuous optimization of a wavefront with a single-mode optimization rate of 4.1 kHz. The system performances are demonstrated by focusing light through colloidal solutions of TiO2 particles in glycerol with tunable temporal stability.
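
The continuous single-mode optimization loop can be mimicked in a simulation where the medium is a fixed random transmission vector and the detected intensity is re-measured for a few trial phases of each SLM mode. The mode count, trial set and static-medium assumption are simplifications of the real, dynamic system described above.

```python
import numpy as np

rng = np.random.default_rng(5)
n_modes = 256

# Random complex transmission coefficients linking SLM modes to the target spot.
t = (rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)) / np.sqrt(2)
phases = np.zeros(n_modes)

def intensity(ph):
    # Detected intensity at the focus for a given phase pattern.
    return np.abs(np.sum(t * np.exp(1j * ph))) ** 2

trial = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
baseline = np.mean([intensity(rng.uniform(0, 2 * np.pi, n_modes)) for _ in range(50)])

# Continuous (cyclic) single-mode optimization: sweep through the modes over
# and over, each time keeping the trial phase that maximizes the intensity.
for sweep in range(3):
    for m in range(n_modes):
        tests = []
        for ph in trial:
            phases[m] = ph
            tests.append(intensity(phases))
        phases[m] = trial[int(np.argmax(tests))]
    print(f"sweep {sweep + 1}: enhancement ~ {intensity(phases) / baseline:.0f}x")
```

In the real system the loop never stops, which is what lets the focus track a medium whose transmission coefficients drift over time.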

  3. Optimal grade control sampling practice in open-pit mining

    DEFF Research Database (Denmark)

    Engström, Karin; Esbensen, Kim Harry

    2017-01-01

    Misclassification of ore grades results in lost revenues, and the need for representative sampling procedures in open pit mining is increasingly important in all mining industries. This study evaluated possible improvements in sampling representativity with the use of Reverse Circulation (RC) drill...... sampling compared to manual Blast Hole (BH) sampling in the Leveäniemi open pit mine, northern Sweden. The variographic experiment results showed that sampling variability was lower for RC than for BH sampling. However, the total costs for RC drill sampling are significantly exceeding current costs...... for manual BH sampling, which needs to be compensated for by other benefits to motivate introduction of RC drilling. The main conclusion is that manual BH sampling can be fit-for-purpose in the studied open pit mine. However, with so many mineral commodities and mining methods in use globally...

  4. Triangular Geometrized Sampling Heuristics for Fast Optimal Motion Planning

    Directory of Open Access Journals (Sweden)

    Ahmed Hussain Qureshi

    2015-02-01

    Full Text Available Rapidly-exploring Random Tree (RRT)-based algorithms have become increasingly popular due to their lower computational complexity as compared with other path planning algorithms. The recently presented RRT* motion planning algorithm improves upon the original RRT algorithm by providing optimal path solutions. While RRT determines an initial collision-free path fairly quickly, RRT* guarantees almost certain convergence to an optimal, obstacle-free path from the start to the goal points for any given geometrical environment. However, the main limitations of RRT* include its slow processing rate and high memory consumption, due to the large number of iterations required for calculating the optimal path. In order to overcome these limitations, we present another improvement, i.e., the Triangular Geometrized-RRT* (TG-RRT*) algorithm, which utilizes triangular geometrical methods to improve the performance of the RRT* algorithm in terms of the processing time and a decreased number of iterations required for an optimal path solution. Simulations comparing the performance results of the improved TG-RRT* with RRT* are presented to demonstrate the overall improvement in performance and optimal path detection.

  5. Optimal experiment design in a filtering context with application to sampled network data

    OpenAIRE

    Singhal, Harsh; Michailidis, George

    2010-01-01

    We examine the problem of optimal design in the context of filtering multiple random walks. Specifically, we define the steady state E-optimal design criterion and show that the underlying optimization problem leads to a second order cone program. The developed methodology is applied to tracking network flow volumes using sampled data, where the design variable corresponds to controlling the sampling rate. The optimal design is numerically compared to a myopic and a naive strategy. Finally, w...

  6. Optimizing 4D cone beam computed tomography acquisition by varying the gantry velocity and projection time interval

    International Nuclear Information System (INIS)

    O’Brien, Ricky T; Cooper, Benjamin J; Keall, Paul J

    2013-01-01

    Four dimensional cone beam computed tomography (4DCBCT) is an emerging clinical image guidance strategy for tumour sites affected by respiratory motion. In current generation 4DCBCT techniques, both the gantry rotation speed and imaging frequency are constant and independent of the patient’s breathing which can lead to projection clustering. We present a mixed integer quadratic programming (MIQP) model for respiratory motion guided-4DCBCT (RMG-4DCBCT) which regulates the gantry velocity and projection time interval, in response to the patient’s respiratory signal, so that a full set of evenly spaced projections can be taken in a number of phase, or displacement, bins during the respiratory cycle. In each respiratory bin, an image can be reconstructed from the projections to give a 4D view of the patient’s anatomy so that the motion of the lungs, and tumour, can be observed during the breathing cycle. A solution to the full MIQP model in a practical amount of time, 10 s, is not possible with the leading commercial MIQP solvers, so a heuristic method is presented. Using parameter settings typically used on current generation 4DCBCT systems (4 min image acquisition, 1200 projections, 10 respiratory bins) and a sinusoidal breathing trace with a 4 s period, we show that the root mean square (RMS) of the angular separation between projections with displacement binning is 2.7° using existing constant gantry speed systems and 0.6° using RMG-4DCBCT. For phase based binning the RMS is 2.7° using constant gantry speed systems and 2.5° using RMG-4DCBCT. The optimization algorithm presented is a critical step on the path to developing a system for RMG-4DCBCT. (paper)
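
    The abstract quantifies projection clustering via the RMS of the angular separation between projections falling in the same respiratory bin. The sketch below reproduces that kind of calculation for a constant-speed gantry, a sinusoidal breathing trace, and the quoted parameters (4 min acquisition, 1200 projections, 10 displacement bins, 4 s period); the exact bin definition and the deviation-from-even-spacing metric are assumptions, so the numbers need not match the paper's 2.7°.

```python
# Sketch: angular spacing of projections within displacement bins for a constant
# gantry speed, using the parameters quoted above (other details are assumptions).
import numpy as np

T_acq, n_proj, n_bins, T_breath = 240.0, 1200, 10, 4.0   # s, -, -, s
t = np.linspace(0.0, T_acq, n_proj, endpoint=False)      # projection times
angles = 360.0 * t / T_acq                                # constant gantry speed, one rotation
displacement = np.sin(2 * np.pi * t / T_breath)          # sinusoidal breathing trace

# displacement binning: equal-width bins over the motion range
edges = np.linspace(-1.0, 1.0, n_bins + 1)
bin_idx = np.clip(np.digitize(displacement, edges) - 1, 0, n_bins - 1)

gaps = []
for b in range(n_bins):
    a = np.sort(angles[bin_idx == b])
    if len(a) > 1:
        # angular gaps between consecutive projections, including the wrap-around gap
        gaps.extend(np.diff(np.concatenate([a, [a[0] + 360.0]])))
ideal = 360.0 / (n_proj / n_bins)                         # evenly spaced projections per bin
rms = np.sqrt(np.mean((np.array(gaps) - ideal) ** 2))
print(f"RMS deviation from even spacing: {rms:.2f} deg")
```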

  7. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This

  8. Reliability-Based and Cost-Oriented Product Optimization Integrating Fuzzy Reasoning Petri Nets, Interval Expert Evaluation and Cultural-Based DMOPSO Using Crowding Distance Sorting

    Directory of Open Access Journals (Sweden)

    Zhaoxi Hong

    2017-08-01

    Full Text Available In reliability-based and cost-oriented product optimization, the target product reliability is apportioned to subsystems or components to achieve the maximum reliability and minimum cost. Main challenges to conducting such optimization design lie in how to simultaneously consider subsystem division, uncertain evaluation provided by experts for essential factors, and dynamic propagation of product failure. To overcome these problems, a reliability-based and cost-oriented product optimization method integrating fuzzy reasoning Petri net (FRPN, interval expert evaluation and cultural-based dynamic multi-objective particle swarm optimization (DMOPSO using crowding distance sorting is proposed in this paper. Subsystem division is performed based on failure decoupling, and then subsystem weights are calculated with FRPN reflecting dynamic and uncertain failure propagation, as well as interval expert evaluation considering six essential factors. A mathematical model of reliability-based and cost-oriented product optimization is established, and the cultural-based DMOPSO with crowding distance sorting is utilized to obtain the optimized design scheme. The efficiency and effectiveness of the proposed method are demonstrated by the numerical example of the optimization design for a computer numerically controlled (CNC machine tool.

  9. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)
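
    The record only states that such equations were derived. For orientation, a standard textbook form of this result, for normally distributed measurement errors, a one-sided test with false alarm probability α, and detection probability 1 − β of a loss of size Δ, is n = ((z_{1−α} + z_{1−β})·σ/Δ)². The sketch below evaluates this generic expression; it is not necessarily the author's exact formulation.

```python
# Standard sample-size expression for detecting a mean shift of size delta with a
# one-sided test (false alarm probability alpha, detection probability 1 - beta),
# assuming normally distributed measurement errors with standard deviation sigma.
import math
from scipy.stats import norm

def min_sample_size(alpha, beta, sigma, delta):
    z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)
    return math.ceil((z * sigma / delta) ** 2)

# e.g. 5% false alarm, 95% detection, sigma = 2 kg per item, loss of 3 kg to detect
print(min_sample_size(alpha=0.05, beta=0.05, sigma=2.0, delta=3.0))  # -> 5
```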

  10. Optimism is universal: exploring the presence and benefits of optimism in a representative sample of the world.

    Science.gov (United States)

    Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D

    2013-10-01

    Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations. © 2012 Wiley Periodicals, Inc.

  11. Determination of Optimal Double Sampling Plan using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sampath Sundaram

    2012-03-01

    Full Text Available Designing a double sampling plan requires identification of sample sizes and acceptance numbers. In this paper a genetic algorithm has been designed for the selection of optimal acceptance numbers and sample sizes for the specified producer’s risk and consumer’s risk. Implementation of the algorithm has been illustrated numerically for different choices of the quantities involved in a double sampling plan.
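
    The core of such a search, whether genetic or brute-force, is evaluating a candidate plan (n1, c1, n2, c2) against the producer's and consumer's risks via its operating characteristic. The sketch below computes the acceptance probability of an attributes double sampling plan and scans a small grid of plans; the AQL/LTPD values, the n2 = 2·n1 convention, and the grid itself are assumptions standing in for the paper's genetic algorithm.

```python
# Acceptance probability of an attributes double sampling plan (n1, c1, n2, c2)
# and a brute-force stand-in for the genetic search described above (assumed
# AQL/LTPD and risk values; the record does not give specific numbers).
from scipy.stats import binom

def p_accept(p, n1, c1, n2, c2):
    # accept on the first sample, or accept on the combined count after a second sample
    pa = binom.cdf(c1, n1, p)
    for d1 in range(c1 + 1, c2 + 1):
        pa += binom.pmf(d1, n1, p) * binom.cdf(c2 - d1, n2, p)
    return pa

AQL, LTPD, alpha, beta = 0.01, 0.05, 0.05, 0.10
best = None
for n1 in range(10, 200, 5):
    for c1 in range(0, 4):
        for c2 in range(c1 + 1, c1 + 5):
            n2 = 2 * n1                                  # common design convention
            if p_accept(AQL, n1, c1, n2, c2) >= 1 - alpha and \
               p_accept(LTPD, n1, c1, n2, c2) <= beta:
                if best is None or n1 + n2 < best[0]:
                    best = (n1 + n2, n1, c1, n2, c2)
print(best)   # (total sample size, n1, c1, n2, c2) of the cheapest feasible plan found
```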

  1. A pseudo-optimal inexact stochastic interval T2 fuzzy sets approach for energy and environmental systems planning under uncertainty: A case study for Xiamen City of China

    International Nuclear Information System (INIS)

    Jin, L.; Huang, G.H.; Fan, Y.R.; Wang, L.; Wu, T.

    2015-01-01

    Highlights: • Propose a new energy PIS-IT2FSLP model for Xiamen City under uncertainties. • Analyze the energy supply, demand, and flow structure of this city. • Use real energy statistics to prove the superiority of the PIS-IT2FSLP method. • Obtain optimal solutions that reflect environmental requirements. • Help local authorities devise an optimal energy strategy for this local area. - Abstract: In this study, a new Pseudo-optimal Inexact Stochastic Interval Type-2 Fuzzy Sets Linear Programming (PIS-IT2FSLP) energy model is developed to support energy system planning and environmental requirements under uncertainties for Xiamen City. The PIS-IT2FSLP model is based on an integration of interval Type 2 (T2) Fuzzy Sets (FS) boundary programming and stochastic linear programming techniques, which gives it robust abilities to tackle uncertainties expressed as T2 FS intervals and probabilistic distributions within a general optimization framework. This new model can facilitate detailed system analysis of energy supply, energy conversion processes, and environmental requirements, as well as provide capacity expansion options over multiple periods. The PIS-IT2FSLP model was applied to a real case study of Xiamen energy systems. Based on a robust two-step solution algorithm, reasonable solutions have been obtained, which reflect tradeoffs between economic and environmental requirements, and among the seasonally volatile energy demands represented in the right-hand-side constraints of the Xiamen energy system. Thus, the lower and upper solutions of PIS-IT2FSLP would then help local energy authorities adjust current energy patterns, and discover an optimal energy strategy for the development of Xiamen City.
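
    As a much-simplified illustration of how interval-valued coefficients lead to interval-valued (lower and upper) solutions, the sketch below solves a tiny cost-minimizing supply LP at the favourable and unfavourable ends of its interval data. This shows only the basic interval-programming idea; it does not reproduce the PIS-IT2FSLP model, its type-2 fuzzy sets, stochastic terms, or its actual two-step algorithm, and all figures are made up.

```python
# Much-simplified illustration of interval programming: a tiny cost-minimizing
# energy-supply LP solved at the favourable and unfavourable ends of its interval
# coefficients, bracketing the optimal cost. A sketch of the idea only, not the
# PIS-IT2FSLP model or its two-step solution algorithm.
from scipy.optimize import linprog

# supply two energy carriers x1, x2; interval unit costs and an interval demand
cost_lo, cost_hi = [2.0, 3.0], [2.6, 3.8]      # $/GJ
demand_lo, demand_hi = 90.0, 110.0             # GJ
cap = [70.0, 80.0]                             # carrier capacities, GJ

def solve(costs, demand):
    # minimize costs @ x  s.t.  x1 + x2 >= demand, 0 <= x_i <= cap_i
    res = linprog(c=costs, A_ub=[[-1.0, -1.0]], b_ub=[-demand],
                  bounds=[(0, cap[0]), (0, cap[1])], method="highs")
    return res.fun

best_case = solve(cost_lo, demand_lo)           # cheapest coefficients, lowest demand
worst_case = solve(cost_hi, demand_hi)          # dearest coefficients, highest demand
print(f"optimal cost interval: [{best_case:.1f}, {worst_case:.1f}] $")
```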

  2. Optimal land use management for soil erosion control by using an interval-parameter fuzzy two-stage stochastic programming approach.

    Science.gov (United States)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10^9 $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to decrease and to be controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as it not only allows local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.

  3. A new optimization tool path planning for 3-axis end milling of free-form surfaces based on efficient machining intervals

    Science.gov (United States)

    Vu, Duy-Duc; Monies, Frédéric; Rubio, Walter

    2018-05-01

    A large number of studies, based on 3-axis end milling of free-form surfaces, seek to optimize tool path planning. Approaches try to optimize the machining time by reducing the total tool path length while respecting the criterion of the maximum scallop height. Theoretically, the tool path trajectories that remove the most material follow the directions in which the machined width is the largest. The free-form surface is often considered as a single machining area. Therefore, the optimization on the entire surface is limited. Indeed, it is difficult to define tool trajectories with optimal feed directions that generate the largest machined widths. Another limiting point of previous approaches for effectively reducing machining time is the inadequate choice of the tool. Researchers generally use a spherical tool on the entire surface. However, the gains proposed by these different methods developed with these tools lead to relatively small time savings. Therefore, this study proposes a new method, using toroidal milling tools, for generating toolpaths in different regions on the machining surface. The surface is divided into several regions based on machining intervals. These intervals ensure that the effective radius of the tool, at each cutter-contact point on the surface, is always greater than the radius of the tool in an optimized feed direction. A parallel plane strategy is then used on the sub-surfaces with an optimal specific feed direction for each sub-surface. This method allows one to mill the entire surface with greater efficiency than with a spherical tool. The proposed method is calculated and modeled using Maple software to find optimal regions and feed directions in each region. This new method is tested on a free-form surface. A comparison is made with a spherical cutter to show the significant gains obtained with a toroidal milling cutter. Comparisons with CAM software and experimental validations are also done. The results show the

  4. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function. Repeating this procedure yields increasingly accurate metamodels. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206

  5. Optimal preparation-to-colonoscopy interval in split-dose PEG bowel preparation determines satisfactory bowel preparation quality: an observational prospective study.

    Science.gov (United States)

    Seo, Eun Hee; Kim, Tae Oh; Park, Min Jae; Joo, Hee Rin; Heo, Nae Yun; Park, Jongha; Park, Seung Ha; Yang, Sung Yeon; Moon, Young Soo

    2012-03-01

    Several factors influence bowel preparation quality. Recent studies have indicated that the time interval between bowel preparation and the start of colonoscopy is also important in determining bowel preparation quality. To evaluate the influence of the preparation-to-colonoscopy (PC) interval (the interval of time between the last polyethylene glycol dose ingestion and the start of the colonoscopy) on bowel preparation quality in the split-dose method for colonoscopy. Prospective observational study. University medical center. A total of 366 consecutive outpatients undergoing colonoscopy. Split-dose bowel preparation and colonoscopy. The quality of bowel preparation was assessed by using the Ottawa Bowel Preparation Scale according to the PC interval, and other factors that might influence bowel preparation quality were analyzed. Colonoscopies with a PC interval of 3 to 5 hours had the best bowel preparation quality score in the whole, right, mid, and rectosigmoid colon according to the Ottawa Bowel Preparation Scale. In multivariate analysis, the PC interval (odds ratio [OR] 1.85; 95% CI, 1.18-2.86), the amount of PEG ingested (OR 4.34; 95% CI, 1.08-16.66), and compliance with diet instructions (OR 2.22; 95% CI, 1.33-3.70) were significant contributors to satisfactory bowel preparation. Nonrandomized controlled, single-center trial. The optimal time interval between the last dose of the agent and the start of colonoscopy is one of the important factors to determine satisfactory bowel preparation quality in split-dose polyethylene glycol bowel preparation. Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.

  6. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
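
    The two initial designs compared in the record differ only in where the point is placed inside each Latin hypercube stratum: at a random position or at the midpoint. The sketch below generates both and scores them with a simple maximin (minimum pairwise distance) criterion; the criterion and sizes are illustrative assumptions, and no OLHS optimizer is actually run.

```python
# Random-point vs midpoint Latin hypercube initial designs, compared with a simple
# maximin (minimum pairwise distance) space-filling score. Illustrative only; the
# actual OLHS criteria and optimization schemes are not reproduced here.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

def lhs(n, d, midpoint=False):
    u = 0.5 * np.ones((n, d)) if midpoint else rng.random((n, d))
    perms = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (perms + u) / n          # one point per row, one stratum per dimension

def maximin(x):
    return pdist(x).min()           # larger is better

n, d = 20, 3
print("random   LHS maximin:", maximin(lhs(n, d, midpoint=False)))
print("midpoint LHS maximin:", maximin(lhs(n, d, midpoint=True)))
```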

  7. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization algorithm (PSO) is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate the efficiency and correctness. According to the comparison results, the proposed method is more efficient because it requires only a small number of sample points.

  8. SamplingStrata: An R Package for the Optimization of Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Giulio Barcaroli

    2014-11-01

    Full Text Available When designing a sampling survey, usually constraints are set on the desired precision levels regarding one or more target estimates (the Ys). If a sampling frame is available, containing auxiliary information related to each unit (the Xs), it is possible to adopt a stratified sample design. For any given stratification of the frame, in the multivariate case it is possible to solve the problem of the best allocation of units in strata, by minimizing a cost function subject to precision constraints (or, conversely, by maximizing the precision of the estimates under a given budget). The problem is to determine the best stratification in the frame, i.e., the one that ensures the overall minimal cost of the sample necessary to satisfy precision constraints. The Xs can be categorical or continuous; continuous ones can be transformed into categorical ones. The most detailed stratification is given by the Cartesian product of the Xs (the atomic strata). A way to determine the best stratification is to explore exhaustively the set of all possible partitions derivable from the set of atomic strata, evaluating each one by calculating the corresponding cost in terms of the sample required to satisfy precision constraints. This is unaffordable in practical situations, where the dimension of the space of the partitions can be very high. Another possible way is to explore the space of partitions with an algorithm that is particularly suitable in such situations: the genetic algorithm. The R package SamplingStrata, based on the use of a genetic algorithm, allows one to determine the best stratification for a population frame, i.e., the one that ensures the minimum sample cost necessary to satisfy precision constraints, in a multivariate and multi-domain case.
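
    The quantity such a genetic search must evaluate for every candidate stratification is the sample size (cost) needed to meet the precision constraints. The sketch below computes the classical single-variable version of this under Neyman allocation with a coefficient-of-variation constraint; it is a simplification of the multivariate Bethel-type allocation used by SamplingStrata, and the stratum figures are made up.

```python
# Required sample size for one target variable under Neyman allocation, given a
# coefficient-of-variation constraint: a single-variable simplification of the
# multivariate allocation that SamplingStrata evaluates for each stratification.
import numpy as np

def required_n(N_h, S_h, ybar, cv_target):
    N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
    N = N_h.sum()
    W_h = N_h / N
    V = (cv_target * ybar) ** 2                      # target variance of the estimated mean
    n = (W_h @ S_h) ** 2 / (V + (W_h @ S_h**2) / N)  # Neyman-allocation sample size
    n_h = n * (N_h * S_h) / (N_h * S_h).sum()        # allocation to strata
    return n, n_h

n, n_h = required_n(N_h=[5000, 3000, 2000], S_h=[12.0, 30.0, 55.0],
                    ybar=100.0, cv_target=0.02)
print(round(n), np.round(n_h).astype(int))
```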

  9. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, M; Li, R; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Ye, Y [Stanford Univ, Management Science and Engineering, Stanford, CA (United States); Boyd, S [Stanford University, Electrical Engineering, Stanford, CA (United States)

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm then continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves

  10. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    International Nuclear Information System (INIS)

    Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S

    2014-01-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet there does not exist any optimization algorithm to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even nonisocentric beams) and aperture shapes. To solve the resulting large scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques named column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm then continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves

  11. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  12. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  13. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30. Applying methods that do not assume normality (or a robust method with Box-Cox transformation) to all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RI. © 2016 American Society for Veterinary Clinical Pathology.
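
    A miniature version of the simulation described above can be run in a few lines: draw repeated samples of n = 30 from a Gaussian and a lognormal parent and record how often a Shapiro-Wilk test at α = 0.05 classifies them as Gaussian. The distribution parameters, seed, and number of repetitions are assumptions; the original study used several tests and larger simulated populations.

```python
# Miniature version of the simulation described above: how often does a
# Shapiro-Wilk test (alpha = 0.05) identify n = 30 samples as Gaussian when the
# parent is Gaussian vs lognormal? (100 repetitions; parameters are assumed.)
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(1)
n, reps, alpha = 30, 100, 0.05

gauss_pass = sum(shapiro(rng.normal(50, 10, n)).pvalue > alpha for _ in range(reps))
lognorm_pass = sum(shapiro(rng.lognormal(3.0, 0.8, n)).pvalue > alpha for _ in range(reps))

print(f"sensitivity (Gaussian samples kept as Gaussian): {gauss_pass / reps:.2f}")
print(f"specificity (lognormal samples rejected):        {1 - lognorm_pass / reps:.2f}")
```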

  14. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for physical and chemical property sets and investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimation variance at these locations. The addition of optimal samples, for specific regions, increased the accuracy up to 2 % for chemical and 1 % for physical properties. The use of a sample grid and medium-scale variogram, as prior information for the design of additional sampling schemes, was very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.

  15. Long-term lifestyle intervention with optimized high-intensity interval training improves body composition, cardiometabolic risk, and exercise parameters in patients with abdominal obesity.

    Science.gov (United States)

    Gremeaux, Vincent; Drigny, Joffrey; Nigam, Anil; Juneau, Martin; Guilbeault, Valérie; Latour, Elise; Gayda, Mathieu

    2012-11-01

    The aim of this study was to examine the impact of a combined long-term lifestyle and high-intensity interval training intervention on body composition, cardiometabolic risk, and exercise tolerance in overweight and obese subjects. Sixty-two overweight and obese subjects (53.3 ± 9.7 yrs; mean body mass index, 35.8 ± 5 kg/m²) were retrospectively identified at their entry into a 9-mo program consisting of individualized nutritional counselling, optimized high-intensity interval exercise, and resistance training two to three times a week. Anthropometric measurements, cardiometabolic risk factors, and exercise tolerance were measured at baseline and program completion. Adherence rate was 97%, and no adverse events occurred with high-intensity interval exercise training. Exercise training was associated with a weekly energy expenditure of 1582 ± 284 kcal. Clinically and statistically significant improvements were observed for body mass (-5.3 ± 5.2 kg), body mass index (-1.9 ± 1.9 kg/m²), waist circumference (-5.8 ± 5.4 cm), and maximal exercise capacity (+1.26 ± 0.84 metabolic equivalents). The triglyceride/high-density lipoprotein ratio was also significantly improved. Baseline predictors of body mass and waist circumference loss were baseline body mass index and resting metabolic rate; those for body mass index decrease were baseline waist circumference and the triglyceride/high-density lipoprotein cholesterol ratio. A long-term lifestyle intervention with optimized high-intensity interval exercise improves body composition, cardiometabolic risk, and exercise tolerance in obese subjects. This intervention seems safe, efficient, and well tolerated and could improve adherence to exercise training in this population.

  16. Characterisation of the optimal hydric interval for a Yellow Argisol cultivated with sugarcane on the coastal plains of Alagoas, Brazil

    Directory of Open Access Journals (Sweden)

    Ismar Lima de Farias

    Full Text Available The objective of this work was to study the optimum water range (OWR) of a Yellow Argisol of the coastal plains, planted with sugarcane, when subjected to different levels of compaction. For the laboratory tests, soil samples with a non-preserved structure were used, taken from depths of 0.20 m to 0.40 m and 0.40 m to 0.60 m, representing the AB and Bt horizons respectively. The treatments consisted of different soil densities represented by specimens contained in volumetric rings. The critical densities of the AB and Bt horizons for samples of upturned soil were 1.84 and 1.63 Mg m-3 respectively. In undisturbed soil a critical density of 1.63 and 1.64 Mg m-3 was observed for the same horizons. However, the soil density at which root development begins to be restricted was 1.61 Mg m-3 for samples of upturned soil, and 1.50 Mg m-3 for samples of undisturbed soil at a depth of 0.20 to 0.40 m. From 0.40 to 0.60 m the critical density was 1.45 and 1.18 Mg m-3 for samples of upturned and undisturbed soil respectively. It can be concluded that upturning the soil increased the OWR of the AB and Bt horizons of the Yellow Argisol, compared to the undisturbed soil cultivated with sugarcane. Mobilization (loosening) of the subsurface of the studied Argisol increases the OWR at higher densities, due to the increase in the critical density of the AB and Bt horizons, improving their hydro-mechanical behavior.

  17. A spreadsheet template compatible with Microsoft Excel and iWork Numbers that returns the simultaneous confidence intervals for all pairwise differences between multiple sample means.

    Science.gov (United States)

    Brown, Angus M

    2010-04-01

    The objective of the method described in this paper is to develop a spreadsheet template for the purpose of comparing multiple sample means. An initial analysis of variance (ANOVA) test on the data returns the test statistic F. If F is larger than the critical F value drawn from the F distribution at the appropriate degrees of freedom, convention dictates rejection of the null hypothesis and allows subsequent multiple comparison testing to determine where the inequalities between the sample means lie. A variety of multiple comparison methods are described that return the 95% confidence intervals for differences between means using an inclusive pairwise comparison of the sample means. 2009 Elsevier Ireland Ltd. All rights reserved.
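
    The same workflow can be expressed outside a spreadsheet; the sketch below runs a one-way ANOVA and then builds Tukey HSD simultaneous 95% confidence intervals for all pairwise mean differences, assuming balanced groups (Tukey's method is only one of the multiple-comparison procedures such a template may cover, and the data here are made up).

```python
# One-way ANOVA followed by Tukey HSD simultaneous 95% confidence intervals for
# all pairwise differences between sample means (equal group sizes assumed here).
import itertools
import numpy as np
from scipy.stats import f_oneway, studentized_range

groups = [np.array([24., 27., 25., 28., 26.]),
          np.array([30., 31., 29., 33., 32.]),
          np.array([26., 25., 27., 24., 28.])]

F, p = f_oneway(*groups)
k, n = len(groups), len(groups[0])
dof_err = k * (n - 1)
mse = np.mean([g.var(ddof=1) for g in groups])          # pooled MSE for balanced groups
q = studentized_range.ppf(0.95, k, dof_err)             # critical value of the studentized range
half_width = q * np.sqrt(mse / n)                       # Tukey HSD half-width

print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")
for (i, gi), (j, gj) in itertools.combinations(enumerate(groups, start=1), 2):
    diff = gi.mean() - gj.mean()
    print(f"mean{i} - mean{j}: {diff:+.2f}  95% CI "
          f"[{diff - half_width:.2f}, {diff + half_width:.2f}]")
```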

  18. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Full Text Available Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which is also dependent on the robot positioning in the Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process modeled on its components: product, process and resource; and by automatically configuring a sampling-based motion planning problem and the transition-based rapidly-exploring random tree algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, revealing its functionality and outlining future potential for optimal motion generation for robotic machining processes.

  19. Multiobjective optimization of the inspection intervals of a nuclear safety system: A clustering-based framework for reducing the Pareto Front

    International Nuclear Information System (INIS)

    Zio, E.; Bazzo, R.

    2010-01-01

    In this paper, a framework is developed for identifying a limited number of representative solutions of a multiobjective optimization problem concerning the inspection intervals of the components of a safety system of a nuclear power plant. Pareto Front solutions are first clustered into 'families', which are then synthetically represented by a 'head of the family' solution. Three clustering methods are analyzed. Level Diagrams are then used to represent, analyse and interpret the Pareto Fronts reduced to their head-of-the-family solutions. Two decision situations are considered: without or with decision maker preferences, the latter implying the introduction of a scoring system to rank the solutions with respect to the different objectives; a fuzzy preference assignment is then employed for this purpose. The results of the application of the framework of analysis to the problem of optimizing the inspection intervals of a nuclear power plant safety system show that the clustering-based reduction maintains the Pareto Front shape and relevant characteristics, while making it easier for the decision maker to select the final solution.

  20. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  1. OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Maja Vrkljan

    2004-12-01

    Full Text Available The purpose of this study was to determine an optimal dissolution method for silicate rock samples for further analytical purposes. An analytical FAAS method for determining cobalt, chromium, copper, nickel, lead and zinc content in a gabbro sample and the geochemical standard AGV-1 was applied for verification. Dissolution in mixtures of various inorganic acids has been tested, as well as the Na2CO3 fusion technique. The results obtained by the different methods have been compared, and dissolution in the mixture of HNO3 + HF has been recommended as optimal.

  2. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  3. Optimization of Sample Preparation processes of Bone Material for Raman Spectroscopy.

    Science.gov (United States)

    Chikhani, Madelen; Wuhrer, Richard; Green, Hayley

    2018-03-30

    Raman spectroscopy has recently been investigated for use in the calculation of postmortem interval from skeletal material. The fluorescence generated by samples, which affects the interpretation of Raman data, is a major limitation. This study compares the effectiveness of two sample preparation techniques, chemical bleaching and scraping, in the reduction of fluorescence from bone samples during testing with Raman spectroscopy. Visual assessment of Raman spectra obtained at 1064 nm excitation following the preparation protocols indicates an overall reduction in fluorescence. Results demonstrate that scraping is more effective at resolving fluorescence than chemical bleaching. The scraping of skeletonized remains prior to Raman analysis is a less destructive method and allows for the preservation of a bone sample in a state closest to its original form, which is beneficial in forensic investigations. It is recommended that bone scraping supersedes chemical bleaching as the preferred method for sample preparation prior to Raman spectroscopy. © 2018 American Academy of Forensic Sciences.

  4. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    Science.gov (United States)

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or while even increasing, the accuracy for the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first criterion (minimization of mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the grid ECa data as weighting function; and the third criterion (mean of average ordinary kriging variance, MAOKV) minimizes mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented by the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time. The
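
    A toy version of the first criterion (MMSD) illustrates how spatial simulated annealing works: perturb one sampling point at a time and accept worse configurations with a temperature-dependent probability. The field, evaluation grid, cooling schedule, and move size below are assumptions, and neither the ECa weighting (MWMSD) nor the kriging-variance criterion (MAOKV) is reproduced.

```python
# Toy spatial simulated annealing for the MMSD criterion: spread n sampling points
# over a unit square by minimizing the mean distance from a fine set of evaluation
# points to their nearest sampling point. Sampling constraints, EMI weighting and
# kriging variance from the record above are not reproduced here.
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_eval, n_iter = 15, 2000, 4000
eval_pts = rng.random((n_eval, 2))                    # stand-in for points covering the field

def mmsd(design):
    d = np.linalg.norm(eval_pts[:, None, :] - design[None, :, :], axis=2)
    return d.min(axis=1).mean()                       # mean of shortest distances

design = rng.random((n_samples, 2))
score = mmsd(design)
T = 0.05
for it in range(n_iter):
    cand = design.copy()
    i = rng.integers(n_samples)
    cand[i] = np.clip(cand[i] + rng.normal(0, 0.1, 2), 0, 1)   # perturb one point
    s = mmsd(cand)
    if s < score or rng.random() < np.exp((score - s) / T):    # Metropolis acceptance
        design, score = cand, s
    T *= 0.999                                                  # geometric cooling
print(f"final MMSD: {score:.4f}")
```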

  5. Reference Intervals for Urinary Cotinine Levels and the Influence of Sampling Time and Other Predictors on Its Excretion Among Italian Schoolchildren

    Directory of Open Access Journals (Sweden)

    Carmela Protano

    2018-04-01

    Full Text Available (1) Background: Environmental Tobacco Smoke (ETS) exposure remains a public health problem worldwide. The aims are to establish urinary (u-) cotinine reference values for healthy Italian children and to evaluate the role of the sampling time and of other factors on children’s u-cotinine excretion. (2) Methods: A cross-sectional study was performed on 330 children. Information on participants was gathered by a questionnaire, and u-cotinine was determined in two samples for each child, collected during the evening and the next morning. (3) Results: Reference intervals (as the 2.5th and 97.5th percentiles of the distribution) in evening and morning samples were respectively equal to 0.98–4.29 and 0.91–4.50 µg L−1 (ETS unexposed) and 1.39–16.34 and 1.49–20.95 µg L−1 (ETS exposed). No statistically significant differences were found between median values in evening and morning samples, in either the ETS unexposed or the exposed group. Significant predictors of u-cotinine excretion were ponderal status according to body mass index of children (β = 0.202; p-value = 0.041 for evening samples; β = 0.169; p-value = 0.039 for morning samples) and paternal educational level (β = −0.258; p-value = 0.010 for evening samples; β = −0.013; p-value = 0.003 for morning samples). (4) Conclusions: The results evidenced the need for further studies assessing the role of confounding factors on ETS exposure, and the necessity of educational interventions on smokers to raise their awareness about ETS.

  6. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    International Nuclear Information System (INIS)

    Oliveira, Karina B. de; Oliveira, Bras H. de

    2013-01-01

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated, and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 deg C for 20 min. The samples were then injected in a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with flow rate of 1.0 mL min−1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher when compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  7. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated, and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 deg C for 20 min. The samples were then injected in a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with flow rate of 1.0 mL min-1 and detection at 330 nm. Under these conditions, RA concentrations were 50% higher when compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  8. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of accuracy compensation implementation are closely related to the choice of sampling points. Therefore, based on the error similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  9. Optimizing headspace sampling temperature and time for analysis of volatile oxidation products in fish oil

    DEFF Research Database (Denmark)

    Rørbæk, Karen; Jensen, Benny

    1997-01-01

    Headspace-gas chromatography (HS-GC), based on adsorption to Tenax GR(R), thermal desorption and GC, has been used for analysis of volatiles in fish oil. To optimize sampling conditions, the effect of heating the fish oil at various temperatures and times was evaluated from anisidine values (AV

  10. Isolation and identification of phytase-producing strains from soil samples and optimization of production parameters

    Directory of Open Access Journals (Sweden)

    Masoud Mohammadi

    2017-09-01

    Discussion and conclusion: Penicillium sp., isolated from a soil sample near Qazvin, was able to produce highly active phytase under optimized environmental conditions, and could be a suitable candidate for commercial production of phytase to be used as a supplement in the poultry feed industry.

  11. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, were not studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys and conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of the optimization. However, earlier studies have drawbacks because their optimization loops involve three phases and rely on empirical parameters. We propose a united sampling criterion to simplify the algorithm and to achieve the global optimum of problems with constraints without any empirical parameters. It is able to select the points located in a feasible region with high model uncertainty as well as the points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is dominant, the infill sampling criterion or the boundary sampling criterion. Also, the method guarantees the accuracy of the surrogate model because the sample points are not confined to extremely small regions, as they are in super-EGO. The performance of the proposed method, including the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
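    The record describes a united infill/boundary criterion scored against surrogate predictions. As a generic, hedged illustration of how such infill criteria are computed (not the authors' united criterion), the sketch below evaluates the standard expected-improvement measure from a surrogate's predicted mean and standard deviation at candidate points; all numbers are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Standard EI infill criterion: large where the surrogate predicts values below the
    incumbent f_best and/or where the prediction uncertainty sigma is large."""
    sigma = np.maximum(sigma, 1e-12)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Candidate points with hypothetical surrogate predictions
mu = np.array([1.2, 0.8, 1.5])
sigma = np.array([0.1, 0.4, 0.6])
print(expected_improvement(mu, sigma, f_best=1.0))
```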

  13. Tracking a changing environment: optimal sampling, adaptive memory and overnight effects.

    Science.gov (United States)

    Dunlap, Aimee S; Stephens, David W

    2012-02-01

    Foraging in a variable environment presents a classic problem of decision making with incomplete information. Animals must track the changing environment, remember the best options and make choices accordingly. While several experimental studies have explored the idea that sampling behavior reflects the amount of environmental change, we take the next logical step in asking how change influences memory. We explore the hypothesis that memory length should be tied to the ecological relevance and the value of the information learned, and that environmental change is a key determinant of the value of memory. We use a dynamic programming model to confirm our predictions and then test memory length in a factorial experiment. In our experimental situation we manipulate rates of change in a simple foraging task for blue jays over a 36 h period. After jays experienced an experimentally determined change regime, we tested them at a range of retention intervals, from 1 to 72 h. Manipulated rates of change influenced learning and sampling rates: subjects sampled more and learned more quickly in the high change condition. Tests of retention revealed significant interactions between retention interval and the experienced rate of change. We observed a striking and surprising difference between the high and low change treatments at the 24h retention interval. In agreement with earlier work we find that a circadian retention interval is special, but we find that the extent of this 'specialness' depends on the subject's prior experience of environmental change. Specifically, experienced rates of change seem to influence how subjects balance recent information against past experience in a way that interacts with the passage of time. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Time optimization of 90Sr measurements: Sequential measurement of multiple samples during ingrowth of 90Y

    International Nuclear Information System (INIS)

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-01-01

    The aim of this paper is to contribute to a more rapid determination of a series of samples containing 90Sr by making the Cherenkov measurement of the daughter nuclide 90Y more time efficient. There are many instances when an optimization of the measurement method might be favorable, such as: situations requiring rapid results in order to make urgent decisions or, on the other hand, maximizing the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work is focused on the measurement of 90Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time will be less, compared to using the same measurement time for all samples. It was found that by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, when assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm. - Highlights: • An approach roughly a factor of three more efficient than an un-optimized method. • The optimization gives a more efficient use of instrument time. • The efficiency increase ranges from a factor of three to 10, for 10 to 40 samples.
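    The optimization above rests on the link between counting time, background rate and a fixed MDA. The sketch below is not the authors' sequential-ingrowth model; it only solves a single-sample Currie-type relation for the counting time that reaches a target MDA, with an assumed detection efficiency and ignoring sample volume and 90Y decay during counting.

```python
import numpy as np
from scipy.optimize import brentq

def mda_bq(t, bkg_cps, eff):
    """Currie-type minimum detectable activity (Bq) for counting time t (s),
    background rate bkg_cps (counts/s) and an assumed detection efficiency eff."""
    return (2.71 + 4.65 * np.sqrt(bkg_cps * t)) / (eff * t)

def counting_time_for_mda(target_bq, bkg_cps, eff, t_max=1e7):
    """Shortest counting time reaching target_bq; mda_bq() decreases monotonically in t."""
    return brentq(lambda t: mda_bq(t, bkg_cps, eff) - target_bq, 1e-3, t_max)

# Background of ~0.8 cpm as in the abstract, hypothetical 40% Cherenkov efficiency
print(counting_time_for_mda(target_bq=0.05, bkg_cps=0.8 / 60.0, eff=0.4), "s")
```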

  15. Optimal sampling plan for clean development mechanism energy efficiency lighting projects

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2013-01-01

    Highlights: • A metering cost minimisation model is built to assist the sampling plan for CDM projects. • The model minimises the total metering cost by the determination of optimal sample size. • The required 90/10 criterion sampling accuracy is maintained. • The proposed metering cost minimisation model is applicable to other CDM projects as well. - Abstract: Clean development mechanism (CDM) project developers are always interested in achieving required measurement accuracies with the least metering cost. In this paper, a metering cost minimisation model is proposed for the sampling plan of a specific CDM energy efficiency lighting project. The problem arises from the particular CDM sampling requirement of 90% confidence and 10% precision for the small-scale CDM energy efficiency projects, which is known as the 90/10 criterion. The 90/10 criterion can be met through solving the metering cost minimisation problem. All the lights in the project are classified into different groups according to uncertainties of the lighting energy consumption, which are characterised by their statistical coefficient of variation (CV). Samples from each group are randomly selected to install power meters. These meters include less expensive ones with less functionality and more expensive ones with greater functionality. The metering cost minimisation model will minimise the total metering cost through the determination of the optimal sample size in each group. The 90/10 criterion is formulated as constraints to the metering cost objective. The optimal solution to the minimisation problem will therefore minimise the metering cost whilst meeting the 90/10 criterion, and this is verified by a case study. Relationships between the optimal metering cost and the population sizes of the groups, CV values and the meter equipment cost are further explored in three simulations. The metering cost minimisation model proposed for lighting systems is applicable to other CDM projects as
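    In its simplest form, the 90/10 criterion referenced above translates into a minimum sample size per lamp group. The sketch below uses the standard sample-size formula with a finite-population correction (z = 1.645 for 90% confidence, 10% relative precision); it illustrates the constraint only and is not the authors' full metering-cost minimisation model. Group sizes and CVs are hypothetical.

```python
import math

def n_90_10(cv, population, z=1.645, precision=0.10):
    """Minimum sample size meeting 90% confidence / 10% precision for a group whose
    monitored parameter has coefficient of variation cv, with finite-population correction."""
    n0 = (z * cv / precision) ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Hypothetical lamp groups: (population, CV of lamp energy consumption)
for N, cv in [(10000, 0.5), (2000, 0.3), (500, 0.8)]:
    print(f"group of {N} lamps, CV={cv}: sample at least {n_90_10(cv, N)}")
```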

  16. Daily Average Wind Power Interval Forecasts Based on an Optimal Adaptive-Network-Based Fuzzy Inference System and Singular Spectrum Analysis

    Directory of Open Access Journals (Sweden)

    Zhongrong Zhang

    2016-01-01

    Full Text Available Wind energy has increasingly played a vital role in mitigating conventional resource shortages. Nevertheless, the stochastic nature of wind poses a great challenge when attempting to find an accurate forecasting model for wind power. Therefore, precise wind power forecasts are of primary importance to solve operational, planning and economic problems in the growing wind power scenario. Previous research has focused efforts on the deterministic forecast of wind power values, but less attention has been paid to providing interval (uncertainty) information about wind power. Based on an optimal Adaptive-Network-Based Fuzzy Inference System (ANFIS) and Singular Spectrum Analysis (SSA), this paper develops a hybrid uncertainty forecasting model, IFASF (Interval Forecast-ANFIS-SSA-Firefly Algorithm), to obtain the upper and lower bounds of daily average wind power, which is beneficial for the practical operation of both the grid company and independent power producers. To strengthen the practical ability of this developed model, this paper presents a comparison between IFASF and other benchmarks, which provides a general reference on statistical and artificial-intelligence-based interval forecast methods. The comparison results show that the developed model outperforms eight benchmarks and has a satisfactory forecasting effectiveness in three different wind farms with two time horizons.

  17. Optimized IMAC-IMAC protocol for phosphopeptide recovery from complex biological samples

    DEFF Research Database (Denmark)

    Ye, Juanying; Zhang, Xumin; Young, Clifford

    2010-01-01

    using Fe(III)-NTA IMAC resin and it proved to be highly selective in the phosphopeptide enrichment of a highly diluted standard sample (1:1000) prior to MALDI MS analysis. We also observed that a higher iron purity led to an increased IMAC enrichment efficiency. The optimized method was then adapted...... to phosphoproteome analyses of cell lysates of high protein complexity. From either 20 microg of mouse sample or 50 microg of Drosophila melanogaster sample, more than 1000 phosphorylation sites were identified in each study using IMAC-IMAC and LC-MS/MS. We demonstrate efficient separation of multiply phosphorylated...... characterization of phosphoproteins in functional phosphoproteomics research projects....

  18. A Sensitivity Study of Human Errors in Optimizing Surveillance Test Interval (STI) and Allowed Outage Time (AOT) of Standby Safety System

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Shin, Won Ky; You, Young Woo; Yang, Hui Chang

    1998-01-01

    In most cases, the surveillance test intervals (STIs), allowed outage times (AOTs) and testing strategies of safety components in nuclear power plants are prescribed in plant technical specifications. In general, it is required that standby safety systems be redundant (i.e., composed of multiple components) and that these components be tested by either a staggered or a sequential test strategy. In this study, a linear model is presented to incorporate the effects of human errors associated with testing into the evaluation of unavailability. The average unavailabilities of 1/4 and 2/4 redundant systems are computed considering human error and testing strategy. The adverse effects of testing on system unavailability, such as component wear and test-induced transients, have been modelled. The final outcome of this study is the optimized human error domain, obtained from a 3-D human error sensitivity analysis by selecting finely classified segments. The results of the sensitivity analysis show that the STI and AOT can be optimized provided the human error probability is maintained within an allowable range. (authors)
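    As a rough, hedged sketch of how the test interval, test duration and a test-related human error term enter an average-unavailability expression of this general kind (not the authors' specific linear model), consider the per-component approximation below; the failure rate and probabilities are hypothetical.

```python
def mean_unavailability(lam_per_h, sti_h, test_dur_h, p_human_error):
    """Rough average unavailability of a periodically tested standby component:
    undetected random failures between tests (lam*T/2), downtime during the test
    itself, and a human-error term assumed to persist until the next test."""
    return lam_per_h * sti_h / 2.0 + test_dur_h / sti_h + p_human_error

# Scan candidate surveillance test intervals (hours)
for sti in (168, 336, 720, 2190):
    print(sti, round(mean_unavailability(1e-5, sti, 2.0, 5e-4), 5))
```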

  19. Vis-NIR spectrometric determination of Brix and sucrose in sugar production samples using kernel partial least squares with interval selection based on the successive projections algorithm.

    Science.gov (United States)

    de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino

    2018-05-01

    This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.

  20. Optimization of sampling for the determination of the mean Radium-226 concentration in surface soil

    International Nuclear Information System (INIS)

    Williams, L.R.; Leggett, R.W.; Espegren, M.L.; Little, C.A.

    1987-08-01

    This report describes a field experiment that identifies an optimal method for determination of compliance with the US Environmental Protection Agency's Ra-226 guidelines for soil. The primary goals were to establish practical levels of accuracy and precision in estimating the mean Ra-226 concentration of surface soil in a small contaminated region; to obtain empirical information on composite vs. individual soil sampling and on random vs. uniformly spaced sampling; and to examine the practicality of using gamma measurements in predicting the average surface radium concentration and in estimating the number of soil samples required to obtain a given level of accuracy and precision. Numerous soil samples were collected on each of six sites known to be contaminated with uranium mill tailings. Three types of samples were collected on each site: 10-composite samples, 20-composite samples, and individual or post-hole samples; 10-composite sampling is the method of choice because it yields a given level of accuracy and precision for the least cost. Gamma measurements can be used to reduce surface soil sampling on some sites. 2 refs., 5 figs., 7 tabs

  1. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    Science.gov (United States)

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameters(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE) or parallel tempering is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter choice problem could be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for the use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate for the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  2. Path Planning for Unmanned Underwater Vehicle in 3D Space with Obstacles Using Spline-Imperialist Competitive Algorithm and Optimal Interval Type-2 Fuzzy Logic Controller

    Directory of Open Access Journals (Sweden)

    Ehsan Zakeri

    Full Text Available Abstract In this research, generation of a short and smooth path in three-dimensional space with obstacles for guiding an Unmanned Underwater Vehicle (UUV) without collision is investigated. This is done by utilizing a spline technique, in which the spline control point positions are determined by the Imperialist Competitive Algorithm (ICA) in three-dimensional space such that the shortest possible path from the starting point to the target point without colliding with obstacles is achieved. Furthermore, for guiding the UUV along the generated path, an Interval Type-2 Fuzzy Logic Controller (IT2FLC) is used, the coefficients of which are optimized by considering an objective function that includes quadratic terms of the input forces and the state error of the system. Selecting such an objective function reduces the control error and also the force applied to the UUV, which consequently leads to a reduction in energy consumption. Therefore, by using a special method, desired signals of the UUV state are obtained from the generated three-dimensional optimal path such that tracking these signals by the controller leads to the tracking of this path by the UUV. In this paper, the dynamical model of the UUV, entitled "mUUV-WJ-1", is derived and its hydrodynamic coefficients are calculated by CFD in order to be used in the simulations. For simulation by the method presented in this study, three environments with different obstacles are considered in order to check the performance of the IT2FLC controller in generating optimal paths for guiding the UUV. In this article, in addition to ICA, Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC) are also used for generation of the paths and the results are compared with each other. The results show the better performance of ICA compared with ABC and PSO. Moreover, to evaluate the performance of the IT2FLC, an optimal Type-1 Fuzzy Logic Controller (T1FLC) and a Proportional Integral Derivative (PID) controller are designed

  3. A comparison of confidence/credible interval methods for the area under the ROC curve for continuous diagnostic tests with small sample size.

    Science.gov (United States)

    Feng, Dai; Cortese, Giuliana; Baumgartner, Richard

    2017-12-01

    The receiver operating characteristic (ROC) curve is frequently used as a measure of accuracy of continuous markers in diagnostic tests. The area under the ROC curve (AUC) is arguably the most widely used summary index for the ROC curve. Although the small sample size scenario is common in medical tests, a comprehensive study of small sample size properties of various methods for the construction of the confidence/credible interval (CI) for the AUC has been by and large missing in the literature. In this paper, we describe and compare 29 non-parametric and parametric methods for the construction of the CI for the AUC when the number of available observations is small. The methods considered include not only those that have been widely adopted, but also those that have been less frequently mentioned or, to our knowledge, never applied to the AUC context. To compare different methods, we carried out a simulation study with data generated from binormal models with equal and unequal variances and from exponential models with various parameters and with equal and unequal small sample sizes. We found that the larger the true AUC value and the smaller the sample size, the larger the discrepancy among the results of different approaches. When the model is correctly specified, the parametric approaches tend to outperform the non-parametric ones. Moreover, in the non-parametric domain, we found that a method based on the Mann-Whitney statistic is in general superior to the others. We further elucidate potential issues and provide possible solutions, along with general guidance on CI construction for the AUC when the sample size is small. Finally, we illustrate the utility of different methods through real life examples.
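    Since the record singles out a Mann-Whitney-based interval as generally superior, the sketch below computes one common interval of that family: the Mann-Whitney AUC estimate with the Hanley and McNeil (1982) standard error and a Wald-type interval. This is offered as a generic illustration, not as the specific method recommended by the authors, and it is known to be liberal for very small samples; the data are simulated.

```python
import numpy as np
from scipy.stats import norm

def auc_ci_hanley_mcneil(x_neg, x_pos, alpha=0.05):
    """Mann-Whitney AUC estimate with the Hanley & McNeil (1982) standard error
    and a Wald-type (1 - alpha) confidence interval, truncated to [0, 1]."""
    x_neg, x_pos = np.asarray(x_neg, float), np.asarray(x_pos, float)
    n0, n1 = len(x_neg), len(x_pos)
    diff = x_pos[:, None] - x_neg[None, :]
    auc = (np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / (n0 * n1)
    q1, q2 = auc / (2 - auc), 2 * auc ** 2 / (1 + auc)
    se = np.sqrt((auc * (1 - auc) + (n1 - 1) * (q1 - auc ** 2)
                  + (n0 - 1) * (q2 - auc ** 2)) / (n0 * n1))
    z = norm.ppf(1 - alpha / 2)
    return auc, max(0.0, auc - z * se), min(1.0, auc + z * se)

# Simulated small-sample diagnostic data
rng = np.random.default_rng(0)
print(auc_ci_hanley_mcneil(rng.normal(0, 1, 15), rng.normal(1, 1, 12)))
```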

  4. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or the expected size, N*, in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
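    To make the O(N^(1/2)) behaviour concrete, the sketch below numerically maximizes a toy expected utility for a two-arm trial with Bernoulli responses: n patients per arm are used for learning, and the remaining N - 2n receive the apparently better arm. This is a simplified stand-in for the paper's exponential-family derivation, with a normal approximation to the probability of correct selection and hypothetical response rates.

```python
import numpy as np
from scipy.stats import norm

def optimal_trial_size(N, p_control=0.5, delta=0.1, cost_per_patient=0.0):
    """Toy decision-theoretic sizing: expected gain = delta * (N - 2n) * P(correct
    selection) - cost * 2n, maximized over the per-arm sample size n by grid search."""
    p_treat = p_control + delta
    n = np.arange(2, N // 2)
    se = np.sqrt(p_control * (1 - p_control) / n + p_treat * (1 - p_treat) / n)
    p_correct = norm.cdf(delta / se)              # normal approximation
    gain = delta * (N - 2 * n) * p_correct - cost_per_patient * 2 * n
    return int(n[np.argmax(gain)])

# The optimum grows roughly like sqrt(N), as the asymptotic result suggests
for N in (1000, 4000, 16000):
    print(N, optimal_trial_size(N))
```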

  5. Optimization of the two-sample rank Neyman-Pearson detector

    Science.gov (United States)

    Akimov, P. S.; Barashkov, V. M.

    1984-10-01

    The development of optimal algorithms concerned with rank considerations in the case of finite sample sizes involves considerable mathematical difficulties. The present investigation provides results related to the design and the analysis of an optimal rank detector based on the Neyman-Pearson criterion. The detection of a signal in the presence of background noise is considered, taking into account n observations (readings) x1, x2, ... xn in the experimental communications channel. The rank of an observation is computed on the basis of relations between x and the variable y, representing interference. Attention is given to conditions in the absence of a signal, the probability of the detection of an arriving signal, details regarding the utilization of the Neyman-Pearson criteria, the scheme of an optimal rank, multichannel, incoherent detector, and an analysis of the detector.

  6. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.

  7. Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples

    Directory of Open Access Journals (Sweden)

    Hyunok Oh

    2003-05-01

    Full Text Available In multimedia and graphics applications, data samples of nonprimitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model with a given execution order of nodes. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store the pointers to the global buffer entries. The proposed algorithm reduces memory by 67% for a JPEG encoder and by 40% for an H.263 encoder compared with unshared versions, and by 22% compared with the previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable to manually optimized code in terms of memory requirement.

  8. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    Directory of Open Access Journals (Sweden)

    Arnaud Chapon

    Full Text Available Searching for low-energy β contamination in industrial environments requires Liquid Scintillation Counting. This indirect measurement method demands careful control from sampling through to the measurement itself. Thus, in this paper, we focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, the optimization of counting parameters and the definition of energy windows, using the maximization of a Figure of Merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except those relative to some parameters specific to PerkinElmer, most of the results presented here can be extended to other counters. Keywords: Liquid Scintillation Counting (LSC), PerkinElmer, Tri-Carb, Smear, Swipe
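    The Figure of Merit mentioned above is commonly defined as FOM = E^2/B, with E the counting efficiency in an energy window and B the background count rate in the same window. The sketch below scans all contiguous channel windows of a pair of spectra and keeps the window maximizing that FOM; the spectra, efficiency definition and activity are hypothetical, and the authors' exact procedure may differ.

```python
import numpy as np

def best_window(src_cpm, bkg_cpm, activity_dpm):
    """Scan all contiguous channel windows [lo, hi) and return the one maximizing
    FOM = E^2 / B, with E the counting efficiency (%) of a standard of known activity
    and B the background count rate (cpm) in the window."""
    cum_src = np.concatenate(([0.0], np.cumsum(src_cpm)))
    cum_bkg = np.concatenate(([0.0], np.cumsum(bkg_cpm)))
    best_win, best_fom = None, -np.inf
    for lo in range(len(src_cpm)):
        for hi in range(lo + 1, len(src_cpm) + 1):
            eff = 100.0 * (cum_src[hi] - cum_src[lo]) / activity_dpm
            bkg = max(cum_bkg[hi] - cum_bkg[lo], 1e-9)
            if eff ** 2 / bkg > best_fom:
                best_win, best_fom = (lo, hi), eff ** 2 / bkg
    return best_win, best_fom

# Synthetic spectra: a low-energy beta peak on a slowly varying background (cpm per channel)
chan = np.arange(200)
src = 5000.0 * np.exp(-0.5 * ((chan - 40) / 15.0) ** 2)
bkg = 0.05 + 0.2 * np.exp(-chan / 60.0)
print(best_window(src, bkg, activity_dpm=200000))
```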

  9. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5×10^8 ≤ M* ≤ 3×10^11 M_⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  10. Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2015-01-01

    Roč. 52, č. 2 (2015), s. 419-440 ISSN 0021-9002 Grant - others:GA AV ČR(CZ) 171396 Institutional support: RVO:67985556 Keywords : Dominated Convergence theorem for the expected average criterion * Discrepancy function * Kolmogorov inequality * Innovations * Strong sample-path optimality Subject RIV: BC - Control Systems Theory Impact factor: 0.665, year: 2015 http://library.utia.cas.cz/separaty/2015/E/sladky-0449029.pdf

  11. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Roč. 19, č. 30 (2012), s. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  12. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions and representations of hydrological behavior. Meanwhile, this trend is accompanied by increasing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE) has been widely used in uncertainty analysis for hydrological models, combining the Monte Carlo method with Bayesian estimation. However, the stochastic sampling method of prior parameters adopted by GLUE appears inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms utilizing iterative evolution show better convergence speed and optimum-searching performance. In light of the features of heuristic optimization algorithms, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on the multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.

  13. Optimal screening interval for men with low baseline prostate-specific antigen levels (≤1.0 ng/mL) in a prostate cancer screening program.

    Science.gov (United States)

    Urata, Satoko; Kitagawa, Yasuhide; Matsuyama, Satoko; Naito, Renato; Yasuda, Kenji; Mizokami, Atsushi; Namiki, Mikio

    2017-04-01

    To optimize the rescreening schedule for men with low baseline prostate-specific antigen (PSA) levels, we evaluated men with baseline PSA levels of ≤1.0 ng/mL in PSA-based population screening. We enrolled 8086 men aged 55-69 years with baseline PSA levels of ≤1.0 ng/mL, who were screened annually. The relationships of baseline PSA and age with the cumulative risks and clinicopathological features of screening-detected cancer were investigated. Among the 8086 participants, 28 (0.35 %) and 18 (0.22 %) were diagnosed with prostate cancer and cancer with a Gleason score (GS) of ≥7 during the observation period, respectively. The cumulative probabilities of prostate cancer at 12 years were 0.42, 1.0, 3.4, and 4.3 % in men with baseline PSA levels of 0.0-0.4, 0.5-0.6, 0.7-0.8, and 0.9-1.0 ng/mL, respectively. Those with GS of ≥7 had cumulative probabilities of 0.42, 0.73, 2.8, and 1.9 %, respectively. The cumulative probabilities of prostate cancer were significantly lower when baseline PSA levels were 0.0-0.6 ng/mL compared with 0.7-1.0 ng/mL. Prostate cancer with a GS of ≥7 was not detected during the first 10 years of screening when baseline PSA levels were 0.0-0.6 ng/mL and was not detected during the first 2 years when baseline PSA levels were 0.7-1.0 ng/mL. Our study demonstrated that men with baseline PSA levels of 0.0-0.6 ng/mL might benefit from longer screening intervals than those recommended in the guidelines of the Japanese Urological Association. Further investigation is needed to confirm the optimal screening interval for men with low baseline PSA levels.

  14. Reference Ranges of Amniotic Fluid Index in Late Third Trimester of Pregnancy: What Should the Optimal Interval between Two Ultrasound Examinations Be?

    Directory of Open Access Journals (Sweden)

    Shripad Hebbar

    2015-01-01

    Full Text Available Background. The amniotic fluid index (AFI) is one of the major and deciding components of the fetal biophysical profile and by itself it can predict pregnancy outcome. Very low values are associated with intrauterine growth restriction and renal anomalies of the fetus, whereas high values may indicate fetal GI anomalies, maternal diabetes mellitus, and so forth. However, before deciding the cut-off standards for abnormal values for a local population, what constitutes a normal range for a specific gestational age and the ideal interval of testing should be defined. Objectives. To establish reference standards for AFI for the local population after 34 weeks of pregnancy and to decide an optimal scan interval for AFI estimation in the third trimester in low risk antenatal women. Materials and Methods. A prospective estimation of AFI was done in 50 healthy pregnant women from 34 to 40 weeks at weekly intervals. The trend of amniotic fluid volume was studied with advancing gestational age. Only low risk singleton pregnancies with accurately established gestational age that were available for all weekly scans from 34 to 40 weeks were included in the study. Women with gestational or overt diabetes mellitus, hypertensive disorders of pregnancy, prelabour rupture of membranes, or congenital anomalies in the foetus, and those who delivered before 40 completed weeks, were excluded from the study. For the purpose of AFI measurement, the uterine cavity was arbitrarily divided into four quadrants by a vertical and a horizontal line running through the umbilicus. A linear array transabdominal probe was used to measure the largest vertical pocket (in cm) in a plane perpendicular to the abdominal skin in each quadrant. The amniotic fluid index was obtained by adding these four measurements. Statistical analysis was done using SPSS software (Version 16, Chicago, IL). Percentile curves (5th, 50th, and 95th centiles) were constructed for comparison with other studies. Cohen’s d coefficient was used

  15. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    International Nuclear Information System (INIS)

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-01-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected the fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling scheme can be developed that reduces optimization time by more than a factor of 2 without significantly degrading the dose quality
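    As a generic, hedged sketch of the clustering idea described above (grouping voxels by similar influence-matrix rows, keeping all boundary voxels and a few representatives per interior cluster), consider the following; the influence matrix, cluster count and per-cluster sampling rate are all hypothetical, and the authors' algorithm may differ in its clustering method and rate selection.

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_voxels(influence, boundary_mask, n_clusters=20, per_cluster=2, seed=0):
    """Keep all boundary voxels and, for the interior, a few representatives per cluster
    of voxels whose influence-matrix rows (dose signatures) are similar."""
    rng = np.random.default_rng(seed)
    interior = np.where(~boundary_mask)[0]
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(
        influence[interior])
    keep = list(np.where(boundary_mask)[0])
    for c in range(n_clusters):
        members = interior[labels == c]
        if len(members):
            keep.extend(rng.choice(members, size=min(per_cluster, len(members)), replace=False))
    return np.array(sorted(keep))

# Hypothetical influence matrix: 500 voxels x 30 beamlets, first 100 voxels on the boundary
influence = np.random.default_rng(1).random((500, 30))
boundary = np.zeros(500, dtype=bool)
boundary[:100] = True
print(len(sample_voxels(influence, boundary)), "of", len(influence), "voxels kept")
```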

  16. Evaluation of sample preparation methods and optimization of nickel determination in vegetable tissues

    Directory of Open Access Journals (Sweden)

    Rodrigo Fernando dos Santos Salazar

    2011-02-01

    Full Text Available Nickel, although essential to plants, may be toxic to plants and animals. It is mainly assimilated by food ingestion. However, information about the average levels of elements (including Ni) in edible vegetables from different regions is still scarce in Brazil. The objectives of this study were to: (a) evaluate and optimize a method for the preparation of vegetable tissue samples for Ni determination; (b) optimize the analytical procedures for determination by Flame Atomic Absorption Spectrometry (FAAS) and by Electrothermal Atomic Absorption (ETAAS) in vegetable samples; and (c) determine the Ni concentration in vegetables consumed in the cities of Lorena and Taubaté in the Vale do Paraíba, State of São Paulo, Brazil. By means of the analytical technique for determination by ETAAS or FAAS, the results were validated by the test of analyte addition and recovery. The most viable method tested for quantification of this element was HClO4-HNO3 wet digestion. All samples but carrot tissue collected in Lorena contained Ni levels above those permitted by the Brazilian Ministry of Health. The most disturbing results, requiring more detailed studies, were the Ni concentrations measured in carrot samples from Taubaté, where levels were five times higher than permitted by Brazilian regulations.

  17. Demonstration and Optimization of BNFL's Pulsed Jet Mixing and RFD Sampling Systems Using NCAW Simulant

    International Nuclear Information System (INIS)

    Bontha, J.R.; Golcar, G.R.; Hannigan, N.

    2000-01-01

    The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries, and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and RFD sampling devices represent very simple and efficient methods to mix and sample slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes it very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems is yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of the testing of BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft-ID, 15-ft-tall dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: demonstrate the effectiveness of the Pulsed Jet mixing system to thoroughly homogenize Hanford-type slurries over a range of solids loading; minimize/optimize air usage by changing the sequencing of the Pulsed Jet mixers or by altering cycle times; and demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%

  18. On the optimal sampling of bandpass measurement signals through data acquisition systems

    International Nuclear Information System (INIS)

    Angrisani, L; Vadursi, M

    2008-01-01

    Data acquisition systems (DAS) play a fundamental role in many modern measurement solutions. One of the parameters characterizing a DAS is its maximum sample rate, which imposes constraints on the signals that can be digitized free of aliasing. Bandpass sampling theory singles out separated ranges of admissible sample rates, which can be significantly lower than the carrier frequency. But how should the most convenient sample rate be chosen for the purpose at hand? The paper proposes a method for the automatic selection of the optimal sample rate in measurement applications involving bandpass signals; the effects of sample clock instability and limited resolution are also taken into account. The method allows the user to choose the location of spectral replicas of the sampled signal in terms of normalized frequency, and the minimum guard band between replicas, thus introducing a feature that no DAS currently available on the market seems to offer. A number of experimental tests on bandpass digitally modulated signals are carried out to assess the concurrence of the obtained central frequency with the expected one
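    The separated ranges of admissible sample rates mentioned above follow from the classical bandpass sampling condition 2·f_high/n ≤ fs ≤ 2·f_low/(n-1). The sketch below enumerates those ranges, optionally shrunk by a guard band to tolerate clock drift; it is a textbook enumeration rather than the paper's automatic selection method, and the signal band is hypothetical.

```python
def valid_bandpass_rates(f_low, f_high, guard=0.0):
    """Admissible sample-rate ranges for alias-free bandpass sampling of a signal
    occupying [f_low, f_high]: 2*f_high/n <= fs <= 2*f_low/(n-1) for n = 1..floor(f_high/B),
    each range optionally shrunk by a guard band to tolerate clock drift."""
    bandwidth = f_high - f_low
    ranges = []
    for n in range(1, int(f_high // bandwidth) + 1):
        lo = 2.0 * f_high / n + guard
        hi = float("inf") if n == 1 else 2.0 * f_low / (n - 1) - guard
        if lo <= hi:
            ranges.append((n, lo, hi))
    return ranges

# A 5 MHz-wide channel centred at 70 MHz can be sampled far below twice the carrier
for n, lo, hi in valid_bandpass_rates(67.5e6, 72.5e6, guard=0.5e6):
    print(n, round(lo / 1e6, 2), "to", "inf" if hi == float("inf") else round(hi / 1e6, 2), "MHz")
```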

  19. Optimal sample to tracer ratio for isotope dilution mass spectrometry: the polyisotopic case

    International Nuclear Information System (INIS)

    Laszlo, G.; Ridder, P. de; Goldman, A.; Cappis, J.; Bievre, P. de

    1991-01-01

    The Isotope Dilution Mass Spectrometry (IDMS) measurement technique provides a means for determining the unknown amount of various isotopes of an element in a sample solution of known mass. The sample solution is mixed with an auxiliary solution, or tracer, containing a known amount of the same element having the same isotopes but of different relative abundances or isotopic composition, and the induced change in the isotopic composition is measured by isotope mass spectrometry. The technique involves the measurement of the abundance ratio of each isotope to a common reference isotope in the sample solution, in the tracer solution and in the blend of the sample and tracer solutions. These isotope ratio measurements, the known element amount in the tracer and the known mass of the sample solution are used to calculate the unknown amount of one isotope in the sample solution. Subsequently the unknown amount of the element is determined. The purpose of this paper is to examine the optimization of the ratio of the estimated unknown amount of element in the sample solution to the known amount of element in the tracer solution in order to minimize the relative uncertainty in the determination of the unknown amount of element
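    As a hedged illustration of why such an optimum exists, the sketch below uses the simplified two-isotope form of the IDMS equation (ignoring abundance-sum factors) and scans the blend ratio for the value that minimizes the error-magnification factor of the measured blend ratio; the well-known result is that the optimum lies near the geometric mean of the sample and tracer ratios. The isotope ratios used are illustrative, not taken from the paper.

```python
import numpy as np

# Simplified two-isotope IDMS: n_sample / n_tracer = (R_tracer - R_blend) / (R_blend - R_sample),
# with R the ratio of the measured isotope to the reference isotope.
R_sample, R_tracer = 0.0073, 150.0     # illustrative: near-natural sample vs enriched tracer

def error_magnification(R_blend):
    """Factor by which a relative uncertainty on the measured blend ratio propagates
    into the computed sample amount."""
    return R_blend * (R_tracer - R_sample) / ((R_tracer - R_blend) * (R_blend - R_sample))

R = np.logspace(np.log10(R_sample * 1.5), np.log10(R_tracer / 1.5), 2000)
best = R[np.argmin(error_magnification(R))]
print(best, np.sqrt(R_sample * R_tracer))   # numerical optimum vs the classical sqrt rule
```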

  20. [Sampling optimization for tropical invertebrates: an example using dung beetles (Coleoptera: Scarabaeinae) in Venezuela].

    Science.gov (United States)

    Ferrer-Paris, José Rafael; Sánchez-Mercado, Ada; Rodríguez, Jon Paul

    2013-03-01

    The development of efficient sampling protocols is an essential prerequisite to evaluate and identify priority conservation areas. There are few protocols for fauna inventory and monitoring at wide geographical scales in the tropics, where the complexity of communities and high biodiversity levels make the implementation of efficient protocols more difficult. We propose here a simple strategy to optimize the capture of dung beetles, applied to sampling with baited traps and generalizable to other sampling methods. We analyzed data from eight transects sampled between 2006 and 2008 with the aim of developing a uniform sampling design that allows confident estimation of species richness, abundance and composition at wide geographical scales. We examined four characteristics of any sampling design that affect the effectiveness of the sampling effort: the number of traps, sampling duration, type and proportion of bait, and spatial arrangement of the traps along transects. We used species accumulation curves, rank-abundance plots, indicator species analysis, and multivariate correlograms. We captured 40 337 individuals (115 species/morphospecies of 23 genera). Most species were attracted by both dung and carrion, but two thirds had greater relative abundance in traps baited with human dung. Different aspects of the sampling design influenced each diversity attribute in different ways. To obtain reliable richness estimates, the number of traps was the most important aspect. Accurate abundance estimates were obtained when the sampling period was increased, while the spatial arrangement of traps was decisive for capturing the species composition pattern. An optimum sampling strategy for accurate estimates of richness, abundance and diversity should: (1) set 50-70 traps to maximize the number of species detected, (2) get samples during 48-72 hours and set trap groups along the transect to reliably estimate species abundance, (3) set traps in groups of at least 10 traps to

  1. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    Science.gov (United States)

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included: identifying an ideal extraction diluent, variation in the number of wash steps, variation in the initial centrifugation speed, sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol with an approximate matrix limit of detection at 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  2. Short rest interval lengths between sets optimally enhance body composition and performance with 8 weeks of strength resistance training in older men.

    Science.gov (United States)

    Villanueva, Matthew G; Lane, Christianne Joy; Schroeder, E Todd

    2015-02-01

    To determine if 8 weeks of periodized strength resistance training (RT) utilizing relatively short rest interval lengths (RI) in between sets (SS) would induce greater improvements in body composition and muscular performance, compared to the same RT program utilizing extended RI (SL). 22 male volunteers (SS: n = 11, 65.6 ± 3.4 years; SL: n = 11, 70.3 ± 4.9 years) were assigned to one of two strength RT groups, following 4 weeks of periodized hypertrophic RT (PHRT): strength RT with 60-s RI (SS) or strength RT with 4-min RI (SL). Prior to randomization, all 22 study participants trained 3 days/week, for 4 weeks, targeting hypertrophy; from week 4 to week 12, SS and SL followed the same periodized strength RT program for 8 weeks, with RI the only difference in their RT prescription. Following PHRT, all study participants experienced significant increases in lean body mass (LBM) and total body strength, along with significant changes in body fat. These findings indicate that high-intensity strength RT with shortened RI induces significantly greater enhancements in body composition, muscular performance, and functional performance, compared to the same RT prescription with extended RI, in older men. Applied professionals may optimize certain RT-induced adaptations by incorporating shortened RI.

  3. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    International Nuclear Information System (INIS)

    Carpy, R; Picker, G; Amann, B; Ranebo, H; Vincent-Bonnieu, S; Minster, O; Winter, J; Dettmann, J; Castiglione, L; Höhler, R; Langevin, D

    2011-01-01

    At the end of 2009 and in early 2010, a sealed cell for foam generation and observation was designed and manufactured at the Astrium Friedrichshafen facilities. With the use of this cell, different sample compositions of 'wet foams' have been optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities of the Experiment Container FOAM-C for operation in the Fluid Science Laboratory of the ISS. The sample cell supports multiple observation methods, such as Diffusing-Wave and Diffuse Transmission Spectrometry, Time Resolved Correlation Spectroscopy and microscope observation; all of these methods are applied in the cell with a relatively small experiment volume. These units will be on-orbit replaceable sets that will allow the processing of multiple sample compositions (in the range of >40).

  4. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the study of the algorithm, two lower bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, by which high-quality individuals can obtain a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.

  5. Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.

    Science.gov (United States)

    Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh

    2012-02-28

    Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.

  6. Direct Interval Forecasting of Wind Power

    DEFF Research Database (Denmark)

    Wan, Can; Xu, Zhao; Pinson, Pierre

    2013-01-01

    This letter proposes a novel approach to directly formulate the prediction intervals of wind power generation based on extreme learning machine and particle swarm optimization, where prediction intervals are generated through direct optimization of both the coverage probability and sharpness...

  7. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.

    Science.gov (United States)

    Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed strong linearity over a range of concentrations from 10^6 to 10^0 leptospires/mL, with low limits of detection. The resulting optimized protocol for the quantification of pathogenic Leptospira in environmental waters (river, pond and sewage) consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden.

  8. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, which is referred to as the ‘failure probability function (FPF)’. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology
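    A minimal sketch of the reweighting idea behind such a failure probability function, under strong simplifying assumptions (a single normal input whose mean is the design variable, a toy limit state, and plain density-ratio reweighting rather than the authors' full decoupling framework):

```python
import numpy as np
from scipy.stats import norm

# Draw samples once at a nominal design d0 and re-use them, weighted by density ratios,
# to express the failure probability as an explicit function of the design variable d.
rng = np.random.default_rng(0)
sigma, d0 = 1.0, 0.0
x = rng.normal(d0, sigma, 200_000)
fail = x >= 2.5                       # toy limit state g(x) = 2.5 - x <= 0 means failure

def p_fail(d):
    """Weighted-sum approximation of the failure probability at design d."""
    w = norm.pdf(x, loc=d, scale=sigma) / norm.pdf(x, loc=d0, scale=sigma)
    return float(np.mean(fail * w))

for d in (0.0, 0.5, 1.0):
    print(d, round(p_fail(d), 4), round(1 - norm.cdf(2.5 - d), 4))   # estimate vs exact
```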

  9. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Energy Technology Data Exchange (ETDEWEB)

    Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it [Sapienza Università di Roma, Dipartimento di Ingegneria Civile, Edile e Ambientale (Italy); Alfonso, L. [Hydroinformatics Chair Group, UNESCO-IHE, Delft (Netherlands); Di Baldassarre, G. [Department of Earth Sciences, Program for Air, Water and Landscape Sciences, Uppsala University (Sweden)

    2016-06-08

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  10. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    International Nuclear Information System (INIS)

    Ridolfi, E.; Napolitano, F.; Alfonso, L.; Di Baldassarre, G.

    2016-01-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  11. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

    A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used for anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV. Quantities such as the conformity index (COIN) and COIN integrals are derived from the DVHs. This is achieved by using partial uniformly distributed sampling points, with a density in each region obtained from a survey of the gradients or the variance of the dose distribution in these regions. The shape of the sampling regions is adapted to the patient anatomy and the shape and size of the implant. For the application of this method a single preprocessing step is necessary, which requires only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using 5-10 times fewer sampling points than with uniformly distributed points.

  12. Optimal sampling plan for clean development mechanism lighting projects with lamp population decay

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2014-01-01

    Highlights: • A metering cost minimisation model is built with the lamp population decay to optimise the sampling plan of CDM lighting projects. • The model minimises the total metering cost and optimises the annual sample size during the crediting period. • The required 90/10 criterion sampling accuracy is satisfied for each CDM monitoring report. - Abstract: This paper proposes a metering cost minimisation model that minimises metering cost under the constraints of the sampling accuracy requirement for clean development mechanism (CDM) energy efficiency (EE) lighting projects. Small-scale (SSC) CDM EE lighting projects usually expect a crediting period of 10 years, during which the lamp population decays over time. The SSC CDM sampling guideline requires that the monitored key parameters for the carbon emission reduction quantification must satisfy the sampling accuracy of 90% confidence and 10% precision, known as the 90/10 criterion. For the existing registered CDM lighting projects, sample sizes are either decided by professional judgment or by rule-of-thumb without considering any optimisation. Lighting samples are randomly selected and their energy consumptions are monitored continuously by power meters. In this study, the sample size determination problem is formulated as a metering cost minimisation model by incorporating a linear lighting decay model as given by the CDM guideline AMS-II.J. The 90/10 criterion is formulated as constraints to the metering cost minimisation problem. Optimal solutions to the problem minimise the metering cost whilst satisfying the 90/10 criterion for each reporting period. The proposed metering cost minimisation model is applicable to other CDM lighting projects with different population decay characteristics as well.
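
    The interplay between the 90/10 criterion and a decaying lamp population can be illustrated with the standard sample-size formula including a finite-population correction. The decay rate and the coefficient of variation below are assumptions chosen for illustration, not values prescribed by AMS-II.J, and the sketch does not reproduce the paper's full cost-minimisation model.

```python
import math

Z_90 = 1.645      # z-value for 90% confidence
PRECISION = 0.10  # 10% relative precision (the 90/10 criterion)

def sample_size(population, cv):
    """Required sample size under the 90/10 criterion with finite-population correction."""
    n0 = (Z_90 * cv / PRECISION) ** 2
    return math.ceil(n0 / (1.0 + n0 / population))

# Illustrative linear lamp-population decay over a 10-year crediting period.
initial_lamps = 100_000
annual_decay = 0.05          # assumed: 5% of the initial population fails per year
cv = 0.5                     # assumed coefficient of variation of lamp usage

for year in range(1, 11):
    surviving = int(initial_lamps * max(0.0, 1.0 - annual_decay * year))
    print(f"year {year:2d}: {surviving:6d} lamps -> sample size {sample_size(surviving, cv)}")
```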

  13. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Tijssen, Rob H.N. [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Senneville, Baudouin D. de [Imaging Division, University Medical Center Utrecht, Utrecht (Netherlands); L' Institut de Mathématiques de Bordeaux, Unité Mixte de Recherche 5251, Centre National de la Recherche Scientifique/University of Bordeaux, Bordeaux (France); Heerkens, Hanne D.; Vulpen, Marco van; Lagendijk, Jan J.W.; Berg, Cornelis A.T. van den [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands)

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  14. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    Science.gov (United States)

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to 3 major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold in that: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper significantly shorten acquisition time and improve the quality of FP reconstructions. It may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.

  15. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    Science.gov (United States)

    Carpy, R.; Picker, G.; Amann, B.; Ranebo, H.; Vincent-Bonnieu, S.; Minster, O.; Winter, J.; Dettmann, J.; Castiglione, L.; Höhler, R.; Langevin, D.

    2011-12-01

    At the end of 2009 and in early 2010, a sealed cell for foam generation and observation was designed and manufactured at the Astrium Friedrichshafen facilities. With the use of this cell, different sample compositions of "wet foams" have been optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities of the Experiment Container FOAM-C for operation in the Fluid Science Laboratory on board the ISS. The sample cell supports multiple observation methods, such as Diffusing-Wave and Diffuse Transmission Spectrometry, Time Resolved Correlation Spectroscopy [1] and microscope observation; all of these methods are applied in the cell within a relatively small experiment volume.

  16. AMORE-HX: a multidimensional optimization of radial enhanced NMR-sampled hydrogen exchange

    International Nuclear Information System (INIS)

    Gledhill, John M.; Walters, Benjamin T.; Wand, A. Joshua

    2009-01-01

    The Cartesian-sampled three-dimensional HNCO experiment is inherently limited in time resolution and sensitivity for the real-time measurement of protein hydrogen exchange. This is largely overcome by the radial HNCO experiment, which employs optimized sampling angles. The significant practical limitation of three-dimensional data, namely the large data storage and processing requirements, is largely overcome by taking advantage of the inherent capability of the 2D-FT to process selected regions of frequency space without artifact or limitation. Decomposition of angle spectra into positive and negative ridge components provides increased resolution and allows statistical averaging of intensity and therefore increased precision. Strategies for averaging ridge cross sections within and between angle spectra are developed to allow further statistical approaches for increasing the precision of measured hydrogen occupancy. Intensity artifacts potentially introduced by over-pulsing are effectively eliminated by use of the BEST approach.

  17. Population pharmacokinetic analysis of clopidogrel in healthy Jordanian subjects with emphasis on optimal sampling strategy.

    Science.gov (United States)

    Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A

    2013-05-01

    Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or, to a lesser extent, an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM(®) to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse phase high performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. A two-compartment model with EHC provided the best fit and reduced bias in C(max) (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model; AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.

  18. Optimization of multi-channel neutron focusing guides for extreme sample environments

    International Nuclear Information System (INIS)

    Di Julio, D D; Lelièvre-Berna, E; Andersen, K H; Bentley, P M; Courtois, P

    2014-01-01

    In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.
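
    As a companion sketch, the population-based search itself can be illustrated with a generic particle-swarm optimizer over a few geometry parameters. The objective below is a smooth stand-in for the Monte-Carlo ray-tracing figure of merit, and the parameter names and bounds are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def surrogate_gain(x):
    """Stand-in for the ray-tracing figure of merit: a smooth function of
    (channel curvature, channel width, coating m-value), peaked at an arbitrary optimum."""
    target = np.array([0.6, 0.3, 3.5])
    return -np.sum((x - target) ** 2, axis=-1)

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), objective(x)
    gbest = pbest[np.argmax(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = objective(x)
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmax(pbest_val)]
    return gbest, objective(gbest[None, :])[0]

bounds = np.array([[0.0, 1.0],   # curvature (arbitrary units)
                   [0.1, 1.0],   # channel width
                   [1.0, 6.0]])  # supermirror m-value
best_x, best_val = pso(surrogate_gain, bounds)
print("best geometry:", best_x, "surrogate gain:", best_val)
```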

  19. Neutron activation analysis for the optimal sampling and extraction of extractable organohalogens in human hair

    International Nuclear Information System (INIS)

    Zhang, H.; Chai, Z.F.; Sun, H.B.; Xu, H.F.

    2005-01-01

    Many persistent organohalogen compounds such as DDTs and polychlorinated biphenyls have caused serious environmental pollution problems that now involve all life. Neutron activation analysis (NAA) is a very convenient method for halogen analysis and is also the only method currently available for simultaneously determining organic chlorine, bromine and iodine in one extract. Human hair is a convenient material to evaluate the burden of such compounds in the human body and can be easily collected from people over wide ranges of age, sex, residential area, eating habits and working environment. To effectively extract organohalogen compounds from human hair, in the present work the optimal Soxhlet-extraction times of extractable organohalogen (EOX) and extractable persistent organohalogen (EPOX) from hair of different lengths were studied by NAA. The results indicated that the optimal Soxhlet-extraction time of EOX and EPOX from human hair was 8-11 h, and the highest EOX and EPOX contents were observed in the hair powder extract. The concentrations of both EOX and EPOX in different hair sections were in the order hair powder ≥ 2 mm > 5 mm, which indicates that milling hair samples into powder or cutting them into very short sections is preferable not only for sample homogeneity but also for the best extraction efficiency.

  20. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    Science.gov (United States)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational cost. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically
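
    The preposterior idea — averaging a measure of posterior prediction uncertainty over data values that are themselves still unknown — can be sketched with a bootstrap-filter-style weighting of a prior ensemble. The toy forward model, noise level and ensemble size below are assumptions for illustration, not PreDIA itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Prior ensemble of an uncertain parameter and the prediction of interest.
n_ens = 3000
theta = rng.normal(0.0, 1.0, size=n_ens)          # uncertain parameter
prediction = theta ** 2                            # quantity we want to predict

def expected_posterior_variance(design_sensitivity, noise_sd=0.5, n_data_draws=200):
    """Preposterior analysis: expected posterior variance of `prediction`
    if we measured y = design_sensitivity * theta + noise, averaged over
    hypothetical data realizations generated from the prior ensemble itself."""
    post_vars = []
    for _ in range(n_data_draws):
        true_idx = rng.integers(n_ens)             # synthetic 'truth' drawn from the prior
        y_obs = design_sensitivity * theta[true_idx] + rng.normal(0.0, noise_sd)
        # Bootstrap-filter weighting of the whole ensemble against this data value.
        resid = y_obs - design_sensitivity * theta
        w = np.exp(-0.5 * (resid / noise_sd) ** 2)
        w /= w.sum()
        mean = np.sum(w * prediction)
        post_vars.append(np.sum(w * (prediction - mean) ** 2))
    return float(np.mean(post_vars))

# Compare two candidate measurement designs by their expected data impact.
for s in (0.2, 1.0):
    print(f"design sensitivity {s}: expected posterior variance "
          f"{expected_posterior_variance(s):.3f}")
```

    A more sensitive (more informative) candidate measurement yields a lower expected posterior variance, which is the design criterion the abstract describes.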

  1. Optimization of a Pre-MEKC Separation SPE Procedure for Steroid Molecules in Human Urine Samples

    Directory of Open Access Journals (Sweden)

    Ilona Olędzka

    2013-11-01

    Full Text Available Many steroid hormones can be considered as potential biomarkers and their determination in body fluids can create opportunities for the rapid diagnosis of many diseases and disorders of the human body. Most existing methods for the determination of steroids are usually time- and labor-consuming and quite costly. Therefore, the aim of analytical laboratories is to develop a new, relatively low-cost and rapid methodology for their determination in biological samples. Due to the fact that there is little literature data on concentrations of steroid hormones in urine samples, we have made attempts at the electrophoretic determination of these compounds. For this purpose, an extraction procedure for the optimized separation and simultaneous determination of seven steroid hormones in urine samples has been investigated. The isolation of analytes from biological samples was performed by liquid-liquid extraction (LLE) with dichloromethane and compared to solid phase extraction (SPE) with C18 and hydrophilic-lipophilic balance (HLB) columns. To separate all the analytes a micellar electrokinetic capillary chromatography (MEKC) technique was employed. For full separation of all the analytes a running buffer (pH 9.2), composed of 10 mM sodium tetraborate decahydrate (borax), 50 mM sodium dodecyl sulfate (SDS), and 10% methanol, was selected. The methodology developed in this work for the determination of steroid hormones meets all the requirements of analytical methods. The applicability of the method has been confirmed for the analysis of urine samples collected from volunteers—both men and women (students, amateur bodybuilders), using and not applying steroid doping. The data obtained during this work can be successfully used for further research on the determination of steroid hormones in urine samples.

  2. Optimizing Water Allocation under Uncertain System Conditions for Water and Agriculture Future Scenarios in Alfeios River Basin (Greece—Part B: Fuzzy-Boundary Intervals Combined with Multi-Stage Stochastic Programming Model

    Directory of Open Access Journals (Sweden)

    Eleni Bekri

    2015-11-01

    Full Text Available Optimal water allocation within a river basin still remains a great modeling challenge for engineers due to various hydrosystem complexities, parameter uncertainties and their interactions. Conventional deterministic optimization approaches have given way to stochastic, fuzzy and interval-parameter programming approaches and their hybrid combinations for overcoming these difficulties. In many countries, including Mediterranean countries, water resources management is characterized by uncertain, imprecise and limited data because of the absence of permanent measuring systems, inefficient river monitoring and fragmentation of authority responsibilities. A fuzzy-boundary-interval linear programming methodology developed by Li et al. (2010) is selected and applied in the Alfeios river basin (Greece) for optimal water allocation under uncertain system conditions. This methodology combines an ordinary multi-stage stochastic programming with uncertainties expressed as fuzzy-boundary intervals. Upper- and lower-bound solution intervals for optimized water allocation targets and probabilistic water allocations and shortages are estimated under a baseline scenario and four water and agricultural policy future scenarios, for an optimistic and a pessimistic attitude of the decision makers. In this work, the uncertainty of the random water inflows is incorporated through the simultaneous generation of stochastic equal-probability hydrologic scenarios at various inflow positions instead of using a scenario-tree approach in the original methodology.

  3. Modeling Optimal Cutoffs for the Brazilian Household Food Insecurity Measurement Scale in a Nationwide Representative Sample.

    Science.gov (United States)

    Interlenghi, Gabriela S; Reichenheim, Michael E; Segall-Corrêa, Ana M; Pérez-Escamilla, Rafael; Moraes, Claudia L; Salles-Costa, Rosana

    2017-07-01

    Background: This is the second part of a model-based approach to examine the suitability of the current cutoffs applied to the raw score of the Brazilian Household Food Insecurity Measurement Scale [Escala Brasileira de Insegurança Alimentar (EBIA)]. The approach allows identification of homogeneous groups who correspond to severity levels of food insecurity (FI) and, by extension, discriminant cutoffs able to accurately distinguish these groups. Objective: This study aims to examine whether the model-based approach for identifying optimal cutoffs first implemented in a local sample is replicated in a countrywide representative sample. Methods: Data were derived from the Brazilian National Household Sample Survey of 2013 (n = 116,543 households). Latent class factor analysis (LCFA) models from 2 to 5 classes were applied to the scale's items to identify the number of underlying FI latent classes. Next, identification of optimal cutoffs on the overall raw score was ascertained from these identified classes. Analyses were conducted in the aggregate data and by macroregions. Finally, model-based classifications (latent classes and groupings identified thereafter) were contrasted to the traditionally used classification. Results: LCFA identified 4 homogeneous groups with a very high degree of class separation (entropy = 0.934-0.975). The following cutoffs were identified in the aggregate data for households with children and/or adolescents: between 1 and 2 (1/2), between 5 and 6 (5/6), and between 10 and 11 (10/11); this pattern emerged consistently in all analyses. Conclusions: Nationwide findings corroborate previous local evidence that households with an overall score of 1 are more akin to those scoring negative on all items. These results may contribute to guide experts' and policymakers' decisions on the most appropriate EBIA cutoffs. © 2017 American Society for Nutrition.

  4. Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation

    International Nuclear Information System (INIS)

    Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A.; Bouquerel, Hélène

    2016-01-01

    Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air to liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved by scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L⁻¹ and 10% for 10 mBq L⁻¹. While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L⁻¹, a conservative experimental estimate is rather 5 mBq L⁻¹, corresponding to 0.14 fg g⁻¹. The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot be easily measured by other methods. Counting of scintillation flasks can be done in remote locations in absence of electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported. - Highlights: • Radium-226 concentration measured with optimized accumulation in a container. • Radon-222 in air measured precisely with scintillation flasks and long countings. • Method tested by repetition tests, dilution experiments, and successful blind tests. • Estimated conservative detection limit without pre-concentration is 5 mBq L⁻¹. • Method is portable, cost

  5. Laqueadura intraparto e de intervalo Intrapartum and interval tubal sterilization: characteristics correlated with the procedure and regret in a sample of women from a public hospital

    Directory of Open Access Journals (Sweden)

    Arlete Maria dos Santos Fernandes

    2006-10-01

    … was cesarean. No difference was detected between the groups in the rates of satisfaction and regret after the procedure. BACKGROUND: Brazil is a country with a high prevalence of tubal ligation, which is frequently performed at the time of delivery. In recent years, an increase in tubal reversal has been noticed, primarily among young women. OBJECTIVES: To study characteristics correlated with the procedure, determine the frequency of intrapartum tubal ligation, and measure patient satisfaction rates and tubal sterilization regret in a sample of post-tubal-ligation patients. METHODS: Three hundred and thirty-five women underwent tubal ligation. The variables studied were related to the procedure: age at tubal ligation, whether ligation was performed intrapartum (vaginal or cesarean section) or after an interval (other than the intrapartum and puerperal period), health service performing the sterilization, medical expenses paid for the procedure, reason stated for choosing the method, and causes related to satisfaction/regret: desire to become pregnant after sterilization, search for treatment, and performance of tubal ligation reversal. The women were divided into two groups, a group undergoing ligation in the intrapartum period and a second group ligated after an interval, to evaluate the association between variables by using Fisher's exact test and chi-squared calculation with Yates' correction. The study was approved by the Ethics Committee of the institution. RESULTS: There was a predominance of Caucasian women over 35 years of age, married, and with a low level of education, of whom 43.5% had undergone sterilization before 30 years of age. Two hundred and forty-five women underwent intrapartum tubal ligation, 91.2% of them had cesarean delivery and 44.6% vaginal delivery. In both groups undergoing intrapartum tubal ligation and ligation after an interval, 82.0% and 80.8% reported satisfaction with the method. Although 14.6% expressed a desire to become pregnant at some time after

  6. Optimal sampling designs for estimation of Plasmodium falciparum clearance rates in patients treated with artemisinin derivatives

    Science.gov (United States)

    2013-01-01

    Background The emergence of Plasmodium falciparum resistance to artemisinins in Southeast Asia threatens the control of malaria worldwide. The pharmacodynamic hallmark of artemisinin derivatives is rapid parasite clearance (a short parasite half-life); therefore, the in vivo phenotype of slow clearance defines the reduced susceptibility to the drug. Measurement of parasite counts every six hours during the first three days after treatment has been recommended to measure the parasite clearance half-life, but it remains unclear whether simpler sampling intervals and frequencies might also be sufficient to reliably estimate this parameter. Methods A total of 2,746 parasite density-time profiles were selected from 13 clinical trials in Thailand, Cambodia, Mali, Vietnam, and Kenya. In these studies, parasite densities were measured every six hours until negative after treatment with an artemisinin derivative (alone or in combination with a partner drug). The WWARN Parasite Clearance Estimator (PCE) tool was used to estimate “reference” half-lives from these six-hourly measurements. The effect of four alternative sampling schedules on half-life estimation was investigated, and compared to the reference half-life (time zero, 6, 12, 24 (A1); zero, 6, 18, 24 (A2); zero, 12, 18, 24 (A3) or zero, 12, 24 (A4) hours and then every 12 hours). Statistical bootstrap methods were used to estimate the sampling distribution of half-lives for parasite populations with different geometric mean half-lives. A simulation study was performed to investigate a suite of 16 potential alternative schedules, and half-life estimates generated by each of the schedules were compared to the “true” half-life. The candidate schedules in the simulation study included (among others) six-hourly sampling, schedule A1, schedule A4, and a convenience sampling schedule at six, seven, 24, 25, 48 and 49 hours. Results The median (range) parasite half-life for all clinical studies combined was 3.1 (0
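
    The schedule-comparison logic can be sketched by simulating log-linear clearance profiles and re-estimating the half-life under different sampling times. The noise level and the simple least-squares slope fit below are illustrative assumptions; the sketch does not reproduce the WWARN Parasite Clearance Estimator, and only the median half-life of 3.1 h is taken from the record above.

```python
import numpy as np

rng = np.random.default_rng(4)

TRUE_HALF_LIFE = 3.1                        # hours (median reported in the studies)
k = np.log(2) / TRUE_HALF_LIFE              # clearance rate constant
P0 = 1e5                                    # initial parasite density (assumed)

schedules = {
    "6-hourly":        np.arange(0, 49, 6),
    "A1 (0,6,12,24..)": np.array([0, 6, 12, 24, 36, 48]),
    "A4 (0,12,24..)":   np.array([0, 12, 24, 36, 48]),
}

def estimate_half_life(times, noise_sd=0.3):
    """Fit the log-linear clearance slope from noisy densities at the given times."""
    log_density = np.log(P0) - k * times + rng.normal(0, noise_sd, size=times.size)
    slope = np.polyfit(times, log_density, 1)[0]
    return -np.log(2) / slope

n_boot = 2000
for name, times in schedules.items():
    estimates = np.array([estimate_half_life(times) for _ in range(n_boot)])
    print(f"{name:18s} mean {estimates.mean():.2f} h, sd {estimates.std():.2f} h")
```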

  7. Evaluation and optimization of DNA extraction and purification procedures for soil and sediment samples.

    Science.gov (United States)

    Miller, D N; Bryant, J E; Madsen, E L; Ghiorse, W C

    1999-11-01

    We compared and statistically evaluated the effectiveness of nine DNA extraction procedures by using frozen and dried samples of two silt loam soils and a silt loam wetland sediment with different organic matter contents. The effects of different chemical extractants (sodium dodecyl sulfate [SDS], chloroform, phenol, Chelex 100, and guanidinium isothiocyanate), different physical disruption methods (bead mill homogenization and freeze-thaw lysis), and lysozyme digestion were evaluated based on the yield and molecular size of the recovered DNA. Pairwise comparisons of the nine extraction procedures revealed that bead mill homogenization with SDS combined with either chloroform or phenol optimized both the amount of DNA extracted and the molecular size of the DNA (maximum size, 16 to 20 kb). Neither lysozyme digestion before SDS treatment nor guanidine isothiocyanate treatment nor addition of Chelex 100 resin improved the DNA yields. Bead mill homogenization in a lysis mixture containing chloroform, SDS, NaCl, and phosphate-Tris buffer (pH 8) was found to be the best physical lysis technique when DNA yield and cell lysis efficiency were used as criteria. The bead mill homogenization conditions were also optimized for speed and duration with two different homogenizers. Recovery of high-molecular-weight DNA was greatest when we used lower speeds and shorter times (30 to 120 s). We evaluated four different DNA purification methods (silica-based DNA binding, agarose gel electrophoresis, ammonium acetate precipitation, and Sephadex G-200 gel filtration) for DNA recovery and removal of PCR inhibitors from crude extracts. Sephadex G-200 spin column purification was found to be the best method for removing PCR-inhibiting substances while minimizing DNA loss during purification. Our results indicate that for these types of samples, optimum DNA recovery requires brief, low-speed bead mill homogenization in the presence of a phosphate-buffered SDS-chloroform mixture, followed

  8. Optimization of a radiochemistry method for plutonium determination in biological samples

    International Nuclear Information System (INIS)

    Cerchetti, Maria L.; Arguelles, Maria G.

    2005-01-01

    Plutonium has been widely used for civilian and military activities. Nevertheless, the methods to control occupational exposure have not evolved at the same pace, and this remains one of the major challenges for radiological protection practice. Due to the low acceptable incorporation limit, the usual determination is based on indirect methods in urine samples. Our main objective was to optimize a technique used to monitor internal contamination of workers exposed to plutonium isotopes. Different parameters were modified and their influence on the three steps of the method was evaluated. Those which gave the highest yield and feasibility were selected. The method involves: 1) sample concentration (coprecipitation); 2) plutonium purification; and 3) source preparation by electrodeposition. In the coprecipitation phase, changes in temperature and carrier concentration were evaluated. In the ion-exchange separation, changes in the type of resin, the hydroxylamine elution solution (concentration and volume), column length and column recycling were evaluated. Finally, in the electrodeposition phase, we modified the electrolytic solution, pH and time. Measurements were made by liquid scintillation counting and alpha spectrometry (PIPS). We obtained the following yields: 88% for coprecipitation (at 60 C degree with 2 ml of CaHPO 4 ), 71% for ion-exchange (resins AG 1x8 Cl - 100-200 mesh, hydroxylamine 0.1N in HCl 0.2N as eluent, column between 4.5 and 8 cm), and 93% for electrodeposition (H 2 SO 4 -NH 4 OH, 100 minutes and pH from 2 to 2.8). The expanded uncertainty was 30% (95% confidence level), the decision threshold (Lc) was 0.102 Bq/L and the minimum detectable activity was 0.218 Bq/L of urine. We obtained an optimized method to screen workers exposed to plutonium. (author)

  9. Factorial-based response-surface modeling with confidence intervals for optimizing thermal-optical transmission analysis of atmospheric black carbon

    International Nuclear Information System (INIS)

    Conny, J.M.; Norris, G.A.; Gould, T.R.

    2009-01-01

    Thermal-optical transmission (TOT) analysis measures black carbon (BC) in atmospheric aerosol on a fibrous filter. The method pyrolyzes organic carbon (OC) and employs laser light absorption to distinguish BC from the pyrolyzed OC; however, the instrument does not necessarily separate the two physically. In addition, a comprehensive temperature protocol for the analysis based on the Beer-Lambert Law remains elusive. Here, empirical response-surface modeling was used to show how the temperature protocol in TOT analysis can be modified to distinguish pyrolyzed OC from BC based on the Beer-Lambert Law. We determined the apparent specific absorption cross sections for pyrolyzed OC (σ_Char) and BC (σ_BC), which accounted for individual absorption enhancement effects within the filter. Response-surface models of these cross sections were derived from a three-factor central-composite factorial experimental design: temperature and duration of the high-temperature step in the helium phase, and the heating increase in the helium-oxygen phase. The response surface for σ_BC, which varied with instrument conditions, revealed a ridge indicating the correct conditions for OC pyrolysis in helium. The intersection of the σ_BC and σ_Char surfaces indicated the conditions where the cross sections were equivalent, satisfying an important assumption upon which the method relies. 95% confidence interval surfaces defined a confidence region for a range of pyrolysis conditions. Analyses of wintertime samples from Seattle, WA revealed a temperature between 830 °C and 850 °C as most suitable for the helium high-temperature step lasting 150 s. However, a temperature as low as 750 °C could not be rejected statistically.

  10. Does the time interval between antimüllerian hormone serum sampling and initiation of ovarian stimulation affect its predictive ability in in vitro fertilization-intracytoplasmic sperm injection cycles with a gonadotropin-releasing hormone antagonist?

    DEFF Research Database (Denmark)

    Polyzos, Nikolaos P; Nelson, Scott M; Stoop, Dominic

    2013-01-01

    To investigate whether the time interval between serum antimüllerian hormone (AMH) sampling and initiation of ovarian stimulation for in vitro fertilization-intracytoplasmic sperm injection (IVF-ICSI) may affect the predictive ability of the marker for low and excessive ovarian response.

  11. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged surrounding the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented. The two models were then compared. The results revealed that the proposed approach was practicable for optimizing the soil sampling scheme. The optimized configuration captured soil-landscape relationships accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means as well as a theoretical basis for determining sampling configurations and mapping the spatial distribution of soil organic matter at low cost and with high efficiency.
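
    A generic simulated-annealing loop for choosing sample sites conveys the optimization step described above. The candidate sites, the straight "road", and the spread-versus-accessibility objective below are illustrative stand-ins, not the paper's actual criterion or data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Candidate sites: random points; distance to an assumed road (y = 5) as an accessibility cost.
n_candidates, n_select = 200, 13
sites = rng.uniform(0, 10, size=(n_candidates, 2))
road_dist = np.abs(sites[:, 1] - 5.0)

def cost(idx):
    pts = sites[idx]
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    spread_penalty = -d.min(axis=1).mean()     # reward well-spread configurations
    access_penalty = road_dist[idx].mean()     # reward sites close to the road
    return spread_penalty + access_penalty

current = rng.choice(n_candidates, n_select, replace=False)
current_cost = cost(current)
T = 1.0
for step in range(5000):
    proposal = current.copy()
    swap_out = rng.integers(n_select)
    swap_in = rng.integers(n_candidates)
    if swap_in in proposal:
        continue
    proposal[swap_out] = swap_in
    new_cost = cost(proposal)
    # Metropolis acceptance: always accept improvements, sometimes accept worse moves.
    if new_cost < current_cost or rng.random() < np.exp((current_cost - new_cost) / T):
        current, current_cost = proposal, new_cost
    T *= 0.999                                  # geometric cooling schedule

print("final cost:", round(current_cost, 3))
```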

  12. Relationship among RR interval, optimal reconstruction phase, temporal resolution, and image quality of end-systolic reconstruction of coronary CT angiography in patients with high heart rates. In search of the optimal acquisition protocol

    International Nuclear Information System (INIS)

    Sano, Tomonari; Matsutani, Hideyuki; Kondo, Takeshi; Fujimoto, Shinichiro; Sekine, Takako; Arai, Takehiro; Morita, Hitomi; Takase, Shinichi

    2011-01-01

    The purpose of this study is to elucidate the relationship among RR interval (RR), the optimal reconstruction phase, and adequate temporal resolution (TR) to obtain coronary CT angiography images of acceptable quality using 64-multi detector-row CT (MDCT) (Aquilion 64) of end-systolic reconstruction in 407 patients with high heart rates. Image quality was classified into 3 groups [rank A (excellent): 161, rank B (acceptable): 207, and rank C (unacceptable): 39 patients]. The optimal absolute phase (OAP) significantly correlated with RR [OAP (ms)=119-0.286 RR (ms), r=0.832, p<0.0001], and the optimal relative phase (ORP) also significantly correlated with RR [ORP (%)=62-0.023 RR (ms), r=0.656, p<0.0001], and the correlation coefficient of OAP was significantly (p<0.0001) higher than that of ORP. The OAP range (±2 standard deviation (SD)) in which it is highly possible to get a static image was from [119-0.286 RR (ms)-46] to [119-0.286 RR (ms)+46]. The TR was significantly different among ranks A (97±22 ms), B (111±31 ms) and C (135±34 ms). The TR significantly correlated with RR in ranks A (TR=-16+0.149 RR, r=0.767, p<0.0001), B (TR=-15+0.166 RR, r=0.646, p<0.0001), and C (TR=52+0.117 RR, r=0.425, p=0.0069). Rank C was distinguished from ranks A or B by linear discriminate analysis (TR=-46+0.21 RR), and the discriminate rate was 82.6%. In conclusion, both the OAP and adequate TR depend on RR, and the OAP range (±2 SD) can be calculated using the formula [119-0.286 RR (ms)-46] to [119-0.286 RR (ms) +46], and an adequate TR value would be less than (-46+0.21 RR). (author)

  13. A simple optimized microwave digestion method for multielement monitoring in mussel samples

    International Nuclear Information System (INIS)

    Saavedra, Y.; Gonzalez, A.; Fernandez, P.; Blanco, J.

    2004-01-01

    With the aim of obtaining a set of common decomposition conditions allowing the determination of several metals in mussel tissue (Hg by cold vapour atomic absorption spectrometry; Cu and Zn by flame atomic absorption spectrometry; and Cd, Pb, Cr, Ni, As and Ag by electrothermal atomic absorption spectrometry), a factorial experiment was carried out using as factors the sample weight, digestion time and acid addition. It was found that the optimal conditions were 0.5 g of freeze-dried and triturated sample with 6 ml of nitric acid, subjected to microwave heating for 20 min at 180 psi. This pre-treatment, using only one step and one oxidative reagent, was suitable for determining the nine metals studied with no subsequent handling of the digest. It was possible to carry out the atomic absorption determinations using calibrations with aqueous standards and matrix modifiers for cadmium, lead, chromium, arsenic and silver. The accuracy of the procedure was checked using oyster tissue (SRM 1566b) and mussel tissue (CRM 278R) certified reference materials. The method is now used routinely to monitor these metals in wild and cultivated mussels, and has been found to perform well.

  14. Optimized Analytical Method to Determine Gallic and Picric Acids in Pyrotechnic Samples by Using HPLC/UV (Reverse Phase)

    International Nuclear Information System (INIS)

    Garcia Alonso, S.; Perez Pastor, R. M.

    2013-01-01

    A study on the optimization and development of a chromatographic method for the determination of gallic and picric acids in pyrotechnic samples is presented. To this end, both the HPLC analytical conditions with diode-array detection and the extraction step for a selected sample were studied. (Author)

  15. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    Science.gov (United States)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  16. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters, termed hyper-parameters, must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.

  17. Statistical intervals a guide for practitioners

    CERN Document Server

    Hahn, Gerald J

    2011-01-01

    Presents a detailed exposition of statistical intervals and emphasizes applications in industry. The discussion differentiates at an elementary level among different kinds of statistical intervals and gives instruction, with numerous examples and simple math, on how to construct such intervals from sample data. This includes confidence intervals to contain a population percentile, confidence intervals on the probability of meeting a specified threshold value, and prediction intervals to include an observation in a future sample. It also has an appendix containing computer subroutines for nonparametric statistical intervals.
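
    A short example shows the distinction between interval types computed from one small sample: a confidence interval for the mean, a prediction interval for a single future observation, and a distribution-free interval based on the sample extremes. The normal-theory formulas and the binomial coverage calculation below are standard results stated from general knowledge, not excerpts from the book.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = rng.normal(50.0, 4.0, size=25)          # sample data
n, mean, s = x.size, x.mean(), x.std(ddof=1)
t = stats.t.ppf(0.975, df=n - 1)            # two-sided 95%

# Confidence interval for the population mean.
ci = (mean - t * s / np.sqrt(n), mean + t * s / np.sqrt(n))

# Prediction interval for a single future observation.
pi = (mean - t * s * np.sqrt(1 + 1 / n), mean + t * s * np.sqrt(1 + 1 / n))

# Distribution-free interval from the sample extremes; its confidence of covering
# at least a proportion p of the population follows from the binomial distribution.
p = 0.75
coverage_conf = 1 - p ** n - n * (1 - p) * p ** (n - 1)

print(f"95% CI for mean: ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"95% prediction interval: ({pi[0]:.2f}, {pi[1]:.2f})")
print(f"(min, max) covers >= {p:.0%} of population with confidence {coverage_conf:.3f}")
```

    The prediction interval is wider than the confidence interval because it must account for the variability of a single future value in addition to the uncertainty in the estimated mean.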

  18. Testing of Alignment Parameters for Ancient Samples: Evaluating and Optimizing Mapping Parameters for Ancient Samples Using the TAPAS Tool

    Directory of Open Access Journals (Sweden)

    Ulrike H. Taron

    2018-03-01

    Full Text Available High-throughput sequence data retrieved from ancient or other degraded samples has led to unprecedented insights into the evolutionary history of many species, but the analysis of such sequences also poses specific computational challenges. The most commonly used approach involves mapping sequence reads to a reference genome. However, this process becomes increasingly challenging with an elevated genetic distance between target and reference or with the presence of contaminant sequences with high sequence similarity to the target species. The evaluation and testing of mapping efficiency and stringency are thus paramount for the reliable identification and analysis of ancient sequences. In this paper, we present 'TAPAS' (Testing of Alignment Parameters for Ancient Samples), a computational tool that enables the systematic testing of mapping tools for ancient data by simulating sequence data reflecting the properties of an ancient dataset and performing test runs using the mapping software and parameter settings of interest. We showcase TAPAS by using it to assess and improve the mapping strategy for a degraded sample from a banded linsang (Prionodon linsang), for which no closely related reference is currently available. This enables a 1.8-fold increase in the number of mapped reads without sacrificing mapping specificity. The increase in mapped reads effectively reduces the need for additional sequencing, thus making more economical use of time, resources, and sample material.

  19. Determination of total concentration of chemically labeled metabolites as a means of metabolome sample normalization and sample loading optimization in mass spectrometry-based metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2012-12-18

    For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting materials from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach of determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting materials for MS analysis as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at the detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with (12)C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with (13)C-dansylation. Equal amounts of the (12)C

  20. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    Science.gov (United States)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iterate is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one
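
    For the purely linear-inequality case (which already includes the near-optimal tolerance constraint on the objective), the hit-and-run step can be sketched directly; the slice-sampling extension for non-linear constraints described above would replace the uniform draw along the chord. The toy problem, the 10% tolerance and the starting point below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Near-optimal region as a polytope A x <= b in 2D.
# Original constraints: x >= 0, x1 + x2 <= 4; objective f = x1 + 2*x2 (maximized, f_opt = 8);
# near-optimal constraint: f >= 0.9 * f_opt, i.e. -x1 - 2*x2 <= -7.2.
A = np.array([[-1.0, 0.0],
              [0.0, -1.0],
              [1.0, 1.0],
              [-1.0, -2.0]])
b = np.array([0.0, 0.0, 4.0, -0.9 * 8.0])

def hit_and_run(x0, n_samples=1000):
    """Sample the polytope {x : A x <= b} starting from an interior point x0."""
    x, samples = np.array(x0, dtype=float), []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)
        # Feasible chord: each row A_i (x + t d) <= b_i bounds the run length t.
        ad, slack = A @ d, b - A @ x
        t_hi = np.min(slack[ad > 1e-12] / ad[ad > 1e-12], initial=np.inf)
        t_lo = np.max(slack[ad < -1e-12] / ad[ad < -1e-12], initial=-np.inf)
        x = x + rng.uniform(t_lo, t_hi) * d       # run a random distance along the chord
        samples.append(x.copy())
    return np.array(samples)

alternatives = hit_and_run(x0=[0.3, 3.6], n_samples=2000)
obj = alternatives @ np.array([1.0, 2.0])
print("sampled", len(alternatives), "near-optimal alternatives;",
      "objective range:", obj.min().round(2), "to", obj.max().round(2))
```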

  1. Extended Endoscopic Endonasal Resection of a Suprasellar and Third Ventricular Retrochiasmatic Craniopharyngioma with a Narrow Pituitary Gland-Optic Chiasm Interval: Techniques to Optimize Resection.

    Science.gov (United States)

    Kenning, Tyler J; Pinheiro-Neto, Carlos D

    2018-04-01

    The extended endoscopic endonasal approach can be utilized to surgically treat pathology within the suprasellar space. This relies on a sufficient corridor and interval between the superior aspect of the pituitary gland and the optic chiasm. Tumors located in the retrochiasmatic space and within the third ventricle, however, may not have a widened interval through which to work. With mass effect on the superior and posterior aspect of the optic chiasm, the corridor between the chiasm and the pituitary gland might even be further narrowed. This may negate the possibility of utilizing the endoscopic endonasal approach for the management of pathology in this location. We present a case of a retrochiasmatic craniopharyngioma with a narrow resection corridor that was treated with the extended endoscopic approach and we review techniques to potentially overcome this limitation. The link to the video can be found at: https://youtu.be/ogRZj-aBqeQ .

  2. The Proteome of Ulcerative Colitis in Colon Biopsies from Adults - Optimized Sample Preparation and Comparison with Healthy Controls.

    Science.gov (United States)

    Schniers, Armin; Anderssen, Endre; Fenton, Christopher Graham; Goll, Rasmus; Pasing, Yvonne; Paulssen, Ruth Hracky; Florholmen, Jon; Hansen, Terkel

    2017-12-01

    The purpose of the study was to optimize the sample preparation and to further use the improved sample preparation to identify proteome differences between inflamed ulcerative colitis tissue from untreated adults and healthy controls. To optimize the sample preparation, we studied the effect of adding different detergents to a urea-containing lysis buffer for a Lys-C/trypsin tandem digestion. With the optimized method, we prepared clinical samples from six ulcerative colitis patients and six healthy controls and analysed them by LC-MS/MS. We examined the acquired data to identify differences between the states. We improved the protein extraction and the number of protein identifications by utilizing a urea- and sodium deoxycholate-containing buffer. Comparing ulcerative colitis and healthy tissue, we found 168 of 2366 identified proteins differentially abundant. Inflammatory proteins are more abundant in ulcerative colitis, while proteins related to anion transport and mucus production are less abundant. A high proportion of S100 proteins is differentially abundant, notably with both up-regulated and down-regulated proteins. The optimized sample preparation method will improve future proteomic studies on colon mucosa. The observed protein abundance changes and their enrichment in various groups improve our understanding of ulcerative colitis on the protein level. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Development and optimization of the determination of pharmaceuticals in water samples by SPE and HPLC with diode-array detection.

    Science.gov (United States)

    Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra

    2013-09-01

    This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE and extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges in extracting the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, the elution solvent, and the sorbent mass were optimized. In addition to optimizing the SPE procedure, the optimal HPLC column was selected from stationary phases of different manufacturers. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases except ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good intra- and inter-day precision with RSD below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, the proposed methods use multi-objective optimization Pareto front information to guide the selection of replicas for exchange. Using the Pareto front to select lower-energy conformations as representative replica structures facilitates convergence over the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower-energy conformations, for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 test cases. An in-depth comparison between the MC, REMC, multi-objective optimization REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
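
    A minimal sketch of the replica-exchange idea with Pareto-guided replica selection follows. It assumes a generic two-objective score (a total energy plus a second, hypothetical interface score) and a generic perturbation move; it is not the RosettaLigand implementation.

```python
import math, random

def dominates(a, b):
    """Pareto dominance for minimization over two objectives."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def remc(objectives, perturb, x0, temps, n_sweeps, rng=random.Random(0)):
    replicas = [x0[:] for _ in temps]
    scores = [objectives(r) for r in replicas]
    for _ in range(n_sweeps):
        # Metropolis move within each replica at its own temperature (energy = first objective)
        for i, T in enumerate(temps):
            cand = perturb(replicas[i], rng)
            s = objectives(cand)
            if s[0] <= scores[i][0] or rng.random() < math.exp((scores[i][0] - s[0]) / T):
                replicas[i], scores[i] = cand, s
        # Pareto front over all replicas: prefer non-dominated replicas as swap partners
        front = [i for i, s in enumerate(scores)
                 if not any(dominates(scores[j], s) for j in range(len(scores)) if j != i)]
        i = rng.choice(front)
        j = min(len(temps) - 1, i + 1)
        if i != j:
            dE = scores[i][0] - scores[j][0]
            dB = 1.0 / temps[i] - 1.0 / temps[j]
            if rng.random() < min(1.0, math.exp(dB * dE)):   # standard replica-swap criterion
                replicas[i], replicas[j] = replicas[j], replicas[i]
                scores[i], scores[j] = scores[j], scores[i]
    return replicas, scores

# Toy usage with an assumed 2-D "conformation" and two objectives (energy, interface score).
obj = lambda x: (sum(v * v for v in x), abs(x[0] - x[1]))
mv = lambda x, rng: [v + rng.gauss(0, 0.2) for v in x]
reps, scs = remc(obj, mv, x0=[1.0, -1.0], temps=[0.5, 1.0, 2.0, 4.0], n_sweeps=200)
```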

  5. Subdivision, Sampling, and Initialization Strategies for Simplical Branch and Bound in Global Optimization

    DEFF Research Database (Denmark)

    Clausen, Jens; Zilinskas, A.

    2002-01-01

    We consider the problem of optimizing a Lipschitzian function. The branch and bound technique is a well-known solution method, and the key components for this are the subdivision scheme, the bound calculation scheme, and the initialization. For Lipschitzian optimization, the bound calculations are...

  6. Hyphenation of optimized microfluidic sample preparation with nano liquid chromatography for faster and greener alkaloid analysis

    NARCIS (Netherlands)

    Shen, Y.; Beek, van T.A.; Zuilhof, H.; Chen, B.

    2013-01-01

    A glass liquid–liquid extraction (LLE) microchip with three parallel 3.5 cm long and 100 µm wide interconnecting channels was optimized in terms of more environmentally friendly (greener) solvents and extraction efficiency. In addition, the optimized chip was successfully hyphenated with nano-liquid

  7. The optimal amount and allocation of sampling effort for plant health inspection

    NARCIS (Netherlands)

    Surkov, I.; Oude Lansink, A.G.J.M.; Werf, van der W.

    2009-01-01

    Plant import inspection can prevent the introduction of exotic pests and diseases, thereby averting economic losses. We explore the optimal allocation of a fixed budget, taking into account risk differentials, and the optimal-sized budget to minimise total pest costs. A partial-equilibrium market

  8. CLSI-based transference of the CALIPER database of pediatric reference intervals from Abbott to Beckman, Ortho, Roche and Siemens Clinical Chemistry Assays: direct validation using reference samples from the CALIPER cohort.

    Science.gov (United States)

    Estey, Mathew P; Cohen, Ashley H; Colantonio, David A; Chan, Man Khun; Marvasti, Tina Binesh; Randell, Edward; Delvin, Edgard; Cousineau, Jocelyne; Grey, Vijaylaxmi; Greenway, Donald; Meng, Qing H; Jung, Benjamin; Bhuiyan, Jalaluddin; Seccombe, David; Adeli, Khosrow

    2013-09-01

    The CALIPER program recently established a comprehensive database of age- and sex-stratified pediatric reference intervals for 40 biochemical markers. However, this database was only directly applicable for Abbott ARCHITECT assays. We therefore sought to expand the scope of this database to biochemical assays from other major manufacturers, allowing for a much wider application of the CALIPER database. Based on CLSI C28-A3 and EP9-A2 guidelines, CALIPER reference intervals were transferred (using specific statistical criteria) to assays performed on four other commonly used clinical chemistry platforms including Beckman Coulter DxC800, Ortho Vitros 5600, Roche Cobas 6000, and Siemens Vista 1500. The resulting reference intervals were subjected to a thorough validation using 100 reference specimens (healthy community children and adolescents) from the CALIPER bio-bank, and all testing centers participated in an external quality assessment (EQA) evaluation. In general, the transferred pediatric reference intervals were similar to those established in our previous study. However, assay-specific differences in reference limits were observed for many analytes, and in some instances were considerable. The results of the EQA evaluation generally mimicked the similarities and differences in reference limits among the five manufacturers' assays. In addition, the majority of transferred reference intervals were validated through the analysis of CALIPER reference samples. This study greatly extends the utility of the CALIPER reference interval database which is now directly applicable for assays performed on five major analytical platforms in clinical use, and should permit the worldwide application of CALIPER pediatric reference intervals. Copyright © 2013 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.

  9. Digitally Available Interval-Specific Rock-Sample Data Compiled from Historical Records, Nevada Test Site and Vicinity, Nye County, Nevada.

    Energy Technology Data Exchange (ETDEWEB)

    David B. Wood

    2007-10-24

    Between 1951 and 1992, 828 underground tests were conducted on the Nevada Test Site, Nye County, Nevada. Prior to and following these nuclear tests, holes were drilled and mined to collect rock samples. These samples are organized and stored by depth of borehole or drift at the U.S. Geological Survey Core Library and Data Center at Mercury, Nevada, on the Nevada Test Site. From these rock samples, rock properties were analyzed and interpreted and compiled into project files and in published reports that are maintained at the Core Library and at the U.S. Geological Survey office in Henderson, Nevada. These rock-sample data include lithologic descriptions, physical and mechanical properties, and fracture characteristics. Hydraulic properties also were compiled from holes completed in the water table. Rock samples are irreplaceable because pre-test, in-place conditions cannot be recreated and samples cannot be recollected from the many holes destroyed by testing. Documenting these data in a published report will ensure availability for future investigators.

  10. Digitally Available Interval-Specific Rock-Sample Data Compiled from Historical Records, Nevada Test Site and Vicinity, Nye County, Nevada

    Energy Technology Data Exchange (ETDEWEB)

    David B. Wood

    2009-10-08

    Between 1951 and 1992, underground nuclear weapons testing was conducted at 828 sites on the Nevada Test Site, Nye County, Nevada. Prior to and following these nuclear tests, holes were drilled and mined to collect rock samples. These samples are organized and stored by depth of borehole or drift at the U.S. Geological Survey Core Library and Data Center at Mercury, Nevada, on the Nevada Test Site. From these rock samples, rock properties were analyzed and interpreted and compiled into project files and in published reports that are maintained at the Core Library and at the U.S. Geological Survey office in Henderson, Nevada. These rock-sample data include lithologic descriptions, physical and mechanical properties, and fracture characteristics. Hydraulic properties also were compiled from holes completed in the water table. Rock samples are irreplaceable because pre-test, in-place conditions cannot be recreated and samples cannot be recollected from the many holes destroyed by testing. Documenting these data in a published report will ensure availability for future investigators.

  11. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  12. Optimizing the interval between G-CSF therapy and F-18 FDG PET imaging in children and young adults receiving chemotherapy for sarcoma

    International Nuclear Information System (INIS)

    Trout, Andrew T.; Sharp, Susan E.; Gelfand, Michael J.; Turpin, Brian K.; Zhang, Bin

    2015-01-01

    Granulocyte colony-stimulating factors (G-CSF) speed recovery from chemotherapy-induced myelosuppression but the marrow stimulation they cause can interfere with interpretation of F-18 fluorodeoxyglucose positron emission tomography (F-18 FDG PET) exams. To assess the frequency of interfering G-CSF-induced bone marrow activity on FDG PET imaging in children and young adults with Ewing sarcoma and rhabdomyosarcoma and to define an interval between G-CSF administration and FDG PET imaging that limits marrow interference. Blinded, retrospective review of FDG PET exams performed in patients treated with long-acting G-CSF as part of their chemotherapeutic regimen. Exams were subjectively scored by two reviewers (R1 and R2) who assessed the level of marrow uptake of FDG and measured standardized uptake values in the marrow, liver, spleen and blood pool. FDG PET findings were correlated with time since G-CSF administration and with blood cell counts. Thirty-eight FDG PET exams performed in 17 patients were reviewed with 47.4% (18/38) of exams having marrow uptake of FDG sufficient to interfere with image interpretation. Primary predictors of marrow uptake of FDG were patient age (P = 0.0037) and time since G-CSF exposure (P = 0.0028 for subjective marrow uptake of FDG, P = 0.008 [R1] and P = 0.004 [R2] for measured maximum standardized uptake value (SUVmax)). The median interval between G-CSF administration and PET imaging in cases with marrow activity considered normal or not likely to interfere was 19.5 days (range: 7-55 days). In pediatric and young adult patients with Ewing sarcoma and rhabdomyosarcoma, an interval of 20 days between administration of the long-acting form of G-CSF and FDG PET imaging should limit interference by stimulated marrow. (orig.)

  13. Optimizing the interval between G-CSF therapy and F-18 FDG PET imaging in children and young adults receiving chemotherapy for sarcoma

    Energy Technology Data Exchange (ETDEWEB)

    Trout, Andrew T.; Sharp, Susan E.; Gelfand, Michael J. [Cincinnati Children's Hospital Medical Center, Department of Radiology, Cincinnati, OH (United States); Turpin, Brian K. [Cincinnati Children's Hospital Medical Center, Cancer and Blood Diseases Institute, Division of Oncology, Cincinnati, OH (United States); Zhang, Bin [Cincinnati Children's Hospital Medical Center, Division of Biostatistics and Epidemiology, Cincinnati, OH (United States)

    2015-07-15

    Granulocyte colony-stimulating factors (G-CSF) speed recovery from chemotherapy-induced myelosuppression but the marrow stimulation they cause can interfere with interpretation of F-18 fluorodeoxyglucose positron emission tomography (F-18 FDG PET) exams. To assess the frequency of interfering G-CSF-induced bone marrow activity on FDG PET imaging in children and young adults with Ewing sarcoma and rhabdomyosarcoma and to define an interval between G-CSF administration and FDG PET imaging that limits marrow interference. Blinded, retrospective review of FDG PET exams performed in patients treated with long-acting G-CSF as part of their chemotherapeutic regimen. Exams were subjectively scored by two reviewers (R1 and R2) who assessed the level of marrow uptake of FDG and measured standardized uptake values in the marrow, liver, spleen and blood pool. FDG PET findings were correlated with time since G-CSF administration and with blood cell counts. Thirty-eight FDG PET exams performed in 17 patients were reviewed with 47.4% (18/38) of exams having marrow uptake of FDG sufficient to interfere with image interpretation. Primary predictors of marrow uptake of FDG were patient age (P = 0.0037) and time since G-CSF exposure (P = 0.0028 for subjective marrow uptake of FDG, P = 0.008 [R1] and P = 0.004 [R2] for measured maximum standardized uptake value (SUVmax)). The median interval between G-CSF administration and PET imaging in cases with marrow activity considered normal or not likely to interfere was 19.5 days (range: 7-55 days). In pediatric and young adult patients with Ewing sarcoma and rhabdomyosarcoma, an interval of 20 days between administration of the long-acting form of G-CSF and FDG PET imaging should limit interference by stimulated marrow. (orig.)

  14. Transmission characteristics and optimal diagnostic samples to detect an FMDV infection in vaccinated and non-vaccinated sheep

    NARCIS (Netherlands)

    Eble, P.L.; Orsel, K.; Kluitenberg-van Hemert, F.; Dekker, A.

    2015-01-01

    We wanted to quantify transmission of FMDV Asia-1 in sheep and to evaluate which samples would be optimal for detection of an FMDV infection in sheep. For this, we used 6 groups of 4 non-vaccinated and 6 groups of 4 vaccinated sheep. In each group 2 sheep were inoculated and contact exposed to 2

  15. Adding high-intensity interval training to conventional training modalities: optimizing health-related outcomes during chemotherapy for breast cancer: the OptiTrain randomized controlled trial.

    Science.gov (United States)

    Mijwel, Sara; Backman, Malin; Bolam, Kate A; Jervaeus, Anna; Sundberg, Carl Johan; Margolin, Sara; Browall, Maria; Rundqvist, Helene; Wengström, Yvonne

    2018-02-01

    Exercise training is an effective and safe way to counteract cancer-related fatigue (CRF) and to improve health-related quality of life (HRQoL). High-intensity interval training has proven beneficial for the health of clinical populations. The aim of this randomized controlled trial was to compare the effects of resistance and high-intensity interval training (RT-HIIT), and moderate-intensity aerobic and high-intensity interval training (AT-HIIT), to usual care (UC) in women with breast cancer undergoing chemotherapy. The primary endpoint was CRF and the secondary endpoints were HRQoL and cancer treatment-related symptoms. Two hundred and forty women scheduled to undergo chemotherapy were randomized to supervised RT-HIIT, AT-HIIT, or UC. Measurements were performed at baseline and at 16 weeks. Questionnaires included the Piper Fatigue Scale, EORTC-QLQ-C30, and Memorial Symptom Assessment Scale. The RT-HIIT group was superior to UC for CRF: total CRF (p = 0.02), behavior/daily life (p = 0.01), and sensory/physical (p = 0.03) CRF. Role functioning significantly improved while cognitive functioning was unchanged for RT-HIIT compared to declines shown in the UC group (p = 0.04). AT-HIIT significantly improved emotional functioning versus UC (p = 0.01) and was superior to UC for pain symptoms (p = 0.03). RT-HIIT reported a reduced symptom burden, while AT-HIIT remained stable, compared to deteriorations shown by UC; HIIT was superior to UC for total symptoms. HIIT was effective in preventing increases in CRF and in reducing symptom burden for patients during chemotherapy for breast cancer. These findings add to a growing body of evidence supporting the inclusion of structured exercise prescriptions, including HIIT, as a vital component of cancer rehabilitation. Clinicaltrials.gov Registration Number: NCT02522260.

  16. Ionizing radiation as optimization method for aluminum detection from drinking water samples

    International Nuclear Information System (INIS)

    Bazante-Yamguish, Renata; Geraldo, Aurea Beatriz C.; Moura, Eduardo; Manzoli, Jose Eduardo

    2013-01-01

    The presence of organic compounds in water samples is often responsible for metal complexation; depending on the analytic method, the organic fraction may mask the real values of metal concentration. Pre-treatment of the samples is advised when organic compounds are interfering agents, and sample mineralization may be accomplished by several chemical and/or physical methods. Here, ionizing radiation was used as an advanced oxidation process (AOP) for sample pre-treatment before the analytic determination of total and dissolved aluminum by ICP-OES in drinking water samples from wells and a spring source located in the Billings dam region. Before irradiation, the spring-source and well samples showed aluminum levels of 0.020 mg/l and 0.2 mg/l, respectively; after irradiation, both samples showed an 8-fold increase in aluminum concentration. These results are discussed considering other physical and chemical parameters and peculiarities of the sample sources. (author)

  17. Optimal sampling period of the digital control system for the nuclear power plant steam generator water level control

    International Nuclear Information System (INIS)

    Hur, Woo Sung; Seong, Poong Hyun

    1995-01-01

    A great effort has been made to improve nuclear plant control systems through digital technologies, and a long-term schedule for the control system upgrade has been prepared with an aim to implementation in the next generation of nuclear plants. For a digital control system, it is important to choose the sampling period used in analysis and design, because the performance and stability of a digital control system depend on its value. There is, however, currently no systematic method used universally for determining the sampling period of a digital control system. Traditionally, the sampling frequency is selected as 20 to 30 times the bandwidth of the analog control system that has the same configuration and parameters as the digital one. In this paper, a new method to select the sampling period is suggested which takes into account the performance as well as the stability of the digital control system. Using the Irving model of the steam generator, the optimal sampling period of an assumed digital control system for steam generator level control is estimated and then verified in the digital control simulation system for the Kori-2 nuclear power plant steam generator level control. Consequently, we conclude that the optimal sampling period of the digital control system for Kori-2 steam generator level control is 1 second for all power ranges. 7 figs., 3 tabs., 8 refs. (Author)
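
    As a quick illustration of the traditional 20-30 times bandwidth rule of thumb mentioned above, the sketch below converts an assumed analog-loop bandwidth into a candidate range of sampling periods; the bandwidth value is a placeholder, not a figure taken from the Kori-2 study.

```python
# Rule-of-thumb sampling-period selection: sampling frequency 20-30 times the
# closed-loop bandwidth of the equivalent analog control system.

def sampling_period_range(bandwidth_hz, low_factor=20.0, high_factor=30.0):
    """Return (longest, shortest) candidate sampling periods in seconds."""
    return 1.0 / (low_factor * bandwidth_hz), 1.0 / (high_factor * bandwidth_hz)

t_max, t_min = sampling_period_range(bandwidth_hz=0.05)   # assumed 0.05 Hz loop bandwidth
print(f"candidate sampling period between {t_min:.2f} s and {t_max:.2f} s")
```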

  18. Optimization of sample preparation variables for wedelolactone from Eclipta alba using Box-Behnken experimental design followed by HPLC identification.

    Science.gov (United States)

    Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S

    2013-07-01

    Coumestan wedelolactone is an important phytocomponent from Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study is to develop and optimize supercritical carbon dioxide assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. The response surface methodology was employed to study the optimization of sample preparation using supercritical carbon dioxide for wedelolactone from E. alba (L.) Hassk. The optimized sample preparation involves the investigation of quantitative effects of sample preparation parameters viz. operating pressure, temperature, modifier concentration and time on yield of wedelolactone using Box-Behnken design. The wedelolactone content was determined using validated HPLC methodology. The experimental data were fitted to second-order polynomial equation using multiple regression analysis and analyzed using the appropriate statistical method. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44% and extraction time, 60 min. Optimum extraction conditions demonstrated wedelolactone yield of 15.37 ± 0.63 mg/100 g E. alba (L.) Hassk, which was in good agreement with the predicted values. Temperature and modifier concentration showed significant effect on the wedelolactone yield. The supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet assisted extraction method. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
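
    The response-surface step described above can be sketched in a few lines: fit a second-order polynomial to coded Box-Behnken runs and search the coded factor space for the predicted optimum. The design matrix and yields below are simulated stand-ins rather than the study's data.

```python
import numpy as np
from itertools import combinations

def quadratic_features(X):
    """Columns: 1, x_i, x_i^2, and all pairwise interactions x_i * x_j."""
    n, k = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(k)] + [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(27, 4))                              # stand-in coded runs
y = 15 - ((X - 0.3) ** 2).sum(axis=1) + rng.normal(0, 0.1, 27)    # simulated yield

beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)  # second-order fit

# Grid search over the coded region for the predicted optimum
grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 21)] * 4)).reshape(4, -1).T
pred = quadratic_features(grid) @ beta
print("predicted optimum (coded factor levels):", grid[np.argmax(pred)])
```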

  19. Sampling optimization trade-offs for long-term monitoring of gamma dose rates

    NARCIS (Netherlands)

    Melles, S.J.; Heuvelink, G.B.M.; Twenhöfel, C.J.W.; Stöhlker, U.

    2008-01-01

    This paper applies a recently developed optimization method to examine the design of networks that monitor radiation under routine conditions. Annual gamma dose rates were modelled by combining regression with interpolation of the regression residuals using spatially exhaustive predictors and an

  20. Counting, enumerating and sampling of execution plans in a cost-based query optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    1999-01-01

    textabstractTesting an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on

  1. Counting, Enumerating and Sampling of Execution Plans in a Cost-Based Query Optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    2000-01-01

    textabstractTesting an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on the query

  2. Evaluation of dynamically dimensioned search algorithm for optimizing SWAT by altering sampling distributions and searching range

    Science.gov (United States)

    The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...

  3. Interval selection with machine-dependent intervals

    OpenAIRE

    Bohmova K.; Disser Y.; Mihalak M.; Widmayer P.

    2013-01-01

    We study an offline interval scheduling problem where every job has exactly one associated interval on every machine. To schedule a set of jobs, exactly one of the intervals associated with each job must be selected, and the intervals selected on the same machine must not intersect. We show that deciding whether all jobs can be scheduled is NP-complete already in various simple cases. In particular, by showing the NP-completeness for the case when all the intervals associated with the same job...

  4. Relationships between depressive symptoms and perceived social support, self-esteem, & optimism in a sample of rural adolescents.

    Science.gov (United States)

    Weber, Scott; Puskar, Kathryn Rose; Ren, Dianxu

    2010-09-01

    Stress, developmental changes and social adjustment problems can be significant in rural teens. Screening for psychosocial problems by teachers and other school personnel is infrequent but can be a useful health promotion strategy. We used a cross-sectional survey descriptive design to examine the inter-relationships between depressive symptoms and perceived social support, self-esteem, and optimism in a sample of rural school-based adolescents. Depressive symptoms were negatively correlated with peer social support, family social support, self-esteem, and optimism. Findings underscore the importance for teachers and other school staff to provide health education. Results can be used as the basis for education to improve optimism, self-esteem, social supports and, thus, depression symptoms of teens.

  5. Optimization of Sample Preparation for the Identification and Quantification of Saxitoxin in Proficiency Test Mussel Sample using Liquid Chromatography-Tandem Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Kirsi Harju

    2015-11-01

    Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods for mussel samples were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards in toxic mussel extracts. The results were within the z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD).

  6. Optimizing detection of noble gas emission at a former UNE site: sample strategy, collection, and analysis

    Science.gov (United States)

    Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.

    2013-12-01

    Underground nuclear tests may be first detected by seismic or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and methods used for surface and subsurface environmental sampling of gases can be used for an OSI scenario, on-site sampling conditions, required sampling volumes and establishment of background concentrations of noble gases require development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion site located in welded volcanic tuff. A mixture of SF-6, Xe127 and Ar37 was metered into 4400 m3 of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. Effectiveness of various sampling approaches and the results of tracer gas measurements will be presented.

  7. Photogeneration of reactive transient species upon irradiation of natural water samples: Formation quantum yields in different spectral intervals, and implications for the photochemistry of surface waters.

    Science.gov (United States)

    Marchisio, Andrea; Minella, Marco; Maurino, Valter; Minero, Claudio; Vione, Davide

    2015-04-15

    Chromophoric dissolved organic matter (CDOM) in surface waters is a photochemical source of several transient species such as CDOM triplet states ((3)CDOM*), singlet oxygen ((1)O2) and the hydroxyl radical (OH). By irradiation of lake water samples, it is shown here that the quantum yields for the formation of these transients by CDOM vary depending on the irradiation wavelength range, in the order UVB > UVA > blue. A possible explanation is that radiation at longer wavelengths is preferentially absorbed by the larger CDOM fractions, which show lesser photoactivity compared to smaller CDOM moieties. The quantum yield variations in different spectral ranges were definitely more marked for (3)CDOM* and OH compared to (1)O2. The decrease of the quantum yields with increasing wavelength has important implications for the photochemistry of surface waters, because long-wavelength radiation penetrates deeper in water columns compared to short-wavelength radiation. The average steady-state concentrations of the transients ((3)CDOM*, (1)O2 and OH) were modelled in water columns of different depths, based on the experimentally determined wavelength trends of the formation quantum yields. Important differences were found between such modelling results and those obtained in a wavelength-independent quantum yield scenario. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Towards an optimal sampling strategy for assessing genetic variation within and among white clover (Trifolium repens L.) cultivars using AFLP

    Directory of Open Access Journals (Sweden)

    Khosro Mehdi Khanlou

    2011-01-01

    Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants, such as the aforementioned, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using fewer than 15 per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of total intra-cultivar genetic variation was covered. Based on AMOVA, a 20-cultivar sample was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.

  9. Optimal sample preparation for nanoparticle metrology (statistical size measurements) using atomic force microscopy

    International Nuclear Information System (INIS)

    Hoo, Christopher M.; Doan, Trang; Starostin, Natasha; West, Paul E.; Mecartney, Martha L.

    2010-01-01

    Optimal deposition procedures are determined for nanoparticle size characterization by atomic force microscopy (AFM). Accurate nanoparticle size distribution analysis with AFM requires non-agglomerated nanoparticles on a flat substrate. The deposition of polystyrene (100 nm), silica (300 and 100 nm), gold (100 nm), and CdSe quantum dot (2-5 nm) nanoparticles by spin coating was optimized for size distribution measurements by AFM. Factors influencing deposition include spin speed, concentration, solvent, and pH. A comparison using spin coating, static evaporation, and a new fluid cell deposition method for depositing nanoparticles is also made. The fluid cell allows for a more uniform and higher density deposition of nanoparticles on a substrate at laminar flow rates, making nanoparticle size analysis via AFM more efficient and also offers the potential for nanoparticle analysis in liquid environments.

  10. Optimizing human semen cryopreservation by reducing test vial volume and repetitive test vial sampling

    DEFF Research Database (Denmark)

    Jensen, Christian F S; Ohl, Dana A; Parker, Walter R

    2015-01-01

    OBJECTIVE: To investigate optimal test vial (TV) volume, utility and reliability of TVs, intermediate temperature exposure (-88°C to -93°C) before cryostorage, cryostorage in nitrogen vapor (VN2) and liquid nitrogen (LN2), and long-term stability of VN2 cryostorage of human semen. DESIGN......: Prospective clinical laboratory study. SETTING: University assisted reproductive technology (ART) laboratory. PATIENT(S): A total of 594 patients undergoing semen analysis and cryopreservation. INTERVENTION(S): Semen analysis, cryopreservation with different intermediate steps and in different volumes (50......-1,000 μL), and long-term storage in LN2 or VN2. MAIN OUTCOME MEASURE(S): Optimal TV volume, prediction of cryosurvival (CS) in ART procedure vials (ARTVs) with pre-freeze semen parameters and TV CS, post-thaw motility after two- or three-step semen cryopreservation and cryostorage in VN2 and LN2. RESULT...

  11. Optimism and self-esteem are related to sleep. Results from a large community-based sample.

    Science.gov (United States)

    Lemola, Sakari; Räikkönen, Katri; Gomez, Veronica; Allemand, Mathias

    2013-12-01

    There is evidence that positive personality characteristics, such as optimism and self-esteem, are important for health. Less is known about possible determinants of positive personality characteristics. The aim was to test the relationship of optimism and self-esteem with insomnia symptoms and sleep duration. Sleep parameters, optimism, and self-esteem were assessed by self-report in a community-based sample of 1,805 adults aged between 30 and 84 years in the USA. Moderation of the relation between sleep and positive characteristics by gender and age, as well as potential confounding of the association by depressive disorder, was tested. Individuals with insomnia symptoms scored lower on optimism and self-esteem largely independent of age and sex, controlling for symptoms of depression and sleep duration. Short sleep duration was related to low optimism and self-esteem when compared to individuals sleeping 7-8 h, controlling for depressive symptoms. Long sleep duration (>9 h) was also related to low optimism and self-esteem independent of age and sex. Good and sufficient sleep is associated with positive personality characteristics. This relationship is independent of the association between poor sleep and depression.

  12. A Counterexample on Sample-Path Optimality in Stable Markov Decision Chains with the Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2014-01-01

    Roč. 163, č. 2 (2014), s. 674-684 ISSN 0022-3239 Grant - others:PSF Organization(US) 012/300/02; CONACYT (México) and ASCR (Czech Republic)(MX) 171396 Institutional support: RVO:67985556 Keywords : Strong sample-path optimality * Lyapunov function condition * Stationary policy * Expected average reward criterion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.509, year: 2014 http://library.utia.cas.cz/separaty/2014/E/sladky-0432661.pdf

  13. Optimization Extracting Technology of Cynomorium songaricum Rupr. Saponins by Ultrasonic and Determination of Saponins Content in Samples with Different Source

    OpenAIRE

    Xiaoli Wang; Qingwei Wei; Xinqiang Zhu; Chunmei Wang; Yonggang Wang; Peng Lin; Lin Yang

    2015-01-01

    The extraction process was optimized by single-factor and orthogonal L9(34) experiments, and the content determination was validated methodologically. The optimum ultrasonic extraction conditions were: ethanol concentration of 75%, ultrasonic power of 420 W, solid-liquid ratio of 1:15, extraction duration of 45 min, extraction temperature of 90°C, and two extraction cycles. Saponins content in Guazhou samples was significantly higher than those in Xinjiang and Inner Mongolia. Meanwhile, G...

  14. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    Science.gov (United States)

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. In contrast to the purely acidic extraction used in many existing studies, the results indicated that antibiotics with low pKa values were extracted more efficiently under acidic conditions, whereas antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  15. Interval methods: An introduction

    DEFF Research Database (Denmark)

    Achenie, L.E.K.; Kreinovich, V.; Madsen, Kaj

    2006-01-01

    This chapter contains selected papers presented at the Minisymposium on Interval Methods of the PARA'04 Workshop "State-of-the-Art in Scientific Computing". The emphasis of the workshop was on high-performance computing (HPC). The ongoing development of ever more advanced computers provides the potential for solving increasingly difficult computational problems. However, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention. A main concern of HPC is the development of software that optimizes the performance of a given computer. An important characteristic of computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. However, in many practical situations, we do not know the probability distributions of different...
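
    As a small illustration of the interval-methods theme, the sketch below implements naive interval arithmetic, in which every quantity carries guaranteed lower and upper bounds so that the result of a computation encloses every outcome consistent with the input uncertainty. Outward rounding and the dependency problem are ignored, and the class name is illustrative only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi, self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

x = Interval(1.9, 2.1)          # a measurement known only to within +/- 0.1
y = Interval(0.9, 1.1)
print(x * y - x)                # guaranteed enclosure of x*y - x for all admissible x, y
```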

  16. Optimized Clinical Use of RNALater and FFPE Samples for Quantitative Proteomics

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Kastaniegaard, Kenneth; Padurariu, Simona

    2015-01-01

    Introduction and Objectives: The availability of patient samples is essential for clinical proteomic research. Biobanks worldwide store mainly samples stabilized in RNAlater as well as formalin-fixed and paraffin-embedded (FFPE) biopsies. Biobank material is a potential source for clinical...... we compare to FFPE and frozen samples being the control. Methods: From the sigmoideum of two healthy participants, twenty-four biopsies were extracted using endoscopy. The biopsies were stabilized either by being directly frozen, in RNAlater, by FFPE, or by incubation for 30 min at room temperature prior to FFPE...... information. Conclusion: We have demonstrated that quantitative proteome analysis and pathway mapping of samples stabilized in RNAlater as well as by FFPE is feasible with minimal impact on the quality of protein quantification and post-translational modifications.

  17. COARSE: Convex Optimization based autonomous control for Asteroid Rendezvous and Sample Exploration, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Sample return missions, by nature, require high levels of spacecraft autonomy. Developments in hardware avionics have led to more capable real-time onboard computing...

  18. Departure Interval Optimization of Electric Bus Rapid Transit Considering Level of Service

    Institute of Scientific and Technical Information of China (English)

    王雪然; 刘文峰; 李斌; 张沫

    2017-01-01

    With electric vehicles applied in the field of bus rapid transit, the traditional methods of departure-interval optimization are no longer applicable because of the special operational needs of charging or battery swapping for electric vehicles. Considering the level of service and charging-time constraints, this paper studies departure-interval optimization for electric bus rapid transit. First, a service-level evaluation system associated with energy consumption is established by analyzing the correlation between service-level indexes and energy consumption; second, considering the level-of-service and charging-time constraints, a departure-interval optimization model is established with energy consumption as the objective function; finally, line 1 of the Jinhua E-BRT is taken as an example to examine the effect of the optimization model. The optimization results demonstrate that, compared with the current operation plan, energy consumption is reduced by 6.21% under the premise of the same level of service. Therefore, the departure-interval optimization method for electric bus rapid transit considering level of service established in this paper has practical significance and provides guidance for reducing energy consumption under a given service level.

  19. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
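
    The iterative site-selection idea can be sketched as follows, substituting a simple max-min Euclidean dissimilarity rule in standardized environmental space for the MaxEnt modelling step used in the study; the candidate-site matrix below is a random stand-in for the Central Plains grid of environmental factors.

```python
import numpy as np

def select_dissimilar_sites(env, n_sites, first=0):
    """Greedily pick sites that are maximally dissimilar in environmental space."""
    env = (env - env.mean(axis=0)) / env.std(axis=0)    # standardize environmental factors
    chosen = [first]
    while len(chosen) < n_sites:
        d = np.linalg.norm(env[:, None, :] - env[None, chosen, :], axis=2)
        # the next site maximizes its distance to the nearest already-chosen site
        chosen.append(int(np.argmax(d.min(axis=1))))
    return chosen

rng = np.random.default_rng(2)
# rows = candidate sites, columns = temperature, precipitation, elevation, vegetation score
candidates = rng.normal(size=(500, 4))
print(select_dissimilar_sites(candidates, n_sites=8))
```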

  20. The optimally sampled galaxy-wide stellar initial mass function. Observational tests and the publicly available GalIMF code

    Science.gov (United States)

    Yan, Zhiqiang; Jerabkova, Tereza; Kroupa, Pavel

    2017-11-01

    Here we present a full description of the integrated galaxy-wide initial mass function (IGIMF) theory in terms of optimal sampling and compare it with available observations. Optimal sampling is the method we use to discretize the IMF deterministically into stellar masses. Evidence indicates that nature may be closer to deterministic sampling, as observations suggest a smaller scatter of various relevant observables than random sampling would give, which may result from a high level of self-regulation during the star formation process. We document the variation of IGIMFs under various assumptions. The results of the IGIMF theory are consistent with the empirical relation between the total mass of a star cluster and the mass of its most massive star, and the empirical relation between the star formation rate (SFR) of a galaxy and the mass of its most massive cluster. Particularly, we note a natural agreement with the empirical relation between the IMF power-law index and the SFR of a galaxy. The IGIMF also results in a relation between the SFR of a galaxy and the mass of its most massive star. For the first time, we show optimally sampled galaxy-wide IMFs (OSGIMF) that mimic the IGIMF with an additional serrated feature. Finally, a Python module, GalIMF, is provided allowing the calculation of the IGIMF and OSGIMF dependent on the galaxy-wide SFR and metallicity. A copy of the python code model is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A126
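
    A minimal sketch of deterministic (optimal) sampling for a single-slope power-law IMF is given below; it only illustrates the discretization idea and is not the GalIMF code, which handles the multi-part Kroupa IMF, the m_max-M_ecl relation, and metallicity dependence. The slope, mass limits, and cluster mass are assumed values.

```python
import numpy as np

alpha, m_min, m_max, M_ecl = 2.35, 0.08, 150.0, 1.0e3   # assumed IMF slope, limits, cluster mass (Msun)

def mass(k, a, b):
    """Integral of m * xi(m), with xi(m) = k * m**(-alpha), over [a, b]."""
    return k / (2 - alpha) * (b**(2 - alpha) - a**(2 - alpha))

k = M_ecl / mass(1.0, m_min, m_max)        # normalize so the IMF carries M_ecl in total

stars, upper = [], m_max
while upper > m_min:
    # lower boundary solving the unit number integral: int_lower^upper xi(m) dm = 1
    lower = (upper**(1 - alpha) - (1 - alpha) / k) ** (1.0 / (1 - alpha))
    if lower <= m_min:
        break
    stars.append(mass(k, lower, upper))    # each interval contributes exactly one star
    upper = lower

print(len(stars), "stars; most massive =", round(stars[0], 2), "Msun")
```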

  1. MCMC-ODPR: Primer design optimization using Markov Chain Monte Carlo sampling

    Directory of Open Access Journals (Sweden)

    Kitchen James L

    2012-11-01

    Background: Next generation sequencing technologies often require numerous primer designs that require good target coverage that can be financially costly. We aimed to develop a system that would implement primer reuse to design degenerate primers that could be designed around SNPs, thus find the fewest necessary primers and the lowest cost whilst maintaining an acceptable coverage and provide a cost effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. Results: After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with single sequence primer design programs Primer3 and Primer-BLAST and achieved a lower primer cost per amplicon base covered of 0.21 and 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. Conclusions: MCMC-ODPR is a useful tool for designing primers at various melting temperatures at good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.

  2. MCMC-ODPR: primer design optimization using Markov Chain Monte Carlo sampling.

    Science.gov (United States)

    Kitchen, James L; Moore, Jonathan D; Palmer, Sarah A; Allaby, Robin G

    2012-11-05

    Next generation sequencing technologies often require numerous primer designs that require good target coverage that can be financially costly. We aimed to develop a system that would implement primer reuse to design degenerate primers that could be designed around SNPs, thus find the fewest necessary primers and the lowest cost whilst maintaining an acceptable coverage and provide a cost effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with single sequence primer design programs Primer3 and Primer-BLAST and achieved a lower primer cost per amplicon base covered of 0.21 and 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. MCMC-ODPR is a useful tool for designing primers at various melting temperatures at good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.
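
    The Metropolis-Hastings idea behind the algorithm can be sketched as follows, with a hypothetical cost model that simply counts distinct primer nucleotides and toy candidate primer lists; the published MCMC-ODPR implementation additionally handles degeneracy, melting temperature, and SNP-aware design.

```python
import math, random

def cost(assignment, primer_length=20):
    """Total primer nucleotides to synthesize: reuse of a primer lowers the cost."""
    return primer_length * len(set(assignment.values()))

def mcmc_primer_reuse(candidates, n_iter=5000, temperature=5.0, rng=random.Random(0)):
    """candidates: dict amplicon -> list of acceptable primer sequences (hypothetical)."""
    state = {a: rng.choice(opts) for a, opts in candidates.items()}
    best = dict(state)
    for _ in range(n_iter):
        a = rng.choice(list(candidates))
        proposal = dict(state)
        proposal[a] = rng.choice(candidates[a])          # re-draw one amplicon's primer
        delta = cost(proposal) - cost(state)
        if delta <= 0 or rng.random() < math.exp(-delta / temperature):
            state = proposal                              # Metropolis-Hastings acceptance
            if cost(state) < cost(best):
                best = dict(state)
    return best

# Toy example: three amplicons whose candidate primers partially overlap (assumed sequences).
cands = {"amp1": ["ACGTACGTACGTACGTACGT", "TTTTACGTACGTACGTACGG"],
         "amp2": ["ACGTACGTACGTACGTACGT", "GGGTACGTACGTACGTACGA"],
         "amp3": ["GGGTACGTACGTACGTACGA", "ACGTACGTACGTACGTACGT"]}
print(mcmc_primer_reuse(cands))
```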

  3. Spatio-temporal optimization of sampling for bluetongue vectors (Culicoides) near grazing livestock

    DEFF Research Database (Denmark)

    Kirkeby, Carsten; Stockmarr, Anders; Bødker, Rene

    2013-01-01

    BACKGROUND: Estimating the abundance of Culicoides using light traps is influenced by a large variation in abundance in time and place. This study investigates the optimal trapping strategy to estimate the abundance or presence/absence of Culicoides on a field with grazing animals. We used 45 light...... absence of vectors on the field. The variation in the estimated abundance decreased steeply when using up to six traps, and was less pronounced when using more traps, although no clear cutoff was found. CONCLUSIONS: Despite spatial clustering in vector abundance, we found no effect of increasing...... monitoring programmes on fields with grazing animals....

  4. Optimized sample preparation for two-dimensional gel electrophoresis of soluble proteins from chicken bursa of Fabricius

    Directory of Open Access Journals (Sweden)

    Zheng Xiaojuan

    2009-10-01

    Background: Two-dimensional gel electrophoresis (2-DE) is a powerful method to study protein expression and function in living organisms and diseases. This technique, however, has not been applied to the avian bursa of Fabricius (BF), a central immune organ. Here, optimized 2-DE sample preparation methodologies were constructed for chicken BF tissue. Using the optimized protocol, we performed further 2-DE analysis on a soluble protein extract from the BF of chickens infected with virulent avibirnavirus. To demonstrate the quality of the extracted proteins, several differentially expressed protein spots were selected, cut from 2-DE gels, and identified by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS). Results: An extraction buffer containing 7 M urea, 2 M thiourea, 2% (w/v) 3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate (CHAPS), 50 mM dithiothreitol (DTT), 0.2% Bio-Lyte 3/10, 1 mM phenylmethylsulfonyl fluoride (PMSF), 20 U/ml Deoxyribonuclease I (DNase I), and 0.25 mg/ml Ribonuclease A (RNase A), combined with sonication and vortexing, yielded the best 2-DE data. Relative to non-frozen immobilized pH gradient (IPG) strips, frozen IPG strips did not result in significant changes in the 2-DE patterns after isoelectric focusing (IEF). When the optimized protocol was used to analyze the spleen and thymus, as well as avibirnavirus-infected bursa, high-quality 2-DE protein expression profiles were obtained. 2-DE maps of the BF of chickens infected with virulent avibirnavirus were visibly different and many differentially expressed proteins were found. Conclusion: These results showed that method C, in concert with extraction buffer IV, was the most favorable for preparing samples for IEF and subsequent protein separation and yielded the best quality 2-DE patterns. The optimized protocol is a useful sample preparation method for comparative proteomics analysis of chicken BF tissues.

  5. Optimizing sampling strategy for radiocarbon dating of Holocene fluvial systems in a vertically aggrading setting

    International Nuclear Information System (INIS)

    Toernqvist, T.E.; Dijk, G.J. Van

    1993-01-01

    The authors address the question of how to determine the period of activity (sedimentation) of fossil (Holocene) fluvial systems in vertically aggrading environments. The available database consists of almost 100 14C ages from the Rhine-Meuse delta. Radiocarbon samples from the tops of lithostratigraphically correlative organic beds underneath overbank deposits (sample type 1) yield consistent ages, indicating a synchronous onset of overbank deposition over distances of at least up to 20 km along channel belts. Similarly, 14C ages from the base of organic residual channel fills (sample type 3) generally indicate a clear termination of within-channel sedimentation. In contrast, 14C ages from the base of organic beds overlying overbank deposits (sample type 2), commonly assumed to represent the end of fluvial sedimentation, show a large scatter reaching up to 1000 14C years. It is concluded that a combination of sample types 1 and 3 generally yields a satisfactory delimitation of the period of activity of a fossil fluvial system. 30 refs., 11 figs., 4 tabs

  6. Sterile Reverse Osmosis Water Combined with Friction Are Optimal for Channel and Lever Cavity Sample Collection of Flexible Duodenoscopes

    Directory of Open Access Journals (Sweden)

    Michelle J. Alfa

    2017-11-01

    Introduction: A simulated-use buildup biofilm (BBF) model was used to assess various extraction fluids and friction methods to determine the optimal sample collection method for polytetrafluorethylene channels. In addition, simulated-use testing was performed for the channel and lever cavity of duodenoscopes. Materials and methods: BBF was formed in polytetrafluorethylene channels using Enterococcus faecalis, Escherichia coli, and Pseudomonas aeruginosa. Sterile reverse osmosis (RO) water, phosphate-buffered saline with and without Tween80, and two neutralizing broths (Letheen and Dey–Engley) were each assessed with and without friction. Neutralizer was added immediately after sample collection and samples were concentrated using centrifugation. Simulated-use testing was done using TJF-Q180V and JF-140F Olympus duodenoscopes. Results: Despite variability in the bacterial CFU in the BBF model, none of the extraction fluids tested were significantly better than RO. Borescope examination showed far less residual material when friction was part of the extraction protocol. The RO flush-brush-flush (FBF) extraction provided significantly better recovery of E. coli (p = 0.02) from duodenoscope lever cavities compared to the CDC flush method. Discussion and conclusion: We recommend RO with friction for FBF extraction of the channel and lever cavity of duodenoscopes. Neutralizer and sample concentration optimize recovery of viable bacteria on culture.

  7. Optimizing sampling design to deal with mist-net avoidance in Amazonian birds and bats.

    Directory of Open Access Journals (Sweden)

    João Tiago Marques

    Full Text Available Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day it was no longer advantageous to move the nets frequently. In bird surveys that could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey species present then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas.
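
    The trade-off described above (relocating nets restores first-day capture rates but may cost netting time) can be made concrete with a toy effort-allocation calculation, sketched below in Python. The geometric day-to-day retention factor and the one-day relocation cost are loose illustrations inspired by the reported declines; this is not the paper's resampling analysis.

      # Toy model: expected captures over a fixed number of field days when nets
      # stay in place (capture rate decays through net shyness) versus when they
      # are moved after a chosen number of days, possibly losing netting time.
      def total_captures(field_days, first_day_rate, daily_retention,
                         days_before_move, days_lost_per_move=0.0):
          captures, day, days_at_site = 0.0, 0.0, 0
          while day < field_days:
              captures += first_day_rate * daily_retention ** days_at_site
              moving = (days_at_site + 1 == days_before_move)
              day += 1 + (days_lost_per_move if moving else 0.0)
              days_at_site = (days_at_site + 1) % days_before_move
          return captures

      # Illustrative bat-like numbers: roughly a 47 % drop after the first day.
      print("stay 4 days:        ", total_captures(12, 100, 0.53, days_before_move=4))
      print("move daily, no cost:", total_captures(12, 100, 0.53, days_before_move=1))
      print("move daily, lose 1d:", total_captures(12, 100, 0.53, days_before_move=1,
                                                   days_lost_per_move=1.0))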

  8. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems

    International Nuclear Information System (INIS)

    Zio, E.; Pedroni, N.

    2010-01-01

    The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined and a number of conditional one-dimensional problems are solved along such direction; this allows for a significant reduction of the variance of the failure probability estimator, with respect, for example, to standard random sampling. Two issues are still open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve the computational efficiency by reducing the variance of the failure probability estimator, no evidence has been given yet that accurate and precise failure probability estimates can be obtained with a number of samples reduced to below a few hundreds, which may be required in case of long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated to the
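
    A minimal sketch of the Line Sampling estimator itself (not of the T-H model or the surrogate issues discussed above) may help fix ideas. In the Python sketch below the limit-state function, the important direction and all numbers are invented; in the linear case shown every line yields the same conditional failure probability, illustrating the variance reduction the method aims for.

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import brentq

      def g(x):
          # Toy limit-state function standing in for a long-running T-H code:
          # failure (g < 0) when a weighted sum of standardized inputs exceeds 3.
          return 3.0 - (0.6 * x[0] + 0.8 * x[1])

      def line_sampling(g, alpha, n_lines=50, dim=2, seed=0):
          rng = np.random.default_rng(seed)
          alpha = alpha / np.linalg.norm(alpha)
          p_lines = []
          for _ in range(n_lines):
              z = rng.standard_normal(dim)
              z_perp = z - np.dot(z, alpha) * alpha          # component orthogonal to alpha
              # 1D root search along the important direction: g(z_perp + c*alpha) = 0
              c_star = brentq(lambda c: g(z_perp + c * alpha), -10.0, 10.0)
              p_lines.append(norm.cdf(-c_star))              # conditional failure probability
          return np.mean(p_lines)

      alpha = np.array([0.6, 0.8])                            # assumed important direction
      print("estimated failure probability:", line_sampling(g, alpha))
      print("exact value for this toy case:", norm.cdf(-3.0))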

  9. Cadmium and lead determination by ICPMS: Method optimization and application in carabao milk samples

    Directory of Open Access Journals (Sweden)

    Riza A. Magbitang

    2012-06-01

    Full Text Available A method utilizing inductively coupled plasma mass spectrometry (ICPMS) as the element-selective detector with microwave-assisted nitric acid digestion as the sample pre-treatment technique was developed for the simultaneous determination of cadmium (Cd) and lead (Pb) in milk samples. The estimated detection limits were 0.09 µg kg-1 and 0.33 µg kg-1 for Cd and Pb, respectively. The method was linear in the concentration range 0.01 to 500 µg kg-1 with correlation coefficients of 0.999 for both analytes. The method was validated using certified reference material BCR 150 and the determined values for Cd and Pb were 18.24 ± 0.18 µg kg-1 and 807.57 ± 7.07 µg kg-1, respectively. Further validation using another certified reference material, NIST 1643e, resulted in determined concentrations of 6.48 ± 0.10 µg L-1 for Cd and 21.96 ± 0.87 µg L-1 for Pb. These determined values agree well with the certified values in the reference materials. The method was applied to processed and raw carabao milk samples collected in Nueva Ecija, Philippines. The Cd levels determined in the samples were in the range 0.11 ± 0.07 to 5.17 ± 0.13 µg kg-1 for the processed milk samples, and 0.11 ± 0.07 to 0.45 ± 0.09 µg kg-1 for the raw milk samples. The concentrations of Pb were in the range 0.49 ± 0.21 to 5.82 ± 0.17 µg kg-1 for the processed milk samples, and 0.72 ± 0.18 to 6.79 ± 0.20 µg kg-1 for the raw milk samples.

  10. Method optimization for non-equilibrium solid phase microextraction sampling of HAPs for GC/MS analysis

    Science.gov (United States)

    Zawadowicz, M. A.; Del Negro, L. A.

    2010-12-01

    Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv-level, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money and time that often is prohibitive to primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of ambient gaseous matrix, which is a cost-effective technique of selective VOC extraction, accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and laminar air flow velocity around the fiber were optimized to give highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.

  11. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    International Nuclear Information System (INIS)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles

    2014-01-01

    Highlights: •We optimized sample preparation for MALDI TOF poly(styrene-co-pentafluorostyrene) copolymers. •Influence of matrix choice was investigated. •Influence of matrix/analyte ratio was examined. •Influence of analyte/salt ratio (for Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers, contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to the change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  12. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    Energy Technology Data Exchange (ETDEWEB)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles, E-mail: cwilkins@uark.edu

    2014-01-15

    Highlights: •We optimized sample preparation for MALDI TOF poly(styrene-co-pentafluorostyrene) copolymers. •Influence of matrix choice was investigated. •Influence of matrix/analyte ratio was examined. •Influence of analyte/salt ratio (for Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers, contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to the change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  13. Geochemical sampling scheme optimization on mine wastes based on hyperspectral data

    CSIR Research Space (South Africa)

    Zhao, T

    2008-07-01

    Full Text Available decontamination, for example, acid-generating minerals. Acid rock drainage can adversely affect the quality of drinking water and the health of riparian ecosystems. To assess or monitor the environmental impact of mining, sampling of mine waste is required...

  14. Robust, Sensitive, and Automated Phosphopeptide Enrichment Optimized for Low Sample Amounts Applied to Primary Hippocampal Neurons

    NARCIS (Netherlands)

    Post, Harm; Penning, Renske; Fitzpatrick, Martin; Garrigues, L.B.; Wu, W.; Mac Gillavry, H.D.; Hoogenraad, C.C.; Heck, A.J.R.; Altelaar, A.F.M.

    2017-01-01

    Because of the low stoichiometry of protein phosphorylation, targeted enrichment prior to LC–MS/MS analysis is still essential. The trend in phosphoproteome analysis is shifting toward an increasing number of biological replicates per experiment, ideally starting from very low sample amounts,

  15. Optimal sampling strategies to assess inulin clearance in children by the inulin single-injection method

    NARCIS (Netherlands)

    van Rossum, Lyonne K.; Mathot, Ron A. A.; Cransberg, Karlien; Vulto, Arnold G.

    2003-01-01

    Glomerular filtration rate in patients can be determined by estimating the plasma clearance of inulin with the single-injection method. In this method, a single bolus injection of inulin is administered and several blood samples are collected. For practical and convenient application of this method

  16. Optimization of deconvolution software used in the study of spectra of soil samples from Madagascar

    International Nuclear Information System (INIS)

    ANDRIAMADY NARIMANANA, S.F.

    2005-01-01

    The aim of this work is to perform the deconvolution of gamma spectra using a peak-deconvolution program. Synthetic spectra, reference materials and ten soil samples with various U-238 activities from three regions of Madagascar were used. This work concerns: soil sample spectra with low activities of about (47 ± 2) Bq.kg-1 from Ankatso, soil sample spectra with average activities of about (125 ± 2) Bq.kg-1 from Antsirabe, and soil sample spectra with high activities of about (21100 ± 120) Bq.kg-1 from Vinaninkarena. Singlet and multiplet peaks with various intensities were found in each soil spectrum. The Interactive Peak Fit (IPF) program in Genie-PC from Canberra Industries allows deconvolution of many multiplet regions: a quartet within 235-242 keV, Pb-214 and Pb-212 within 294-301 keV, Th-232 daughters within 582-584 keV, Ac-228 within 904-911 keV and within 964-970 keV, and Bi-214 within 1401-1408 keV. Those peaks were used to quantify the radionuclides considered. However, IPF cannot resolve the Ra-226 peak at 186.1 keV.
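
    As an illustration of the multiplet deconvolution such peak-fit programs perform, the sketch below fits two overlapping Gaussian peaks plus a linear background to synthetic counts by least squares. Energies, amplitudes and the noise model are invented; this is not a substitute for IPF/Genie-PC.

      import numpy as np
      from scipy.optimize import curve_fit

      def doublet(E, a1, mu1, a2, mu2, sigma, b0, b1):
          # Two Gaussian peaks sharing one width, sitting on a linear background.
          peak = lambda a, mu: a * np.exp(-0.5 * ((E - mu) / sigma) ** 2)
          return peak(a1, mu1) + peak(a2, mu2) + b0 + b1 * (E - 290.0)

      E = np.linspace(290.0, 305.0, 300)                    # keV, around the 294-301 keV region
      true_params = (900.0, 294.0, 600.0, 300.1, 0.9, 40.0, -0.5)
      rng = np.random.default_rng(1)
      counts = rng.poisson(doublet(E, *true_params)).astype(float)

      p0 = (800.0, 294.5, 500.0, 299.5, 1.0, 30.0, 0.0)     # rough starting guesses
      popt, _ = curve_fit(doublet, E, counts, p0=p0)
      area1 = popt[0] * popt[4] * np.sqrt(2.0 * np.pi)      # net areas of the two peaks
      area2 = popt[2] * popt[4] * np.sqrt(2.0 * np.pi)
      print(f"peak centroids: {popt[1]:.2f} keV and {popt[3]:.2f} keV")
      print(f"net areas: {area1:.0f} and {area2:.0f} counts")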

  17. A systematic random sampling scheme optimized to detect the proportion of rare synapses in the neuropil.

    Science.gov (United States)

    da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C

    2009-05-30

    Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
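
    A rough binomial calculation shows why on the order of a thousand observations are needed for such rare events. The sketch below assumes each counted synapse is an independent Bernoulli trial whose success probability equals the rare-pathway proportion, a deliberate simplification of the disector-based design; the target precisions are illustrative.

      import math

      def counts_needed(p, target_rel_se):
          # Relative standard error of a binomial proportion estimate is
          # sqrt((1 - p) / (n * p)); solve for the number of observations n.
          return math.ceil((1.0 - p) / (p * target_rel_se ** 2))

      p_rare = 0.002                       # labeled pathway supplies ~0.2 % of synapses
      for rel_se in (0.5, 0.3, 0.2):
          print(f"relative SE {rel_se:.0%}: need about {counts_needed(p_rare, rel_se)} observations")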

  18. Optimization of fecal cytology in the dog: comparison of three sampling methods.

    Science.gov (United States)

    Frezoulis, Petros S; Angelidou, Elisavet; Diakou, Anastasia; Rallis, Timoleon S; Mylonakis, Mathios E

    2017-09-01

    Dry-mount fecal cytology (FC) is a component of the diagnostic evaluation of gastrointestinal diseases. There is limited information on the possible effect of the sampling method on the cytologic findings of healthy dogs or dogs admitted with diarrhea. We aimed to: (1) establish sampling method-specific expected values of selected cytologic parameters (isolated or clustered epithelial cells, neutrophils, lymphocytes, macrophages, spore-forming rods) in clinically healthy dogs; (2) investigate if the detection of cytologic abnormalities differs among methods in dogs admitted with diarrhea; and (3) investigate if there is any association between FC abnormalities and the anatomic origin (small- or large-bowel diarrhea) or the chronicity of diarrhea. Sampling with digital examination (DE), rectal scraping (RS), and rectal lavage (RL) was prospectively assessed in 37 healthy and 34 diarrheic dogs. The median numbers of isolated (p = 0.000) or clustered (p = 0.002) epithelial cells, and of lymphocytes (p = 0.000), differed among the 3 methods in healthy dogs. In the diarrheic dogs, the RL method was the least sensitive in detecting neutrophils, and isolated or clustered epithelial cells. Cytologic abnormalities were not associated with the origin or the chronicity of diarrhea. Sampling methods differed in their sensitivity to detect abnormalities in FC; DE or RS may be of higher sensitivity compared to RL. Anatomic origin or chronicity of diarrhea does not seem to affect the detection of cytologic abnormalities.

  19. Optimization of sample absorbance for quantitative analysis in the presence of pathlength error in the IR and NIR regions

    International Nuclear Information System (INIS)

    Hirschfeld, T.; Honigs, D.; Hieftje, G.

    1985-01-01

    Optical absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive the optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the cases of pathlength error >> photometric error (trivial) and various cases in which the pathlength errors and the photometric error are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable.
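
    The constant-absolute-pathlength-error case can be illustrated numerically. The sketch below assumes the absorbance is set by the pathlength (A proportional to b), so the relative pathlength contribution falls as 1/A while the photometric contribution grows as 10^A/A; the error magnitudes are invented and the optimum found is only illustrative of the behaviour, not the paper's result.

      import numpy as np

      A = np.linspace(0.05, 2.5, 500)          # absorbance values to scan
      dT = 0.005                               # constant transmittance (photometric) error
      db_rel_at_A1 = 0.01                      # relative pathlength error at the cell giving A = 1

      photometric = dT * 10.0 ** A / (A * np.log(10.0))   # relative concentration error from dT
      pathlength = db_rel_at_A1 / A                       # constant absolute db, with b proportional to A
      total = np.sqrt(photometric ** 2 + pathlength ** 2)

      A_opt = A[np.argmin(total)]
      print(f"illustrative optimal absorbance ~ {A_opt:.2f}, total relative error {total.min():.4f}")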

  20. Convex Interval Games

    NARCIS (Netherlands)

    Alparslan-Gok, S.Z.; Brânzei, R.; Tijs, S.H.

    2008-01-01

    In this paper, convex interval games are introduced and some characterizations are given. Some economic situations leading to convex interval games are discussed. The Weber set and the Shapley value are defined for a suitable class of interval games and their relations with the interval core for

  1. Centrifugation protocols: tests to determine optimal lithium heparin and citrate plasma sample quality.

    Science.gov (United States)

    Dimeski, Goce; Solano, Connie; Petroff, Mark K; Hynd, Matthew

    2011-05-01

    Currently, no clear guidelines exist for the most appropriate tests to determine sample quality from centrifugation protocols for plasma sample types with both lithium heparin in gel barrier tubes for biochemistry testing and citrate tubes for coagulation testing. Blood was collected from 14 participants in four lithium heparin and one serum tube with gel barrier. The plasma tubes were centrifuged at four different centrifuge settings and analysed for potassium (K(+)), lactate dehydrogenase (LD), glucose and phosphorus (Pi) at zero time, post-storage for six hours at 21 °C and six days at 2-8 °C. At the same time, three citrate tubes were collected and centrifuged at three different centrifuge settings and analysed immediately for prothrombin time/international normalized ratio, activated partial thromboplastin time, derived fibrinogen and surface-activated clotting time (SACT). The biochemistry analytes indicate plasma is less stable than serum. Plasma sample quality is higher with longer centrifugation time, and much higher g force. Blood cells present in the plasma lyse with time or are damaged when transferred in the reaction vessels, causing an increase in the K(+), LD and Pi above the outlined limits. The cells remain active and consume glucose even in cold storage. The SACT is the only coagulation parameter that was affected by platelets >10 × 10^9/L in the citrate plasma. In addition to the platelet count, a limited but sensitive number of assays (K(+), LD, glucose and Pi for biochemistry, and SACT for coagulation) can be used to determine appropriate centrifuge settings to consistently obtain the highest quality lithium heparin and citrate plasma samples. The findings will aid laboratories to balance the need to provide the most accurate results in the best turnaround time.

  2. [Optimization of solid-phase extraction for enrichment of toxic organic compounds in water samples].

    Science.gov (United States)

    Zhang, Ming-quan; Li, Feng-min; Wu, Qian-yuan; Hu, Hong-ying

    2013-05-01

    A concentration method for enrichment of toxic organic compounds in water samples has been developed based on combined solid-phase extraction (SPE) to reduce impurities and improve recoveries of target compounds. This SPE method was evaluated at every stage to identify the source of impurities. Based on the analysis of Waters Oasis HLB without water samples, the eluent of the SPE sorbent after dichloromethane and acetone contributed 85% of the impurities during the SPE process. In order to reduce the impurities from the SPE sorbent, Soxhlet extraction with dichloromethane followed by acetone and lastly methanol was applied to the sorbents for 24 hours, and the results proved that impurities were reduced significantly. In addition to Soxhlet extraction, six types of prevalent SPE sorbents were used to absorb 40 target compounds, the log Kow values of which were in the range of 1.46 to 8.1, and recovery rates were compared. Waters Oasis HLB showed the best recovery results for most of the common testing samples among all three styrene-divinylbenzene (SDB) polymer sorbents, which were 77% on average. Furthermore, Waters SepPak AC-2 provided good recovery results for pesticides among three types of activated carbon sorbents and the average recovery rates reached 74%. Therefore, Waters Oasis HLB and Waters SepPak AC-2 were combined to obtain a better recovery, and the average recovery rate for the tested 40 compounds of this new SPE method was 87%.

  3. Optimizing the data acquisition rate for a remotely controllable structural monitoring system with parallel operation and self-adaptive sampling

    International Nuclear Information System (INIS)

    Sheng, Wenjuan; Guo, Aihuang; Liu, Yang; Azmi, Asrul Izam; Peng, Gang-Ding

    2011-01-01

    We present a novel technique that optimizes the real-time remote monitoring and control of dispersed civil infrastructures. The monitoring system is based on fiber Bragg grating (FBG) sensors, and transfers data via Ethernet. This technique combines parallel operation and self-adaptive sampling to increase the data acquisition rate in remotely controllable structural monitoring systems. The compact parallel operation mode is highly efficient at achieving the highest possible data acquisition rate for the FBG-sensor-based local data acquisition system. Self-adaptive sampling is introduced to continuously coordinate local acquisition and remote control for data acquisition rate optimization. Key issues which impact the operation of the whole system, such as the real-time data acquisition rate, data processing capability, and buffer usage, are investigated. The results show that, by introducing parallel operation and self-adaptive sampling, the data acquisition rate can be increased by several times without affecting the system operating performance on both local data acquisition and remote process control.
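
    The abstract does not spell out the self-adaptive sampling rule, so the sketch below shows only one plausible reading: a local controller that nudges the acquisition rate up or down according to the occupancy of the transmit buffer. The controller form, gain and rate limits are assumptions, not the authors' design.

      def adapt_rate(rate_hz, buffer_fill, target_fill=0.5, gain=0.5,
                     min_hz=100.0, max_hz=5000.0):
          """Return the acquisition rate to use in the next cycle."""
          error = target_fill - buffer_fill          # positive -> buffer has headroom
          new_rate = rate_hz * (1.0 + gain * error)  # speed up when there is headroom
          return max(min_hz, min(max_hz, new_rate))

      rate = 1000.0
      for fill in (0.2, 0.4, 0.6, 0.9, 0.95):        # simulated buffer occupancies
          rate = adapt_rate(rate, fill)
          print(f"buffer {fill:.0%} full -> next rate {rate:.0f} Hz")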

  4. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas

    Science.gov (United States)

    Adu-Brimpong, Joel; Coffey, Nathan; Ayers, Colby; Berrigan, David; Yingling, Leah R.; Thomas, Samantha; Mitchell, Valerie; Ahuja, Chaarushi; Rivers, Joshua; Hartz, Jacob; Powell-Wiley, Tiffany M.

    2017-01-01

    Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments’ representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172–475 (Mean = 352.3 ± 63.6). Walk scores® ranged 0–91 (Mean = 46.7 ± 26.3). Street segment combinations’ correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating impact of specific built environment features on health behaviors and outcomes. PMID:28282878

  5. Population Pharmacokinetics of Gemcitabine and dFdU in Pancreatic Cancer Patients Using an Optimal Design, Sparse Sampling Approach.

    Science.gov (United States)

    Serdjebi, Cindy; Gattacceca, Florence; Seitz, Jean-François; Fein, Francine; Gagnière, Johan; François, Eric; Abakar-Mahamat, Abakar; Deplanque, Gael; Rachid, Madani; Lacarelle, Bruno; Ciccolini, Joseph; Dahan, Laetitia

    2017-06-01

    Gemcitabine remains a pillar in pancreatic cancer treatment. However, toxicities are frequently observed. Dose adjustment based on therapeutic drug monitoring might help decrease the occurrence of toxicities. In this context, this work aims at describing the pharmacokinetics (PK) of gemcitabine and its metabolite dFdU in pancreatic cancer patients and at identifying the main sources of their PK variability using a population PK approach, despite a sparsely sampled population and heterogeneous administration and sampling protocols. Data from 38 patients were included in the analysis. The 3 optimal sampling times were determined using KineticPro and the population PK analysis was performed on Monolix. Available patient characteristics, including cytidine deaminase (CDA) status, were tested as covariates. Correlation between PK parameters and occurrence of severe hematological toxicities was also investigated. A two-compartment model best fitted the gemcitabine and dFdU PK data (volume of distribution and clearance for gemcitabine: V1 = 45 L and CL1 = 4.03 L/min; for dFdU: V2 = 36 L and CL2 = 0.226 L/min). Renal function was found to influence gemcitabine clearance, and body surface area to impact the volume of distribution of dFdU. However, neither CDA status nor the occurrence of toxicities was correlated with PK parameters. Despite sparse sampling and heterogeneous administration and sampling protocols, population and individual PK parameters of gemcitabine and dFdU were successfully estimated using Monolix population PK software. The estimated parameters were consistent with previously published results. Surprisingly, CDA activity did not influence gemcitabine PK, which was explained by the absence of CDA-deficient patients enrolled in the study. This work suggests that even sparse data are valuable to estimate population and individual PK parameters in patients, which will be usable to individualize the dose for an optimized benefit-to-risk ratio.
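
    To make the reported typical values concrete, the sketch below simulates one parent-metabolite reading of the model: gemcitabine is eliminated with CL1/V1 and assumed to be fully converted to dFdU, which is eliminated with CL2/V2, during and after a short infusion. The dose, infusion schedule and full-conversion assumption are mine; the structure of the published model may differ in detail.

      import numpy as np
      from scipy.integrate import solve_ivp

      V1, CL1 = 45.0, 4.03          # L, L/min (gemcitabine, typical values above)
      V2, CL2 = 36.0, 0.226         # L, L/min (dFdU, typical values above)
      dose_rate = 1000.0 / 30.0     # mg/min: assumed 1000 mg infused over 30 min

      def rhs(t, y):
          a_gem, a_dfdu = y                                   # amounts (mg)
          infusion = dose_rate if t <= 30.0 else 0.0
          d_gem = infusion - CL1 / V1 * a_gem
          d_dfdu = CL1 / V1 * a_gem - CL2 / V2 * a_dfdu       # full conversion assumed
          return [d_gem, d_dfdu]

      sol = solve_ivp(rhs, (0.0, 360.0), [0.0, 0.0], max_step=1.0, dense_output=True)
      for t in (15, 30, 60, 120, 240, 360):
          a_gem, a_dfdu = sol.sol(t)
          print(f"t = {t:3d} min: gemcitabine {a_gem / V1:6.2f} mg/L, dFdU {a_dfdu / V2:6.2f} mg/L")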

  6. Determination of Ergot Alkaloids: Purity and Stability Assessment of Standards and Optimization of Extraction Conditions for Cereal Samples

    DEFF Research Database (Denmark)

    Krska, R.; Berthiller, F.; Schuhmacher, R.

    2008-01-01

    as those that are the most common and physiologically active. The purity of the standards was investigated by means of liquid chromatography with diode array detection, electrospray ionization, and time-of-flight mass spectrometry (LC-DAD-ESI-TOF-MS). All of the standards assessed showed purity levels...... (PSA) before LC/MS/MS. Based on the results obtained from these optimization studies, a mixture of acetonitrile with ammonium carbonate buffer was used as extraction solvent, as recoveries for all analyzed ergot alkaloids were significantly higher than those with the other solvents. Different sample...

  7. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas

    Directory of Open Access Journals (Sweden)

    Joel Adu-Brimpong

    2017-03-01

    Full Text Available Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments’ representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172–475 (Mean = 352.3 ± 63.6). Walk scores® ranged 0–91 (Mean = 46.7 ± 26.3). Street segment combinations’ correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating impact of specific built environment features on health behaviors and outcomes.

  8. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    Science.gov (United States)

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic example and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost
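
    For readers unfamiliar with the sample-based problem being addressed, the sketch below sets up the generic discrete (Kantorovich) optimal transport problem between two small point clouds and solves it as a linear program. This is standard machinery, not the SOT/SCB algorithms introduced in the thesis, and all data are synthetic.

      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(0)
      x = rng.normal(0.0, 1.0, size=(6, 2))            # source sample
      y = rng.normal(2.0, 1.0, size=(5, 2))            # target sample
      a = np.full(len(x), 1.0 / len(x))                # uniform source weights
      b = np.full(len(y), 1.0 / len(y))                # uniform target weights

      C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared-distance cost matrix
      m, n = C.shape

      # Equality constraints on the flattened transport plan: row sums = a, column sums = b.
      A_eq = np.zeros((m + n, m * n))
      for i in range(m):
          A_eq[i, i * n:(i + 1) * n] = 1.0
      for j in range(n):
          A_eq[m + j, j::n] = 1.0
      res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]), bounds=(0, None))
      plan = res.x.reshape(m, n)                           # optimal coupling between the samples
      print("optimal transport cost:", res.fun)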

  9. Optimized cryo-focused ion beam sample preparation aimed at in situ structural studies of membrane proteins.

    Science.gov (United States)

    Schaffer, Miroslava; Mahamid, Julia; Engel, Benjamin D; Laugks, Tim; Baumeister, Wolfgang; Plitzko, Jürgen M

    2017-02-01

    While cryo-electron tomography (cryo-ET) can reveal biological structures in their native state within the cellular environment, it requires the production of high-quality frozen-hydrated sections that are thinner than 300 nm. Sample requirements are even more stringent for the visualization of membrane-bound protein complexes within dense cellular regions. Focused ion beam (FIB) sample preparation for transmission electron microscopy (TEM) is a well-established technique in material science, but there are only a few examples of biological samples exhibiting sufficient quality for high-resolution in situ investigation by cryo-ET. In this work, we present a comprehensive description of a cryo-sample preparation workflow incorporating additional conductive-coating procedures. These coating steps eliminate the adverse effects of sample charging on imaging with the Volta phase plate, allowing data acquisition with improved contrast. We discuss optimized FIB milling strategies adapted from material science and each critical step required to produce homogeneously thin, non-charging FIB lamellas that make large areas of unperturbed HeLa and Chlamydomonas cells accessible for cryo-ET at molecular resolution. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Optimization of the solvent-based dissolution method to sample volatile organic compound vapors for compound-specific isotope analysis.

    Science.gov (United States)

    Bouchard, Daniel; Wanner, Philipp; Luo, Hong; McLoughlin, Patrick W; Henderson, James K; Pirkle, Robert J; Hunkeler, Daniel

    2017-10-20

    The methodology of the solvent-based dissolution method used to sample gas phase volatile organic compounds (VOC) for compound-specific isotope analysis (CSIA) was optimized to lower the method detection limits for TCE and benzene. The sampling methodology previously evaluated by [1] consists in pulling the air through a solvent to dissolve and accumulate the gaseous VOC. After the sampling process, the solvent can then be treated similarly as groundwater samples to perform routine CSIA by diluting an aliquot of the solvent into water to reach the required concentration of the targeted contaminant. Among solvents tested, tetraethylene glycol dimethyl ether (TGDE) showed the best aptitude for the method. TGDE has a great affinity with TCE and benzene, hence efficiently dissolving the compounds during their transition through the solvent. The method detection limit for TCE (5 ± 1 μg/m³) and benzene (1.7 ± 0.5 μg/m³) is lower when using TGDE compared to methanol, which was previously used (385 μg/m³ for TCE and 130 μg/m³ for benzene) [2]. The method detection limit refers to the minimal gas phase concentration in ambient air required to load sufficient VOC mass into TGDE to perform δ13C analysis. Due to a different analytical procedure, the method detection limit associated with δ37Cl analysis was found to be 156 ± 6 μg/m³ for TCE. Furthermore, the experimental results validated the relationship between the gas phase TCE and the progressive accumulation of dissolved TCE in the solvent during the sampling process. Accordingly, based on the air-solvent partitioning coefficient, the sampling methodology (e.g. sampling rate, sampling duration, amount of solvent) and the final TCE concentration in the solvent, the concentration of TCE in the gas phase prevailing during the sampling event can be determined. Moreover, the possibility to analyse for TCE concentration in the solvent after sampling (or other targeted VOCs) allows the field deployment of the sampling
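
    The back-calculation of the gas-phase concentration can be illustrated with a simple mass balance. The sketch below assumes complete trapping of the analyte in the solvent during sampling, which ignores the air-solvent partitioning correction the authors refer to; the flow rate, duration, solvent volume and measured concentration are all invented.

      sample_flow = 0.20        # L/min of air pulled through the solvent (assumed)
      duration = 240.0          # min of sampling (assumed)
      solvent_volume = 10.0     # mL of TGDE (assumed)
      c_solvent = 0.50          # µg/mL of TCE measured in the solvent afterwards (assumed)

      mass_trapped = c_solvent * solvent_volume        # µg of TCE accumulated in the solvent
      air_volume_m3 = sample_flow * duration / 1000.0  # m³ of air drawn through it
      c_gas = mass_trapped / air_volume_m3             # µg/m³ prevailing in the gas phase
      print(f"gas-phase TCE during sampling ~ {c_gas:.0f} µg/m³")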

  11. Optimal design of sampling and mapping schemes in the radiometric exploration of Chipilapa, El Salvador (Geo-statistics)

    International Nuclear Information System (INIS)

    Balcazar G, M.; Flores R, J.H.

    1992-01-01

    As part of the radiometric surface exploration carried out in the geothermal field of Chipilapa, El Salvador, the geo-statistical parameters were considered starting from the variogram calculated from the field data. The maximum correlation distance of the 'radon' samples in the different observation directions (N-S, E-W, NW-SE, NE-SW) was 121 m, which defines the monitoring grid for future prospecting in the same area. From this, an optimization (minimum cost) of the spacing of the field samples was derived by means of geo-statistical techniques, without losing the detection of the anomaly. (Author)
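
    The geostatistical step described above amounts to estimating an empirical semivariogram and reading off its range, which then sets the widest sample spacing that still resolves the anomaly. The sketch below does this on synthetic spatially correlated data with an assumed 121 m range; it illustrates the procedure and is not a reanalysis of the Chipilapa survey.

      import numpy as np

      rng = np.random.default_rng(3)
      coords = rng.uniform(0.0, 1000.0, size=(200, 2))           # sample locations (m)

      # Synthetic spatially correlated values (exponential covariance, range ~121 m).
      d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
      cov = np.exp(-d / 121.0)
      chol = np.linalg.cholesky(cov + 1e-8 * np.eye(len(coords)))
      values = chol @ rng.standard_normal(len(coords))

      # Empirical semivariogram: gamma(h) = half the mean squared difference per lag bin.
      iu = np.triu_indices(len(coords), k=1)
      lag = d[iu]
      half_sq_diff = 0.5 * (values[iu[0]] - values[iu[1]]) ** 2
      for lo, hi in zip(np.arange(0, 450, 50), np.arange(50, 500, 50)):
          in_bin = (lag >= lo) & (lag < hi)
          print(f"lag {lo:3d}-{hi:3d} m: gamma = {half_sq_diff[in_bin].mean():.2f}")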

  12. Optimal sample size for predicting viability of cabbage and radish seeds based on near infrared spectra of single seeds

    DEFF Research Database (Denmark)

    Shetty, Nisha; Min, Tai-Gi; Gislum, René

    2011-01-01

    The effects of the number of seeds in a training sample set on the ability to predict the viability of cabbage or radish seeds are presented and discussed. The supervised classification method extended canonical variates analysis (ECVA) was used to develop a classification model. Calibration sub-sets of different sizes were chosen randomly with several iterations and using the spectral-based sample selection algorithms DUPLEX and CADEX. An independent test set was used to validate the developed classification models. The results showed that 200 seeds were optimal in a calibration set for both cabbage...... using all 600 seeds in the calibration set. Thus, the number of seeds in the calibration set can be reduced by up to 67% without significant loss of classification accuracy, which will effectively enhance the cost-effectiveness of NIR spectral analysis. Wavelength regions important...

  13. Immunosuppressant therapeutic drug monitoring by LC-MS/MS: workflow optimization through automated processing of whole blood samples.

    Science.gov (United States)

    Marinova, Mariela; Artusi, Carlo; Brugnolo, Laura; Antonelli, Giorgia; Zaninotto, Martina; Plebani, Mario

    2013-11-01

    Although, due to its high specificity and sensitivity, LC-MS/MS is an efficient technique for the routine determination of immunosuppressants in whole blood, it involves time-consuming manual sample preparation. The aim of the present study was therefore to develop an automated sample-preparation protocol for the quantification of sirolimus, everolimus and tacrolimus by LC-MS/MS using a liquid handling platform. Six-level commercially available blood calibrators were used for assay development, while four quality control materials and three blood samples from patients under immunosuppressant treatment were employed for the evaluation of imprecision. Barcode reading, sample re-suspension, transfer of whole blood samples into 96-well plates, addition of internal standard solution, mixing, and protein precipitation were performed with a liquid handling platform. After plate filtration, the deproteinised supernatants were submitted for SPE on-line. The only manual steps in the entire process were de-capping of the tubes, and transfer of the well plates to the HPLC autosampler. Calibration curves were linear throughout the selected ranges. The imprecision and accuracy data for all analytes were highly satisfactory. The agreement between the results obtained with manual and those obtained with automated sample preparation was optimal (n=390, r=0.96). In daily routine (100 patient samples) the typical overall total turnaround time was less than 6h. Our findings indicate that the proposed analytical system is suitable for routine analysis, since it is straightforward and precise. Furthermore, it incurs less manual workload and less risk of error in the quantification of whole blood immunosuppressant concentrations than conventional methods. © 2013.

  14. Optimal ABC inventory classification using interval programming

    NARCIS (Netherlands)

    Rezaei, J.; Salimi, N.

    2015-01-01

    Inventory classification is one of the most important activities in inventory management, whereby inventories are classified into three or more classes. Several inventory classifications have been proposed in the literature, almost all of which have two main shortcomings in common. That is, the

  15. Optimized Field Sampling and Monitoring of Airborne Hazardous Transport Plumes; A Geostatistical Simulation Approach

    International Nuclear Information System (INIS)

    Chen, DI-WEN

    2001-01-01

    Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often are days post-incident and rely on hazardous manned air-vehicle measurements. Before this happens, computational plume migration models are the only source of information on the plume characteristics, constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry light-weight detectors and weather instrumentation to measure the conditions during and after plume release. This requires a cooperative computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives in an emergency situation in restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program proceeds is the collection of data that provides statistically significant information, collected in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic or geostatistical simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images, based on existing "hard" experimental data and "soft" preliminary transport modeling results from the Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian/geostatistical methodology, quantitatively combines soft information

  16. Optimization of Region-of-Interest Sampling Strategies for Hepatic MRI Proton Density Fat Fraction Quantification

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.

    2017-01-01

    BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland–Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland–Altman analyses. RESULTS The study population’s mean whole-liver PDFF was 10.1±8.9% (range: 1.1–44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥ 4 ROIs had ICC >0.995. 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) 0.995, and 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of
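
    The combinatorial comparison described above can be mimicked on synthetic data: for each subset size, average the PDFF over the chosen ROIs, compare with the nine-ROI mean across patients, and summarize the agreement. The sketch below uses invented per-segment values and a crude noise model, so its numbers say nothing about the study's actual ICCs or limits of agreement.

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(7)
      n_patients = 200
      whole_liver = rng.uniform(1.0, 40.0, n_patients)                    # true PDFF (%)
      segment_pdff = whole_liver[:, None] + rng.normal(0.0, 1.5, (n_patients, 9))

      nine_roi = segment_pdff.mean(axis=1)
      for k in (2, 3, 4):
          worst_half_loa = 0.0
          for subset in combinations(range(9), k):
              diff = segment_pdff[:, list(subset)].mean(axis=1) - nine_roi
              half_loa = 1.96 * diff.std(ddof=1)                          # half-width of the LOA
              worst_half_loa = max(worst_half_loa, half_loa)
          print(f"{k}-ROI strategies: worst limits of agreement about ±{worst_half_loa:.2f} PDFF points")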

  17. Optimization of region-of-interest sampling strategies for hepatic MRI proton density fat fraction quantification.

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B

    2018-04-01

    Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995. 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) 0.995, and 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance

  18. Optimization of microwave-assisted extraction with saponification (MAES) for the determination of polybrominated flame retardants in aquaculture samples.

    Science.gov (United States)

    Fajar, N M; Carro, A M; Lorenzo, R A; Fernandez, F; Cela, R

    2008-08-01

    The efficiency of microwave-assisted extraction with saponification (MAES) for the determination of seven polybrominated flame retardants (polybrominated biphenyls, PBBs; and polybrominated diphenyl ethers, PBDEs) in aquaculture samples is described and compared with microwave-assisted extraction (MAE). Chemometric techniques based on experimental designs and desirability functions were used for simultaneous optimization of the operational parameters used in both MAES and MAE processes. Application of MAES to this group of contaminants in aquaculture samples, which had not been previously applied to this type of analytes, was shown to be superior to MAE in terms of extraction efficiency, extraction time and lipid content extracted from complex matrices (0.7% as against 18.0% for MAE extracts). PBBs and PBDEs were determined by gas chromatography with micro-electron capture detection (GC-µECD). The quantification limits for the analytes were 40-750 pg g-1 (except for BB-15, which was 1.43 ng g-1). Precision for MAES-GC-µECD (%RSD < 11%) was significantly better than for MAE-GC-µECD (%RSD < 20%). The accuracy of both optimized methods was satisfactorily demonstrated by analysis of appropriate certified reference material (CRM), WMF-01.

  19. Plasma treatment of bulk niobium surface for superconducting rf cavities: Optimization of the experimental conditions on flat samples

    Directory of Open Access Journals (Sweden)

    M. Rašković

    2010-11-01

    Full Text Available Accelerator performance, in particular the average accelerating field and the cavity quality factor, depends on the physical and chemical characteristics of the superconducting radio-frequency (SRF) cavity surface. Plasma-based surface modification provides an excellent opportunity to eliminate nonsuperconductive pollutants in the penetration depth region and to remove the mechanically damaged surface layer, which improves the surface roughness. Here we show that the plasma treatment of bulk niobium (Nb) presents an alternative surface preparation method to the commonly used buffered chemical polishing and electropolishing methods. We have optimized the experimental conditions in the microwave glow discharge system and their influence on the Nb removal rate on flat samples. We have achieved an etching rate of 1.7 μm/min using only 3% chlorine in the reactive mixture. Combining a fast etching step with a moderate one, we have improved the surface roughness without exposing the sample surface to the environment. We intend to apply the optimized experimental conditions to the preparation of single cell cavities, pursuing the improvement of their rf performance.

  20. Optimal sample size of signs for classification of radiational and oily soils

    International Nuclear Information System (INIS)

    Babayev, M.P.; Iskenderov, S.M.; Aghayev, R.A.

    2012-01-01

    Full text: This article describes the classification of radiational and oily soils, which should in essence be a compact intelligence system containing maximum information on the classes of soil objects in the accepted feature space. Accumulated experience shows that the number of the most informative soil indicators is at most 7-8. In our opinion, a more correct approach to selecting the most informative (most important) indicators is the trial-and-error method, that is, the experimental method, which allows the wide experience and intuition of the researcher, or group of researchers, engaged for many years in soil science to be used. At this stage the formal apparatus of soil classification, or more specifically its section assessing the informativeness of soil indicators, is in our opinion overly mathematized and in some cases does not reflect the true picture. In this case, 21 pairwise correlation coefficients between the selected soil indicators are calculated as a measure of linear association. The length of the correlation series is limited to 6, since increasing it sharply increases the volume of calculation. It is pertinent to note that this is the first attempt to construct correlation matrices of the most important indicators of radiational and oily soils.
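
    The figure of 21 pairwise coefficients follows directly from choosing 7 indicators, since C(7,2) = 21. The sketch below computes such a correlation matrix for a random, purely illustrative data matrix; the number of soil profiles and the indicator set are assumptions.

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(5)
      X = rng.normal(size=(50, 7))                  # 50 soil profiles x 7 indicators (synthetic)
      R = np.corrcoef(X, rowvar=False)              # 7 x 7 correlation matrix
      pairs = list(combinations(range(7), 2))
      print(len(pairs), "pairwise correlation coefficients")   # -> 21
      for i, j in pairs[:5]:
          print(f"r(indicator {i}, indicator {j}) = {R[i, j]:+.2f}")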

  1. A boundary-optimized rejection region test for the two-sample binomial problem.

    Science.gov (United States)

    Gabriel, Erin E; Nason, Martha; Fay, Michael P; Follmann, Dean A

    2018-03-30

    Testing the equality of 2 proportions for a control group versus a treatment group is a well-researched statistical problem. In some settings, there may be strong historical data that allow one to reliably expect that the control proportion is one, or nearly so. While one-sample tests or comparisons to historical controls could be used, neither can rigorously control the type I error rate in the event the true control rate changes. In this work, we propose an unconditional exact test that exploits the historical information while controlling the type I error rate. We sequentially construct a rejection region by first maximizing the rejection region in the space where all controls have an event, subject to the constraint that our type I error rate does not exceed α for any true event rate; then with any remaining α we maximize the additional rejection region in the space where one control avoids the event, and so on. When the true control event rate is one, our test is the most powerful nonrandomized test for all points in the alternative space. When the true control event rate is nearly one, we demonstrate that our test has equal or higher mean power, averaging over the alternative space, than a variety of well-known tests. For the comparison of 4 controls and 4 treated subjects, our proposed test has higher power than all comparator tests. We demonstrate the properties of our proposed test by simulation and use our method to design a malaria vaccine trial. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  2. Optimized pre-thinning procedures of ion-beam thinning for TEM sample preparation by magnetorheological polishing.

    Science.gov (United States)

    Luo, Hu; Yin, Shaohui; Zhang, Guanhua; Liu, Chunhui; Tang, Qingchun; Guo, Meijian

    2017-10-01

    Ion-beam-thinning is a well-established sample preparation technique for transmission electron microscopy (TEM), but tedious procedures and labor consuming pre-thinning could seriously reduce its efficiency. In this work, we present a simple pre-thinning technique by using magnetorheological (MR) polishing to replace manual lapping and dimpling, and demonstrate the successful preparation of electron-transparent single crystal silicon samples after MR polishing and single-sided ion milling. Dimples pre-thinned to less than 30 microns and with little mechanical surface damage were repeatedly produced under optimized MR polishing conditions. Samples pre-thinned by both MR polishing and traditional technique were ion-beam thinned from the rear side until perforation, and then observed by optical microscopy and TEM. The results show that the specimen pre-thinned by MR technique was free from dimpling related defects, which were still residual in sample pre-thinned by conventional technique. Nice high-resolution TEM images could be acquired after MR polishing and one side ion-thinning. MR polishing promises to be an adaptable and efficient method for pre-thinning in preparation of TEM specimens, especially for brittle ceramics. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. An Optimized DNA Analysis Workflow for the Sampling, Extraction, and Concentration of DNA obtained from Archived Latent Fingerprints.

    Science.gov (United States)

    Solomon, April D; Hytinen, Madison E; McClain, Aryn M; Miller, Marilyn T; Dawson Cruz, Tracey

    2018-01-01

    DNA profiles have been obtained from fingerprints, but there is limited knowledge regarding DNA analysis from archived latent fingerprints-touch DNA "sandwiched" between adhesive and paper. Thus, this study sought to comparatively analyze a variety of collection and analytical methods in an effort to seek an optimized workflow for this specific sample type. Untreated and treated archived latent fingerprints were utilized to compare different biological sampling techniques, swab diluents, DNA extraction systems, DNA concentration practices, and post-amplification purification methods. Archived latent fingerprints disassembled and sampled via direct cutting, followed by DNA extracted using the QIAamp® DNA Investigator Kit, and concentration with Centri-Sep™ columns increased the odds of obtaining an STR profile. Using the recommended DNA workflow, 9 of the 10 samples provided STR profiles, which included 7-100% of the expected STR alleles and two full profiles. Thus, with carefully selected procedures, archived latent fingerprints can be a viable DNA source for criminal investigations including cold/postconviction cases. © 2017 American Academy of Forensic Sciences.

  4. Novel synthesis of nanocomposite for the extraction of Sildenafil Citrate (Viagra) from water and urine samples: Process screening and optimization.

    Science.gov (United States)

    Asfaram, Arash; Ghaedi, Mehrorang; Purkait, Mihir Kumar

    2017-09-01

    A sensitive analytical method is investigated to concentrate and determine trace levels of Sildenafil Citrate (SLC) in water and urine samples. The method is based on sample treatment by dispersive solid-phase micro-extraction (DSPME) with a laboratory-made Mn@CuS/ZnS nanocomposite loaded on activated carbon (Mn@CuS/ZnS-NCs-AC) as sorbent for the target analyte. The efficiency was enhanced by ultrasound assistance, giving ultrasound-assisted dispersive nanocomposite solid-phase micro-extraction (UA-DNSPME). Four significant variables affecting SLC recovery, namely pH, eluent volume, sonication time and adsorbent mass, were selected by Plackett-Burman design (PBD) experiments. These selected factors were then optimized by central composite design (CCD) to maximize the extraction of SLC. The results showed that the optimum conditions for maximizing extraction of SLC were pH 6.0, 300 μL of eluent (acetonitrile), 10 mg of adsorbent and 6 min of sonication. Under optimized conditions, good linearity was obtained for SLC from 30 to 4000 ng mL⁻¹ with R² of 0.99. The limit of detection (LOD) was 2.50 ng mL⁻¹ and the recoveries at two spiked levels ranged from 97.37 to 103.21% with relative standard deviation (RSD) less than 4.50% (n = 15). The enhancement factor (EF) was 81.91. The results show that the combination of ultrasound-assisted extraction with DNSPME is a suitable method for the determination of SLC in water and urine samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Exploring structural variability in X-ray crystallographic models using protein local optimization by torsion-angle sampling

    International Nuclear Information System (INIS)

    Knight, Jennifer L.; Zhou, Zhiyong; Gallicchio, Emilio; Himmel, Daniel M.; Friesner, Richard A.; Arnold, Eddy; Levy, Ronald M.

    2008-01-01

    Torsion-angle sampling, as implemented in the Protein Local Optimization Program (PLOP), is used to generate multiple structurally variable single-conformer models which are in good agreement with X-ray data. An ensemble-refinement approach to differentiate between positional uncertainty and conformational heterogeneity is proposed. Modeling structural variability is critical for understanding protein function and for modeling reliable targets for in silico docking experiments. Because of the time-intensive nature of manual X-ray crystallographic refinement, automated refinement methods that thoroughly explore conformational space are essential for the systematic construction of structurally variable models. Using five proteins spanning resolutions of 1.0–2.8 Å, it is demonstrated how torsion-angle sampling of backbone and side-chain libraries with filtering against both the chemical energy, using a modern effective potential, and the electron density, coupled with minimization of a reciprocal-space X-ray target function, can generate multiple structurally variable models which fit the X-ray data well. Torsion-angle sampling as implemented in the Protein Local Optimization Program (PLOP) has been used in this work. Models with the lowest R free values are obtained when electrostatic and implicit solvation terms are included in the effective potential. HIV-1 protease, calmodulin and SUMO-conjugating enzyme illustrate how variability in the ensemble of structures captures structural variability that is observed across multiple crystal structures and is linked to functional flexibility at hinge regions and binding interfaces. An ensemble-refinement procedure is proposed to differentiate between variability that is a consequence of physical conformational heterogeneity and that which reflects uncertainty in the atomic coordinates

  6. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets, including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to use these time-series MODIS datasets efficiently for long-term environmental monitoring because of their vast volume and information redundancy. This challenge will grow when Sentinel-2/3 data become available. Another challenge researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two issues in a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized to transfer sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that obtained with the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
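    A minimal sketch (not the authors' code) of the variable-importance step using scikit-learn's RandomForestClassifier; the feature matrix, class labels and the "keep roughly half" cut-off below are placeholders, not the CCRS MODIS data.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.random((500, 36))            # stand-in for 36 time-series variables (bands x dates)
        y = rng.integers(0, 5, size=500)     # stand-in for 5 land-cover classes

        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

        # Rank variables by Gini importance and keep roughly the top half
        keep = np.argsort(rf.feature_importances_)[::-1][: X.shape[1] // 2]

        full_acc = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=0),
                                   X, y, cv=5).mean()
        half_acc = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=0),
                                   X[:, keep], y, cv=5).mean()
        print(f"all {X.shape[1]} variables: {full_acc:.3f}   top {keep.size}: {half_acc:.3f}")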

  7. Optimization and application of octadecyl-modified monolithic silica for solid-phase extraction of drugs in whole blood samples.

    Science.gov (United States)

    Namera, Akira; Saito, Takeshi; Ota, Shigenori; Miyazaki, Shota; Oikawa, Hiroshi; Murata, Kazuhiro; Nagao, Masataka

    2017-09-29

    Monolithic silica in MonoSpin spin columns was developed for the solid-phase extraction of drugs from whole blood samples to facilitate high-throughput analysis. Monolithic silicas with various pore sizes and octadecyl contents were synthesized, and their effects on recovery rates were evaluated. The silica monolith M18-200 (20 μm through-pore size, 10.4 nm mesopore size, and 17.3% carbon content) achieved the best recovery of the target analytes in whole blood samples. The extraction proceeded under centrifugal force at 1000 rpm for 2 min, and the eluate was directly injected into the liquid chromatography-mass spectrometry system without tedious steps such as evaporation of extraction solvents. Under the optimized conditions, low detection limits of 0.5-2.0 ng mL⁻¹ and calibration ranges up to 1000 ng mL⁻¹ were obtained. The recoveries of the target drugs in whole blood were 76-108% with relative standard deviations of less than 14.3%. These results indicate that the developed method based on monolithic silica is convenient, highly efficient, and applicable to detecting drugs in whole blood samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    Science.gov (United States)

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.

  9. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers.

    Science.gov (United States)

    Tisdale, Evgenia; Kennedy, Devin; Xu, Xiaodong; Wilkins, Charles

    2014-01-15

    The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers, contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to the change in the composition of the catalyst (the addition of small amount of CuBr2) than is the pentafluorostyrene component distribution. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn; Zhu, Weiliang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn [ACS Key Laboratory of Receptor Research, Drug Discovery and Design Center, Shanghai Institute of Materia Medica, Chinese Academy of Sciences, 555 Zuchongzhi Road, Shanghai 201203 (China); Shi, Jiye, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn [UCB Pharma, 216 Bath Road, Slough SL1 4EN (United Kingdom)

    2015-03-28

    The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is used. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the aim of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we used this method to characterize and energetically identify the conformational transition pathway of a model protein, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but nevertheless gives an accurate evaluation of the structural and thermodynamic properties of the conformational transition, in good agreement with the standard REMD simulation. Therefore, the hybrid REMD can greatly increase computational efficiency and thus extend the application of REMD simulation to larger protein systems.

  11. Design and sampling plan optimization for RT-qPCR experiments in plants: a case study in blueberry

    Directory of Open Access Journals (Sweden)

    Jose V Die

    2016-03-01

    Full Text Available The qPCR assay has become a routine technology in plant biotechnology and agricultural research. It is unlikely to be improved much further technically, but challenges remain in minimizing the variability of results and in transparency when reporting technical data in support of the conclusions of a study. A number of aspects of the pre- and post-assay workflow contribute to variability of results. Here, by studying how error is introduced into qPCR measurements at different stages of the workflow, we describe the most important causes of technical variability in a case study using blueberry. We found that the stage at which increasing the number of replicates is most beneficial depends on the tissue used. For example, we would recommend using more RT replicates when working with leaf tissue, while using more sampling (RNA extraction) replicates would be recommended when working with stems or fruits to obtain the most optimal results. Using more qPCR replicates provides the least benefit, as this is the most reproducible step. By knowing the distribution of error over an entire experiment and the costs at each step, we have developed a script to identify the optimal sampling plan within the limits of a given budget. These findings should help plant scientists improve the design of qPCR experiments and refine their laboratory practices in order to conduct qPCR assays in a more reliable manner and produce more consistent and reproducible data.
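    A minimal sketch of the kind of budget-constrained search such a script performs; the variance components for the sampling, RT and qPCR stages and the per-replicate costs below are assumed for illustration, not the blueberry estimates.

        from itertools import product

        var_sampling, var_rt, var_qpcr = 0.04, 0.02, 0.005   # assumed variance components
        cost_sampling, cost_rt, cost_qpcr = 10.0, 4.0, 1.0   # assumed cost per replicate
        budget = 120.0

        def variance_of_mean(n_s, n_rt, n_q):
            # Variance of the overall mean for a nested design:
            # n_s extractions, n_rt RT replicates each, n_q qPCR replicates each.
            return var_sampling / n_s + var_rt / (n_s * n_rt) + var_qpcr / (n_s * n_rt * n_q)

        def total_cost(n_s, n_rt, n_q):
            return n_s * (cost_sampling + n_rt * (cost_rt + n_q * cost_qpcr))

        best = min(
            (plan for plan in product(range(1, 11), repeat=3) if total_cost(*plan) <= budget),
            key=lambda plan: variance_of_mean(*plan),
        )
        print("optimal plan (sampling, RT, qPCR replicates):", best,
              "variance: %.4f" % variance_of_mean(*best))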

  12. Programming with Intervals

    Science.gov (United States)

    Matsakis, Nicholas D.; Gross, Thomas R.

    Intervals are a new, higher-level primitive for parallel programming with which programmers directly construct the program schedule. Programs using intervals can be statically analyzed to ensure that they do not deadlock or contain data races. In this paper, we demonstrate the flexibility of intervals by showing how to use them to emulate common parallel control-flow constructs like barriers and signals, as well as higher-level patterns such as bounded-buffer producer-consumer. We have implemented intervals as a publicly available library for Java and Scala.

  13. Haemostatic reference intervals in pregnancy

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna

    2010-01-01

    Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals for coagulation tests during normal pregnancy. Eight hundred one women with expected normal pregnancies were included in the study. Of these women, 391 had no complications during pregnancy, vaginal delivery, or postpartum period. Plasma samples were obtained at gestational weeks 13-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period, using only the uncomplicated pregnancies, were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, free protein S...
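    A minimal sketch of a nonparametric reference interval (central 95%) for a single analyte within one gestational-age stratum; the values below are simulated, not the study data.

        import numpy as np

        rng = np.random.default_rng(1)
        fibrinogen = rng.normal(4.0, 0.6, size=391)   # simulated g/L values for one stratum

        lower, upper = np.percentile(fibrinogen, [2.5, 97.5])
        print(f"reference interval: {lower:.2f}-{upper:.2f} g/L")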

  14. Optimizing sample pretreatment for compound-specific stable carbon isotopic analysis of amino sugars in marine sediment

    Science.gov (United States)

    Zhu, R.; Lin, Y.-S.; Lipp, J. S.; Meador, T. B.; Hinrichs, K.-U.

    2014-09-01

    Amino sugars are quantitatively significant constituents of soil and marine sediment, but their sources and turnover in environmental samples remain poorly understood. The stable carbon isotopic composition of amino sugars can provide information on the lifestyles of their source organisms and can be monitored during incubations with labeled substrates to estimate the turnover rates of microbial populations. However, until now, such investigation has been carried out only with soil samples, partly because of the much lower abundance of amino sugars in marine environments. We therefore optimized a procedure for compound-specific isotopic analysis of amino sugars in marine sediment, employing gas chromatography-isotope ratio mass spectrometry. The whole procedure consisted of hydrolysis, neutralization, enrichment, and derivatization of amino sugars. Except for the derivatization step, the protocol introduced negligible isotopic fractionation, and the minimum requirement of amino sugar for isotopic analysis was 20 ng, i.e., equivalent to ~8 ng of amino sugar carbon. Compound-specific stable carbon isotopic analysis of amino sugars obtained from marine sediment extracts indicated that glucosamine and galactosamine were mainly derived from organic detritus, whereas muramic acid showed isotopic imprints from indigenous bacterial activities. The δ13C analysis of amino sugars provides a valuable addition to the biomarker-based characterization of microbial metabolism in the deep marine biosphere, which so far has been lipid oriented and biased towards the detection of archaeal signals.

  15. Optimization of pressurized liquid extraction (PLE) of dioxin-furans and dioxin-like PCBs from environmental samples.

    Science.gov (United States)

    Antunes, Pedro; Viana, Paula; Vinhas, Tereza; Capelo, J L; Rivera, J; Gaspar, Elvira M S M

    2008-05-30

    Pressurized liquid extraction (PLE), applying three extraction cycles with elevated temperature and pressure, improved the efficiency of solvent extraction compared with classical Soxhlet extraction. Polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs) and dioxin-like PCBs (coplanar polychlorinated biphenyls, Co-PCBs) in two Certified Reference Materials [DX-1 (sediment) and BCR 529 (soil)] and in two contaminated environmental samples (sediment and soil) were extracted by accelerated solvent extraction (ASE) and Soxhlet methods. Unlike data previously reported by other authors, the results demonstrated that ASE using n-hexane as solvent with three extraction cycles, 12.4 MPa (1800 psi) and 150 degrees C achieves recoveries similar to those of classical Soxhlet extraction for PCDFs and Co-PCBs, and better recoveries for PCDDs. ASE extraction, performed in less time and with less solvent, proved to be, under optimized conditions, an excellent extraction technique for the simultaneous analysis of PCDD/PCDFs and Co-PCBs in environmental samples. Such a fast analytical methodology, with the best cost-efficiency ratio, will improve monitoring, provide more information about the occurrence of dioxins and their toxicity levels, and thereby contribute to protecting human health.

  16. Optimization of loop-mediated isothermal amplification (LAMP) assays for the detection of Leishmania DNA in human blood samples.

    Science.gov (United States)

    Abbasi, Ibrahim; Kirstein, Oscar D; Hailu, Asrat; Warburg, Alon

    2016-10-01

    Visceral leishmaniasis (VL), one of the most important neglected tropical diseases, is caused by Leishmania donovani, a eukaryotic protozoan parasite of the genus Leishmania. The disease is prevalent mainly in the Indian subcontinent, East Africa and Brazil. VL can be diagnosed by PCR amplifying the ITS1 and/or kDNA genes. The current study involved the optimization of loop-mediated isothermal amplification (LAMP) for the detection of Leishmania DNA in human blood or tissue samples. Three LAMP systems were developed; in two of these the primers were designed based on regions of the ITS1 gene shared among different Leishmania species, while the primers for the third LAMP system were derived from a newly identified repeated region in the Leishmania genome. The LAMP tests were shown to be sufficiently sensitive to detect 0.1 pg of DNA from most Leishmania species. The green nucleic acid stain SYTO 16 was used here for the first time to allow real-time monitoring of LAMP amplification. The advantage of real-time LAMP using SYTO 16 over end-point LAMP product detection is discussed. The efficacy of the real-time LAMP tests for detecting Leishmania DNA in dried blood samples from volunteers living in endemic areas was compared with that of qRT-kDNA PCR. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  17. Performance of two liquid scintillators and optimization of a Wallac 1411 counter for tritium quantification in aqueous samples

    International Nuclear Information System (INIS)

    Contreras de la Cruz, E. de J.; Lopez del Rio, H.; Davila R, J. I.; Mireles G, F.; Pinedo V, J. L.

    2014-10-01

    The optimization of a Wallac 1411 liquid scintillation counter is presented, together with the performance of two water-miscible liquid scintillation cocktails, OptiPhase HiSafe 3 and Ultima Gold AB, for tritium quantification in aqueous samples. The effects of luminescence, quenching, solution pH and the pulse amplitude comparator (PAC) level on the response of both scintillation cocktails in tritium measurement were evaluated. Quenching and luminescence modify the scintillator response: the former decreases the counting efficiency and increases the minimum detectable activity; the latter interferes with tritium quantification in the window of interest, but the effect disappears after keeping the samples in darkness for 4 hours. The maximum counting efficiency was 24% for OptiPhase HiSafe 3 and 31% for Ultima Gold AB, decreasing with quenching to values of 8 and 11%, respectively. For a counting time of 6 hours and low quenching, the minimum detectable concentration was 13.4 ± 0.2 Bq/L for OptiPhase HiSafe 3 and 9.9 ± 0.1 Bq/L for Ultima Gold AB. Both scintillators responded appropriately to acidic and basic solutions, with chemiluminescence appearing only in Ultima Gold AB at highly basic pH. Varying the PAC level between 1 and 256 had no effect on the tritium measurement until values above 90. (Author)
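    A minimal sketch of how a minimum detectable concentration of the kind quoted above can be estimated from background, counting efficiency, counting time and sample volume using Currie's formula; all numerical inputs below are illustrative, not the values of this study.

        from math import sqrt

        def mdc_bq_per_litre(bkg_cpm, count_time_min, efficiency, volume_l):
            background_counts = bkg_cpm * count_time_min
            detectable_counts = 2.71 + 4.65 * sqrt(background_counts)   # Currie detection limit (counts)
            # Convert counts over the counting time to an activity concentration in Bq/L
            return detectable_counts / (efficiency * count_time_min * 60.0 * volume_l)

        # e.g. 1.5 cpm background, 6 h counting, 24% efficiency, 8 mL of sample
        print(round(mdc_bq_per_litre(1.5, 360, 0.24, 0.008), 1), "Bq/L")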

  18. Rapid, sensitive and reproducible method for point-of-collection screening of liquid milk for adulterants using a portable Raman spectrometer with novel optimized sample well

    Science.gov (United States)

    Nieuwoudt, Michel K.; Holroyd, Steve E.; McGoverin, Cushla M.; Simpson, M. Cather; Williams, David E.

    2017-02-01

    Point-of-care diagnostics are of interest in the medical, security and food industry, the latter particularly for screening food adulterated for economic gain. Milk adulteration continues to be a major problem worldwide and different methods to detect fraudulent additives have been investigated for over a century. Laboratory based methods are limited in their application to point-of-collection diagnosis and also require expensive instrumentation, chemicals and skilled technicians. This has encouraged exploration of spectroscopic methods as more rapid and inexpensive alternatives. Raman spectroscopy has excellent potential for screening of milk because of the rich complexity inherent in its signals. The rapid advances in photonic technologies and fabrication methods are enabling increasingly sensitive portable mini-Raman systems to be placed on the market that are both affordable and feasible for both point-of-care and point-of-collection applications. We have developed a powerful spectroscopic method for rapidly screening liquid milk for sucrose and four nitrogen-rich adulterants (dicyandiamide (DCD), ammonium sulphate, melamine, urea), using a combined system: a small, portable Raman spectrometer with focusing fibre optic probe and optimized reflective focusing wells, simply fabricated in aluminium. The reliable sample presentation of this system enabled high reproducibility of 8% RSD (residual standard deviation) within four minutes. Limit of detection intervals for PLS calibrations ranged between 140 - 520 ppm for the four N-rich compounds and between 0.7 - 3.6 % for sucrose. The portability of the system and reliability and reproducibility of this technique opens opportunities for general, reagentless adulteration screening of biological fluids as well as milk, at point-of-collection.

  19. Haemostatic reference intervals in pregnancy

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Jørgensen, Maja; Klajnbard, Anna

    2010-01-01

    Haemostatic reference intervals are generally based on samples from non-pregnant women. Thus, they may not be relevant to pregnant women, a problem that may hinder accurate diagnosis and treatment of haemostatic disorders during pregnancy. In this study, we establish gestational age-specific reference intervals for coagulation tests during normal pregnancy. Plasma samples were obtained at gestational weeks 13-20, 21-28, 29-34, 35-42, at active labor, and on postpartum days 1 and 2. Reference intervals for each gestational period, using only the uncomplicated pregnancies, were calculated in all 391 women for activated partial thromboplastin time (aPTT), fibrinogen, fibrin D-dimer, antithrombin, and free protein S. Most parameters remained largely unchanged during pregnancy, delivery, and postpartum and were within non-pregnant reference intervals. However, levels of fibrinogen, D-dimer, and coagulation factors VII, VIII, and IX increased markedly. Protein S activity decreased substantially, while free protein S decreased slightly and total...

  20. An Optimization Study on Listening Experiments to Improve the Comparability of Annoyance Ratings of Noise Samples from Different Experimental Sample Sets.

    Science.gov (United States)

    Di, Guoqing; Lu, Kuanguang; Shi, Xiaofan

    2018-03-08

    Annoyance ratings obtained from listening experiments are widely used in studies on the health effects of environmental noise. In listening experiments, when there are no reference sound samples, participants usually rate the annoyance of each noise sample relative to all other samples in the experimental sample set, which leads to poor comparability between results obtained from different experimental sample sets. To solve this problem, this study proposed adding several pink noise samples with fixed loudness levels into experimental sample sets as reference sound samples. On this basis, the standard curve between logarithmic mean annoyance and loudness level of pink noise was used to calibrate the experimental results, and the calibration procedures were described in detail. Furthermore, as a case study, six different types of noise sample sets were selected for listening experiments with this method to examine its applicability. Results showed that the differences in the annoyance ratings of identical noise samples from different experimental sample sets were markedly decreased after calibration. The determination coefficient (R²) of linear fitting functions between psychoacoustic annoyance (PA) and mean annoyance (MA) of noise samples from different experimental sample sets increased noticeably after calibration. The case study indicated that the method is applicable to calibrating annoyance ratings obtained from different types of noise sample sets, and that after calibration the comparability of annoyance ratings of noise samples from different experimental sample sets is distinctly improved.

  1. Optimization of PMAxx pretreatment to distinguish between human norovirus with intact and altered capsids in shellfish and sewage samples.

    Science.gov (United States)

    Randazzo, Walter; Khezri, Mohammad; Ollivier, Joanna; Le Guyader, Françoise S; Rodríguez-Díaz, Jesús; Aznar, Rosa; Sánchez, Gloria

    2018-02-02

    Shellfish contamination by human noroviruses (HuNoVs) is a serious health and economic problem. Recently an ISO procedure based on RT-qPCR for the quantitative detection of HuNoVs in shellfish has been issued, but these procedures cannot discriminate between inactivated and potentially infectious viruses. The aim of the present study was to optimize a pretreatment using PMAxx to better discriminate between intact and heat-treated HuNoVs in shellfish and sewage. To this end, the optimal conditions (30 min incubation with 100 μM PMAxx and 0.5% Triton, and double photoactivation) were applied to mussels, oysters and cockles artificially inoculated with thermally-inactivated (99°C for 5 min) HuNoV GI and GII. This pretreatment reduced the signal of thermally-inactivated HuNoV GI in cockles and HuNoV GII in mussels by >3 log. Additionally, this pretreatment reduced the signal of thermally-inactivated HuNoV GI and GII between 1 and 1.5 log in oysters. Thermal inactivation of HuNoV GI and GII in PBS, sewage and bioaccumulated oysters was also evaluated by the PMAxx-Triton pretreatment. Results showed significant differences between reductions observed in the control and PMAxx-treated samples in PBS following treatment at 72 and 95°C for 15 min. In sewage, the RT-qPCR signal of HuNoV GI was completely removed by the PMAxx pretreatment after heating at 72 and 95°C, while the RT-qPCR signal for HuNoV GII was completely eliminated only at 95°C. Finally, the PMAxx-Triton pretreatment was applied to naturally contaminated sewage and oysters, resulting in most of the HuNoV genomes quantified in sewage and oyster samples (12 out of 17) corresponding to undamaged capsids. Although this procedure may still overestimate infectivity, the PMAxx-Triton pretreatment represents a step forward to better interpret the quantification of intact HuNoVs in complex matrices, such as sewage and shellfish, and it could certainly be included in the procedures based on RT-qPCR. Copyright

  2. Correct Bayesian and frequentist intervals are similar

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1986-01-01

    This paper argues that Bayesians and frequentists will normally reach numerically similar conclusions when dealing with vague data or sparse data. It is shown that both statistical methodologies can deal reasonably with vague data. With sparse data, in many important practical cases Bayesian interval estimates and frequentist confidence intervals are approximately equal, although with discrete data the frequentist intervals are somewhat longer. This is not to say that the two methodologies are equally easy to use: the construction of a frequentist confidence interval may require new theoretical development. Bayesian methods typically require numerical integration, perhaps over many variables. Also, Bayesians can easily fall into the trap of over-optimism about their amount of prior knowledge. But in cases where both intervals are found correctly, the two intervals are usually not very different. (orig.)
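    A minimal sketch of the kind of comparison discussed above, for sparse binomial data: a Bayesian interval from a Jeffreys prior next to the exact (Clopper-Pearson) frequentist interval; the counts are illustrative.

        from scipy.stats import beta

        x, n, conf = 1, 20, 0.95
        a = (1 - conf) / 2

        # Bayesian: posterior Beta(x + 0.5, n - x + 0.5) under the Jeffreys prior
        bayes_lo, bayes_hi = beta.ppf([a, 1 - a], x + 0.5, n - x + 0.5)

        # Frequentist: Clopper-Pearson exact confidence limits
        freq_lo = beta.ppf(a, x, n - x + 1) if x > 0 else 0.0
        freq_hi = beta.ppf(1 - a, x + 1, n - x) if x < n else 1.0

        print(f"Jeffreys credible interval: ({bayes_lo:.3f}, {bayes_hi:.3f})")
        print(f"Clopper-Pearson interval:   ({freq_lo:.3f}, {freq_hi:.3f})")

    With a single event in twenty trials the two intervals are close, the exact frequentist one being slightly wider, in line with the paper's observation for discrete data.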

  3. Optimization of Decision-Making for Spatial Sampling in the North China Plain, Based on Remote-Sensing a Priori Knowledge

    Science.gov (United States)

    Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.

    2012-07-01

    In this paper, MODIS remote sensing data, featuring low cost, high timeliness and moderate/low spatial resolution, for the North China Plain (NCP) study region were first used to carry out mixed-pixel spectral decomposition to extract a useful regionalized indicator parameter from the initially selected indicators, namely the fraction (percentage) of winter wheat planting area in each pixel, used as a regionalized indicator variable (RIV) for spatial sampling. The RIV values were then analyzed spatially to obtain the spatial structure characteristics (spatial correlation and variation) of the NCP, which were further processed to yield scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, based on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge, spatial sampling models and design schemes were developed, together with their optimization and optimal selection, as a scientific basis for improving and optimizing existing spatial sampling schemes for large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, optimal local spatial prediction and a gridded system of extrapolation results were used to implement an adaptive reporting pattern of spatial sampling in accordance with the reporting units, in order to satisfy the actual needs of sampling surveys.

  4. Optimizing Frozen Sample Preparation for Laser Microdissection: Assessment of CryoJane Tape-Transfer System®.

    Directory of Open Access Journals (Sweden)

    Yelena G Golubeva

    Full Text Available Laser microdissection is an invaluable tool in medical research that facilitates collecting specific cell populations for molecular analysis. Diversity of research targets (e.g., cancerous and precancerous lesions in clinical and animal research, cell pellets, rodent embryos, etc.) and varied scientific objectives, however, present challenges toward establishing standard laser microdissection protocols. Sample preparation is crucial for quality RNA, DNA and protein retrieval, and it often determines the feasibility of a laser microdissection project. The majority of microdissection studies in clinical and animal model research are conducted on frozen tissues containing native nucleic acids, unmodified by fixation. However, the variable morphological quality of frozen sections from tissues containing fat, collagen or delicate cell structures can limit or prevent successful harvest of the desired cell population via laser dissection. The CryoJane Tape-Transfer System®, a commercial device that improves cryosectioning outcomes on glass slides, has been reported superior for slide preparation and isolation of high quality osteocyte RNA (frozen bone) during laser dissection. Considering the reported advantages of CryoJane for laser dissection on glass slides, we asked whether the system could also work with the plastic membrane slides used by UV laser-based microdissection instruments, as these are better suited for collection of larger target areas. In an attempt to optimize laser microdissection slide preparation for tissues of different RNA stability and cryosectioning difficulty, we evaluated the CryoJane system for use with both glass (laser capture microdissection) and membrane (laser cutting microdissection) slides. We have established a sample preparation protocol for glass and membrane slides including manual coating of membrane slides with CryoJane solutions, cryosectioning, slide staining and dissection procedure, lysis and RNA extraction

  5. Optimization of a high-resolution liquid chromatography method to determine ethylenethiourea residues in tomato samples

    International Nuclear Information System (INIS)

    Mora, D.; Rodriguez, O.M.

    2002-01-01

    A method was optimized to determine ethylenethiourea (ETU) residues in tomato samples. The method consists of three stages: extraction in an ultrasonic bath with methanol; clean-up of the extract on a glass column of 11 mm diameter packed with 2.5 g of a mixture of neutral alumina and activated carbon (97.5:2.5) and 2.5 g of pure neutral alumina, eluted with 250 mL of methanol; and quantification by HPLC on a C18 column with a methanol-water mixture (90:10) as mobile phase at a flow rate of 2.0 mL/min and UV detection at 232 nm. The retention time under these conditions was 2.15 minutes. The figures of merit of the method were determined, showing a linear range between 1.0 and 28.0 μg/mL of ETU; the detection and quantification limits, calculated by the method of Hubaux and Vos (22), were 0.153 and 0.306 μg/mL, respectively, and the recovery was 84%. (Author) [es

  6. Generalized Confidence Intervals and Fiducial Intervals for Some Epidemiological Measures

    Directory of Open Access Journals (Sweden)

    Ionut Bebu

    2016-06-01

    Full Text Available For binary outcome data from epidemiological studies, this article investigates the interval estimation of several measures of interest in the absence or presence of categorical covariates. When covariates are present, the logistic regression model as well as the log-binomial model are investigated. The measures considered include the common odds ratio (OR from several studies, the number needed to treat (NNT, and the prevalence ratio. For each parameter, confidence intervals are constructed using the concepts of generalized pivotal quantities and fiducial quantities. Numerical results show that the confidence intervals so obtained exhibit satisfactory performance in terms of maintaining the coverage probabilities even when the sample sizes are not large. An appealing feature of the proposed solutions is that they are not based on maximization of the likelihood, and hence are free from convergence issues associated with the numerical calculation of the maximum likelihood estimators, especially in the context of the log-binomial model. The results are illustrated with a number of examples. The overall conclusion is that the proposed methodologies based on generalized pivotal quantities and fiducial quantities provide an accurate and unified approach for the interval estimation of the various epidemiological measures in the context of binary outcome data with or without covariates.
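    A minimal Monte Carlo sketch of a fiducial-quantity style interval for a prevalence ratio from two independent binomial samples, in the spirit of the approach described above; the counts and the Beta-based fiducial draws are illustrative assumptions, not the article's exact construction.

        import numpy as np

        rng = np.random.default_rng(0)
        x1, n1 = 18, 60      # exposed group: events, group size (illustrative)
        x0, n0 = 9, 60       # unexposed group

        draws = 100_000
        p1 = rng.beta(x1 + 0.5, n1 - x1 + 0.5, draws)   # fiducial draws for each proportion
        p0 = rng.beta(x0 + 0.5, n0 - x0 + 0.5, draws)
        ratio = p1 / p0

        lo, hi = np.percentile(ratio, [2.5, 97.5])
        print(f"prevalence ratio {x1 / n1 / (x0 / n0):.2f}, 95% interval ({lo:.2f}, {hi:.2f})")

    Because the interval is read off the percentiles of simulated fiducial quantities, no likelihood maximization is needed, which mirrors the convergence advantage the abstract highlights for the log-binomial setting.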

  7. Morphometric and immunocytochemical analysis of melanoma samples for individual optimization of boron neutron capture therapy (BNCT).

    International Nuclear Information System (INIS)

    Carpano, M; Dagrosa, A; Brandizzi, D; Nievas, S; Olivera, M S; Perona, M; Rodriguez, C; Cabrini, R; Juvenal, G; Pisarev, M

    2012-01-01

    Introduction: Tumors from different patients with the same histological diagnosis can show different responses to ionizing radiation, including BNCT. Further knowledge about individual tumor characteristics is needed in order to optimize the individual application of this therapy. In previous studies we have shown different patterns of intracellular boron concentration in three human melanoma cell lines. When we performed xenografts with these cell lines in nude mice, a wide range of boron concentrations in tumor was observed. We also evaluated the tumor temperature obtained by thermography. Objectives: The aim of this study was to evaluate the differences in BPA uptake related to the different histological and thermal characteristics of each tumor in nude mice bearing human melanoma. We also studied proliferation and vasculature in tumors by immunohistochemical studies and their relationship with BPA uptake. Materials and Methods: NIH nude mice of 6-8 weeks were implanted (s.c.) in the right back flank with 3 × 10⁶ human melanoma cells (MELJ). To evaluate BPA uptake, animals were injected at a dose of 350 mg/kg b.w. (i.p.) and sacrificed 2 h post administration. Each tumor sample was divided into two equal parts, one for boron uptake measurement and another for histological studies. Boron measurements in tissues were performed by ICP-OES. For the histological studies, samples from the tumors were fixed in buffered 10% formaldehyde, embedded in paraffin and stained with hematoxylin and eosin (HE). Infrared imaging studies were performed the day before the biodistribution, measuring tumor and body temperatures. Immunohistochemical studies were performed with Ki-67 and CD31 antibodies; the first is a marker of proliferative rate and the second is a specific marker of endothelial cells that allows identification of the vasculature. Formaldehyde-fixed, paraffin-embedded tissues and avidin-biotin complex immunostaining were used. Results: Tumor BPA uptake showed

  8. Sample-interpolation timing: an optimized technique for the digital measurement of time of flight for γ rays and neutrons at relatively low sampling rates

    International Nuclear Information System (INIS)

    Aspinall, M D; Joyce, M J; Mackin, R O; Jarrah, Z; Boston, A J; Nolan, P J; Peyton, A J; Hawkes, N P

    2009-01-01

    A unique, digital time pick-off method, known as sample-interpolation timing (SIT), is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replica-analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa s⁻¹. Events arising from the ⁷Li(p,n)⁷Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals compared with analogue time pick-off methods replicated digitally, especially for fast signals that are sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential
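    Not the SIT algorithm itself, but a minimal sketch of the underlying idea of interpolating between digitized samples to obtain a sub-sample time pick-off, here for a simple leading-edge threshold on a synthetic pulse; the sampling rate, pulse shape and threshold fraction are illustrative.

        import numpy as np

        fs = 500e6                                     # illustrative sampling rate (Hz)
        t = np.arange(0, 200e-9, 1 / fs)
        pulse = np.exp(-((t - 60e-9) / 15e-9) ** 2)    # synthetic detector pulse

        threshold = 0.5 * pulse.max()
        i = int(np.argmax(pulse >= threshold))         # first sample at/above threshold
        frac = (threshold - pulse[i - 1]) / (pulse[i] - pulse[i - 1])
        t_cross = t[i - 1] + frac / fs                 # linear interpolation between samples
        print(f"threshold crossing at {t_cross * 1e9:.2f} ns")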

  9. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    Science.gov (United States)

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2018-05-01

    High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using the full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling times (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98). The proposed sampling strategy promises to achieve the target area under the curve as part of precision dosing.

  10. Sleep and optimism: A longitudinal study of bidirectional causal relationship and its mediating and moderating variables in a Chinese student sample.

    Science.gov (United States)

    Lau, Esther Yuet Ying; Hui, C Harry; Lam, Jasmine; Cheung, Shu-Fai

    2017-01-01

    While both sleep and optimism have been found to be predictive of well-being, few studies have examined their relationship with each other. Neither do we know much about the mediators and moderators of the relationship. This study investigated (1) the causal relationship between sleep quality and optimism in a college student sample, (2) the role of symptoms of depression, anxiety, and stress as mediators, and (3) how circadian preference might moderate the relationship. Internet survey data were collected from 1,684 full-time university students (67.6% female, mean age = 20.9 years, SD = 2.66) at three time-points, spanning about 19 months. Measures included the Attributional Style Questionnaire, the Pittsburgh Sleep Quality Index, the Composite Scale of Morningness, and the Depression Anxiety Stress Scale-21. Moderate correlations were found among sleep quality, depressive mood, stress symptoms, anxiety symptoms, and optimism. Cross-lagged analyses showed a bidirectional effect between optimism and sleep quality. Moreover, path analyses demonstrated that anxiety and stress symptoms partially mediated the influence of optimism on sleep quality, while depressive mood partially mediated the influence of sleep quality on optimism. In support of our hypothesis, sleep quality affects mood symptoms and optimism differently for different circadian preferences. Poor sleep results in depressive mood and thus pessimism in non-morning persons only. In contrast, the aggregated (direct and indirect) effects of optimism on sleep quality were invariant of circadian preference. Taken together, people who are pessimistic generally have more anxious mood and stress symptoms, which adversely affect sleep while morningness seems to have a specific protective effect countering the potential damage poor sleep has on optimism. In conclusion, optimism and sleep quality were both cause and effect of each other. Depressive mood partially explained the effect of sleep quality on optimism

  11. Review of the critical limits of the optimal hydric interval

    Directory of Open Access Journals (Sweden)

    Miguel Angel PIlatti

    2012-07-01

    Full Text Available The Optimal Hydric Interval (IHO) is the fraction of soil water that is easily available to crops, within which the soil can be penetrated by roots without major resistance and aeration does not limit root respiration. In this paper the upper and lower limits of the IHO are discussed. The upper limit is θCC (the water content retained at field capacity) provided it guarantees an acceptable air capacity (θa); otherwise, the limit is reached when θa no longer restricts root respiration. The lower limit is determined by the larger of θRP (the soil water content below which roots restrict their growth) and θFU (the easily usable water content below which water stress begins). The validity of these limits and the methodological difficulties involved in their determination are analyzed and discussed. IHO values obtained by other authors, who used different limits, are compared with those calculated with the critical limits proposed here. Each agronomic situation (combination of soil, climate and crop) requires particular IHO values that must be determined for each region. For the northern Pampas region (Argentina) and its usual crops we propose the following critical values: θCC = water content at -10 kPa; θa = 15%; θRP = 2.5 to 6 MPa (depending on clay percentage) and θFU = -0.17 MPa.

  12. Optimal sampling theory and population modelling - Application to determination of the influence of the microgravity environment on drug distribution and elimination

    Science.gov (United States)

    Drusano, George L.

    1991-01-01

    The optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by a traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with the NONMEM (Sheiner et al., 1977) approach, in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.

  13. Overconfidence in Interval Estimates

    Science.gov (United States)

    Soll, Jack B.; Klayman, Joshua

    2004-01-01

    Judges were asked to make numerical estimates (e.g., "In what year was the first flight of a hot air balloon?"). Judges provided high and low estimates such that they were X% sure that the correct answer lay between them. They exhibited substantial overconfidence: The correct answer fell inside their intervals much less than X% of the time. This…

  14. A flexible Bayesian assessment for the expected impact of data on prediction confidence for optimal sampling designs

    Science.gov (United States)

    Leube, Philipp; Geiges, Andreas; Nowak, Wolfgang

    2010-05-01

    Incorporating hydrogeological data, such as head and tracer data, into stochastic models of subsurface flow and transport helps to reduce prediction uncertainty. Considering limited financial resources available for the data acquisition campaign, information needs towards the prediction goal should be satisfied in a efficient and task-specific manner. For finding the best one among a set of design candidates, an objective function is commonly evaluated, which measures the expected impact of data on prediction confidence, prior to their collection. An appropriate approach to this task should be stochastically rigorous, master non-linear dependencies between data, parameters and model predictions, and allow for a wide variety of different data types. Existing methods fail to fulfill all these requirements simultaneously. For this reason, we introduce a new method, denoted as CLUE (Cross-bred Likelihood Uncertainty Estimator), that derives the essential distributions and measures of data utility within a generalized, flexible and accurate framework. The method makes use of Bayesian GLUE (Generalized Likelihood Uncertainty Estimator) and extends it to an optimal design method by marginalizing over the yet unknown data values. Operating in a purely Bayesian Monte-Carlo framework, CLUE is a strictly formal information processing scheme free of linearizations. It provides full flexibility associated with the type of measurements (linear, non-linear, direct, indirect) and accounts for almost arbitrary sources of uncertainty (e.g. heterogeneity, geostatistical assumptions, boundary conditions, model concepts) via stochastic simulation and Bayesian model averaging. This helps to minimize the strength and impact of possible subjective prior assumptions, that would be hard to defend prior to data collection. Our study focuses on evaluating two different uncertainty measures: (i) expected conditional variance and (ii) expected relative entropy of a given prediction goal. The
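    The expected conditional variance mentioned above can be illustrated with a toy likelihood-weighted Monte Carlo (GLUE-like) sketch for comparing two candidate measurement designs; the scalar parameter, the measurement models and the noise level are illustrative stand-ins, not the CLUE implementation.

        import numpy as np

        rng = np.random.default_rng(2)
        theta = rng.normal(0.0, 1.0, 5000)        # prior ensemble of an uncertain parameter
        prediction = theta ** 2                    # prediction goal (nonlinear in theta)
        sigma = 0.3                                # assumed measurement noise std. dev.

        def expected_conditional_variance(measure, n_data_draws=200):
            sim = measure(theta)                   # what each ensemble member would be measured as
            total = 0.0
            for _ in range(n_data_draws):
                k = rng.integers(theta.size)       # synthesize one possible data value
                d = sim[k] + rng.normal(0.0, sigma)
                w = np.exp(-0.5 * ((d - sim) / sigma) ** 2)   # Gaussian likelihood weights
                w /= w.sum()
                mean = np.sum(w * prediction)
                total += np.sum(w * (prediction - mean) ** 2)
            return total / n_data_draws            # average over possible data realizations

        print("prior variance of prediction:", round(float(prediction.var()), 3))
        print("design A (measure theta):    ", round(expected_conditional_variance(lambda th: th), 3))
        print("design B (measure theta**2): ", round(expected_conditional_variance(lambda th: th ** 2), 3))

    The design with the smaller expected conditional variance is preferred before any data are collected, which is the sense in which such a measure scores candidate sampling designs.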

  15. Applications of interval computations

    CERN Document Server

    Kreinovich, Vladik

    1996-01-01

    Primary Audience for the Book • Specialists in numerical computations who are interested in algorithms with automatic result verification. • Engineers, scientists, and practitioners who desire results with automatic verification and who would therefore benefit from the experience of successful applications. • Students in applied mathematics and computer science who want to learn these methods. Goal Of the Book This book contains surveys of applications of interval computations, i.e., applications of numerical methods with automatic result verification, that were presented at an international workshop on the subject in El Paso, Texas, February 23-25, 1995. The purpose of this book is to disseminate detailed and surveyed information about existing and potential applications of this new growing field. Brief Description of the Papers At the most fundamental level, interval arithmetic operations work with sets: The result of a single arithmetic operation is the set of all possible results as the o...

  16. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

    This presentation describes the essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variations, RPV shell inner radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is also described. 7 pictures

  17. Multiple response optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry with sample injection as detergent emulsion

    International Nuclear Information System (INIS)

    Brum, Daniel M.; Lima, Claudio F.; Robaina, Nicolle F.; Fonseca, Teresa Cristina O.; Cassella, Ricardo J.

    2011-01-01

    The present paper reports the optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS) employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized in relation to the experimental conditions for the emulsion formation and taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as a dependent variable. The three factors related to the optimization process were: the concentration of HNO3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in aqueous solution used to emulsify the sample and the volume of solution. At optimum conditions, it was possible to obtain satisfactory results with an emulsion formed by mixing 4 mL of the samples with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO3 medium. The resulting emulsion was stable for 250 min, at least, and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure and recovery rates, in the range of 88-105%; 94-118% and 95-120%, were verified for Cu, Fe and Pb, respectively.

  18. Multiple response optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry with sample injection as detergent emulsion

    Energy Technology Data Exchange (ETDEWEB)

    Brum, Daniel M.; Lima, Claudio F. [Departamento de Quimica, Universidade Federal de Vicosa, A. Peter Henry Rolfs s/n, Vicosa/MG, 36570-000 (Brazil); Robaina, Nicolle F. [Departamento de Quimica Analitica, Universidade Federal Fluminense, Outeiro de S.J. Batista s/n, Centro, Niteroi/RJ, 24020-141 (Brazil); Fonseca, Teresa Cristina O. [Petrobras, Cenpes/PDEDS/QM, Av. Horacio Macedo 950, Ilha do Fundao, Rio de Janeiro/RJ, 21941-915 (Brazil); Cassella, Ricardo J., E-mail: cassella@vm.uff.br [Departamento de Quimica Analitica, Universidade Federal Fluminense, Outeiro de S.J. Batista s/n, Centro, Niteroi/RJ, 24020-141 (Brazil)

    2011-05-15

    The present paper reports the optimization of Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS) employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized in relation to the experimental conditions for the emulsion formation, taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as the dependent variable. The three factors related to the optimization process were: the concentration of HNO3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in the aqueous solution used to emulsify the sample, and the volume of solution. Under optimum conditions, it was possible to obtain satisfactory results with an emulsion formed by mixing 4 mL of the sample with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO3 medium. The resulting emulsion was stable for at least 250 min and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure, and recovery rates in the ranges of 88-105%, 94-118% and 95-120% were verified for Cu, Fe and Pb, respectively.

  19. Using the confidence interval confidently.

    Science.gov (United States)

    Hazra, Avijit

    2017-10-01

    Biomedical research is seldom done with entire populations but rather with samples drawn from a population. Although we work with samples, our goal is to describe and draw inferences regarding the underlying population. It is possible to use a sample statistic and estimates of error in the sample to get a fair idea of the population parameter, not as a single value, but as a range of values. This range is the confidence interval (CI) which is estimated on the basis of a desired confidence level. Calculation of the CI of a sample statistic takes the general form: CI = Point estimate ± Margin of error, where the margin of error is given by the product of a critical value (z) derived from the standard normal curve and the standard error of the point estimate. Calculation of the standard error varies depending on whether the sample statistic of interest is a mean, proportion, odds ratio (OR), and so on. The factors affecting the width of the CI include the desired confidence level, the sample size and the variability in the sample. Although the 95% CI is most often used in biomedical research, a CI can be calculated for any level of confidence. A 99% CI will be wider than a 95% CI for the same sample. Conflict between clinical importance and statistical significance is an important issue in biomedical research. Clinical importance is best inferred by looking at the effect size, that is, how large the actual change or difference is. However, statistical significance in terms of P only suggests whether there is any difference in probability terms. Use of the CI supplements the P value by providing an estimate of actual clinical effect. Of late, clinical trials are being designed specifically as superiority, non-inferiority or equivalence studies. The conclusions from these alternative trial designs are based on CI values rather than the P value from intergroup comparison.
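The general form CI = point estimate ± z × standard error can be made concrete with a short sketch; the z value of 1.96 corresponds to a 95% confidence level, and the sample figures passed to the functions are hypothetical.

```python
import math

def ci_mean(xbar, sd, n, z=1.96):
    """CI for a sample mean: point estimate +/- z * standard error of the mean."""
    se = sd / math.sqrt(n)
    return xbar - z * se, xbar + z * se

def ci_proportion(p, n, z=1.96):
    """CI for a sample proportion using the normal approximation."""
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Hypothetical examples: a mean of 120 (SD 15) from 50 subjects, and 30% responders out of 200.
print(ci_mean(120.0, 15.0, 50))
print(ci_proportion(0.30, 200))
```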

  20. Optimization of a method based on micro-matrix solid-phase dispersion (micro-MSPD) for the determination of PCBs in mussel samples

    Directory of Open Access Journals (Sweden)

    Nieves Carro

    2017-03-01

    Full Text Available This paper reports the development and optimization of micro-matrix solid-phase dispersion (micro-MSPD) of nine polychlorinated biphenyls (PCBs) in mussel samples (Mytilus galloprovincialis) by using a two-level factorial design. Four variables (amount of sample, anhydrous sodium sulphate, Florisil and solvent volume) were considered as factors in the optimization process. The results suggested that only the interaction between the amount of anhydrous sodium sulphate and the solvent volume was statistically significant for the overall recovery of a trichlorinated compound, CB 28. Generally, most of the considered species exhibited a similar behaviour: the sample and Florisil amounts had a positive effect on PCB extraction, whereas the solvent volume and sulphate amount had a negative effect. The analytical determination and confirmation of PCBs were carried out by using GC-ECD and GC-MS/MS, respectively. The method was validated, having satisfactory precision and accuracy with RSD values below 6% and recoveries between 81 and 116% for all congeners. The optimized method was applied to the extraction of real mussel samples from two Galician Rías.

  1. Optimization of basic parameters in temperature-programmed gas chromatographic separations of multi-component samples within a given time

    NARCIS (Netherlands)

    Repka, D.; Krupcik, J.; Brunovska, A.; Leclercq, P.A.; Rijks, J.A.

    1989-01-01

    A new procedure is introduced for the optimization of column peak capacity in a given time. The optimization focuses on temperature-programmed operating conditions, notably the initial temperature and hold time, and the programming rate. Based conceptually upon Lagrange functions, experiments were

  2. Magnetic Resonance Fingerprinting with short relaxation intervals.

    Science.gov (United States)

    Amthor, Thomas; Doneva, Mariya; Koken, Peter; Sommer, Karsten; Meineke, Jakob; Börnert, Peter

    2017-09-01

    The aim of this study was to investigate a technique for improving the performance of Magnetic Resonance Fingerprinting (MRF) in repetitive sampling schemes, in particular for 3D MRF acquisition, by shortening relaxation intervals between MRF pulse train repetitions. A calculation method for MRF dictionaries adapted to short relaxation intervals and non-relaxed initial spin states is presented, based on the concept of stationary fingerprints. The method is applicable to many different k-space sampling schemes in 2D and 3D. For accuracy analysis, T1 and T2 values of a phantom are determined by single-slice Cartesian MRF for different relaxation intervals and are compared with quantitative reference measurements. The relevance of slice profile effects is also investigated in this case. To further illustrate the capabilities of the method, an application to in-vivo spiral 3D MRF measurements is demonstrated. The proposed computation method enables accurate parameter estimation even for the shortest relaxation intervals, as investigated for different sampling patterns in 2D and 3D. In 2D Cartesian measurements, we achieved a scan acceleration of more than a factor of two, while maintaining acceptable accuracy: The largest T1 values of a sample set deviated from their reference values by 0.3% (longest relaxation interval) and 2.4% (shortest relaxation interval). The largest T2 values showed systematic deviations of up to 10% for all relaxation intervals, which is discussed. The influence of slice profile effects for multislice acquisition is shown to become increasingly relevant for short relaxation intervals. In 3D spiral measurements, a scan time reduction of 36% was achieved, maintaining the quality of in-vivo T1 and T2 maps. Reducing the relaxation interval between MRF sequence repetitions using stationary fingerprint dictionaries is a feasible method to improve the scan efficiency of MRF sequences. The method enables fast implementations of 3D spatially

  3. Specific amplification of bacterial DNA by optimized so-called universal bacterial primers in samples rich of plant DNA.

    Science.gov (United States)

    Dorn-In, Samart; Bassitta, Rupert; Schwaiger, Karin; Bauer, Johann; Hölzel, Christina S

    2015-06-01

    Universal primers targeting the bacterial 16S-rRNA-gene allow quantification of the total bacterial load in variable sample types by qPCR. However, many universal primer pairs also amplify DNA of plants or even of archaea and other eukaryotic cells. By using these primers, the total bacterial load might be misevaluated whenever samples contain high amounts of non-target DNA. Thus, this study aimed to provide primer pairs which are suitable for quantification and identification of bacterial DNA in samples such as feed, spices and sample material from digesters. For 42 primers, mismatches to the sequences of plant chloroplasts and mitochondria were evaluated. Six primer pairs were further analyzed with regard to whether they anneal to DNA of archaea, animal tissue and fungi. Subsequently they were tested with sample matrices such as plants, feed, feces, soil and environmental samples. For this purpose, the target DNA in the samples was quantified by qPCR. The PCR products of plant and feed samples were further processed for the Single Strand Conformation Polymorphism method followed by sequence analysis. The sequencing results revealed that primer pair 335F/769R amplified only bacterial DNA in samples such as plants and animal feed, in which the DNA of plants prevailed. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Optimization of the Extraction of the Volatile Fraction from Honey Samples by SPME-GC-MS, Experimental Design, and Multivariate Target Functions

    Directory of Open Access Journals (Sweden)

    Elisa Robotti

    2017-01-01

    Full Text Available Head space (HS) solid phase microextraction (SPME) followed by gas chromatography with mass spectrometry detection (GC-MS) is the most widespread technique to study the volatile profile of honey samples. In this paper, the experimental SPME conditions were optimized by a multivariate strategy. Both sensitivity and repeatability were optimized by experimental design techniques considering three factors: extraction temperature (from 50°C to 70°C), time of exposure of the fiber (from 20 min to 60 min), and amount of salt added (from 0 to 27.50%). Each experiment was evaluated by Principal Component Analysis (PCA), which allows all the analytes to be taken into consideration at the same time, preserving the information about their different characteristics. Optimal extraction conditions were identified independently for signal intensity (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) and repeatability (extraction temperature: 50°C; extraction time: 60 min; salt percentage: 27.50% w/w), and a final global compromise (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) was also reached. Considerations about the choice of the best internal standards were also drawn. The whole optimized procedure was then applied to the analysis of a multiflower honey sample, and more than 100 compounds were identified.
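A minimal sketch of how each design experiment could be scored with PCA so that all volatiles are considered at once: the peak-area matrix is autoscaled and the first principal component score is used as a combined response. The data are randomly generated stand-ins, and the use of the PC1 score as the response is an assumption about the approach, not the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical peak areas: rows = SPME design experiments, columns = volatile compounds.
areas = np.random.default_rng(0).lognormal(mean=10, sigma=0.3, size=(15, 40))

# Autoscale so every compound weighs equally, regardless of its absolute abundance.
scaled = (areas - areas.mean(axis=0)) / areas.std(axis=0)

# The score on the first principal component summarizes all compounds at once
# and can serve as a single response variable for the experimental design.
pc1_score = PCA(n_components=1).fit_transform(scaled).ravel()
print(pc1_score)
```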

  5. Chaos on the interval

    CERN Document Server

    Ruette, Sylvie

    2017-01-01

    The aim of this book is to survey the relations between the various kinds of chaos and related notions for continuous interval maps from a topological point of view. The papers on this topic are numerous and widely scattered in the literature; some of them are little known, difficult to find, or originally published in Russian, Ukrainian, or Chinese. Dynamical systems given by the iteration of a continuous map on an interval have been broadly studied because they are simple but nevertheless exhibit complex behaviors. They also allow numerical simulations, which enabled the discovery of some chaotic phenomena. Moreover, the "most interesting" part of some higher-dimensional systems can be of lower dimension, which allows, in some cases, boiling it down to systems in dimension one. Some of the more recent developments such as distributional chaos, the relation between entropy and Li-Yorke chaos, sequence entropy, and maps with infinitely many branches are presented in book form for the first time. The author gi...

  6. Interval Solution for Nonlinear Programming of Maximizing the Fatigue Life of V-Belt under Polymorphic Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Zhong Wan

    2013-01-01

    Full Text Available In accordance with practical engineering design conditions, a nonlinear programming model is constructed for maximizing the fatigue life of a V-belt drive in which some polymorphic uncertainties are incorporated. For a given satisfaction level and a confidence level, an equivalent formulation of this uncertain optimization model is obtained in which only interval parameters are involved. Based on the concepts of maximal and minimal range inequalities for describing interval inequality, the interval parameter model is decomposed into two standard nonlinear programming problems, and an algorithm, called the two-step based sampling algorithm, is developed to find an interval optimal solution for the original problem. A case study is employed to demonstrate the validity and practicability of the constructed model and the algorithm.

  7. Multichannel interval timer

    International Nuclear Information System (INIS)

    Turko, B.T.

    1983-10-01

    A CAMAC based modular multichannel interval timer is described. The timer comprises twelve high-resolution time digitizers with a common start enabling twelve independent stop inputs. Ten time ranges from 2.5 μs to 1.3 μs can be preset. Time can be read out in twelve 24-bit words either via CAMAC Crate Controller or an external FIFO register. LSB time calibration is 78.125 ps. An additional word reads out the operational status of the twelve stop channels. The system consists of two modules. The analog module contains a reference clock and 13 analog time stretchers. The digital module contains counters, logic and interface circuits. The timer has excellent differential linearity, thermal stability and crosstalk-free performance

  8. Experimenting with musical intervals

    Science.gov (United States)

    Lo Presto, Michael C.

    2003-07-01

    When two tuning forks of different frequency are sounded simultaneously the result is a complex wave with a repetition frequency that is the fundamental of the harmonic series to which both frequencies belong. The ear perceives this 'musical interval' as a single musical pitch with a sound quality produced by the harmonic spectrum responsible for the waveform. This waveform can be captured and displayed with data collection hardware and software. The fundamental frequency can then be calculated and compared with what would be expected from the frequencies of the tuning forks. Also, graphing software can be used to determine equations for the waveforms and predict their shapes. This experiment could be used in an introductory physics or musical acoustics course as a practical lesson in superposition of waves, basic Fourier series and the relationship between some of the ear's subjective perceptions of sound and the physical properties of the waves that cause them.
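For two tuning forks with integer frequencies, the repetition frequency of the combined waveform is the fundamental of the harmonic series containing both, i.e. their greatest common divisor; the sketch below assumes integer frequencies in hertz and uses illustrative fork values.

```python
from math import gcd

def repetition_frequency(f1_hz, f2_hz):
    """Fundamental of the harmonic series containing both (integer) frequencies."""
    return gcd(f1_hz, f2_hz)

# A perfect fifth, e.g. tuning forks at 384 Hz and 256 Hz, repeats at 128 Hz.
print(repetition_frequency(384, 256))  # -> 128
```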

  9. Optimization of the Analytical Method Using HPLC with Fluorescence Detection to Determine Selected Polycyclic Aromatic Compounds in Clean Water Samples

    International Nuclear Information System (INIS)

    Garcia Alonso, S.; Perez Pastor, R. M.

    2013-01-01

    A study on the comparison and evaluation of 3 miniaturized extraction methods for the determination of selected PACs in clear waters is presented. Three types of liquid-liquid extraction were used for chromatographic analysis by HPLC with fluorescence detection. The main objective was the optimization and development of simple, rapid and low-cost methods, minimizing the volume of extracting solvent used. The work also includes a study of the scope of the developed methods at low and high concentration levels and of their intermediate precision. (Author)

  10. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection.

    Directory of Open Access Journals (Sweden)

    Thadeous J Kacmarczyk

    Full Text Available Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial consideration or experimental convenience, with limited understanding on the effects on the experimental results. Here we set to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane for sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal.

  11. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection

    Science.gov (United States)

    Kacmarczyk, Thadeous J.; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial consideration or experimental convenience, with limited understanding on the effects on the experimental results. Here we set to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane for sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal. PMID:26066343

  12. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection.

    Science.gov (United States)

    Kacmarczyk, Thadeous J; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial consideration or experimental convenience, with limited understanding on the effects on the experimental results. Here we set to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing. These include false discovery rates, size, position and statistical significance of peak detection, and changes in gene annotation. We found that, for histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane for sample (~181 million reads). Furthermore, there are no variations introduced by indexing or lane batch effects and importantly there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal.

  13. Determination of As, Cd, and Pb in Tap Water and Bottled Water Samples by Using Optimized GFAAS System with Pd-Mg and Ni as Matrix Modifiers

    Directory of Open Access Journals (Sweden)

    Sezgin Bakırdere

    2013-01-01

    Full Text Available Arsenic, lead, and cadmium were determined at trace levels in tap and bottled water samples consumed in the western part of Turkey. Graphite furnace atomic absorption spectrometry (GFAAS) was used in all detections. All of the system parameters for each element were optimized to increase sensitivity. A Pd-Mg mixture was selected as the best matrix modifier for As, while the highest signals were obtained for Pb and Cd when Ni was used as the matrix modifier. Detection limits for As, Cd, and Pb were found to be 2.0, 0.036, and 0.25 ng/mL, respectively. 78 tap water and 17 different brands of bottled water samples were analyzed for their As, Cd, and Pb contents under the optimized conditions. In all water samples, the concentration of cadmium was found to be below the detection limit. The lead concentration in the samples analyzed varied between N.D. and 12.66 ± 0.68 ng/mL. The highest concentration of arsenic was determined as 11.54 ± 2.79 ng/mL. The accuracy of the methods was verified by using a certified reference material, namely Trace Elements in Water, 1643e. Results found for As, Cd, and Pb in the reference material were in satisfactory agreement with the certified values.

  14. Gas chromatographic-mass spectrometric analysis of urinary volatile organic metabolites: Optimization of the HS-SPME procedure and sample storage conditions.

    Science.gov (United States)

    Živković Semren, Tanja; Brčić Karačonji, Irena; Safner, Toni; Brajenović, Nataša; Tariba Lovaković, Blanka; Pizent, Alica

    2018-01-01

    Non-targeted metabolomics research of the human volatile urinary metabolome can be used to identify potential biomarkers associated with the changes in metabolism related to various health disorders. To ensure reliable analysis of urinary volatile organic metabolites (VOMs) by gas chromatography-mass spectrometry (GC-MS), parameters affecting the headspace-solid phase microextraction (HS-SPME) procedure have been evaluated and optimized. The influence of incubation and extraction temperatures and times, coating fibre material and salt addition on SPME efficiency was investigated by multivariate optimization methods using reduced factorial and Doehlert matrix designs. The results showed optimum values for temperature to be 60°C, extraction time 50 min, and incubation time 35 min. The proposed conditions were applied to investigate urine samples' stability regarding different storage conditions and freeze-thaw processes. The sum of peak areas of urine samples stored at 4°C, -20°C, and -80°C up to six months showed a time-dependent decrease, although storage at -80°C resulted in a slight, non-significant reduction compared to the fresh sample. However, due to the volatile nature of the analysed compounds, more than two cycles of freezing/thawing of the sample stored for six months at -80°C should be avoided whenever possible. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. An Optimized Set of Fluorescence In Situ Hybridization Probes for Detection of Pancreatobiliary Tract Cancer in Cytology Brush Samples.

    Science.gov (United States)

    Barr Fritcher, Emily G; Voss, Jesse S; Brankley, Shannon M; Campion, Michael B; Jenkins, Sarah M; Keeney, Matthew E; Henry, Michael R; Kerr, Sarah M; Chaiteerakij, Roongruedee; Pestova, Ekaterina V; Clayton, Amy C; Zhang, Jun; Roberts, Lewis R; Gores, Gregory J; Halling, Kevin C; Kipp, Benjamin R

    2015-12-01

    Pancreatobiliary cancer is detected by fluorescence in situ hybridization (FISH) of pancreatobiliary brush samples with UroVysion probes, originally designed to detect bladder cancer. We designed a set of new probes to detect pancreatobiliary cancer and compared its performance with that of UroVysion and routine cytology analysis. We tested a set of FISH probes on tumor tissues (cholangiocarcinoma or pancreatic carcinoma) and non-tumor tissues from 29 patients. We identified 4 probes that had high specificity for tumor vs non-tumor tissues; we called this set of probes pancreatobiliary FISH. We performed a retrospective analysis of brush samples from 272 patients who underwent endoscopic retrograde cholangiopancreatography for evaluation of malignancy at the Mayo Clinic; results were available from routine cytology and FISH with UroVysion probes. Archived residual specimens were retrieved and used to evaluate the pancreatobiliary FISH probes. Cutoff values for FISH with the pancreatobiliary probes were determined using 89 samples and validated in the remaining 183 samples. Clinical and pathologic evidence of malignancy in the pancreatobiliary tract within 2 years of brush sample collection was used as the standard; samples from patients without malignancies were used as negative controls. The validation cohort included 85 patients with malignancies (46.4%) and 114 patients with primary sclerosing cholangitis (62.3%). Samples containing cells above the cutoff for polysomy (copy number gain of ≥2 probes) were classified as positive in FISH with the UroVysion and pancreatobiliary probes. Multivariable logistic regression was used to estimate associations between clinical and pathology findings and results from FISH. The combination of FISH probes 1q21, 7p12, 8q24, and 9p21 identified cancer cells with 93% sensitivity and 100% specificity in pancreatobiliary tissue samples and were therefore included in the pancreatobiliary probe set. In the validation cohort of

  16. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    International Nuclear Information System (INIS)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-01-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real-time. However, to use such sensors effectively, vehicles must exit the traffic stream and slow down to match the sensors' current capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experiences of operation engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition of high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of the vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors. (paper)
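The abstract does not give the resulting sample-rate rule, so the sketch below only illustrates the kind of speed-dependent requirement involved: assuming a fixed sensor-tire interaction length and a minimum number of readings per axle passage, the required rate grows linearly with vehicle speed. Both assumed parameters are hypothetical.

```python
def min_sampling_rate_hz(speed_m_s, interaction_length_m=0.25, samples_per_pass=20):
    """Illustrative minimum sample rate so that 'samples_per_pass' readings fall
    within the time an axle spends over the strain sensor (assumed geometry)."""
    dwell_time_s = interaction_length_m / speed_m_s
    return samples_per_pass / dwell_time_s

for v_kmh in (30, 60, 100, 130):
    v = v_kmh / 3.6  # convert km/h to m/s
    print(f"{v_kmh:>3} km/h -> >= {min_sampling_rate_hz(v):.0f} Hz")
```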

  17. Cost-constrained optimal sampling for system identification in pharmacokinetics applications with population priors and nuisance parameters.

    Science.gov (United States)

    Sorzano, Carlos Oscar S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar

    2015-06-01

    Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is often performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters with a fixed number of samples (cost constrained). We use Monte Carlo simulations to estimate the average Fisher's information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed employing a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient and that it can accommodate any dosing regimen and allows flexible therapeutic strategies. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.
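A rough sketch of the idea under stated assumptions: a one-compartment oral-absorption model, Monte Carlo draws from a hypothetical population prior, a numerically approximated Fisher information matrix, and candidate schedules scored by their worst relative parameter variance. A plain random search stands in for the genetic algorithm used in the paper, and all model and prior values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def conc(t, ka, ke, V, dose=100.0):
    """One-compartment model with first-order absorption (illustrative)."""
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def fim(times, theta, sigma=0.1, h=1e-4):
    """Numerical Fisher information assuming additive Gaussian measurement error."""
    theta = np.asarray(theta, float)
    J = np.empty((len(times), len(theta)))
    for j in range(len(theta)):
        dp, dm = theta.copy(), theta.copy()
        dp[j] += h
        dm[j] -= h
        J[:, j] = (conc(times, *dp) - conc(times, *dm)) / (2 * h)
    return J.T @ J / sigma**2

# Hypothetical population prior for (ka [1/h], ke [1/h], V [L]).
prior_draws = np.column_stack([
    rng.lognormal(np.log(1.0), 0.3, 200),
    rng.lognormal(np.log(0.2), 0.3, 200),
    rng.lognormal(np.log(10.0), 0.2, 200),
])

candidate_times = np.arange(0.5, 24.5, 0.5)
n_samples = 4  # cost constraint: only four blood draws allowed

def worst_case_uncertainty(times):
    """Average, over prior draws, of the largest relative parameter variance."""
    scores = []
    for theta in prior_draws[:50]:  # a subset keeps the sketch fast
        cov = np.linalg.inv(fim(times, theta) + 1e-9 * np.eye(3))
        scores.append((cov.diagonal() / theta**2).max())
    return np.mean(scores)

# Random search over candidate schedules (stand-in for the genetic algorithm).
best = min(
    (rng.choice(candidate_times, n_samples, replace=False) for _ in range(300)),
    key=lambda ts: worst_case_uncertainty(np.sort(ts)),
)
print("suggested sampling times (h):", np.sort(best))
```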

  18. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    Science.gov (United States)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-06-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real-time. However, to use such sensors effectively, vehicles must exit the traffic stream and slow down to match the sensors' current capabilities. Hence, agencies need devices with higher vehicle passing speed capabilities to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented. Deployment configurations and settings depend mainly on the experiences of operation engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing. The larger data memory requirements from higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition of high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of the vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors.

  19. Optimization of microwave assisted digestion procedure for the determination of zinc, copper and nickel in tea samples employing flame atomic absorption spectrometry

    International Nuclear Information System (INIS)

    Soylak, Mustafa; Tuzen, Mustafa; Souza, Anderson Santos; Korn, Maria das Gracas Andrade; Ferreira, Sergio Luis Costa

    2007-01-01

    The present paper describes the development of a microwave assisted digestion procedure for the determination of zinc, copper and nickel in tea samples employing flame atomic absorption spectrometry (FAAS). The optimization step was performed using a full factorial design (2³) involving the factors: composition of the acid mixture (CMA), microwave power (MP) and radiation time (RT). The experiments of this factorial design were carried out using a certified reference material of tea (GBW 07605) furnished by the National Research Centre for Certified Reference Materials, China, with the metal recoveries considered as the response. The relative standard deviations of the method were found to be below 8% for the three elements. The proposed procedure was used for the determination of copper, zinc and nickel in several samples of tea from Turkey. For the 10 tea samples analyzed, the concentrations found for copper, zinc and nickel were in the ranges 6.4-13.1, 7.0-16.5 and 3.1-5.7 μg g⁻¹, respectively
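A small sketch of a 2³ full factorial design of the kind described: the eight coded runs are generated and the main effect of each factor is estimated as the difference between mean recoveries at the high and low levels. The recovery values are hypothetical.

```python
import itertools
import numpy as np

factors = ["acid_mixture", "microwave_power", "radiation_time"]

# Coded levels (-1 = low, +1 = high); 2^3 = 8 runs.
design = np.array(list(itertools.product([-1, 1], repeat=3)))

# Hypothetical metal recoveries (%) on the certified reference material, one per run.
recovery = np.array([88, 94, 90, 97, 91, 99, 93, 102], dtype=float)

# Main effect of each factor = mean response at +1 minus mean response at -1.
for i, name in enumerate(factors):
    effect = recovery[design[:, i] == 1].mean() - recovery[design[:, i] == -1].mean()
    print(f"{name:>15}: {effect:+.1f} % recovery")
```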

  20. Sampling Development

    Science.gov (United States)

    Adolph, Karen E.; Robinson, Scott R.

    2011-01-01

    Research in developmental psychology requires sampling at different time points. Accurate depictions of developmental change provide a foundation for further empirical studies and theories about developmental mechanisms. However, overreliance on widely spaced sampling intervals in cross-sectional and longitudinal designs threatens the validity of…

  1. Sample pretreatment optimization for the analysis of short chain chlorinated paraffins in soil with gas chromatography-electron capture negative ion-mass spectrometry.

    Science.gov (United States)

    Chen, Laiguo; Huang, Yumei; Han, Shuang; Feng, Yongbin; Jiang, Guo; Tang, Caiming; Ye, Zhixiang; Zhan, Wei; Liu, Ming; Zhang, Sukun

    2013-01-25

    Accurately quantifying short chain chlorinated paraffins (SCCPs) in soil samples by gas chromatography coupled with electron capture negative ionization mass spectrometry (GC-ECNI-MS) is difficult because many other polychlorinated pollutants are present in the sample matrices. These pollutants (e.g., polychlorinated biphenyls (PCBs), organochlorine pesticides (OCPs) and toxaphene) can cause serious interferences during SCCP analysis with GC-MS. Four main columns packed with different adsorbents, including silica gel, Florisil and alumina, were investigated in this study to determine their performance for separating interfering pollutants from SCCPs. The experimental results suggest that the optimum cleanup procedure uses a silica gel column and a multilayer silica gel-Florisil composite column. This procedure completely separated 22 PCB congeners, 23 OCPs and three toxaphene congeners from SCCPs. However, p,p'-DDD, cis-nonachlor and o,p'-DDD were not completely removed and only 53% of the total toxaphene was removed. The optimized method was successfully applied to remove interfering pollutants from real soil samples. SCCPs in 17 soil samples from different land use areas within a suburban region were analyzed with the established method. The concentrations of SCCPs in these samples were between 7 and 541 ng g⁻¹ (mean: 84 ng g⁻¹). Similar SCCP homologue patterns were observed among the soil samples collected from different land use areas. In addition, lower-chlorinated (Cl6/7) C10- and C11-SCCPs were the dominant congeners. Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Two-dimensional T2 distribution mapping in rock core plugs with optimal k-space sampling.

    Science.gov (United States)

    Xiao, Dan; Balcom, Bruce J

    2012-07-01

    Spin-echo single point imaging has been employed for 1D T2 distribution mapping, but a simple extension to 2D is challenging since the time increase is n-fold, where n is the number of pixels in the second dimension. Nevertheless, 2D T2 mapping in fluid-saturated rock core plugs is highly desirable because the bedding plane structure in rocks often results in different pore properties within the sample. The acquisition time can be improved by undersampling k-space. The cylindrical shape of rock core plugs yields well-defined intensity distributions in k-space that may be efficiently determined by new k-space sampling patterns that are developed in this work. These patterns acquire 22.2% and 11.7% of the k-space data points. Companion density images may be employed, in a keyhole imaging sense, to improve image quality. T2-weighted images are fit to extract T2 distributions, pixel by pixel, employing an inverse Laplace transform. Images reconstructed with compressed sensing, with similar acceleration factors, are also presented. The results show that restricted k-space sampling, in this application, provides high quality results. Copyright © 2012 Elsevier Inc. All rights reserved.
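The pixel-wise inversion step can be sketched as a discrete inverse Laplace transform: a non-negative least squares fit of the echo decay onto a log-spaced T2 grid, with a small Tikhonov term for stability. The echo times, T2 grid, regularization weight, and synthetic two-component signal are all assumptions for illustration, not the acquisition parameters of the study.

```python
import numpy as np
from scipy.optimize import nnls

# Echo times of the T2-weighted series (s) and a log-spaced grid of candidate T2 values.
echo_times = np.linspace(0.002, 0.4, 32)
t2_grid = np.logspace(-3, 0, 60)

# Kernel of decaying exponentials: signal = K @ distribution.
K = np.exp(-echo_times[:, None] / t2_grid[None, :])

# Synthetic pixel: two pore environments (T2 = 20 ms and 200 ms) plus noise.
true = 0.7 * np.exp(-echo_times / 0.02) + 0.3 * np.exp(-echo_times / 0.2)
signal = true + np.random.default_rng(0).normal(0, 0.005, echo_times.size)

# A small Tikhonov term stabilizes the discrete inverse Laplace transform.
lam = 0.02
K_aug = np.vstack([K, lam * np.eye(t2_grid.size)])
y_aug = np.concatenate([signal, np.zeros(t2_grid.size)])
distribution, _ = nnls(K_aug, y_aug)
print("peak T2 estimates (s):", t2_grid[distribution > 0.5 * distribution.max()])
```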

  3. Improvements of the Vis-NIRS Model in the Prediction of Soil Organic Matter Content Using Spectral Pretreatments, Sample Selection, and Wavelength Optimization

    Science.gov (United States)

    Lin, Z. D.; Wang, Y. B.; Wang, R. J.; Wang, L. S.; Lu, C. P.; Zhang, Z. Y.; Song, L. T.; Liu, Y.

    2017-07-01

    A total of 130 topsoil samples collected from Guoyang County, Anhui Province, China, were used to establish a Vis-NIR model for the prediction of organic matter content (OMC) in lime concretion black soils. Different spectral pretreatments were applied for minimizing the irrelevant and useless information of the spectra and increasing the spectra correlation with the measured values. Subsequently, the Kennard-Stone (KS) method and sample set partitioning based on joint x-y distances (SPXY) were used to select the training set. Successive projection algorithm (SPA) and genetic algorithm (GA) were then applied for wavelength optimization. Finally, the principal component regression (PCR) model was constructed, in which the optimal number of principal components was determined using the leave-one-out cross validation technique. The results show that the combination of the Savitzky-Golay (SG) filter for smoothing and multiplicative scatter correction (MSC) can eliminate the effect of noise and baseline drift; the SPXY method is preferable to KS in the sample selection; both the SPA and the GA can significantly reduce the number of wavelength variables and favorably increase the accuracy, especially GA, which greatly improved the prediction accuracy of soil OMC with Rcc, RMSEP, and RPD up to 0.9316, 0.2142, and 2.3195, respectively.
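A compact sketch of principal component regression with leave-one-out cross-validation used to choose the number of components, in the spirit of the procedure described; the spectra and organic matter values below are randomly generated stand-ins, not the Guoyang County data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(130, 200))                              # stand-in for pretreated Vis-NIR spectra
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=130)   # stand-in for measured OMC

best_rmse, best_k = np.inf, None
for k in range(1, 16):
    model = make_pipeline(PCA(n_components=k), LinearRegression())
    pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
    rmse = np.sqrt(np.mean((pred - y) ** 2))
    if rmse < best_rmse:
        best_rmse, best_k = rmse, k
print(f"optimal number of principal components: {best_k} (RMSECV = {best_rmse:.3f})")
```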

  4. Application of Chitosan-Zinc Oxide Nanoparticles for Lead Extraction From Water Samples by Combining Ant Colony Optimization with Artificial Neural Network

    Science.gov (United States)

    Khajeh, M.; Pourkarami, A.; Arefnejad, E.; Bohlooli, M.; Khatibi, A.; Ghaffari-Moghaddam, M.; Zareian-Jahromi, S.

    2017-09-01

    Chitosan-zinc oxide nanoparticles (CZPs) were developed for solid-phase extraction. A combined artificial neural network-ant colony optimization (ANN-ACO) approach was used to model and optimize the simultaneous preconcentration and determination of lead (Pb2+) ions in water samples prior to graphite furnace atomic absorption spectrometry (GF AAS). The solution pH, mass of adsorbent CZPs, amount of 1-(2-pyridylazo)-2-naphthol (PAN), which was used as a complexing agent, eluent volume, eluent concentration, and flow rates of sample and eluent were used as input parameters of the ANN model, and the percentage of extracted Pb2+ ions was used as the output variable of the model. A multilayer perceptron network with a back-propagation learning algorithm was used to fit the experimental data. The optimum conditions were obtained based on the ACO. Under the optimized conditions, the limit of detection for Pb2+ ions was found to be 0.078 μg/L. This procedure was also successfully used to determine the amounts of Pb2+ ions in various natural water samples.

  5. Systematic approach to optimize a pretreatment method for ultrasensitive liquid chromatography with tandem mass spectrometry analysis of multiple target compounds in biological samples.

    Science.gov (United States)

    Togashi, Kazutaka; Mutaguchi, Kuninori; Komuro, Setsuko; Kataoka, Makoto; Yamazaki, Hiroshi; Yamashita, Shinji

    2016-08-01

    In current approaches for new drug development, highly sensitive and robust analytical methods for the determination of test compounds in biological samples are essential. These analytical methods should be optimized for every target compound. However, for biological samples that contain multiple compounds as new drug candidates obtained by cassette dosing tests, it would be preferable to develop a single method that allows the determination of all compounds at once. This study aims to establish a systematic approach that enables a selection of the most appropriate pretreatment method for multiple target compounds without the use of their chemical information. We investigated the retention times of 27 known compounds under different mobile phase conditions and determined the required pretreatment of human plasma samples using several solid-phase and liquid-liquid extractions. From the relationship between retention time and recovery in a principal component analysis, appropriate pretreatments were categorized into several types. Based on the category, we have optimized a pretreatment method for the identification of three calcium channel blockers in human plasma. Plasma concentrations of these drugs in a cassette-dose clinical study at microdose level were successfully determined with a lower limit of quantitation of 0.2 pg/mL for diltiazem, 1 pg/mL for nicardipine, and 2 pg/mL for nifedipine. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. An Optimal Sample Data Usage Strategy to Minimize Overfitting and Underfitting Effects in Regression Tree Models Based on Remotely-Sensed Data

    Directory of Open Access Journals (Sweden)

    Yingxin Gu

    2016-11-01

    Full Text Available Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, values from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MADtraining = 2.5 and MADtesting = 2.4) and was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
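The replication strategy can be sketched as a sweep over training fractions and rule-number constraints with repeated random splits; scikit-learn's DecisionTreeRegressor with a max_leaf_nodes cap stands in for the rule-based regression tree model of the study, and the predictor and NDVI data are synthetic.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.uniform(size=(2000, 6))                                           # stand-in for Landsat 8 predictors
y = 200 * (0.4 * X[:, 0] + 0.3 * X[:, 1] ** 2) + rng.normal(0, 5, 2000)   # stand-in for scaled NDVI

def mad(a, b):
    """Mean absolute difference between predictions and reference values."""
    return np.mean(np.abs(a - b))

# Sweep training fraction and rule-number constraint, averaging over random replications.
for frac in (0.5, 0.7, 0.8, 0.9):
    for leaves in (4, 6, 10, 20):
        tr_scores, te_scores = [], []
        for seed in range(10):
            Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=frac, random_state=seed)
            tree = DecisionTreeRegressor(max_leaf_nodes=leaves, random_state=seed).fit(Xtr, ytr)
            tr_scores.append(mad(tree.predict(Xtr), ytr))
            te_scores.append(mad(tree.predict(Xte), yte))
        print(f"train={frac:.0%} leaves={leaves:>2}  "
              f"MAD_train={np.mean(tr_scores):5.2f}  MAD_test={np.mean(te_scores):5.2f}")
```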

  7. Optimization of the sample dilution for minimization of the matrix effects and achieving maximal sensitivity in XRFA

    International Nuclear Information System (INIS)

    Dimov, L.; Benova, M.

    1989-01-01

    The method of neutral medium dilution can lead to practically full leveling of the matrix, but the high degree of dilution at which this effect is achieved will inevitably result in loss of sensitivity. In the XRFA of heavy elements in a light matrix, the dependence of the fluorescence intensity upon concentration is characterized by gradually decreasing steepness, reaching saturation (as in the analysis of Pb in a Pb concentrate). Dilution with a neutral medium can shift the concentration range to a zone of greater steepness, but this increases sensitivity towards the concentration in the initial (undiluted) material only up to a certain degree of dilution. This work presents an optimization of the degree of dilution with a neutral medium which achieves sufficient leveling of the matrix for different materials and also maximal sensitivity towards higher concentrations. Furthermore, an original solution is found to the problem of selecting a neutral medium that enables good homogenization with various materials and particularly with marble flour. (author)

  8. EXPERIMENTS TOWARDS DETERMINING BEST TRAINING SAMPLE SIZE FOR AUTOMATED EVALUATION OF DESCRIPTIVE ANSWERS THROUGH SEQUENTIAL MINIMAL OPTIMIZATION

    Directory of Open Access Journals (Sweden)

    Sunil Kumar C

    2014-01-01

    Full Text Available With the number of students growing each year, there is a strong need for automated systems capable of evaluating descriptive answers. Unfortunately, there aren't many systems capable of performing this task. In this paper, we use a machine learning tool called LightSIDE to accomplish auto evaluation and scoring of descriptive answers. Our experiments are designed to cater to our primary goal of identifying the optimum training sample size so as to achieve optimum auto scoring. Besides the technical overview and the experiment design, the paper also covers the challenges and benefits of the system. We also discuss interdisciplinary areas for future research on this topic.

  9. Is Using the Strengths and Difficulties Questionnaire in a Community Sample the Optimal Way to Assess Mental Health Functioning?

    Science.gov (United States)

    Vaz, Sharmila; Cordier, Reinie; Boyes, Mark; Parsons, Richard; Joosten, Annette; Ciccarelli, Marina; Falkmer, Marita; Falkmer, Torbjorn

    2016-01-01

    An important characteristic of a screening tool is its discriminant ability or the measure's accuracy to distinguish between those with and without mental health problems. The current study examined the inter-rater agreement and screening concordance of the parent and teacher versions of SDQ at scale, subscale and item-levels, with the view of identifying the items that have the most informant discrepancies; and determining whether the concordance between parent and teacher reports on some items has the potential to influence decision making. Cross-sectional data from parent and teacher reports of the mental health functioning of a community sample of 299 students with and without disabilities from 75 different primary schools in Perth, Western Australia were analysed. The study found that: a) Intraclass correlations between parent and teacher ratings of children's mental health using the SDQ were fair at the individual child level; b) The SDQ only demonstrated clinical utility when there was agreement between teacher and parent reports using the possible or 90% dichotomisation system; and c) Three individual items had positive likelihood ratio scores indicating clinical utility. Of note was the finding that the negative likelihood ratio or likelihood of disregarding the absence of a condition when both parents and teachers rate the item as absent was not significant. Taken together, these findings suggest that the SDQ is not optimised for use in community samples and that further psychometric evaluation of the SDQ in this context is clearly warranted.

  10. Is Using the Strengths and Difficulties Questionnaire in a Community Sample the Optimal Way to Assess Mental Health Functioning?

    Directory of Open Access Journals (Sweden)

    Sharmila Vaz

    Full Text Available An important characteristic of a screening tool is its discriminant ability or the measure's accuracy to distinguish between those with and without mental health problems. The current study examined the inter-rater agreement and screening concordance of the parent and teacher versions of SDQ at scale, subscale and item-levels, with the view of identifying the items that have the most informant discrepancies; and determining whether the concordance between parent and teacher reports on some items has the potential to influence decision making. Cross-sectional data from parent and teacher reports of the mental health functioning of a community sample of 299 students with and without disabilities from 75 different primary schools in Perth, Western Australia were analysed. The study found that: a) Intraclass correlations between parent and teacher ratings of children's mental health using the SDQ were fair at the individual child level; b) The SDQ only demonstrated clinical utility when there was agreement between teacher and parent reports using the possible or 90% dichotomisation system; and c) Three individual items had positive likelihood ratio scores indicating clinical utility. Of note was the finding that the negative likelihood ratio or likelihood of disregarding the absence of a condition when both parents and teachers rate the item as absent was not significant. Taken together, these findings suggest that the SDQ is not optimised for use in community samples and that further psychometric evaluation of the SDQ in this context is clearly warranted.

  11. Numerical calculation of economic uncertainty by intervals and fuzzy numbers

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2010-01-01

    This paper emphasizes that numerically correct calculation of economic uncertainty with intervals and fuzzy numbers requires implementation of global optimization techniques in contrast to straightforward application of interval arithmetic. This is demonstrated by both a simple case from managerial...... World Academic Press, UK. All rights reserved....
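The point about straightforward interval arithmetic versus global optimization can be illustrated with the classic dependency problem: evaluating f(x) = x·(1 − x) interval-wise over [0, 1] yields [0, 1], while the true range, recovered here by a dense grid search standing in for a global optimizer, is [0, 0.25]. The function and interval are chosen for illustration only and are not taken from the paper.

```python
import numpy as np

def imul(a, b):
    """Naive interval multiplication [a] * [b]."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return min(products), max(products)

def isub(a, b):
    """Naive interval subtraction [a] - [b]."""
    return a[0] - b[1], a[1] - b[0]

x = (0.0, 1.0)                          # uncertain quantity represented as an interval
naive = imul(x, isub((1.0, 1.0), x))    # f(x) = x * (1 - x) evaluated interval-wise

# Global optimization over the interval (dense grid as a simple stand-in).
grid = np.linspace(x[0], x[1], 10001)
values = grid * (1 - grid)
true_range = (values.min(), values.max())

print("interval arithmetic:", naive)        # (0.0, 1.0)  -> overestimated
print("global optimization:", true_range)   # (0.0, 0.25) -> exact range
```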

  12. An Optimization-Based Reconfigurable Design for a 6-Bit 11-MHz Parallel Pipeline ADC with Double-Sampling S&H

    Directory of Open Access Journals (Sweden)

    Wilmar Carvajal

    2012-01-01

    Full Text Available This paper presents a 6-bit, 11 MS/s time-interleaved pipeline A/D converter design. The specification process, from block level to elementary circuits, is gradually covered to draw a design methodology. Both power consumption and mismatch between the parallel chain elements are intended to be reduced by using techniques such as double and bottom-plate sampling, fully differential circuits, RSD digital correction, and geometric programming (GP) optimization of the design of the elementary analog circuits (OTAs and comparators). Prelayout simulations of the complete ADC are presented to characterize the designed converter, which consumes 12 mW while sampling a 500 kHz input signal. Moreover, the block inside the ADC with the most stringent requirements in power, speed, and precision was sent for fabrication in a CMOS 0.35 μm AMS technology, and some postlayout results are shown.

  13. Optimization and Comparison of ESI and APCI LC-MS/MS Methods: A Case Study of Irgarol 1051, Diuron, and their Degradation Products in Environmental Samples

    Science.gov (United States)

    Maragou, Niki C.; Thomaidis, Nikolaos S.; Koupparis, Michael A.

    2011-10-01

    A systematic and detailed optimization strategy for the development of atmospheric pressure ionization (API) LC-MS/MS methods for the determination of Irgarol 1051, Diuron, and their degradation products (M1, DCPMU, DCPU, and DCA) in water, sediment, and mussel is described. Experimental design was applied for the optimization of the ion sources parameters. Comparison of ESI and APCI was performed in positive- and negative-ion mode, and the effect of the mobile phase on ionization was studied for both techniques. Special attention was drawn to the ionization of DCA, which presents particular difficulty in API techniques. Satisfactory ionization of this small molecule is achieved only with ESI positive-ion mode using acetonitrile in the mobile phase; the instrumental detection limit is 0.11 ng/mL. Signal suppression was qualitatively estimated by using purified and non-purified samples. The sample preparation for sediments and mussels is direct and simple, comprising only solvent extraction. Mean recoveries ranged from 71% to 110%, and the corresponding (%) RSDs ranged between 4.1 and 14%. The method limits of detection ranged between 0.6 and 3.5 ng/g for sediment and mussel and from 1.3 to 1.8 ng/L for sea water. The method was applied to sea water, marine sediment, and mussels, which were obtained from marinas in Attiki, Greece. Ion ratio confirmation was used for the identification of the compounds.

  14. Performance of optimized McRAPD in identification of 9 yeast species frequently isolated from patient samples: potential for automation.

    Science.gov (United States)

    Trtkova, Jitka; Pavlicek, Petr; Ruskova, Lenka; Hamal, Petr; Koukalova, Dagmar; Raclavsky, Vladislav

    2009-11-10

    Rapid, easy, economical and accurate species identification of yeasts isolated from clinical samples remains an important challenge for routine microbiological laboratories, because susceptibility to antifungal agents, probability to develop resistance and ability to cause disease vary in different species. To overcome the drawbacks of the currently available techniques we have recently proposed an innovative approach to yeast species identification based on RAPD genotyping and termed McRAPD (Melting curve of RAPD). Here we have evaluated its performance on a broader spectrum of clinically relevant yeast species and also examined the potential of automated and semi-automated interpretation of McRAPD data for yeast species identification. A simple fully automated algorithm based on normalized melting data identified 80% of the isolates correctly. When this algorithm was supplemented by semi-automated matching of decisive peaks in first derivative plots, 87% of the isolates were identified correctly. However, a computer-aided visual matching of derivative plots showed the best performance with average 98.3% of the accurately identified isolates, almost matching the 99.4% performance of traditional RAPD fingerprinting. Since McRAPD technique omits gel electrophoresis and can be performed in a rapid, economical and convenient way, we believe that it can find its place in routine identification of medically important yeasts in advanced diagnostic laboratories that are able to adopt this technique. It can also serve as a broad-range high-throughput technique for epidemiological surveillance.

  15. Ultrasound assisted extraction of Maxilon Red GRL dye from water samples using cobalt ferrite nanoparticles loaded on activated carbon as sorbent: Optimization and modeling.

    Science.gov (United States)

    Mehrabi, Fatemeh; Vafaei, Azam; Ghaedi, Mehrorang; Ghaedi, Abdol Mohammad; Alipanahpour Dil, Ebrahim; Asfaram, Arash

    2017-09-01

    In this research, a selective, simple and rapid ultrasound assisted dispersive solid-phase microextraction (UA-DSPME) method was developed using cobalt ferrite nanoparticles loaded on activated carbon (CoFe2O4-NPs-AC) as an efficient sorbent for the preconcentration and determination of Maxilon Red GRL (MR-GRL) dye. The properties of the sorbent were characterized by X-ray diffraction (XRD), transmission electron microscopy (TEM), vibrating sample magnetometry (VSM), Fourier transform infrared spectroscopy (FTIR), particle size distribution (PSD) and scanning electron microscopy (SEM) techniques. The factors affecting the determination of MR-GRL dye were investigated and optimized by central composite design (CCD) and artificial neural networks based on genetic algorithm (ANN-GA). CCD and ANN-GA were used for optimization. Using ANN-GA, optimum conditions were set at 6.70, 1.2 mg, 5.5 min and 174 μL for pH, sorbent amount, sonication time and volume of eluent, respectively. Under the optimized conditions obtained from ANN-GA, the method exhibited a linear dynamic range of 30-3000 ng mL⁻¹ with a detection limit of 5.70 ng mL⁻¹. The preconcentration factor and enrichment factor were 57.47 and 93.54, respectively, with relative standard deviations (RSDs) less than 4.0% (N=6). The interference effect of some ions and dyes was also investigated and the results show a good selectivity for this method. Finally, the method was successfully applied to the preconcentration and determination of Maxilon Red GRL in water and wastewater samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. The effect of an optimized imaging flow cytometry analysis template on sample throughput in the reduced culture cytokinesis-block micronucleus assay

    International Nuclear Information System (INIS)

    Rodrigues, M.A.; Beaton-Green, L.A.; Wilkins, R.C.; Probst, C.E.

    2016-01-01

    In cases of overexposure to ionizing radiation, the cytokinesis-block micronucleus (CBMN) assay can be performed in order to estimate the dose of radiation to an exposed individual. However, in the event of a large-scale radiation accident with many potentially exposed casualties, the assay must be able to generate accurate dose estimates to within ±0.5 Gy as quickly as possible. The assay has been adapted to, validated and optimized on the ImageStreamX imaging flow cytometer. The ease of running this automated version of the CBMN assay allowed investigation into the accuracy of dose estimates after reducing the volume of whole blood cultured to 200 μl and reducing the culture time to 48 h. The data analysis template used to identify binucleated lymphocyte cells (BNCs) and micronuclei (MN) has since been optimized to improve the sensitivity and specificity of BNC and MN detection. This paper presents a re-analysis of existing data using this optimized analysis template to demonstrate that dose estimations from blinded samples can be obtained to the same level of accuracy in a shorter data collection time. Here, we show that dose estimates from blinded samples were obtained to within ±0.5 Gy of the delivered dose when data collection time was reduced by 30 min at standard culture conditions and by 15 min at reduced culture conditions. Reducing data collection time while retaining the same level of accuracy in our imaging flow cytometry-based version of the CBMN assay results in higher throughput and further increases the relevance of the CBMN assay as a radiation biodosimeter. (authors)

  17. Interval Size and Affect: An Ethnomusicological Perspective

    Directory of Open Access Journals (Sweden)

    Sarha Moore

    2013-08-01

    Full Text Available This commentary addresses Huron and Davis's question of whether "The Harmonic Minor Provides an Optimum Way of Reducing Average Melodic Interval Size, Consistent with Sad Affect Cues" within any non-Western musical cultures. The harmonic minor scale and other semitone-heavy scales, such as Bhairav raga and Hicaz makam, are featured widely in the musical cultures of North India and the Middle East. Do melodies from these genres also have a preponderance of semitone intervals and low incidence of the augmented second interval, as in Huron and Davis's sample? Does the presence of more semitone intervals in a melody affect its emotional connotations in different cultural settings? Are all semitone intervals equal in their effect? My own ethnographic research within these cultures reveals comparable connotations in melodies that linger on semitone intervals, centered on concepts of tension and metaphors of falling. However, across different musical cultures there may also be neutral or lively interpretations of these same pitch sets, dependent on context, manner of performance, and tradition. Small pitch movement may also be associated with social functions such as prayer or lullabies, and may not be described as "sad." "Sad," moreover may not connote the same affect cross-culturally.

  18. In-well time-of-travel approach to evaluate optimal purge duration during low-flow sampling of monitoring wells

    Science.gov (United States)

    Harte, Philip T.

    2017-01-01

    A common assumption with groundwater sampling is that low (time until inflow from the high hydraulic conductivity part of the screened formation can travel vertically in the well to the pump intake. Therefore, the length of the time needed for adequate purging prior to sample collection (called optimal purge duration) is controlled by the in-well, vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time–volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), which were based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of arrival time of formation water, which has been postulated, then in-well, vertical flow may be an important factor at wells where low-flow sampling is the sample method of choice.
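
    The analytical model itself is not reproduced in the abstract, but the quantity it tracks is simple: at a given pumping rate, water entering at the high-conductivity zone of the screen moves vertically along the well bore to the pump intake at a plug-flow velocity fixed by the casing cross-section. The sketch below shows that travel-time estimate with illustrative well geometry and pumping rate; it is a simplification, not the author's model.

        import math

        def purge_time_minutes(pump_rate_L_per_min, well_radius_m, intake_offset_m):
            """Plug-flow estimate of the in-well vertical travel time to the pump intake."""
            area_m2 = math.pi * well_radius_m ** 2
            velocity_m_per_min = (pump_rate_L_per_min / 1000.0) / area_m2
            return intake_offset_m / velocity_m_per_min

        # Example: 0.2 L/min in a 5-cm diameter well, inflow zone 1.5 m from the intake.
        t = purge_time_minutes(0.2, 0.025, 1.5)
        print(f"Approximate purge time: {t:.0f} min ({t * 0.2:.1f} L of purge water)")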

  19. Optimization and analysis of a quantitative real-time PCR-based technique to determine microRNA expression in formalin-fixed paraffin-embedded samples

    Directory of Open Access Journals (Sweden)

    Reis Patricia P

    2010-06-01

    Full Text Available Abstract Background MicroRNAs (miRs are non-coding RNA molecules involved in post-transcriptional regulation, with diverse functions in tissue development, differentiation, cell proliferation and apoptosis. miRs may be less prone to degradation during formalin fixation, facilitating miR expression studies in formalin-fixed paraffin-embedded (FFPE tissue. Results Our study demonstrates that the TaqMan Human MicroRNA Array v1.0 (Early Access platform is suitable for miR expression analysis in FFPE tissue with a high reproducibility (correlation coefficients of 0.95 between duplicates, p 35, we show that reproducibility between technical replicates, equivalent dilutions, and FFPE vs. frozen samples is best in the high abundance stratum. We also demonstrate that the miR expression profiles of FFPE samples are comparable to those of fresh-frozen samples, with a correlation of up to 0.87 (p Conclusion Our study thus demonstrates the utility, reproducibility, and optimization steps needed in miR expression studies using FFPE samples on a high-throughput quantitative PCR-based miR platform, opening up a realm of research possibilities for retrospective studies.

  20. Tuning for temporal interval in human apparent motion detection.

    Science.gov (United States)

    Bours, Roger J E; Stuur, Sanne; Lankheet, Martin J M

    2007-01-08

    Detection of apparent motion in random dot patterns requires correlation across time and space. It has been difficult to study the temporal requirements for the correlation step because motion detection also depends on temporal filtering preceding correlation and on integration at the next levels. To specifically study tuning for temporal interval in the correlation step, we performed an experiment in which prefiltering and postintegration were held constant and in which we used a motion stimulus containing coherent motion for a single interval value only. The stimulus consisted of a sparse random dot pattern in which each dot was presented in two frames only, separated by a specified interval. On each frame, half of the dots were refreshed and the other half was a displaced reincarnation of the pattern generated one or several frames earlier. Motion energy statistics in such a stimulus do not vary from frame to frame, and the directional bias in spatiotemporal correlations is similar for different interval settings. We measured coherence thresholds for left-right direction discrimination by varying motion coherence levels in a Quest staircase procedure, as a function of both step size and interval. Results show that highest sensitivity was found for an interval of 17-42 ms, irrespective of viewing distance. The falloff at longer intervals was much sharper than previously described. Tuning for temporal interval was largely, but not completely, independent of step size. The optimal temporal interval slightly decreased with increasing step size. Similarly, the optimal step size decreased with increasing temporal interval.

  1. Performance of an Optimized Paper-Based Test for Rapid Visual Measurement of Alanine Aminotransferase (ALT) in Fingerstick and Venipuncture Samples.

    Directory of Open Access Journals (Sweden)

    Sidhartha Jain

    Full Text Available A paper-based, multiplexed, microfluidic assay has been developed to visually measure alanine aminotransferase (ALT) in a fingerstick sample, generating rapid, semi-quantitative results. Prior studies indicated a need for improved accuracy; the device was subsequently optimized using an FDA-approved automated platform (Abaxis Piccolo Xpress) as a comparator. Here, we evaluated the performance of the optimized paper test for measurement of ALT in fingerstick blood and serum, as compared to Abaxis and Roche/Hitachi platforms. To evaluate feasibility of remote results interpretation, we also compared reading cell phone camera images of completed tests to reading the device in real time. 96 ambulatory patients with varied baseline ALT concentration underwent fingerstick testing using the paper device; cell phone images of completed devices were taken and texted to a blinded off-site reader. Venipuncture serum was obtained from 93/96 participants for routine clinical testing (Roche/Hitachi); subsequently, 88/93 serum samples were captured and applied to paper and Abaxis platforms. Paper test and reference standard results were compared by Bland-Altman analysis. For serum, there was excellent agreement between paper test and Abaxis results, with negligible bias (+4.5 U/L). Abaxis results were systematically 8.6% lower than Roche/Hitachi results. ALT values in fingerstick samples tested on paper were systematically lower than values in paired serum tested on paper (bias -23.6 U/L) or Abaxis (bias -18.4 U/L); a correction factor was developed for the paper device to match fingerstick blood to serum. Visual reads of cell phone images closely matched reads made in real time (bias +5.5 U/L). The paper ALT test is highly accurate for serum testing, matching the reference method against which it was optimized better than the reference methods matched each other. A systematic difference exists between ALT values in fingerstick and paired serum samples, and can be accounted for with the correction factor developed here.
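
    Agreement in the study was summarized by Bland-Altman analysis, i.e., the mean of the paired differences (bias) and its 95% limits of agreement. The sketch below shows that computation; the paired ALT values are illustrative and are not data from the study.

        import numpy as np

        def bland_altman(method_a, method_b):
            """Return bias and 95% limits of agreement between two paired methods."""
            a, b = np.asarray(method_a, float), np.asarray(method_b, float)
            diff = a - b
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        # Illustrative paired ALT measurements (U/L).
        paper = [32, 41, 58, 77, 95, 120]
        abaxis = [30, 38, 55, 70, 88, 112]
        bias, loa = bland_altman(paper, abaxis)
        print(f"bias = {bias:+.1f} U/L, 95% LoA = ({loa[0]:.1f}, {loa[1]:.1f}) U/L")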

  2. k-space sampling optimization for ultrashort TE imaging of cortical bone: Applications in radiation therapy planning and MR-based PET attenuation correction

    International Nuclear Information System (INIS)

    Hu, Lingzhi; Traughber, Melanie; Su, Kuan-Hao; Pereira, Gisele C.; Grover, Anu; Traughber, Bryan; Muzic, Raymond F. Jr.

    2014-01-01

    Purpose: The ultrashort echo-time (UTE) sequence is a promising MR pulse sequence for imaging cortical bone which is otherwise difficult to image using conventional MR sequences and also poses strong attenuation for photons in radiation therapy and PET imaging. The authors report here a systematic characterization of cortical bone signal decay and a scanning time optimization strategy for the UTE sequence through k-space undersampling, which can result in up to a 75% reduction in acquisition time. Using the undersampled UTE imaging sequence, the authors also attempted to quantitatively investigate the MR properties of cortical bone in healthy volunteers, thus demonstrating the feasibility of using such a technique for generating bone-enhanced images which can be used for radiation therapy planning and attenuation correction with PET/MR. Methods: An angularly undersampled, radially encoded UTE sequence was used for scanning the brains of healthy volunteers. Quantitative MR characterization of tissue properties, including water fraction and R2* = 1/T2*, was performed by analyzing the UTE images acquired at multiple echo times. The impact of different sampling rates was evaluated through systematic comparison of the MR image quality, bone-enhanced image quality, image noise, water fraction, and R2* of cortical bone. Results: A reduced angular sampling rate of the UTE trajectory achieves acquisition durations in proportion to the sampling rate and in as short as 25% of the time required for full sampling using a standard Cartesian acquisition, while preserving unique MR contrast within the skull at the cost of a minimal increase in noise level. The R2* of human skull was measured as 0.2–0.3 ms⁻¹ depending on the specific region, which is more than ten times greater than the R2* of soft tissue. The water fraction in human skull was measured to be 60%–80%, which is significantly less than the >90% water fraction in brain. High-quality, bone
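
    The R2* values quoted above follow from fitting a mono-exponential decay S(TE) = S0*exp(-TE*R2*) to the UTE images acquired at multiple echo times. The sketch below does the standard log-linear version of that fit; the echo times and signal values are illustrative numbers, not measurements from the study.

        import numpy as np

        def fit_r2star(te_ms, signal):
            """Log-linear fit of S(TE) = S0*exp(-TE*R2*); returns (S0, R2* in 1/ms)."""
            te = np.asarray(te_ms, float)
            logs = np.log(np.asarray(signal, float))
            slope, intercept = np.polyfit(te, logs, 1)
            return np.exp(intercept), -slope

        # Illustrative multi-echo signal from cortical bone (arbitrary units).
        te = [0.07, 1.0, 2.0, 4.0]       # echo times (ms)
        sig = [100.0, 78.0, 61.0, 37.0]
        s0, r2s = fit_r2star(te, sig)
        print(f"S0 = {s0:.1f}, R2* = {r2s:.2f} per ms")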

  3. Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hojin; Li Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing Lei [Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States) and Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5847 (United States); Department of Radiation Oncology, Ehwa University, Seoul 158-710 (Korea, Republic of); Department of Electrical Engineering, Stanford University, Stanford, California 94305-9505 (United States); Department of Statistics, Stanford University, Stanford, California 94305-4065 (United States); Department of Radiation Oncology, Stanford University, Stanford, California 94305-5304 (United States)

    2012-07-15

    Purpose: A new treatment scheme coined as dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical applications of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage as compared with conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the
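
    At its core the DASSIM-RT planning step is a fluence-map problem of the form: minimize ||Ax - d||^2 + lambda*TV(x) over non-negative beamlet intensities x, where A is the dose-deposition matrix and d the prescribed dose. The authors solve it with TFOCS; the sketch below sets up the same kind of objective on a tiny synthetic problem and runs plain projected gradient descent on a smoothed TV term, as a stand-in for the first-order conic solver rather than a reproduction of it.

        import numpy as np

        def tv_smooth(x, eps=1e-6):
            """Smoothed 1-D total variation of x and its gradient."""
            d = np.diff(x)
            mag = np.sqrt(d ** 2 + eps)
            grad = np.zeros_like(x)
            grad[:-1] -= d / mag
            grad[1:] += d / mag
            return mag.sum(), grad

        def solve_fluence(A, dose, lam=0.1, step=1e-3, iters=2000):
            """Minimize ||A x - dose||^2 + lam*TV(x) with x >= 0 (projected gradient descent)."""
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                _, g_tv = tv_smooth(x)
                grad = 2 * A.T @ (A @ x - dose) + lam * g_tv
                x = np.maximum(x - step * grad, 0.0)     # keep fluence non-negative
            return x

        # Tiny synthetic problem: random dose matrix, piecewise-constant "true" fluence.
        rng = np.random.default_rng(0)
        A = rng.random((40, 20))
        d = A @ np.repeat([1.0, 0.2], 10)
        print(np.round(solve_fluence(A, d), 2))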

  4. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    Energy Technology Data Exchange (ETDEWEB)

    Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)

    2016-01-15

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources thus so called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality due to different interpolation schemes with emphasis on ELF mesh optimization procedures and we argue that the optimal fitting should be based on preliminary log–log scaling data transforms by which the non-uniformity of sampled data distribution may be considerably reduced. The transformed data are then interpolated by local monotonicity preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that proposed technique gives the most accurate results and also that its computational time is short. Thus, it is feasible using this simple method to address practical problems associated with interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.

  5. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    International Nuclear Information System (INIS)

    Maglevanny, I.I.; Smolar, V.A.

    2016-01-01

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous, can originate from various sources thus so called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect on the fitting quality due to different interpolation schemes with emphasis on ELF mesh optimization procedures and we argue that the optimal fitting should be based on preliminary log–log scaling data transforms by which the non-uniformity of sampled data distribution may be considerably reduced. The transformed data are then interpolated by local monotonicity preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that proposed technique gives the most accurate results and also that its computational time is short. Thus, it is feasible using this simple method to address practical problems associated with interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
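
    Both records above describe the same recipe: rescale the sampled ELF to log-log coordinates, where the data are far more uniformly distributed, and then interpolate with a local monotonicity-preserving spline. SciPy does not ship a Steffen spline, so the sketch below substitutes PCHIP, another local C1 monotone interpolant, as a stand-in for the idea; it is not the authors' C++ implementation.

        import numpy as np
        from scipy.interpolate import PchipInterpolator

        def loglog_monotone_interpolator(energy_eV, elf):
            """Monotone interpolation of a sampled energy-loss function in log-log space."""
            x = np.log10(np.asarray(energy_eV, float))
            y = np.log10(np.asarray(elf, float))
            p = PchipInterpolator(x, y)
            return lambda e: 10.0 ** p(np.log10(e))

        # Illustrative, heterogeneous ELF samples (positive values required for the log).
        energy = [0.5, 1.0, 5.0, 20.0, 100.0, 1000.0]
        elf = [0.01, 0.05, 1.2, 0.8, 0.05, 0.001]
        f = loglog_monotone_interpolator(energy, elf)
        print(f(2.0), f(50.0))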

  6. Optimized mtDNA Control Region Primer Extension Capture Analysis for Forensically Relevant Samples and Highly Compromised mtDNA of Different Age and Origin

    Directory of Open Access Journals (Sweden)

    Mayra Eduardoff

    2017-09-01

    Full Text Available The analysis of mitochondrial DNA (mtDNA) has proven useful in forensic genetics and ancient DNA (aDNA) studies, where specimens are often highly compromised and DNA quality and quantity are low. In forensic genetics, the mtDNA control region (CR) is commonly sequenced using established Sanger-type Sequencing (STS) protocols involving fragment sizes down to approximately 150 base pairs (bp). Recent developments include Massively Parallel Sequencing (MPS) of (multiplex) PCR-generated libraries using the same amplicon sizes. Molecular genetic studies on archaeological remains that harbor more degraded aDNA have pioneered alternative approaches to target mtDNA, such as capture hybridization and primer extension capture (PEC) methods followed by MPS. These assays target smaller mtDNA fragment sizes (down to 50 bp or less), and have proven to be substantially more successful in obtaining useful mtDNA sequences from these samples compared to electrophoretic methods. Here, we present the modification and optimization of a PEC method, earlier developed for sequencing the Neanderthal mitochondrial genome, with forensic applications in mind. Our approach was designed for a more sensitive enrichment of the mtDNA CR in a single-tube assay and short laboratory turnaround times, thus complying with forensic practices. We characterized the method using sheared, high-quantity mtDNA (six samples), and tested challenging forensic samples (n = 2) as well as compromised solid tissue samples (n = 15) up to 8 kyrs of age. The PEC MPS method produced reliable and plausible mtDNA haplotypes that were useful in the forensic context. It yielded plausible data in samples that did not provide results with STS and other MPS techniques. We addressed the issue of contamination by including four generations of negative controls, and discuss the results in the forensic context. We finally offer perspectives for future research to enable the validation and accreditation of the PEC MPS method.

  7. Optimized methods for total nucleic acid extraction and quantification of the bat white-nose syndrome fungus, Pseudogymnoascus destructans, from swab and environmental samples.

    Science.gov (United States)

    Verant, Michelle L; Bohuski, Elizabeth A; Lorch, Jeffery M; Blehert, David S

    2016-03-01

    The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America has prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid from P. destructans is dependent on effective and standardized methods for extracting nucleic acid from various relevant sample types. We describe optimized methodologies for extracting fungal nucleic acids from sediment, guano, and swab-based samples using commercial kits together with a combination of chemical, enzymatic, and mechanical modifications. Additionally, we define modifications to a previously published intergenic spacer-based qPCR test for P. destructans to refine quantification capabilities of this assay. © 2016 The Author(s).

  8. Optimized methods for total nucleic acid extraction and quantification of the bat white-nose syndrome fungus, Pseudogymnoascus destructans, from swab and environmental samples

    Science.gov (United States)

    Verant, Michelle; Bohuski, Elizabeth A.; Lorch, Jeffrey M.; Blehert, David

    2016-01-01

    The continued spread of white-nose syndrome and its impacts on hibernating bat populations across North America has prompted nationwide surveillance efforts and the need for high-throughput, noninvasive diagnostic tools. Quantitative real-time polymerase chain reaction (qPCR) analysis has been increasingly used for detection of the causative fungus, Pseudogymnoascus destructans, in both bat- and environment-associated samples and provides a tool for quantification of fungal DNA useful for research and monitoring purposes. However, precise quantification of nucleic acid from P. destructans is dependent on effective and standardized methods for extracting nucleic acid from various relevant sample types. We describe optimized methodologies for extracting fungal nucleic acids from sediment, guano, and swab-based samples using commercial kits together with a combination of chemical, enzymatic, and mechanical modifications. Additionally, we define modifications to a previously published intergenic spacer–based qPCR test for P. destructans to refine quantification capabilities of this assay.
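
    Quantification with a qPCR assay of this kind rests on a standard curve relating quantification cycle (Cq) to the log of the target DNA amount. The slope and intercept below are illustrative placeholders rather than parameters of the published intergenic spacer assay; the sketch only shows how a sample Cq maps to a quantity and how amplification efficiency follows from the slope.

        import numpy as np

        # Hypothetical standard curve: Cq = slope * log10(quantity) + intercept.
        slope, intercept = -3.32, 38.0   # a slope of -3.32 corresponds to ~100% efficiency

        def quantity_from_cq(cq):
            """Convert a sample Cq to target DNA quantity (same units as the standards)."""
            return 10.0 ** ((cq - intercept) / slope)

        efficiency = 10.0 ** (-1.0 / slope) - 1.0
        print(f"Assay efficiency = {efficiency:.1%}")
        print(f"Cq 30.5 -> {quantity_from_cq(30.5):.3g} units of P. destructans DNA")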

  9. The influence of optimism, social support and anxiety on aggression in a sample of dermatology patients: an analysis of cross-sectional data.

    Science.gov (United States)

    Coneo, A M C; Thompson, A R; Lavda, A

    2017-05-01

    Individuals with visible skin conditions often experience stigmatization and discrimination. This may trigger maladaptive responses such as feelings of anger and hostility, with negative consequences to social interactions and relationships. To identify psychosocial factors contributing to aggression levels in dermatology patients. Data were obtained from 91 participants recruited from outpatient clinics in the north of England, U.K. This study used dermatology-specific data extracted from a large U.K. database of medical conditions collected by The Appearance Research Collaboration. This study looked at the impact of optimism, perceptions of social support and social acceptance, fear of negative evaluation, appearance concern, appearance discrepancy, social comparison and well-being on aggression levels in a sample of dermatology patients. In order to assess the relationship between variables, a hierarchical regression analysis was performed. Dispositional style (optimism) was shown to have a strong negative relationship with aggression (β = -0·37, t = -2·97, P = 0·004). Higher levels of perceived social support were significantly associated with lower levels of aggression (β = -0·26, t = -2·26, P = 0·02). Anxiety was also found to have a significant positive relationship with aggression (β = 0·36, t = 2·56, P = 0·01). This study provides evidence for the importance of perceived social support and optimism in psychological adjustment to skin conditions. Psychosocial interventions provided to dermatology patients might need to address aggression levels and seek to enhance social support and the ability to be optimistic. © 2016 British Association of Dermatologists.

  10. Interpregnancy interval and risk of autistic disorder.

    Science.gov (United States)

    Gunnes, Nina; Surén, Pål; Bresnahan, Michaeline; Hornig, Mady; Lie, Kari Kveim; Lipkin, W Ian; Magnus, Per; Nilsen, Roy Miodini; Reichborn-Kjennerud, Ted; Schjølberg, Synnve; Susser, Ezra Saul; Øyen, Anne-Siri; Stoltenberg, Camilla

    2013-11-01

    A recent California study reported increased risk of autistic disorder in children conceived within a year after the birth of a sibling. We assessed the association between interpregnancy interval and risk of autistic disorder using nationwide registry data on pairs of singleton full siblings born in Norway. We defined interpregnancy interval as the time from birth of the first-born child to conception of the second-born child in a sibship. The outcome of interest was autistic disorder in the second-born child. Analyses were restricted to sibships in which the second-born child was born in 1990-2004. Odds ratios (ORs) were estimated by fitting ordinary logistic models and logistic generalized additive models. The study sample included 223,476 singleton full-sibling pairs. In sibships with interpregnancy intervals autistic disorder, compared with 0.13% in the reference category (≥ 36 months). For interpregnancy intervals shorter than 9 months, the adjusted OR of autistic disorder in the second-born child was 2.18 (95% confidence interval 1.42-3.26). The risk of autistic disorder in the second-born child was also increased for interpregnancy intervals of 9-11 months in the adjusted analysis (OR = 1.71 [95% CI = 1.07-2.64]). Consistent with a previous report from California, interpregnancy intervals shorter than 1 year were associated with increased risk of autistic disorder in the second-born child. A possible explanation is depletion of micronutrients in mothers with closely spaced pregnancies.
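
    The adjusted odds ratios and 95% confidence intervals quoted above come from logistic models; on the log-odds scale an OR and its Wald interval follow directly from the fitted coefficient and its standard error. The sketch below shows that conversion with an illustrative coefficient, not the study's estimates.

        import math

        def odds_ratio_ci(beta, se, z=1.96):
            """Odds ratio and Wald 95% CI from a logistic-regression coefficient."""
            return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

        # Illustrative coefficient for a short interpregnancy interval (not study data).
        or_, lo, hi = odds_ratio_ci(beta=0.70, se=0.25)
        print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")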

  11. Use of constrained mixture design for optimization of method for determination of zinc and manganese in tea leaves employing slurry sampling

    Energy Technology Data Exchange (ETDEWEB)

    Almeida Bezerra, Marcos, E-mail: mbezerra47@yahoo.com.br [Universidade Estadual do Sudoeste da Bahia, Laboratorio de Quimica Analitica, 45200-190, Jequie, Bahia (Brazil); Teixeira Castro, Jacira [Universidade Federal do Reconcavo da Bahia, Centro de Ciencias Exatas e Tecnologicas, 44380-000, Cruz das Almas, Bahia (Brazil); Coelho Macedo, Reinaldo; Goncalves da Silva, Douglas [Universidade Estadual do Sudoeste da Bahia, Laboratorio de Quimica Analitica, 45200-190, Jequie, Bahia (Brazil)

    2010-06-18

    A slurry suspension sampling technique has been developed for manganese and zinc determination in tea leaves by using flame atomic absorption spectrometry. The proportions of the liquid phase of the slurries, composed of HCl, HNO3 and Triton X-100 solutions, were optimized by applying a constrained mixture design. The optimized conditions were 200 mg of sample ground in a tungsten carbide ball mill (particle size < 100 μm), dilution in a liquid phase composed of 2.0 mol L-1 nitric acid, 2.0 mol L-1 hydrochloric acid and 2.5% Triton X-100 solutions (in the proportions of 50%, 12% and 38%, respectively), a sonication time of 10 min and a final slurry volume of 50.0 mL. This method allowed the determination of manganese and zinc by FAAS, with detection limits of 0.46 and 0.66 μg g-1, respectively. The precisions, expressed as relative standard deviation (RSD), are 6.9 and 5.5% (n = 10) for manganese and zinc concentrations of 20 and 40 μg g-1, respectively. The accuracy of the method was confirmed by analysis of certified apple leaves (NIST 1515) and spinach leaves (NIST 1570a). The proposed method was applied to the determination of manganese and zinc in tea leaves used for the preparation of infusions. The obtained concentrations varied between 42 and 118 μg g-1 and between 18.6 and 90 μg g-1, respectively, for manganese and zinc. The results were compared with those obtained by an acid digestion procedure and determination of the elements by FAAS. There was no significant difference between the results obtained by the two methods based on a paired t-test (at the 95% confidence level).

  12. Optimization of a radiochemistry method for plutonium determination in biological samples; Optimizacion del metodo radioquimico para determinar plutonio en muestras biologicas

    Energy Technology Data Exchange (ETDEWEB)

    Cerchetti, Maria L; Arguelles, Maria G [Comision Nacional de Energia Atomica, Ezeiza (Argentina). Laboratorio de Dosimetria Personal y de Area

    2005-07-01

    Plutonium has been widely used in civilian and military activities. Nevertheless, the methods used to control occupational exposure have not evolved at the same pace, and this remains one of the major challenges for radiological protection practice. Due to the low acceptable incorporation limit, the usual determination is based on indirect methods applied to urine samples. Our main objective was to optimize a technique used to monitor internal contamination of workers exposed to plutonium isotopes. Different parameters were modified and their influence on the three steps of the method was evaluated; those giving the highest yield and feasibility were selected. The method involves: 1) sample concentration (coprecipitation); 2) plutonium purification; and 3) source preparation by electrodeposition. In the coprecipitation step, changes in temperature and carrier concentration were evaluated. In the ion-exchange separation, changes in the type of resin, the hydroxylamine elution solution (concentration and volume), column length and column reuse were evaluated. Finally, in the electrodeposition step, we modified the electrolytic solution, pH and deposition time. Measurements were made by liquid scintillation counting and alpha spectrometry (PIPS). We obtained the following yields: 88% for coprecipitation (at 60 °C with 2 ml of CaHPO4), 71% for ion exchange (AG 1x8 Cl- resin, 100-200 mesh, hydroxylamine 0.1 N in HCl 0.2 N as eluent, column length between 4.5 and 8 cm), and 93% for electrodeposition (H2SO4-NH4OH, 100 minutes and pH from 2 to 2.8). The expanded uncertainty was 30% (95% confidence level), the decision threshold (Lc) was 0.102 Bq/L and the minimum detectable activity was 0.218 Bq/L of urine. We obtained an optimized method to screen workers exposed to plutonium. (author)
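
    The decision threshold and minimum detectable activity quoted above are conventionally obtained from Currie-style counting statistics, scaled by detector efficiency, chemical yield, counting time and sample volume. The sketch below applies the standard paired-blank Currie formulas with illustrative parameters; it shows the form of the calculation, not the laboratory's exact evaluation.

        import math

        def currie_limits(blank_counts, count_time_s, efficiency, chem_yield, volume_L):
            """Currie decision threshold and MDA for a paired-blank measurement, in Bq/L."""
            lc_counts = 2.33 * math.sqrt(blank_counts)          # decision threshold (counts)
            ld_counts = 2.71 + 4.65 * math.sqrt(blank_counts)   # detection limit (counts)
            denom = efficiency * chem_yield * volume_L * count_time_s
            return lc_counts / denom, ld_counts / denom

        # Illustrative parameters for an alpha-spectrometric plutonium measurement.
        lc, mda = currie_limits(blank_counts=4, count_time_s=3600,
                                efficiency=0.25, chem_yield=0.60, volume_L=0.5)
        print(f"Lc = {lc:.3f} Bq/L, MDA = {mda:.3f} Bq/L")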

  13. A probabilistic approach for representation of interval uncertainty

    International Nuclear Information System (INIS)

    Zaman, Kais; Rangavajhala, Sirisha; McDonald, Mark P.; Mahadevan, Sankaran

    2011-01-01

    In this paper, we propose a probabilistic approach to represent interval data for input variables in reliability and uncertainty analysis problems, using flexible families of continuous Johnson distributions. Such a probabilistic representation of interval data facilitates a unified framework for handling aleatory and epistemic uncertainty. For fitting probability distributions, methods such as moment matching are commonly used in the literature. However, unlike point data where single estimates for the moments of data can be calculated, moments of interval data can only be computed in terms of upper and lower bounds. Finding bounds on the moments of interval data has been generally considered an NP-hard problem because it includes a search among the combinations of multiple values of the variables, including interval endpoints. In this paper, we present efficient algorithms based on continuous optimization to find the bounds on second and higher moments of interval data. With numerical examples, we show that the proposed bounding algorithms are scalable in polynomial time with respect to increasing number of intervals. Using the bounds on moments computed using the proposed approach, we fit a family of Johnson distributions to interval data. Furthermore, using an optimization approach based on percentiles, we find the bounding envelopes of the family of distributions, termed as a Johnson p-box. The idea of bounding envelopes for the family of Johnson distributions is analogous to the notion of empirical p-box in the literature. Several sets of interval data with different numbers of intervals and type of overlap are presented to demonstrate the proposed methods. As against the computationally expensive nested analysis that is typically required in the presence of interval variables, the proposed probabilistic representation enables inexpensive optimization-based strategies to estimate bounds on an output quantity of interest.
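
    For interval data the bounds on the mean are attained directly at the interval endpoints, whereas bounds on the variance (and higher moments) require a search over the box defined by the intervals, which is what the authors' continuous-optimization algorithms do efficiently. The sketch below illustrates both steps on a toy data set with a generic bounded optimizer; for the maximum variance a local optimizer only yields an inner estimate, so it is a stand-in for, not a reproduction of, the scalable algorithms described above.

        import numpy as np
        from scipy.optimize import minimize

        intervals = np.array([[1.0, 2.0], [1.5, 3.0], [2.5, 4.0], [0.5, 1.0]])
        lo, hi = intervals[:, 0], intervals[:, 1]

        # Bounds on the mean are attained at the interval endpoints.
        mean_bounds = (lo.mean(), hi.mean())

        # Bounds on the variance require optimizing over the box [lo, hi]^n.
        def sample_var(x):
            return np.var(x, ddof=1)

        x0 = (lo + hi) / 2
        box = list(zip(lo, hi))
        v_min = minimize(sample_var, x0, bounds=box).fun                  # convex: global minimum
        v_max = -minimize(lambda x: -sample_var(x), x0, bounds=box).fun   # local search: inner bound only
        print("mean bounds:", mean_bounds, "variance bounds:", (v_min, v_max))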

  14. Fuzzy solution of the linear programming problem with interval coefficients in the constraints

    OpenAIRE

    Dorota Kuchta

    2005-01-01

    A fuzzy concept of solving the linear programming problem with interval coefficients is proposed. For each optimism level of the decision maker (where the optimism concerns the certainty that no errors have been committed in the estimation of the interval coefficients and the belief that optimistic realisations of the interval coefficients will occur) another interval solution of the problem will be generated and the decision maker will be able to choose the final solution having a complete v...
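
    One simple way to read the optimistic/pessimistic idea: with interval coefficients in the constraint matrix, the most optimistic realization gives the loosest feasible region and the most pessimistic the tightest, and the decision maker's optimism level interpolates between these extremes. The sketch below solves the two extreme LPs of a small maximization problem with SciPy; it illustrates the interval-coefficient setting only and is not the fuzzy solution concept developed in the paper.

        import numpy as np
        from scipy.optimize import linprog

        # maximize 3*x1 + 2*x2  subject to  A x <= b, x >= 0, with A known only as intervals.
        c = np.array([3.0, 2.0])
        A_lo = np.array([[1.0, 1.0], [0.5, 2.0]])   # lower endpoints of the coefficients
        A_hi = np.array([[1.5, 1.2], [1.0, 2.5]])   # upper endpoints
        b = np.array([10.0, 12.0])

        def solve(A):
            res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
            return -res.fun, res.x

        opt_val, opt_x = solve(A_lo)   # optimistic: constraints as loose as the intervals allow
        pes_val, pes_x = solve(A_hi)   # pessimistic: constraints as tight as possible
        print(f"optimistic optimum {opt_val:.2f} at {opt_x}")
        print(f"pessimistic optimum {pes_val:.2f} at {pes_x}")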

  15. Interval stability for complex systems

    Science.gov (United States)

    Klinshov, Vladimir V.; Kirillov, Sergey; Kurths, Jürgen; Nekorkin, Vladimir I.

    2018-04-01

    Stability of dynamical systems against strong perturbations is an important problem of nonlinear dynamics relevant to many applications in various areas. Here, we develop a novel concept of interval stability, referring to the behavior of the perturbed system during a finite time interval. Based on this concept, we suggest new measures of stability, namely interval basin stability (IBS) and interval stability threshold (IST). IBS characterizes the likelihood that the perturbed system returns to the stable regime (attractor) in a given time. IST provides the minimal magnitude of the perturbation capable to disrupt the stable regime for a given interval of time. The suggested measures provide important information about the system susceptibility to external perturbations which may be useful for practical applications. Moreover, from a theoretical viewpoint the interval stability measures are shown to bridge the gap between linear and asymptotic stability. We also suggest numerical algorithms for quantification of the interval stability characteristics and demonstrate their potential for several dynamical systems of various nature, such as power grids and neural networks.
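
    Interval basin stability lends itself to a straightforward Monte Carlo estimate: draw random perturbed states, integrate the system over the prescribed time window, and record the fraction of trajectories that are back in a neighborhood of the attractor by the end of the window. The sketch below does this for a damped pendulum; the system, perturbation ranges and tolerance are illustrative placeholders, not the examples used in the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        def damped_pendulum(t, y, alpha=0.2):
            theta, omega = y
            return [omega, -alpha * omega - np.sin(theta)]

        def interval_basin_stability(t_interval, n_samples=200, tol=0.2, seed=1):
            """Fraction of random perturbations that return near the fixed point within t_interval."""
            rng = np.random.default_rng(seed)
            returned = 0
            for _ in range(n_samples):
                y0 = rng.uniform([-np.pi, -2.0], [np.pi, 2.0])   # perturbed initial state
                sol = solve_ivp(damped_pendulum, (0.0, t_interval), y0, rtol=1e-6)
                theta, omega = sol.y[:, -1]
                near_fp = abs((theta + np.pi) % (2 * np.pi) - np.pi) < tol and abs(omega) < tol
                returned += near_fp
            return returned / n_samples

        print("IBS(T=20) =", interval_basin_stability(20.0))
        print("IBS(T=60) =", interval_basin_stability(60.0))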

  16. Large Sample Confidence Intervals for Item Response Theory Reliability Coefficients

    Science.gov (United States)

    Andersson, Björn; Xin, Tao

    2018-01-01

    In applications of item response theory (IRT), an estimate of the reliability of the ability estimates or sum scores is often reported. However, analytical expressions for the standard errors of the estimators of the reliability coefficients are not available in the literature and therefore the variability associated with the estimated reliability…

  17. Highly selective solid phase extraction and preconcentration of Azathioprine with nano-sized imprinted polymer based on multivariate optimization and its trace determination in biological and pharmaceutical samples

    International Nuclear Information System (INIS)

    Davarani, Saied Saeed Hosseiny; Rezayati zad, Zeinab; Taheri, Ali Reza; Rahmatian, Nasrin

    2017-01-01

    In this research, for first time selective separation and determination of Azathioprine is demonstrated using molecularly imprinted polymer as the solid-phase extraction adsorbent, measured by spectrophotometry at λ max 286 nm. The selective molecularly imprinted polymer was produced using Azathioprine and methacrylic acid as a template molecule and monomer, respectively. A molecularly imprinted solid-phase extraction procedure was performed in column for the analyte from pharmaceutical and serum samples. The synthesized polymers were characterized by infrared spectroscopy (IR), field emission scanning electron microscopy (FESEM). In order to investigate the effect of independent variables on the extraction efficiency, the response surface methodology (RSM) based on Box–Behnken design (BBD) was employed. The analytical parameters such as precision, accuracy and linear working range were also determined in optimal experimental conditions and the proposed method was applied to analysis of Azathioprine. The linear dynamic range and limits of detection were 2.5–0.01 and 0.008 mg L ‐1 respectively. The recoveries for analyte were higher than 95% and relative standard deviation values were found to be in the range of 0.83–4.15%. This method was successfully applied for the determination of Azathioprine in biological and pharmaceutical samples. - Graphical abstract: A new-nano sized imprinted polymer was synthesized and applied as sorbent in SPE in order to selective recognition, preconcentration, and determination of Azathioprine with the response surface methodology based on Box–Behnken design and was successfully investigated for the clean-up of human blood serum and pharmaceutical samples. - Highlights: • The nanosized-imprinted polymer has been synthesized by precipitation polymerization technique. • A molecularly imprinted solid-phase extraction procedure was performed for determination of Azathioprine. • The Azathioprine-molecular imprinting

  18. Optimization of sample preparation and chromatography for the determination of perfluoroalkyl acids in sediments from the Yangtze Estuary and East China Sea.

    Science.gov (United States)

    Wang, Qian-Wen; Yang, Gui-Peng; Zhang, Ze-Ming; Zhang, Jing

    2018-08-01

    Perfluoroalkyl acids (PFAAs) are ubiquitous pollutants present in various environmental media, including marine sediments. A method was proposed for the determination of 17 target PFAA analytes in marine sediment samples (n = 49) collected from the Yangtze Estuary and East China Sea. The proposed method involves the use of an optimized pretreatment procedure and ultrahigh-performance liquid chromatography electrospray ionization-tandem mass spectrometry in dynamic multiple reaction monitoring mode. The method relied on extraction cycles using methanol followed by concentration, filtration, and small volume injection to UHPLC-MS/MS. The recovery, time efficiency, and detection limit of the proposed method are improved relative to those of traditional methods. Limits of detection varied from 0.003 to 0.045 ng/g, and spike recoveries to sediment ranged from 90% to 110% with suitable precisions (1.7%-14.6%). PFAAs were widely present in the samples, and ΣPFAAs ranged from 0.67 ng/g dw to 36.75 ng/g dw. Results indicated that terrigenous input strongly influences PFAA distribution in sediments from the study areas. Perfluorooctanoic acid (PFOA) and perfluorooctanesulfonate (PFOS) were identified as the dominant perfluorocarboxylic acid (PFCA) and perfluoroalkylsulfonate (PFSA) in sediment samples from the Yangtze Estuary and the East China Sea. Preliminary environmental risk assessment indicated that PFOS may pose a higher environmental risk than PFOA. Furthermore, risk quotient values indicated that PFOS poses a significant risk to the aquatic ecosystem of the study areas. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Robotic fish tracking method based on suboptimal interval Kalman filter

    Science.gov (United States)

    Tong, Xiaohong; Tang, Chao

    2017-11-01

    Autonomous Underwater Vehicle (AUV) research has focused on tracking and positioning, precise guidance, return to dock and related fields. Robotic fish, as a class of AUV, have become a popular application in intelligent education as well as in civil and military settings. In nonlinear tracking analysis of robotic fish, the interval Kalman filter is known to contain all possible filter results, but the resulting intervals are wide and rather conservative, and the interval data vector is uncertain before implementation. This paper proposes an optimization algorithm, the suboptimal interval Kalman filter. The suboptimal scheme replaces the interval inverse matrix with its worst-case inverse, approximates the nonlinear state and measurement equations more closely than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte Carlo simulation results show that the trajectory produced by the suboptimal interval Kalman filter is better than those of the interval Kalman filter and the standard filter.
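
    As a minimal illustration of the flavor of such a scheme, the sketch below runs a scalar Kalman filter in which the measurement-noise variance is known only as an interval and each update conservatively uses the endpoint that maximizes the innovation covariance, so the gain is built from a "worst-case" inverse. This is a simplified stand-in for the suboptimal interval Kalman filter, not the algorithm of the paper.

        import numpy as np

        def kf_step(x, P, z, F=1.0, Q=0.01, H=1.0, R_interval=(0.04, 0.09)):
            """One scalar Kalman step with an interval-valued measurement variance."""
            R = max(R_interval)             # worst-case realization of the interval
            x_pred = F * x
            P_pred = F * P * F + Q
            S = H * P_pred * H + R          # innovation covariance
            K = P_pred * H / S              # gain built from the worst-case inverse 1/S
            x_new = x_pred + K * (z - H * x_pred)
            P_new = (1.0 - K * H) * P_pred
            return x_new, P_new

        # Track a slowly drifting position from noisy range measurements (synthetic data).
        rng = np.random.default_rng(0)
        truth, x, P = 0.0, 0.0, 1.0
        for _ in range(20):
            truth += 0.05
            z = truth + rng.normal(0.0, 0.2)
            x, P = kf_step(x, P, z)
        print(f"final estimate {x:.3f} vs truth {truth:.3f}")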

  20. Optimized sampling of hydroperoxides and investigations of the water vapour dependence of hydroperoxide formation during ozonolysis of alkenes; Optimierung der Probenahme von Hydroperoxiden und Untersuchungen zur Wasserdampfabhaengigkeit der Bildung von Hydroperoxiden bei der Ozonolyse von Alkenen

    Energy Technology Data Exchange (ETDEWEB)

    Becker, K.H.; Plagens, H.

    1997-06-01

    There are several sampling methods for hydroperoxides, none of which is particularly reliable. The authors therefore tested three new methods in order to optimize hydroperoxide sampling and, using the optimized sampling procedure, to investigate the water vapour dependence of hydroperoxide formation during ozonolysis of alkenes. (orig.) [German original, translated] For the sampling of hydroperoxides, several methods exist, none of which could so far be regarded as particularly reliable. In this work, three methods were therefore tested in order to optimize hydroperoxide sampling and, with the selected method, to investigate the water vapour dependence of hydroperoxide formation during the ozonolysis of alkenes. (orig.)

  1. [Determination of 51 carbamate pesticide residues in vegetables by liquid chromatography-tandem mass spectrometry based on optimization of QuEChERS sample preparation method].

    Science.gov (United States)

    Wang, Lianzhu; Zhou, Yu; Huang, Xiaoyan; Wang, Ruilong; Lin, Zixu; Chen, Yong; Wang, Dengfei; Lin, Dejuan; Xu, Dunming

    2013-12-01

    The raw extracts of six vegetables (tomato, green bean, shallot, broccoli, ginger and carrot) were analyzed using gas chromatography-mass spectrometry (GC-MS) in full scan mode combined with NIST library search to confirm main matrix compounds. The effects of cleanup and adsorption mechanisms of primary secondary amine (PSA) , octadecylsilane (C18) and PSA + C18 on co-extractives were studied by the weight of evaporation residue for extracts before and after cleanup. The suitability of the two versions of QuEChERS method for sample preparation was evaluated for the extraction of 51 carbamate pesticides in the six vegetables. One of the QuEChERS methods was the original un-buffered method published in 2003, and the other was AOAC Official Method 2007.01 using acetate buffer. As a result, the best effects were obtained from using the combination of C18 and PSA for extract cleanup in vegetables. The acetate-buffered version was suitable for the determination of all pesticides except dioxacarb. Un-buffered QuEChERS method gave satisfactory results for determining dioxacarb. Based on these results, the suitable QuEChERS sample preparation method and liquid chromatography-positive electrospray ionization-tandem mass spectrometry under the optimized conditions were applied to determine the 51 carbamate pesticide residues in six vegetables. The analytes were quantified by matrix-matched standard solution. The recoveries at three levels of 10, 20 and 100 microg/kg spiked in six vegetables ranged from 58.4% to 126% with the relative standard deviations of 3.3%-26%. The limits of quantification (LOQ, S/N > or = 10) were 0.2-10 microg/kg except that the LOQs of cartap and thiofanox were 50 microg/kg. The method is highly efficient, sensitive and suitable for monitoring the 51 carbamate pesticide residues in vegetables.

  2. Assessment of pituitary micro-lesions using 3D sampling perfection with application-optimized contrasts using different flip-angle evolutions.

    Science.gov (United States)

    Wang, Jing; Wu, Yue; Yao, Zhenwei; Yang, Zhong

    2014-12-01

    The aim of this study was to explore the value of three-dimensional sampling perfection with application-optimized contrasts using different flip-angle evolutions (3D-SPACE) sequence in assessment of pituitary micro-lesions. Coronal 3D-SPACE as well as routine T1- and dynamic contrast-enhanced (DCE) T1-weighted images of the pituitary gland were acquired in 52 patients (48 women and four men; mean age, 32 years; age range, 17-50 years) with clinically suspected pituitary abnormality at 3.0 T, retrospectively. The interobserver agreement of assessment results was analyzed with K-statistics. Qualitative analyses were compared using Wilcoxon signed-rank test. There was good interobserver agreement of the independent evaluations for 3D-SPACE images (k = 0.892), fair for routine MR images (k = 0.649). At 3.0 T, 3D-SPACE provided significantly better images than routine MR images in terms of the boundary of pituitary gland, definition of pituitary lesions, and overall image quality. The evaluation of pituitary micro-lesions using combined routine and 3D-SPACE MR imaging was superior to that using only routine or 3D-SPACE imaging. The 3D-SPACE sequence can be used for appropriate and successful evaluation of the pituitary gland. We suggest 3D-SPACE sequence to be a powerful supplemental sequence in MR examinations with suspected pituitary micro-lesions.

  3. An optimal dynamic interval stabbing-max data structure?

    DEFF Research Database (Denmark)

    Agarwal, Pankaj Kumar; Arge, Lars; Yi, Ke

    2005-01-01

    In this paper we consider the dynamic stabbing-max problem, that is, the problem of dynamically maintaining a set S of n axis-parallel hyper-rectangles in Rd, where each rectangle s ∈ S has a weight w(s) ∈ R, so that the rectangle with the maximum weight containing a query point can be determined...

  4. Inverse Interval Matrix: A Survey

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří; Farhadsefat, R.

    2011-01-01

    Roč. 22, - (2011), s. 704-719 E-ISSN 1081-3810 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords: interval matrix * inverse interval matrix * NP-hardness * enclosure * unit midpoint * inverse sign stability * nonnegative invertibility * absolute value equation * algorithm Subject RIV: BA - General Mathematics Impact factor: 0.808, year: 2010 http://www.math.technion.ac.il/iic/ela/ela-articles/articles/vol22_pp704-719.pdf

  5. Square-wave anodic-stripping voltammetric determination of Cd, Pb and Cu in wine: Set-up and optimization of sample pre-treatment and instrumental parameters

    International Nuclear Information System (INIS)

    Illuminati, Silvia; Annibaldi, Anna; Truzzi, Cristina; Finale, Carolina; Scarponi, Giuseppe

    2013-01-01

    For the first time, square-wave anodic-stripping voltammetry (SWASV) was set up and optimized for the determination of Cd, Pb and Cu in white wine after UV photo-oxidative digestion of the sample. The best procedure for the sample pre-treatment consisted in a 6-h UV irradiation of diluted, acidified wine, with the addition of ultrapure H 2 O 2 (three sequential additions during the irradiation). Due to metal concentration differences, separate measurements were carried out for Cd (deposition potential −950 mV vs. Ag/AgCl/3 M KCl deposition time 15 min) and simultaneously for Pb and Cu (E d −750 mV, t d 30 s). The optimum set-up of the main instrumental parameters, evaluated also in terms of the signal-to-noise ratio, were as follows: E SW 20 mV, f 100 Hz, ΔE step 8 mV, t step 100 ms, t wait 60 ms, t delay 2 ms, t meas 3 ms. The electrochemical behaviour was reversible bielectronic for Cd and Pb, and kinetically controlled monoelectronic for Cu. Good accuracy was found both when the recovery procedure was used and when the results were compared with data obtained by differential pulse anodic stripping voltammetry. The linearity of the response was verified up to ∼4 μg L −1 for Cd and Pb and ∼15 μg L −1 for Cu. The detection limits for t d = 5 min in the 10 times diluted, UV digested sample were (ng L −1 ): Cd 7.0, Pb 1.2 and Cu 6.6, which are well below currently applied methods. Application to a Verdicchio dei Castelli di Jesi white wine revealed concentration levels of Cd ∼0.2, Pb ∼10, Cu ∼30 μg L −1 with repeatabilities of (±RSD%) Cd ±6%, Pb ±5%, Cu ±10%

  6. Highly selective solid phase extraction and preconcentration of Azathioprine with nano-sized imprinted polymer based on multivariate optimization and its trace determination in biological and pharmaceutical samples

    Energy Technology Data Exchange (ETDEWEB)

    Davarani, Saied Saeed Hosseiny, E-mail: ss-hosseiny@cc.sbu.ac.ir [Faculty of Chemistry, Shahid Beheshti University, G. C., P.O. Box 19839-4716, Tehran (Iran, Islamic Republic of); Rezayati zad, Zeinab [Faculty of Chemistry, Shahid Beheshti University, G. C., P.O. Box 19839-4716, Tehran (Iran, Islamic Republic of); Taheri, Ali Reza; Rahmatian, Nasrin [Islamic Azad University, Ilam Branch, Ilam (Iran, Islamic Republic of)

    2017-02-01

    In this research, for first time selective separation and determination of Azathioprine is demonstrated using molecularly imprinted polymer as the solid-phase extraction adsorbent, measured by spectrophotometry at λ{sub max} 286 nm. The selective molecularly imprinted polymer was produced using Azathioprine and methacrylic acid as a template molecule and monomer, respectively. A molecularly imprinted solid-phase extraction procedure was performed in column for the analyte from pharmaceutical and serum samples. The synthesized polymers were characterized by infrared spectroscopy (IR), field emission scanning electron microscopy (FESEM). In order to investigate the effect of independent variables on the extraction efficiency, the response surface methodology (RSM) based on Box–Behnken design (BBD) was employed. The analytical parameters such as precision, accuracy and linear working range were also determined in optimal experimental conditions and the proposed method was applied to analysis of Azathioprine. The linear dynamic range and limits of detection were 2.5–0.01 and 0.008 mg L{sup ‐1} respectively. The recoveries for analyte were higher than 95% and relative standard deviation values were found to be in the range of 0.83–4.15%. This method was successfully applied for the determination of Azathioprine in biological and pharmaceutical samples. - Graphical abstract: A new-nano sized imprinted polymer was synthesized and applied as sorbent in SPE in order to selective recognition, preconcentration, and determination of Azathioprine with the response surface methodology based on Box–Behnken design and was successfully investigated for the clean-up of human blood serum and pharmaceutical samples. - Highlights: • The nanosized-imprinted polymer has been synthesized by precipitation polymerization technique. • A molecularly imprinted solid-phase extraction procedure was performed for determination of Azathioprine. • The Azathioprine

  7. Dynamic Properties of QT Intervals

    Czech Academy of Sciences Publication Activity Database

    Halámek, Josef; Jurák, Pavel; Vondra, Vlastimil; Lipoldová, J.; Leinveber, Pavel; Plachý, M.; Fráňa, P.; Kára, T.

    2009-01-01

    Roč. 36, - (2009), s. 517-520 ISSN 0276-6574 R&D Projects: GA ČR GA102/08/1129; GA MŠk ME09050 Institutional research plan: CEZ:AV0Z20650511 Keywords : QT Intervals * arrhythmia diagnosis Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering http://cinc.mit.edu/archives/2009/pdf/0517.pdf

  8. Robust misinterpretation of confidence intervals

    NARCIS (Netherlands)

    Hoekstra, Rink; Morey, Richard; Rouder, Jeffrey N.; Wagenmakers, Eric-Jan

    2014-01-01

    Null hypothesis significance testing (NHST) is undoubtedly the most common inferential technique used to justify claims in the social sciences. However, even staunch defenders of NHST agree that its outcomes are often misinterpreted. Confidence intervals (CIs) have frequently been proposed as a more

  9. Interval matrices: Regularity generates singularity

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří; Shary, S.P.

    2018-01-01

    Roč. 540, 1 March (2018), s. 149-159 ISSN 0024-3795 Institutional support: RVO:67985807 Keywords: interval matrix * regularity * singularity * P-matrix * absolute value equation * diagonally singularizable matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016

  10. Chaotic dynamics from interspike intervals

    DEFF Research Database (Denmark)

    Pavlov, A N; Sosnovtseva, Olga; Mosekilde, Erik

    2001-01-01

    Considering two different mathematical models describing chaotic spiking phenomena, namely, an integrate-and-fire and a threshold-crossing model, we discuss the problem of extracting dynamics from interspike intervals (ISIs) and show that the possibilities of computing the largest Lyapunov expone...

  11. Interval mellem operation for ovariecancer og kemoterapi--sekundaerpublikation [Interval between surgery for ovarian cancer and chemotherapy--secondary publication]

    DEFF Research Database (Denmark)

    Larsen, Erling Peter; Blaakaer, Jan

    2009-01-01

    Worldwide, much effort goes into performing optimal surgery in the treatment of epithelial ovarian cancer (EOC). However, the optimal timing (TI) of postoperative chemotherapy for ovarian cancer remains poorly defined. The relevant literature comprises seven studies with varying characteristics and i...... and includes different prognostic factors. The general supposition is that the time interval does not have a prognostic influence, but experimental studies have shown that it does affect cancer prognosis. Publication date: 2009-Nov-2

  12. Interpregnancy intervals: impact of postpartum contraceptive effectiveness and coverage.

    Science.gov (United States)

    Thiel de Bocanegra, Heike; Chang, Richard; Howell, Mike; Darney, Philip

    2014-04-01

    The purpose of this study was to determine the use of contraceptive methods, which was defined by effectiveness, length of coverage, and their association with short interpregnancy intervals, when controlling for provider type and client demographics. We identified a cohort of 117,644 women from the 2008 California Birth Statistical Master file with second or higher order birth and at least 1 Medicaid (Family Planning, Access, Care, and Treatment [Family PACT] program or Medi-Cal) claim within 18 months after index birth. We explored the effect of contraceptive method provision on the odds of having an optimal interpregnancy interval and controlled for covariates. The average length of contraceptive coverage was 3.81 months (SD = 4.84). Most women received user-dependent hormonal contraceptives as their most effective contraceptive method (55%; n = 65,103 women) and one-third (33%; n = 39,090 women) had no contraceptive claim. Women who used long-acting reversible contraceptive methods had 3.89 times the odds and women who used user-dependent hormonal methods had 1.89 times the odds of achieving an optimal birth interval compared with women who used barrier methods only; women with no method had 0.66 times the odds. When user-dependent methods are considered, the odds of having an optimal birth interval increased for each additional month of contraceptive coverage by 8% (odds ratio, 1.08; 95% confidence interval, 1.08-1.09). Women who were seen by Family PACT or by both Family PACT and Medi-Cal providers had significantly higher odds of optimal birth intervals compared with women who were served by Medi-Cal only. To achieve optimal birth spacing and ultimately to improve birth outcomes, attention should be given to contraceptive counseling and access to contraceptive methods in the postpartum period. Copyright © 2014 Mosby, Inc. All rights reserved.

  13. Elaboração e validação de intervalos de referência longitudinais de peso fetal com uma amostra da população brasileira Elaboration and validation of longitudinal reference intervals of fetal weight with a sample of the Brazilian population

    Directory of Open Access Journals (Sweden)

    Érica Luciana de Paula Furlan

    2012-10-01

    Full Text Available OBJECTIVES: To develop fetal weight prediction models and longitudinal percentiles of estimated fetal weight (EFW) with a sample of the Brazilian population. METHODS: Prospective observational study. Two groups of pregnant women were recruited: Group EPF (fetal weight estimation): patients for the development (EPF-El) and validation (EPF-Val) of a fetal weight prediction model; Group IRL (longitudinal reference intervals): pregnant women for the development (IRL-El) and validation (IRL-Val) of longitudinal reference intervals of EFW. Polynomial regression was applied to the EPF-El subgroup data to generate the fetal weight prediction model. The performance of this model was compared with that of other models available in the literature. Linear mixed models were used to construct longitudinal EFW intervals from the IRL-El subgroup data, and the IRL-Val subgroup data were used to validate these intervals. RESULTS: Four hundred and fifty-eight patients made up Group EPF (EPF-El: 367; EPF-Val: 91) and 315 made up Group IRL (IRL-El: 265; IRL-Val: 50). The formula for calculating EFW was: EFW = -8.277 + 2.146 x DBP x CA x CF - 2.449 x CF x DB