WorldWideScience

Sample records for robust sampling strategies

  1. A Fast and Robust Feature-Based Scan-Matching Method in 3D SLAM and the Effect of Sampling Strategies

    Directory of Open Access Journals (Sweden)

    Cihan Ulas

    2013-11-01

Full Text Available Simultaneous localization and mapping (SLAM) plays an important role in fully autonomous systems when a GNSS (global navigation satellite system) is not available. Studies in both 2D indoor and 3D outdoor SLAM are based on the appearance of environments and utilize scan-matching methods to find the rigid body transformation parameters between two consecutive scans. In this study, a fast and robust scan-matching method based on feature extraction is introduced. Since the method is based on the matching of certain geometric structures, like plane segments, the outliers and noise in the point cloud are considerably eliminated. Therefore, the proposed scan-matching algorithm is more robust than conventional methods. Besides, the registration time and the number of iterations are significantly reduced, since the number of matching points is efficiently decreased. As a scan-matching framework, an improved version of the normal distribution transform (NDT) is used. The probability density functions (PDFs) of the reference scan are generated as in the traditional NDT, and the feature extraction, based on stochastic plane detection, is applied only to the input scan. Using an experimental dataset from an outdoor environment (a university campus), we obtained satisfactory performance results. Moreover, the feature extraction part of the algorithm is considered as a special sampling strategy for scan-matching and compared to other sampling strategies, such as random sampling and grid-based sampling, the latter of which was first used in the NDT. Thus, this study also shows the effect of subsampling on the performance of the NDT.
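The record above describes the voxel/grid step of the NDT framework only at a high level; the following is a minimal numpy sketch of that step (not the authors' implementation), assuming cubic voxels and a point-to-Gaussian likelihood score. The function names, cell size and regularization term are illustrative choices.

```python
# Minimal sketch of the NDT scoring step: the reference scan is voxelized, a
# Gaussian (mean, covariance) is fitted per voxel, and an input scan transformed
# by a candidate pose (R, t) is scored against those per-voxel PDFs.
import numpy as np

def build_ndt_cells(ref_points, cell_size=2.0):
    """Group reference points into cubic cells and fit a Gaussian per cell."""
    cells = {}
    keys = np.floor(ref_points / cell_size).astype(int)
    for key, p in zip(map(tuple, keys), ref_points):
        cells.setdefault(key, []).append(p)
    fitted = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) < 5:                                  # too few points for a stable covariance
            continue
        mean = pts.mean(axis=0)
        cov = np.cov(pts.T) + 1e-6 * np.eye(3)            # regularize the covariance
        fitted[key] = (mean, np.linalg.inv(cov))
    return fitted

def ndt_score(cells, input_points, R, t, cell_size=2.0):
    """Sum of Gaussian likelihood terms for the transformed input scan."""
    score = 0.0
    transformed = input_points @ R.T + t
    for p in transformed:
        key = tuple(np.floor(p / cell_size).astype(int))
        if key in cells:
            mean, cov_inv = cells[key]
            d = p - mean
            score += np.exp(-0.5 * d @ cov_inv @ d)
    return score

# toy usage: score the identity transform on synthetic data
rng = np.random.default_rng(0)
ref = rng.uniform(0, 20, size=(5000, 3))
cells = build_ndt_cells(ref)
print(ndt_score(cells, ref[:500], np.eye(3), np.zeros(3)))
```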

  2. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2015-01-01

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
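As background for the abstract above, the classical (non-robust) Heckman two-step procedure it analyzes can be sketched on simulated data as follows; the variable names, simulated coefficients, and use of statsmodels are illustrative assumptions, not the paper's robustified estimator.

```python
# Sketch of the classical Heckman two-step estimator on simulated data: a probit
# selection equation, then OLS on the selected sample augmented with the inverse
# Mills ratio to correct the selection bias.
import numpy as np
from scipy.stats import norm
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                       # exclusion restriction (selection equation only)
x = rng.normal(size=n)                       # outcome regressor
u, e = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n).T
select = (0.5 + 1.0 * z + u) > 0             # selection equation
y = 1.0 + 2.0 * x + e                        # outcome equation
y_obs = np.where(select, y, np.nan)          # outcome observed only when selected

# Step 1: probit for selection, then inverse Mills ratio for the selected units
Zmat = sm.add_constant(np.column_stack([z]))
probit = sm.Probit(select.astype(float), Zmat).fit(disp=0)
xb = Zmat @ probit.params
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS of observed y on x plus the inverse Mills ratio
Xmat = sm.add_constant(np.column_stack([x[select], imr[select]]))
ols = sm.OLS(y_obs[select], Xmat).fit()
print(ols.params)   # intercept, slope, and IMR coefficient (roughly rho * sigma)
```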

  3. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.

  4. Robust H2 performance for sampled-data systems

    DEFF Research Database (Denmark)

    Rank, Mike Lind

    1997-01-01

Robust H2 performance conditions under structured uncertainty, analogous to well-known methods for H∞ performance, have recently emerged in both discrete and continuous time. This paper considers the extension to uncertain sampled-data systems, taking inter-sample behavior into account. Convex conditions for robust H2 performance are derived for different uncertainty sets.

  5. Robust weak measurements on finite samples

    International Nuclear Information System (INIS)

    Tollaksen, Jeff

    2007-01-01

    A new weak measurement procedure is introduced for finite samples which yields accurate weak values that are outside the range of eigenvalues and which do not require an exponentially rare ensemble. This procedure provides a unique advantage in the amplification of small nonrandom signals by minimizing uncertainties in determining the weak value and by minimizing sample size. This procedure can also extend the strength of the coupling between the system and measuring device to a new regime

  6. Statistical sampling strategies

    International Nuclear Information System (INIS)

    Andres, T.H.

    1987-01-01

    Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized
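To make the Latin hypercube option above concrete, here is a minimal, self-contained sampler on the unit hypercube (a generic sketch, not the code used in the assessment codes the record describes); scipy.stats.qmc.LatinHypercube offers an equivalent library routine.

```python
# Minimal Latin hypercube sampler: each of the d parameters is stratified into n
# equal-probability bins, and the bins are randomly permuted per parameter, so
# every bin of every parameter is sampled exactly once.
import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    rng = np.random.default_rng(rng)
    # one uniform draw inside each of the n strata, per parameter
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    # independently permute the strata for each parameter
    for j in range(n_params):
        u[:, j] = u[rng.permutation(n_samples), j]
    return u   # samples on the unit hypercube; rescale to parameter ranges as needed

samples = latin_hypercube(10, 3, rng=42)
print(samples)
```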

  7. GFC-Robust Risk Management Strategies under the Basel Accord

    NARCIS (Netherlands)

    M.J. McAleer (Michael); J.A. Jiménez-Martín (Juan-Ángel); T. Pérez-Amaral (Teodosio)

    2010-01-01

A risk management strategy is proposed as being robust to the Global Financial Crisis (GFC) by selecting a Value-at-Risk (VaR) forecast that combines the forecasts of different VaR models. The robust forecast is based on the median of the point VaR forecasts of a set of conditional volatility models.
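A toy sketch of the median-combination idea follows (illustrative only; the models, confidence level, and simulated returns below are assumptions, not the paper's Basel Accord setup):

```python
# Several one-day 99% VaR forecasts (historical, normal, Student-t) are computed
# from a window of returns, and the robust forecast is taken as their median.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
returns = stats.t.rvs(df=5, scale=0.01, size=1000, random_state=rng)  # fat-tailed returns
alpha = 0.01

var_hist = -np.quantile(returns, alpha)                               # historical simulation
var_norm = -(returns.mean() + returns.std(ddof=1) * stats.norm.ppf(alpha))
df, loc, scale = stats.t.fit(returns)
var_t = -stats.t.ppf(alpha, df, loc=loc, scale=scale)

forecasts = {"historical": var_hist, "normal": var_norm, "student-t": var_t}
print(forecasts)
print("median (robust) VaR:", np.median(list(forecasts.values())))
```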

  8. Adjustable Robust Strategies for Flood Protection

    NARCIS (Netherlands)

    Postek, Krzysztof; den Hertog, Dick; Kind, J.; Pustjens, Chris

    2016-01-01

Flood protection is of major importance to many flood-prone regions and involves substantial investment and maintenance costs. Modern flood risk management often requires determining a cost-efficient protection strategy, i.e., one with the lowest possible long-run cost that still satisfies the flood protection requirements.

  9. UAV Robust Strategy Control Based on MAS

    Directory of Open Access Journals (Sweden)

    Jian Han

    2014-01-01

Full Text Available A novel multiagent system (MAS) has been proposed to integrate individual UAVs (unmanned aerial vehicles) into a UAV team that can accomplish complex missions with better efficiency and effect. MAS-based UAV team control is better able to cope with dynamic situations and enhances performance beyond that of any single UAV. In this paper, the proposed MAS combines reacting and thinking abilities into a proactive and autonomous hybrid system that can solve missions involving coordinated flight and cooperative operation. The MAS uses a BDI model to support its logical perception and to classify the different missions; the missions are then allocated through an auction mechanism after analyzing dynamic parameters. A Prim potential algorithm, a particle swarm algorithm, and a reallocation mechanism are proposed to realize rational decomposition and optimal allocation in order to reach the maximum profit. Simulations show that the MAS improves the success ratio and robustness, while realizing feasibility of coordinated flight and optimality of cooperative missions.

  10. Spent nuclear fuel sampling strategy

    International Nuclear Information System (INIS)

    Bergmann, D.W.

    1995-01-01

This report proposes a strategy for sampling the spent nuclear fuel (SNF) stored in the 105-K Basins (105-K East and 105-K West). This strategy will support decisions concerning the path forward for SNF disposition efforts in the following areas: (1) SNF isolation activities such as repackaging/overpacking to a newly constructed staging facility; (2) conditioning processes for fuel stabilization; and (3) interim storage options. This strategy was developed without following the Data Quality Objective (DQO) methodology. It is, however, intended to augment the SNF project DQOs. The SNF sampling strategy is derived by evaluating the current storage condition of the SNF and the factors that affected SNF corrosion/degradation.

  11. Robust sampled-data control of hydraulic flight control actuators

    OpenAIRE

    Kliffken, Markus Gustav

    1997-01-01

In today's fly-by-wire systems, the primary flight control surfaces of modern commercial and transport aircraft are driven by electro-hydraulic linear actuators. Changing flight conditions as well as nonlinear actuator dynamics may be interpreted as parameter uncertainties of the linear actuator model. This demands a robust design for the controller. Here the parameter space design is used for direct sampled-data controller synthesis. Therefore, a static output controller is chosen, the...

  12. Robustness analysis of pull strategies in multi-product systems

    Directory of Open Access Journals (Sweden)

    Chukwunonyelum Emmanuel Onyeocha

    2015-09-01

Full Text Available Purpose: This paper examines the behaviour of shared and dedicated Kanban allocation policies of Hybrid Kanban-CONWIP and Basestock-Kanban-CONWIP control strategies in multi-product systems, with consideration of the robustness of optimal solutions to environmental and system variabilities. Design/methodology/approach: Discrete event simulation and an evolutionary multi-objective optimisation approach were utilised to develop the Pareto frontier (sets of non-dominated optimal solutions) and to select an appropriate decision set for the control parameters in the shared Kanban allocation policy (S-KAP) and the dedicated Kanban allocation policy (D-KAP). Simulation experiments were carried out via the ExtendSim simulation application software. The outcomes of PCS+KAP performances were compared via all-pairwise comparison and Nelson's screening and selection procedure for superior PCS+KAP under negligible environmental and system variability. To determine the superior PCS+KAP under system and environmental variability, the optimal solutions were tested for robustness using the Latin hypercube sampling technique and a stochastic dominance test. Findings: The outcome of this study shows that under uncontrollable environmental variability, the dedicated Kanban allocation policy outperformed the shared Kanban allocation policy in a serial manufacturing system with negligible setup times and in a complex assembly line with setup times. Moreover, BK-CONWIP is shown to be a superior strategy to HK-CONWIP. Research limitations/implications: Future research should be conducted to verify the level of flexibility of BK-CONWIP with respect to product mix and product demand volume variations in a complex multi-product system. Practical implications: The outcomes of this work are applicable to multi-product manufacturing industries with significant setup times and systems with negligible setup times. The multi-objective optimisation provides decision support for the selection of control parameters such that

  13. Robustness analysis of interdependent networks under multiple-attacking strategies

    Science.gov (United States)

    Gao, Yan-Li; Chen, Shi-Ming; Nie, Sen; Ma, Fei; Guan, Jun-Jie

    2018-04-01

The robustness of complex networks under attack largely depends on the structure of the network and the nature of the attack. Previous research on interdependent networks has focused on two types of initial attack: random attack and degree-based targeted attack. In this paper, a deliberate attack function is proposed, from which six kinds of deliberate attacking strategies can be derived by adjusting tunable parameters. Moreover, the robustness of four types of interdependent networks (BA-BA, ER-ER, BA-ER and ER-BA) with different coupling modes (random, positive and negative correlation) is evaluated under the different attacking strategies. It is found that the positive coupling mode makes the vulnerability of the interdependent network depend entirely on the most vulnerable sub-network under deliberate attacks, whereas the random and negative coupling modes make the vulnerability depend mainly on the sub-network being attacked. The robustness of the interdependent network is enhanced as the degree-degree correlation coefficient varies from positive to negative. The negative coupling mode is therefore relatively more favourable than the others, and it can substantially improve the robustness of the ER-ER and ER-BA networks. In terms of attacking strategies on interdependent networks, the degree information of a node is more valuable than its betweenness. In addition, we find a more efficient attacking strategy for each coupled interdependent network and propose a corresponding protection strategy for suppressing cascading failure. Our results can be very useful for the safety design and protection of interdependent networks.
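For readers who want to experiment with the kind of comparison described above, a strongly simplified cascade simulation in the Buldyrev style (not the paper's model or parameters) for two one-to-one coupled layers could look like this; the network sizes, attack fraction, and identity coupling are assumptions:

```python
# Nodes outside the giant component of either layer, or whose partner node in
# the other layer has failed, are removed repeatedly until the system stabilizes.
import networkx as nx
import numpy as np

def giant_component(g, alive):
    sub = g.subgraph(alive)
    if sub.number_of_nodes() == 0:
        return set()
    return max(nx.connected_components(sub), key=len)

def cascade(ga, gb, attacked):
    """Return the surviving fraction after attacking node set `attacked` in layer A."""
    alive_a = set(ga.nodes()) - set(attacked)
    alive_b = set(gb.nodes())
    while True:
        alive_a = giant_component(ga, alive_a)
        alive_b = giant_component(gb, alive_b & alive_a)   # node i in B needs partner i in A
        new_a = alive_a & alive_b                           # node i in A needs partner i in B
        if new_a == alive_a:
            return len(new_a) / ga.number_of_nodes()
        alive_a = new_a

n = 2000
ga = nx.barabasi_albert_graph(n, 3, seed=1)    # BA layer
gb = nx.erdos_renyi_graph(n, 6 / n, seed=2)    # ER layer; same node labels = dependency links
frac = 0.2
by_degree = [v for v, _ in sorted(ga.degree, key=lambda kv: -kv[1])][: int(frac * n)]
rng = np.random.default_rng(3)
random_set = rng.choice(n, size=int(frac * n), replace=False).tolist()
print("degree-based attack, surviving fraction:", cascade(ga, gb, by_degree))
print("random attack, surviving fraction:      ", cascade(ga, gb, random_set))
```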

  14. Robust online tracking via adaptive samples selection with saliency detection

    Science.gov (United States)

    Yan, Jia; Chen, Xi; Zhu, QiuPing

    2013-12-01

Online tracking has been shown to be successful in tracking previously unknown objects. However, two important factors lead to the drift problem in online tracking: one is how to select correctly labeled samples even when the target locations are inaccurate, and the other is how to handle confusors that have features similar to the target. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To avoid degrading the classifiers with misaligned samples, we introduce a saliency detection method into the tracking problem. Saliency maps and the strong classifiers are combined to extract the most reliable positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as negative samples, we propose a reasonable selection criterion in which both saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as binary classification via an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.
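The spectral-residual saliency step mentioned above follows a well-known recipe (Hou and Zhang's spectral residual); a minimal numpy/scipy sketch, with the filter sizes and toy image as illustrative assumptions, is:

```python
# Saliency = inverse FFT of (log amplitude spectrum minus its local average),
# keeping the original phase, then squared and smoothed.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(img):
    """img: 2-D grayscale array; returns a saliency map of the same shape."""
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3)       # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_filter(sal, sigma=2.5)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

# toy image: flat background with one bright block plus noise
img = np.zeros((64, 64))
img[20:30, 35:45] = 1.0
sal = spectral_residual_saliency(img + 0.05 * np.random.default_rng(0).normal(size=img.shape))
print("mean saliency inside block:", round(float(sal[20:30, 35:45].mean()), 3),
      " overall:", round(float(sal.mean()), 3))
```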

  15. On the robust optimization to the uncertain vaccination strategy problem

    International Nuclear Information System (INIS)

    Chaerani, D.; Anggriani, N.; Firdaniza

    2014-01-01

    In order to prevent an epidemic of infectious diseases, the vaccination coverage needs to be minimized and also the basic reproduction number needs to be maintained below 1. This means that as we get the vaccination coverage as minimum as possible, thus we need to prevent the epidemic to a small number of people who already get infected. In this paper, we discuss the case of vaccination strategy in term of minimizing vaccination coverage, when the basic reproduction number is assumed as an uncertain parameter that lies between 0 and 1. We refer to the linear optimization model for vaccination strategy that propose by Becker and Starrzak (see [2]). Assuming that there is parameter uncertainty involved, we can see Tanner et al (see [9]) who propose the optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set such that we can claim that the obtained result will be achieved in a polynomial time algorithm (as it is guaranteed by the RO methodology). The robust counterpart model is presented

  16. On the robust optimization to the uncertain vaccination strategy problem

    Energy Technology Data Exchange (ETDEWEB)

    Chaerani, D., E-mail: d.chaerani@unpad.ac.id; Anggriani, N., E-mail: d.chaerani@unpad.ac.id; Firdaniza, E-mail: d.chaerani@unpad.ac.id [Department of Mathematics, Faculty of Mathematics and Natural Sciences, University of Padjadjaran Indonesia, Jalan Raya Bandung Sumedang KM 21 Jatinangor Sumedang 45363 (Indonesia)

    2014-02-21

In order to prevent an epidemic of infectious diseases, the vaccination coverage needs to be minimized and the basic reproduction number needs to be maintained below 1. This means that, while keeping the vaccination coverage as low as possible, the epidemic must still be contained to the small number of people who are already infected. In this paper, we discuss the case of a vaccination strategy, in terms of minimizing vaccination coverage, when the basic reproduction number is assumed to be an uncertain parameter that lies between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). Assuming that there is parameter uncertainty involved, Tanner et al (see [9]) propose an optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, so that the obtained result is guaranteed to be achieved by a polynomial-time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented.

  17. Flexible and robust strategies for waste management in Sweden

    International Nuclear Information System (INIS)

    Finnveden, Goeran; Bjoerklund, Anna; Reich, Marcus Carlsson; Eriksson, Ola; Soerbom, Adrienne

    2007-01-01

Treatment of solid waste continues to be on the political agenda. Waste disposal issues are often viewed from an environmental perspective, but economic and social aspects also need to be considered when deciding on waste strategies and policy instruments. The aim of this paper is to suggest flexible and robust strategies for waste management in Sweden, and to discuss different policy instruments. Emphasis is on environmental aspects, but social and economic aspects are also considered. The results show that most waste treatment methods have a role to play in a robust and flexible integrated waste management system, and that the waste hierarchy is valid as a rule of thumb from an environmental perspective. A review of social aspects shows that there is a general willingness among people to source-separate wastes. A package of policy instruments can include a landfill tax, an incineration tax differentiated with respect to the content of fossil fuels and a weight-based incineration tax, as well as support for the use of biogas and recycled materials

  18. Soil sampling strategies: Evaluation of different approaches

    International Nuclear Information System (INIS)

    De Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia

    2008-01-01

    The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies

  19. Soil sampling strategies: Evaluation of different approaches

    Energy Technology Data Exchange (ETDEWEB)

    De Zorzi, Paolo [Agenzia per la Protezione dell' Ambiente e per i Servizi Tecnici (APAT), Servizio Metrologia Ambientale, Via di Castel Romano, 100-00128 Roma (Italy)], E-mail: paolo.dezorzi@apat.it; Barbizzi, Sabrina; Belli, Maria [Agenzia per la Protezione dell' Ambiente e per i Servizi Tecnici (APAT), Servizio Metrologia Ambientale, Via di Castel Romano, 100-00128 Roma (Italy); Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia [Agenzia Regionale per la Prevenzione e Protezione dell' Ambiente del Veneto, ARPA Veneto, U.O. Centro Qualita Dati, Via Spalato, 14-36045 Vicenza (Italy)

    2008-11-15

The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies.

  20. Soil sampling strategies: evaluation of different approaches.

    Science.gov (United States)

    de Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia

    2008-11-01

The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies.

  1. Mars Sample Return - Launch and Detection Strategies for Orbital Rendezvous

    Science.gov (United States)

    Woolley, Ryan C.; Mattingly, Richard L.; Riedel, Joseph E.; Sturm, Erick J.

    2011-01-01

    This study sets forth conceptual mission design strategies for the ascent and rendezvous phase of the proposed NASA/ESA joint Mars Sample Return Campaign. The current notional mission architecture calls for the launch of an acquisition/cache rover in 2018, an orbiter with an Earth return vehicle in 2022, and a fetch rover and ascent vehicle in 2024. Strategies are presented to launch the sample into a coplanar orbit with the Orbiter which facilitate robust optical detection, orbit determination, and rendezvous. Repeating ground track orbits exist at 457 and 572 km which provide multiple launch opportunities with similar geometries for detection and rendezvous.

  2. Mars Sample Return: Launch and Detection Strategies for Orbital Rendezvous

    Science.gov (United States)

    Woolley, Ryan C.; Mattingly, Richard L.; Riedel, Joseph E.; Sturm, Erick J.

    2011-01-01

This study sets forth conceptual mission design strategies for the ascent and rendezvous phase of the proposed NASA/ESA joint Mars Sample Return Campaign. The current notional mission architecture calls for the launch of an acquisition/caching rover in 2018, an Earth return orbiter in 2022, and a fetch rover with ascent vehicle in 2024. Strategies are presented to launch the sample into a nearly coplanar orbit with the Orbiter which would facilitate robust optical detection, orbit determination, and rendezvous. Repeating ground track orbits exist at 457 and 572 km which would provide multiple launch opportunities with similar geometries for detection and rendezvous.

  3. Small Sample Robust Testing for Normality against Pareto Tails

    Czech Academy of Sciences Publication Activity Database

    Stehlík, M.; Fabián, Zdeněk; Střelec, L.

    2012-01-01

    Roč. 41, č. 7 (2012), s. 1167-1194 ISSN 0361-0918 Grant - others:Aktion(CZ-AT) 51p7, 54p21, 50p14, 54p13 Institutional research plan: CEZ:AV0Z10300504 Keywords : consistency * Hill estimator * t-Hill estimator * location functional * Pareto tail * power comparison * returns * robust tests for normality Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.295, year: 2012

  4. Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions

    Science.gov (United States)

    Ilgen, Marc R.

    This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value

  5. Robustness of networks against propagating attacks under vaccination strategies

    International Nuclear Information System (INIS)

    Hasegawa, Takehisa; Masuda, Naoki

    2011-01-01

    We study the effect of vaccination on the robustness of networks against propagating attacks that obey the susceptible–infected–removed model. By extending the generating function formalism developed by Newman (2005 Phys. Rev. Lett. 95 108701), we analytically determine the robustness of networks that depends on the vaccination parameters. We consider the random defense where nodes are vaccinated randomly and the degree-based defense where hubs are preferentially vaccinated. We show that, when vaccines are inefficient, the random graph is more robust against propagating attacks than the scale-free network. When vaccines are relatively efficient, the scale-free network with the degree-based defense is more robust than the random graph with the random defense and the scale-free network with the random defense
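A simplified, self-contained simulation in the spirit of the comparison above (an SIR-type propagating attack on a scale-free graph, random versus hub vaccination) might look as follows; it is not the paper's generating-function analysis, and the transmission probability, network model and vaccine allocation are assumed:

```python
# Each infected node transmits to each susceptible neighbour with a fixed
# probability; vaccinated nodes are removed from the susceptible pool.
import networkx as nx
import numpy as np

def outbreak_size(g, vaccinated, p_transmit=0.3, rng=None):
    rng = np.random.default_rng(rng)
    susceptible = set(g.nodes()) - vaccinated
    seed = rng.choice(list(susceptible))
    infected, removed = {seed}, set()
    while infected:
        node = infected.pop()
        removed.add(node)
        for nb in g.neighbors(node):
            if nb in susceptible and nb not in removed and nb not in infected:
                if rng.random() < p_transmit:
                    infected.add(nb)
    return len(removed) / g.number_of_nodes()

g = nx.barabasi_albert_graph(5000, 3, seed=0)
n_vacc = 500
rng = np.random.default_rng(1)
random_vacc = set(rng.choice(g.number_of_nodes(), n_vacc, replace=False).tolist())
hubs = set(v for v, _ in sorted(g.degree, key=lambda kv: -kv[1])[:n_vacc])

print("mean outbreak size, random vaccination:      ",
      np.mean([outbreak_size(g, random_vacc, rng=i) for i in range(20)]))
print("mean outbreak size, degree-based vaccination:",
      np.mean([outbreak_size(g, hubs, rng=i) for i in range(20)]))
```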

  6. Robust

    DEFF Research Database (Denmark)

    2017-01-01

'Robust – Reflections on Resilient Architecture' is a scientific publication following the conference of the same name in November 2017. Researchers and PhD Fellows associated with the Masters programme Cultural Heritage, Transformation and Restoration (Transformation) at The Royal Danish...

  7. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Directory of Open Access Journals (Sweden)

    Jake M Ferguson

    2014-06-01

    Full Text Available The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  8. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
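To illustrate the kind of host-versus-vector comparison the framework above formalizes, here is a toy simulation with assumed (not the paper's) parameter values; the detection probability for n independent samples from a population with prevalence p is 1 - (1 - p)^n:

```python
# Simple host-vector SI/SIR-style model integrated with Euler steps; at each
# reporting time the detection probabilities of sampling the host versus the
# vector population are compared for a fixed number of samples.
import numpy as np

def simulate(days=60, dt=0.1):
    Nh, Nv = 1000.0, 10000.0          # host and vector population sizes (assumed)
    beta_hv, beta_vh = 0.4, 0.2       # transmission rates vector->host, host->vector (assumed)
    gamma, mu_v = 0.1, 0.07           # host recovery rate, vector mortality rate (assumed)
    Ih, Iv = 0.0, 1.0                 # pathogen introduced in the vector population
    out = []
    for step in range(int(days / dt)):
        dIh = beta_hv * Iv / Nv * (Nh - Ih) - gamma * Ih
        dIv = beta_vh * Ih / Nh * (Nv - Iv) - mu_v * Iv
        Ih, Iv = Ih + dt * dIh, Iv + dt * dIv
        out.append((step * dt, Ih / Nh, Iv / Nv))
    return out

n_samples = 50
for t, prev_h, prev_v in simulate()[::100]:   # report every 10 days
    p_host = 1 - (1 - prev_h) ** n_samples
    p_vector = 1 - (1 - prev_v) ** n_samples
    better = "host" if p_host > p_vector else "vector"
    print(f"day {t:5.1f}: detect via host {p_host:.2f}, via vector {p_vector:.2f} -> sample {better}")
```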

  9. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely, and subsequently reweighting the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained
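A toy numerical illustration of the flat-distribution-plus-reweighting idea described above (a one-dimensional double well rather than a molecular simulation; all values are illustrative):

```python
# The reaction coordinate x of E(x) = (x^2 - 1)^2 is sampled from a flat
# distribution and Boltzmann statistics at temperature kT are recovered by
# reweighting with exp(-E/kT). A plain low-temperature Metropolis run started in
# one well is shown for contrast: it tends to stay trapped on one side.
import numpy as np

rng = np.random.default_rng(0)
E = lambda x: (x**2 - 1.0)**2
kT = 0.05

# flat sampling over the reaction coordinate, then Boltzmann reweighting
x_flat = rng.uniform(-2.0, 2.0, size=200_000)
w = np.exp(-E(x_flat) / kT)
mean_abs_x = np.sum(np.abs(x_flat) * w) / np.sum(w)   # Boltzmann average of |x|
frac_right = np.sum(w[x_flat > 0]) / np.sum(w)        # symmetric wells -> close to 0.5

# plain Metropolis at the same temperature, started in the left well
x, samples = -1.0, []
for _ in range(200_000):
    prop = x + rng.normal(scale=0.1)
    if rng.random() < np.exp(-(E(prop) - E(x)) / kT):
        x = prop
    samples.append(x)
samples = np.array(samples)

print("reweighted flat sampling: <|x|> = %.3f, P(x>0) = %.2f" % (mean_abs_x, frac_right))
print("plain Metropolis:         <|x|> = %.3f, P(x>0) = %.2f"
      % (np.abs(samples).mean(), (samples > 0).mean()))
```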

  10. Research on Robust Control Strategies for VSC-HVDC

    Science.gov (United States)

    Zhu, Kaicheng; Bao, Hai

    2018-01-01

In the control system of VSC-HVDC, the phase-locked loop (PLL) provides phase signals for voltage vector control and trigger pulses to generate the required reference phase. The PLL is a typical second-order system. When the system is in an unstable state, it oscillates, shifts the trigger angle, produces harmonics, and couples the active and reactive power. Thus, considering the external disturbances introduced by the PLL in the VSC-HVDC control system, the parameter perturbations of the controller and the model uncertainties, an H∞ robust controller for the mixed-sensitivity optimization problem is designed using the Hinf function provided by the robust control toolbox. It is then compared with a proportional-integral controller in MATLAB simulation experiments. With the H∞ robust controller, the active and reactive power of the converter station track changes in the reference values more accurately and quickly, with reduced overshoot. When a step change of active or reactive power occurs, the mutual influence is reduced and better independent regulation is achieved.

  11. Robust multi-objective calibration strategies – possibilities for improving flood forecasting

    Directory of Open Access Journals (Sweden)

    G. H. Schmitz

    2012-10-01

Full Text Available Process-oriented rainfall-runoff models are designed to approximate the complex hydrologic processes within a specific catchment and in particular to simulate the discharge at the catchment outlet. Most of these models exhibit a high degree of complexity and require the determination of various parameters by calibration. Recently, automatic calibration methods have become popular for identifying parameter vectors with high corresponding model performance. The model performance is often assessed by a purpose-oriented objective function. Practical experience suggests that in many situations a single objective function cannot adequately describe the model's ability to represent any aspect of the catchment's behaviour. This is regardless of whether the objective is aggregated from several criteria that measure different (possibly opposite) aspects of the system behaviour. One strategy to circumvent this problem is to define multiple objective functions and to apply a multi-objective optimisation algorithm to identify the set of Pareto-optimal or non-dominated solutions. Nonetheless, there is a major disadvantage of automatic calibration procedures that treat model calibration merely as the solution of an optimisation problem: due to the complex-shaped response surface, the estimated solution of the optimisation problem can correspond to different near-optimum parameter vectors that lead to very different performance on the validation data. Bárdossy and Singh (2008) studied this problem for single-objective calibration problems using the example of hydrological models and proposed a geometrical sampling approach called Robust Parameter Estimation (ROPE). This approach applies the concept of data depth in order to overcome the shortcomings of automatic calibration procedures and find a set of robust parameter vectors. Recent studies confirmed the effectiveness of this method. However, all ROPE approaches published so far just identify

  12. Robust Adaptive Stabilization of Linear Time-Invariant Dynamic Systems by Using Fractional-Order Holds and Multirate Sampling Controls

    Directory of Open Access Journals (Sweden)

    S. Alonso-Quesada

    2010-01-01

Full Text Available This paper presents a strategy for designing a robust discrete-time adaptive controller for stabilizing linear time-invariant (LTI) continuous-time dynamic systems. Such systems may be unstable and non-inversely stable in the worst case. A reduced-order model is considered to design the adaptive controller. The control design is based on the discretization of the system with the use of a multirate sampling device with a fast-sampled control signal. A suitable on-line adaptation of the multirate gains guarantees the stability of the inverse of the discretized estimated model, which is used to parameterize the adaptive controller. A dead zone is included in the parameter estimation algorithm for robustness purposes in the presence of unmodeled dynamics in the controlled dynamic system. The adaptive controller guarantees the boundedness of the measured system signal for all time. Some examples illustrate the efficacy of this control strategy.

  13. Existential risks: exploring a robust risk reduction strategy.

    Science.gov (United States)

    Jebari, Karim

    2015-06-01

A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of event, nor can probabilistic risk analysis. This paper argues that the approach referred to as engineering safety could be applied to reducing the risk from black swan extinction events. It also proposes a conceptual sketch of how such a strategy may be implemented: isolated, self-sufficient, and continuously manned underground refuges. Some characteristics of such refuges are also described, in particular the psychosocial aspects. Furthermore, it is argued that this implementation of the engineering safety strategy (safety barriers) would be effective and plausible and could reduce the risk of an extinction event in a wide range of possible (known and unknown) scenarios. Considering the staggering opportunity cost of an existential catastrophe, such strategies ought to be explored more vigorously.

  14. A robust model predictive control strategy for improving the control performance of air-conditioning systems

    International Nuclear Information System (INIS)

    Huang Gongsheng; Wang Shengwei; Xu Xinhua

    2009-01-01

This paper presents a robust model predictive control strategy for improving the supply air temperature control of air-handling units by dealing directly with the associated uncertainties and constraints. This strategy uses a first-order plus time-delay model with uncertain time delay and system gain to describe the air-conditioning process of an air-handling unit, which usually operates under various weather conditions. The uncertainties of the time delay and system gain, which reflect the nonlinearities and the variable dynamic characteristics, are formulated using an uncertainty polytope. Based on this uncertainty formulation, an offline LMI-based robust model predictive control algorithm is employed to design a robust controller for air-handling units that guarantees good robustness subject to uncertainties and constraints. The proposed robust strategy is evaluated in a dynamic simulation environment of a variable air volume air-conditioning system under various operating conditions, in comparison with a conventional PI control strategy. A robustness analysis of both strategies under different weather conditions is also presented.

  15. Direct infusion-SIM as fast and robust method for absolute protein quantification in complex samples

    Directory of Open Access Journals (Sweden)

    Christina Looße

    2015-06-01

Full Text Available Relative and absolute quantification of proteins in biological and clinical samples are common approaches in proteomics. Until now, targeted protein quantification has mainly been performed using a combination of HPLC-based peptide separation and selected reaction monitoring on triple quadrupole mass spectrometers. Here, we show for the first time the potential of absolute quantification using a direct infusion strategy combined with single ion monitoring (SIM) on a Q Exactive mass spectrometer. Using complex membrane fractions of Escherichia coli, we absolutely quantified the recombinantly expressed heterologous human cytochrome P450 monooxygenase 3A4 (CYP3A4), comparing direct infusion-SIM with conventional HPLC-SIM. Direct infusion-SIM showed only 14.7% (±4.1, s.e.m.) deviation on average compared with HPLC-SIM, and a reduced processing and analysis time of 4.5 min (which could be further decreased to 30 s) per sample, in contrast to 65 min for the LC–MS method. In summary, our simplified workflow using direct infusion-SIM provides a fast and robust method for quantification of proteins in complex protein mixtures.

  16. Optimal strategy analysis based on robust predictive control for inventory system with random demand

    Science.gov (United States)

    Saputra, Aditya; Widowati, Sutrisno

    2017-12-01

In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state-space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which gives the optimal strategy, i.e., the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed with generated random inventory data. The simulation is carried out in MATLAB, where the inventory level must be controlled as close as possible to a chosen set point. The results show that the robust predictive control model provides the optimal strategy, i.e., the optimal product volume to purchase, and that the inventory level follows the given set point.
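As a rough, illustrative sketch of the receding-horizon idea (not the paper's robust formulation or its MATLAB implementation), consider the simple inventory balance x[k+1] = x[k] + u[k] - d[k] with Poisson demand; with this integrator model and expected demand, each period's plan reduces to ordering the expected shortfall relative to the set point:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 30
set_point, d_mean = 50.0, 20.0   # desired inventory level and mean demand (assumed)
x = 30.0                         # initial inventory level

for k in range(T):
    # certainty-equivalent plan: order the expected shortfall for the next period
    # (for this integrator model a multi-period least-squares plan reduces to this move)
    u = max(set_point - x + d_mean, 0.0)
    d = rng.poisson(d_mean)      # realized random demand
    x = x + u - d                # inventory balance
    if k < 10:
        print(f"period {k}: order {u:5.1f}, demand {d:3d}, inventory {x:6.1f}")
```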

  17. Effects of methodology and analysis strategy on robustness of pestivirus phylogeny.

    Science.gov (United States)

    Liu, Lihong; Xia, Hongyan; Baule, Claudia; Belák, Sándor; Wahlberg, Niklas

    2010-01-01

    Phylogenetic analysis of pestiviruses is a useful tool for classifying novel pestiviruses and for revealing their phylogenetic relationships. In this study, robustness of pestivirus phylogenies has been compared by analyses of the 5'UTR, and complete N(pro) and E2 gene regions separately and combined, performed by four methods: neighbour-joining (NJ), maximum parsimony (MP), maximum likelihood (ML), and Bayesian inference (BI). The strategy of analysing the combined sequence dataset by BI, ML, and MP methods resulted in a single, well-supported tree topology, indicating a reliable and robust pestivirus phylogeny. By contrast, the single-gene analysis strategy resulted in 12 trees of different topologies, revealing different relationships among pestiviruses. These results indicate that the strategies and methodologies are two vital aspects affecting the robustness of the pestivirus phylogeny. The strategy and methodologies outlined in this paper may have a broader application in inferring phylogeny of other RNA viruses.

  18. Conditioning and Robustness of RNA Boltzmann Sampling under Thermodynamic Parameter Perturbations.

    Science.gov (United States)

    Rogers, Emily; Murrugarra, David; Heitsch, Christine

    2017-07-25

    Understanding how RNA secondary structure prediction methods depend on the underlying nearest-neighbor thermodynamic model remains a fundamental challenge in the field. Minimum free energy (MFE) predictions are known to be "ill conditioned" in that small changes to the thermodynamic model can result in significantly different optimal structures. Hence, the best practice is now to sample from the Boltzmann distribution, which generates a set of suboptimal structures. Although the structural signal of this Boltzmann sample is known to be robust to stochastic noise, the conditioning and robustness under thermodynamic perturbations have yet to be addressed. We present here a mathematically rigorous model for conditioning inspired by numerical analysis, and also a biologically inspired definition for robustness under thermodynamic perturbation. We demonstrate the strong correlation between conditioning and robustness and use its tight relationship to define quantitative thresholds for well versus ill conditioning. These resulting thresholds demonstrate that the majority of the sequences are at least sample robust, which verifies the assumption of sampling's improved conditioning over the MFE prediction. Furthermore, because we find no correlation between conditioning and MFE accuracy, the presence of both well- and ill-conditioned sequences indicates the continued need for both thermodynamic model refinements and alternate RNA structure prediction methods beyond the physics-based ones. Copyright © 2017. Published by Elsevier Inc.
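The conditioning contrast described above can be illustrated, in a deliberately abstract way that involves no actual RNA folding, by perturbing the energies of a finite set of states: the identity of the minimum-energy state can flip under small perturbations while ensemble averages move smoothly. All quantities below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states = 50
energies = rng.normal(size=n_states)          # stand-in "free energies" for discrete states
feature = rng.random(n_states)                # some structural feature value per state
kT = 1.0

def boltzmann_average(E):
    p = np.exp(-(E - E.min()) / kT)
    p /= p.sum()
    return feature @ p

mfe_changes, avg_shifts = 0, []
for trial in range(1000):
    E_pert = energies + rng.normal(scale=0.05, size=n_states)   # small parameter perturbation
    mfe_changes += int(np.argmin(E_pert) != np.argmin(energies))
    avg_shifts.append(abs(boltzmann_average(E_pert) - boltzmann_average(energies)))

print("fraction of perturbations changing the minimum-energy state:", mfe_changes / 1000)
print("mean shift in the Boltzmann-averaged feature:               ", np.mean(avg_shifts))
```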

  19. Near infrared spectroscopy to estimate the temperature reached on burned soils: strategies to develop robust models.

    Science.gov (United States)

    Guerrero, César; Pedrosa, Elisabete T.; Pérez-Bejarano, Andrea; Keizer, Jan Jacob

    2014-05-01

The temperature reached on soils is an important parameter needed to describe wildfire effects. However, methods for measuring the temperature reached on burned soils have been poorly developed. Recently, the use of near-infrared (NIR) spectroscopy has been proposed as a valuable tool for this purpose. The NIR spectrum of a soil sample contains information on the organic matter (quantity and quality), clay (quantity and quality), minerals (such as carbonates and iron oxides) and water content. Some of these components are modified by heat, and each temperature causes a set of changes, leaving a characteristic fingerprint on the NIR spectrum. This technique requires a model (or calibration) in which the changes in the NIR spectra are related to the temperature reached. To develop the model, several aliquots are heated at known temperatures and used as standards in the calibration set. The model then makes it possible to estimate the temperature reached on a burned sample from its NIR spectrum. However, the estimation of the temperature reached using NIR spectroscopy is due to changes in several components and cannot be attributed to changes in a single soil component. Thus, we estimate the temperature reached through the interaction between temperature and the thermo-sensitive soil components. In addition, we cannot expect these components to be uniformly distributed, even at small scales. Consequently, the proportion of these soil components can vary spatially across the site. This variation will be present in the samples used to construct the model and also in the samples affected by the wildfire. Therefore, the strategies followed to develop robust models should focus on managing this expected variation. In this work we compared the prediction accuracy of models constructed with different approaches. These approaches were designed to provide insights into how to distribute the efforts needed for the development of robust
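A hedged sketch of such a calibration follows; the abstract does not name the regression method, so partial least squares (a common choice for NIR data) is assumed here, and the spectra are synthetic stand-ins:

```python
# Spectra of aliquots heated to known temperatures form the calibration set, and
# the fitted model then predicts the temperature reached by new samples.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 120, 200
temperatures = rng.uniform(20, 700, n_samples)            # known heating temperatures (°C)
base = rng.normal(size=n_wavelengths)                     # baseline spectrum (toy)
response = rng.normal(size=n_wavelengths)                 # spectral change per °C (toy)
spectra = (base[None, :] + np.outer(temperatures, response) / 700
           + 0.05 * rng.normal(size=(n_samples, n_wavelengths)))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, temperatures, random_state=1)
model = PLSRegression(n_components=5).fit(X_tr, y_tr)
pred = model.predict(X_te).ravel()
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"RMSE of predicted temperature: {rmse:.1f} °C")
```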

  20. Sampling strategies for indoor radon investigations

    International Nuclear Information System (INIS)

    Prichard, H.M.

    1983-01-01

Recent investigations prompted by concern about the environmental effects of residential energy conservation have produced many accounts of indoor radon concentrations far above background levels. In many instances, time-normalized annual exposures exceeded the 4 WLM per year standard currently used for uranium mining. Further investigations of indoor radon exposures are necessary to judge the extent of the problem and to estimate the practicality of health effects studies. A number of trends can be discerned as more indoor surveys are reported. It is becoming increasingly clear that local geological factors play a major, if not dominant, role in determining the distribution of indoor radon concentrations in a given area. Within a given locale, indoor radon concentrations tend to be log-normally distributed, and sample means differ markedly from one region to another. The appreciation of geological factors and the general log-normality of radon distributions will improve the accuracy of population dose estimates and facilitate the design of preliminary health effects studies. The relative merits of grab samples, short- and long-term integrated samples, and more complicated dose assessment strategies are discussed in the context of several types of epidemiological investigations. A new passive radon sampler with a 24 hour integration time is described and evaluated as a tool for pilot investigations
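The log-normality point above has practical consequences for survey design; the following small numeric sketch (with an assumed geometric mean and geometric standard deviation, not survey data) shows how arithmetic means of small grab-sample surveys behave relative to geometric means under a log-normal distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
gm, gsd = 1.5, 2.5                      # geometric mean and geometric SD (assumed values)
population = rng.lognormal(mean=np.log(gm), sigma=np.log(gsd), size=100_000)

means, geo_means = [], []
for _ in range(1000):
    sample = rng.choice(population, size=25, replace=False)   # a small survey
    means.append(sample.mean())
    geo_means.append(np.exp(np.log(sample).mean()))

print("true arithmetic mean of the population:", round(float(population.mean()), 2))
print("survey arithmetic means: mean %.2f, spread (sd) %.2f" % (np.mean(means), np.std(means)))
print("survey geometric means:  mean %.2f, spread (sd) %.2f" % (np.mean(geo_means), np.std(geo_means)))
```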

  1. TU-H-CAMPUS-JeP3-01: Towards Robust Adaptive Radiation Therapy Strategies

    International Nuclear Information System (INIS)

    Boeck, M; Eriksson, K; Hardemark, B; Forsgren, A

    2016-01-01

    Purpose: To set up a framework combining robust treatment planning with adaptive reoptimization in order to maintain high treatment quality, to respond to interfractional variations and to identify those patients who will benefit the most from an adaptive fractionation schedule. Methods: We propose adaptive strategies based on stochastic minimax optimization for a series of simulated treatments on a one-dimensional patient phantom. The plan should be able to handle anticipated systematic and random errors and is applied during the first fractions. Information on the individual geometric variations is gathered at each fraction. At scheduled fractions, the impact of the measured errors on the delivered dose distribution is evaluated. For a patient that receives a dose that does not satisfy specified plan quality criteria, the plan is reoptimized based on these individual measurements using one of three different adaptive strategies. The reoptimized plan is then applied during future fractions until a new scheduled adaptation becomes necessary. In the first adaptive strategy the measured systematic and random error scenarios and their assigned probabilities are updated to guide the robust reoptimization. The focus of the second strategy lies on variation of the fraction of the worst scenarios taken into account during robust reoptimization. In the third strategy the uncertainty margins around the target are recalculated with the measured errors. Results: By studying the effect of the three adaptive strategies combined with various adaptation schedules on the same patient population, the group which benefits from adaptation is identified together with the most suitable strategy and schedule. Preliminary computational results indicate when and how best to adapt for the three different strategies. Conclusion: A workflow is presented that provides robust adaptation of the treatment plan throughout the course of treatment and useful measures to identify patients in need

  2. Sample Size and Robustness of Inferences from Logistic Regression in the Presence of Nonlinearity and Multicollinearity

    OpenAIRE

    Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.

    2011-01-01

The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A number...
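A small simulation in the spirit of the study summarized above (the true coefficients, sample sizes, and use of statsmodels are illustrative assumptions) shows how the small-sample bias of the maximum-likelihood slope shrinks with n:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
beta0, beta1 = -0.5, 1.0          # true intercept and slope (assumed)

def fit_once(n):
    """Simulate one data set of size n and return the ML slope estimate."""
    while True:
        x = rng.normal(size=n)
        p = 1 / (1 + np.exp(-(beta0 + beta1 * x)))
        y = rng.binomial(1, p)
        try:
            return sm.Logit(y, sm.add_constant(x)).fit(disp=0).params[1]
        except Exception:
            continue   # rare perfectly separated draws are simply redrawn

for n in (25, 50, 100, 500, 2000):
    slopes = [fit_once(n) for _ in range(300)]
    print(f"n = {n:4d}: mean slope = {np.mean(slopes):.3f}, bias = {np.mean(slopes) - beta1:+.3f}")
```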

  3. The Robust Control Mixer Method for Reconfigurable Control Design By Using Model Matching Strategy

    DEFF Research Database (Denmark)

    Yang, Z.; Blanke, Mogens; Verhagen, M.

    2001-01-01

This paper proposes a robust reconfigurable control synthesis method based on the combination of the control mixer method and robust H∞ control techniques through a model-matching strategy. The control mixer modules are extended from the conventional matrix form into the LTI system form. By regarding the nominal control system as the desired model, an augmented control system is constructed through the model-matching formulation, such that current robust control techniques can be used to synthesize these dynamical modules. One extension of this method with respect to performance recovery, besides the functionality recovery, is also discussed within this framework. Compared with the conventional control mixer method, the proposed method considers the reconfigured system's stability, performance and robustness simultaneously. Finally, the proposed method is illustrated by a case study...

  4. An Integrated Environmental Assessment of Green and Gray Infrastructure Strategies for Robust Decision Making.

    Science.gov (United States)

    Casal-Campos, Arturo; Fu, Guangtao; Butler, David; Moore, Andrew

    2015-07-21

    The robustness of a range of watershed-scale "green" and "gray" drainage strategies in the future is explored through comprehensive modeling of a fully integrated urban wastewater system case. Four socio-economic future scenarios, defined by parameters affecting the environmental performance of the system, are proposed to account for the uncertain variability of conditions in the year 2050. A regret-based approach is applied to assess the relative performance of strategies in multiple impact categories (environmental, economic, and social) as well as to evaluate their robustness across future scenarios. The concept of regret proves useful in identifying performance trade-offs and recognizing states of the world most critical to decisions. The study highlights the robustness of green strategies (particularly rain gardens, resulting in half the regret of most options) over end-of-pipe gray alternatives (surface water separation or sewer and storage rehabilitation), which may be costly (on average, 25% of the total regret of these options) and tend to focus on sewer flooding and CSO alleviation while compromising on downstream system performance (this accounts for around 50% of their total regret). Trade-offs and scenario regrets observed in the analysis suggest that the combination of green and gray strategies may still offer further potential for robustness.
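The regret calculation underlying the comparison above is simple to reproduce; the sketch below uses a made-up performance matrix (lower is better) for three of the strategies named in the abstract across four unnamed scenarios, purely to show the mechanics:

```python
# For each scenario, the regret of a strategy is its gap to the best strategy in
# that scenario; a robust choice minimizes the worst-case (or total) regret.
import numpy as np

strategies = ["rain gardens", "surface water separation", "sewer/storage rehabilitation"]
scenarios = ["scenario A", "scenario B", "scenario C", "scenario D"]

# hypothetical aggregate impact scores per strategy (rows) and scenario (columns)
performance = np.array([
    [2.0, 2.5, 2.2, 1.8],
    [3.0, 3.2, 2.9, 3.3],
    [3.5, 2.8, 3.9, 3.1],
])

regret = performance - performance.min(axis=0)   # gap to the best strategy in each scenario
for name, row in zip(strategies, regret):
    print(f"{name:28s} max regret = {row.max():.2f}   total regret = {row.sum():.2f}")
print("minimax-regret choice:", strategies[int(np.argmin(regret.max(axis=1)))])
```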

  5. Synthesis of robust disturbance-feedback strategies by using semi-definite programming

    NARCIS (Netherlands)

    Trottemant, E.J.

    2015-01-01

    Systems in real-life have to deal with uncertainty in such a manner that a high level of performance is guaranteed under all conditions. The objective in this thesis is to obtain robust strategies that provide an upper bound (worst-case) on the performance of an uncertain system against all

  6. Optimal and Robust Switching Control Strategies : Theory, and Applications in Traffic Management

    NARCIS (Netherlands)

    Hajiahmadi, M.

    2015-01-01

    Macroscopic modeling, predictive and robust control and route guidance for large-scale freeway and urban traffic networks are the main focus of this thesis. In order to increase the efficiency of our control strategies, we propose several mathematical and optimization techniques. Moreover, in the

  7. Estimator-based multiobjective robust control strategy for an active pantograph in high-speed railways

    DEFF Research Database (Denmark)

    Lu, Xiaobing; Liu, Zhigang; Song, Yang

    2018-01-01

    Active control of the pantograph is one of the promising measures for decreasing fluctuation in the contact force between the pantograph and the catenary. In this paper, an estimator-based multiobjective robust control strategy is proposed for an active pantograph, which consists of a state estim...

  8. Optimal robust control strategy of a solid oxide fuel cell system

    Science.gov (United States)

    Wu, Xiaojuan; Gao, Danhui

    2018-01-01

    Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems. Moreover, the existing methods ignore the impact of parameter uncertainty on the system's instantaneous performance. In real SOFC systems, several parameters may vary with the operating conditions and cannot be identified exactly, such as the load current. Therefore, a robust optimal control strategy is proposed, which involves three parts: an SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, boundaries of the uncertain parameter are extracted based on a Monte Carlo algorithm. To achieve the maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe SOFC operation, two feed-forward controllers and a higher-order robust sliding mode controller are then presented to control the fuel utilization ratio, air excess ratio and stack temperature. The results show the proposed robust optimal control method can maintain safe SOFC system operation with maximum efficiency under load and uncertainty variations.
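
    As a rough illustration of the Monte Carlo step described above, the sketch below draws samples of an uncertain load current and extracts near-worst-case bounds that a robust optimizer could use. The nominal value, disturbance distributions, and percentiles are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model of the uncertain parameter: nominal load current with an
# operating-condition-dependent disturbance (distributions assumed, not from the paper).
nominal_current = 30.0          # A, illustrative
samples = nominal_current * (1.0 + rng.normal(0.0, 0.05, size=10_000)) \
          + rng.uniform(-1.0, 1.0, size=10_000)

lower, upper = np.percentile(samples, [0.5, 99.5])   # near-worst-case bounds
print(f"load current bounds passed to the robust optimizer: [{lower:.2f}, {upper:.2f}] A")
```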

  9. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    Full Text Available In a regression analysis, a sample-selection bias arises when a dependent variable is partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR) model as a special case. Then, this special MaxEnt process regression model, i.e., the GPR model, is generalized to obtain a robust sample-selection Gaussian process regression (RSGPR) model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used in the paper to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model in terms of the sample-selection bias correction, robustness to non-normality, and prediction, is demonstrated through results in simulations that attest to its good finite-sample performance.

  10. Methodology Series Module 5: Sampling Strategies.

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice, from the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
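
    The distinction between probability and non-probability sampling is easy to make concrete in code. The following sketch (illustrative only; the population, strata, and sample size are made up) contrasts a simple random sample with a proportionally allocated stratified random sample.

```python
import random

# toy sampling frame: 300 subjects, each tagged with a stratum
population = [{"id": i, "stratum": "urban" if i % 3 else "rural"} for i in range(1, 301)]

# Simple random sample (probability sampling): every subject has the same chance of selection.
srs = random.sample(population, k=30)

# Stratified random sample: draw proportionally within each stratum.
def stratified_sample(pop, key, k):
    strata = {}
    for unit in pop:
        strata.setdefault(unit[key], []).append(unit)
    out = []
    for members in strata.values():
        share = round(k * len(members) / len(pop))   # proportional allocation
        out.extend(random.sample(members, share))
    return out

strat = stratified_sample(population, "stratum", 30)
print(len(srs), "units by SRS;", len(strat), "units by stratified sampling")
```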

  11. Methodology series module 5: Sampling strategies

    Directory of Open Access Journals (Sweden)

    Maninder Singh Setia

    2016-01-01

    Full Text Available Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling – based on the researcher's choice, from the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  12. Methodology Series Module 5: Sampling Strategies

    OpenAIRE

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling – based on the researcher's choice, from the population that is accessible and available. Some of the non-probabilit...

  13. Methodology Series Module 5: Sampling Strategies

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘Sampling Method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling – based on the researcher's choice, from the population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of these results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438

  14. Production and robustness of a Cacao agroecosystem: effects of two contrasting types of management strategies.

    Science.gov (United States)

    Sabatier, Rodolphe; Wiegand, Kerstin; Meyer, Katrin

    2013-01-01

    Ecological intensification, i.e. relying on ecological processes to replace chemical inputs, is often presented as the ideal alternative to conventional farming based on an intensive use of chemicals. It is said both to maintain high yield and to provide more robustness to the agroecosystem. However, few studies have compared the two types of management with respect to their consequences for production and robustness toward perturbation. In this study, our aim is to assess the productive performance and the robustness toward diverse perturbations of a Cacao agroecosystem managed with two contrasting groups of strategies: one group of strategies relying on a high level of pesticides and a second relying on low levels of pesticides. We conducted this study using a dynamical model of a Cacao agroecosystem that includes Cacao production dynamics and the dynamics of three insects: a pest (the Cacao Pod Borer, Conopomorpha cramerella) and two characteristic but unspecified beneficial insects (a pollinator of Cacao and a parasitoid of the Cacao Pod Borer). Our results showed two opposite behaviors of the Cacao agroecosystem depending on its management, i.e. an agroecosystem relying on a high input of pesticides and showing low ecosystem functioning, and an agroecosystem with low inputs, relying on a high functioning of the ecosystem. From the production point of view, neither type of management clearly outclassed the other, and their ranking depended on the type of pesticide used. From the robustness point of view, the two types of management performed differently when subjected to different types of perturbations. Ecologically intensive systems were more robust to pest outbreaks and to perturbations related to pesticide characteristics, while chemically intensive systems were more robust to perturbations related to Cacao production and management.

  15. An efficient, robust, and inexpensive grinding device for herbal samples like Cinchona bark

    DEFF Research Database (Denmark)

    Hansen, Steen Honoré; Holmfred, Else Skovgaard; Cornett, Claus

    2015-01-01

    An effective, robust, and inexpensive grinding device for the grinding of herb samples like bark and roots was developed by rebuilding a commercially available coffee grinder. The grinder was constructed to be able to provide various particle sizes, to be easy to clean, and to have a minimum of dead volume. The recovery of the sample when grinding as little as 50 mg of crude Cinchona bark was about 60%. Grinding is performed in seconds with no rise in temperature, and the grinder is easily disassembled to be cleaned. The influence of the particle size of the obtained powders on the recovery...

  16. An Efficient, Robust, and Inexpensive Grinding Device for Herbal Samples like Cinchona Bark.

    Science.gov (United States)

    Hansen, Steen Honoré; Holmfred, Else; Cornett, Claus; Maldonado, Carla; Rønsted, Nina

    2015-01-01

    An effective, robust, and inexpensive grinding device for the grinding of herb samples like bark and roots was developed by rebuilding a commercially available coffee grinder. The grinder was constructed to be able to provide various particle sizes, to be easy to clean, and to have a minimum of dead volume. The recovery of the sample when grinding as little as 50 mg of crude Cinchona bark was about 60%. Grinding is performed in seconds with no rise in temperature, and the grinder is easily disassembled to be cleaned. The influence of the particle size of the obtained powders on the recovery of analytes in extracts of Cinchona bark was investigated using HPLC.

  17. MPLEx: a Robust and Universal Protocol for Single-Sample Integrative Proteomic, Metabolomic, and Lipidomic Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Nakayasu, Ernesto S.; Nicora, Carrie D.; Sims, Amy C.; Burnum-Johnson, Kristin E.; Kim, Young-Mo; Kyle, Jennifer E.; Matzke, Melissa M.; Shukla, Anil K.; Chu, Rosalie K.; Schepmoes, Athena A.; Jacobs, Jon M.; Baric, Ralph S.; Webb-Robertson, Bobbie-Jo; Smith, Richard D.; Metz, Thomas O.; Chia, Nicholas

    2016-05-03

    ABSTRACT

    Integrative multi-omics analyses can empower more effective investigation and complete understanding of complex biological systems. Despite recent advances in a range of omics analyses, multi-omic measurements of the same sample are still challenging and current methods have not been well evaluated in terms of reproducibility and broad applicability. Here we adapted a solvent-based method, widely applied for extracting lipids and metabolites, to add proteomics to mass spectrometry-based multi-omics measurements. The metabolite, protein, and lipid extraction (MPLEx) protocol proved to be robust and applicable to a diverse set of sample types, including cell cultures, microbial communities, and tissues. To illustrate the utility of this protocol, an integrative multi-omics analysis was performed using a lung epithelial cell line infected with Middle East respiratory syndrome coronavirus, which showed the impact of this virus on the host glycolytic pathway and also suggested a role for lipids during infection. The MPLEx method is a simple, fast, and robust protocol that can be applied for integrative multi-omic measurements from diverse sample types (e.g., environmental, in vitro, and clinical).

    IMPORTANCE In systems biology studies, the integration of multiple omics measurements (i.e., genomics, transcriptomics, proteomics, metabolomics, and lipidomics) has been shown to provide a more complete and informative view of biological pathways. Thus, the prospect of extracting different types of molecules (e.g., DNAs, RNAs, proteins, and metabolites) and performing multiple omics measurements on single samples is very attractive, but such studies are challenging due to the fact that the extraction conditions differ according to the molecule type. Here, we adapted an organic solvent-based extraction method that demonstrated

  18. Robust non-parametric one-sample tests for the analysis of recurrent events.

    Science.gov (United States)

    Rebora, Paola; Galimberti, Stefania; Valsecchi, Maria Grazia

    2010-12-30

    One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population. Copyright © 2010 John Wiley & Sons, Ltd.
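
    A stripped-down version of such a one-sample statistic can be written in a few lines. The sketch below is not the authors' weighted or robust test: it uses an unweighted observed-minus-expected distance with a Poisson-type variance, and the counts, follow-up times, and reference rate are invented for illustration.

```python
import math

def one_sample_recurrent_test(event_counts, followup_years, reference_rate):
    """Simplified, unweighted one-sample recurrent-event test: standardized
    distance between observed and expected event counts under a specified
    reference rate, with the variance approximated as Poisson."""
    observed = sum(event_counts)
    expected = reference_rate * sum(followup_years)
    z = (observed - expected) / math.sqrt(expected)
    p = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value
    return z, p

# toy data: 10 children, counts of severe infections and years of follow-up
counts = [0, 2, 1, 0, 3, 1, 0, 0, 2, 1]
years  = [1.5, 2.0, 1.0, 2.5, 2.0, 1.0, 1.5, 2.0, 1.0, 1.5]
z, p = one_sample_recurrent_test(counts, years, reference_rate=1.2)
print(f"z = {z:.2f}, p = {p:.3f}")
```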

  19. Semiparametric efficient and robust estimation of an unknown symmetric population under arbitrary sample selection bias

    KAUST Repository

    Ma, Yanyuan

    2013-09-01

    We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.

  20. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

    Latest technologies like the Internet, corporate intranets, data warehouses, ERPs, satellites, digital sensors, embedded systems and mobile networks are all generating such massive amounts of data that it is getting very difficult to analyze and understand all these data, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution. Using a fraction of the computing resources, sampling can often provide the same level of accuracy. The process of sampling requires much care because there are many factors involved in the determination of the correct sample size. The approach proposed in this paper tries to find a solution to this problem. Based on a statistical formula, after setting some parameters, it returns a sample size called the 'sufficient sample size', which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
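
    The abstract does not reproduce the statistical formula it relies on, so the sketch below substitutes a common Cochran-style sample-size calculation with a finite-population correction, followed by probability sampling of the records; treat the formula and parameters as assumptions rather than the paper's method.

```python
import math
import random

def cochran_sample_size(population_size, z=1.96, margin=0.05, p=0.5):
    """Cochran-style sample size with a finite-population correction.
    Illustrative only -- the paper's exact formula is not reproduced here."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population_size))

records = list(range(1_000_000))          # stand-in for a huge dataset
n = cochran_sample_size(len(records))
sample = random.sample(records, n)        # probability sampling of the records
print(f"sufficient sample size: {n} of {len(records)} records")
```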

  1. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  2. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Directory of Open Access Journals (Sweden)

    Kim Seon-Young

    2009-05-01

    Full Text Available Background: The small overlap between independently developed gene signatures and the poor inter-study applicability of gene signatures are two major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results: A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced using the same microarray platform, and, using this data set, the effects of varying sample sizes on several performance measures of a prognostic gene signature were investigated. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples. The accuracy of outcome prediction also increased with more samples. Finally, analysis using only Estrogen Receptor-positive (ER+) patients attained higher prediction accuracy than using all patients, suggesting that subtype-specific analysis can lead to the development of better prognostic gene signatures. Conclusion: Increasing sample sizes generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement with increased sample size differed between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.

  3. Developing a Robust Strategy Map in Balanced Scorecard Model Using Scenario Planning

    Directory of Open Access Journals (Sweden)

    Mostafa Jafari

    2015-01-01

    Full Text Available The key to successful strategy implementation in an organization is for people in the organization to understand it, which requires the establishment of complicated but vital processes whereby the intangible assets are converted into tangible outputs. In this regard, a strategy map is a useful tool that helps execute this difficult task. However, such maps are typically developed based on ambiguous cause-effect relationships that result from the extrapolation of past data and flawed links with possible futures. If the strategy map is a mere reflection of the status quo rather than of future conditions and does not embrace real-world uncertainties, it will endanger the organization, since it posits that the current situation will continue. In order to compensate for this deficiency, the environmental scenarios affecting an organization were identified in the present study. Then the strategy map was developed in the form of a scenario-based balanced scorecard. In addition, the effect of environmental changes on the components of the strategy map was investigated using the strategy maps illustrated over time together with the corresponding cash flow vectors. Subsequently, a method was proposed to calculate the degree of robustness of every component of the strategy map for the contingency of every scenario. Finally, the results were applied to a post office.

  4. Unbiased tensor-based morphometry: improved robustness and sample size estimates for Alzheimer's disease clinical trials.

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P; Ching, Christopher R K; Boyle, Christina P; Rajagopalan, Priya; Gutman, Boris A; Leow, Alex D; Toga, Arthur W; Jack, Clifford R; Harvey, Danielle; Weiner, Michael W; Thompson, Paul M

    2013-02-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. Copyright © 2012 Elsevier Inc. All rights reserved.
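
    A conventional two-sample power calculation of the kind referred to above can be sketched as follows. The atrophy-rate mean and standard deviation used here are placeholders, not the ADNI-derived estimates, so the resulting arm sizes will not match the 39/95 figures quoted in the abstract.

```python
import math

def n_per_arm(mean_change, sd_change, reduction=0.25, alpha=0.05, power=0.80):
    """Two-sample sample-size formula for detecting a `reduction` in the mean
    annualized change with a two-sided test (normal approximation)."""
    z_a = 1.959964   # z for alpha/2 = 0.025
    z_b = 0.841621   # z for power = 0.80
    delta = reduction * mean_change
    return math.ceil(2 * (z_a + z_b) ** 2 * (sd_change / delta) ** 2)

# assumed atrophy statistics (percent volume change over 24 months), for illustration only
print("AD arm size :", n_per_arm(mean_change=2.2, sd_change=1.0))
print("MCI arm size:", n_per_arm(mean_change=1.2, sd_change=1.0))
```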

  5. A robust control strategy for a class of distributed network with transmission delays

    DEFF Research Database (Denmark)

    Vahid Naghavi, S.; A. Safavi, A.; Khooban, Mohammad Hassan

    2016-01-01

    Purpose: The purpose of this paper is the design of a robust model predictive controller for distributed networked systems with transmission delays. Design/methodology/approach: The overall system is composed of a number of interconnected nonlinear subsystems with time-varying transmission delays, and the control problem is formulated as an optimization of a "worst-case" objective function over an infinite moving horizon. Findings: The aim is to propose a control synthesis approach that depends on the nonlinearity and time-varying delay characteristics. The MPC problem is represented in a time-varying delayed state feedback structure. The synthesis sufficient condition is then provided in the form of a linear matrix inequality (LMI) optimization and is solved online at each time instant. In addition, an LMI-based decentralized observer-based robust model predictive control strategy is proposed. Originality/value: The authors develop RMPC...

  6. Robust identification of noncoding RNA from transcriptomes requires phylogenetically-informed sampling.

    Directory of Open Access Journals (Sweden)

    Stinus Lindgreen

    2014-10-01

    Full Text Available Noncoding RNAs are integral to a wide range of biological processes, including translation, gene regulation, host-pathogen interactions and environmental sensing. While genomics is now a mature field, our capacity to identify noncoding RNA elements in bacterial and archaeal genomes is hampered by the difficulty of de novo identification. The emergence of new technologies for characterizing transcriptome outputs, notably RNA-seq, is improving noncoding RNA identification and expression quantification. However, a major challenge is to robustly distinguish functional outputs from transcriptional noise. To establish whether annotation of existing transcriptome data has effectively captured all functional outputs, we analysed over 400 publicly available RNA-seq datasets spanning 37 different Archaea and Bacteria. Using comparative tools, we identify close to a thousand highly-expressed candidate noncoding RNAs. However, our analyses reveal that capacity to identify noncoding RNA outputs is strongly dependent on phylogenetic sampling. Surprisingly, and in stark contrast to protein-coding genes, the phylogenetic window for effective use of comparative methods is perversely narrow: aggregating public datasets only produced one phylogenetic cluster where these tools could be used to robustly separate unannotated noncoding RNAs from a null hypothesis of transcriptional noise. Our results show that for the full potential of transcriptomics data to be realized, a change in experimental design is paramount: effective transcriptomics requires phylogeny-aware sampling.

  7. Sampling strategies for millipedes (Diplopoda), centipedes ...

    African Journals Online (AJOL)

    At present considerable effort is being made to document and describe invertebrate diversity as part of numerous biodiversity conservation research projects. In order to determine diversity, rapid and effective sampling and estimation procedures are required and these need to be standardized for a particular group of ...

  8. A Robust Longitudinal Control Strategy of Platoons under Model Uncertainties and Time Delays

    Directory of Open Access Journals (Sweden)

    Na Chen

    2018-01-01

    Full Text Available Automated vehicles are designed to free drivers from driving tasks and are expected to improve traffic safety and efficiency when connected via vehicle-to-vehicle communication, that is, connected automated vehicles (CAVs). The time delays and model uncertainties in vehicle control systems pose challenges for automated driving in the real world. Ignoring them may render the performance of cooperative driving systems unsatisfactory or even unstable. This paper aims to design a robust and flexible platooning control strategy for CAVs. A centralized control method is presented, where the leader of a CAV platoon collects information from followers, computes the desired accelerations of all controlled vehicles, and broadcasts the desired accelerations to followers. The robust platooning is formulated as a Min-Max Model Predictive Control (MM-MPC) problem, where optimal accelerations are generated to minimize the cost function under the worst case, with the worst case taken over the possible models. The proposed method is flexible in that it can be applied both to homogeneous platoons and to heterogeneous platoons with mixed human-driven and automated vehicles. A third-order linear vehicle model with fixed feedback delay and stochastic actuator lag is used to predict the platoon behavior. Actuator lag is assumed to vary randomly with an unknown distribution but a known upper bound. The controller regulates platoon accelerations over a time horizon to minimize a cost function representing driving safety, efficiency, and ride comfort, subject to speed limits, a plausible acceleration range, and minimal net spacing. The designed strategy is tested by simulating homogeneous and heterogeneous platoons in a number of typical and extreme scenarios to assess the system stability and performance. The test results demonstrate that the designed control strategy for CAVs can ensure the robustness of stability and performance against model uncertainties

  9. Validated sampling strategy for assessing contaminants in soil stockpiles

    International Nuclear Information System (INIS)

    Lame, Frank; Honders, Ton; Derksen, Giljam; Gadella, Michiel

    2005-01-01

    Dutch legislation on the reuse of soil requires a sampling strategy to determine the degree of contamination. This sampling strategy was developed in three stages. Its main aim is to obtain a single analytical result, representative of the true mean concentration of the soil stockpile. The development process started with an investigation into how sample pre-treatment could be used to obtain representative results from composite samples of heterogeneous soil stockpiles. Combining a large number of random increments allows stockpile heterogeneity to be fully represented in the sample. The resulting pre-treatment method was then combined with a theoretical approach to determine the necessary number of increments per composite sample. At the second stage, the sampling strategy was evaluated using computerised models of contaminant heterogeneity in soil stockpiles. The now theoretically based sampling strategy was implemented by the Netherlands Centre for Soil Treatment in 1995. It was applied to all types of soil stockpiles, ranging from clean to heavily contaminated, over a period of four years. This resulted in a database containing the analytical results of 2570 soil stockpiles. At the final stage these results were used for a thorough validation of the sampling strategy. It was concluded that the model approach has indeed resulted in a sampling strategy that achieves analytical results representative of the mean concentration of soil stockpiles. - A sampling strategy that ensures analytical results representative of the mean concentration in soil stockpiles is presented and validated
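
    The effect of the number of random increments on the reliability of a composite sample can be illustrated with a quick simulation. The stockpile model below (a lognormal mixture with occasional hot spots) and all of its parameters are assumptions for illustration, not the validated Dutch strategy itself.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical heterogeneous stockpile: mostly background soil with a few
# contaminated pockets carrying high concentrations (values are illustrative only).
stockpile = np.where(rng.random(20_000) < 0.05,
                     rng.lognormal(4.0, 0.5, 20_000),   # contaminated pockets
                     rng.lognormal(1.5, 0.4, 20_000))   # background soil
true_mean = stockpile.mean()

def composite_estimate(n_increments):
    """Mean concentration of one composite built from random increments."""
    return rng.choice(stockpile, size=n_increments, replace=False).mean()

for n in (6, 20, 50, 100):
    estimates = [composite_estimate(n) for _ in range(1_000)]
    rel_sd = np.std(estimates) / true_mean
    print(f"{n:3d} increments: relative SD of the composite result {rel_sd:.1%}")
```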

  10. Improving the Robustness of Electromyogram-Pattern Recognition for Prosthetic Control by a Postprocessing Strategy

    Directory of Open Access Journals (Sweden)

    Xu Zhang

    2017-09-01

    Full Text Available Electromyogram (EMG) contains rich information for motion decoding. As one of its major applications, EMG-pattern recognition (PR)-based control of prostheses has been proposed and investigated in the field of rehabilitation robotics for decades. These prostheses can offer a higher level of dexterity compared to the commercially available ones. However, limited progress has been made toward clinical application of EMG-PR-based prostheses, due to their unsatisfactory robustness against various interferences during daily use. These interferences may lead to misclassifications of motion intentions, which damage the control performance of EMG-PR-based prostheses. A number of studies have applied methods that undergo a postprocessing stage to determine the current motion outputs, based on previous outputs or other information, which have proved effective in reducing erroneous outputs. In this study, we proposed a postprocessing strategy that locks the outputs during the constant contraction to block out occasional misclassifications, upon detecting the motion onset using a threshold. The strategy was investigated using three different motion onset detectors, namely mean absolute value, Teager–Kaiser energy operator, or mechanomyogram (MMG). Our results indicate that the proposed strategy could suppress erroneous outputs, during rest and constant contractions in particular. In addition, with MMG as the motion onset detector, the strategy was found to produce the most significant improvement in the performance, reducing the total errors up to around 50% (from 22.9 to 11.5%) in comparison to the original classification output in the online test, and it is the most robust against threshold value changes. We speculate that motion onset detectors that are both smooth and responsive would further enhance the efficacy of the proposed postprocessing strategy, which would facilitate the clinical application of EMG-PR-based prosthetic control.
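
    The locking idea can be sketched as a small postprocessing function: once an onset detector crosses a threshold, the class decided at onset is held until the detector falls back to rest. This is an assumed simplification of the published strategy, with toy labels and detector values.

```python
def lock_outputs(raw_labels, detector, threshold, rest_label=0):
    """Assumed simplification of the locking strategy: when the onset detector
    (e.g. MAV or MMG energy) rises above `threshold`, hold the class decided at
    onset until the detector drops back below it, so occasional
    misclassifications during the constant contraction are blocked out."""
    locked, active, held = [], False, rest_label
    for label, level in zip(raw_labels, detector):
        if not active and level >= threshold:
            active, held = True, label        # motion onset: lock this class
        elif active and level < threshold:
            active, held = False, rest_label  # contraction ended: release lock
        locked.append(held if active else rest_label)
    return locked

# toy stream: a "hand open" (class 2) contraction with one spurious class-5 frame
raw      = [0, 0, 2, 2, 5, 2, 2, 0, 0]
detector = [0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 0.9, 0.1, 0.1]
print(lock_outputs(raw, detector, threshold=0.5))   # -> [0, 0, 2, 2, 2, 2, 2, 0, 0]
```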

  11. Robust Control Mixer Method for Reconfigurable Control Design Using Model Matching Strategy

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Blanke, Mogens; Verhagen, Michel

    2007-01-01

    A novel control mixer method for reconfigurable control designs is developed. The proposed method extends the matrix form of the conventional control mixer concept into an LTI dynamic system form. The H_inf control technique is employed for these dynamic module designs after an augmented control system is constructed through a model-matching strategy. The stability, performance and robustness of the reconfigured system can be guaranteed when some conditions are satisfied. To illustrate the effectiveness of the proposed method, a robot system subjected to failures is used to demonstrate...

  12. Climate change on the Colorado River: a method to search for robust management strategies

    Science.gov (United States)

    Keefe, R.; Fischbach, J. R.

    2010-12-01

    The Colorado River is a principal source of water for the seven Basin States, providing approximately 16.5 maf per year to users in the southwestern United States and Mexico. Though the dynamics of the river ensure Upper Basin users a reliable supply of water, the three Lower Basin states (California, Nevada, and Arizona) are in danger of delivery interruptions as Upper Basin demand increases and climate change threatens to reduce future streamflows. In light of the recent drought and uncertain effects of climate change on Colorado River flows, we evaluate the performance of a suite of policies modeled after the shortage sharing agreement adopted in December 2007 by the Department of the Interior. We build on the current literature by using a simplified model of the Lower Colorado River to consider future streamflow scenarios given climate change uncertainty. We also generate different scenarios of parametric consumptive use growth in the Upper Basin and evaluate alternate management strategies in light of these uncertainties. Uncertainty associated with climate change is represented with a multi-model ensemble from the literature, using a nearest neighbor perturbation to increase the size of the ensemble. We use Robust Decision Making to compare near-term or long-term management strategies across an ensemble of plausible future scenarios with the goal of identifying one or more approaches that are robust to alternate assumptions about the future. This method entails using search algorithms to quantitatively identify vulnerabilities that may threaten a given strategy (including the current operating policy) and characterize key tradeoffs between strategies under different scenarios.

  13. A Robust PCR Protocol for HIV Drug Resistance Testing on Low-Level Viremia Samples

    Directory of Open Access Journals (Sweden)

    Shivani Gupta

    2017-01-01

    Full Text Available The prevalence of drug resistance (DR) mutations in people with HIV-1 infection, particularly those with low-level viremia (LLV), supports the need to improve the sensitivity of amplification methods for HIV DR genotyping in order to optimize antiretroviral regimen and facilitate HIV-1 DR surveillance and relevant research. Here we report on a fully validated PCR-based protocol that achieves consistent amplification of the protease (PR) and reverse transcriptase (RT) regions of the HIV-1 pol gene across many HIV-1 subtypes from LLV plasma samples. HIV-spiked plasma samples from the External Quality Assurance Program Oversight Laboratory (EQAPOL), covering various HIV-1 subtypes, as well as clinical specimens were used to optimize and validate the protocol. Our results demonstrate that this protocol has a broad HIV-1 subtype coverage and viral load span with high sensitivity and reproducibility. Moreover, the protocol is robust even when plasma sample volumes are limited, the HIV viral load is unknown, and/or the HIV subtype is undetermined. Thus, the protocol is applicable for the initial amplification of the HIV-1 PR and RT genes required for subsequent genotypic DR assays.

  14. Effective sampling strategy to detect food and feed contamination

    NARCIS (Netherlands)

    Bouzembrak, Yamine; Fels, van der Ine

    2018-01-01

    Sampling plans for food safety hazards are used to determine whether a lot of food is contaminated (with microbiological or chemical hazards) or not. One of the components of a sampling plan is the sampling strategy. The aim of this study was to compare the performance of three

  15. Robustness of Quadratic Hedging Strategies in Finance via Backward Stochastic Differential Equations with Jumps

    International Nuclear Information System (INIS)

    Di Nunno, Giulia; Khedher, Asma; Vanmaele, Michèle

    2015-01-01

    We consider a backward stochastic differential equation with jumps (BSDEJ) which is driven by a Brownian motion and a Poisson random measure. We present two candidate-approximations to this BSDEJ and we prove that the solution of each candidate-approximation converges to the solution of the original BSDEJ in a space which we specify. We use this result to investigate in further detail the consequences of the choice of the model to (partial) hedging in incomplete markets in finance. As an application, we consider models in which the small variations in the price dynamics are modeled with a Poisson random measure with infinite activity and models in which these small variations are modeled with a Brownian motion or are cut off. Using the convergence results on BSDEJs, we show that quadratic hedging strategies are robust towards the approximation of the market prices and we derive an estimation of the model risk

  16. Robustness of Quadratic Hedging Strategies in Finance via Backward Stochastic Differential Equations with Jumps

    Energy Technology Data Exchange (ETDEWEB)

    Di Nunno, Giulia, E-mail: giulian@math.uio.no [University of Oslo, Center of Mathematics for Applications (Norway); Khedher, Asma, E-mail: asma.khedher@tum.de [Technische Universität München, Chair of Mathematical Finance (Germany); Vanmaele, Michèle, E-mail: michele.vanmaele@ugent.be [Ghent University, Department of Applied Mathematics, Computer Science and Statistics (Belgium)

    2015-12-15

    We consider a backward stochastic differential equation with jumps (BSDEJ) which is driven by a Brownian motion and a Poisson random measure. We present two candidate-approximations to this BSDEJ and we prove that the solution of each candidate-approximation converges to the solution of the original BSDEJ in a space which we specify. We use this result to investigate in further detail the consequences of the choice of the model to (partial) hedging in incomplete markets in finance. As an application, we consider models in which the small variations in the price dynamics are modeled with a Poisson random measure with infinite activity and models in which these small variations are modeled with a Brownian motion or are cut off. Using the convergence results on BSDEJs, we show that quadratic hedging strategies are robust towards the approximation of the market prices and we derive an estimation of the model risk.

  17. Sampling strategy to develop a primary core collection of apple ...

    African Journals Online (AJOL)

    PRECIOUS

    2010-01-11


  18. A strategy for tissue self-organization that is robust to cellular heterogeneity and plasticity.

    Science.gov (United States)

    Cerchiari, Alec E; Garbe, James C; Jee, Noel Y; Todhunter, Michael E; Broaders, Kyle E; Peehl, Donna M; Desai, Tejal A; LaBarge, Mark A; Thomson, Matthew; Gartner, Zev J

    2015-02-17

    Developing tissues contain motile populations of cells that can self-organize into spatially ordered tissues based on differences in their interfacial surface energies. However, it is unclear how self-organization by this mechanism remains robust when interfacial energies become heterogeneous in either time or space. The ducts and acini of the human mammary gland are prototypical heterogeneous and dynamic tissues comprising two concentrically arranged cell types. To investigate the consequences of cellular heterogeneity and plasticity on cell positioning in the mammary gland, we reconstituted its self-organization from aggregates of primary cells in vitro. We find that self-organization is dominated by the interfacial energy of the tissue-ECM boundary, rather than by differential homo- and heterotypic energies of cell-cell interaction. Surprisingly, interactions with the tissue-ECM boundary are binary, in that only one cell type interacts appreciably with the boundary. Using mathematical modeling and cell-type-specific knockdown of key regulators of cell-cell cohesion, we show that this strategy of self-organization is robust to severe perturbations affecting cell-cell contact formation. We also find that this mechanism of self-organization is conserved in the human prostate. Therefore, a binary interfacial interaction with the tissue boundary provides a flexible and generalizable strategy for forming and maintaining the structure of two-component tissues that exhibit abundant heterogeneity and plasticity. Our model also predicts that mutations affecting binary cell-ECM interactions are catastrophic and could contribute to loss of tissue architecture in diseases such as breast cancer.

  19. Optimum and robust 3D facies interpolation strategies in a heterogeneous coal zone (Tertiary As Pontes basin, NW Spain)

    Energy Technology Data Exchange (ETDEWEB)

    Falivene, Oriol; Cabrera, Lluis; Saez, Alberto [Geomodels Institute, Group of Geodynamics and Basin Analysis, Department of Stratigraphy, Paleontology and Marine Geosciences, Universitat de Barcelona, c/ Marti i Franques s/n, Facultat de Geologia, 08028 Barcelona (Spain)

    2007-07-02

    Coal exploration and mining in extensively drilled and sampled coal zones can benefit from 3D statistical facies interpolation. Starting from closely spaced core descriptions, and using interpolation methods, a 3D optimum and robust facies distribution model was obtained for a thick, heterogeneous coal zone deposited in the non-marine As Pontes basin (Oligocene-Early Miocene, NW Spain). Several grid layering styles, interpolation methods (truncated inverse squared distance weighting, truncated kriging, truncated kriging with an areal trend, indicator inverse squared distance weighting, indicator kriging, and indicator kriging with an areal trend) and searching conditions were compared. Facies interpolation strategies were evaluated using visual comparison and cross validation. Moreover, robustness of the resultant facies distribution with respect to variations in interpolation method input parameters was verified by taking into account several scenarios of uncertainty. The resultant 3D facies reconstruction improves the understanding of the distribution and geometry of the coal facies. Furthermore, since some coal quality properties (e.g. calorific value or sulphur percentage) display a good statistical correspondence with facies, predicting the distribution of these properties using the reconstructed facies distribution as a template proved to be a powerful approach, yielding more accurate and realistic reconstructions of these properties in the coal zone. (author)
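
    Of the interpolation methods compared, indicator inverse-distance weighting is the simplest to sketch. The snippet below assigns each grid node the facies with the largest summed 1/d² weight from the surrounding core observations; the grid layering, areal trends, and search neighbourhoods used in the study are deliberately omitted, and the coordinates and facies names are invented.

```python
import numpy as np

def idw_facies(grid_xy, core_xy, core_facies, power=2, eps=1e-9):
    """Indicator inverse-distance-weighting sketch: for each grid node, weight
    each core observation by 1/d**power and assign the facies with the largest
    summed weight. A simplified stand-in for the interpolation methods compared
    in the paper."""
    estimates = []
    for p in grid_xy:
        d = np.linalg.norm(core_xy - p, axis=1) + eps
        w = 1.0 / d ** power
        scores = {}
        for facies, weight in zip(core_facies, w):
            scores[facies] = scores.get(facies, 0.0) + weight
        estimates.append(max(scores, key=scores.get))
    return estimates

# invented core locations and facies, purely for illustration
core_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
core_facies = ["coal", "coal", "mudstone", "sandstone"]
grid_xy = np.array([[2.0, 2.0], [8.0, 8.0], [5.0, 5.0]])
print(idw_facies(grid_xy, core_xy, core_facies))
```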

  20. High-throughput droplet analysis and multiplex DNA detection in the microfluidic platform equipped with a robust sample-introduction technique

    International Nuclear Information System (INIS)

    Chen, Jinyang; Ji, Xinghu; He, Zhike

    2015-01-01

    In this work, a simple, flexible and low-cost sample-introduction technique was developed and integrated with a droplet platform. The sample-introduction strategy was realized by connecting the components of the positive pressure input device, sample container and microfluidic chip through tygon tubing with a homemade polydimethylsiloxane (PDMS) adaptor, so the sample was delivered into the microchip from the sample container under the driving of positive pressure. This sample-introduction technique is so robust and compatible that it could be integrated with T-junction, flow-focus or valve-assisted droplet microchips. By choosing a PDMS adaptor with the proper dimensions, the microchip could be flexibly equipped with various types of familiar sample containers, making the sampling more straightforward without trivial sample transfer or loading. Convenient sample changing was easily achieved by moving the adaptor from one sample container to another. Benefiting from the proposed technique, a time-dependent concentration gradient was generated and applied for quantum dot (QD)-based fluorescence barcoding within the droplet chip. High-throughput droplet screening was preliminarily demonstrated through the investigation of the quenching efficiency of a ruthenium complex on the fluorescence of the QD. More importantly, a multiplex DNA assay was successfully carried out in the integrated system, which shows the practicability and potential in high-throughput biosensing. - Highlights: • A simple, robust and low-cost sample-introduction technique was developed. • Convenient and flexible sample changing was achieved in the microfluidic system. • A novel strategy of concentration gradient generation was presented for barcoding. • High-throughput droplet screening could be realized in the integrated platform. • Multiplex DNA assay was successfully carried out in the droplet platform

  1. High-throughput droplet analysis and multiplex DNA detection in the microfluidic platform equipped with a robust sample-introduction technique

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Jinyang; Ji, Xinghu [Key Laboratory of Analytical Chemistry for Biology and Medicine (Ministry of Education), College of Chemistry and Molecular Sciences, Wuhan University, Wuhan 430072 (China); He, Zhike, E-mail: zhkhe@whu.edu.cn [Key Laboratory of Analytical Chemistry for Biology and Medicine (Ministry of Education), College of Chemistry and Molecular Sciences, Wuhan University, Wuhan 430072 (China); Suzhou Institute of Wuhan University, Suzhou 215123 (China)

    2015-08-12

    In this work, a simple, flexible and low-cost sample-introduction technique was developed and integrated with a droplet platform. The sample-introduction strategy was realized by connecting the components of the positive pressure input device, sample container and microfluidic chip through tygon tubing with a homemade polydimethylsiloxane (PDMS) adaptor, so the sample was delivered into the microchip from the sample container under the driving of positive pressure. This sample-introduction technique is so robust and compatible that it could be integrated with T-junction, flow-focus or valve-assisted droplet microchips. By choosing a PDMS adaptor with the proper dimensions, the microchip could be flexibly equipped with various types of familiar sample containers, making the sampling more straightforward without trivial sample transfer or loading. Convenient sample changing was easily achieved by moving the adaptor from one sample container to another. Benefiting from the proposed technique, a time-dependent concentration gradient was generated and applied for quantum dot (QD)-based fluorescence barcoding within the droplet chip. High-throughput droplet screening was preliminarily demonstrated through the investigation of the quenching efficiency of a ruthenium complex on the fluorescence of the QD. More importantly, a multiplex DNA assay was successfully carried out in the integrated system, which shows the practicability and potential in high-throughput biosensing. - Highlights: • A simple, robust and low-cost sample-introduction technique was developed. • Convenient and flexible sample changing was achieved in the microfluidic system. • A novel strategy of concentration gradient generation was presented for barcoding. • High-throughput droplet screening could be realized in the integrated platform. • Multiplex DNA assay was successfully carried out in the droplet platform.

  2. Robust remediation strategies at gas-work sites: a case of source recognition and source characterization

    International Nuclear Information System (INIS)

    Vries, P.O. de

    2005-01-01

    In The Netherlands there have been gasworks at about 260 to 270 locations. Most of these locations are or were heavily polluted with tar, ashes and cyanides, and many of them belong to the locations where remediation actions have already been executed. It seems, however, that many of them also belong to the locations where remediation actions were not quite as successful as expected. So, for many gas-work sites that were already 'remedied' in the 1980s and early 1990s, new programs for site remediation are planned. Of course the mistakes from the past should now be avoided. The current remediation strategy in The Netherlands for gas-work sites comprises four steps: 1 - removing spots in the top soil, 2 - removing spots with mobile components in the shallow subsoil, 3 - controlling spots with mobile components in the deep subsoil, 4 - creating a 'steady endpoint situation' in the plume. At many former gas-work sites the real sources, i.e. in a physico-chemical sense, are not very well known. This can easily lead to insufficient removal of some or part of these sources and cause a longer delivery of contaminants to the groundwater plume, with higher endpoint concentrations, higher costs and more restrictions on future use. The higher concentrations and longer deliveries originating from unrecognized or unlocalized sources are often not sufficiently compensated for by the plume management proposed in current remediation strategies. Remediation results can be improved by using knowledge about the processes that determine the delivery of contaminants to the groundwater, the materials that cause this delivery and the locations at the site where these are most likely to be found. When sources are present in the deep subsoil or the exact localization of sources is uncertain, robust remediation strategies should be chosen and wishful thinking about removing sources with in situ techniques should be avoided. Robust strategies are probably less

  3. Multi-Phase Sub-Sampling Fractional-N PLL with soft loop switching for fast robust locking

    NARCIS (Netherlands)

    Liao, Dongyi; Dai, FA Foster; Nauta, Bram; Klumperink, Eric A.M.

    2017-01-01

    This paper presents a low phase noise sub-sampling PLL (SSPLL) with multi-phase outputs. Automatic soft switching between the sub-sampling phase loop and frequency loop is proposed to improve robustness against perturbations and interferences that may cause a traditional SSPLL to lose lock. A

  4. Evaluation of sampling strategies to estimate crown biomass

    Directory of Open Access Journals (Sweden)

    Krishna P Poudel

    2015-01-01

    Full Text Available Background: Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass in the tree. Crown biomass estimation is useful for different purposes including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluate the performance of different sampling strategies to estimate crown biomass and evaluate the effect of sample size in estimating crown biomass. Methods: Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results: Compared to all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was the best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced approximately similar results to simple random sampling, but it further decreased RMSE when information on branch diameter was used in the design and estimation phases. Conclusions: Use of
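
    The ratio-estimation idea referred to above can be sketched briefly: weigh a subsample of branches, then scale by the ratio of total to sampled auxiliary size. The auxiliary variable (branch diameter squared times length), the toy branch data, and the use of simple rather than systematic selection are assumptions for illustration.

```python
import random

def ratio_estimate_crown_biomass(branches, n_sample, aux="diameter2_length"):
    """Ratio-estimation sketch: sample n branches, weigh them, and scale the
    sampled biomass by the ratio of total to sampled auxiliary size (here an
    assumed covariate, branch diameter^2 * length)."""
    sample = random.sample(branches, n_sample)
    ratio = sum(b["biomass"] for b in sample) / sum(b[aux] for b in sample)
    return ratio * sum(b[aux] for b in branches)

# toy tree: 15 branches with auxiliary size and (normally unknown) dry biomass
random.seed(3)
branches = [{"diameter2_length": s, "biomass": 0.05 * s * random.uniform(0.8, 1.2)}
            for s in (random.uniform(50, 400) for _ in range(15))]
true_total = sum(b["biomass"] for b in branches)
estimate = ratio_estimate_crown_biomass(branches, n_sample=6)
print(f"estimated crown biomass {estimate:.2f} vs true {true_total:.2f}")
```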

  5. User-driven sampling strategies in image exploitation

    Science.gov (United States)

    Harvey, Neal; Porter, Reid

    2013-12-01

    Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. In preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.

  6. The Study of an Optimal Robust Design and Adjustable Ordering Strategies in the HSCM.

    Science.gov (United States)

    Liao, Hung-Chang; Chen, Yan-Kwang; Wang, Ya-huei

    2015-01-01

    The purpose of this study was to establish a hospital supply chain management (HSCM) model in which three kinds of drugs in the same class and with the same indications were used in creating an optimal robust design and adjustable ordering strategies to deal with a drug shortage. The main assumption was that although each doctor has his/her own prescription pattern, when there is a shortage of a particular drug, the doctor may choose a similar drug with the same indications as a replacement. Four steps were used to construct and analyze the HSCM model. The computation technology used included a simulation, a neural network (NN), and a genetic algorithm (GA). The mathematical methods of the simulation and the NN were used to construct a relationship between the factor levels and performance, while the GA was used to obtain the optimal combination of factor levels from the NN. A sensitivity analysis was also used to assess the change in the optimal factor levels. Adjustable ordering strategies were also developed to prevent drug shortages.

  7. Efficient sampling of complex network with modified random walk strategies

    Science.gov (United States)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies: the choosing seed node (CSN) random walk and the no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distribution, average degree, and average clustering coefficient, are studied. Similar conclusions can be reached with all three random walk strategies. Firstly, networks with small scales and simple structures are conducive to the sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. Thirdly, all the degree distributions of the subnets are slightly biased towards the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, some obvious characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
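
    A minimal sketch of the no-retracing idea described here (never stepping straight back to the node just left); the toy adjacency dictionary and the subnet statistic are illustrative placeholders, not the networks used in the paper.

```python
import random

def no_retracing_walk(adj, start, n_steps, seed=1):
    """Random walk that never immediately returns to the previous node
    (falls back to an ordinary step if the current node is a dead end)."""
    rng = random.Random(seed)
    visited = [start]
    prev, cur = None, start
    for _ in range(n_steps):
        choices = [v for v in adj[cur] if v != prev] or list(adj[cur])
        prev, cur = cur, rng.choice(choices)
        visited.append(cur)
    return visited

# Toy undirected network as an adjacency dict
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
nodes = set(no_retracing_walk(adj, start=0, n_steps=20))
sub_deg = [len([v for v in adj[u] if v in nodes]) for u in nodes]
print("sampled nodes:", nodes, "average degree of subnet:", sum(sub_deg) / len(sub_deg))
```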

  8. A Bayesian sampling strategy for hazardous waste site characterization

    International Nuclear Information System (INIS)

    Skalski, J.R.

    1987-12-01

    Prior knowledge based on historical records or physical evidence often suggests the existence of a hazardous waste site. Initial surveys may provide additional or even conflicting evidence of site contamination. This article presents a Bayes sampling strategy that allocates sampling at a site using this prior knowledge. This sampling strategy minimizes the environmental risks of missing chemical or radionuclide hot spots at a waste site. The environmental risk is shown to be proportional to the size of the undetected hot spot or inversely proportional to the probability of hot spot detection. 12 refs., 2 figs
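
    The abstract does not spell out the allocation rule, so the following is only a schematic sketch of the underlying idea: spread a fixed sampling budget across sub-areas so that the prior-weighted risk of missing a hot spot is driven down greedily. The detection model and all numbers are invented for illustration.

```python
def allocate_samples(priors, hotspot_fraction, budget):
    """Greedy allocation of a sampling budget across sub-areas.

    priors           : prior probability that each sub-area hosts a hot spot
    hotspot_fraction : fraction of a sub-area covered by a hot spot, so one
                       random sample detects it with that probability
    budget           : total number of samples to place
    Risk for a sub-area ~ prior * P(miss) = prior * (1 - f)^n.
    """
    n = [0] * len(priors)
    for _ in range(budget):
        # marginal risk reduction of adding one more sample to each area
        gains = [p * ((1 - f) ** k - (1 - f) ** (k + 1))
                 for p, f, k in zip(priors, hotspot_fraction, n)]
        n[gains.index(max(gains))] += 1
    return n

print(allocate_samples(priors=[0.8, 0.3, 0.1], hotspot_fraction=[0.05, 0.05, 0.05], budget=30))
```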

  9. Reagent-Less and Robust Biosensor for Direct Determination of Lactate in Food Samples

    Directory of Open Access Journals (Sweden)

    Iria Bravo

    2017-01-01

    Full Text Available Lactic acid is a relevant analyte in the food industry, since it affects the flavor, freshness, and storage quality of several products, such as milk and dairy products, juices, or wines. It is the product of lactose or malo-lactic fermentation. In this work, we developed a lactate biosensor based on the immobilization of lactate oxidase (LOx) onto N,N′-Bis(3,4-dihydroxybenzylidene)-1,2-diaminobenzene Schiff base tetradentate ligand-modified gold nanoparticles (3,4DHS–AuNPs) deposited onto screen-printed carbon electrodes, which exhibit a potent electrocatalytic effect towards hydrogen peroxide oxidation/reduction. 3,4DHS–AuNPs were synthesized in a single reaction step, in which 3,4DHS acts as reducing/capping/modifier agent for the generation of stable colloidal suspensions of Schiff base ligand–AuNP assemblies of controlled size. The ligand, in addition to its reducing action, provides a robust coating for the gold nanoparticles and a catalytic function. Lactate oxidase (LOx) catalyzes the conversion of l-lactate to pyruvate in the presence of oxygen, producing hydrogen peroxide, which is catalytically oxidized at the 3,4DHS–AuNP-modified screen-printed carbon electrodes at +0.2 V. The measured electrocatalytic current is directly proportional to the concentration of peroxide, which is related to the amount of lactate present in the sample. The developed biosensor shows a detection limit of 2.6 μM lactate and a sensitivity of 5.1 ± 0.1 μA·mM−1. The utility of the device has been demonstrated by the determination of the lactate content in different matrices (white wine, beer, and yogurt). The obtained results compare well to those obtained using a standard enzymatic-spectrophotometric assay kit.

  10. Reagent-Less and Robust Biosensor for Direct Determination of Lactate in Food Samples.

    Science.gov (United States)

    Bravo, Iria; Revenga-Parra, Mónica; Pariente, Félix; Lorenzo, Encarnación

    2017-01-13

    Lactic acid is a relevant analyte in the food industry, since it affects the flavor, freshness, and storage quality of several products, such as milk and dairy products, juices, or wines. It is the product of lactose or malo-lactic fermentation. In this work, we developed a lactate biosensor based on the immobilization of lactate oxidase (LOx) onto N,N′-Bis(3,4-dihydroxybenzylidene)-1,2-diaminobenzene Schiff base tetradentate ligand-modified gold nanoparticles (3,4DHS-AuNPs) deposited onto screen-printed carbon electrodes, which exhibit a potent electrocatalytic effect towards hydrogen peroxide oxidation/reduction. 3,4DHS-AuNPs were synthesized in a single reaction step, in which 3,4DHS acts as reducing/capping/modifier agent for the generation of stable colloidal suspensions of Schiff base ligand-AuNP assemblies of controlled size. The ligand, in addition to its reducing action, provides a robust coating for the gold nanoparticles and a catalytic function. Lactate oxidase (LOx) catalyzes the conversion of l-lactate to pyruvate in the presence of oxygen, producing hydrogen peroxide, which is catalytically oxidized at the 3,4DHS-AuNP-modified screen-printed carbon electrodes at +0.2 V. The measured electrocatalytic current is directly proportional to the concentration of peroxide, which is related to the amount of lactate present in the sample. The developed biosensor shows a detection limit of 2.6 μM lactate and a sensitivity of 5.1 ± 0.1 μA·mM−1. The utility of the device has been demonstrated by the determination of the lactate content in different matrices (white wine, beer, and yogurt). The obtained results compare well to those obtained using a standard enzymatic-spectrophotometric assay kit.

  11. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a two-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the two-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a two-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
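
    The two-phase algorithm itself is not detailed in the abstract, so the sketch below only illustrates the general resampling idea it builds on (close in spirit to the pseudoprofile-based bootstrap): repeatedly assemble pseudo-profiles from the sparse destructive samples, compute trapezoidal AUCs for tissue and plasma, and summarize the distribution of their ratio. The data and function names are hypothetical.

```python
import random
import numpy as np

def auc_ratio_bootstrap(times, tissue, plasma, n_boot=1000, seed=0):
    """tissue[t] and plasma[t] hold single-sample concentrations observed at
    time point t (one destructive sample per animal)."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_boot):
        # build one pseudo-profile per matrix by drawing one value per time point
        c_tis = [rng.choice(tissue[t]) for t in times]
        c_pla = [rng.choice(plasma[t]) for t in times]
        ratios.append(np.trapz(c_tis, times) / np.trapz(c_pla, times))
    return np.mean(ratios), np.percentile(ratios, [2.5, 97.5])

times = [1, 2, 4, 8]
tissue = {1: [2.1, 1.8, 2.4], 2: [3.0, 2.7], 4: [2.2, 2.5], 8: [1.1, 0.9]}
plasma = {1: [1.0, 1.2, 0.9], 2: [1.4, 1.3], 4: [1.0, 1.1], 8: [0.5, 0.6]}
print(auc_ratio_bootstrap(times, tissue, plasma))
```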

  12. Robust Frequency and Voltage Stability Control Strategy for Standalone AC/DC Hybrid Microgrid

    Directory of Open Access Journals (Sweden)

    Furqan Asghar

    2017-05-01

    Full Text Available The microgrid (MG) concept is attracting considerable attention as a solution to energy deficiencies, especially in remote areas, but the intermittent nature of renewable sources and varying loads cause many control problems and thereby affect the quality of power within a microgrid operating in standalone mode. This might cause large frequency and voltage deviations in the system due to unpredictable output power fluctuations. Furthermore, without any main grid support, it is more complex to control and manage the system. In the past, droop control and various other coordination control strategies have been presented to stabilize the microgrid frequency and voltages, but in order to utilize the available resources up to their maximum capacity in a positive way, new and robust control mechanisms are required. In this paper, a standalone microgrid is presented, which integrates renewable energy-based distributed generations and local loads. A fuzzy logic-based intelligent control technique is proposed to maintain the frequency and DC (direct current)-link voltage stability for sudden changes in load or generation power. Also, from a frequency control perspective, a battery energy storage system (BESS) is suggested as a replacement for a synchronous generator to stabilize the nominal system frequency, since a synchronous generator is unable to operate at its maximum efficiency while being controlled for stabilization purposes. Likewise, a supercapacitor (SC) and the BESS are used to stabilize DC bus voltages even though the maximum possible energy is being extracted from renewable generation sources using maximum power point tracking. This newly proposed control method proves to be effective by reducing transient time, minimizing frequency deviations, maintaining voltages even while maximum power point tracking is working, and preventing generators from exceeding their power ratings during disturbances. However, due to the BESS limited capacity, load switching

  13. Robust event-triggered MPC with guaranteed asymptotic bound and average sampling rate

    NARCIS (Netherlands)

    Brunner, F.D.; Heemels, W.P.M.H.; Allgower, F.

    2017-01-01

    We propose a robust event-triggered model predictive control (MPC) scheme for linear time-invariant discrete-time systems subject to bounded additive stochastic disturbances and hard constraints on the input and state. For given probability distributions of the disturbances acting on the system, we

  14. Sampling strategy for estimating human exposure pathways to consumer chemicals

    NARCIS (Netherlands)

    Papadopoulou, Eleni; Padilla-Sanchez, Juan A.; Collins, Chris D.; Cousins, Ian T.; Covaci, Adrian; de Wit, Cynthia A.; Leonards, Pim E.G.; Voorspoels, Stefan; Thomsen, Cathrine; Harrad, Stuart; Haug, Line S.

    2016-01-01

    Human exposure to consumer chemicals has become a worldwide concern. In this work, a comprehensive sampling strategy is presented, to our knowledge being the first to study all relevant exposure pathways in a single cohort using multiple methods for assessment of exposure from each exposure pathway.

  15. Chapter 2: Sampling strategies in forest hydrology and biogeochemistry

    Science.gov (United States)

    Roger C. Bales; Martha H. Conklin; Branko Kerkez; Steven Glaser; Jan W. Hopmans; Carolyn T. Hunsaker; Matt Meadows; Peter C. Hartsough

    2011-01-01

    Many aspects of forest hydrology have been based on accurate but not necessarily spatially representative measurements, reflecting the measurement capabilities that were traditionally available. Two developments are bringing about fundamental changes in sampling strategies in forest hydrology and biogeochemistry: (a) technical advances in measurement capability, as is...

  16. Incorporation of support vector machines in the LIBS toolbox for sensitive and robust classification amidst unexpected sample and system variability.

    Science.gov (United States)

    Dingari, Narahara Chari; Barman, Ishan; Myakalwar, Ashwin Kumar; Tewari, Surya P; Kumar Gundawar, Manoj

    2012-03-20

    Despite its intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real-world applications, e.g., quality assurance and process monitoring. Specifically, variability in sample, system, and experimental parameters in LIBS studies presents a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a nonlinear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that the application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), because of its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly a 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA) when measurements from samples not included in the training set are incorporated in the test data, highlighting its robustness. While further studies on a wider matrix of sample types performed using different LIBS systems are needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples, as well as in related areas of forensic and biological sample analysis.
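
    A minimal scikit-learn sketch of the nonlinear-versus-linear comparison described above, using synthetic "spectra" in place of real LIBS measurements; the RBF-kernel SVM stands in for the SVM classifier and logistic regression for a generic linear baseline (the paper's PLS-DA and SIMCA models are not reproduced here).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for LIBS spectra: 200 samples x 500 "wavelength" features, 4 classes
X, y = make_classification(n_samples=200, n_features=500, n_informative=30,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))
linear = make_pipeline(StandardScaler(), LogisticRegression(max_iter=2000))

print("SVM (RBF) accuracy:   ", cross_val_score(svm, X, y, cv=5).mean())
print("Linear model accuracy:", cross_val_score(linear, X, y, cv=5).mean())
```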

  17. Robustness Improvement of Superconducting Magnetic Energy Storage System in Microgrids Using an Energy Shaping Passivity-Based Control Strategy

    Directory of Open Access Journals (Sweden)

    Rui Hou

    2017-05-01

    Full Text Available Superconducting magnetic energy storage (SMES) systems, in which the proportional-integral (PI) method is usually used to control the SMES, have been used in microgrids for improving control performance. However, the robustness of PI-based SMES controllers may be unsatisfactory due to the high nonlinearity and coupling of the SMES system. In this study, the energy shaping passivity (ESP)-based control strategy, which is a novel nonlinear control based on the methodology of interconnection and damping assignment (IDA), is proposed for the robustness improvement of SMES systems. A step-by-step design of the ESP-based method considering the robustness of SMES systems is presented. A comparative analysis of the performance between the ESP-based and PI control strategies is shown. Simulation and experimental results prove that the ESP-based strategy achieves stronger robustness toward system parameter uncertainties than the conventional PI control. Besides, the use of the ESP-based control method can reduce the eddy current losses of a SMES system due to the significant reduction of the 2nd and 3rd harmonics of the superconducting coil DC current.

  18. Novel strategies for sample preparation in forensic toxicology.

    Science.gov (United States)

    Samanidou, Victoria; Kovatsi, Leda; Fragou, Domniki; Rentifis, Konstantinos

    2011-09-01

    This paper provides a review of novel strategies for sample preparation in forensic toxicology. The review initially outlines the principle of each technique, followed by sections addressing each class of abused drugs separately. The novel strategies currently reviewed focus on the preparation of various biological samples for the subsequent determination of opiates, benzodiazepines, amphetamines, cocaine, hallucinogens, tricyclic antidepressants, antipsychotics and cannabinoids. According to our experience, these analytes are the most frequently responsible for intoxications in Greece. The applications of techniques such as disposable pipette extraction, microextraction by packed sorbent, matrix solid-phase dispersion, solid-phase microextraction, polymer monolith microextraction, stir bar sorptive extraction and others, which are rapidly gaining acceptance in the field of toxicology, are currently reviewed.

  19. Adapting to a Changing Colorado River: Making Future Water Deliveries More Reliable Through Robust Management Strategies

    Science.gov (United States)

    Groves, D.; Bloom, E.; Fischbach, J. R.; Knopman, D.

    2013-12-01

    The U.S. Bureau of Reclamation and water management agencies representing the seven Colorado River Basin States initiated the Colorado River Basin Study in January 2010 to evaluate the resiliency of the Colorado River system over the next 50 years and compare different options for ensuring successful management of the river's resources. RAND was asked to join this Basin Study Team in January 2012 to help develop an analytic approach to identify key vulnerabilities in managing the Colorado River basin over the coming decades and to evaluate different options that could reduce this vulnerability. Using a quantitative approach for planning under uncertainty called Robust Decision Making (RDM), the RAND team assisted the Basin Study by: identifying future vulnerable conditions that could lead to imbalances that could cause the basin to be unable to meet its water delivery objectives; developing a computer-based tool to define 'portfolios' of management options reflecting different strategies for reducing basin imbalances; evaluating these portfolios across thousands of future scenarios to determine how much they could improve basin outcomes; and analyzing the results from the system simulations to identify key tradeoffs among the portfolios. This talk will describe RAND's contribution to the Basin Study, focusing on the methodologies used to identify vulnerabilities for Upper Basin and Lower Basin water supply reliability and to compare portfolios of options. Several key findings emerged from the study. Future Streamflow and Climate Conditions Are Key: - Vulnerable conditions arise in a majority of scenarios where streamflows are lower than historical averages and where drought conditions persist for eight years or more. - Depending on where the shortages occur, problems will arise for delivery obligations for the upper river basin and the lower river basin. The lower river basin is vulnerable to a broader range of plausible future conditions. Additional Investments in

  20. Metabolomic analysis of urine samples by UHPLC-QTOF-MS: Impact of normalization strategies.

    Science.gov (United States)

    Gagnebin, Yoric; Tonoli, David; Lescuyer, Pierre; Ponte, Belen; de Seigneux, Sophie; Martin, Pierre-Yves; Schappler, Julie; Boccard, Julien; Rudaz, Serge

    2017-02-22

    Among the various biological matrices used in metabolomics, urine is a biofluid of major interest because of its non-invasive collection and its availability in large quantities. However, significant sources of variability in urine metabolomics based on UHPLC-MS are related to the analytical drift and variation of the sample concentration, thus requiring normalization. A sequential normalization strategy was developed to remove these detrimental effects, including: (i) pre-acquisition sample normalization by individual dilution factors to narrow the concentration range and to standardize the analytical conditions, (ii) post-acquisition data normalization by quality control-based robust LOESS signal correction (QC-RLSC) to correct for potential analytical drift, and (iii) post-acquisition data normalization by MS total useful signal (MSTUS) or probabilistic quotient normalization (PQN) to prevent the impact of concentration variability. This generic strategy was performed with urine samples from healthy individuals and was further implemented in the context of a clinical study to detect alterations in urine metabolomic profiles due to kidney failure. In the case of kidney failure, the relation between creatinine/osmolality and the sample concentration is modified, and relying only on these measurements for normalization could be highly detrimental. The sequential normalization strategy was demonstrated to significantly improve patient stratification by decreasing the unwanted variability and thus enhancing data quality. Copyright © 2016 Elsevier B.V. All rights reserved.
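
    A minimal numpy sketch of the post-acquisition probabilistic quotient normalization (PQN) step mentioned in this record, using a reference profile such as the median of QC injections; the intensity matrix is a made-up stand-in for UHPLC-QTOF-MS data.

```python
import numpy as np

def pqn_normalize(X, reference=None):
    """Probabilistic quotient normalization.

    X         : (samples x features) intensity matrix, assumed already roughly
                scaled (e.g. by total signal) so that quotients are meaningful
    reference : reference profile (e.g. median of QC injections); defaults to
                the median profile across all samples
    """
    X = np.asarray(X, dtype=float)
    if reference is None:
        reference = np.median(X, axis=0)
    quotients = X / reference                 # feature-wise quotients per sample
    dilution = np.median(quotients, axis=1)   # most probable dilution factor
    return X / dilution[:, None]

# Made-up matrix: the second sample is a twice-as-concentrated urine
X = np.array([[10.0, 5.0, 1.0],
              [20.0, 10.0, 2.0]])
print(pqn_normalize(X))
```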

  1. Adaptive sampling strategies with high-throughput molecular dynamics

    Science.gov (United States)

    Clementi, Cecilia

    Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow adequate sampling of the relevant regions of their configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts, rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is in the design of strategies to adaptively distribute the trajectories over the relevant regions of the systems' configurational space, without using any a priori information on the system global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high dimensional dynamical systems, and optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.
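
    The abstract does not specify the adaptive rule, so the snippet below only sketches one common ensemble-based heuristic (a "least counts" strategy, not necessarily the authors' method): cluster the conformations visited so far in a reduced coordinate space and launch the next batch of short trajectories from the least-visited states. All data here are random placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def pick_restart_frames(projected, n_clusters=20, n_new_seeds=5, seed=0):
    """projected: (n_frames, n_dims) array of already-sampled frames after
    dimensionality reduction. Returns indices of frames from the least-populated
    clusters, to seed the next round of short simulations."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(projected)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    rare_clusters = np.argsort(counts)[:n_new_seeds]          # least-visited states
    return [int(np.where(km.labels_ == c)[0][0]) for c in rare_clusters]

frames = np.random.default_rng(1).normal(size=(5000, 2))      # placeholder projection
print(pick_restart_frames(frames))
```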

  2. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is called sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and leakage management purposes, has been addressed by considering optimal network segmentation and the modularity index within a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
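
    The sampling-oriented modularity index is specific to this paper, so the sketch below only illustrates the generic workflow it extends: partition a toy network into modules by modularity maximization and treat the edges crossing module boundaries as candidate conceptual cuts (gates or flow meters). networkx's generic greedy modularity routine is used as a stand-in; the graph is not a real WDN.

```python
import networkx as nx
from networkx.algorithms import community

# Toy stand-in for a water distribution network graph (5 tightly knit groups of 8 nodes)
G = nx.connected_caveman_graph(5, 8)

# Partition into modules (district candidates) by greedy modularity maximization
modules = community.greedy_modularity_communities(G)
print("modularity:", community.modularity(G, modules))

# Edges crossing module boundaries = candidate conceptual cuts (gates / flow meters)
node_to_module = {n: i for i, mod in enumerate(modules) for n in mod}
boundary_edges = [(u, v) for u, v in G.edges() if node_to_module[u] != node_to_module[v]]
print("candidate cut edges:", boundary_edges)
```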

  3. Review of robust measurement of phosphorus in river water: sampling, storage, fractionation and sensitivity

    Directory of Open Access Journals (Sweden)

    H. P. Jarvie

    2002-01-01

    Full Text Available This paper reviews current knowledge on the sampling, storage and analysis of phosphorus (P) in river waters. The potential sensitivity of rivers with different physical, chemical and biological characteristics (trophic status, turbidity, flow regime, matrix chemistry) is examined in terms of the errors associated with sampling, sample preparation, storage, contamination, interference and analytical errors. Key issues identified include: the need to tailor analytical reagents and concentrations to take into account the characteristics of the sample matrix; the effects of matrix interference on the colorimetric analysis; the influence of variable rates of phospho-molybdenum blue colour formation; the differing responses of river waters to the physical and chemical conditions of storage; and the higher sensitivity of samples with low P concentrations to storage and analytical errors. Given the high variability of river water characteristics in space and time, no single standardised methodology for the sampling, storage and analysis of P in rivers can be offered. ‘Good Practice’ guidelines are suggested, which recommend that protocols for the sampling, storage and analysis of river water for P be based on thorough site-specific method testing and assessment of P stability on storage. For wider sampling programmes at the regional/national scale, where intensive site-specific method and stability testing are not feasible, ‘Precautionary Practice’ guidelines are suggested. The study highlights key areas requiring further investigation for improving methodological rigour. Keywords: phosphorus, orthophosphate, soluble reactive, particulate, colorimetry, stability, sensitivity, analytical error, storage, sampling, filtration, preservative, fractionation, digestion

  4. A robust control strategy for mitigating renewable energy fluctuations in a real hybrid power system combined with SMES

    Science.gov (United States)

    Magdy, G.; Shabib, G.; Elbaset, Adel A.; Qudaih, Yaser; Mitani, Yasunori

    2018-05-01

    Utilizing Renewable Energy Sources (RESs) is attracting great attention as a solution to future energy shortages. However, the irregular nature of RESs and random load deviations cause large frequency and voltage fluctuations. Therefore, in order to benefit from the maximum capacity of the RESs, a robust strategy for mitigating power fluctuations from RESs must be applied. Hence, this paper proposes a design of Load Frequency Control (LFC) coordinated with Superconducting Magnetic Energy Storage (SMES) technology (i.e., an auxiliary LFC), using an optimal PID controller tuned by Particle Swarm Optimization (PSO), for the Egyptian Power System (EPS) considering high penetration of Photovoltaic (PV) power generation. Thus, from the perspective of LFC, the robust control strategy is proposed to maintain the nominal system frequency and mitigate the power fluctuations from RESs against all disturbance sources for the EPS with its multi-source environment. The EPS is decomposed into three dynamic subsystems (non-reheat, reheat and hydro power plants), taking the system nonlinearity into consideration. Nonlinear Matlab/Simulink simulation results for the EPS combined with the SMES system, considering PV solar power, confirm that the proposed control strategy achieves robust stability by reducing transient time, minimizing frequency deviations, maintaining the system frequency, preventing conventional generators from exceeding their power ratings during load disturbances, and mitigating the power fluctuations from the RESs.

  5. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
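
    A small numerical sketch of the setting described above: draw Monte Carlo samples of a standard normal input, evaluate probabilists' Hermite polynomials as the PC basis, and recover a sparse coefficient vector with an ℓ1-penalized solver. scikit-learn's Lasso is used here merely as a convenient stand-in for ℓ1-minimization; the coherence-optimal MCMC sampler of the paper is not reproduced.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
order, n_samples = 10, 40

# A sparse "true" model in the probabilists' Hermite basis: u(xi) = He_1(xi) + 0.5*He_3(xi)
def model(xi):
    return xi + 0.5 * (xi**3 - 3 * xi)

xi = rng.standard_normal(n_samples)    # natural (Gaussian) sampling of the random input
Psi = hermevander(xi, order)           # measurement matrix with columns He_0 .. He_order
u = model(xi)

coeffs = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(Psi, u).coef_
print(np.round(coeffs, 2))             # expected to be close to [0, 1, 0, 0.5, 0, ...]
```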

  6. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  7. Data-driven strategies for robust forecast of continuous glucose monitoring time-series.

    Science.gov (United States)

    Fiorini, Samuele; Martini, Chiara; Malpassi, Davide; Cordera, Renzo; Maggi, Davide; Verri, Alessandro; Barla, Annalisa

    2017-07-01

    Over the past decade, continuous glucose monitoring (CGM) has proven to be a very resourceful tool for diabetes management. To date, CGM devices are employed for both retrospective and online applications. Their use makes it possible to better describe the patients' pathology as well as to achieve better control of patients' glycemia. The analysis of CGM sensor data makes it possible to observe a wide range of metrics, such as the glycemic variability during the day or the amount of time spent below or above certain glycemic thresholds. However, due to the high variability of the glycemic signals among sensors and individuals, CGM data analysis is a non-trivial task. Standard signal filtering solutions fall short when an appropriate model personalization is not applied. State-of-the-art data-driven strategies for online CGM forecasting rely upon the use of recursive filters. Each time a new sample is collected, such models need to adjust their parameters in order to predict the next glycemic level. In this paper we aim to demonstrate that the problem of online CGM forecasting can be successfully tackled by personalized machine learning models that do not need to recursively update their parameters.
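
    As a toy illustration of the "train once per patient, no recursive updating" idea, the sketch below fits a ridge regression on lagged CGM readings to predict the glucose level several steps ahead; the synthetic glucose trace, the 5-minute sampling interval, and the model choice are placeholders, not the study's data or method.

```python
import numpy as np
from sklearn.linear_model import Ridge

def make_lagged(series, n_lags=12, horizon=6):
    """Turn a CGM series (one reading every 5 min assumed) into
    (lag-feature, target-value-`horizon`-steps-ahead) pairs."""
    X, y = [], []
    for t in range(n_lags, len(series) - horizon):
        X.append(series[t - n_lags:t])
        y.append(series[t + horizon])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
t = np.arange(2000)
cgm = 120 + 30 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 5, t.size)  # synthetic trace

X, y = make_lagged(cgm)
split = int(0.8 * len(X))
model = Ridge(alpha=1.0).fit(X[:split], y[:split])       # personalized, fixed-parameter model
rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
print("30-min-ahead RMSE (mg/dL):", round(rmse, 1))
```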

  8. Robustness to non-normality of various tests for the one-sample location problem

    Directory of Open Access Journals (Sweden)

    Michelle K. McDougall

    2004-01-01

    Full Text Available This paper studies the effect of the normal distribution assumption on the power and size of the sign test, Wilcoxon's signed rank test and the t-test when used in one-sample location problems. Power functions for these tests under various skewness and kurtosis conditions are produced for several sample sizes from simulated data using the g-and-k distribution of MacGillivray and Cannon [5].
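
    A sketch of the simulation idea behind this study: generate non-normal samples through the g-and-k quantile function (the commonly cited form with the conventional constant c = 0.8; treat the exact parameterization as an assumption here) and estimate the empirical power of the t-test and the Wilcoxon signed-rank test against a shifted location.

```python
import numpy as np
from scipy import stats

def g_and_k_sample(n, a=0, b=1, g=0.5, k=0.2, c=0.8, rng=None):
    """Draw n values from a g-and-k distribution via its quantile function
    Q(z) = a + b*(1 + c*tanh(g*z/2)) * z * (1 + z^2)^k, with z ~ N(0,1)."""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(n)
    return a + b * (1 + c * np.tanh(g * z / 2)) * z * (1 + z * z) ** k

def power(test, shift, n=20, reps=2000, alpha=0.05):
    rng = np.random.default_rng(1)
    rejections = 0
    for _ in range(reps):
        x = g_and_k_sample(n, rng=rng) + shift      # skewed, heavy-tailed sample
        rejections += test(x) < alpha
    return rejections / reps

t_test = lambda x: stats.ttest_1samp(x, popmean=0).pvalue
wilcoxon = lambda x: stats.wilcoxon(x).pvalue
print("power, t-test:  ", power(t_test, shift=0.5))
print("power, Wilcoxon:", power(wilcoxon, shift=0.5))
```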

  9. Robust optimisation for self-scheduling and bidding strategies of hybrid CSP-fossil power plants

    DEFF Research Database (Denmark)

    Pousinho, H.M.I.; Contreras, J.; Pinson, P.

    2015-01-01

    between the molten-salt thermal energy storage (TES) and a fossil-fuel backup to overcome solar irradiation insufficiency, but with emission allowances constrained in the backup system to mitigate carbon footprint. A robust optimisation-based approach is proposed to provide the day-ahead self...

  10. Sampling and analyte enrichment strategies for ambient mass spectrometry.

    Science.gov (United States)

    Li, Xianjiang; Ma, Wen; Li, Hongmei; Ai, Wanpeng; Bai, Yu; Liu, Huwei

    2018-01-01

    Ambient mass spectrometry provides great convenience for fast screening, and has shown promising potential in analytical chemistry. However, its relatively low sensitivity seriously restricts its practical utility in trace compound analysis. In this review, we summarize the sampling and analyte enrichment strategies coupled with nine modes of representative ambient mass spectrometry (desorption electrospray ionization, paper spray ionization, wooden-tip spray ionization, probe electrospray ionization, coated blade spray ionization, direct analysis in real time, desorption corona beam ionization, dielectric barrier discharge ionization, and atmospheric-pressure solids analysis probe) that have dramatically increased the detection sensitivity. We believe that these advances will promote routine use of ambient mass spectrometry. Graphical abstract: Scheme of sampling strategies for ambient mass spectrometry.

  11. A Geostatistical Approach to Indoor Surface Sampling Strategies

    DEFF Research Database (Denmark)

    Schneider, Thomas; Petersen, Ole Holm; Nielsen, Allan Aasbjerg

    1990-01-01

    Particulate surface contamination is of concern in production industries such as food processing, aerospace, electronics and semiconductor manufacturing. There is also an increased awareness that surface contamination should be monitored in industrial hygiene surveys. A conceptual and theoretical framework for designing sampling strategies is thus developed. The distribution and spatial correlation of surface contamination can be characterized using concepts from geostatistical science, where spatial applications of statistics are most developed. The theory is summarized, and particulate surface contamination sampled from small areas on a table has been used to illustrate the method. First, the spatial correlation is modelled and the parameters estimated from the data. Next, it is shown how the contamination at positions not measured can be estimated with kriging, a minimum mean square error method...
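
    A compact numpy sketch of the kriging step mentioned above: ordinary kriging of surface-contamination values at an unmeasured position, assuming an exponential covariance model with already-fitted parameters (the variogram fitting itself is omitted, and all numbers are illustrative).

```python
import numpy as np

def ordinary_kriging(xy, values, target, sill=1.0, range_=5.0):
    """Ordinary kriging predictor with an exponential covariance model
    C(h) = sill * exp(-h / range_). Covariance parameters are assumed pre-fitted."""
    xy = np.asarray(xy, float)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = sill * np.exp(-d / range_)
    c0 = sill * np.exp(-np.linalg.norm(xy - np.asarray(target), axis=1) / range_)

    n = len(values)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = C; A[n, n] = 0.0   # unbiasedness constraint
    b = np.append(c0, 1.0)
    w = np.linalg.solve(A, b)[:n]                                # kriging weights
    return float(w @ values)

xy = [(0, 0), (1, 0), (0, 1), (2, 2)]        # sampled positions on the table (cm)
vals = np.array([3.0, 2.5, 2.8, 1.2])        # particle counts per area (illustrative)
print(ordinary_kriging(xy, vals, target=(0.5, 0.5)))
```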

  12. Sampling and analysis strategies to support waste form qualification

    International Nuclear Information System (INIS)

    Westsik, J.H. Jr.; Pulsipher, B.A.; Eggett, D.L.; Kuhn, W.L.

    1989-04-01

    As part of the waste acceptance process, waste form producers will be required to (1) demonstrate that their glass waste form will meet minimum specifications, (2) show that the process can be controlled to consistently produce an acceptable waste form, and (3) provide documentation that the waste form produced meets specifications. Key to the success of these endeavors is adequate sampling and chemical and radiochemical analyses of the waste streams from the waste tanks through the process to the final glass product. This paper suggests sampling and analysis strategies for meeting specific statistical objectives of (1) detection of compositions outside specification limits, (2) prediction of final glass product composition, and (3) estimation of composition in process vessels for both reporting and guiding succeeding process steps. 2 refs., 1 fig., 3 tabs

  13. A robust and fast method of sampling and analysis of delta13C of dissolved inorganic carbon in ground waters.

    Science.gov (United States)

    Spötl, Christoph

    2005-09-01

    The stable carbon isotopic composition of dissolved inorganic carbon (delta13C(DIC)) is traditionally determined using either direct precipitation or gas evolution methods in conjunction with offline gas preparation and measurement in a dual-inlet isotope ratio mass spectrometer. A gas evolution method based on continuous-flow technology is described here, which is easy to use and robust. Water samples (100-1500 microl depending on the carbonate alkalinity) are injected into He-filled autosampler vials in the field and analysed on an automated continuous-flow gas preparation system interfaced to an isotope ratio mass spectrometer. Sample analysis time including online preparation is 10 min and overall precision is 0.1 per thousand. This method is thus fast and can easily be automated for handling large sample batches.

  14. Robust, Sensitive, and Automated Phosphopeptide Enrichment Optimized for Low Sample Amounts Applied to Primary Hippocampal Neurons

    NARCIS (Netherlands)

    Post, Harm; Penning, Renske; Fitzpatrick, Martin; Garrigues, L.B.; Wu, W.; Mac Gillavry, H.D.; Hoogenraad, C.C.; Heck, A.J.R.; Altelaar, A.F.M.

    2017-01-01

    Because of the low stoichiometry of protein phosphorylation, targeted enrichment prior to LC–MS/MS analysis is still essential. The trend in phosphoproteome analysis is shifting toward an increasing number of biological replicates per experiment, ideally starting from very low sample amounts,

  15. Robustness analysis of complex networks with power decentralization strategy via flow-sensitive centrality against cascading failures

    Science.gov (United States)

    Guo, Wenzhang; Wang, Hao; Wu, Zhengping

    2018-03-01

    Most existing cascading failure mitigation strategies for power grids based on complex networks ignore the impact of electrical characteristics on dynamic performance. In this paper, the robustness of the power grid under a power decentralization strategy is analysed through cascading failure simulation based on AC flow theory. The flow-sensitive (FS) centrality is introduced by integrating topological features and electrical properties to help determine the siting of the generation nodes. The simulation results for the IEEE bus systems show that the flow-sensitive centrality method is a more stable and accurate approach and can enhance the robustness of the network remarkably. Through the study of the optimal flow-sensitive centrality selection for different networks, we find that the robustness of a network with an obvious small-world effect depends more on the contribution of generation nodes identified by community structure; otherwise, the contribution of generation nodes with an important influence on power flow is more critical. In addition, community structure plays a significant role in balancing the power flow distribution and further slowing the propagation of failures. These results are useful in power grid planning and cascading failure prevention.
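
    The paper's simulation uses AC power flow, which is not reproduced here; the sketch below instead shows the simpler, purely topological cascading-failure model often used as a baseline in this literature (betweenness as a load proxy, capacity set by a tolerance factor), just to make the capacity/overload loop concrete. The graph and parameters are illustrative.

```python
import networkx as nx

def cascade_survivors(G, trigger, alpha=0.2):
    """Topological cascade: load ~ betweenness, capacity = (1 + alpha) * initial load.
    Remove the trigger node, recompute loads, drop overloaded nodes, repeat."""
    load0 = nx.betweenness_centrality(G)
    capacity = {n: (1 + alpha) * load0[n] for n in G}
    H = G.copy()
    H.remove_node(trigger)
    while True:
        load = nx.betweenness_centrality(H)
        overloaded = [n for n in H if load[n] > capacity[n]]
        if not overloaded:
            break
        H.remove_nodes_from(overloaded)
    return H.number_of_nodes()

G = nx.barabasi_albert_graph(100, 2, seed=7)
hub = max(G.degree, key=lambda kv: kv[1])[0]
print("surviving nodes after hub failure:", cascade_survivors(G, hub))
```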

  16. Synthetic Jet Actuator-Based Aircraft Tracking Using a Continuous Robust Nonlinear Control Strategy

    Directory of Open Access Journals (Sweden)

    N. Ramos-Pedroza

    2017-01-01

    Full Text Available A robust nonlinear control law that achieves trajectory tracking control for unmanned aerial vehicles (UAVs) equipped with synthetic jet actuators (SJAs) is presented in this paper. A key challenge in the control design is that the dynamic characteristics of SJAs are nonlinear and contain parametric uncertainty. The challenge resulting from the uncertain SJA actuator parameters is mitigated via innovative algebraic manipulation in the tracking error system derivation, along with a robust nonlinear control law employing constant SJA parameter estimates. A key contribution of the paper is a rigorous analysis of the range of SJA actuator parameter uncertainty within which asymptotic UAV trajectory tracking can be achieved. A rigorous stability analysis is carried out to prove semiglobal asymptotic trajectory tracking. Detailed simulation results are included to illustrate the effectiveness of the proposed control law in the presence of wind gusts and varying levels of SJA actuator parameter uncertainty.

  17. Adapting to Adaptations: Behavioural Strategies that are Robust to Mutations and Other Organisational-Transformations

    Science.gov (United States)

    Egbert, Matthew D.; Pérez-Mercader, Juan

    2016-01-01

    Genetic mutations, infection by parasites or symbionts, and other events can transform the way that an organism’s internal state changes in response to a given environment. We use a minimalistic computational model to support an argument that by behaving “interoceptively,” i.e. responding to internal state rather than to the environment, organisms can be robust to these organisational-transformations. We suggest that the robustness of interoceptive behaviour is due, in part, to the asymmetrical relationship between an organism and its environment, where the latter more substantially influences the former than vice versa. This relationship means that interoceptive behaviour can respond to the environment, the internal state and the interaction between the two, while exteroceptive behaviour can only respond to the environment. We discuss the possibilities that (i) interoceptive behaviour may play an important role of facilitating adaptive evolution (especially in the early evolution of primitive life) and (ii) interoceptive mechanisms could prove useful in efforts to create more robust synthetic life-forms. PMID:26743579

  18. Perspectives on land snails - sampling strategies for isotopic analyses

    Science.gov (United States)

    Kwiecien, Ola; Kalinowski, Annika; Kamp, Jessica; Pellmann, Anna

    2017-04-01

    Since the seminal works of Goodfriend (1992), several substantial studies have confirmed a relation between the isotopic composition of land snail shells (d18O, d13C) and environmental parameters like precipitation amount, moisture source, temperature and vegetation type. This relation, however, is not straightforward and is site dependent. The choice of sampling strategy (discrete or bulk sampling) and cleaning procedure (several methods can be used, but a comparison of their effects in an individual shell has not yet been achieved) further complicate the shell analysis. The advantage of using snail shells as an environmental archive lies in the snails' limited mobility, and therefore their intrinsic aptitude for recording local and site-specific conditions. Also, snail shells are often found at dated archaeological sites. An obvious drawback is that shell assemblages rarely make up a continuous record, and a single shell is only a snapshot of the environmental setting at a given time. Shells from archaeological sites might represent a dietary component, and cooking would presumably alter the isotopic signature of the aragonite material. Consequently, a proper sampling strategy is of great importance and should be adjusted to the scientific question. Here, we compare and contrast different sampling approaches using modern shells collected in Morocco, Spain and Germany. The bulk shell approach (fine-ground material) yields information on mean environmental parameters within the life span of the analyzed individuals. However, despite homogenization, replicate measurements of bulk shell material returned results with a variability greater than analytical precision (up to 2‰ for d18O, and up to 1‰ for d13C), calling for caution when analyzing only single individuals. Horizontal high-resolution sampling (single drill holes along growth lines) provides insights into the amplitude of seasonal variability, while vertical high-resolution sampling (multiple drill holes along the same growth line

  19. Robustness to non-normality of common tests for the many-sample location problem

    Directory of Open Access Journals (Sweden)

    Azmeri Khan

    2003-01-01

    Full Text Available This paper studies the effect of deviating from the normal distribution assumption when considering the power of two many-sample location test procedures: ANOVA (parametric) and Kruskal-Wallis (non-parametric). Power functions for these tests under various conditions are produced using simulation, where the simulated data are produced using MacGillivray and Cannon's [10] recently suggested g-and-k distribution. This distribution can provide data with selected amounts of skewness and kurtosis by varying two nearly independent parameters.

  20. Sampling strategy for estimating human exposure pathways to consumer chemicals

    Directory of Open Access Journals (Sweden)

    Eleni Papadopoulou

    2016-03-01

    Full Text Available Human exposure to consumer chemicals has become a worldwide concern. In this work, a comprehensive sampling strategy is presented, to our knowledge being the first to study all relevant exposure pathways in a single cohort using multiple methods for assessment of exposure from each exposure pathway. The selected groups of chemicals to be studied are consumer chemicals whose production and use are currently in a state of transition: per- and polyfluorinated alkyl substances (PFASs), traditional and “emerging” brominated flame retardants (BFRs and EBFRs), organophosphate esters (OPEs) and phthalate esters (PEs). Information about human exposure to these contaminants is needed due to existing data gaps on human exposure intakes from multiple exposure pathways and on the relationships between internal and external exposure. Indoor environment, food and biological samples were collected from 61 participants and their households in the Oslo area (Norway) on two consecutive days during winter 2013-14. Air, dust, hand wipes, and duplicate diet (food and drink) samples were collected as indicators of external exposure, and blood, urine, blood spots, hair, nails and saliva as indicators of internal exposure. A food diary, a food frequency questionnaire (FFQ) and an indoor environment questionnaire were also implemented. Approximately 2000 samples were collected in total, and participant views on their experiences of this campaign were collected via questionnaire. While 91% of our participants were positive about future participation in a similar project, some tasks were viewed as problematic. Completing the food diary and collection of duplicate food/drink portions were the tasks most frequently reported as “hard”/”very hard”. Nevertheless, a strong positive correlation between the reported total mass of food/drinks in the food record and the total weight of the food/drinks in the collection bottles was observed, being an indication of accurate performance

  1. Unbiased tensor-based morphometry: Improved robustness and sample size estimates for Alzheimer’s disease clinical trials

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P.; Ching, Christopher R.K.; Boyle, Christina P.; Rajagopalan, Priya; Gutman, Boris A.; Leow, Alex D.; Toga, Arthur W.; Jack, Clifford R.; Harvey, Danielle; Weiner, Michael W.; Thompson, Paul M.

    2013-01-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer’s disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of Alzheimer’s Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. PMID:23153970
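
    As a worked illustration of the power analysis described above, the snippet below applies the standard two-sample formula n = 2 (z_{1-α/2} + z_{power})² σ² / Δ² per arm, where Δ is the targeted 25% reduction in the mean rate of change; the atrophy mean and SD plugged in are placeholders, not the ADNI-1 estimates.

```python
from scipy.stats import norm

def n80_per_arm(mean_change, sd_change, reduction=0.25, alpha=0.05, power=0.80):
    """Sample size per arm to detect a `reduction` (e.g. 25%) in the mean rate of
    change with a two-sided test, assuming normally distributed change scores."""
    delta = reduction * mean_change
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd_change / delta) ** 2

# Placeholder 24-month TBM change scores (percent atrophy): mean 2.0, SD 1.2
print(round(n80_per_arm(mean_change=2.0, sd_change=1.2), 1), "subjects per arm")
```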

  2. Evaluating sampling strategies for larval cisco (Coregonus artedi)

    Science.gov (United States)

    Myers, J.T.; Stockwell, J.D.; Yule, D.L.; Black, J.A.

    2008-01-01

    To improve our ability to assess larval cisco (Coregonus artedi) populations in Lake Superior, we conducted a study to compare several sampling strategies. First, we compared density estimates of larval cisco concurrently captured in surface waters with a 2 x 1-m paired neuston net and a 0.5-m (diameter) conical net. Density estimates obtained from the two gear types were not significantly different, suggesting that the conical net is a reasonable alternative to the more cumbersome and costly neuston net. Next, we assessed the effect of tow pattern (sinusoidal versus straight tows) to examine if propeller wash affected larval density. We found no effect of propeller wash on the catchability of larval cisco. Given the availability of global positioning systems, we recommend sampling larval cisco using straight tows to simplify protocols and facilitate straightforward measurements of volume filtered. Finally, we investigated potential trends in larval cisco density estimates by sampling four time periods during the light period of a day at individual sites. Our results indicate no significant trends in larval density estimates during the day. We conclude estimates of larval cisco density across space are not confounded by time at a daily timescale. Well-designed, cost effective surveys of larval cisco abundance will help to further our understanding of this important Great Lakes forage species.

  3. Sample preparation strategies for food and biological samples prior to nanoparticle detection and imaging

    DEFF Research Database (Denmark)

    Larsen, Erik Huusfeldt; Löschner, Katrin

    2014-01-01

    microscopy (TEM) proved to be necessary for trouble shooting of results obtained from AFFF-LS-ICP-MS. Aqueous and enzymatic extraction strategies were tested for thorough sample preparation aiming at degrading the sample matrix and to liberate the AgNPs from chicken meat into liquid suspension. The resulting...... AFFF-ICP-MS fractograms, which corresponded to the enzymatic digests, showed a major nano-peak (about 80 % recovery of AgNPs spiked to the meat) plus new smaller peaks that eluted close to the void volume of the fractograms. Small, but significant shifts in retention time of AFFF peaks were observed...... for the meat sample extracts and the corresponding neat AgNP suspension, and rendered sizing by way of calibration with AgNPs as sizing standards inaccurate. In order to gain further insight into the sizes of the separated AgNPs, or their possible dissolved state, fractions of the AFFF eluate were collected...

  4. Breeding Strategy To Generate Robust Yeast Starter Cultures for Cocoa Pulp Fermentations

    Science.gov (United States)

    Meersman, Esther; Steensels, Jan; Paulus, Tinneke; Struyf, Nore; Saels, Veerle; Mathawan, Melissa; Koffi, Jean; Vrancken, Gino

    2015-01-01

    Cocoa pulp fermentation is a spontaneous process during which the natural microbiota present at cocoa farms is allowed to ferment the pulp surrounding cocoa beans. Because such spontaneous fermentations are inconsistent and contribute to product variability, there is growing interest in a microbial starter culture that could be used to inoculate cocoa pulp fermentations. Previous studies have revealed that many different fungi are recovered from different batches of spontaneous cocoa pulp fermentations, whereas the variation in the prokaryotic microbiome is much more limited. In this study, therefore, we aimed to develop a suitable yeast starter culture that is able to outcompete wild contaminants and consistently produce high-quality chocolate. Starting from specifically selected Saccharomyces cerevisiae strains, we developed robust hybrids with characteristics that allow them to efficiently ferment cocoa pulp, including improved temperature tolerance and fermentation capacity. We conducted several laboratory and field trials to show that these new hybrids often outperform their parental strains and are able to dominate spontaneous pilot scale fermentations, which results in much more consistent microbial profiles. Moreover, analysis of the resulting chocolate showed that some of the cocoa batches that were fermented with specific starter cultures yielded superior chocolate. Taken together, these results describe the development of robust yeast starter cultures for cocoa pulp fermentations that can contribute to improving the consistency and quality of commercial chocolate production. PMID:26150457

  5. Breeding Strategy To Generate Robust Yeast Starter Cultures for Cocoa Pulp Fermentations.

    Science.gov (United States)

    Meersman, Esther; Steensels, Jan; Paulus, Tinneke; Struyf, Nore; Saels, Veerle; Mathawan, Melissa; Koffi, Jean; Vrancken, Gino; Verstrepen, Kevin J

    2015-09-01

    Cocoa pulp fermentation is a spontaneous process during which the natural microbiota present at cocoa farms is allowed to ferment the pulp surrounding cocoa beans. Because such spontaneous fermentations are inconsistent and contribute to product variability, there is growing interest in a microbial starter culture that could be used to inoculate cocoa pulp fermentations. Previous studies have revealed that many different fungi are recovered from different batches of spontaneous cocoa pulp fermentations, whereas the variation in the prokaryotic microbiome is much more limited. In this study, therefore, we aimed to develop a suitable yeast starter culture that is able to outcompete wild contaminants and consistently produce high-quality chocolate. Starting from specifically selected Saccharomyces cerevisiae strains, we developed robust hybrids with characteristics that allow them to efficiently ferment cocoa pulp, including improved temperature tolerance and fermentation capacity. We conducted several laboratory and field trials to show that these new hybrids often outperform their parental strains and are able to dominate spontaneous pilot scale fermentations, which results in much more consistent microbial profiles. Moreover, analysis of the resulting chocolate showed that some of the cocoa batches that were fermented with specific starter cultures yielded superior chocolate. Taken together, these results describe the development of robust yeast starter cultures for cocoa pulp fermentations that can contribute to improving the consistency and quality of commercial chocolate production. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  6. Developing a Robust Strategy for Implementing a Water Resources Master Plan in Lima, Peru

    Science.gov (United States)

    Kalra, N.; Groves, D.; Bonzanigo, L.; Molina-Perez, E.

    2015-12-01

    Lima, the capital of Peru, faces significant water stress. It is the fifth largest metropolitan area in Latin America, and the second largest desert city in the world. The city has developed a Master Plan of major investment projects to improve water reliability until 2040. Yet key questions remain. Is the Master Plan sufficient for ensuring reliability in the face of deeply uncertain future climate change and demand? How do uncertain budget and project feasibility conditions shape Lima's options? How should the investments in the plan be prioritized, and can some be delayed? Lima is not alone in facing these planning challenges. Governments invest billions of dollars annually in long-term projects. Yet deep uncertainties pose formidable challenges to making near-term decisions that make long-term sense. The World Bank has spearheaded a community of practice on methods for Decision Making Under Deep Uncertainty (DMU). This pilot project in Peru is the first in-depth application of DMU techniques to water supply planning in a developing country. It builds on prior analysis done in New York, California, and for the Colorado River, yet shows how these methods can be applied in regions which do not have as advanced data or tools available. The project combines three methods in particular -- Robust Decision Making, Decision Scaling, and Adaptive Pathways -- to help Lima implement its Master Plan in a way that is robust, no-regret, and adaptive. It was done in close partnership with SEDAPAL, the water utility company in Lima, and in coordination with other national WRM and meteorological agencies. This talk will: Present the planning challenges Lima and other cities face, including climate change Describe DMU methodologies and how they were applied in collaboration with SEDAPAL Summarize recommendations for achieving long-term water reliability in Lima Suggest how these methodologies can benefit other investment projects in developing countries.

  7. Robust and efficient direct multiplex amplification method for large-scale DNA detection of blood samples on FTA cards

    International Nuclear Information System (INIS)

    Jiang Bowei; Xiang Fawei; Zhao Xingchun; Wang Lihua; Fan Chunhai

    2013-01-01

    Deoxyribonucleic acid (DNA) damage arising from radiation has become widespread with the development of nuclear weapons and the clinically wide application of computed tomography (CT) scans and nuclear medicine. All ionizing radiations (X-rays, γ-rays, alpha particles, etc.) and ultraviolet (UV) radiation lead to DNA damage. Polymerase chain reaction (PCR) is one of the most widely used techniques for detecting DNA damage, as the amplification stops at the site of the damage. Improvements to enhance the efficiency of PCR are always required and remain a great challenge. Here we establish a multiplex PCR assay system (MPAS) that serves as a robust and efficient method for direct detection of target DNA sequences in genomic DNA. The system is established by adding a combination of PCR enhancers to standard PCR buffer. The performance of MPAS was demonstrated by carrying out direct PCR amplification on 1.2 mm human blood punches using commercially available primer sets that include multiple primer pairs. The optimized PCR system produced high-quality genotyping results with no indication of inhibitory effects and led to a full-profile success rate of 98.13%. Our studies demonstrate that MPAS provides an efficient and robust method for obtaining sensitive, reliable and reproducible PCR results from human blood samples. (authors)

  8. Robust predictive control strategy applied for propofol dosing using BIS as a controlled variable during anesthesia

    NARCIS (Netherlands)

    Ionescu, Clara A.; De Keyser, Robin; Torrico, Bismark Claure; De Smet, Tom; Struys, Michel M. R. F.; Normey-Rico, Julio E.

    This paper presents the application of predictive control to drug dosing during anesthesia in patients undergoing surgery. The performance of a generic predictive control strategy in drug dosing control, with a previously reported anesthesia-specific control algorithm, has been evaluated. The

  9. Combining backcasting and exploratory scenarios to develop robust water strategies in face of uncertain futures

    NARCIS (Netherlands)

    Vliet, van M.; Kok, K.

    2015-01-01

    Water management strategies in times of global change need to be developed within a complex and uncertain environment. Scenarios are often used to deal with uncertainty. A novel backcasting methodology has been tested in which a normative objective (e.g. adaptive water management) is backcasted

  10. A systematic examination of a random sampling strategy for source apportionment calculations.

    Science.gov (United States)

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is by setting up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability also is included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
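
    A minimal numerical sketch of the random sampling idea described above, for the simplest case of two sources and one marker, is given below (all end-member values, uncertainties and the mixture value are hypothetical, and NumPy is assumed; the study's own synthetic data sets are not reproduced here).

      import numpy as np

      rng = np.random.default_rng(42)

      # Hypothetical end-member means and standard deviations for one marker
      # (e.g. an isotopic signature) in a two-source mixing problem.
      src_a_mean, src_a_sd = -27.0, 1.0   # source A marker value
      src_b_mean, src_b_sd = -12.0, 1.5   # source B marker value
      mixture = -20.0                     # measured value in the mixed sample

      n_draws = 100_000
      a = rng.normal(src_a_mean, src_a_sd, n_draws)
      b = rng.normal(src_b_mean, src_b_sd, n_draws)

      # Solve f*a + (1 - f)*b = mixture for each random realization of the sources.
      f = (mixture - b) / (a - b)
      f = f[(f >= 0) & (f <= 1)]          # keep physically meaningful fractions

      print(f"fraction from source A: mean={f.mean():.3f}, "
            f"2.5-97.5% interval=({np.percentile(f, 2.5):.3f}, "
            f"{np.percentile(f, 97.5):.3f})")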

  11. OUTPACE long duration stations: physical variability, context of biogeochemical sampling, and evaluation of sampling strategy

    Directory of Open Access Journals (Sweden)

    A. de Verneil

    2018-04-01

    Full Text Available Research cruises to quantify biogeochemical fluxes in the ocean require taking measurements at stations lasting at least several days. A popular experimental design is the quasi-Lagrangian drifter, often mounted with in situ incubations or sediment traps that follow the flow of water over time. After initial drifter deployment, the ship tracks the drifter for continuing measurements that are supposed to represent the same water environment. An outstanding question is how to best determine whether this is true. During the Oligotrophy to UlTra-oligotrophy PACific Experiment (OUTPACE) cruise, from 18 February to 3 April 2015 in the western tropical South Pacific, three separate stations of long duration (five days) over the upper 500 m were conducted in this quasi-Lagrangian sampling scheme. Here we present physical data to provide context for these three stations and to assess whether the sampling strategy worked, i.e., that a single body of water was sampled. After analyzing tracer variability and local water circulation at each station, we identify water layers and times where the drifter risks encountering another body of water. While almost no realization of this sampling scheme will be truly Lagrangian, due to the presence of vertical shear, the depth-resolved observations during the three stations show most layers sampled sufficiently homogeneous physical environments during OUTPACE. By directly addressing the concerns raised by these quasi-Lagrangian sampling platforms, a protocol of best practices can begin to be formulated so that future research campaigns include the complementary datasets and analyses presented here to verify the appropriate use of the drifter platform.

  12. IVS Combination Center at BKG - Robust Outlier Detection and Weighting Strategies

    Science.gov (United States)

    Bachmann, S.; Lösler, M.

    2012-12-01

    Outlier detection plays an important role within the IVS combination. Even if the original data is the same for all contributing Analysis Centers (AC), the analyzed data shows differences due to analysis software characteristics. The treatment of outliers is thus a fine line between keeping data heterogeneity and eliminating real outliers. Robust outlier detection based on the Least Median of Squares (LMS) is used within the IVS combination. This method allows reliable outlier detection with a small number of input parameters. A similar problem arises for the weighting of the individual solutions within the combination process. The variance component estimation (VCE) is used to control the weighting factor for each AC. The Operator-Software-Impact (OSI) method takes into account that the analyzed data is strongly influenced by the software and the responsible operator. It makes the VCE more sensitive to the diverse input data. This method has already been set up within GNSS data analysis as well as the analysis of troposphere data. The benefit of an OSI realization within the VLBI combination and its potential in weighting factor determination has not been investigated before.
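
    As an illustration of Least-Median-of-Squares screening, the sketch below flags a discordant value in a set of per-Analysis-Center estimates (the numbers and the threshold are hypothetical, and the one-dimensional search over the data points themselves is a simplification of full LMS regression).

      import numpy as np

      def lms_outliers(x, threshold=2.5):
          """Flag outliers with a simple Least-Median-of-Squares location fit.

          The LMS location estimate minimizes the median of squared residuals;
          here it is searched over the data points themselves, which is
          sufficient for a one-dimensional illustration.
          """
          x = np.asarray(x, dtype=float)
          med_sq = [np.median((x - c) ** 2) for c in x]
          center = x[int(np.argmin(med_sq))]
          # Robust scale from the minimal median of squared residuals
          # (1.4826 makes the estimate consistent for Gaussian data).
          scale = 1.4826 * np.sqrt(min(med_sq))
          return np.abs(x - center) / scale > threshold

      # Hypothetical per-Analysis-Center estimates of the same parameter (mm):
      estimates = np.array([3.1, 2.9, 3.0, 3.2, 2.8, 7.5, 3.05])
      print(lms_outliers(estimates))  # only the 7.5 mm value is flagged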

  13. A sampling strategy to establish existing plant configuration baselines

    International Nuclear Information System (INIS)

    Buchanan, L.P.

    1995-01-01

    The Department of Energy's Gaseous Diffusion Plants (DOEGDP) are undergoing a Safety Analysis Update Program. As part of this program, critical existing structures are being reevaluated for Natural Phenomena Hazards (NPH) based on the recommendations of UCRL-15910. The Department of Energy has specified that current plant configurations be used in the performance of these reevaluations. This paper presents the process and results of a walkdown program implemented at DOEGDP to establish the current configuration baseline for these existing critical structures for use in subsequent NPH evaluations. These structures are classified as moderate hazard facilities and were constructed in the early 1950s. The process involved a statistical sampling strategy to determine the validity of critical design information as represented on the original design drawings, such as member sizes, orientation, connection details and anchorage. A floor load inventory of the dead load of the equipment, both permanently attached and spare, was also performed, as well as a walkthrough inspection of the overall structure to identify any other significant anomalies.

  14. Multiple strategies to improve sensitivity, speed and robustness of isothermal nucleic acid amplification for rapid pathogen detection

    Directory of Open Access Journals (Sweden)

    Lemieux Bertrand

    2011-05-01

    Full Text Available Abstract Background In the past decades the rapid growth of molecular diagnostics (based on either traditional PCR or isothermal amplification technologies) has met the demand for fast and accurate testing. Although isothermal amplification technologies have the advantage of requiring only low-cost instruments, further improvement in sensitivity, speed and robustness is a prerequisite for applications in rapid pathogen detection, especially in point-of-care diagnostics. Here, we describe and explore several strategies to improve one of the isothermal technologies, helicase-dependent amplification (HDA). Results Multiple strategies were pursued to improve the overall performance of the isothermal amplification: restriction endonuclease-mediated DNA helicase homing, macromolecular crowding agents, and optimization of the reaction enzyme mix. The effect of combining all strategies was compared with that of each individual strategy. With all of the above methods, we are able to detect 50 copies of Neisseria gonorrhoeae DNA in just 20 minutes of amplification using a nearly instrument-free detection platform (BESt™ cassette). Conclusions The strategies addressed in this proof-of-concept study are independent of expensive equipment, and are not limited to particular primers, targets or detection formats. However, they make a large difference in assay performance. Some of them can be adjusted and applied to other formats of nucleic acid amplification. Furthermore, strategies that improve in vitro assays by closely simulating natural conditions may be useful in the general field of developing molecular assays. A new fast molecular assay for Neisseria gonorrhoeae has also been developed, which has great potential for use in point-of-care diagnostics.

  15. Robust differences in antisaccade performance exist between COGS schizophrenia cases and controls regardless of recruitment strategies.

    Science.gov (United States)

    Radant, Allen D; Millard, Steven P; Braff, David L; Calkins, Monica E; Dobie, Dorcas J; Freedman, Robert; Green, Michael F; Greenwood, Tiffany A; Gur, Raquel E; Gur, Ruben C; Lazzeroni, Laura C; Light, Gregory A; Meichle, Sean P; Nuechterlein, Keith H; Olincy, Ann; Seidman, Larry J; Siever, Larry J; Silverman, Jeremy M; Stone, William S; Swerdlow, Neal R; Sugar, Catherine A; Tsuang, Ming T; Turetsky, Bruce I; Tsuang, Debby W

    2015-04-01

    The impaired ability to make correct antisaccades (i.e., antisaccade performance) is well documented among schizophrenia subjects, and researchers have successfully demonstrated that antisaccade performance is a valid schizophrenia endophenotype that is useful for genetic studies. However, it is unclear how the ascertainment biases that unavoidably result from recruitment differences in schizophrenia subjects identified in family versus case-control studies may influence patient-control differences in antisaccade performance. To assess the impact of ascertainment bias, researchers from the Consortium on the Genetics of Schizophrenia (COGS) compared antisaccade performance and antisaccade metrics (latency and gain) in schizophrenia and control subjects from COGS-1, a family-based schizophrenia study, to schizophrenia and control subjects from COGS-2, a corresponding case-control study. COGS-2 schizophrenia subjects were substantially older; had lower education status, worse psychosocial function, and more severe symptoms; and were three times more likely to be a member of a multiplex family than COGS-1 schizophrenia subjects. Despite these variations, which were likely the result of ascertainment differences (as described in the introduction to this special issue), the effect sizes of the control-schizophrenia differences in antisaccade performance were similar in both studies (Cohen's d effect size of 1.06 and 1.01 in COGS-1 and COGS-2, respectively). This suggests that, in addition to the robust, state-independent schizophrenia-related deficits described in endophenotype studies, group differences in antisaccade performance do not vary based on subject ascertainment and recruitment factors. Published by Elsevier B.V.
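
    For reference, the pooled-standard-deviation form of Cohen's d used to compare the two studies can be computed as in the following sketch (the group values are synthetic placeholders, not COGS data).

      import numpy as np

      def cohens_d(group1, group2):
          """Cohen's d with a pooled standard deviation."""
          n1, n2 = len(group1), len(group2)
          pooled_var = ((n1 - 1) * np.var(group1, ddof=1) +
                        (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
          return (np.mean(group1) - np.mean(group2)) / np.sqrt(pooled_var)

      # Hypothetical antisaccade performance (proportion correct): controls vs cases.
      rng = np.random.default_rng(5)
      controls = rng.normal(0.85, 0.10, size=120)
      cases = rng.normal(0.65, 0.20, size=120)
      print(f"Cohen's d = {cohens_d(controls, cases):.2f}")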

  16. A PSF-shape-based beamforming strategy for robust 2D motion estimation in ultrafast data

    NARCIS (Netherlands)

    Saris, Anne E.C.M.; Fekkes, Stein; Nillesen, Maartje; Hansen, Hendrik H.G.; de Korte, Chris L.

    2018-01-01

    This paper presents a framework for motion estimation in ultrafast ultrasound data. It describes a novel approach for determining the sampling grid for ultrafast data based on the system's point-spread-function (PSF). As a consequence, the cross-correlation functions (CCF) used in the speckle

  17. A PSF-Shape-Based Beamforming Strategy for Robust 2D Motion Estimation in Ultrafast Data

    OpenAIRE

    Anne E. C. M. Saris; Stein Fekkes; Maartje M. Nillesen; Hendrik H. G. Hansen; Chris L. de Korte

    2018-01-01

    This paper presents a framework for motion estimation in ultrafast ultrasound data. It describes a novel approach for determining the sampling grid for ultrafast data based on the system’s point-spread-function (PSF). As a consequence, the cross-correlation functions (CCF) used in the speckle tracking (ST) algorithm will have circular-shaped peaks, which can be interpolated using a 2D interpolation method to estimate subsample displacements. Carotid artery wall motion and parabolic blood flow...

  18. Sampling strategies in antimicrobial resistance monitoring: evaluating how precision and sensitivity vary with the number of animals sampled per farm.

    Directory of Open Access Journals (Sweden)

    Takehisa Yamamoto

    Full Text Available Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5-97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.
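
    The bootstrap evaluation described above can be sketched as follows (the farm-level data are simulated stand-ins, not the study's E. coli isolates; the farm-level prevalences and Beta parameters are hypothetical, and fewer replications than the study's 10,000 are used to keep the sketch quick).

      import numpy as np

      rng = np.random.default_rng(0)

      def bootstrap_prevalence(resistant, animals_per_farm, total=12, reps=2_000):
          """Bootstrap the resistance prevalence for one allocation scheme.

          `resistant` maps farm id -> array of 0/1 resistance flags. Farms and
          then animals are resampled with replacement, keeping the overall
          sample size fixed at `total`.
          """
          farms = list(resistant)
          n_farms = total // animals_per_farm
          estimates = np.empty(reps)
          for r in range(reps):
              chosen = rng.choice(farms, size=n_farms, replace=True)
              draws = [rng.choice(resistant[f], size=animals_per_farm, replace=True)
                       for f in chosen]
              estimates[r] = np.concatenate(draws).mean()
          return estimates.std(), np.percentile(estimates, [2.5, 97.5])

      # Hypothetical data: 30 farms, 10 animals each, farm-level prevalence varies.
      data = {f: rng.binomial(1, p, size=10)
              for f, p in enumerate(rng.beta(2, 8, size=30))}

      for k in (1, 2, 3, 4, 6):
          sd, ci = bootstrap_prevalence(data, animals_per_farm=k)
          print(f"{k} animal(s)/farm: SD={sd:.4f}, 2.5-97.5% interval={ci.round(3)}")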

  19. A novel strategy for preparing mechanically robust ionically cross-linked alginate hydrogels

    International Nuclear Information System (INIS)

    Jejurikar, Aparna; Lawrie, Gwen; Groendahl, Lisbeth; Martin, Darren

    2011-01-01

    The properties of alginate films modified using two cross-linker ions (Ca2+ and Ba2+), comparing two separate cross-linking techniques (the traditional immersion (IM) method and a new strategy, a pressure-assisted diffusion (PD) method), are evaluated. This was achieved through measuring metal ion content, water uptake and film stability in an ionic solution ([Ca2+] = 2 mM). Characterization of the internal structure and mechanical properties of hydrated films was established by cryogenic scanning electron microscopy and tensile testing, respectively. It was found that gels formed by the PD technique possessed greater stability and did not exhibit any delamination after 21-day immersion, as compared to gels formed by the IM technique. The Ba2+ cross-linked gels possessed significantly higher cross-linking density, as reflected in lower water content, a denser internal structure and higher Young's modulus compared to Ca2+ cross-linked gels. For the Ca2+ cross-linked gels, a large improvement in the mechanical properties was observed in gels produced by the PD technique, and this was attributed to thicker pore walls observed within the hydrogel structure. In contrast, for the Ba2+ cross-linked gels, the PD technique resulted in gels that had lower tensile strength and strain energy density, and this was attributed to phase separation and larger macropores in this gel.

  20. Robust Scientists

    DEFF Research Database (Denmark)

    Gorm Hansen, Birgitte

    ...knowledge", Danish research policy seems to have helped develop politically and economically "robust scientists". Scientific robustness is acquired by way of three strategies: 1) tasting and discriminating between resources so as to avoid funding that erodes academic profiles and pushes scientists away from their core interests, 2) developing a self-supply of industry interests by becoming entrepreneurs and thus creating their own compliant industry partner, and 3) balancing resources within a larger collective of researchers, thus countering changes in the influx of funding caused by shifts in political...

  1. A PSF-Shape-Based Beamforming Strategy for Robust 2D Motion Estimation in Ultrafast Data

    Directory of Open Access Journals (Sweden)

    Anne E. C. M. Saris

    2018-03-01

    Full Text Available This paper presents a framework for motion estimation in ultrafast ultrasound data. It describes a novel approach for determining the sampling grid for ultrafast data based on the system’s point-spread-function (PSF). As a consequence, the cross-correlation functions (CCF) used in the speckle tracking (ST) algorithm will have circular-shaped peaks, which can be interpolated using a 2D interpolation method to estimate subsample displacements. Carotid artery wall motion and parabolic blood flow simulations together with rotating disk experiments using a Verasonics Vantage 256 are used for performance evaluation. Zero-degree plane wave data were acquired using an ATL L5-12 (fc = 9 MHz) transducer for a range of pulse repetition frequencies (PRFs), resulting in 0–600 µm inter-frame displacements. The proposed methodology was compared to data beamformed on a conventionally spaced grid, combined with the commonly used 1D parabolic interpolation. The PSF-shape-based beamforming grid combined with 2D cubic interpolation showed the most accurate and stable performance with respect to the full range of inter-frame displacements, both for the assessment of blood flow and vessel wall dynamics. The proposed methodology can be used as a protocolled way to beamform ultrafast data and obtain accurate estimates of tissue motion.
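
    The subsample displacement step can be illustrated with a small sketch that locates the peak of a synthetic, circularly shaped correlation map (a separable parabolic fit is used here for brevity, whereas the paper itself uses 2D cubic interpolation; the peak location and map size are arbitrary).

      import numpy as np

      def subsample_peak(ccf):
          """Estimate the subsample location of the peak of a cross-correlation map.

          A separable parabolic fit through the neighbours of the integer peak
          is used; this is only a simplified stand-in for 2D cubic interpolation.
          """
          iy, ix = np.unravel_index(np.argmax(ccf), ccf.shape)
          def parabolic(cm, c0, cp):
              denom = cm - 2 * c0 + cp
              return 0.0 if denom == 0 else 0.5 * (cm - cp) / denom
          dy = parabolic(ccf[iy - 1, ix], ccf[iy, ix], ccf[iy + 1, ix])
          dx = parabolic(ccf[iy, ix - 1], ccf[iy, ix], ccf[iy, ix + 1])
          return iy + dy, ix + dx

      # Synthetic, circularly symmetric correlation peak near (10.3, 7.6) grid units.
      y, x = np.mgrid[0:21, 0:15]
      ccf = np.exp(-((y - 10.3) ** 2 + (x - 7.6) ** 2) / 8.0)
      print(subsample_peak(ccf))  # approximately (10.3, 7.6)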

  2. Robust species taxonomy assignment algorithm for 16S rRNA NGS reads: application to oral carcinoma samples

    Directory of Open Access Journals (Sweden)

    Nezar Noor Al-Hebshi

    2015-09-01

    Full Text Available Background: Usefulness of next-generation sequencing (NGS) in assessing bacteria associated with oral squamous cell carcinoma (OSCC) has been undermined by inability to classify reads to the species level. Objective: The purpose of this study was to develop a robust algorithm for species-level classification of NGS reads from oral samples and to pilot test it for profiling bacteria within OSCC tissues. Methods: Bacterial 16S V1-V3 libraries were prepared from three OSCC DNA samples and sequenced using 454's FLX chemistry. High-quality, well-aligned, and non-chimeric reads ≥350 bp were classified using a novel, multi-stage algorithm that involves matching reads to reference sequences in revised versions of the Human Oral Microbiome Database (HOMD), HOMD extended (HOMDEXT), and Greengene Gold (GGG) at alignment coverage and percentage identity ≥98%, followed by assignment to species level based on top hit reference sequences. Priority was given to hits in HOMD, then HOMDEXT and finally GGG. Unmatched reads were subject to operational taxonomic unit analysis. Results: Nearly 92.8% of the reads were matched to updated-HOMD 13.2, 1.83% to trusted-HOMDEXT, and 1.36% to modified-GGG. Of all matched reads, 99.6% were classified to species level. A total of 228 species-level taxa were identified, representing 11 phyla; the most abundant were Proteobacteria, Bacteroidetes, Firmicutes, Fusobacteria, and Actinobacteria. Thirty-five species-level taxa were detected in all samples. On average, Prevotella oris, Neisseria flava, Neisseria flavescens/subflava, Fusobacterium nucleatum ss polymorphum, Aggregatibacter segnis, Streptococcus mitis, and Fusobacterium periodontium were the most abundant. Bacteroides fragilis, a species rarely isolated from the oral cavity, was detected in two samples. Conclusion: This multi-stage algorithm maximizes the fraction of reads classified to the species level while ensuring reliable classification by giving priority to the
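
    The priority rule of the multi-stage assignment can be sketched as below (the hit records, field names and example species are illustrative assumptions, not the authors' implementation).

      # Hypothetical hit records: database, percent identity, alignment coverage, species.
      PRIORITY = {"HOMD": 0, "HOMDEXT": 1, "GGG": 2}

      def assign_species(hits, min_identity=98.0, min_coverage=98.0):
          """Assign a read to a species following the multi-stage priority scheme.

          Hits below the identity/coverage thresholds are discarded; among the
          remaining hits, databases are tried in the order HOMD, then HOMDEXT,
          then GGG, and the best-scoring hit in the highest-priority database
          wins. Returns None when no database yields a qualifying hit (the read
          would then go to OTU-based analysis instead).
          """
          qualified = [h for h in hits
                       if h["identity"] >= min_identity and h["coverage"] >= min_coverage]
          if not qualified:
              return None
          best = min(qualified, key=lambda h: (PRIORITY[h["db"]], -h["identity"]))
          return best["species"]

      read_hits = [
          {"db": "GGG",     "identity": 99.4, "coverage": 100.0, "species": "Prevotella oris"},
          {"db": "HOMD",    "identity": 98.6, "coverage": 99.0,  "species": "Prevotella oris"},
          {"db": "HOMDEXT", "identity": 97.1, "coverage": 98.5,  "species": "Prevotella sp."},
      ]
      print(assign_species(read_hits))  # the HOMD hit wins despite the higher-identity GGG hit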

  3. SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS

    OpenAIRE

    Sampath Sundaram; Ammani Sivaraman

    2010-01-01

    In this paper an attempt is made to extend linear systematic sampling using multiple random starts due to Gautschi (1957) for various types of systematic sampling schemes available in literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with Yates corrected estimator developed with reference to Gautschi’s Linear systematic samplin...
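
    A minimal sketch of linear systematic sampling with multiple random starts, in the spirit of Gautschi (1957), is shown below (the population size, sample size and number of starts are arbitrary examples).

      import random

      def multi_start_systematic(N, n, m, seed=None):
          """Linear systematic sampling with m independent random starts.

          Assumes N = n*k for some integer k and that n is divisible by m.
          Each of the m random starts, drawn without replacement from 1..m*k,
          generates every (m*k)-th unit until n/m units are obtained per start.
          Returns 1-based unit indices; an illustrative sketch only.
          """
          rng = random.Random(seed)
          if N % n != 0 or n % m != 0:
              raise ValueError("need N = n*k and n divisible by m")
          k = N // n
          interval = m * k
          per_start = n // m
          starts = rng.sample(range(1, interval + 1), m)
          return sorted(r + j * interval for r in starts for j in range(per_start))

      print(multi_start_systematic(N=60, n=12, m=3, seed=1))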

  4. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  5. Sampling strategies for the analysis of reactive low-molecular weight compounds in air

    NARCIS (Netherlands)

    Henneken, H.

    2006-01-01

    Within this thesis, new sampling and analysis strategies for the determination of airborne workplace contaminants have been developed. Special focus has been directed towards the development of air sampling methods that involve diffusive sampling. In an introductory overview, the current

  6. Solving the Puzzle of Recruitment and Retention-Strategies for Building a Robust Clinical and Translational Research Workforce.

    Science.gov (United States)

    Nearing, Kathryn A; Hunt, Cerise; Presley, Jessica H; Nuechterlein, Bridget M; Moss, Marc; Manson, Spero M

    2015-10-01

    This paper is the first in a five-part series on the clinical and translational science educational pipeline and presents strategies to support recruitment and retention to create diverse pathways into clinical and translational research (CTR). The strategies address multiple levels or contexts of persistence decisions and include: (1) creating a seamless pipeline by forming strategic partnerships to achieve continuity of support for scholars and collective impact; (2) providing meaningful research opportunities to support identity formation as a scientist and sustain motivation to pursue and persist in CTR careers; (3) fostering an environment for effective mentorship and peer support to promote academic and social integration; (4) advocating for institutional policies to alleviate environmental pull factors; and, (5) supporting program evaluation-particularly, the examination of longitudinal outcomes. By combining institutional policies that promote a culture and climate for diversity with quality, evidence-based programs and integrated networks of support, we can create the environment necessary for diverse scholars to progress successfully and efficiently through the pipeline to achieve National Institutes of Health's vision of a robust CTR workforce. © 2015 Wiley Periodicals, Inc.

  7. Detecting epileptic seizure with different feature extracting strategies using robust machine learning classification techniques by applying advance parameter optimization approach.

    Science.gov (United States)

    Hussain, Lal

    2018-06-01

    Epilepsy is a neurological disorder produced by abnormal excitability of neurons in the brain. To detect epileptic seizures, brain activity is monitored through the electroencephalogram (EEG) of patients suffering from seizures. The performance of EEG-based seizure detection depends on the feature extraction strategies used. In this research, we extracted features using various strategies based on time and frequency domain characteristics, nonlinear measures, wavelet-based entropy and a few statistical features. A deeper study was undertaken using novel machine learning classifiers by considering multiple factors. The support vector machine kernels were evaluated based on multiclass kernel and box constraint level. Likewise, for K-nearest neighbors (KNN), we evaluated different distance metrics, neighbor weights and numbers of neighbors. Similarly, for the decision trees we tuned the parameters based on maximum splits and split criteria, and the ensemble classifiers were evaluated based on different ensemble methods and learning rate. For training/testing, tenfold cross-validation was employed and performance was evaluated in the form of TPR, NPR, PPV, accuracy and AUC. In this research, a deeper analysis approach was performed using diverse feature extraction strategies and robust machine learning classifiers with more advanced optimal options. The support vector machine with a linear kernel and KNN with the city block distance metric gave the overall highest accuracy of 99.5%, which was higher than that obtained using the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM. Additionally, K-nearest neighbors with inverse squared distance weight gave higher performance for different numbers of neighbors. Moreover, in distinguishing postictal heart rate oscillations from epileptic ictal subjects, the highest performance of 100% was obtained using different machine learning classifiers.
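
    A compact scikit-learn sketch of the kind of classifier and hyperparameter search described above is given below (a synthetic feature matrix stands in for the EEG features, and the parameter grids are illustrative choices rather than the exact settings used in the study).

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import GridSearchCV, StratifiedKFold
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC

      # Synthetic stand-in for an EEG feature matrix with binary seizure labels.
      X, y = make_classification(n_samples=300, n_features=20, random_state=0)
      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

      searches = {
          "SVM": GridSearchCV(
              SVC(probability=True),
              {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10]},
              cv=cv, scoring="roc_auc"),
          "KNN": GridSearchCV(
              KNeighborsClassifier(),
              {"n_neighbors": [3, 5, 7],
               "metric": ["manhattan", "euclidean"],
               "weights": ["uniform", "distance"]},
              cv=cv, scoring="roc_auc"),
      }

      for name, gs in searches.items():
          gs.fit(X, y)
          print(name, gs.best_params_, f"AUC={gs.best_score_:.3f}")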

  8. Sampling strategies to capture single-cell heterogeneity

    OpenAIRE

    Satwik Rajaram; Louise E. Heinrich; John D. Gordan; Jayant Avva; Kathy M. Bonness; Agnieszka K. Witkiewicz; James S. Malter; Chloe E. Atreya; Robert S. Warren; Lani F. Wu; Steven J. Altschuler

    2017-01-01

    Advances in single-cell technologies have highlighted the prevalence and biological significance of cellular heterogeneity. A critical question is how to design experiments that faithfully capture the true range of heterogeneity from samples of cellular populations. Here, we develop a data-driven approach, illustrated in the context of image data, that estimates the sampling depth required for prospective investigations of single-cell heterogeneity from an existing collection of samples. ...

  9. Evaluation of sampling strategies to estimate crown biomass

    Science.gov (United States)

    Krishna P Poudel; Hailemariam Temesgen; Andrew N Gray

    2015-01-01

    Depending on tree and site characteristics crown biomass accounts for a significant portion of the total aboveground biomass in the tree. Crown biomass estimation is useful for different purposes including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire...

  10. Sampling strategies for efficient estimation of tree foliage biomass

    Science.gov (United States)

    Hailemariam Temesgen; Vicente Monleon; Aaron Weiskittel; Duncan Wilson

    2011-01-01

    Conifer crowns can be highly variable both within and between trees, particularly with respect to foliage biomass and leaf area. A variety of sampling schemes have been used to estimate biomass and leaf area at the individual tree and stand scales. Rarely has the effectiveness of these sampling schemes been compared across stands or even across species. In addition,...

  11. Robust Load Cell for Discrete Contact Force Measurements of Sampling Systems and/or Instruments, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Bear Engineering proposes to develop a simple, robust, extreme environment compatible, mechanical load cell to enable the control of contact forces for placement of...

  12. WRAP Module 1 sampling strategy and waste characterization alternatives study

    Energy Technology Data Exchange (ETDEWEB)

    Bergeson, C.L.

    1994-09-30

    The Waste Receiving and Processing Module 1 Facility is designed to examine, process, certify, and ship drums and boxes of solid wastes that have a surface dose equivalent of less than 200 mrem/h. These wastes will include low-level and transuranic wastes that are retrievably stored in the 200 Area burial grounds and facilities in addition to newly generated wastes. Certification of retrievably stored wastes processed in WRAP 1 is required to meet the waste acceptance criteria for onsite treatment and disposal of low-level waste and mixed low-level waste and the Waste Isolation Pilot Plant Waste Acceptance Criteria for the disposal of TRU waste. In addition, these wastes will need to be certified for packaging in TRUPACT-II shipping containers. Characterization of the retrievably stored waste is needed to support the certification process. Characterization data will be obtained from historical records, process knowledge, nondestructive examination, nondestructive assay, visual inspection of the waste, head-gas sampling, and analysis of samples taken from the waste containers. Sample characterization refers to the method or methods that are used to test waste samples for specific analytes. The focus of this study is the sample characterization needed to accurately identify the hazardous and radioactive constituents present in the retrieved wastes that will be processed in WRAP 1. In addition, some sampling and characterization will be required to support NDA calculations and to provide an over-check for the characterization of newly generated wastes. This study results in the baseline definition of WRAP 1 sampling and analysis requirements and identifies alternative methods to meet these requirements in an efficient and economical manner.

  13. WRAP Module 1 sampling strategy and waste characterization alternatives study

    International Nuclear Information System (INIS)

    Bergeson, C.L.

    1994-01-01

    The Waste Receiving and Processing Module 1 Facility is designed to examine, process, certify, and ship drums and boxes of solid wastes that have a surface dose equivalent of less than 200 mrem/h. These wastes will include low-level and transuranic wastes that are retrievably stored in the 200 Area burial grounds and facilities in addition to newly generated wastes. Certification of retrievably stored wastes processed in WRAP 1 is required to meet the waste acceptance criteria for onsite treatment and disposal of low-level waste and mixed low-level waste and the Waste Isolation Pilot Plant Waste Acceptance Criteria for the disposal of TRU waste. In addition, these wastes will need to be certified for packaging in TRUPACT-II shipping containers. Characterization of the retrievably stored waste is needed to support the certification process. Characterization data will be obtained from historical records, process knowledge, nondestructive examination, nondestructive assay, visual inspection of the waste, head-gas sampling, and analysis of samples taken from the waste containers. Sample characterization refers to the method or methods that are used to test waste samples for specific analytes. The focus of this study is the sample characterization needed to accurately identify the hazardous and radioactive constituents present in the retrieved wastes that will be processed in WRAP 1. In addition, some sampling and characterization will be required to support NDA calculations and to provide an over-check for the characterization of newly generated wastes. This study results in the baseline definition of WRAP 1 sampling and analysis requirements and identifies alternative methods to meet these requirements in an efficient and economical manner.

  14. Calibration sets selection strategy for the construction of robust PLS models for prediction of biodiesel/diesel blends physico-chemical properties using NIR spectroscopy

    Science.gov (United States)

    Palou, Anna; Miró, Aira; Blanco, Marcelo; Larraz, Rafael; Gómez, José Francisco; Martínez, Teresa; González, Josep Maria; Alcalà, Manel

    2017-06-01

    Even though the feasibility of using near infrared (NIR) spectroscopy combined with partial least squares (PLS) regression for prediction of physico-chemical properties of biodiesel/diesel blends has been widely demonstrated, including in the calibration sets the whole variability of diesel samples from diverse production origins still remains an important challenge when constructing the models. This work presents a useful strategy for the systematic selection of calibration sets of samples of biodiesel/diesel blends from diverse origins, based on a binary code, principal components analysis (PCA) and the Kennard-Stone algorithm. Results show that using this methodology the models can keep their robustness over time. PLS calculations have been done using specialized chemometric software as well as the software of the NIR instrument installed in the plant, and both produced RMSEP values below the reproducibility of the reference methods. The models have been proven for on-line simultaneous determination of seven properties: density, cetane index, fatty acid methyl esters (FAME) content, cloud point, boiling point at 95% of recovery, flash point and sulphur.
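
    The Kennard-Stone selection step can be sketched as follows (operating on hypothetical PCA scores; the binary origin code and the plant-specific spectra used in the study are not reproduced here).

      import numpy as np

      def kennard_stone(X, n_select):
          """Select `n_select` calibration samples with the Kennard-Stone algorithm.

          Starts from the two most distant samples and then repeatedly adds the
          sample whose minimal distance to the already-selected set is largest,
          giving a calibration set that spans the spectral (or PCA-score) space.
          """
          dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
          i, j = np.unravel_index(np.argmax(dist), dist.shape)
          selected = [i, j]
          remaining = [k for k in range(len(X)) if k not in selected]
          while len(selected) < n_select:
              d_min = dist[np.ix_(remaining, selected)].min(axis=1)
              pick = remaining[int(np.argmax(d_min))]
              selected.append(pick)
              remaining.remove(pick)
          return selected

      # Hypothetical PCA scores of NIR spectra from several production origins.
      rng = np.random.default_rng(3)
      scores = rng.normal(size=(50, 3))
      print(kennard_stone(scores, n_select=10))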

  15. Human pluripotent stem cell-derived products: advances towards robust, scalable and cost-effective manufacturing strategies.

    Science.gov (United States)

    Jenkins, Michael J; Farid, Suzanne S

    2015-01-01

    The ability to develop cost-effective, scalable and robust bioprocesses for human pluripotent stem cells (hPSCs) will be key to their commercial success as cell therapies and tools for use in drug screening and disease modelling studies. This review outlines key process economic drivers for hPSCs and progress made on improving the economic and operational feasibility of hPSC bioprocesses. Factors influencing key cost metrics, namely capital investment and cost of goods, for hPSCs are discussed. Step efficiencies particularly for differentiation, media requirements and technology choice are amongst the key process economic drivers identified for hPSCs. Progress made to address these cost drivers in hPSC bioprocessing strategies is discussed. These include improving expansion and differentiation yields in planar and bioreactor technologies, the development of xeno-free media and microcarrier coatings, identification of optimal bioprocess operating conditions to control cell fate and the development of directed differentiation protocols that reduce reliance on expensive morphogens such as growth factors and small molecules. These approaches offer methods to further optimise hPSC bioprocessing in terms of its commercial feasibility. © 2014 The Authors. Biotechnology Journal published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

  16. Statistical sampling strategies for survey of soil contamination

    NARCIS (Netherlands)

    Brus, D.J.

    2011-01-01

    This chapter reviews methods for selecting sampling locations in contaminated soils for three situations. In the first situation a global estimate of the soil contamination in an area is required. The result of the survey is a number or a series of numbers per contaminant, e.g. the estimated mean

  17. Robustness and Strategies of Adaptation among Farmer Varieties of African Rice (Oryza glaberrima) and Asian Rice (Oryza sativa) across West Africa

    NARCIS (Netherlands)

    Mokuwa, A.; Nuijten, H.A.C.P.; Okry, F.; Teeken, B.W.E.; Maat, H.; Richards, P.; Struik, P.C.

    2013-01-01

    This study offers evidence of the robustness of farmer rice varieties (Oryza glaberrima and O. sativa) in West Africa. Our experiments in five West African countries showed that farmer varieties were tolerant of sub-optimal conditions, but employed a range of strategies to cope with stress.

  18. Measurement of radioactivity in the environment - Soil - Part 2: Guidance for the selection of the sampling strategy, sampling and pre-treatment of samples

    International Nuclear Information System (INIS)

    2007-01-01

    This part of ISO 18589 specifies the general requirements, based on ISO 11074 and ISO/IEC 17025, for all steps in the planning (desk study and area reconnaissance) of the sampling and the preparation of samples for testing. It includes the selection of the sampling strategy, the outline of the sampling plan, the presentation of general sampling methods and equipment, as well as the methodology of the pre-treatment of samples adapted to the measurements of the activity of radionuclides in soil. This part of ISO 18589 is addressed to the people responsible for determining the radioactivity present in soil for the purpose of radiation protection. It is applicable to soil from gardens, farmland, urban or industrial sites, as well as soil not affected by human activities. This part of ISO 18589 is applicable to all laboratories regardless of the number of personnel or the range of the testing performed. When a laboratory does not undertake one or more of the activities covered by this part of ISO 18589, such as planning, sampling or testing, the corresponding requirements do not apply. Information is provided on scope, normative references, terms and definitions and symbols, principle, sampling strategy, sampling plan, sampling process, pre-treatment of samples and recorded information. Five annexes inform about selection of the sampling strategy according to the objectives and the radiological characterization of the site and sampling areas, diagram of the evolution of the sample characteristics from the sampling site to the laboratory, example of sampling plan for a site divided in three sampling areas, example of a sampling record for a single/composite sample and example for a sample record for a soil profile with soil description. A bibliography is provided

  19. A sampling strategy for estimating plot average annual fluxes of chemical elements from forest soils

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.; Vries, de W.

    2010-01-01

    A sampling strategy for estimating spatially averaged annual element leaching fluxes from forest soils is presented and tested in three Dutch forest monitoring plots. In this method sampling locations and times (days) are selected by probability sampling. Sampling locations were selected by

  20. Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.

    Science.gov (United States)

    Blutke, Andreas; Wanke, Rüdiger

    2018-03-06

    In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analyses methods, including analyses not specified at the time point of sampling, represent meaningful approaches to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimen for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimen from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical
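
    As one computable element of the workflow, organ volume estimation by point counting on systematically sampled slabs follows the Cavalieri principle, sketched below with hypothetical slab thickness, grid spacing and point counts.

      def cavalieri_volume(point_counts, slab_thickness_cm, grid_spacing_cm):
          """Estimate organ volume (cm^3) from systematic slabs and a point grid.

          V = t * a(p) * sum(P), where t is the slab thickness, a(p) the area
          associated with each grid point, and sum(P) the total number of grid
          points hitting tissue across all slabs.
          """
          area_per_point = grid_spacing_cm ** 2
          return slab_thickness_cm * area_per_point * sum(point_counts)

      counts = [12, 18, 23, 25, 21, 14, 6]   # points hitting tissue on each slab
      print(f"estimated volume: {cavalieri_volume(counts, 1.0, 1.5):.1f} cm^3")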

  1. Development and Demonstration of a Method to Evaluate Bio-Sampling Strategies Using Building Simulation and Sample Planning Software

    OpenAIRE

    Dols, W. Stuart; Persily, Andrew K.; Morrow, Jayne B.; Matzke, Brett D.; Sego, Landon H.; Nuffer, Lisa L.; Pulsipher, Brent A.

    2010-01-01

    In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by vir...

  2. Practical experiences with an extended screening strategy for genetically modified organisms (GMOs) in real-life samples.

    Science.gov (United States)

    Scholtens, Ingrid; Laurensse, Emile; Molenaar, Bonnie; Zaaijer, Stephanie; Gaballo, Heidi; Boleij, Peter; Bak, Arno; Kok, Esther

    2013-09-25

    Nowadays most animal feed products imported into Europe have a GMO (genetically modified organism) label. This means that they contain European Union (EU)-authorized GMOs. For enforcement of these labeling requirements it is necessary, with the rising number of EU-authorized GMOs, to perform an increasing number of analyses. In addition to this, it is necessary to test products for the potential presence of EU-unauthorized GMOs. Analysis for EU-authorized and -unauthorized GMOs in animal feed has thus become laborious and expensive. Initial screening steps may reduce the number of GMO identification methods that need to be applied, but with the increasing diversity, screening with GMO elements has also become more complex. For the present study, an informative, detailed 24-element screening and subsequent identification strategy was applied to 50 animal feed samples. Almost all feed samples were labeled as containing GMO-derived materials. The main goal of the study was therefore to investigate whether a detailed screening strategy would reduce the number of subsequent identification analyses. An additional goal was to test the samples in this way for the potential presence of EU-unauthorized GMOs. Finally, to test the robustness of the approach, eight of the samples were tested in a concise interlaboratory study. No significant differences were found between the results of the two laboratories.

  3. The Viking X ray fluorescence experiment - Sampling strategies and laboratory simulations. [Mars soil sampling

    Science.gov (United States)

    Baird, A. K.; Castro, A. J.; Clark, B. C.; Toulmin, P., III; Rose, H., Jr.; Keil, K.; Gooding, J. L.

    1977-01-01

    Ten samples of Mars regolith material (six on Viking Lander 1 and four on Viking Lander 2) have been delivered to the X ray fluorescence spectrometers as of March 31, 1977. An additional six samples at least are planned for acquisition in the remaining Extended Mission (to January 1979) for each lander. All samples acquired are Martian fines from the near surface (less than 6-cm depth) of the landing sites except the latest on Viking Lander 1, which is fine material from the bottom of a trench dug to a depth of 25 cm. Several attempts on each lander to acquire fresh rock material (in pebble sizes) for analysis have yielded only cemented surface crustal material (duricrust). Laboratory simulation and experimentation are required both for mission planning of sampling and for interpretation of data returned from Mars. This paper is concerned with the rationale for sample site selections, surface sampler operations, and the supportive laboratory studies needed to interpret X ray results from Mars.

  4. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
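
    The simulation-based evaluation of the stratified design can be sketched as follows (stratum sizes and density-category proportions below are hypothetical stand-ins for the NCSP target population).

      import numpy as np

      rng = np.random.default_rng(7)

      # Hypothetical target population: breast density category (1-4) proportions
      # differ by stratum; stratum sizes roughly mimic a metropolitan/urban/rural split.
      strata = {
          "metropolitan": (700_000, [0.10, 0.30, 0.40, 0.20]),
          "urban":        (450_000, [0.12, 0.34, 0.38, 0.16]),
          "rural":        (190_000, [0.15, 0.36, 0.35, 0.14]),
      }
      total_n = sum(size for size, _ in strata.values())
      sample_size = 4_000

      def one_survey():
          """Draw one proportionally allocated stratified sample and estimate
          the population-wide distribution of density categories."""
          est = np.zeros(4)
          for size, probs in strata.values():
              n_h = round(sample_size * size / total_n)
              counts = rng.multinomial(n_h, probs)
              est += counts / n_h * (size / total_n)   # stratum-weighted proportions
          return est

      sims = np.array([one_survey() for _ in range(1_000)])
      print("mean estimate:", sims.mean(axis=0).round(4))
      print("simulation SD:", sims.std(axis=0).round(4))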

  5. Identification of a robust subpathway-based signature for acute myeloid leukemia prognosis using an miRNA integrated strategy.

    Science.gov (United States)

    Chang, Huijuan; Gao, Qiuying; Ding, Wei; Qing, Xueqin

    2018-01-01

    Acute myeloid leukemia (AML) is a heterogeneous disease, and survival signatures are urgently needed to better monitor treatment. MiRNAs displayed vital regulatory roles on target genes, which was necessary involved in the complex disease. We therefore examined the expression levels of miRNAs and genes to identify robust signatures for survival benefit analyses. First, we reconstructed subpathway graphs by embedding miRNA components that were derived from low-throughput miRNA-gene interactions. Then, we randomly divided the data sets from The Cancer Genome Atlas (TCGA) into training and testing sets, and further formed 100 subsets based on the training set. Using each subset, we identified survival-related miRNAs and genes, and identified survival subpathways based on the reconstructed subpathway graphs. After statistical analyses of these survival subpathways, the most robust subpathways with the top three ranks were identified, and risk scores were calculated based on these robust subpathways for AML patient prognoses. Among these robust subpathways, three representative subpathways, path: 05200_10 from Pathways in cancer, path: 04110_20 from Cell cycle, and path: 04510_8 from Focal adhesion, were significantly associated with patient survival in the TCGA training and testing sets based on subpathway risk scores. In conclusion, we performed integrated analyses of miRNAs and genes to identify robust prognostic subpathways, and calculated subpathway risk scores to characterize AML patient survival.

  6. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
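
    The difference between the two initial designs can be illustrated directly, before any optimization is applied, by comparing a simple space-filling criterion for midpoint and random Latin hypercube designs (the design size and the maximin criterion below are illustrative choices, not the study's full OLHS procedure).

      import numpy as np

      rng = np.random.default_rng(11)

      def lhs(n, d, midpoint=True):
          """Generate a Latin hypercube design of n points in d dimensions.

          With midpoint=True each point sits at the centre of its hypercube
          interval; otherwise it is placed uniformly at random inside it.
          """
          u = 0.5 * np.ones((n, d)) if midpoint else rng.random((n, d))
          cells = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
          return (cells + u) / n

      def min_pairwise_distance(x):
          """Maximin space-filling criterion: the smallest inter-point distance."""
          d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
          return d[np.triu_indices(len(x), k=1)].min()

      for label, mid in (("midpoint LHS", True), ("random LHS", False)):
          crits = [min_pairwise_distance(lhs(20, 2, midpoint=mid)) for _ in range(200)]
          print(f"{label}: mean min-distance = {np.mean(crits):.3f}")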

  7. Dried blood spot measurement: application in tacrolimus monitoring using limited sampling strategy and abbreviated AUC estimation.

    Science.gov (United States)

    Cheung, Chi Yuen; van der Heijden, Jaques; Hoogtanders, Karin; Christiaans, Maarten; Liu, Yan Lun; Chan, Yiu Han; Choi, Koon Shing; van de Plas, Afke; Shek, Chi Chung; Chau, Ka Foon; Li, Chun Sang; van Hooff, Johannes; Stolk, Leo

    2008-02-01

    Dried blood spot (DBS) sampling and high-performance liquid chromatography tandem-mass spectrometry have been developed in monitoring tacrolimus levels. Our center favors the use of limited sampling strategy and abbreviated formula to estimate the area under concentration-time curve (AUC(0-12)). However, it is inconvenient for patients because they have to wait in the center for blood sampling. We investigated the application of DBS method in tacrolimus level monitoring using limited sampling strategy and abbreviated AUC estimation approach. Duplicate venous samples were obtained at each time point (C(0), C(2), and C(4)). To determine the stability of blood samples, one venous sample was sent to our laboratory immediately. The other duplicate venous samples, together with simultaneous fingerprick blood samples, were sent to the University of Maastricht in the Netherlands. Thirty six patients were recruited and 108 sets of blood samples were collected. There was a highly significant relationship between AUC(0-12), estimated from venous blood samples, and fingerprick blood samples (r(2) = 0.96, P AUC(0-12) strategy as drug monitoring.
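
    The abbreviated-AUC idea can be sketched as a simple regression formula applied to the three timed concentrations (the coefficients and concentration values below are hypothetical placeholders; a real formula would be fitted against full 12-hour concentration-time profiles in the centre's own population).

      import numpy as np

      # Hypothetical regression coefficients for an abbreviated AUC(0-12) formula of
      # the form AUC = a0 + a1*C0 + a2*C2 + a3*C4.
      COEF = np.array([10.0, 1.5, 2.4, 4.1])

      def abbreviated_auc(c0, c2, c4):
          """Estimate tacrolimus AUC(0-12) (ng*h/mL) from three timed levels."""
          return float(COEF @ np.array([1.0, c0, c2, c4]))

      # Venous whole-blood versus dried-blood-spot (fingerprick) levels in ng/mL,
      # hypothetical values for one patient:
      venous = abbreviated_auc(c0=6.2, c2=15.8, c4=11.0)
      dbs    = abbreviated_auc(c0=6.0, c2=16.3, c4=10.6)
      print(f"venous AUC0-12 = {venous:.1f}, DBS AUC0-12 = {dbs:.1f}")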

  8. Development and Demonstration of a Method to Evaluate Bio-Sampling Strategies Using Building Simulation and Sample Planning Software.

    Science.gov (United States)

    Dols, W Stuart; Persily, Andrew K; Morrow, Jayne B; Matzke, Brett D; Sego, Landon H; Nuffer, Lisa L; Pulsipher, Brent A

    2010-01-01

    In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority area zones within the building and then sampling designs and strategies could be developed based on those zones.

  9. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    Science.gov (United States)

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively, we discuss how

  10. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using illumina platform.

    Directory of Open Access Journals (Sweden)

    Abhishek Mitra

    Full Text Available Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations are causing substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both problems with quality of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol remarkably increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively

  11. Recruitment of hard-to-reach population subgroups via adaptations of the snowball sampling strategy.

    Science.gov (United States)

    Sadler, Georgia Robins; Lee, Hau-Chen; Lim, Rod Seung-Hwan; Fullerton, Judith

    2010-09-01

    Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author's program of research are provided to demonstrate how adaptations of snowball sampling can be used effectively in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more-vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or for research studies when the recruitment of a population-based sample is not essential.

  12. Robust and economical multi-sample, multi-wavelength UV/vis absorption and fluorescence detector for biological and chemical contamination

    Science.gov (United States)

    Lu, Peter J.; Hoehl, Melanie M.; Macarthur, James B.; Sims, Peter A.; Ma, Hongshen; Slocum, Alexander H.

    2012-09-01

    We present a portable multi-channel, multi-sample UV/vis absorption and fluorescence detection device, which has no moving parts, can operate wirelessly and on batteries, interfaces with smart mobile phones or tablets, and has the sensitivity of commercial instruments costing an order of magnitude more. We use UV absorption to measure the concentration of ethylene glycol in water solutions at all levels above those deemed unsafe by the United States Food and Drug Administration; in addition we use fluorescence to measure the concentration of d-glucose. Both wavelengths can be used concurrently to increase measurement robustness and increase detection sensitivity. Our small robust economical device can be deployed in the absence of laboratory infrastructure, and therefore may find applications immediately following natural disasters, and in more general deployment for much broader-based testing of food, agricultural and household products to prevent outbreaks of poisoning and disease.
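
    Quantification by UV absorption of this kind ultimately rests on a Beer-Lambert calibration: absorbance is fitted against standards of known concentration, and the fit is inverted for unknowns. The Python sketch below illustrates that step with made-up calibration values; the numbers are assumptions for illustration, not measurements from the device described above.

```python
import numpy as np

# Hypothetical calibration data: absorbance (AU) of ethylene glycol standards
# at known concentrations (% v/v). Values are illustrative only.
standards_conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
standards_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.79, 1.58])

# Beer-Lambert: A = epsilon * l * c + A0, i.e. a straight line A = m*c + b
m, b = np.polyfit(standards_conc, standards_abs, 1)

def concentration_from_absorbance(a):
    """Invert the calibration line to estimate concentration."""
    return (a - b) / m

sample_absorbance = 0.55
print(f"slope = {m:.3f} AU per %, intercept = {b:.3f} AU")
print(f"estimated concentration: {concentration_from_absorbance(sample_absorbance):.2f} % v/v")
```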

  13. Sampling strategies and stopping criteria for stochastic dual dynamic programming: a case study in long-term hydrothermal scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Homem-de-Mello, Tito [University of Illinois at Chicago, Department of Mechanical and Industrial Engineering, Chicago, IL (United States); Matos, Vitor L. de; Finardi, Erlon C. [Universidade Federal de Santa Catarina, LabPlan - Laboratorio de Planejamento de Sistemas de Energia Eletrica, Florianopolis (Brazil)

    2011-03-15

    The long-term hydrothermal scheduling is one of the most important problems to be solved in the power systems area. This problem aims to obtain an optimal policy, under water (energy) resources uncertainty, for hydro and thermal plants over a multi-annual planning horizon. It is natural to model the problem as a multi-stage stochastic program, a class of models for which algorithms have been developed. The original stochastic process is represented by a finite scenario tree and, because of the large number of stages, a sampling-based method such as the Stochastic Dual Dynamic Programming (SDDP) algorithm is required. The purpose of this paper is two-fold. Firstly, we study the application of two alternative sampling strategies to the standard Monte Carlo - namely, Latin hypercube sampling and randomized quasi-Monte Carlo - for the generation of scenario trees, as well as for the sampling of scenarios that is part of the SDDP algorithm. Secondly, we discuss the formulation of stopping criteria for the optimization algorithm in terms of statistical hypothesis tests, which allows us to propose an alternative criterion that is more robust than that originally proposed for the SDDP. We test these ideas on a problem associated with the whole Brazilian power system, with a three-year planning horizon. (orig.)
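
    The first idea in the abstract, replacing crude Monte Carlo with Latin hypercube sampling when generating scenarios, can be illustrated in a few lines. The sketch below compares the spread of the sample mean of a hypothetical inflow distribution under the two schemes; the distribution, its parameters and the sample sizes are assumptions for illustration, not the Brazilian case-study data, and the SDDP algorithm itself is not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def latin_hypercube_normal(n, mean, sd, rng):
    """Draw n Latin hypercube samples from a normal distribution:
    one draw per equal-probability stratum, in random order."""
    u = (np.arange(n) + rng.uniform(size=n)) / n  # one point per stratum
    rng.shuffle(u)
    return stats.norm.ppf(u, loc=mean, scale=sd)

# Hypothetical monthly energy-inflow distribution (illustrative numbers).
mean_inflow, sd_inflow, n = 1000.0, 250.0, 50

# Spread of the estimated mean inflow over repeated scenario sets.
mc_means = [rng.normal(mean_inflow, sd_inflow, n).mean() for _ in range(2000)]
lhs_means = [latin_hypercube_normal(n, mean_inflow, sd_inflow, rng).mean()
             for _ in range(2000)]

print("std of estimated mean, crude Monte Carlo:", np.std(mc_means))
print("std of estimated mean, Latin hypercube  :", np.std(lhs_means))
```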

  14. [Study of spatial stratified sampling strategy of Oncomelania hupensis snail survey based on plant abundance].

    Science.gov (United States)

    Xun-Ping, W; An, Z

    2017-07-27

    Objective: To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods: A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored in an experimental study in a 50 m×50 m plot in a marshland in the Poyang Lake region. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required number of optimal sampling points for each layer was calculated through the Hammond-McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison study was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results: The method proposed in this study (SOPA) had the minimal absolute error of 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion: The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.
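
    As a hedged illustration of allocating sampling points across abundance strata, the sketch below uses classical Neyman allocation (sample sizes proportional to stratum size times stratum standard deviation). This is a standard alternative to, not a reproduction of, the Hammond-McCullagh calculation used by the authors, and the stratum sizes and standard deviations are invented for illustration.

```python
import numpy as np

# Hypothetical strata defined by plant abundance, with stratum sizes (number
# of candidate grid cells) and pilot standard deviations of snail density.
# Numbers are illustrative, not from the cited survey.
stratum_size = np.array([400, 300, 150, 100, 50])   # N_h
stratum_sd = np.array([0.5, 1.0, 1.8, 2.5, 3.5])    # S_h
n_total = 120                                        # total sampling points

# Neyman allocation: n_h proportional to N_h * S_h
weights = stratum_size * stratum_sd
n_h = np.maximum(1, np.round(n_total * weights / weights.sum())).astype(int)

for h, n in enumerate(n_h, start=1):
    print(f"stratum {h}: {n} sampling points")
```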

  15. Impact of sampling strategy on stream load estimates in till landscape of the Midwest

    Science.gov (United States)

    Vidon, P.; Hubbard, L.E.; Soyeux, E.

    2009-01-01

    Accurately estimating various solute loads in streams during storms is critical to accurately determining maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three different solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment, so that errors in solute load calculations can be taken into account by landscape managers and sampling strategies can be optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.

  16. Toward a bioethical framework for antibiotic use, antimicrobial resistance and for empirically designing ethically robust strategies to protect human health: a research protocol.

    Science.gov (United States)

    Hernández-Marrero, Pablo; Martins Pereira, Sandra; de Sá Brandão, Patrícia Joana; Araújo, Joana; Carvalho, Ana Sofia

    2017-12-01

    Introduction: Antimicrobial resistance (AMR) is a challenging global and public health issue, raising bioethical challenges, considerations and strategies. Objectives: This research protocol presents a conceptual model leading to formulating an empirically based bioethics framework for antibiotic use, AMR and designing ethically robust strategies to protect human health. Methods: Mixed methods research will be used and operationalized into five substudies. The bioethical framework will encompass and integrate two theoretical models: global bioethics and ethical decision-making. Results: Being a study protocol, this article reports on planned and ongoing research. Conclusions: Based on data collection, future findings and using a comprehensive, integrative, evidence-based approach, a step-by-step bioethical framework will be developed for (i) responsible use of antibiotics in healthcare and (ii) design of strategies to decrease AMR. This will entail the analysis and interpretation of approaches from several bioethical theories, including deontological and consequentialist approaches, and the implications of uncertainty to these approaches.

  17. Microchip-electrochemistry route for rapid screening of hydroquinone and arbutin from miscellaneous samples: Investigation of the robustness of a simple cross-injector system

    International Nuclear Information System (INIS)

    Crevillen, Agustin G.; Barrigas, Ines; Blasco, Antonio Javier; Gonzalez, Maria Cristina; Escarpa, Alberto

    2006-01-01

    This work examines in depth the analytical performance of an example of 'first-generation' microdevices: the capillary electrophoresis (CE) microchip with end-channel electrochemical detection (ED). A hydroquinone and arbutin separation, strategically chosen as a route involving pharmaceutical-clinical testing, public safety and food control scenarios, was carried out. The reproducibility of the unpinched electrokinetic protocol was carefully studied, and the technical possibility of working indiscriminately and/or sequentially with both simple cross-injectors was also demonstrated using a real sample (R.S.D.'s less than 7%). The robustness of the injection protocol allowed checking the state of the microchip/detector coupling and following the extraction efficiency of the analyte from the real sample. Separation variables such as pH, ionic strength and separation voltage were also carefully assayed and optimized. Analyte screening was performed using borate buffer (pH 9, 60 mM) in less than 180 s in the samples studied, dramatically improving on the analysis times obtained for the same analytes on a conventional scale (15 min), with good precision (R.S.D.'s ranging 5-10%), accuracy (recoveries ranging 90-110%) and acceptable resolution (Rs ≥ 1.0). In addition, the excellent analytical performance of the overall analytical method indicated the quality of the whole analytical microsystem and allowed the definition of robustness to be introduced for methodologies developed in the 'lab-on-a-chip' scene.

  18. Microchip-electrochemistry route for rapid screening of hydroquinone and arbutin from miscellaneous samples: Investigation of the robustness of a simple cross-injector system

    Energy Technology Data Exchange (ETDEWEB)

    Crevillen, Agustin G. [Dpto. Quimica Analitica e Ingenieria Quimica, Universidad de Alcala, 28871 Alcala de Henares, Madrid (Spain); Barrigas, Ines [Dpto. Quimica Analitica e Ingenieria Quimica, Universidad de Alcala, 28871 Alcala de Henares, Madrid (Spain); Blasco, Antonio Javier [Dpto. Quimica Analitica e Ingenieria Quimica, Universidad de Alcala, 28871 Alcala de Henares, Madrid (Spain); Gonzalez, Maria Cristina [Dpto. Quimica Analitica e Ingenieria Quimica, Universidad de Alcala, 28871 Alcala de Henares, Madrid (Spain); Escarpa, Alberto [Dpto. Quimica Analitica e Ingenieria Quimica, Universidad de Alcala, 28871 Alcala de Henares, Madrid (Spain)]. E-mail: alberto.escarpa@uah.es

    2006-03-15

    This work examines in depth the analytical performance of an example of 'first-generation' microdevices: the capillary electrophoresis (CE) microchip with end-channel electrochemical detection (ED). A hydroquinone and arbutin separation, strategically chosen as a route involving pharmaceutical-clinical testing, public safety and food control scenarios, was carried out. The reproducibility of the unpinched electrokinetic protocol was carefully studied, and the technical possibility of working indiscriminately and/or sequentially with both simple cross-injectors was also demonstrated using a real sample (R.S.D.'s less than 7%). The robustness of the injection protocol allowed checking the state of the microchip/detector coupling and following the extraction efficiency of the analyte from the real sample. Separation variables such as pH, ionic strength and separation voltage were also carefully assayed and optimized. Analyte screening was performed using borate buffer (pH 9, 60 mM) in less than 180 s in the samples studied, dramatically improving on the analysis times obtained for the same analytes on a conventional scale (15 min), with good precision (R.S.D.'s ranging 5-10%), accuracy (recoveries ranging 90-110%) and acceptable resolution (Rs ≥ 1.0). In addition, the excellent analytical performance of the overall analytical method indicated the quality of the whole analytical microsystem and allowed the definition of robustness to be introduced for methodologies developed in the 'lab-on-a-chip' scene.

  19. Sample preparation composite and replicate strategy for assay of solid oral drug products.

    Science.gov (United States)

    Harrington, Brent; Nickerson, Beverly; Guo, Michele Xuemei; Barber, Marc; Giamalva, David; Lee, Carlos; Scrivens, Garry

    2014-12-16

    In pharmaceutical analysis, the results of drug product assay testing are used to make decisions regarding the quality, efficacy, and stability of the drug product. In order to make sound risk-based decisions concerning drug product potency, an understanding of the uncertainty of the reportable assay value is required. Utilizing the most restrictive criteria in current regulatory documentation, a maximum variability attributed to method repeatability is defined for a drug product potency assay. A sampling strategy that reduces the repeatability component of the assay variability below this predefined maximum is demonstrated. The sampling strategy consists of determining the number of dosage units (k) to be prepared in a composite sample of which there may be a number of equivalent replicate (r) sample preparations. The variability, as measured by the standard error (SE), of a potency assay consists of several sources such as sample preparation and dosage unit variability. A sampling scheme that increases the number of sample preparations (r) and/or number of dosage units (k) per sample preparation will reduce the assay variability and thus decrease the uncertainty around decisions made concerning the potency of the drug product. A maximum allowable repeatability component of the standard error (SE) for the potency assay is derived using material in current regulatory documents. A table of solutions for the number of dosage units per sample preparation (k) and number of replicate sample preparations (r) is presented for any ratio of sample preparation and dosage unit variability.
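
    One common way to model the repeatability component described above is to treat the reportable value as the mean of r composite preparations, each made from k dosage units, so that SE = sqrt((sigma_sp^2 + sigma_du^2/k)/r). The sketch below tabulates which (k, r) combinations meet a maximum allowable SE under that simple model; the variance components and the limit are assumed values for illustration, not figures from the paper.

```python
import math

# Assumed variance components (expressed as %RSD) - illustrative only.
sd_prep = 0.8   # sample-preparation variability per preparation
sd_unit = 2.0   # dosage-unit (content uniformity) variability
max_se = 0.9    # hypothetical maximum allowed repeatability SE (same % units)

def repeatability_se(k, r):
    """SE of the mean of r composite preparations, each made from k units,
    under a simple two-component variance model."""
    return math.sqrt((sd_prep**2 + sd_unit**2 / k) / r)

print(" k   r    SE    meets limit")
for k in (1, 3, 5, 10):
    for r in (1, 2, 3):
        se = repeatability_se(k, r)
        print(f"{k:2d}  {r:2d}  {se:5.2f}   {se <= max_se}")
```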

  20. Potential-Decomposition Strategy in Markov Chain Monte Carlo Sampling Algorithms

    International Nuclear Information System (INIS)

    Shangguan Danhua; Bao Jingdong

    2010-01-01

    We introduce the potential-decomposition strategy (PDS), which can be used in Markov chain Monte Carlo sampling algorithms. PDS can be designed to make particles move in a modified potential that favors diffusion in phase space; then, by rejecting some trial samples, the target distributions can be sampled in an unbiased manner. Furthermore, if the accepted trial samples are insufficient, they can be recycled as initial states to form more unbiased samples. This strategy can greatly improve efficiency when the original potential has multiple metastable states separated by large barriers. We apply PDS to the 2d Ising model and a double-well potential model with a large barrier, demonstrating in these two representative examples that convergence is accelerated by orders of magnitude.
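
    The core idea, letting the chain wander in a flattened potential that diffuses easily across barriers and then rejecting some samples so the original target is recovered without bias, can be shown on a toy double-well. The sketch below is a generic flattened-potential/rejection scheme written for illustration, not the authors' exact PDS algorithm, and the potential, step size and sample counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def U(x):        # target potential: double well with a large barrier
    return 8.0 * (x**2 - 1.0)**2

def U_mod(x):    # modified (flattened) potential that favors diffusion
    return 0.5 * U(x)

# 1) Metropolis walk in the flattened potential U_mod.
n_steps, step = 100_000, 0.5
x, chain = 0.0, []
for _ in range(n_steps):
    y = x + rng.normal(0.0, step)
    if rng.random() < np.exp(-(U_mod(y) - U_mod(x))):
        x = y
    chain.append(x)
chain = np.array(chain)

# 2) Rejection step: because U >= U_mod here, accepting a state with
#    probability exp(-(U - U_mod)) leaves samples distributed ~ exp(-U).
keep = rng.random(n_steps) < np.exp(-(U(chain) - U_mod(chain)))
target_samples = chain[keep]

print("fraction of time in the right well (should be ~0.5):",
      np.mean(target_samples > 0))
```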

  1. Sampling stored product insect pests: a comparison of four statistical sampling models for probability of pest detection

    Science.gov (United States)

    Statistically robust sampling strategies form an integral component of grain storage and handling activities throughout the world. Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult due to species biology and behavioral characteristics. ...

  2. Extreme robustness of scaling in sample space reducing processes explains Zipf’s law in diffusion on directed networks

    International Nuclear Information System (INIS)

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2016-01-01

    It has been shown recently that a specific class of path-dependent stochastic processes, which reduce their sample space as they unfold, lead to exact scaling laws in frequency and rank distributions. Such sample space reducing processes offer an alternative new mechanism to understand the emergence of scaling in countless processes. The corresponding power law exponents were shown to be related to noise levels in the process. Here we show that the emergence of scaling is not limited to the simplest SSRPs, but holds for a huge domain of stochastic processes that are characterised by non-uniform prior distributions. We demonstrate mathematically that in the absence of noise the scaling exponents converge to −1 (Zipf’s law) for almost all prior distributions. As a consequence it becomes possible to fully understand targeted diffusion on weighted directed networks and its associated scaling laws in node visit distributions. The presence of cycles can be properly interpreted as playing the same role as noise in SSRPs and, accordingly, determine the scaling exponents. The result that Zipf’s law emerges as a generic feature of diffusion on networks, regardless of its details, and that the exponent of visiting times is related to the amount of cycles in a network could be relevant for a series of applications in traffic-, transport- and supply chain management. (paper)

  3. Sampling strategy for a large scale indoor radiation survey - a pilot project

    International Nuclear Information System (INIS)

    Strand, T.; Stranden, E.

    1986-01-01

    Optimisation of a stratified random sampling strategy for large scale indoor radiation surveys is discussed. It is based on the results from a small scale pilot project where variances in dose rates within different categories of houses were assessed. By selecting a predetermined precision level for the mean dose rate in a given region, the number of measurements needed can be optimised. The results of a pilot project in Norway are presented together with the development of the final sampling strategy for a planned large scale survey. (author)
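
    The optimisation described, choosing how many measurements each house category needs so the mean dose rate reaches a predetermined precision, follows the familiar sample-size relation n ≈ (z·s/E)² applied stratum by stratum. The sketch below applies it to invented pilot-study standard deviations; the categories, values and precision target are assumptions for illustration, not the Norwegian survey data.

```python
import math

# Pilot-study dose-rate standard deviations (nGy/h) for hypothetical house
# categories; values are illustrative only.
strata = {
    "wood, slab on ground": 18.0,
    "wood, basement": 25.0,
    "concrete": 32.0,
    "lightweight concrete": 40.0,
}
z = 1.96          # ~95% confidence
precision = 5.0   # desired half-width of the CI for each stratum mean (nGy/h)

total = 0
for name, sd in strata.items():
    n = math.ceil((z * sd / precision) ** 2)
    total += n
    print(f"{name:22s}: {n:4d} measurements")
print("total measurements needed:", total)
```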

  4. Mendelian breeding units versus standard sampling strategies: mitochondrial DNA variation in southwest Sardinia

    Directory of Open Access Journals (Sweden)

    Daria Sanna

    2011-01-01

    Full Text Available We report a sampling strategy based on Mendelian Breeding Units (MBUs), representing an interbreeding group of individuals sharing a common gene pool. The identification of MBUs is crucial for case-control experimental design in association studies. The aim of this work was to evaluate the possible existence of bias in terms of genetic variability and haplogroup frequencies in the MBU sample, due to severe sample selection. In order to reach this goal, the MBU sampling strategy was compared to a standard selection of individuals according to their surname and place of birth. We analysed mitochondrial DNA variation (first hypervariable segment and coding region) in unrelated healthy subjects from two different areas of Sardinia: the area around the town of Cabras and the western Campidano area. No statistically significant differences were observed when the two sampling methods were compared, indicating that the stringent sample selection needed to establish a MBU does not alter original genetic variability and haplogroup distribution. Therefore, the MBU sampling strategy can be considered a useful tool in association studies of complex traits.

  5. Observing System Simulation Experiments for the assessment of temperature sampling strategies in the Mediterranean Sea

    Directory of Open Access Journals (Sweden)

    F. Raicich

    2003-01-01

    Full Text Available For the first time in the Mediterranean Sea various temperature sampling strategies are studied and compared to each other by means of the Observing System Simulation Experiment technique. Their usefulness in the framework of the Mediterranean Forecasting System (MFS) is assessed by quantifying their impact in a Mediterranean General Circulation Model in numerical twin experiments via univariate data assimilation of temperature profiles in summer and winter conditions. Data assimilation is performed by means of the optimal interpolation algorithm implemented in the SOFA (System for Ocean Forecasting and Analysis) code. The sampling strategies studied here include various combinations of eXpendable BathyThermograph (XBT) profiles collected along Volunteer Observing Ship (VOS) tracks, Airborne XBTs (AXBTs) and sea surface temperatures. The actual sampling strategy adopted in the MFS Pilot Project during the Targeted Operational Period (TOP, winter-spring 2000) is also studied. The data impact is quantified by the error reduction relative to the free run. The most effective sampling strategies determine 25–40% error reduction, depending on the season, the geographic area and the depth range. A qualitative relationship can be recognized in terms of the spread of information from the data positions, between basin circulation features and spatial patterns of the error reduction fields, as a function of different spatial and seasonal characteristics of the dynamics. The largest error reductions are observed when samplings are characterized by extensive spatial coverages, as in the cases of AXBTs and the combination of XBTs and surface temperatures. The sampling strategy adopted during the TOP is characterized by little impact, as a consequence of a sampling frequency that is too low. Key words. Oceanography: general (marginal and semi-enclosed seas); numerical modelling

  6. Observing System Simulation Experiments for the assessment of temperature sampling strategies in the Mediterranean Sea

    Directory of Open Access Journals (Sweden)

    F. Raicich

    Full Text Available For the first time in the Mediterranean Sea various temperature sampling strategies are studied and compared to each other by means of the Observing System Simulation Experiment technique. Their usefulness in the framework of the Mediterranean Forecasting System (MFS) is assessed by quantifying their impact in a Mediterranean General Circulation Model in numerical twin experiments via univariate data assimilation of temperature profiles in summer and winter conditions. Data assimilation is performed by means of the optimal interpolation algorithm implemented in the SOFA (System for Ocean Forecasting and Analysis) code. The sampling strategies studied here include various combinations of eXpendable BathyThermograph (XBT) profiles collected along Volunteer Observing Ship (VOS) tracks, Airborne XBTs (AXBTs) and sea surface temperatures. The actual sampling strategy adopted in the MFS Pilot Project during the Targeted Operational Period (TOP, winter-spring 2000) is also studied.

    The data impact is quantified by the error reduction relative to the free run. The most effective sampling strategies determine 25–40% error reduction, depending on the season, the geographic area and the depth range. A qualitative relationship can be recognized in terms of the spread of information from the data positions, between basin circulation features and spatial patterns of the error reduction fields, as a function of different spatial and seasonal characteristics of the dynamics. The largest error reductions are observed when samplings are characterized by extensive spatial coverages, as in the cases of AXBTs and the combination of XBTs and surface temperatures. The sampling strategy adopted during the TOP is characterized by little impact, as a consequence of a sampling frequency that is too low.

    Key words. Oceanography: general (marginal and semi-enclosed seas); numerical modelling

  7. A census-weighted, spatially-stratified household sampling strategy for urban malaria epidemiology

    Directory of Open Access Journals (Sweden)

    Slutsker Laurence

    2008-02-01

    Full Text Available Abstract. Background: Urban malaria is likely to become increasingly important as a consequence of the growing proportion of Africans living in cities. A novel sampling strategy was developed for urban areas to generate a sample simultaneously representative of population and inhabited environments. Such a strategy should facilitate analysis of important epidemiological relationships in this ecological context. Methods: Census maps and summary data for Kisumu, Kenya, were used to create a pseudo-sampling frame using the geographic coordinates of census-sampled structures. For every enumeration area (EA) designated as urban by the census (n = 535), a sample of structures equal to one-tenth the number of households was selected. In EAs designated as rural (n = 32), a geographically random sample totalling one-tenth the number of households was selected from a grid of points at 100 m intervals. The selected samples were cross-referenced to a geographic information system, and coordinates transferred to handheld global positioning units. Interviewers found the closest eligible household to the sampling point and interviewed the caregiver of an age-eligible child. Results: 4,336 interviews were completed in 473 of the 567 study area EAs from June 2002 through February 2003. EAs without completed interviews were randomly distributed, and non-response was approximately 2%. Mean distance from the assigned sampling point to the completed interview was 74.6 m, and was significantly less in urban than rural EAs, even when controlling for number of households. The selected sample had significantly more children and females of childbearing age than the general population, and fewer older individuals. Conclusion: This method selected a sample that was simultaneously population-representative and inclusive of important environmental variation. The use of a pseudo-sampling frame and pre-programmed handheld GPS units is more efficient and may yield a more complete sample than

  8. Robust multivariate analysis

    CERN Document Server

    J Olive, David

    2017-01-01

    This text presents methods that are robust to the assumption of a multivariate normal distribution or methods that are robust to certain types of outliers. Instead of using exact theory based on the multivariate normal distribution, the simpler and more applicable large sample theory is given.  The text develops among the first practical robust regression and robust multivariate location and dispersion estimators backed by theory.   The robust techniques  are illustrated for methods such as principal component analysis, canonical correlation analysis, and factor analysis.  A simple way to bootstrap confidence regions is also provided. Much of the research on robust multivariate analysis in this book is being published for the first time. The text is suitable for a first course in Multivariate Statistical Analysis or a first course in Robust Statistics. This graduate text is also useful for people who are familiar with the traditional multivariate topics, but want to know more about handling data sets with...

  9. Assessment of sampling strategies for estimation of site mean concentrations of stormwater pollutants.

    Science.gov (United States)

    McCarthy, David T; Zhang, Kefeng; Westerlund, Camilla; Viklander, Maria; Bertrand-Krajewski, Jean-Luc; Fletcher, Tim D; Deletic, Ana

    2018-02-01

    The estimation of stormwater pollutant concentrations is a primary requirement of integrated urban water management. In order to determine effective sampling strategies for estimating pollutant concentrations, data from extensive field measurements at seven different catchments were used. At all sites, 1-min resolution continuous flow measurements, as well as flow-weighted samples, were taken and analysed for total suspended solids (TSS), total nitrogen (TN) and Escherichia coli (E. coli). For each of these parameters, the data were used to calculate the Event Mean Concentrations (EMCs) for each event. The measured Site Mean Concentrations (SMCs) were taken as the volume-weighted average of these EMCs for each parameter, at each site. 17 different sampling strategies, including random and fixed strategies, were tested to estimate SMCs, which were compared with the measured SMCs. The ratios of estimated/measured SMCs were further analysed to determine the most effective sampling strategies. Results indicate that the random sampling strategies were the most promising method in reproducing SMCs for TSS and TN, while some fixed sampling strategies were better for estimating the SMC of E. coli. The differences between taking one, two or three random samples were small (up to 20% for TSS, and 10% for TN and E. coli), indicating that there is little benefit in investing in collection of more than one sample per event if attempting to estimate the SMC through monitoring of multiple events. It was estimated that an average of 27 events across the studied catchments are needed for characterising SMCs of TSS with a 90% confidence interval (CI) width of 1.0, followed by E. coli (average 12 events) and TN (average 11 events). The coefficient of variation of pollutant concentrations was linearly and significantly correlated to the 90% confidence interval ratio of the estimated/measured SMCs (R2 = 0.49), which can be used to estimate the sampling frequency needed to accurately estimate SMCs of pollutants.
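
    The quantities compared in this study are straightforward to compute: an EMC is the flow-weighted mean concentration of one event, and the SMC is the volume-weighted average of the EMCs. A minimal sketch, with invented sub-sample volumes and TSS concentrations standing in for real monitoring data:

```python
import numpy as np

def event_mean_concentration(volumes, concentrations):
    """EMC: total pollutant mass over total runoff volume for one event,
    computed from flow-weighted subsamples."""
    volumes = np.asarray(volumes, dtype=float)
    concentrations = np.asarray(concentrations, dtype=float)
    return (volumes * concentrations).sum() / volumes.sum()

# Hypothetical monitored events: (sub-sample volumes in m3, TSS in mg/L).
events = [
    ([120, 300, 180], [210, 150, 90]),
    ([60, 90], [340, 280]),
    ([500, 420, 260], [120, 95, 70]),
]

emcs = np.array([event_mean_concentration(v, c) for v, c in events])
event_volumes = np.array([sum(v) for v, _ in events])

# Site Mean Concentration: volume-weighted average of the EMCs.
smc = (emcs * event_volumes).sum() / event_volumes.sum()
print("EMCs:", np.round(emcs, 1), " SMC:", round(smc, 1), "mg/L")
```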

  10. Robustness of Structures

    DEFF Research Database (Denmark)

    Faber, Michael Havbro; Vrouwenvelder, A.C.W.M.; Sørensen, John Dalsgaard

    2011-01-01

    In 2005, the Joint Committee on Structural Safety (JCSS) together with Working Commission (WC) 1 of the International Association of Bridge and Structural Engineering (IABSE) organized a workshop on robustness of structures. Two important decisions resulted from this workshop, namely the development of a joint European project on structural robustness under the COST (European Cooperation in Science and Technology) programme and the decision to develop a more elaborate document on structural robustness in collaboration between experts from the JCSS and the IABSE. Accordingly, a project titled 'COST TU0601: Robustness of Structures' was initiated in February 2007, aiming to provide a platform for exchanging and promoting research in the area of structural robustness and to provide a basic framework, together with methods, strategies and guidelines enhancing robustness of structures...

  11. Limited sampling strategy for determining metformin area under the plasma concentration-time curve

    DEFF Research Database (Denmark)

    Santoro, Ana Beatriz; Stage, Tore Bjerregaard; Struchiner, Claudio José

    2016-01-01

    AIM: The aim was to develop and validate limited sampling strategy (LSS) models to predict the area under the plasma concentration-time curve (AUC) for metformin. METHODS: Metformin plasma concentrations (n = 627) at 0-24 h after a single 500 mg dose were used for LSS development, based on all su...

  12. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    Science.gov (United States)

    Debasish Saha; Armen R. Kemanian; Benjamin M. Rau; Paul R. Adler; Felipe Montes

    2017-01-01

    Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (...

  13. Catch, effort and sampling strategies in the highly variable sardine fisheries around East Java, Indonesia.

    NARCIS (Netherlands)

    Pet, J.S.; Densen, van W.L.T.; Machiels, M.A.M.; Sukkel, M.; Setyohady, D.; Tumuljadi, A.

    1997-01-01

    Temporal and spatial patterns in the fishery for Sardinella spp. around East Java, Indonesia, were studied in an attempt to develop an efficient catch and effort sampling strategy for this highly variable fishery. The inter-annual and monthly variation in catch, effort and catch per unit of effort

  14. Current advances and strategies towards fully automated sample preparation for regulated LC-MS/MS bioanalysis.

    Science.gov (United States)

    Zheng, Naiyu; Jiang, Hao; Zeng, Jianing

    2014-09-01

    Robotic liquid handlers (RLHs) have been widely used in automated sample preparation for liquid chromatography-tandem mass spectrometry (LC-MS/MS) bioanalysis. Automated sample preparation for regulated bioanalysis offers significantly higher assay efficiency, better data quality and potential bioanalytical cost-savings. For RLHs that are used for regulated bioanalysis, there are additional requirements, including 21 CFR Part 11 compliance, software validation, system qualification, calibration verification and proper maintenance. This article reviews recent advances in automated sample preparation for regulated bioanalysis in the last 5 years. Specifically, it covers the following aspects: regulated bioanalysis requirements, recent advances in automation hardware and software development, sample extraction workflow simplification, strategies towards fully automated sample extraction, and best practices in automated sample preparation for regulated bioanalysis.

  15. Robust nonhomogeneous training samples detection method for space-time adaptive processing radar using sparse-recovery with knowledge-aided

    Science.gov (United States)

    Li, Zhihui; Liu, Hanwei; Zhang, Yongshun; Guo, Yiduo

    2017-10-01

    The performance of space-time adaptive processing (STAP) may degrade significantly when some of the training samples are contaminated by signal-like components (outliers) in nonhomogeneous clutter environments. To remove the training samples contaminated by outliers in nonhomogeneous clutter environments, a robust nonhomogeneous training sample detection method using sparse recovery (SR) with knowledge-aided (KA) processing is proposed. First, the reduced-dimension (RD) overcomplete spatial-temporal steering dictionary is designed with the prior knowledge of system parameters and the possible target region. Second, the clutter covariance matrix (CCM) of the cell under test is efficiently estimated using a modified focal underdetermined system solver (FOCUSS) algorithm, where the RD overcomplete spatial-temporal steering dictionary is applied. Third, the proposed statistics are formed by combining the estimated CCM with the generalized inner products (GIP) method, and the contaminated training samples can be detected and removed. Finally, several simulation results validate the effectiveness of the proposed KA-SR-GIP method.

  16. TiSH - a robust and sensitive global phosphoproteomics strategy employing a combination of TiO(2), SIMAC, and HILIC

    DEFF Research Database (Denmark)

    Engholm-Keller, Kasper; Birck, Pernille; Størling, Joachim

    2012-01-01

    losses. We demonstrate the capability of this strategy by quantitative investigation of early interferon-γ signaling in low quantities of insulinoma cells. We identified ~6600 unique phosphopeptides from 300 μg of peptides/condition (22 unique phosphopeptides/μg) in a duplex dimethyl labeling experiment. This strategy thus shows great potential for interrogating signaling networks from low amounts of sample with high sensitivity and specificity.

  17. Comparison of active and passive sampling strategies for the monitoring of pesticide contamination in streams

    Science.gov (United States)

    Assoumani, Azziz; Margoum, Christelle; Guillemain, Céline; Coquery, Marina

    2014-05-01

    The monitoring of water bodies for organic contaminants, and the determination of reliable estimates of concentrations, are challenging issues, in particular for the implementation of the Water Framework Directive. Several strategies can be applied to collect water samples for the determination of their contamination level. Grab sampling is fast, easy, and requires little logistical and analytical effort for low frequency sampling campaigns. However, this technique lacks representativeness for streams with high variations of contaminant concentrations, such as pesticides in rivers located in small agricultural watersheds. Increasing the representativeness of this sampling strategy implies greater logistical needs and higher analytical costs. Average automated sampling is therefore a solution as it allows, in a single analysis, the determination of more accurate and more relevant estimates of concentrations. Two types of automatic sampling can be performed: time-related sampling allows the assessment of average concentrations, whereas flow-dependent sampling leads to average flux concentrations. However, the purchase and the maintenance of automatic samplers are quite expensive. Passive sampling has recently been developed as an alternative to grab or average automated sampling, to obtain, at lower cost, more realistic estimates of the average concentrations of contaminants in streams. These devices allow the passive accumulation of contaminants from large volumes of water, resulting in ultratrace level detection and smoothed integrative sampling over periods ranging from days to weeks. They allow the determination of time-weighted average (TWA) concentrations of the dissolved fraction of target contaminants, but they need to be calibrated in controlled conditions prior to field applications. In other words, the kinetics of the uptake of the target contaminants into the sampler must be studied in order to determine the corresponding sampling rate
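
    In the integrative (linear-uptake) regime that this calibration targets, the TWA concentration follows directly from the accumulated mass, the calibrated sampling rate and the deployment time, C_TWA = m/(R_s·t). A minimal sketch with hypothetical numbers (the sampling rate and accumulated mass below are assumptions, not calibration results from this work):

```python
def twa_concentration(mass_accumulated_ng, sampling_rate_l_per_day, days):
    """Time-weighted average concentration (ng/L) for a passive sampler in
    the linear uptake phase: C_TWA = m / (Rs * t)."""
    return mass_accumulated_ng / (sampling_rate_l_per_day * days)

# Hypothetical deployment: 14 days, a pesticide with a calibrated sampling
# rate of 0.12 L/day, and 35 ng accumulated in the receiving phase.
print(f"TWA concentration: {twa_concentration(35.0, 0.12, 14):.1f} ng/L")
```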

  18. Sampling strategies to measure the prevalence of common recurrent infections in longitudinal studies

    Directory of Open Access Journals (Sweden)

    Luby Stephen P

    2010-08-01

    Full Text Available Abstract. Background: Measuring recurrent infections such as diarrhoea or respiratory infections in epidemiological studies is a methodological challenge. Problems in measuring the incidence of recurrent infections include the episode definition, recall error, and the logistics of close follow up. Longitudinal prevalence (LP), the proportion-of-time-ill estimated by repeated prevalence measurements, is an alternative measure to incidence of recurrent infections. In contrast to incidence, which usually requires continuous sampling, LP can be measured at intervals. This study explored how many more participants are needed for infrequent sampling to achieve the same study power as frequent sampling. Methods: We developed a set of four empirical simulation models representing low and high risk settings with short or long episode durations. The model was used to evaluate different sampling strategies with different assumptions on recall period and recall error. Results: The model identified three major factors that influence sampling strategies: (1) the clustering of episodes in individuals; (2) the duration of episodes; (3) the positive correlation between an individual's disease incidence and episode duration. Intermittent sampling (e.g. 12 times per year) often requires only a slightly larger sample size compared to continuous sampling, especially in cluster-randomized trials. The collection of period prevalence data can lead to highly biased effect estimates if the exposure variable is associated with episode duration. To maximize study power, recall periods of 3 to 7 days may be preferable over shorter periods, even if this leads to inaccuracy in the prevalence estimates. Conclusion: Choosing the optimal approach to measure recurrent infections in epidemiological studies depends on the setting, the study objectives, study design and budget constraints. Sampling at intervals can contribute to making epidemiological studies and trials more efficient, valid
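
    The trade-off examined here, how much precision is lost when prevalence is measured on, say, 12 days per year rather than continuously, can be explored with a small simulation. The sketch below uses a deliberately simple episode model (assumed incidence and episode duration, not the authors' four empirical models) and compares the variability of longitudinal prevalence estimates under continuous versus intermittent sampling.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_child(days=365, incidence_per_year=4.0, mean_duration=5.0):
    """Return a boolean illness indicator for each day of follow-up."""
    ill = np.zeros(days, dtype=bool)
    n_episodes = rng.poisson(incidence_per_year)
    starts = rng.integers(0, days, size=n_episodes)
    for s in starts:
        d = max(1, int(rng.exponential(mean_duration)))
        ill[s:s + d] = True
    return ill

def lp_estimate(children, visit_days):
    """Longitudinal prevalence: proportion of child-visits with illness."""
    return np.mean([child[visit_days].mean() for child in children])

n_children, days = 200, 365
visit_all = np.arange(days)                          # continuous follow-up
visit_12 = np.linspace(0, days - 1, 12).astype(int)  # 12 visits per year

continuous, intermittent = [], []
for _ in range(200):                                 # repeated hypothetical studies
    cohort = [simulate_child(days) for _ in range(n_children)]
    continuous.append(lp_estimate(cohort, visit_all))
    intermittent.append(lp_estimate(cohort, visit_12))

print("LP, continuous sampling: mean %.3f, sd %.4f" %
      (np.mean(continuous), np.std(continuous)))
print("LP, 12 visits per year : mean %.3f, sd %.4f" %
      (np.mean(intermittent), np.std(intermittent)))
```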

  19. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    Science.gov (United States)

    Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.

    2014-01-01

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes

  20. Limited-sampling strategies for anti-infective agents: systematic review.

    Science.gov (United States)

    Sprague, Denise A; Ensom, Mary H H

    2009-09-01

    Area under the concentration-time curve (AUC) is a pharmacokinetic parameter that represents overall exposure to a drug. For selected anti-infective agents, pharmacokinetic-pharmacodynamic parameters, such as AUC/MIC (where MIC is the minimal inhibitory concentration), have been correlated with outcome in a few studies. A limited-sampling strategy may be used to estimate pharmacokinetic parameters such as AUC, without the frequent, costly, and inconvenient blood sampling that would be required to directly calculate the AUC. To discuss, by means of a systematic review, the strengths, limitations, and clinical implications of published studies involving a limited-sampling strategy for anti-infective agents and to propose improvements in methodology for future studies. The PubMed and EMBASE databases were searched using the terms "anti-infective agents", "limited sampling", "optimal sampling", "sparse sampling", "AUC monitoring", "abbreviated AUC", "abbreviated sampling", and "Bayesian". The reference lists of retrieved articles were searched manually. Included studies were classified according to modified criteria from the US Preventive Services Task Force. Twenty studies met the inclusion criteria. Six of the studies (involving didanosine, zidovudine, nevirapine, ciprofloxacin, efavirenz, and nelfinavir) were classified as providing level I evidence, 4 studies (involving vancomycin, didanosine, lamivudine, and lopinavir-ritonavir) provided level II-1 evidence, 2 studies (involving saquinavir and ceftazidime) provided level II-2 evidence, and 8 studies (involving ciprofloxacin, nelfinavir, vancomycin, ceftazidime, ganciclovir, pyrazinamide, meropenem, and alpha interferon) provided level III evidence. All of the studies providing level I evidence used prospectively collected data and proper validation procedures with separate, randomly selected index and validation groups. However, most of the included studies did not provide an adequate description of the methods or

  1. Using Linked Survey Paradata to Improve Sampling Strategies in the Medical Expenditure Panel Survey

    Directory of Open Access Journals (Sweden)

    Mirel Lisa B.

    2017-06-01

    Full Text Available Using paradata from a prior survey that is linked to a new survey can help a survey organization develop more effective sampling strategies. One example of this type of linkage or subsampling is between the National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS). MEPS is a nationally representative sample of the U.S. civilian, noninstitutionalized population based on a complex multi-stage sample design. Each year a new sample is drawn as a subsample of households from the prior year’s NHIS. The main objective of this article is to examine how paradata from a prior survey can be used in developing a sampling scheme in a subsequent survey. A framework for optimal allocation of the sample in substrata formed for this purpose is presented and evaluated for the relative effectiveness of alternative substratification schemes. The framework is applied, using real MEPS data, to illustrate how utilizing paradata from the linked survey offers the possibility of making improvements to the sampling scheme for the subsequent survey. The improvements aim to reduce the data collection costs while maintaining or increasing effective responding sample sizes and response rates for a harder to reach population.

  2. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Full Text Available Texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory and result in incoherent memory access patterns, causing low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g. a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects the texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of the Warp Marching is view-independent, and it outperforms existing empty space skipping techniques in scenarios that need to render large dynamic volumes in a low resolution image. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

  3. Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.

    Science.gov (United States)

    Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan

    2013-06-01

    The aim of this work is to reduce the cost of the sampling required to estimate the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). The limited sampling strategy (LSS) models were established and validated by multiple regression models using 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual prediction check were used as criteria. The results of Jack-Knife validation showed that 10 (25.0%) of the 40 LSS models based on the regression analysis were not within an APE of 15% using one concentration-time point. 90.2, 91.5 and 92.4% of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were developed and validated for estimating AUC0-60t of gliclazide. This study indicates that the implementation of an 80 mg dosage regimen enabled accurate predictions of AUC0-60t by the LSS model. This study shows that 12, 6, 4, 2 h after administration are the key sampling times. The combination of (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling hours for predicting AUC0-60t in practical application according to requirements.
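
    A limited sampling strategy of this kind is, at its core, a multiple regression of the full AUC on a few concentration-time points, judged by APE and RMSE on a validation set. The sketch below runs through those steps on simulated one-compartment profiles; the model, its parameters and the chosen sampling times are assumptions for illustration, not the gliclazide data or the published LSS coefficients.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate concentration-time profiles with a simple one-compartment oral
# absorption model (illustrative parameters, not gliclazide estimates).
t = np.array([0.5, 1, 2, 4, 6, 8, 12, 24, 36, 48, 60], dtype=float)
n_subjects = 100
ka = rng.lognormal(np.log(1.0), 0.3, n_subjects)        # absorption rate (1/h)
ke = rng.lognormal(np.log(0.08), 0.3, n_subjects)       # elimination rate (1/h)
dose_vf = rng.lognormal(np.log(10.0), 0.2, n_subjects)  # dose/V/F (mg/L)

conc = np.array([d * k_a / (k_a - k_e) * (np.exp(-k_e * t) - np.exp(-k_a * t))
                 for d, k_a, k_e in zip(dose_vf, ka, ke)])
# "True" AUC(0-60) by the trapezoidal rule.
auc = np.sum((conc[:, 1:] + conc[:, :-1]) / 2 * np.diff(t), axis=1)

# Limited sampling strategy: predict AUC from the 2 h and 12 h samples only.
X = np.column_stack([np.ones(n_subjects),
                     conc[:, t == 2].ravel(),
                     conc[:, t == 12].ravel()])
train, test = np.arange(0, 70), np.arange(70, 100)
beta, *_ = np.linalg.lstsq(X[train], auc[train], rcond=None)

pred = X[test] @ beta
ape = np.abs(pred - auc[test]) / auc[test] * 100
rmse = np.sqrt(np.mean((pred - auc[test]) ** 2))
print(f"median APE: {np.median(ape):.1f}%   RMSE: {rmse:.1f} mg*h/L")
```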

  4. Strategy for thermo-gravimetric analysis of K East fuel samples

    International Nuclear Information System (INIS)

    Lawrence, L.A.

    1997-01-01

    A strategy was developed for the Thermo-Gravimetric Analysis (TGA) testing of K East fuel samples for oxidation rate determinations. Tests will first establish whether there are any differences in dry air oxidation between the K West and K East fuel. These tests will be followed by moist inert gas oxidation rate measurements. The final series of tests will consider pure water vapor, i.e., steam.

  5. Robustness and strategies of adaptation among farmer varieties of African Rice (Oryza glaberrima) and Asian Rice (Oryza sativa) across West Africa.

    Science.gov (United States)

    Mokuwa, Alfred; Nuijten, Edwin; Okry, Florent; Teeken, Béla; Maat, Harro; Richards, Paul; Struik, Paul C

    2013-01-01

    This study offers evidence of the robustness of farmer rice varieties (Oryza glaberrima and O. sativa) in West Africa. Our experiments in five West African countries showed that farmer varieties were tolerant of sub-optimal conditions, but employed a range of strategies to cope with stress. Varieties belonging to the species Oryza glaberrima - solely the product of farmer agency - were the most successful in adapting to a range of adverse conditions. Some of the farmer selections from within the indica and japonica subspecies of O. sativa also performed well in a range of conditions, but other farmer selections from within these two subspecies were mainly limited to more specific niches. The results contradict the rather common belief that farmer varieties are only of local value. Farmer varieties should be considered by breeding programmes and used (alongside improved varieties) in dissemination projects for rural food security.

  6. Robustness and Strategies of Adaptation among Farmer Varieties of African Rice (Oryza glaberrima) and Asian Rice (Oryza sativa) across West Africa

    Science.gov (United States)

    Maat, Harro; Richards, Paul; Struik, Paul C.

    2013-01-01

    This study offers evidence of the robustness of farmer rice varieties (Oryza glaberrima and O. sativa) in West Africa. Our experiments in five West African countries showed that farmer varieties were tolerant of sub-optimal conditions, but employed a range of strategies to cope with stress. Varieties belonging to the species Oryza glaberrima – solely the product of farmer agency – were the most successful in adapting to a range of adverse conditions. Some of the farmer selections from within the indica and japonica subspecies of O. sativa also performed well in a range of conditions, but other farmer selections from within these two subspecies were mainly limited to more specific niches. The results contradict the rather common belief that farmer varieties are only of local value. Farmer varieties should be considered by breeding programmes and used (alongside improved varieties) in dissemination projects for rural food security. PMID:23536754

  7. Using Direct Policy Search to Identify Robust Strategies in Adapting to Uncertain Sea Level Rise and Storm Surge

    Science.gov (United States)

    Garner, G. G.; Keller, K.

    2017-12-01

    Sea-level rise poses considerable risks to coastal communities, ecosystems, and infrastructure. Decision makers are faced with deeply uncertain sea-level projections when designing a strategy for coastal adaptation. Traditional methods have provided tremendous insight into this decision problem, but are often silent on tradeoffs as well as the effects of tail-area events and of potential future learning. Here we reformulate a simple sea-level rise adaptation model to address these concerns. We show that Direct Policy Search yields improved solution quality, with respect to Pareto-dominance in the objectives, over the traditional approach under uncertain sea-level rise projections and storm surge. Additionally, the new formulation produces high-quality solutions with lower computational demands than the traditional approach. Our results illustrate the utility of multi-objective adaptive formulations for the example of coastal adaptation and the value of information provided by observations, and point to wider-ranging applications in climate change adaptation decision problems.

  8. Replica exchange enveloping distribution sampling (RE-EDS): A robust method to estimate multiple free-energy differences from a single simulation.

    Science.gov (United States)

    Sidler, Dominik; Schwaninger, Arthur; Riniker, Sereina

    2016-10-21

    In molecular dynamics (MD) simulations, free-energy differences are often calculated using free energy perturbation or thermodynamic integration (TI) methods. However, both techniques are only suited to calculate free-energy differences between two end states. Enveloping distribution sampling (EDS) presents an attractive alternative that allows multiple free-energy differences to be calculated in a single simulation. In EDS, a reference state is simulated which "envelops" the end states. The challenge of this methodology is the determination of optimal reference-state parameters to ensure equal sampling of all end states. Currently, the automatic determination of the reference-state parameters for multiple end states is an unsolved issue that limits the application of the methodology. To resolve this, we have generalised the replica-exchange EDS (RE-EDS) approach, introduced by Lee et al. [J. Chem. Theory Comput. 10, 2738 (2014)] for constant-pH MD simulations. By exchanging configurations between replicas with different reference-state parameters, the complexity of the parameter-choice problem can be substantially reduced. A new robust scheme to estimate the reference-state parameters from a short initial RE-EDS simulation with default parameters was developed, which allowed the calculation of 36 free-energy differences between nine small-molecule inhibitors of phenylethanolamine N-methyltransferase from a single simulation. The resulting free-energy differences were in excellent agreement with values obtained previously by TI and two-state EDS simulations.
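    The core of the EDS estimate is a reweighting of reference-state samples: the free-energy offset of each end state relative to the reference is a log-average of Boltzmann factors of the energy difference, and end-state differences follow by subtraction. The snippet below shows only that post-processing step with synthetic energies; it is a generic Zwanzig-style estimator for illustration, not the RE-EDS parameter-update scheme itself.

```python
# Free-energy differences from reference-state sampling (illustrative only).
import numpy as np

kT = 2.494  # kJ/mol at roughly 300 K

def f_vs_reference(v_end, v_ref):
    """F_end - F_ref = -kT * ln < exp(-(V_end - V_ref)/kT) >_ref (log-sum-exp for stability)."""
    a = -(v_end - v_ref) / kT
    a_max = a.max()
    return -kT * (a_max + np.log(np.mean(np.exp(a - a_max))))

rng = np.random.default_rng(1)
n = 5000
v_ref = rng.normal(0.0, 2.0, n)          # reference-state energies along one trajectory (synthetic)
v_a = v_ref + rng.normal(1.0, 0.5, n)    # end-state A energies evaluated on the same frames
v_b = v_ref + rng.normal(3.0, 0.5, n)    # end-state B energies

dF_ab = f_vs_reference(v_b, v_ref) - f_vs_reference(v_a, v_ref)
print(f"estimated free-energy difference A -> B: {dF_ab:.2f} kJ/mol")
```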

  9. Population pharmacokinetic analysis of clopidogrel in healthy Jordanian subjects with emphasis on optimal sampling strategy.

    Science.gov (United States)

    Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A

    2013-05-01

    Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or, to a lesser extent, an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM(®) to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse phase high performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. A two-compartment model with EHC provided the best fit and reduced bias in C(max) (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model; AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.

  10. Optimization of Region-of-Interest Sampling Strategies for Hepatic MRI Proton Density Fat Fraction Quantification

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.

    2017-01-01

    BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland–Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland–Altman analyses. RESULTS The study population’s mean whole-liver PDFF was 10.1±8.9% (range: 1.1–44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995, and 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%; 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3. Technical Efficacy: Stage 2.

  11. Optimization of region-of-interest sampling strategies for hepatic MRI proton density fat fraction quantification.

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B

    2018-04-01

    Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995, and 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%; 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance
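    Operationally, the subset evaluation described above is: for every combination of ROIs, average those ROIs per patient and compare that average against the nine-ROI mean across patients via ICC and Bland-Altman limits of agreement. The sketch below runs that loop on simulated PDFF values with a simple two-way agreement ICC; it is illustrative only, not the study's analysis code.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Simulated per-segment PDFF (%): 391 patients x 9 hepatic segments (assumed data).
truth = rng.gamma(shape=2.0, scale=5.0, size=(391, 1))
pdff = truth + rng.normal(0, 1.0, size=(391, 9))
nine_roi = pdff.mean(axis=1)

def icc_agreement(a, b):
    """Simple two-way, single-measure agreement ICC between two measurements."""
    data = np.stack([a, b], axis=1)
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * np.var(data.mean(axis=1), ddof=1)
    ms_cols = n * np.var(data.mean(axis=0), ddof=1)
    ms_err = (np.sum((data - data.mean(axis=1, keepdims=True)
                      - data.mean(axis=0, keepdims=True) + grand) ** 2)
              / ((n - 1) * (k - 1)))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

results = []
for subset in combinations(range(9), 4):            # every four-ROI strategy
    est = pdff[:, subset].mean(axis=1)
    diff = est - nine_roi
    loa_half_width = 1.96 * diff.std(ddof=1)         # half-width of Bland-Altman LOA
    results.append((subset, icc_agreement(est, nine_roi), loa_half_width))

best = max(results, key=lambda r: r[1])
print(f"best 4-ROI subset {best[0]}: ICC = {best[1]:.4f}, LOA half-width = {best[2]:.2f}%")
```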

  12. A comparative proteomics method for multiple samples based on a 18O-reference strategy and a quantitation and identification-decoupled strategy.

    Science.gov (United States)

    Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin

    2017-08-15

    Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method was criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an 18O-labeled reference sample that was created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility in protein identification results across samples. In the present study, a method combining the 18O-reference strategy with a quantitation and identification-decoupled strategy was investigated using proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy had greater accuracy and reliability than other previously used comparison methods based on transferring comparison or label-free strategies. By the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differentially expressed proteins, according to retention time and accurate mass. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, and allowed peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Measuring strategies for learning regulation in medical education: scale reliability and dimensionality in a Swedish sample.

    Science.gov (United States)

    Edelbring, Samuel

    2012-08-15

    The degree of learners' self-regulated learning and dependence on external regulation influence learning processes in higher education. These regulation strategies are commonly measured by questionnaires developed in settings other than those in which they are being used, thereby requiring renewed validation. The aim of this study was to psychometrically evaluate the learning regulation strategy scales from the Inventory of Learning Styles with Swedish medical students (N = 206). The regulation scales were evaluated regarding their reliability, scale dimensionality and interrelations. The primary evaluation focused on dimensionality and was performed with Mokken scale analysis. To assist future scale refinement, additional item analysis, such as item-to-scale correlations, was performed. Scale scores in the Swedish sample displayed good reliability in relation to published results: Cronbach's alpha: 0.82, 0.72, and 0.65 for self-regulation, external regulation and lack of regulation scales respectively. The dimensionalities in scales were adequate for self-regulation and its subscales, whereas external regulation and lack of regulation displayed less unidimensionality. The established theoretical scales were largely replicated in the exploratory analysis. The item analysis identified two items that contributed little to their respective scales. The results indicate that these scales have an adequate capacity for detecting the three theoretically proposed learning regulation strategies in the medical education sample. Further construct validity should be sought by interpreting scale scores in relation to specific learning activities. Using established scales for measuring students' regulation strategies enables a broad empirical base for increasing knowledge on regulation strategies in relation to different disciplinary settings and contributes to theoretical development.
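    The internal-consistency part of such an evaluation reduces to Cronbach's alpha, computed from the item variances and the variance of the summed scale: alpha = k/(k-1) * (1 - sum(item variances)/variance of the total score). A small illustrative computation on simulated Likert-type responses is given below; the item count and data are assumptions, not the ILS data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix for a single scale."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the summed scale
    return k / (k - 1) * (1.0 - item_var / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(size=(206, 1))                                            # simulated respondents
responses = np.clip(np.round(3 + latent + rng.normal(0, 1, (206, 7))), 1, 5)  # 7 Likert items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```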

  14. Measuring strategies for learning regulation in medical education: Scale reliability and dimensionality in a Swedish sample

    Directory of Open Access Journals (Sweden)

    Edelbring Samuel

    2012-08-01

    Full Text Available Abstract Background The degree of learners’ self-regulated learning and dependence on external regulation influence learning processes in higher education. These regulation strategies are commonly measured by questionnaires developed in settings other than those in which they are being used, thereby requiring renewed validation. The aim of this study was to psychometrically evaluate the learning regulation strategy scales from the Inventory of Learning Styles with Swedish medical students (N = 206). Methods The regulation scales were evaluated regarding their reliability, scale dimensionality and interrelations. The primary evaluation focused on dimensionality and was performed with Mokken scale analysis. To assist future scale refinement, additional item analysis, such as item-to-scale correlations, was performed. Results Scale scores in the Swedish sample displayed good reliability in relation to published results: Cronbach’s alpha: 0.82, 0.72, and 0.65 for self-regulation, external regulation and lack of regulation scales respectively. The dimensionalities in scales were adequate for self-regulation and its subscales, whereas external regulation and lack of regulation displayed less unidimensionality. The established theoretical scales were largely replicated in the exploratory analysis. The item analysis identified two items that contributed little to their respective scales. Discussion The results indicate that these scales have an adequate capacity for detecting the three theoretically proposed learning regulation strategies in the medical education sample. Further construct validity should be sought by interpreting scale scores in relation to specific learning activities. Using established scales for measuring students’ regulation strategies enables a broad empirical base for increasing knowledge on regulation strategies in relation to different disciplinary settings and contributes to theoretical development.

  15. Females' sampling strategy to comparatively evaluate prospective mates in the peacock blenny Salaria pavo

    Science.gov (United States)

    Locatello, Lisa; Rasotto, Maria B.

    2017-08-01

    Emerging evidence suggests the occurrence of comparative decision-making processes in mate choice, questioning the traditional idea of female choice based on rules of absolute preference. In such a scenario, females are expected to use a typical best-of-n sampling strategy, being able to recall previous sampled males based on memory of their quality and location. Accordingly, the quality of preferred mate is expected to be unrelated to both the number and the sequence of female visits. We found support for these predictions in the peacock blenny, Salaria pavo, a fish where females have the opportunity to evaluate the attractiveness of many males in a short time period and in a restricted spatial range. Indeed, even considering the variability in preference among females, most of them returned to previous sampled males for further evaluations; thus, the preferred male did not represent the last one in the sequence of visited males. Moreover, there was no relationship between the attractiveness of the preferred male and the number of further visits assigned to the other males. Our results suggest the occurrence of a best-of-n mate sampling strategy in the peacock blenny.

  16. Serum sample containing endogenous antibodies interfering with multiple hormone immunoassays. Laboratory strategies to detect interference

    Directory of Open Access Journals (Sweden)

    Elena García-González

    2016-04-01

    Full Text Available Objectives: Endogenous antibodies (EA) may interfere with immunoassays, causing erroneous results for hormone analyses. As (in most cases) this interference arises from the assay format, and most immunoassays, even from different manufacturers, are constructed in a similar way, it is possible for a single type of EA to interfere with different immunoassays. Here we describe the case of a patient whose serum sample contains EA that interfere with several hormone tests. We also discuss the strategies deployed to detect interference. Subjects and methods: Over a period of four years, a 30-year-old man was subjected to a plethora of laboratory and imaging diagnostic procedures as a consequence of elevated hormone results, mainly of pituitary origin, which did not correlate with the overall clinical picture. Results: Once analytical interference was suspected, the best laboratory approaches to investigate it were sample reanalysis on an alternative platform and sample incubation with antibody blocking tubes. Construction of an in-house ‘nonsense’ sandwich assay was also a valuable strategy to confirm interference. In contrast, serial sample dilutions were of no value in our case, while polyethylene glycol (PEG) precipitation gave inconclusive results, probably due to the use of inappropriate PEG concentrations for several of the tests assayed. Conclusions: Clinicians and laboratorians must be aware of the drawbacks of immunometric assays, and alert to the possibility of EA interference when results do not fit the clinical pattern. Keywords: Endogenous antibodies, Immunoassay, Interference, Pituitary hormones, Case report

  17. Sample preparation composite and replicate strategy case studies for assay of solid oral drug products.

    Science.gov (United States)

    Nickerson, Beverly; Harrington, Brent; Li, Fasheng; Guo, Michele Xuemei

    2017-11-30

    Drug product assay is one of several tests required for new drug products to ensure the quality of the product at release and throughout the life cycle of the product. Drug product assay testing is typically performed by preparing a composite sample of multiple dosage units to obtain an assay value representative of the batch. In some cases replicate composite samples may be prepared and the reportable assay value is the average value of all the replicates. In previously published work by Harrington et al. (2014) [5], a sample preparation composite and replicate strategy for assay was developed to provide a systematic approach which accounts for variability due to the analytical method and dosage form with a standard error of the potency assay criteria based on compendia and regulatory requirements. In this work, this sample preparation composite and replicate strategy for assay is applied to several case studies to demonstrate the utility of this approach and its application at various stages of pharmaceutical drug product development. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Ferromagnetic particles as a rapid and robust sample preparation for the absolute quantification of seven eicosanoids in human plasma by UHPLC-MS/MS.

    Science.gov (United States)

    Suhr, Anna Catharina; Bruegel, Mathias; Maier, Barbara; Holdt, Lesca Miriam; Kleinhempel, Alisa; Teupser, Daniel; Grimm, Stefanie H; Vogeser, Michael

    2016-06-01

    We used ferromagnetic particles as a novel technique to deproteinize plasma samples prior to quantitative UHPLC-MS/MS analysis of seven eicosanoids [thromboxane B2 (TXB2), prostaglandin E2 (PGE2), PGD2, 5-hydroxyeicosatetraenoic acid (5-HETE), 11-HETE, 12-HETE, arachidonic acid (AA)]. A combination of ferromagnetic particle-enhanced deproteination and subsequent on-line solid phase extraction (on-line SPE) enabled quick and convenient semi-automated sample preparation, in contrast to widely used manual SPE techniques, which are rather laborious and therefore impede the investigation of AA metabolism in larger patient cohorts. Method evaluation was performed according to a protocol based on the EMA guideline for bioanalytical method validation, modified for endogenous compounds. Calibrators were prepared in ethanol. The calibration curves were found to be linear in a range of 0.1-80 ng mL(-1) (TXB2, PGE2, PGD2), 0.05-40 ng mL(-1) (5-HETE, 11-HETE), 0.5-400 ng mL(-1) (12-HETE) and 25-9800 ng mL(-1) (AA). Regarding all analytes and all quality controls, the resulting precision data (inter-assay 2.6 %-15.5 %; intra-assay 2.5 %-15.1 %, expressed as coefficient of variation) as well as the accuracy results (inter-assay 93.3 %-125 %; intra-assay 91.7 %-114 %) were adequate. Further experiments addressing matrix effects, recovery and robustness also yielded very satisfactory results. As a proof of principle, the newly developed LC-MS/MS assay was employed to determine the capacity of AA metabolite release after whole blood stimulation in healthy blood donors. For this purpose, whole blood specimens of 5 healthy blood donors were analyzed at baseline and after lipopolysaccharide (LPS)-induced blood cell activation. In several baseline samples some eicosanoid levels were below the Lower Limit of Quantification. However, in the stimulated samples all chosen eicosanoids (except PGD2) could be quantified. These results, in context with those obtained in validation, demonstrate the

  19. Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.

    Science.gov (United States)

    Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh

    2012-02-28

    Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.

  20. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    Energy Technology Data Exchange (ETDEWEB)

    Maglevanny, I.I., E-mail: sianko@list.ru [Volgograd State Social Pedagogical University, 27 Lenin Avenue, Volgograd 400131 (Russian Federation); Smolar, V.A. [Volgograd State Technical University, 28 Lenin Avenue, Volgograd 400131 (Russian Federation)

    2016-01-15

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log–log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.

  1. Robust sampling-sourced numerical retrieval algorithm for optical energy loss function based on log–log mesh optimization and local monotonicity preserving Steffen spline

    International Nuclear Information System (INIS)

    Maglevanny, I.I.; Smolar, V.A.

    2016-01-01

    We introduce a new technique of interpolation of the energy-loss function (ELF) in solids sampled by empirical optical spectra. Finding appropriate interpolation methods for ELFs poses several challenges. The sampled ELFs are usually very heterogeneous and can originate from various sources, so that so-called “data gaps” can appear, and significant discontinuities and multiple high outliers can be present. As a result, an interpolation based on those data may not perform well at predicting reasonable physical results. Reliable interpolation tools, suitable for ELF applications, should therefore satisfy several important demands: accuracy and predictive power, robustness and computational efficiency, and ease of use. We examined the effect of different interpolation schemes on the fitting quality, with emphasis on ELF mesh optimization procedures, and we argue that the optimal fitting should be based on a preliminary log–log scaling data transform, by which the non-uniformity of the sampled data distribution may be considerably reduced. The transformed data are then interpolated by a local monotonicity-preserving Steffen spline. The result is a piece-wise smooth fitting curve with continuous first-order derivatives that passes through all data points without spurious oscillations. Local extrema can occur only at grid points where they are given by the data, but not in between two adjacent grid points. It is found that the proposed technique gives the most accurate results and that its computational time is short. Thus, it is feasible to use this simple method to address practical problems associated with the interaction between a bulk material and a moving electron. A compact C++ implementation of our algorithm is also presented.
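    The recipe summarized in these two records is: map the sampled ELF to log-log coordinates to even out the point density, interpolate with a local, monotonicity-preserving spline, then transform back. SciPy does not provide Steffen's spline, so the sketch below substitutes PCHIP, another local monotone interpolant, purely to illustrate the pipeline on synthetic data; it is not the authors' C++ implementation.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

rng = np.random.default_rng(4)

# Synthetic, irregularly sampled "ELF-like" data spanning several decades in energy.
energy = np.sort(10 ** rng.uniform(-0.5, 3.0, 40))            # energies (eV), synthetic
elf = 1.0 / (1.0 + (np.log10(energy) - 1.5) ** 2) + 1e-3      # smooth peak plus small baseline

# Interpolate in log-log space with a local monotone spline (stand-in for Steffen's spline).
spline = PchipInterpolator(np.log10(energy), np.log10(elf))

e_query = np.logspace(-0.4, 2.9, 500)
elf_fit = 10 ** spline(np.log10(e_query))                     # piecewise-smooth, oscillation-free fit
print(elf_fit.min(), elf_fit.max())
```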

  2. Clinical usefulness of limited sampling strategies for estimating AUC of proton pump inhibitors.

    Science.gov (United States)

    Niioka, Takenori

    2011-03-01

    Cytochrome P450 (CYP) 2C19 (CYP2C19) genotype is regarded as a useful tool to predict the area under the blood concentration-time curve (AUC) of proton pump inhibitors (PPIs). In our results, however, CYP2C19 genotypes had no influence on the AUC of any PPI during fluvoxamine treatment. These findings suggest that CYP2C19 genotyping is not always a good indicator for estimating the AUC of PPIs. Limited sampling strategies (LSS) were developed to estimate AUC simply and accurately. It is important to minimize the number of blood samples for the sake of patient acceptance. This article reviewed the usefulness of LSS for estimating the AUC of three PPIs (omeprazole: OPZ, lansoprazole: LPZ and rabeprazole: RPZ). The best prediction formulas for each PPI were AUC(OPZ) = 9.24 × C(6h) + 2638.03, AUC(LPZ) = 12.32 × C(6h) + 3276.09 and AUC(RPZ) = 1.39 × C(3h) + 7.17 × C(6h) + 344.14, respectively. To optimize the sampling strategy for LPZ, we tried to establish an LSS for LPZ using a time point within 3 hours, exploiting the pharmacokinetic properties of its enantiomers. The best prediction formula using the fewest sampling points (one point) was AUC(racemic LPZ) = 6.5 × C(3h) of (R)-LPZ + 13.7 × C(3h) of (S)-LPZ − 9917.3 × G1 − 14387.2 × G2 + 7103.6 (G1: homozygous extensive metabolizer is 1 and the other genotypes are 0; G2: heterozygous extensive metabolizer is 1 and the other genotypes are 0). Those strategies, with plasma concentration monitoring at one or two time points, might be more suitable for AUC estimation than reference to CYP2C19 genotypes, particularly in the case of coadministration of CYP mediators.
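    Applying a published limited sampling formula is a one-line calculation per drug once the coefficients are known. The helper below simply encodes the three single- and two-point formulas quoted above (concentrations in the units used by the original study); it is a convenience sketch, not validated clinical software.

```python
def auc_opz(c6h):
    """Omeprazole AUC estimate from the 6 h concentration (formula quoted above)."""
    return 9.24 * c6h + 2638.03

def auc_lpz(c6h):
    """Lansoprazole AUC estimate from the 6 h concentration."""
    return 12.32 * c6h + 3276.09

def auc_rpz(c3h, c6h):
    """Rabeprazole AUC estimate from the 3 h and 6 h concentrations."""
    return 1.39 * c3h + 7.17 * c6h + 344.14

# Hypothetical plasma concentrations, for illustration only.
print(auc_opz(500.0), auc_lpz(400.0), auc_rpz(300.0, 150.0))
```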

  3. Strategies for monitoring the emerging polar organic contaminants in water with emphasis on integrative passive sampling.

    Science.gov (United States)

    Söderström, Hanna; Lindberg, Richard H; Fick, Jerker

    2009-01-16

    Although polar organic contaminants (POCs) such as pharmaceuticals are considered among today's most emerging contaminants, few of them are regulated or included in ongoing monitoring programs. However, the growing concern among the public and researchers, together with the new legislation within the European Union, the registration, evaluation and authorisation of chemicals (REACH) system, will increase the future need for simple, low-cost strategies for monitoring and risk assessment of POCs in aquatic environments. In this article, we review the advantages and shortcomings of traditional and novel sampling techniques available for monitoring the emerging POCs in water. The benefits and drawbacks of using active and biological sampling are discussed and the principles of organic passive samplers (PS) presented. A detailed overview of the types of polar organic PS available, their classes of target compounds and fields of application is given, and the considerations involved in using them, such as environmental effects and quality control, are discussed. The usefulness of biological sampling of POCs in water was found to be limited. Polar organic PS were considered the only available, but nevertheless efficient, alternative to active water sampling due to their simplicity, low cost, lack of need for a power supply or maintenance, and ability to collect time-integrative samples with a single sample collection. However, polar organic PS need to be further developed before they can be used as standard in water quality monitoring programs.

  4. A novel one-step strategy toward ZnMn2O4/N-doped graphene nanosheets with robust chemical interaction for superior lithium storage

    International Nuclear Information System (INIS)

    Wang, Dong; Zhou, Weiwei; Zhang, Yong; Wang, Yali; Wu, Gangan; Yu, Kun; Wen, Guangwu

    2016-01-01

    Ingenious hybrid electrode design, especially realized with a facile strategy, is appealing yet challenging for electrochemical energy storage devices. Here, we report the synthesis of a novel ZnMn2O4/N-doped graphene (ZMO/NG) nanohybrid with sandwiched structure via a facile one-step approach, in which ultrafine ZMO nanoparticles with diameters of 10–12 nm are well dispersed on both surfaces of N-doped graphene (NG) nanosheets. Note that one-step synthetic strategies are rarely reported for ZMO-based nanostructures. Systematic control experiments reveal that the formation of well-dispersed ZMO nanoparticles is not solely ascribed to the restriction effect of the functional groups on graphene oxide (GO), but also to the presence of ammonia. Benefitting from the synergistic effects and robust chemical interaction between ZMO nanoparticles and N-doped graphene nanosheets, the ZMO/NG hybrids deliver a reversible capacity of up to 747 mAh g−1 after 200 cycles at a current density of 500 mA g−1. Even at a high current density of 3200 mA g−1, an unrivaled capacity of 500 mAh g−1 can still be retained, corroborating the good rate capability. (paper)

  5. New sampling strategy using a Bayesian approach to assess iohexol clearance in kidney transplant recipients.

    Science.gov (United States)

    Benz-de Bretagne, I; Le Guellec, C; Halimi, J M; Gatault, P; Barbet, C; Alnajjar, A; Büchler, M; Lebranchu, Y; Andres, Christian Robert; Vourcʼh, P; Blasco, H

    2012-06-01

    Glomerular filtration rate (GFR) measurement is a major issue for clinicians managing kidney transplant recipients. GFR can be determined by estimating the plasma clearance of iohexol, a nonradiolabeled compound. For practical and convenient application for patients and caregivers, it is important that a minimal number of samples are drawn. The aim of this study was to develop and validate a Bayesian model with fewer samples for reliable prediction of GFR in kidney transplant recipients. Iohexol plasma concentration-time curves from 95 patients were divided into an index (n = 63) and a validation set (n = 32). Samples (n = 4-6 per patient) were obtained during the elimination phase, that is, between 120 and 270 minutes. Individual reference values of iohexol clearance (CL(iohexol)) were calculated from k (elimination slope) and V (volume of distribution from intercept). Individual CL(iohexol) values were then introduced into the Bröchner-Mortensen equation to obtain the GFR (reference value). A population pharmacokinetic model was developed from the index set and validated using standard methods. For the validation set, we tested various combinations of 1, 2, or 3 sampling times to estimate CL(iohexol). According to the different combinations tested, a maximum a posteriori Bayesian estimation of CL(iohexol) was obtained from population parameters. Individual estimates of GFR were compared with individual reference values through analysis of bias and precision. A capability analysis allowed us to determine the best sampling strategy for Bayesian estimation. A 1-compartment model best described our data. Covariate analysis showed that uremia, serum creatinine, and age were significantly associated with k(e), and weight with V. The strategy including samples drawn at 120 and 270 minutes allowed accurate prediction of GFR (mean bias: -3.71%, mean imprecision: 7.77%). With this strategy, about 20% of individual predictions were outside the bounds of acceptance set at ±10%.
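    The reference clearance described above is the classical slope-intercept calculation on the elimination phase: fit log concentration against time, take k from the slope and V from the back-extrapolated intercept, and compute CL = k x V (the Bröchner-Mortensen correction that converts CL(iohexol) to GFR is applied afterwards and is omitted here). A minimal sketch with invented concentrations:

```python
import numpy as np

dose_mg = 3235.0                                        # iohexol dose, assumed for illustration
t_min = np.array([120, 150, 180, 210, 240, 270])        # elimination-phase sampling times (min)
conc = np.array([55.0, 47.0, 40.5, 35.0, 30.2, 26.1])   # plasma iohexol (mg/L), synthetic

slope, intercept = np.polyfit(t_min, np.log(conc), 1)   # log-linear fit of the elimination phase
k = -slope                                              # elimination rate constant (1/min)
v_litres = dose_mg / np.exp(intercept)                  # V from the back-extrapolated C0
cl_ml_min = k * v_litres * 1000.0                       # slope-intercept clearance (mL/min)

print(f"k = {k:.4f} 1/min, V = {v_litres:.1f} L, CL_iohexol = {cl_ml_min:.0f} mL/min")
```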

  6. Decision Tree and Survey Development for Support in Agricultural Sampling Strategies during Nuclear and Radiological Emergencies

    International Nuclear Information System (INIS)

    Yi, Amelia Lee Zhi; Dercon, Gerd

    2017-01-01

    In the event of a severe nuclear or radiological accident, the release of radionuclides results in contamination of land surfaces, affecting agricultural and food resources. Rapid collation of information and guidance on decision making is essential to enhance the ability of stakeholders to decide on immediate countermeasures. Support tools such as decision trees and sampling protocols allow a swift response by governmental bodies and assist in proper management of the situation. While such tools exist, they focus mainly on protecting public well-being rather than on food safety management strategies. Consideration of the latter is necessary, as it has long-term implications, especially for agriculturally dependent Member States; however, it remains a research gap to be filled.

  7. A preliminary evaluation of comminution and sampling strategies for radioactive cemented waste

    Energy Technology Data Exchange (ETDEWEB)

    Bilodeau, M.; Lastra, R.; Bouzoubaa, N. [Natural Resources Canada, Ottawa, ON (Canada); Chapman, M. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2011-07-01

    Lixiviation of Hg, U and Cs contaminants and micro-encapsulation of cemented radioactive waste (CRW) are the two main components of a CRW stabilization research project carried out at Natural Resources Canada in collaboration with Atomic Energy of Canada Limited. Unmolding CRW from the storage pail, its fragmentation into a size range suitable for both processes and the collection of a representative sample are three essential steps for providing optimal material conditions for the two studies. Separation of wires, metals and plastic incorporated into CRW samples is also required. A comminution and sampling strategy was developed to address all those needs. Dust emissions and other health and safety concerns were given full consideration. Surrogate cemented waste (SCW) was initially used for this comminution study where Cu was used as a substitute for U and Hg. SCW was characterized as a friable material through the measurement of the Bond work index of 7.7 kWh/t. A mineralogical investigation and the calibration of material heterogeneity parameters of the sampling error model showed that Cu, Hg and Cs are finely disseminated in the cement matrix. A sampling strategy was built from the model and successfully validated with radioactive waste. A larger than expected sampling error was observed with U due to the formation of large U solid phases, which were not observed with the Cu tracer. SCW samples were crushed and ground under different rock fragmentation mechanisms: compression (jaw and cone crushers, rod mill), impact (ball mill), attrition, high voltage disintegration and high pressure water (and liquid nitrogen) jetting. Cryogenic grinding was also tested with the attrition mill. Crushing and grinding technologies were assessed against criteria that were gathered from literature surveys, experiential know-how and discussion with the client and field experts. Water jetting and its liquid nitrogen variant were retained for pail cutting and waste unmolding while

  8. A preliminary evaluation of comminution and sampling strategies for radioactive cemented waste

    International Nuclear Information System (INIS)

    Bilodeau, M.; Lastra, R.; Bouzoubaa, N.; Chapman, M.

    2011-01-01

    Lixiviation of Hg, U and Cs contaminants and micro-encapsulation of cemented radioactive waste (CRW) are the two main components of a CRW stabilization research project carried out at Natural Resources Canada in collaboration with Atomic Energy of Canada Limited. Unmolding CRW from the storage pail, its fragmentation into a size range suitable for both processes and the collection of a representative sample are three essential steps for providing optimal material conditions for the two studies. Separation of wires, metals and plastic incorporated into CRW samples is also required. A comminution and sampling strategy was developed to address all those needs. Dust emissions and other health and safety concerns were given full consideration. Surrogate cemented waste (SCW) was initially used for this comminution study where Cu was used as a substitute for U and Hg. SCW was characterized as a friable material through the measurement of the Bond work index of 7.7 kWh/t. A mineralogical investigation and the calibration of material heterogeneity parameters of the sampling error model showed that Cu, Hg and Cs are finely disseminated in the cement matrix. A sampling strategy was built from the model and successfully validated with radioactive waste. A larger than expected sampling error was observed with U due to the formation of large U solid phases, which were not observed with the Cu tracer. SCW samples were crushed and ground under different rock fragmentation mechanisms: compression (jaw and cone crushers, rod mill), impact (ball mill), attrition, high voltage disintegration and high pressure water (and liquid nitrogen) jetting. Cryogenic grinding was also tested with the attrition mill. Crushing and grinding technologies were assessed against criteria that were gathered from literature surveys, experiential know-how and discussion with the client and field experts. Water jetting and its liquid nitrogen variant were retained for pail cutting and waste unmolding while

  9. Development of improved space sampling strategies for ocean chemical properties: Total carbon dioxide and dissolved nitrate

    Science.gov (United States)

    Goyet, Catherine; Davis, Daniel; Peltzer, Edward T.; Brewer, Peter G.

    1995-01-01

    Large-scale ocean observing programs such as the Joint Global Ocean Flux Study (JGOFS) and the World Ocean Circulation Experiment (WOCE) must today face the problem of designing an adequate sampling strategy. For ocean chemical variables, the goals and observing technologies are quite different from those for ocean physical variables (temperature, salinity, pressure). We have recently acquired data on ocean CO2 properties on WOCE cruises P16c and P17c that are sufficiently dense to test for sampling redundancy. We use linear and quadratic interpolation methods on the sampled field to investigate the minimum number of samples required to define the deep ocean total inorganic carbon (TCO2) field within the limits of experimental accuracy (+/- 4 micromol/kg). Within the limits of current measurements, these lines were oversampled in the deep ocean. Should the precision of the measurement be improved, then a denser sampling pattern may be desirable in the future. This approach rationalizes the efficient use of resources for field work and for estimating gridded TCO2 fields needed to constrain geochemical models.
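    The redundancy test described above can be phrased simply: drop some of the measured samples, interpolate the remainder, and check whether the interpolation error at the dropped depths stays within the analytical accuracy of +/- 4 micromol/kg. A toy version of that check on a synthetic deep-ocean TCO2 profile (linear interpolation only) is shown below.

```python
import numpy as np

rng = np.random.default_rng(5)
depth = np.linspace(0, 4000, 36)                                            # depth (m)
tco2 = 2000 + 250 * (1 - np.exp(-depth / 800)) + rng.normal(0, 1.5, 36)     # synthetic TCO2 (umol/kg)

kept, dropped = slice(None, None, 2), slice(1, None, 2)   # keep every other sample
interp = np.interp(depth[dropped], depth[kept], tco2[kept])

err = np.abs(interp - tco2[dropped])
print(f"max interpolation error = {err.max():.1f} umol/kg "
      f"({'within' if err.max() <= 4 else 'exceeds'} the +/-4 umol/kg accuracy)")
```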

  10. Modelling of in-stream nitrogen and phosphorus concentrations using different sampling strategies for calibration data

    Science.gov (United States)

    Jomaa, Seifeddine; Jiang, Sanyuan; Yang, Xiaoqiang; Rode, Michael

    2016-04-01

    It is known that a good evaluation and prediction of surface water pollution is mainly limited by the monitoring strategy and the capability of the hydrological water quality model to reproduce the internal processes. To this end, a compromise sampling frequency, which can reflect the dynamic behaviour of leached nutrient fluxes responding to changes in land use, agricultural practices and point sources, and an appropriate process-based water quality model are required. The objective of this study was to test the identification of hydrological water quality model parameters (nitrogen and phosphorus) under two different monitoring strategies: (1) a regular grab-sampling approach and (2) regular grab-sampling with additional monitoring during hydrological events using automatic samplers. First, the semi-distributed hydrological water quality HYPE (Hydrological Predictions for the Environment) model was successfully calibrated (1994-1998) for discharge (NSE = 0.86), nitrate-N (lowest NSE for nitrate-N load = 0.69), particulate phosphorus and soluble phosphorus in the Selke catchment (463 km2, central Germany) for the period 1994-1998 using the regular grab-sampling approach (biweekly to monthly for nitrogen and phosphorus concentrations). Second, the model was successfully validated during the period 1999-2010 for discharge, nitrate-N, particulate phosphorus and soluble phosphorus (lowest NSE for soluble phosphorus load = 0.54). Results showed that when additional sampling during the events with a random grab-sampling approach was used (period 2011-2013), the hydrological model could reproduce only the nitrate-N and soluble phosphorus concentrations reasonably well. However, when additional sampling during the hydrological events was considered, the HYPE model could not represent the measured particulate phosphorus. This reflects the importance of suspended sediment during the hydrological events increasing the concentrations of particulate phosphorus. The HYPE model could

  11. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    International Nuclear Information System (INIS)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-01-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which can ease the pressure generated by the large-scale data. The big data of a faulty roller bearing’s vibration signals is first reduced by a down-sampling strategy while preserving the fault features by selecting peaks to represent the data segments in the time domain. However, a problem arises in that the fault features may be weaker than before, since the noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which makes the fault features unable to be extracted by commonly-used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which can enhance the signal and further reduce the sample size. Moreover, it is capable of detecting fault features from a small number of samples based on an orthogonal matching pursuit approach, which can overcome the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults. (paper)
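    The compressive sensing stage amounts to taking a small number of random projections of the (already down-sampled) signal and recovering a representation that is sparse in some basis with orthogonal matching pursuit. The sketch below shows that recovery step on a test signal that is sparse in the DCT domain (a rough stand-in for a fault envelope spectrum), using scikit-learn's OMP; the sizes and basis are assumptions, not the authors' configuration.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(6)
n, k = 512, 8                                  # signal length and sparsity in the DCT domain

coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.normal(0, 5, k)
signal = idct(coeffs, norm="ortho")            # synthetic signal, sparse in the DCT basis

m = 128                                        # number of compressive measurements (m << n)
phi = rng.normal(size=(m, n)) / np.sqrt(m)     # random measurement matrix
y = phi @ signal

psi = idct(np.eye(n), axis=0, norm="ortho")    # columns are DCT atoms, so signal = psi @ coeffs
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(phi @ psi, y)
recovered = psi @ omp.coef_

rel_err = np.linalg.norm(recovered - signal) / np.linalg.norm(signal)
print(f"relative reconstruction error: {rel_err:.3f}")
```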

  12. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    Science.gov (United States)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-02-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which can ease the pressure generated by the large-scale data. The big data of a faulty roller bearing’s vibration signals is first reduced by a down-sampling strategy while preserving the fault features by selecting peaks to represent the data segments in the time domain. However, a problem arises in that the fault features may be weaker than before, since the noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which makes the fault features unable to be extracted by commonly-used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which can enhance the signal and further reduce the sample size. Moreover, it is capable of detecting fault features from a small number of samples based on an orthogonal matching pursuit approach, which can overcome the shortcomings of the multiple down-sampling algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults.

  13. Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.

    Science.gov (United States)

    Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping

    2016-01-01

    Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin layer tissues is limited by bias due to the influence of geometry on the measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin layer gelatin-agar phantoms, and compared the results with finite element method and Lamb wave model simulations. The results indicated that the Young's modulus measured by SWE decreased continuously as the sample thickness decreased, and this effect was more significant at smaller thicknesses. We proposed a new empirical formula which can conveniently correct the bias without the need for complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin layer samples, and offered a simple and practical correction strategy which is convenient for clinicians to use.

  14. GARN: Sampling RNA 3D Structure Space with Game Theory and Knowledge-Based Scoring Strategies.

    Science.gov (United States)

    Boudard, Mélanie; Bernauer, Julie; Barth, Dominique; Cohen, Johanne; Denise, Alain

    2015-01-01

    Cellular processes involve large numbers of RNA molecules. The functions of these RNA molecules and their binding to molecular machines are highly dependent on their 3D structures. One of the key challenges in RNA structure prediction and modeling is predicting the spatial arrangement of the various structural elements of RNA. As RNA folding is generally hierarchical, methods involving coarse-grained models hold great promise for this purpose. We present here a novel coarse-grained method for sampling, based on game theory and knowledge-based potentials. This strategy, GARN (Game Algorithm for RNa sampling), is often much faster than previously described techniques and generates large sets of solutions closely resembling the native structure. GARN is thus a suitable starting point for the molecular modeling of large RNAs, particularly those with experimental constraints. GARN is available from: http://garn.lri.fr/.

  15. Robust Learning Control Design for Quantum Unitary Transformations.

    Science.gov (United States)

    Wu, Chengzhi; Qi, Bo; Chen, Chunlin; Dong, Daoyi

    2017-12-01

    Robust control design for quantum unitary transformations has been recognized as a fundamental and challenging task in the development of quantum information processing due to unavoidable decoherence or operational errors in the experimental implementation of quantum operations. In this paper, we extend the systematic methodology of the sampling-based learning control (SLC) approach with a gradient flow algorithm for the design of robust quantum unitary transformations. The SLC approach first uses a "training" process to find an optimal control strategy robust against certain ranges of uncertainties. Then a number of randomly selected samples are tested and the performance is evaluated according to their average fidelity. The approach is applied to three typical examples of robust quantum transformation problems including robust quantum transformations in a three-level quantum system, in a superconducting quantum circuit, and in a spin chain system. Numerical results demonstrate the effectiveness of the SLC approach and show its potential applications in various implementations of quantum unitary transformations.
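    The "training" step of SLC scores a candidate control by its average gate fidelity over many sampled values of the uncertain parameters. The snippet below shows only that evaluation step for a single uncertain drift strength in a two-level system with a piecewise-constant control (fidelity |Tr(U_target^dagger U)| / d); the Hamiltonian, parameter ranges and pulse are invented for illustration, and no gradient-flow optimizer is included.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
u_target = expm(-1j * (np.pi / 2) * sx)                 # target unitary: a pi rotation about x

def propagate(controls, eps, dt=0.1):
    """Piecewise-constant evolution under H = eps*sz + a(t)*sx with uncertain eps."""
    u = np.eye(2, dtype=complex)
    for a in controls:
        u = expm(-1j * (eps * sz + a * sx) * dt) @ u
    return u

def average_fidelity(controls, eps_samples):
    fids = [abs(np.trace(u_target.conj().T @ propagate(controls, e))) / 2 for e in eps_samples]
    return float(np.mean(fids))

rng = np.random.default_rng(7)
eps_samples = rng.uniform(0.8, 1.2, 20)                 # sampled range of the uncertain parameter
controls = rng.uniform(-2, 2, 30)                       # a candidate (untrained) control pulse
print(f"average fidelity over the samples: {average_fidelity(controls, eps_samples):.3f}")
```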

  16. Evolution strategies for robust optimization

    NARCIS (Netherlands)

    Kruisselbrink, Johannes Willem

    2012-01-01

    Real-world (black-box) optimization problems often involve various types of uncertainties and noise emerging in different parts of the optimization problem. When this is not accounted for, optimization may fail or may yield solutions that are optimal in the classical strict notion of optimality, but

  17. Recommended Immunological Strategies to Screen for Botulinum Neurotoxin-Containing Samples

    Directory of Open Access Journals (Sweden)

    Stéphanie Simon

    2015-11-01

    Full Text Available Botulinum neurotoxins (BoNTs) cause the life-threatening neurological illness botulism in humans and animals and are divided into seven serotypes (BoNT/A–G), of which serotypes A, B, E, and F cause the disease in humans. BoNTs are classified as “category A” bioterrorism threat agents and are relevant in the context of the Biological Weapons Convention. An international proficiency test (PT) was conducted to evaluate detection, quantification and discrimination capabilities of 23 expert laboratories from the health, food and security areas. Here we describe three immunological strategies that proved to be successful for the detection and quantification of BoNT/A, B, and E considering the restricted sample volume (1 mL) distributed. To analyze the samples qualitatively and quantitatively, the first strategy was based on sensitive immunoenzymatic and immunochromatographic assays for fast qualitative and quantitative analyses. In the second approach, a bead-based suspension array was used for screening followed by conventional ELISA for quantification. In the third approach, an ELISA plate format assay was used for serotype specific immunodetection of BoNT-cleaved substrates, detecting the activity of the light chain, rather than the toxin protein. The results provide guidance for further steps in quality assurance and highlight problems to address in the future.

  18. Recommended Immunological Strategies to Screen for Botulinum Neurotoxin-Containing Samples.

    Science.gov (United States)

    Simon, Stéphanie; Fiebig, Uwe; Liu, Yvonne; Tierney, Rob; Dano, Julie; Worbs, Sylvia; Endermann, Tanja; Nevers, Marie-Claire; Volland, Hervé; Sesardic, Dorothea; Dorner, Martin B

    2015-11-26

    Botulinum neurotoxins (BoNTs) cause the life-threatening neurological illness botulism in humans and animals and are divided into seven serotypes (BoNT/A-G), of which serotypes A, B, E, and F cause the disease in humans. BoNTs are classified as "category A" bioterrorism threat agents and are relevant in the context of the Biological Weapons Convention. An international proficiency test (PT) was conducted to evaluate detection, quantification and discrimination capabilities of 23 expert laboratories from the health, food and security areas. Here we describe three immunological strategies that proved to be successful for the detection and quantification of BoNT/A, B, and E considering the restricted sample volume (1 mL) distributed. To analyze the samples qualitatively and quantitatively, the first strategy was based on sensitive immunoenzymatic and immunochromatographic assays for fast qualitative and quantitative analyses. In the second approach, a bead-based suspension array was used for screening followed by conventional ELISA for quantification. In the third approach, an ELISA plate format assay was used for serotype specific immunodetection of BoNT-cleaved substrates, detecting the activity of the light chain, rather than the toxin protein. The results provide guidance for further steps in quality assurance and highlight problems to address in the future.

  19. A Cost-Constrained Sampling Strategy in Support of LAI Product Validation in Mountainous Areas

    Directory of Open Access Journals (Sweden)

    Gaofei Yin

    2016-08-01

    Full Text Available Increasing attention is being paid to leaf area index (LAI) retrieval in mountainous areas. Mountainous areas present extreme topographic variability, and are characterized by more spatial heterogeneity and inaccessibility compared with flat terrain. It is difficult to collect representative ground-truth measurements, and the validation of LAI in mountainous areas is still problematic. A cost-constrained sampling strategy (CSS) in support of LAI validation was presented in this study. To account for the influence of rugged terrain on implementation cost, a cost-objective function was incorporated into the traditional conditioned Latin hypercube (CLH) sampling strategy. A case study in Hailuogou, Sichuan province, China, was used to assess the efficiency of CSS. Normalized difference vegetation index (NDVI), land cover type, and slope were selected as auxiliary variables to represent the variability of LAI in the study area. Results show that CSS can satisfactorily capture the variability across the site extent, while minimizing field efforts. One appealing feature of CSS is that the compromise between representativeness and implementation cost can be regulated according to actual surface heterogeneity and budget constraints, and this makes CSS flexible. Although the proposed method was only validated for the auxiliary variables rather than the LAI measurements, it serves as a starting point for establishing the locations of field plots and facilitates the preparation of field campaigns in mountainous areas.
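    The essence of the cost-constrained strategy above is to add a cost term to the conditioned Latin hypercube objective and search for the sample set that balances the two. A compact, generic version of that idea (quantile strata on the auxiliary variables, a cost penalty, and a simulated-annealing swap search) is sketched below on synthetic data; the variables, weights and cooling schedule are assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(8)
n_cand, n_pick = 2000, 30

# Synthetic candidate cells: auxiliary variables plus an access-cost surrogate.
ndvi = rng.beta(2, 2, n_cand)
slope = rng.gamma(2.0, 8.0, n_cand)                       # terrain slope (degrees)
cost = slope / slope.max() + 0.2 * rng.random(n_cand)     # steeper cells are costlier to reach
aux = np.column_stack([ndvi, slope])

# Latin-hypercube strata: each auxiliary variable split into n_pick quantile bins.
edges = [np.quantile(a, np.linspace(0, 1, n_pick + 1)) for a in aux.T]

def objective(idx, lam=0.5):
    penalty = 0.0
    for j, e in enumerate(edges):
        counts, _ = np.histogram(aux[idx, j], bins=e)
        penalty += np.abs(counts - 1).sum()               # ideal cLHS: one sample per stratum
    return penalty + lam * cost[idx].sum()                # add the implementation-cost term

# Simulated annealing: repeatedly swap one selected cell for an unselected candidate.
idx = rng.choice(n_cand, n_pick, replace=False)
cur, temp = objective(idx), 1.0
for _ in range(5000):
    new = rng.integers(n_cand)
    if new in idx:
        continue
    trial = idx.copy()
    trial[rng.integers(n_pick)] = new
    f = objective(trial)
    if f < cur or rng.random() < np.exp((cur - f) / temp):
        idx, cur = trial, f
    temp *= 0.999

print(f"objective {cur:.2f}, mean access cost of selected plots {cost[idx].mean():.2f}")
```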

  20. Robustness Analyses of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Hald, Frederik

    2013-01-01

    The robustness of structural systems has obtained a renewed interest arising from a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures, many modern building codes consider the need for the robustness of structures and provide strategies and methods to obtain robustness. Therefore, a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper summarises issues with respect to robustness of timber structures and will discuss the consequences of such robustness issues related to the future development of timber structures.

  1. Recruiting hard-to-reach United States population sub-groups via adaptations of snowball sampling strategy

    Science.gov (United States)

    Sadler, Georgia Robins; Lee, Hau-Chen; Lim, Rod Seung-Hwan; Fullerton, Judith

    2011-01-01

    Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author’s program of research are provided to demonstrate how adaptations of snowball sampling can be effectively used in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or subjects for research studies when recruitment of a population based sample is not essential. PMID:20727089

  2. Robustness in econometrics

    CERN Document Server

    Sriboonchitta, Songsak; Huynh, Van-Nam

    2017-01-01

    This book presents recent research on robustness in econometrics. Robust data processing techniques – i.e., techniques that yield results minimally affected by outliers – and their applications to real-life economic and financial situations are the main focus of this book. The book also discusses applications of more traditional statistical techniques to econometric problems. Econometrics is a branch of economics that uses mathematical (especially statistical) methods to analyze economic systems, to forecast economic and financial dynamics, and to develop strategies for achieving desirable economic performance. In day-by-day data, we often encounter outliers that do not reflect the long-term economic trends, e.g., unexpected and abrupt fluctuations. As such, it is important to develop robust data processing techniques that can accommodate these fluctuations.

  3. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  4. An instrument design and sample strategy for measuring soil respiration in the coastal temperate rain forest

    Science.gov (United States)

    Nay, S. M.; D'Amore, D. V.

    2009-12-01

    The coastal temperate rainforest (CTR) along the northwest coast of North America is a large and complex mosaic of forests and wetlands located on an undulating terrain ranging from sea level to thousands of meters in elevation. This biome stores a dynamic portion of the total carbon stock of North America. The fate of the terrestrial carbon stock is of concern due to the potential for mobilization and export of this store to both the atmosphere as carbon respiration flux and ocean as dissolved organic and inorganic carbon flux. Soil respiration is the largest export vector in the system and must be accurately measured to gain any comprehensive understanding of how carbon moves through this system. Suitable monitoring tools capable of measuring carbon fluxes at small spatial scales are essential for our understanding of carbon dynamics at larger spatial scales within this complex assemblage of ecosystems. We have adapted instrumentation and developed a sampling strategy for optimizing replication of soil respiration measurements to quantify differences among spatially complex landscape units of the CTR. We start with the design of the instrument to ease the technological, ergonomic and financial barriers that technicians encounter in monitoring the efflux of CO2 from the soil. Our sampling strategy optimizes the physical efforts of the field work and manages for the high variation of flux measurements encountered in this difficult environment of rough terrain, dense vegetation and wet climate. Our soil respirometer incorporates an infra-red gas analyzer (LiCor Inc. LI-820) and an 8300 cm3 soil respiration chamber; the device is durable, lightweight, easy to operate and can be built for under $5000 per unit. The modest unit price allows for a multiple unit fleet to be deployed and operated in an intensive field monitoring campaign. We use a large 346 cm2 collar to accommodate as much micro spatial variation as feasible and to facilitate repeated measures for tracking

  5. A simulation approach to assessing sampling strategies for insect pests: an example with the balsam gall midge.

    Directory of Open Access Journals (Sweden)

    R Drew Carleton

    Full Text Available Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with "pre-sampling" data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n ∼ 100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand was the most efficient, with sample means converging on true mean density for sample sizes of n ∼ 25-40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods.
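
    The abstract describes a pre-sampling simulation harness rather than a specific formula; a minimal Python sketch of such a harness is given below. The gall-count grid, the negative-binomial parameters and the transect layout are invented for illustration (and carry no spatial autocorrelation), so the numbers say nothing about P. tumifex itself; the point is only how candidate sampling plans can be replayed against pre-sampling data to check how fast their means converge.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical "pre-sampling" census: a gall count for every tree in a stand,
    # laid out on a 40 x 50 planting grid (negative-binomial counts, no spatial
    # autocorrelation, so this is only a stand-in for real pre-sampling data).
    rows, cols = 40, 50
    census = rng.negative_binomial(n=2, p=0.3, size=(rows, cols)).astype(float)
    true_mean = census.mean()

    def random_plan(n):
        idx = rng.choice(rows * cols, n, replace=False)
        return census.ravel()[idx].mean()

    def transect_plan(n):
        # Belt transect through the longest dimension: one row, every k-th tree.
        r = rng.integers(rows)
        step = max(1, cols // n)
        return census[r, ::step][:n].mean()

    for n in (10, 25, 40):
        rand_err = np.mean([abs(random_plan(n) - true_mean) for _ in range(2000)])
        tran_err = np.mean([abs(transect_plan(n) - true_mean) for _ in range(2000)])
        print(f"n={n:3d}  random MAE={rand_err:.2f}  transect MAE={tran_err:.2f}")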

  6. Field screening sampling and analysis strategy and methodology for the 183-H Solar Evaporation Basins: Phase 2, Soils

    International Nuclear Information System (INIS)

    Antipas, A.; Hopkins, A.M.; Wasemiller, M.A.; McCain, R.G.

    1996-01-01

    This document provides a sampling/analytical strategy and methodology for Resource Conservation and Recovery Act (RCRA) closure of the 183-H Solar Evaporation Basins within the boundaries and requirements identified in the initial Phase II Sampling and Analysis Plan for RCRA Closure of the 183-H Solar Evaporation Basins

  7. Analytical sample preparation strategies for the determination of antimalarial drugs in human whole blood, plasma and urine

    DEFF Research Database (Denmark)

    Casas, Monica Escolà; Hansen, Martin; Krogh, Kristine A

    2014-01-01

    the available sample preparation strategies combined with liquid chromatographic (LC) analysis to determine antimalarials in whole blood, plasma and urine published over the last decade. Sample preparation can be done by protein precipitation, solid-phase extraction, liquid-liquid extraction or dilution. After...

  8. A hybrid computational strategy to address WGS variant analysis in >5000 samples.

    Science.gov (United States)

    Huang, Zhuoyi; Rustagi, Navin; Veeraraghavan, Narayanan; Carroll, Andrew; Gibbs, Richard; Boerwinkle, Eric; Venkata, Manjunath Gorentla; Yu, Fuli

    2016-09-10

    The decreasing costs of sequencing are driving the need for cost-effective and real-time variant calling of whole genome sequencing data. The scale of these projects is far beyond the capacity of typical computing resources available to most research labs. Other infrastructures, such as the AWS cloud environment and supercomputers, also have limitations that make large-scale joint variant calling infeasible, and infrastructure-specific variant calling strategies either fail to scale up to large datasets or abandon joint calling strategies. We present a high-throughput framework including multiple variant callers for single nucleotide variant (SNV) calling, which leverages a hybrid computing infrastructure consisting of the AWS cloud, supercomputers and local high performance computing infrastructures. We present a novel binning approach for large scale joint variant calling and imputation which can scale up to over 10,000 samples while producing SNV callsets with high sensitivity and specificity. As a proof of principle, we present results of analysis on the Cohorts for Heart And Aging Research in Genomic Epidemiology (CHARGE) WGS freeze 3 dataset in which joint calling, imputation and phasing of over 5300 whole genome samples were produced in under 6 weeks using four state-of-the-art callers. The callers used were SNPTools, GATK-HaplotypeCaller, GATK-UnifiedGenotyper and GotCloud. We used Amazon AWS, a 4000-core in-house cluster at Baylor College of Medicine, IBM power PC Blue BioU at Rice and Rhea at Oak Ridge National Laboratory (ORNL) for the computation. AWS was used for joint calling of 180 TB of BAM files, and ORNL and Rice supercomputers were used for the imputation and phasing step. All other steps were carried out on the local compute cluster. The entire operation used 5.2 million core hours and only transferred a total of 6 TB of data across the platforms. Even with increasing sizes of whole genome datasets, ensemble joint calling of SNVs for low

  9. Analytical strategies for uranium determination in natural water and industrial effluents samples

    International Nuclear Information System (INIS)

    Santos, Juracir Silva

    2011-01-01

    The work was developed under the project 993/2007 - 'Development of analytical strategies for uranium determination in environmental and industrial samples - Environmental monitoring in the Caetite city, Bahia, Brazil' and made possible through a partnership established between Universidade Federal da Bahia and the Comissao Nacional de Energia Nuclear. Strategies were developed for uranium determination in natural water and effluents of a uranium mine. The first was a critical evaluation of the determination of uranium by inductively coupled plasma optical emission spectrometry (ICP OES) performed using factorial and Doehlert designs involving the factors: acid concentration, radio frequency power and nebuliser gas flow rate. Five emission lines were simultaneously studied (namely: 367.007, 385.464, 385.957, 386.592 and 409.013 nm), in the presence of HNO3, H3CCOOH or HCl. The determinations in HNO3 medium were the most sensitive. Among the factors studied, the gas flow rate was the most significant for the five emission lines. Calcium caused interference in the emission intensity for some lines and iron did not interfere (at least up to 10 mg L-1) in the five lines studied. The presence of 13 other elements did not affect the emission intensity of uranium for the lines chosen. The optimized method, using the line at 385.957 nm, allows the determination of uranium with a limit of quantification of 30 μg L-1 and precision expressed as RSD lower than 2.2% for uranium concentrations of either 500 or 1000 μg L-1. In the second, a highly sensitive flow-based procedure for uranium determination in natural waters is described. A 100-cm optical path flow cell based on a liquid-core waveguide (LCW) was exploited to increase the sensitivity of the arsenazo III method, aiming to achieve the limits established by environmental regulations. The flow system was designed with solenoid micro-pumps in order to improve mixing and minimize reagent consumption, as well as

  10. Improving snow density estimation for mapping SWE with Lidar snow depth: assessment of uncertainty in modeled density and field sampling strategies in NASA SnowEx

    Science.gov (United States)

    Raleigh, M. S.; Smyth, E.; Small, E. E.

    2017-12-01

    The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.
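
    A minimal sketch of the kind of between-model uncertainty map described above, assuming three toy density models with made-up coefficients and a synthetic Lidar depth grid; it flags the cells where the models disagree most as candidate snow-pit locations. None of the model forms or thresholds are taken from SUMMA or from the SnowEx campaign.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical grid of Lidar snow depths (m) and fractional canopy cover.
    depth = np.clip(rng.normal(1.5, 0.5, (100, 100)), 0.1, None)
    canopy = rng.random((100, 100))

    # Three stand-in density models (kg m-3) with assumed coefficients.
    def model_a(d, f):
        return 250 + 60 * d
    def model_b(d, f):
        return 240 + 55 * d - 30 * f
    def model_c(d, f):
        return 230 + 80 * np.sqrt(d)

    density = np.stack([m(depth, canopy) for m in (model_a, model_b, model_c)])
    swe = density * depth / 1000.0                    # SWE (m w.e.) per model
    density_spread = density.std(axis=0)              # between-model uncertainty
    swe_spread = swe.max(axis=0) - swe.min(axis=0)

    # Flag the cells where modeled density diverges most: candidate snow-pit sites.
    thresh = np.quantile(density_spread, 0.95)
    pit_rows, pit_cols = np.where(density_spread >= thresh)
    print("cells flagged for targeted density sampling:", pit_rows.size)
    print("mean between-model SWE spread (m w.e.):", round(float(swe_spread.mean()), 3))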

  11. Sampling strategies for tropical forest nutrient cycling studies: a case study in São Paulo, Brazil

    Directory of Open Access Journals (Sweden)

    G. Sparovek

    1997-12-01

    Full Text Available The precise sampling of soil, biological or micro climatic attributes in tropical forests, which are characterized by a high diversity of species and complex spatial variability, is a difficult task. We found few basic studies to guide sampling procedures. The objective of this study was to define a sampling strategy and data analysis for some parameters frequently used in nutrient cycling studies, i.e., litter amount, total nutrient amounts in litter and its composition (Ca, Mg, K, N and P), and soil attributes at three depths (organic matter, P content, cation exchange capacity and base saturation). A natural remnant forest in the West of São Paulo State (Brazil) was selected as study area and samples were collected in July, 1989. The total amount of litter and its total nutrient amounts had a high spatial independent variance. Conversely, the variance of litter composition was lower and the spatial dependency was peculiar to each nutrient. The sampling strategy for the estimation of litter amounts and the amount of nutrient in litter should be different than the sampling strategy for nutrient composition. For the estimation of litter amounts and the amount of nutrients in litter (related to quantity) a large number of randomly distributed determinations are needed. Otherwise, for the estimation of litter nutrient composition (related to quality) a smaller amount of spatially located samples should be analyzed. The determination of sampling for soil attributes differed according to the depth. Overall, surface samples (0-5 cm) showed high short distance spatial dependent variance, whereas subsurface samples exhibited spatial dependency in longer distances. Short transects with sampling interval of 5-10 m are recommended for surface sampling. Subsurface samples must also be spatially located, but with transects or grids with longer distances between sampling points over the entire area. Composite soil samples would not provide a complete

  12. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    Science.gov (United States)

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
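
    The ratio-reweighting idea can be shown in a few lines: scale the sample mean by how representative the sampled auxiliary (gene-flow model) values are of the whole field. The simulation below uses invented distributions for the auxiliary variable and the true presence rate, so the exact variance reduction is illustrative only.

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical field of 5000 grain-sampling locations. The auxiliary variable x
    # is a gene-flow model output (expected cross-pollination rate); the "true"
    # adventitious presence rate y is correlated with it but noisy.
    N, n = 5000, 200
    x = rng.gamma(shape=0.8, scale=0.01, size=N)
    y = np.clip(x * rng.lognormal(0.0, 0.4, N), 0.0, 1.0)

    true_rate = y.mean()
    X_bar = x.mean()                     # the auxiliary mean is known everywhere

    srs_err, ratio_err = [], []
    for _ in range(3000):
        s = rng.choice(N, n, replace=False)
        srs_err.append(y[s].mean() - true_rate)
        # Ratio reweighting: rescale the sample by how representative the sampled
        # auxiliary values are of the whole field.
        ratio_err.append(y[s].sum() / x[s].sum() * X_bar - true_rate)

    print(f"SRS   RMSE: {np.sqrt(np.mean(np.square(srs_err))):.5f}")
    print(f"ratio RMSE: {np.sqrt(np.mean(np.square(ratio_err))):.5f}")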

  13. An Optimal Sample Data Usage Strategy to Minimize Overfitting and Underfitting Effects in Regression Tree Models Based on Remotely-Sensed Data

    Directory of Open Access Journals (Sweden)

    Yingxin Gu

    2016-11-01

    Full Text Available Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, value from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MADtraining = 2.5 and MADtesting = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
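
    A small Python sketch in the spirit of the procedure described above, using scikit-learn's DecisionTreeRegressor with max_leaf_nodes as a stand-in for the rule-number constraint (the study's rule-based trees are not reproduced here) and synthetic predictor/NDVI data; it replays random train/test splits across training fractions and model sizes and reports the training and testing MAD.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(3)

    # Stand-in data: six Landsat-like predictors and a scaled NDVI target (0-200).
    X = rng.random((3000, 6))
    ndvi = np.clip(200 * (0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2
                          + 0.05 * rng.standard_normal(3000)), 0, 200)

    def mad(a, b):
        return float(np.mean(np.abs(a - b)))

    for train_frac in (0.5, 0.8):
        for leaves in (4, 6, 12, 40):       # leaf count as a stand-in for rule count
            tr, te = [], []
            for rep in range(20):           # random replications
                Xtr, Xte, ytr, yte = train_test_split(
                    X, ndvi, train_size=train_frac, random_state=rep)
                tree = DecisionTreeRegressor(max_leaf_nodes=leaves, random_state=rep)
                tree.fit(Xtr, ytr)
                tr.append(mad(ytr, tree.predict(Xtr)))
                te.append(mad(yte, tree.predict(Xte)))
            print(f"train={train_frac:.1f}  leaves={leaves:3d}  "
                  f"MADtraining={np.mean(tr):5.2f}  MADtesting={np.mean(te):5.2f}")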

  14. Sampling strategies to improve passive optical remote sensing of river bathymetry

    Science.gov (United States)

    Legleiter, Carl; Overstreet, Brandon; Kinzel, Paul J.

    2018-01-01

    Passive optical remote sensing of river bathymetry involves establishing a relation between depth and reflectance that can be applied throughout an image to produce a depth map. Building upon the Optimal Band Ratio Analysis (OBRA) framework, we introduce sampling strategies for constructing calibration data sets that lead to strong relationships between an image-derived quantity and depth across a range of depths. Progressively excluding observations that exceed a series of cutoff depths from the calibration process improved the accuracy of depth estimates and allowed the maximum detectable depth ($d_{max}$) to be inferred directly from an image. Depth retrieval in two distinct rivers also was enhanced by a stratified version of OBRA that partitions field measurements into a series of depth bins to avoid biases associated with under-representation of shallow areas in typical field data sets. In the shallower, clearer of the two rivers, including the deepest field observations in the calibration data set did not compromise depth retrieval accuracy, suggesting that $d_{max}$ was not exceeded and the reach could be mapped without gaps. Conversely, in the deeper and more turbid stream, progressive truncation of input depths yielded a plausible estimate of $d_{max}$ consistent with theoretical calculations based on field measurements of light attenuation by the water column. This result implied that the entire channel, including pools, could not be mapped remotely. However, truncation improved the accuracy of depth estimates in areas shallower than $d_{max}$, which comprise the majority of the channel and are of primary interest for many habitat-oriented applications.
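
    The progressive-truncation step can be sketched as follows, assuming a synthetic calibration set in which the log band ratio stops responding to depth beyond an assumed saturation depth; the script refits the depth-X relation for a series of cutoff depths and reports the cutoff with the strongest relation as a crude proxy for d_max. The band reflectance model, noise level and selection criterion are simplifications, not the OBRA implementation.

    import numpy as np

    rng = np.random.default_rng(11)

    # Synthetic calibration data: field depths (m) and two-band reflectances whose
    # log ratio stops responding to depth beyond an assumed saturation depth.
    n = 400
    depth = rng.uniform(0.05, 3.0, n)
    d_sat = 1.8                                       # assumed optical saturation depth
    eff = np.minimum(depth, d_sat)
    x = np.log((0.30 * np.exp(-0.8 * eff) + 0.02) /
               (0.25 * np.exp(-2.0 * eff) + 0.02))    # image-derived quantity X = ln(b1/b2)
    x += rng.normal(0, 0.02, n)

    # Progressively exclude observations deeper than a series of cutoffs and keep
    # the calibration whose linear X-depth relation is strongest.
    best = None
    for cutoff in np.arange(0.5, 3.01, 0.25):
        keep = depth <= cutoff
        if keep.sum() < 30:
            continue
        slope, intercept = np.polyfit(x[keep], depth[keep], 1)
        resid = depth[keep] - (slope * x[keep] + intercept)
        r2 = 1.0 - np.sum(resid ** 2) / np.sum((depth[keep] - depth[keep].mean()) ** 2)
        print(f"cutoff {cutoff:4.2f} m   n={keep.sum():3d}   R^2={r2:.3f}")
        if best is None or r2 > best[1]:
            best = (cutoff, r2)

    print(f"crude d_max proxy (cutoff with strongest relation): {best[0]:.2f} m")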

  15. Comparison of sampling strategies for tobacco retailer inspections to maximize coverage in vulnerable areas and minimize cost.

    Science.gov (United States)

    Lee, Joseph G L; Shook-Sa, Bonnie E; Bowling, J Michael; Ribisl, Kurt M

    2017-06-23

    In the United States, tens of thousands of inspections of tobacco retailers are conducted each year. Various sampling choices can reduce travel costs, emphasize enforcement in areas with greater non-compliance, and allow for comparability between states and over time. We sought to develop a model sampling strategy for state tobacco retailer inspections. Using a 2014 list of 10,161 North Carolina tobacco retailers, we compared results from simple random sampling; stratified, clustered at the ZIP code sampling; and, stratified, clustered at the census tract sampling. We conducted a simulation of repeated sampling and compared approaches for their comparative level of precision, coverage, and retailer dispersion. While maintaining an adequate design effect and statistical precision appropriate for a public health enforcement program, both stratified, clustered ZIP- and tract-based approaches were feasible. Both ZIP and tract strategies yielded improvements over simple random sampling, with relative improvements, respectively, of average distance between retailers (reduced 5.0% and 1.9%), percent Black residents in sampled neighborhoods (increased 17.2% and 32.6%), percent Hispanic residents in sampled neighborhoods (reduced 2.2% and increased 18.3%), percentage of sampled retailers located near schools (increased 61.3% and 37.5%), and poverty rate in sampled neighborhoods (increased 14.0% and 38.2%). States can make retailer inspections more efficient and targeted with stratified, clustered sampling. Use of statistically appropriate sampling strategies like these should be considered by states, researchers, and the Food and Drug Administration to improve program impact and allow for comparisons over time and across states. The authors present a model tobacco retailer sampling strategy for promoting compliance and reducing costs that could be used by U.S. states and the Food and Drug Administration (FDA). The design is feasible to implement in North Carolina. Use of
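
    A schematic version of a stratified, clustered design of this kind is sketched below with a fabricated retailer list: ZIP codes are stratified by neighborhood poverty, a fixed number of ZIP clusters is drawn per stratum, and a fixed number of retailers is drawn per selected ZIP. The strata definitions, cluster counts and poverty variable are placeholders, not the North Carolina design.

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(5)

    # Fabricated retailer list: one row per tobacco retailer, with its ZIP code and
    # a neighborhood poverty rate used here to form strata.
    n_retailers = 10000
    retailers = pd.DataFrame({
        "zip": rng.integers(27000, 27300, n_retailers).astype(str),
        "poverty": rng.beta(2, 6, n_retailers),
    })

    # Stratify ZIP codes by mean poverty, then cluster-sample ZIPs per stratum and
    # draw a fixed number of retailers within each selected ZIP.
    zip_poverty = retailers.groupby("zip")["poverty"].mean()
    strata = pd.qcut(zip_poverty, 3, labels=["low", "mid", "high"])

    zips_per_stratum, retailers_per_zip = 10, 5
    selected = []
    for level in ("low", "mid", "high"):
        zip_pool = zip_poverty[strata == level].index.to_numpy()
        for z in rng.choice(zip_pool, zips_per_stratum, replace=False):
            pool = retailers.index[retailers["zip"] == z].to_numpy()
            k = min(retailers_per_zip, pool.size)
            selected.extend(rng.choice(pool, k, replace=False))

    sample = retailers.loc[selected]
    print("sampled retailers:", len(sample))
    print(sample.groupby(sample["zip"].map(strata))["poverty"].mean())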

  16. Towards an optimal sampling strategy for assessing genetic variation within and among white clover (Trifolium repens L.) cultivars using AFLP

    Directory of Open Access Journals (Sweden)

    Khosro Mehdi Khanlou

    2011-01-01

    Full Text Available Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants, such as the aforementioned, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using less than 15 per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of total intra-cultivar genetic variation was covered. Based on AMOVA, a 20-cultivar sample was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.
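
    The sample-size simulation can be mimicked with a few lines of Python on a synthetic 0/1 AFLP matrix: repeatedly subsample plants, recompute expected heterozygosity (treating band frequencies as allele frequencies, which is a simplification for dominant markers), and watch how much of the full-sample diversity is recovered. The locus counts and band frequencies below are invented.

    import numpy as np

    rng = np.random.default_rng(8)

    # Hypothetical AFLP scores for one cultivar: 40 plants x 283 dominant (0/1) loci.
    n_plants, n_loci = 40, 283
    band_freq = rng.beta(1.5, 1.5, n_loci)
    aflp = (rng.random((n_plants, n_loci)) < band_freq).astype(int)

    def diversity(matrix):
        """He and Shannon index, treating band frequency as allele frequency
        (a simplification for dominant AFLP markers)."""
        p = np.clip(matrix.mean(axis=0), 1e-9, 1 - 1e-9)
        he = np.mean(2 * p * (1 - p))
        shannon = np.mean(-(p * np.log(p) + (1 - p) * np.log(1 - p)))
        return he, shannon

    he_full, _ = diversity(aflp)
    for n in (5, 10, 15, 20, 30, 40):
        he_sub = np.mean([diversity(aflp[rng.choice(n_plants, n, replace=False)])[0]
                          for _ in range(500)])
        print(f"n={n:2d}  He={he_sub:.4f}  ({100 * he_sub / he_full:5.1f}% of full-sample He)")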

  17. Sample design and gamma-ray counting strategy of neutron activation system for triton burnup measurements in KSTAR

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Jungmin [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of); Cheon, Mun Seong [ITER Korea, National Fusion Research Institute, Daejeon (Korea, Republic of); Chung, Kyoung-Jae, E-mail: jkjlsh1@snu.ac.kr [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of); Hwang, Y.S. [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of)

    2016-11-01

    Highlights: • Sample design for triton burnup ratio measurement is carried out. • Samples for 14.1 MeV neutron measurements are selected for KSTAR. • Si and Cu are the most suitable materials for d-t neutron measurements. • Appropriate γ-ray counting strategies for each selected sample are established. - Abstract: For the purpose of triton burnup measurements in Korea Superconducting Tokamak Advanced Research (KSTAR) deuterium plasmas, appropriate neutron activation system (NAS) samples for 14.1 MeV d-t neutron measurements have been designed and a gamma-ray counting strategy has been established. Neutronics calculations are performed with the MCNP5 neutron transport code for the KSTAR neutral beam heated deuterium plasma discharges. Based on those calculations and the assumed d-t neutron yield, the activities induced by d-t neutrons are estimated with the inventory code FISPACT-2007 for candidate sample materials: Si, Cu, Al, Fe, Nb, Co, Ti, and Ni. It is found that Si, Cu, Al, and Fe are suitable for the KSTAR NAS in terms of the minimum detectable activity (MDA) calculated based on the standard deviation of blank measurements. Considering background gamma-rays radiated from surrounding structures activated by thermalized fusion neutrons, an appropriate gamma-ray counting strategy for each selected sample is established.

  18. An Energy Efficient Localization Strategy for Outdoor Objects based on Intelligent Light-Intensity Sampling

    OpenAIRE

    Sandnes, Frode Eika

    2010-01-01

    A simple and low cost strategy for implementing pervasive objects that identify and track their own geographical location is proposed. The strategy, which is not reliant on any GIS infrastructure such as GPS, is realized using an electronic artifact with a built in clock, a light sensor, or low-cost digital camera, persistent storage such as flash and sufficient computational circuitry to make elementary trigonometric computations. The object monitors the lighting conditions and thereby detec...

  19. Evaluation of 5-FU pharmacokinetics in cancer patients with DPD deficiency using a Bayesian limited sampling strategy

    NARCIS (Netherlands)

    Van Kuilenburg, A.; Hausler, P.; Schalhorn, A.; Tanck, M.; Proost, J.H.; Terborg, C.; Behnke, D.; Schwabe, W.; Jabschinsky, K.; Maring, J.G.

    Aims: Dihydropyrimidine dehydrogenase (DPD) is the initial enzyme in the catabolism of 5-fluorouracil (5FU) and DPD deficiency is an important pharmacogenetic syndrome. The main purpose of this study was to develop a limited sampling strategy to evaluate the pharmacokinetics of 5FU and to detect
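
    As a hedged illustration of a Bayesian limited-sampling estimate, the sketch below computes a MAP value of 5-FU clearance from two steady-state concentrations under a one-parameter infusion model (C = R0/CL) with a lognormal population prior; all rates, prior parameters and error terms are assumed values, not those of the study's population model.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Illustrative assumptions only: steady-state model C = R0 / CL for a continuous
    # 5-FU infusion, a lognormal population prior on clearance, additive residual error.
    R0 = 75.0                        # infusion rate (mg/h), assumed
    cl_pop, omega = 150.0, 0.35      # population clearance (L/h) and between-subject SD
    sigma = 0.06                     # residual SD on concentrations (mg/L), assumed
    c_obs = np.array([0.65, 0.72])   # limited sampling: two steady-state levels (mg/L)

    def neg_log_posterior(log_cl):
        cl = np.exp(log_cl)
        prior = (log_cl - np.log(cl_pop)) ** 2 / (2 * omega ** 2)
        likelihood = np.sum((c_obs - R0 / cl) ** 2) / (2 * sigma ** 2)
        return prior + likelihood

    res = minimize_scalar(neg_log_posterior, bounds=(np.log(10), np.log(600)),
                          method="bounded")
    cl_map = np.exp(res.x)
    print(f"MAP clearance: {cl_map:.1f} L/h (population prior {cl_pop:.0f} L/h)")
    print(f"individual steady-state concentration estimate: {R0 / cl_map:.2f} mg/L")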

  20. Integrating the Theory of Sampling into Underground Mine Grade Control Strategies

    Directory of Open Access Journals (Sweden)

    Simon C. Dominy

    2018-05-01

    Full Text Available Grade control in underground mines aims to deliver quality tonnes to the process plant via the accurate definition of ore and waste. It comprises a decision-making process including data collection and interpretation; local estimation; development and mining supervision; ore and waste destination tracking; and stockpile management. The foundation of any grade control programme is that of high-quality samples collected in a geological context. The requirement for quality samples has long been recognised, where they should be representative and fit-for-purpose. Once a sampling error is introduced, it propagates through all subsequent processes contributing to data uncertainty, which leads to poor decisions and financial loss. Proper application of the Theory of Sampling reduces errors during sample collection, preparation, and assaying. To achieve quality, sampling techniques must minimise delimitation, extraction, and preparation errors. Underground sampling methods include linear (chip and channel, grab (broken rock, and drill-based samples. Grade control staff should be well-trained and motivated, and operating staff should understand the critical need for grade control. Sampling must always be undertaken with a strong focus on safety and alternatives sought if the risk to humans is high. A quality control/quality assurance programme must be implemented, particularly when samples contribute to a reserve estimate. This paper assesses grade control sampling with emphasis on underground gold operations and presents recommendations for optimal practice through the application of the Theory of Sampling.

  1. A comparison of sample preparation strategies for biological tissues and subsequent trace element analysis using LA-ICP-MS.

    Science.gov (United States)

    Bonta, Maximilian; Török, Szilvia; Hegedus, Balazs; Döme, Balazs; Limbeck, Andreas

    2017-03-01

    Laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) is one of the most commonly applied methods for lateral trace element distribution analysis in medical studies. Many improvements of the technique regarding quantification and achievable lateral resolution have been achieved in the last years. Nevertheless, sample preparation is also of major importance and the optimal sample preparation strategy still has not been defined. While conventional histology knows a number of sample pre-treatment strategies, little is known about the effect of these approaches on the lateral distributions of elements and/or their quantities in tissues. The technique of formalin fixation and paraffin embedding (FFPE) has emerged as the gold standard in tissue preparation. However, the potential use for elemental distribution studies is questionable due to a large number of sample preparation steps. In this work, LA-ICP-MS was used to examine the applicability of the FFPE sample preparation approach for elemental distribution studies. Qualitative elemental distributions as well as quantitative concentrations in cryo-cut tissues as well as FFPE samples were compared. Results showed that some metals (especially Na and K) are severely affected by the FFPE process, whereas others (e.g., Mn, Ni) are less influenced. Based on these results, a general recommendation can be given: FFPE samples are completely unsuitable for the analysis of alkaline metals. When analyzing transition metals, FFPE samples can give comparable results to snap-frozen tissues. Graphical abstract Sample preparation strategies for biological tissues are compared with regard to the elemental distributions and average trace element concentrations.

  2. Anatomical robust optimization to account for nasal cavity filling variation during intensity-modulated proton therapy: a comparison with conventional and adaptive planning strategies

    Science.gov (United States)

    van de Water, Steven; Albertini, Francesca; Weber, Damien C.; Heijmen, Ben J. M.; Hoogeman, Mischa S.; Lomax, Antony J.

    2018-01-01

    The aim of this study is to develop an anatomical robust optimization method for intensity-modulated proton therapy (IMPT) that accounts for interfraction variations in nasal cavity filling, and to compare it with conventional single-field uniform dose (SFUD) optimization and online plan adaptation. We included CT data of five patients with tumors in the sinonasal region. Using the planning CT, we generated for each patient 25 ‘synthetic’ CTs with varying nasal cavity filling. The robust optimization method available in our treatment planning system ‘Erasmus-iCycle’ was extended to also account for anatomical uncertainties by including (synthetic) CTs with varying patient anatomy as error scenarios in the inverse optimization. For each patient, we generated treatment plans using anatomical robust optimization and, for benchmarking, using SFUD optimization and online plan adaptation. Clinical target volume (CTV) and organ-at-risk (OAR) doses were assessed by recalculating the treatment plans on the synthetic CTs, evaluating dose distributions individually and accumulated over an entire fractionated 50 GyRBE treatment, assuming each synthetic CT to correspond to a 2 GyRBE fraction. Treatment plans were also evaluated using actual repeat CTs. Anatomical robust optimization resulted in adequate CTV doses (V95%  ⩾  98% and V107%  ⩽  2%) if at least three synthetic CTs were included in addition to the planning CT. These CTV requirements were also fulfilled for online plan adaptation, but not for the SFUD approach, even when applying a margin of 5 mm. Compared with anatomical robust optimization, OAR dose parameters for the accumulated dose distributions were on average 5.9 GyRBE (20%) higher when using SFUD optimization and on average 3.6 GyRBE (18%) lower for online plan adaptation. In conclusion, anatomical robust optimization effectively accounted for changes in nasal cavity filling during IMPT, providing substantially improved CTV and

  3. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Directory of Open Access Journals (Sweden)

    Peng-Cheng Yao

    Full Text Available Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of the DNA barcodes in widely distributed species. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcode of globally distributed species should be increased to 11-15.

  4. Radial line-scans as representative sampling strategy in dried-droplet laser ablation of liquid samples deposited on pre-cut filter paper disks

    Science.gov (United States)

    Nischkauer, Winfried; Vanhaecke, Frank; Bernacchi, Sébastien; Herwig, Christoph; Limbeck, Andreas

    2014-11-01

    Nebulising liquid samples and using the aerosol thus obtained for further analysis is the standard method in many current analytical techniques, also with inductively coupled plasma (ICP)-based devices. With such a set-up, quantification via external calibration is usually straightforward for samples with aqueous or close-to-aqueous matrix composition. However, there is a variety of more complex samples. Such samples can be found in medical, biological, technological and industrial contexts and can range from body fluids, like blood or urine, to fuel additives or fermentation broths. Specialized nebulizer systems or careful digestion and dilution are required to tackle such demanding sample matrices. One alternative approach is to convert the liquid into a dried solid and to use laser ablation for sample introduction. Up to now, this approach required the application of internal standards or matrix-adjusted calibration due to matrix effects. In this contribution, we show a way to circumvent these matrix effects while using simple external calibration for quantification. The principle of representative sampling that we propose uses radial line-scans across the dried residue. This compensates for centro-symmetric inhomogeneities typically observed in dried spots. The effectiveness of the proposed sampling strategy is exemplified via the determination of phosphorus in biochemical fermentation media. However, the universal viability of the presented measurement protocol is postulated. Detection limits using laser ablation-ICP-optical emission spectrometry were in the order of 40 μg mL-1 with a reproducibility of 10% relative standard deviation (n = 4, concentration = 10 times the quantification limit). The reported sensitivity is fit-for-purpose in the biochemical context described here, but could be improved using ICP-mass spectrometry, if future analytical tasks would require it. Trueness of the proposed method was investigated by cross-validation with

  5. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Science.gov (United States)

    Yao, Peng-Cheng; Gao, Hai-Yan; Wei, Ya-Nan; Zhang, Jian-Hang; Chen, Xiao-Yong; Li, Hong-Qing

    2017-01-01

    Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcode of globally distributed species should be increased to 11-15.

  6. An evaluation of sampling and full enumeration strategies for Fisher Jenks classification in big data settings

    Science.gov (United States)

    Rey, Sergio J.; Stephens, Philip A.; Laura, Jason R.

    2017-01-01

    Large data contexts present a number of challenges to optimal choropleth map classifiers. Application of optimal classifiers to a sample of the attribute space is one proposed solution. The properties of alternative sampling-based classification methods are examined through a series of Monte Carlo simulations. The impacts of spatial autocorrelation, number of desired classes, and form of sampling are shown to have significant impacts on the accuracy of map classifications. Tradeoffs between improved speed of the sampling approaches and loss of accuracy are also considered. The results suggest the possibility of guiding the choice of classification scheme as a function of the properties of large data sets.
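
    The sampling-versus-full-enumeration comparison can be emulated as below, using a simple 1-D k-means classifier as a stand-in for the Fisher-Jenks optimizer (the exact dynamic-programming classifier is not reproduced); breaks computed from a 5,000-value sample are compared with breaks from the full synthetic attribute vector by the share of units that land in the same class.

    import numpy as np

    rng = np.random.default_rng(12)

    def one_d_kmeans_breaks(values, k, iters=100):
        """1-D k-means used as a simple stand-in for a Fisher-Jenks optimizer."""
        centers = np.quantile(values, np.linspace(0, 1, k + 2)[1:-1])
        for _ in range(iters):
            labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
            new = np.array([values[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        centers = np.sort(centers)
        return (centers[:-1] + centers[1:]) / 2       # k-1 class break points

    # Synthetic attribute for a large choropleth map (e.g., rates for 300k units).
    attr = np.concatenate([rng.lognormal(0.0, 0.5, 200_000),
                           rng.lognormal(1.5, 0.3, 100_000)])

    breaks_full = one_d_kmeans_breaks(attr, k=5)                  # full enumeration
    sample = rng.choice(attr, 5_000, replace=False)               # sampling approach
    breaks_samp = one_d_kmeans_breaks(sample, k=5)

    agreement = np.mean(np.digitize(attr, breaks_full) == np.digitize(attr, breaks_samp))
    print("breaks (full):  ", np.round(breaks_full, 3))
    print("breaks (sample):", np.round(breaks_samp, 3))
    print(f"units assigned to the same class: {agreement:.1%}")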

  7. Optimizing detection of noble gas emission at a former UNE site: sample strategy, collection, and analysis

    Science.gov (United States)

    Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.

    2013-12-01

    Underground nuclear tests may be first detected by seismic or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and methods used for surface and subsurface environmental sampling of gases can be used for an OSI scenario, on-site sampling conditions, required sampling volumes and establishment of background concentrations of noble gases require development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion site located in welded volcanic tuff. A mixture of SF-6, Xe127 and Ar37 was metered into 4400 m3 of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. Effectiveness of various sampling approaches and the results of tracer gas measurements will be presented.

  8. Spatial scan statistics to assess sampling strategy of antimicrobial resistance monitoring programme

    DEFF Research Database (Denmark)

    Vieira, Antonio; Houe, Hans; Wegener, Henrik Caspar

    2009-01-01

    The collection and analysis of data on antimicrobial resistance in human and animal populations are important for establishing a baseline of the occurrence of resistance and for determining trends over time. In animals, targeted monitoring with a stratified sampling plan is normally used. However...... sampled by the Danish Integrated Antimicrobial Resistance Monitoring and Research Programme (DANMAP), by identifying spatial clusters of samples and detecting areas with significantly high or low sampling rates. These analyses were performed for each year and for the total 5-year study period for all...... by an antimicrobial monitoring program.

  9. Comparison of sampling strategies for object-based classification of urban vegetation from Very High Resolution satellite images

    Science.gov (United States)

    Rougier, Simon; Puissant, Anne; Stumpf, André; Lachiche, Nicolas

    2016-09-01

    Vegetation monitoring is becoming a major issue in the urban environment due to the services they procure and necessitates an accurate and up to date mapping. Very High Resolution satellite images enable a detailed mapping of the urban tree and herbaceous vegetation. Several supervised classifications with statistical learning techniques have provided good results for the detection of urban vegetation but necessitate a large amount of training data. In this context, this study proposes to investigate the performances of different sampling strategies in order to reduce the number of examples needed. Two windows based active learning algorithms from state-of-art are compared to a classical stratified random sampling and a third combining active learning and stratified strategies is proposed. The efficiency of these strategies is evaluated on two medium size French cities, Strasbourg and Rennes, associated to different datasets. Results demonstrate that classical stratified random sampling can in some cases be just as effective as active learning methods and that it should be used more frequently to evaluate new active learning methods. Moreover, the active learning strategies proposed in this work enables to reduce the computational runtime by selecting multiple windows at each iteration without increasing the number of windows needed.
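
    A toy comparison of a random-sampling baseline against uncertainty-based active learning is sketched below on synthetic "window" features with scikit-learn; it is not the authors' window-based algorithms, and the margin-based query rule, labeling budget and random forest settings are assumptions chosen only to show the shape of such an experiment.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(9)

    # Synthetic stand-in for window-based vegetation mapping: each "window" is a
    # feature vector; labels play the role of tree / herbaceous / other classes.
    X, y = make_classification(n_samples=6000, n_features=12, n_informative=6,
                               n_classes=3, n_clusters_per_class=2, random_state=0)
    pool_idx = np.arange(4000)            # unlabeled pool the strategies draw from
    test_idx = np.arange(4000, 6000)      # held-out evaluation windows

    def fit_and_score(train_idx):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        return clf.score(X[test_idx], y[test_idx]), clf

    # Baseline: plain random sampling of 300 windows.
    acc_random, _ = fit_and_score(rng.choice(pool_idx, 300, replace=False))

    # Active learning: start with 50 random windows, then query the least confident.
    labeled = list(rng.choice(pool_idx, 50, replace=False))
    for _ in range(5):
        _, clf = fit_and_score(np.array(labeled))
        remaining = np.setdiff1d(pool_idx, labeled)
        proba = np.sort(clf.predict_proba(X[remaining]), axis=1)
        margin = proba[:, -1] - proba[:, -2]          # small margin = uncertain window
        labeled.extend(remaining[np.argsort(margin)[:50]])
    acc_active, _ = fit_and_score(np.array(labeled))

    print(f"random sampling (300 labels): accuracy = {acc_random:.3f}")
    print(f"active learning (300 labels): accuracy = {acc_active:.3f}")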

  10. Radial line-scans as representative sampling strategy in dried-droplet laser ablation of liquid samples deposited on pre-cut filter paper disks

    Energy Technology Data Exchange (ETDEWEB)

    Nischkauer, Winfried [Institute of Chemical Technologies and Analytics, Vienna University of Technology, Vienna (Austria); Department of Analytical Chemistry, Ghent University, Ghent (Belgium); Vanhaecke, Frank [Department of Analytical Chemistry, Ghent University, Ghent (Belgium); Bernacchi, Sébastien; Herwig, Christoph [Institute of Chemical Engineering, Vienna University of Technology, Vienna (Austria); Limbeck, Andreas, E-mail: Andreas.Limbeck@tuwien.ac.at [Institute of Chemical Technologies and Analytics, Vienna University of Technology, Vienna (Austria)

    2014-11-01

    Nebulising liquid samples and using the aerosol thus obtained for further analysis is the standard method in many current analytical techniques, also with inductively coupled plasma (ICP)-based devices. With such a set-up, quantification via external calibration is usually straightforward for samples with aqueous or close-to-aqueous matrix composition. However, there is a variety of more complex samples. Such samples can be found in medical, biological, technological and industrial contexts and can range from body fluids, like blood or urine, to fuel additives or fermentation broths. Specialized nebulizer systems or careful digestion and dilution are required to tackle such demanding sample matrices. One alternative approach is to convert the liquid into a dried solid and to use laser ablation for sample introduction. Up to now, this approach required the application of internal standards or matrix-adjusted calibration due to matrix effects. In this contribution, we show a way to circumvent these matrix effects while using simple external calibration for quantification. The principle of representative sampling that we propose uses radial line-scans across the dried residue. This compensates for centro-symmetric inhomogeneities typically observed in dried spots. The effectiveness of the proposed sampling strategy is exemplified via the determination of phosphorus in biochemical fermentation media. However, the universal viability of the presented measurement protocol is postulated. Detection limits using laser ablation-ICP-optical emission spectrometry were in the order of 40 μg mL−1 with a reproducibility of 10% relative standard deviation (n = 4, concentration = 10 times the quantification limit). The reported sensitivity is fit-for-purpose in the biochemical context described here, but could be improved using ICP-mass spectrometry, if future analytical tasks would require it. Trueness of the proposed method was investigated by cross-validation with

  11. Technical Note: Comparison of storage strategies of sea surface microlayer samples

    Directory of Open Access Journals (Sweden)

    K. Schneider-Zapp

    2013-07-01

    Full Text Available The sea surface microlayer (SML is an important biogeochemical system whose physico-chemical analysis often necessitates some degree of sample storage. However, many SML components degrade with time so the development of optimal storage protocols is paramount. We here briefly review some commonly used treatment and storage protocols. Using freshwater and saline SML samples from a river estuary, we investigated temporal changes in surfactant activity (SA and the absorbance and fluorescence of chromophoric dissolved organic matter (CDOM over four weeks, following selected sample treatment and storage protocols. Some variability in the effectiveness of individual protocols most likely reflects sample provenance. None of the various protocols examined performed any better than dark storage at 4 °C without pre-treatment. We therefore recommend storing samples refrigerated in the dark.

  12. LC-MS analysis of the plasma metabolome–a novel sample preparation strategy

    DEFF Research Database (Denmark)

    Skov, Kasper; Hadrup, Niels; Smedsgaard, Jørn

    2015-01-01

    Blood plasma is a well-known body fluid often analyzed in studies on the effects of toxic compounds, as physiological or chemically induced changes in the mammalian body are reflected in the plasma metabolome. Sample preparation prior to LC-MS based analysis of the plasma metabolome is a challenge...... as plasma contains compounds with very different properties. Besides proteins, which usually are precipitated with organic solvent, phospholipids are known to cause ion suppression in electrospray mass spectrometry. We have compared two different sample preparation techniques prior to LC-qTOF analysis...... of plasma samples: The first is protein precipitation; the second is protein precipitation followed by solid phase extraction with sub-fractionation into three sub-samples: a phospholipid, a lipid and a polar sub-fraction. Molecular feature extraction of the data files from LC-qTOF analysis of the samples

  13. A rapid, accurate and robust particle-based assay for the simultaneous screening of plasma samples for the presence of five different anti-cytokine autoantibodies

    DEFF Research Database (Denmark)

    Guldager, Daniel Kring Rasmussen; von Stemann, Jakob Hjorth; Larsen, Rune

    2015-01-01

    suitable for larger screenings. Based on confirmed antibody binding characteristics and the resultant reactivity in this multiplex assay, a classification of the c-aAb levels was suggested. The screening results of the recipients who received blood transfusions indicate that more studies are needed...... plasma samples and pooled normal immunoglobulin preparations were used to validate the assay. Plasma samples from 98 transfusion recipients, half of whom presented with febrile reactions, were tested by the assay. RESULTS: The assay detected specific and saturable immunoglobulin G (IgG) binding to each...... cytokine autoantibodies quantities in the negative plasma samples ranged between 80% and 125%. The analytical intra- and inter-assay variations were 4% and 11%, respectively. Varying c-aAb levels were detectable in the transfusion recipients. There was no difference in c-aAb frequency between the patients...

  14. Robust and distributed hypothesis testing

    CERN Document Server

    Gül, Gökhan

    2017-01-01

    This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available, and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general model, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers as well as modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation and the theory of robust distributed detection is extended to classes of distributions, which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...

  15. Application of robust NiTi-ZrO2-PEG SPME fiber in the determination of haloanisoles in cork stopper samples

    International Nuclear Information System (INIS)

    Budziak, Dilma; Martendal, Edmar; Carasek, Eduardo

    2008-01-01

    In this study, a novel solid-phase microextraction (SPME) fiber obtained using sol-gel technology was applied in the determination of off-flavor compounds (2,4,6-trichloroanisole (TCA), 2,4,6-tribromoanisole (TBA) and pentachloroanisole (PCA)) present in cork stopper samples. A NiTi alloy previously electrodeposited with zirconium oxide was used as the substrate for a poly(ethylene glycol) (PEG) coating. Scanning electron microscopy showed good uniformity of the coating and allowed the coating thickness to be estimated as around 17 μm. The main parameters influencing the extraction efficiency, such as cork sample mass, sodium chloride mass, extraction temperature and extraction time, were optimized using a full factorial design, followed by a Doehlert design. The optimum conditions were: 20 min of extraction at 70 deg. C using 60 mg of the cork sample and 10 mL of water saturated with sodium chloride in a 20 mL amber vial with constant magnetic stirring. Satisfactory detection limits between 2.5 and 5.1 ng g-1 were obtained, as well as good precision (R.S.D. in the range of 5.8-12.0%). Recovery tests were performed on three different cork samples, and values between 83 and 119% were obtained. The proposed SPME fiber was compared with commercially available fibers and good results were achieved, demonstrating its applicability

  16. Study of radioelements drained by Rhone stream to Mediterranean Sea: Strategy of sampling and methodology

    International Nuclear Information System (INIS)

    Arnaud, M.; Charmasson, S.; Calmet, D.; Fernandez, J.M.

    1992-01-01

    This paper describes the methods used for water and sediment sampling in rivers and at sea. The purpose is the study of radionuclide migration (cesium-134, cesium-137) in the Mediterranean Sea (Gulf of Lion). 20 refs., 11 figs., 1 tab

  17. Prescription drug samples--does this marketing strategy counteract policies for quality use of medicines?

    Science.gov (United States)

    Groves, K E M; Sketris, I; Tett, S E

    2003-08-01

    Prescription drug samples, as used by the pharmaceutical industry to market their products, are of current interest because of their influence on prescribing, and their potential impact on consumer safety. Very little research has been conducted into the use and misuse of prescription drug samples, and the influence of samples on health policies designed to improve the rational use of medicines. This is a topical issue in the prescription drug debate, with increasing costs and increasing concerns about optimizing use of medicines. This manuscript critically evaluates the research that has been conducted to date about prescription drug samples, discusses the issues raised in the context of traditional marketing theory, and suggests possible alternatives for the future.

  18. The new strategy for particle identification samples in Run 2 at LHCb

    CERN Multimedia

    Mathad, Abhijit

    2017-01-01

    For Run 2 of LHCb data taking, the selection of PID calibration samples is implemented in the high level trigger. Further processing is needed to provide background-subtracted samples to determine the PID performance, or to develop new algorithms for the evaluation of the detector performance in upgrade scenarios. This is achieved through a centralised production which makes efficient use of LHCb computing resources. This poster presents the major steps of the implementation.

  19. Marine anthropogenic radiotracers in the Southern Hemisphere: New sampling and analytical strategies

    DEFF Research Database (Denmark)

    Levy, I.; Povinec, P.P.; Aoyama, M.

    2011-01-01

    The Japan Agency for Marine Earth Science and Technology conducted in 2003–2004 the Blue Earth Global Expedition (BEAGLE2003) around the Southern Hemisphere Oceans, which was a rare opportunity to collect many seawater samples for anthropogenic radionuclide studies. We describe here sampling...... showed a reasonable agreement between the participating laboratories. The obtained data on the distribution of 137Cs and plutonium isotopes in seawater represent the most comprehensive results available for the Southern Hemisphere Oceans....

  20. Robust Strategy for Crafting Li5Cr7Ti6O25@CeO2 Composites as High-Performance Anode Material for Lithium-Ion Battery.

    Science.gov (United States)

    Mei, Jie; Yi, Ting-Feng; Li, Xin-Yuan; Zhu, Yan-Rong; Xie, Ying; Zhang, Chao-Feng

    2017-07-19

    A facile strategy was developed to prepare Li5Cr7Ti6O25@CeO2 composites as a high-performance anode material. X-ray diffraction (XRD) and Rietveld refinement results show that the CeO2 coating does not alter the structure of Li5Cr7Ti6O25 but increases the lattice parameter. Scanning electron microscopy (SEM) indicates that all samples have similar morphologies with a homogeneous particle distribution in the range of 100-500 nm. Energy-dispersive spectroscopy (EDS) mapping and high-resolution transmission electron microscopy (HRTEM) prove that CeO2 successfully formed a coating layer on the surface of the Li5Cr7Ti6O25 particles and supplied a good conductive connection between the Li5Cr7Ti6O25 particles. The electrochemical characterization reveals that the Li5Cr7Ti6O25@CeO2 (3 wt %) electrode shows the highest reversibility of the insertion and deinsertion behavior of Li ions, the smallest electrochemical polarization, the best lithium-ion mobility among all electrodes, and a better electrochemical activity than the pristine one. Therefore, the Li5Cr7Ti6O25@CeO2 (3 wt %) electrode shows the highest delithiation and lithiation capacities at each rate. At a 5 C charge-discharge rate, the pristine Li5Cr7Ti6O25 only delivers an initial delithiation capacity of ∼94.7 mAh g⁻¹, and the delithiation capacity merely reaches 87.4 mAh g⁻¹ even after 100 cycles. However, Li5Cr7Ti6O25@CeO2 (3 wt %) delivers an initial delithiation capacity of 107.5 mAh g⁻¹, and the delithiation capacity still reaches 100.5 mAh g⁻¹ after 100 cycles. The cerium dioxide modification is a direct and efficient approach to improving the delithiation and lithiation capacities and the cycling stability of Li5Cr7Ti6O25 at large current densities.

  1. Actual distribution of Cronobacter spp. in industrial batches of powdered infant formula and consequences for performance of sampling strategies.

    Science.gov (United States)

    Jongenburger, I; Reij, M W; Boer, E P J; Gorris, L G M; Zwietering, M H

    2011-11-15

    The actual spatial distribution of microorganisms within a batch of food influences the results of sampling for microbiological testing when this distribution is non-homogeneous. In the case of pathogens being non-homogeneously distributed, it markedly influences public health risk. This study investigated the spatial distribution of Cronobacter spp. in powdered infant formula (PIF) on an industrial batch scale for both a recalled batch as well as a reference batch. Additionally, local spatial occurrence of clusters of Cronobacter cells was assessed, as well as the performance of typical sampling strategies to determine the presence of the microorganisms. The concentration of Cronobacter spp. was assessed in the course of the filling time of each batch, by taking samples of 333 g using the most probable number (MPN) enrichment technique. The occurrence of clusters of Cronobacter spp. cells was investigated by plate counting. From the recalled batch, 415 MPN samples were drawn. The expected heterogeneous distribution of Cronobacter spp. could be quantified from these samples, which showed no detectable level (detection limit of -2.52 log CFU/g) in 58% of samples, whilst in the remainder concentrations were found to be between -2.52 and 2.75 log CFU/g. The estimated average concentration in the recalled batch was -2.78 log CFU/g, with a standard deviation of 1.10 log CFU/g. The estimated average concentration in the reference batch was -4.41 log CFU/g, with 99% of the 93 samples being below the detection limit. In the recalled batch, clusters of cells occurred sporadically in 8 out of 2290 samples of 1 g taken. The two largest clusters contained 123 (2.09 log CFU/g) and 560 (2.75 log CFU/g) cells. Various sampling strategies were evaluated for the recalled batch. Taking more and smaller samples and keeping the total sampling weight constant considerably improved the performance of the sampling plans to detect such a type of contaminated batch. Compared to random sampling
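
    The effect of splitting a fixed total sampling weight into more, smaller samples can be explored with a small simulation. The sketch below is only an illustration of that idea: it assumes between-sample variability following the recalled batch's reported mean (-2.78 log CFU/g) and standard deviation (1.10 log CFU/g) on the log scale, with Poisson counts within a sample; it is not the study's actual evaluation.

      import numpy as np

      rng = np.random.default_rng(1)

      def detection_probability(n_samples, grams_per_sample, mean_log=-2.78, sd_log=1.10, n_runs=10_000):
          """Estimate P(detect >= 1 cell) for a sampling plan, assuming the log10 CFU/g of
          each sample is drawn from Normal(mean_log, sd_log) (values from the recalled
          batch above) and cell counts within a sample are Poisson."""
          detected = 0
          for _ in range(n_runs):
              log_conc = rng.normal(mean_log, sd_log, n_samples)      # log10 CFU/g per sample
              expected_cells = (10.0 ** log_conc) * grams_per_sample  # mean cells per sample
              counts = rng.poisson(expected_cells)
              detected += counts.sum() > 0
          return detected / n_runs

      # same total sampling weight (300 g), split differently
      print(detection_probability(n_samples=1,  grams_per_sample=300))
      print(detection_probability(n_samples=30, grams_per_sample=10))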

  2. Evaluation of active sampling strategies for the determination of 1,3-butadiene in air

    Science.gov (United States)

    Vallecillos, Laura; Maceira, Alba; Marcé, Rosa Maria; Borrull, Francesc

    2018-03-01

    Two analytical methods for determining levels of 1,3-butadiene in urban and industrial atmospheres were evaluated in this study. Both methods are extensively used for determining the concentration of volatile organic compounds in the atmosphere and involve collecting samples by active adsorptive enrichment on solid sorbents. The first method uses activated charcoal as the sorbent and involves liquid desorption with carbon disulfide. The second involves the use of a multi-sorbent bed with two graphitised carbons and a carbon molecular sieve as the sorbent, with thermal desorption. Special attention was paid to the optimization of the sampling procedure through the study of sample volume, the stability of 1,3-butadiene once inside the sampling tube and the effect of humidity. In the end, the thermal desorption method showed better repeatability and limits of detection and quantification for 1,3-butadiene than the liquid desorption method, which makes the thermal desorption method more suitable for analysing air samples from both industrial and urban atmospheres. However, sampling must be performed with a pre-tube filled with a drying agent to prevent the loss of the adsorption capacity of the solid adsorbent caused by water vapour. The thermal desorption method has successfully been applied to determine 1,3-butadiene inside a 1,3-butadiene production plant and at three locations in the vicinity of the same plant.

  3. The Personality Assessment Inventory as a proxy for the Psychopathy Checklist Revised: testing the incremental validity and cross-sample robustness of the Antisocial Features Scale.

    Science.gov (United States)

    Douglas, Kevin S; Guy, Laura S; Edens, John F; Boer, Douglas P; Hamilton, Jennine

    2007-09-01

    The Personality Assessment Inventory's (PAI's) ability to predict psychopathic personality features, as assessed by the Psychopathy Checklist-Revised (PCL-R), was examined. To investigate whether the PAI Antisocial Features (ANT) Scale and subscales possessed incremental validity beyond other theoretically relevant PAI scales, optimized regression equations were derived in a sample of 281 Canadian federal offenders. ANT, or ANT-Antisocial Behavior (ANT-A), demonstrated unique variance in regression analyses predicting PCL-R total and Factor 2 (Lifestyle Impulsivity and Social Deviance) scores, but only the Dominance (DOM) Scale was retained in models predicting Factor 1 (Interpersonal and Affective Deficits). Attempts to cross-validate the regression equations derived from the first sample on a sample of 85 U.S. sex offenders resulted in considerable validity shrinkage, with the ANT Scale in isolation performing comparably to or better than the statistical models for PCL-R total and Factor 2 scores. Results offer limited evidence of convergent validity between the PAI and the PCL-R.

  4. Strategies and equipment for sampling suspended sediment and associated toxic chemicals in large rivers - with emphasis on the Mississippi River

    Science.gov (United States)

    Meade, R.H.; Stevens, H.H.

    1990-01-01

    A Lagrangian strategy for sampling large rivers, which was developed and tested in the Orinoco and Amazon Rivers of South America during the early 1980s, is now being applied to the study of toxic chemicals in the Mississippi River. A series of 15-20 cross-sections of the Mississippi mainstem and its principal tributaries is sampled by boat in downstream sequence, beginning upriver of St. Louis and concluding downriver of New Orleans 3 weeks later. The timing of the downstream sampling sequence approximates the travel time of the river water. Samples at each cross-section are discharge-weighted to provide concentrations of dissolved and suspended constituents that are converted to fluxes. Water-sediment mixtures are collected from 10-40 equally spaced points across the river width by sequential depth integration at a uniform vertical transit rate. Essential equipment includes (i) a hydraulic winch, for sensitive control of vertical transit rates, and (ii) a collapsible-bag sampler, which allows integrated samples to be collected at all depths in the river. A section is usually sampled in 4-8 h, for a total sample recovery of 100-120 l. Sampled concentrations of suspended silt and clay are reproducible within 3%.
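
    A minimal sketch of how per-vertical measurements combine into a discharge-weighted cross-section concentration and a constituent flux; the per-vertical discharges and concentrations are hypothetical values, and this is not the field protocol itself.

      import numpy as np

      # Hypothetical verticals across one cross-section: local water discharge (m3/s) and
      # suspended-sediment concentration (mg/L) from depth-integrated sampling.
      discharge = np.array([120.0, 340.0, 560.0, 410.0, 150.0])       # per-vertical discharge, m3/s
      concentration = np.array([180.0, 210.0, 250.0, 230.0, 190.0])   # mg/L

      # Discharge-weighted mean concentration for the cross-section
      c_weighted = np.average(concentration, weights=discharge)

      # Constituent flux: sum over verticals of Q_i * C_i  (m3/s * mg/L = g/s)
      flux_g_per_s = np.sum(discharge * concentration)

      print(f"discharge-weighted concentration: {c_weighted:.1f} mg/L")
      print(f"suspended-sediment flux: {flux_g_per_s / 1000:.1f} kg/s")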

  5. [Identification of Systemic Contaminations with Legionella Spec. in Drinking Water Plumbing Systems: Sampling Strategies and Corresponding Parameters].

    Science.gov (United States)

    Völker, S; Schreiber, C; Müller, H; Zacharias, N; Kistemann, T

    2017-05-01

    After the amendment of the Drinking Water Ordinance in 2011, the requirements for the hygienic-microbiological monitoring of drinking water installations have increased significantly. In the BMBF-funded project "Biofilm Management" (2010-2014), we examined the extent to which established sampling strategies in practice can uncover drinking water plumbing systems systemically colonized with Legionella. Moreover, we investigated additional parameters that might be suitable for detecting systemic contaminations. We subjected the drinking water plumbing systems of 8 buildings with known microbial contamination (Legionella) to an intensive hygienic-microbiological sampling with high spatial and temporal resolution. A total of 626 hot drinking water samples were analyzed with classical culture-based methods. In addition, comprehensive hygienic observations were conducted in each building and qualitative interviews with operators and users were applied. Collected tap-specific parameters were quantitatively analyzed by means of sensitivity and accuracy calculations. The systemic presence of Legionella in drinking water plumbing systems has a high spatial and temporal variability. Established sampling strategies were only partially suitable to detect long-term Legionella contaminations in practice. In particular, the sampling of hot water at the calorifier and circulation re-entrance showed little significance in terms of contamination events. To detect the systemic presence of Legionella, the parameters stagnation (qualitatively assessed) and temperature (compliance with the 5K-rule) showed better results. © Georg Thieme Verlag KG Stuttgart · New York.

  6. Robust Two Degrees-of-freedom Single-current Control Strategy for LCL-type Grid-Connected DG System under Grid-Frequency Fluctuation and Grid-impedance Variation

    DEFF Research Database (Denmark)

    Zhou, Leming; Chen, Yandong; Luo, An

    2016-01-01

    A robust two degrees-of-freedom single-current control (RTDOF-SCC) strategy is proposed, which mainly includes the synchronous reference frame quasi-proportional-integral (SRFQPI) control and robust grid-current-feedback active damping (RGCFAD) control. The proposed SRFQPI control can compensate the local-loads reactive power......, and regulate the instantaneous grid current without steady-state error regardless of the fundamental frequency fluctuation. Simultaneously, the proposed RGCFAD control effectively damps the LCL-resonance peak regardless of the grid-impedance variation, and further improves both transient and steady...

  7. Robust statistical methods with R

    CERN Document Server

    Jureckova, Jana

    2005-01-01

    Robust statistical methods were developed to supplement the classical procedures when the data violate classical assumptions. They are ideally suited to applied research across a broad spectrum of study, yet most books on the subject are narrowly focused, overly theoretical, or simply outdated. Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on practical application.The authors work from underlying mathematical tools to implementation, paying special attention to the computational aspects. They cover the whole range of robust methods, including differentiable statistical functions, distance of measures, influence functions, and asymptotic distributions, in a rigorous yet approachable manner. Highlighting hands-on problem solving, many examples and computational algorithms using the R software supplement the discussion. The book examines the characteristics of robustness, estimators of real parameter, large sample properties, and goodness-of-fit tests. It...

  8. Sampling and Homogenization Strategies Significantly Influence the Detection of Foodborne Pathogens in Meat.

    Science.gov (United States)

    Rohde, Alexander; Hammerl, Jens Andre; Appel, Bernd; Dieckmann, Ralf; Al Dahouk, Sascha

    2015-01-01

    Efficient preparation of food samples, comprising sampling and homogenization, for microbiological testing is an essential, yet largely neglected, component of foodstuff control. Salmonella enterica spiked chicken breasts were used as a surface contamination model whereas salami and meat paste acted as models of inner-matrix contamination. A systematic comparison of different homogenization approaches, namely, stomaching, sonication, and milling by FastPrep-24 or SpeedMill, revealed that for surface contamination a broad range of sample pretreatment steps is applicable and loss of culturability due to the homogenization procedure is marginal. In contrast, for inner-matrix contamination long treatments up to 8 min are required and only FastPrep-24 as a large-volume milling device produced consistently good recovery rates. In addition, sampling of different regions of the spiked sausages showed that pathogens are not necessarily homogenously distributed throughout the entire matrix. Instead, in meat paste the core region contained considerably more pathogens compared to the rim, whereas in the salamis the distribution was more even with an increased concentration within the intermediate region of the sausages. Our results indicate that sampling and homogenization as integral parts of food microbiology and monitoring deserve more attention to further improve food safety.

  9. Optimizing sampling strategy for radiocarbon dating of Holocene fluvial systems in a vertically aggrading setting

    International Nuclear Information System (INIS)

    Toernqvist, T.E.; Dijk, G.J. Van

    1993-01-01

    The authors address the question of how to determine the period of activity (sedimentation) of fossil (Holocene) fluvial systems in vertically aggrading environments. The available data base consists of almost 100 14C ages from the Rhine-Meuse delta. Radiocarbon samples from the tops of lithostratigraphically correlative organic beds underneath overbank deposits (sample type 1) yield consistent ages, indicating a synchronous onset of overbank deposition over distances of at least up to 20 km along channel belts. Similarly, 14C ages from the base of organic residual channel fills (sample type 3) generally indicate a clear termination of within-channel sedimentation. In contrast, 14C ages from the base of organic beds overlying overbank deposits (sample type 2), commonly assumed to represent the end of fluvial sedimentation, show a large scatter reaching up to 1000 14C years. It is concluded that a combination of sample types 1 and 3 generally yields a satisfactory delimitation of the period of activity of a fossil fluvial system. 30 refs., 11 figs., 4 tabs

  10. Sample-efficient Strategies for Learning in the Presence of Noise

    DEFF Research Database (Denmark)

    Cesa-Bianchi, N.; Dichterman, E.; Fischer, Paul

    1999-01-01

    In this paper, we prove various results about PAC learning in the presence of malicious noise. Our main interest is the sample size behavior of learning algorithms. We prove the first nontrivial sample complexity lower bound in this model by showing that on the order of ε/Δ² + d/Δ (up to logarithmic factors) examples are necessary for PAC learning any target class of {0,1}-valued functions of VC dimension d, where ε is the desired accuracy and η = ε/(1 + ε) − Δ the malicious noise rate (it is well known that any nontrivial target class cannot be PAC learned with accuracy ε and malicious noise rate η ≥ ε/(1 + ε), irrespective of sample complexity). We also show that this result cannot be significantly improved in general by presenting efficient learning algorithms for the class of all subsets of d elements and the class of unions of at most d...
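
    The reconstructed lower bound can be evaluated numerically. A minimal sketch with illustrative values of the accuracy, VC dimension and noise gap; constants and logarithmic factors are omitted, as in the statement above.

      def pac_malicious_noise_lower_bound(epsilon, d, delta_gap):
          """Order-of-magnitude sample-size lower bound eps/Delta^2 + d/Delta (log factors
          and constants omitted), where Delta is the gap between the actual malicious noise
          rate and its information-theoretic ceiling eps/(1 + eps)."""
          return epsilon / delta_gap**2 + d / delta_gap

      epsilon = 0.1                        # desired accuracy (illustrative)
      d = 10                               # VC dimension of the target class (illustrative)
      eta_max = epsilon / (1 + epsilon)    # noise ceiling, ~0.0909
      delta_gap = 0.02                     # distance of the noise rate below the ceiling
      print(int(pac_malicious_noise_lower_bound(epsilon, d, delta_gap)))   # ~750 examples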

  11. Successful Sampling Strategy Advances Laboratory Studies of NMR Logging in Unconsolidated Aquifers

    Science.gov (United States)

    Behroozmand, Ahmad A.; Knight, Rosemary; Müller-Petke, Mike; Auken, Esben; Barfod, Adrian A. S.; Ferré, Ty P. A.; Vilhelmsen, Troels N.; Johnson, Carole D.; Christiansen, Anders V.

    2017-11-01

    The nuclear magnetic resonance (NMR) technique has become popular in groundwater studies because it responds directly to the presence and mobility of water in a porous medium. There is a need to conduct laboratory experiments to aid in the development of NMR hydraulic conductivity models, as is typically done in the petroleum industry. However, the challenge has been obtaining high-quality laboratory samples from unconsolidated aquifers. At a study site in Denmark, we employed sonic drilling, which minimizes the disturbance of the surrounding material, and extracted twelve 7.6 cm diameter samples for laboratory measurements. We present a detailed comparison of the acquired laboratory and logging NMR data. The agreement observed between the laboratory and logging data suggests that the methodologies proposed in this study provide good conditions for studying NMR measurements of unconsolidated near-surface aquifers. Finally, we show how laboratory sample size and condition impact the NMR measurements.

  12. Combining multiple hypothesis testing and affinity propagation clustering leads to accurate, robust and sample size independent classification on gene expression data

    Directory of Open Access Journals (Sweden)

    Sakellariou Argiris

    2012-10-01

    Background: A feature selection method in microarray gene expression data should be independent of platform, disease and dataset size. Our hypothesis is that among the statistically significant ranked genes in a gene list, there should be clusters of genes that share similar biological functions related to the investigated disease. Thus, instead of keeping the N top-ranked genes, it would be more appropriate to define and keep a number of gene cluster exemplars. Results: We propose a hybrid FS method (mAP-KL), which combines multiple hypothesis testing and the affinity propagation (AP) clustering algorithm along with the Krzanowski & Lai cluster quality index, to select a small yet informative subset of genes. We applied mAP-KL to real microarray data, as well as to simulated data, and compared its performance against 13 other feature selection approaches. Across a variety of diseases and numbers of samples, mAP-KL presents competitive classification results, particularly in neuromuscular diseases, where its overall AUC score was 0.91. Furthermore, mAP-KL generates concise yet biologically relevant and informative N-gene expression signatures, which can serve as a valuable tool for diagnostic and prognostic purposes, as well as a source of potential disease biomarkers in a broad range of diseases. Conclusions: mAP-KL is a data-driven and classifier-independent hybrid feature selection method, which applies to any disease classification problem based on microarray data, regardless of the available samples. Combining multiple hypothesis testing and AP leads to subsets of genes which classify unknown samples from both small and large patient cohorts with high accuracy.
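
    A rough sketch of the two stages the abstract describes: rank genes by a hypothesis test, then cluster the top-ranked genes with affinity propagation and keep the exemplars. The simulated data, the ranking statistic (a plain t-test), the number of top genes kept, and the omission of the Krzanowski & Lai index step are all simplifying assumptions, not the published mAP-KL implementation.

      import numpy as np
      from scipy.stats import ttest_ind
      from sklearn.cluster import AffinityPropagation

      rng = np.random.default_rng(0)
      X = rng.normal(size=(40, 500))          # 40 samples x 500 genes (placeholder data)
      y = np.repeat([0, 1], 20)               # two classes
      X[y == 1, :10] += 1.5                   # make the first 10 genes informative

      # Stage 1: multiple hypothesis testing -> keep the N top-ranked genes
      _, pvals = ttest_ind(X[y == 0], X[y == 1], axis=0)
      top = np.argsort(pvals)[:50]

      # Stage 2: affinity propagation on the top genes; keep one exemplar per cluster
      ap = AffinityPropagation(random_state=0).fit(X[:, top].T)    # cluster genes, not samples
      exemplar_genes = top[ap.cluster_centers_indices_]
      print("selected gene indices:", exemplar_genes)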

  13. Effects of achievement differences for internal/external frame of reference model investigations: A test of robustness of findings over diverse student samples.

    Science.gov (United States)

    Schmidt, Isabelle; Brunner, Martin; Preckel, Franzis

    2017-11-12

    Achievement in math and achievement in verbal school subjects are more strongly correlated than the respective academic self-concepts. The internal/external frame of reference model (I/E model; Marsh, 1986, Am. Educ. Res. J., 23, 129) explains this finding by social and dimensional comparison processes. We investigated a key assumption of the model that dimensional comparisons mainly depend on the difference in achievement between subjects. We compared correlations between subject-specific self-concepts of groups of elementary and secondary school students with or without achievement differences in the respective subjects. The main goals were (1) to show that effects of dimensional comparisons depend to a large degree on the existence of achievement differences between subjects, (2) to demonstrate the generalizability of findings over different grade levels and self-concept scales, and (3) to test a rarely used correlation comparison approach (CCA) for the investigation of I/E model assumptions. We analysed eight German elementary and secondary school student samples (grades 3-8) from three independent studies (Ns 326-878). Correlations between math and German self-concepts of students with identical grades in the respective subjects were compared with the correlation of self-concepts of students having different grades using Fisher's Z test for independent samples. In all samples, correlations between math self-concept and German self-concept were higher for students having identical grades than for students having different grades. Differences in median correlations had small effect sizes for elementary school students and moderate effect sizes for secondary school students. Findings generalized over grades and indicated a developmental aspect in self-concept formation. The CCA complements investigations within I/E-research. © 2017 The British Psychological Society.
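
    The correlation comparison approach rests on Fisher's r-to-z transformation for correlations from independent samples; a minimal sketch with made-up correlations and group sizes.

      import numpy as np
      from scipy.stats import norm

      def fisher_z_test(r1, n1, r2, n2):
          """Two-sided test of H0: rho1 == rho2 for correlations from independent samples."""
          z1, z2 = np.arctanh(r1), np.arctanh(r2)            # Fisher r-to-z transform
          se = np.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
          z = (z1 - z2) / se
          return z, 2 * norm.sf(abs(z))

      # e.g., self-concept correlation for students with identical vs. different grades (hypothetical values)
      z, p = fisher_z_test(r1=0.45, n1=220, r2=0.15, n2=300)
      print(f"z = {z:.2f}, p = {p:.4f}")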

  14. Radial line-scans as representative sampling strategy in dried-droplet laser ablation of liquid samples deposited on pre-cut filter paper disks.

    Science.gov (United States)

    Nischkauer, Winfried; Vanhaecke, Frank; Bernacchi, Sébastien; Herwig, Christoph; Limbeck, Andreas

    2014-11-01

    Nebulising liquid samples and using the aerosol thus obtained for further analysis is the standard method in many current analytical techniques, also with inductively coupled plasma (ICP)-based devices. With such a set-up, quantification via external calibration is usually straightforward for samples with aqueous or close-to-aqueous matrix composition. However, there is a variety of more complex samples. Such samples can be found in medical, biological, technological and industrial contexts and can range from body fluids, like blood or urine, to fuel additives or fermentation broths. Specialized nebulizer systems or careful digestion and dilution are required to tackle such demanding sample matrices. One alternative approach is to convert the liquid into a dried solid and to use laser ablation for sample introduction. Up to now, this approach required the application of internal standards or matrix-adjusted calibration due to matrix effects. In this contribution, we show a way to circumvent these matrix effects while using simple external calibration for quantification. The principle of representative sampling that we propose uses radial line-scans across the dried residue. This compensates for centro-symmetric inhomogeneities typically observed in dried spots. The effectiveness of the proposed sampling strategy is exemplified via the determination of phosphorus in biochemical fermentation media. However, the universal viability of the presented measurement protocol is postulated. Detection limits using laser ablation-ICP-optical emission spectrometry were in the order of 40 μg mL⁻¹ with a reproducibility of 10% relative standard deviation (n = 4, concentration = 10 times the quantification limit). The reported sensitivity is fit-for-purpose in the biochemical context described here, but could be improved using ICP-mass spectrometry, if future analytical tasks would require it. Trueness of the proposed method was investigated by cross-validation with

  15. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.

  16. Demonstration of the efficiency and robustness of an acid leaching process to remove metals from various CCA-treated wood samples.

    Science.gov (United States)

    Coudert, Lucie; Blais, Jean-François; Mercier, Guy; Cooper, Paul; Janin, Amélie; Gastonguay, Louis

    2014-01-01

    In recent years, an efficient and economically attractive leaching process has been developed to remove metals from copper-based treated wood wastes. This study explored the applicability of this leaching process using chromated copper arsenate (CCA) treated wood samples with different initial metal loading and elapsed time between wood preservation treatment and remediation. The sulfuric acid leaching process resulted in the solubilization of more than 87% of the As, 70% of the Cr, and 76% of the Cu from CCA-chips and in the solubilization of more than 96% of the As, 78% of the Cr and 91% of the Cu from CCA-sawdust. The results showed that the performance of this leaching process might be influenced by the initial metal loading of the treated wood wastes and the elapsed time between preservation treatment and remediation. The effluents generated during the leaching steps were treated by precipitation-coagulation to satisfy the regulations for effluent discharge in municipal sewers. Precipitation using ferric chloride and sodium hydroxide was highly efficient, removing more than 99% of the As, Cr, and Cu. It appears that this leaching process can be successfully applied to remove metals from different CCA-treated wood samples and then from the effluents. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Robotic traverse and sample return strategies for a lunar farside mission to the Schrodinger basin

    NARCIS (Netherlands)

    Potts, N.J.; Gullikson, A.L.; Curran, N.M.; Dhaliwal, J.K.; Leader, M.K.; Rege, R.N.; Klaus, K.K.; Kring, D.A.

    2015-01-01

    Most of the highest priority objectives for lunar science and exploration (e.g., NRC, 2007) require sample return. Studies of the best places to conduct that work have identified Schrödinger basin as a geologically rich area, able to address a significant number of these scientific concepts. In this

  18. Optimal sampling strategies to assess inulin clearance in children by the inulin single-injection method

    NARCIS (Netherlands)

    van Rossum, Lyonne K.; Mathot, Ron A. A.; Cransberg, Karlien; Vulto, Arnold G.

    2003-01-01

    Glomerular filtration rate in patients can be determined by estimating the plasma clearance of inulin with the single-injection method. In this method, a single bolus injection of inulin is administered and several blood samples are collected. For practical and convenient application of this method
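
    In the single-injection method, plasma clearance is the administered dose divided by the area under the plasma concentration-time curve. A minimal sketch with a hypothetical dose, sampling times and concentrations, using a trapezoidal AUC with a mono-exponential tail extrapolation; the study's optimal-sampling analysis is not reproduced.

      import numpy as np

      def single_injection_clearance(dose_mg, t_min, conc_mg_per_l):
          """CL = dose / AUC(0..inf): trapezoidal AUC over the sampled window plus a
          mono-exponential extrapolation of the tail (an illustrative simplification)."""
          auc_obs = np.sum(np.diff(t_min) * (conc_mg_per_l[1:] + conc_mg_per_l[:-1]) / 2)
          k = -np.polyfit(t_min[-3:], np.log(conc_mg_per_l[-3:]), 1)[0]   # terminal slope (1/min)
          auc_tail = conc_mg_per_l[-1] / k
          return dose_mg / (auc_obs + auc_tail)                           # L/min

      t = np.array([10, 30, 60, 90, 120, 180, 240], dtype=float)     # minutes after the bolus
      c = np.array([600, 450, 310, 215, 150, 75, 38], dtype=float)   # plasma inulin, mg/L (hypothetical)
      print(f"inulin clearance ~ {single_injection_clearance(5000, t, c) * 1000:.0f} mL/min")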

  19. Impact of diversity of colonizing strains on strategies for sampling Escherichia coli from fecal specimens.

    Science.gov (United States)

    Lautenbach, Ebbing; Bilker, Warren B; Tolomeo, Pam; Maslow, Joel N

    2008-09-01

    Of 49 subjects, 21 were colonized with more than one strain of Escherichia coli and 12 subjects had at least one strain present in fewer than 20% of colonies. The ability to accurately characterize E. coli strain diversity is directly related to the number of colonies sampled and the underlying prevalence of the strain.
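
    The dependence of detection on the number of colonies sampled follows directly from the binomial; a minimal sketch assuming colonies are picked independently and the strain of interest is present in 20% of colonies.

      import math

      def p_detect(prevalence, n_colonies):
          """Probability that at least one of n sampled colonies belongs to a strain
          present at the given within-host prevalence (independent picks)."""
          return 1 - (1 - prevalence) ** n_colonies

      def colonies_needed(prevalence, confidence=0.95):
          """Smallest n with detection probability >= confidence."""
          return math.ceil(math.log(1 - confidence) / math.log(1 - prevalence))

      print(p_detect(0.20, 5))        # ~0.67 with 5 colonies
      print(colonies_needed(0.20))    # 14 colonies for 95% confidence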

  20. Development of a robust ionic liquid-based dispersive liquid-liquid microextraction against high concentration of salt for preconcentration of trace metals in saline aqueous samples: Application to the determination of Pb and Cd

    International Nuclear Information System (INIS)

    Yousefi, Seyed Reza; Shemirani, Farzaneh

    2010-01-01

    A new ionic liquid-based dispersive liquid-liquid microextraction method was developed for preconcentration and determination of compounds in aqueous samples containing very high salt concentrations. This method can solve the problems associated with the limited application of the conventional IL-based DLLME in these samples. This is believed to arise from dissolving of the ionic liquids in aqueous samples with high salt content. In this method, the robustness of the microextraction system against high salt concentration (up to 40%, w/v) is increased by introducing a common ion of the ionic liquid into the sample solution. The proposed method was applied satisfactorily to the preconcentration of lead and cadmium in saline samples. After preconcentration, the settled IL-phase was dissolved in 100 μL ethanol and aspirated into the flame atomic absorption spectrometer (FAAS) using a home-made microsample introduction system. Several variables affecting the microextraction efficiency were investigated and optimized. Under the optimized conditions and preconcentration of only 10 mL of sample, the enhancement factors of 273 and 311 and the detection limits of 0.6 μg L⁻¹ and 0.03 μg L⁻¹ were obtained for lead and cadmium, respectively. Validation of the method was performed by both an analysis of a certified reference material (CRM) and comparison of results with those obtained by ISO standard method.

  1. Robust Portfolio Optimization Using Pseudodistances.

    Science.gov (United States)

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature.
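
    The paper's pseudodistance-based estimators are not reproduced here; as a stand-in illustration of the same pipeline (a robust location/scatter estimate replacing the sample mean/covariance before portfolio optimization), the sketch below uses scikit-learn's Minimum Covariance Determinant and simple minimum-variance weights on simulated returns.

      import numpy as np
      from sklearn.covariance import MinCovDet

      rng = np.random.default_rng(0)
      returns = rng.multivariate_normal([0.001, 0.002, 0.0015],
                                        np.diag([1e-4, 2e-4, 1.5e-4]), size=500)
      returns[:5] *= 30                     # a few gross outliers

      def min_variance_weights(cov):
          """Minimum-variance weights: w proportional to inv(Cov) @ 1, normalised to sum to 1."""
          ones = np.ones(cov.shape[0])
          w = np.linalg.solve(cov, ones)
          return w / w.sum()

      mcd = MinCovDet(random_state=0).fit(returns)          # robust scatter (stand-in estimator)
      print("robust   :", np.round(min_variance_weights(mcd.covariance_), 3))
      print("classical:", np.round(min_variance_weights(np.cov(returns.T)), 3))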

  2. Addressing Underrepresentation in Sex Work Research: Reflections on Designing a Purposeful Sampling Strategy.

    Science.gov (United States)

    Bungay, Vicky; Oliffe, John; Atchison, Chris

    2016-06-01

    Men, transgender people, and those working in off-street locales have historically been underrepresented in sex work health research. Failure to include all sections of sex worker populations precludes comprehensive understandings about a range of population health issues, including potential variations in the manifestation of such issues within and between population subgroups, which in turn can impede the development of effective services and interventions. In this article, we describe our attempts to define, determine, and recruit a purposeful sample for a qualitative study examining the interrelationships between sex workers' health and the working conditions in the Vancouver off-street sex industry. Detailed is our application of ethnographic mapping approaches to generate information about population diversity and work settings within distinct geographical boundaries. Bearing in mind the challenges and the overwhelming discrimination sex workers experience, we scope recommendations for safe and effective purposeful sampling inclusive of sex workers' heterogeneity. © The Author(s) 2015.

  3. Marine anthropogenic radiotracers in the Southern Hemisphere: New sampling and analytical strategies

    Science.gov (United States)

    Levy, I.; Povinec, P. P.; Aoyama, M.; Hirose, K.; Sanchez-Cabeza, J. A.; Comanducci, J.-F.; Gastaud, J.; Eriksson, M.; Hamajima, Y.; Kim, C. S.; Komura, K.; Osvath, I.; Roos, P.; Yim, S. A.

    2011-04-01

    The Japan Agency for Marine Earth Science and Technology conducted in 2003-2004 the Blue Earth Global Expedition (BEAGLE2003) around the Southern Hemisphere Oceans, which was a rare opportunity to collect many seawater samples for anthropogenic radionuclide studies. We describe here sampling and analytical methodologies based on radiochemical separations of Cs and Pu from seawater, as well as radiometric and mass spectrometry measurements. Several laboratories took part in radionuclide analyses using different techniques. The intercomparison exercises and analyses of certified reference materials showed a reasonable agreement between the participating laboratories. The obtained data on the distribution of 137Cs and plutonium isotopes in seawater represent the most comprehensive results available for the Southern Hemisphere Oceans.

  4. Strategies for Distinguishing Abiotic Chemistry from Martian Biochemistry in Samples Returned from Mars

    Science.gov (United States)

    Glavin, D. P.; Burton, A. S.; Callahan, M. P.; Elsila, J. E.; Stern, J. C.; Dworkin, J. P.

    2012-01-01

    A key goal in the search for evidence of extinct or extant life on Mars will be the identification of chemical biosignatures including complex organic molecules common to all life on Earth. These include amino acids, the monomer building blocks of proteins and enzymes, and nucleobases, which serve as the structural basis of information storage in DNA and RNA. However, many of these organic compounds can also be formed abiotically as demonstrated by their prevalence in carbonaceous meteorites [1]. Therefore, an important challenge in the search for evidence of life on Mars will be distinguishing abiotic chemistry of either meteoritic or martian origin from any chemical biosignatures from an extinct or extant martian biota. Although current robotic missions to Mars, including the 2011 Mars Science Laboratory (MSL) and the planned 2018 ExoMars rovers, will have the analytical capability needed to identify these key classes of organic molecules if present [2,3], return of a diverse suite of martian samples to Earth would allow for much more intensive laboratory studies using a broad array of extraction protocols and state-of-the-art analytical techniques for bulk and spatially resolved characterization, molecular detection, and isotopic and enantiomeric compositions that may be required for unambiguous confirmation of martian life. Here we will describe current state-of-the-art laboratory analytical techniques that have been used to characterize the abundance and distribution of amino acids and nucleobases in meteorites, Apollo samples, and comet-exposed materials returned by the Stardust mission with an emphasis on their molecular characteristics that can be used to distinguish abiotic chemistry from biochemistry as we know it. The study of organic compounds in carbonaceous meteorites is highly relevant to Mars sample return analysis, since exogenous organic matter should have accumulated in the martian regolith over the last several billion years and the

  5. Sustained attention across the lifespan in a sample of 10,000: Dissociating ability and strategy

    OpenAIRE

    Fortenbaugh, Francesca C.; DeGutis, Joseph; Germine, Laura; Wilmer, Jeremy; Grosso, Mallory; Russo, Kathryn; Esterman, Michael

    2015-01-01

    Normal and abnormal differences in sustained visual attention have long been of interest to scientists, educators, and clinicians. Still lacking, however, is a clear understanding of how sustained visual attention varies across the broad sweep of the human lifespan. Here, we fill this gap in two ways. First, powered by an unprecedentedly large, 10,430-person sample, we model age-related differences with substantially greater precision than prior efforts. Second, using the recently developed g...

  6. Reliability of sampling strategies for measuring dairy cattle welfare on commercial farms.

    Science.gov (United States)

    Van Os, Jennifer M C; Winckler, Christoph; Trieb, Julia; Matarazzo, Soraia V; Lehenbauer, Terry W; Champagne, John D; Tucker, Cassandra B

    2018-02-01

    Our objective was to evaluate how the proportion of high-producing lactating cows sampled on each farm and the selection method affect prevalence estimates for animal-based measures. We assessed the entire high-producing pen (days in milk size calculations from the Welfare Quality Protocol; and (4) selecting the first, middle, or final third of cows exiting the milking parlor. Estimates were compared with true values using regression analysis and were considered accurate if they met 3 criteria: the coefficient of determination was ≥0.9 and the slope and intercept did not differ significantly from 1 and 0, respectively. All estimates met the slope and intercept criteria, whereas the coefficient of determination increased when more cows were sampled. All estimates were accurate for neck alterations, ocular discharge (22.2 ± 27.4%), and carpal joint hair loss (14.1 ± 17.4%). Selecting a third of the milking order or using the Welfare Quality sample size calculations failed to accurately estimate all measures simultaneously. However, all estimates were accurate when selecting at least 2 of every 3 cows locked at the feed bunk. Using restraint position at the feed bunk did not differ systematically from computer-selecting the same proportion of cows randomly, and the former may be a simpler approach for welfare assessments. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
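
    The three accuracy criteria (coefficient of determination of at least 0.9, slope not significantly different from 1, intercept not significantly different from 0) can be checked with an ordinary least-squares fit of estimates against true values. The sketch below uses made-up prevalence data and plain t-based intervals; it is not the study's regression analysis.

      import numpy as np
      from scipy import stats

      def estimate_is_accurate(true_prev, est_prev, alpha=0.05):
          """Check the three criteria above: R^2 >= 0.9, slope ~ 1, intercept ~ 0."""
          res = stats.linregress(true_prev, est_prev)
          t_crit = stats.t.ppf(1 - alpha / 2, len(true_prev) - 2)
          slope_ok = abs(res.slope - 1) <= t_crit * res.stderr
          intercept_ok = abs(res.intercept) <= t_crit * res.intercept_stderr
          return res.rvalue**2 >= 0.9 and slope_ok and intercept_ok

      true_prev = np.array([5, 10, 14, 22, 30, 41, 55], dtype=float)      # % of cows affected (hypothetical farms)
      est_prev = true_prev + np.random.default_rng(0).normal(0, 2, 7)     # estimates from a partial sample
      print(estimate_is_accurate(true_prev, est_prev))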

  7. Understanding active sampling strategies: Empirical approaches and implications for attention and decision research.

    Science.gov (United States)

    Gottlieb, Jacqueline

    2018-05-01

    In natural behavior we actively gather information using attention and active sensing behaviors (such as shifts of gaze) to sample relevant cues. However, while attention and decision making are naturally coordinated, in the laboratory they have been dissociated. Attention is studied independently of the actions it serves. Conversely, decision theories make the simplifying assumption that the relevant information is given, and do not attempt to describe how the decision maker may learn and implement active sampling policies. In this paper I review recent studies that address questions of attentional learning, cue validity and information seeking in humans and non-human primates. These studies suggest that learning a sampling policy involves large scale interactions between networks of attention and valuation, which implement these policies based on reward maximization, uncertainty reduction and the intrinsic utility of cognitive states. I discuss the importance of using such paradigms for formalizing the role of attention, as well as devising more realistic theories of decision making that capture a broader range of empirical observations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Solid-Phase Extraction Strategies to Surmount Body Fluid Sample Complexity in High-Throughput Mass Spectrometry-Based Proteomics

    Science.gov (United States)

    Bladergroen, Marco R.; van der Burgt, Yuri E. M.

    2015-01-01

    For large-scale and standardized applications in mass spectrometry- (MS-) based proteomics automation of each step is essential. Here we present high-throughput sample preparation solutions for balancing the speed of current MS-acquisitions and the time needed for analytical workup of body fluids. The discussed workflows reduce body fluid sample complexity and apply for both bottom-up proteomics experiments and top-down protein characterization approaches. Various sample preparation methods that involve solid-phase extraction (SPE) including affinity enrichment strategies have been automated. Obtained peptide and protein fractions can be mass analyzed by direct infusion into an electrospray ionization (ESI) source or by means of matrix-assisted laser desorption ionization (MALDI) without further need of time-consuming liquid chromatography (LC) separations. PMID:25692071

  9. High resolution x-ray microtomography of biological samples: Requirements and strategies for satisfying them

    Energy Technology Data Exchange (ETDEWEB)

    Loo, B.W. Jr. [Univ. of California, San Francisco, CA (United States); Univ. of California, Davis, CA (United States); Lawrence Berkeley National Lab., CA (United States)]; Rothman, S.S. [Univ. of California, San Francisco, CA (United States); Lawrence Berkeley National Lab., CA (United States)]

    1997-02-01

    High resolution x-ray microscopy has been made possible in recent years primarily by two new technologies: microfabricated diffractive lenses for soft x-rays with about 30-50 nm resolution, and high brightness synchrotron x-ray sources. X-ray microscopy occupies a special niche in the array of biological microscopic imaging methods. It extends the capabilities of existing techniques mainly in two areas: a previously unachievable combination of sub-visible resolution and multi-micrometer sample size, and new contrast mechanisms. Because of the soft x-ray wavelengths used in biological imaging (about 1-4 nm), XM is intermediate in resolution between visible light and electron microscopies. Similarly, the penetration depth of soft x-rays in biological materials is such that the ideal sample thickness for XM falls in the range of 0.25-10 μm, between that of VLM and EM. XM is therefore valuable for imaging of intermediate level ultrastructure, requiring sub-visible resolutions, in intact cells and subcellular organelles, without artifacts produced by thin sectioning. Many of the contrast producing and sample preparation techniques developed for VLM and EM also work well with XM. These include, for example, molecule specific staining by antibodies with heavy metal or fluorescent labels attached, and sectioning of both frozen and plastic embedded tissue. However, there is also a contrast mechanism unique to XM that exists naturally because a number of elemental absorption edges lie in the wavelength range used. In particular, between the oxygen and carbon absorption edges (2.3 and 4.4 nm wavelength), organic molecules absorb photons much more strongly than does water, permitting element-specific imaging of cellular structure in aqueous media, with no artificially introduced contrast agents. For three-dimensional imaging applications requiring the capabilities of XM, an obvious extension of the technique would therefore be computerized x-ray microtomography (XMT).

  10. A comparison of temporal and location-based sampling strategies for global positioning system-triggered electronic diaries.

    Science.gov (United States)

    Törnros, Tobias; Dorn, Helen; Reichert, Markus; Ebner-Priemer, Ulrich; Salize, Hans-Joachim; Tost, Heike; Meyer-Lindenberg, Andreas; Zipf, Alexander

    2016-11-21

    Self-reporting is a well-established approach within the medical and psychological sciences. In order to avoid recall bias, i.e. past events being remembered inaccurately, the reports can be filled out on a smartphone in real-time and in the natural environment. This is often referred to as ambulatory assessment and the reports are usually triggered at regular time intervals. With this sampling scheme, however, rare events (e.g. a visit to a park or recreation area) are likely to be missed. When addressing the correlation between mood and the environment, it may therefore be beneficial to include participant locations within the ambulatory assessment sampling scheme. Based on the geographical coordinates, the database query system then decides if a self-report should be triggered or not. We simulated four different ambulatory assessment sampling schemes based on movement data (coordinates by minute) from 143 voluntary participants tracked for seven consecutive days. Two location-based sampling schemes incorporating the environmental characteristics (land use and population density) at each participant's location were introduced and compared to a time-based sampling scheme triggering a report on the hour as well as to a sampling scheme incorporating physical activity. We show that location-based sampling schemes trigger a report less often, but we obtain more unique trigger positions and a greater spatial spread in comparison to sampling strategies based on time and distance. Additionally, the location-based methods trigger significantly more often at rarely visited types of land use and less often outside the study region where no underlying environmental data are available.
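
    The core of a location-based sampling scheme is a trigger decision made from the participant's current coordinates and the environmental attributes looked up at that point. The sketch below is a hypothetical skeleton: the land-use lookup, the categories of interest and the minimum gap between prompts are placeholders, not the schemes simulated in the study.

      from datetime import datetime, timedelta

      # Hypothetical land-use lookup; in practice this would query a spatial database.
      def land_use_at(lat, lon):
          return "park"          # placeholder value

      TRIGGER_LAND_USES = {"park", "forest", "water"}   # rarely visited categories of interest
      MIN_GAP = timedelta(minutes=40)                   # avoid over-prompting participants

      def should_trigger(lat, lon, now, last_trigger):
          """Trigger an e-diary prompt when the participant is in a land-use class of
          interest and enough time has passed since the previous prompt."""
          if last_trigger is not None and now - last_trigger < MIN_GAP:
              return False
          return land_use_at(lat, lon) in TRIGGER_LAND_USES

      print(should_trigger(49.41, 8.69, datetime(2016, 5, 1, 14, 0), None))   # True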

  11. APPLICATION OF SPATIAL MODELLING APPROACHES, SAMPLING STRATEGIES AND 3S TECHNOLOGY WITHIN AN ECOLOGICAL FRAMEWORK

    Directory of Open Access Journals (Sweden)

    H.-C. Chen

    2012-07-01

    How to effectively describe ecological patterns in nature over broader spatial scales and build an ecological modelling framework has become an important issue in ecological research. We test four modeling methods (MAXENT, DOMAIN, GLM and ANN) to predict the potential habitat of Schima superba (Chinese guger tree, CGT) at different spatial scales in the Huisun study area in Taiwan. We then created three sampling designs (from small to large scales) for model development and validation by different combinations of CGT samples from the aforementioned three sites (Tong-Feng watershed, Yo-Shan Mountain, and Kuan-Dau watershed). These models combine points of known occurrence and topographic variables to infer the potential spatial distribution of CGT. Our assessment revealed that the method performance from highest to lowest was: MAXENT, DOMAIN, GLM and ANN on the small spatial scale. The MAXENT and DOMAIN models were the most capable of predicting the tree's potential habitat. However, the outcome clearly indicated that models based merely on topographic variables performed poorly on large spatial extrapolation from Tong-Feng to Kuan-Dau because the humidity and sun illumination of the two watersheds are affected by their microterrains and are quite different from each other. Thus, the models developed from topographic variables can only be applied within a limited geographical extent without a significant error. Future studies will attempt to use variables involving spectral information associated with species extracted from high spatial, spectral resolution remotely sensed data, especially hyperspectral image data, for building a model so that it can be applied on a large spatial scale.

  12. How to handle speciose clades? Mass taxon-sampling as a strategy towards illuminating the natural history of Campanula (Campanuloideae).

    Directory of Open Access Journals (Sweden)

    Guilhem Mansion

    BACKGROUND: Speciose clades usually harbor species with a broad spectrum of adaptive strategies and complex distribution patterns, and thus constitute ideal systems to disentangle biotic and abiotic causes underlying species diversification. The delimitation of such study systems to test evolutionary hypotheses is difficult because they often rely on artificial genus concepts as starting points. One of the most prominent examples is the bellflower genus Campanula with some 420 species, but up to 600 species when including all lineages to which Campanula is paraphyletic. We generated a large alignment of petD group II intron sequences to include more than 70% of described species as a reference. By comparison with partial data sets we could then assess the impact of selective taxon sampling strategies on phylogenetic reconstruction and subsequent evolutionary conclusions. METHODOLOGY/PRINCIPAL FINDINGS: Phylogenetic analyses based on maximum parsimony (PAUP, PRAP), Bayesian inference (MrBayes), and maximum likelihood (RAxML) were first carried out on the large reference data set (D680). Parameters including tree topology, branch support, and age estimates were then compared to those obtained from smaller data sets resulting from "classification-guided" (D088) and "phylogeny-guided" (D101) sampling. Analyses of D088 failed to fully recover the phylogenetic diversity in Campanula, whereas D101 inferred significantly different branch support and age estimates. CONCLUSIONS/SIGNIFICANCE: A short genomic region with high phylogenetic utility allowed us to easily generate a comprehensive phylogenetic framework for the speciose Campanula clade. Our approach recovered 17 well-supported and circumscribed sub-lineages. As knowing these will be instrumental for developing more specific evolutionary hypotheses and guiding future research, we highlight the predictive value of a mass taxon-sampling strategy as a first essential step towards illuminating the detailed

  13. A nested-PCR strategy for molecular diagnosis of mollicutes in uncultured biological samples from cows with vulvovaginitis.

    Science.gov (United States)

    Voltarelli, Daniele Cristina; de Alcântara, Brígida Kussumoto; Lunardi, Michele; Alfieri, Alice Fernandes; de Arruda Leme, Raquel; Alfieri, Amauri Alcindo

    2018-01-01

    Bacteria classified in Mycoplasma (M. bovis and M. bovigenitalium) and Ureaplasma (U. diversum) genera are associated with granular vulvovaginitis that affect heifers and cows at reproductive age. The traditional means for detection and speciation of mollicutes from clinical samples have been culture and serology. However, challenges experienced with these laboratory methods have hampered assessment of their impact in pathogenesis and epidemiology in cattle worldwide. The aim of this study was to develop a PCR strategy to detect and primarily discriminate between the main species of mollicutes associated with reproductive disorders of cattle in uncultured clinical samples. In order to amplify the 16S-23S rRNA internal transcribed spacer region of the genome, a consensual and species-specific nested-PCR assay was developed to identify and discriminate between main species of mollicutes. In addition, 31 vaginal swab samples from dairy and beef affected cows were investigated. This nested-PCR strategy was successfully employed in the diagnosis of single and mixed mollicute infections of diseased cows from cattle herds from Brazil. The developed system enabled the rapid and unambiguous identification of the main mollicute species known to be associated with this cattle reproductive disorder through differential amplification of partial fragments of the ITS region of mollicute genomes. The development of rapid and sensitive tools for mollicute detection and discrimination without the need for previous cultures or sequencing of PCR products is a high priority for accurate diagnosis in animal health. Therefore, the PCR strategy described herein may be helpful for diagnosis of this class of bacteria in genital swabs submitted to veterinary diagnostic laboratories, not demanding expertise in mycoplasma culture and identification. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Lifetime Prevalence of Suicide Attempts Among Sexual Minority Adults by Study Sampling Strategies: A Systematic Review and Meta-Analysis.

    Science.gov (United States)

    Hottes, Travis Salway; Bogaert, Laura; Rhodes, Anne E; Brennan, David J; Gesink, Dionne

    2016-05-01

    Previous reviews have demonstrated a higher risk of suicide attempts for lesbian, gay, and bisexual (LGB) persons (sexual minorities), compared with heterosexual groups, but these were restricted to general population studies, thereby excluding individuals sampled through LGB community venues. Each sampling strategy, however, has particular methodological strengths and limitations. For instance, general population probability studies have defined sampling frames but are prone to information bias associated with underreporting of LGB identities. By contrast, LGB community surveys may support disclosure of sexuality but overrepresent individuals with strong LGB community attachment. The objective was to reassess the burden of suicide-related behavior among LGB adults, directly comparing estimates derived from population- versus LGB community-based samples. In 2014, we searched MEDLINE, EMBASE, PsycInfo, CINAHL, and Scopus databases for articles addressing suicide-related behavior (ideation, attempts) among sexual minorities. We selected quantitative studies of sexual minority adults conducted in nonclinical settings in the United States, Canada, Europe, Australia, and New Zealand. Random effects meta-analysis and meta-regression assessed for a difference in prevalence of suicide-related behavior by sample type, adjusted for study or sample-level variables, including context (year, country), methods (medium, response rate), and subgroup characteristics (age, gender, sexual minority construct). We examined residual heterogeneity by using τ². We pooled 30 cross-sectional studies, including 21,201 sexual minority adults, generating the following lifetime prevalence estimates of suicide attempts: 4% (95% confidence interval [CI] = 3%, 5%) for heterosexual respondents to population surveys, 11% (95% CI = 8%, 15%) for LGB respondents to population surveys, and 20% (95% CI = 18%, 22%) for LGB respondents to community surveys (Figure 1). The difference in LGB estimates by sample
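
    As an illustration of the pooling step described above, the following minimal sketch applies DerSimonian-Laird random-effects weighting to study-level prevalence proportions; the study counts are hypothetical placeholders, not data from the review.

```python
# A minimal sketch (not the authors' code) of DerSimonian-Laird random-effects
# pooling of lifetime suicide-attempt prevalence across studies. The example
# study counts below are hypothetical placeholders, not data from the review.
import numpy as np

def pool_prevalence(events, n):
    """Random-effects pooled proportion with a 95% confidence interval."""
    events, n = np.asarray(events, float), np.asarray(n, float)
    p = events / n
    v = p * (1.0 - p) / n                      # within-study variance
    w = 1.0 / v                                # fixed-effect weights
    p_fixed = np.sum(w * p) / np.sum(w)
    q = np.sum(w * (p - p_fixed) ** 2)         # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(p) - 1)) / c)    # between-study variance
    w_star = 1.0 / (v + tau2)                  # random-effects weights
    pooled = np.sum(w_star * p) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# Hypothetical LGB community-survey studies: (attempts, sample size)
print(pool_prevalence([40, 110, 65], [200, 600, 310]))
```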

  15. Robust efficient video fingerprinting

    Science.gov (United States)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.

  16. Methods for robustness programming

    NARCIS (Netherlands)

    Olieman, N.J.

    2008-01-01

    Robustness of an object is defined as the probability that an object will have properties as required. Robustness Programming (RP) is a mathematical approach for Robustness estimation and Robustness optimisation. An example in the context of designing a food product is finding the best composition

  17. Robustness in laying hens

    NARCIS (Netherlands)

    Star, L.

    2008-01-01

    The aim of the project ‘The genetics of robustness in laying hens’ was to investigate the nature and regulation of robustness in laying hens under sub-optimal conditions and the possibility of increasing robustness through animal breeding without loss of production. At the start of the project, a robust

  18. Search strategy using LHC pileup interactions as a zero bias sample

    Science.gov (United States)

    Nachman, Benjamin; Rubbo, Francesco

    2018-05-01

    Due to a limited bandwidth and a large proton-proton interaction cross section relative to the rate of interesting physics processes, most events produced at the Large Hadron Collider (LHC) are discarded in real time. A sophisticated trigger system must quickly decide which events should be kept and is very efficient for a broad range of processes. However, there are many processes that cannot be accommodated by this trigger system. Furthermore, there may be models of physics beyond the standard model (BSM) constructed after data taking that could have been triggered, but no trigger was implemented at run time. Both of these cases can be covered by exploiting pileup interactions as an effective zero bias sample. At the end of high-luminosity LHC operations, this zero bias dataset will have accumulated about 1 fb⁻¹ of data from which a bottom-line cross section limit of O(1) fb can be set for BSM models already in the literature and those yet to come.
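
    The quoted O(1) fb bottom line can be reproduced with a back-of-the-envelope Poisson calculation: with roughly 1 fb⁻¹ of effective zero-bias data and no observed candidates, the 95% CL upper limit is about three events, ignoring selection efficiency and backgrounds. A minimal sketch:

```python
# A back-of-the-envelope sketch of the O(1) fb bottom-line sensitivity quoted
# above: with ~1 fb^-1 of effective zero-bias data and zero observed
# candidates, the 95% CL Poisson upper limit is about 3 events, so the cross
# section limit is roughly 3 fb (efficiencies and backgrounds ignored).
import math

def poisson_upper_limit(n_obs: int, cl: float = 0.95) -> float:
    """Classical Poisson upper limit on the mean for n_obs observed events."""
    mu = 0.0
    while True:
        # P(N <= n_obs | mu)
        p = sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(n_obs + 1))
        if p <= 1.0 - cl:
            return mu
        mu += 0.001

lumi_fb = 1.0                              # effective zero-bias luminosity [fb^-1]
print(poisson_upper_limit(0) / lumi_fb)    # ~3.0 fb cross-section limit
```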

  19. A strategy to sample nutrient dynamics across the terrestrial-aquatic interface at NEON sites

    Science.gov (United States)

    Hinckley, E. S.; Goodman, K. J.; Roehm, C. L.; Meier, C. L.; Luo, H.; Ayres, E.; Parnell, J.; Krause, K.; Fox, A. M.; SanClements, M.; Fitzgerald, M.; Barnett, D.; Loescher, H. W.; Schimel, D.

    2012-12-01

    The construction of the National Ecological Observatory Network (NEON) across the U.S. creates the opportunity for researchers to investigate biogeochemical transformations and transfers across ecosystems at local-to-continental scales. Here, we examine a subset of NEON sites where atmospheric, terrestrial, and aquatic observations will be collected for 30 years. These sites are located across a range of hydrological regimes, including flashy rain-driven, shallow sub-surface (perched, pipe-flow, etc), and deep groundwater, which likely affect the chemical forms and quantities of reactive elements that are retained and/or mobilized across landscapes. We present a novel spatial and temporal sampling design that enables researchers to evaluate long-term trends in carbon, nitrogen, and phosphorus biogeochemical cycles under these different hydrological regimes. This design focuses on inputs to the terrestrial system (atmospheric deposition, bulk precipitation), transfers (soil-water and groundwater sources/chemistry), and outputs (surface water, and evapotranspiration). We discuss both data that will be collected as part of the current NEON design, as well as how the research community can supplement the NEON design through collaborative efforts, such as providing additional datasets, including soil biogeochemical processes and trace gas emissions, and developing collaborative research networks. Current engagement with the research community working at the terrestrial-aquatic interface is critical to NEON's success as we begin construction, to ensure that high-quality, standardized and useful data are not only made available, but inspire further, cutting-edge research.

  20. Robust automated knowledge capture.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high consequent national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  1. SAMPLING ADAPTIVE STRATEGY AND SPATIAL ORGANISATION ESTIMATION OF SOIL ANIMAL COMMUNITIES AT VARIOUS HIERARCHICAL LEVELS OF URBANISED TERRITORIES

    Directory of Open Access Journals (Sweden)

    Baljuk J.A.

    2014-12-01

    Full Text Available This work presents an algorithm for an adaptive strategy of optimal spatial sampling for studying the spatial organisation of soil animal communities under urbanisation. The operating variables were the principal components obtained from an analysis of field data on soil penetration resistance, soil electrical conductivity and forest stand density collected on a quasi-regular grid. The locations of the experimental polygons were determined with the program ESAP, and sampling was carried out on a regular grid within each polygon. The biogeocoenological evaluation of the experimental polygons was based on A. L. Belgard's ecomorphic analysis, and the spatial configuration of biogeocoenosis types was established from remote sensing data and a digital elevation model. The suggested algorithm reveals the spatial organisation of soil animal communities at the level of the individual sampling point, the biogeocoenosis, and the landscape.
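
    A minimal sketch of the general idea (not the ESAP program used in the study): summarise the field covariates by principal components and place sampling polygons so that they span the gradient of the first component. All input values are synthetic.

```python
# A minimal illustration (not the ESAP program used in the study) of the idea:
# summarise soil penetration resistance, electrical conductivity and stand
# density by principal components, then place sampling polygons so that they
# span the range of the first component. Input values are synthetic.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# columns: penetration resistance, electrical conductivity, stand density
covariates = rng.normal(size=(400, 3))          # one row per grid node
coords = rng.uniform(0, 100, size=(400, 2))     # node coordinates (m)

scores = PCA(n_components=2).fit_transform(covariates)
# pick polygon centres at the quantiles of the first principal component
quantiles = np.quantile(scores[:, 0], [0.1, 0.3, 0.5, 0.7, 0.9])
centres = [coords[np.argmin(np.abs(scores[:, 0] - q))] for q in quantiles]
print(np.round(centres, 1))
```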

  2. Isotopic characterization of flight feathers in two pelagic seabirds: Sampling strategies for ecological studies

    Science.gov (United States)

    Wiley, Anne E.; Ostrom, Peggy H.; Stricker, Craig A.; James, Helen F.; Gandhi, Hasand

    2010-01-01

    We wish to use stable-isotope analysis of flight feathers to understand the feeding behavior of pelagic seabirds, such as the Hawaiian Petrel (Pterodroma sandwichensis) and Newell’s Shearwater (Puffinus auricularis newelli). Analysis of remiges is particularly informative because the sequence and timing of remex molt are often known. The initial step, reported here, is to obtain accurate isotope values from whole remiges by means of a minimally invasive protocol appropriate for live birds or museum specimens. The high variability observed in δ13C and δ15N values within a feather precludes the use of a small section of vane. We found the average range within 42 Hawaiian Petrel remiges to be 1.3‰ for both δ13C and δ15N and that within 10 Newell’s Shearwater remiges to be 1.3‰ and 0.7‰ for δ13C and δ15N, respectively. The δ13C of all 52 feathers increased from tip to base, and the majority of Hawaiian Petrel feathers showed an analogous trend in δ15N. Although the average range of δD in 21 Hawaiian Petrel remiges was 11‰, we found no longitudinal trend. We discuss influences of trophic level, foraging location, metabolism, and pigmentation on isotope values and compare three methods of obtaining isotope averages of whole feathers. Our novel barb-sampling protocol requires only 1.0 mg of feather and minimal preparation time. Because it leaves the feather nearly intact, this protocol will likely facilitate obtaining isotope values from remiges of live birds and museum specimens. As a consequence, it will help expand the understanding of historical trends in foraging behavior.

  3. Sustained Attention Across the Life Span in a Sample of 10,000: Dissociating Ability and Strategy.

    Science.gov (United States)

    Fortenbaugh, Francesca C; DeGutis, Joseph; Germine, Laura; Wilmer, Jeremy B; Grosso, Mallory; Russo, Kathryn; Esterman, Michael

    2015-09-01

    Normal and abnormal differences in sustained visual attention have long been of interest to scientists, educators, and clinicians. Still lacking, however, is a clear understanding of how sustained visual attention varies across the broad sweep of the human life span. In the present study, we filled this gap in two ways. First, using an unprecedentedly large 10,430-person sample, we modeled age-related differences with substantially greater precision than have prior efforts. Second, using the recently developed gradual-onset continuous performance test (gradCPT), we parsed sustained-attention performance over the life span into its ability and strategy components. We found that after the age of 15 years, the strategy and ability trajectories saliently diverge. Strategy becomes monotonically more conservative with age, whereas ability peaks in the early 40s and is followed by a gradual decline in older adults. These observed life-span trajectories for sustained attention are distinct from results of other life-span studies focusing on fluid and crystallized intelligence. © The Author(s) 2015.

  4. Robustness Assessment of Spatial Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    2012-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures, many modern building codes consider the need for robustness of structures and provide strategies and methods to obtain robustness. Therefore a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper summarises issues with respect to robustness of spatial timber structures and discusses the consequences of such robustness issues for the future development of timber structures.

  5. Appreciating the difference between design-based and model-based sampling strategies in quantitative morphology of the nervous system.

    Science.gov (United States)

    Geuna, S

    2000-11-20

    Quantitative morphology of the nervous system has undergone great developments over recent years, and several new technical procedures have been devised and applied successfully to neuromorphological research. However, a lively debate has arisen on some issues, and a great deal of confusion appears to exist that is definitely responsible for the slow spread of the new techniques among scientists. One such element of confusion is related to uncertainty about the meaning, implications, and advantages of the design-based sampling strategy that characterizes the new techniques. In this article, to help remove this uncertainty, morphoquantitative methods are described and contrasted on the basis of the inferential paradigm of the sampling strategy: design-based vs model-based. Moreover, some recommendations are made to help scientists judge the appropriateness of a method used for a given study in relation to its specific goals. Finally, the use of the term stereology to label, more or less expressly, only some methods is critically discussed. Copyright 2000 Wiley-Liss, Inc.

  6. A sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory.

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, J. D. (Prostat, Mesa, AZ); Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)

    2006-10-01

    Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic is used to indicate uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation for uncertainty, with this difficulty constituting a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
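
    The following minimal sketch (not the report's code) illustrates sampling-based propagation of interval-valued focal elements with basic probability assignments through a toy model, yielding belief and plausibility bounds for an output event; the model and numbers are purely illustrative.

```python
# A minimal sketch of sampling-based propagation of an evidence-theory
# (Dempster-Shafer) input description through a model. Each input is given
# interval-valued focal elements with basic probability assignments (BPAs);
# the model and the numbers below are illustrative only.
import itertools
import numpy as np

def model(a, b):
    return a * b                          # placeholder for an expensive model

# focal elements: (interval, BPA) for each epistemically uncertain input
A = [((0.0, 0.5), 0.4), ((0.3, 1.0), 0.6)]
B = [((1.0, 2.0), 0.7), ((1.5, 3.0), 0.3)]

threshold = 1.0                           # event of interest: y <= threshold
belief = plausibility = 0.0
rng = np.random.default_rng(1)
for (ia, ma), (ib, mb) in itertools.product(A, B):
    # approximate the range of the model over this joint focal element by
    # sampling (a cheap stand-in for optimisation over the interval box)
    a = rng.uniform(*ia, size=1000)
    b = rng.uniform(*ib, size=1000)
    y = model(a, b)
    m = ma * mb
    if y.max() <= threshold:              # focal element entirely inside event
        belief += m
    if y.min() <= threshold:              # focal element intersects event
        plausibility += m

print(belief, plausibility)               # Bel <= "probability" <= Pl
```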

  7. Sample preservation, transport and processing strategies for honeybee RNA extraction: Influence on RNA yield, quality, target quantification and data normalization.

    Science.gov (United States)

    Forsgren, Eva; Locke, Barbara; Semberg, Emilia; Laugen, Ane T; Miranda, Joachim R de

    2017-08-01

    Viral infections in managed honey bees are numerous, and most of them are caused by viruses with an RNA genome. Since RNA degrades rapidly, appropriate sample management and RNA extraction methods are imperative to get high quality RNA for downstream assays. This study evaluated the effect of various sampling-transport scenarios (combinations of temperature, RNA stabilizers, and duration of transport) on six RNA quality parameters: yield, purity, integrity, cDNA synthesis efficiency, target detection and quantification. The use of water and extraction buffer was also compared for a primary bee tissue homogenate prior to RNA extraction. The strategy least affected by time was preservation of samples at -80°C. All other regimens turned out to be poor alternatives unless the samples were frozen or processed within 24 h. Chemical stabilizers have the greatest impact on RNA quality, and adding an extra homogenization step (a QIAshredder™ homogenizer) to the extraction protocol significantly improves the RNA yield and chemical purity. This study confirms that RIN values (RNA Integrity Number) should be used cautiously with bee RNA. Using water for the primary homogenate has no negative effect on RNA quality as long as this step is no longer than 15 min. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. How to Handle Speciose Clades? Mass Taxon-Sampling as a Strategy towards Illuminating the Natural History of Campanula (Campanuloideae)

    Science.gov (United States)

    Mansion, Guilhem; Parolly, Gerald; Crowl, Andrew A.; Mavrodiev, Evgeny; Cellinese, Nico; Oganesian, Marine; Fraunhofer, Katharina; Kamari, Georgia; Phitos, Dimitrios; Haberle, Rosemarie; Akaydin, Galip; Ikinci, Nursel; Raus, Thomas; Borsch, Thomas

    2012-01-01

    Background Speciose clades usually harbor species with a broad spectrum of adaptive strategies and complex distribution patterns, and thus constitute ideal systems to disentangle biotic and abiotic causes underlying species diversification. The delimitation of such study systems to test evolutionary hypotheses is difficult because they often rely on artificial genus concepts as starting points. One of the most prominent examples is the bellflower genus Campanula with some 420 species, but up to 600 species when including all lineages to which Campanula is paraphyletic. We generated a large alignment of petD group II intron sequences to include more than 70% of described species as a reference. By comparison with partial data sets we could then assess the impact of selective taxon sampling strategies on phylogenetic reconstruction and subsequent evolutionary conclusions. Methodology/Principal Findings Phylogenetic analyses based on maximum parsimony (PAUP, PRAP), Bayesian inference (MrBayes), and maximum likelihood (RAxML) were first carried out on the large reference data set (D680). Parameters including tree topology, branch support, and age estimates, were then compared to those obtained from smaller data sets resulting from “classification-guided” (D088) and “phylogeny-guided sampling” (D101). Analyses of D088 failed to fully recover the phylogenetic diversity in Campanula, whereas D101 inferred significantly different branch support and age estimates. Conclusions/Significance A short genomic region with high phylogenetic utility allowed us to easily generate a comprehensive phylogenetic framework for the speciose Campanula clade. Our approach recovered 17 well-supported and circumscribed sub-lineages. Knowing these will be instrumental for developing more specific evolutionary hypotheses and guide future research, we highlight the predictive value of a mass taxon-sampling strategy as a first essential step towards illuminating the detailed evolutionary

  9. Behavioral Contexts, Food-Choice Coping Strategies, and Dietary Quality of a Multiethnic Sample of Employed Parents

    Science.gov (United States)

    Blake, Christine E.; Wethington, Elaine; Farrell, Tracy J.; Bisogni, Carole A.; Devine, Carol M.

    2012-01-01

    Employed parents’ work and family conditions provide behavioral contexts for their food choices. Relationships between employed parents’ food-choice coping strategies, behavioral contexts, and dietary quality were evaluated. Data on work and family conditions, sociodemographic characteristics, eating behavior, and dietary intake from two 24-hour dietary recalls were collected in a random sample cross-sectional pilot telephone survey in the fall of 2006. Black, white, and Latino employed mothers (n=25) and fathers (n=25) were recruited from a low/moderate income urban area in upstate New York. Hierarchical cluster analysis (Ward’s method) identified three clusters of parents differing in use of food-choice coping strategies (ie, Individualized Eating, Missing Meals, and Home Cooking). Cluster sociodemographic, work, and family characteristics were compared using χ2 and Fisher’s exact tests. Cluster differences in dietary quality (Healthy Eating Index 2005) were analyzed using analysis of variance. Clusters differed significantly (P≤0.05) on food-choice coping strategies, dietary quality, and behavioral contexts (ie, work schedule, marital status, partner’s employment, and number of children). Individualized Eating and Missing Meals clusters were characterized by nonstandard work hours, having a working partner, single parenthood and with family meals away from home, grabbing quick food instead of a meal, using convenience entrées at home, and missing meals or individualized eating. The Home Cooking cluster included considerably more married fathers with nonemployed spouses and more home-cooked family meals. Food-choice coping strategies affecting dietary quality reflect parents’ work and family conditions. Nutritional guidance and family policy needs to consider these important behavioral contexts for family nutrition and health. PMID:21338739
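
    A minimal sketch of the clustering step (not the authors' analysis): Ward's hierarchical clustering of standardised food-choice coping-strategy scores, cut at three clusters; the score matrix below is synthetic.

```python
# A minimal sketch (not the authors' analysis) of the clustering step: Ward's
# hierarchical clustering of parents on standardised food-choice coping
# strategy scores, cut at three clusters. The input matrix is synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(0)
# rows: 50 parents; columns: hypothetical coping-strategy frequency scores
scores = rng.poisson(lam=[2, 3, 1, 4, 2], size=(50, 5)).astype(float)

Z = linkage(zscore(scores), method="ward")
clusters = fcluster(Z, t=3, criterion="maxclust")   # three clusters, as in the study
print(np.bincount(clusters)[1:])                     # cluster sizes
```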

  10. Attractive ellipsoids in robust control

    CERN Document Server

    Poznyak, Alexander; Azhmyakov, Vadim

    2014-01-01

    This monograph introduces a newly developed robust-control design technique for a wide class of continuous-time dynamical systems called the “attractive ellipsoid method.” Along with a coherent introduction to the proposed control design and related topics, the monograph studies nonlinear affine control systems in the presence of uncertainty and presents a constructive and easily implementable control strategy that guarantees certain stability properties. The authors discuss linear-style feedback control synthesis in the context of the above-mentioned systems. The development and physical implementation of high-performance robust-feedback controllers that work in the absence of complete information is addressed, with numerous examples to illustrate how to apply the attractive ellipsoid method to mechanical and electromechanical systems. While theorems are proved systematically, the emphasis is on understanding and applying the theory to real-world situations. Attractive Ellipsoids in Robust Control will a...

  11. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

    In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  12. Robustness of Long Span Reciprocal Timber Structures

    DEFF Research Database (Denmark)

    Balfroid, Nathalie; Kirkegaard, Poul Henning

    2011-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. The interest has also been driven by recent severe structural failures. A structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper discusses such robustness issues related to the future development of reciprocal timber structures. The paper concludes that these kinds of structures have potential as long-span timber structures in real projects if they are carefully designed with respect to the overall robustness strategies.

  13. Perceptual Robust Design

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard

    The research presented in this PhD thesis has focused on a perceptual approach to robust design. The result of the research and the original contribution to knowledge is a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic...... been presented. Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current practice review was performed. From the review two main research problems were identified. Firstly, a lack of tools...... for perceptual robustness was found to overlap with the optimum for functional robustness, and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis has offered a new perspective on robust design by merging robust design...

  14. A general factor of personality in a sample of inmates: Associations with indicators of life-history strategy and covitality

    Directory of Open Access Journals (Sweden)

    Međedović Janko

    2017-01-01

    Full Text Available This study looked for a General Factor of Personality (GFP) in a sample of male convicts (N=226; mean age 32 years). The GFP was extracted from seven broad personality traits: FFM factors, Amoralism (the negative pole of the lexical Honesty-Humility factor) and Disintegration (operationalization of Schizotypy). Three first-order factors were extracted, labeled Dysfunctionality, Antisociality and Openness, and GFP was found through the hierarchical factor analysis. The nature of the GFP was explored through analysis of its relations with markers of fast Life-History strategy and covitality. The results demonstrated that the GFP is associated with unrestricted sexual behavior, medical problems, mental problems, early involvement in criminal activity and stability of criminal behavior. The evidence shows that the GFP is a meaningful construct on the highest level of personality structure. It may represent a personality indicator of fitness-related characteristics and could be useful in research of personality in an evolutionary context.

  15. Preschool Boys' Development of Emotional Self-regulation Strategies in a Sample At-risk for Behavior Problems

    Science.gov (United States)

    Supplee, Lauren H.; Skuban, Emily Moye; Trentacosta, Christopher J.; Shaw, Daniel S.; Stoltz, Emilee

    2011-01-01

    Little longitudinal research has been conducted on changes in children's emotional self-regulation strategy (SRS) use after infancy, particularly for children at risk. The current study examined changes in boys' emotional SRS from toddlerhood through preschool. Repeated observational assessments using delay of gratification tasks at ages 2, 3, and 4 were examined with both variable- and person-oriented analyses in a low-income sample of boys (N = 117) at-risk for early problem behavior. Results were consistent with theory on emotional SRS development in young children. Children initially used more emotion-focused SRS (e.g., comfort seeking) and transitioned to greater use of planful SRS (e.g., distraction) by age 4. Person-oriented analysis using trajectory analysis found similar patterns from 2–4, with small groups of boys showing delayed movement away from emotion-focused strategies or delay in the onset of regular use of distraction. The results provide a foundation for future research to examine the development of SRS in low-income young children. PMID:21675542

  16. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    Science.gov (United States)

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction as in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions, whereas antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than those on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.
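
    One common rationale for such a pH effect, sketched below under the simplifying assumption of a monoprotic acid, is that the neutral (better-retained) fraction follows the Henderson-Hasselbalch relation, so the gap between extraction pH and pKa controls extractability; the pKa value used is illustrative, not from the study.

```python
# A minimal sketch of the reasoning behind pH optimisation in SPE: the neutral
# (extractable) fraction of an ionisable antibiotic depends on the difference
# between extraction pH and its pKa (Henderson-Hasselbalch). Values are
# illustrative placeholders, not compounds from the study.
def neutral_fraction_acid(pH: float, pKa: float) -> float:
    """Fraction of a monoprotic acid present in its neutral (protonated) form."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pH in (3.0, 7.0):
    # a low-pKa (acidic) compound is mostly neutral at acidic pH,
    # but mostly ionised near neutral pH
    print(pH, round(neutral_fraction_acid(pH, pKa=4.5), 3))
```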

  17. Robust power system frequency control

    CERN Document Server

    Bevrani, Hassan

    2014-01-01

    This updated edition of the industry standard reference on power system frequency control provides practical, systematic and flexible algorithms for regulating load frequency, offering new solutions to the technical challenges introduced by the escalating role of distributed generation and renewable energy sources in smart electric grids. The author emphasizes the physical constraints and practical engineering issues related to frequency in a deregulated environment, while fostering a conceptual understanding of frequency regulation and robust control techniques. The resulting control strategi

  18. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research.

    Science.gov (United States)

    Gentles, Stephen J; Charles, Cathy; Nicholas, David B; Ploeg, Jenny; McKibbon, K Ann

    2016-10-11

    Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews, might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research. The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process, and a rigorous qualitative approach to analysis are necessary features of this review type

  19. Impacts of human activities and sampling strategies on soil heavy metal distribution in a rapidly developing region of China.

    Science.gov (United States)

    Shao, Xuexin; Huang, Biao; Zhao, Yongcun; Sun, Weixia; Gu, Zhiquan; Qian, Weifei

    2014-06-01

    The impacts of industrial and agricultural activities on soil Cd, Hg, Pb, and Cu in Zhangjiagang City, a rapidly developing region in China, were evaluated using two sampling strategies. The soil Cu, Cd, and Pb concentrations near industrial locations were greater than those measured away from industrial locations. The converse was true for Hg. The top enrichment factor (TEF) values, calculated as the ratio of metal concentrations between the topsoil and subsoil, were greater near industrial location than away from industrial locations and were further related to the industry type. Thus, the TEF is an effective index to distinguish sources of toxic elements not only between anthropogenic and geogenic but also among different industry types. Target soil sampling near industrial locations resulted in a greater estimation in high levels of soil heavy metals. This study revealed that the soil heavy metal contamination was primarily limited to local areas near industrial locations, despite rapid development over the last 20 years. The prevention and remediation of the soil heavy metal pollution should focus on these high-risk areas in the future. Copyright © 2014 Elsevier Inc. All rights reserved.
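
    The top enrichment factor used above is simply the topsoil-to-subsoil concentration ratio; a minimal sketch with illustrative numbers:

```python
# A minimal sketch of the top enrichment factor (TEF) described above: the
# ratio of a metal's concentration in topsoil to that in subsoil, with values
# well above 1 suggesting surface (anthropogenic) input. Numbers are
# illustrative, not data from the paper.
def top_enrichment_factor(c_topsoil: float, c_subsoil: float) -> float:
    return c_topsoil / c_subsoil

print(top_enrichment_factor(c_topsoil=0.45, c_subsoil=0.15))  # TEF = 3.0
```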

  20. Limited-sampling strategy models for estimating the pharmacokinetic parameters of 4-methylaminoantipyrine, an active metabolite of dipyrone

    Directory of Open Access Journals (Sweden)

    Suarez-Kurtz G.

    2001-01-01

    Full Text Available Bioanalytical data from a bioequivalence study were used to develop limited-sampling strategy (LSS) models for estimating the area under the plasma concentration versus time curve (AUC) and the peak plasma concentration (Cmax) of 4-methylaminoantipyrine (MAA), an active metabolite of dipyrone. Twelve healthy adult male volunteers received single 600 mg oral doses of dipyrone in two formulations at a 7-day interval in a randomized, crossover protocol. Plasma concentrations of MAA (N = 336), measured by HPLC, were used to develop LSS models. Linear regression analysis and a "jack-knife" validation procedure revealed that the AUC0-∞ and the Cmax of MAA can be accurately predicted (R²>0.95, bias 0.85 of the AUC0-∞ or Cmax for the other formulation. LSS models based on three sampling points (1.5, 4 and 24 h), but using different coefficients for AUC0-∞ and Cmax, predicted the individual values of both parameters for the enrolled volunteers (R²>0.88, bias = -0.65 and -0.37%, precision = 4.3 and 7.4%) as well as for plasma concentration data sets generated by simulation (R²>0.88, bias = -1.9 and 8.5%, precision = 5.2 and 8.7%). Bioequivalence assessment of the dipyrone formulations based on the 90% confidence interval of log-transformed AUC0-∞ and Cmax provided similar results when either the best-estimated or the LSS-derived metrics were used.
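
    A minimal sketch (not the authors' code) of how such a limited-sampling model can be built and checked: an ordinary linear regression of AUC on the three sampling-point concentrations, validated by a leave-one-out ("jack-knife") loop; all concentrations and coefficients below are synthetic.

```python
# A minimal sketch of a limited-sampling-strategy model: linear regression
# predicting AUC from concentrations at 1.5, 4 and 24 h, validated by a
# leave-one-out ("jack-knife") procedure. All data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 24
C = rng.lognormal(mean=1.0, sigma=0.3, size=(n, 3))       # C(1.5h), C(4h), C(24h)
auc_obs = C @ np.array([2.0, 6.0, 10.0]) + rng.normal(0, 1.0, n)  # "observed" AUC

errors = []
for i in range(n):                                          # jack-knife validation
    train = np.delete(np.arange(n), i)
    model = LinearRegression().fit(C[train], auc_obs[train])
    errors.append(model.predict(C[[i]])[0] - auc_obs[i])

bias = np.mean(errors) / np.mean(auc_obs) * 100              # % bias
precision = np.sqrt(np.mean(np.square(errors))) / np.mean(auc_obs) * 100
print(round(bias, 2), round(precision, 2))
```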

  1. External calibration strategy for trace element quantification in botanical samples by LA-ICP-MS using filter paper

    International Nuclear Information System (INIS)

    Nunes, Matheus A.G.; Voss, Mônica; Corazza, Gabriela; Flores, Erico M.M.; Dressler, Valderi L.

    2016-01-01

    The use of reference solutions dispersed on filter paper discs is proposed for the first time as an external calibration strategy for matrix matching and determination of As, Cd, Co, Cr, Cu, Mn, Ni, Pb, Sr, V and Zn in plants by laser ablation-inductively coupled plasma mass spectrometry (LA-ICP-MS). The procedure is based on the use of filter paper discs as support for aqueous reference solutions, which are further evaporated, resulting in solid standards with concentrations up to 250 μg g−1 of each element. The use of filter paper for calibration is proposed as matrix matched standards due to the similarities of this material with botanical samples, with regard to carbon concentration and its distribution through both matrices. These characteristics allowed the use of 13C as internal standard (IS) during the analysis by LA-ICP-MS. In this way, parameters such as analyte signal normalization with 13C, carrier gas flow rate, laser energy, spot size, and calibration range were monitored. The calibration procedure using solution deposition on filter paper discs resulted in precision improvement when 13C was used as IS. The method precision was calculated by the analysis of a certified reference material (CRM) of botanical matrix, considering the RSD obtained for 5 line scans, and was lower than 20%. Accuracy of LA-ICP-MS determinations was evaluated by analysis of four CRM pellets of botanical composition, as well as by comparison with results obtained by ICP-MS using solution nebulization after microwave assisted digestion. Plant samples of unknown elemental composition were analyzed by the proposed LA method and good agreement was obtained with results of solution analysis. Limits of detection (LOD) established for LA-ICP-MS were obtained by the ablation of 10 lines on the filter paper disc containing 40 μL of 5% HNO3 (v v−1) as calibration blank. Values ranged from 0.05 to 0.81 μg g−1. Overall, the use of filter paper as support for dried
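
    A minimal sketch of the calibration logic described above (not the authors' code): analyte intensities from the filter-paper standards are normalised to the 13C internal-standard signal, a linear calibration is fitted, and the LOD is estimated as three times the standard deviation of blank ablations divided by the slope; all intensities below are synthetic.

```python
# A minimal sketch of 13C-normalised external calibration for LA-ICP-MS on
# filter-paper standards, with an LOD estimated as 3 x SD(blank) / slope.
# All intensities are synthetic placeholders.
import numpy as np

conc = np.array([0.0, 10.0, 50.0, 100.0, 250.0])                # ug g-1 on paper
analyte = np.array([40.0, 2100.0, 10300.0, 20600.0, 51000.0])   # analyte counts
carbon13 = np.array([1.00e6, 0.98e6, 1.02e6, 1.01e6, 0.99e6])   # IS counts

y = analyte / carbon13                                  # 13C-normalised signal
slope, intercept = np.polyfit(conc, y, 1)

blank_lines = np.array([38.0, 45.0, 41.0, 36.0, 43.0]) / 1.0e6  # blank ablations
lod = 3.0 * blank_lines.std(ddof=1) / slope
print(round(lod, 3), "ug g-1")
```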

  2. External calibration strategy for trace element quantification in botanical samples by LA-ICP-MS using filter paper

    Energy Technology Data Exchange (ETDEWEB)

    Nunes, Matheus A.G.; Voss, Mônica; Corazza, Gabriela; Flores, Erico M.M.; Dressler, Valderi L., E-mail: vdressler@gmail.com

    2016-01-28

    The use of reference solutions dispersed on filter paper discs is proposed for the first time as an external calibration strategy for matrix matching and determination of As, Cd, Co, Cr, Cu, Mn, Ni, Pb, Sr, V and Zn in plants by laser ablation-inductively coupled plasma mass spectrometry (LA-ICP-MS). The procedure is based on the use of filter paper discs as support for aqueous reference solutions, which are further evaporated, resulting in solid standards with concentrations up to 250 μg g{sup −1} of each element. The use of filter paper for calibration is proposed as matrix matched standards due to the similarities of this material with botanical samples, regarding to carbon concentration and its distribution through both matrices. These characteristics allowed the use of {sup 13}C as internal standard (IS) during the analysis by LA-ICP-MS. In this way, parameters as analyte signal normalization with {sup 13}C, carrier gas flow rate, laser energy, spot size, and calibration range were monitored. The calibration procedure using solution deposition on filter paper discs resulted in precision improvement when {sup 13}C was used as IS. The method precision was calculated by the analysis of a certified reference material (CRM) of botanical matrix, considering the RSD obtained for 5 line scans and was lower than 20%. Accuracy of LA-ICP-MS determinations were evaluated by analysis of four CRM pellets of botanical composition, as well as by comparison with results obtained by ICP-MS using solution nebulization after microwave assisted digestion. Plant samples of unknown elemental composition were analyzed by the proposed LA method and good agreement were obtained with results of solution analysis. Limits of detection (LOD) established for LA-ICP-MS were obtained by the ablation of 10 lines on the filter paper disc containing 40 μL of 5% HNO{sub 3} (v v{sup −1}) as calibration blank. Values ranged from 0.05 to 0.81  μg g{sup −1}. Overall, the use of filter

  3. Planning schistosomiasis control: investigation of alternative sampling strategies for Schistosoma mansoni to target mass drug administration of praziquantel in East Africa.

    Science.gov (United States)

    Sturrock, Hugh J W; Gething, Pete W; Ashton, Ruth A; Kolaczinski, Jan H; Kabatereine, Narcis B; Brooker, Simon

    2011-09-01

    In schistosomiasis control, there is a need to geographically target treatment to populations at high risk of morbidity. This paper evaluates alternative sampling strategies for surveys of Schistosoma mansoni to target mass drug administration in Kenya and Ethiopia. Two main designs are considered: lot quality assurance sampling (LQAS) of children from all schools; and a geostatistical design that samples a subset of schools and uses semi-variogram analysis and spatial interpolation to predict prevalence in the remaining unsurveyed schools. Computerized simulations are used to investigate the performance of sampling strategies in correctly classifying schools according to treatment needs and their cost-effectiveness in identifying high prevalence schools. LQAS performs better than geostatistical sampling in correctly classifying schools, but at a higher cost per high-prevalence school correctly classified. It is suggested that the optimal surveying strategy for S. mansoni needs to take into account the goals of the control programme and the financial and drug resources available.
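
    A minimal sketch of an LQAS-style classification rule of the kind simulated in the paper: sample n children per school and flag the school for mass treatment when the number of positives exceeds a decision threshold; the sample size, threshold, and prevalences are illustrative, not the study's parameters.

```python
# A minimal sketch of a lot quality assurance sampling (LQAS) rule for
# classifying schools: sample n children per school and flag the school for
# mass treatment if the number of egg-positive children exceeds a decision
# threshold d. Parameters and prevalences are illustrative placeholders.
import numpy as np

def classify_school(true_prevalence: float, n: int = 15, d: int = 3,
                    rng=np.random.default_rng(0)) -> bool:
    """Return True if the school is classified as high prevalence."""
    positives = rng.binomial(n, true_prevalence)
    return positives > d

# probability of flagging schools with different true prevalence levels
for prev in (0.05, 0.20, 0.50):
    flags = [classify_school(prev, rng=np.random.default_rng(i)) for i in range(2000)]
    print(prev, round(np.mean(flags), 3))
```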

  4. Robustness of Structural Systems

    DEFF Research Database (Denmark)

    Canisius, T.D.G.; Sørensen, John Dalsgaard; Baker, J.W.

    2007-01-01

    The importance of robustness as a property of structural systems has been recognised following several structural failures, such as that at Ronan Point in 1968, where the consequences were deemed unacceptable relative to the initiating damage. A variety of research efforts in the past decades have attempted to quantify aspects of robustness such as redundancy and identify design principles that can improve robustness. This paper outlines the progress of recent work by the Joint Committee on Structural Safety (JCSS) to develop comprehensive guidance on assessing and providing robustness in structural systems. Guidance is provided regarding the assessment of robustness in a framework that considers potential hazards to the system, vulnerability of system components, and failure consequences. Several proposed methods for quantifying robustness are reviewed, and guidelines for robust design...

  5. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    Science.gov (United States)

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2018-05-01

    High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using the full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98). The proposed sparse sampling strategy promises to achieve the target area under the curve as part of precision dosing.
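
    A minimal sketch of a two-compartment intravenous-infusion model evaluated at the reported optimal sampling times; the clearance, volume, dose, and infusion-duration values are placeholders, not the published population estimates.

```python
# A minimal sketch (not the published model) of a two-compartment IV-infusion
# pharmacokinetic model evaluated at the reported optimal sampling times
# (0.08, 0.61, 2.0 and 4.0 h after the end of infusion). All parameter values
# below are hypothetical placeholders, not the population estimates.
import numpy as np
from scipy.integrate import solve_ivp

CL, V1, Q, V2 = 25.0, 15.0, 30.0, 20.0     # L/h, L, L/h, L (hypothetical)
dose, t_inf = 280.0, 0.5                   # mg, infusion duration (h)

def rates(t, a):
    rate_in = dose / t_inf if t <= t_inf else 0.0
    c1, c2 = a[0] / V1, a[1] / V2
    return [rate_in - CL * c1 - Q * (c1 - c2), Q * (c1 - c2)]

t_samples = t_inf + np.array([0.08, 0.61, 2.0, 4.0])   # optimal times post-infusion
sol = solve_ivp(rates, [0.0, t_samples[-1]], [0.0, 0.0],
                t_eval=t_samples, max_step=0.01)
print(np.round(sol.y[0] / V1, 2))          # central-compartment concentrations (mg/L)
```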

  6. Forecasting exchange rates: a robust regression approach

    OpenAIRE

    Preminger, Arie; Franck, Raphael

    2005-01-01

    The least squares estimation method, as well as other ordinary estimation methods for regression models, can be severely affected by a small number of outliers, thus providing poor out-of-sample forecasts. This paper suggests a robust regression approach, based on the S-estimation method, to construct forecasting models that are less sensitive to data contamination by outliers. Robust linear autoregressive (RAR) and robust neural network (RNN) models are estimated to study the predictabil...
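
    A minimal sketch of the idea of robust autoregressive forecasting; the paper uses S-estimation, whereas the readily available Huber M-estimator below stands in purely as an illustration of down-weighting outlying observations, applied to synthetic returns.

```python
# A minimal sketch of a robust autoregressive forecast. The paper uses
# S-estimation; the Huber M-estimator here is only a convenient stand-in to
# illustrate down-weighting outlying returns. Data are synthetic.
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)
r = rng.normal(0, 0.5, 300)
r[::40] += rng.normal(0, 5, len(r[::40]))          # inject occasional outliers

X, y = r[:-1].reshape(-1, 1), r[1:]                 # AR(1) design matrix
model = HuberRegressor().fit(X, y)
print(model.coef_, model.predict([[r[-1]]]))        # one-step-ahead forecast
```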

  7. Limited sampling strategies drawn within 3 hours postdose poorly predict mycophenolic acid area-under-the-curve after enteric-coated mycophenolate sodium.

    NARCIS (Netherlands)

    Winter, B.C. de; Gelder, T. van; Mathôt, R.A.A.; Glander, P.; Tedesco-Silva, H.; Hilbrands, L.B.; Budde, K.; Hest, R.M. van

    2009-01-01

    Previous studies predicted that limited sampling strategies (LSS) for estimation of mycophenolic acid (MPA) area-under-the-curve (AUC(0-12)) after ingestion of enteric-coated mycophenolate sodium (EC-MPS) using a clinically feasible sampling scheme may have poor predictive performance. Failure of

  8. Complementary sample preparation strategies for analysis of cereal β-glucan oxidation products by UPLC-MS/MS

    Science.gov (United States)

    Boulos, Samy; Nyström, Laura

    2017-11-01

    The oxidation of cereal (1→3,1→4)-β-D-glucan can influence the health promoting and technological properties of this linear, soluble homopolysaccharide by introduction of new functional groups or chain scission. Apart from deliberate oxidative modifications, oxidation of β-glucan can already occur during processing and storage, which is mediated by hydroxyl radicals (HO•) formed by the Fenton reaction. We present four complementary sample preparation strategies to investigate oat and barley β-glucan oxidation products by hydrophilic interaction ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS), employing selective enzymatic digestion, graphitized carbon solid phase extraction (SPE), and functional group labeling techniques. The combination of these methods allows for detection of both lytic (C1, C3/4, C5) and non-lytic (C2, C4/3, C6) oxidation products resulting from HO•-attack at different glucose-carbons. By treating oxidized β-glucan with lichenase and β-glucosidase, only oxidized parts of the polymer remained in oligomeric form, which could be separated by SPE from the vast majority of non-oxidized glucose units. This allowed for the detection of oligomers with mid-chain glucuronic acids (C6) and carbonyls, as well as carbonyls at the non-reducing end from lytic C3/C4 oxidation. Neutral reducing ends were detected by reductive amination with anthranilic acid/amide as labeled glucose and cross-ring cleaved units (arabinose, erythrose) after enzyme treatment and SPE. New acidic chain termini were observed by carbodiimide-mediated amidation of carboxylic acids as anilides of gluconic, arabinonic, and erythronic acids. Hence, a full characterization of all types of oxidation products was possible by combining complementary sample preparation strategies. Differences in fine structure depending on source (oat vs. barley) translates to the ratio of observed oxidized oligomers, with in-depth analysis corroborating a random HO

  9. Developmental Strategy For Effective Sampling To Detect Possible Nutrient Fluxes In Oligotrophic Coastal Reef Waters In The Caribbean

    Science.gov (United States)

    Mendoza, W. G.; Corredor, J. E.; Ko, D.; Zika, R. G.; Mooers, C. N.

    2008-05-01

    The increasing effort to develop the coastal ocean observing system (COOS) in various institutions has gained momentum due to its high value to climate, environmental, economic, and health issues. The stress contributed by nutrients to the coral reef ecosystem is among many problems that are targeted to be resolved using this system. Traditional nutrient sampling has been inadequate to resolve issues on episodic nutrient fluxes in reef regions due to temporal and spatial variability. This paper illustrates sampling strategy using the COOS information to identify areas that need critical investigation. The area investigated is within the Puerto Rico subdomain (60-70oW, 15-20oN), and Caribbean Time Series (CaTS), World Ocean Circulation Experiment (WOCE), Intra-America Sea (IAS) ocean nowcast/forecast system (IASNFS), and other COOS-related online datasets are utilized. Nutrient profile results indicate nitrate is undetectable in the upper 50 m apparently due to high biological consumption. Nutrients are delivered in Puerto Rico particularly in the CaTS station either via a meridional jet formed from opposing cyclonic and anticyclonic eddies or wind-driven upwelling. The strong vertical fluctuation in the upper 50 m demonstrates a high anomaly in temperature and salinity and a strong cross correlation signal. High chlorophyll a concentration corresponding to seasonal high nutrient influx coincides with higher precipitation accumulation rates and apparent riverine input from the Amazon and Orinoco Rivers during summer (August) than during winter (February) seasons. Non-detectability of nutrients in the upper 50 m is a reflection of poor sampling frequency or the absence of a highly sensitive nutrient analysis method to capture episodic events. Thus, this paper was able to determine the range of depths and concentrations that need to be critically investigated to determine nutrient fluxes, nutrient sources, and climatological factors that can affect nutrient delivery

  10. Robust Portfolio Optimization Using Pseudodistances

    Science.gov (United States)

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948
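
    A minimal sketch of the general approach of plugging robust location and scatter estimates into portfolio optimisation; the paper's estimators minimise an empirical pseudodistance, and the Minimum Covariance Determinant used here is only a convenient stand-in, applied to synthetic returns.

```python
# A minimal sketch of robust portfolio construction: replace the sample
# mean/covariance with robust estimates before optimisation. The Minimum
# Covariance Determinant below is a stand-in for the paper's
# pseudodistance-based estimators. Returns are synthetic.
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
returns = rng.multivariate_normal([0.001] * 4, 0.0004 * np.eye(4), size=500)
returns[::50] += 0.05 * rng.standard_normal(returns[::50].shape)   # outliers

mcd = MinCovDet().fit(returns)
mu, sigma = mcd.location_, mcd.covariance_

w = np.linalg.solve(sigma, np.ones(4))              # global minimum-variance weights
w /= w.sum()
print(np.round(w, 3), float(mu @ w))                # weights and portfolio mean
```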

  11. Robust Growth Determinants

    OpenAIRE

    Doppelhofer, Gernot; Weeks, Melvyn

    2011-01-01

    This paper investigates the robustness of determinants of economic growth in the presence of model uncertainty, parameter heterogeneity and outliers. The robust model averaging approach introduced in the paper uses a flexible and parsimonious mixture modeling that allows for fat-tailed errors compared to the normal benchmark case. Applying robust model averaging to growth determinants, the paper finds that eight out of eighteen variables found to be significantly related to economic growth

  12. Robust Programming by Example

    OpenAIRE

    Bishop , Matt; Elliott , Chip

    2011-01-01

    Part 2: WISE 7; International audience; Robust programming lies at the heart of the type of coding called “secure programming”. Yet it is rarely taught in academia. More commonly, the focus is on how to avoid creating well-known vulnerabilities. While important, that misses the point: a well-structured, robust program should anticipate where problems might arise and compensate for them. This paper discusses one view of robust programming and gives an example of how it may be taught.

  13. A six-hour extrapolated sampling strategy for monitoring mycophenolic acid in renal transplant patients in the Indian subcontinent

    Directory of Open Access Journals (Sweden)

    Fleming D

    2006-01-01

    Full Text Available Background: Therapeutic drug monitoring for mycophenolic acid (MPA) is increasingly being advocated. The present therapeutic range relates to the 12-hour area under the serum concentration time profile (AUC). However, this is a cumbersome, tedious, cost restricting procedure. Is it possible to reduce this sampling period? Aim: To compare the AUC from a reduced sampling strategy with the full 12-hour profile for MPA. Settings and Design: Clinical Pharmacology Unit of a tertiary care hospital in South India. Retrospective, paired data. Materials and Methods: Thirty-four 12-hour profiles from post-renal transplant patients on Cellcept® were evaluated. Profiles were grouped according to steroid and immunosuppressant co-medication and the time after transplant. MPA was estimated by high performance liquid chromatography with UV detection. From the 12-hour profiles the AUC up to only six hours was calculated by the trapezoidal rule and a correction factor applied. These two AUCs were then compared. Statistical Analysis: Linear regression, intra-class correlations (ICC) and a two-tailed paired t-test were applied to the data. Results: Comparing the 12-hour AUC with the paired 6-hour extrapolated AUC, the ICC and linear regression (r²) were very good for all three groups. No statistical difference was found by a two-tailed paired t-test. No bias was seen with a Bland-Altman plot or by calculation. Conclusion: For patients on Cellcept® with prednisolone ± cyclosporine, the 6-hour corrected AUC is an accurate measure of the full 12-hour AUC.
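
    A minimal sketch (not the authors' code) of the calculation described above: a trapezoidal AUC over the first six hours scaled by a correction factor to approximate the 12-hour AUC; the concentrations and the factor are illustrative placeholders.

```python
# A minimal sketch of a 6-hour extrapolated AUC: trapezoidal AUC over 0-6 h
# multiplied by a correction factor to estimate the full 12-hour AUC.
# Concentrations and the factor are illustrative placeholders.
import numpy as np

times = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0])        # h post-dose
conc = np.array([1.2, 14.0, 8.5, 4.0, 2.5, 1.8])        # mg/L MPA (synthetic)

auc_0_6 = np.trapz(conc, times)                          # trapezoidal rule
correction_factor = 1.25                                  # hypothetical value
auc_0_12_est = auc_0_6 * correction_factor
print(round(auc_0_6, 2), round(auc_0_12_est, 2))
```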

  14. Estimates of microbial quality and concentration of copper in distributed drinking water are highly dependent on sampling strategy.

    Science.gov (United States)

    Lehtola, Markku J; Miettinen, Ilkka T; Hirvonen, Arja; Vartiainen, Terttu; Martikainen, Pertti J

    2007-12-01

    The numbers of bacteria generally increase in distributed water. Often household pipelines or water fittings (e.g., taps) represent the most critical location for microbial growth in water distribution systems. According to the European Union drinking water directive, there should not be abnormal changes in the colony counts in water. We used a pilot distribution system to study the effects of water stagnation on drinking water microbial quality, concentration of copper and formation of biofilms with two pipeline materials commonly used in households: copper and plastic (polyethylene). Water stagnation for more than 4 h significantly increased both the copper concentration and the number of bacteria in water. Heterotrophic plate counts were six times higher in polyethylene pipes and ten times higher in copper pipes after 16 h of stagnation than after only 40 min of stagnation. The increase in the heterotrophic plate counts was linear with time in both copper and plastic pipelines. In the distribution system, bacteria originated mainly from biofilms, because in laboratory tests with water there was only minor growth of bacteria after 16 h of stagnation. Our study indicates that water stagnation in the distribution system clearly affects microbial numbers and the concentration of copper in water, and should be considered when planning the sampling strategy for drinking water quality control in distribution systems.

  15. [Strategies for biobank networks. Classification of different approaches for locating samples and an outlook on the future within the BBMRI-ERIC].

    Science.gov (United States)

    Lablans, Martin; Kadioglu, Dennis; Mate, Sebastian; Leb, Ines; Prokosch, Hans-Ulrich; Ückert, Frank

    2016-03-01

    Medical research projects often require more biological material than can be supplied by a single biobank. For this reason, a multitude of strategies support locating potential research partners with matching material without requiring centralization of sample storage. Classification of different strategies for biobank networks, in particular for locating suitable samples. Description of an IT infrastructure combining these strategies. Existing strategies can be classified according to three criteria: (a) granularity of sample data: coarse bank-level data (catalogue) vs. fine-granular sample-level data, (b) location of sample data: central (central search service) vs. decentral storage (federated search services), and (c) level of automation: automatic (query-based, federated search service) vs. semi-automatic (inquiry-based, decentral search). All mentioned search services require data integration. Metadata help to overcome semantic heterogeneity. The "Common Service IT" in BBMRI-ERIC (Biobanking and BioMolecular Resources Research Infrastructure) unites a catalogue, the decentral search and metadata in an integrated platform. As a result, researchers receive versatile tools to search suitable biomaterial, while biobanks retain a high degree of data sovereignty. Despite their differences, the presented strategies for biobank networks do not rule each other out but can complement and even benefit from each other.

  16. The association between implementation strategy use and the uptake of hepatitis C treatment in a national sample.

    Science.gov (United States)

    Rogal, Shari S; Yakovchenko, Vera; Waltz, Thomas J; Powell, Byron J; Kirchner, JoAnn E; Proctor, Enola K; Gonzalez, Rachel; Park, Angela; Ross, David; Morgan, Timothy R; Chartier, Maggie; Chinman, Matthew J

    2017-05-11

    Hepatitis C virus (HCV) is a common and highly morbid illness. New medications with much higher cure rates have become the new evidence-based practice in the field. Understanding the implementation of these new medications nationally provides an opportunity to advance the understanding of the role of implementation strategies in clinical outcomes on a large scale. The Expert Recommendations for Implementing Change (ERIC) study defined discrete implementation strategies and clustered these strategies into groups. The present evaluation assessed the use of these strategies and clusters in the context of HCV treatment across the US Department of Veterans Affairs (VA), Veterans Health Administration, the largest provider of HCV care nationally. A 73-item electronic survey was developed and sent to all VA sites treating HCV to assess whether or not a site used each ERIC-defined implementation strategy related to employing the new HCV medication in 2014. VA national data on the number of Veterans starting the new HCV medications at each site were collected. The associations between treatment starts and the number and type of implementation strategies were assessed. A total of 80 (62%) sites responded. Respondents endorsed an average of 25 ± 14 strategies. The number of treatment starts was positively correlated with the total number of strategies endorsed (r = 0.43). Sites in the highest quartile of treatment starts endorsed significantly more strategies than sites in the lowest quartile, which endorsed 15 strategies. There were significant differences in the types of strategies endorsed by sites in the highest and lowest quartiles of treatment starts. Four of the 10 top strategies for sites in the top quartile had significant correlations with treatment starts, compared to only 1 of the 10 top strategies in the bottom quartile sites. Overall, only 3 of the top 15 most frequently used strategies were associated with treatment. These results suggest that sites that used a greater number of implementation

  17. Quasi-Topotactic Transformation of FeOOH Nanorods to Robust Fe2O3 Porous Nanopillars Triggered with a Facile Rapid Dehydration Strategy for Efficient Photoelectrochemical Water Splitting.

    Science.gov (United States)

    Liao, Aizhen; He, Huichao; Tang, Lanqin; Li, Yichang; Zhang, Jiyuan; Chen, Jiani; Chen, Lan; Zhang, Chunfeng; Zhou, Yong; Zou, Zhigang

    2018-03-28

    A facile rapid dehydration (RD) strategy is explored for quasi-topotactic transformation of FeOOH nanorods to robust Fe2O3 porous nanopillars, avoiding collapse, shrinkage, and coalescence, and is compared with a conventional treatment route. Additionally, the RD process is capable of generating a beneficial porous structure for photoelectrochemical water oxidation. The obtained RD-Fe2O3 photoanode exhibits a photocurrent density as high as 2.0 mA cm-2 at 1.23 V versus the reversible hydrogen electrode (RHE) and a saturated photocurrent density of 3.5 mA cm-2 at 1.71 V versus RHE without any cocatalysts, which is about a 270% improvement in photocurrent density over Fe2O3 prepared via the conventional temperature-rising route (0.75 mA cm-2 at 1.23 V vs RHE and 1.48 mA cm-2 at 1.71 V vs RHE, respectively). The enhanced photocurrent of RD-Fe2O3 is attributed to a synergistic effect of the following factors: (i) preservation of single-crystalline nanopillars decreases the charge-carrier recombination; (ii) formation of long nanopillars enhances light harvesting; and (iii) the porous structure shortens the hole transport distance from the bulk material to the electrode-electrolyte interface.

  18. Robust procedures in chemometrics

    DEFF Research Database (Denmark)

    Kotwa, Ewelina

    properties of the analysed data. The broad theoretical background of robust procedures was given as a very useful supplement to the classical methods, and a new tool, based on robust PCA, aiming at identifying Rayleigh and Raman scatters in excitation-mission (EEM) data was developed. The results show...

  19. Fate of organic microcontaminants in wastewater treatment and river systems: An uncertainty assessment in view of sampling strategy, and compound consumption rate and degradability.

    Science.gov (United States)

    Aymerich, I; Acuña, V; Ort, C; Rodríguez-Roda, I; Corominas, Ll

    2017-11-15

    The growing awareness of the relevance of organic microcontaminants in the environment has led to a growing number of studies on the attenuation of these compounds in wastewater treatment plants (WWTPs) and rivers. However, the effects of the sampling strategy (frequency and duration of composite samples) on the attenuation estimates are largely unknown. Our goal was to assess how the frequency and duration of composite samples influence the uncertainty of attenuation estimates in WWTPs and rivers. Furthermore, we also assessed how compound consumption rate and degradability influence uncertainty. The assessment was conducted by simulating the integrated wastewater system of Puigcerdà (NE Iberian Peninsula) using a sewer pattern generator and a coupled model of the WWTP and river. Results showed that the sampling strategy is especially critical at the influent of the WWTP, particularly when the number of toilet flushes containing the compound of interest is small (≤100 toilet flushes with compound per day), and less critical at the effluent of the WWTP and in the river due to the mixing effects of the WWTP. For example, at the WWTP, when evaluating a compound that is present in 50 pulses per day using a sampling frequency of 15 min to collect a 24-h composite sample, the attenuation uncertainty can range from 94% (0% degradability) to 9% (90% degradability). The estimation of attenuation in rivers is less critical than in WWTPs, as the attenuation uncertainty was lower than 10% for all evaluated scenarios. Interestingly, the errors in the estimates of attenuation are usually lower than those of loads for most sampling strategies and compound characteristics (e.g. consumption and degradability), although the opposite occurs for compounds with low consumption and inappropriate sampling strategies at the WWTP. Hence, when designing a sampling campaign, one should consider the influence of compounds' consumption and degradability as well as the desired level of accuracy in
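    The sampling-frequency effect described above can be reproduced with a toy Monte Carlo (this is not the authors' sewer pattern generator; all numbers are illustrative): a compound arrives as a small number of short pulses per day, and a 24-h composite sample is built from grabs taken every 15 minutes.

```python
# Toy Monte Carlo of the sampling-strategy effect: a compound enters the sewer
# as short pulses, and the influent is characterised by grabbing an aliquot
# every `interval_min` minutes to build a 24-h composite sample.
import numpy as np

rng = np.random.default_rng(1)

def load_uncertainty(pulses_per_day, interval_min, n_days=2000):
    """Relative spread of the estimated daily load over many simulated days."""
    errors = []
    minutes = 24 * 60
    grab_times = np.arange(0, minutes, interval_min)
    for _ in range(n_days):
        pulse_times = rng.uniform(0, minutes, size=pulses_per_day)
        # each pulse passes the sampling point for ~1 minute
        caught = np.sum([np.any(np.abs(grab_times - tp) < 0.5) for tp in pulse_times])
        est_load = caught * interval_min          # scale grabs back to a daily load
        errors.append(est_load / pulses_per_day - 1.0)
    return np.std(errors)

for pulses in (10, 50, 500):
    print(f"{pulses:4d} pulses/day, 15-min grabs -> "
          f"load uncertainty ~ {100 * load_uncertainty(pulses, 15):.0f}%")
```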

  20. The analytical calibration in (bio)imaging/mapping of the metallic elements in biological samples--definitions, nomenclature and strategies: state of the art.

    Science.gov (United States)

    Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech

    2015-01-01

    Nowadays, studies related to the distribution of metallic elements in biological samples are among the most important issues. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging of metallic elements in various kinds of biological samples. However, in this literature there is a lack of articles dedicated to reviewing calibration strategies and their problems, nomenclature, definitions, and the ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of metallic elements in biological samples, including (1) nomenclature; (2) definitions; and (3) selected, sophisticated examples of calibration strategies with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, the aim of this work is to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from those hitherto used. This article is the first work in the literature that refers to and emphasizes many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Sustainable Resilient, Robust & Resplendent Enterprises

    DEFF Research Database (Denmark)

    Edgeman, Rick

    to their impact. Resplendent enterprises are introduced with resplendence referring not to some sort of public or private façade, but instead refers to organizations marked by dual brilliance and nobility of strategy, governance and comportment that yields superior and sustainable triple bottom line performance....... Herein resilience, robustness, and resplendence (R3) are integrated with sustainable enterprise excellence (Edgeman and Eskildsen, 2013) or SEE and social-ecological innovation (Eskildsen and Edgeman, 2012) to aid progress of a firm toward producing continuously relevant performance that proceed from...

  2. Robustness analysis of chiller sequencing control

    International Nuclear Information System (INIS)

    Liao, Yundan; Sun, Yongjun; Huang, Gongsheng

    2015-01-01

    Highlights: • Uncertainties associated with chiller sequencing control were systematically quantified. • Robustness of chiller sequencing control was systematically analyzed. • Different sequencing control strategies were sensitive to different uncertainties. • A numerical method was developed for easy selection of chiller sequencing control. - Abstract: Multiple-chiller plants are commonly employed in heating, ventilating and air-conditioning systems to increase operational feasibility and energy efficiency under part-load conditions. In a multiple-chiller plant, chiller sequencing control plays a key role in achieving overall energy efficiency without sacrificing the cooling sufficiency needed for indoor thermal comfort. Various sequencing control strategies have been developed and implemented in practice. Based on the observations that (i) uncertainty, which cannot be avoided in chiller sequencing control, has a significant impact on the control performance and may cause the control to fail to achieve the expected control and/or energy performance, and (ii) few studies in the current literature have systematically addressed this issue, this paper presents a study on the robustness of chiller sequencing control in order to understand the robustness of various chiller sequencing control strategies under different types of uncertainty. Based on the robustness analysis, a simple and applicable method is developed to select the most robust control strategy for a given chiller plant in the presence of uncertainties, which will be verified using case studies

  3. Robustness Beamforming Algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Dehghani

    2014-04-01

    Full Text Available Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach to robust adaptive minimum variance distortionless response (MVDR) beamforming is presented, which is robust against uncertainties in both the steering vector and the covariance matrix. The method solves an optimization problem that contains a quadratic objective function and a quadratic constraint. The optimization problem is nonconvex but is converted to a convex optimization problem in this paper. It is solved by the interior-point method, and the optimal weight vector for robust beamforming is obtained.
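    For concreteness, the sketch below implements an MVDR beamformer with diagonal loading, a common and much simpler robustification against steering-vector and covariance errors; it is only a stand-in and not the convex optimization method proposed in the record.

```python
# Minimal MVDR beamformer with diagonal loading, shown only to make the
# quantities in the record concrete; not the paper's convex method.
import numpy as np

def steering_vector(n_sensors, theta, d=0.5):
    """Uniform linear array response (element spacing d in wavelengths)."""
    k = np.arange(n_sensors)
    return np.exp(2j * np.pi * d * k * np.sin(theta))

def mvdr_weights(R, a, loading=1e-2):
    # Diagonal loading regularises the sample covariance before inversion.
    Rl = R + loading * np.trace(R) / R.shape[0] * np.eye(R.shape[0])
    w = np.linalg.solve(Rl, a)
    return w / (a.conj() @ w)

n, snapshots = 8, 200
rng = np.random.default_rng(0)
a_sig = steering_vector(n, np.deg2rad(10))
a_int = steering_vector(n, np.deg2rad(-40))
X = (np.sqrt(10) * a_int[:, None] * rng.standard_normal(snapshots)
     + rng.standard_normal((n, snapshots)) + 1j * rng.standard_normal((n, snapshots)))
R = X @ X.conj().T / snapshots            # sample covariance (interference + noise)

w = mvdr_weights(R, a_sig)
print("gain toward signal    :", abs(w.conj() @ a_sig))   # 1 by construction
print("gain toward interferer:", abs(w.conj() @ a_int))   # strongly suppressed
```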

  4. Parallel Solution of Robust Nonlinear Model Predictive Control Problems in Batch Crystallization

    Directory of Open Access Journals (Sweden)

    Yankai Cao

    2016-06-01

    Full Text Available Representing the uncertainties with a set of scenarios, the optimization problem resulting from a robust nonlinear model predictive control (NMPC strategy at each sampling instance can be viewed as a large-scale stochastic program. This paper solves these optimization problems using the parallel Schur complement method developed to solve stochastic programs on distributed and shared memory machines. The control strategy is illustrated with a case study of a multidimensional unseeded batch crystallization process. For this application, a robust NMPC based on min–max optimization guarantees satisfaction of all state and input constraints for a set of uncertainty realizations, and also provides better robust performance compared with open-loop optimal control, nominal NMPC, and robust NMPC minimizing the expected performance at each sampling instance. The performance of robust NMPC can be improved by generating optimization scenarios using Bayesian inference. With the efficient parallel solver, the solution time of one optimization problem is reduced from 6.7 min to 0.5 min, allowing for real-time application.
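    A heavily reduced version of the min-max idea can be written as an epigraph problem over a handful of scenarios; the sketch below optimizes a single static decision (not a full NMPC over a batch crystallizer) with an invented quadratic cost and uncertain gain.

```python
# Toy scenario-based min-max decision: minimise t subject to t >= cost_i(u)
# for every scenario i (epigraph reformulation). All numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

scenarios = [0.8, 1.0, 1.3]          # uncertain gain realisations
setpoint = 2.0

def cost(u, k):
    return (k * u - setpoint) ** 2 + 0.05 * u ** 2

x0 = np.array([1.0, 10.0])           # decision variables [u, t]
constraints = [{"type": "ineq", "fun": (lambda z, k=k: z[1] - cost(z[0], k))}
               for k in scenarios]
bounds = [(0.0, 5.0), (None, None)]  # input constraint on u, epigraph variable t free

res = minimize(lambda z: z[1], x0, method="SLSQP",
               bounds=bounds, constraints=constraints)
u_opt, worst = res.x
print(f"min-max input u = {u_opt:.3f}, worst-case cost = {worst:.3f}")
```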

  5. Robust Active Label Correction

    DEFF Research Database (Denmark)

    Kremer, Jan; Sha, Fei; Igel, Christian

    2018-01-01

    Active label correction addresses the problem of learning from input data for which noisy labels are available (e.g., from imprecise measurements or crowd-sourcing) and each true label can be obtained at a significant cost (e.g., through additional measurements or human experts). To minimize ... To select labels for correction, we adopt the active learning strategy of maximizing the expected model change. We consider the change in regularized empirical risk functionals that use different pointwise loss functions for patterns with noisy and true labels, respectively. Different loss functions for the noisy data lead to different active label correction algorithms. If loss functions consider the label noise rates, these rates are estimated during learning, where importance weighting compensates for the sampling bias. We show empirically that viewing the true label as a latent variable and computing...

  6. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...

  7. Robustness of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    This paper describes the background of the robustness requirements implemented in the Danish Code of Practice for Safety of Structures and in the Danish National Annex to the Eurocode 0, see (DS-INF 146, 2003), (DS 409, 2006), (EN 1990 DK NA, 2007) and (Sørensen and Christensen, 2006). More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements to efficiency in design and execution followed by increased risk of human errors, has made the need of requirements to robustness of new structures essential. According to Danish design rules, robustness shall be documented for all structures in the high consequence class. The design procedure to document sufficient robustness consists of: 1) Review of loads and possible failure modes / scenarios and determination of acceptable collapse extent; 2) Review ...

  8. Robustness of structures

    DEFF Research Database (Denmark)

    Vrouwenvelder, T.; Sørensen, John Dalsgaard

    2009-01-01

    After the collapse of the World Trade Centre towers in 2001 and a number of collapses of structural systems in the beginning of the century, robustness of structural systems has gained renewed interest. Despite many significant theoretical, methodical and technological advances, structural ... of robustness for structural design such requirements are not substantiated in more detail, nor has the engineering profession been able to agree on an interpretation of robustness which facilitates its quantification. A European COST action TU 601 on 'Robustness of structures' started in 2007 ... by a group of members of the CSS. This paper describes the ongoing work in this action, with emphasis on the development of a theoretical and risk-based quantification and optimization procedure on the one side and a practical pre-normative guideline on the other.

  9. Robust Approaches to Forecasting

    OpenAIRE

    Jennifer Castle; David Hendry; Michael P. Clements

    2014-01-01

    We investigate alternative robust approaches to forecasting, using a new class of robust devices, contrasted with equilibrium correction models. Their forecasting properties are derived facing a range of likely empirical problems at the forecast origin, including measurement errors, impulses, omitted variables, unanticipated location shifts and incorrectly included variables that experience a shift. We derive the resulting forecast biases and error variances, and indicate when the methods ar...

  10. Robustness - theoretical framework

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.

    2010-01-01

    More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements to efficiency in design and execution followed by increased risk of human errors, has made the need of requirements to robustness of new structures essential. The aim of this fact sheet is to describe a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines.

  11. Stress and coping strategies in a sample of South African managers involved in post-graduate managerial studies

    Directory of Open Access Journals (Sweden)

    Judora J. Spangenberg

    2000-06-01

    Full Text Available To examine the relationships between stress levels and, respectively, stressor appraisal, coping strategies and biographical variables, 107 managers completed a biographical questionnaire, the Experience of Work and Life Circumstances Questionnaire, and the Coping Strategy Indicator. Significant negative correlations were found between stress levels and appraisal scores on all work-related stressors. An avoidant coping strategy explained significant variance in stress levels in a model also containing social support-seeking and problem-solving coping strategies. It was concluded that an avoidant coping strategy probably contributed to increased stress levels. Female managers experienced significantly higher stress levels and utilized a social support-seeking coping strategy significantly more than male managers did.

  12. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Full Text Available Qualitative robustness, influence function, and breakdown point are three main concepts used to judge an estimator from the viewpoint of robust estimation. It is important as well as interesting to study the relations among them. This article attempts to present the concept of qualitative robustness as forwarded by its first proponents and its later development. It illustrates the intricacies of qualitative robustness and its relation with consistency, and also tries to remove commonly believed misunderstandings about the relation between the influence function and qualitative robustness, citing some examples from the literature and providing a new counter-example. At the end it presents a useful finite-sample and a simulated version of a qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we compare fifteen estimators of the correlation coefficient using simulated as well as real data sets.
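    In the spirit of the estimator comparison mentioned above (though it is not the paper's qualitative robustness index), the following simulation shows how a classical and a rank-based correlation estimator react when a fraction of the sample is replaced by gross outliers.

```python
# Small contamination experiment: Pearson vs. Spearman correlation when a
# fraction eps of the sample is replaced by an adversarial outlier cloud.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, rho = 200, 0.8
cov = [[1.0, rho], [rho, 1.0]]

def contaminated_sample(eps):
    xy = rng.multivariate_normal([0, 0], cov, size=n)
    n_bad = int(eps * n)
    if n_bad:
        xy[:n_bad] = rng.normal(0, 10, size=(n_bad, 2)) * [1, -1]  # outliers
    return xy[:, 0], xy[:, 1]

for eps in (0.0, 0.05, 0.10):
    x, y = contaminated_sample(eps)
    print(f"eps = {eps:.2f}: Pearson = {stats.pearsonr(x, y)[0]:+.2f}, "
          f"Spearman = {stats.spearmanr(x, y)[0]:+.2f}")
```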

  13. A radial sampling strategy for uniform k-space coverage with retrospective respiratory gating in 3D ultrashort-echo-time lung imaging.

    Science.gov (United States)

    Park, Jinil; Shin, Taehoon; Yoon, Soon Ho; Goo, Jin Mo; Park, Jang-Yeon

    2016-05-01

    The purpose of this work was to develop a 3D radial-sampling strategy which maintains uniform k-space sample density after retrospective respiratory gating, and demonstrate its feasibility in free-breathing ultrashort-echo-time lung MRI. A multi-shot, interleaved 3D radial sampling function was designed by segmenting a single-shot trajectory of projection views such that each interleaf samples k-space in an incoherent fashion. An optimal segmentation factor for the interleaved acquisition was derived based on an approximate model of respiratory patterns such that radial interleaves are evenly accepted during the retrospective gating. The optimality of the proposed sampling scheme was tested by numerical simulations and phantom experiments using human respiratory waveforms. Retrospectively, respiratory-gated, free-breathing lung MRI with the proposed sampling strategy was performed in healthy subjects. The simulation yielded the most uniform k-space sample density with the optimal segmentation factor, as evidenced by the smallest standard deviation of the number of neighboring samples as well as minimal side-lobe energy in the point spread function. The optimality of the proposed scheme was also confirmed by minimal image artifacts in phantom images. Human lung images showed that the proposed sampling scheme significantly reduced streak and ring artifacts compared with the conventional retrospective respiratory gating while suppressing motion-related blurring compared with full sampling without respiratory gating. In conclusion, the proposed 3D radial-sampling scheme can effectively suppress the image artifacts due to non-uniform k-space sample density in retrospectively respiratory-gated lung MRI by uniformly distributing gated radial views across the k-space. Copyright © 2016 John Wiley & Sons, Ltd.
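    The interaction between the segmentation factor and the breathing period can be checked with a toy simulation (numbers are illustrative and unrelated to the paper's optimum): views are assigned to interleaves by index, a sinusoidal respiratory waveform gates the acquisition retrospectively, and the spread of accepted views across interleaves measures how uniform the resulting k-space density is.

```python
# Toy check: how evenly does retrospective respiratory gating thin out each
# radial interleaf for different segmentation factors?
import numpy as np

n_views, tr = 40000, 0.004                # number of views, repetition time [s]
t = np.arange(n_views) * tr
resp = np.sin(2 * np.pi * t / 4.0)        # 4-s respiratory period (synthetic)
accepted = resp < -0.2                    # accept roughly end-expiration views

def interleaf_spread(seg_factor):
    interleaf = np.arange(n_views) % seg_factor        # view i -> interleaf i mod S
    counts = np.bincount(interleaf[accepted], minlength=seg_factor)
    return counts.std() / counts.mean()                # relative spread of accepted views

for s in (8, 16, 250, 1000):
    print(f"segmentation factor {s:5d}: relative spread = {interleaf_spread(s):.3f}")
```

A segmentation factor that is commensurate with the breathing period (here 1000 TRs per breath) locks each interleaf to a fixed respiratory phase, so some interleaves lose almost all of their views after gating, which is the non-uniform density the record's optimal design avoids.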

  14. Standard operating procedures for collection of soil and sediment samples for the Sediment-bound Contaminant Resiliency and Response (SCoRR) strategy pilot study

    Science.gov (United States)

    Fisher, Shawn C.; Reilly, Timothy J.; Jones, Daniel K.; Benzel, William M.; Griffin, Dale W.; Loftin, Keith A.; Iwanowicz, Luke R.; Cohl, Jonathan A.

    2015-12-17

    An understanding of the effects on human and ecological health brought by major coastal storms or flooding events is typically limited because of a lack of regionally consistent baseline and trends data in locations proximal to potential contaminant sources and mitigation activities, sensitive ecosystems, and recreational facilities where exposures are probable. In an attempt to close this gap, the U.S. Geological Survey (USGS) has implemented the Sediment-bound Contaminant Resiliency and Response (SCoRR) strategy pilot study to collect regional sediment-quality data prior to and in response to future coastal storms. The standard operating procedure (SOP) detailed in this document serves as the sample-collection protocol for the SCoRR strategy by providing step-by-step instructions for site preparation, sample collection and processing, and shipping of soil and surficial sediment (for example, bed sediment, marsh sediment, or beach material). The objectives of the SCoRR strategy pilot study are (1) to create a baseline of soil-, sand-, marsh sediment-, and bed-sediment-quality data from sites located in the coastal counties from Maine to Virginia based on their potential risk of being contaminated in the event of a major coastal storm or flooding (defined as Resiliency mode); and (2) respond to major coastal storms and flooding by reoccupying select baseline sites and sampling within days of the event (defined as Response mode). For both modes, samples are collected in a consistent manner to minimize bias and maximize quality control by ensuring that all sampling personnel across the region collect, document, and process soil and sediment samples following the procedures outlined in this SOP. Samples are analyzed using four USGS-developed screening methods—inorganic geochemistry, organic geochemistry, pathogens, and biological assays—which are also outlined in this SOP. Because the SCoRR strategy employs a multi-metric approach for sample analyses, this

  15. The Italian national survey on radon indoors run by several different regional laboratories: Sampling strategy, realization and follow-up

    International Nuclear Information System (INIS)

    Bochicchio, F.; Risica, S.; Piermattei, S.

    1993-01-01

    The paper outlines the criteria and organization adopted by the Italian National Institutions in carrying out a representative national survey to evaluate the distribution of radon concentration and the exposure of the Italian population to natural radiation indoors. The main items of the survey - i.e. sampling design, choice of the sample size (5000 dwellings), organization, analysis of the actual sample structure, questionnaire to collect data about families and their dwellings, experimental set up and communication with the public - are discussed. Some results, concerning a first fraction of the total sample, are also presented. (author). 13 refs, 2 figs, 2 tabs

  16. Monitoring and Sampling Strategy for (Manufactured) Nano Objects Agglomerates and Aggregates (NOAA); Potential Added Value of the NANODEVICE Project

    NARCIS (Netherlands)

    Brouwer, D.H.; Lidén, G.; Asbach, C.; Berges, M.; Tongeren, M. van

    2014-01-01

    The production of nanomaterials and nano-enabled products is associated with the potential for workers' exposure to (manufactured) nano-objects' agglomerates and aggregates (NOAA). Workplace air monitoring studies have been conducted to assess the actual exposure; however, the methods and strategies

  17. Relationships between coping strategies, individual characteristics and job satisfaction in a sample of hospital nurses: cross-sectional questionnaire survey.

    Science.gov (United States)

    Golbasi, Zehra; Kelleci, Meral; Dogan, Selma

    2008-12-01

    This study aims to describe and compare the job satisfaction, coping strategies, and personal and organizational characteristics of nurses working in a hospital in Turkey. In this cross-sectional survey study, 186 nurses from Cumhuriyet University Hospital completed a Personal Data Form, the Minnesota Satisfaction Questionnaire and the Ways of Coping Inventory. The response rate was 74.4%. The job satisfaction score of the nurses was moderate (mean: 3.46 ± 0.56). The nurses mostly employed self-confident and optimistic approaches, which are considered positive coping strategies for dealing with stress, whereas yielding and helpless approaches were employed less often. A statistically significant positive relation was found between job satisfaction and the "self-confident approach" and "optimistic approach" dimensions of the Ways of Coping Inventory, and a significant negative relation between job satisfaction and the "helpless approach" dimension. Organizational and individual nurse characteristics were not found to be associated with job satisfaction, but the job satisfaction of nurses employed on a contract basis was significantly higher than that of permanent staff nurses. It was concluded that the job satisfaction of Turkish hospital nurses was moderate and that satisfaction was higher among nurses who coped successfully with stress. Higher levels of job satisfaction were associated with positive coping strategies. This study contributes to a growing body of evidence demonstrating the importance of coping strategies to nurses' job satisfaction.

  18. Robust motion estimation using connected operators

    OpenAIRE

    Salembier Clairon, Philippe Jean; Sanson, H

    1997-01-01

    This paper discusses the use of connected operators for robust motion estimation. The proposed strategy involves a motion estimation step extracting the dominant motion and a filtering step relying on connected operators that remove objects that do not follow the dominant motion. These two steps are iterated in order to obtain an accurate motion estimation and a precise definition of the objects following this motion. This strategy can be applied on the entire frame or on individual connected c...

  19. Stability Constraints for Robust Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Amanda G. S. Ottoni

    2015-01-01

    Full Text Available This paper proposes an approach for the robust stabilization of systems controlled by MPC strategies. Uncertain SISO linear systems with box-bounded parametric uncertainties are considered. The proposed approach delivers some constraints on the control inputs which impose sufficient conditions for the convergence of the system output. These stability constraints can be included in the set of constraints dealt with by existing MPC design strategies, in this way leading to the “robustification” of the MPC.

  20. The role of coping strategies and self-efficacy as predictors of life satisfaction in a sample of parents of children with autism spectrum disorder.

    Science.gov (United States)

    Luque Salas, Bárbara; Yáñez Rodríguez, Virginia; Tabernero Urbieta, Carmen; Cuadrado, Esther

    2017-02-01

    This research aims to understand the role of coping strategies and self-efficacy expectations as predictors of life satisfaction in a sample of parents of boys and girls diagnosed with autism spectrum disorder. A total of 129 parents (64 men and 65 women) completed a questionnaire comprising life satisfaction, coping strategies and self-efficacy scales. Using a regression model, results show that the age of the child is associated with a lower level of satisfaction in parents. The results show that self-efficacy is the variable that best explains the level of satisfaction in mothers, while the use of problem solving explains a higher level of satisfaction in fathers. Men and women show similar levels of life satisfaction; however, significant differences were found in coping strategies, with women reporting greater use of emotional expression and social support strategies than men. The development of functional coping strategies and of a high level of self-efficacy represents a key tool for adapting to caring for children with autism. Our results indicate the necessity of early intervention with parents to promote coping strategies, self-efficacy and a high level of life satisfaction.

  1. Robust Manufacturing Control

    CERN Document Server

    2013-01-01

    This contributed volume collects research papers, presented at the CIRP Sponsored Conference Robust Manufacturing Control: Innovative and Interdisciplinary Approaches for Global Networks (RoMaC 2012, Jacobs University, Bremen, Germany, June 18th-20th 2012). These research papers present the latest developments and new ideas focusing on robust manufacturing control for global networks. Today, Global Production Networks (i.e. the nexus of interconnected material and information flows through which products and services are manufactured, assembled and distributed) are confronted with and expected to adapt to: sudden and unpredictable large-scale changes of important parameters which are occurring more and more frequently, event propagation in networks with high degree of interconnectivity which leads to unforeseen fluctuations, and non-equilibrium states which increasingly characterize daily business. These multi-scale changes deeply influence logistic target achievement and call for robust planning and control ...

  2. New solutions for NPP robustness improvement

    International Nuclear Information System (INIS)

    Wolski, Alexander

    2013-01-01

    The Fukushima accident has triggered a major re-assessment of the robustness of nuclear stations. The first round of evaluations has been finished. Improvement areas and strategies have been identified. Implementation of upgrades has started world-wide. New solutions can provide substantial benefits

  3. Robust Utility Maximization Under Convex Portfolio Constraints

    International Nuclear Information System (INIS)

    Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed

    2015-01-01

    We study a robust utility maximization problem from terminal wealth and consumption under convex constraints on the portfolio. We state the existence and uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle

  4. Robust network design for multispecies conservation

    Science.gov (United States)

    Ronan Le Bras; Bistra Dilkina; Yexiang Xue; Carla P. Gomes; Kevin S. McKelvey; Michael K. Schwartz; Claire A. Montgomery

    2013-01-01

    Our work is motivated by an important network design application in computational sustainability concerning wildlife conservation. In the face of human development and climate change, it is important that conservation plans for protecting landscape connectivity exhibit a certain level of robustness. While previous work has focused on conservation strategies that result...

  5. Robust plasmonic substrates

    DEFF Research Database (Denmark)

    Kostiučenko, Oksana; Fiutowski, Jacek; Tamulevicius, Tomas

    2014-01-01

    Robustness is a key issue for the applications of plasmonic substrates such as tip-enhanced Raman spectroscopy, surface-enhanced spectroscopies, enhanced optical biosensing, optical and optoelectronic plasmonic nanosensors and others. A novel approach for the fabrication of robust plasmonic substrates is presented, which relies on the coverage of gold nanostructures with diamond-like carbon (DLC) thin films of thicknesses 25, 55 and 105 nm. DLC thin films were grown by direct hydrocarbon ion beam deposition. In order to find the optimum balance between optical and mechanical properties ...

  6. Robust Self Tuning Controllers

    DEFF Research Database (Denmark)

    Poulsen, Niels Kjølstad

    1985-01-01

    The present thesis concerns robustness properties of adaptive controllers. It is addressed to methods for robustifying self tuning controllers with respect to abrupt changes in the plant parameters. In the thesis an algorithm for estimating abruptly changing parameters is presented. The estimator has several operation modes and a detector for controlling the mode. A special self tuning controller has been developed to regulate plants with changing time delay.

  7. Restricted by Whom? A Historical Review of Strategies and Organization for Restricted Earth Return of Samples from NASA Planetary Missions

    Science.gov (United States)

    Pugel, Betsy

    2017-01-01

    This presentation is a review of the timeline for Apollo's approach to Planetary Protection, then known as Planetary Quarantine. Return of samples from Apollo 11, 12 and 14 represented NASA's first attempts into conducting what is now known as Restricted Earth Return, where return of samples is undertaken by the Agency with the utmost care for the impact that the samples may have on Earth's environment due to the potential presence of microbial or other life forms that originate from the parent body (in this case, Earth's Moon).

  8. Seeking Signs of Life on Mars: A Strategy for Selecting and Analyzing Returned Samples from Hydrothermal Deposits

    Science.gov (United States)

    iMOST Team; Campbell, K. A.; Farmer, J. D.; Van Kranendonk, M. J.; Fernandez-Remolar, D. C.; Czaja, A. D.; Altieri, F.; Amelin, Y.; Ammannito, E.; Anand, M.; Beaty, D. W.; Benning, L. G.; Bishop, J. L.; Borg, L. E.; Boucher, D.; Brucato, J. R.; Busemann, H.; Carrier, B. L.; Debaille, V.; Des Marais, D. J.; Dixon, M.; Ehlmann, B. L.; Fogarty, J.; Glavin, D. P.; Goreva, Y. S.; Grady, M. M.; Hallis, L. J.; Harrington, A. D.; Hausrath, E. M.; Herd, C. D. K.; Horgan, B.; Humayun, M.; Kleine, T.; Kleinhenz, J.; Mangold, N.; Mackelprang, R.; Mayhew, L. E.; McCubbin, F. M.; McCoy, J. T.; McLennan, S. M.; McSween, H. Y.; Moser, D. E.; Moynier, F.; Mustard, J. F.; Niles, P. B.; Ori, G. G.; Raulin, F.; Rettberg, P.; Rucker, M. A.; Schmitz, N.; Sefton-Nash, E.; Sephton, M. A.; Shaheen, R.; Shuster, D. L.; Siljestrom, S.; Smith, C. L.; Spry, J. A.; Steele, A.; Swindle, T. D.; ten Kate, I. L.; Tosca, N. J.; Usui, T.; Wadhwa, M.; Weiss, B. P.; Werner, S. C.; Westall, F.; Wheeler, R. M.; Zipfel, J.; Zorzano, M. P.

    2018-04-01

    The iMOST hydrothermal deposits sub-team has identified key samples and investigations required to delineate the character and preservational state of potential biosignatures in ancient hydrothermal deposits.

  9. Coping strategies and behavioural changes following a genital herpes diagnosis among an urban sample of underserved Midwestern women.

    Science.gov (United States)

    Davis, Alissa; Roth, Alexis; Brand, Juanita Ebert; Zimet, Gregory D; Van Der Pol, Barbara

    2016-03-01

    This study focused on understanding the coping strategies and related behavioural changes of women who were recently diagnosed with herpes simplex virus type 2. In particular, we were interested in how coping strategies, condom use, and acyclovir uptake evolve over time. Twenty-eight women screening positive for herpes simplex virus type 2 were recruited through a public health STD clinic and the Indianapolis Community Court. Participants completed three semi-structured interviews with a woman researcher over a six-month period. The interviews focused on coping strategies for dealing with a diagnosis, frequency of condom use, suppressive and episodic acyclovir use, and the utilisation of herpes simplex virus type 2 support groups. Interview data were analysed using content analysis to identify and interpret concepts and themes that emerged from the interviews. Women employed a variety of coping strategies following an herpes simplex virus type 2 diagnosis. Of the women, 32% reported an increase in religious activities, 20% of women reported an increase in substance use, and 56% of women reported engaging in other coping activities. A total of 80% of women reported abstaining from sex immediately following the diagnosis, but 76% of women reported engaging in sex again by the six-month interview. Condom and medication use did not increase and herpes simplex virus type 2 support groups were not utilised by participants. All participants reported engaging in at least one coping mechanism after receiving their diagnosis. A positive diagnosis did not seem to result in increased use of condoms for the majority of participants and the use of acyclovir was low overall. © The Author(s) 2015.

  10. Robust surgery loading

    NARCIS (Netherlands)

    Hans, Elias W.; Wullink, Gerhard; van Houdenhoven, Mark; Kazemier, Geert

    2008-01-01

    We consider the robust surgery loading problem for a hospital’s operating theatre department, which concerns assigning surgeries and sufficient planned slack to operating room days. The objective is to maximize capacity utilization and minimize the risk of overtime, and thus cancelled patients. This

  11. Robustness Envelopes of Networks

    NARCIS (Netherlands)

    Trajanovski, S.; Martín-Hernández, J.; Winterbach, W.; Van Mieghem, P.

    2013-01-01

    We study the robustness of networks under node removal, considering random node failure, as well as targeted node attacks based on network centrality measures. Whilst both of these have been studied in the literature, existing approaches tend to study random failure in terms of average-case

  12. A Sample-Based Forest Monitoring Strategy Using Landsat, AVHRR and MODIS Data to Estimate Gross Forest Cover Loss in Malaysia between 1990 and 2005

    Directory of Open Access Journals (Sweden)

    Peter Potapov

    2013-04-01

    Full Text Available Insular Southeast Asia is a hotspot of humid tropical forest cover loss. A sample-based monitoring approach quantifying forest cover loss from Landsat imagery was implemented to estimate gross forest cover loss for two eras, 1990–2000 and 2000–2005. For each time interval, a probability sample of 18.5 km × 18.5 km blocks was selected, and pairs of Landsat images acquired per sample block were interpreted to quantify forest cover area and gross forest cover loss. Stratified random sampling was implemented for 2000–2005, with MODIS-derived forest cover loss used to define the strata. A probability proportional to x (πpx) design was implemented for 1990–2000, with AVHRR-derived forest cover loss used as the x variable to increase the likelihood of including forest loss area in the sample. The estimated annual gross forest cover loss for Malaysia was 0.43 Mha/yr (SE = 0.04) during 1990–2000 and 0.64 Mha/yr (SE = 0.055) during 2000–2005. Our use of the πpx sampling design represents a first practical trial of this design for sampling satellite imagery. Although the design performed adequately in this study, a thorough comparative investigation of the πpx design relative to other sampling strategies is needed before general design recommendations can be put forth.
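    The stratified estimate and its standard error follow the textbook formulas; the sketch below applies them to invented block counts and per-block loss values, purely to make the 2000–2005 design concrete.

```python
# Generic stratified-sampling estimator of a population total with its
# standard error. Block counts and per-block loss values are invented.
import numpy as np

# stratum -> (number of blocks in population N_h, sampled per-block loss in kha)
strata = {
    "low MODIS loss":  (4000, np.array([0.0, 0.1, 0.0, 0.3, 0.2])),
    "high MODIS loss": ( 600, np.array([1.2, 2.5, 0.8, 3.1, 1.9, 2.2])),
}

total, var = 0.0, 0.0
for N_h, y_h in strata.values():
    n_h = len(y_h)
    total += N_h * y_h.mean()                                   # expanded stratum total
    var += N_h ** 2 * (1 - n_h / N_h) * y_h.var(ddof=1) / n_h   # with finite-pop. correction

print(f"estimated gross loss = {total:.0f} kha, SE = {np.sqrt(var):.0f} kha")
```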

  13. Use of X-ray diffraction technique and chemometrics to aid soil sampling strategies in traceability studies.

    Science.gov (United States)

    Bertacchini, Lucia; Durante, Caterina; Marchetti, Andrea; Sighinolfi, Simona; Silvestri, Michele; Cocchi, Marina

    2012-08-30

    The aim of this work is to assess the potential of the X-ray powder diffraction technique as a fingerprinting technique, i.e. as a preliminary tool to assess soil sample variability in terms of geochemical features, in the context of food geographical traceability. A correct approach to the sampling procedure is always a critical issue in scientific investigation. In particular, in food geographical traceability studies, where cause-effect relations between the soil of origin and the final foodstuff are sought, a representative sampling of the territory under investigation is certainly an imperative. This research concerns a pilot study to investigate field homogeneity with respect to both field extension and sampling depth, taking also into account seasonal variability. Four Lambrusco production sites of the Modena district were considered. The X-ray diffraction spectra, collected on the powder of each soil sample, were treated as fingerprint profiles to be deciphered by multivariate and multi-way data analysis, namely PCA and PARAFAC. The differentiation pattern observed in soil samples, as obtained by this fast and non-destructive analytical approach, matches well with the results obtained by characterization with other, costlier analytical techniques, such as ICP/MS, GFAAS, FAAS, etc. Thus, the proposed approach furnishes a rational basis to reduce the number of soil samples to be collected for further analytical characterization, i.e. metal content, isotopic ratios of radiogenic elements, etc., while maintaining an exhaustive description of the investigated production areas. Copyright © 2012 Elsevier B.V. All rights reserved.
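    A minimal sketch of the fingerprinting step: stacking diffraction profiles into a matrix and inspecting PCA scores per field. The file layout and identifiers are hypothetical; PARAFAC (the multi-way part) is not shown.

```python
# Treat each XRD powder pattern as a fingerprint vector and look at PCA scores.
# Assumed (hypothetical) layout: xrd/<field>_<depth>.csv, one intensity column
# per file, all acquired over the same 2-theta grid.
import glob
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

files = sorted(glob.glob("xrd/*.csv"))
X = np.vstack([np.loadtxt(f, delimiter=",") for f in files])
labels = [f.split("/")[-1].split("_")[0] for f in files]   # field identifier

scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
for lab, pc in zip(labels, scores):
    print(f"{lab:>10s}  PC1={pc[0]:+7.2f}  PC2={pc[1]:+7.2f}")
```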

  14. Sampling strategies and materials for investigating large reactive particle complaints from Valley Village homeowners near a coal-fired power plant

    International Nuclear Information System (INIS)

    Chang, A.; Davis, H.; Frazar, B.; Haines, B.

    1997-01-01

    This paper will present Phase 3's sampling strategies, techniques, methods and substrates for assisting the District to resolve the complaints involving yellowish-brown staining and spotting of homes, cars, etc. These spots could not be easily washed off and some were permanent. The sampling strategies for the three phases were based on Phase 1 -- the identification of the reactive particles conducted in October, 1989 by APCD and IITRI, Phase 2 -- a study of the size distribution and concentration as a function of distance and direction of reactive particle deposition conducted by Radian and LG and E, and Phase 3 -- the determination of the frequency of soiling events over a full year's duration conducted in 1995 by APCD and IITRI. The sampling methods included two primary substrates -- ACE sheets and painted steel, and four secondary substrates -- mailbox, aluminum siding, painted wood panels and roof tiles. The secondary substrates were the main objects from the Valley Village complaints. The sampling technique included five Valley Village (VV) soiling/staining assessment sites and one southwest of the power plant as background/upwind site. The five VV sites northeast of the power plant covered 50 degrees span sector and 3/4 miles distance from the stacks. Hourly meteorological data for wind speeds and wind directions were collected. Based on this sampling technique, there were fifteen staining episodes detected. Nine of them were in summer, 1995

  15. Subnanogram proteomics: Impact of LC column selection, MS instrumentation and data analysis strategy on proteome coverage for trace samples

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Ying; Zhao, Rui; Piehowski, Paul D.; Moore, Ronald J.; Lim, Sujung; Orphan, Victoria J.; Paša-Tolić, Ljiljana; Qian, Wei-Jun; Smith, Richard D.; Kelly, Ryan T.

    2018-04-01

    One of the greatest challenges for mass spectrometry (MS)-based proteomics is the limited ability to analyze small samples. Here we investigate the relative contributions of liquid chromatography (LC), MS instrumentation and data analysis methods with the aim of improving proteome coverage for sample sizes ranging from 0.5 ng to 50 ng. We show that the LC separations utilizing 30-µm-i.d. columns increase signal intensity by >3-fold relative to those using 75-µm-i.d. columns, leading to 32% increase in peptide identifications. The Orbitrap Fusion Lumos mass spectrometer significantly boosted both sensitivity and sequencing speed relative to earlier generation Orbitraps (e.g., LTQ-Orbitrap), leading to a ~3× increase in peptide identifications and 1.7× increase in identified protein groups for 2 ng tryptic digests of bacterial lysate. The Match Between Runs algorithm of open-source MaxQuant software further increased proteome coverage by ~ 95% for 0.5 ng samples and by ~42% for 2 ng samples. The present platform is capable of identifying >3000 protein groups from tryptic digestion of cell lysates equivalent to 50 HeLa cells and 100 THP-1 cells (~10 ng total proteins), respectively, and >950 proteins from subnanogram bacterial and archaeal cell lysates. The present ultrasensitive LC-MS platform is expected to enable deep proteome coverage for subnanogram samples, including single mammalian cells.

  16. A robust sound perception model suitable for neuromorphic implementation.

    Science.gov (United States)

    Coath, Martin; Sheik, Sadique; Chicca, Elisabetta; Indiveri, Giacomo; Denham, Susan L; Wennekers, Thomas

    2013-01-01

    We have recently demonstrated the emergence of dynamic feature sensitivity through exposure to formative stimuli in a real-time neuromorphic system implementing a hybrid analog/digital network of spiking neurons. This network, inspired by models of auditory processing in mammals, includes several mutually connected layers with distance-dependent transmission delays and learning in the form of spike timing dependent plasticity, which effects stimulus-driven changes in the network connectivity. Here we present results that demonstrate that the network is robust to a range of variations in the stimulus pattern, such as are found in naturalistic stimuli and neural responses. This robustness is a property critical to the development of realistic, electronic neuromorphic systems. We analyze the variability of the response of the network to "noisy" stimuli which allows us to characterize the acuity in information-theoretic terms. This provides an objective basis for the quantitative comparison of networks, their connectivity patterns, and learning strategies, which can inform future design decisions. We also show, using stimuli derived from speech samples, that the principles are robust to other challenges, such as variable presentation rate, that would have to be met by systems deployed in the real world. Finally we demonstrate the potential applicability of the approach to real sounds.

  17. A Robust Sound Perception Model Suitable for Neuromorphic Implementation

    Directory of Open Access Journals (Sweden)

    Martin eCoath

    2014-01-01

    Full Text Available We have recently demonstrated the emergence of dynamic feature sensitivity through exposure to formative stimuli in a real-time neuromorphic system implementing a hybrid analogue/digital network of spiking neurons. This network, inspired by models of auditory processing in mammals, includes several mutually connected layers with distance-dependent transmission delays and learning in the form of spike timing dependent plasticity, which effects stimulus-driven changes in the network connectivity. Here we present results that demonstrate that the network is robust to a range of variations in the stimulus pattern, such as are found in naturalistic stimuli and neural responses. This robustness is a property critical to the development of realistic, electronic neuromorphic systems. We analyse the variability of the response of the network to 'noisy' stimuli which allows us to characterize the acuity in information-theoretic terms. This provides an objective basis for the quantitative comparison of networks, their connectivity patterns, and learning strategies, which can inform future design decisions. We also show, using stimuli derived from speech samples, that the principles are robust to other challenges, such as variable presentation rate, that would have to be met by systems deployed in the real world. Finally we demonstrate the potential applicability of the approach to real sounds.

  18. Robustness Analysis of Typologies of Reciprocal Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Parigi, Dario

    2013-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures many modern building codes consider the need for robustness in structures and provide strategies and methods to obtain robustness. Therefore a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper outlines robustness issues related to the future development of typologies of reciprocal timber structures. The paper concludes that these kinds of structures can have a potential as long span timber structures in real projects if they are carefully designed with respect to the overall robustness strategies.

  19. Robustness analysis method for orbit control

    Science.gov (United States)

    Zhang, Jingrui; Yang, Keying; Qi, Rui; Zhao, Shuge; Li, Yanyan

    2017-08-01

    Satellite orbits require periodic maintenance due to the presence of perturbations. However, random errors caused by inaccurate orbit determination and thrust implementation may lead to failure of the orbit control strategy. Therefore, it is necessary to analyze the robustness of the orbit control methods. Feasible strategies which are tolerant to errors of a certain magnitude can be developed to perform reliable orbit control for the satellite. In this paper, first, the orbital dynamics model is formulated by Gauss' form of the planetary equations using the mean orbit elements; atmospheric drag and the Earth's non-spherical perturbations are taken into consideration in this model. Second, an impulsive control strategy employing the differential correction algorithm is developed to maintain the satellite trajectory parameters in given ranges. Finally, the robustness of the impulsive control method is analyzed through Monte Carlo simulations while taking orbit determination error and thrust error into account.
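    The kind of Monte Carlo robustness check described above can be caricatured with a one-dimensional drift-and-correct model (a toy stand-in, not the Gauss planetary equations used in the paper): a parameter drifts, is corrected impulsively based on a noisy measurement and a noisy burn, and the success rate over many runs is recorded.

```python
# Toy Monte Carlo robustness check of an impulsive dead-band control strategy
# under orbit-determination (OD) and thrust errors. Purely illustrative.
import numpy as np

rng = np.random.default_rng(7)
BAND, DRIFT, DAYS = 1.0, 0.02, 365

def run(sigma_od, sigma_thrust):
    x = 0.0
    for _ in range(DAYS):
        x += DRIFT
        measured = x + rng.normal(0, sigma_od)          # noisy orbit determination
        if measured > BAND:                             # fire a correction burn
            x -= measured * (1 + rng.normal(0, sigma_thrust))
        if abs(x) > 1.5 * BAND:                         # strategy considered failed
            return False
    return True

for sod, sth in [(0.01, 0.02), (0.1, 0.05), (0.3, 0.2)]:
    ok = np.mean([run(sod, sth) for _ in range(500)])
    print(f"OD error {sod:.2f}, thrust error {sth:.2f} -> success rate {ok:.2%}")
```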

  20. Robust matching for voice recognition

    Science.gov (United States)

    Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.

    1994-10-01

    This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.

  1. Adaptive Critic Nonlinear Robust Control: A Survey.

    Science.gov (United States)

    Wang, Ding; He, Haibo; Liu, Derong

    2017-10-01

    Adaptive dynamic programming (ADP) and reinforcement learning are quite relevant to each other when performing intelligent optimization. They are both regarded as promising methods involving important components of evaluation and improvement, against the background of information technology, such as artificial intelligence, big data, and deep learning. Although great progress has been achieved and surveyed in addressing nonlinear optimal control problems, the research on the robustness of ADP-based control strategies in uncertain environments has not been fully summarized. Hence, this survey reviews the recent main results of adaptive-critic-based robust control design of continuous-time nonlinear systems. The ADP-based nonlinear optimal regulation is reviewed, followed by robust stabilization of nonlinear systems with matched uncertainties, guaranteed cost control design of unmatched plants, and decentralized stabilization of interconnected systems. Additionally, further comprehensive discussions are presented, including event-based robust control design, improvement of the critic learning rule, nonlinear H∞ control design, and several notes on future perspectives. By applying the ADP-based optimal and robust control methods to a practical power system and an overhead crane plant, two typical examples are provided to verify the effectiveness of the theoretical results. Overall, this survey is beneficial for promoting the development of adaptive critic control methods with robustness guarantees and the construction of higher-level intelligent systems.

  2. Sample substitution can be an acceptable data-collection strategy: the case of the Belgian Health Interview Survey.

    Science.gov (United States)

    Demarest, Stefaan; Molenberghs, Geert; Van der Heyden, Johan; Gisle, Lydia; Van Oyen, Herman; de Waleffe, Sandrine; Van Hal, Guido

    2017-11-01

    Substitution of non-participating households is used in the Belgian Health Interview Survey (BHIS) as a method to obtain the predefined net sample size. Yet, possible effects of applying substitution on response rates and health estimates remain uncertain. In this article, the process of substitution and its impact on response rates and health estimates is assessed. The response rates (RR), both at household and individual level, according to the sampling criteria were calculated for each stage of the substitution process, together with the individual accrual rate (AR). Unweighted and weighted health estimates were calculated before and after applying substitution. Of the 10,468 members of 4878 initial households, 5904 members (RRind: 56.4%) of 2707 households (RRhh: 55.5%) participated. For the three successive (matched) substitutes, the RR dropped to 45%. The composition of the net sample resembles that of the initial sample. Applying substitution did not produce any important distorting effects on the estimates. Applying substitution leads to an increase in non-participation, but does not impact the estimates.
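
    The household- and individual-level response rates discussed above are simple ratios that can be tracked at each substitution stage; the short sketch below (Python) shows the bookkeeping using the initial-sample counts quoted in the abstract, purely to illustrate the calculation.

```python
# Response-rate bookkeeping for a substitution-based survey design.
# Counts are the initial-sample figures quoted in the abstract; the same
# ratios would be recomputed after each wave of matched substitutes.
households_selected, households_participating = 4878, 2707
members_selected, members_participating = 10468, 5904

rr_household = households_participating / households_selected    # ~55.5%
rr_individual = members_participating / members_selected         # ~56.4%
print(f"RR (household): {rr_household:.1%}, RR (individual): {rr_individual:.1%}")
```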

  3. Actual distribution of Cronobacter spp. in industrial batches of powdered infant formula and consequences for performance of sampling strategies

    NARCIS (Netherlands)

    Jongenburger, I.; Reij, M.W.; Boer, E.P.J.; Gorris, L.G.M.; Zwietering, M.H.

    2011-01-01

    The actual spatial distribution of microorganisms within a batch of food influences the results of sampling for microbiological testing when this distribution is non-homogeneous. In the case of pathogens being non-homogeneously distributed, it markedly influences public health risk. This study

  4. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor allocates a much lower share of wealth to stocks compared to a standard investor.
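
    A maximin rule of the kind described, choosing the stock allocation that maximizes the worst expected utility over a confidence set of prediction models, can be sketched in a few lines; the utility values below are made-up placeholders standing in for model-implied expected utilities, not estimates from the paper.

```python
import numpy as np

# Rows: candidate stock allocations (0%, 25%, ..., 100% of wealth).
# Columns: expected utility of each allocation under each model in the confidence set.
# All numbers are illustrative placeholders.
allocations = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
expected_utility = np.array([
    [0.00, 0.00, 0.00],
    [0.02, 0.01, -0.01],
    [0.03, 0.01, -0.03],
    [0.04, 0.00, -0.06],
    [0.04, -0.02, -0.10],
])

worst_case = expected_utility.min(axis=1)         # minimal element of the confidence set
robust_choice = allocations[worst_case.argmax()]  # maximin allocation
print(f"robust (maximin) stock share: {robust_choice:.0%}")
```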

  5. Introduction to Robust Estimation and Hypothesis Testing

    CERN Document Server

    Wilcox, Rand R

    2012-01-01

    This revised book provides a thorough explanation of the foundation of robust methods, incorporating the latest updates on R and S-Plus, robust ANOVA (Analysis of Variance) and regression. It guides advanced students and other professionals through the basic strategies used for developing practical solutions to problems, and provides a brief background on the foundations of modern methods, placing the new methods in historical context. Author Rand Wilcox includes chapter exercises and many real-world examples that illustrate how various methods perform in different situations. Introduction to R

  6. A robust classic.

    Science.gov (United States)

    Kutzner, Florian; Vogel, Tobias; Freytag, Peter; Fiedler, Klaus

    2011-01-01

    In the present research, we argue for the robustness of illusory correlations (ICs, Hamilton & Gifford, 1976) regarding two boundary conditions suggested in previous research. First, we argue that ICs are maintained under extended experience. Using simulations, we derive conflicting predictions. Whereas noise-based accounts predict ICs to be maintained (Fiedler, 2000; Smith, 1991), a prominent account based on discrepancy-reducing feedback learning predicts ICs to disappear (Van Rooy et al., 2003). An experiment involving 320 observations with majority and minority members supports the claim that ICs are maintained. Second, we show that actively using the stereotype to make predictions that are met with reward and punishment does not eliminate the bias. In addition, participants' operant reactions afford a novel online measure of ICs. In sum, our findings highlight the robustness of ICs that can be explained as a result of unbiased but noisy learning.

  7. Robust Airline Schedules

    OpenAIRE

    Eggenberg, Niklaus; Salani, Matteo; Bierlaire, Michel

    2010-01-01

    Due to economic pressure, industries, when planning, tend to focus on optimizing the expected profit or the yield. The consequence of highly optimized solutions is an increased sensitivity to uncertainty. This generates additional "operational" costs, incurred by possible modifications of the original plan to be performed when reality does not reflect what was expected in the planning phase. The modern research trend focuses on "robustness" of solutions instead of yield or profit. Although ro...

  8. Randomization of grab-sampling strategies for estimating the annual exposure of U miners to Rn daughters.

    Science.gov (United States)

    Borak, T B

    1986-04-01

    Periodic grab sampling in combination with time-of-occupancy surveys has been the accepted procedure for estimating the annual exposure of underground U miners to Rn daughters. Temporal variations in the concentration of potential alpha energy in the mine generate uncertainties in this process. A system to randomize the selection of locations for measurement is described which can reduce uncertainties and eliminate systematic biases in the data. In general, a sample frequency of 50 measurements per year is sufficient to satisfy the criterion that the annual exposure be determined in working level months to within +/- 50% of the true value with a 95% level of confidence. Suggestions for implementing this randomization scheme are presented.
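
    The "+/- 50% at 95% confidence" criterion for annual exposure can be checked directly by simulation: the sketch below draws random grab samples from a lognormally varying concentration series and asks how often the estimated annual mean falls within half of the true value. All distributional parameters are illustrative assumptions, not the mine data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 'true' working-level concentrations for one year (illustrative lognormal variation).
n_shifts = 250
true_wl = rng.lognormal(mean=0.0, sigma=0.8, size=n_shifts)
true_annual_mean = true_wl.mean()

def coverage(n_samples, n_trials=5000):
    """Fraction of trials where the grab-sample mean is within +/-50% of the true annual mean."""
    hits = 0
    for _ in range(n_trials):
        idx = rng.choice(n_shifts, size=n_samples, replace=False)  # randomized sampling times
        est = true_wl[idx].mean()
        hits += abs(est - true_annual_mean) <= 0.5 * true_annual_mean
    return hits / n_trials

for n in (10, 25, 50):
    print(f"{n} samples/year -> coverage {coverage(n):.2%}")
```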

  9. Sample Preparation Strategies for the Effective Quantitation of Hydrophilic Metabolites in Serum by Multi-Targeted HILIC-MS/MS

    Directory of Open Access Journals (Sweden)

    Elisavet Tsakelidou

    2017-03-01

    Full Text Available The effect of endogenous interferences of serum in multi-targeted metabolite profiling HILIC-MS/MS analysis was investigated by studying different sample preparation procedures. A modified QuEChERS dispersive SPE protocol, a HybridSPE protocol, and a combination of liquid extraction with protein precipitation were compared to a simple protein precipitation. Evaluation of extraction efficiency and sample clean-up was performed for all methods. The SPE sorbent materials tested were found to retain hydrophilic analytes together with endogenous interferences, so additional elution steps were needed. Liquid extraction was not shown to minimise matrix effects. In general, it was observed that a balance should be reached in terms of recovery, efficient clean-up, and sample treatment time when a wide range of metabolites is analysed. A quick step for removing phospholipids prior to the determination of hydrophilic endogenous metabolites is required; however, based on the results from the applied methods, further studies are needed to achieve high recoveries for all metabolites.

  10. The Crane Robust Control

    Directory of Open Access Journals (Sweden)

    Marek Hicar

    2004-01-01

    Full Text Available The article is about a control design for the complete structure of the crane: crab, bridge and crane uplift. The most important unknown parameters for the simulations are the burden weight and the length of the hanging rope. We will use robust control for crab and bridge control to ensure adaptivity to burden weight and rope length. Robust control will be designed for current control of the crab and bridge; it is necessary to know the range of the unknown parameters. The whole robust range will be split into subintervals, and after correct identification of the unknown parameters the most suitable robust controllers will be chosen. The most important condition for the crab and bridge motion is avoiding burden swinging in the final position. The crab and bridge drive is designed with an asynchronous motor fed from a frequency converter. We will use the crane uplift with a burden weight observer in combination with the uplift, crab and bridge drives, with cooperation of their parameters: burden weight, rope length, and crab and bridge position. Controllers are designed by the state control method. We will preferably use a disturbance observer which will identify the burden weight as a disturbance. The system will work in both modes, with an empty hook as well as at maximum load: burden uplifting and dropping down.

  11. Measure of robustness for complex networks

    Science.gov (United States)

    Youssef, Mina Nabil

    to the spread of susceptible/infected/recovered (SIR) epidemics. To compute VCSIR, we propose a novel individual-based approach to model the spread of SIR epidemics in networks, which captures the infection size for a given effective infection rate. Thus, VCSIR quantitatively integrates the infection strength with the corresponding infection size. To optimize the VCSIR metric, a new mitigation strategy is proposed, based on a temporary reduction of contacts in social networks. The social contact network is modeled as a weighted graph that describes the frequency of contacts among the individuals. Thus, we consider the spread of an epidemic as a dynamical system, and the total number of infection cases as the state of the system, while the weight reduction in the social network is the controller variable leading to slow/reduce the spread of epidemics. Using optimal control theory, the obtained solution represents an optimal adaptive weighted network defined over a finite time interval. Moreover, given the high complexity of the optimization problem, we propose two heuristics to find the near optimal solutions by reducing the contacts among the individuals in a decentralized way. Finally, the cascading failures that can take place in power grids and have recently caused several blackouts are studied. We propose a new metric to assess the robustness of the power grid with respect to the cascading failures. The power grid topology is modeled as a network, which consists of nodes and links representing power substations and transmission lines, respectively. We also propose an optimal islanding strategy to protect the power grid when a cascading failure event takes place in the grid. The robustness metrics are numerically evaluated using real and synthetic networks to quantify their robustness with respect to disturbing dynamics. We show that the proposed metrics outperform the classical metrics in quantifying the robustness of networks and the efficiency of the mitigation
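
    A toy version of the weighted-contact-network epidemic model described above can be written with networkx: contact weights scale the per-step transmission probability, so uniformly reducing the weights (the control variable in the abstract) slows the spread. The graph, weights, and rates below are illustrative assumptions; this is not the VCSIR metric itself.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(2)

def sir_final_size(G, beta, gamma, weight_scale=1.0, steps=200):
    """Discrete-time SIR on a weighted graph; returns the final number of recovered nodes."""
    state = {n: "S" for n in G}
    state[next(iter(G))] = "I"                       # seed one infection
    for _ in range(steps):
        new_state = dict(state)
        for u in G:
            if state[u] == "I":
                for v in G[u]:
                    w = G[u][v].get("weight", 1.0) * weight_scale
                    if state[v] == "S" and rng.random() < 1 - np.exp(-beta * w):
                        new_state[v] = "I"
                if rng.random() < gamma:
                    new_state[u] = "R"
        state = new_state
    return sum(s == "R" for s in state.values())

G = nx.watts_strogatz_graph(200, 6, 0.1, seed=0)     # stand-in for a social contact network
for u, v in G.edges:
    G[u][v]["weight"] = rng.uniform(0.5, 2.0)        # contact frequency

for scale in (1.0, 0.5):                             # 0.5 = temporary reduction of contacts
    sizes = [sir_final_size(G, beta=0.3, gamma=0.2, weight_scale=scale) for _ in range(20)]
    print(f"weight scale {scale}: mean final size {np.mean(sizes):.1f}")
```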

  12. Collaboration During the NASA ABoVE Airborne SAR Campaign: Sampling Strategies Used by NGEE Arctic and Other Partners in Alaska and Western Canada

    Science.gov (United States)

    Wullschleger, S. D.; Charsley-Groffman, L.; Baltzer, J. L.; Berg, A. A.; Griffith, P. C.; Jafarov, E. E.; Marsh, P.; Miller, C. E.; Schaefer, K. M.; Siqueira, P.; Wilson, C. J.; Kasischke, E. S.

    2017-12-01

    There is considerable interest in using L- and P-band Synthetic Aperture Radar (SAR) data to monitor variations in aboveground woody biomass, soil moisture, and permafrost conditions in high-latitude ecosystems. Such information is useful for quantifying spatial heterogeneity in surface and subsurface properties, and for model development and evaluation. To conduct these studies, it is desirable that field studies share a common sampling strategy so that the data from multiple sites can be combined and used to analyze variations in conditions across different landscape geomorphologies and vegetation types. In 2015, NASA launched the decade-long Arctic-Boreal Vulnerability Experiment (ABoVE) to study the sensitivity and resilience of these ecosystems to disturbance and environmental change. NASA is able to leverage its remote sensing strengths to collect airborne and satellite observations to capture important ecosystem properties and dynamics across large spatial scales. A critical component of this effort includes collection of ground-based data that can be used to analyze, calibrate and validate remote sensing products. ABoVE researchers at a large number of sites located in important Arctic and boreal ecosystems in Alaska and western Canada are following common design protocols and strategies for measuring soil moisture, thaw depth, biomass, and wetland inundation. Here we elaborate on those sampling strategies as used in the 2017 summer SAR campaign and address the sampling design and measurement protocols for supporting the ABoVE aerial activities. Plot size, transect length, and distribution of replicates across the landscape systematically allowed investigators to optimally sample a site for soil moisture, thaw depth, and organic layer thickness. Specific examples and data sets are described for the Department of Energy's Next-Generation Ecosystem Experiments (NGEE Arctic) project field sites near Nome and Barrow, Alaska. Future airborne and satellite

  13. Latent Class Analysis of Gambling Activities in a Sample of Young Swiss Men: Association with Gambling Problems, Substance Use Outcomes, Personality Traits and Coping Strategies.

    Science.gov (United States)

    Studer, Joseph; Baggio, Stéphanie; Mohler-Kuo, Meichun; Simon, Olivier; Daeppen, Jean-Bernard; Gmel, Gerhard

    2016-06-01

    The study aimed to identify different patterns of gambling activities (PGAs) and to investigate how PGAs differed in gambling problems, substance use outcomes, personality traits and coping strategies. A representative sample of 4989 young Swiss males completed a questionnaire assessing seven distinct gambling activities, gambling problems, substance use outcomes, personality traits and coping strategies. PGAs were identified using latent class analysis (LCA). Differences between PGAs in gambling and substance use outcomes, personality traits and coping strategies were tested. LCA identified six different PGAs. With regard to gambling and substance use outcomes, the three most problematic PGAs were extensive gamblers, followed by private gamblers, and electronic lottery and casino gamblers, respectively. By contrast, the three least detrimental PGAs were rare or non-gamblers, lottery only gamblers and casino gamblers. With regard to personality traits, compared with rare or non-gamblers, private and casino gamblers reported higher levels of sensation seeking. Electronic lottery and casino gamblers, private gamblers and extensive gamblers had higher levels of aggression-hostility. Extensive and casino gamblers reported higher levels of sociability, whereas casino gamblers reported lower levels of anxiety-neuroticism. Extensive gamblers used more maladaptive and less adaptive coping strategies than other groups. Results suggest that gambling is not a homogeneous activity since different types of gamblers exist according to the PGA they are engaged in. Extensive gamblers, electronic and casino gamblers and private gamblers may have the most problematic PGAs. Personality traits and coping skills may predispose individuals to PGAs associated with more or less negative outcomes.

  14. Robustness of airline alliance route networks

    Science.gov (United States)

    Lordan, Oriol; Sallan, Jose M.; Simo, Pep; Gonzalez-Prieto, David

    2015-05-01

    The aim of this study is to analyze the robustness of the three major airline alliances' (i.e., Star Alliance, oneworld and SkyTeam) route networks. Firstly, the normalization of a multi-scale measure of vulnerability is proposed in order to perform the analysis in networks with different sizes, i.e., numbers of nodes. An alternative node selection criterion, based on network efficiency, is also proposed in order to study the robustness and vulnerability of such complex networks. Lastly, a new procedure - the inverted adaptive strategy - is presented to sort the nodes in order to anticipate network breakdown. Finally, the robustness of the three alliance networks is analyzed with (1) a normalized multi-scale measure of vulnerability, (2) an adaptive strategy based on four different criteria and (3) an inverted adaptive strategy based on the efficiency criterion. The results show that Star Alliance has the most resilient route network, followed by SkyTeam and then oneworld. It was also shown that the inverted adaptive strategy based on the efficiency criterion - inverted efficiency - is highly successful in quickly breaking networks, similar to the betweenness criterion but with even better results.
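
    The efficiency-based adaptive attack described above can be approximated in a few lines with networkx: at each step the node whose removal most reduces global efficiency is deleted, and the resulting efficiency trajectory traces how quickly the network breaks down. This is a generic sketch on a random graph, not the alliance route data.

```python
import networkx as nx

def adaptive_efficiency_attack(G, n_removals):
    """Greedily remove the node whose deletion causes the largest drop in global efficiency."""
    G = G.copy()
    trajectory = [nx.global_efficiency(G)]
    for _ in range(n_removals):
        best_node, best_eff = None, None
        for n in list(G.nodes):
            H = G.copy()
            H.remove_node(n)
            eff = nx.global_efficiency(H)
            if best_eff is None or eff < best_eff:
                best_node, best_eff = n, eff
        G.remove_node(best_node)
        trajectory.append(best_eff)
    return trajectory

G = nx.barabasi_albert_graph(60, 2, seed=0)   # stand-in for a route network
print([round(e, 3) for e in adaptive_efficiency_attack(G, 5)])
```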

  15. Reducing regional vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty

    Science.gov (United States)

    Reed, Patrick; Trindade, Bernardo; Herman, Jonathan; Zeff, Harrison; Characklis, Gregory

    2016-04-01

    Emerging water scarcity concerns in southeastern US are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risks for sustained regional droughts. Managing these uncertainties will require that regional water utilities identify regionally coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of the mentioned utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative management strategies. Results show that the sampling of deeply uncertain factors in the computational search phase of MORDM can aid in the discovery of management actions that substantially improve the robustness of individual utilities as well as the overall region to water scarcity. Cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management must be explored jointly to decrease robustness conflicts between the utilities. The insights from this work have general merit for regions where adjacent municipalities can benefit from cooperative regional water portfolio planning.
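
    The core MORDM idea of evaluating candidate strategies against a broad sample of deeply uncertain futures can be sketched as follows: Latin hypercube samples of the uncertain factors are generated and a satisficing-style robustness score (the fraction of futures in which the performance criteria hold) is computed for each strategy. The toy model, factor ranges, and criteria below are invented placeholders, not the Research Triangle analysis.

```python
import numpy as np
from scipy.stats import qmc

# Deeply uncertain factors: demand growth rate and drought severity multiplier (illustrative ranges).
sampler = qmc.LatinHypercube(d=2, seed=0)
futures = qmc.scale(sampler.random(n=1000), l_bounds=[0.00, 0.8], u_bounds=[0.04, 1.6])

def shortfall(strategy_capacity, growth, drought):
    """Toy water-shortfall model: demand grows over a decade, supply shrinks with drought severity."""
    demand = 100 * (1 + growth) ** 10
    supply = strategy_capacity / drought
    return max(0.0, demand - supply)

def robustness(strategy_capacity):
    """Satisficing robustness: fraction of sampled futures with no shortfall."""
    ok = [shortfall(strategy_capacity, g, d) == 0.0 for g, d in futures]
    return np.mean(ok)

for capacity in (130, 160, 190):   # candidate (cooperative) portfolio capacities
    print(f"capacity {capacity}: robust in {robustness(capacity):.1%} of futures")
```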

  16. Effects of Direct Fuel Injection Strategies on Cycle-by-Cycle Variability in a Gasoline Homogeneous Charge Compression Ignition Engine: Sample Entropy Analysis

    Directory of Open Access Journals (Sweden)

    Jacek Hunicz

    2015-01-01

    Full Text Available In this study we summarize and analyze experimental observations of cyclic variability in homogeneous charge compression ignition (HCCI combustion in a single-cylinder gasoline engine. The engine was configured with negative valve overlap (NVO to trap residual gases from prior cycles and thus enable auto-ignition in successive cycles. Correlations were developed between different fuel injection strategies and cycle average combustion and work output profiles. Hypothesized physical mechanisms based on these correlations were then compared with trends in cycle-by-cycle predictability as revealed by sample entropy. The results of these comparisons help to clarify how fuel injection strategy can interact with prior cycle effects to affect combustion stability and so contribute to design control methods for HCCI engines.
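
    Sample entropy, the predictability measure used above, has a compact standard definition: count matching templates of length m and m+1 within a tolerance r and take the negative log of their ratio. The sketch below is a plain NumPy implementation applied to a synthetic IMEP-like series, not the engine data from the paper.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of a 1-D series with template length m and tolerance r * std(x)."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(dist <= tol)
        return count

    b = count_matches(m)      # template matches of length m
    a = count_matches(m + 1)  # template matches of length m + 1
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(4)
imep = 5.0 + 0.2 * rng.standard_normal(500)   # synthetic cycle-resolved IMEP-like series [bar]
print(f"SampEn(m=2, r=0.2): {sample_entropy(imep):.3f}")
```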

  17. Optimisation (sampling strategies and analytical procedures) for site specific environment monitoring at the areas of uranium production legacy sites in Ukraine - 59045

    International Nuclear Information System (INIS)

    Voitsekhovych, Oleg V.; Lavrova, Tatiana V.; Kostezh, Alexander B.

    2012-01-01

    There are many sites in the world where the environment is still affected by contamination related to uranium production carried out in the past. The authors' experience shows that a lack of site characterization data and incomplete or unreliable environmental monitoring studies can significantly limit the quality of the Safety Assessment procedures and priority-action analyses needed for Remediation Planning. During recent decades, the analytical laboratories of many of the enterprises currently responsible for establishing site-specific environmental monitoring programmes have significantly improved their technical sampling and analytical capacities. However, limited experience in optimal site-specific sampling strategy planning, and in the application of the required analytical techniques, such as modern alpha-beta radiometers, gamma and alpha spectrometry, and liquid-scintillation methods for the determination of U-Th series radionuclides in the environment, does not allow these laboratories to develop and conduct the monitoring programmes efficiently as a basis for further Safety Assessment in decision-making procedures. This paper presents conclusions gained from the experience of establishing monitoring programmes in Ukraine and proposes some practical steps for optimizing the sampling strategy planning and analytical procedures to be applied in areas requiring Safety Assessment and justification for their potential remediation and safe management. (authors)

  18. Trends in Scottish newborn screening programme for congenital hypothyroidism 1980-2014: strategies for reducing age at notification after initial and repeat sampling.

    Science.gov (United States)

    Mansour, Chourouk; Ouarezki, Yasmine; Jones, Jeremy; Fitch, Moira; Smith, Sarah; Mason, Avril; Donaldson, Malcolm

    2017-10-01

    To determine ages at first capillary sampling and notification and age at notification after second sampling in Scottish newborns referred with elevated thyroid-stimulating hormone (TSH). Referrals between 1980 and 2014 inclusive were grouped into seven 5-year blocks and analysed according to agreed standards. Of 2 116 132 newborn infants screened, 919 were referred with capillary TSH elevation ≥8 mU/L, of whom 624 had definite (606) or probable (18) congenital hypothyroidism. Median age at first sampling fell from 7 to 5 days between 1980 and 2014 (standard 4-7 days), with 22, 8 and 3 infants sampled >7 days during 2000-2004, 2005-2009 and 2010-2014, respectively. Median age at notification was consistently ≤14 days, with the range falling from 6-78 days during 2000-2004 to 7-52 days during 2005-2009 and 7-32 days during 2010-2014, and with 12 (14.6%), 6 (5.6%) and 5 (4.3%) infants notified >14 days, respectively. However, 18/123 (14.6%) of infants undergoing second sampling from 2000 onwards breached the ≤26-day standard for notification. By 2010-2014, the 91 infants with confirmed congenital hypothyroidism had shown a favourable median age at first sample (5 days), with start of treatment (10.5 days) approaching age at notification. Most standards for newborn thyroid screening are being met by the Scottish programme, but there is a need to reduce the age range at notification, particularly following second sampling. Strategies to improve screening performance include carrying out initial capillary sampling as close to 96 hours as possible, introducing 6-day laboratory reporting and using electronic transmission for communicating repeat requests. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  19. Calibration strategies for the direct determination of Ca, K, and Mg in commercial samples of powdered milk and solid dietary supplements using laser-induced breakdown spectroscopy (LIBS).

    Science.gov (United States)

    Dos Santos Augusto, Amanda; Barsanelli, Paulo Lopes; Pereira, Fabiola Manhas Verbi; Pereira-Filho, Edenir Rodrigues

    2017-04-01

    This study describes the application of laser-induced breakdown spectroscopy (LIBS) for the direct determination of Ca, K and Mg in powdered milk and solid dietary supplements. The following two calibration strategies were applied: (i) use of the samples to calculate calibration models (milk) and (ii) use of sample mixtures (supplements) to obtain a calibration curve. In both cases, reference values obtained from inductively coupled plasma optical emission spectroscopy (ICP OES) after acid digestion were used. The emission line selection from LIBS spectra was accomplished by analysing the regression coefficients of partial least squares (PLS) regression models, and wavelengths of 534.947, 766.490 and 285.213 nm were chosen for Ca, K and Mg, respectively. In the case of the determination of Ca in supplements, it was necessary to perform a dilution (10-fold) of the standards and samples to minimize matrix interference. The average accuracy for powdered milk ranged from 60% to 168% for Ca, 77% to 152% for K and 76% to 131% for Mg. In the case of dietary supplements, the standard error of prediction (SEP) varied from 295 mg kg⁻¹ (Mg) to 3782 mg kg⁻¹ (Ca). The proposed method presented an analytical frequency of around 60 samples per hour and the step of sample manipulation was drastically reduced, with no generation of toxic chemical residues. Copyright © 2017 Elsevier Ltd. All rights reserved.
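
    Selecting emission lines from the regression coefficients of a PLS model, as done above, can be prototyped with scikit-learn: fit PLSRegression on the LIBS spectra and rank wavelength channels by the absolute value of the coefficients. The spectra and reference values below are random placeholders standing in for real data, and the channel grid is an assumption.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)

# Placeholder data: 40 spectra x 2048 wavelength channels, reference Ca values from ICP OES.
wavelengths = np.linspace(200.0, 800.0, 2048)
X = rng.random((40, 2048))
y = rng.random(40) * 100.0

pls = PLSRegression(n_components=5)
pls.fit(X, y)

coef = np.abs(pls.coef_).ravel()          # one coefficient per wavelength channel
top = np.argsort(coef)[::-1][:5]          # channels with the largest influence on the model
for idx in sorted(top):
    print(f"{wavelengths[idx]:.3f} nm  |coef| = {coef[idx]:.4f}")
```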

  20. Simultaneous escaping of explicit and hidden free energy barriers: application of the orthogonal space random walk strategy in generalized ensemble based conformational sampling.

    Science.gov (United States)

    Zheng, Lianqing; Chen, Mengen; Yang, Wei

    2009-06-21

    To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures, etc. As usually observed, in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction and these residual free energy barriers could greatly abolish the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimension subset of the target system; then the "Hamiltonian lagging" problem, which reveals the fact that necessary structural relaxation falls behind the move of the collective variable, may be likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.

  1. Testing the efficiency of rover science protocols for robotic sample selection: A GeoHeuristic Operational Strategies Test

    Science.gov (United States)

    Yingst, R. A.; Bartley, J. K.; Chidsey, T. C.; Cohen, B. A.; Gilleaudeau, G. J.; Hynek, B. M.; Kah, L. C.; Minitti, M. E.; Williams, R. M. E.; Black, S.; Gemperline, J.; Schaufler, R.; Thomas, R. J.

    2018-05-01

    The GHOST field tests are designed to isolate and test science-driven rover operations protocols, to determine best practices. During a recent field test at a potential Mars 2020 landing site analog, we tested two Mars Science Laboratory data-acquisition and decision-making methods to assess resulting science return and sample quality: a linear method, where sites of interest are studied in the order encountered, and a "walkabout-first" method, where sites of interest are examined remotely before down-selecting to a subset of sites that are interrogated with more resource-intensive instruments. The walkabout method cost less time and fewer resources, while increasing confidence in interpretations. Contextual data critical to evaluating site geology was acquired earlier than for the linear method, and given a higher priority, which resulted in development of more mature hypotheses earlier in the analysis process. Combined, this saved time and energy in the collection of data with more limited spatial coverage. Based on these results, we suggest that the walkabout method be used where doing so would provide early context and time for the science team to develop hypotheses-critical tests; and that in gathering context, coverage may be more important than higher resolution.

  2. A Practical, Robust Methodology for Acquiring New Observation Data Using Computationally Expensive Groundwater Models

    Science.gov (United States)

    Siade, Adam J.; Hall, Joel; Karelse, Robert N.

    2017-11-01

    Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. There exist a number of data-worth and experimental design strategies developed for this purpose. However, these studies often ignore issues related to real-world groundwater models such as computational expense, existing observation data, high-parameter dimension, etc. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established d-optimality criterion, and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification. And, a heuristic methodology, based on the concept of the greedy algorithm, is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
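
    The greedy flavour of the heuristic can be illustrated with a basic D-optimality sketch: from a pool of candidate observation locations with known (here, randomly generated) parameter sensitivities, repeatedly add the location that most increases the log-determinant of the information matrix. This is a generic sketch, not the PEST/Null-Space Monte Carlo workflow used in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

n_candidates, n_params = 50, 6
J = rng.standard_normal((n_candidates, n_params))   # sensitivity of each candidate observation to each parameter

def log_det_info(rows):
    """log-determinant of the (regularized) information matrix for a set of selected observations."""
    Js = J[rows]
    info = Js.T @ Js + 1e-6 * np.eye(n_params)
    return np.linalg.slogdet(info)[1]

selected = []
for _ in range(8):                                   # pick 8 new observation locations greedily
    remaining = [i for i in range(n_candidates) if i not in selected]
    best = max(remaining, key=lambda i: log_det_info(selected + [i]))
    selected.append(best)

print("greedy D-optimal selection:", selected)
```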

  3. Swab2know: An HIV-Testing Strategy Using Oral Fluid Samples and Online Communication of Test Results for Men Who Have Sex With Men in Belgium.

    Science.gov (United States)

    Platteau, Tom; Fransen, Katrien; Apers, Ludwig; Kenyon, Chris; Albers, Laura; Vermoesen, Tine; Loos, Jasna; Florence, Eric

    2015-09-01

    As HIV remains a public health concern, increased testing among those at risk for HIV acquisition is important. Men who have sex with men (MSM) are the most important group for targeted HIV testing in Europe. Several new strategies have been developed and implemented to increase HIV-testing uptake in this group, among them the Swab2know project. In this project, we aim to assess the acceptability and feasibility of outreach and online HIV testing using oral fluid samples as well as Web-based delivery of test results. Sample collection happened between December 2012 and April 2014 via outreach and online sampling among MSM. Test results were communicated through a secured website. HIV tests were executed in the laboratory. Each reactive sample needed to be confirmed using state-of-the-art confirmation procedures on a blood sample. Close follow-up of participants who did not pick up their results, and those with reactive results, was included in the protocol. Participants were asked to provide feedback on the methodology using a short survey. During 17 months, 1071 tests were conducted on samples collected from 898 men. Over half of the samples (553/1071, 51.63%) were collected during 23 outreach sessions. During an 8-month period, 430 samples out of 1071 (40.15%) were collected from online sampling. Additionally, 88 samples out of 1071 (8.22%) were collected by two partner organizations during face-to-face consultations with MSM and male sex workers. Results of 983 out of 1071 tests (91.78%) had been collected from the website. The pickup rate was higher among participants who ordered their kit online (421/430, 97.9%) compared to those participating during outreach activities (559/641, 87.2%; P<.001). Online participants were more likely to have never been tested before (17.3% vs 10.0%; P=.001) and reported more sexual partners in the 6 months prior to participation in the project (mean 7.18 vs 3.23; Ponline counseling tool), and in studying the cost effectiveness of the

  4. Nonlinear robust hierarchical control for nonlinear uncertain systems

    Directory of Open Access Journals (Sweden)

    Leonessa Alexander

    1999-01-01

    Full Text Available A nonlinear robust control-system design framework predicated on a hierarchical switching controller architecture parameterized over a set of moving nominal system equilibria is developed. Specifically, using equilibria-dependent Lyapunov functions, a hierarchical nonlinear robust control strategy is developed that robustly stabilizes a given nonlinear system over a prescribed range of system uncertainty by robustly stabilizing a collection of nonlinear controlled uncertain subsystems. The robust switching nonlinear controller architecture is designed based on a generalized (lower semicontinuous) Lyapunov function obtained by minimizing a potential function over a given switching set induced by the parameterized nominal system equilibria. The proposed framework robustly stabilizes a compact positively invariant set of a given nonlinear uncertain dynamical system with structured parametric uncertainty. Finally, the efficacy of the proposed approach is demonstrated on a jet engine propulsion control problem with uncertain pressure-flow map data.

  5. Passion, Robustness and Perseverance

    DEFF Research Database (Denmark)

    Lim, Miguel Antonio; Lund, Rebecca

    2016-01-01

    Evaluation and merit in the measured university are increasingly based on taken-for-granted assumptions about the "ideal academic". We suggest that the scholar now needs to show that she is passionate about her work and that she gains pleasure from pursuing her craft. We suggest that passion and pleasure achieve an exalted status as something compulsory. The scholar ought to feel passionate about her work and signal that she takes pleasure also in the difficult moments. Passion has become a signal of robustness and perseverance in a job market characterised by funding shortages, increased pressure... way to demonstrate their potential and, crucially, their passion for their work. Drawing on the literature on technologies of governance, we reflect on what is captured and what is left out by these two evaluation instruments. We suggest that bibliometric analysis at the individual level is deeply...

  6. Robust Optical Flow Estimation

    Directory of Open Access Journals (Sweden)

    Javier Sánchez Pérez

    2013-10-01

    Full Text Available In this work, we describe an implementation of the variational method proposed by Brox et al. in 2004, which yields accurate optical flows with low running times. It has several benefits with respect to the method of Horn and Schunck: it is more robust to the presence of outliers, produces piecewise-smooth flow fields and can cope with constant brightness changes. This method relies on the brightness and gradient constancy assumptions, using the information of the image intensities and the image gradients to find correspondences. It also generalizes the use of continuous L1 functionals, which help mitigate the effect of outliers and create a Total Variation (TV) regularization. Additionally, it introduces a simple temporal regularization scheme that enforces a continuous temporal coherence of the flow fields.

  7. Robust Multimodal Dictionary Learning

    Science.gov (United States)

    Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc

    2014-01-01

    We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by lack of correspondence between image modalities in training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm alternates between identifying poorly corresponding patches and refining the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674

  8. Robust snapshot interferometric spectropolarimetry.

    Science.gov (United States)

    Kim, Daesuk; Seo, Yoonho; Yoon, Yonghee; Dembele, Vamara; Yoon, Jae Woong; Lee, Kyu Jin; Magnusson, Robert

    2016-05-15

    This Letter describes a Stokes vector measurement method based on a snapshot interferometric common-path spectropolarimeter. The proposed scheme, which employs an interferometric polarization-modulation module, can extract the spectral polarimetric parameters Ψ(k) and Δ(k) of a transmissive anisotropic object by which an accurate Stokes vector can be calculated in the spectral domain. It is inherently strongly robust to the object 3D pose variation, since it is designed distinctly so that the measured object can be placed outside of the interferometric module. Experiments are conducted to verify the feasibility of the proposed system. The proposed snapshot scheme enables us to extract the spectral Stokes vector of a transmissive anisotropic object within tens of msec with high accuracy.

  9. International Conference on Robust Statistics

    CERN Document Server

    Filzmoser, Peter; Gather, Ursula; Rousseeuw, Peter

    2003-01-01

    Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include e.g.: robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, the aspects of application and programming tools complete the volume.

  10. Utilizing the ultrasensitive Schistosoma up-converting phosphor lateral flow circulating anodic antigen (UCP-LF CAA) assay for sample pooling-strategies.

    Science.gov (United States)

    Corstjens, Paul L A M; Hoekstra, Pytsje T; de Dood, Claudia J; van Dam, Govert J

    2017-11-01

    Methodological applications of the high sensitivity genus-specific Schistosoma CAA strip test, allowing detection of single worm active infections (ultimate sensitivity), are discussed for efficient utilization in sample pooling strategies. Besides relevant cost reduction, pooling of samples rather than individual testing can provide valuable data for large scale mapping, surveillance, and monitoring. The laboratory-based CAA strip test utilizes luminescent quantitative up-converting phosphor (UCP) reporter particles and a rapid user-friendly lateral flow (LF) assay format. The test includes a sample preparation step that permits virtually unlimited sample concentration with urine, reaching ultimate sensitivity (single worm detection) at 100% specificity. This facilitates testing large urine pools from many individuals with minimal loss of sensitivity and specificity. The test determines the average CAA level of the individuals in the pool, thus indicating overall worm burden and prevalence. When test results are required at the individual level, smaller pools need to be analysed, with the pool size based on expected prevalence or, when unknown, on the average CAA level of a larger group; CAA-negative pools do not require individual test results and thus reduce the number of tests. Straightforward pooling strategies indicate that at the sub-population level the CAA strip test is an efficient assay for general mapping, identification of hotspots, determination of stratified infection levels, and accurate monitoring of mass drug administrations (MDA). At the individual level, the number of tests can be reduced, e.g. in low-endemicity settings, since the pool size can be increased as prevalence decreases. At the sub-population level, average CAA concentrations determined in urine pools can be an appropriate measure indicating worm burden. Pooling strategies allowing this type of large scale testing are feasible with the various CAA strip test formats and do not affect
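
    The test-saving logic of pooling can be made concrete with a small Dorfman-style calculation: if a pool tests negative no individual results are needed, otherwise every member is retested, so the expected number of tests per person depends on pool size and prevalence. The CAA-specific sensitivity considerations are ignored here; this is only the counting argument, with arbitrary prevalence values.

```python
def expected_tests_per_person(pool_size, prevalence):
    """Dorfman two-stage pooling: 1 pool test plus, if the pool is positive, one test per member."""
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return 1 / pool_size + p_pool_positive

for prevalence in (0.01, 0.05, 0.20):
    best = min(range(2, 51), key=lambda k: expected_tests_per_person(k, prevalence))
    cost = expected_tests_per_person(best, prevalence)
    print(f"prevalence {prevalence:.0%}: best pool size {best}, "
          f"{cost:.2f} tests/person ({1 - cost:.0%} saved vs individual testing)")
```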

  11. Comprehensive Study of Human External Exposure to Organophosphate Flame Retardants via Air, Dust, and Hand Wipes: The Importance of Sampling and Assessment Strategy.

    Science.gov (United States)

    Xu, Fuchao; Giovanoulis, Georgios; van Waes, Sofie; Padilla-Sanchez, Juan Antonio; Papadopoulou, Eleni; Magnér, Jorgen; Haug, Line Småstuen; Neels, Hugo; Covaci, Adrian

    2016-07-19

    We compared the human exposure to organophosphate flame retardants (PFRs) via inhalation, dust ingestion, and dermal absorption using different sampling and assessment strategies. Air (indoor stationary air and personal ambient air), dust (floor dust and surface dust), and hand wipes were sampled from 61 participants and their houses. We found that stationary air contains higher levels of ΣPFRs (median = 163 ng/m(3), IQR = 161 ng/m(3)) than personal air (median = 44 ng/m(3), IQR = 55 ng/m(3)), suggesting that the stationary air sample could generate a larger bias for inhalation exposure assessment. Tris(chloropropyl) phosphate isomers (ΣTCPP) accounted for over 80% of ΣPFRs in both stationary and personal air. PFRs were frequently detected in both surface dust (ΣPFRs median = 33 100 ng/g, IQR = 62 300 ng/g) and floor dust (ΣPFRs median = 20 500 ng/g, IQR = 30 300 ng/g). Tris(2-butoxylethyl) phosphate (TBOEP) accounted for 40% and 60% of ΣPFRs in surface and floor dust, respectively, followed by ΣTCPP (30% and 20%, respectively). TBOEP (median = 46 ng, IQR = 69 ng) and ΣTCPP (median = 37 ng, IQR = 49 ng) were also frequently detected in hand wipe samples. For the first time, a comprehensive assessment of human exposure to PFRs via inhalation, dust ingestion, and dermal absorption was conducted with individual personal data rather than reference factors of the general population. Inhalation seems to be the major exposure pathway for ΣTCPP and tris(2-chloroethyl) phosphate (TCEP), while participants had higher exposure to TBOEP and triphenyl phosphate (TPHP) via dust ingestion. Estimated exposure to ΣPFRs was the highest with stationary air inhalation (median =34 ng·kg bw(-1)·day(-1), IQR = 38 ng·kg bw(-1)·day(-1)), followed by surface dust ingestion (median = 13 ng·kg bw(-1)·day(-1), IQR = 28 ng·kg bw(-1)·day(-1)), floor dust ingestion and personal air inhalation. The median dermal exposure on hand wipes was 0.32 ng·kg bw(-1)·day(-1) (IQR

  12. Development of an accurate, sensitive, and robust isotope dilution laser ablation ICP-MS method for simultaneous multi-element analysis (chlorine, sulfur, and heavy metals) in coal samples

    International Nuclear Information System (INIS)

    Boulyga, Sergei F.; Heilmann, Jens; Heumann, Klaus G.; Prohaska, Thomas

    2007-01-01

    A method for the direct multi-element determination of Cl, S, Hg, Pb, Cd, U, Br, Cr, Cu, Fe, and Zn in powdered coal samples has been developed by applying inductively coupled plasma isotope dilution mass spectrometry (ICP-IDMS) with laser-assisted introduction into the plasma. A sector-field ICP-MS with a mass resolution of 4,000 and a high-ablation rate laser ablation system provided significantly better sensitivity, detection limits, and accuracy compared to a conventional laser ablation system coupled with a quadrupole ICP-MS. The sensitivity ranges from about 590 cps for ³⁵Cl⁺ to more than 6 × 10⁵ cps for ²³⁸U⁺ for 1 μg of trace element per gram of coal sample. Detection limits vary from 450 ng g⁻¹ for chlorine and 18 ng g⁻¹ for sulfur to 9.5 pg g⁻¹ for mercury and 0.3 pg g⁻¹ for uranium. Analyses of minor and trace elements in four certified reference materials (BCR-180 Gas Coal, BCR-331 Steam Coal, SRM 1632c Trace Elements in Coal, SRM 1635 Trace Elements in Coal) yielded good agreement of usually not more than 5% deviation from the certified values and precisions of less than 10% relative standard deviation for most elements. Higher relative standard deviations were found for particular elements such as Hg and Cd, caused by inhomogeneities due to associations of these elements within micro-inclusions in coal, which was demonstrated for Hg in SRM 1635, SRM 1632c, and another standard reference material (SRM 2682b, Sulfur and Mercury in Coal). The developed LA-ICP-IDMS method with its simple sample pretreatment opens the possibility for accurate, fast, and highly sensitive determinations of environmentally critical contaminants in coal as well as of trace impurities in similar sample materials like graphite powder and activated charcoal on a routine basis. (orig.)

  13. Development of an accurate, sensitive, and robust isotope dilution laser ablation ICP-MS method for simultaneous multi-element analysis (chlorine, sulfur, and heavy metals) in coal samples.

    Science.gov (United States)

    Boulyga, Sergei F; Heilmann, Jens; Prohaska, Thomas; Heumann, Klaus G

    2007-10-01

    A method for the direct multi-element determination of Cl, S, Hg, Pb, Cd, U, Br, Cr, Cu, Fe, and Zn in powdered coal samples has been developed by applying inductively coupled plasma isotope dilution mass spectrometry (ICP-IDMS) with laser-assisted introduction into the plasma. A sector-field ICP-MS with a mass resolution of 4,000 and a high-ablation rate laser ablation system provided significantly better sensitivity, detection limits, and accuracy compared to a conventional laser ablation system coupled with a quadrupole ICP-MS. The sensitivity ranges from about 590 cps for (35)Cl+ to more than 6 x 10(5) cps for (238)U+ for 1 microg of trace element per gram of coal sample. Detection limits vary from 450 ng g(-1) for chlorine and 18 ng g(-1) for sulfur to 9.5 pg g(-1) for mercury and 0.3 pg g(-1) for uranium. Analyses of minor and trace elements in four certified reference materials (BCR-180 Gas Coal, BCR-331 Steam Coal, SRM 1632c Trace Elements in Coal, SRM 1635 Trace Elements in Coal) yielded good agreement of usually not more than 5% deviation from the certified values and precisions of less than 10% relative standard deviation for most elements. Higher relative standard deviations were found for particular elements such as Hg and Cd caused by inhomogeneities due to associations of these elements within micro-inclusions in coal which was demonstrated for Hg in SRM 1635, SRM 1632c, and another standard reference material (SRM 2682b, Sulfur and Mercury in Coal). The developed LA-ICP-IDMS method with its simple sample pretreatment opens the possibility for accurate, fast, and highly sensitive determinations of environmentally critical contaminants in coal as well as of trace impurities in similar sample materials like graphite powder and activated charcoal on a routine basis.
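
    The isotope dilution quantification underlying both records follows the standard two-isotope IDMS relation: the amount of analyte in the sample is obtained from the amount added with the enriched spike and the isotope ratios of the sample, the spike, and the measured blend. The sketch below evaluates that textbook relation with made-up numbers, purely to show the arithmetic; it is not the authors' calibration procedure.

```python
def idms_amount(n_spike_iso_b, r_sample, r_spike, r_blend):
    """
    Basic two-isotope IDMS relation.
    r_* are isotope amount ratios n(A)/n(B) in the sample, the spike, and the measured blend;
    n_spike_iso_b is the amount of isotope B added with the spike (mol).
    Returns the amount of isotope B originally present in the sample (mol).
    """
    return n_spike_iso_b * (r_spike - r_blend) / (r_blend - r_sample)

# Made-up numbers for illustration only.
n_b_spike = 1.0e-9          # mol of isotope B added via the enriched spike
r_sample = 2.5              # natural A/B ratio in the sample
r_spike = 0.01              # A/B ratio in the enriched spike
r_blend = 0.80              # A/B ratio measured by ICP-MS in the blend

n_b_sample = idms_amount(n_b_spike, r_sample, r_spike, r_blend)
n_total = n_b_sample * (1 + r_sample)   # total element amount for a two-isotope element
print(f"isotope B in sample: {n_b_sample:.3e} mol, total element: {n_total:.3e} mol")
```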

  14. SaDA: From Sampling to Data Analysis-An Extensible Open Source Infrastructure for Rapid, Robust and Automated Management and Analysis of Modern Ecological High-Throughput Microarray Data.

    Science.gov (United States)

    Singh, Kumar Saurabh; Thual, Dominique; Spurio, Roberto; Cannata, Nicola

    2015-06-03

    One of the most crucial characteristics of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and environmental or biomedical samples. An efficient link between sample data and experimental results is absolutely important for the successful outcome of a collaborative project. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such a LIMS can indeed bring laboratory information management to a higher level, but most of the time this requires a sufficient investment of money, time and technical effort. There is a clear need for a lightweight open source system which can easily be managed on local servers and handled by individual researchers. Here we present a software named SaDA for storing, retrieving and analyzing data originating from microorganism monitoring experiments. SaDA is fully integrated in the management of environmental samples, oligonucleotide sequences, microarray data and the subsequent downstream analysis procedures. It is simple and generic software, and can be extended and customized for various environmental and biomedical studies.

  15. SaDA: From Sampling to Data Analysis—An Extensible Open Source Infrastructure for Rapid, Robust and Automated Management and Analysis of Modern Ecological High-Throughput Microarray Data

    Science.gov (United States)

    Singh, Kumar Saurabh; Thual, Dominique; Spurio, Roberto; Cannata, Nicola

    2015-01-01

    One of the most crucial characteristics of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and environmental or biomedical samples. An efficient link between sample data and experimental results is absolutely important for the successful outcome of a collaborative project. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such a LIMS can indeed bring laboratory information management to a higher level, but most of the time this requires a sufficient investment of money, time and technical effort. There is a clear need for a lightweight open source system which can easily be managed on local servers and handled by individual researchers. Here we present a software named SaDA for storing, retrieving and analyzing data originating from microorganism monitoring experiments. SaDA is fully integrated in the management of environmental samples, oligonucleotide sequences, microarray data and the subsequent downstream analysis procedures. It is simple and generic software, and can be extended and customized for various environmental and biomedical studies. PMID:26047146

  16. SaDA: From Sampling to Data Analysis—An Extensible Open Source Infrastructure for Rapid, Robust and Automated Management and Analysis of Modern Ecological High-Throughput Microarray Data

    Directory of Open Access Journals (Sweden)

    Kumar Saurabh Singh

    2015-06-01

    Full Text Available One of the most crucial characteristics of day-to-day laboratory information management is the collection, storage and retrieval of information about research subjects and environmental or biomedical samples. An efficient link between sample data and experimental results is absolutely important for the successful outcome of a collaborative project. Currently available software solutions are largely limited to large-scale, expensive commercial Laboratory Information Management Systems (LIMS). Acquiring such a LIMS can indeed bring laboratory information management to a higher level, but most of the time this requires a sufficient investment of money, time and technical effort. There is a clear need for a lightweight open source system which can easily be managed on local servers and handled by individual researchers. Here we present a software named SaDA for storing, retrieving and analyzing data originating from microorganism monitoring experiments. SaDA is fully integrated in the management of environmental samples, oligonucleotide sequences, microarray data and the subsequent downstream analysis procedures. It is simple and generic software, and can be extended and customized for various environmental and biomedical studies.

  17. Soil sampling

    International Nuclear Information System (INIS)

    Fortunati, G.U.; Banfi, C.; Pasturenzi, M.

    1994-01-01

    This study attempts to survey the problems associated with techniques and strategies of soil sampling. Keeping in mind the well defined objectives of a sampling campaign, the aim was to highlight the most important aspect of representativeness of samples as a function of the available resources. Particular emphasis was given to the techniques and particularly to a description of the many types of samplers which are in use. The procedures and techniques employed during the investigations following the Seveso accident are described. (orig.)

  18. Replication and robustness in developmental research.

    Science.gov (United States)

    Duncan, Greg J; Engel, Mimi; Claessens, Amy; Dowsett, Chantelle J

    2014-11-01

    Replications and robustness checks are key elements of the scientific method and a staple in many disciplines. However, leading journals in developmental psychology rarely include explicit replications of prior research conducted by different investigators, and few require authors to establish in their articles or online appendices that their key results are robust across estimation methods, data sets, and demographic subgroups. This article makes the case for prioritizing both explicit replications and, especially, within-study robustness checks in developmental psychology. It provides evidence on variation in effect sizes in developmental studies and documents strikingly different replication and robustness-checking practices in a sample of journals in developmental psychology and in a sister behavioral science, applied economics. Our goal is not to show that any one behavioral science has a monopoly on best practices, but rather to show how journals from a related discipline address vital concerns of replication and generalizability shared by all social and behavioral sciences. We provide recommendations for promoting graduate training in replication and robustness-checking methods and for editorial policies that encourage these practices. Although some of our recommendations may shift the form and substance of developmental research articles, we argue that they would generate considerable scientific benefits for the field. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  19. A new modeling strategy for third-order fast high-performance liquid chromatographic data with fluorescence detection. Quantitation of fluoroquinolones in water samples.

    Science.gov (United States)

    Alcaráz, Mirta R; Bortolato, Santiago A; Goicoechea, Héctor C; Olivieri, Alejandro C

    2015-03-01

    Matrix augmentation is regularly employed in extended multivariate curve resolution-alternating least-squares (MCR-ALS), as applied to analytical calibration based on second- and third-order data. However, this highly useful concept has almost no correspondence in parallel factor analysis (PARAFAC) of third-order data. In the present work, we propose a strategy to process third-order chromatographic data with matrix fluorescence detection, based on an Augmented PARAFAC model. The latter involves decomposition of a three-way data array augmented along the elution time mode with data for the calibration samples and for each of the test samples. A set of excitation-emission fluorescence matrices, measured at different chromatographic elution times for drinking water samples, containing three fluoroquinolones and uncalibrated interferences, were evaluated using this approach. Augmented PARAFAC exploits the second-order advantage, even in the presence of significant changes in chromatographic profiles from run to run. The obtained relative errors of prediction were ca. 10 % for ofloxacin, ciprofloxacin, and danofloxacin, with a significant enhancement in analytical figures of merit in comparison with previous reports. The results are compared with those furnished by MCR-ALS.

  20. Dynamics robustness of cascading systems.

    Directory of Open Access Journals (Sweden)

    Jonathan T Young

    2017-03-01

    Full Text Available A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be experimentally robust. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses is necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profiles and the response duration post-stimulus are robust to perturbations of certain parameters. Then, by analyzing the linearized model, we elucidated the criteria for when signaling cascades will display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade's kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: (1) a constraint on the rate-limiting process: the phosphatase activity in the perturbed module is not the slowest; and (2) constraints on the initial conditions: the kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discuss the relevance of such robustness to several biological examples and the validity of the above conditions therein. Dynamics robustness of this kind should be applicable to a wide variety of systems.
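
    To make the masking effect concrete, the following minimal sketch (not the authors' model; all rate constants and the pulse length are hypothetical) simulates a generic three-stage linear cascade with SciPy and compares the response duration of the last module before and after perturbing an upstream activation rate. When the last module is rate-limiting, the two durations come out nearly identical, which is the dynamics-robustness behaviour described above.

        # Illustrative sketch only: a generic three-stage linear signaling cascade,
        # not the exact model of the paper. Parameters a_i, b_i are hypothetical.
        import numpy as np
        from scipy.integrate import solve_ivp

        def cascade(t, x, a, b, stim_end=1.0):
            # x[i]: activity of module i; input is a square pulse of length stim_end
            u = 1.0 if t < stim_end else 0.0
            dx0 = a[0] * u - b[0] * x[0]
            dx1 = a[1] * x[0] - b[1] * x[1]
            dx2 = a[2] * x[1] - b[2] * x[2]
            return [dx0, dx1, dx2]

        def response_duration(a, b, threshold=0.05, t_max=50.0):
            sol = solve_ivp(cascade, (0.0, t_max), [0.0, 0.0, 0.0], args=(a, b),
                            dense_output=True, max_step=0.01)
            t = np.linspace(0.0, t_max, 5000)
            x2 = sol.sol(t)[2]
            above = t[x2 > threshold * x2.max()]
            return above[-1] - above[0] if above.size else 0.0

        base = response_duration(a=[1.0, 1.0, 1.0], b=[5.0, 5.0, 0.2])
        perturbed = response_duration(a=[2.0, 1.0, 1.0], b=[5.0, 5.0, 0.2])  # perturb upstream rate
        print(f"duration (base)      = {base:.2f}")
        print(f"duration (perturbed) = {perturbed:.2f}")  # similar values illustrate dynamics robustness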

  1. Robust continuous clustering.

    Science.gov (United States)

    Shah, Sohil Atul; Koltun, Vladlen

    2017-09-12

    Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank.

  2. Development of an accurate, sensitive, and robust isotope dilution laser ablation ICP-MS method for simultaneous multi-element analysis (chlorine, sulfur, and heavy metals) in coal samples

    Energy Technology Data Exchange (ETDEWEB)

    Boulyga, Sergei F. [University of Natural Resources and Applied Life Sciences, Department of Chemistry, Division of Analytical Chemistry-VIRIS Laboratory, Vienna (Austria); Johannes Gutenberg-University, Institute of Inorganic Chemistry and Analytical Chemistry, Mainz (Germany); Heilmann, Jens; Heumann, Klaus G. [Johannes Gutenberg-University, Institute of Inorganic Chemistry and Analytical Chemistry, Mainz (Germany); Prohaska, Thomas [University of Natural Resources and Applied Life Sciences, Department of Chemistry, Division of Analytical Chemistry-VIRIS Laboratory, Vienna (Austria)

    2007-10-15

    A method for the direct multi-element determination of Cl, S, Hg, Pb, Cd, U, Br, Cr, Cu, Fe, and Zn in powdered coal samples has been developed by applying inductively coupled plasma isotope dilution mass spectrometry (ICP-IDMS) with laser-assisted introduction into the plasma. A sector-field ICP-MS with a mass resolution of 4,000 and a high-ablation-rate laser ablation system provided significantly better sensitivity, detection limits, and accuracy compared to a conventional laser ablation system coupled with a quadrupole ICP-MS. The sensitivity ranges from about 590 cps for ³⁵Cl⁺ to more than 6 × 10⁵ cps for ²³⁸U⁺ for 1 µg of trace element per gram of coal sample. Detection limits vary from 450 ng g⁻¹ for chlorine and 18 ng g⁻¹ for sulfur to 9.5 pg g⁻¹ for mercury and 0.3 pg g⁻¹ for uranium. Analyses of minor and trace elements in four certified reference materials (BCR-180 Gas Coal, BCR-331 Steam Coal, SRM 1632c Trace Elements in Coal, SRM 1635 Trace Elements in Coal) yielded good agreement of usually not more than 5% deviation from the certified values and precisions of less than 10% relative standard deviation for most elements. Higher relative standard deviations were found for particular elements such as Hg and Cd caused by inhomogeneities due to associations of these elements within micro-inclusions in coal, which was demonstrated for Hg in SRM 1635, SRM 1632c, and another standard reference material (SRM 2682b, Sulfur and Mercury in Coal). The developed LA-ICP-IDMS method with its simple sample pretreatment opens the possibility for accurate, fast, and highly sensitive determinations of environmentally critical contaminants in coal as well as of trace impurities in similar sample materials like graphite powder and activated charcoal on a routine basis. (orig.)
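
    The quantification principle behind ICP-IDMS is a simple isotope-ratio balance between sample, spike and blend. The sketch below is a generic single-dilution IDMS calculation, not the authors' data reduction; all numerical values (spike amount, isotope ratios, abundance) are invented for illustration.

        # Generic single isotope-dilution equation (illustrative values, not data from the study).
        # R = n(a)/n(b): ratio of spike-enriched isotope a to reference isotope b.
        def idms_amount(n_spike_b, R_sample, R_spike, R_blend):
            """Amount of reference isotope b contributed by the sample to the blend."""
            return n_spike_b * (R_spike - R_blend) / (R_blend - R_sample)

        # Hypothetical numbers: 1.0 nmol of reference isotope added with the spike,
        # natural sample ratio 0.05, spike ratio 10.0, measured blend ratio 1.2.
        n_sample_b = idms_amount(n_spike_b=1.0, R_sample=0.05, R_spike=10.0, R_blend=1.2)
        # Convert to total element amount with the (hypothetical) abundance of isotope b in the sample.
        n_sample_total = n_sample_b / 0.76
        print(f"analyte from sample: {n_sample_b:.3f} nmol of isotope b, {n_sample_total:.3f} nmol total")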

  3. Development of a robust method for isolation of shiga toxin-positive Escherichia coli (STEC) from fecal, plant, soil and water samples from a leafy greens production region in California.

    Directory of Open Access Journals (Sweden)

    Michael B Cooley

    Full Text Available During a 2.5-year survey of 33 farms and ranches in a major leafy greens production region in California, 13,650 produce, soil, livestock, wildlife, and water samples were tested for Shiga toxin (stx)-producing Escherichia coli (STEC). Overall, 357 and 1,912 samples were positive for E. coli O157:H7 (2.6%) or non-O157 STEC (14.0%), respectively. Isolates differentiated by O-typing ELISA and multilocus variable number tandem repeat analysis (MLVA) resulted in 697 O157:H7 and 3,256 non-O157 STEC isolates saved for further analysis. Cattle (7.1%), feral swine (4.7%), sediment (4.4%), and water (3.3%) samples were positive for E. coli O157:H7; 7/32 birds, 2/145 coyotes, and 3/88 samples from elk also were positive. Non-O157 STEC were at approximately 5-fold higher incidence compared to O157 STEC: cattle (37.9%), feral swine (21.4%), birds (2.4%), small mammals (3.5%), deer or elk (8.3%), water (14.0%), sediment (12.3%), produce (0.3%) and soil adjacent to produce (0.6%). stx1, stx2 and stx1/stx2 genes were detected in 63%, 74% and 35% of STEC isolates, respectively. Subtilase, intimin and hemolysin genes were present in 28%, 25% and 79% of non-O157 STEC, respectively; 23% were of the "Top 6" O-types. The initial method was modified twice during the study, revealing evidence of culture bias based on differences in virulence and O-antigen profiles. MLVA typing revealed a diverse collection of O157 and non-O157 STEC strains isolated from multiple locations and sources, including O157 STEC strains matching outbreak strains. These results emphasize the importance of multiple approaches for isolation of non-O157 STEC, show that livestock and wildlife are common sources of potentially virulent STEC, and provide evidence of STEC persistence and movement in a leafy greens production environment.

  4. Self-optimizing robust nonlinear model predictive control

    NARCIS (Netherlands)

    Lazar, M.; Heemels, W.P.M.H.; Jokic, A.; Thoma, M.; Allgöwer, F.; Morari, M.

    2009-01-01

    This paper presents a novel method for designing robust MPC schemes that are self-optimizing in terms of disturbance attenuation. The method employs convex control Lyapunov functions and disturbance bounds to optimize robustness of the closed-loop system on-line, at each sampling instant - a unique

  5. Sample Tests as a Teaching Strategy in Physics Courses (Pre-exámenes como una estrategia didáctica en los cursos de física)

    Directory of Open Access Journals (Sweden)

    Morales Ríos Herbert

    2010-04-01

    Full Text Available We describe our experience of using sample tests (pre-exams) as a teaching strategy to improve student performance in the courses of the physics degree programme. The main purpose was to identify, in advance, the mathematical and physical deficiencies of the students, to correct them before administering the actual examination, and thereby to establish a formative evaluation in the course. In particular, the evaluated subject was linear oscillations in the Theoretical Mechanics course. We describe what the strategy consists of, the motivation for implementing it, and the roles of both the professor and the students. We analyze the results of the experience and conclude with the benefits, limitations and future prospects of using sample tests, presenting them as one more tool for university teaching.

  6. Robust portfolio choice with ambiguity and learning about return predictability

    DEFF Research Database (Denmark)

    Larsen, Linda Sandris; Branger, Nicole; Munk, Claus

    2013-01-01

    We analyze the optimal stock-bond portfolio under both learning and ambiguity aversion. Stock returns are predictable by an observable and an unobservable predictor, and the investor has to learn about the latter. Furthermore, the investor is ambiguity-averse and has a preference for investment strategies that are robust to model misspecifications. We derive a closed-form solution for the optimal robust investment strategy. We find that both learning and ambiguity aversion impact the level and structure of the optimal stock investment. Suboptimal strategies resulting either from not learning or from not considering ambiguity can lead to economically significant losses.

  7. Robust Trust in Expert Testimony

    Directory of Open Access Journals (Sweden)

    Christian Dahlman

    2015-05-01

    Full Text Available The standard of proof in criminal trials should require that the evidence presented by the prosecution is robust. This requirement of robustness says that it must be unlikely that additional information would change the probability that the defendant is guilty. Robustness is difficult for a judge to estimate, as it requires the judge to assess the possible effect of information that he or she does not have. This article is concerned with expert witnesses and proposes a method for reviewing the robustness of expert testimony. According to the proposed method, the robustness of expert testimony is estimated with regard to competence, motivation, external strength, internal strength and relevance. The danger of trusting non-robust expert testimony is illustrated with an analysis of the Thomas Quick case, a Swedish legal scandal in which a patient at a mental institution was wrongfully convicted of eight murders.

  8. Evaluation of Multiple Linear Regression-Based Limited Sampling Strategies for Enteric-Coated Mycophenolate Sodium in Adult Kidney Transplant Recipients.

    Science.gov (United States)

    Brooks, Emily K; Tett, Susan E; Isbel, Nicole M; McWhinney, Brett; Staatz, Christine E

    2018-04-01

    Although multiple linear regression-based limited sampling strategies (LSSs) have been published for enteric-coated mycophenolate sodium, none have been evaluated for the prediction of subsequent mycophenolic acid (MPA) exposure. This study aimed to examine the predictive performance of published LSSs for the estimation of future MPA area under the concentration-time curve from 0 to 12 hours (AUC0-12) in renal transplant recipients. Total MPA plasma concentrations were measured in 20 adult renal transplant patients on 2 occasions a week apart. All subjects received concomitant tacrolimus and were approximately 1 month after transplant. Samples were taken at 0, 0.33, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 6, and 8 hours and at 0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 3, 4, 6, 9, and 12 hours after dose on the first and second sampling occasion, respectively. Predicted MPA AUC0-12 was calculated using 19 published LSSs and data from the first or second sampling occasion for each patient, and compared with the second-occasion full MPA AUC0-12 calculated using the linear trapezoidal rule. Bias (median percentage prediction error) and imprecision (median absolute prediction error) were determined for the prediction of the full MPA AUC0-12. Based on the observed bias and imprecision, accurate prediction of future MPA exposure with a multiple linear regression-based LSS was not possible without concentrations up to at least 8 hours after the dose.
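
    The quantities evaluated here are straightforward to reproduce. The sketch below (illustrative only; the concentration-time data and the 3-point LSS coefficients are invented, not taken from the study) computes a full AUC0-12 by the linear trapezoidal rule and the percentage prediction error of an LSS estimate; bias and imprecision over a cohort would then be the median of the signed and absolute errors across patients.

        # Illustrative only: concentration data and LSS coefficients below are made up.
        import numpy as np

        times = np.array([0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 3, 4, 6, 9, 12])   # h
        conc  = np.array([1.2, 3.5, 8.0, 12.4, 10.1, 8.7, 7.5, 5.9, 4.2, 3.1, 2.0, 1.1, 0.7])  # mg/L

        auc_full = np.trapz(conc, times)                 # linear trapezoidal AUC0-12

        # A hypothetical 3-concentration-time-point LSS: AUC ~ b0 + b1*C1 + b2*C2 + b3*C8
        b0, b1, b2, b3 = 5.0, 1.8, 2.4, 6.0
        c1, c2, c8 = np.interp([1, 2, 8], times, conc)
        auc_lss = b0 + b1 * c1 + b2 * c2 + b3 * c8

        pe = 100.0 * (auc_lss - auc_full) / auc_full     # percentage prediction error
        print(f"full AUC0-12 = {auc_full:.1f}, LSS estimate = {auc_lss:.1f}, PE = {pe:+.1f}%")
        # Bias/imprecision over a cohort would be the median of PE and of |PE| across patients.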

  9. A fully robust PARAFAC method for analyzing fluorescence data

    DEFF Research Database (Denmark)

    Engelen, Sanne; Frosch, Stina; Jørgensen, Bo

    2009-01-01

    and Rayleigh scatter. Recently, a robust PARAFAC method that circumvents the harmful effects of outlying samples has been developed. For removing the scatter effects on the final PARAFAC model, different techniques exist. Newly, an automated scatter identification tool has been constructed. However, there still exists no robust method for handling fluorescence data encountering both outlying EEM landscapes and scatter. In this paper, we present an iterative algorithm where the robust PARAFAC method and the scatter identification tool are alternately performed. A fully automated robust PARAFAC method...

  10. Robustness of weighted networks

    Science.gov (United States)

    Bellingeri, Michele; Cassi, Davide

    2018-01-01

    Complex network response to node loss is a central question in different fields of network science, because node failure can cause the fragmentation of the network, thus compromising the functioning of the system. Previous studies considered binary networks, where the intensity (weight) of the links is not accounted for, i.e. a link is either present or absent. However, in real-world networks the weights of connections, and thus their importance for network functioning, can be widely different. Here, we analyzed the response of real-world and model networks to node loss, accounting for link intensity and the weighted structure of the network. We used both classic binary node properties and network functioning measures, introduced a weighted rank for node importance (node strength), and used a measure of network functioning that accounts for the weight of the links (weighted efficiency). We find that: (i) the efficiency of the attack strategies changed when using binary or weighted network functioning measures, both for real-world and model networks; (ii) in some cases, removing nodes according to weighted rank produced the highest damage when functioning was measured by the weighted efficiency; (iii) adopting weighted measures for the network damage changed the efficacy of the attack strategy with respect to the binary analyses. Our results show that if the weighted structure of complex networks is not taken into account, this may produce misleading models to forecast the system response to node failure, i.e. considering binary links may not unveil the real damage induced in the system. Last, once weighted measures are introduced, in order to discover the best attack strategy it is important to analyze the network response to node loss using node ranks that account for the intensity of the links to the node.
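
    As a rough illustration of the weighted measures discussed above, the sketch below builds a small hypothetical weighted graph with NetworkX, computes node strength and a weighted efficiency (with link distance taken as the inverse of the weight), and removes the strongest node to see the drop in efficiency. It is a generic illustration, not the authors' code or data.

        # Sketch of a weighted attack analysis (not the authors' code); the toy graph is hypothetical.
        import networkx as nx

        G = nx.Graph()
        G.add_weighted_edges_from([("a", "b", 5.0), ("b", "c", 1.0), ("c", "d", 4.0),
                                   ("a", "c", 0.5), ("b", "d", 2.0)])

        def weighted_efficiency(G):
            """Average of 1/d(i,j) with link 'distance' taken as 1/weight (strong links are short)."""
            n = G.number_of_nodes()
            if n < 2:
                return 0.0
            for u, v, d in G.edges(data=True):
                d["dist"] = 1.0 / d["weight"]
            lengths = dict(nx.all_pairs_dijkstra_path_length(G, weight="dist"))
            eff = sum(1.0 / lengths[u][v] for u in G for v in G if u != v and v in lengths[u])
            return eff / (n * (n - 1))

        # Node strength = sum of the weights of the links attached to the node.
        strength = dict(G.degree(weight="weight"))
        target = max(strength, key=strength.get)          # attack the strongest node first

        print("efficiency before attack:", round(weighted_efficiency(G), 3))
        H = G.copy()
        H.remove_node(target)
        print(f"efficiency after removing {target}:", round(weighted_efficiency(H), 3))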

  11. An Intercompany Perspective on Biopharmaceutical Drug Product Robustness Studies.

    Science.gov (United States)

    Morar-Mitrica, Sorina; Adams, Monica L; Crotts, George; Wurth, Christine; Ihnat, Peter M; Tabish, Tanvir; Antochshuk, Valentyn; DiLuzio, Willow; Dix, Daniel B; Fernandez, Jason E; Gupta, Kapil; Fleming, Michael S; He, Bing; Kranz, James K; Liu, Dingjiang; Narasimhan, Chakravarthy; Routhier, Eric; Taylor, Katherine D; Truong, Nobel; Stokes, Elaine S E

    2018-02-01

    The Biophorum Development Group (BPDG) is an industry-wide consortium enabling networking and sharing of best practices for the development of biopharmaceuticals. To gain a better understanding of current industry approaches for establishing biopharmaceutical drug product (DP) robustness, the BPDG-Formulation Point Share group conducted an intercompany collaboration exercise, which included a benchmarking survey and extensive group discussions around the scope, design, and execution of robustness studies. The results of this industry collaboration revealed several key common themes: (1) overall DP robustness is defined by both the formulation and the manufacturing process robustness; (2) robustness integrates the principles of quality by design (QbD); (3) DP robustness is an important factor in setting critical quality attribute control strategies and commercial specifications; (4) most companies employ robustness studies, along with prior knowledge, risk assessments, and statistics, to develop the DP design space; (5) studies are tailored to commercial development needs and the practices of each company. Three case studies further illustrate how a robustness study design for a biopharmaceutical DP balances experimental complexity, statistical power, scientific understanding, and risk assessment to provide the desired product and process knowledge. The BPDG-Formulation Point Share discusses identified industry challenges with regard to biopharmaceutical DP robustness and presents some recommendations for best practices. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  12. Cleanup strategies and advantages in the determination of several therapeutic classes of pharmaceuticals in wastewater samples by SPE-LC-MS/MS.

    Science.gov (United States)

    Sousa, M A; Gonçalves, C; Cunha, E; Hajšlová, J; Alpendurada, M F

    2011-01-01

    This work describes the development and validation of an offline solid-phase extraction with simultaneous cleanup capability, followed by liquid chromatography-(electrospray ionisation)-ion trap mass spectrometry, enabling the concurrent determination of 23 pharmaceuticals of diverse chemical nature, among the most consumed in Portugal, in wastewater samples. Several cleanup strategies, exploiting the physical and chemical properties of the analytes vs. interferences, alongside the use of internal standards, were assayed in order to minimise the influence of matrix components on the ionisation efficiency of the target analytes. After testing all combinations of adsorbents (normal-phase, ion exchange and mixed composition) and elution solvents, the best results were achieved with the mixed-anion exchange Oasis MAX cartridges. They provided recovery rates generally higher than 60%. The precision of the method ranged from 2% to 18% and from 4% to 19% (except for diclofenac (22%) and simvastatin (26%)) for intra- and inter-day analysis, respectively. Method detection limits varied between 1 and 20 ng L(-1), while method quantification limits were correspondingly higher. In the wastewater samples analysed, diclofenac and bezafibrate were detected in concentrations ranging from 1 to 20 μg L(-1), while gemfibrozil, simvastatin, ketoprofen, azithromycin, bisoprolol, lorazepam and paroxetine were quantified at levels below 1 μg L(-1). These WWTPs were given particular attention since they discharge their effluents into the Douro river, where water is extracted for the production of drinking water. Some sampling spots in this river were also analysed.

  13. Limited Sampling Strategy for Accurate Prediction of Pharmacokinetics of Saroglitazar: A 3-point Linear Regression Model Development and Successful Prediction of Human Exposure.

    Science.gov (United States)

    Joshi, Shuchi N; Srinivas, Nuggehally R; Parmar, Deven V

    2018-03-01

    Our aim was to develop and validate the extrapolative performance of a regression model using a limited sampling strategy for accurate estimation of the area under the plasma concentration versus time curve (AUC) for saroglitazar. Healthy-subject pharmacokinetic data from a well-powered food-effect study (fasted vs fed treatments; n = 50) were used in this work. The first 25 subjects' serial plasma concentration data up to 72 hours and the corresponding AUC 0-t (ie, 72 hours) from the fasting group comprised the training dataset used to develop the limited sampling model. The internal datasets for prediction included the remaining 25 subjects from the fasting group and all 50 subjects from the fed condition of the same study. The external datasets included pharmacokinetic data for saroglitazar from previous single-dose clinical studies. Limited sampling models were composed of correlations of 1, 2, and 3 concentration-time points with the AUC 0-t of saroglitazar. Only models with regression coefficients (R 2) >0.90 were screened for further evaluation. The best R 2 model was validated for its utility based on mean prediction error, mean absolute prediction error, and root mean square error. Both correlation between predicted and observed AUC 0-t of saroglitazar and verification of precision and bias using a Bland-Altman plot were carried out. None of the evaluated 1- and 2-concentration-time-point models achieved R 2 > 0.90. Among the various 3-concentration-time-point models, only 4 equations passed the predefined criterion of R 2 > 0.90. Limited sampling models with time points 0.5, 2, and 8 hours (R 2 = 0.9323) and 0.75, 2, and 8 hours (R 2 = 0.9375) were validated. Mean prediction error, mean absolute prediction error, and root mean square error supported the use of these models for the prediction of saroglitazar exposure. The same models, when applied to the AUC 0-t prediction of saroglitazar sulfoxide, showed comparable mean prediction error, mean absolute prediction error, and root mean square error, indicating that the validated 3-point model predicts the exposure of both saroglitazar and its sulfoxide metabolite.
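
    A minimal sketch of the model-building step is shown below: a 3-concentration-time-point multiple linear regression of AUC0-t, screened by the R² > 0.90 criterion and summarized by prediction-error metrics. The data are randomly generated and the chosen time points (0.5, 2 and 8 hours) are used only as an example; this is not the saroglitazar dataset.

        # Illustrative sketch of building and screening a 3-concentration-time-point model
        # by R^2 (> 0.90 criterion); all data here are randomly generated, not study data.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 25                                            # training subjects
        c05 = rng.lognormal(1.0, 0.3, n)                  # concentration at 0.5 h
        c2  = rng.lognormal(1.5, 0.3, n)                  # concentration at 2 h
        c8  = rng.lognormal(0.5, 0.3, n)                  # concentration at 8 h
        auc = 2.0 * c05 + 4.0 * c2 + 9.0 * c8 + rng.normal(0, 1.0, n)   # synthetic "true" AUC0-t

        X = np.column_stack([np.ones(n), c05, c2, c8])    # intercept + C0.5, C2, C8
        beta, *_ = np.linalg.lstsq(X, auc, rcond=None)

        pred = X @ beta
        r2 = 1.0 - np.sum((auc - pred) ** 2) / np.sum((auc - auc.mean()) ** 2)
        mpe  = np.mean((pred - auc) / auc) * 100          # mean prediction error, %
        rmse = np.sqrt(np.mean((pred - auc) ** 2))
        print(f"R^2 = {r2:.3f} (retain only if > 0.90), MPE = {mpe:+.1f}%, RMSE = {rmse:.2f}")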

  14. Efficient robust conditional random fields.

    Science.gov (United States)

    Song, Dongjin; Liu, Wei; Zhou, Tianyi; Tao, Dacheng; Meyer, David A

    2015-10-01

    Conditional random fields (CRFs) are a flexible yet powerful probabilistic approach and have shown advantages for popular applications in various areas, including text analysis, bioinformatics, and computer vision. Traditional CRF models, however, are incapable of selecting relevant features as well as suppressing noise from noisy original features. Moreover, conventional optimization methods often converge slowly in solving the training procedure of CRFs, and will degrade significantly for tasks with a large number of samples and features. In this paper, we propose robust CRFs (RCRFs) to simultaneously select relevant features and suppress noise. An optimal gradient method (OGM) is further designed to train RCRFs efficiently. Specifically, the proposed RCRFs employ the l1 norm of the model parameters to regularize the objective used by traditional CRFs, thereby enabling discovery of the relevant unary features and pairwise features of CRFs. In each iteration of OGM, the gradient direction is determined jointly by the current gradient together with the historical gradients, and the Lipschitz constant is leveraged to specify the proper step size. We show that the OGM can tackle the RCRF model training very efficiently, achieving the optimal convergence rate O(1/k²) (where k is the number of iterations). This convergence rate is theoretically superior to the convergence rate O(1/k) of previous first-order optimization methods. Extensive experiments performed on three practical image segmentation tasks demonstrate the efficacy of OGM in training our proposed RCRFs.
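
    The following sketch illustrates the flavour of an accelerated (Nesterov-type) gradient scheme with an l1 penalty and a Lipschitz-constant step size, applied to a toy least-squares problem rather than the CRF training objective. It is meant only to show the momentum and soft-thresholding steps, not to reproduce the paper's OGM.

        # A minimal accelerated proximal-gradient sketch with an l1 penalty on a toy
        # least-squares problem (not the CRF objective of the paper).
        import numpy as np

        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def accelerated_l1(A, b, lam, iters=200):
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
            x = z = np.zeros(A.shape[1])
            t = 1.0
            for _ in range(iters):
                grad = A.T @ (A @ z - b)           # gradient of 0.5*||Az - b||^2
                x_new = soft_threshold(z - grad / L, lam / L)
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                z = x_new + (t - 1.0) / t_new * (x_new - x)   # momentum using past iterates
                x, t = x_new, t_new
            return x

        rng = np.random.default_rng(1)
        A = rng.normal(size=(50, 20))
        b = A @ (rng.normal(size=20) * (rng.random(20) < 0.3)) + 0.01 * rng.normal(size=50)
        w = accelerated_l1(A, b, lam=0.5)
        print("nonzero coefficients selected:", int(np.count_nonzero(w)))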

  15. Robustness of IPTV business models

    NARCIS (Netherlands)

    Bouwman, H.; Zhengjia, M.; Duin, P. van der; Limonard, S.

    2008-01-01

    The final stage in the STOF method is an evaluation of the robustness of the design, for which the method provides some guidelines. For many innovative services, the future holds numerous uncertainties, which makes evaluating the robustness of a business model a difficult task. In this chapter, we

  16. Robustness Evaluation of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard

    2009-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure.

  17. Reducing regional drought vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty

    Science.gov (United States)

    Trindade, B. C.; Reed, P. M.; Herman, J. D.; Zeff, H. B.; Characklis, G. W.

    2017-06-01

    Emerging water scarcity concerns in many urban regions are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risks for sustained regional droughts. Managing these uncertainties will require that regional water utilities identify coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of the mentioned utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative drought management strategies. Our results show that appropriately designing adaptive risk-of-failure action triggers required stressing them with a comprehensive sample of deeply uncertain factors in the computational search phase of MORDM. Search under the new ensemble of states-of-the-world is shown to fundamentally change perceived performance tradeoffs and substantially improve the robustness of individual utilities as well as the overall region to water scarcity. Search under deep uncertainty enhanced the discovery of how cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management must be employed jointly to improve regional robustness and decrease robustness conflicts between the utilities. Insights from this work have general merit for regions where

  18. Analytical strategies for uranium determination in natural water and industrial effluent samples; Estrategias analiticas para determinacao de uranio em amostras de aguas e efluentes industriais

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Juracir Silva

    2011-07-01

    The work was developed under the project 993/2007 - 'Development of analytical strategies for uranium determination in environmental and industrial samples - Environmental monitoring in the Caetite city, Bahia, Brazil' and made possible through a partnership established between Universidade Federal da Bahia and the Comissao Nacional de Energia Nuclear. Strategies were developed for uranium determination in natural water and effluents of a uranium mine. The first one was a critical evaluation of the determination of uranium by inductively coupled plasma optical emission spectrometry (ICP OES) performed using factorial and Doehlert designs involving the factors: acid concentration, radio-frequency power and nebuliser gas flow rate. Five emission lines were simultaneously studied (namely: 367.007, 385.464, 385.957, 386.592 and 409.013 nm), in the presence of HNO₃, CH₃COOH or HCl. The determinations in HNO₃ medium were the most sensitive. Among the factors studied, the gas flow rate was the most significant for the five emission lines. Calcium caused interference in the emission intensity for some lines, and iron did not interfere (at least up to 10 mg L⁻¹) in the five lines studied. The presence of 13 other elements did not affect the emission intensity of uranium for the lines chosen. The optimized method, using the line at 385.957 nm, allows the determination of uranium with a limit of quantification of 30 µg L⁻¹ and precision expressed as RSD lower than 2.2% for uranium concentrations of either 500 or 1000 µg L⁻¹. In the second one, a highly sensitive flow-based procedure for uranium determination in natural waters is described. A 100-cm optical path flow cell based on a liquid-core waveguide (LCW) was exploited to increase the sensitivity of the arsenazo III method, aiming to achieve the limits established by environmental regulations. The flow system was designed with solenoid micro-pumps in order to improve mixing and

  19. Robust loss functions for boosting.

    Science.gov (United States)

    Kanamori, Takafumi; Takenouchi, Takashi; Eguchi, Shinto; Murata, Noboru

    2007-08-01

    Boosting is known as a gradient descent algorithm over loss functions. It is often pointed out that the typical boosting algorithm, AdaBoost, is highly affected by outliers. In this letter, loss functions for robust boosting are studied. Based on the concept of robust statistics, we propose a transformation of loss functions that makes boosting algorithms robust against extreme outliers. Next, the truncation of loss functions is applied to contamination models that describe the occurrence of mislabels near decision boundaries. Numerical experiments illustrate that the proposed loss functions derived from the contamination models are useful for handling highly noisy data in comparison with other loss functions.
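
    The truncation idea can be illustrated numerically. The sketch below compares the exponential loss used by AdaBoost with a capped (truncated) version over a range of margins; the cap value is arbitrary, and the exact robust losses proposed in the letter are not reproduced here.

        # Compare the unbounded exponential loss with a truncated (capped) version.
        # The cap value 5.0 is an arbitrary illustration, not the paper's construction.
        import numpy as np

        margins = np.linspace(-5, 3, 9)                 # y*f(x); large negative = badly misclassified
        exp_loss = np.exp(-margins)                     # grows without bound for outliers
        robust_loss = np.minimum(np.exp(-margins), 5.0) # truncated, so outliers contribute a bounded amount

        for m, e, r in zip(margins, exp_loss, robust_loss):
            print(f"margin {m:+.1f}: exponential {e:8.2f}   truncated {r:6.2f}")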

  20. Theoretical Framework for Robustness Evaluation

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2011-01-01

    This paper presents a theoretical framework for evaluation of robustness of structural systems, incl. bridges and buildings. Typically modern structural design codes require that ‘the consequence of damages to structures should not be disproportional to the causes of the damages’. However, although the importance of robustness for structural design is widely recognized, the code requirements are not specified in detail, which makes the practical use difficult. This paper describes a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines...

  1. Robustness of airline route networks

    Science.gov (United States)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route networks by defining their routes through supply and demand considerations, paying little attention to network performance indicators such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and for its entire geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following the Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central than LCC bases in their route networks. As a result, LCC route networks are more robust than FSC networks.

  2. Optimization of robustness of interdependent network controllability by redundant design.

    Directory of Open Access Journals (Sweden)

    Zenghu Zhang

    Full Text Available Controllability of complex networks has been a hot topic in recent years. Real networks, regarded as interdependent networks, are always coupled together by multiple networks. The cascading process of interdependent networks, including interdependent failure and overload failure, will destroy the robustness of controllability for the whole network. Therefore, the optimization of the robustness of interdependent network controllability is of great importance in the research area of complex networks. In this paper, based on the model of interdependent networks constructed first, we determine the cascading process under different proportions of node attacks. Then, the structural controllability of interdependent networks is measured by the minimum driver nodes. Furthermore, we propose a parameter which can be obtained from the structure and minimum driver set of interdependent networks under different proportions of node attacks, and analyze the robustness of interdependent network controllability. Finally, we optimize the robustness of interdependent network controllability by redundant design, including node backup and redundancy edge backup, and improve the redundant design by proposing different strategies according to their cost. Comparisons of redundant design strategies are conducted to find the best strategy. Results show that node backup and redundancy edge backup can indeed decrease the number of nodes suffering from failure and improve the robustness of controllability. Considering the cost of redundant design, we should choose BBS (betweenness-based strategy) or DBS (degree-based strategy) for node backup and HDF (high-degree-first) for redundancy edge backup. Above all, our proposed strategies are feasible and effective at improving the robustness of interdependent network controllability.
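
    Structural controllability via minimum driver nodes is commonly computed from a maximum matching of the directed network: nodes whose "in" copy is unmatched must be driven directly. The sketch below performs this generic computation on a hypothetical toy graph with NetworkX; it is not the interdependent-network model or the robustness parameter proposed in the paper.

        # Minimum driver nodes of a directed network via maximum bipartite matching
        # (generic structural-controllability computation on a hypothetical toy graph).
        import networkx as nx

        edges = [("x1", "x2"), ("x2", "x3"), ("x2", "x4"), ("x4", "x5")]
        nodes = sorted({u for e in edges for u in e})

        # Bipartite representation: each node gets an "out" copy and an "in" copy.
        B = nx.Graph()
        B.add_nodes_from([(n, "out") for n in nodes], bipartite=0)
        B.add_nodes_from([(n, "in") for n in nodes], bipartite=1)
        B.add_edges_from([((u, "out"), (v, "in")) for u, v in edges])

        matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=[(n, "out") for n in nodes])
        matched_in = {v for v in matching if v[1] == "in"}

        # Unmatched "in" copies correspond to driver nodes; at least one driver is always needed.
        drivers = [n for n in nodes if (n, "in") not in matched_in] or [nodes[0]]
        print("minimum driver nodes:", drivers)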

  3. A multi-model fusion strategy for multivariate calibration using near and mid-infrared spectra of samples from brewing industry

    Science.gov (United States)

    Tan, Chao; Chen, Hui; Wang, Chao; Zhu, Wanping; Wu, Tong; Diao, Yuanbo

    2013-03-01

    Near- and mid-infrared (NIR/MIR) spectroscopy techniques have gained great acceptance in industry due to their multiple applications and versatility. However, successful application often depends heavily on the construction of accurate and stable calibration models. For this purpose, a simple multi-model fusion strategy is proposed. It is the combination of a Kohonen self-organizing map (KSOM), mutual information (MI) and partial least squares (PLS), and is therefore named KMICPLS. It works as follows: first, the original training set is fed into a KSOM for unsupervised clustering of samples, from which a series of training subsets are constructed. Thereafter, on each of the training subsets, an MI spectrum is calculated and only the variables with MI values higher than the mean value are retained, based on which a candidate PLS model is constructed. Finally, a fixed number of PLS models are selected to produce a consensus model. Two NIR/MIR spectral datasets from the brewing industry are used for experiments. The results confirm its superior performance against two reference algorithms, i.e., the conventional PLS and genetic algorithm-PLS (GAPLS). It can build more accurate and stable calibration models without increasing the complexity, and can be generalized to other NIR/MIR applications.
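
    A rough sketch of the fusion idea is given below: cluster the training set, select variables by mutual information within each subset, fit a PLS model per subset, and average the predictions. KMeans is used as a simple stand-in for the Kohonen map, and the data are synthetic, so this is only an illustration of the workflow, not the KMICPLS implementation.

        # Illustrative multi-model fusion workflow (KMeans stands in for the Kohonen map).
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 50))                       # synthetic "spectra"
        y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=120)

        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

        models = []
        for k in range(3):
            Xk, yk = X[labels == k], y[labels == k]
            mi = mutual_info_regression(Xk, yk, random_state=0)
            keep = np.where(mi > mi.mean())[0]               # keep variables above the mean MI
            pls = PLSRegression(n_components=3).fit(Xk[:, keep], yk)
            models.append((keep, pls))

        x_new = rng.normal(size=(1, 50))
        consensus = np.mean([pls.predict(x_new[:, keep]).ravel()[0] for keep, pls in models])
        print(f"consensus prediction: {consensus:.2f}")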

  4. Engineering Robustness of Microbial Cell Factories.

    Science.gov (United States)

    Gong, Zhiwei; Nielsen, Jens; Zhou, Yongjin J

    2017-10-01

    Metabolic engineering and synthetic biology offer great prospects for developing microbial cell factories capable of converting renewable feedstocks into fuels, chemicals, food ingredients, and pharmaceuticals. However, prohibitively low production rates and mass concentrations remain major hurdles in industrial processes, even when the biosynthetic pathways are comprehensively optimized. These limitations are caused by a variety of factors unfavourable to host cell survival, such as harsh industrial conditions, fermentation inhibitors from biomass hydrolysates, and toxic compounds including metabolic intermediates and valuable target products. Therefore, engineered microbes with robust phenotypes are essential for achieving higher yield and productivity. In this review, recent advances in engineering the robustness and tolerance of cell factories to cope with these issues are described, and novel strategies with great potential to enhance the robustness of cell factories are briefly introduced, including metabolic pathway balancing, transporter engineering, and adaptive laboratory evolution. This review also highlights the integration of advanced systems and synthetic biology principles toward engineering the harmony of overall cell function, beyond specific pathways or enzymes. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. POD Mode Robustness for the Turbulent Jet Sampled with PIV

    DEFF Research Database (Denmark)

    Hodzic, Azur; Meyer, Knud Erik; Velte, Clara Marika

    2017-01-01

    An important challenge in the description and simulation of turbulence is the large amount of information that is needed to describe even relatively simple flows in detail. The frequent disagreement between Reynolds averaged Navier–Stokes-based simulations and experiments is well known. Albeit direct numerical simulations and, in certain cases, large eddy simulations tend to agree fairly well with experiments, their practical implementation introduces the problem of data storage. The experimentalist, however, experiences the same problem, using high-speed particle image velocimetry (PIV) systems and even high-speed volumetric PIV systems providing fully three-dimensional velocity fields. Another challenge is how we verify simulations against experiments and ensure that we have indeed simulated the same flow that we have measured.
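
    Snapshot POD itself is compactly expressed as a singular value decomposition of the mean-subtracted snapshot matrix. The sketch below applies it to synthetic data (not PIV measurements from the jet) and reports the energy captured by the leading modes, which is the kind of quantity whose robustness to the sampling is at issue here.

        # Minimal snapshot-POD sketch on synthetic data (not the jet PIV measurements).
        import numpy as np

        rng = np.random.default_rng(0)
        n_points, n_snapshots = 2000, 200
        t = np.linspace(0, 4 * np.pi, n_snapshots)
        phi1, phi2 = rng.normal(size=n_points), rng.normal(size=n_points)
        # Synthetic snapshot matrix: two coherent "modes" plus noise.
        U = (np.outer(phi1, np.sin(t)) + 0.3 * np.outer(phi2, np.cos(2 * t))
             + 0.05 * rng.normal(size=(n_points, n_snapshots)))

        U = U - U.mean(axis=1, keepdims=True)            # subtract the temporal mean
        modes, sing, coeffs = np.linalg.svd(U, full_matrices=False)

        energy = sing ** 2 / np.sum(sing ** 2)
        print("energy captured by first 3 POD modes:", np.round(energy[:3], 3))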

  6. Robust methods for data reduction

    CERN Document Server

    Farcomeni, Alessio

    2015-01-01

    Robust Methods for Data Reduction gives a non-technical overview of robust data reduction techniques, encouraging the use of these important and useful methods in practical applications. The main areas covered include principal components analysis, sparse principal component analysis, canonical correlation analysis, factor analysis, clustering, double clustering, and discriminant analysis. The first part of the book illustrates how dimension reduction techniques synthesize available information by reducing the dimensionality of the data. The second part focuses on cluster and discriminant analysis.

  7. Robust boosting via convex optimization

    Science.gov (United States)

    Rätsch, Gunnar

    2001-12-01

    In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues: o The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution. o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms. o How to make Boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit a high noise robustness. o How to adapt boosting to regression problems

  8. The effectiveness of robust RMCD control chart as outliers’ detector

    Science.gov (United States)

    Darmanto; Astutik, Suci

    2017-12-01

    A well-known control chart for monitoring a multivariate process is Hotelling's T², whose parameters are classically estimated; it is very sensitive to, and marred by, the masking and swamping effects of outlying data. To overcome this situation, robust estimators are strongly recommended. One such robust estimator is the re-weighted minimum covariance determinant (RMCD), which has the same robustness characteristics as the MCD. In this paper, effectiveness means the accuracy of the RMCD control chart in detecting outliers as real outliers, in other words, how effectively this control chart can identify and remove the masking and swamping effects of outliers. We assessed the effectiveness of the robust control chart by simulation, considering different scenarios: sample size n, proportion of outliers, and number of quality characteristics p. We found that in some scenarios this RMCD robust control chart works effectively.
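
    A minimal robust control chart along these lines can be assembled from scikit-learn's MCD estimator, which includes the usual re-weighting step. The sketch below simulates a small contaminated dataset, computes robust squared distances, and flags points above an approximate chi-square control limit; the simulation settings are illustrative, not those of the paper.

        # Robust multivariate control chart sketch: re-weighted MCD location/scatter,
        # robust squared distances, approximate chi-square control limit.
        import numpy as np
        from scipy.stats import chi2
        from sklearn.covariance import MinCovDet

        rng = np.random.default_rng(0)
        p, n = 3, 100
        X = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n)
        X[:5] += 6.0                                     # a few outlying observations

        mcd = MinCovDet(random_state=0).fit(X)           # includes the usual re-weighting step
        d2 = mcd.mahalanobis(X)                          # robust squared distances
        ucl = chi2.ppf(0.999, df=p)                      # approximate upper control limit

        flagged = np.where(d2 > ucl)[0]
        print("points signalled as out of control:", flagged)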

  9. Enhancing product robustness in reliability-based design optimization

    International Nuclear Information System (INIS)

    Zhuang, Xiaotian; Pan, Rong; Du, Xiaoping

    2015-01-01

    Different types of uncertainties need to be addressed in a product design optimization process. In this paper, the uncertainties in both product design variables and environmental noise variables are considered. Reliability-based design optimization (RBDO) is integrated with robust product design (RPD) to concurrently reduce the production cost and the long-term operation cost, including quality loss, in the process of product design. This problem leads to a multi-objective optimization with probabilistic constraints. In addition, the model uncertainties associated with a surrogate model derived from numerical computation methods, such as finite element analysis, are addressed. A hierarchical experimental design approach, augmented by a sequential sampling strategy, is proposed to construct the response surface of the product performance function for finding optimal design solutions. The proposed method is demonstrated through an engineering example. - Highlights: • A unifying framework for integrating RBDO and RPD is proposed. • Implicit product performance function is considered. • The design problem is solved by sequential optimization and reliability assessment. • A sequential sampling technique is developed for improving design optimization. • The comparison with traditional RBDO is provided

  10. Robust design optimization using the price of robustness, robust least squares and regularization methods

    Science.gov (United States)

    Bukhari, Hassan J.

    2017-12-01

    In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, which means it is minimally sensitive to any perturbations in parameters. The first method uses the price-of-robustness approach, which assumes the uncertain parameters to be symmetric and bounded. The robustness of the design can be controlled by limiting the number of parameters that are allowed to perturb. The second method uses the robust least squares method to determine the optimal parameters when the data itself is subjected to perturbations instead of the parameters. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems, one linear and the other non-linear. This methodology is compared with a prior method using multiple Monte Carlo simulation runs, which shows that the approach presented in this paper results in better performance.
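
    The contrast between an ordinary and a Tikhonov-regularized solution can be shown on an ill-conditioned toy problem. The sketch below (regularization weight and problem sizes are arbitrary choices, not taken from the paper) compares how much each solution moves when the data are slightly perturbed.

        # Ordinary least squares vs a Tikhonov-regularized solution on an ill-conditioned
        # toy problem; the regularization weight is illustrative only.
        import numpy as np

        rng = np.random.default_rng(0)
        A = np.vander(np.linspace(0, 1, 20), 8)          # ill-conditioned design matrix
        x_true = rng.normal(size=8)
        b = A @ x_true + 0.01 * rng.normal(size=20)

        x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

        lam = 1e-3                                        # Tikhonov / ridge weight
        x_reg = np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ b)

        # Sensitivity check: perturb the data slightly and compare the change in the solutions.
        b_pert = b + 0.01 * rng.normal(size=20)
        d_ls  = np.linalg.norm(np.linalg.lstsq(A, b_pert, rcond=None)[0] - x_ls)
        d_reg = np.linalg.norm(np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ b_pert) - x_reg)
        print(f"solution change under perturbation: LS {d_ls:.3f}  vs  regularized {d_reg:.3f}")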

  11. Design and implementation of robust controllers for a gait trainer.

    Science.gov (United States)

    Wang, F C; Yu, C H; Chou, T Y

    2009-08-01

    This paper applies robust algorithms to control an active gait trainer for children with walking disabilities. Compared with traditional rehabilitation procedures, in which two or three trainers are required to assist the patient, a motor-driven mechanism was constructed to improve the efficiency of the procedures. First, a six-bar mechanism was designed and constructed to mimic the trajectory of children's ankles in walking. Second, system identification techniques were applied to obtain system transfer functions at different operating points by experiments. Third, robust control algorithms were used to design H∞ robust controllers for the system. Finally, the designed controllers were implemented to verify experimentally the system performance. From the results, the proposed robust control strategies are shown to be effective.

  12. Real-time control systems: feedback, scheduling and robustness

    Science.gov (United States)

    Simon, Daniel; Seuret, Alexandre; Sename, Olivier

    2017-08-01

    The efficient control of real-time distributed systems, where continuous components are governed through digital devices and communication networks, needs a careful examination of the constraints arising from the different involved domains inside co-design approaches. Thanks to the robustness of feedback control, both new control methodologies and slackened real-time scheduling schemes are proposed beyond the frontiers between these traditionally separated fields. A methodology to design robust aperiodic controllers is provided, where the sampling interval is considered as a control variable of the system. Promising experimental results are provided to show the feasibility and robustness of the approach.

  13. Aromatic hydrocarbons in produced water from offshore oil and gas production. Test of sample strategy; Aromatiske kulbrinter i produceret vand fra offshore olie- og gas industrien. Test af proevetagningsstrategi

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, A.

    2005-07-01

    In co-operation with the Danish EPA, the National Environmental Research Institute (NERI) has carried out a series of measurements of aromatic hydrocarbons in produced water from an offshore oil and gas production platform in the Danish sector of the North Sea as part of the project 'Testing of sampling strategy for aromatic hydrocarbons in produced water from the offshore oil and gas industry'. The measurements included both volatile (BTEX: benzene, toluene, ethylbenzene and xylenes) and semi-volatile aromatic hydrocarbons: NPD (naphthalenes, phenanthrenes and dibenzothiophenes) and selected PAHs (polycyclic aromatic hydrocarbons). In total, 12 samples of produced water were collected at the Dan FF production platform in the North Sea by the operator, Maersk Oil and Gas, as four sets of three parallel samples from November 24 to December 02, 2004. After collection of the last set, the samples were shipped to NERI for analysis. The water samples were collected in 1 L glass bottles that were filled completely (without overfilling) and tightly closed. After sampling, the samples were preserved with hydrochloric acid and cooled below ambient temperature until being shipped to NERI. Here all samples were analysed in duplicate, and the results show that BTEX levels were reduced compared to similar measurements carried out by NERI in 2002 and by others. In this work, BTEX levels were approximately 5 mg/L, while similar studies showed levels in the range 0.5-35 mg/L. For NPD, levels were similar, 0.5-1.4 mg/L, while for PAH they seemed elevated: 0.1-0.4 mg/L in this work compared to 0.001-0.3 mg/L in similar studies. The applied sampling strategy has been tested by performing analysis of variance on the analytical data. The test of the analytical data has shown that the mean values of the three parallel samples collected in series constituted a good estimate of the levels at the time of sampling; thus, the variance between the parallel samples was not significant.
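
    The sampling-strategy test described here amounts to a one-way analysis of variance across the sets of parallel samples. The sketch below runs such a test with SciPy on invented concentrations (they are not the NERI measurements) purely to illustrate the computation.

        # One-way ANOVA across sets of parallel samples (hypothetical BTEX values, mg/L).
        from scipy.stats import f_oneway

        # Four sampling occasions, three parallel samples each.
        set1 = [4.8, 5.1, 4.9]
        set2 = [5.3, 5.0, 5.2]
        set3 = [4.7, 4.9, 4.8]
        set4 = [5.1, 5.2, 5.0]

        stat, p = f_oneway(set1, set2, set3, set4)
        print(f"F = {stat:.2f}, p = {p:.3f}")
        # A small within-set spread relative to the between-set spread supports using the
        # mean of the parallel samples as the estimate for each sampling occasion.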

  14. Advances in robust fractional control

    CERN Document Server

    Padula, Fabrizio

    2015-01-01

    This monograph presents design methodologies for (robust) fractional control systems. It shows the reader how to take advantage of the superior flexibility of fractional control systems compared with integer-order systems in achieving more challenging control requirements. There is a high degree of current interest in fractional systems and fractional control arising from both academia and industry and readers from both milieux are catered to in the text. Different design approaches having in common a trade-off between robustness and performance of the control system are considered explicitly. The text generalizes methodologies, techniques and theoretical results that have been successfully applied in classical (integer) control to the fractional case. The first part of Advances in Robust Fractional Control is the more industrially-oriented. It focuses on the design of fractional controllers for integer processes. In particular, it considers fractional-order proportional-integral-derivative controllers, becau...

  15. Robustness of digital artist authentication

    DEFF Research Database (Denmark)

    Jacobsen, Robert; Nielsen, Morten

    In many cases it is possible to determine the authenticity of a painting from digital reproductions of the paintings; this has been demonstrated for a variety of artists and with different approaches. Common to all these methods in digital artist authentication is that the potential of the method...... is in focus, while the robustness has not been considered, i.e. the degree to which the data collection process influences the decision of the method. However, in order for an authentication method to be successful in practice, it needs to be robust to plausible error sources from the data collection....... In this paper we investigate the robustness of the newly proposed authenticity method introduced by the authors based on second generation multiresolution analysis. This is done by modelling a number of realistic factors that can occur in the data collection....

  16. Robustness of holonomic quantum gates

    International Nuclear Information System (INIS)

    Solinas, P.; Zanardi, P.; Zanghi, N.

    2005-01-01

    Full text: If the driving field fluctuates during the quantum evolution, this produces errors in the applied operator. Holonomic (and geometrical) quantum gates are believed to be robust against certain kinds of noise. Because of their geometrical dependence, the holonomic operators can be robust against this kind of noise; in fact, if the fluctuations are fast enough they cancel out, leaving the final operator unchanged. I present numerical studies of holonomic quantum gates subject to this parametric noise; the fidelity between the noisy and the ideal evolution is calculated for different noise correlation times. The holonomic quantum gates seem robust not only for fast fluctuating fields but also for slow fluctuating fields. These results can be explained as due to the geometrical features of the holonomic operator: for fast fluctuating fields the fluctuations are canceled out, while for slow fluctuating fields the fluctuations do not perturb the loop in the parameter space. (author)

  17. Stable isotope labeling strategy based on coding theory

    Energy Technology Data Exchange (ETDEWEB)

    Kasai, Takuma; Koshiba, Seizo; Yokoyama, Jun; Kigawa, Takanori, E-mail: kigawa@riken.jp [RIKEN Quantitative Biology Center (QBiC), Laboratory for Biomolecular Structure and Dynamics (Japan)

    2015-10-15

    We describe a strategy for stable isotope-aided protein nuclear magnetic resonance (NMR) analysis, called stable isotope encoding. The basic idea of this strategy is that amino-acid selective labeling can be considered as “encoding and decoding” processes, in which the information of amino acid type is encoded by the stable isotope labeling ratio of the corresponding residue and it is decoded by analyzing NMR spectra. According to the idea, the strategy can diminish the required number of labeled samples by increasing the information content per sample, enabling discrimination of 19 kinds of non-proline amino acids with only three labeled samples. The idea also enables this strategy to be combined with information technologies, such as error detection by a check digit, to improve the robustness of analyses with low quality data. Stable isotope encoding will facilitate NMR analyses of proteins under non-ideal conditions, such as those in large complex systems, with low solubility, and in living cells.

  18. Stable isotope labeling strategy based on coding theory

    International Nuclear Information System (INIS)

    Kasai, Takuma; Koshiba, Seizo; Yokoyama, Jun; Kigawa, Takanori

    2015-01-01

    We describe a strategy for stable isotope-aided protein nuclear magnetic resonance (NMR) analysis, called stable isotope encoding. The basic idea of this strategy is that amino-acid selective labeling can be considered as “encoding and decoding” processes, in which the information of amino acid type is encoded by the stable isotope labeling ratio of the corresponding residue and it is decoded by analyzing NMR spectra. According to the idea, the strategy can diminish the required number of labeled samples by increasing the information content per sample, enabling discrimination of 19 kinds of non-proline amino acids with only three labeled samples. The idea also enables this strategy to be combined with information technologies, such as error detection by a check digit, to improve the robustness of analyses with low quality data. Stable isotope encoding will facilitate NMR analyses of proteins under non-ideal conditions, such as those in large complex systems, with low solubility, and in living cells.
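
    The encoding idea in these two records can be made concrete with a toy codebook: give each amino-acid type a length-3 codeword over a few labeling ratios, one digit per labeled sample, so that 3^3 = 27 codewords comfortably cover the 19 non-proline amino acids, and decode measured ratios back to the nearest codeword. The sketch below is only an illustration of the idea, not the authors' scheme; the ratio alphabet and the codeword assignment are hypothetical.

```python
# A minimal sketch (not the authors' scheme) of "stable isotope encoding":
# each amino-acid type is assigned a length-3 codeword over three labeling
# ratios (0 %, 50 %, 100 %), one digit per labeled sample. Decoding maps
# observed per-sample ratios back to the nearest codeword.
from itertools import product

AMINO_ACIDS = ["Ala", "Arg", "Asn", "Asp", "Cys", "Gln", "Glu", "Gly", "His",
               "Ile", "Leu", "Lys", "Met", "Phe", "Ser", "Thr", "Trp", "Tyr", "Val"]
RATIOS = (0.0, 0.5, 1.0)

# Assign the first 19 of the 27 possible codewords to the amino acids.
CODEBOOK = dict(zip(AMINO_ACIDS, product(RATIOS, repeat=3)))

def decode(observed):
    """Return the amino acid whose codeword is closest to the observed ratios."""
    def dist(code):
        return sum((o - c) ** 2 for o, c in zip(observed, code))
    return min(CODEBOOK, key=lambda aa: dist(CODEBOOK[aa]))

# A noisy measurement of the labeling ratios in the three samples:
print(decode((0.08, 0.47, 0.95)))
# A check digit (e.g. the sum of the three digits modulo 3, encoded in a
# fourth sample) could additionally flag single-sample errors, along the
# lines of the error detection suggested in the abstract.
```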

  19. Robustness in Railway Operations (RobustRailS)

    DEFF Research Database (Denmark)

    Jensen, Jens Parbo; Nielsen, Otto Anker

    This study considers the problem of enhancing railway timetable robustness without adding slack time, which would increase travel time. The approach integrates a transit assignment model to assess how passengers adapt their behaviour whenever operations are changed. First, the approach considers...

  20. Robust and accurate vectorization of line drawings.

    Science.gov (United States)

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust, with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vectors' parameters is enabled by explicitly computing their feasibility domains. A theoretical performance analysis and an expression for the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
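
    The "method based on random sampling" used for segmenting the skeleton can be illustrated with a RANSAC-style primitive fit: repeatedly draw two skeleton points, hypothesize the line through them, and keep the hypothesis with the most inliers. The sketch below is not the paper's algorithm; the tolerance, iteration count and point set are hypothetical.

```python
# A minimal sketch (not the paper's algorithm) of random-sampling segmentation:
# RANSAC-style fitting of a straight-line primitive to 2D skeleton points.
import numpy as np

def ransac_line(points, n_iter=200, tol=1.5, rng=None):
    """Return the inlier indices of the best line found by random sampling."""
    rng = rng or np.random.default_rng(0)
    best = np.array([], dtype=int)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(*d)
        if norm == 0:
            continue
        # Perpendicular distance of every point to the line through p and q.
        diff = points - p
        dist = np.abs(d[0] * diff[:, 1] - d[1] * diff[:, 0]) / norm
        inliers = np.flatnonzero(dist < tol)
        if len(inliers) > len(best):
            best = inliers
    return best

# Noisy, roughly collinear skeleton points plus scattered outliers.
rng = np.random.default_rng(2)
xs = np.linspace(0, 100, 80)
line_pts = np.column_stack([xs, 0.5 * xs + rng.normal(0, 0.5, xs.size)])
outliers = rng.uniform(0, 100, size=(20, 2))
points = np.vstack([line_pts, outliers])
print(f"{len(ransac_line(points))} of {len(points)} points assigned to the primitive")
```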

  1. The Effect of Self-Explaining on Robust Learning

    Science.gov (United States)

    Hausmann, Robert G. M.; VanLehn, Kurt

    2010-01-01

    Self-explaining is a domain-independent learning strategy that generally leads to a robust understanding of the domain material. However, there are two potential explanations for its effectiveness. First, self-explanation generates additional "content" that does not exist in the instructional materials. Second, when compared to…

  2. Getting men in the room: perceptions of effective strategies to initiate men's involvement in gender-based violence prevention in a global sample.

    Science.gov (United States)

    Casey, Erin A; Leek, Cliff; Tolman, Richard M; Allen, Christopher T; Carlson, Juliana M

    2017-09-01

    As engaging men in gender-based violence prevention efforts becomes an increasingly institutionalised component of gender equity work globally, clarity is needed about the strategies that best initiate male-identified individuals' involvement in these efforts. The purpose of this study was to examine the perceived relevance and effectiveness of men's engagement strategies from the perspective of men around the world who have organised or attended gender-based violence prevention events. Participants responded to an online survey (available in English, French and Spanish) and rated the effectiveness of 15 discrete engagement strategies derived from earlier qualitative work. Participants also provided suggestions regarding strategies in open-ended comments. Listed strategies cut across the social ecological spectrum and represented both venues in which to reach men, and the content of violence prevention messaging. Results suggest that all strategies, on average, were perceived as effective across regions of the world, with strategies that tailor messaging to topics of particular concern to men (such as fatherhood and healthy relationships) rated most highly. Open-ended comments also surfaced tensions, particularly related to the role of a gender analysis in initial men's engagement efforts. Findings suggest the promise of cross-regional adaptation and information sharing regarding successful approaches to initiating men's anti-violence involvement.

  3. A Robust Design Applicability Model

    DEFF Research Database (Denmark)

    Ebro, Martin; Krogstie, Lars; Howard, Thomas J.

    2015-01-01

    This paper introduces a model for assessing the applicability of Robust Design (RD) in a project or organisation. The intention of the Robust Design Applicability Model (RDAM) is to provide support for decisions by engineering management considering the relevant level of RD activities... It is found to be applicable in organisations assigning a high importance to one or more factors that are known to be impacted by RD, while also experiencing a high level of occurrence of this factor. The RDAM supplements existing maturity models and metrics to provide a comprehensive set of data to support management...

  4. Ins-Robust Primitive Words

    OpenAIRE

    Srivastava, Amit Kumar; Kapoor, Kalpesh

    2017-01-01

    Let Q be the set of primitive words over a finite alphabet with at least two symbols. We characterize a class of primitive words, Q_I, referred to as ins-robust primitive words, which remain primitive on insertion of any letter from the alphabet, and present some properties that characterize words in the set Q_I. It is shown that the language Q_I is dense. We prove that the language of primitive words that are not ins-robust is not context-free. We also present a linear time algorithm to reco...
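
    The definitions can be illustrated directly: a word w is primitive iff it occurs in ww only at positions 0 and |w|, and it is ins-robust if it stays primitive after inserting any single letter at any position. The brute-force check below is only an illustration of the definitions, not the paper's linear-time algorithm; the two-letter alphabet in the example is arbitrary.

```python
# A minimal sketch (not the paper's linear-time algorithm) illustrating the
# definitions of primitive and ins-robust primitive words.
def is_primitive(w):
    """w is primitive iff it occurs in w+w only at positions 0 and len(w)."""
    return (w + w).find(w, 1) == len(w)

def is_ins_robust(w, alphabet):
    """True if every single-letter insertion leaves w primitive."""
    return all(is_primitive(w[:i] + a + w[i:])
               for i in range(len(w) + 1) for a in alphabet)

print(is_primitive("abab"))          # False: abab = (ab)^2
print(is_ins_robust("aab", "ab"))    # check all insertions of 'a' or 'b'
```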

  5. Soil sampling and analytical strategies for mapping fallout in nuclear emergencies based on the Fukushima Dai-ichi Nuclear Power Plant accident

    International Nuclear Information System (INIS)

    Onda, Yuichi; Kato, Hiroaki; Hoshi, Masaharu; Takahashi, Yoshio; Nguyen, Minh-Long

    2015-01-01

    The Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident resulted in extensive radioactive contamination of the environment via deposited radionuclides such as radiocesium and 131I. Evaluating the extent and level of environmental contamination is critical to protecting citizens in affected areas and to planning decontamination efforts. However, a standardized soil