WorldWideScience

Sample records for robust sampling strategies

  1. A Fast and Robust Feature-Based Scan-Matching Method in 3D SLAM and the Effect of Sampling Strategies

    Directory of Open Access Journals (Sweden)

    Cihan Ulas

    2013-11-01

    Simultaneous localization and mapping (SLAM) plays an important role in fully autonomous systems when a GNSS (global navigation satellite system) is not available. Studies in both 2D indoor and 3D outdoor SLAM are based on the appearance of environments and utilize scan-matching methods to find the rigid-body transformation parameters between two consecutive scans. In this study, a fast and robust scan-matching method based on feature extraction is introduced. Since the method matches certain geometric structures, such as plane segments, outliers and noise in the point cloud are largely eliminated, making the proposed scan-matching algorithm more robust than conventional methods. In addition, the registration time and the number of iterations are significantly reduced, since the number of matching points is efficiently decreased. As a scan-matching framework, an improved version of the normal distribution transform (NDT) is used. The probability density functions (PDFs) of the reference scan are generated as in the traditional NDT, and the feature extraction, based on stochastic plane detection, is applied only to the input scan. Using an experimental dataset from an outdoor environment, a university campus, we obtained satisfactory performance results. Moreover, the feature extraction part of the algorithm can be considered a special sampling strategy for scan matching and is compared to other sampling strategies, such as random sampling and grid-based sampling, the latter of which was first used in the NDT. The study thus also shows the effect of subsampling on the performance of the NDT.
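
    As a rough illustration of the two baseline subsampling strategies compared above (random and grid-based), the sketch below thins a point cloud with NumPy; the keep fraction, cell size and synthetic cloud are arbitrary choices for illustration, not values from the paper.

    ```python
    import numpy as np

    def random_subsample(points, fraction=0.1, seed=0):
        """Keep a random fraction of the points (baseline strategy)."""
        rng = np.random.default_rng(seed)
        n_keep = max(1, int(len(points) * fraction))
        idx = rng.choice(len(points), size=n_keep, replace=False)
        return points[idx]

    def grid_subsample(points, cell=0.5):
        """Keep one centroid per voxel cell, as in the grid-based
        subsampling used by the classical NDT."""
        keys = np.floor(points / cell).astype(int)
        _, inverse = np.unique(keys, axis=0, return_inverse=True)
        n_cells = inverse.max() + 1
        centroids = np.zeros((n_cells, points.shape[1]))
        counts = np.zeros(n_cells)
        np.add.at(centroids, inverse, points)
        np.add.at(counts, inverse, 1)
        return centroids / counts[:, None]

    # Example on a synthetic 3D cloud of 10,000 points
    cloud = np.random.default_rng(1).uniform(0, 20, size=(10000, 3))
    print(random_subsample(cloud).shape, grid_subsample(cloud).shape)
    ```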

  2. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2015-01-01

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
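
    For orientation, here is a minimal sketch of the classical (non-robust) Heckman two-step estimator that the paper takes as its starting point, run on simulated data with an exclusion restriction and assuming statsmodels is available; the robustified estimator developed in the paper is not reproduced here.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 2000
    z = rng.normal(size=(n, 2))            # selection covariates (z[:, 1] excluded from outcome)
    x = z[:, [0]]                          # outcome covariate
    u = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
    selected = z @ np.array([1.0, 1.0]) + u[:, 0] > 0   # selection equation
    y = 1.0 + 2.0 * x[:, 0] + u[:, 1]                   # outcome equation

    # Stage 1: probit for the selection probability
    probit = sm.Probit(selected.astype(float), sm.add_constant(z)).fit(disp=0)
    xb = probit.fittedvalues               # linear predictor z'gamma
    mills = norm.pdf(xb) / norm.cdf(xb)    # inverse Mills ratio

    # Stage 2: OLS on the selected sample, augmented with the Mills ratio
    X2 = sm.add_constant(np.column_stack([x[selected], mills[selected]]))
    ols = sm.OLS(y[selected], X2).fit()
    print(ols.params)                      # intercept, slope, Mills coefficient
    ```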

  3. Robust H2 performance for sampled-data systems

    DEFF Research Database (Denmark)

    Rank, Mike Lind

    1997-01-01

    Robust H2 performance conditions under structured uncertainty, analogous to well-known methods for H∞ performance, have recently emerged in both discrete and continuous time. This paper considers the extension to uncertain sampled-data systems, taking inter-sample behavior into account. Convex conditions for robust H2 performance are derived for different uncertainty sets.

  4. Robustness analysis of interdependent networks under multiple-attacking strategies

    Science.gov (United States)

    Gao, Yan-Li; Chen, Shi-Ming; Nie, Sen; Ma, Fei; Guan, Jun-Jie

    2018-04-01

    The robustness of complex networks under attack largely depends on the structure of the network and the nature of the attack. Previous research on interdependent networks has focused on two types of initial attack: random attack and degree-based targeted attack. In this paper, a deliberate attack function is proposed, from which six kinds of deliberate attacking strategies can be derived by adjusting tunable parameters. Moreover, the robustness of four types of interdependent networks (BA-BA, ER-ER, BA-ER and ER-BA) with different coupling modes (random, positive and negative correlation) is evaluated under the different attacking strategies. The positive coupling mode makes the vulnerability of the interdependent network depend entirely on its most vulnerable sub-network under deliberate attacks, whereas the random and negative coupling modes make the vulnerability depend mainly on the sub-network under attack. The robustness of the interdependent network is enhanced as the degree-degree correlation coefficient varies from positive to negative. The negative coupling mode is therefore relatively optimal, and can substantially improve the robustness of the ER-ER and ER-BA networks. In terms of attacking strategies on interdependent networks, the degree information of a node is more valuable than its betweenness. In addition, we identify a more efficient attacking strategy for each coupled interdependent network and propose a corresponding protection strategy for suppressing cascading failure. Our results can be very useful for the safety design and protection of interdependent networks.
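
    A minimal sketch of a tunable deliberate attack, on a single network rather than an interdependent pair, assuming networkx is available: nodes are removed with probability proportional to degree**alpha, so alpha = 0 recovers a random attack and large alpha approaches a degree-targeted attack. Graph size and parameters are arbitrary.

    ```python
    import networkx as nx
    import numpy as np

    def attack(G, frac=0.3, alpha=1.0, seed=0):
        """Remove a fraction of nodes, picked with probability ~ degree**alpha."""
        rng = np.random.default_rng(seed)
        H = G.copy()
        nodes = np.array(list(H.nodes()))
        p = np.array([H.degree(v) for v in nodes], dtype=float) ** alpha
        p /= p.sum()
        removed = rng.choice(nodes, size=int(frac * len(nodes)), replace=False, p=p)
        H.remove_nodes_from(removed.tolist())
        return H

    def giant_fraction(G):
        """Fraction of surviving nodes in the largest connected component."""
        return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

    G = nx.barabasi_albert_graph(2000, 3, seed=1)
    for a in (0.0, 1.0, 3.0):   # random -> mildly targeted -> strongly targeted
        print(a, round(giant_fraction(attack(G, alpha=a)), 3))
    ```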

  5. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Directory of Open Access Journals (Sweden)

    Jake M Ferguson

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  6. Mars Sample Return - Launch and Detection Strategies for Orbital Rendezvous

    Science.gov (United States)

    Woolley, Ryan C.; Mattingly, Richard L.; Riedel, Joseph E.; Sturm, Erick J.

    2011-01-01

    This study sets forth conceptual mission design strategies for the ascent and rendezvous phase of the proposed NASA/ESA joint Mars Sample Return Campaign. The current notional mission architecture calls for the launch of an acquisition/cache rover in 2018, an orbiter with an Earth return vehicle in 2022, and a fetch rover and ascent vehicle in 2024. Strategies are presented to launch the sample into a coplanar orbit with the Orbiter which facilitate robust optical detection, orbit determination, and rendezvous. Repeating ground track orbits exist at 457 and 572 km which provide multiple launch opportunities with similar geometries for detection and rendezvous.

  7. Mars Sample Return: Launch and Detection Strategies for Orbital Rendezvous

    Science.gov (United States)

    Woolley, Ryan C.; Mattingly, Richard L.; Riedel, Joseph E.; Sturm, Erick J.

    2011-01-01

    This study sets forth conceptual mission design strategies for the ascent and rendezvous phase of the proposed NASA/ESA joint Mars Sample Return Campaign. The current notional mission architecture calls for the launch of an acquisition/caching rover in 2018, an Earth return orbiter in 2022, and a fetch rover with ascent vehicle in 2024. Strategies are presented to launch the sample into a nearly coplanar orbit with the Orbiter which would facilitate robust optical detection, orbit determination, and rendezvous. Repeating ground track orbits exist at 457 and 572 km which would provide multiple launch opportunities with similar geometries for detection and rendezvous.
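
    As a back-of-the-envelope check on those repeat ground-track altitudes, the sketch below computes circular orbital periods around Mars at 457 and 572 km; the gravitational parameter and radius are standard textbook values, and the comparison against the Martian sidereal day is our illustration, not a computation from the paper.

    ```python
    import numpy as np

    MU_MARS = 42828.37        # km^3/s^2, Mars gravitational parameter
    R_MARS = 3396.2           # km, Mars equatorial radius
    SIDEREAL_DAY = 88642.66   # s, Mars sidereal rotation period

    for alt_km in (457.0, 572.0):
        a = R_MARS + alt_km                        # circular orbit radius
        T = 2 * np.pi * np.sqrt(a**3 / MU_MARS)    # orbital period
        print(f"{alt_km:.0f} km: period {T / 60:.1f} min, "
              f"{SIDEREAL_DAY / T:.2f} revolutions per sidereal day")
    ```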

  8. GFC-Robust Risk Management Strategies under the Basel Accord

    NARCIS (Netherlands)

    M.J. McAleer (Michael); J.A. Jiménez-Martín (Juan-Ángel); T. Pérez-Amaral (Teodosio)

    2010-01-01

    A risk management strategy is proposed as being robust to the Global Financial Crisis (GFC) by selecting a Value-at-Risk (VaR) forecast that combines the forecasts of different VaR models. The robust forecast is based on the median of the point VaR forecasts of a set of conditional volatility models.
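
    The median-combination rule itself is a one-liner; the model names and forecast values below are hypothetical placeholders.

    ```python
    import numpy as np

    # Hypothetical one-day-ahead VaR forecasts (in %) for the same day,
    # produced by different conditional volatility models
    var_forecasts = {"GARCH": -2.1, "EGARCH": -2.6, "GJR": -2.4, "Riskmetrics": -1.9}

    robust_var = np.median(list(var_forecasts.values()))
    print(f"median-based robust VaR forecast: {robust_var:.2f}%")
    ```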

  9. Effects of methodology and analysis strategy on robustness of pestivirus phylogeny.

    Science.gov (United States)

    Liu, Lihong; Xia, Hongyan; Baule, Claudia; Belák, Sándor; Wahlberg, Niklas

    2010-01-01

    Phylogenetic analysis of pestiviruses is a useful tool for classifying novel pestiviruses and for revealing their phylogenetic relationships. In this study, robustness of pestivirus phylogenies has been compared by analyses of the 5'UTR, and complete N(pro) and E2 gene regions separately and combined, performed by four methods: neighbour-joining (NJ), maximum parsimony (MP), maximum likelihood (ML), and Bayesian inference (BI). The strategy of analysing the combined sequence dataset by BI, ML, and MP methods resulted in a single, well-supported tree topology, indicating a reliable and robust pestivirus phylogeny. By contrast, the single-gene analysis strategy resulted in 12 trees of different topologies, revealing different relationships among pestiviruses. These results indicate that the strategies and methodologies are two vital aspects affecting the robustness of the pestivirus phylogeny. The strategy and methodologies outlined in this paper may have a broader application in inferring phylogeny of other RNA viruses.

  10. A robust model predictive control strategy for improving the control performance of air-conditioning systems

    International Nuclear Information System (INIS)

    Huang Gongsheng; Wang Shengwei; Xu Xinhua

    2009-01-01

    This paper presents a robust model predictive control strategy for improving the supply air temperature control of air-handling units by dealing directly with the associated uncertainties and constraints. The strategy uses a first-order plus time-delay model with uncertain time delay and system gain to describe the air-conditioning process of an air-handling unit, which typically operates under varying weather conditions. The uncertainties in the time delay and system gain, which reflect the nonlinearities and the variable dynamic characteristics, are formulated using an uncertainty polytope. Based on this uncertainty formulation, an offline LMI-based robust model predictive control algorithm is employed to design a controller for air-handling units that guarantees good robustness subject to uncertainties and constraints. The proposed robust strategy is evaluated in a dynamic simulation environment of a variable air volume air-conditioning system under various operating conditions, by comparison with a conventional PI control strategy. A robustness analysis of both strategies under different weather conditions is also presented.
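
    A minimal sketch of the uncertainty-polytope idea for a first-order-plus-time-delay (FOPTD) model: the uncertain gain and delay are boxed by bounds (hypothetical numbers, not the paper's), the polytope vertices are enumerated, and a response property is checked against the worst vertex, which is the basic mechanism a polytopic robust design relies on.

    ```python
    import numpy as np

    # G(s) = K e^{-theta s} / (tau s + 1) with uncertain K and theta
    K_bounds, theta_bounds, tau = (1.5, 3.0), (60.0, 180.0), 300.0   # seconds

    # Vertices of the uncertainty polytope: all combinations of extreme values
    vertices = [(K, th) for K in K_bounds for th in theta_bounds]

    def step_response(K, theta, tau, t):
        """Unit-step response of the FOPTD model."""
        return K * (1.0 - np.exp(-np.clip(t - theta, 0.0, None) / tau))

    t = np.linspace(0, 3000, 601)
    responses = {v: step_response(*v, tau, t) for v in vertices}

    # A robust design must satisfy its specifications on every vertex at once
    settling = {v: t[np.argmax(r >= 0.98 * r[-1])] for v, r in responses.items()}
    print("worst-case ~settling time over vertices:", max(settling.values()), "s")
    ```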

  11. Robust Adaptive Stabilization of Linear Time-Invariant Dynamic Systems by Using Fractional-Order Holds and Multirate Sampling Controls

    Directory of Open Access Journals (Sweden)

    S. Alonso-Quesada

    2010-01-01

    This paper presents a strategy for designing a robust discrete-time adaptive controller for stabilizing linear time-invariant (LTI) continuous-time dynamic systems. Such systems may be unstable and non-inversely stable in the worst case. A reduced-order model is considered to design the adaptive controller. The control design is based on the discretization of the system with the use of a multirate sampling device with a fast-sampled control signal. A suitable on-line adaptation of the multirate gains guarantees the stability of the inverse of the discretized estimated model, which is used to parameterize the adaptive controller. A dead zone is included in the parameter estimation algorithm for robustness under the presence of unmodeled dynamics in the controlled system. The adaptive controller guarantees the boundedness of the measured system signal for all time. Several examples illustrate the efficacy of this control strategy.

  12. Optimal robust control strategy of a solid oxide fuel cell system

    Science.gov (United States)

    Wu, Xiaojuan; Gao, Danhui

    2018-01-01

    Optimal control can ensure safe system operation with high efficiency. However, only a few papers discuss optimal control strategies for solid oxide fuel cell (SOFC) systems, and existing methods ignore the impact of parameter uncertainty on instantaneous system performance. In real SOFC systems, several parameters, such as the load current, may vary with the operating conditions and cannot be identified exactly. Therefore, a robust optimal control strategy is proposed, which involves three parts: a SOFC model with parameter uncertainty, a robust optimizer and robust controllers. During the model building process, the boundaries of the uncertain parameter are extracted with a Monte Carlo algorithm. To achieve maximum efficiency, a two-space particle swarm optimization approach is employed to obtain optimal operating points, which are used as the set points of the controllers. To ensure safe operation of the SOFC, two feed-forward controllers and a higher-order robust sliding mode controller are then designed to control the fuel utilization ratio, air excess ratio and stack temperature. The results show that the proposed robust optimal control method maintains safe operation of the SOFC system at maximum efficiency under load and uncertainty variations.

  13. Robust multi-objective calibration strategies – possibilities for improving flood forecasting

    Directory of Open Access Journals (Sweden)

    G. H. Schmitz

    2012-10-01

    Process-oriented rainfall-runoff models are designed to approximate the complex hydrologic processes within a specific catchment and, in particular, to simulate the discharge at the catchment outlet. Most of these models exhibit a high degree of complexity and require the determination of various parameters by calibration. Recently, automatic calibration methods have become popular for identifying parameter vectors with high corresponding model performance. The model performance is often assessed by a purpose-oriented objective function. Practical experience suggests that in many situations a single objective function cannot adequately describe the model's ability to represent any aspect of the catchment's behaviour, regardless of whether the objective is aggregated from several criteria that measure different (possibly opposing) aspects of the system behaviour. One strategy to circumvent this problem is to define multiple objective functions and to apply a multi-objective optimisation algorithm to identify the set of Pareto-optimal or non-dominated solutions. Nonetheless, there is a major disadvantage of automatic calibration procedures that treat model calibration simply as the solution of an optimisation problem: due to the complex-shaped response surface, the estimated solution of the optimisation problem can correspond to different near-optimum parameter vectors that lead to very different performance on the validation data. Bárdossy and Singh (2008) studied this problem for single-objective calibration problems using the example of hydrological models and proposed a geometrical sampling approach called Robust Parameter Estimation (ROPE). This approach applies the concept of data depth in order to overcome the shortcomings of automatic calibration procedures and find a set of robust parameter vectors. Recent studies confirmed the effectiveness of this method. However, all ROPE approaches published so far just identify
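
    To make the ROPE idea concrete, here is a heavily simplified sketch: Monte Carlo-sample parameter vectors, keep the best-performing fraction, then retain only vectors lying "deep" inside that good set. The objective function is a synthetic stand-in, and Mahalanobis depth is used as a cheap proxy for the half-space data depth of the actual ROPE algorithm.

    ```python
    import numpy as np

    def model_performance(theta):
        """Placeholder objective (e.g., negative RMSE of a hydrological model);
        here a synthetic function with its optimum near (2, 5)."""
        return -np.sum((theta - np.array([2.0, 5.0])) ** 2, axis=1)

    rng = np.random.default_rng(0)
    candidates = rng.uniform([0, 0], [10, 10], size=(5000, 2))   # Monte Carlo sample
    perf = model_performance(candidates)
    good = candidates[perf >= np.quantile(perf, 0.9)]            # best 10 %

    # Depth step: keep parameter vectors deep inside the "good" cloud
    center = good.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(good.T))
    d2 = np.einsum('ij,jk,ik->i', good - center, cov_inv, good - center)
    robust_set = good[d2 <= np.quantile(d2, 0.5)]                # deepest half
    print(len(good), "good vectors ->", len(robust_set), "robust vectors")
    ```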

  14. Conditioning and Robustness of RNA Boltzmann Sampling under Thermodynamic Parameter Perturbations.

    Science.gov (United States)

    Rogers, Emily; Murrugarra, David; Heitsch, Christine

    2017-07-25

    Understanding how RNA secondary structure prediction methods depend on the underlying nearest-neighbor thermodynamic model remains a fundamental challenge in the field. Minimum free energy (MFE) predictions are known to be "ill conditioned" in that small changes to the thermodynamic model can result in significantly different optimal structures. Hence, the best practice is now to sample from the Boltzmann distribution, which generates a set of suboptimal structures. Although the structural signal of this Boltzmann sample is known to be robust to stochastic noise, the conditioning and robustness under thermodynamic perturbations have yet to be addressed. We present here a mathematically rigorous model for conditioning inspired by numerical analysis, and also a biologically inspired definition for robustness under thermodynamic perturbation. We demonstrate the strong correlation between conditioning and robustness and use its tight relationship to define quantitative thresholds for well versus ill conditioning. These resulting thresholds demonstrate that the majority of the sequences are at least sample robust, which verifies the assumption of sampling's improved conditioning over the MFE prediction. Furthermore, because we find no correlation between conditioning and MFE accuracy, the presence of both well- and ill-conditioned sequences indicates the continued need for both thermodynamic model refinements and alternate RNA structure prediction methods beyond the physics-based ones.

  15. TU-H-CAMPUS-JeP3-01: Towards Robust Adaptive Radiation Therapy Strategies

    International Nuclear Information System (INIS)

    Boeck, M; Eriksson, K; Hardemark, B; Forsgren, A

    2016-01-01

    Purpose: To set up a framework combining robust treatment planning with adaptive reoptimization in order to maintain high treatment quality, to respond to interfractional variations and to identify those patients who will benefit the most from an adaptive fractionation schedule. Methods: We propose adaptive strategies based on stochastic minimax optimization for a series of simulated treatments on a one-dimensional patient phantom. The plan should be able to handle anticipated systematic and random errors and is applied during the first fractions. Information on the individual geometric variations is gathered at each fraction. At scheduled fractions, the impact of the measured errors on the delivered dose distribution is evaluated. For a patient that receives a dose that does not satisfy specified plan quality criteria, the plan is reoptimized based on these individual measurements using one of three different adaptive strategies. The reoptimized plan is then applied during future fractions until a new scheduled adaptation becomes necessary. In the first adaptive strategy the measured systematic and random error scenarios and their assigned probabilities are updated to guide the robust reoptimization. The focus of the second strategy lies on variation of the fraction of the worst scenarios taken into account during robust reoptimization. In the third strategy the uncertainty margins around the target are recalculated with the measured errors. Results: By studying the effect of the three adaptive strategies combined with various adaptation schedules on the same patient population, the group which benefits from adaptation is identified together with the most suitable strategy and schedule. Preliminary computational results indicate when and how best to adapt for the three different strategies. Conclusion: A workflow is presented that provides robust adaptation of the treatment plan throughout the course of treatment and useful measures to identify patients in need

  16. High-throughput droplet analysis and multiplex DNA detection in the microfluidic platform equipped with a robust sample-introduction technique

    International Nuclear Information System (INIS)

    Chen, Jinyang; Ji, Xinghu; He, Zhike

    2015-01-01

    In this work, a simple, flexible and low-cost sample-introduction technique was developed and integrated with a droplet platform. The sample-introduction strategy connects a positive-pressure input device, the sample container and the microfluidic chip through Tygon tubing with a homemade polydimethylsiloxane (PDMS) adaptor, so that the sample is delivered into the microchip from the sample container under positive pressure. The technique is robust and compatible enough to be integrated with T-junction, flow-focusing or valve-assisted droplet microchips. By choosing a PDMS adaptor with the proper dimensions, the microchip can be flexibly equipped with various types of familiar sample containers, making sampling more straightforward without trivial sample transfer or loading, and samples can be changed conveniently by moving the adaptor from one container to another. Benefiting from the proposed technique, a time-dependent concentration gradient was generated and applied to quantum dot (QD)-based fluorescence barcoding within the droplet chip. High-throughput droplet screening was preliminarily demonstrated through an investigation of the quenching efficiency of a ruthenium complex on the fluorescence of the QDs. More importantly, a multiplex DNA assay was successfully carried out in the integrated system, which shows its practicability and potential in high-throughput biosensing. - Highlights: • A simple, robust and low-cost sample-introduction technique was developed. • Convenient and flexible sample changing was achieved in the microfluidic system. • A novel strategy of concentration gradient generation was presented for barcoding. • High-throughput droplet screening could be realized in the integrated platform. • A multiplex DNA assay was successfully carried out in the droplet platform

  17. Optimal strategy analysis based on robust predictive control for inventory system with random demand

    Science.gov (United States)

    Saputra, Aditya; Widowati, Sutrisno

    2017-12-01

    In this paper, the optimal strategy for a single-product, single-supplier inventory system with random demand is analyzed by using robust predictive control with an additive random parameter. We formulate the dynamics of this system as a linear state-space model with an additive random parameter. To determine and analyze the optimal strategy for the given inventory system, we use a robust predictive control approach, which gives the optimal strategy, i.e., the optimal product volume that should be purchased from the supplier in each time period so that the expected cost is minimal. A numerical simulation is performed with generated random inventory data in MATLAB, where the inventory level must be controlled as closely as possible to a given set point. The results show that the robust predictive control model provides the optimal purchase volumes and that the inventory level followed the given set point.
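
    The sketch below shows the flavor of such a predictive rule on the linear inventory dynamics x[t+1] = x[t] + u[t] - d[t]; it uses a one-step, certainty-equivalent order rule with a mean-demand forecast as a simple stand-in for the paper's robust predictive controller, and all numbers are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, setpoint, mean_demand = 52, 100.0, 20.0
    x = 60.0                                   # initial inventory level
    levels, orders = [], []

    for t in range(T):
        # Order what is expected to bring inventory back to the set point
        # under the mean-demand forecast (purchases cannot be negative)
        u = max(0.0, setpoint - x + mean_demand)
        d = rng.poisson(mean_demand)           # random demand realization
        x = x + u - d                          # linear inventory dynamics
        orders.append(u)
        levels.append(x)

    print(f"mean level {np.mean(levels):.1f} (set point {setpoint}), "
          f"mean order {np.mean(orders):.1f}")
    ```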

  18. Flexible and robust strategies for waste management in Sweden

    International Nuclear Information System (INIS)

    Finnveden, Goeran; Bjoerklund, Anna; Reich, Marcus Carlsson; Eriksson, Ola; Soerbom, Adrienne

    2007-01-01

    Treatment of solid waste continues to be on the political agenda. Waste disposal issues are often viewed from an environmental perspective, but economic and social aspects also need to be considered when deciding on waste strategies and policy instruments. The aim of this paper is to suggest flexible and robust strategies for waste management in Sweden, and to discuss different policy instruments. Emphasis is on environmental aspects, but social and economic aspects are also considered. The results show that most waste treatment methods have a role to play in a robust and flexible integrated waste management system, and that the waste hierarchy is valid as a rule of thumb from an environmental perspective. A review of social aspects shows that there is a general willingness among people to source separate wastes. A package of policy instruments can include landfill tax, an incineration tax which is differentiated with respect to the content of fossil fuels and a weight based incineration tax, as well as support to the use of biogas and recycled materials

  19. An Integrated Environmental Assessment of Green and Gray Infrastructure Strategies for Robust Decision Making.

    Science.gov (United States)

    Casal-Campos, Arturo; Fu, Guangtao; Butler, David; Moore, Andrew

    2015-07-21

    The robustness of a range of watershed-scale "green" and "gray" drainage strategies in the future is explored through comprehensive modeling of a fully integrated urban wastewater system case. Four socio-economic future scenarios, defined by parameters affecting the environmental performance of the system, are proposed to account for the uncertain variability of conditions in the year 2050. A regret-based approach is applied to assess the relative performance of strategies in multiple impact categories (environmental, economic, and social) as well as to evaluate their robustness across future scenarios. The concept of regret proves useful in identifying performance trade-offs and recognizing states of the world most critical to decisions. The study highlights the robustness of green strategies (particularly rain gardens, resulting in half the regret of most options) over end-of-pipe gray alternatives (surface water separation or sewer and storage rehabilitation), which may be costly (on average, 25% of the total regret of these options) and tend to focus on sewer flooding and CSO alleviation while compromising on downstream system performance (this accounts for around 50% of their total regret). Trade-offs and scenario regrets observed in the analysis suggest that the combination of green and gray strategies may still offer further potential for robustness.
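
    A toy illustration of the regret mechanics described above, assuming a minimax-regret decision rule; the strategy names, scenario columns and scores are invented for the example, not taken from the study.

    ```python
    import numpy as np

    # Hypothetical performance scores (lower = better, e.g., total impact)
    # for four drainage strategies (rows) under four 2050 scenarios (columns)
    strategies = ["rain gardens", "green roofs", "surface separation", "storage rehab"]
    scores = np.array([[12, 14, 11, 13],
                       [15, 13, 14, 16],
                       [22, 18, 25, 21],
                       [20, 24, 19, 23]], float)

    regret = scores - scores.min(axis=0)     # regret vs the best strategy per scenario
    worst_regret = regret.max(axis=1)        # each strategy's worst case over scenarios
    best = strategies[int(np.argmin(worst_regret))]
    print(dict(zip(strategies, worst_regret)), "-> minimax-regret choice:", best)
    ```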

  20. Robustness analysis of pull strategies in multi-product systems

    Directory of Open Access Journals (Sweden)

    Chukwunonyelum Emmanuel Onyeocha

    2015-09-01

    Purpose: This paper examines the behaviour of shared and dedicated Kanban allocation policies of Hybrid Kanban-CONWIP and Basestock-Kanban-CONWIP control strategies in multi-product systems, with consideration of the robustness of optimal solutions to environmental and system variability. Design/methodology/approach: Discrete event simulation and an evolutionary multi-objective optimisation approach were utilised to develop Pareto frontiers, or sets of non-dominated optimal solutions, and to select an appropriate decision set for the control parameters in the shared Kanban allocation policy (S-KAP) and the dedicated Kanban allocation policy (D-KAP). Simulation experiments were carried out via the ExtendSim simulation software. The PCS+KAP performances were compared via all-pairwise comparison and Nelson's screening and selection procedure to identify the superior PCS+KAP under negligible environmental and system variability. To determine the superior PCS+KAP under system and environmental variability, the optimal solutions were tested for robustness using the Latin hypercube sampling technique and a stochastic dominance test. Findings: The outcome of this study shows that under uncontrollable environmental variability, the dedicated Kanban allocation policy outperformed the shared Kanban allocation policy in a serial manufacturing system with negligible setup times and in a complex assembly line with setup times. Moreover, BK-CONWIP is shown to be superior to HK-CONWIP. Research limitations/implications: Future research should be conducted to verify the level of flexibility of BK-CONWIP with respect to product mix and product demand volume variations in a complex multi-product system. Practical implications: The outcomes of this work are applicable to multi-product manufacturing industries with significant setup times and to systems with negligible setup times. The multi-objective optimisation provides decision support for the selection of control parameters such that

  21. On the robust optimization to the uncertain vaccination strategy problem

    International Nuclear Information System (INIS)

    Chaerani, D.; Anggriani, N.; Firdaniza

    2014-01-01

    In order to prevent an epidemic of an infectious disease, the vaccination coverage needs to be minimized while the basic reproduction number is kept below 1; that is, we seek the minimum vaccination coverage that still confines the epidemic to the small number of people already infected. In this paper, we discuss the vaccination strategy problem in terms of minimizing the vaccination coverage when the basic reproduction number is assumed to be an uncertain parameter that lies between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). Assuming that parameter uncertainty is involved, Tanner et al. (see [9]) proposed an optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, so that the obtained result can be computed by a polynomial-time algorithm (as guaranteed by the RO methodology). The robust counterpart model is presented.
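
    A small sketch of the ellipsoidal robust-counterpart mechanics, assuming cvxpy is available: an uncertain linear "epidemic control" constraint a @ x >= b, with a in an ellipsoid, is tightened by a dual-norm term, which turns the problem into a second-order cone program. All numbers (group effects, ellipsoid shape, protection level) are invented for illustration, and this is not the model of [2].

    ```python
    import cvxpy as cp
    import numpy as np

    n = 3
    c = np.ones(n)                      # cost of covering each population group
    a_bar = np.array([0.6, 0.8, 0.5])   # nominal effect of coverage per group
    P = 0.1 * np.eye(n)                 # ellipsoid {a_bar + P u : ||u|| <= 1}
    b = 0.9                             # required protection level

    x = cp.Variable(n, nonneg=True)
    # Robust counterpart: the constraint must hold for the worst a in the
    # ellipsoid, which tightens it by the dual-norm term ||P.T @ x||_2
    constraints = [a_bar @ x - cp.norm(P.T @ x, 2) >= b, x <= 1]
    prob = cp.Problem(cp.Minimize(c @ x), constraints)
    prob.solve()
    print("coverage per group:", x.value, "total:", prob.value)
    ```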

  22. On a Robust MaxEnt Process Regression Model with Sample-Selection

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2018-04-01

    In a regression analysis, a sample-selection bias arises when a dependent variable is partially observed as a result of the sample selection. This study introduces a Maximum Entropy (MaxEnt) process regression model that assumes a MaxEnt prior distribution for its nonparametric regression function and finds that the MaxEnt process regression model includes the well-known Gaussian process regression (GPR) model as a special case. This special MaxEnt process regression model, i.e., the GPR model, is then generalized to obtain a robust sample-selection Gaussian process regression (RSGPR) model that deals with non-normal data in the sample selection. Various properties of the RSGPR model are established, including the stochastic representation, distributional hierarchy, and magnitude of the sample-selection bias. These properties are used to develop a hierarchical Bayesian methodology to estimate the model. This involves a simple and computationally feasible Markov chain Monte Carlo algorithm that avoids analytical or numerical derivatives of the log-likelihood function of the model. The performance of the RSGPR model, in terms of sample-selection bias correction, robustness to non-normality, and prediction, is demonstrated through simulation results that attest to its good finite-sample performance.

  23. Synthesis of robust disturbance-feedback strategies by using semi-definite programming

    NARCIS (Netherlands)

    Trottemant, E.J.

    2015-01-01

    Real-life systems have to deal with uncertainty in such a manner that a high level of performance is guaranteed under all conditions. The objective of this thesis is to obtain robust strategies that provide an upper bound (worst-case) on the performance of an uncertain system against all

  24. Optimal and Robust Switching Control Strategies: Theory and Applications in Traffic Management

    NARCIS (Netherlands)

    Hajiahmadi, M.

    2015-01-01

    Macroscopic modeling, predictive and robust control and route guidance for large-scale freeway and urban traffic networks are the main focus of this thesis. In order to increase the efficiency of our control strategies, we propose several mathematical and optimization techniques. Moreover, in the

  25. Soil sampling strategies: Evaluation of different approaches

    Energy Technology Data Exchange (ETDEWEB)

    De Zorzi, Paolo [Agenzia per la Protezione dell' Ambiente e per i Servizi Tecnici (APAT), Servizio Metrologia Ambientale, Via di Castel Romano, 100-00128 Roma (Italy)], E-mail: paolo.dezorzi@apat.it; Barbizzi, Sabrina; Belli, Maria [Agenzia per la Protezione dell' Ambiente e per i Servizi Tecnici (APAT), Servizio Metrologia Ambientale, Via di Castel Romano, 100-00128 Roma (Italy); Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia [Agenzia Regionale per la Prevenzione e Protezione dell' Ambiente del Veneto, ARPA Veneto, U.O. Centro Qualita Dati, Via Spalato, 14-36045 Vicenza (Italy)

    2008-11-15

    The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies.

  26. Metabolomic analysis of urine samples by UHPLC-QTOF-MS: Impact of normalization strategies.

    Science.gov (United States)

    Gagnebin, Yoric; Tonoli, David; Lescuyer, Pierre; Ponte, Belen; de Seigneux, Sophie; Martin, Pierre-Yves; Schappler, Julie; Boccard, Julien; Rudaz, Serge

    2017-02-22

    Among the various biological matrices used in metabolomics, urine is a biofluid of major interest because of its non-invasive collection and its availability in large quantities. However, significant sources of variability in urine metabolomics based on UHPLC-MS are related to the analytical drift and variation of the sample concentration, thus requiring normalization. A sequential normalization strategy was developed to remove these detrimental effects, including: (i) pre-acquisition sample normalization by individual dilution factors to narrow the concentration range and to standardize the analytical conditions, (ii) post-acquisition data normalization by quality control-based robust LOESS signal correction (QC-RLSC) to correct for potential analytical drift, and (iii) post-acquisition data normalization by MS total useful signal (MSTUS) or probabilistic quotient normalization (PQN) to prevent the impact of concentration variability. This generic strategy was performed with urine samples from healthy individuals and was further implemented in the context of a clinical study to detect alterations in urine metabolomic profiles due to kidney failure. In the case of kidney failure, the relation between creatinine/osmolality and the sample concentration is modified, and relying only on these measurements for normalization could be highly detrimental. The sequential normalization strategy was demonstrated to significantly improve patient stratification by decreasing the unwanted variability and thus enhancing data quality.
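
    For reference, the probabilistic quotient normalization of step (iii) is simple enough to sketch in a few lines of NumPy; the toy matrix below is invented, and in practice the reference profile would come from QC samples rather than a plain median.

    ```python
    import numpy as np

    def pqn(X, reference=None):
        """Probabilistic quotient normalization of a feature matrix X
        (rows = samples, columns = metabolite features)."""
        X = np.asarray(X, dtype=float)
        if reference is None:
            reference = np.median(X, axis=0)      # e.g., median QC profile
        quotients = X / reference                 # feature-wise quotients
        dilution = np.median(quotients, axis=1)   # most probable dilution per sample
        return X / dilution[:, None]

    # Toy matrix: sample 2 is a 2x-diluted copy of sample 1
    X = [[10, 20, 30, 40],
         [5, 10, 15, 20],
         [12, 18, 33, 44]]
    print(pqn(X))
    ```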

  27. Robust sampled-data control of hydraulic flight control actuators

    OpenAIRE

    Kliffken, Markus Gustav

    1997-01-01

    In today's fly-by-wire systems, the primary flight control surfaces of modern commercial and transport aircraft are driven by electro-hydraulic linear actuators. Changing flight conditions as well as nonlinear actuator dynamics may be interpreted as parameter uncertainties of the linear actuator model, which demands a robust controller design. Here, the parameter space design is used for direct sampled-data controller synthesis. Therefore, a static output controller is chosen, the...

  28. The Robust Control Mixer Method for Reconfigurable Control Design By Using Model Matching Strategy

    DEFF Research Database (Denmark)

    Yang, Z.; Blanke, Mogens; Verhagen, M.

    2001-01-01

    This paper proposes a robust reconfigurable control synthesis method based on the combination of the control mixer method and robust H∞ control techniques through a model-matching strategy. The control mixer modules are extended from the conventional matrix form into the LTI system form. By regarding the nominal control system as the desired model, an augmented control system is constructed through the model-matching formulation, such that current robust control techniques can be used to synthesize these dynamical modules. An extension of this method with respect to performance recovery, in addition to functionality recovery, is also discussed under this framework. Compared with the conventional control mixer method, the proposed method considers the reconfigured system's stability, performance and robustness simultaneously. Finally, the proposed method is illustrated by a case study.

  29. Robust Learning Control Design for Quantum Unitary Transformations.

    Science.gov (United States)

    Wu, Chengzhi; Qi, Bo; Chen, Chunlin; Dong, Daoyi

    2017-12-01

    Robust control design for quantum unitary transformations has been recognized as a fundamental and challenging task in the development of quantum information processing due to unavoidable decoherence or operational errors in the experimental implementation of quantum operations. In this paper, we extend the systematic methodology of the sampling-based learning control (SLC) approach with a gradient flow algorithm for the design of robust quantum unitary transformations. The SLC approach first uses a "training" process to find an optimal control strategy robust against certain ranges of uncertainties. Then a number of randomly selected samples are tested and the performance is evaluated according to their average fidelity. The approach is applied to three typical examples of robust quantum transformation problems: robust quantum transformations in a three-level quantum system, in a superconducting quantum circuit, and in a spin chain system. Numerical results demonstrate the effectiveness of the SLC approach and show its potential applications in various implementations of quantum unitary transformations.
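
    The following is a minimal sketch of the SLC idea on a single qubit: an uncertain drift Hamiltonian is sampled over a range, the average gate fidelity over those samples is the training objective, and a finite-difference gradient ascent (a crude stand-in for the paper's gradient-flow algorithm) shapes a piecewise-constant control pulse. All parameters are invented for illustration.

    ```python
    import numpy as np
    from scipy.linalg import expm

    sz = 0.5 * np.array([[1, 0], [0, -1]], complex)   # drift (uncertain scale)
    sx = 0.5 * np.array([[0, 1], [1, 0]], complex)    # control Hamiltonian
    U_target = np.array([[0, 1], [1, 0]], complex)    # target: X gate
    dt, n_steps = 0.5, 10
    eps_samples = np.linspace(-0.2, 0.2, 5)           # "training" uncertainty samples

    def avg_fidelity(u):
        fids = []
        for eps in eps_samples:
            U = np.eye(2, dtype=complex)
            for uk in u:                              # piecewise-constant pulse
                U = expm(-1j * dt * ((1 + eps) * sz + uk * sx)) @ U
            fids.append(abs(np.trace(U_target.conj().T @ U)) / 2)
        return np.mean(fids)

    rng = np.random.default_rng(0)
    u = rng.normal(0, 0.5, n_steps)                   # initial control amplitudes
    for _ in range(150):                              # finite-difference gradient ascent
        base = avg_fidelity(u)
        grad = np.array([(avg_fidelity(u + 1e-4 * e) - base) / 1e-4
                         for e in np.eye(n_steps)])
        u += 0.5 * grad
    print("average fidelity over uncertainty samples:", round(avg_fidelity(u), 4))
    ```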

  30. Robustness Improvement of Superconducting Magnetic Energy Storage System in Microgrids Using an Energy Shaping Passivity-Based Control Strategy

    Directory of Open Access Journals (Sweden)

    Rui Hou

    2017-05-01

    Superconducting magnetic energy storage (SMES) systems, in which a proportional-integral (PI) method is usually used to control the SMES, have been used in microgrids to improve control performance. However, the robustness of PI-based SMES controllers may be unsatisfactory due to the high nonlinearity and coupling of the SMES system. In this study, the energy shaping passivity (ESP)-based control strategy, a novel nonlinear control based on the methodology of interconnection and damping assignment (IDA), is proposed for improving the robustness of SMES systems. A step-by-step design of the ESP-based method considering the robustness of SMES systems is presented, together with a comparative analysis of the performance of the ESP-based and PI control strategies. Simulation and experimental results prove that the ESP-based strategy achieves stronger robustness to system parameter uncertainties than conventional PI control. In addition, the ESP-based control method can reduce the eddy current losses of a SMES system, owing to a significant reduction of the 2nd and 3rd harmonics of the superconducting coil DC current.

  31. Spent nuclear fuel sampling strategy

    International Nuclear Information System (INIS)

    Bergmann, D.W.

    1995-01-01

    This report proposes a strategy for sampling the spent nuclear fuel (SNF) stored in the 105-K Basins (105-K East and 105-K West). This strategy will support decisions concerning path-forward SNF disposition efforts in the following areas: (1) SNF isolation activities, such as repackaging/overpacking to a newly constructed staging facility; (2) conditioning processes for fuel stabilization; and (3) interim storage options. This strategy was developed without following the Data Quality Objective (DQO) methodology; it is, however, intended to augment the SNF project DQOs. The SNF sampling is derived by evaluating the current storage condition of the SNF and the factors that affected SNF corrosion/degradation.

  32. Sampling stored product insect pests: a comparison of four statistical sampling models for probability of pest detection

    Science.gov (United States)

    Statistically robust sampling strategies form an integral component of grain storage and handling activities throughout the world. Developing sampling strategies to target biological pests such as insects in stored grain is inherently difficult due to species biology and behavioral characteristics. ...
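
    The point that insect biology complicates sampling can be made concrete with a detection-probability calculation: aggregated (negative binomial) insect counts make all-zero catches more likely than Poisson counts at the same mean density, so more samples are needed. The density and aggregation parameter below are hypothetical.

    ```python
    from scipy.stats import nbinom, poisson

    m, k = 0.05, 0.3     # mean insects per sample; aggregation parameter
    n_samples = 30

    # Probability that a single sample contains zero insects
    p0_poisson = poisson.pmf(0, m)                 # random dispersion
    p0_negbin = nbinom.pmf(0, k, k / (k + m))      # aggregated dispersion, same mean

    for name, p0 in [("Poisson", p0_poisson), ("neg. binomial", p0_negbin)]:
        detect = 1 - p0 ** n_samples               # detect >= 1 insect in n samples
        print(f"{name}: P(detect) with {n_samples} samples = {detect:.3f}")
    ```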

  33. Near infrared spectroscopy to estimate the temperature reached on burned soils: strategies to develop robust models.

    Science.gov (United States)

    Guerrero, César; Pedrosa, Elisabete T.; Pérez-Bejarano, Andrea; Keizer, Jan Jacob

    2014-05-01

    The temperature reached on soils is an important parameter for describing wildfire effects. However, methods for measuring the temperature reached on burned soils have been poorly developed. Recently, near-infrared (NIR) spectroscopy has been proposed as a valuable tool for this purpose. The NIR spectrum of a soil sample contains information on the organic matter (quantity and quality), clay (quantity and quality), minerals (such as carbonates and iron oxides) and water content. Some of these components are modified by heat, and each temperature causes a characteristic group of changes, leaving a typical fingerprint on the NIR spectrum. The technique requires a model (or calibration) in which the changes in the NIR spectra are related to the temperature reached. To develop the model, several aliquots are heated at known temperatures and used as standards in the calibration set. The model then makes it possible to estimate the temperature reached on a burned sample from its NIR spectrum. However, the estimation of the temperature reached using NIR spectroscopy is due to changes in several components and cannot be attributed to changes in a single soil component; we estimate the temperature reached through the interaction between temperature and the thermo-sensitive soil components. In addition, a uniform distribution of these components cannot be expected, even at small scales, so the proportion of these soil components can vary spatially across a site. This variation will be present both in the samples used to construct the model and in the samples affected by the wildfire. Therefore, the strategies followed to develop robust models should focus on managing this expected variation. In this work we compared the prediction accuracy of models constructed with different approaches, designed to provide insights into how to distribute the efforts needed for the development of robust models.
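
    A sketch of the kind of calibration such a model involves, assuming scikit-learn is available: partial least squares (PLS) regression, a common workhorse for NIR calibrations, cross-validated on synthetic "spectra" whose bands vary with the heating temperature. The band positions, noise level and component count are all invented.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n, p = 120, 200                      # heated aliquots x NIR wavelengths
    temps = rng.uniform(20, 500, n)      # known heating temperatures (deg C)

    # Synthetic spectra: two temperature-sensitive bands plus noise
    X = rng.normal(0, 0.05, (n, p))
    X[:, 50] += 1 - temps / 500          # e.g., organic-matter band fades with heat
    X[:, 120] += temps / 500             # e.g., mineral band grows with heat

    pls = PLSRegression(n_components=5)
    r2 = cross_val_score(pls, X, temps, cv=5, scoring="r2")
    print("cross-validated R2 per fold:", np.round(r2, 2))
    ```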

  34. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). The tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach proposed here). The random sampling approach involves the use of a two-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outliers into the real data set: one to two concentration values were inflated by 10% to 40% from their original values. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability, and calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the two-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a two-phase random sampling approach is proposed for the robust estimation of the tissue-to-plasma ratio from extremely sparsely sampled data.
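
    A simplified illustration of resampling a tissue-to-plasma AUC ratio from destructively sampled data (one paired observation per subject per time point); this is a plain single-phase resampling loop, not the paper's two-phase algorithm, and all concentrations are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    times = np.array([0.5, 1, 2, 4, 8])             # sampling times (h)
    # Rows = subjects sampled destructively at each time point (4 per time)
    plasma = np.array([[8.1, 6.9, 4.8, 2.5, 0.9],
                       [7.4, 6.2, 5.1, 2.2, 1.1],
                       [8.8, 7.5, 4.4, 2.8, 0.8],
                       [7.9, 6.6, 4.9, 2.4, 1.0]])
    tissue = 0.5 * plasma + rng.normal(0, 0.1, plasma.shape)  # paired tissue samples

    ratios = []
    cols = np.arange(len(times))
    for _ in range(2000):
        subj = rng.integers(0, plasma.shape[0], size=len(times))  # one subject per time
        auc_p = np.trapz(plasma[subj, cols], times)               # trapezoidal AUC
        auc_t = np.trapz(tissue[subj, cols], times)
        ratios.append(auc_t / auc_p)

    lo, hi = np.percentile(ratios, [2.5, 97.5])
    print(f"tissue-to-plasma ratio: {np.mean(ratios):.3f} (95% interval {lo:.3f}-{hi:.3f})")
    ```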

  35. Parallel Solution of Robust Nonlinear Model Predictive Control Problems in Batch Crystallization

    Directory of Open Access Journals (Sweden)

    Yankai Cao

    2016-06-01

    Representing the uncertainties with a set of scenarios, the optimization problem resulting from a robust nonlinear model predictive control (NMPC) strategy at each sampling instance can be viewed as a large-scale stochastic program. This paper solves these optimization problems using the parallel Schur complement method developed to solve stochastic programs on distributed and shared memory machines. The control strategy is illustrated with a case study of a multidimensional unseeded batch crystallization process. For this application, a robust NMPC based on min–max optimization guarantees satisfaction of all state and input constraints for a set of uncertainty realizations, and also provides better robust performance compared with open-loop optimal control, nominal NMPC, and robust NMPC minimizing the expected performance at each sampling instance. The performance of robust NMPC can be improved by generating optimization scenarios using Bayesian inference. With the efficient parallel solver, the solution time of one optimization problem is reduced from 6.7 min to 0.5 min, allowing for real-time application.

  2. Production and robustness of a Cacao agroecosystem: effects of two contrasting types of management strategies.

    Science.gov (United States)

    Sabatier, Rodolphe; Wiegand, Kerstin; Meyer, Katrin

    2013-01-01

    Ecological intensification, i.e. relying on ecological processes to replace chemical inputs, is often presented as the ideal alternative to conventional farming based on an intensive use of chemicals. It is said both to maintain high yield and to provide more robustness to the agroecosystem. However, few studies have compared the two types of management with respect to their consequences for production and robustness toward perturbation. In this study our aim is to assess the productive performance and robustness toward diverse perturbations of a Cacao agroecosystem managed with two contrasting groups of strategies: one relying on a high level of pesticides and a second relying on low levels of pesticides. We conducted this study using a dynamical model of a Cacao agroecosystem that includes Cacao production dynamics and the dynamics of three insects: a pest (the Cacao Pod Borer, Conopomorpha cramerella) and two characteristic but unspecified beneficial insects (a pollinator of Cacao and a parasitoid of the Cacao Pod Borer). Our results showed two opposite behaviors of the Cacao agroecosystem depending on its management: an agroecosystem relying on a high input of pesticides and showing low ecosystem functioning, and an agroecosystem with low inputs, relying on a high functioning of the ecosystem. From the production point of view, neither type of management clearly outclassed the other, and their ranking depended on the type of pesticide used. From the robustness point of view, the two types of management performed differently when subjected to different types of perturbations. Ecologically intensive systems were more robust to pest outbreaks and perturbations related to pesticide characteristics, while chemically intensive systems were more robust to Cacao production and management-related perturbations.

  3. Robust non-parametric one-sample tests for the analysis of recurrent events.

    Science.gov (United States)

    Rebora, Paola; Galimberti, Stefania; Valsecchi, Maria Grazia

    2010-12-30

    One-sample non-parametric tests are proposed here for inference on recurring events. The focus is on the marginal mean function of events and the basis for inference is the standardized distance between the observed and the expected number of events under a specified reference rate. Different weights are considered in order to account for various types of alternative hypotheses on the mean function of the recurrent events process. A robust version and a stratified version of the test are also proposed. The performance of these tests was investigated through simulation studies under various underlying event generation processes, such as homogeneous and nonhomogeneous Poisson processes, autoregressive and renewal processes, with and without frailty effects. The robust versions of the test have been shown to be suitable in a wide variety of event generating processes. The motivating context is a study on gene therapy in a very rare immunodeficiency in children, where a major end-point is the recurrence of severe infections. Robust non-parametric one-sample tests for recurrent events can be useful to assess efficacy and especially safety in non-randomized studies or in epidemiological studies for comparison with a standard population. Copyright © 2010 John Wiley & Sons, Ltd.
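
    The basis for inference described above, a standardized distance between observed and expected event counts under a reference rate, can be illustrated with a simple unweighted Poisson-style version. The sketch below is a deliberate simplification (the paper's tests add weights, robust variance estimates, and stratification); the data are hypothetical.

      import numpy as np
      from scipy.stats import norm

      events = np.array([3, 1, 4, 0, 2, 5, 1, 2])                     # events per subject
      follow_up = np.array([2.0, 1.5, 3.0, 1.0, 2.5, 3.5, 1.0, 2.0])  # person-years
      ref_rate = 1.2                                                  # reference events/person-year

      O = events.sum()                    # observed number of events
      E = ref_rate * follow_up.sum()      # expected number under the reference rate
      z = (O - E) / np.sqrt(E)            # standardized distance (Poisson variance)
      print(f"O={O}, E={E:.1f}, z={z:.2f}, two-sided p={2 * norm.sf(abs(z)):.3f}")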

  4. User-driven sampling strategies in image exploitation

    Science.gov (United States)

    Harvey, Neal; Porter, Reid

    2013-12-01

    Visual analytics and interactive machine learning both try to leverage the complementary strengths of humans and machines to solve complex data exploitation tasks. These fields overlap most significantly when training is involved: the visualization or machine learning tool improves over time by exploiting observations of the human-computer interaction. This paper focuses on one aspect of the human-computer interaction that we call user-driven sampling strategies. Unlike relevance feedback and active learning sampling strategies, where the computer selects which data to label at each iteration, we investigate situations where the user selects which data is to be labeled at each iteration. User-driven sampling strategies can emerge in many visual analytics applications but they have not been fully developed in machine learning. User-driven sampling strategies suggest new theoretical and practical research questions for both visualization science and machine learning. In this paper we identify and quantify the potential benefits of these strategies in a practical image analysis application. We find user-driven sampling strategies can sometimes provide significant performance gains by steering tools towards local minima that have lower error than tools trained with all of the data. In preliminary experiments we find these performance gains are particularly pronounced when the user is experienced with the tool and application domain.

  5. A Robust Longitudinal Control Strategy of Platoons under Model Uncertainties and Time Delays

    Directory of Open Access Journals (Sweden)

    Na Chen

    2018-01-01

    Full Text Available Automated vehicles are designed to free drivers from driving tasks and are expected to improve traffic safety and efficiency when connected via vehicle-to-vehicle communication, that is, as connected automated vehicles (CAVs). Time delays and model uncertainties in vehicle control systems pose challenges for automated driving in the real world. Ignoring them may render the performance of cooperative driving systems unsatisfactory or even unstable. This paper aims to design a robust and flexible platooning control strategy for CAVs. A centralized control method is presented, where the leader of a CAV platoon collects information from followers, computes the desired accelerations of all controlled vehicles, and broadcasts the desired accelerations to followers. The robust platooning is formulated as a Min-Max Model Predictive Control (MM-MPC) problem, where optimal accelerations are generated to minimize the cost function under the worst case taken over the possible models. The proposed method is flexible in that it can be applied both to homogeneous platoons and to heterogeneous platoons with mixed human-driven and automated vehicles. A third-order linear vehicle model with fixed feedback delay and stochastic actuator lag is used to predict the platoon behavior. Actuator lag is assumed to vary randomly with an unknown distribution but a known upper bound. The controller regulates platoon accelerations over a time horizon to minimize a cost function representing driving safety, efficiency, and ride comfort, subject to speed limits, a plausible acceleration range, and minimal net spacing. The designed strategy is tested by simulating homogeneous and heterogeneous platoons in a number of typical and extreme scenarios to assess system stability and performance. The test results demonstrate that the designed control strategy for CAVs can ensure the robustness of stability and performance against model uncertainties

  6. Evaluation of sampling strategies to estimate crown biomass

    Directory of Open Access Journals (Sweden)

    Krishna P Poudel

    2015-01-01

    Full Text Available Background Depending on tree and site characteristics, crown biomass accounts for a significant portion of the total aboveground biomass in the tree. Crown biomass estimation is useful for different purposes, including evaluating the economic feasibility of crown utilization for energy production or forest products, fuel load assessments and fire management strategies, and wildfire modeling. However, crown biomass is difficult to predict because of the variability within and among species and sites. Thus the allometric equations used for predicting crown biomass should be based on data collected with precise and unbiased sampling strategies. In this study, we evaluated the performance of different sampling strategies for estimating crown biomass and the effect of sample size on those estimates. Methods Using data collected from 20 destructively sampled trees, we evaluated 11 different sampling strategies using six evaluation statistics: bias, relative bias, root mean square error (RMSE), relative RMSE, amount of biomass sampled, and relative biomass sampled. We also evaluated the performance of the selected sampling strategies when different numbers of branches (3, 6, 9, and 12) are selected from each tree. A tree-specific log-linear model with branch diameter and branch length as covariates was used to obtain individual branch biomass. Results Compared to all other methods, stratified sampling with the probability-proportional-to-size estimation technique produced better results when three or six branches per tree were sampled. However, systematic sampling with the ratio estimation technique was best when at least nine branches per tree were sampled. Under the stratified sampling strategy, selecting an unequal number of branches per stratum produced results approximately similar to simple random sampling, but it further decreased RMSE when information on branch diameter was used in the design and estimation phases. Conclusions Use of
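
    As a rough illustration of one of the strategies compared above, the Python sketch below draws branches with probability proportional to size (PPS) and applies the Hansen-Hurwitz estimator. The allometry and data are hypothetical, and this is not the authors' exact design.

      import numpy as np

      rng = np.random.default_rng(7)

      # Hypothetical tree: basal diameter (cm) of every branch and its true
      # biomass (kg); in practice only the sampled branches are weighed.
      diam = rng.uniform(1.0, 8.0, size=30)
      biomass = 0.15 * diam**2.2 * rng.lognormal(0.0, 0.1, size=30)

      # PPS design: selection probability proportional to branch basal area.
      size = diam**2
      p = size / size.sum()

      n = 6
      idx = rng.choice(len(diam), size=n, replace=True, p=p)

      # Hansen-Hurwitz estimator of total crown biomass.
      total_hat = np.mean(biomass[idx] / p[idx])
      print(f"estimated crown biomass: {total_hat:.1f} kg (true {biomass.sum():.1f} kg)")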

  7. Estimator-based multiobjective robust control strategy for an active pantograph in high-speed railways

    DEFF Research Database (Denmark)

    Lu, Xiaobing; Liu, Zhigang; Song, Yang

    2018-01-01

    Active control of the pantograph is one of the promising measures for decreasing fluctuation in the contact force between the pantograph and the catenary. In this paper, an estimator-based multiobjective robust control strategy is proposed for an active pantograph, which consists of a state estim...

  8. Direct infusion-SIM as fast and robust method for absolute protein quantification in complex samples

    Directory of Open Access Journals (Sweden)

    Christina Looße

    2015-06-01

    Full Text Available Relative and absolute quantification of proteins in biological and clinical samples are common approaches in proteomics. Until now, targeted protein quantification has mainly been performed using a combination of HPLC-based peptide separation and selected reaction monitoring on triple quadrupole mass spectrometers. Here, we show for the first time the potential of absolute quantification using a direct infusion strategy combined with single ion monitoring (SIM) on a Q Exactive mass spectrometer. Using complex membrane fractions of Escherichia coli, we absolutely quantified the recombinantly expressed heterologous human cytochrome P450 monooxygenase 3A4 (CYP3A4), comparing direct infusion-SIM with conventional HPLC-SIM. Direct infusion-SIM showed only 14.7% (±4.1, s.e.m.) deviation on average compared to HPLC-SIM, and a reduced processing and analysis time of 4.5 min for a single sample (which could be further decreased to 30 s), in contrast to 65 min for the LC–MS method. In summary, our simplified workflow using direct infusion-SIM provides a fast and robust method for the quantification of proteins in complex protein mixtures.

  9. Developing a Robust Strategy Map in Balanced Scorecard Model Using Scenario Planning

    Directory of Open Access Journals (Sweden)

    Mostafa Jafari

    2015-01-01

    Full Text Available The key to successful strategy implementation in an organization is for people in the organization to understand it, which requires the establishment of complicated but vital processes whereby intangible assets are converted into tangible outputs. In this regard, a strategy map is a useful tool that helps execute this difficult task. However, such maps are typically developed based on ambiguous cause-effect relationships that result from the extrapolation of past data and flawed links with possible futures. If the strategy map is a mere reflection of the status quo rather than of future conditions, and does not embrace real-world uncertainties, it will endanger the organization, since it posits that the current situation will continue. To compensate for this deficiency, the environmental scenarios affecting an organization were identified in the present study. Then the strategy map was developed in the form of a scenario-based balanced scorecard. In addition, the effect of environmental changes on the components of the strategy map was investigated using the strategy maps illustrated over time together with the corresponding cash flow vectors. Subsequently, a method was proposed to calculate the degree of robustness of every component of the strategy map under the contingency of every scenario. Finally, the results were applied to a post office.

  10. A systematic examination of a random sampling strategy for source apportionment calculations.

    Science.gov (United States)

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is to set up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. The method is also compared to a numerical integration solution for a two-source situation where source variability is also included. A general observation from this examination is that the variability of the source profiles affects not only the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
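
    The bias mechanism described above is easy to reproduce: when the source signatures are sampled from their distributions rather than fixed at their means, both the spread and the central estimate of the fractional contributions shift. A minimal Python sketch for the two-source, one-marker case (hypothetical numbers) follows.

      import numpy as np

      rng = np.random.default_rng(0)

      # Two-source, one-marker mixing (e.g., an isotopic signature):
      #   f1*m1 + f2*m2 = m_mix,  f1 + f2 = 1  =>  f1 = (m_mix - m2)/(m1 - m2)
      m_mix = -26.0
      n = 100_000

      # Source signatures drawn from their (hypothetical) variability.
      m1 = rng.normal(-28.5, 0.8, n)   # source 1, e.g., fossil
      m2 = rng.normal(-22.0, 1.0, n)   # source 2, e.g., biomass

      f1 = (m_mix - m2) / (m1 - m2)
      f1 = f1[(f1 >= 0) & (f1 <= 1)]   # keep physically meaningful solutions

      print(f"source 1 fraction: median {np.median(f1):.2f}, "
            f"95% interval {np.percentile(f1, 2.5):.2f}-{np.percentile(f1, 97.5):.2f}")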

  11. Validated sampling strategy for assessing contaminants in soil stockpiles

    International Nuclear Information System (INIS)

    Lame, Frank; Honders, Ton; Derksen, Giljam; Gadella, Michiel

    2005-01-01

    Dutch legislation on the reuse of soil requires a sampling strategy to determine the degree of contamination. This sampling strategy was developed in three stages. Its main aim is to obtain a single analytical result, representative of the true mean concentration of the soil stockpile. The development process started with an investigation into how sample pre-treatment could be used to obtain representative results from composite samples of heterogeneous soil stockpiles. Combining a large number of random increments allows stockpile heterogeneity to be fully represented in the sample. The resulting pre-treatment method was then combined with a theoretical approach to determine the necessary number of increments per composite sample. At the second stage, the sampling strategy was evaluated using computerised models of contaminant heterogeneity in soil stockpiles. The now theoretically based sampling strategy was implemented by the Netherlands Centre for Soil Treatment in 1995. It was applied to all types of soil stockpiles, ranging from clean to heavily contaminated, over a period of four years. This resulted in a database containing the analytical results of 2570 soil stockpiles. At the final stage these results were used for a thorough validation of the sampling strategy. It was concluded that the model approach has indeed resulted in a sampling strategy that achieves analytical results representative of the mean concentration of soil stockpiles. - A sampling strategy that ensures analytical results representative of the mean concentration in soil stockpiles is presented and validated
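
    The rationale for combining a large number of random increments can be illustrated numerically: the more increments a composite sample contains, the better it represents stockpile heterogeneity. The Python sketch below uses an invented lognormal-plus-hot-spot stockpile, not the validated Dutch strategy itself.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical heterogeneous stockpile: lognormal concentrations with a
      # small fraction of strongly contaminated hot spots.
      def stockpile_draw(n):
          base = rng.lognormal(mean=2.0, sigma=0.8, size=n)
          hot = rng.random(n) < 0.02
          return np.where(hot, base * 20, base)

      true_mean = stockpile_draw(1_000_000).mean()

      # Error of a composite sample versus its number of random increments.
      for k in (5, 20, 50, 100):
          composites = np.array([stockpile_draw(k).mean() for _ in range(2000)])
          rel_err = np.std(composites) / true_mean
          print(f"{k:3d} increments: relative standard error {rel_err:.2%}")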

  12. Multi-Phase Sub-Sampling Fractional-N PLL with soft loop switching for fast robust locking

    NARCIS (Netherlands)

    Liao, Dongyi; Dai, FA Foster; Nauta, Bram; Klumperink, Eric A.M.

    2017-01-01

    This paper presents a low phase noise sub-sampling PLL (SSPLL) with multi-phase outputs. Automatic soft switching between the sub-sampling phase loop and frequency loop is proposed to improve robustness against perturbations and interferences that may cause a traditional SSPLL to lose lock. A

  13. Robustness analysis of complex networks with power decentralization strategy via flow-sensitive centrality against cascading failures

    Science.gov (United States)

    Guo, Wenzhang; Wang, Hao; Wu, Zhengping

    2018-03-01

    Most existing cascading-failure mitigation strategies for power grids based on complex networks ignore the impact of electrical characteristics on dynamic performance. In this paper, the robustness of the power grid under a power decentralization strategy is analysed through cascading failure simulation based on AC flow theory. The flow-sensitive (FS) centrality is introduced by integrating topological features and electrical properties to help determine the siting of generation nodes. Simulation results for the IEEE bus systems show that the flow-sensitive centrality method is a more stable and accurate approach and can enhance the robustness of the network remarkably. Through the study of the optimal flow-sensitive centrality selection for different networks, we find that the robustness of a network with a pronounced small-world effect depends more on the contribution of generation nodes detected by community structure; otherwise, the contribution of generation nodes with an important influence on power flow is more critical. In addition, community structure plays a significant role in balancing the power flow distribution and further slowing the propagation of failures. These results are useful in power grid planning and cascading failure prevention.

  14. Climate change on the Colorado River: a method to search for robust management strategies

    Science.gov (United States)

    Keefe, R.; Fischbach, J. R.

    2010-12-01

    The Colorado River is a principal source of water for the seven Basin States, providing approximately 16.5 maf per year to users in the southwestern United States and Mexico. Though the dynamics of the river ensure Upper Basin users a reliable supply of water, the three Lower Basin states (California, Nevada, and Arizona) are in danger of delivery interruptions as Upper Basin demand increases and climate change threatens to reduce future streamflows. In light of the recent drought and uncertain effects of climate change on Colorado River flows, we evaluate the performance of a suite of policies modeled after the shortage sharing agreement adopted in December 2007 by the Department of the Interior. We build on the current literature by using a simplified model of the Lower Colorado River to consider future streamflow scenarios given climate change uncertainty. We also generate different scenarios of parametric consumptive use growth in the Upper Basin and evaluate alternate management strategies in light of these uncertainties. Uncertainty associated with climate change is represented with a multi-model ensemble from the literature, using a nearest neighbor perturbation to increase the size of the ensemble. We use Robust Decision Making to compare near-term or long-term management strategies across an ensemble of plausible future scenarios with the goal of identifying one or more approaches that are robust to alternate assumptions about the future. This method entails using search algorithms to quantitatively identify vulnerabilities that may threaten a given strategy (including the current operating policy) and characterize key tradeoffs between strategies under different scenarios.

  15. Robust online tracking via adaptive samples selection with saliency detection

    Science.gov (United States)

    Yan, Jia; Chen, Xi; Zhu, QiuPing

    2013-12-01

    Online tracking has been shown to be successful in tracking previously unknown objects. However, two important factors lead to the drift problem in online tracking: how to select exactly labeled samples even when the target locations are inaccurate, and how to handle confusors which have features similar to the target. In this article, we propose a robust online tracking algorithm with adaptive sample selection based on saliency detection to overcome the drift problem. To avoid degrading the classifiers with misaligned samples, we introduce a saliency detection method to our tracking problem. Saliency maps and the strong classifiers are combined to extract the most correct positive samples. Our approach employs a simple yet effective saliency detection algorithm based on image spectral residual analysis. Furthermore, instead of using random patches as negative samples, we propose a reasonable selection criterion in which both saliency confidence and similarity are considered, with the benefit that confusors in the surrounding background are incorporated into the classifier update process before drift occurs. The tracking task is formulated as binary classification via an online boosting framework. Experimental results on several challenging video sequences demonstrate the accuracy and stability of our tracker.
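
    The spectral residual saliency detector referred to above (subtract the smoothed log-amplitude spectrum from the original and invert with the retained phase) is compact enough to sketch directly. The following Python sketch uses assumed filter sizes and a toy image; the parameter choices are illustrative only.

      import numpy as np
      from scipy.ndimage import uniform_filter, gaussian_filter

      def spectral_residual_saliency(img):
          # Saliency map from the spectral residual of the log-amplitude.
          f = np.fft.fft2(img)
          log_amp = np.log(np.abs(f) + 1e-8)
          phase = np.angle(f)
          residual = log_amp - uniform_filter(log_amp, size=3)   # drop the smooth part
          saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase)))**2
          return gaussian_filter(saliency, sigma=2.5)

      # Toy example: a bright patch on a noisy background should stand out.
      rng = np.random.default_rng(3)
      img = rng.normal(0.0, 0.05, (64, 64))
      img[28:36, 28:36] += 1.0
      sal = spectral_residual_saliency(img)
      print("peak saliency at:", np.unravel_index(sal.argmax(), sal.shape))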

  16. Robust identification of noncoding RNA from transcriptomes requires phylogenetically-informed sampling.

    Directory of Open Access Journals (Sweden)

    Stinus Lindgreen

    2014-10-01

    Full Text Available Noncoding RNAs are integral to a wide range of biological processes, including translation, gene regulation, host-pathogen interactions and environmental sensing. While genomics is now a mature field, our capacity to identify noncoding RNA elements in bacterial and archaeal genomes is hampered by the difficulty of de novo identification. The emergence of new technologies for characterizing transcriptome outputs, notably RNA-seq, is improving noncoding RNA identification and expression quantification. However, a major challenge is to robustly distinguish functional outputs from transcriptional noise. To establish whether annotation of existing transcriptome data has effectively captured all functional outputs, we analysed over 400 publicly available RNA-seq datasets spanning 37 different Archaea and Bacteria. Using comparative tools, we identified close to a thousand highly expressed candidate noncoding RNAs. However, our analyses reveal that the capacity to identify noncoding RNA outputs is strongly dependent on phylogenetic sampling. Surprisingly, and in stark contrast to protein-coding genes, the phylogenetic window for effective use of comparative methods is perversely narrow: aggregating public datasets only produced one phylogenetic cluster where these tools could be used to robustly separate unannotated noncoding RNAs from a null hypothesis of transcriptional noise. Our results show that for the full potential of transcriptomics data to be realized, a change in experimental design is paramount: effective transcriptomics requires phylogeny-aware sampling.

  17. Improving the Robustness of Electromyogram-Pattern Recognition for Prosthetic Control by a Postprocessing Strategy

    Directory of Open Access Journals (Sweden)

    Xu Zhang

    2017-09-01

    Full Text Available The electromyogram (EMG) contains rich information for motion decoding. As one of its major applications, EMG pattern recognition (PR)-based control of prostheses has been proposed and investigated in the field of rehabilitation robotics for decades. These prostheses can offer a higher level of dexterity compared to the commercially available ones. However, limited progress has been made toward clinical application of EMG-PR-based prostheses, due to their unsatisfactory robustness against various interferences during daily use. These interferences may lead to misclassifications of motion intentions, which damage the control performance of EMG-PR-based prostheses. A number of studies have applied methods that use a postprocessing stage to determine the current motion output based on previous outputs or other information, which have proved effective in reducing erroneous outputs. In this study, we proposed a postprocessing strategy that locks the outputs during constant contraction to block out occasional misclassifications, upon detecting the motion onset using a threshold. The strategy was investigated using three different motion onset detectors, namely the mean absolute value, the Teager–Kaiser energy operator, and the mechanomyogram (MMG). Our results indicate that the proposed strategy could suppress erroneous outputs, during rest and constant contractions in particular. In addition, with MMG as the motion onset detector, the strategy was found to produce the most significant improvement in performance, reducing the total errors by up to around 50% (from 22.9% to 11.5%) in comparison to the original classification output in the online test, and it was the most robust against threshold value changes. We speculate that motion onset detectors that are both smooth and responsive would further enhance the efficacy of the proposed postprocessing strategy, which would facilitate the clinical application of EMG-PR-based prosthetic control.
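
    The locking idea is straightforward to prototype. The Python sketch below uses a mean-absolute-value onset detector and latches the classifier output while the contraction persists; the signal, window length, and threshold are all invented for illustration, and the study's MMG-based detector is not reproduced.

      import numpy as np

      def causal_mav(x, win=32):
          # trailing-window mean absolute value (a causal EMG amplitude estimate)
          return np.array([np.abs(x[max(0, i - win + 1):i + 1]).mean()
                           for i in range(len(x))])

      def lock_outputs(labels, emg, win=32, threshold=0.1):
          # Latch the classifier decision at motion onset and hold it while the
          # contraction persists, suppressing occasional misclassifications.
          mav = causal_mav(emg, win)
          locked = labels.copy()
          active, held = False, labels[0]
          for i, m in enumerate(mav):
              if m > threshold and not active:
                  active, held = True, labels[i]   # onset: latch current decision
              elif m <= threshold:
                  active = False                   # offset: release the latch
              if active:
                  locked[i] = held
          return locked

      # Toy stream: true motion is class 1 with two spurious flips to class 2.
      rng = np.random.default_rng(5)
      emg = np.concatenate([rng.normal(0, 0.02, 200), rng.normal(0, 0.5, 400)])
      labels = np.where(np.arange(600) < 200, 0, 1)
      labels[350], labels[460] = 2, 2              # injected misclassifications
      print("errors before:", int((labels[200:] != 1).sum()),
            "after:", int((lock_outputs(labels, emg)[200:] != 1).sum()))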

  18. A robust control strategy for a class of distributed network with transmission delays

    DEFF Research Database (Denmark)

    Vahid Naghavi, S.; A. Safavi, A.; Khooban, Mohammad Hassan

    2016-01-01

    Purpose The purpose of this paper is the design of a robust model predictive controller for distributed networked systems with transmission delays. Design/methodology/approach The overall system is composed of a number of interconnected nonlinear subsystems with time-varying transmission...... as an optimization problem of a “worst-case” objective function over an infinite moving horizon. Findings The aim is to propose a control synthesis approach that depends on the nonlinearity and time-varying delay characteristics. The MPC problem is represented in a time-varying delayed state feedback structure....... Then the synthesis sufficient condition is provided in the form of a linear matrix inequality (LMI) optimization and is solved online at each time instant. In the rest, an LMI-based decentralized observer-based robust model predictive control strategy is proposed. Originality/value The authors develop RMPC...

  19. Robust Control Mixer Method for Reconfigurable Control Design Using Model Matching Strategy

    DEFF Research Database (Denmark)

    Yang, Zhenyu; Blanke, Mogens; Verhagen, Michel

    2007-01-01

    A novel control mixer method for reconfigurable control designs is developed. The proposed method extends the matrix-form of the conventional control mixer concept into an LTI dynamic system-form. The H_inf control technique is employed for these dynamic module designs after an augmented control...... system is constructed through a model-matching strategy. The stability, performance and robustness of the reconfigured system can be guaranteed when some conditions are satisfied. To illustrate the effectiveness of the proposed method, a robot system subjected to failures is used to demonstrate...

  20. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  1. MPLEx: a Robust and Universal Protocol for Single-Sample Integrative Proteomic, Metabolomic, and Lipidomic Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Nakayasu, Ernesto S.; Nicora, Carrie D.; Sims, Amy C.; Burnum-Johnson, Kristin E.; Kim, Young-Mo; Kyle, Jennifer E.; Matzke, Melissa M.; Shukla, Anil K.; Chu, Rosalie K.; Schepmoes, Athena A.; Jacobs, Jon M.; Baric, Ralph S.; Webb-Robertson, Bobbie-Jo; Smith, Richard D.; Metz, Thomas O.; Chia, Nicholas

    2016-05-03

    ABSTRACT

    Integrative multi-omics analyses can empower more effective investigation and complete understanding of complex biological systems. Despite recent advances in a range of omics analyses, multi-omic measurements of the same sample are still challenging and current methods have not been well evaluated in terms of reproducibility and broad applicability. Here we adapted a solvent-based method, widely applied for extracting lipids and metabolites, to add proteomics to mass spectrometry-based multi-omics measurements. The metabolite, protein, and lipid extraction (MPLEx) protocol proved to be robust and applicable to a diverse set of sample types, including cell cultures, microbial communities, and tissues. To illustrate the utility of this protocol, an integrative multi-omics analysis was performed using a lung epithelial cell line infected with Middle East respiratory syndrome coronavirus, which showed the impact of this virus on the host glycolytic pathway and also suggested a role for lipids during infection. The MPLEx method is a simple, fast, and robust protocol that can be applied for integrative multi-omic measurements from diverse sample types (e.g., environmental, in vitro, and clinical).

    IMPORTANCE In systems biology studies, the integration of multiple omics measurements (i.e., genomics, transcriptomics, proteomics, metabolomics, and lipidomics) has been shown to provide a more complete and informative view of biological pathways. Thus, the prospect of extracting different types of molecules (e.g., DNAs, RNAs, proteins, and metabolites) and performing multiple omics measurements on single samples is very attractive, but such studies are challenging due to the fact that the extraction conditions differ according to the molecule type. Here, we adapted an organic solvent-based extraction method that demonstrated

  2. Robustness of Structures

    DEFF Research Database (Denmark)

    Faber, Michael Havbro; Vrouwenvelder, A.C.W.M.; Sørensen, John Dalsgaard

    2011-01-01

    In 2005, the Joint Committee on Structural Safety (JCSS) together with Working Commission (WC) 1 of the International Association of Bridge and Structural Engineering (IABSE) organized a workshop on robustness of structures. Two important decisions resulted from this workshop, namely the development of a joint European project on structural robustness under the COST (European Cooperation in Science and Technology) programme and the decision to develop a more elaborate document on structural robustness in collaboration between experts from the JCSS and the IABSE. Accordingly, a project titled ‘COST TU0601: Robustness of Structures’ was initiated in February 2007, aiming to provide a platform for exchanging and promoting research in the area of structural robustness and to provide a basic framework, together with methods, strategies and guidelines enhancing robustness of structures...

  3. A robust control strategy for mitigating renewable energy fluctuations in a real hybrid power system combined with SMES

    Science.gov (United States)

    Magdy, G.; Shabib, G.; Elbaset, Adel A.; Qudaih, Yaser; Mitani, Yasunori

    2018-05-01

    Utilizing Renewable Energy Sources (RESs) is attracting great attention as a solution to future energy shortages. However, the irregular nature of RESs and random load deviations cause large frequency and voltage fluctuations. Therefore, in order to benefit from the maximum capacity of RESs, a robust strategy for mitigating power fluctuations from RESs must be applied. Hence, this paper proposes a design of Load Frequency Control (LFC) coordinated with Superconducting Magnetic Energy Storage (SMES) technology (i.e., an auxiliary LFC), using an optimal PID controller based on Particle Swarm Optimization (PSO), in the Egyptian Power System (EPS) considering high penetration of photovoltaic (PV) power generation. Thus, from the perspective of LFC, the robust control strategy is proposed to maintain the nominal system frequency and mitigate power fluctuations from RESs against all disturbance sources for the EPS with its multi-source environment. The EPS is decomposed into three dynamic subsystems, comprising non-reheat, reheat and hydro power plants, taking into consideration the system nonlinearity. The results of nonlinear Matlab/Simulink simulations of the EPS combined with the SMES system and PV solar power confirm that the proposed control strategy achieves robust stability by reducing transient time, minimizing frequency deviations, maintaining the system frequency, preventing conventional generators from exceeding their power ratings during load disturbances, and mitigating the power fluctuations from the RESs.

  4. An efficient, robust, and inexpensive grinding device for herbal samples like Cinchona bark

    DEFF Research Database (Denmark)

    Hansen, Steen Honoré; Holmfred, Else Skovgaard; Cornett, Claus

    2015-01-01

    An effective, robust, and inexpensive grinding device for the grinding of herb samples like bark and roots was developed by rebuilding a commercially available coffee grinder. The grinder was constructed to be able to provide various particle sizes, to be easy to clean, and to have a minimum...... of dead volume. The recovery of the sample when grinding as little as 50 mg of crude Cinchona bark was about 60%. Grinding is performed in seconds with no rise in temperature, and the grinder is easily disassembled to be cleaned. The influence of the particle size of the obtained powders on the recovery...

  5. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
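
    The difference between the two initial designs is easy to state in code: both place one point per stratum in every dimension, but random LHS jitters the point within its cell while midpoint LHS centres it. The Python sketch below compares the maximin (minimum pairwise distance) criterion of the two initial designs before any optimization, which is only a rough proxy for the study's full OLHS comparison.

      import numpy as np
      from scipy.spatial.distance import pdist

      rng = np.random.default_rng(11)

      def lhs(n, d, midpoint):
          # Latin hypercube: one point per stratum in every dimension.
          u = 0.5 if midpoint else rng.random((n, d))
          perms = np.column_stack([rng.permutation(n) for _ in range(d)])
          return (perms + u) / n

      def maximin(x):
          return pdist(x).min()   # common space-filling criterion

      n, d, reps = 30, 5, 200
      rand_scores = [maximin(lhs(n, d, midpoint=False)) for _ in range(reps)]
      mid_scores = [maximin(lhs(n, d, midpoint=True)) for _ in range(reps)]
      print(f"random-LHS maximin: {np.mean(rand_scores):.3f}  "
            f"midpoint-LHS maximin: {np.mean(mid_scores):.3f}")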

  6. Practical experiences with an extended screening strategy for genetically modified organisms (GMOs) in real-life samples.

    Science.gov (United States)

    Scholtens, Ingrid; Laurensse, Emile; Molenaar, Bonnie; Zaaijer, Stephanie; Gaballo, Heidi; Boleij, Peter; Bak, Arno; Kok, Esther

    2013-09-25

    Nowadays most animal feed products imported into Europe have a GMO (genetically modified organism) label, meaning that they contain European Union (EU)-authorized GMOs. For enforcement of these labeling requirements it is necessary, with the rising number of EU-authorized GMOs, to perform an increasing number of analyses. In addition, it is necessary to test products for the potential presence of EU-unauthorized GMOs. Analysis for EU-authorized and -unauthorized GMOs in animal feed has thus become laborious and expensive. Initial screening steps may reduce the number of GMO identification methods that need to be applied, but with the increasing diversity of GMOs, element-based screening has also become more complex. In the present study, an informative, detailed 24-element screening and subsequent identification strategy was applied to 50 animal feed samples. Almost all feed samples were labeled as containing GMO-derived materials. The main goal of the study was therefore to investigate whether a detailed screening strategy would reduce the number of subsequent identification analyses. An additional goal was to test the samples in this way for the potential presence of EU-unauthorized GMOs. Finally, to test the robustness of the approach, eight of the samples were tested in a concise interlaboratory study. No significant differences were found between the results of the two laboratories.

  7. Robustness of networks against propagating attacks under vaccination strategies

    International Nuclear Information System (INIS)

    Hasegawa, Takehisa; Masuda, Naoki

    2011-01-01

    We study the effect of vaccination on the robustness of networks against propagating attacks that obey the susceptible–infected–removed model. By extending the generating function formalism developed by Newman (2005 Phys. Rev. Lett. 95 108701), we analytically determine the robustness of networks that depends on the vaccination parameters. We consider the random defense where nodes are vaccinated randomly and the degree-based defense where hubs are preferentially vaccinated. We show that, when vaccines are inefficient, the random graph is more robust against propagating attacks than the scale-free network. When vaccines are relatively efficient, the scale-free network with the degree-based defense is more robust than the random graph with the random defense and the scale-free network with the random defense

  8. Robustness analysis of chiller sequencing control

    International Nuclear Information System (INIS)

    Liao, Yundan; Sun, Yongjun; Huang, Gongsheng

    2015-01-01

    Highlights: • Uncertainties in chiller sequencing control were systematically quantified. • Robustness of chiller sequencing control was systematically analyzed. • Different sequencing control strategies were sensitive to different uncertainties. • A numerical method was developed for easy selection of chiller sequencing control. - Abstract: A multiple-chiller plant is commonly employed in heating, ventilating and air-conditioning systems to increase operational feasibility and energy efficiency under part-load conditions. In a multiple-chiller plant, chiller sequencing control plays a key role in achieving overall energy efficiency without sacrificing cooling sufficiency for indoor thermal comfort. Various sequencing control strategies have been developed and implemented in practice. Based on the observations that (i) uncertainty, which cannot be avoided in chiller sequencing control, has a significant impact on control performance and may cause the control to fail to achieve the expected control and/or energy performance, and (ii) few studies in the current literature have systematically addressed this issue, this paper presents a study on the robustness of chiller sequencing control in order to understand the robustness of various chiller sequencing control strategies under different types of uncertainty. Based on the robustness analysis, a simple and applicable method is developed to select the most robust control strategy for a given chiller plant in the presence of uncertainties, which will be verified using case studies

  9. Robust Scientists

    DEFF Research Database (Denmark)

    Gorm Hansen, Birgitte

    ......knowledge", Danish research policy seems to have helped develop politically and economically "robust scientists". Scientific robustness is acquired by way of three strategies: 1) tasting and discriminating between resources so as to avoid funding that erodes academic profiles and pushes scientists away from their core interests, 2) developing a self-supply of industry interests by becoming entrepreneurs and thus creating their own compliant industry partner and 3) balancing resources within a larger collective of researchers, thus countering changes in the influx of funding caused by shifts in political......

  10. Optimum and robust 3D facies interpolation strategies in a heterogeneous coal zone (Tertiary As Pontes basin, NW Spain)

    Energy Technology Data Exchange (ETDEWEB)

    Falivene, Oriol; Cabrera, Lluis; Saez, Alberto [Geomodels Institute, Group of Geodynamics and Basin Analysis, Department of Stratigraphy, Paleontology and Marine Geosciences, Universitat de Barcelona, c/ Marti i Franques s/n, Facultat de Geologia, 08028 Barcelona (Spain)

    2007-07-02

    Coal exploration and mining in extensively drilled and sampled coal zones can benefit from 3D statistical facies interpolation. Starting from closely spaced core descriptions, and using interpolation methods, a 3D optimum and robust facies distribution model was obtained for a thick, heterogeneous coal zone deposited in the non-marine As Pontes basin (Oligocene-Early Miocene, NW Spain). Several grid layering styles, interpolation methods (truncated inverse squared distance weighting, truncated kriging, truncated kriging with an areal trend, indicator inverse squared distance weighting, indicator kriging, and indicator kriging with an areal trend) and searching conditions were compared. Facies interpolation strategies were evaluated using visual comparison and cross validation. Moreover, robustness of the resultant facies distribution with respect to variations in interpolation method input parameters was verified by taking into account several scenarios of uncertainty. The resultant 3D facies reconstruction improves the understanding of the distribution and geometry of the coal facies. Furthermore, since some coal quality properties (e.g. calorific value or sulphur percentage) display a good statistical correspondence with facies, predicting the distribution of these properties using the reconstructed facies distribution as a template proved to be a powerful approach, yielding more accurate and realistic reconstructions of these properties in the coal zone. (author)

  11. Efficient sampling of complex network with modified random walk strategies

    Science.gov (United States)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies: the choosing-seed-node (CSN) random walk and the no-retracing (NR) random walk. Unlike classical random walk sampling, the CSN and NR strategies focus on the influences of the seed node choice and path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with all three random walk strategies. First, networks with small scales and simple structures are conducive to sampling. Second, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. Third, all the degree distributions of the subnets are slightly biased toward the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir networks, some obvious characteristics, like the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
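
    Of the two strategies, the no-retracing walk is the simpler to sketch: the walker never immediately steps back along the edge it just traversed unless it is trapped at a dead end. A hypothetical Python illustration using networkx follows (the graph and step budget are arbitrary).

      import random
      import networkx as nx

      def nr_random_walk(G, seed_node, steps):
          # No-retracing (NR) walk: avoid stepping straight back along the
          # edge just traversed, unless stuck at a degree-1 node.
          walk = [seed_node]
          prev = None
          for _ in range(steps):
              nbrs = [v for v in G.neighbors(walk[-1]) if v != prev]
              if not nbrs:                       # dead end: retracing is allowed
                  nbrs = list(G.neighbors(walk[-1]))
              prev = walk[-1]
              walk.append(random.choice(nbrs))
          return walk

      G = nx.barabasi_albert_graph(1000, 3, seed=1)
      sub = G.subgraph(nr_random_walk(G, seed_node=0, steps=500))
      print(f"sampled {sub.number_of_nodes()} nodes, "
            f"mean clustering {nx.average_clustering(sub):.3f} "
            f"(full graph {nx.average_clustering(G):.3f})")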

  12. An Efficient, Robust, and Inexpensive Grinding Device for Herbal Samples like Cinchona Bark.

    Science.gov (United States)

    Hansen, Steen Honoré; Holmfred, Else; Cornett, Claus; Maldonado, Carla; Rønsted, Nina

    2015-01-01

    An effective, robust, and inexpensive grinding device for the grinding of herb samples like bark and roots was developed by rebuilding a commercially available coffee grinder. The grinder was constructed to be able to provide various particle sizes, to be easy to clean, and to have a minimum of dead volume. The recovery of the sample when grinding as little as 50 mg of crude Cinchona bark was about 60%. Grinding is performed in seconds with no rise in temperature, and the grinder is easily disassembled to be cleaned. The influence of the particle size of the obtained powders on the recovery of analytes in extracts of Cinchona bark was investigated using HPLC.

  13. Robust and distributed hypothesis testing

    CERN Document Server

    Gül, Gökhan

    2017-01-01

    This book generalizes and extends the available theory in robust and decentralized hypothesis testing. In particular, it presents a robust test for modeling errors which is independent of the assumptions that a sufficiently large number of samples is available and that the distance is the KL-divergence. Here, the distance can be chosen from a much more general model, which includes the KL-divergence as a very special case. This is then extended by various means. A minimax robust test that is robust against both outliers as well as modeling errors is presented. Minimax robustness properties of the given tests are also explicitly proven for fixed sample size and sequential probability ratio tests. The theory of robust detection is extended to robust estimation and the theory of robust distributed detection is extended to classes of distributions, which are not necessarily stochastically bounded. It is shown that the quantization functions for the decision rules can also be chosen as non-monotone. Finally, the boo...

  14. A Bayesian sampling strategy for hazardous waste site characterization

    International Nuclear Information System (INIS)

    Skalski, J.R.

    1987-12-01

    Prior knowledge based on historical records or physical evidence often suggests the existence of a hazardous waste site. Initial surveys may provide additional or even conflicting evidence of site contamination. This article presents a Bayes sampling strategy that allocates sampling at a site using this prior knowledge. This sampling strategy minimizes the environmental risks of missing chemical or radionuclide hot spots at a waste site. The environmental risk is shown to be proportional to the size of the undetected hot spot or inversely proportional to the probability of hot spot detection. 12 refs., 2 figs
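
    One ingredient of such a strategy, the probability of missing a hot spot at a given sampling density, can be estimated by simple Monte Carlo. The Python sketch below handles the idealized case of a circular hot spot and a square sampling grid; it is a geometric illustration, not the article's Bayesian allocation scheme.

      import numpy as np

      rng = np.random.default_rng(13)

      def miss_probability(grid_spacing, hotspot_radius, n=100_000):
          # Chance that a circular hot spot is missed by a square grid.
          # By symmetry, place the hot-spot centre uniformly in one grid cell;
          # it is detected if any of the four cell corners falls inside it.
          xy = rng.random((n, 2)) * grid_spacing
          corners = np.array([[0, 0], [grid_spacing, 0],
                              [0, grid_spacing], [grid_spacing, grid_spacing]])
          d = np.linalg.norm(xy[:, None, :] - corners[None, :, :], axis=2)
          return np.mean(d.min(axis=1) > hotspot_radius)

      for r in (0.3, 0.5, 0.7):
          print(f"radius {r:.1f} x grid spacing: "
                f"miss probability {miss_probability(1.0, r):.3f}")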

  15. Incorporation of support vector machines in the LIBS toolbox for sensitive and robust classification amidst unexpected sample and system variability.

    Science.gov (United States)

    Dingari, Narahara Chari; Barman, Ishan; Myakalwar, Ashwin Kumar; Tewari, Surya P; Kumar Gundawar, Manoj

    2012-03-20

    Despite the intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real-world applications, e.g., quality assurance and process monitoring. Specifically, variability in sample, system, and experimental parameters in LIBS studies presents a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a nonlinear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that the application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), because of its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA) when measurements from samples not included in the training set are incorporated in the test data, highlighting its robustness. While further studies on a wider matrix of sample types performed using different LIBS systems are needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples, as well as in related areas of forensic and biological sample analysis.
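
    A rough sense of why a nonlinear kernel helps under multiplicative shot-to-shot variability can be had from a synthetic experiment. In the Python sketch below, the "spectra", the variability model, and all parameters are invented; PLS-DA and SIMCA are not reproduced, and a linear-kernel SVM stands in as the linear baseline.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(2)

      # Hypothetical two-class "spectra": noise plus one class-informative
      # emission line whose intensity varies multiplicatively shot to shot
      # (a crude stand-in for ablation variability).
      n, p = 200, 300
      y = rng.integers(0, 2, n)
      shot = rng.lognormal(0.0, 0.4, n)
      wl = np.linspace(0, 1, p)
      peak = np.exp(-((wl - 0.5) / 0.02) ** 2)
      X = rng.normal(0, 0.05, (n, p)) + (shot * (0.5 + 0.5 * y))[:, None] * peak

      for kernel in ("linear", "rbf"):
          clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=10.0))
          acc = cross_val_score(clf, X, y, cv=5).mean()
          print(f"{kernel:6s} SVM cross-validated accuracy: {acc:.3f}")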

  16. Robustness Analysis of Typologies of Reciprocal Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Parigi, Dario

    2013-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures, many modern building codes consider the need for robustness in structures and provide strategies and methods to obtain robustness. Therefore a structural engineer may take the necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper outlines robustness issues related to the future development of typologies of reciprocal timber structures. The paper concludes that these kinds of structures can have potential as long-span timber structures in real projects if they are carefully designed with respect to the overall robustness strategies.

  17. Research on Robust Control Strategies for VSC-HVDC

    Science.gov (United States)

    Zhu, Kaicheng; Bao, Hai

    2018-01-01

    In the control system of VSC-HVDC, the phase-locked loop (PLL) provides phase signals to the voltage vector control and trigger pulses to generate the required reference phase. The PLL is a typical second-order system. When the system is in an unstable state, it will oscillate, shift the trigger angle, produce harmonics, and couple the active and reactive power. Thus, considering the external disturbances introduced by the PLL in the VSC-HVDC control system, the parameter perturbations of the controller and the model uncertainties, an H∞ robust controller for the mixed-sensitivity optimization problem is designed using the Hinf function provided by the robust control toolbox. It is then compared with a proportional-integral controller in MATLAB simulation experiments. By contrast, when the H∞ robust controller is added, the active and reactive power of the converter station can track changes in the reference values more accurately and quickly, with reduced overshoot. When a step change in active or reactive power occurs, the mutual influence is reduced and better independent regulation is achieved.

  18. Robust remediation strategies at gas-work sites: a case of source recognition and source characterization

    International Nuclear Information System (INIS)

    Vries, P.O. de

    2005-01-01

    In The Netherlands there have been gasworks at about 260 to 270 locations. Most of these locations are or were heavily polluted with tar, ashes and cyanides, and many of them belong to the locations where remediation actions have already been executed. It seems, however, that many of them also belong to the locations where remediation actions were not quite as successful as expected. So, for many gas-work sites that were already 'remediated' in the 1980s and early 1990s, new programs for site remediation are planned. Of course the mistakes from the past should now be avoided. The current remediation strategy in The Netherlands for gas-work sites can be summarized in four steps: 1 - removing spots in the top soil, 2 - removing spots with mobile components in the shallow subsoil, 3 - controlling spots with mobile components in the deep subsoil, 4 - creating a 'steady endpoint situation' in the plume. At many former gas-work sites the real sources, i.e. in a physico-chemical sense, are not very well known. This can easily lead to insufficient removal of some or part of these sources and cause a longer delivery of contaminants to the groundwater plume, with higher endpoint concentrations, higher costs and more restrictions on future use. The higher concentrations and longer deliveries originating from unrecognized or unlocalized sources are often not sufficiently compensated by the proposed plume management in current remediation strategies. Remediation results can be improved by using knowledge about the processes that determine the delivery of contaminants to the groundwater, the materials that cause this delivery and the locations at the site where these are most likely to be found. When sources are present in the deep subsoil or the exact localization of sources is uncertain, robust remediation strategies should be chosen and wishful thinking about removing sources with in situ techniques should be avoided. Robust strategies are probably less

  19. A strategy for tissue self-organization that is robust to cellular heterogeneity and plasticity.

    Science.gov (United States)

    Cerchiari, Alec E; Garbe, James C; Jee, Noel Y; Todhunter, Michael E; Broaders, Kyle E; Peehl, Donna M; Desai, Tejal A; LaBarge, Mark A; Thomson, Matthew; Gartner, Zev J

    2015-02-17

    Developing tissues contain motile populations of cells that can self-organize into spatially ordered tissues based on differences in their interfacial surface energies. However, it is unclear how self-organization by this mechanism remains robust when interfacial energies become heterogeneous in either time or space. The ducts and acini of the human mammary gland are prototypical heterogeneous and dynamic tissues comprising two concentrically arranged cell types. To investigate the consequences of cellular heterogeneity and plasticity on cell positioning in the mammary gland, we reconstituted its self-organization from aggregates of primary cells in vitro. We find that self-organization is dominated by the interfacial energy of the tissue-ECM boundary, rather than by differential homo- and heterotypic energies of cell-cell interaction. Surprisingly, interactions with the tissue-ECM boundary are binary, in that only one cell type interacts appreciably with the boundary. Using mathematical modeling and cell-type-specific knockdown of key regulators of cell-cell cohesion, we show that this strategy of self-organization is robust to severe perturbations affecting cell-cell contact formation. We also find that this mechanism of self-organization is conserved in the human prostate. Therefore, a binary interfacial interaction with the tissue boundary provides a flexible and generalizable strategy for forming and maintaining the structure of two-component tissues that exhibit abundant heterogeneity and plasticity. Our model also predicts that mutations affecting binary cell-ECM interactions are catastrophic and could contribute to loss of tissue architecture in diseases such as breast cancer.

  20. Sample Size and Robustness of Inferences from Logistic Regression in the Presence of Nonlinearity and Multicollinearity

    OpenAIRE

    Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.

    2011-01-01

    The logistic regression model has been widely used in the social and natural sciences and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A numbe...
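
    A minimal Monte Carlo sketch can make this kind of sample-size experiment concrete. The data-generating model, the coefficient values, and the use of statsmodels below are our illustrative assumptions, not the authors' setup:

```python
# Illustrative Monte Carlo sketch (not from the paper): estimate the mean bias
# of the logistic-regression MLE as the sample size grows. Assumes statsmodels.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
beta_true = np.array([-0.5, 1.0])          # intercept and slope (hypothetical)

def mean_bias(n, reps=500):
    biases = []
    for _ in range(reps):
        x = rng.normal(size=n)
        X = sm.add_constant(x)
        p = 1.0 / (1.0 + np.exp(-X @ beta_true))
        y = rng.binomial(1, p)
        try:                               # very small samples can be separable
            fit = sm.Logit(y, X).fit(disp=0)
            biases.append(fit.params - beta_true)
        except Exception:
            continue
    return np.mean(biases, axis=0)

for n in (25, 100, 400, 1600):
    print(n, mean_bias(n))                 # bias shrinks roughly like O(1/n)
```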

  1. Robustness in econometrics

    CERN Document Server

    Sriboonchitta, Songsak; Huynh, Van-Nam

    2017-01-01

    This book presents recent research on robustness in econometrics. Robust data processing techniques – i.e., techniques that yield results minimally affected by outliers – and their applications to real-life economic and financial situations are the main focus of this book. The book also discusses applications of more traditional statistical techniques to econometric problems. Econometrics is a branch of economics that uses mathematical (especially statistical) methods to analyze economic systems, to forecast economic and financial dynamics, and to develop strategies for achieving desirable economic performance. In day-to-day data, we often encounter outliers that do not reflect the long-term economic trends, e.g., unexpected and abrupt fluctuations. As such, it is important to develop robust data processing techniques that can accommodate these fluctuations.

  2. Robust multivariate analysis

    CERN Document Server

    J Olive, David

    2017-01-01

    This text presents methods that are robust to the assumption of a multivariate normal distribution or methods that are robust to certain types of outliers. Instead of using exact theory based on the multivariate normal distribution, the simpler and more applicable large sample theory is given. The text develops some of the first practical robust regression and robust multivariate location and dispersion estimators backed by theory. The robust techniques are illustrated for methods such as principal component analysis, canonical correlation analysis, and factor analysis. A simple way to bootstrap confidence regions is also provided. Much of the research on robust multivariate analysis in this book is being published for the first time. The text is suitable for a first course in Multivariate Statistical Analysis or a first course in Robust Statistics. This graduate text is also useful for people who are familiar with the traditional multivariate topics, but want to know more about handling data sets with...

  3. Sampling strategies in antimicrobial resistance monitoring: evaluating how precision and sensitivity vary with the number of animals sampled per farm.

    Directory of Open Access Journals (Sweden)

    Takehisa Yamamoto

    Full Text Available Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5-97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.
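
    The bootstrap logic is easy to mimic in a toy simulation. The sketch below uses an assumed between-farm prevalence distribution, not the study's E. coli data, to show why spreading 12 samples over 12 farms yields a less variable prevalence estimate than concentrating them on fewer farms:

```python
# Toy simulation (assumptions ours, not the paper's data): with 12 samples in
# total, compare sampling k animals from 12/k farms for several k. Farm-level
# clustering of resistance makes concentrated designs noisier.
import numpy as np

rng = np.random.default_rng(1)
n_total, reps = 12, 20_000

def prevalence_sd(animals_per_farm):
    n_farms = n_total // animals_per_farm
    estimates = []
    for _ in range(reps):
        farm_prev = rng.beta(2, 8, size=n_farms)       # between-farm variation
        hits = rng.binomial(animals_per_farm, farm_prev)
        estimates.append(hits.sum() / n_total)
    return np.std(estimates)

for k in (1, 2, 3, 4, 6):
    print(f"{k} animal(s)/farm: SD of prevalence estimate = {prevalence_sd(k):.4f}")
```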

  4. Robustness of airline alliance route networks

    Science.gov (United States)

    Lordan, Oriol; Sallan, Jose M.; Simo, Pep; Gonzalez-Prieto, David

    2015-05-01

    The aim of this study is to analyze the robustness of the three major airline alliances' (i.e., Star Alliance, oneworld and SkyTeam) route networks. Firstly, the normalization of a multi-scale measure of vulnerability is proposed in order to perform the analysis in networks with different sizes, i.e., numbers of nodes. Secondly, an alternative node selection criterion, based on network efficiency, is proposed in order to study robustness and vulnerability of such complex networks. Lastly, a new procedure - the inverted adaptive strategy - is presented to sort the nodes in order to anticipate network breakdown. Finally, the robustness of the three alliance networks is analyzed with (1) a normalized multi-scale measure of vulnerability, (2) an adaptive strategy based on four different criteria and (3) an inverted adaptive strategy based on the efficiency criterion. The results show that Star Alliance has the most resilient route network, followed by SkyTeam and then oneworld. It was also shown that the inverted adaptive strategy based on the efficiency criterion - inverted efficiency - succeeds in breaking networks quickly, similar to the betweenness criterion but with even better results.
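
    As a rough sketch of such an adaptive strategy, the simplified stand-in below (a toy graph and networkx's global efficiency, not the authors' normalized vulnerability measure) removes, at each step, the node whose deletion reduces global efficiency the most:

```python
# Sketch of an adaptive attack based on network efficiency (our toy setup).
import networkx as nx

def adaptive_efficiency_attack(G, n_removals):
    """Repeatedly remove the node whose removal hurts global efficiency most."""
    G = G.copy()
    history = [nx.global_efficiency(G)]
    for _ in range(n_removals):
        worst_node, worst_eff = None, float("inf")
        for v in list(G.nodes):
            H = G.copy()
            H.remove_node(v)
            eff = nx.global_efficiency(H)
            if eff < worst_eff:
                worst_node, worst_eff = v, eff
        G.remove_node(worst_node)
        history.append(worst_eff)
    return history

G = nx.barabasi_albert_graph(60, 2, seed=0)   # hub-dominated toy network
print([round(e, 3) for e in adaptive_efficiency_attack(G, 5)])
```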

  5. Unbiased tensor-based morphometry: improved robustness and sample size estimates for Alzheimer's disease clinical trials.

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P; Ching, Christopher R K; Boyle, Christina P; Rajagopalan, Priya; Gutman, Boris A; Leow, Alex D; Toga, Arthur W; Jack, Clifford R; Harvey, Danielle; Weiner, Michael W; Thompson, Paul M

    2013-02-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. Semiparametric efficient and robust estimation of an unknown symmetric population under arbitrary sample selection bias

    KAUST Repository

    Ma, Yanyuan

    2013-09-01

    We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.

  7. Robustness Analyses of Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Sørensen, John Dalsgaard; Hald, Frederik

    2013-01-01

    The robustness of structural systems has obtained a renewed interest arising from a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures, many modern building codes consider the need for the robustness of structures and provide strategies and methods to obtain robustness. Therefore, a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper summarizes issues with respect to robustness of timber structures and will discuss the consequences of such robustness issues related to the future development of timber structures.

  8. Effective sampling strategy to detect food and feed contamination

    NARCIS (Netherlands)

    Bouzembrak, Yamine; Fels, van der Ine

    2018-01-01

    Sampling plans for food safety hazards are used to determine whether a lot of food is contaminated (with microbiological or chemical hazards) or not. One of the components of sampling plans is the sampling strategy. The aim of this study was to compare the performance of three

  9. Novel strategies for sample preparation in forensic toxicology.

    Science.gov (United States)

    Samanidou, Victoria; Kovatsi, Leda; Fragou, Domniki; Rentifis, Konstantinos

    2011-09-01

    This paper provides a review of novel strategies for sample preparation in forensic toxicology. The review initially outlines the principle of each technique, followed by sections addressing each class of abused drugs separately. The novel strategies currently reviewed focus on the preparation of various biological samples for the subsequent determination of opiates, benzodiazepines, amphetamines, cocaine, hallucinogens, tricyclic antidepressants, antipsychotics and cannabinoids. According to our experience, these analytes are the most frequently responsible for intoxications in Greece. The applications of techniques such as disposable pipette extraction, microextraction by packed sorbent, matrix solid-phase dispersion, solid-phase microextraction, polymer monolith microextraction, stir bar sorptive extraction and others, which are rapidly gaining acceptance in the field of toxicology, are currently reviewed.

  10. Limited-sampling strategies for anti-infective agents: systematic review.

    Science.gov (United States)

    Sprague, Denise A; Ensom, Mary H H

    2009-09-01

    Area under the concentration-time curve (AUC) is a pharmacokinetic parameter that represents overall exposure to a drug. For selected anti-infective agents, pharmacokinetic-pharmacodynamic parameters, such as AUC/MIC (where MIC is the minimal inhibitory concentration), have been correlated with outcome in a few studies. A limited-sampling strategy may be used to estimate pharmacokinetic parameters such as AUC, without the frequent, costly, and inconvenient blood sampling that would be required to directly calculate the AUC. To discuss, by means of a systematic review, the strengths, limitations, and clinical implications of published studies involving a limited-sampling strategy for anti-infective agents and to propose improvements in methodology for future studies. The PubMed and EMBASE databases were searched using the terms "anti-infective agents", "limited sampling", "optimal sampling", "sparse sampling", "AUC monitoring", "abbreviated AUC", "abbreviated sampling", and "Bayesian". The reference lists of retrieved articles were searched manually. Included studies were classified according to modified criteria from the US Preventive Services Task Force. Twenty studies met the inclusion criteria. Six of the studies (involving didanosine, zidovudine, nevirapine, ciprofloxacin, efavirenz, and nelfinavir) were classified as providing level I evidence, 4 studies (involving vancomycin, didanosine, lamivudine, and lopinavir-ritonavir) provided level II-1 evidence, 2 studies (involving saquinavir and ceftazidime) provided level II-2 evidence, and 8 studies (involving ciprofloxacin, nelfinavir, vancomycin, ceftazidime, ganciclovir, pyrazinamide, meropenem, and alpha interferon) provided level III evidence. All of the studies providing level I evidence used prospectively collected data and proper validation procedures with separate, randomly selected index and validation groups. However, most of the included studies did not provide an adequate description of the methods or

  11. A Practical, Robust Methodology for Acquiring New Observation Data Using Computationally Expensive Groundwater Models

    Science.gov (United States)

    Siade, Adam J.; Hall, Joel; Karelse, Robert N.

    2017-11-01

    Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. A number of data-worth and experimental design strategies have been developed for this purpose. However, these studies often ignore issues related to real-world groundwater models such as computational expense, existing observation data, and high parameter dimensionality. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established D-optimality criterion, and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification. In addition, a heuristic methodology based on the concept of the greedy algorithm is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
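
    The greedy, minimax flavor of the design step can be sketched compactly. In the toy below the data-worth matrix is randomly generated and worth is assumed additive across candidate wells; both are illustrative assumptions of ours, not the authors' implementation:

```python
# Minimal sketch of a greedy, minimax-robust monitoring design: worth[i, j] is
# the (hypothetical) uncertainty reduction of candidate well i under posterior
# parameter sample j (e.g., from Null-Space Monte Carlo). Greedily add the well
# that maximizes the worst-case cumulative reduction over samples.
import numpy as np

rng = np.random.default_rng(2)
n_wells, n_samples = 30, 50
worth = rng.gamma(2.0, 1.0, size=(n_wells, n_samples))  # invented data worth

def greedy_minimax_design(worth, budget):
    chosen, total = [], np.zeros(worth.shape[1])
    for _ in range(budget):
        candidates = [i for i in range(worth.shape[0]) if i not in chosen]
        # score each candidate by the worst case over parameter samples
        best = max(candidates, key=lambda i: (total + worth[i]).min())
        chosen.append(best)
        total += worth[best]
    return chosen, total.min()

design, worst_case = greedy_minimax_design(worth, budget=5)
print("selected wells:", design, "worst-case reduction:", round(worst_case, 2))
```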

  12. Potential-Decomposition Strategy in Markov Chain Monte Carlo Sampling Algorithms

    International Nuclear Information System (INIS)

    Shangguan Danhua; Bao Jingdong

    2010-01-01

    We introduce the potential-decomposition strategy (PDS), which can be used in Markov chain Monte Carlo sampling algorithms. PDS can be designed to make particles move in a modified potential that favors diffusion in phase space; then, by rejecting some trial samples, the target distributions can be sampled in an unbiased manner. Furthermore, if the accepted trial samples are insufficient, they can be recycled as initial states to form more unbiased samples. This strategy can greatly improve efficiency when the original potential has multiple metastable states separated by large barriers. We apply PDS to the 2D Ising model and a double-well potential model with a large barrier, demonstrating in these two representative examples that convergence is accelerated by orders of magnitude.
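
    A simplified variant of this idea can be sketched in a few lines. Here the modified potential is the tempered U/2 (one possible decomposition, chosen by us for illustration), so barriers are halved; rejecting kept states with probability 1 - exp(-(U - U_mod)) then restores unbiased sampling of exp(-U):

```python
# Sketch of the idea behind potential decomposition (a simplified variant,
# not the authors' algorithm): run Metropolis in a flattened potential, then
# reject samples so the kept ones follow the original Boltzmann distribution.
import numpy as np

rng = np.random.default_rng(3)
U = lambda x: (x**2 - 1.0)**2 / 0.1        # double well, barrier ~10 kT
U_mod = lambda x: U(x) / 2.0               # flattened potential, barrier ~5 kT

def metropolis(potential, n, step=0.5, x0=-1.0):
    xs, x = np.empty(n), x0
    for i in range(n):
        y = x + step * rng.normal()
        if np.log(rng.random()) < potential(x) - potential(y):
            x = y
        xs[i] = x
    return xs

trial = metropolis(U_mod, 200_000)          # crosses the barrier easily
# accept with prob exp(-(U - U_mod)) <= 1, so kept samples ~ exp(-U), unbiased
keep = rng.random(trial.size) < np.exp(-(U(trial) - U_mod(trial)))
samples = trial[keep]
print(f"kept {keep.mean():.1%}; fraction in right-hand well: {(samples > 0).mean():.2f}")
```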

  13. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  14. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
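
    The ℓ1-minimization step can be illustrated with a basis-pursuit toy. The sizes, the sparse "true" coefficients, and the use of scipy's LP solver below are our assumptions; the Hermite polynomials are sampled under their natural Gaussian measure as in the abstract:

```python
# Sketch of sparse PC-coefficient recovery via l1-minimization (basis pursuit).
# Recovery typically succeeds here because the active coefficients are low-order.
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He
from scipy.optimize import linprog

rng = np.random.default_rng(4)
P, m = 30, 20                                # basis size > number of samples
c_true = np.zeros(P)
c_true[[1, 4, 7]] = [1.0, -0.6, 0.3]         # sparse "true" expansion (ours)

x = rng.normal(size=m)                       # natural sampling distribution
norms = np.sqrt([factorial(n) for n in range(P)])
A = np.stack([He.hermeval(x, np.eye(P)[n]) / norms[n] for n in range(P)], axis=1)
b = A @ c_true

# basis pursuit: min ||c||_1 s.t. A c = b, as an LP in (u, v) with c = u - v
cost = np.ones(2 * P)
res = linprog(cost, A_eq=np.hstack([A, -A]), b_eq=b, bounds=(0, None), method="highs")
c_hat = res.x[:P] - res.x[P:]
print("max coefficient error:", np.abs(c_hat - c_true).max())
```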

  15. Optimization of robustness of interdependent network controllability by redundant design.

    Directory of Open Access Journals (Sweden)

    Zenghu Zhang

    Full Text Available Controllability of complex networks has been a hot topic in recent years. Real networks, regarded as interdependent networks, are always coupled together by multiple networks. The cascading process of interdependent networks, including interdependent failure and overload failure, will destroy the robustness of controllability for the whole network. Therefore, the optimization of the robustness of interdependent network controllability is of great importance in the research area of complex networks. In this paper, based on the model of interdependent networks constructed first, we determine the cascading process under different proportions of node attacks. Then, the structural controllability of interdependent networks is measured by the minimum number of driver nodes. Furthermore, we propose a parameter which can be obtained from the structure and minimum driver set of interdependent networks under different proportions of node attacks, and use it to analyze the robustness of interdependent network controllability. Finally, we optimize the robustness of interdependent network controllability by redundant design, including node backup and redundancy edge backup, and improve the redundant design by proposing different strategies according to their cost. Comparisons among redundant design strategies are conducted to find the best one. Results show that node backup and redundancy edge backup can indeed decrease the number of nodes suffering from failure and improve the robustness of controllability. Considering the cost of redundant design, we should choose BBS (betweenness-based strategy) or DBS (degree-based strategy) for node backup and HDF (high degree first) for redundancy edge backup. Overall, our proposed strategies are feasible and effective at improving the robustness of interdependent network controllability.
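
    For the driver-node computation, the standard structural-controllability result reduces the problem to a maximum matching. The sketch below (with a toy random digraph of our choosing) implements that reduction with networkx:

```python
# Sketch: minimum driver nodes of a directed network via maximum matching
# (the standard structural-controllability result; the toy network is ours).
import networkx as nx

def minimum_driver_nodes(G):
    """N_D = max(N - |maximum matching|, 1) in the bipartite out/in representation."""
    B = nx.Graph()
    out_nodes = [("out", u) for u in G.nodes]
    B.add_nodes_from(out_nodes, bipartite=0)
    B.add_nodes_from((("in", v) for v in G.nodes), bipartite=1)
    B.add_edges_from(((("out", u), ("in", v)) for u, v in G.edges))
    matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=out_nodes)
    matched_edges = sum(1 for node in matching if node[0] == "out")
    return max(G.number_of_nodes() - matched_edges, 1)

G = nx.gnp_random_graph(50, 0.04, directed=True, seed=0)
print("minimum driver nodes:", minimum_driver_nodes(G))
```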

  16. A comparative proteomics method for multiple samples based on a 18O-reference strategy and a quantitation and identification-decoupled strategy.

    Science.gov (United States)

    Wang, Hongbin; Zhang, Yongqian; Gui, Shuqi; Zhang, Yong; Lu, Fuping; Deng, Yulin

    2017-08-15

    Comparisons across large numbers of samples are frequently necessary in quantitative proteomics. Many quantitative methods used in proteomics are based on stable isotope labeling, but most of these are only useful for comparing two samples. For up to eight samples, the iTRAQ labeling technique can be used. For greater numbers of samples, the label-free method has been used, but this method has been criticized for low reproducibility and accuracy. An ingenious strategy has been introduced, comparing each sample against an 18O-labeled reference sample that was created by pooling equal amounts of all samples. However, it is necessary to use proportion-known protein mixtures to investigate and evaluate this new strategy. Another problem for comparative proteomics of multiple samples is the poor coincidence and reproducibility in protein identification results across samples. In the present study, a method combining the 18O-reference strategy and a quantitation and identification-decoupled strategy was investigated with proportion-known protein mixtures. The results clearly demonstrated that the 18O-reference strategy had greater accuracy and reliability than other previously used comparison methods based on transferring comparison or label-free strategies. With the decoupling strategy, the quantification data acquired by LC-MS and the identification data acquired by LC-MS/MS are matched and correlated to identify differentially expressed proteins, according to retention time and accurate mass. This strategy made protein identification possible for all samples using a single pooled sample, and therefore gave good reproducibility in protein identification across multiple samples, and allowed peptide identification to be optimized separately so as to identify more proteins. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. The Study of an Optimal Robust Design and Adjustable Ordering Strategies in the HSCM.

    Science.gov (United States)

    Liao, Hung-Chang; Chen, Yan-Kwang; Wang, Ya-huei

    2015-01-01

    The purpose of this study was to establish a hospital supply chain management (HSCM) model in which three kinds of drugs in the same class and with the same indications were used in creating an optimal robust design and adjustable ordering strategies to deal with a drug shortage. The main assumption was that although each doctor has his/her own prescription pattern, when there is a shortage of a particular drug, the doctor may choose a similar drug with the same indications as a replacement. Four steps were used to construct and analyze the HSCM model. The computation technology used included a simulation, a neural network (NN), and a genetic algorithm (GA). The mathematical methods of the simulation and the NN were used to construct a relationship between the factor levels and performance, while the GA was used to obtain the optimal combination of factor levels from the NN. A sensitivity analysis was also used to assess the change in the optimal factor levels. Adjustable ordering strategies were also developed to prevent drug shortages.

  18. Unbiased tensor-based morphometry: Improved robustness and sample size estimates for Alzheimer’s disease clinical trials

    Science.gov (United States)

    Hua, Xue; Hibar, Derrek P.; Ching, Christopher R.K.; Boyle, Christina P.; Rajagopalan, Priya; Gutman, Boris A.; Leow, Alex D.; Toga, Arthur W.; Jack, Clifford R.; Harvey, Danielle; Weiner, Michael W.; Thompson, Paul M.

    2013-01-01

    Various neuroimaging measures are being evaluated for tracking Alzheimer’s disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of the Alzheimer’s Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24 months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39 AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI. PMID:23153970

  19. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Directory of Open Access Journals (Sweden)

    Kim Seon-Young

    2009-05-01

    Full Text Available Abstract Background Poor overlap between independently developed gene signatures and poor inter-study applicability of gene signatures are two of the major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced using the same microarray platform and, using the data set, the effects of varying sample sizes on several performance measures of a prognostic gene signature were investigated. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples. The accuracy of outcome prediction also increased with more samples. Finally, analysis using only Estrogen Receptor-positive (ER+) patients attained higher prediction accuracy than using all patients, suggesting that subtype-specific analysis can lead to the development of better prognostic gene signatures. Conclusion Increasing sample sizes generated a gene signature with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement with increased sample size differed between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.

  20. Robustness Assessment of Spatial Timber Structures

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    2012-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. In order to minimise the likelihood of such disproportionate structural failures many modern building codes consider the need for robustness of structures and provide strategies and methods to obtain robustness. Therefore a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper summarizes issues with respect to robustness of spatial timber structures and will discuss the consequences of such robustness issues related to the future development of timber structures.

  1. Robustness and Strategies of Adaptation among Farmer Varieties of African Rice (Oryza glaberrima) and Asian Rice (Oryza sativa) across West Africa

    NARCIS (Netherlands)

    Mokuwa, A.; Nuijten, H.A.C.P.; Okry, F.; Teeken, B.W.E.; Maat, H.; Richards, P.; Struik, P.C.

    2013-01-01

    This study offers evidence of the robustness of farmer rice varieties (Oryza glaberrima and O. sativa) in West Africa. Our experiments in five West African countries showed that farmer varieties were tolerant of sub-optimal conditions, but employed a range of strategies to cope with stress.

  2. Measurement of radioactivity in the environment - Soil - Part 2: Guidance for the selection of the sampling strategy, sampling and pre-treatment of samples

    International Nuclear Information System (INIS)

    2007-01-01

    This part of ISO 18589 specifies the general requirements, based on ISO 11074 and ISO/IEC 17025, for all steps in the planning (desk study and area reconnaissance) of the sampling and the preparation of samples for testing. It includes the selection of the sampling strategy, the outline of the sampling plan, the presentation of general sampling methods and equipment, as well as the methodology of the pre-treatment of samples adapted to the measurements of the activity of radionuclides in soil. This part of ISO 18589 is addressed to the people responsible for determining the radioactivity present in soil for the purpose of radiation protection. It is applicable to soil from gardens, farmland, urban or industrial sites, as well as soil not affected by human activities. This part of ISO 18589 is applicable to all laboratories regardless of the number of personnel or the range of the testing performed. When a laboratory does not undertake one or more of the activities covered by this part of ISO 18589, such as planning, sampling or testing, the corresponding requirements do not apply. Information is provided on scope, normative references, terms and definitions and symbols, principle, sampling strategy, sampling plan, sampling process, pre-treatment of samples and recorded information. Five annexes cover the selection of the sampling strategy according to the objectives and the radiological characterization of the site and sampling areas, a diagram of the evolution of the sample characteristics from the sampling site to the laboratory, an example of a sampling plan for a site divided into three sampling areas, an example of a sampling record for a single/composite sample, and an example of a sample record for a soil profile with soil description. A bibliography is provided

  3. Reducing regional vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty

    Science.gov (United States)

    Reed, Patrick; Trindade, Bernardo; Jonathan, Herman; Harrison, Zeff; Gregory, Characklis

    2016-04-01

    Emerging water scarcity concerns in southeastern US are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risks for sustained regional droughts. Managing these uncertainties will require that regional water utilities identify regionally coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of the mentioned utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative management strategies. Results show that the sampling of deeply uncertain factors in the computational search phase of MORDM can aid in the discovery of management actions that substantially improve the robustness of individual utilities as well as the overall region to water scarcity. Cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management must be explored jointly to decrease robustness conflicts between the utilities. The insights from this work have general merit for regions where adjacent municipalities can benefit from cooperative regional water portfolio planning.

  4. Robustness evaluation of cutting tool maintenance planning for soft ground tunneling projects

    Directory of Open Access Journals (Sweden)

    Alena Conrads

    2018-03-01

    Full Text Available Tunnel boring machines require extensive maintenance and inspection effort to provide high availability. The cutting tools of the cutting wheel must be changed in a timely manner upon reaching a critical condition. While one possible maintenance strategy is to change tools only when it is absolutely necessary, tools can also be changed preventively to avoid further damage. Such different maintenance strategies influence the maintenance duration and the overall project performance. However, determining the downtime related to a particular maintenance strategy is still a challenging task. This paper presents an analysis of the robustness of maintenance strategies with respect to achieving the planned project performance, considering uncertainties in the wear behavior of the cutting tools. A simulation-based analysis is presented, implementing an empirical wear prediction model. Different maintenance planning strategies are compared by performing a parameter variation study including Monte Carlo simulations. The maintenance costs are calculated and evaluated with respect to their robustness. Finally, an improved and robust maintenance strategy is determined. Keywords: Mechanized tunneling, Maintenance, Wear of cutting tools, Process simulation, Robustness, Uncertainty modeling
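
    The simulation logic can be caricatured in a few lines. The wear model, cost figures, and inspection interval below are invented for illustration and are not the paper's empirical model; the point is how a Monte Carlo over uncertain wear rates scores each change-threshold strategy by mean cost and spread:

```python
# Toy Monte Carlo sketch: compare preventive change thresholds under uncertain
# tool wear, scoring each strategy by expected cost and its spread (robustness).
import numpy as np

rng = np.random.default_rng(5)
DRIVE_LEN, INSPECT = 1000.0, 10.0       # tunnel length, inspection interval (m)
STOP_COST, CHANGE_COST = 50.0, 5.0      # hypothetical cost units

def simulate(threshold, reps=1000):
    """Change tools at inspection once wear exceeds `threshold` (1.0 = failure)."""
    costs = []
    for _ in range(reps):
        rate = rng.lognormal(mean=-6.0, sigma=0.4)   # uncertain wear per metre
        cost = wear = 0.0
        for _ in range(int(DRIVE_LEN / INSPECT)):
            wear += rate * INSPECT * rng.uniform(0.5, 1.5)   # ground variability
            if wear >= 1.0:              # unplanned failure between inspections
                cost += STOP_COST + CHANGE_COST
                wear = 0.0
            elif wear >= threshold:      # preventive change at inspection
                cost += CHANGE_COST
                wear = 0.0
        costs.append(cost)
    c = np.asarray(costs)
    return c.mean(), c.std()

for thr in (0.6, 0.8, 0.99):   # 0.99 ~ change only when absolutely necessary
    mean, sd = simulate(thr)
    print(f"threshold {thr}: mean cost {mean:6.1f}, sd {sd:5.1f}")
```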

  5. Calibration sets selection strategy for the construction of robust PLS models for prediction of biodiesel/diesel blends physico-chemical properties using NIR spectroscopy

    Science.gov (United States)

    Palou, Anna; Miró, Aira; Blanco, Marcelo; Larraz, Rafael; Gómez, José Francisco; Martínez, Teresa; González, Josep Maria; Alcalà, Manel

    2017-06-01

    Even though the feasibility of using near infrared (NIR) spectroscopy combined with partial least squares (PLS) regression for prediction of physico-chemical properties of biodiesel/diesel blends has been widely demonstrated, including in the calibration sets the whole variability of diesel samples from diverse production origins remains an important challenge when constructing the models. This work presents a useful strategy for the systematic selection of calibration sets of samples of biodiesel/diesel blends from diverse origins, based on a binary code, principal component analysis (PCA) and the Kennard-Stone algorithm. Results show that using this methodology the models can keep their robustness over time. PLS calculations have been done using specialized chemometric software as well as the software of the NIR instrument installed in the plant, and both produced RMSEP values below the reproducibility of the reference methods. The models have been proven for on-line simultaneous determination of seven properties: density, cetane index, fatty acid methyl esters (FAME) content, cloud point, boiling point at 95% of recovery, flash point and sulphur.
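
    The Kennard-Stone step is a standard algorithm and easy to sketch; the numpy implementation and the mock PCA-scores input below are ours, not the authors' code:

```python
# Minimal Kennard-Stone selection: pick the two most distant samples, then
# repeatedly add the sample farthest from its nearest already-selected sample.
import numpy as np

def kennard_stone(X, n_select):
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    selected = list(np.unravel_index(np.argmax(dist), dist.shape))
    while len(selected) < n_select:
        remaining = [i for i in range(len(X)) if i not in selected]
        d_to_set = dist[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(d_to_set))])
    return selected

rng = np.random.default_rng(6)
scores = rng.normal(size=(100, 3))          # e.g., PCA scores of NIR spectra
print("calibration subset:", kennard_stone(scores, 10))
```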

  6. Robustness analysis method for orbit control

    Science.gov (United States)

    Zhang, Jingrui; Yang, Keying; Qi, Rui; Zhao, Shuge; Li, Yanyan

    2017-08-01

    Satellite orbits require periodic maintenance due to the presence of perturbations. However, random errors caused by inaccurate orbit determination and thrust implementation may lead to failure of the orbit control strategy. Therefore, it is necessary to analyze the robustness of the orbit control methods. Feasible strategies which are tolerant to errors of a certain magnitude can be developed to perform reliable orbit control for the satellite. In this paper, first, the orbital dynamic model is formulated by Gauss' form of the planetary equations using the mean orbit elements; the atmospheric drag and the Earth's non-spherical perturbations are taken into consideration in this model. Second, an impulsive control strategy employing the differential correction algorithm is developed to maintain the satellite trajectory parameters in given ranges. Finally, the robustness of the impulsive control method is analyzed through Monte Carlo simulations while taking orbit determination error and thrust error into account.
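
    The Monte Carlo part of such a robustness analysis can be caricatured with a one-dimensional toy: a slowly drifting orbital element corrected by impulsive burns whose magnitude carries a random execution error. The dynamics and all numbers below are invented for illustration and are not the paper's model:

```python
# Toy robustness check: estimate the probability that a controlled element
# stays inside its allowed band, as a function of the thrust execution error.
import numpy as np

rng = np.random.default_rng(7)

def success_rate(thrust_err_sd, nav_err_sd=0.05, reps=2000, days=365):
    ok = 0
    for _ in range(reps):
        a, good = 0.3, True         # deviation from nominal; band is [-0.5, 0.5]
        for _ in range(days):
            a -= 0.01                                # secular drift (e.g. drag)
            measured = a + rng.normal(0, nav_err_sd) # orbit determination error
            if measured < -0.3:                      # maneuver trigger
                dv = (0.3 - measured) * (1 + rng.normal(0, thrust_err_sd))
                a += dv                              # imperfect impulsive burn
            if abs(a) > 0.5:                         # left the control band
                good = False
                break
        ok += good
    return ok / reps

for err in (0.05, 0.2, 0.3):
    print(f"thrust error sd {err}: success rate {success_rate(err):.3f}")
```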

  7. Robust Portfolio Optimization Using Pseudodistances

    Science.gov (United States)

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948

  8. Robust Portfolio Optimization Using Pseudodistances.

    Science.gov (United States)

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature.
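
    The plug-in workflow the abstract describes (replace the classical estimators with robust ones, then optimize) can be sketched as follows. Note the stand-in: we use scikit-learn's Minimum Covariance Determinant, a different robust estimator, because the pseudodistance-based estimator itself is not spelled out in the abstract:

```python
# Sketch of robust plug-in portfolio optimization: classical vs. robust
# covariance, each fed into a minimum-variance weight formula. Toy data ours.
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(8)
returns = rng.multivariate_normal([0.001] * 4, 0.0001 * np.eye(4), size=500)
returns[:10] *= 25                           # a handful of gross outliers

def min_variance_weights(cov):
    w = np.linalg.solve(cov, np.ones(len(cov)))
    return w / w.sum()

classical = np.cov(returns, rowvar=False)
robust = MinCovDet(random_state=0).fit(returns).covariance_

print("classical weights:", np.round(min_variance_weights(classical), 3))
print("robust weights:   ", np.round(min_variance_weights(robust), 3))
```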

  9. Sampling strategies and stopping criteria for stochastic dual dynamic programming: a case study in long-term hydrothermal scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Homem-de-Mello, Tito [University of Illinois at Chicago, Department of Mechanical and Industrial Engineering, Chicago, IL (United States); Matos, Vitor L. de; Finardi, Erlon C. [Universidade Federal de Santa Catarina, LabPlan - Laboratorio de Planejamento de Sistemas de Energia Eletrica, Florianopolis (Brazil)

    2011-03-15

    The long-term hydrothermal scheduling is one of the most important problems to be solved in the power systems area. This problem aims to obtain an optimal policy, under water (energy) resources uncertainty, for hydro and thermal plants over a multi-annual planning horizon. It is natural to model the problem as a multi-stage stochastic program, a class of models for which algorithms have been developed. The original stochastic process is represented by a finite scenario tree and, because of the large number of stages, a sampling-based method such as the Stochastic Dual Dynamic Programming (SDDP) algorithm is required. The purpose of this paper is two-fold. Firstly, we study the application of two alternative sampling strategies to the standard Monte Carlo - namely, Latin hypercube sampling and randomized quasi-Monte Carlo - for the generation of scenario trees, as well as for the sampling of scenarios that is part of the SDDP algorithm. Secondly, we discuss the formulation of stopping criteria for the optimization algorithm in terms of statistical hypothesis tests, which allows us to propose an alternative criterion that is more robust than that originally proposed for the SDDP. We test these ideas on a problem associated with the whole Brazilian power system, with a three-year planning horizon. (orig.)
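
    The variance-reduction effect of Latin hypercube sampling is easy to demonstrate on a toy integrand (ours, not the hydrothermal scheduling model), using scipy's qmc module:

```python
# Compare plain Monte Carlo and Latin hypercube sampling for estimating the
# mean of a smooth function over the unit cube with the same sample budget.
import numpy as np
from scipy.stats import qmc

f = lambda u: np.exp(u.sum(axis=1))          # toy integrand
d, n, reps = 3, 64, 2000
rng = np.random.default_rng(9)

mc = [f(rng.random((n, d))).mean() for _ in range(reps)]
lhs = [f(qmc.LatinHypercube(d=d, seed=s).random(n)).mean() for s in range(reps)]

print("MC  estimator sd:", np.std(mc))
print("LHS estimator sd:", np.std(lhs))      # noticeably smaller
```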

  10. Robustness of Long Span Reciprocal Timber Structures

    DEFF Research Database (Denmark)

    Balfroid, Nathalie; Kirkegaard, Poul Henning

    2011-01-01

    Robustness of structural systems has obtained a renewed interest due to a much more frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure. The interest has also been facilitated due to recently severe structural failures. Therefore a structural engineer may take necessary steps to design robust structures that are insensitive to accidental circumstances. The present paper makes a discussion of such robustness issues related to the future development of reciprocal timber structures. The paper concludes that these kinds of structures can have a potential as long span timber structures in real projects if they are carefully designed with respect to the overall robustness strategies.

  11. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  12. Adaptive sampling strategies with high-throughput molecular dynamics

    Science.gov (United States)

    Clementi, Cecilia

    Despite recent significant hardware and software developments, the complete thermodynamic and kinetic characterization of large macromolecular complexes by molecular simulations still presents significant challenges. The high dimensionality of these systems and the complexity of the associated potential energy surfaces (creating multiple metastable regions connected by high free energy barriers) do not usually allow adequate sampling of the relevant regions of configurational space by means of a single, long Molecular Dynamics (MD) trajectory. Several different approaches have been proposed to tackle this sampling problem. We focus on the development of ensemble simulation strategies, where data from a large number of weakly coupled simulations are integrated to explore the configurational landscape of a complex system more efficiently. Ensemble methods are of increasing interest as the hardware roadmap is now mostly based on increasing core counts rather than clock speeds. The main challenge in the development of an ensemble approach for efficient sampling is in the design of strategies to adaptively distribute the trajectories over the relevant regions of the systems' configurational space, without using any a priori information on the system's global properties. We will discuss the definition of smart adaptive sampling approaches that can redirect computational resources towards unexplored yet relevant regions. Our approaches are based on new developments in dimensionality reduction for high dimensional dynamical systems, and optimal redistribution of resources. NSF CHE-1152344, NSF CHE-1265929, Welch Foundation C-1570.
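
    A cartoon of count-based adaptive seeding, one generic way to "adaptively distribute the trajectories", is sketched below; the one-dimensional toy dynamics and the reseeding rule are our illustrative choices, not the speaker's specific method:

```python
# Count-based adaptive seeding: run many short trajectories, histogram the
# visited states, restart new rounds from the least-visited reachable bins.
import numpy as np

rng = np.random.default_rng(10)
n_bins, n_walkers, n_steps, n_rounds = 50, 20, 200, 5
counts = np.zeros(n_bins)
seeds = np.full(n_walkers, n_bins // 2)      # all walkers start mid-landscape

def short_trajectory(start):
    x, visited = start, []
    for _ in range(n_steps):
        x = np.clip(x + rng.integers(-1, 2), 0, n_bins - 1)  # toy 1D dynamics
        visited.append(x)
    return np.array(visited)

for _ in range(n_rounds):
    for s in seeds:
        counts += np.bincount(short_trajectory(s), minlength=n_bins)
    reached = np.flatnonzero(counts > 0)     # states we hold configurations for
    least_visited = reached[np.argsort(counts[reached])][:10]
    seeds = rng.choice(least_visited, size=n_walkers)

print("explored bins:", int((counts > 0).sum()), "of", n_bins)
```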

  13. A Robust PCR Protocol for HIV Drug Resistance Testing on Low-Level Viremia Samples

    Directory of Open Access Journals (Sweden)

    Shivani Gupta

    2017-01-01

    Full Text Available The prevalence of drug resistance (DR mutations in people with HIV-1 infection, particularly those with low-level viremia (LLV, supports the need to improve the sensitivity of amplification methods for HIV DR genotyping in order to optimize antiretroviral regimen and facilitate HIV-1 DR surveillance and relevant research. Here we report on a fully validated PCR-based protocol that achieves consistent amplification of the protease (PR and reverse transcriptase (RT regions of HIV-1 pol gene across many HIV-1 subtypes from LLV plasma samples. HIV-spiked plasma samples from the External Quality Assurance Program Oversight Laboratory (EQAPOL, covering various HIV-1 subtypes, as well as clinical specimens were used to optimize and validate the protocol. Our results demonstrate that this protocol has a broad HIV-1 subtype coverage and viral load span with high sensitivity and reproducibility. Moreover, the protocol is robust even when plasma sample volumes are limited, the HIV viral load is unknown, and/or the HIV subtype is undetermined. Thus, the protocol is applicable for the initial amplification of the HIV-1 PR and RT genes required for subsequent genotypic DR assays.

  14. Assessment of sampling strategies for estimation of site mean concentrations of stormwater pollutants.

    Science.gov (United States)

    McCarthy, David T; Zhang, Kefeng; Westerlund, Camilla; Viklander, Maria; Bertrand-Krajewski, Jean-Luc; Fletcher, Tim D; Deletic, Ana

    2018-02-01

    The estimation of stormwater pollutant concentrations is a primary requirement of integrated urban water management. In order to determine effective sampling strategies for estimating pollutant concentrations, data from extensive field measurements at seven different catchments were used. At all sites, 1-min resolution continuous flow measurements, as well as flow-weighted samples, were taken and analysed for total suspended solids (TSS), total nitrogen (TN) and Escherichia coli (E. coli). For each of these parameters, the data were used to calculate the Event Mean Concentrations (EMCs) for each event. The measured Site Mean Concentrations (SMCs) were taken as the volume-weighted average of these EMCs for each parameter, at each site. 17 different sampling strategies, including random and fixed strategies, were tested to estimate SMCs, which were compared with the measured SMCs. The ratios of estimated/measured SMCs were further analysed to determine the most effective sampling strategies. Results indicate that the random sampling strategies were the most promising method for reproducing SMCs for TSS and TN, while some fixed sampling strategies were better for estimating the SMC of E. coli. The differences between taking one, two or three random samples were small (up to 20% for TSS, and 10% for TN and E. coli), indicating that there is little benefit in investing in the collection of more than one sample per event if attempting to estimate the SMC through monitoring of multiple events. It was estimated that an average of 27 events across the studied catchments are needed for characterising SMCs of TSS with a 90% confidence interval (CI) width of 1.0, followed by E. coli (average 12 events) and TN (average 11 events). The coefficient of variation of pollutant concentrations was linearly and significantly correlated to the 90% CI ratio of the estimated/measured SMCs (R2 = 0.49; P < 0.05), a relationship which can be used to estimate the sampling frequency needed to accurately estimate SMCs of pollutants.
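
    The EMC/SMC bookkeeping used throughout the study is simple to state in code; the numbers below are synthetic, for illustration only:

```python
# EMC = flow-weighted mean concentration of one event;
# SMC = volume-weighted average of the EMCs across events.
import numpy as np

def emc(conc, flow, dt=60.0):
    """Flow-weighted event mean concentration from paired samples."""
    volume = np.sum(flow * dt)
    return np.sum(conc * flow * dt) / volume, volume

# two synthetic events: concentration (mg/L) and flow (L/s) time series
events = [
    (np.array([120.0, 80.0, 40.0]), np.array([5.0, 20.0, 10.0])),
    (np.array([60.0, 30.0]),        np.array([8.0, 4.0])),
]

emcs, volumes = zip(*(emc(c, q) for c, q in events))
smc = np.average(emcs, weights=volumes)      # site mean concentration
print("EMCs:", [round(e, 1) for e in emcs], "SMC:", round(smc, 1))
```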

  15. Mendelian breeding units versus standard sampling strategies: mitochondrial DNA variation in southwest Sardinia

    Directory of Open Access Journals (Sweden)

    Daria Sanna

    2011-01-01

    Full Text Available We report a sampling strategy based on Mendelian Breeding Units (MBUs), an MBU representing an interbreeding group of individuals sharing a common gene pool. The identification of MBUs is crucial for case-control experimental design in association studies. The aim of this work was to evaluate the possible existence of bias in terms of genetic variability and haplogroup frequencies in the MBU sample, due to the stringent sample selection. In order to reach this goal, the MBU sampling strategy was compared to a standard selection of individuals according to their surname and place of birth. We analysed mitochondrial DNA variation (first hypervariable segment and coding region) in unrelated healthy subjects from two different areas of Sardinia: the area around the town of Cabras and the western Campidano area. No statistically significant differences were observed when the two sampling methods were compared, indicating that the stringent sample selection needed to establish an MBU does not alter the original genetic variability and haplogroup distribution. Therefore, the MBU sampling strategy can be considered a useful tool in association studies of complex traits.

  16. Robust portfolio choice with ambiguity and learning about return predictability

    DEFF Research Database (Denmark)

    Larsen, Linda Sandris; Branger, Nicole; Munk, Claus

    2013-01-01

    We analyze the optimal stock-bond portfolio under both learning and ambiguity aversion. Stock returns are predictable by an observable and an unobservable predictor, and the investor has to learn about the latter. Furthermore, the investor is ambiguity-averse and has a preference for investment strategies that are robust to model misspecifications. We derive a closed-form solution for the optimal robust investment strategy. We find that both learning and ambiguity aversion impact the level and structure of the optimal stock investment. Suboptimal strategies resulting either from not learning or from not considering ambiguity can lead to economically significant losses.

  17. Multiple strategies to improve sensitivity, speed and robustness of isothermal nucleic acid amplification for rapid pathogen detection

    Directory of Open Access Journals (Sweden)

    Lemieux Bertrand

    2011-05-01

    Full Text Available Abstract Background In the past decades the rapid growth of molecular diagnostics (based on either traditional PCR or isothermal amplification technologies) has met the demand for fast and accurate testing. Although isothermal amplification technologies have the advantage of low-cost instrument requirements, further improvement of sensitivity, speed and robustness is a prerequisite for applications in rapid pathogen detection, especially in point-of-care diagnostics. Here, we describe and explore several strategies to improve one of the isothermal technologies, helicase-dependent amplification (HDA). Results Multiple strategies were pursued to improve the overall performance of the isothermal amplification: restriction endonuclease-mediated DNA helicase homing, macromolecular crowding agents, and the optimization of the reaction enzyme mix. The effect of combining all strategies was compared with that of each individual strategy. With all of the above methods, we are able to detect 50 copies of Neisseria gonorrhoeae DNA in just 20 minutes of amplification using a nearly instrument-free detection platform (the BESt™ cassette). Conclusions The strategies addressed in this proof-of-concept study are independent of expensive equipment, and are not limited to particular primers, targets or detection formats. However, they make a large difference in assay performance. Some of them can be adjusted and applied to other formats of nucleic acid amplification. Furthermore, strategies that improve in vitro assays by maximally simulating the natural conditions may be useful in the general field of developing molecular assays. A new fast molecular assay for Neisseria gonorrhoeae has also been developed which has great potential for use in point-of-care diagnostics.

  18. Stable isotope labeling strategy based on coding theory

    Energy Technology Data Exchange (ETDEWEB)

    Kasai, Takuma; Koshiba, Seizo; Yokoyama, Jun; Kigawa, Takanori, E-mail: kigawa@riken.jp [RIKEN Quantitative Biology Center (QBiC), Laboratory for Biomolecular Structure and Dynamics (Japan)

    2015-10-15

    We describe a strategy for stable isotope-aided protein nuclear magnetic resonance (NMR) analysis, called stable isotope encoding. The basic idea of this strategy is that amino-acid selective labeling can be considered as “encoding and decoding” processes, in which the information of amino acid type is encoded by the stable isotope labeling ratio of the corresponding residue and it is decoded by analyzing NMR spectra. According to the idea, the strategy can diminish the required number of labelled samples by increasing information content per sample, enabling discrimination of 19 kinds of non-proline amino acids with only three labeled samples. The idea also enables this strategy to combine with information technologies, such as error detection by check digit, to improve the robustness of analyses with low quality data. Stable isotope encoding will facilitate NMR analyses of proteins under non-ideal conditions, such as those in large complex systems, with low-solubility, and in living cells.

  19. Stable isotope labeling strategy based on coding theory

    International Nuclear Information System (INIS)

    Kasai, Takuma; Koshiba, Seizo; Yokoyama, Jun; Kigawa, Takanori

    2015-01-01

We describe a strategy for stable isotope-aided protein nuclear magnetic resonance (NMR) analysis, called stable isotope encoding. The basic idea of this strategy is that amino-acid selective labeling can be considered as “encoding and decoding” processes, in which the information of amino acid type is encoded by the stable isotope labeling ratio of the corresponding residue and it is decoded by analyzing NMR spectra. According to the idea, the strategy can diminish the required number of labelled samples by increasing information content per sample, enabling discrimination of 19 kinds of non-proline amino acids with only three labeled samples. The idea also enables this strategy to combine with information technologies, such as error detection by check digit, to improve the robustness of analyses with low quality data. Stable isotope encoding will facilitate NMR analyses of proteins under non-ideal conditions, such as those in large complex systems, with low-solubility, and in living cells.

  20. Sampling strategies for indoor radon investigations

    International Nuclear Information System (INIS)

    Prichard, H.M.

    1983-01-01

Recent investigations prompted by concern about the environmental effects of residential energy conservation have produced many accounts of indoor radon concentrations far above background levels. In many instances, time-normalized annual exposures exceeded the 4 WLM per year standard currently used for uranium mining. Further investigations of indoor radon exposures are necessary to judge the extent of the problem and to estimate the practicality of health effects studies. A number of trends can be discerned as more indoor surveys are reported. It is becoming increasingly clear that local geological factors play a major, if not dominant, role in determining the distribution of indoor radon concentrations in a given area. Within a given locale, indoor radon concentrations tend to be log-normally distributed, and sample means differ markedly from one region to another. The appreciation of geological factors and the general log-normality of radon distributions will improve the accuracy of population dose estimates and facilitate the design of preliminary health effects studies. The relative merits of grab samples, short and long term integrated samples, and more complicated dose assessment strategies are discussed in the context of several types of epidemiological investigations. A new passive radon sampler with a 24 hour integration time is described and evaluated as a tool for pilot investigations.
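
    A short sketch of why the log-normality observation matters when summarizing a survey: the arithmetic mean is pulled up by the long upper tail, while the geometric mean and geometric standard deviation characterize the typical home and let one estimate the fraction of homes above an action level. All numbers below are hypothetical.

```python
# Summarizing a hypothetical indoor radon sample under log-normality.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
gm, gsd = 40.0, 2.5     # hypothetical geometric mean (Bq/m^3) and geometric SD
sample = rng.lognormal(mean=np.log(gm), sigma=np.log(gsd), size=200)

log_c = np.log(sample)
print("arithmetic mean:", sample.mean())          # dominated by the upper tail
print("geometric mean :", np.exp(log_c.mean()))   # the "typical" home
print("geometric SD   :", np.exp(log_c.std(ddof=1)))

# The fitted log-normal gives the fraction of homes above an action level:
action = 150.0
frac = 1.0 - norm.cdf((np.log(action) - log_c.mean()) / log_c.std(ddof=1))
print("estimated fraction above action level:", frac)
```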

  1. Robust statistical methods with R

    CERN Document Server

    Jureckova, Jana

    2005-01-01

Robust statistical methods were developed to supplement the classical procedures when the data violate classical assumptions. They are ideally suited to applied research across a broad spectrum of study, yet most books on the subject are narrowly focused, overly theoretical, or simply outdated. Robust Statistical Methods with R provides a systematic treatment of robust procedures with an emphasis on practical application. The authors work from underlying mathematical tools to implementation, paying special attention to the computational aspects. They cover the whole range of robust methods, including differentiable statistical functions, distance measures, influence functions, and asymptotic distributions, in a rigorous yet approachable manner. Highlighting hands-on problem solving, many examples and computational algorithms using the R software supplement the discussion. The book examines the characteristics of robustness, estimators of a real parameter, large sample properties, and goodness-of-fit tests. It...

  2. Small Sample Robust Testing for Normality against Pareto Tails

    Czech Academy of Sciences Publication Activity Database

    Stehlík, M.; Fabián, Zdeněk; Střelec, L.

    2012-01-01

    Roč. 41, č. 7 (2012), s. 1167-1194 ISSN 0361-0918 Grant - others:Aktion(CZ-AT) 51p7, 54p21, 50p14, 54p13 Institutional research plan: CEZ:AV0Z10300504 Keywords : consistency * Hill estimator * t-Hill estimator * location functional * Pareto tail * power comparison * returns * robust tests for normality Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.295, year: 2012

  3. A robust sound perception model suitable for neuromorphic implementation.

    Science.gov (United States)

    Coath, Martin; Sheik, Sadique; Chicca, Elisabetta; Indiveri, Giacomo; Denham, Susan L; Wennekers, Thomas

    2013-01-01

    We have recently demonstrated the emergence of dynamic feature sensitivity through exposure to formative stimuli in a real-time neuromorphic system implementing a hybrid analog/digital network of spiking neurons. This network, inspired by models of auditory processing in mammals, includes several mutually connected layers with distance-dependent transmission delays and learning in the form of spike timing dependent plasticity, which effects stimulus-driven changes in the network connectivity. Here we present results that demonstrate that the network is robust to a range of variations in the stimulus pattern, such as are found in naturalistic stimuli and neural responses. This robustness is a property critical to the development of realistic, electronic neuromorphic systems. We analyze the variability of the response of the network to "noisy" stimuli which allows us to characterize the acuity in information-theoretic terms. This provides an objective basis for the quantitative comparison of networks, their connectivity patterns, and learning strategies, which can inform future design decisions. We also show, using stimuli derived from speech samples, that the principles are robust to other challenges, such as variable presentation rate, that would have to be met by systems deployed in the real world. Finally we demonstrate the potential applicability of the approach to real sounds.

  4. A Robust Sound Perception Model Suitable for Neuromorphic Implementation

    Directory of Open Access Journals (Sweden)

    Martin eCoath

    2014-01-01

Full Text Available We have recently demonstrated the emergence of dynamic feature sensitivity through exposure to formative stimuli in a real-time neuromorphic system implementing a hybrid analogue/digital network of spiking neurons. This network, inspired by models of auditory processing in mammals, includes several mutually connected layers with distance-dependent transmission delays and learning in the form of spike timing dependent plasticity, which effects stimulus-driven changes in the network connectivity. Here we present results that demonstrate that the network is robust to a range of variations in the stimulus pattern, such as are found in naturalistic stimuli and neural responses. This robustness is a property critical to the development of realistic, electronic neuromorphic systems. We analyse the variability of the response of the network to `noisy' stimuli which allows us to characterize the acuity in information-theoretic terms. This provides an objective basis for the quantitative comparison of networks, their connectivity patterns, and learning strategies, which can inform future design decisions. We also show, using stimuli derived from speech samples, that the principles are robust to other challenges, such as variable presentation rate, that would have to be met by systems deployed in the real world. Finally we demonstrate the potential applicability of the approach to real sounds.

  5. Nonlinear robust hierarchical control for nonlinear uncertain systems

    Directory of Open Access Journals (Sweden)

    Leonessa Alexander

    1999-01-01

Full Text Available A nonlinear robust control-system design framework predicated on a hierarchical switching controller architecture parameterized over a set of moving nominal system equilibria is developed. Specifically, using equilibria-dependent Lyapunov functions, a hierarchical nonlinear robust control strategy is developed that robustly stabilizes a given nonlinear system over a prescribed range of system uncertainty by robustly stabilizing a collection of nonlinear controlled uncertain subsystems. The robust switching nonlinear controller architecture is designed based on a generalized (lower semicontinuous) Lyapunov function obtained by minimizing a potential function over a given switching set induced by the parameterized nominal system equilibria. The proposed framework robustly stabilizes a compact positively invariant set of a given nonlinear uncertain dynamical system with structured parametric uncertainty. Finally, the efficacy of the proposed approach is demonstrated on a jet engine propulsion control problem with uncertain pressure-flow map data.

  6. Chapter 2: Sampling strategies in forest hydrology and biogeochemistry

    Science.gov (United States)

    Roger C. Bales; Martha H. Conklin; Branko Kerkez; Steven Glaser; Jan W. Hopmans; Carolyn T. Hunsaker; Matt Meadows; Peter C. Hartsough

    2011-01-01

    Many aspects of forest hydrology have been based on accurate but not necessarily spatially representative measurements, reflecting the measurement capabilities that were traditionally available. Two developments are bringing about fundamental changes in sampling strategies in forest hydrology and biogeochemistry: (a) technical advances in measurement capability, as is...

  7. Robustness of Quadratic Hedging Strategies in Finance via Backward Stochastic Differential Equations with Jumps

    International Nuclear Information System (INIS)

    Di Nunno, Giulia; Khedher, Asma; Vanmaele, Michèle

    2015-01-01

We consider a backward stochastic differential equation with jumps (BSDEJ) which is driven by a Brownian motion and a Poisson random measure. We present two candidate-approximations to this BSDEJ and we prove that the solution of each candidate-approximation converges to the solution of the original BSDEJ in a space which we specify. We use this result to investigate in further detail the consequences of the choice of the model to (partial) hedging in incomplete markets in finance. As an application, we consider models in which the small variations in the price dynamics are modeled with a Poisson random measure with infinite activity and models in which these small variations are modeled with a Brownian motion or are cut off. Using the convergence results on BSDEJs, we show that quadratic hedging strategies are robust towards the approximation of the market prices and we derive an estimation of the model risk.

  8. Robustness of Quadratic Hedging Strategies in Finance via Backward Stochastic Differential Equations with Jumps

    Energy Technology Data Exchange (ETDEWEB)

    Di Nunno, Giulia, E-mail: giulian@math.uio.no [University of Oslo, Center of Mathematics for Applications (Norway); Khedher, Asma, E-mail: asma.khedher@tum.de [Technische Universität München, Chair of Mathematical Finance (Germany); Vanmaele, Michèle, E-mail: michele.vanmaele@ugent.be [Ghent University, Department of Applied Mathematics, Computer Science and Statistics (Belgium)

    2015-12-15

    We consider a backward stochastic differential equation with jumps (BSDEJ) which is driven by a Brownian motion and a Poisson random measure. We present two candidate-approximations to this BSDEJ and we prove that the solution of each candidate-approximation converges to the solution of the original BSDEJ in a space which we specify. We use this result to investigate in further detail the consequences of the choice of the model to (partial) hedging in incomplete markets in finance. As an application, we consider models in which the small variations in the price dynamics are modeled with a Poisson random measure with infinite activity and models in which these small variations are modeled with a Brownian motion or are cut off. Using the convergence results on BSDEJs, we show that quadratic hedging strategies are robust towards the approximation of the market prices and we derive an estimation of the model risk.

  9. Robust PID based power system stabiliser: Design and real-time implementation

    Energy Technology Data Exchange (ETDEWEB)

    Bevrani, Hassan [Department of Electrical and Computer Eng., University of Kurdistan, Sanandaj (Iran, Islamic Republic of); Hiyama, Takashi [Department of Electrical and Computer Eng., Kumamoto University, Kumamoto (Japan); Bevrani, Hossein [Department of Statistics, University of Tabriz, Tabriz (Iran, Islamic Republic of)

    2011-02-15

This paper addresses a new robust control strategy for the synthesis of robust proportional-integral-derivative (PID) based power system stabilisers (PSS). The PID-based PSS design problem is reduced to finding an optimal gain vector via an H∞ static output feedback control (H∞-SOF) technique, and the solution is easily carried out using a developed iterative linear matrix inequality algorithm. To illustrate the developed approach, a real-time experiment has been performed for a longitudinal four-machine infinite-bus system using the Analog Power System Simulator at the Research Laboratory of the Kyushu Electric Power Company. The results of the proposed control strategy are compared with full-order H∞ and conventional PSS designs. The robust PSS is shown to maintain robust performance and to properly minimise the effect of disturbances. (author)

  10. Measure of robustness for complex networks

    Science.gov (United States)

    Youssef, Mina Nabil

    to the spread of susceptible/infected/recovered (SIR) epidemics. To compute VCSIR, we propose a novel individual-based approach to model the spread of SIR epidemics in networks, which captures the infection size for a given effective infection rate. Thus, VCSIR quantitatively integrates the infection strength with the corresponding infection size. To optimize the VCSIR metric, a new mitigation strategy is proposed, based on a temporary reduction of contacts in social networks. The social contact network is modeled as a weighted graph that describes the frequency of contacts among the individuals. Thus, we consider the spread of an epidemic as a dynamical system, and the total number of infection cases as the state of the system, while the weight reduction in the social network is the controller variable leading to slow/reduce the spread of epidemics. Using optimal control theory, the obtained solution represents an optimal adaptive weighted network defined over a finite time interval. Moreover, given the high complexity of the optimization problem, we propose two heuristics to find the near optimal solutions by reducing the contacts among the individuals in a decentralized way. Finally, the cascading failures that can take place in power grids and have recently caused several blackouts are studied. We propose a new metric to assess the robustness of the power grid with respect to the cascading failures. The power grid topology is modeled as a network, which consists of nodes and links representing power substations and transmission lines, respectively. We also propose an optimal islanding strategy to protect the power grid when a cascading failure event takes place in the grid. The robustness metrics are numerically evaluated using real and synthetic networks to quantify their robustness with respect to disturbing dynamics. We show that the proposed metrics outperform the classical metrics in quantifying the robustness of networks and the efficiency of the mitigation

  11. A new way to improve the robustness of complex communication networks by allocating redundancy links

    International Nuclear Information System (INIS)

    Shi Chunhui; Zhuo Yue; Tang Jieying; Long Keping; Peng Yunfeng

    2012-01-01

    We investigate the robustness of complex communication networks on allocating redundancy links. The protecting key nodes (PKN) strategy is proposed to improve the robustness of complex communication networks against intentional attack. Our numerical simulations show that allocating a few redundant links among key nodes using the PKN strategy will significantly increase the robustness of scale-free complex networks. We have also theoretically proved and demonstrated the effectiveness of the PKN strategy. We expect that our work will help achieve a better understanding of communication networks. (paper)
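
    A minimal sketch of the idea follows, under the assumption that "key nodes" are the highest-degree hubs and that redundant links are allocated as bypass edges among each hub's neighbours; the paper's exact PKN allocation rule may differ. Robustness is read off as the giant-component size after a degree-targeted (intentional) attack.

```python
# Compare giant-component size under a degree-targeted attack, with and
# without redundant "bypass" links allocated around key (high-degree) nodes.
import networkx as nx

def giant_after_attack(g, n_removed):
    g = g.copy()
    hubs = sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:n_removed]
    g.remove_nodes_from(n for n, _ in hubs)
    return max((len(c) for c in nx.connected_components(g)), default=0)

g = nx.barabasi_albert_graph(1000, 2, seed=1)          # scale-free network
key = [n for n, _ in sorted(g.degree, key=lambda kv: kv[1], reverse=True)[:20]]

g_pkn = g.copy()
for k in key:                                          # link a few neighbours
    nbrs = sorted(g_pkn.neighbors(k))[:6]              # of each key node
    g_pkn.add_edges_from(zip(nbrs, nbrs[1:]))

print("giant component, plain:", giant_after_attack(g, 50))
print("giant component, PKN  :", giant_after_attack(g_pkn, 50))
```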

  12. Sampling strategy to develop a primary core collection of apple ...

    African Journals Online (AJOL)

    PRECIOUS

    2010-01-11

    Jan 11, 2010 ... Physiology and Molecular Biology for Fruit, Tree, Beijing 100193, China. ... analyzed on genetic diversity to ensure their represen- .... strategy, cluster and random sampling. .... on isozyme data―A simulation study, Theor.

  13. New solutions for NPP robustness improvement

    International Nuclear Information System (INIS)

    Wolski, Alexander

    2013-01-01

The Fukushima accident has triggered a major re-assessment of the robustness of nuclear stations. The first round of evaluations has been finished. Improvement areas and strategies have been identified. Implementation of upgrades has started worldwide. New solutions can provide substantial benefits.

  14. Adjustable Robust Strategies for Flood Protection

    NARCIS (Netherlands)

    Postek, Krzysztof; den Hertog, Dick; Kind, J.; Pustjens, Chris

    2016-01-01

Flood protection is of major importance to many flood-prone regions and involves substantial investment and maintenance costs. Modern flood risk management often requires determining a cost-efficient protection strategy, i.e., one with the lowest possible long-run cost and satisfying flood protection

  15. Robust estimation and hypothesis testing

    CERN Document Server

    Tiku, Moti L

    2004-01-01

In statistical theory and practice, a certain distribution is usually assumed and then optimal solutions sought. Since deviations from an assumed distribution are very common, one cannot feel comfortable with assuming a particular distribution and believing it to be exactly correct. That brings the robustness issue into focus. In this book, we have given statistical procedures which are robust to plausible deviations from an assumed model. The method of modified maximum likelihood estimation is used in formulating these procedures. The modified maximum likelihood estimators are explicit functions of sample observations and are easy to compute. They are asymptotically fully efficient and are as efficient as the maximum likelihood estimators for small sample sizes. The maximum likelihood estimators have computational problems and are, therefore, elusive. A broad range of topics are covered in this book. Solutions are given which are easy to implement and are efficient. The solutions are also robust to data anomali...

  16. Reducing regional drought vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty

    Science.gov (United States)

    Trindade, B. C.; Reed, P. M.; Herman, J. D.; Zeff, H. B.; Characklis, G. W.

    2017-06-01

    Emerging water scarcity concerns in many urban regions are associated with several deeply uncertain factors, including rapid population growth, limited coordination across adjacent municipalities and the increasing risks for sustained regional droughts. Managing these uncertainties will require that regional water utilities identify coordinated, scarcity-mitigating strategies that trigger the appropriate actions needed to avoid water shortages and financial instabilities. This research focuses on the Research Triangle area of North Carolina, seeking to engage the water utilities within Raleigh, Durham, Cary and Chapel Hill in cooperative and robust regional water portfolio planning. Prior analysis of this region through the year 2025 has identified significant regional vulnerabilities to volumetric shortfalls and financial losses. Moreover, efforts to maximize the individual robustness of any of the mentioned utilities also have the potential to strongly degrade the robustness of the others. This research advances a multi-stakeholder Many-Objective Robust Decision Making (MORDM) framework to better account for deeply uncertain factors when identifying cooperative drought management strategies. Our results show that appropriately designing adaptive risk-of-failure action triggers required stressing them with a comprehensive sample of deeply uncertain factors in the computational search phase of MORDM. Search under the new ensemble of states-of-the-world is shown to fundamentally change perceived performance tradeoffs and substantially improve the robustness of individual utilities as well as the overall region to water scarcity. Search under deep uncertainty enhanced the discovery of how cooperative water transfers, financial risk mitigation tools, and coordinated regional demand management must be employed jointly to improve regional robustness and decrease robustness conflicts between the utilities. Insights from this work have general merit for regions where

  17. Robust and economical multi-sample, multi-wavelength UV/vis absorption and fluorescence detector for biological and chemical contamination

    Science.gov (United States)

    Lu, Peter J.; Hoehl, Melanie M.; Macarthur, James B.; Sims, Peter A.; Ma, Hongshen; Slocum, Alexander H.

    2012-09-01

    We present a portable multi-channel, multi-sample UV/vis absorption and fluorescence detection device, which has no moving parts, can operate wirelessly and on batteries, interfaces with smart mobile phones or tablets, and has the sensitivity of commercial instruments costing an order of magnitude more. We use UV absorption to measure the concentration of ethylene glycol in water solutions at all levels above those deemed unsafe by the United States Food and Drug Administration; in addition we use fluorescence to measure the concentration of d-glucose. Both wavelengths can be used concurrently to increase measurement robustness and increase detection sensitivity. Our small robust economical device can be deployed in the absence of laboratory infrastructure, and therefore may find applications immediately following natural disasters, and in more general deployment for much broader-based testing of food, agricultural and household products to prevent outbreaks of poisoning and disease.

  18. Attractive ellipsoids in robust control

    CERN Document Server

    Poznyak, Alexander; Azhmyakov, Vadim

    2014-01-01

    This monograph introduces a newly developed robust-control design technique for a wide class of continuous-time dynamical systems called the “attractive ellipsoid method.” Along with a coherent introduction to the proposed control design and related topics, the monograph studies nonlinear affine control systems in the presence of uncertainty and presents a constructive and easily implementable control strategy that guarantees certain stability properties. The authors discuss linear-style feedback control synthesis in the context of the above-mentioned systems. The development and physical implementation of high-performance robust-feedback controllers that work in the absence of complete information is addressed, with numerous examples to illustrate how to apply the attractive ellipsoid method to mechanical and electromechanical systems. While theorems are proved systematically, the emphasis is on understanding and applying the theory to real-world situations. Attractive Ellipsoids in Robust Control will a...

  19. Development and Demonstration of a Method to Evaluate Bio-Sampling Strategies Using Building Simulation and Sample Planning Software.

    Science.gov (United States)

    Dols, W Stuart; Persily, Andrew K; Morrow, Jayne B; Matzke, Brett D; Sego, Landon H; Nuffer, Lisa L; Pulsipher, Brent A

    2010-01-01

    In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority area zones within the building and then sampling designs and strategies could be developed based on those zones.
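
    A generic sketch of the workflow described above, with random placeholder fields standing in for CONTAM dispersion output and a naive proportional allocation standing in for a VSP sampling plan; neither tool's actual API is used.

```python
# Reduce an ensemble of simulated contaminant fields to priority zones and a
# naive proportional sample allocation. Placeholder data, no CONTAM/VSP APIs.
import numpy as np

rng = np.random.default_rng(9)
n_zones, n_runs = 12, 50
ensemble = rng.gamma(shape=0.5, scale=rng.random(n_zones),
                     size=(n_runs, n_zones))           # stand-in dispersion runs

p_hit = (ensemble > 1.0).mean(axis=0)                  # per-zone exceedance rate
priority = np.argsort(p_hit)[::-1]

budget = 40                                            # total surface samples
alloc = np.round(budget * p_hit / p_hit.sum()).astype(int)  # rounding may shift total
for z in priority[:5]:
    print(f"zone {z:2d}: P(contaminated) = {p_hit[z]:.2f}, samples = {alloc[z]}")
```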

  20. Robust weak measurements on finite samples

    International Nuclear Information System (INIS)

    Tollaksen, Jeff

    2007-01-01

A new weak measurement procedure is introduced for finite samples which yields accurate weak values that are outside the range of eigenvalues and which do not require an exponentially rare ensemble. This procedure provides a unique advantage in the amplification of small nonrandom signals by minimizing uncertainties in determining the weak value and by minimizing sample size. This procedure can also extend the strength of the coupling between the system and measuring device to a new regime.

  1. Sampling strategy for a large scale indoor radiation survey - a pilot project

    International Nuclear Information System (INIS)

    Strand, T.; Stranden, E.

    1986-01-01

    Optimisation of a stratified random sampling strategy for large scale indoor radiation surveys is discussed. It is based on the results from a small scale pilot project where variances in dose rates within different categories of houses were assessed. By selecting a predetermined precision level for the mean dose rate in a given region, the number of measurements needed can be optimised. The results of a pilot project in Norway are presented together with the development of the final sampling strategy for a planned large scale survey. (author)
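
    A minimal sketch of the optimisation step: given hypothetical stratum weights and pilot standard deviations (not the Norwegian data), Neyman allocation yields the total number of measurements needed for a chosen precision on the regional mean and splits it across house categories.

```python
# Neyman allocation from pilot-survey variances. Hypothetical strata.
import math

strata = {                       # house category: (population share, pilot SD)
    "wood, slab on grade": (0.45, 30.0),
    "wood, basement":      (0.35, 55.0),
    "concrete":            (0.20, 80.0),
}
z, e = 1.96, 5.0                 # 95% confidence, +/-5 units on the mean dose rate

s_bar = sum(w * s for w, s in strata.values())
n_total = math.ceil((z * s_bar / e) ** 2)   # ignoring finite-population correction
alloc = {k: math.ceil(n_total * w * s / s_bar) for k, (w, s) in strata.items()}
print("total n =", n_total, "allocation:", alloc)
```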

  2. Satellite network robust QoS-aware routing

    CERN Document Server

    Long, Fei

    2014-01-01

    Satellite Network Robust QoS-aware Routing presents a novel routing strategy for satellite networks. This strategy is useful for the design of multi-layered satellite networks as it can greatly reduce the number of time slots in one system cycle. The traffic prediction and engineering approaches make the system robust so that the traffic spikes can be handled effectively. The multi-QoS optimization routing algorithm can satisfy various potential user requirements. Clear and sufficient illustrations are also presented in the book. As the chapters cover the above topics independently, readers from different research backgrounds in constellation design, multi-QoS routing, and traffic engineering can benefit from the book.   Fei Long is a senior engineer at Beijing R&D Center of 54th Research Institute of China Electronics Technology Group Corporation.

  3. Robust nonhomogeneous training samples detection method for space-time adaptive processing radar using sparse-recovery with knowledge-aided

    Science.gov (United States)

    Li, Zhihui; Liu, Hanwei; Zhang, Yongshun; Guo, Yiduo

    2017-10-01

    The performance of space-time adaptive processing (STAP) may degrade significantly when some of the training samples are contaminated by the signal-like components (outliers) in nonhomogeneous clutter environments. To remove the training samples contaminated by outliers in nonhomogeneous clutter environments, a robust nonhomogeneous training samples detection method using the sparse-recovery (SR) with knowledge-aided (KA) is proposed. First, the reduced-dimension (RD) overcomplete spatial-temporal steering dictionary is designed with the prior knowledge of system parameters and the possible target region. Then, the clutter covariance matrix (CCM) of cell under test is efficiently estimated using a modified focal underdetermined system solver (FOCUSS) algorithm, where a RD overcomplete spatial-temporal steering dictionary is applied. Third, the proposed statistics are formed by combining the estimated CCM with the generalized inner products (GIP) method, and the contaminated training samples can be detected and removed. Finally, several simulation results validate the effectiveness of the proposed KA-SR-GIP method.
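
    A stripped-down sketch of the final screening step: the generalized inner product (GIP) statistic x^H R^-1 x flags training samples contaminated by signal-like outliers. The paper's knowledge-aided sparse-recovery estimate of the clutter covariance is replaced here by a plain diagonally loaded sample covariance, and all dimensions are illustrative.

```python
# GIP screening of training samples: flag snapshots whose whitened power is
# far above the median. Covariance here is a diagonally loaded sample estimate.
import numpy as np

rng = np.random.default_rng(3)
n, K = 16, 64                        # spatio-temporal DOFs, training samples
X = (rng.standard_normal((K, n)) + 1j * rng.standard_normal((K, n))) / np.sqrt(2)
X[5] += 4.0 * np.exp(1j * 2 * np.pi * 0.2 * np.arange(n))   # signal-like outlier

R = X.conj().T @ X / K + 1e-3 * np.eye(n)    # loaded sample covariance
Rinv = np.linalg.inv(R)
gip = np.real(np.einsum("ki,ij,kj->k", X.conj(), Rinv, X))  # x^H R^-1 x per sample

flagged = np.where(gip > 2.0 * np.median(gip))[0]
print("flagged training samples:", flagged)
```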

  4. Many-objective robust decision making for water allocation under climate change

    NARCIS (Netherlands)

    Yan, Dan; Ludwig, Fulco; Huang, He Qing; Werners, Saskia E.

    2017-01-01

    Water allocation is facing profound challenges due to climate change uncertainties. To identify adaptive water allocation strategies that are robust to climate change uncertainties, a model framework combining many-objective robust decision making and biophysical modeling is developed for large

  5. Recruitment of hard-to-reach population subgroups via adaptations of the snowball sampling strategy.

    Science.gov (United States)

    Sadler, Georgia Robins; Lee, Hau-Chen; Lim, Rod Seung-Hwan; Fullerton, Judith

    2010-09-01

    Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author's program of research are provided to demonstrate how adaptations of snowball sampling can be used effectively in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more-vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or for research studies when the recruitment of a population-based sample is not essential.

  6. Self-optimizing robust nonlinear model predictive control

    NARCIS (Netherlands)

    Lazar, M.; Heemels, W.P.M.H.; Jokic, A.; Thoma, M.; Allgöwer, F.; Morari, M.

    2009-01-01

    This paper presents a novel method for designing robust MPC schemes that are self-optimizing in terms of disturbance attenuation. The method employs convex control Lyapunov functions and disturbance bounds to optimize robustness of the closed-loop system on-line, at each sampling instant - a unique

  7. Sampling strategies for the analysis of reactive low-molecular weight compounds in air

    NARCIS (Netherlands)

    Henneken, H.

    2006-01-01

    Within this thesis, new sampling and analysis strategies for the determination of airborne workplace contaminants have been developed. Special focus has been directed towards the development of air sampling methods that involve diffusive sampling. In an introductory overview, the current

  8. Solving the Puzzle of Recruitment and Retention-Strategies for Building a Robust Clinical and Translational Research Workforce.

    Science.gov (United States)

    Nearing, Kathryn A; Hunt, Cerise; Presley, Jessica H; Nuechterlein, Bridget M; Moss, Marc; Manson, Spero M

    2015-10-01

    This paper is the first in a five-part series on the clinical and translational science educational pipeline and presents strategies to support recruitment and retention to create diverse pathways into clinical and translational research (CTR). The strategies address multiple levels or contexts of persistence decisions and include: (1) creating a seamless pipeline by forming strategic partnerships to achieve continuity of support for scholars and collective impact; (2) providing meaningful research opportunities to support identity formation as a scientist and sustain motivation to pursue and persist in CTR careers; (3) fostering an environment for effective mentorship and peer support to promote academic and social integration; (4) advocating for institutional policies to alleviate environmental pull factors; and, (5) supporting program evaluation-particularly, the examination of longitudinal outcomes. By combining institutional policies that promote a culture and climate for diversity with quality, evidence-based programs and integrated networks of support, we can create the environment necessary for diverse scholars to progress successfully and efficiently through the pipeline to achieve National Institutes of Health's vision of a robust CTR workforce. © 2015 Wiley Periodicals, Inc.

  9. Plug and Play Robust Distributed Control with Ellipsoidal Parametric Uncertainty System

    Directory of Open Access Journals (Sweden)

    Hong Wang-jian

    2016-01-01

Full Text Available We consider a continuous linear time-invariant system with ellipsoidal parametric uncertainty structured into subsystems. Since the design of a local controller uses only information on a subsystem and its neighbours, we combine the plug-and-play idea and robust distributed control to propose a distributed control strategy for linear systems with ellipsoidal parametric uncertainty. Firstly, for a linear system with ellipsoidal parametric uncertainty, a necessary and sufficient condition for robust state feedback control is proposed by means of a linear matrix inequality. If this necessary and sufficient condition is satisfied, the robust state feedback gain matrix can be easily derived to guarantee robust stability and prescribed closed-loop performance. Secondly, the plug-and-play idea is introduced in the design process. Finally, an example of aircraft flutter model parameter identification illustrates the efficiency of the proposed control strategy.
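
    A sketch of the robust-state-feedback LMI step in cvxpy, with polytopic vertex models standing in for the paper's ellipsoidal uncertainty description: find X ≻ 0 and W with A_i X + X A_iᵀ + B W + Wᵀ Bᵀ ≺ 0 at each vertex, then recover the gain K = W X⁻¹. The matrices are illustrative.

```python
# Robust state feedback u = Kx via a common-Lyapunov LMI over vertex models.
import cvxpy as cp
import numpy as np

A1 = np.array([[0.0, 1.0], [-1.0, -0.5]])    # two vertices of the uncertain A
A2 = np.array([[0.0, 1.0], [-2.0, -0.1]])
B = np.array([[0.0], [1.0]])
n, m = 2, 1

X = cp.Variable((n, n), symmetric=True)      # X = P^-1
W = cp.Variable((m, n))                      # W = K X
eps = 1e-3
cons = [X >> eps * np.eye(n)]
for Ai in (A1, A2):
    M = Ai @ X + X @ Ai.T + B @ W + W.T @ B.T
    cons.append(M << -eps * np.eye(n))       # closed loop stable at each vertex

cp.Problem(cp.Minimize(0), cons).solve(solver=cp.SCS)
K = W.value @ np.linalg.inv(X.value)
print("robust gain K =", K)
```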

  10. A Geostatistical Approach to Indoor Surface Sampling Strategies

    DEFF Research Database (Denmark)

    Schneider, Thomas; Petersen, Ole Holm; Nielsen, Allan Aasbjerg

    1990-01-01

Particulate surface contamination is of concern in production industries such as food processing, aerospace, electronics and semiconductor manufacturing. There is also an increased awareness that surface contamination should be monitored in industrial hygiene surveys. A conceptual and theoretical framework for designing sampling strategies is thus developed. The distribution and spatial correlation of surface contamination can be characterized using concepts from geostatistical science, where spatial applications of statistics are most developed. The theory is summarized and particulate surface contamination, sampled from small areas on a table, has been used to illustrate the method. First, the spatial correlation is modelled and the parameters estimated from the data. Next, it is shown how the contamination at positions not measured can be estimated with kriging, a minimum mean square error method...
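
    A minimal ordinary-kriging sketch of the estimation step described above, with illustrative coordinates, counts and an assumed exponential covariance model; the weights come from the standard kriging system with an unbiasedness constraint.

```python
# Ordinary kriging of surface contamination at an unsampled position.
import numpy as np

pts = np.array([[0.1, 0.2], [0.4, 0.9], [0.8, 0.3], [0.5, 0.5]])  # sampled (m)
z = np.array([12.0, 7.5, 9.1, 8.3])                  # measured contamination
sill, corr_range = 4.0, 0.5                          # fitted covariance model

def cov(a, b):
    h = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sill * np.exp(-h / corr_range)            # exponential covariance

n = len(pts)
A = np.ones((n + 1, n + 1))                          # kriging system with the
A[:n, :n] = cov(pts, pts); A[-1, -1] = 0.0           # unbiasedness constraint
x0 = np.array([[0.6, 0.7]])                          # position to estimate
b = np.append(cov(pts, x0).ravel(), 1.0)
w = np.linalg.solve(A, b)                            # weights + Lagrange mult.
print("kriged estimate at x0:", w[:n] @ z)
```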

  11. Effect of intermittent feedback control on robustness of human-like postural control system

    Science.gov (United States)

    Tanabe, Hiroko; Fujii, Keisuke; Suzuki, Yasuyuki; Kouzaki, Motoki

    2016-03-01

    Humans have to acquire postural robustness to maintain stability against internal and external perturbations. Human standing has been recently modelled using an intermittent feedback control. However, the causality inside of the closed-loop postural control system associated with the neural control strategy is still unknown. Here, we examined the effect of intermittent feedback control on postural robustness and of changes in active/passive components on joint coordinative structure. We implemented computer simulation of a quadruple inverted pendulum that is mechanically close to human tiptoe standing. We simulated three pairs of joint viscoelasticity and three choices of neural control strategies for each joint: intermittent, continuous, or passive control. We examined postural robustness for each parameter set by analysing the region of active feedback gain. We found intermittent control at the hip joint was necessary for model stabilisation and model parameters affected the robustness of the pendulum. Joint sways of the pendulum model were partially smaller than or similar to those of experimental data. In conclusion, intermittent feedback control was necessary for the stabilisation of the quadruple inverted pendulum. Also, postural robustness of human-like multi-link standing would be achieved by both passive joint viscoelasticity and neural joint control strategies.
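
    A single-link caricature of the intermittent strategy (the paper studies a quadruple pendulum): active PD torque is switched on only when the state drifts away from upright, while passive viscoelasticity acts continuously. Gains and parameters are illustrative, and passive stiffness alone is deliberately insufficient to stabilize the link.

```python
# Peak sway of a single inverted link under continuous vs intermittent PD
# control; passive stiffness/damping alone cannot stabilize the link.
def simulate(intermittent, dt=0.001, T=10.0):
    g_over_l, k_pas, b_pas = 9.0, 4.0, 1.0   # toppling term, passive gains
    Kp, Kd = 12.0, 3.0                       # active PD gains
    th, om, peak = 0.05, 0.0, 0.0            # initial tilt (rad), velocity
    for _ in range(int(T / dt)):
        active = (th * om > 0) or not intermittent   # act when drifting away
        u = (Kp * th + Kd * om) if active else 0.0
        dom = g_over_l * th - k_pas * th - b_pas * om - u
        th, om = th + om * dt, om + dom * dt         # explicit Euler step
        peak = max(peak, abs(th))
    return peak

print("continuous   peak sway:", simulate(False))
print("intermittent peak sway:", simulate(True))
```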

  12. Robust motion estimation using connected operators

    OpenAIRE

    Salembier Clairon, Philippe Jean; Sanson, H

    1997-01-01

This paper discusses the use of connected operators for robust motion estimation. The proposed strategy involves a motion estimation step extracting the dominant motion and a filtering step relying on connected operators that remove objects that do not follow the dominant motion. These two steps are iterated in order to obtain an accurate motion estimation and a precise definition of the objects following this motion. This strategy can be applied on the entire frame or on individual connected c...

  13. Forecasting exchange rates: a robust regression approach

    OpenAIRE

    Preminger, Arie; Franck, Raphael

    2005-01-01

The least squares estimation method, as well as other ordinary estimation methods for regression models, can be severely affected by a small number of outliers, thus providing poor out-of-sample forecasts. This paper suggests a robust regression approach, based on the S-estimation method, to construct forecasting models that are less sensitive to data contamination by outliers. A robust linear autoregressive (RAR) and a robust neural network (RNN) model are estimated to study the predictabil...
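
    A sketch of the contrast on synthetic AR(1) "returns" with injected outliers. S-estimators are not available in mainstream Python libraries, so a Huber M-estimator is used here purely as a readily available robust stand-in for the paper's method.

```python
# OLS vs Huber fits of an AR(1) forecaster on outlier-contaminated data.
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(7)
x = np.zeros(300)
for t in range(1, 300):                       # AR(1) series, true slope 0.6
    x[t] = 0.6 * x[t - 1] + rng.normal(scale=0.1)
x[50] += 3.0; x[120] -= 3.0                   # a few gross outliers

X, y = x[:-1].reshape(-1, 1), x[1:]
for name, model in [("OLS  ", LinearRegression()), ("Huber", HuberRegressor())]:
    model.fit(X[:250], y[:250])               # fit, then forecast out of sample
    mae = np.mean(np.abs(model.predict(X[250:]) - y[250:]))
    print(name, "slope:", round(float(model.coef_[0]), 3), " MAE:", round(mae, 4))
```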

  14. Impact of sampling strategy on stream load estimates in till landscape of the Midwest

    Science.gov (United States)

    Vidon, P.; Hubbard, L.E.; Soyeux, E.

    2009-01-01

Accurately estimating various solute loads in streams during storms is critical to accurately determine maximum daily loads for regulatory purposes. This study investigates the impact of sampling strategy on solute load estimates in streams in the US Midwest. Three different solute types (nitrate, magnesium, and dissolved organic carbon (DOC)) and three sampling strategies are assessed. Regardless of the method, the average error on nitrate loads is higher than for magnesium or DOC loads, and all three methods generally underestimate DOC loads and overestimate magnesium loads. Increasing sampling frequency only slightly improves the accuracy of solute load estimates but generally improves the precision of load calculations. This type of investigation is critical for water management and environmental assessment so error on solute load calculations can be taken into account by landscape managers, and sampling strategies optimized as a function of monitoring objectives. © 2008 Springer Science+Business Media B.V.
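
    A small sketch of how sampling frequency propagates into load error: subsample a synthetic daily concentration/discharge record, interpolate concentration back to daily values, and compare annual loads. The record is synthetic, not the study's Midwest data.

```python
# Annual load error when concentration is sampled weekly or monthly.
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(365)
Q = 2.0 + 1.5 * np.maximum(0, np.sin(2 * np.pi * days / 365)) \
    + rng.gamma(1.0, 0.5, 365)                          # discharge, m^3/s
C = 5.0 + 0.8 * Q + rng.normal(0, 0.5, 365)             # concentration, mg/L

true_load = np.sum(C * Q * 86.4)              # 1 mg/L * 1 m^3/s = 86.4 kg/day
for step in (7, 30):
    idx = days[::step]
    C_hat = np.interp(days, idx, C[idx])      # interpolate back to daily
    est = np.sum(C_hat * Q * 86.4)
    print(f"sampled every {step:2d} d: "
          f"{100 * (est - true_load) / true_load:+.1f}% load error")
```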

  15. Perspectives on land snails - sampling strategies for isotopic analyses

    Science.gov (United States)

    Kwiecien, Ola; Kalinowski, Annika; Kamp, Jessica; Pellmann, Anna

    2017-04-01

Since the seminal works of Goodfriend (1992), several substantial studies confirmed a relation between the isotopic composition of land snail shells (δ18O, δ13C) and environmental parameters like precipitation amount, moisture source, temperature and vegetation type. This relation, however, is not straightforward and site dependent. The choice of sampling strategy (discrete or bulk sampling) and cleaning procedure (several methods can be used, but a comparison of their effects on an individual shell has not yet been achieved) further complicate the shell analysis. The advantage of using snail shells as an environmental archive lies in the snails' limited mobility, and therefore an intrinsic aptitude of recording local and site-specific conditions. Also, snail shells are often found at dated archaeological sites. An obvious drawback is that shell assemblages rarely make up a continuous record, and a single shell is only a snapshot of the environmental setting at a given time. Shells from archaeological sites might represent a dietary component and cooking would presumably alter the isotopic signature of aragonite material. Consequently, a proper sampling strategy is of great importance and should be adjusted to the scientific question. Here, we compare and contrast different sampling approaches using modern shells collected in Morocco, Spain and Germany. The bulk shell approach (fine-ground material) yields information on mean environmental parameters within the life span of analyzed individuals. However, despite homogenization, replicate measurements of bulk shell material returned results with a variability greater than analytical precision (up to 2‰ for δ18O, and up to 1‰ for δ13C), calling for caution when analyzing only single individuals. Horizontal high-resolution sampling (single drill holes along growth lines) provides insights into the amplitude of seasonal variability, while vertical high-resolution sampling (multiple drill holes along the same growth line

  16. Sample preparation strategies for food and biological samples prior to nanoparticle detection and imaging

    DEFF Research Database (Denmark)

    Larsen, Erik Huusfeldt; Löschner, Katrin

    2014-01-01

microscopy (TEM) proved to be necessary for trouble shooting of results obtained from AFFF-LS-ICP-MS. Aqueous and enzymatic extraction strategies were tested for thorough sample preparation aiming at degrading the sample matrix and to liberate the AgNPs from chicken meat into liquid suspension. The resulting AFFF-ICP-MS fractograms, which corresponded to the enzymatic digests, showed a major nano-peak (about 80 % recovery of AgNPs spiked to the meat) plus new smaller peaks that eluted close to the void volume of the fractograms. Small, but significant shifts in retention time of AFFF peaks were observed for the meat sample extracts and the corresponding neat AgNP suspension, and rendered sizing by way of calibration with AgNPs as sizing standards inaccurate. In order to gain further insight into the sizes of the separated AgNPs, or their possible dissolved state, fractions of the AFFF eluate were collected...

  17. Sampling strategy for estimating human exposure pathways to consumer chemicals

    NARCIS (Netherlands)

    Papadopoulou, Eleni; Padilla-Sanchez, Juan A.; Collins, Chris D.; Cousins, Ian T.; Covaci, Adrian; de Wit, Cynthia A.; Leonards, Pim E.G.; Voorspoels, Stefan; Thomsen, Cathrine; Harrad, Stuart; Haug, Line S.

    2016-01-01

    Human exposure to consumer chemicals has become a worldwide concern. In this work, a comprehensive sampling strategy is presented, to our knowledge being the first to study all relevant exposure pathways in a single cohort using multiple methods for assessment of exposure from each exposure pathway.

  18. Dried blood spot measurement: application in tacrolimus monitoring using limited sampling strategy and abbreviated AUC estimation.

    Science.gov (United States)

    Cheung, Chi Yuen; van der Heijden, Jaques; Hoogtanders, Karin; Christiaans, Maarten; Liu, Yan Lun; Chan, Yiu Han; Choi, Koon Shing; van de Plas, Afke; Shek, Chi Chung; Chau, Ka Foon; Li, Chun Sang; van Hooff, Johannes; Stolk, Leo

    2008-02-01

Dried blood spot (DBS) sampling and high-performance liquid chromatography tandem mass spectrometry have been developed for monitoring tacrolimus levels. Our center favors the use of a limited sampling strategy and an abbreviated formula to estimate the area under the concentration-time curve (AUC(0-12)). However, it is inconvenient for patients because they have to wait in the center for blood sampling. We investigated the application of the DBS method in tacrolimus level monitoring using the limited sampling strategy and abbreviated AUC estimation approach. Duplicate venous samples were obtained at each time point (C(0), C(2), and C(4)). To determine the stability of blood samples, one venous sample was sent to our laboratory immediately. The other duplicate venous samples, together with simultaneous fingerprick blood samples, were sent to the University of Maastricht in the Netherlands. Thirty-six patients were recruited and 108 sets of blood samples were collected. There was a highly significant relationship between AUC(0-12) estimated from venous blood samples and fingerprick blood samples (r(2) = 0.96, P < 0.001), supporting the use of DBS sampling together with the limited sampling strategy and abbreviated AUC(0-12) estimation for drug monitoring.
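
    The abbreviated-AUC idea reduces to a regression equation fitted in a reference population; a sketch with hypothetical placeholder coefficients (not the centre's validated equation) looks like this:

```python
# Hypothetical abbreviated-AUC equation from a limited sampling strategy.
def abbreviated_auc(c0, c2, c4, a=10.0, b0=2.5, b2=1.8, b4=3.9):
    """Estimate tacrolimus AUC(0-12) [ng*h/mL] from trough (C0) and the
    2-h and 4-h post-dose concentrations [ng/mL]. Placeholder coefficients."""
    return a + b0 * c0 + b2 * c2 + b4 * c4

print(abbreviated_auc(c0=6.1, c2=18.4, c4=12.2), "ng*h/mL")
```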

  19. Introduction to Robust Estimation and Hypothesis Testing

    CERN Document Server

    Wilcox, Rand R

    2012-01-01

This revised book provides a thorough explanation of the foundation of robust methods, incorporating the latest updates on R and S-Plus, robust ANOVA (Analysis of Variance) and regression. It guides advanced students and other professionals through the basic strategies used for developing practical solutions to problems, and provides a brief background on the foundations of modern methods, placing the new methods in historical context. Author Rand Wilcox includes chapter exercises and many real-world examples that illustrate how various methods perform in different situations. Introduction to R

  20. Robust network design for multispecies conservation

    Science.gov (United States)

    Ronan Le Bras; Bistra Dilkina; Yexiang Xue; Carla P. Gomes; Kevin S. McKelvey; Michael K. Schwartz; Claire A. Montgomery

    2013-01-01

Our work is motivated by an important network design application in computational sustainability concerning wildlife conservation. In the face of human development and climate change, it is important that conservation plans for protecting landscape connectivity exhibit a certain level of robustness. While previous work has focused on conservation strategies that result...

  1. Statistical sampling strategies

    International Nuclear Information System (INIS)

    Andres, T.H.

    1987-01-01

    Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized
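
    A short sketch contrasting two of the selection methods named above, simple random sampling and Latin hypercube sampling, followed by an importance-style transformation of the uniforms to the model's parameter distributions. The toy model and distributions are illustrative.

```python
# Simple random vs Latin hypercube selection of two model parameters.
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(0)
n = 64
u_srs = rng.random((n, 2))                          # simple random sampling
u_lhs = qmc.LatinHypercube(d=2, seed=0).random(n)   # one point per margin stratum

def transform(u):                                   # importance transformation
    k = norm.ppf(u[:, 0], loc=1.0, scale=0.2)       # normal rate constant
    q = 10 ** (u[:, 1] * 2.0 - 1.0)                 # log-uniform on [0.1, 10]
    return k, q

for name, u in [("SRS", u_srs), ("LHS", u_lhs)]:
    k, q = transform(u)
    y = k * np.sqrt(q)                              # toy model output
    print(name, "output mean:", y.mean().round(3),
          " k-margin SD:", k.std().round(3))
```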

  2. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Tijssen, Rob H.N. [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Senneville, Baudouin D. de [Imaging Division, University Medical Center Utrecht, Utrecht (Netherlands); L' Institut de Mathématiques de Bordeaux, Unité Mixte de Recherche 5251, Centre National de la Recherche Scientifique/University of Bordeaux, Bordeaux (France); Heerkens, Hanne D.; Vulpen, Marco van; Lagendijk, Jan J.W.; Berg, Cornelis A.T. van den [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands)

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  3. Adaptive Critic Nonlinear Robust Control: A Survey.

    Science.gov (United States)

    Wang, Ding; He, Haibo; Liu, Derong

    2017-10-01

Adaptive dynamic programming (ADP) and reinforcement learning are quite relevant to each other when performing intelligent optimization. They are both regarded as promising methods involving important components of evaluation and improvement, at the background of information technology, such as artificial intelligence, big data, and deep learning. Although great progress has been achieved and surveyed in addressing nonlinear optimal control problems, research on the robustness of ADP-based control strategies under uncertain environments has not been fully summarized. Hence, this survey reviews the recent main results of adaptive-critic-based robust control design of continuous-time nonlinear systems. The ADP-based nonlinear optimal regulation is reviewed, followed by robust stabilization of nonlinear systems with matched uncertainties, guaranteed cost control design of unmatched plants, and decentralized stabilization of interconnected systems. Additionally, further comprehensive discussions are presented, including event-based robust control design, improvement of the critic learning rule, nonlinear H∞ control design, and several notes on future perspectives. By applying the ADP-based optimal and robust control methods to a practical power system and an overhead crane plant, two typical examples are provided to verify the effectiveness of theoretical results. Overall, this survey is beneficial to promote the development of adaptive critic control methods with robustness guarantee and the construction of higher level intelligent systems.

  4. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it was originally faced considering model calibration. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and leakage management purposes, has been faced considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis to identify network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index, as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
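
    A much-simplified sketch of the modularity idea for meter placement, using generic modularity communities on a random geometric graph as a stand-in for the authors' WDN-oriented and sampling-oriented indices: partition the network into modules, then place one pressure meter per module.

```python
# Partition a stand-in network into modules; one pressure meter per module.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

g = nx.random_geometric_graph(60, 0.22, seed=4)   # proxy for a looped WDN layout
modules = greedy_modularity_communities(g)        # generic modularity partition
bc = nx.betweenness_centrality(g)

meters = [max(m, key=bc.get) for m in modules]    # most central node per module
print(len(modules), "modules; pressure meters at nodes", sorted(meters))
```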

  5. Robust linear discriminant analysis with distance based estimators

    Science.gov (United States)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina

    2017-11-01

Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function to distinguish between populations and to allocate future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, the LDA yields the optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate these problems, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) has been proposed in this study. The MVV estimators were used to substitute the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). Simulation and real data studies were conducted to examine the performance of the proposed RLDR, measured in terms of misclassification error rates. The computational results showed that the proposed RLDR is better than the classical LDR and comparable with the existing robust LDR.
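
    A sketch of the robust discriminant rule follows. The MVV estimator is not available in common libraries, so the minimum covariance determinant (MCD) is substituted here as the robust location/scatter estimator; the construction of the rule is otherwise the classical plug-in LDR.

```python
# Classical vs robust (MCD-based) linear discriminant rule on contaminated data.
import numpy as np
from sklearn.covariance import MinCovDet

def fit_ldr(X0, X1, robust=True):
    n0, n1 = len(X0), len(X1)
    if robust:
        e0, e1 = MinCovDet().fit(X0), MinCovDet().fit(X1)
        m0, m1 = e0.location_, e1.location_
        S = (n0 * e0.covariance_ + n1 * e1.covariance_) / (n0 + n1)
    else:
        m0, m1 = X0.mean(0), X1.mean(0)
        S = ((n0 - 1) * np.cov(X0.T) + (n1 - 1) * np.cov(X1.T)) / (n0 + n1 - 2)
    w = np.linalg.solve(S, m1 - m0)              # discriminant direction
    c = w @ (m0 + m1) / 2.0                      # midpoint cut-off
    return lambda X: (X @ w > c).astype(int)     # 1 = allocate to group 1

rng = np.random.default_rng(2)
X0 = rng.multivariate_normal([0, 0], np.eye(2), 100)
X1 = rng.multivariate_normal([2, 2], np.eye(2), 100)
X0[:5] += 12.0                                   # outliers in group 0
for robust in (False, True):
    rule = fit_ldr(X0, X1, robust)
    err = (rule(X0).mean() + (1 - rule(X1)).mean()) / 2
    print("robust " if robust else "classic", "misclassification rate:", err)
```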

  6. Recruiting hard-to-reach United States population sub-groups via adaptations of snowball sampling strategy

    Science.gov (United States)

    Sadler, Georgia Robins; Lee, Hau-Chen; Seung-Hwan Lim, Rod; Fullerton, Judith

    2011-01-01

    Nurse researchers and educators often engage in outreach to narrowly defined populations. This article offers examples of how variations on the snowball sampling recruitment strategy can be applied in the creation of culturally appropriate, community-based information dissemination efforts related to recruitment to health education programs and research studies. Examples from the primary author’s program of research are provided to demonstrate how adaptations of snowball sampling can be effectively used in the recruitment of members of traditionally underserved or vulnerable populations. The adaptation of snowball sampling techniques, as described in this article, helped the authors to gain access to each of the more vulnerable population groups of interest. The use of culturally sensitive recruitment strategies is both appropriate and effective in enlisting the involvement of members of vulnerable populations. Adaptations of snowball sampling strategies should be considered when recruiting participants for education programs or subjects for research studies when recruitment of a population based sample is not essential. PMID:20727089

  7. Stability Constraints for Robust Model Predictive Control

    Directory of Open Access Journals (Sweden)

    Amanda G. S. Ottoni

    2015-01-01

    Full Text Available This paper proposes an approach for the robust stabilization of systems controlled by MPC strategies. Uncertain SISO linear systems with box-bounded parametric uncertainties are considered. The proposed approach delivers some constraints on the control inputs which impose sufficient conditions for the convergence of the system output. These stability constraints can be included in the set of constraints dealt with by existing MPC design strategies, in this way leading to the “robustification” of the MPC.

  8. Robust Switched Predictive Braking Control for Rollover Prevention in Wheeled Vehicles

    Directory of Open Access Journals (Sweden)

    Martín Antonio Rodríguez Licea

    2014-01-01

    Full Text Available The aim of this paper is to propose a differential braking rollover mitigation strategy for wheeled vehicles. The strategy makes use of a polytopic (piecewise linear) description of the vehicle and includes translational and rotational dynamics, as well as suspension effects. The braking controller is robust, and the system states are predicted to estimate the rollover risk up to a given time horizon. In contrast to existing works, the switched predictive nature of the control allows it to be applied only when a risk of rollover is foreseen, interfering minimally with the driver's actions. The stability of the strategy is analyzed and its robustness is illustrated via numerical simulations using CarSim for a variety of vehicles.

  9. Sampling and analyte enrichment strategies for ambient mass spectrometry.

    Science.gov (United States)

    Li, Xianjiang; Ma, Wen; Li, Hongmei; Ai, Wanpeng; Bai, Yu; Liu, Huwei

    2018-01-01

    Ambient mass spectrometry provides great convenience for fast screening, and has shown promising potential in analytical chemistry. However, its relatively low sensitivity seriously restricts its practical utility in trace compound analysis. In this review, we summarize the sampling and analyte enrichment strategies coupled with nine representative modes of ambient mass spectrometry (desorption electrospray ionization, paper spray ionization, wooden-tip spray ionization, probe electrospray ionization, coated blade spray ionization, direct analysis in real time, desorption corona beam ionization, dielectric barrier discharge ionization, and atmospheric-pressure solids analysis probe) that have dramatically increased the detection sensitivity. We believe that these advances will promote routine use of ambient mass spectrometry. Graphical abstract Scheme of sampling strategies for ambient mass spectrometry.

  10. An Optimal Sample Data Usage Strategy to Minimize Overfitting and Underfitting Effects in Regression Tree Models Based on Remotely-Sensed Data

    Directory of Open Access Journals (Sweden)

    Yingxin Gu

    2016-11-01

    Full Text Available Regression tree models have been widely used for remote sensing-based ecosystem mapping. Improper use of the sample data (model training and testing data) may cause overfitting and underfitting effects in the model. The goal of this study is to develop an optimal sampling data usage strategy for any dataset and identify an appropriate number of rules in the regression tree model that will improve its accuracy and robustness. Landsat 8 data and Moderate-Resolution Imaging Spectroradiometer-scaled Normalized Difference Vegetation Index (NDVI) were used to develop regression tree models. A Python procedure was designed to generate random replications of model parameter options across a range of model development data sizes and rule number constraints. The mean absolute difference (MAD) between the predicted and actual NDVI (scaled NDVI, value from 0–200) and its variability across the different randomized replications were calculated to assess the accuracy and stability of the models. In our case study, a six-rule regression tree model developed from 80% of the sample data had the lowest MAD (MAD_training = 2.5 and MAD_testing = 2.4), which was suggested as the optimal model. This study demonstrates how the training data and rule number selections impact model accuracy and provides important guidance for future remote-sensing-based ecosystem modeling.
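
    The procedure lends itself to a short re-creation on synthetic data (not the Landsat/MODIS dataset): sweep the training fraction and a tree-complexity cap, standing in for the rule-number constraint, and compare training and testing MAD to expose under- and overfitting.

    ```python
    # Hedged sketch of the sample-usage sweep with a scikit-learn tree.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, (2000, 4))
    y = 200 * X[:, 0] * X[:, 1] + rng.normal(0, 5, 2000)   # synthetic "NDVI"

    for frac in (0.5, 0.8):
        for leaves in (2, 6, 50):      # stand-in for the rule-number limit
            Xtr, Xte, ytr, yte = train_test_split(X, y, train_size=frac, random_state=1)
            tree = DecisionTreeRegressor(max_leaf_nodes=leaves, random_state=1).fit(Xtr, ytr)
            mad_tr = np.mean(np.abs(tree.predict(Xtr) - ytr))
            mad_te = np.mean(np.abs(tree.predict(Xte) - yte))
            print(f"train={frac:.0%} leaves={leaves:3d}  "
                  f"MAD_train={mad_tr:5.1f}  MAD_test={mad_te:5.1f}")
    ```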

  11. Observing System Simulation Experiments for the assessment of temperature sampling strategies in the Mediterranean Sea

    Directory of Open Access Journals (Sweden)

    F. Raicich

    2003-01-01

    Full Text Available For the first time in the Mediterranean Sea, various temperature sampling strategies are studied and compared to each other by means of the Observing System Simulation Experiment technique. Their usefulness in the framework of the Mediterranean Forecasting System (MFS) is assessed by quantifying their impact in a Mediterranean General Circulation Model in numerical twin experiments via univariate data assimilation of temperature profiles in summer and winter conditions. Data assimilation is performed by means of the optimal interpolation algorithm implemented in the SOFA (System for Ocean Forecasting and Analysis) code. The sampling strategies studied here include various combinations of eXpendable BathyThermograph (XBT) profiles collected along Volunteer Observing Ship (VOS) tracks, Airborne XBTs (AXBTs) and sea surface temperatures. The actual sampling strategy adopted in the MFS Pilot Project during the Targeted Operational Period (TOP, winter–spring 2000) is also studied. The data impact is quantified by the error reduction relative to the free run. The most effective sampling strategies determine 25–40% error reduction, depending on the season, the geographic area and the depth range. A qualitative relationship can be recognized, in terms of the spread of information from the data positions, between basin circulation features and spatial patterns of the error reduction fields, as a function of different spatial and seasonal characteristics of the dynamics. The largest error reductions are observed when samplings are characterized by extensive spatial coverage, as in the cases of AXBTs and the combination of XBTs and surface temperatures. The sampling strategy adopted during the TOP is characterized by little impact, as a consequence of a sampling frequency that is too low. Key words. Oceanography: general (marginal and semi-enclosed seas; numerical modelling)
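
    As a small aside on the impact metric: one common formulation, used here as an assumption rather than the paper's exact definition, is ER = 1 − RMSE(assimilation run)/RMSE(free run). The sketch below computes it on synthetic series.

    ```python
    # Error reduction of an assimilation run relative to the free run
    # (illustrative synthetic data, not the OSSE twin-experiment output).
    import numpy as np

    def error_reduction(truth, free_run, assim_run):
        rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
        return 1.0 - rmse(assim_run, truth) / rmse(free_run, truth)

    rng = np.random.default_rng(3)
    truth = np.sin(np.linspace(0, 10, 500))
    free = truth + rng.normal(0, 0.5, 500)     # unconstrained model drift
    assim = truth + rng.normal(0, 0.35, 500)   # after assimilating profiles
    print(f"error reduction: {error_reduction(truth, free, assim):.0%}")
    ```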

  13. A robust and fast method of sampling and analysis of delta13C of dissolved inorganic carbon in ground waters.

    Science.gov (United States)

    Spötl, Christoph

    2005-09-01

    The stable carbon isotopic composition of dissolved inorganic carbon (delta13C(DIC)) is traditionally determined using either direct precipitation or gas evolution methods in conjunction with offline gas preparation and measurement in a dual-inlet isotope ratio mass spectrometer. A gas evolution method based on continuous-flow technology is described here, which is easy to use and robust. Water samples (100-1500 μl depending on the carbonate alkalinity) are injected into He-filled autosampler vials in the field and analysed on an automated continuous-flow gas preparation system interfaced to an isotope ratio mass spectrometer. Sample analysis time including online preparation is 10 min and overall precision is 0.1 per thousand. This method is thus fast and can easily be automated for handling large sample batches.

  14. Automatic Offline Formulation of Robust Model Predictive Control Based on Linear Matrix Inequalities Method

    Directory of Open Access Journals (Sweden)

    Longge Zhang

    2013-01-01

    Full Text Available Two automatic robust model predictive control strategies are presented for uncertain polytopic linear plants with input and output constraints. First, a sequence of nested, geometrically proportioned, asymptotically stable ellipsoids and the corresponding controllers is constructed offline. In the first strategy, the feedback controllers are then automatically selected online in a receding-horizon fashion. Finally, a modified automatic offline robust MPC approach is constructed to improve the closed-loop system's performance. The newly proposed strategies not only reduce conservatism but also decrease the online computational burden. Numerical examples are given to illustrate their effectiveness.

  15. [Study of spatial stratified sampling strategy of Oncomelania hupensis snail survey based on plant abundance].

    Science.gov (United States)

    Xun-Ping, W; An, Z

    2017-07-27

    Objective To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, so as to improve the precision, efficiency and economy of the snail survey. Methods A snail sampling strategy (Spatial Sampling Scenario of Oncomelania based on Plant Abundance, SOPA), which takes plant abundance as an auxiliary variable, was explored in an experimental study on a 50 m × 50 m plot in a marshland in the Poyang Lake region. Firstly, the push-broom survey data were stratified into 5 layers by the plant abundance data; then, the required numbers of optimal sampling points for each layer were calculated through the Hammond–McCullagh equation; thirdly, every sample point was pinpointed in line with the Multiple Directional Interpolation (MDI) placement scheme; and finally, a comparison study was performed among the outcomes of the spatial random sampling strategy, the traditional systematic sampling method, the spatial stratified sampling method, Sandwich spatial sampling and inference, and SOPA. Results The method proposed in this study (SOPA) had the minimal absolute error of 0.2138; the traditional systematic sampling method had the largest estimate, with an absolute error of 0.9244. Conclusion The snail sampling strategy (SOPA) proposed in this study obtains higher estimation accuracy than the other four methods.
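
    A hedged numeric sketch of the stratification step: the plot is divided into abundance layers and sampling points are allocated across them. Since the Hammond–McCullagh equation is specific to the paper, classical Neyman allocation stands in here, with made-up layer sizes and variability.

    ```python
    # Allocate sampling points to abundance strata (Neyman allocation as a
    # stand-in; all numbers are hypothetical).
    import numpy as np

    strata_sd = np.array([0.2, 0.5, 0.9, 1.4, 2.0])    # snail-density SD per layer
    strata_size = np.array([400, 300, 150, 100, 50])   # survey cells per layer
    n_total = 60                                       # total points to allocate
    weights = strata_size * strata_sd
    n_per_stratum = np.maximum(1, np.round(n_total * weights / weights.sum()))
    # rounding can shift the total by a point or two
    print({f"layer {i}": int(n) for i, n in enumerate(n_per_stratum, start=1)})
    ```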

  16. Return Predictability, Model Uncertainty, and Robust Investment

    DEFF Research Database (Denmark)

    Lukas, Manuel

    Stock return predictability is subject to great uncertainty. In this paper we use the model confidence set approach to quantify uncertainty about expected utility from investment, accounting for potential return predictability. For monthly US data and six representative return prediction models, we find that confidence sets are very wide, change significantly with the predictor variables, and frequently include expected utilities for which the investor prefers not to invest. The latter motivates a robust investment strategy maximizing the minimal element of the confidence set. The robust investor allocates a much lower share of wealth to stocks compared to a standard investor.
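
    The maximin rule can be sketched in a few lines, with the model confidence set reduced to three hypothetical model-implied expected returns and a quadratic mean-variance utility standing in for the paper's expected utility.

    ```python
    # Robust (maximin) stock allocation over a toy set of candidate models.
    import numpy as np

    weights = np.linspace(0, 1, 101)           # share of wealth in stocks
    mu = np.array([0.06, 0.02, -0.01])         # model-implied excess returns
    sigma2, gamma, rf = 0.04, 3.0, 0.01
    # mean-variance expected utility per model (rows) and weight (columns)
    EU = rf + np.outer(mu, weights) - 0.5 * gamma * sigma2 * weights ** 2
    robust_w = weights[np.argmax(EU.min(axis=0))]
    print(f"robust stock share: {robust_w:.2f} vs. "
          f"best-model share: {weights[np.argmax(EU[0])]:.2f}")
    ```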

  17. UAV Robust Strategy Control Based on MAS

    Directory of Open Access Journals (Sweden)

    Jian Han

    2014-01-01

    Full Text Available A novel multiagent system (MAS) has been proposed to integrate individual UAVs (unmanned aerial vehicles) to form a UAV team which can accomplish complex missions with better efficiency and effect. The MAS-based UAV team control is better able to cope with dynamic situations and enhances the performance of any single UAV. In this paper, the proposed MAS combines reactive and deliberative abilities into a proactive and autonomous hybrid system which can solve missions involving coordinated flight and cooperative operation. The MAS uses the BDI model to support its logical perception and to classify the different missions; the missions are then allocated by an auction mechanism after analyzing dynamic parameters. A Prim potential algorithm, a particle swarm algorithm, and a reallocation mechanism are proposed to realize rational decomposition and optimal allocation in order to reach the maximum profit. In simulation, the MAS has been shown to promote the success ratio and raise the robustness, while realizing feasible coordinated flight and optimal cooperative missions.

  18. A fully robust PARAFAC method for analyzing fluorescence data

    DEFF Research Database (Denmark)

    Engelen, Sanne; Frosch, Stina; Jørgensen, Bo

    2009-01-01

    and Rayleigh scatter. Recently, a robust PARAFAC method that circumvents the harmful effects of outlying samples has been developed. For removing the scatter effects on the final PARAFAC model, different techniques exist. More recently, an automated scatter identification tool has been constructed. However, there still exists no robust method for handling fluorescence data encountering both outlying EEM landscapes and scatter. In this paper, we present an iterative algorithm where the robust PARAFAC method and the scatter identification tool are alternately performed. A fully automated robust PARAFAC method

  19. A sampling strategy for estimating plot average annual fluxes of chemical elements from forest soils

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.; Vries, de W.

    2010-01-01

    A sampling strategy for estimating spatially averaged annual element leaching fluxes from forest soils is presented and tested in three Dutch forest monitoring plots. In this method sampling locations and times (days) are selected by probability sampling. Sampling locations were selected by

  20. Sample preparation composite and replicate strategy for assay of solid oral drug products.

    Science.gov (United States)

    Harrington, Brent; Nickerson, Beverly; Guo, Michele Xuemei; Barber, Marc; Giamalva, David; Lee, Carlos; Scrivens, Garry

    2014-12-16

    In pharmaceutical analysis, the results of drug product assay testing are used to make decisions regarding the quality, efficacy, and stability of the drug product. In order to make sound risk-based decisions concerning drug product potency, an understanding of the uncertainty of the reportable assay value is required. Utilizing the most restrictive criteria in current regulatory documentation, a maximum variability attributed to method repeatability is defined for a drug product potency assay. A sampling strategy that reduces the repeatability component of the assay variability below this predefined maximum is demonstrated. The sampling strategy consists of determining the number of dosage units (k) to be prepared in a composite sample, of which there may be a number of equivalent replicate (r) sample preparations. The variability, as measured by the standard error (SE), of a potency assay consists of several sources such as sample preparation and dosage unit variability. A sampling scheme that increases the number of sample preparations (r) and/or the number of dosage units (k) per sample preparation will reduce the assay variability and thus decrease the uncertainty around decisions made concerning the potency of the drug product. A maximum allowable repeatability component of the standard error (SE) for the potency assay is derived using material in current regulatory documents. A table of solutions for the number of dosage units per sample preparation (k) and the number of replicate sample preparations (r) is presented for any ratio of sample preparation and dosage unit variability.
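
    Under the usual variance decomposition, the repeatability standard error for r replicate preparations, each compositing k dosage units, is SE = sqrt(s_prep²/r + s_unit²/(k·r)). The sketch below tabulates SE for candidate (k, r) pairs against a hypothetical maximum; all variance numbers are made up, not the paper's data.

    ```python
    # Tabulate the repeatability SE for composite/replicate schemes.
    import itertools
    import math

    s2_prep, s2_unit, se_max = 0.30, 1.20, 0.55    # hypothetical variances/limit
    for k, r in itertools.product((1, 3, 5, 10), (1, 2, 3)):
        se = math.sqrt(s2_prep / r + s2_unit / (k * r))
        flag = "ok" if se <= se_max else "--"
        print(f"k={k:2d} r={r}  SE={se:.3f}  {flag}")
    ```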

  1. Development and Demonstration of a Method to Evaluate Bio-Sampling Strategies Using Building Simulation and Sample Planning Software

    OpenAIRE

    Dols, W. Stuart; Persily, Andrew K.; Morrow, Jayne B.; Matzke, Brett D.; Sego, Landon H.; Nuffer, Lisa L.; Pulsipher, Brent A.

    2010-01-01

    In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, has simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by vir...

  2. An Intercompany Perspective on Biopharmaceutical Drug Product Robustness Studies.

    Science.gov (United States)

    Morar-Mitrica, Sorina; Adams, Monica L; Crotts, George; Wurth, Christine; Ihnat, Peter M; Tabish, Tanvir; Antochshuk, Valentyn; DiLuzio, Willow; Dix, Daniel B; Fernandez, Jason E; Gupta, Kapil; Fleming, Michael S; He, Bing; Kranz, James K; Liu, Dingjiang; Narasimhan, Chakravarthy; Routhier, Eric; Taylor, Katherine D; Truong, Nobel; Stokes, Elaine S E

    2018-02-01

    The Biophorum Development Group (BPDG) is an industry-wide consortium enabling networking and sharing of best practices for the development of biopharmaceuticals. To gain a better understanding of current industry approaches for establishing biopharmaceutical drug product (DP) robustness, the BPDG-Formulation Point Share group conducted an intercompany collaboration exercise, which included a benchmarking survey and extensive group discussions around the scope, design, and execution of robustness studies. The results of this industry collaboration revealed several key common themes: (1) overall DP robustness is defined by both the formulation and the manufacturing process robustness; (2) robustness integrates the principles of quality by design (QbD); (3) DP robustness is an important factor in setting critical quality attribute control strategies and commercial specifications; (4) most companies employ robustness studies, along with prior knowledge, risk assessments, and statistics, to develop the DP design space; (5) studies are tailored to commercial development needs and the practices of each company. Three case studies further illustrate how a robustness study design for a biopharmaceutical DP balances experimental complexity, statistical power, scientific understanding, and risk assessment to provide the desired product and process knowledge. The BPDG-Formulation Point Share discusses identified industry challenges with regard to biopharmaceutical DP robustness and presents some recommendations for best practices. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  3. Robust estimation for partially linear models with large-dimensional covariates.

    Science.gov (United States)

    Zhu, LiPing; Li, RunZe; Cui, HengJian

    2013-10-01

    We are concerned with robust estimation procedures to estimate the parameters in partially linear models with large-dimensional covariates. To enhance the interpretability, we suggest implementing a nonconcave regularization method in the robust estimation procedure to select important covariates from the linear component. We establish the consistency for both the linear and the nonlinear components when the covariate dimension diverges at the rate of [Formula: see text], where n is the sample size. We show that the robust estimate of the linear component performs asymptotically as well as its oracle counterpart, which assumes the baseline function and the unimportant covariates were known a priori. With a consistent estimator of the linear component, we estimate the nonparametric component by a robust local linear regression. It is proved that the robust estimate of the nonlinear component performs asymptotically as well as if the linear component were known in advance. Comprehensive simulation studies are carried out and an application is presented to examine the finite-sample performance of the proposed procedures.

  4. Design and implementation of robust controllers for a gait trainer.

    Science.gov (United States)

    Wang, F C; Yu, C H; Chou, T Y

    2009-08-01

    This paper applies robust algorithms to control an active gait trainer for children with walking disabilities. Compared with traditional rehabilitation procedures, in which two or three trainers are required to assist the patient, a motor-driven mechanism was constructed to improve the efficiency of the procedures. First, a six-bar mechanism was designed and constructed to mimic the trajectory of children's ankles in walking. Second, system identification techniques were applied to obtain system transfer functions at different operating points by experiments. Third, robust control algorithms were used to design H∞ robust controllers for the system. Finally, the designed controllers were implemented to experimentally verify the system performance. From the results, the proposed robust control strategies are shown to be effective.

  5. Robustness Metrics: Consolidating the multiple approaches to quantify Robustness

    DEFF Research Database (Denmark)

    Göhler, Simon Moritz; Eifler, Tobias; Howard, Thomas J.

    2016-01-01

    robustness metrics; 3) Functional expectancy and dispersion robustness metrics; and 4) Probability of conformance robustness metrics. The goal was to give a comprehensive overview of robustness metrics and guidance to scholars and practitioners to understand the different types of robustness metrics...

  6. Direct and long-term detection of gene doping in conventional blood samples.

    Science.gov (United States)

    Beiter, T; Zimmermann, M; Fragasso, A; Hudemann, J; Niess, A M; Bitzer, M; Lauer, U M; Simon, P

    2011-03-01

    The misuse of somatic gene therapy for the purpose of enhancing athletic performance is perceived as a coming threat to the world of sports and categorized as 'gene doping'. This article describes a direct detection approach for gene doping that gives a clear yes-or-no answer based on the presence or absence of transgenic DNA in peripheral blood samples. By exploiting a priming strategy to specifically amplify intronless DNA sequences, we developed PCR protocols allowing the detection of very small amounts of transgenic DNA in genomic DNA samples to screen for six prime candidate genes. Our detection strategy was verified in a mouse model, giving positive signals from minute amounts (20 μl) of blood samples for up to 56 days following intramuscular adeno-associated virus-mediated gene transfer, one of the most likely candidate vector systems to be misused for gene doping. To make our detection strategy amenable for routine testing, we implemented a robust sample preparation and processing protocol that allows cost-efficient analysis of small human blood volumes (200 μl) with high specificity and reproducibility. The practicability and reliability of our detection strategy was validated by a screening approach including 327 blood samples taken from professional and recreational athletes under field conditions.

  7. Robust event-triggered MPC with guaranteed asymptotic bound and average sampling rate

    NARCIS (Netherlands)

    Brunner, F.D.; Heemels, W.P.M.H.; Allgower, F.

    2017-01-01

    We propose a robust event-triggered model predictive control (MPC) scheme for linear time-invariant discrete-time systems subject to bounded additive stochastic disturbances and hard constraints on the input and state. For given probability distributions of the disturbances acting on the system, we

  8. Sampling and analysis strategies to support waste form qualification

    International Nuclear Information System (INIS)

    Westsik, J.H. Jr.; Pulsipher, B.A.; Eggett, D.L.; Kuhn, W.L.

    1989-04-01

    As part of the waste acceptance process, waste form producers will be required to (1) demonstrate that their glass waste form will meet minimum specifications, (2) show that the process can be controlled to consistently produce an acceptable waste form, and (3) provide documentation that the waste form produced meets specifications. Key to the success of these endeavors is adequate sampling and chemical and radiochemical analyses of the waste streams from the waste tanks through the process to the final glass product. This paper suggests sampling and analysis strategies for meeting specific statistical objectives of (1) detection of compositions outside specification limits, (2) prediction of final glass product composition, and (3) estimation of composition in process vessels for both reporting and guiding succeeding process steps. 2 refs., 1 fig., 3 tabs

  9. Evolution of networks for body plan patterning; interplay of modularity, robustness and evolvability.

    Directory of Open Access Journals (Sweden)

    Kirsten H Ten Tusscher

    2011-10-01

    Full Text Available A major goal of evolutionary developmental biology (evo-devo) is to understand how multicellular body plans of increasing complexity have evolved, and how the corresponding developmental programs are genetically encoded. It has been repeatedly argued that key to the evolution of increased body plan complexity is the modularity of the underlying developmental gene regulatory networks (GRNs). This modularity is considered essential for network robustness and evolvability. In our opinion, these ideas, appealing as they may sound, have not been sufficiently tested. Here we use computer simulations to study the evolution of GRNs underlying body plan patterning. We select for body plan segmentation and differentiation, as these are considered to be major innovations in metazoan evolution. To allow modular networks to evolve, we independently select for segmentation and differentiation. We study both the occurrence and relation of robustness, evolvability and modularity of evolved networks. Interestingly, we observed two distinct evolutionary strategies to evolve a segmented, differentiated body plan. In the first strategy, first segments and then differentiation domains evolve (SF strategy). In the second scenario segments and domains evolve simultaneously (SS strategy). We demonstrate that under indirect selection for robustness the SF strategy becomes dominant. In addition, as a byproduct of this larger robustness, the SF strategy is also more evolvable. Finally, using a combined functional and architectural approach, we determine network modularity. We find that while SS networks generate segments and domains in an integrated manner, SF networks use largely independent modules to produce segments and domains. Surprisingly, we find that widely used, purely architectural methods for determining network modularity completely fail to establish this higher modularity of SF networks. Finally, we observe that, as a free side effect of evolving segmentation

  10. GFC-Robust Risk Management Under the Basel Accord Using Extreme Value Methodologies

    NARCIS (Netherlands)

    P.A. Santos (Paulo Araújo); J.A. Jiménez-Martín (Juan-Ángel); M.J. McAleer (Michael); T. Pérez-Amaral (Teodosio)

    2011-01-01

    In McAleer et al. (2010b), a robust risk management strategy to the Global Financial Crisis (GFC) was proposed under the Basel II Accord by selecting a Value-at-Risk (VaR) forecast that combines the forecasts of different VaR models. The robust forecast was based on the median of the
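
    The combination step is simple enough to show directly: the robust forecast is the median across the candidate models' VaR forecasts. The model names and numbers below are placeholders, not the paper's estimates.

    ```python
    # Median-combined VaR forecast over a set of candidate models.
    import numpy as np

    var_forecasts = {                  # hypothetical one-day 99% VaR forecasts (%)
        "model_A": -2.1, "model_B": -2.6, "model_C": -2.4,
        "model_D": -1.8, "model_E": -1.9, "model_F": -2.2,
    }
    robust_var = np.median(list(var_forecasts.values()))
    print(f"median-based robust VaR: {robust_var:.2f}%")
    ```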

  11. A cache-friendly sampling strategy for texture-based volume rendering on GPU

    Directory of Open Access Journals (Sweden)

    Junpeng Wang

    2017-06-01

    Full Text Available Texture-based volume rendering is a memory-intensive algorithm. Its performance relies heavily on the performance of the texture cache. However, most existing texture-based volume rendering methods blindly map computational resources to texture memory and result in incoherent memory access patterns, causing low cache hit rates in certain cases. The distance between samples taken by threads of an atomic scheduling unit (e.g., a warp of 32 threads in CUDA) of the GPU is a crucial factor that affects the texture cache performance. Based on this fact, we present a new sampling strategy, called Warp Marching, for the ray-casting algorithm of texture-based volume rendering. The effects of different sample organizations and different thread-pixel mappings in the ray-casting algorithm are thoroughly analyzed. Also, a pipelined color-blending approach is introduced, and the power of warp-level GPU operations is leveraged to improve the efficiency of parallel executions on the GPU. In addition, the rendering performance of the Warp Marching is view-independent, and it outperforms existing empty-space-skipping techniques in scenarios that need to render large dynamic volumes in a low-resolution image. Through a series of micro-benchmarking and real-life data experiments, we rigorously analyze our sampling strategies and demonstrate significant performance enhancements over existing sampling methods.

  12. Strategy for thermo-gravimetric analysis of K East fuel samples

    International Nuclear Information System (INIS)

    Lawrence, L.A.

    1997-01-01

    A strategy was developed for the Thermo-Gravimetric Analysis (TGA) testing of K East fuel samples for oxidation rate determinations. Tests will first establish whether there are any differences in dry air oxidation between the K West and K East fuel. These tests will be followed by moist inert gas oxidation rate measurements. The final series of tests will consider pure water vapor, i.e., steam.

  13. A census-weighted, spatially-stratified household sampling strategy for urban malaria epidemiology

    Directory of Open Access Journals (Sweden)

    Slutsker Laurence

    2008-02-01

    Full Text Available Abstract Background Urban malaria is likely to become increasingly important as a consequence of the growing proportion of Africans living in cities. A novel sampling strategy was developed for urban areas to generate a sample simultaneously representative of population and inhabited environments. Such a strategy should facilitate analysis of important epidemiological relationships in this ecological context. Methods Census maps and summary data for Kisumu, Kenya, were used to create a pseudo-sampling frame using the geographic coordinates of census-sampled structures. For every enumeration area (EA) designated as urban by the census (n = 535), a sample of structures equal to one-tenth the number of households was selected. In EAs designated as rural (n = 32), a geographically random sample totalling one-tenth the number of households was selected from a grid of points at 100 m intervals. The selected samples were cross-referenced to a geographic information system, and coordinates transferred to handheld global positioning units. Interviewers found the closest eligible household to the sampling point and interviewed the caregiver of a child aged Results 4,336 interviews were completed in 473 of the 567 study area EAs from June 2002 through February 2003. EAs without completed interviews were randomly distributed, and non-response was approximately 2%. Mean distance from the assigned sampling point to the completed interview was 74.6 m, and was significantly less in urban than rural EAs, even when controlling for number of households. The selected sample had significantly more children and females of childbearing age than the general population, and fewer older individuals. Conclusion This method selected a sample that was simultaneously population-representative and inclusive of important environmental variation. The use of a pseudo-sampling frame and pre-programmed handheld GPS units is more efficient and may yield a more complete sample than

  14. Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions

    Science.gov (United States)

    Ilgen, Marc R.

    This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value

  15. Improving snow density estimation for mapping SWE with Lidar snow depth: assessment of uncertainty in modeled density and field sampling strategies in NASA SnowEx

    Science.gov (United States)

    Raleigh, M. S.; Smyth, E.; Small, E. E.

    2017-12-01

    The spatial distribution of snow water equivalent (SWE) is not sufficiently monitored with either remotely sensed or ground-based observations for water resources management. Recent applications of airborne Lidar have yielded basin-wide mapping of SWE when combined with a snow density model. However, in the absence of snow density observations, the uncertainty in these SWE maps is dominated by uncertainty in modeled snow density rather than in Lidar measurement of snow depth. Available observations tend to have a bias in physiographic regime (e.g., flat open areas) and are often insufficient in number to support testing of models across a range of conditions. Thus, there is a need for targeted sampling strategies and controlled model experiments to understand where and why different snow density models diverge. This will enable identification of robust model structures that represent dominant processes controlling snow densification, in support of basin-scale estimation of SWE with remotely-sensed snow depth datasets. The NASA SnowEx mission is a unique opportunity to evaluate sampling strategies of snow density and to quantify and reduce uncertainty in modeled snow density. In this presentation, we present initial field data analyses and modeling results over the Colorado SnowEx domain in the 2016-2017 winter campaign. We detail a framework for spatially mapping the uncertainty in snowpack density, as represented across multiple models. Leveraging the modular SUMMA model, we construct a series of physically-based models to assess systematically the importance of specific process representations to snow density estimates. We will show how models and snow pit observations characterize snow density variations with forest cover in the SnowEx domains. Finally, we will use the spatial maps of density uncertainty to evaluate the selected locations of snow pits, thereby assessing the adequacy of the sampling strategy for targeting uncertainty in modeled snow density.

  16. Cascade-robustness optimization of coupling preference in interconnected networks

    International Nuclear Information System (INIS)

    Zhang, Xue-Jun; Xu, Guo-Qiang; Zhu, Yan-Bo; Xia, Yong-Xiang

    2016-01-01

    Highlights: • A specific memetic algorithm was proposed to optimize coupling links. • A small toy model was investigated to examine the underlying mechanism. • The MA-optimized strategy exhibits a moderate assortative pattern. • A novel coupling coefficient index was proposed to quantify coupling preference. - Abstract: Recently, the robustness of interconnected networks has attracted extensive attention; one line of research investigates the influence of coupling preference. In this paper, the memetic algorithm (MA) is employed to optimize the coupling links of interconnected networks. Afterwards, a comparison is made between the MA-optimized coupling strategy and traditional assortative, disassortative and random coupling preferences. It is found that the MA-optimized coupling strategy with a moderate assortative value shows an outstanding performance against cascading failures on both synthetic scale-free interconnected networks and real-world networks. We then provide an explanation for this phenomenon from a microscopic point of view and propose a coupling coefficient index to quantify the coupling preference. Our work is helpful for the design of robust interconnected networks.

  17. Doing Good Again? A Multilevel Institutional Perspective on Corporate Environmental Responsibility and Philanthropic Strategy

    Science.gov (United States)

    Liu, Wei; Wei, Qiao; Huang, Song-Qin

    2017-01-01

    This study investigates the relationship between corporate environmental responsibility and corporate philanthropy. Using a sample of Chinese listed firms from 2008 to 2013, this paper examines the role of corporate environmental responsibility in corporate philanthropy and the moderating influence of the institutional environment using multilevel analysis. The results show that corporate eco-friendly events are positively associated with corporate philanthropic strategy to a significant degree. Provincial-level government intervention positively moderates the relationship between eco-friendly events and corporate philanthropy, while government corruption negatively moderates it. All these results hold under robustness checks. These findings provide a new perspective on corporate philanthropic strategy as a means to obtain critical resources from the government in order to compensate for losses incurred through environmental responsibility. Moreover, the institutional environment is shown here to play an important role in corporate philanthropic strategy. PMID:29064451

  20. Toward a bioethical framework for antibiotic use, antimicrobial resistance and for empirically designing ethically robust strategies to protect human health: a research protocol.

    Science.gov (United States)

    Hernández-Marrero, Pablo; Martins Pereira, Sandra; de Sá Brandão, Patrícia Joana; Araújo, Joana; Carvalho, Ana Sofia

    2017-12-01

    Introduction Antimicrobial resistance (AMR) is a challenging global and public health issue, raising bioethical challenges, considerations and strategies. Objectives This research protocol presents a conceptual model leading to formulating an empirically based bioethics framework for antibiotic use, AMR and designing ethically robust strategies to protect human health. Methods Mixed methods research will be used and operationalized into five substudies. The bioethical framework will encompass and integrate two theoretical models: global bioethics and ethical decision-making. Results Being a study protocol, this article reports on planned and ongoing research. Conclusions Based on data collection, future findings and using a comprehensive, integrative, evidence-based approach, a step-by-step bioethical framework will be developed for (i) responsible use of antibiotics in healthcare and (ii) design of strategies to decrease AMR. This will entail the analysis and interpretation of approaches from several bioethical theories, including deontological and consequentialist approaches, and the implications of uncertainty to these approaches.

  1. Existential risks: exploring a robust risk reduction strategy.

    Science.gov (United States)

    Jebari, Karim

    2015-06-01

    A small but growing number of studies have aimed to understand, assess and reduce existential risks, or risks that threaten the continued existence of mankind. However, most attention has been focused on known and tangible risks. This paper proposes a heuristic for reducing the risk of black swan extinction events. These events are, as the name suggests, stochastic and unforeseen when they happen. Decision theory based on a fixed model of possible outcomes cannot properly deal with this kind of event. Neither can probabilistic risk analysis. This paper will argue that the approach referred to as engineering safety could be applied to reducing the risk from black swan extinction events. It will also propose a conceptual sketch of how such a strategy may be implemented: isolated, self-sufficient, and continuously manned underground refuges. Some characteristics of such refuges are also described, in particular the psychosocial aspects. Furthermore, it is argued that this implementation of the engineering safety strategy, i.e., safety barriers, would be effective and plausible and could reduce the risk of an extinction event in a wide range of possible (known and unknown) scenarios. Considering the staggering opportunity cost of an existential catastrophe, such strategies ought to be explored more vigorously.

  2. Robust Utility Maximization Under Convex Portfolio Constraints

    International Nuclear Information System (INIS)

    Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed

    2015-01-01

    We study a robust maximization problem of terminal wealth and consumption under convex constraints on the portfolio. We state the existence and uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.

  3. Engineering Robustness of Microbial Cell Factories.

    Science.gov (United States)

    Gong, Zhiwei; Nielsen, Jens; Zhou, Yongjin J

    2017-10-01

    Metabolic engineering and synthetic biology offer great prospects in developing microbial cell factories capable of converting renewable feedstocks into fuels, chemicals, food ingredients, and pharmaceuticals. However, prohibitively low production rates and mass concentrations remain the major hurdles in industrial processes even when the biosynthetic pathways are comprehensively optimized. These limitations are caused by a variety of factors unfavorable to host cell survival, such as harsh industrial conditions, fermentation inhibitors from biomass hydrolysates, and toxic compounds including metabolic intermediates and valuable target products. Therefore, engineered microbes with robust phenotypes are essential for achieving higher yield and productivity. In this review, recent advances in engineering the robustness and tolerance of cell factories are described to cope with these issues, and novel strategies with great potential to enhance the robustness of cell factories are briefly introduced, including metabolic pathway balancing, transporter engineering, and adaptive laboratory evolution. This review also highlights the integration of advanced systems and synthetic biology principles toward engineering the harmony of overall cell function, more than specific pathways or enzymes. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Robust power system frequency control

    CERN Document Server

    Bevrani, Hassan

    2014-01-01

    This updated edition of the industry standard reference on power system frequency control provides practical, systematic and flexible algorithms for regulating load frequency, offering new solutions to the technical challenges introduced by the escalating role of distributed generation and renewable energy sources in smart electric grids. The author emphasizes the physical constraints and practical engineering issues related to frequency in a deregulated environment, while fostering a conceptual understanding of frequency regulation and robust control techniques. The resulting control strategi

  5. The Effect of Self-Explaining on Robust Learning

    Science.gov (United States)

    Hausmann, Robert G. M.; VanLehn, Kurt

    2010-01-01

    Self-explaining is a domain-independent learning strategy that generally leads to a robust understanding of the domain material. However, there are two potential explanations for its effectiveness. First, self-explanation generates additional "content" that does not exist in the instructional materials. Second, when compared to…

  6. Robust Frequency and Voltage Stability Control Strategy for Standalone AC/DC Hybrid Microgrid

    Directory of Open Access Journals (Sweden)

    Furqan Asghar

    2017-05-01

    Full Text Available The microgrid (MG) concept is attracting considerable attention as a solution to energy deficiencies, especially in remote areas, but the intermittent nature of renewable sources and varying loads cause many control problems and thereby affect the quality of power within a microgrid operating in standalone mode. This might cause large frequency and voltage deviations in the system due to unpredictable output power fluctuations. Furthermore, without any main grid support, it is more complex to control and manage the system. In the past, droop control and various other coordination control strategies have been presented to stabilize the microgrid frequency and voltages, but in order to utilize the available resources up to their maximum capacity in a positive way, new and robust control mechanisms are required. In this paper, a standalone microgrid is presented, which integrates renewable energy-based distributed generations and local loads. A fuzzy logic-based intelligent control technique is proposed to maintain the frequency and DC (direct current) link voltage stability under sudden changes in load or generation power. Also, from a frequency control perspective, a battery energy storage system (BESS) is suggested as a replacement for a synchronous generator to stabilize the nominal system frequency, as a synchronous generator is unable to operate at its maximum efficiency while being controlled for stabilization purposes. Likewise, a supercapacitor (SC) and the BESS are used to stabilize DC bus voltages even though maximum possible energy is being extracted from renewable generated sources using maximum power point tracking. This newly proposed control method proves to be effective by reducing transient time, minimizing the frequency deviations, maintaining voltages even though maximum power point tracking is working and preventing generators from exceeding their power ratings during disturbances. However, due to the BESS limited capacity, load switching

  7. Robust input design for nonlinear dynamic modeling of AUV.

    Science.gov (United States)

    Nouri, Nowrouz Mohammad; Valadi, Mehrdad

    2017-09-01

    Input design has a dominant role in developing the dynamic model of autonomous underwater vehicles (AUVs) through system identification. Optimal input design is the process of generating informative inputs that can be used to build a good-quality dynamic model of an AUV. In an optimal input design problem, the desired input signal depends on the unknown system which is to be identified. In this paper, an input design approach which is robust to uncertainties in the model parameters is used. The Bayesian robust design strategy is applied to design input signals for dynamic modeling of AUVs. The employed approach can design multiple inputs and apply constraints on an AUV system's inputs and outputs. Particle swarm optimization (PSO) is employed to solve the constrained robust optimization problem. The presented algorithm is used for designing the input signals for an AUV, and the estimate obtained by robust input design is compared with that of the optimal input design. According to the results, the proposed input design can satisfy both robustness of constraints and optimality. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  8. Enhancing product robustness in reliability-based design optimization

    International Nuclear Information System (INIS)

    Zhuang, Xiaotian; Pan, Rong; Du, Xiaoping

    2015-01-01

    Different types of uncertainties need to be addressed in a product design optimization process. In this paper, the uncertainties in both product design variables and environmental noise variables are considered. Reliability-based design optimization (RBDO) is integrated with robust product design (RPD) to concurrently reduce the production cost and the long-term operation cost, including quality loss, in the process of product design. This problem leads to a multi-objective optimization with probabilistic constraints. In addition, the model uncertainties associated with a surrogate model that is derived from numerical computation methods, such as finite element analysis, are addressed. A hierarchical experimental design approach, augmented by a sequential sampling strategy, is proposed to construct the response surface of the product performance function for finding optimal design solutions. The proposed method is demonstrated through an engineering example. - Highlights: • A unifying framework for integrating RBDO and RPD is proposed. • Implicit product performance function is considered. • The design problem is solved by sequential optimization and reliability assessment. • A sequential sampling technique is developed for improving design optimization. • The comparison with traditional RBDO is provided

  9. Tetrahedral meshing via maximal Poisson-disk sampling

    KAUST Repository

    Guo, Jianwei

    2016-02-15

    In this paper, we propose a simple yet effective method to generate 3D-conforming tetrahedral meshes from closed 2-manifold surfaces. Our approach is inspired by recent work on maximal Poisson-disk sampling (MPS), which can generate well-distributed point sets in arbitrary domains. We first perform MPS on the boundary of the input domain, we then sample the interior of the domain, and we finally extract the tetrahedral mesh from the samples by using 3D Delaunay or regular triangulation for uniform or adaptive sampling, respectively. We also propose an efficient optimization strategy to protect the domain boundaries and to remove slivers to improve the meshing quality. We present various experimental results to illustrate the efficiency and the robustness of our proposed approach. We demonstrate that the performance and quality (e.g., minimal dihedral angle) of our approach are superior to current state-of-the-art optimization-based approaches.
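
    For intuition about the Poisson-disk property the method builds on, here is a plain 2D dart-throwing sketch: it enforces the minimum-distance constraint between accepted samples but deliberately omits the machinery (and the 3D meshing pipeline) that makes the paper's sampling maximal.

    ```python
    # Naive dart throwing: accept a random point only if it keeps distance
    # at least r from every previously accepted point in the unit square.
    import numpy as np

    def poisson_disk_darts(r=0.05, n_darts=5000, seed=7):
        rng = np.random.default_rng(seed)
        pts = []
        for p in rng.uniform(0.0, 1.0, (n_darts, 2)):
            if all(np.hypot(*(p - q)) >= r for q in pts):
                pts.append(p)
        return np.array(pts)

    samples = poisson_disk_darts()
    print(f"accepted {len(samples)} well-separated samples")
    ```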

  10. Robust and efficient direct multiplex amplification method for large-scale DNA detection of blood samples on FTA cards

    International Nuclear Information System (INIS)

    Jiang Bowei; Xiang Fawei; Zhao Xingchun; Wang Lihua; Fan Chunhai

    2013-01-01

    Deoxyribonucleic acid (DNA) damage arising from radiation has become widespread along with the development of nuclear weapons and the clinically wide application of computed tomography (CT) scans and nuclear medicine. All ionizing radiations (X-rays, γ-rays, alpha particles, etc.) and ultraviolet (UV) radiation lead to DNA damage. Polymerase chain reaction (PCR) is one of the most widely used techniques for detecting DNA damage, as the amplification stops at the site of the damage. Improvements to enhance the efficiency of PCR are always required and remain a great challenge. Here we establish a multiplex PCR assay system (MPAS) that serves as a robust and efficient method for direct detection of target DNA sequences in genomic DNA. The system is established by adding a combination of PCR enhancers to a standard PCR buffer. The performance of MPAS was demonstrated by carrying out direct PCR amplification on 1.2 mm human blood punches using commercially available primer sets which include multiple primer pairs. The optimized PCR system resulted in high-quality genotyping results without any indication of inhibitory effects and led to a full-profile success rate of 98.13%. Our studies demonstrate that the MPAS provides an efficient and robust method for obtaining sensitive, reliable and reproducible PCR results from human blood samples. (authors)

  11. Patch-based visual tracking with online representative sample selection

    Science.gov (United States)

    Ou, Weihua; Yuan, Di; Li, Donghao; Liu, Bin; Xia, Daoxun; Zeng, Wu

    2017-05-01

    Occlusion is one of the most challenging problems in visual object tracking. Recently, many discriminative methods have been proposed to deal with this problem. For discriminative methods, it is difficult to select representative samples for target template updating. In general, the holistic bounding boxes that contain tracked results are selected as the positive samples. However, when the objects are occluded, this simple strategy easily introduces noise into the training data set and the target template, and then leads the tracker to drift away from the target. To address this problem, we propose a robust patch-based visual tracker with online representative sample selection. Different from previous works, we divide the object and the candidates into several patches uniformly and propose a score function to calculate the score of each patch independently. Then, the average score is adopted to determine the optimal candidate. Finally, we utilize the non-negative least squares method to find the representative samples, which are used to update the target template. The experimental results on the object tracking benchmark 2013 and on 13 challenging sequences show that the proposed method is robust to occlusion and achieves promising results.
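
    The representative-sample step can be sketched with SciPy's non-negative least squares: candidate samples are the columns of a dictionary, the tracked target is approximated as their non-negative combination, and near-zero weights mark samples to drop from the template update. The feature vectors and the threshold below are hypothetical, not the paper's formulation.

    ```python
    # NNLS-based selection of representative samples (illustrative data).
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(5)
    target = rng.random(64)                     # flattened target patch features
    candidates = rng.random((64, 10))           # 10 candidate samples as columns
    candidates[:, 0] = target + 0.01 * rng.random(64)   # one near-duplicate
    weights, _ = nnls(candidates, target)
    representative = np.nonzero(weights > 0.05)[0]      # hypothetical cutoff
    print("representative sample indices:", representative)
    ```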

  12. Microchip-electrochemistry route for rapid screening of hydroquinone and arbutin from miscellaneous samples: Investigation of the robustness of a simple cross-injector system

    International Nuclear Information System (INIS)

    Crevillen, Agustin G.; Barrigas, Ines; Blasco, Antonio Javier; Gonzalez, Maria Cristina; Escarpa, Alberto

    2006-01-01

    This work examines in depth the analytical performance of an example of 'first-generation' microdevices: a capillary electrophoresis (CE) microchip with end-channel electrochemical detection (ED). A hydroquinone and arbutin separation, strategically chosen as a route involving pharmaceutical-clinical testing, public safety and food control settings, was carried out. The reproducibility of the unpinched electrokinetic protocol was carefully studied, and the technical possibility of working indiscriminately and/or sequentially with both simple cross-injectors was also demonstrated using a real sample (R.S.D.'s less than 7%). The robustness of the injection protocol allowed checking the state of the microchip/detector coupling and following the extraction efficiency of the analyte from the real sample. Separation variables such as pH, ionic strength, and separation voltage were also carefully assayed and optimized. Analyte screening was performed using borate buffer (pH 9, 60 mM) in less than 180 s in the samples studied, dramatically improving the analysis times required for the same analytes on a conventional scale (15 min), with good precision (R.S.D.'s ranging 5-10%), accuracy (recoveries ranging 90-110%) and acceptable resolution (Rs ≥ 1.0). In addition, the excellent analytical performance of the overall analytical method indicated the quality of the whole analytical microsystem and allowed us to introduce the definition of robustness for methodologies developed in the 'lab-on-a-chip' scene.

  13. Microchip-electrochemistry route for rapid screening of hydroquinone and arbutin from miscellaneous samples: Investigation of the robustness of a simple cross-injector system

    Energy Technology Data Exchange (ETDEWEB)

    Crevillen, Agustin G. [Dpto. Quimica Analitica e Ingenieria Quimica, Universidad de Alcala, 28871 Alcala de Henares, Madrid (Spain); Barrigas, Ines [Dpto. Quimica Analitica e Ingenieria Quimica, Universidad de Alcala, 28871 Alcala de Henares, Madrid (Spain); Blasco, Antonio Javier [Dpto. Quimica Analitica e Ingenieria Quimica, Universidad de Alcala, 28871 Alcala de Henares, Madrid (Spain); Gonzalez, Maria Cristina [Dpto. Quimica Analitica e Ingenieria Quimica, Universidad de Alcala, 28871 Alcala de Henares, Madrid (Spain); Escarpa, Alberto [Dpto. Quimica Analitica e Ingenieria Quimica, Universidad de Alcala, 28871 Alcala de Henares, Madrid (Spain)]. E-mail: alberto.escarpa@uah.es

    2006-03-15

    This work examines in depth the analytical performance of an example of 'first-generation' microdevices: a capillary electrophoresis (CE) microchip with end-channel electrochemical detection (ED). A hydroquinone and arbutin separation, strategically chosen as a route involving pharmaceutical-clinical testing, public safety and food control settings, was carried out. The reproducibility of the unpinched electrokinetic protocol was carefully studied, and the technical possibility of working indiscriminately and/or sequentially with both simple cross-injectors was also demonstrated using a real sample (R.S.D.'s less than 7%). The robustness of the injection protocol allowed checking the state of the microchip/detector coupling and following the extraction efficiency of the analyte from the real sample. Separation variables such as pH, ionic strength, and separation voltage were also carefully assayed and optimized. Analyte screening was performed using borate buffer (pH 9, 60 mM) in less than 180 s in the samples studied, dramatically improving the analysis times required for the same analytes on a conventional scale (15 min), with good precision (R.S.D.'s ranging 5-10%), accuracy (recoveries ranging 90-110%) and acceptable resolution (Rs ≥ 1.0). In addition, the excellent analytical performance of the overall analytical method indicated the quality of the whole analytical microsystem and allowed us to introduce the definition of robustness for methodologies developed in the 'lab-on-a-chip' scene.

  14. Recommended Immunological Strategies to Screen for Botulinum Neurotoxin-Containing Samples

    Directory of Open Access Journals (Sweden)

    Stéphanie Simon

    2015-11-01

    Full Text Available Botulinum neurotoxins (BoNTs) cause the life-threatening neurological illness botulism in humans and animals and are divided into seven serotypes (BoNT/A–G), of which serotypes A, B, E, and F cause the disease in humans. BoNTs are classified as “category A” bioterrorism threat agents and are relevant in the context of the Biological Weapons Convention. An international proficiency test (PT) was conducted to evaluate detection, quantification and discrimination capabilities of 23 expert laboratories from the health, food and security areas. Here we describe three immunological strategies that proved to be successful for the detection and quantification of BoNT/A, B, and E considering the restricted sample volume (1 mL) distributed. To analyze the samples qualitatively and quantitatively, the first strategy was based on sensitive immunoenzymatic and immunochromatographic assays for fast qualitative and quantitative analyses. In the second approach, a bead-based suspension array was used for screening followed by conventional ELISA for quantification. In the third approach, an ELISA plate format assay was used for serotype specific immunodetection of BoNT-cleaved substrates, detecting the activity of the light chain, rather than the toxin protein. The results provide guidance for further steps in quality assurance and highlight problems to address in the future.

  15. Recommended Immunological Strategies to Screen for Botulinum Neurotoxin-Containing Samples.

    Science.gov (United States)

    Simon, Stéphanie; Fiebig, Uwe; Liu, Yvonne; Tierney, Rob; Dano, Julie; Worbs, Sylvia; Endermann, Tanja; Nevers, Marie-Claire; Volland, Hervé; Sesardic, Dorothea; Dorner, Martin B

    2015-11-26

    Botulinum neurotoxins (BoNTs) cause the life-threatening neurological illness botulism in humans and animals and are divided into seven serotypes (BoNT/A-G), of which serotypes A, B, E, and F cause the disease in humans. BoNTs are classified as "category A" bioterrorism threat agents and are relevant in the context of the Biological Weapons Convention. An international proficiency test (PT) was conducted to evaluate detection, quantification and discrimination capabilities of 23 expert laboratories from the health, food and security areas. Here we describe three immunological strategies that proved to be successful for the detection and quantification of BoNT/A, B, and E considering the restricted sample volume (1 mL) distributed. To analyze the samples qualitatively and quantitatively, the first strategy was based on sensitive immunoenzymatic and immunochromatographic assays for fast qualitative and quantitative analyses. In the second approach, a bead-based suspension array was used for screening followed by conventional ELISA for quantification. In the third approach, an ELISA plate format assay was used for serotype specific immunodetection of BoNT-cleaved substrates, detecting the activity of the light chain, rather than the toxin protein. The results provide guidance for further steps in quality assurance and highlight problems to address in the future.

  16. Real-time control systems: feedback, scheduling and robustness

    Science.gov (United States)

    Simon, Daniel; Seuret, Alexandre; Sename, Olivier

    2017-08-01

    The efficient control of real-time distributed systems, where continuous components are governed through digital devices and communication networks, requires careful examination of the constraints arising from the different domains involved within co-design approaches. Thanks to the robustness of feedback control, both new control methodologies and slackened real-time scheduling schemes are proposed that move beyond the frontiers between these traditionally separated fields. A methodology to design robust aperiodic controllers is provided, in which the sampling interval is treated as a control variable of the system. Promising experimental results are provided to show the feasibility and robustness of the approach.
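
    A toy simulation conveys the idea of treating the sampling interval as a control variable. The sketch below (our illustrative rule, not the controller designed in the paper) samples a scalar plant faster when the state magnitude is large and relaxes the rate as it shrinks.

```python
import numpy as np

# Toy aperiodic sampling: the controller picks its own next sampling
# interval, sampling faster when the state is large.
a, b = 0.5, 1.0           # unstable scalar plant: dx/dt = a*x + b*u
k = 2.0                   # state feedback gain, u = -k*x
h_min, h_max = 0.01, 0.2  # allowed sampling intervals

x, t = 1.0, 0.0
n_samples = 0
while t < 5.0:
    u = -k * x
    # interval shrinks with |x|: a simple state-dependent rule
    h = np.clip(h_max / (1.0 + 10.0 * abs(x)), h_min, h_max)
    # exact discretization of the scalar plant over the hold interval
    ead = np.exp(a * h)
    x = ead * x + (ead - 1.0) / a * b * u
    t += h
    n_samples += 1

print("final state %.4f after %d samples" % (x, n_samples))
```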

  17. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    Science.gov (United States)

    Debasish Saha; Armen R. Kemanian; Benjamin M. Rau; Paul R. Adler; Felipe Montes

    2017-01-01

    Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (...

  18. Evaluating sampling strategies for larval cisco (Coregonus artedi)

    Science.gov (United States)

    Myers, J.T.; Stockwell, J.D.; Yule, D.L.; Black, J.A.

    2008-01-01

    To improve our ability to assess larval cisco (Coregonus artedi) populations in Lake Superior, we conducted a study to compare several sampling strategies. First, we compared density estimates of larval cisco concurrently captured in surface waters with a 2 x 1-m paired neuston net and a 0.5-m (diameter) conical net. Density estimates obtained from the two gear types were not significantly different, suggesting that the conical net is a reasonable alternative to the more cumbersome and costly neuston net. Next, we assessed the effect of tow pattern (sinusoidal versus straight tows) to examine whether propeller wash affected larval density. We found no effect of propeller wash on the catchability of larval cisco. Given the availability of global positioning systems, we recommend sampling larval cisco using straight tows to simplify protocols and facilitate straightforward measurements of volume filtered. Finally, we investigated potential trends in larval cisco density estimates by sampling four time periods during the light period of a day at individual sites. Our results indicate no significant trends in larval density estimates during the day. We conclude that estimates of larval cisco density across space are not confounded by time at a daily timescale. Well-designed, cost-effective surveys of larval cisco abundance will help to further our understanding of this important Great Lakes forage species.

  19. Adapting to Adaptations: Behavioural Strategies that are Robust to Mutations and Other Organisational-Transformations

    Science.gov (United States)

    Egbert, Matthew D.; Pérez-Mercader, Juan

    2016-01-01

    Genetic mutations, infection by parasites or symbionts, and other events can transform the way that an organism’s internal state changes in response to a given environment. We use a minimalistic computational model to support an argument that by behaving “interoceptively,” i.e. responding to internal state rather than to the environment, organisms can be robust to these organisational-transformations. We suggest that the robustness of interoceptive behaviour is due, in part, to the asymmetrical relationship between an organism and its environment, where the latter more substantially influences the former than vice versa. This relationship means that interoceptive behaviour can respond to the environment, the internal state and the interaction between the two, while exteroceptive behaviour can only respond to the environment. We discuss the possibilities that (i) interoceptive behaviour may play an important role of facilitating adaptive evolution (especially in the early evolution of primitive life) and (ii) interoceptive mechanisms could prove useful in efforts to create more robust synthetic life-forms. PMID:26743579

  20. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    Science.gov (United States)

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.
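
    While the paper's projection-interval method is specific, the general role of pilot data in powering a two-arm trial can be illustrated with a plain Monte Carlo power calculation (a generic sketch, not the authors' procedure; the effect size and standard deviation would come from the pilot study).

```python
import numpy as np

def power_two_arm(n_per_arm, effect, sd, n_sim=2000, seed=0):
    """Monte Carlo power of a two-sided two-sample z-test at the 5% level."""
    rng = np.random.default_rng(seed)
    z_crit = 1.96
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, sd, n_per_arm)
        b = rng.normal(effect, sd, n_per_arm)
        se = np.sqrt(a.var(ddof=1) / n_per_arm + b.var(ddof=1) / n_per_arm)
        hits += abs(b.mean() - a.mean()) / se > z_crit
    return hits / n_sim

# Pilot data would supply `effect` and `sd`; scan n until power reaches 0.8.
for n in (20, 40, 60, 80, 100):
    print(n, power_two_arm(n, effect=0.5, sd=1.0))
```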

  1. TiSH - a robust and sensitive global phosphoproteomics strategy employing a combination of TiO(2), SIMAC, and HILIC

    DEFF Research Database (Denmark)

    Engholm-Keller, Kasper; Birck, Pernille; Størling, Joachim

    2012-01-01

    losses. We demonstrate the capability of this strategy by quantitative investigation of early interferon-γ signaling in low quantities of insulinoma cells. We identified ~6600 unique phosphopeptides from 300 μg of peptides/condition (22 unique phosphopeptides/μg) in a duplex dimethyl labeling experiment....... This strategy thus shows great potential for interrogating signaling networks from low amounts of sample with high sensitivity and specificity....

  2. Replication and robustness in developmental research.

    Science.gov (United States)

    Duncan, Greg J; Engel, Mimi; Claessens, Amy; Dowsett, Chantelle J

    2014-11-01

    Replications and robustness checks are key elements of the scientific method and a staple in many disciplines. However, leading journals in developmental psychology rarely include explicit replications of prior research conducted by different investigators, and few require authors to establish in their articles or online appendices that their key results are robust across estimation methods, data sets, and demographic subgroups. This article makes the case for prioritizing both explicit replications and, especially, within-study robustness checks in developmental psychology. It provides evidence on variation in effect sizes in developmental studies and documents strikingly different replication and robustness-checking practices in a sample of journals in developmental psychology and a sister behavioral science-applied economics. Our goal is not to show that any one behavioral science has a monopoly on best practices, but rather to show how journals from a related discipline address vital concerns of replication and generalizability shared by all social and behavioral sciences. We provide recommendations for promoting graduate training in replication and robustness-checking methods and for editorial policies that encourage these practices. Although some of our recommendations may shift the form and substance of developmental research articles, we argue that they would generate considerable scientific benefits for the field. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  3. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    Science.gov (United States)

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2018-05-01

    High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling times (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98). The proposed optimal sampling strategy promises to achieve the target area under the curve as part of precision dosing.
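
    The use of a handful of optimally timed samples can be sketched with a simple curve fit. Below, a hypothetical two-compartment profile is sampled at the four reported optimal times and refit by least squares to recover the AUC (parameter values are illustrative, and the least-squares fit stands in for the Bayesian NONMEM estimation used in the study).

```python
import numpy as np
from scipy.optimize import curve_fit

def c2(t, A, a, B, b):
    """Two-compartment bolus profile C(t) = A*exp(-a*t) + B*exp(-b*t)."""
    return A * np.exp(-a * t) + B * np.exp(-b * t)

# Illustrative 'true' parameters (not fitted melphalan estimates).
A_true, a_true, B_true, b_true = 40.0, 3.0, 10.0, 0.4

rng = np.random.default_rng(0)
t_opt = np.array([0.08, 0.61, 2.0, 4.0])       # reported optimal times (h)
obs = c2(t_opt, A_true, a_true, B_true, b_true)
obs *= np.exp(rng.normal(0.0, 0.1, obs.size))  # ~10% assay noise

p, _ = curve_fit(c2, t_opt, obs, p0=(30.0, 2.0, 8.0, 0.3), maxfev=10000)
auc_hat = p[0] / p[1] + p[2] / p[3]            # AUC(0-inf) = A/a + B/b
auc_true = A_true / a_true + B_true / b_true
print("true AUC %.1f, estimated AUC %.1f" % (auc_true, auc_hat))
```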

  4. Alignment Condition-Based Robust Adaptive Iterative Learning Control of Uncertain Robot System

    Directory of Open Access Journals (Sweden)

    Guofeng Tong

    2014-04-01

    Full Text Available This paper proposes an adaptive iterative learning control strategy integrated with saturation-based robust control for an uncertain robot system in the presence of modelling uncertainties, unknown parameters, and external disturbance under an alignment condition. An important merit is that it achieves adaptive switching of the gain matrix both in conventional PD-type feedforward control and in robust adaptive control in the iteration domain simultaneously. The convergence analysis of the proposed control law is based on Lyapunov's direct method under the alignment initial condition. Simulation results demonstrate the faster learning rate and better robust performance of the proposed algorithm in comparison with other existing robust controllers. An actual experiment on a three-DOF robot manipulator demonstrates its practical effectiveness.
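
    The core of a PD-type iterative learning update is compact. The sketch below (a simple first-order plant standing in for the robot dynamics; gains are illustrative) feeds the previous trial's shifted tracking error forward into the next trial's input.

```python
import numpy as np

N = 100
t = np.arange(N)
y_ref = np.sin(2 * np.pi * t / N)          # desired trajectory

def run_trial(u):
    """First-order plant y[k+1] = 0.9*y[k] + 0.5*u[k], zero initial state."""
    y = np.zeros(N)
    for k in range(N - 1):
        y[k + 1] = 0.9 * y[k] + 0.5 * u[k]
    return y

kp, kd = 0.8, 0.2
u = np.zeros(N)
for trial in range(30):
    e = y_ref - run_trial(u)
    # PD-type learning update with a one-step shift (u[k] first affects
    # y[k+1]): u_{j+1}[k] = u_j[k] + kp*e_j[k+1] + kd*(e_j[k+1] - e_j[k])
    u[:-1] += kp * e[1:] + kd * (e[1:] - e[:-1])

print("max tracking error after learning: %.2e"
      % np.max(np.abs(y_ref - run_trial(u))))
```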

  5. Robust decentralised PI based LFC design for time delay power systems

    International Nuclear Information System (INIS)

    Bevrani, Hassan; Hiyama, Takashi

    2008-01-01

    In this paper, two robust decentralised proportional integral (PI) control designs are proposed for load frequency control (LFC) with communication delays. In both methodologies, the PI-based LFC problem is reduced to a static output feedback (SOF) control synthesis for a multiple delay system. The first is based on optimal H∞ control design using a linear matrix inequality (LMI) technique. The second design gives a suboptimal solution using a developed iterative linear matrix inequalities (ILMI) algorithm via the mixed H2/H∞ control technique. The control strategies are suitable for LFC applications that usually employ PI control. The proposed control strategies are applied to a three control area power system with time delays and load disturbance to demonstrate their robustness.

  6. Robust self-triggered MPC for constrained linear systems

    NARCIS (Netherlands)

    Brunner, F.D.; Heemels, W.P.M.H.; Allgöwer, F.

    2014-01-01

    In this paper we propose a robust self-triggered model predictive control algorithm for linear systems with additive bounded disturbances and hard constraints on the inputs and state. In self-triggered control, at every sampling instant the time until the next sampling instant is computed online

  7. Robust Two Degrees-of-freedom Single-current Control Strategy for LCL-type Grid-Connected DG System under Grid-Frequency Fluctuation and Grid-impedance Variation

    DEFF Research Database (Denmark)

    Zhou, Leming; Chen, Yandong; Luo, An

    2016-01-01

    -of-freedom single-current control (RTDOF-SCC) strategy is proposed, which mainly includes the synchronous reference frame quasi-proportional-integral (SRFQPI) control and robust grid-current-feedback active damping (RGCFAD) control. The proposed SRFQPI control can compensate the local-loads reactive power......, and regulate the instantaneous grid current without steady-state error regardless of the fundamental frequency fluctuation. Simultaneously, the proposed RGCFAD control effectively damps the LCL-resonance peak regardless of the grid-impedance variation, and further improves both transient and steady...

  8. Sustainable Resilient, Robust & Resplendent Enterprises

    DEFF Research Database (Denmark)

    Edgeman, Rick

    to their impact. Resplendent enterprises are introduced with resplendence referring not to some sort of public or private façade, but instead refers to organizations marked by dual brilliance and nobility of strategy, governance and comportment that yields superior and sustainable triple bottom line performance....... Herein resilience, robustness, and resplendence (R3) are integrated with sustainable enterprise excellence (Edgeman and Eskildsen, 2013) or SEE and social-ecological innovation (Eskildsen and Edgeman, 2012) to aid progress of a firm toward producing continuously relevant performance that proceed from...

  9. Robust design optimization using the price of robustness, robust least squares and regularization methods

    Science.gov (United States)

    Bukhari, Hassan J.

    2017-12-01

    In this paper a framework for robust optimization of mechanical design problems and process systems with parametric uncertainty is presented using three different approaches. Robust optimization problems are formulated so that the optimal solution is robust, meaning it is minimally sensitive to any perturbation in the parameters. The first method uses the price of robustness approach, which assumes the uncertain parameters to be symmetric and bounded; the robustness of the design can be controlled by limiting the number of parameters that can perturb. The second method uses robust least squares to determine the optimal parameters when the data itself, rather than the parameters, is subject to perturbations. The last method manages uncertainty by restricting the perturbation on parameters to improve sensitivity, similar to Tikhonov regularization. The methods are implemented on two sets of problems: one linear and the other non-linear. The methodology is compared with a prior method using multiple Monte Carlo simulation runs, which shows that the approach presented in this paper results in better performance.
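
    The contrast between ordinary and regularized least squares under data perturbations is easy to demonstrate. In the sketch below (a synthetic ill-conditioned problem), a small perturbation of the data wrecks the plain least-squares solution while a Tikhonov-regularized solve stays much closer to the truth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ill-conditioned least-squares problem: singular values span
# six orders of magnitude, so plain LS amplifies small data perturbations.
n = 50
U = np.linalg.qr(rng.normal(size=(n, n)))[0]
A = U @ np.diag(np.logspace(0, -6, n)) @ U.T
x_true = rng.normal(size=n)
b_pert = A @ x_true + 1e-4 * rng.normal(size=n)   # small measurement noise

x_ls = np.linalg.lstsq(A, b_pert, rcond=None)[0]  # ordinary least squares
lam = 1e-4                                        # ridge parameter
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b_pert)

print("OLS error     : %.2f" % np.linalg.norm(x_ls - x_true))
print("Tikhonov error: %.2f" % np.linalg.norm(x_tik - x_true))
```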

  10. Comparison of active and passive sampling strategies for the monitoring of pesticide contamination in streams

    Science.gov (United States)

    Assoumani, Azziz; Margoum, Christelle; Guillemain, Céline; Coquery, Marina

    2014-05-01

    The monitoring of water bodies for organic contaminants, and the determination of reliable estimates of concentrations, are challenging issues, in particular for the implementation of the Water Framework Directive. Several strategies can be applied to collect water samples for the determination of their contamination level. Grab sampling is fast and easy, and requires few logistical and analytical resources for low frequency sampling campaigns. However, this technique lacks representativeness for streams with high variations of contaminant concentrations, such as pesticides in rivers located in small agricultural watersheds. Increasing the representativeness of this sampling strategy implies greater logistical needs and higher analytical costs. Average automated sampling is therefore a solution, as it allows, in a single analysis, the determination of more accurate and more relevant estimates of concentrations. Two types of automatic sampling can be performed: time-related sampling allows the assessment of average concentrations, whereas flow-dependent sampling leads to average flux concentrations. However, the purchase and the maintenance of automatic samplers are quite expensive. Passive sampling has recently been developed as an alternative to grab or average automated sampling, to obtain, at lower cost, more realistic estimates of the average concentrations of contaminants in streams. These devices allow the passive accumulation of contaminants from large volumes of water, resulting in ultratrace level detection and smoothed integrative sampling over periods ranging from days to weeks. They allow the determination of time-weighted average (TWA) concentrations of the dissolved fraction of target contaminants, but they need to be calibrated in controlled conditions prior to field applications. In other words, the kinetics of the uptake of the target contaminants into the sampler must be studied in order to determine the corresponding sampling rate
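
    Once the sampling rate Rs has been calibrated, the TWA concentration follows from the standard linear-uptake relation C_TWA = m/(Rs·t). A minimal example with hypothetical numbers:

```python
# Time-weighted average concentration from a passive sampler, using the
# standard first-order uptake relation C_TWA = m / (Rs * t), where m is
# the mass of analyte accumulated in the sampler, Rs the sampling rate
# determined during calibration, and t the deployment time.
m_ng = 250.0         # analyte accumulated over deployment (ng), hypothetical
rs_l_per_day = 0.12  # laboratory-calibrated sampling rate (L/day), hypothetical
t_days = 14.0        # deployment duration (days)

c_twa = m_ng / (rs_l_per_day * t_days)   # ng/L
print("TWA concentration: %.1f ng/L" % c_twa)
```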

  11. Robust stability bounds for multi-delay networked control systems

    Science.gov (United States)

    Seitz, Timothy; Yedavalli, Rama K.; Behbahani, Alireza

    2018-04-01

    In this paper, the robust stability of a perturbed linear continuous-time system is examined when controlled using a sampled-data networked control system (NCS) framework. Three new robust stability bounds on the time-invariant perturbations to the original continuous-time plant matrix are presented, guaranteeing stability for the corresponding discrete closed-loop augmented delay-free system (ADFS) with multiple time-varying sensor and actuator delays. The bounds are differentiated from previous work by accounting for the sampled-data nature of the NCS and for separate communication delays for each sensor and actuator, rather than a single delay. Therefore, this paper expands the knowledge base in multiple-input multiple-output (MIMO) sampled-data time delay systems. Bounds are presented for unstructured, semi-structured, and structured perturbations.

  12. Robust modified GA based multi-stage fuzzy LFC

    International Nuclear Information System (INIS)

    Shayeghi, H.; Jalili, A.; Shayanfar, H.A.

    2007-01-01

    In this paper, a robust genetic algorithm (GA) based multi-stage fuzzy (MSF) controller is proposed for the solution of the load frequency control (LFC) problem in a restructured power system that operates under deregulation based on the bilateral policy scheme. In this strategy, the control signal is tuned online from the knowledge base and the fuzzy inference, which requires fewer resources and has two rule base sets. In the proposed method, exact tuning of the membership functions is very important for achieving the desired level of robust performance. Thus, to reduce the design effort and find a better fuzzy system control, the membership functions are designed automatically by a modified genetic algorithm. Classical genetic algorithms are powerful search techniques for finding the global optimal area; however, the global optimum value is not guaranteed, and the speed of convergence can be extremely reduced. To overcome this drawback, a modified genetic algorithm is used to tune the membership functions of the proposed MSF controller. The effectiveness of the proposed method is demonstrated on a three area restructured power system with possible contracted scenarios under large load demand and area disturbances, in comparison with multi-stage fuzzy and classical fuzzy PID controllers through the FD and ITAE performance indices. The evaluation of the results shows that the proposed control strategy achieves good robust performance for a wide range of system parameters and load changes in the presence of system nonlinearities, and is superior to the other controllers. Moreover, this newly developed control strategy has a simple structure, does not require an accurate model of the plant, and is fairly easy to implement, which can be useful for real world complex power systems.
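
    The kind of search involved is illustrated by a small real-coded genetic algorithm. In the sketch below, a quadratic cost stands in for the FD/ITAE index that would be evaluated by simulating the power system, and the three decision variables stand in for membership-function parameters (all values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in cost: in the paper this would be a performance index (FD/ITAE)
# computed from a closed-loop power system simulation.
def cost(theta):
    return np.sum((theta - np.array([0.3, 0.7, 1.5])) ** 2)

pop = rng.uniform(0.0, 2.0, size=(40, 3))        # initial population
for gen in range(60):
    f = np.array([cost(p) for p in pop])
    parents = pop[np.argsort(f)[:20]]            # truncation selection
    kids = []
    for _ in range(20):
        i, j = rng.integers(0, 20, size=2)
        alpha = rng.random(3)
        child = alpha * parents[i] + (1 - alpha) * parents[j]  # blend crossover
        child += rng.normal(0.0, 0.05, 3)        # Gaussian mutation
        kids.append(np.clip(child, 0.0, 2.0))
    pop = np.vstack([parents, kids])

best = pop[np.argmin([cost(p) for p in pop])]
print("tuned parameters:", best)
```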

  13. Robust modified GA based multi-stage fuzzy LFC

    Energy Technology Data Exchange (ETDEWEB)

    Shayeghi, H. [Technical Engineering Department, The University of Mohaghegh Ardebili, Daneshkah St., Ardebil (Iran); Jalili, A. [Electrical Engineering Group, Islamic Azad University, Ardebil Branch, Ardebil (Iran); Shayanfar, H.A. [Electrical Engineering Department, Iran University of Science and Technology, Tehran (Iran)

    2007-05-15

    In this paper, a robust genetic algorithm (GA) based multi-stage fuzzy (MSF) controller is proposed for the solution of the load frequency control (LFC) problem in a restructured power system that operates under deregulation based on the bilateral policy scheme. In this strategy, the control signal is tuned online from the knowledge base and the fuzzy inference, which requires fewer resources and has two rule base sets. In the proposed method, exact tuning of the membership functions is very important for achieving the desired level of robust performance. Thus, to reduce the design effort and find a better fuzzy system control, the membership functions are designed automatically by a modified genetic algorithm. Classical genetic algorithms are powerful search techniques for finding the global optimal area; however, the global optimum value is not guaranteed, and the speed of convergence can be extremely reduced. To overcome this drawback, a modified genetic algorithm is used to tune the membership functions of the proposed MSF controller. The effectiveness of the proposed method is demonstrated on a three area restructured power system with possible contracted scenarios under large load demand and area disturbances, in comparison with multi-stage fuzzy and classical fuzzy PID controllers through the FD and ITAE performance indices. The evaluation of the results shows that the proposed control strategy achieves good robust performance for a wide range of system parameters and load changes in the presence of system nonlinearities, and is superior to the other controllers. Moreover, this newly developed control strategy has a simple structure, does not require an accurate model of the plant, and is fairly easy to implement, which can be useful for real world complex power systems. (author)

  14. Robust and Rapid Air-Borne Odor Tracking without Casting

    Science.gov (United States)

    Bhattacharyya, Urvashi

    2015-01-01

    Casting behavior (zigzagging across an odor stream) is common in air/liquid-borne odor tracking in open fields; however, terrestrial odor localization often involves path selection in a familiar environment. To study this, we trained rats to run toward an odor source in a multi-choice olfactory arena with near-laminar airflow. We find that rather than casting, rats run directly toward an odor port, and if this is incorrect, they serially sample other sources. This behavior is consistent and accurate in the presence of perturbations, such as novel odors, background odor, unilateral nostril stitching, and turbulence. We developed a model that predicts that this run-and-scan tracking of air-borne odors is faster than casting, provided there are a small number of targets at known locations. Thus, the combination of best-guess target selection with fallback serial sampling provides a rapid and robust strategy for finding odor sources in familiar surroundings. PMID:26665165

  15. Robust optimization of supersonic ORC nozzle guide vanes

    Science.gov (United States)

    Bufi, Elio A.; Cinnella, Paola

    2017-03-01

    An efficient Robust Optimization (RO) strategy is developed for the design of 2D supersonic Organic Rankine Cycle turbine expanders. Dense gas effects are non-negligible for this application and are taken into account by describing the thermodynamics with the Peng-Robinson-Stryjek-Vera equation of state. The design methodology combines an Uncertainty Quantification (UQ) loop based on a Bayesian kriging model of the system response to the uncertain parameters, used to approximate statistics (mean and variance) of the uncertain system output, a CFD solver, and a multi-objective non-dominated sorting algorithm (NSGA), also based on a kriging surrogate of the multi-objective fitness function, along with an adaptive infill strategy for surrogate enrichment at each generation of the NSGA. The objective functions are the average and variance of the isentropic efficiency. The blade shape is parametrized by means of a Free Form Deformation (FFD) approach. The robust optimal blades are compared to the baseline design (based on the Method of Characteristics) and to a blade obtained by means of a deterministic CFD-based optimization.

  16. On robust multi-period pre-commitment and time-consistent mean-variance portfolio optimization

    NARCIS (Netherlands)

    F. Cong (Fei); C.W. Oosterlee (Kees)

    2017-01-01

    We consider robust pre-commitment and time-consistent mean-variance optimal asset allocation strategies that are required to perform well also in a worst-case scenario regarding the development of the asset price. We show that worst-case scenarios for both strategies can be found by
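
    The worst-case flavour of such strategies can be sketched as a max-min grid search. Below, the weight in a single risky asset is chosen to maximize the worst mean-variance objective over a small, illustrative scenario set (not the authors' model).

```python
import numpy as np

# Worst-case (max-min) mean-variance allocation between one risky asset
# and cash; the (mean, stdev) scenarios for the risky return are
# illustrative placeholders.
scenarios = [(0.06, 0.15), (0.03, 0.25), (0.08, 0.35)]
risk_aversion = 3.0
weights = np.linspace(0.0, 1.0, 101)

def objective(w, mu, sigma):
    # single-period mean-variance utility of putting weight w in the asset
    return w * mu - 0.5 * risk_aversion * (w * sigma) ** 2

worst = [min(objective(w, mu, s) for mu, s in scenarios) for w in weights]
w_rob = weights[int(np.argmax(worst))]
print("robust weight in risky asset: %.2f" % w_rob)
```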

  17. Sampling strategies to measure the prevalence of common recurrent infections in longitudinal studies

    Directory of Open Access Journals (Sweden)

    Luby Stephen P

    2010-08-01

    Full Text Available Abstract Background Measuring recurrent infections such as diarrhoea or respiratory infections in epidemiological studies is a methodological challenge. Problems in measuring the incidence of recurrent infections include the episode definition, recall error, and the logistics of close follow up. Longitudinal prevalence (LP), the proportion-of-time-ill estimated by repeated prevalence measurements, is an alternative measure to incidence of recurrent infections. In contrast to incidence, which usually requires continuous sampling, LP can be measured at intervals. This study explored how many more participants are needed for infrequent sampling to achieve the same study power as frequent sampling. Methods We developed a set of four empirical simulation models representing low and high risk settings with short or long episode durations. The model was used to evaluate different sampling strategies with different assumptions on recall period and recall error. Results The model identified three major factors that influence sampling strategies: (1) the clustering of episodes in individuals; (2) the duration of episodes; (3) the positive correlation between an individual's disease incidence and episode duration. Intermittent sampling (e.g. 12 times per year) often requires only a slightly larger sample size compared to continuous sampling, especially in cluster-randomized trials. The collection of period prevalence data can lead to highly biased effect estimates if the exposure variable is associated with episode duration. To maximize study power, recall periods of 3 to 7 days may be preferable over shorter periods, even if this leads to inaccuracy in the prevalence estimates. Conclusion Choosing the optimal approach to measure recurrent infections in epidemiological studies depends on the setting, the study objectives, study design and budget constraints. Sampling at intervals can contribute to making epidemiological studies and trials more efficient, valid
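
    A stripped-down version of such a simulation is easy to set up. The sketch below (illustrative parameters, not the paper's calibrated models) generates a year of daily illness histories with person-level risk heterogeneity and episode clustering, then compares longitudinal prevalence estimated from daily follow-up against 12 sampling visits.

```python
import numpy as np

rng = np.random.default_rng(0)

# One year of daily illness indicators for a cohort with person-level
# heterogeneity in risk (episodes cluster in high-risk individuals).
n, days = 500, 365
risk = rng.gamma(shape=2.0, scale=0.01, size=n)   # ~2% mean daily onset risk
ill = np.zeros((n, days), dtype=bool)
for i in range(n):
    d = 0
    while d < days:
        if rng.random() < risk[i]:                # an episode starts
            dur = rng.integers(2, 8)              # 2-7 day episode
            ill[i, d:d + dur] = True
            d += dur
        else:
            d += 1

lp_daily = ill.mean()                             # continuous follow-up
visits = np.linspace(0, days - 1, 12).astype(int)
lp_12 = ill[:, visits].mean()                     # 12 visits per year
print("LP, daily follow-up: %.3f   LP, 12 visits: %.3f" % (lp_daily, lp_12))
```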

  18. OUTPACE long duration stations: physical variability, context of biogeochemical sampling, and evaluation of sampling strategy

    Directory of Open Access Journals (Sweden)

    A. de Verneil

    2018-04-01

    Full Text Available Research cruises to quantify biogeochemical fluxes in the ocean require taking measurements at stations lasting at least several days. A popular experimental design is the quasi-Lagrangian drifter, often mounted with in situ incubations or sediment traps that follow the flow of water over time. After initial drifter deployment, the ship tracks the drifter for continuing measurements that are supposed to represent the same water environment. An outstanding question is how to best determine whether this is true. During the Oligotrophy to UlTra-oligotrophy PACific Experiment (OUTPACE) cruise, from 18 February to 3 April 2015 in the western tropical South Pacific, three separate stations of long duration (five days) over the upper 500 m were conducted in this quasi-Lagrangian sampling scheme. Here we present physical data to provide context for these three stations and to assess whether the sampling strategy worked, i.e., that a single body of water was sampled. After analyzing tracer variability and local water circulation at each station, we identify water layers and times where the drifter risks encountering another body of water. While almost no realization of this sampling scheme will be truly Lagrangian, due to the presence of vertical shear, the depth-resolved observations during the three stations show most layers sampled sufficiently homogeneous physical environments during OUTPACE. By directly addressing the concerns raised by these quasi-Lagrangian sampling platforms, a protocol of best practices can begin to be formulated so that future research campaigns include the complementary datasets and analyses presented here to verify the appropriate use of the drifter platform.

  19. Exploration of robust operating conditions in inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Tromp, John W.; Pomares, Mario; Alvarez-Prieto, Manuel; Cole, Amanda; Ying Hai; Salin, Eric D.

    2003-01-01

    'Robust' conditions, as defined by Mermet and co-workers for inductively coupled plasma (ICP)-atomic emission spectrometry, minimize matrix effects on analyte signals, and are obtained by increasing power and reducing nebulizer gas flow. In ICP-mass spectrometry (MS), it is known that reduced nebulizer gas flow usually leads to more robust conditions such that matrix effects are reduced. In this work, robust conditions for ICP-MS have been determined by optimizing for accuracy in the determination of analytes in a multi-element solution with various interferents (Al, Ba, Cs, K, Na), by varying power, nebulizer gas flow, sample introduction rate and ion lens voltage. The goal of the work was to determine which operating parameters were the most important in reducing matrix effects, and whether different interferents yielded the same robust conditions. Reduction in nebulizer gas flow and in sample input rate led to a significantly decreased interference, while an increase in power seemed to have a lesser effect. Once the other parameters had been adjusted to their robust values, there was no additional improvement in accuracy attainable by adjusting the ion lens voltage. The robust conditions were universal, since, for all the interferents and analytes studied, the optimum was found at the same operating conditions. One drawback to the use of robust conditions was the slightly reduced sensitivity; however, in the context of 'intelligent' instruments, the concept of 'robust conditions' is useful in many cases

  20. A Robust H∞ Controller for an UAV Flight Control System

    Directory of Open Access Journals (Sweden)

    J. López

    2015-01-01

    Full Text Available The objective of this paper is the implementation and validation of a robust H∞ controller for a UAV to track all types of manoeuvres in a noisy environment. A robust inner-outer loop strategy is implemented. The inner-loop robust controller is designed using H∞ control methodology. The two controllers that form the outer loop are designed using the H∞ loop shaping technique. The reference vector used in the control architecture, formed by vertical velocity, true airspeed, and heading angle, suggests a nontraditional way to pilot the aircraft. The simulation results show that the proposed control scheme works well despite the presence of noise and uncertainties, and the control system satisfies the requirements.

  1. Comparison of sampling strategies for tobacco retailer inspections to maximize coverage in vulnerable areas and minimize cost.

    Science.gov (United States)

    Lee, Joseph G L; Shook-Sa, Bonnie E; Bowling, J Michael; Ribisl, Kurt M

    2017-06-23

    In the United States, tens of thousands of inspections of tobacco retailers are conducted each year. Various sampling choices can reduce travel costs, emphasize enforcement in areas with greater non-compliance, and allow for comparability between states and over time. We sought to develop a model sampling strategy for state tobacco retailer inspections. Using a 2014 list of 10,161 North Carolina tobacco retailers, we compared results from simple random sampling; stratified sampling clustered at the ZIP code level; and stratified sampling clustered at the census tract level. We conducted a simulation of repeated sampling and compared the approaches in terms of precision, coverage, and retailer dispersion. While maintaining an adequate design effect and statistical precision appropriate for a public health enforcement program, both the stratified, clustered ZIP- and tract-based approaches were feasible. Both ZIP and tract strategies yielded improvements over simple random sampling, with relative improvements, respectively, of average distance between retailers (reduced 5.0% and 1.9%), percent Black residents in sampled neighborhoods (increased 17.2% and 32.6%), percent Hispanic residents in sampled neighborhoods (reduced 2.2% and increased 18.3%), percentage of sampled retailers located near schools (increased 61.3% and 37.5%), and poverty rate in sampled neighborhoods (increased 14.0% and 38.2%). States can make retailer inspections more efficient and targeted with stratified, clustered sampling. Use of statistically appropriate sampling strategies like these should be considered by states, researchers, and the Food and Drug Administration to improve program impact and allow for comparisons over time and across states. The authors present a model tobacco retailer sampling strategy for promoting compliance and reducing costs that could be used by U.S. states and the Food and Drug Administration (FDA). The design is feasible to implement in North Carolina. Use of
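
    A stratified, clustered design of this kind is straightforward to express in code. The sketch below (synthetic tract and retailer data, with illustrative stratum cutoffs and sample sizes) stratifies tracts by poverty rate, samples clusters within each stratum, and then samples retailers within each chosen cluster.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 census tracts, each with a poverty rate and a
# roster of retailers (all placeholders, not the NC retailer list).
n_tracts = 200
tract_poverty = rng.uniform(0.02, 0.45, n_tracts)
retailers = {t: list(range(rng.integers(5, 60))) for t in range(n_tracts)}

# Stage 1: stratify tracts by poverty, then draw clusters per stratum.
strata = {"high": np.where(tract_poverty >= 0.20)[0],
          "low":  np.where(tract_poverty < 0.20)[0]}

sample = []
for name, tracts in strata.items():
    chosen_tracts = rng.choice(tracts, size=10, replace=False)
    # Stage 2: draw retailers within each sampled cluster.
    for t in chosen_tracts:
        k = min(3, len(retailers[t]))
        for r in rng.choice(retailers[t], size=k, replace=False):
            sample.append((name, int(t), int(r)))

print("inspection sample size:", len(sample))
```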

  2. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-offs on Phenotype Robustness in Biological Networks. Part III: Synthetic Gene Networks in Synthetic Biology

    Science.gov (United States)

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties that are observed in biological systems at many different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be large enough to confer: intrinsic robustness for tolerating intrinsic parameter fluctuations; genetic robustness for buffering genetic variations; and environmental robustness for resisting environmental disturbances. Network robustness is needed so phenotype stability of biological network can be maintained, guaranteeing phenotype robustness. Synthetic biology is foreseen to have important applications in biotechnology and medicine; it is expected to contribute significantly to a better understanding of functioning of complex biological systems. This paper presents a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation for synthetic gene networks in synthetic biology. Further, from the unifying mathematical framework, we found that the phenotype robustness criterion for synthetic gene networks is the following: if intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness, then the phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in synthetic biology can also be investigated through corresponding phenotype robustness criteria from the systematic point of view. Finally, a robust synthetic design that involves network evolution algorithms with desired behavior under intrinsic parameter fluctuations, genetic variations, and environmental

  3. Sampling strong tracking nonlinear unscented Kalman filter and its application in eye tracking

    International Nuclear Information System (INIS)

    Zu-Tao, Zhang; Jia-Shu, Zhang

    2010-01-01

    The unscented Kalman filter is a well-known method for nonlinear motion estimation and tracking. However, the standard unscented Kalman filter has inherent drawbacks, such as numerical instability and heavy computational cost in practical applications. In this paper, we present a novel sampling strong tracking nonlinear unscented Kalman filter, aiming to overcome the difficulties of nonlinear eye tracking. In the proposed filter, a simplified unscented transform sampling strategy with n + 2 sigma points yields computational efficiency, and the suboptimal fading factor of strong tracking filtering is introduced to improve the robustness and accuracy of eye tracking. Compared with related unscented Kalman filters for eye tracking, the proposed filter has potential advantages in robustness, convergence speed, and tracking accuracy. The final experimental results show the validity of our method for eye tracking under realistic conditions.
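
    For reference, the conventional unscented transform uses 2n + 1 sigma points (the paper's simplified strategy reduces this to n + 2). The sketch below propagates a mean and covariance through a nonlinear map with the standard sigma-point set and weights.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=1.0):
    """Propagate (mean, cov) through nonlinear f with 2n+1 sigma points."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)        # matrix square root
    sigma = np.vstack([mean, mean + S.T, mean - S.T])
    wm = np.full(2 * n + 1, 0.5 / (n + lam))       # mean weights
    wc = wm.copy()                                 # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    y = np.array([f(s) for s in sigma])
    y_mean = wm @ y
    d = y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

mean = np.array([0.5, 0.2])
cov = np.diag([0.01, 0.02])
f = lambda x: np.array([np.sin(x[0]) * x[1], x[0] ** 2])
y_mean, y_cov = unscented_transform(mean, cov, f)
print(y_mean, "\n", y_cov)
```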

  4. Robustness of pinning a general complex dynamical network

    International Nuclear Information System (INIS)

    Wang Lei; Sun Youxian

    2010-01-01

    This Letter studies the robustness of pinning a general complex dynamical network toward an assigned synchronous evolution. Several synchronization criteria are presented to guarantee the convergence of the pinning process, locally and globally, by construction of Lyapunov functions. In particular, if a pinning strategy has been designed for synchronization of a given complex dynamical network, then no matter what uncertainties occur among the pinned nodes, synchronization can still be guaranteed through the pinning. The analytical results show that pinning control has a certain robustness against perturbations of the network architecture: adding, deleting and changing the weights of edges. Numerical simulations on scale-free complex networks verify the theoretical results obtained above.

  5. Sampling strategies for tropical forest nutrient cycling studies: a case study in São Paulo, Brazil

    Directory of Open Access Journals (Sweden)

    G. Sparovek

    1997-12-01

    Full Text Available The precise sampling of soil, biological or microclimatic attributes in tropical forests, which are characterized by a high diversity of species and complex spatial variability, is a difficult task. We found few basic studies to guide sampling procedures. The objective of this study was to define a sampling strategy and data analysis for some parameters frequently used in nutrient cycling studies, i.e., litter amount, total nutrient amounts in litter and its composition (Ca, Mg, K, N and P), and soil attributes at three depths (organic matter, P content, cation exchange capacity and base saturation). A natural remnant forest in the west of São Paulo State (Brazil) was selected as the study area, and samples were collected in July 1989. The total amount of litter and its total nutrient amounts had a high spatially independent variance. Conversely, the variance of litter composition was lower and the spatial dependency was peculiar to each nutrient. The sampling strategy for the estimation of litter amounts and the amount of nutrients in litter should differ from the sampling strategy for nutrient composition. For the estimation of litter amounts and the amount of nutrients in litter (related to quantity), a large number of randomly distributed determinations are needed. Conversely, for the estimation of litter nutrient composition (related to quality), a smaller number of spatially located samples should be analyzed. The sampling design for soil attributes differed according to depth. Overall, surface samples (0-5 cm) showed high short-distance spatially dependent variance, whereas subsurface samples exhibited spatial dependency over longer distances. Short transects with a sampling interval of 5-10 m are recommended for surface sampling. Subsurface samples must also be spatially located, but with transects or grids with longer distances between sampling points over the entire area. Composite soil samples would not provide a complete

  6. Robust efficient video fingerprinting

    Science.gov (United States)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.

  7. Does a crouched leg posture enhance running stability and robustness?

    Science.gov (United States)

    Blum, Yvonne; Birn-Jeffery, Aleksandra; Daley, Monica A; Seyfarth, Andre

    2011-07-21

    Humans and birds both walk and run bipedally on compliant legs. However, differences in leg architecture may result in species-specific leg control strategies as indicated by the observed gait patterns. In this work, control strategies for stable running are derived based on a conceptual model and compared with experimental data on running humans and pheasants (Phasianus colchicus). From a model perspective, running with compliant legs can be represented by the planar spring mass model and stabilized by applying swing leg control. Here, linear adaptations of the three leg parameters, leg angle, leg length and leg stiffness during late swing phase are assumed. Experimentally observed kinematic control parameters (leg rotation and leg length change) of human and avian running are compared, and interpreted within the context of this model, with specific focus on stability and robustness characteristics. The results suggest differences in stability characteristics and applied control strategies of human and avian running, which may relate to differences in leg posture (straight leg posture in humans, and crouched leg posture in birds). It has been suggested that crouched leg postures may improve stability. However, as the system of control strategies is overdetermined, our model findings suggest that a crouched leg posture does not necessarily enhance running stability. The model also predicts different leg stiffness adaptation rates for human and avian running, and suggests that a crouched avian leg posture, which is capable of both leg shortening and lengthening, allows for stable running without adjusting leg stiffness. In contrast, in straight-legged human running, the preparation of the ground contact seems to be more critical, requiring leg stiffness adjustment to remain stable. Finally, analysis of a simple robustness measure, the normalized maximum drop, suggests that the crouched leg posture may provide greater robustness to changes in terrain height

  8. Clinical usefulness of limited sampling strategies for estimating AUC of proton pump inhibitors.

    Science.gov (United States)

    Niioka, Takenori

    2011-03-01

    Cytochrome P450 (CYP) 2C19 (CYP2C19) genotype is regarded as a useful tool to predict the area under the blood concentration-time curve (AUC) of proton pump inhibitors (PPIs). In our results, however, CYP2C19 genotype had no influence on the AUC of any PPI during fluvoxamine treatment. These findings suggest that CYP2C19 genotyping is not always a good indicator for estimating the AUC of PPIs. Limited sampling strategies (LSS) were developed to estimate AUC simply and accurately. It is important to minimize the number of blood samples for the sake of patient acceptance. This article reviews the usefulness of LSS for estimating the AUC of three PPIs (omeprazole: OPZ, lansoprazole: LPZ and rabeprazole: RPZ). The best prediction formulas for each PPI were AUC(OPZ) = 9.24 × C(6h) + 2638.03, AUC(LPZ) = 12.32 × C(6h) + 3276.09 and AUC(RPZ) = 1.39 × C(3h) + 7.17 × C(6h) + 344.14, respectively. In order to optimize the sampling strategy for LPZ, we tried to establish an LSS for LPZ using a time point within 3 hours, exploiting the pharmacokinetic properties of its enantiomers. The best prediction formula using the fewest sampling points (one time point) was AUC(racemic LPZ) = 6.5 × C(3h) of (R)-LPZ + 13.7 × C(3h) of (S)-LPZ - 9917.3 × G1 - 14387.2 × G2 + 7103.6 (G1: homozygous extensive metabolizer is 1 and the other genotypes are 0; G2: heterozygous extensive metabolizer is 1 and the other genotypes are 0). These strategies, based on plasma concentration monitoring at one or two time points, might be more suitable for AUC estimation than reference to CYP2C19 genotype, particularly in the case of coadministration of CYP mediators.
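
    The quoted formulas translate directly into code. The helper functions below implement them verbatim (units as in the original study; the example concentrations are hypothetical):

```python
# Limited sampling strategy estimators quoted in the review; C(3h) and
# C(6h) are plasma concentrations at 3 h and 6 h post-dose.
def auc_opz(c6h):
    return 9.24 * c6h + 2638.03

def auc_lpz(c6h):
    return 12.32 * c6h + 3276.09

def auc_rpz(c3h, c6h):
    return 1.39 * c3h + 7.17 * c6h + 344.14

def auc_racemic_lpz(c3h_r, c3h_s, genotype):
    """genotype: 'homEM', 'hetEM' or 'PM' (poor metabolizer)."""
    g1 = 1.0 if genotype == "homEM" else 0.0
    g2 = 1.0 if genotype == "hetEM" else 0.0
    return 6.5 * c3h_r + 13.7 * c3h_s - 9917.3 * g1 - 14387.2 * g2 + 7103.6

print(auc_rpz(c3h=420.0, c6h=180.0))   # hypothetical concentrations
```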

  9. Robustness in NAA evaluated by the Youden and Steiner test

    International Nuclear Information System (INIS)

    Bedregal, P.; Torres, B.; Ubillus, M.; Mendoza, P.; Montoya, E.

    2008-01-01

    The chemistry laboratory at the Peruvian Institute of Nuclear Energy (IPEN) has carried out method validation for samples of siliceous composition. At least seven variables affecting the robustness of the results were initially identified, which may interact simultaneously or individually. Conventional evaluation of these effects would imply a massive number of analyses; a far more effective approach to assessing robustness was found in the Youden-Steiner test, which provides the necessary information with only eight analyses for each sample type. Three reference materials were used to evaluate the effects of variations in sample mass, irradiation duration, standard mass, neutron flux, decay time, counting time and counting distance. (author)
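
    The Youden-Steiner design evaluates seven two-level factors in eight runs. The sketch below builds an equivalent 2^(7-4) fractional factorial (generators D=AB, E=AC, F=BC, G=ABC) and computes each factor's effect from hypothetical results of the eight analyses.

```python
import itertools
import numpy as np

# Seven two-level factors in eight runs: full 2^3 factorial in A, B, C,
# with the remaining factors aliased onto interactions (D=AB, E=AC,
# F=BC, G=ABC) -- a standard ruggedness-test construction.
base = np.array(list(itertools.product([1, -1], repeat=3)))
A, B, C = base[:, 0], base[:, 1], base[:, 2]
design = np.column_stack([A, B, C, A*B, A*C, B*C, A*B*C])

# Hypothetical results of the eight analyses (e.g., measured analyte
# concentration for each run).
y = np.array([10.2, 9.8, 10.1, 10.0, 10.4, 9.9, 10.3, 9.7])

# Effect of each factor = mean at high level - mean at low level
# (each column has four +1 and four -1 entries).
effects = design.T @ y / 4.0
for name, eff in zip("ABCDEFG", effects):
    print("factor %s effect: %+.3f" % (name, eff))
```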

  10. Robust reconfigurable control for parametric and additive faults with FDI uncertainties

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Yang, Zhenyu

    2000-01-01

    From the standpoint of system recoverability, this paper discusses robust reconfigurable control synthesis for LTI systems and a class of nonlinear control systems with parametric and additive faults, as well as deviations generated by FDI algorithms. By following the model-matching strategy, an augmented optimal control problem is constructed based on the considered faulty and fictitious nominal systems, such that robust control design techniques, such as H-infinity control and mu synthesis, can be employed for the reconfigurable control design.

  11. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using Illumina platform.

    Science.gov (United States)

    Mitra, Abhishek; Skrzypczak, Magdalena; Ginalski, Krzysztof; Rowicka, Maga

    2015-01-01

    Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations cause substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both the quality problems of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol markedly increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage (see the sketch below). Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively, we discuss how
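
    One way to catch the barcode-induced low-diversity problem at the design stage is to check the base composition of the barcode set cycle by cycle: if one base dominates an early cycle, cluster calling will suffer. A minimal sketch of such a check (the function name and the example barcode set are ours):

```python
from collections import Counter

def per_cycle_base_balance(barcodes):
    """For each sequencing cycle, report the fraction of each base across a
    barcode set; strongly skewed cycles signal a low-diversity risk for
    cluster calling."""
    length = len(barcodes[0])
    for cycle in range(length):
        counts = Counter(bc[cycle] for bc in barcodes)
        total = sum(counts.values())
        fractions = {base: counts.get(base, 0) / total for base in "ACGT"}
        print(f"cycle {cycle + 1}: " +
              ", ".join(f"{b}={frac:.2f}" for b, frac in fractions.items()))

# Example: a rotated barcode set that is perfectly balanced in every cycle.
per_cycle_base_balance(["ACGT", "CGTA", "GTAC", "TACG"])
```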

  12. Strategies for achieving high sequencing accuracy for low diversity samples and avoiding sample bleeding using Illumina platform.

    Directory of Open Access Journals (Sweden)

    Abhishek Mitra

    Full Text Available Sequencing microRNA, reduced representation sequencing, Hi-C technology and any method requiring the use of in-house barcodes result in sequencing libraries with low initial sequence diversity. Sequencing such data on the Illumina platform typically produces low quality data due to the limitations of the Illumina cluster calling algorithm. Moreover, even in the case of diverse samples, these limitations cause substantial inaccuracies in multiplexed sample assignment (sample bleeding). Such inaccuracies are unacceptable in clinical applications, and in some other fields (e.g. detection of rare variants). Here, we discuss how both the quality problems of low-diversity samples and sample bleeding are caused by incorrect detection of clusters on the flowcell during initial sequencing cycles. We propose simple software modifications (Long Template Protocol) that overcome this problem. We present experimental results showing that our Long Template Protocol markedly increases data quality for low diversity samples, as compared with the standard analysis protocol; it also substantially reduces sample bleeding for all samples. For comprehensiveness, we also discuss and compare experimental results from alternative approaches to sequencing low diversity samples. First, we discuss how the low diversity problem, if caused by barcodes, can be avoided altogether at the barcode design stage. Second and third, we present modified guidelines, which are more stringent than the manufacturer's, for mixing low diversity samples with diverse samples and lowering cluster density, which in our experience consistently produces high quality data from low diversity samples. Fourth and fifth, we present rescue strategies that can be applied when sequencing results in low quality data and when there is no more biological material available. In such cases, we propose that the flowcell be re-hybridized and sequenced again using our Long Template Protocol. Alternatively

  13. Perceptual Robust Design

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard

    The research presented in this PhD thesis has focused on a perceptual approach to robust design. The results of the research and the original contribution to knowledge are a preliminary framework for understanding, positioning, and applying perceptual robust design. Product quality is a topic...... been presented. Therefore, this study set out to contribute to the understanding and application of perceptual robust design. To achieve this, a state-of-the-art and current practice review was performed. From the review two main research problems were identified. Firstly, a lack of tools...... for perceptual robustness was found to overlap with the optimum for functional robustness and at most approximately 2.2% out of the 14.74% could be ascribed solely to the perceptual robustness optimisation. In conclusion, the thesis has offered a new perspective on robust design by merging robust design...

  14. Sample design and gamma-ray counting strategy of neutron activation system for triton burnup measurements in KSTAR

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Jungmin [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of); Cheon, Mun Seong [ITER Korea, National Fusion Research Institute, Daejeon (Korea, Republic of); Chung, Kyoung-Jae, E-mail: jkjlsh1@snu.ac.kr [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of); Hwang, Y.S. [Department of Energy System Engineering, Seoul National University, Seoul (Korea, Republic of)

    2016-11-01

    Highlights: • Sample design for triton burnup ratio measurement is carried out. • Samples for 14.1 MeV neutron measurements are selected for KSTAR. • Si and Cu are the most suitable materials for d-t neutron measurements. • Appropriate γ-ray counting strategies for each selected sample are established. - Abstract: For the purpose of triton burnup measurements in Korea Superconducting Tokamak Advanced Research (KSTAR) deuterium plasmas, appropriate neutron activation system (NAS) samples for 14.1 MeV d-t neutron measurements have been designed and a gamma-ray counting strategy has been established. Neutronics calculations are performed with the MCNP5 neutron transport code for the KSTAR neutral beam heated deuterium plasma discharges. Based on those calculations and the assumed d-t neutron yield, the activities induced by d-t neutrons are estimated with the inventory code FISPACT-2007 for candidate sample materials: Si, Cu, Al, Fe, Nb, Co, Ti, and Ni. It is found that Si, Cu, Al, and Fe are suitable for the KSTAR NAS in terms of the minimum detectable activity (MDA), calculated based on the standard deviation of blank measurements. Considering background gamma-rays radiated from surrounding structures activated by thermalized fusion neutrons, an appropriate gamma-ray counting strategy for each selected sample is established.
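
    The abstract does not give the exact MDA formulation; a common choice consistent with "based on the standard deviation of blank measurements" is the Currie detection limit. A hedged sketch follows, with all parameter values hypothetical.

```python
def minimum_detectable_activity(sigma_blank_counts: float,
                                efficiency: float,
                                gamma_yield: float,
                                count_time_s: float) -> float:
    """Currie-style MDA in Bq, assuming a detection limit in counts of
    L_D = 2.71 + 4.65 * sigma_blank, where sigma_blank is the standard
    deviation of repeated blank measurements. The exact formulation used
    for the KSTAR NAS may differ."""
    l_d = 2.71 + 4.65 * sigma_blank_counts
    return l_d / (efficiency * gamma_yield * count_time_s)

# Hypothetical example: 2% detection efficiency, 0.9 gamma emission
# probability, 600 s count time, blank standard deviation of 12 counts.
print(minimum_detectable_activity(12.0, 0.02, 0.9, 600.0))
```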

  15. Females' sampling strategy to comparatively evaluate prospective mates in the peacock blenny Salaria pavo

    Science.gov (United States)

    Locatello, Lisa; Rasotto, Maria B.

    2017-08-01

    Emerging evidence suggests the occurrence of comparative decision-making processes in mate choice, questioning the traditional idea of female choice based on rules of absolute preference. In such a scenario, females are expected to use a typical best-of-n sampling strategy, being able to recall previously sampled males based on memory of their quality and location. Accordingly, the quality of the preferred mate is expected to be unrelated to both the number and the sequence of female visits. We found support for these predictions in the peacock blenny, Salaria pavo, a fish whose females have the opportunity to evaluate the attractiveness of many males in a short time period and in a restricted spatial range. Indeed, even considering the variability in preference among females, most of them returned to previously sampled males for further evaluations; thus, the preferred male was not the last one in the sequence of visited males. Moreover, there was no relationship between the attractiveness of the preferred male and the number of further visits assigned to the other males. Our results suggest the occurrence of a best-of-n mate sampling strategy in the peacock blenny.
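
    The signature of a best-of-n strategy described above, namely that the preferred male is rarely the last one visited, is easy to see in a toy simulation. This is entirely illustrative; male qualities are simply drawn at random.

```python
import random

def best_of_n_choice(n_males: int = 6) -> int:
    """Simulate a best-of-n sampling bout: the female visits all n males,
    then returns to (chooses) the highest-quality one. Returns the position
    of the chosen male in the visit sequence (1-based)."""
    qualities = [random.random() for _ in range(n_males)]
    return qualities.index(max(qualities)) + 1

# Under best-of-n the chosen male is the last visited only ~1/n of the time,
# matching the observation that preferred males are revisited rather than
# simply being last in the sequence.
trials = [best_of_n_choice() for _ in range(10_000)]
print(sum(pos == 6 for pos in trials) / len(trials))   # ~0.167 for n = 6
```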

  16. Evaluating risk management strategies in resource planning

    International Nuclear Information System (INIS)

    Andrews, C.J.

    1995-01-01

    This paper discusses the evaluation of risk management strategies as a part of integrated resource planning. Value- and scope-related uncertainties can be addressed during the process of planning, but uncertainties in the operating environment require technical analysis within planning models. Flexibility and robustness are two key classes of strategies for managing the risk posed by these uncertainties. This paper reviews standard capacity expansion planning models and shows that they are poorly equipped to compare risk management strategies. Those that acknowledge uncertainty are better at evaluating robustness than flexibility, which implies a bias against flexible options. Techniques are available to overcome this bias.

  17. Synthetic Jet Actuator-Based Aircraft Tracking Using a Continuous Robust Nonlinear Control Strategy

    Directory of Open Access Journals (Sweden)

    N. Ramos-Pedroza

    2017-01-01

    Full Text Available A robust nonlinear control law that achieves trajectory tracking control for unmanned aerial vehicles (UAVs) equipped with synthetic jet actuators (SJAs) is presented in this paper. A key challenge in the control design is that the dynamic characteristics of SJAs are nonlinear and contain parametric uncertainty. The challenge resulting from the uncertain SJA actuator parameters is mitigated via innovative algebraic manipulation in the tracking error system derivation along with a robust nonlinear control law employing constant SJA parameter estimates. A key contribution of the paper is a rigorous analysis of the range of SJA actuator parameter uncertainty within which asymptotic UAV trajectory tracking can be achieved. A rigorous stability analysis is carried out to prove semiglobal asymptotic trajectory tracking. Detailed simulation results are included to illustrate the effectiveness of the proposed control law in the presence of wind gusts and varying levels of SJA actuator parameter uncertainty.

  18. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    Science.gov (United States)

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

    When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest, and sampling relevant objects at sites placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative to accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
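
    Systematic random sampling as described here is simple to generate in software: draw one random offset within the sampling interval, then step at fixed increments. A minimal sketch (the function names and micron units are illustrative):

```python
import random

def systematic_random_sites(extent_um, step_um):
    """1-D systematic random sampling: one random start within the first
    sampling interval, then equidistant steps across the structure."""
    x = random.uniform(0.0, step_um)
    positions = []
    while x < extent_um:
        positions.append(x)
        x += step_um
    return positions

def systematic_random_grid(width_um, height_um, step_um):
    """2-D grid of sampling sites from independent random x and y offsets."""
    xs = systematic_random_sites(width_um, step_um)
    ys = systematic_random_sites(height_um, step_um)
    return [(x, y) for x in xs for y in ys]

# Hypothetical 1000 x 800 um region sampled every 250 um.
print(systematic_random_grid(1000.0, 800.0, 250.0))
```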

  19. Statistical Methods and Sampling Design for Estimating Step Trends in Surface-Water Quality

    Science.gov (United States)

    Hirsch, Robert M.

    1988-01-01

    This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The J. L. Hodges-E. L. Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the length of the two sampling periods and balancing the number of data values between the two periods.
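
    As a concrete illustration of the estimator discussed above: the Hodges-Lehmann estimate of a step trend is the median of all pairwise differences between the two sampling periods, and a seasonal variant pools within-season differences. A sketch follows; the paper's exact seasonal construction may differ, and the data below are hypothetical.

```python
import numpy as np

def seasonal_hodges_lehmann(before, after):
    """Seasonal Hodges-Lehmann step-trend estimate: within each season form
    all pairwise differences (after - before), pool them, and take the
    median. This follows the general Hodges-Lehmann construction; the
    paper's seasonal variant may combine seasons differently."""
    diffs = []
    for season in before:
        b = np.asarray(before[season], dtype=float)
        a = np.asarray(after[season], dtype=float)
        diffs.extend((a[:, None] - b[None, :]).ravel())
    return float(np.median(diffs))

# Hypothetical total-phosphorus values (mg/L) for two seasons and two periods.
before = {"winter": [0.12, 0.15, 0.11], "summer": [0.30, 0.28, 0.33]}
after = {"winter": [0.09, 0.10, 0.08], "summer": [0.24, 0.22, 0.26]}
print(seasonal_hodges_lehmann(before, after))   # negative => downward step
```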

  20. Robust optical sensors for safety critical automotive applications

    Science.gov (United States)

    De Locht, Cliff; De Knibber, Sven; Maddalena, Sam

    2008-02-01

    Optical sensors for the automotive industry need to be robust, high performing and low cost. This paper focuses on the impact of automotive requirements on optical sensor design and packaging. Main strategies to lower optical sensor entry barriers in the automotive market include: sensor calibration and tuning performed by the sensor manufacturer, on-chip sensor test modes to guarantee functional integrity during operation, and suitable package technology. In conclusion, optical sensor applications are growing in the automotive sector. Optical sensor robustness has matured to the level of safety-critical applications: Electrical Power Assisted Steering (EPAS) and Drive-by-Wire are served by systems based on optical linear arrays, while Automated Cruise Control (ACC), Lane Change Assist and Driver Classification/Smart Airbag Deployment are served by systems based on camera imagers.

  1. Closed-Loop and Robust Control of Quantum Systems

    Directory of Open Access Journals (Sweden)

    Chunlin Chen

    2013-01-01

    Full Text Available For most practical quantum control systems, it is important and difficult to attain robustness and reliability due to unavoidable uncertainties in the system dynamics or models. Three kinds of typical approaches (e.g., closed-loop learning control, feedback control, and robust control) have been proved to be effective to solve these problems. This work presents a self-contained survey on the closed-loop and robust control of quantum systems, as well as a brief introduction to a selection of basic theories and methods in this research area, to provide interested readers with a general idea for further studies. In the area of closed-loop learning control of quantum systems, we survey and introduce such learning control methods as gradient-based methods, genetic algorithms (GA), and reinforcement learning (RL) methods from a unified point of view of exploring the quantum control landscapes. For the feedback control approach, the paper surveys three control strategies including Lyapunov control, measurement-based control, and coherent-feedback control. Then such topics in the field of quantum robust control as H∞ control, sliding mode control, quantum risk-sensitive control, and quantum ensemble control are reviewed. The paper concludes with a perspective of future research directions that are likely to attract more attention.

  2. Closed-loop and robust control of quantum systems.

    Science.gov (United States)

    Chen, Chunlin; Wang, Lin-Cheng; Wang, Yuanlong

    2013-01-01

    For most practical quantum control systems, it is important and difficult to attain robustness and reliability due to unavoidable uncertainties in the system dynamics or models. Three kinds of typical approaches (e.g., closed-loop learning control, feedback control, and robust control) have been proved to be effective to solve these problems. This work presents a self-contained survey on the closed-loop and robust control of quantum systems, as well as a brief introduction to a selection of basic theories and methods in this research area, to provide interested readers with a general idea for further studies. In the area of closed-loop learning control of quantum systems, we survey and introduce such learning control methods as gradient-based methods, genetic algorithms (GA), and reinforcement learning (RL) methods from a unified point of view of exploring the quantum control landscapes. For the feedback control approach, the paper surveys three control strategies including Lyapunov control, measurement-based control, and coherent-feedback control. Then such topics in the field of quantum robust control as H(∞) control, sliding mode control, quantum risk-sensitive control, and quantum ensemble control are reviewed. The paper concludes with a perspective of future research directions that are likely to attract more attention.

  3. The effectiveness of robust RMCD control chart as outliers’ detector

    Science.gov (United States)

    Darmanto; Astutik, Suci

    2017-12-01

    A well-known control chart to monitor a multivariate process is Hotelling's T², whose parameters are classically estimated; it is very sensitive to outliers and marred by their masking and swamping effects. To overcome this situation, robust estimators are strongly recommended. One such robust estimator is the re-weighted minimum covariance determinant (RMCD), which retains the robustness characteristics of the MCD. In this paper, effectiveness means the accuracy of the RMCD control chart in detecting outliers as real outliers; in other words, how effectively this control chart can identify and remove the masking and swamping effects of outliers. We assessed the effectiveness of the robust control chart through simulation, considering different scenarios: sample size n, proportion of outliers, and number of quality characteristics p. We found that in some scenarios, this RMCD robust control chart works effectively.
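
    A small sketch of the idea using scikit-learn, whose MinCovDet estimator includes the re-weighting step: robust squared distances replace the classical T² statistic, so planted outliers are flagged instead of masking one another. The chi-square control limit is a common asymptotic choice, not necessarily the one used in the paper.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
p = 3                                    # number of quality characteristics
X = rng.normal(size=(100, p))            # in-control process data
X[:5] += 6.0                             # contaminate 5 observations

# Re-weighted MCD gives robust location/scatter estimates that resist the
# masking and swamping effects of the contaminated points.
mcd = MinCovDet(random_state=0).fit(X)
t2 = mcd.mahalanobis(X)                  # squared robust distances

ucl = chi2.ppf(0.995, df=p)              # asymptotic chi-square control limit
print(np.where(t2 > ucl)[0])             # indices flagged as outliers
```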

  4. Peak alignment and robust principal component analysis of gas chromatograms of fatty acid methyl esters and volatiles

    DEFF Research Database (Denmark)

    Frosch, Stina; Jørgensen, Bo

    2007-01-01

    The ability of robust algorithms to deal with outlier problems, including both sample-wise and element-wise outliers, and the advantages and drawbacks of two robust PCA methods, robust PCA (ROBPCA) and robust singular value decomposition, when analysing these GC data were investigated. The results show that the usage of ROBPCA is advantageous, compared with traditional PCA, when analysing the entire profile of chromatographic data in cases of sub-optimally aligned data. It also demonstrates how choosing the most robust PCA (sample-wise or element-wise) depends on the type of outliers present in the data set.

  5. A Robust H∞ Controller for an UAV Flight Control System.

    Science.gov (United States)

    López, J; Dormido, R; Dormido, S; Gómez, J P

    2015-01-01

    The objective of this paper is the implementation and validation of a robust H∞ controller for a UAV to track all types of manoeuvres in the presence of a noisy environment. A robust inner-outer loop strategy is implemented. To design the robust controller in the inner loop, H∞ control methodology is used. The two controllers that form the outer loop are designed using the H∞ Loop Shaping technique. The reference vector used in the control architecture, formed by vertical velocity, true airspeed, and heading angle, suggests a nontraditional way to pilot the aircraft. The simulation results show that the proposed control scheme works well despite the presence of noise and uncertainties, so the control system satisfies the requirements.

  6. Limited sampling strategy models for estimating the AUC of gliclazide in Chinese healthy volunteers.

    Science.gov (United States)

    Huang, Ji-Han; Wang, Kun; Huang, Xiao-Hui; He, Ying-Chun; Li, Lu-Jin; Sheng, Yu-Cheng; Yang, Juan; Zheng, Qing-Shan

    2013-06-01

    The aim of this work is to reduce the cost of the sampling required to estimate the area under the gliclazide plasma concentration versus time curve within 60 h (AUC0-60t). Limited sampling strategy (LSS) models were established and validated by multiple regression using 4 or fewer gliclazide concentration values. Absolute prediction error (APE), root mean square error (RMSE) and visual prediction check were used as criteria. The results of Jack-Knife validation showed that 10 (25.0%) of the 40 LSS based on the regression analysis were not within an APE of 15% using one concentration-time point. 90.2, 91.5 and 92.4% of the 40 LSS models were capable of prediction using 2, 3 and 4 points, respectively. Limited sampling strategies were developed and validated for estimating AUC0-60t of gliclazide. This study indicates that the implementation of an 80 mg dosage regimen enabled accurate predictions of AUC0-60t by the LSS model. This study shows that 12, 6, 4, 2 h after administration are the key sampling times. The combination of (12, 2 h), (12, 8, 2 h) or (12, 8, 4, 2 h) can be chosen as sampling hours for predicting AUC0-60t in practical application according to requirement.
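
    The Jack-Knife (leave-one-out) validation used above is easy to reproduce for any candidate set of time points: refit the regression without each subject and score the held-out prediction by APE. A sketch with synthetic data (all values hypothetical):

```python
import numpy as np

def jackknife_ape(conc, auc_true):
    """Leave-one-subject-out validation of a limited sampling model:
    regress AUC on the concentrations at the chosen time points (columns of
    `conc`), predict each held-out subject, and return absolute prediction
    errors (APE, %)."""
    n = len(auc_true)
    apes = []
    for i in range(n):
        mask = np.arange(n) != i
        X = np.column_stack([conc[mask], np.ones(mask.sum())])
        coef, *_ = np.linalg.lstsq(X, auc_true[mask], rcond=None)
        pred = np.concatenate([conc[i], [1.0]]) @ coef
        apes.append(abs(pred - auc_true[i]) / auc_true[i] * 100.0)
    return np.array(apes)

# Hypothetical data: 20 subjects, concentrations at 12 h and 2 h post-dose.
rng = np.random.default_rng(1)
conc = rng.lognormal(mean=1.0, sigma=0.3, size=(20, 2))
auc = 8.0 * conc[:, 0] + 3.0 * conc[:, 1] + rng.normal(0.0, 0.5, 20) + 50.0
apes = jackknife_ape(conc, auc)
print((apes <= 15.0).mean())   # fraction of predictions within 15% APE
```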

  7. Sample preparation composite and replicate strategy case studies for assay of solid oral drug products.

    Science.gov (United States)

    Nickerson, Beverly; Harrington, Brent; Li, Fasheng; Guo, Michele Xuemei

    2017-11-30

    Drug product assay is one of several tests required for new drug products to ensure the quality of the product at release and throughout the life cycle of the product. Drug product assay testing is typically performed by preparing a composite sample of multiple dosage units to obtain an assay value representative of the batch. In some cases replicate composite samples may be prepared, and the reportable assay value is the average of all the replicates. In previously published work by Harrington et al. (2014) [5], a sample preparation composite and replicate strategy for assay was developed to provide a systematic approach that accounts for variability due to the analytical method and dosage form, with a standard error criterion for the potency assay based on compendial and regulatory requirements. In this work, this sample preparation composite and replicate strategy for assay is applied to several case studies to demonstrate the utility of this approach and its application at various stages of pharmaceutical drug product development. Copyright © 2017 Elsevier B.V. All rights reserved.
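
    The trade-off behind such a strategy can be made explicit with a simple variance model: unit-to-unit variability averages out within a composite, while method variability applies to each preparation. The sketch below uses a generic two-component model, not necessarily the exact one of Harrington et al.

```python
import math

def reportable_value_se(sd_dosage_unit: float, sd_method: float,
                        n_units_per_composite: int, k_replicates: int) -> float:
    """Standard error of the reportable assay value (the mean of k replicate
    composite preparations, each a composite of n dosage units), under a
    simple two-component variance model."""
    var = (sd_dosage_unit ** 2 / n_units_per_composite
           + sd_method ** 2) / k_replicates
    return math.sqrt(var)

# Hypothetical example: 3% unit-to-unit RSD, 1% method RSD; compare strategies.
for n, k in [(1, 1), (10, 1), (10, 3)]:
    print(f"n={n:2d}, k={k}: SE = {reportable_value_se(3.0, 1.0, n, k):.3f}%")
```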

  8. A Conceptual Methodology for Assessing Acquisition Requirements Robustness against Technology Uncertainties

    Science.gov (United States)

    Chou, Shuo-Ju

    2011-12-01

    In recent years the United States has shifted from a threat-based acquisition policy that developed systems for countering specific threats to a capabilities-based strategy that emphasizes the acquisition of systems that provide critical national defense capabilities. This shift in policy, in theory, allows for the creation of an "optimal force" that is robust against current and future threats regardless of the tactics and scenario involved. In broad terms, robustness can be defined as the insensitivity of an outcome to "noise" or non-controlled variables. Within this context, the outcome is the successful achievement of defense strategies and the noise variables are tactics and scenarios that will be associated with current and future enemies. Unfortunately, a lack of system capability, budget, and schedule robustness against technology performance and development uncertainties has led to major setbacks in recent acquisition programs. This lack of robustness stems from the fact that immature technologies have uncertainties in their expected performance, development cost, and schedule that cause variations in system effectiveness and program development budget and schedule requirements. Unfortunately, the Technology Readiness Assessment (TRA) process currently used by acquisition program managers and decision-makers to measure technology uncertainty during critical program decision junctions does not adequately capture the impact of technology performance and development uncertainty on program capability and development metrics. The Technology Readiness Level metric employed by the TRA to describe the uncertainties of program technology elements provides only a qualitative and non-descript estimate of those uncertainties. In order to assess program robustness, specifically requirements robustness, against technology performance and development uncertainties, a new process is needed. This process should provide acquisition program managers and decision

  9. Sampling designs for contaminant temporal trend analyses using sedentary species exemplified by the snails Bellamya aeruginosa and Viviparus viviparus.

    Science.gov (United States)

    Yin, Ge; Danielsson, Sara; Dahlberg, Anna-Karin; Zhou, Yihui; Qiu, Yanling; Nyberg, Elisabeth; Bignert, Anders

    2017-10-01

    Environmental monitoring typically assumes samples and sampling activities to be representative of the population being studied. Given a limited budget, an appropriate sampling strategy is essential to support detecting temporal trends of contaminants. In the present study, based on real chemical analysis data on polybrominated diphenyl ethers in snails collected from five subsites in Tianmu Lake, computer simulation is performed to evaluate three sampling strategies through the estimation of the sample size required to detect an annual change of 5% with a statistical power of 80% and 90% at a significance level of 5%. The results showed that sampling from an arbitrarily selected sampling spot is the worst strategy, requiring many more individual analyses to achieve the above-mentioned criteria compared with the other two approaches. A fixed sampling site requires the lowest sample size but may not be representative of the intended study object, e.g. a lake, and is also sensitive to changes at that particular sampling site. In contrast, sampling at multiple sites along the shore each year, and using pooled samples when the cost to collect and prepare individual specimens is much lower than the cost of chemical analysis, would be the most robust and cost-efficient strategy in the long run. Using statistical power as the criterion, the results demonstrated quantitatively the consequences of various sampling strategies, and could guide users with respect to required sample sizes depending on sampling design for long-term monitoring programs. Copyright © 2017 Elsevier Ltd. All rights reserved.
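
    The power analysis described above can be reproduced in outline: simulate lognormal concentrations declining 5% per year, fit a log-linear trend, and count how often the slope is detected. The sketch below is a simplified stand-in for the study's simulation design, with all parameter values hypothetical.

```python
import numpy as np

def power_for_sample_size(n_per_year, years=10, annual_change=0.05,
                          cv=0.4, z_crit=1.96, n_sim=2000, seed=0):
    """Estimate the power to detect a 5% annual decline by log-linear
    regression of concentration on year, with `n_per_year` samples per year.
    Lognormal noise with coefficient of variation `cv` is assumed."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.log(1.0 + cv ** 2))
    year = np.repeat(np.arange(years), n_per_year).astype(float)
    slope_true = np.log(1.0 - annual_change)
    hits = 0
    for _ in range(n_sim):
        logc = slope_true * year + rng.normal(0.0, sigma, year.size)
        X = np.column_stack([year, np.ones(year.size)])
        coef, *_ = np.linalg.lstsq(X, logc, rcond=None)
        resid = logc - X @ coef
        s2 = (resid ** 2).sum() / (year.size - 2)
        se = np.sqrt(s2 / ((year - year.mean()) ** 2).sum())
        if abs(coef[0] / se) > z_crit:   # normal approx. to the t threshold
            hits += 1
    return hits / n_sim

for n in (2, 4, 8):   # candidate numbers of samples per year
    print(n, power_for_sample_size(n))
```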

  10. CALiPER Report 20.3: Robustness of LED PAR38 Lamps

    Energy Technology Data Exchange (ETDEWEB)

    Poplawski, Michael E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Royer, Michael P. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Brown, Charles C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-01

    Three samples of each of the 40 Series 20 PAR38 lamps underwent multi-stress testing, whereby samples were subjected to increasing levels of simultaneous thermal, humidity, electrical, and vibrational stress. The results do not explicitly predict expected lifetime or reliability, but they can be compared with one another, as well as with benchmark conventional products, to assess the relative robustness of the product designs. On average, the 32 LED lamp models tested were substantially more robust than the conventional benchmark lamps. As with other performance attributes, however, there was great variability in the robustness and design maturity of the LED lamps. Several LED lamp samples failed within the first one or two levels of the ten-level stress plan, while all three samples of some lamp models completed all ten levels. One potential area of improvement is design maturity, given that more than 25% of the lamp models demonstrated a difference in failure level for the three samples that was greater than or equal to the maximum for the benchmarks. At the same time, the fact that nearly 75% of the lamp models exhibited better design maturity than the benchmarks is noteworthy, given the relative stage of development for the technology.

  11. Robust automated knowledge capture.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  12. A preliminary evaluation of comminution and sampling strategies for radioactive cemented waste

    Energy Technology Data Exchange (ETDEWEB)

    Bilodeau, M.; Lastra, R.; Bouzoubaa, N. [Natural Resources Canada, Ottawa, ON (Canada); Chapman, M. [Atomic Energy of Canada Limited, Chalk River, ON (Canada)

    2011-07-01

    Lixiviation of Hg, U and Cs contaminants and micro-encapsulation of cemented radioactive waste (CRW) are the two main components of a CRW stabilization research project carried out at Natural Resources Canada in collaboration with Atomic Energy of Canada Limited. Unmolding CRW from the storage pail, its fragmentation into a size range suitable for both processes and the collection of a representative sample are three essential steps for providing optimal material conditions for the two studies. Separation of wires, metals and plastic incorporated into CRW samples is also required. A comminution and sampling strategy was developed to address all those needs. Dust emissions and other health and safety concerns were given full consideration. Surrogate cemented waste (SCW) was initially used for this comminution study where Cu was used as a substitute for U and Hg. SCW was characterized as a friable material through the measurement of the Bond work index of 7.7 kWh/t. A mineralogical investigation and the calibration of material heterogeneity parameters of the sampling error model showed that Cu, Hg and Cs are finely disseminated in the cement matrix. A sampling strategy was built from the model and successfully validated with radioactive waste. A larger than expected sampling error was observed with U due to the formation of large U solid phases, which were not observed with the Cu tracer. SCW samples were crushed and ground under different rock fragmentation mechanisms: compression (jaw and cone crushers, rod mill), impact (ball mill), attrition, high voltage disintegration and high pressure water (and liquid nitrogen) jetting. Cryogenic grinding was also tested with the attrition mill. Crushing and grinding technologies were assessed against criteria that were gathered from literature surveys, experiential know-how and discussion with the client and field experts. Water jetting and its liquid nitrogen variant were retained for pail cutting and waste unmolding while
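
    The Bond work index quoted above (7.7 kWh/t) plugs directly into Bond's comminution law to estimate the grinding energy for a given size reduction. A sketch follows; the feed and product sizes are hypothetical.

```python
import math

def bond_energy_kwh_per_t(work_index: float, f80_um: float,
                          p80_um: float) -> float:
    """Bond's comminution law: specific energy (kWh/t) to reduce material
    from a feed 80%-passing size F80 to a product size P80 (both in microns),
    W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80))."""
    return 10.0 * work_index * (1.0 / math.sqrt(p80_um)
                                - 1.0 / math.sqrt(f80_um))

# Hypothetical size reduction for the surrogate cemented waste (Wi = 7.7):
print(bond_energy_kwh_per_t(7.7, 50_000.0, 2_000.0))  # ~1.4 kWh/t
```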

  13. A preliminary evaluation of comminution and sampling strategies for radioactive cemented waste

    International Nuclear Information System (INIS)

    Bilodeau, M.; Lastra, R.; Bouzoubaa, N.; Chapman, M.

    2011-01-01

    Lixiviation of Hg, U and Cs contaminants and micro-encapsulation of cemented radioactive waste (CRW) are the two main components of a CRW stabilization research project carried out at Natural Resources Canada in collaboration with Atomic Energy of Canada Limited. Unmolding CRW from the storage pail, its fragmentation into a size range suitable for both processes and the collection of a representative sample are three essential steps for providing optimal material conditions for the two studies. Separation of wires, metals and plastic incorporated into CRW samples is also required. A comminution and sampling strategy was developed to address all those needs. Dust emissions and other health and safety concerns were given full consideration. Surrogate cemented waste (SCW) was initially used for this comminution study where Cu was used as a substitute for U and Hg. SCW was characterized as a friable material through the measurement of the Bond work index of 7.7 kWh/t. A mineralogical investigation and the calibration of material heterogeneity parameters of the sampling error model showed that Cu, Hg and Cs are finely disseminated in the cement matrix. A sampling strategy was built from the model and successfully validated with radioactive waste. A larger than expected sampling error was observed with U due to the formation of large U solid phases, which were not observed with the Cu tracer. SCW samples were crushed and ground under different rock fragmentation mechanisms: compression (jaw and cone crushers, rod mill), impact (ball mill), attrition, high voltage disintegration and high pressure water (and liquid nitrogen) jetting. Cryogenic grinding was also tested with the attrition mill. Crushing and grinding technologies were assessed against criteria that were gathered from literature surveys, experiential know-how and discussion with the client and field experts. Water jetting and its liquid nitrogen variant were retained for pail cutting and waste unmolding while

  14. Robust Visual Tracking Using the Bidirectional Scale Estimation

    Directory of Open Access Journals (Sweden)

    An Zhiyong

    2017-01-01

    Full Text Available Object tracking with robust scale estimation is a challenging task in computer vision. This paper presents a novel tracking algorithm that learns the translation and scale filters with a complementary scheme. The translation filter is constructed using ridge regression and multidimensional features. A robust scale filter is constructed by bidirectional scale estimation, including the forward scale and backward scale. Firstly, we learn the scale filter using the forward tracking information. Then the forward scale and backward scale can be estimated using the respective scale filter. Secondly, a conservative strategy is adopted to compromise between the forward and backward scales. Finally, the scale filter is updated based on the final scale estimation. Updating the scale filter is effective since stable scale estimation improves its performance. To reveal the effectiveness of our tracker, experiments are performed on 32 sequences with significant scale variation and on the benchmark dataset with 50 challenging videos. Our results show that the proposed tracker outperforms several state-of-the-art trackers in terms of robustness and accuracy.

  15. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.

  16. Guaranteeing robustness of structural condition monitoring to environmental variability

    Science.gov (United States)

    Van Buren, Kendra; Reilly, Jack; Neal, Kyle; Edwards, Harry; Hemez, François

    2017-01-01

    Advances in sensor deployment and computational modeling have allowed significant strides to be recently made in the field of Structural Health Monitoring (SHM). One widely used SHM strategy is to perform a vibration analysis where a model of the structure's pristine (undamaged) condition is compared with vibration response data collected from the physical structure. Discrepancies between model predictions and monitoring data can be interpreted as structural damage. Unfortunately, multiple sources of uncertainty must also be considered in the analysis, including environmental variability, unknown model functional forms, and unknown values of model parameters. Not accounting for these sources of uncertainty can lead to false-positives or false-negatives in the structural condition assessment. To manage the uncertainty, we propose a robust SHM methodology that combines three technologies. A time series algorithm is trained using "baseline" data to predict the vibration response, compare predictions to actual measurements collected on a potentially damaged structure, and calculate a user-defined damage indicator. The second technology handles the uncertainty present in the problem. An analysis of robustness is performed to propagate this uncertainty through the time series algorithm and obtain the corresponding bounds of variation of the damage indicator. The uncertainty description and robustness analysis are both inspired by the theory of info-gap decision-making. Lastly, an appropriate "size" of the uncertainty space is determined through physical experiments performed in laboratory conditions. Our hypothesis is that examining how the uncertainty space changes throughout time might lead to superior diagnostics of structural damage as compared to only monitoring the damage indicator. This methodology is applied to a portal frame structure to assess if the strategy holds promise for robust SHM.

  17. Limited sampling strategy for determining metformin area under the plasma concentration-time curve

    DEFF Research Database (Denmark)

    Santoro, Ana Beatriz; Stage, Tore Bjerregaard; Struchiner, Claudio José

    2016-01-01

    AIM: The aim was to develop and validate limited sampling strategy (LSS) models to predict the area under the plasma concentration-time curve (AUC) for metformin. METHODS: Metformin plasma concentrations (n = 627) at 0-24 h after a single 500 mg dose were used for LSS development, based on all su...

  18. Robust species taxonomy assignment algorithm for 16S rRNA NGS reads: application to oral carcinoma samples

    Directory of Open Access Journals (Sweden)

    Nezar Noor Al-Hebshi

    2015-09-01

    Full Text Available Background: Usefulness of next-generation sequencing (NGS) in assessing bacteria associated with oral squamous cell carcinoma (OSCC) has been undermined by inability to classify reads to the species level. Objective: The purpose of this study was to develop a robust algorithm for species-level classification of NGS reads from oral samples and to pilot test it for profiling bacteria within OSCC tissues. Methods: Bacterial 16S V1-V3 libraries were prepared from three OSCC DNA samples and sequenced using 454's FLX chemistry. High-quality, well-aligned, and non-chimeric reads ≥350 bp were classified using a novel, multi-stage algorithm that involves matching reads to reference sequences in revised versions of the Human Oral Microbiome Database (HOMD), HOMD extended (HOMDEXT), and Greengene Gold (GGG) at alignment coverage and percentage identity ≥98%, followed by assignment to species level based on top hit reference sequences. Priority was given to hits in HOMD, then HOMDEXT and finally GGG. Unmatched reads were subject to operational taxonomic unit analysis. Results: Nearly 92.8% of the reads were matched to updated-HOMD 13.2, 1.83% to trusted-HOMDEXT, and 1.36% to modified-GGG. Of all matched reads, 99.6% were classified to species level. A total of 228 species-level taxa were identified, representing 11 phyla; the most abundant were Proteobacteria, Bacteroidetes, Firmicutes, Fusobacteria, and Actinobacteria. Thirty-five species-level taxa were detected in all samples. On average, Prevotella oris, Neisseria flava, Neisseria flavescens/subflava, Fusobacterium nucleatum ss polymorphum, Aggregatibacter segnis, Streptococcus mitis, and Fusobacterium periodontium were the most abundant. Bacteroides fragilis, a species rarely isolated from the oral cavity, was detected in two samples. Conclusion: This multi-stage algorithm maximizes the fraction of reads classified to the species level while ensuring reliable classification by giving priority to the
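
    The priority logic of the multi-stage matching step can be summarized in a few lines: accept hits that meet the coverage and identity thresholds, walking through the databases in priority order. A simplified sketch (the data structures are ours; the real pipeline adds quality filtering and OTU analysis for unmatched reads):

```python
def assign_species(read_hits):
    """Assign a species by walking the reference databases in priority
    order (HOMD, then HOMDEXT, then GGG), keeping only hits with >=98%
    identity and >=98% alignment coverage, and taking the species of the
    top hit in the first database with an acceptable match. `read_hits`
    maps database name -> list of (species, identity, coverage) tuples."""
    for db in ("HOMD", "HOMDEXT", "GGG"):
        ok = [(species, identity) for species, identity, coverage
              in read_hits.get(db, [])
              if identity >= 98.0 and coverage >= 98.0]
        if ok:
            return max(ok, key=lambda hit: hit[1])[0]  # top-hit species
    return None  # unmatched -> operational taxonomic unit analysis

hits = {"HOMD": [("Streptococcus mitis", 99.1, 100.0)],
        "GGG": [("Streptococcus oralis", 99.4, 100.0)]}
print(assign_species(hits))  # HOMD takes priority despite the lower identity
```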

  19. Observer-Based Robust Control for Hydraulic Velocity Control System

    Directory of Open Access Journals (Sweden)

    Wei Shen

    2013-01-01

    Full Text Available This paper investigates the problems of robust stabilization and robust control for the secondary component speed control system with parameter uncertainty and load disturbance. The aim is to enhance the control performance of a hydraulic system based on the Common Pressure Rail (CPR). Firstly, a mathematical model is presented to describe the hydraulic control system. Then a novel observer is proposed, and an observer-based control strategy is designed such that the closed-loop system is asymptotically stable and satisfies the disturbance attenuation level. The conditions for the existence of the developed controller can be efficiently solved by using MATLAB software. Finally, simulation results are provided to demonstrate the effectiveness of the proposed method.

  20. Towards Robust Predictive Fault–Tolerant Control for a Battery Assembly System

    Directory of Open Access Journals (Sweden)

    Seybold Lothar

    2015-12-01

    Full Text Available The paper deals with the modeling and fault-tolerant control of a real battery assembly system which is under implementation at the RAFI GmbH company (one of the leading electronic manufacturing service providers in Germany). To model and control the battery assembly system, a unified max-plus algebra and model predictive control framework is introduced. Subsequently, the control strategy is enhanced with fault-tolerance features that increase the overall performance of the production system being considered. In particular, it enables tolerating (up to some degree) mobile robot, processing and transportation faults. The paper also discusses robustness issues, which are inevitable in real production systems. As a result, a novel robust predictive fault-tolerant strategy is developed and applied to the battery assembly system. The last part of the paper shows illustrative examples, which clearly exhibit the performance of the proposed approach.
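
    In the max-plus algebra used above, addition is replaced by max and multiplication by +, so the event times of a production line propagate through a matrix-vector product. A toy illustration follows; the two-station timings are invented, and the paper's model adds model predictive control and fault-tolerance on top of this algebra.

```python
import numpy as np

NEG_INF = -np.inf   # the max-plus "zero" element

def maxplus_matvec(A, x):
    """Max-plus product (A (*) x)_i = max_j (A_ij + x_j): with durations in
    A and previous event times in x, the product gives the earliest next
    event times."""
    return np.max(A + x[None, :], axis=1)

# Toy two-station line: station 1 processes for 3 units; station 2 waits for
# station 1's part (3 processing + 2 transport + 4 processing) or for its
# own previous cycle (4 processing), whichever is later.
A = np.array([[3.0, NEG_INF],
              [9.0, 4.0]])
x = np.array([0.0, 0.0])
print(maxplus_matvec(A, x))   # next completion times: [3. 9.]
```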

  1. Human pluripotent stem cell-derived products: advances towards robust, scalable and cost-effective manufacturing strategies.

    Science.gov (United States)

    Jenkins, Michael J; Farid, Suzanne S

    2015-01-01

    The ability to develop cost-effective, scalable and robust bioprocesses for human pluripotent stem cells (hPSCs) will be key to their commercial success as cell therapies and tools for use in drug screening and disease modelling studies. This review outlines key process economic drivers for hPSCs and progress made on improving the economic and operational feasibility of hPSC bioprocesses. Factors influencing key cost metrics, namely capital investment and cost of goods, for hPSCs are discussed. Step efficiencies particularly for differentiation, media requirements and technology choice are amongst the key process economic drivers identified for hPSCs. Progress made to address these cost drivers in hPSC bioprocessing strategies is discussed. These include improving expansion and differentiation yields in planar and bioreactor technologies, the development of xeno-free media and microcarrier coatings, identification of optimal bioprocess operating conditions to control cell fate and the development of directed differentiation protocols that reduce reliance on expensive morphogens such as growth factors and small molecules. These approaches offer methods to further optimise hPSC bioprocessing in terms of its commercial feasibility. © 2014 The Authors. Biotechnology Journal published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

  2. Population pharmacokinetic analysis of clopidogrel in healthy Jordanian subjects with emphasis on optimal sampling strategy.

    Science.gov (United States)

    Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A

    2013-05-01

    Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or, to a lesser extent, an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM® to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse phase high performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. A two-compartment model with EHC provided the best fit and reduced bias in C(max) (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model; AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.

  3. Methodology Series Module 5: Sampling Strategies.

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling - based on the researcher's choice and a population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
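
    The distinction drawn here is easy to make concrete in code: probability sampling uses an explicit chance mechanism, while convenience sampling does not. A toy sketch (the sampling frame and strata are invented):

```python
import random

population = list(range(1, 1001))        # hypothetical sampling frame of IDs

# Probability sampling: a simple random sample of 50 subjects.
srs = random.sample(population, 50)

# Probability sampling: a stratified random sample (strata assumed known).
strata = {"urban": population[:600], "rural": population[600:]}
stratified = {name: random.sample(members, 25)
              for name, members in strata.items()}

# Non-probability (convenience) sampling has no chance mechanism, e.g. the
# first 50 accessible subjects, and must not be reported as a 'random sample'.
convenience = population[:50]

print(len(srs), {k: len(v) for k, v in stratified.items()}, len(convenience))
```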

  4. Methodology series module 5: Sampling strategies

    Directory of Open Access Journals (Sweden)

    Maninder Singh Setia

    2016-01-01

    Full Text Available Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling – based on the researcher's choice and a population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of these results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  5. [Strategies for biobank networks. Classification of different approaches for locating samples and an outlook on the future within the BBMRI-ERIC].

    Science.gov (United States)

    Lablans, Martin; Kadioglu, Dennis; Mate, Sebastian; Leb, Ines; Prokosch, Hans-Ulrich; Ückert, Frank

    2016-03-01

    Medical research projects often require more biological material than can be supplied by a single biobank. For this reason, a multitude of strategies support locating potential research partners with matching material without requiring centralization of sample storage. Classification of different strategies for biobank networks, in particular for locating suitable samples. Description of an IT infrastructure combining these strategies. Existing strategies can be classified according to three criteria: (a) granularity of sample data: coarse bank-level data (catalogue) vs. fine-granular sample-level data; (b) location of sample data: central (central search service) vs. decentral storage (federated search services); and (c) level of automation: automatic (query-based, federated search service) vs. semi-automatic (inquiry-based, decentral search). All of the mentioned search services require data integration. Metadata help to overcome semantic heterogeneity. The "Common Service IT" in BBMRI-ERIC (Biobanking and BioMolecular Resources Research Infrastructure) unites a catalogue, the decentral search and metadata in an integrated platform. As a result, researchers receive versatile tools to search for suitable biomaterial, while biobanks retain a high degree of data sovereignty. Despite their differences, the presented strategies for biobank networks do not rule each other out but can complement and even benefit from each other.

  6. Birds achieve high robustness in uneven terrain through active control of landing conditions.

    Science.gov (United States)

    Birn-Jeffery, Aleksandra V; Daley, Monica A

    2012-06-15

    We understand little about how animals adjust locomotor behaviour to negotiate uneven terrain. The mechanical demands and constraints of such behaviours likely differ from uniform terrain locomotion. Here we investigated how common pheasants negotiate visible obstacles with heights from 10 to 50% of leg length. Our goal was to determine the neuro-mechanical strategies used to achieve robust stability, and address whether strategies vary with obstacle height. We found that control of landing conditions was crucial for minimising fluctuations in stance leg loading and work in uneven terrain. Variation in touchdown leg angle (θ(TD)) was correlated with the orientation of ground force during stance, and the angle between the leg and body velocity vector at touchdown (β(TD)) was correlated with net limb work. Pheasants actively targeted obstacles to control body velocity and leg posture at touchdown to achieve nearly steady dynamics on the obstacle step. In the approach step to an obstacle, the birds produced net positive limb work to launch themselves upward. On the obstacle, body dynamics were similar to uniform terrain. Pheasants also increased swing leg retraction velocity during obstacle negotiation, which we suggest is an active strategy to minimise fluctuations in peak force and leg posture in uneven terrain. Thus, pheasants appear to achieve robustly stable locomotion through a combination of path planning using visual feedback and active adjustment of leg swing dynamics to control landing conditions. We suggest that strategies for robust stability are context specific, depending on the quality of sensory feedback available, especially visual input.

  7. Methodology Series Module 5: Sampling Strategies

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘Sampling Method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling – based on the researcher's choice and a population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of these results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438

  8. The added value of system robustness analysis for flood risk management

    NARCIS (Netherlands)

    Mens, M.J.P.; Klijn, F.

    2014-01-01

    Decision makers in fluvial flood risk management increasingly acknowledge that they have to prepare for extreme events. Flood risk is the most common basis on which to compare flood risk-reducing strategies. To take uncertainties into account, the criteria of robustness and flexibility are advocated.

  9. Comparison of sampling strategies for object-based classification of urban vegetation from Very High Resolution satellite images

    Science.gov (United States)

    Rougier, Simon; Puissant, Anne; Stumpf, André; Lachiche, Nicolas

    2016-09-01

    Vegetation monitoring is becoming a major issue in the urban environment due to the services vegetation procures, and it necessitates accurate and up-to-date mapping. Very High Resolution satellite images enable a detailed mapping of urban tree and herbaceous vegetation. Several supervised classifications with statistical learning techniques have provided good results for the detection of urban vegetation, but they necessitate a large amount of training data. In this context, this study investigates the performance of different sampling strategies in order to reduce the number of examples needed. Two window-based active learning algorithms from the state of the art are compared to a classical stratified random sampling, and a third strategy combining active learning and stratification is proposed. The efficiency of these strategies is evaluated on two medium-sized French cities, Strasbourg and Rennes, associated with different datasets. Results demonstrate that classical stratified random sampling can in some cases be just as effective as active learning methods, and that it should be used more frequently to evaluate new active learning methods. Moreover, the active learning strategies proposed in this work reduce the computational runtime by selecting multiple windows at each iteration without increasing the number of windows needed.
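
    Uncertainty-based active learning of the kind compared above follows a simple loop: train on the current labels, score the unlabeled pool by prediction confidence, and request labels for the least confident items. A generic sketch follows; a random forest stands in for the paper's classifiers, the data are synthetic, and the window-based selection details are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def uncertainty_sampling(X_pool, X_train, y_train, batch=10):
    """One iteration of pool-based active learning by least-confidence
    sampling: train on the current labels, then pick the `batch` pool items
    with the lowest maximum class probability. A stratified variant, as
    proposed in the paper, would additionally balance the batch across
    strata."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    proba = clf.predict_proba(X_pool)
    uncertainty = 1.0 - proba.max(axis=1)
    return np.argsort(uncertainty)[-batch:]   # indices to send for labeling

# Synthetic stand-ins for window features and a small labeled seed set.
rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 8))
X_seed = rng.normal(size=(40, 8))
y_seed = rng.integers(0, 2, 40)
print(uncertainty_sampling(X_pool, X_seed, y_seed))
```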

  10. Robust hopping based on virtual pendulum posture control

    International Nuclear Information System (INIS)

    Sharbafi, Maziar A; Ahmadabadi, Majid Nili; Yazdanpanah, Mohammad J; Maufroy, Christophe; Seyfarth, Andre

    2013-01-01

    A new control approach to achieve robust hopping against perturbations in the sagittal plane is presented in this paper. In perturbed hopping, vertical body alignment plays a significant role in stability. Our approach is based on the recently proposed virtual pendulum concept, which stems from experimental findings in human and animal locomotion. In this concept, the ground reaction forces are directed toward a virtual support point, named the virtual pivot point (VPP), during motion. This concept is employed in designing the controller to balance the trunk during the stance phase. New strategies for leg angle and length adjustment, besides the virtual pendulum posture control, are proposed as a unified controller. The method is investigated by applying it to an extension of the spring-loaded inverted pendulum (SLIP) model: trunk, leg mass and damping are added to the SLIP model in order to make it more realistic. Stability is analysed by Poincaré map analysis. With a fixed VPP position, stability, disturbance rejection and moderate robustness are achieved, but with a low convergence speed. To improve the performance and attain higher robustness, an event-based control of the VPP position is introduced, using feedback of the system states at apexes. A discrete linear quadratic regulator is used to design the feedback controller. Considerable enhancements with respect to stability, convergence speed and robustness against perturbations and parameter changes are achieved. (paper)
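
    The Poincaré-map stability test mentioned above reduces, in the simplest setting, to checking the slope of a return map at its fixed point. The toy map below is invented for the sketch and is not the SLIP model:

      import numpy as np

      def apex_return_map(h, k=0.7, h_star=1.0):
          # Toy apex-height map: each hop pulls the apex toward h_star.
          return h_star + k * (h - h_star)

      h = 1.3
      for _ in range(100):       # iterate the map toward its fixed point
          h = apex_return_map(h)

      eps = 1e-6                 # numerical slope of the map at the fixed point
      slope = (apex_return_map(h + eps) - apex_return_map(h - eps)) / (2 * eps)
      verdict = "stable" if abs(slope) < 1 else "unstable"
      print(f"fixed point {h:.4f}, slope {slope:.2f} -> {verdict}")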

  11. Robustness for slope stability modelling under deep uncertainty

    Science.gov (United States)

    Almeida, Susana; Holcombe, Liz; Pianosi, Francesca; Wagener, Thorsten

    2015-04-01

    Landslides can have large negative societal and economic impacts, such as loss of life and damage to infrastructure. However, the ability of slope stability assessment to guide management is limited by high levels of uncertainty in model predictions. Many of these uncertainties cannot be easily quantified, such as those linked to climate change and other future socio-economic conditions, restricting the usefulness of traditional decision analysis tools. Deep uncertainty can be managed more effectively by developing robust, but not necessarily optimal, policies that are expected to perform adequately under a wide range of future conditions. Robust strategies are particularly valuable when the consequences of taking a wrong decision are high, as is often the case when managing natural hazard risks such as landslides. In our work a physically based numerical model of hydrologically induced slope instability (the Combined Hydrology and Stability Model - CHASM) is applied together with robust decision making to evaluate the most important uncertainties (storm events, groundwater conditions, surface cover, slope geometry, material strata and geotechnical properties) affecting slope stability. Specifically, impacts of climate change on long-term slope stability are incorporated, accounting for the deep uncertainty in future climate projections. Our findings highlight the potential of robust decision making to aid decision support for landslide hazard reduction and risk management under conditions of deep uncertainty.
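
    A minimal sketch of the underlying robustness logic, with invented management options and outcome distributions standing in for CHASM simulations:

      import numpy as np

      rng = np.random.default_rng(1)
      n_scenarios = 500   # sampled storm/groundwater/cover futures (synthetic)

      # Hypothetical factor-of-safety outcomes for three options per future.
      outcomes = {
          "do_nothing": rng.normal(1.05, 0.20, n_scenarios),
          "drainage":   rng.normal(1.25, 0.15, n_scenarios),
          "regrade":    rng.normal(1.35, 0.30, n_scenarios),
      }

      threshold = 1.0   # slope is stable when the factor of safety exceeds 1
      for name, fos in outcomes.items():
          robustness = np.mean(fos > threshold)   # fraction of futures that satisfice
          print(f"{name:10s} worst case {fos.min():.2f}  robustness {robustness:.2%}")

    A robust option is one that performs adequately across many futures, not one that is optimal under a single best-guess future.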

  12. A comparison of sample preparation strategies for biological tissues and subsequent trace element analysis using LA-ICP-MS.

    Science.gov (United States)

    Bonta, Maximilian; Török, Szilvia; Hegedus, Balazs; Döme, Balazs; Limbeck, Andreas

    2017-03-01

    Laser ablation-inductively coupled plasma-mass spectrometry (LA-ICP-MS) is one of the most commonly applied methods for lateral trace element distribution analysis in medical studies. Many improvements of the technique regarding quantification and achievable lateral resolution have been achieved in recent years. Nevertheless, sample preparation is also of major importance, and the optimal sample preparation strategy has still not been defined. While conventional histology offers a number of sample pre-treatment strategies, little is known about the effect of these approaches on the lateral distributions of elements and/or their quantities in tissues. The technique of formalin fixation and paraffin embedding (FFPE) has emerged as the gold standard in tissue preparation. However, its potential use for elemental distribution studies is questionable due to the large number of sample preparation steps. In this work, LA-ICP-MS was used to examine the applicability of the FFPE sample preparation approach for elemental distribution studies. Qualitative elemental distributions as well as quantitative concentrations in cryo-cut tissues and in FFPE samples were compared. Results showed that some metals (especially Na and K) are severely affected by the FFPE process, whereas others (e.g., Mn, Ni) are less influenced. Based on these results, a general recommendation can be given: FFPE samples are completely unsuitable for the analysis of alkali metals. When analyzing transition metals, FFPE samples can give results comparable to snap-frozen tissues. Graphical abstract Sample preparation strategies for biological tissues are compared with regard to the elemental distributions and average trace element concentrations.

  13. Optimization of Region-of-Interest Sampling Strategies for Hepatic MRI Proton Density Fat Fraction Quantification

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.

    2017-01-01

    BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland–Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland–Altman analyses. RESULTS The study population’s mean whole-liver PDFF was 10.1±8.9% (range: 1.1–44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995. 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%. All two-ROI and three-ROI strategies had ICC >0.995, and 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of

  14. Optimization of region-of-interest sampling strategies for hepatic MRI proton density fat fraction quantification.

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B

    2018-04-01

    Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995. 115 of 126 four-ROI strategies (91%) had limits of agreement (LOA) <1.5%. All two-ROI and three-ROI strategies had ICC >0.995, and 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance
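
    The subset-evaluation step lends itself to a short sketch. The code below is not the study's: the PDFF values are synthetic, and only the Bland-Altman limits of agreement (not the ICC) are scored:

      import itertools
      import numpy as np

      rng = np.random.default_rng(0)
      pdff = rng.gamma(2.0, 5.0, (391, 9))   # stand-in segmental PDFF values (%)

      nine_roi = pdff.mean(axis=1)
      good = []
      for k in (2, 3, 4):
          for subset in itertools.combinations(range(9), k):
              diff = pdff[:, list(subset)].mean(axis=1) - nine_roi
              loa_half_width = 1.96 * diff.std(ddof=1)   # Bland-Altman 95% limits
              if loa_half_width < 1.5:                   # within 1.5 PDFF points
                  good.append((k, subset))

      print(len(good), "subsets agree with the nine-ROI mean within +/-1.5%")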

  15. Analysis and design of robust decentralized controllers for nonlinear systems

    Energy Technology Data Exchange (ETDEWEB)

    Schoenwald, D.A.

    1993-07-01

    Decentralized control strategies for nonlinear systems are achieved via feedback linearization techniques. New results on optimization and parameter robustness of nonlinear systems are also developed. In addition, parametric uncertainty in large-scale systems is handled by sensitivity analysis and optimal control methods in a completely decentralized framework. This idea is applied to alleviate uncertainty in friction parameters for the gimbal joints on Space Station Freedom. As an example of decentralized nonlinear control, singular perturbation methods and distributed vibration damping are merged into a control strategy for a two-link flexible manipulator.

  16. Robust H∞ output-feedback control for path following of autonomous ground vehicles

    Science.gov (United States)

    Hu, Chuan; Jing, Hui; Wang, Rongrong; Yan, Fengjun; Chadli, Mohammed

    2016-03-01

    This paper presents a robust H∞ output-feedback control strategy for the path following of autonomous ground vehicles (AGVs). Considering that the vehicle lateral velocity is usually hard to measure with low-cost sensors, a robust H∞ static output-feedback controller based on a mixed genetic algorithm (GA)/linear matrix inequality (LMI) approach is proposed to realize path following without information about the lateral velocity. The proposed controller is robust to parametric uncertainties and external disturbances, where the uncertain parameters include the tire cornering stiffness, vehicle longitudinal velocity, yaw rate and road curvature. Simulation results based on a CarSim-Simulink joint platform using a high-fidelity full-car model have verified the effectiveness of the proposed control approach.

  17. Analytical sample preparation strategies for the determination of antimalarial drugs in human whole blood, plasma and urine

    DEFF Research Database (Denmark)

    Casas, Monica Escolà; Hansen, Martin; Krogh, Kristine A

    2014-01-01

    the available sample preparation strategies combined with liquid chromatographic (LC) analysis to determine antimalarials in whole blood, plasma and urine published over the last decade. Sample preparation can be done by protein precipitation, solid-phase extraction, liquid-liquid extraction or dilution. After...

  18. A robust regression based on weighted LSSVM and penalized trimmed squares

    International Nuclear Information System (INIS)

    Liu, Jianyong; Wang, Yong; Fu, Chengqun; Guo, Jie; Yu, Qin

    2016-01-01

    Least squares support vector machine (LS-SVM) for nonlinear regression is sensitive to outliers, a well-known problem in machine learning. Weighted LS-SVM (WLS-SVM) overcomes this drawback by adding a weight to each training sample. However, as the number of outliers increases, the accuracy of WLS-SVM may decrease. In order to improve the robustness of WLS-SVM, a new robust regression method based on WLS-SVM and penalized trimmed squares (WLSSVM–PTS) is proposed. The algorithm comprises three main stages. First, initial parameters are obtained by least trimmed squares. Then, the significant outliers are identified and eliminated by the Fast-PTS algorithm. Finally, the remaining samples, which contain few outliers, are fitted by WLS-SVM. Statistical tests of experimental results on numerical and real-world datasets show that the proposed WLSSVM–PTS is significantly more robust than LS-SVM, WLS-SVM and LSSVM–LTS.
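
    The trimming idea can be shown compactly. The sketch below substitutes an ordinary linear least-squares fit for the LS-SVM and refits on the best-fitting subset in C-step fashion; it illustrates trimmed fitting, not the paper's WLSSVM–PTS:

      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.uniform(0, 10, 100)
      y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 100)
      y[:10] += 25.0                      # plant 10 gross outliers

      X = np.column_stack([x, np.ones_like(x)])
      h = 80                              # keep the 80 smallest squared residuals
      keep = np.arange(len(y))
      for _ in range(20):                 # C-step style refitting
          beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
          resid2 = (y - X @ beta) ** 2
          new_keep = np.argsort(resid2)[:h]
          if set(new_keep) == set(keep):
              break
          keep = new_keep

      print("robust slope, intercept:", beta)   # close to (2, 1) despite outliers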

  19. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    International Nuclear Information System (INIS)

    McGowan, S E; Albertini, F; Lomax, A J; Thomas, S J

    2015-01-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to construct protocols aiding plan assessment. Additionally, an example of how to use the robustness database clinically is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD, it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed the identification of a specific patient who may have benefited from a more individualised treatment. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases the use of different beam start conditions may improve plan robustness to set-up and range uncertainties. (paper)

  20. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    Science.gov (United States)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was determined by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to construct protocols aiding plan assessment. Additionally, an example of how to use the robustness database clinically is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD, it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed the identification of a specific patient who may have benefited from a more individualised treatment. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases the use of different beam start conditions may improve plan robustness to set-up and range uncertainties.
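
    A minimal sketch of the error-bar dose distribution concept, with synthetic voxel doses and Gaussian perturbations standing in for recomputed error-scenario plans:

      import numpy as np

      rng = np.random.default_rng(0)
      nominal = rng.uniform(0, 60, 10_000)        # nominal voxel doses (Gy)

      # Doses recomputed under 12 range/set-up error scenarios (stand-ins).
      scenarios = nominal + rng.normal(0, 1.5, (12, nominal.size))

      ebdd = scenarios.max(axis=0) - scenarios.min(axis=0)   # per-voxel error bar
      print("median voxel error bar: %.2f Gy" % np.median(ebdd))
      print("voxels with error bar > 3 Gy: %.1f%%" % (100 * np.mean(ebdd > 3)))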

  1. Robust Timing Synchronization in Aeronautical Mobile Communication Systems

    Science.gov (United States)

    Xiong, Fu-Qin; Pinchak, Stanley

    2004-01-01

    This work details a study of robust synchronization schemes suitable for satellite-to-mobile aeronautical applications. A new scheme, the Modified Sliding Window Synchronizer (MSWS), is devised and compared with existing schemes, including the traditional Early-Late Gate Synchronizer (ELGS), the Gardner Zero-Crossing Detector (GZCD), and the Sliding Window Synchronizer (SWS). Performance of the synchronization schemes is evaluated by a set of metrics that indicate performance in digital communications systems. The metrics are convergence time, mean-square phase error (or root-mean-square phase error), lowest SNR for locking, initial frequency offset performance, midstream frequency offset performance, and system complexity. The performance of the synchronizers is evaluated by means of Matlab simulation models. A simulation platform is devised to model the satellite-to-mobile aeronautical channel, consisting of a Quadrature Phase Shift Keying modulator, an additive white Gaussian noise channel, and a demodulator front end. Simulation results show that the MSWS provides the most robust performance at the cost of system complexity. The GZCD provides a good tradeoff between robustness and system complexity for communication systems that require high symbol rates or low overall system costs. The ELGS has a high system complexity despite its average performance. Overall, the SWS, originally designed for multi-carrier systems, performs very poorly in single-carrier communications systems. Table 5.1 in Section 5 of the report provides a ranking of the synchronization schemes in terms of the metrics set forth in Section 4.1, and details of the comparison are given in Section 5. Based on those results, the most robust synchronization scheme examined in this work is the high-sample-rate Modified Sliding Window Synchronizer, with its low-sample-rate cousin a close second. The tradeoff between complexity and lowest mean-square phase error determines

  2. Current advances and strategies towards fully automated sample preparation for regulated LC-MS/MS bioanalysis.

    Science.gov (United States)

    Zheng, Naiyu; Jiang, Hao; Zeng, Jianing

    2014-09-01

    Robotic liquid handlers (RLHs) have been widely used in automated sample preparation for liquid chromatography-tandem mass spectrometry (LC-MS/MS) bioanalysis. Automated sample preparation for regulated bioanalysis offers significantly higher assay efficiency, better data quality and potential bioanalytical cost-savings. For RLHs that are used for regulated bioanalysis, there are additional requirements, including 21 CFR Part 11 compliance, software validation, system qualification, calibration verification and proper maintenance. This article reviews recent advances in automated sample preparation for regulated bioanalysis in the last 5 years. Specifically, it covers the following aspects: regulated bioanalysis requirements, recent advances in automation hardware and software development, sample extraction workflow simplification, strategies towards fully automated sample extraction, and best practices in automated sample preparation for regulated bioanalysis.

  3. Breeding Strategy To Generate Robust Yeast Starter Cultures for Cocoa Pulp Fermentations

    Science.gov (United States)

    Meersman, Esther; Steensels, Jan; Paulus, Tinneke; Struyf, Nore; Saels, Veerle; Mathawan, Melissa; Koffi, Jean; Vrancken, Gino

    2015-01-01

    Cocoa pulp fermentation is a spontaneous process during which the natural microbiota present at cocoa farms is allowed to ferment the pulp surrounding cocoa beans. Because such spontaneous fermentations are inconsistent and contribute to product variability, there is growing interest in a microbial starter culture that could be used to inoculate cocoa pulp fermentations. Previous studies have revealed that many different fungi are recovered from different batches of spontaneous cocoa pulp fermentations, whereas the variation in the prokaryotic microbiome is much more limited. In this study, therefore, we aimed to develop a suitable yeast starter culture that is able to outcompete wild contaminants and consistently produce high-quality chocolate. Starting from specifically selected Saccharomyces cerevisiae strains, we developed robust hybrids with characteristics that allow them to efficiently ferment cocoa pulp, including improved temperature tolerance and fermentation capacity. We conducted several laboratory and field trials to show that these new hybrids often outperform their parental strains and are able to dominate spontaneous pilot scale fermentations, which results in much more consistent microbial profiles. Moreover, analysis of the resulting chocolate showed that some of the cocoa batches that were fermented with specific starter cultures yielded superior chocolate. Taken together, these results describe the development of robust yeast starter cultures for cocoa pulp fermentations that can contribute to improving the consistency and quality of commercial chocolate production. PMID:26150457

  4. Distributionally robust hydro-thermal-wind economic dispatch

    International Nuclear Information System (INIS)

    Chen, Yue; Wei, Wei; Liu, Feng; Mei, Shengwei

    2016-01-01

    Highlights: • A two-stage distributionally robust hydro-thermal-wind model is proposed. • A semi-definite programming equivalent and its algorithm are developed. • Cases that demonstrate the effectiveness of the proposed model are included. - Abstract: With the penetration of wind energy increasing, uncertainty has become a major challenge in power system dispatch. Hydro power can change output rapidly and is regarded as a promising complementary energy resource to mitigate wind power fluctuation. Joint scheduling of hydro, thermal, and wind energy is therefore attracting more and more attention. This paper proposes a distributionally robust hydro-thermal-wind economic dispatch (DR-HTW-ED) method to enhance the flexibility and reliability of power system operation. In contrast to traditional stochastic optimization (SO) and adjustable robust optimization (ARO), the distributionally robust optimization (DRO) method describes the uncertain wind power output by all possible probability distribution functions (PDFs) with the same mean and variance recovered from the forecast data, and optimizes the expected operation cost under the worst-case distribution. Traditional DRO optimizes over the random parameter's entire support, which sometimes contradicts the actual situation. In this paper, we restrict the wind power uncertainty to a bounded set and derive an equivalent semi-definite program (SDP) for the DR-HTW-ED using the S-lemma. A delayed constraint generation algorithm is suggested to solve it in a tractable manner. The proposed DR-HTW-ED is compared with the existing ARO-based hydro-thermal-wind economic dispatch (AR-HTW-ED); their respective features are shown from the perspectives of computational efficiency and conservativeness of dispatch strategies.
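
    One way to see the moment-based ambiguity set is that, in one dimension, worst-case distributions with a fixed mean and variance can be sought among two-point distributions. The sketch below (a scalar wind output and a hypothetical recourse cost, not the paper's SDP reformulation) enumerates such distributions within a bounded support:

      import numpy as np

      mu, sigma = 100.0, 20.0           # forecast mean and std of wind power (MW)
      lo, hi = 0.0, 200.0               # bounded uncertainty set

      def recourse_cost(w):             # hypothetical cost of a wind shortfall
          return np.maximum(120.0 - w, 0.0) * 30.0

      worst = -np.inf
      for p in np.linspace(0.01, 0.99, 99):
          x1 = mu - sigma * np.sqrt((1 - p) / p)   # two atoms matching the
          x2 = mu + sigma * np.sqrt(p / (1 - p))   # required mean and variance
          if x1 < lo or x2 > hi:
              continue                             # respect the bounded support
          worst = max(worst, p * recourse_cost(x1) + (1 - p) * recourse_cost(x2))

      print("worst-case expected recourse cost:", round(worst, 1))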

  5. Robust Satisficing Decision Making for Unmanned Aerial Vehicle Complex Missions under Severe Uncertainty.

    Directory of Open Access Journals (Sweden)

    Xiaoting Ji

    Full Text Available This paper presents a robust satisficing decision-making method for Unmanned Aerial Vehicles (UAVs) executing complex missions in an uncertain environment. Motivated by info-gap decision theory, we formulate this problem as a novel robust satisficing optimization problem whose objective is to maximize robustness while satisfying desired mission requirements. Specifically, a new info-gap based Markov Decision Process (IMDP) is constructed to abstract the uncertain UAV system and to specify the complex mission requirements with Linear Temporal Logic (LTL). A robust satisficing policy is obtained that maximizes robustness to the uncertain IMDP while ensuring a desired probability of satisfying the LTL specifications. To this end, we propose a two-stage robust satisficing solution strategy consisting of the construction of a product IMDP and the generation of a robust satisficing policy. In the first stage, a product IMDP is constructed by combining the IMDP with an automaton representing the LTL specifications. In the second stage, an algorithm based on robust dynamic programming generates the robust satisficing policy, and an associated algorithm evaluates its robustness. Finally, through Monte Carlo simulation, the effectiveness of our algorithms is demonstrated on a UAV search mission under severe uncertainty, where the resulting policy maximizes robustness while reaching the desired performance level. Furthermore, by comparing the proposed method with other robust decision-making methods, we conclude that our policy can tolerate higher uncertainty while guaranteeing the desired performance level, which indicates that the proposed method is more effective in real applications.
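
    The info-gap robustness function itself is easy to sketch: find the largest uncertainty horizon at which the worst-case performance still meets the requirement. The scalar performance model below is a stand-in for the IMDP machinery:

      def worst_case_performance(policy_value, alpha):
          # Stand-in: within horizon alpha, uncertain model parameters can
          # degrade performance by at most alpha.
          return policy_value - alpha

      def robustness(policy_value, requirement, alpha_max=10.0, tol=1e-6):
          lo, hi = 0.0, alpha_max          # bisection on the uncertainty horizon
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if worst_case_performance(policy_value, mid) >= requirement:
                  lo = mid                 # still satisficing: admit more uncertainty
              else:
                  hi = mid
          return lo

      # The more demanding the requirement, the less uncertainty is tolerated.
      for req in (0.5, 0.7, 0.9):
          print(f"requirement {req}: robustness {robustness(1.0, req):.3f}")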

  6. New sampling strategy using a Bayesian approach to assess iohexol clearance in kidney transplant recipients.

    Science.gov (United States)

    Benz-de Bretagne, I; Le Guellec, C; Halimi, J M; Gatault, P; Barbet, C; Alnajjar, A; Büchler, M; Lebranchu, Y; Andres, Christian Robert; Vourcʼh, P; Blasco, H

    2012-06-01

    Glomerular filtration rate (GFR) measurement is a major issue for clinicians managing kidney transplant recipients. GFR can be determined by estimating the plasma clearance of iohexol, a nonradiolabeled compound. For practical and convenient application for patients and caregivers, it is important that a minimal number of samples be drawn. The aim of this study was to develop and validate a Bayesian model with fewer samples for reliable prediction of GFR in kidney transplant recipients. Iohexol plasma concentration-time curves from 95 patients were divided into an index (n = 63) and a validation set (n = 32). Samples (n = 4-6 per patient) were obtained during the elimination phase, that is, between 120 and 270 minutes. Individual reference values of iohexol clearance (CL(iohexol)) were calculated from k (elimination slope) and V (volume of distribution from intercept). Individual CL(iohexol) values were then introduced into the Bröchner-Mortensen equation to obtain the GFR (reference value). A population pharmacokinetic model was developed from the index set and validated using standard methods. For the validation set, we tested various combinations of one, two, or three sampling times to estimate CL(iohexol). For each combination tested, a maximum a posteriori Bayesian estimate of CL(iohexol) was obtained from the population parameters. Individual estimates of GFR were compared with individual reference values through analysis of bias and precision. A capability analysis allowed us to determine the best sampling strategy for Bayesian estimation. A one-compartment model best described our data. Covariate analysis showed that uremia, serum creatinine, and age were significantly associated with k, and weight with V. The strategy including samples drawn at 120 and 270 minutes allowed accurate prediction of GFR (mean bias: -3.71%, mean imprecision: 7.77%). With this strategy, about 20% of individual predictions were outside the bounds of acceptance set at ±10%.
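
    A minimal sketch of maximum a posteriori estimation from the two-sample (120 and 270 minutes) strategy, using a one-compartment model C(t) = (dose/V)·exp(-k·t); the dose, priors, noise level and measurements below are all assumed for illustration:

      import numpy as np
      from scipy.optimize import minimize

      dose = 3235.0                        # mg of iohexol (assumed)
      t_obs = np.array([120.0, 270.0])     # minutes: the two-sample strategy
      c_obs = np.array([55.0, 30.0])       # mg/L, hypothetical measurements
      sigma = 2.0                          # assumed assay noise, mg/L

      # Hypothetical population priors on log(k) and log(V).
      mu = np.log(np.array([0.006, 12.0]))
      tau = np.array([0.4, 0.3])

      def neg_log_posterior(theta):
          k, V = np.exp(theta)
          pred = dose / V * np.exp(-k * t_obs)
          loglik = -0.5 * np.sum((c_obs - pred) ** 2) / sigma ** 2
          logprior = -0.5 * np.sum(((theta - mu) / tau) ** 2)
          return -(loglik + logprior)

      fit = minimize(neg_log_posterior, mu, method="Nelder-Mead")
      k_hat, V_hat = np.exp(fit.x)
      print(f"MAP iohexol clearance ~ {k_hat * V_hat * 60:.1f} L/h")   # CL = k*V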

  7. Robust visual tracking via multiscale deep sparse networks

    Science.gov (United States)

    Wang, Xin; Hou, Zhiqiang; Yu, Wangsheng; Xue, Yang; Jin, Zefenfen; Dai, Bo

    2017-04-01

    In visual tracking, deep learning with offline pretraining can extract intrinsic and robust features, and it has had significant success in resolving tracking drift in complicated environments. However, offline pretraining requires numerous auxiliary training datasets and is considerably time-consuming for tracking tasks. To solve these problems, a multiscale sparse networks-based tracker (MSNT) under the particle filter framework is proposed. Based on stacked sparse autoencoders and rectified linear units, the tracker has a flexible and adjustable architecture without the offline pretraining process, and it effectively exploits robust and powerful features through online training on limited labeled data alone. Meanwhile, the tracker builds four deep sparse networks of different scales according to the target's profile type. During tracking, the tracker adaptively selects the matching tracking network in accordance with the initial target's profile type. This preserves the inherent structural information more efficiently than single-scale networks. Additionally, a corresponding update strategy is proposed to improve the robustness of the tracker. Extensive experimental results on a large-scale benchmark dataset show that the proposed method performs favorably against state-of-the-art methods in challenging environments.

  8. Feedback Robust Cubature Kalman Filter for Target Tracking Using an Angle Sensor.

    Science.gov (United States)

    Wu, Hao; Chen, Shuxin; Yang, Binfeng; Chen, Kun

    2016-05-09

    The direction of arrival (DOA) tracking problem based on an angle sensor is an important topic in many fields. In this paper, a nonlinear filter named the feedback M-estimation based robust cubature Kalman filter (FMR-CKF) is proposed to deal with measurement outliers from the angle sensor. The filter designs a new equivalent weight function with the Mahalanobis distance to combine the cubature Kalman filter (CKF) with the M-estimation method. Moreover, by embedding a feedback strategy consisting of a splitting and merging procedure, the proper sub-filter (the standard CKF or the robust CKF) can be chosen at each time index. Hence, the probability of misjudging outliers is reduced. Numerical experiments show that the FMR-CKF performs better than the CKF and conventional robust filters in terms of accuracy and robustness, with good computational efficiency. Additionally, the filter can be extended to nonlinear applications using other types of sensors.
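
    The M-estimation step can be sketched in a few lines: downweight a measurement whose innovation has a large Mahalanobis distance. The code below applies a Huber-style equivalent weight inside a plain linear Kalman update rather than the cubature filter:

      import numpy as np

      def robust_update(x, P, z, H, R, c=3.0):
          """One measurement update with outlier-downweighted noise covariance."""
          v = z - H @ x                                     # innovation
          S = H @ P @ H.T + R
          d = float(np.sqrt(v @ np.linalg.inv(S) @ v))      # Mahalanobis distance
          w = 1.0 if d <= c else c / d                      # equivalent weight
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R / w)  # inflate R for outliers
          x_new = x + K @ v
          P_new = (np.eye(len(x)) - K @ H) @ P
          return x_new, P_new

      x, P = np.array([0.0, 1.0]), np.eye(2)
      H, R = np.array([[1.0, 0.0]]), np.array([[0.1]])
      print(robust_update(x, P, np.array([8.0]), H, R))     # 8.0 is an outlier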

  9. Robust MR spine detection using hierarchical learning and local articulated model.

    Science.gov (United States)

    Zhan, Yiqiang; Maneesh, Dewan; Harder, Martin; Zhou, Xiang Sean

    2012-01-01

    A clinically acceptable auto-spine detection system, i.e., localization and labeling of vertebrae and inter-vertebral discs, is required to have high robustness, in particular to severe diseases (e.g., scoliosis) and imaging artifacts (e.g., metal artifacts in MR). Our method aims to achieve this goal with two novel components. First, instead of treating vertebrae/discs as either repetitive components or completely independent entities, we emulate a radiologist and use a hierarchical strategy to learn detectors dedicated to anchor (distinctive) vertebrae, bundle (non-distinctive) vertebrae and inter-vertebral discs, respectively. At run-time, anchor vertebrae are detected concurrently to provide redundant and distributed appearance cues robust to local imaging artifacts. Bundle vertebrae detectors provide candidates of vertebrae with subtle appearance differences, whose labels are mutually determined by anchor vertebrae to gain additional robustness. Disc locations are derived from a cloud of responses from disc detectors, which is robust to sporadic voxel-level errors. Second, owing to the non-rigidity of spine anatomies, we employ a local articulated model to effectively model the spatial relations across vertebrae and discs. The local articulated model fuses appearance cues from different detectors in a way that is robust to abnormal spine geometry resulting from severe diseases. Our method is validated on 300 MR spine scout scans and exhibits robust performance, especially in cases with severe diseases and imaging artifacts.

  10. A simulation approach to assessing sampling strategies for insect pests: an example with the balsam gall midge.

    Directory of Open Access Journals (Sweden)

    R Drew Carleton

    Full Text Available Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with "pre-sampling" data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n ∼ 100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n ∼ 25-40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods.
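
    A minimal sketch of the pre-sampling simulation idea: resample a fixed-size plan repeatedly from pilot data and watch how the sample mean converges. The negative binomial pilot counts below are synthetic stand-ins for field data:

      import numpy as np

      rng = np.random.default_rng(0)

      # Pilot gall counts per tree, clumped like P. tumifex: a negative
      # binomial with small k means strong aggregation.
      true_mean, k = 8.0, 0.8
      pilot = rng.negative_binomial(k, k / (k + true_mean), size=400)

      def simulate_plan(n_trees, reps=2000):
          """Spread of the sample mean for a random plan of n_trees."""
          means = [rng.choice(pilot, size=n_trees, replace=False).mean()
                   for _ in range(reps)]
          return np.mean(means), np.std(means)

      for n in (10, 25, 40):
          m, s = simulate_plan(n)
          print(f"n={n:3d}: mean estimate {m:5.2f} +/- {s:4.2f}")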

  11. A HYBRID ALGORITHM FOR THE ROBUST GRAPH COLORING PROBLEM

    Directory of Open Access Journals (Sweden)

    Román Anselmo Mora Gutiérrez

    2016-08-01

    Full Text Available A hybrid algorithm that combines mathematical programming techniques (Kruskal's algorithm and the strategy of maintaining arc consistency to solve the constraint satisfaction problem, CSP) with heuristic methods (the musical composition method and DSATUR) to solve the robust graph coloring problem (RGCP) is proposed in this paper. Experimental results show that this algorithm outperforms the other algorithms presented in the literature.
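
    For reference, the DSATUR heuristic that the hybrid builds on can be sketched directly; this is the classical greedy rule, not the authors' hybrid:

      def dsatur(adj):
          """adj: dict vertex -> set of neighbours. Returns a colouring."""
          colour, uncoloured = {}, set(adj)
          while uncoloured:
              # Saturation = distinct colours among coloured neighbours;
              # break ties by degree, as in the classical heuristic.
              v = max(uncoloured, key=lambda u: (
                  len({colour[w] for w in adj[u] if w in colour}), len(adj[u])))
              used = {colour[w] for w in adj[v] if w in colour}
              colour[v] = next(c for c in range(len(adj)) if c not in used)
              uncoloured.remove(v)
          return colour

      cycle = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}  # 5-cycle
      col = dsatur(cycle)
      print(col, "->", 1 + max(col.values()), "colours")         # needs 3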

  12. Methods for robustness programming

    NARCIS (Netherlands)

    Olieman, N.J.

    2008-01-01

    Robustness of an object is defined as the probability that an object will have properties as required. Robustness Programming (RP) is a mathematical approach for Robustness estimation and Robustness optimisation. An example in the context of designing a food product is finding the best composition

  13. Detecting epileptic seizure with different feature extracting strategies using robust machine learning classification techniques by applying advance parameter optimization approach.

    Science.gov (United States)

    Hussain, Lal

    2018-06-01

    Epilepsy is a neurological disorder produced by abnormal excitability of neurons in the brain. Research shows that brain activity can be monitored through electroencephalograms (EEG) of patients who suffer from seizures in order to detect epileptic events. The performance of EEG-based detection depends on the feature extraction strategy. In this research, we extracted features using several strategies based on time- and frequency-domain characteristics, nonlinear measures, wavelet-based entropy, and a few statistical features. A deeper study was undertaken using novel machine learning classifiers and considering multiple factors. The support vector machine kernels were evaluated over multiclass kernel types and box constraint levels. Likewise, for K-nearest neighbors (KNN), we varied the distance metric, the neighbor weighting, and the number of neighbors. Similarly, the decision trees were tuned over the maximum number of splits and the split criterion, and the ensemble classifiers were evaluated over different ensemble methods and learning rates. Tenfold cross-validation was employed for training/testing, and performance was evaluated in the form of TPR, NPR, PPV, accuracy, and AUC. In this research, a deeper analysis was thus performed using diverse feature extraction strategies and robust machine learning classifiers with advanced optimization of their options. The support vector machine with a linear kernel and KNN with the city block distance metric gave the overall highest accuracy of 99.5%, which was higher than that obtained using the default parameters for these classifiers. Moreover, the highest separation (AUC = 0.9991, 0.9990) was obtained at different kernel scales using SVM. Additionally, KNN with inverse-squared distance weighting gave higher performance across different numbers of neighbors. Moreover, in distinguishing postictal heart rate oscillations from those of epileptic ictal subjects, the highest performance of 100% was obtained using different machine learning classifiers.
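
    The tuning loop described above maps naturally onto a grid search. The sketch below uses synthetic features as a stand-in for the EEG feature matrix and only a small parameter grid:

      from sklearn.datasets import make_classification
      from sklearn.model_selection import GridSearchCV
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=300, n_features=30, random_state=0)

      knn = GridSearchCV(
          KNeighborsClassifier(),
          {"n_neighbors": [1, 3, 5, 9],
           "metric": ["euclidean", "cityblock", "chebyshev"],
           "weights": ["uniform", "distance"]},
          cv=10, scoring="roc_auc").fit(X, y)

      svm = GridSearchCV(
          SVC(),
          {"kernel": ["linear", "rbf"], "C": [0.1, 1, 10]},
          cv=10, scoring="roc_auc").fit(X, y)

      print("best KNN:", knn.best_params_, round(knn.best_score_, 3))
      print("best SVM:", svm.best_params_, round(svm.best_score_, 3))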

  14. A traditional and a less-invasive robust design: choices in optimizing effort allocation for seabird population studies

    Science.gov (United States)

    Converse, S.J.; Kendall, W.L.; Doherty, P.F.; Naughton, M.B.; Hines, J.E.; Thomson, David L.; Cooch, Evan G.; Conroy, Michael J.

    2009-01-01

    For many animal populations, one or more life stages are not accessible to sampling, and therefore an unobservable state is created. For colonially-breeding populations, this unobservable state could represent the subset of adult breeders that have foregone breeding in a given year. This situation applies to many seabird populations, notably albatrosses, where skipped breeders are either absent from the colony, or are present but difficult to capture or correctly assign to breeding state. Kendall et al. have proposed design strategies for investigations of seabird demography where such temporary emigration occurs, suggesting the use of the robust design to permit the estimation of time-dependent parameters and to increase the precision of estimates from multi-state models. A traditional robust design, where animals are subject to capture multiple times in a sampling season, is feasible in many cases. However, due to concerns that multiple captures per season could cause undue disturbance to animals, Kendall et al. developed a less-invasive robust design (LIRD), where initial captures are followed by an assessment of the ratio of marked-to-unmarked birds in the population or sampled plot. This approach has recently been applied in the Northwestern Hawaiian Islands to populations of Laysan (Phoebastria immutabilis) and black-footed (P. nigripes) albatrosses. In this paper, we outline the LIRD and its application to seabird population studies. We then describe an approach to determining optimal allocation of sampling effort in which we consider a non-robust design option (nRD), and variations of both the traditional robust design (RD), and the LIRD. Variations we considered included the number of secondary sampling occasions for the RD and the amount of total effort allocated to the marked-to-unmarked ratio assessment for the LIRD. We used simulations, informed by early data from the Hawaiian study, to address optimal study design for our example cases. We found that

  15. Methodology Series Module 5: Sampling Strategies

    OpenAIRE

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin etc.); and 2) non-probability sampling – based on the researcher's choice or on populations that are accessible and available. Some of the non-probabilit...

  16. Robust optimisation for self-scheduling and bidding strategies of hybrid CSP-fossil power plants

    DEFF Research Database (Denmark)

    Pousinho, H.M.I.; Contreras, J.; Pinson, P.

    2015-01-01

    between the molten-salt thermal energy storage (TES) and a fossil-fuel backup to overcome solar irradiation insufficiency, but with emission allowances constrained in the backup system to mitigate the carbon footprint. A robust optimisation-based approach is proposed to provide the day-ahead self-scheduling

  17. Dynamic robustness of knowledge collaboration network of open source product development community

    Science.gov (United States)

    Zhou, Hong-Li; Zhang, Xiao-Dong

    2018-01-01

    As an emergent innovative design style, open source product development communities are characterized by a self-organizing, mass-collaborative, networked structure. The robustness of the community is critical to its performance. Using complex network modeling, the knowledge collaboration network of the community is formulated, and the robustness of the network is systematically and dynamically studied. The characteristics of the network over its development period dictate that its robustness be studied in three time stages: the start-up, development and mature stages of the network. Five kinds of user-loss patterns are designed to assess the network's robustness under different situations in each of these three stages. Two indices, the largest connected component and the network efficiency, are used to evaluate the robustness of the community. The proposed approach is applied to an existing open source car design community. The results indicate that the knowledge collaboration network shows different levels of robustness in different stages and under different user-loss patterns. Such analysis can be applied to provide protection strategies for the key users involved in knowledge dissemination and knowledge contribution at different stages of the network, thereby promoting the sustainable and stable development of the open source community.
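
    A minimal sketch of the two robustness indices under targeted and random user loss, with networkx and a scale-free random graph standing in for the community network:

      import random
      import networkx as nx

      G = nx.barabasi_albert_graph(300, 2, seed=0)  # hub-dominated, like core users

      def robustness_curve(G, order, steps=5):
          H = G.copy()
          for i, v in enumerate(order, 1):
              H.remove_node(v)
              if i % (len(order) // steps) == 0:
                  lcc = len(max(nx.connected_components(H), key=len)) / len(G)
                  eff = nx.global_efficiency(H)
                  print(f"  removed {i:3d}: LCC {lcc:.2f}  efficiency {eff:.3f}")

      hubs_first = sorted(G, key=G.degree, reverse=True)[:100]
      random_loss = random.Random(0).sample(list(G), 100)

      print("targeted loss of key users:")
      robustness_curve(G, hubs_first)
      print("random user loss:")
      robustness_curve(G, random_loss)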

  18. Robust multi-objective control of hybrid renewable microgeneration systems with energy storage

    International Nuclear Information System (INIS)

    Allison, John

    2017-01-01

    Highlights: • A hybrid energy system of micro-CHP, solar PV, and battery storage is presented. • It is possible to exploit the synergy of the systems to fulfil the thermal and electrical demands. • Control can minimise the interaction with the local electrical network. • Three different control approaches were compared. • The nonlinear inversion-based control strategy exhibits optimum performance. - Abstract: Microgeneration technologies are positioned to address future building energy efficiency requirements and facilitate the integration of renewables into buildings to ensure a sustainable, energy-secure future. This paper explores the development of a robust multi-input multi-output (MIMO) controller applicable to the control of hybrid renewable microgeneration systems, with the objective of minimising the electrical grid utilisation of a building while fulfilling its thermal demands. The controller employs the inverse dynamics of the building, servicing systems, and energy storage together with a robust control methodology. These inverse dynamics provide the control system with knowledge of the complex cause-and-effect relationships between the system, the controlled inputs, and the external disturbances, while an outer-loop control ensures robust, stable control in the presence of modelling deficiencies/uncertainty and unknown disturbances. Variable structure control compensates for the physical limitations of the systems, whereby the control strategy employed switches depending on the current utilisation and availability of the energy supplies. Preliminary results presented for a system consisting of a micro-CHP unit, solar PV, and battery storage indicate that the control strategy is effective in minimising the interaction with the local electrical network and maximising the utilisation of the available renewable energy.
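
    The inversion-based idea can be sketched on a first-order thermal model: feed forward the input that the (imperfect) inverse model asks for, and let an outer loop absorb the model error. All parameters below are assumed for illustration:

      a, b = 0.05, 0.5              # "true" plant: dT/dt = -a*T + b*u
      a_hat, b_hat = 0.045, 0.55    # imperfect model used for the inversion
      dt, kp, T_ref = 0.1, 2.0, 40.0

      T = 20.0
      for _ in range(600):
          u_ff = (a_hat * T_ref) / b_hat   # inverse dynamics: hold the setpoint
          u = u_ff + kp * (T_ref - T)      # outer loop adds robustness
          T += dt * (-a * T + b * u)       # plant step

      print(f"final temperature: {T:.2f} (target {T_ref})")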

  19. Robustness of Structural Systems

    DEFF Research Database (Denmark)

    Canisius, T.D.G.; Sørensen, John Dalsgaard; Baker, J.W.

    2007-01-01

    The importance of robustness as a property of structural systems has been recognised following several structural failures, such as that at Ronan Point in 1968, where the consequences were deemed unacceptable relative to the initiating damage. A variety of research efforts in the past decades have attempted to quantify aspects of robustness such as redundancy and identify design principles that can improve robustness. This paper outlines the progress of recent work by the Joint Committee on Structural Safety (JCSS) to develop comprehensive guidance on assessing and providing robustness in structural systems. Guidance is provided regarding the assessment of robustness in a framework that considers potential hazards to the system, vulnerability of system components, and failure consequences. Several proposed methods for quantifying robustness are reviewed, and guidelines for robust design...

  20. Differentiation strategies and winery financial performance: An empirical investigation

    Directory of Open Access Journals (Sweden)

    Sandra K. Newton

    2015-12-01

    Full Text Available This investigation into small-to-medium sized wine businesses empirically tests linkages between differentiation strategies and financial performance over time. Using a two-by-two model, we examine the impact of differentiation strategies on profitability and growth. Financial and operational data from a proprietary database of 71 United States wineries, encompassing five continuous years (2006-2010), provide longitudinal robustness. Management decisions regarding resources and capabilities are used to cluster the sample firms into a two-by-two differentiation strategy model. Wineries sourcing over 50% estate grapes and distributing over 50% direct-to-consumer have higher gross margins than the other clusters. Direct-to-consumer distribution decisions impact growth. Results of this research indicate that the distribution channel choice (direct-to-consumer) positively impacts gross profit margin and winery growth rates, and that the supply chain choice (sourcing estate grapes) also positively impacts gross profit margin. This study uses reported financial data that have not previously been available to researchers.

  1. Identification of a robust subpathway-based signature for acute myeloid leukemia prognosis using an miRNA integrated strategy.

    Science.gov (United States)

    Chang, Huijuan; Gao, Qiuying; Ding, Wei; Qing, Xueqin

    2018-01-01

    Acute myeloid leukemia (AML) is a heterogeneous disease, and survival signatures are urgently needed to better monitor treatment. MiRNAs play vital regulatory roles on their target genes and are necessarily involved in this complex disease. We therefore examined the expression levels of miRNAs and genes to identify robust signatures for survival benefit analyses. First, we reconstructed subpathway graphs by embedding miRNA components derived from low-throughput miRNA-gene interactions. Then, we randomly divided the data sets from The Cancer Genome Atlas (TCGA) into training and testing sets, and further formed 100 subsets based on the training set. Using each subset, we identified survival-related miRNAs and genes, and identified survival subpathways based on the reconstructed subpathway graphs. After statistical analysis of these survival subpathways, the three top-ranked, most robust subpathways were identified, and risk scores were calculated based on them for AML patient prognosis. Among these robust subpathways, three representative subpathways, path: 05200_10 from Pathways in cancer, path: 04110_20 from Cell cycle, and path: 04510_8 from Focal adhesion, were significantly associated with patient survival in the TCGA training and testing sets based on subpathway risk scores. In conclusion, we performed an integrated analysis of miRNAs and genes to identify robust prognostic subpathways, and calculated subpathway risk scores to characterize AML patient survival.
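
    A subpathway risk score of the kind described is typically a weighted sum of member-gene/miRNA expression, with patients stratified at the median score. A minimal sketch with synthetic expression data and hypothetical coefficients:

      import numpy as np

      rng = np.random.default_rng(0)
      expr = rng.normal(size=(100, 5))    # 100 patients x 5 subpathway members
      beta = np.array([0.8, -0.5, 0.3, 0.6, -0.2])   # e.g. Cox coefficients

      risk = expr @ beta                  # per-patient subpathway risk score
      high = risk > np.median(risk)       # median split for survival comparison
      print("high-risk group:", int(high.sum()), "of", len(risk), "patients")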

  2. Robustness in laying hens

    NARCIS (Netherlands)

    Star, L.

    2008-01-01

    The aim of the project 'The genetics of robustness in laying hens' was to investigate the nature and regulation of robustness in laying hens under sub-optimal conditions, and the possibility of increasing robustness by animal breeding without loss of production. At the start of the project, a robust

  3. Sampling strategy for estimating human exposure pathways to consumer chemicals

    Directory of Open Access Journals (Sweden)

    Eleni Papadopoulou

    2016-03-01

    Full Text Available Human exposure to consumer chemicals has become a worldwide concern. In this work, a comprehensive sampling strategy is presented which, to our knowledge, is the first to study all relevant exposure pathways in a single cohort using multiple methods for assessment of exposure from each pathway. The selected groups of chemicals are consumer chemicals whose production and use are currently in a state of transition: per- and polyfluorinated alkyl substances (PFASs), traditional and "emerging" brominated flame retardants (BFRs and EBFRs), organophosphate esters (OPEs) and phthalate esters (PEs). Information about human exposure to these contaminants is needed because of existing data gaps on human exposure intakes from multiple exposure pathways and on the relationships between internal and external exposure. Indoor environment, food and biological samples were collected from 61 participants and their households in the Oslo area (Norway) on two consecutive days during winter 2013-14. Air, dust, hand wipes, and duplicate diet (food and drink) samples were collected as indicators of external exposure, and blood, urine, blood spots, hair, nails and saliva as indicators of internal exposure. A food diary, a food frequency questionnaire (FFQ) and an indoor environment questionnaire were also implemented. Approximately 2000 samples were collected in total, and participant views on their experiences of this campaign were collected via questionnaire. While 91% of our participants were positive about future participation in a similar project, some tasks were viewed as problematic. Completing the food diary and collecting duplicate food/drink portions were the tasks most frequently reported as "hard"/"very hard". Nevertheless, a strong positive correlation between the reported total mass of food/drinks in the food record and the total weight of the food/drinks in the collection bottles was observed, an indication of accurate performance

  4. On Improving the Energy Efficiency and Robustness of Position Tracking for Mobile Devices

    DEFF Research Database (Denmark)

    Kjærgaard, Mikkel Baun

    2010-01-01

    position updates when faced with changing conditions such as delays and changing positioning conditions. Previous work has established dynamic tracking systems, such as our EnTracked system, as a solution to address these issues. In this paper we propose a responsibility division for position tracking into sensor management strategies and position update protocols, and combine the sensor management strategy of EnTracked with position update protocols, which enables the system to further reduce the power consumption by up to 268 mW, extending the battery life by up to 36%. As our evaluation identifies robustness weaknesses in classical position update protocols, we propose a method to improve their robustness. Furthermore, we analyze the dependency of tracking systems on pedestrian movement patterns and the positioning environment, and how the power savings depend on the power characteristics

  5. Depth-weighted robust multivariate regression with application to sparse data

    KAUST Repository

    Dutta, Subhajit; Genton, Marc G.

    2017-01-01

    A robust method for multivariate regression is developed based on robust estimators of the joint location and scatter matrix of the explanatory and response variables using the notion of data depth. The multivariate regression estimator possesses desirable affine equivariance properties, achieves the best breakdown point of any affine equivariant estimator, and has an influence function which is bounded in both the response as well as the predictor variable. To increase the efficiency of this estimator, a re-weighted estimator based on robust Mahalanobis distances of the residual vectors is proposed. In practice, the method is more stable than existing methods that are constructed using subsamples of the data. The resulting multivariate regression technique is computationally feasible, and turns out to perform better than several popular robust multivariate regression methods when applied to various simulated data as well as a real benchmark data set. When the data dimension is quite high compared to the sample size it is still possible to use meaningful notions of data depth along with the corresponding depth values to construct a robust estimator in a sparse setting.

  6. Depth-weighted robust multivariate regression with application to sparse data

    KAUST Repository

    Dutta, Subhajit

    2017-04-05

    A robust method for multivariate regression is developed based on robust estimators of the joint location and scatter matrix of the explanatory and response variables using the notion of data depth. The multivariate regression estimator possesses desirable affine equivariance properties, achieves the best breakdown point of any affine equivariant estimator, and has an influence function which is bounded in both the response as well as the predictor variable. To increase the efficiency of this estimator, a re-weighted estimator based on robust Mahalanobis distances of the residual vectors is proposed. In practice, the method is more stable than existing methods that are constructed using subsamples of the data. The resulting multivariate regression technique is computationally feasible, and turns out to perform better than several popular robust multivariate regression methods when applied to various simulated data as well as a real benchmark data set. When the data dimension is quite high compared to the sample size it is still possible to use meaningful notions of data depth along with the corresponding depth values to construct a robust estimator in a sparse setting.
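
    A minimal sketch of the joint location-scatter route to robust multivariate regression. Here scikit-learn's MinCovDet stands in for the paper's depth-based estimators; the spirit (a robust joint scatter of predictors and responses, with the slope read off from it) is the same, but this is not the authors' method:

      import numpy as np
      from sklearn.covariance import MinCovDet

      rng = np.random.default_rng(0)
      n, p, q = 200, 3, 2
      X = rng.normal(size=(n, p))
      B = np.array([[1.0, -1.0], [2.0, 0.5], [0.0, 1.5]])
      Y = X @ B + 0.1 * rng.normal(size=(n, q))
      Y[:20] += 10.0                        # contaminate 10% of the responses

      Z = np.hstack([X, Y])                 # robust joint location/scatter of (X, Y)
      mcd = MinCovDet(random_state=0).fit(Z)
      m, S = mcd.location_, mcd.covariance_

      B_hat = np.linalg.solve(S[:p, :p], S[:p, p:])   # slope from robust scatter
      a_hat = m[p:] - B_hat.T @ m[:p]                 # intercept
      print("max slope error:", float(np.abs(B_hat - B).max()))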

  7. Breeding Strategy To Generate Robust Yeast Starter Cultures for Cocoa Pulp Fermentations.

    Science.gov (United States)

    Meersman, Esther; Steensels, Jan; Paulus, Tinneke; Struyf, Nore; Saels, Veerle; Mathawan, Melissa; Koffi, Jean; Vrancken, Gino; Verstrepen, Kevin J

    2015-09-01

    Cocoa pulp fermentation is a spontaneous process during which the natural microbiota present at cocoa farms is allowed to ferment the pulp surrounding cocoa beans. Because such spontaneous fermentations are inconsistent and contribute to product variability, there is growing interest in a microbial starter culture that could be used to inoculate cocoa pulp fermentations. Previous studies have revealed that many different fungi are recovered from different batches of spontaneous cocoa pulp fermentations, whereas the variation in the prokaryotic microbiome is much more limited. In this study, therefore, we aimed to develop a suitable yeast starter culture that is able to outcompete wild contaminants and consistently produce high-quality chocolate. Starting from specifically selected Saccharomyces cerevisiae strains, we developed robust hybrids with characteristics that allow them to efficiently ferment cocoa pulp, including improved temperature tolerance and fermentation capacity. We conducted several laboratory and field trials to show that these new hybrids often outperform their parental strains and are able to dominate spontaneous pilot scale fermentations, which results in much more consistent microbial profiles. Moreover, analysis of the resulting chocolate showed that some of the cocoa batches that were fermented with specific starter cultures yielded superior chocolate. Taken together, these results describe the development of robust yeast starter cultures for cocoa pulp fermentations that can contribute to improving the consistency and quality of commercial chocolate production. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  8. Enhancing the Robustness of the Microcavity Coupling System

    International Nuclear Information System (INIS)

    Yan Ying-Zhan; Zhang Wen-Dong; Xiong Ji-Jun; Ji Zhe; Yan Shu-Bin; Liu Jun; Xue Chen-Yang

    2011-01-01

    A novel method to enhance the robustness of the microcavity coupling system (MCS) is presented by encapsulating and solidifying the MCS with a low refractive index (RI) UV-curable polymer. The encapsulating process is illustrated in detail for a typical microsphere with a radius R of about 240 μm. Three differences in the resonant characteristics before and after packaging are observed and analyzed. The first two differences are the enhancement of the coupling strength and the shift of the resonant spectrum to longer wavelength, both mainly caused by the variation of the RI surrounding the microsphere. The third difference is the quality factor (Q-factor), which decreases from 7.8×10⁷ to 8.7×10⁶ after packaging due to polymer absorption. Moreover, rotation testing experiments have been carried out to verify the robustness of the packaged MCS. Experimental results demonstrate that the packaged MCS has much more robust performance than the un-packaged sample. The enhancement of the robustness greatly advances microcavity research from fundamental investigations to application fields.

  9. Shape, size, and robustness: feasible regions in the parameter space of biochemical networks.

    Directory of Open Access Journals (Sweden)

    Adel Dayarian

    2009-01-01

    Full Text Available The concept of robustness of regulatory networks has received much attention in the last decade. One measure of robustness has been associated with the volume of the feasible region, namely, the region in the parameter space in which the system is functional. In this paper, we show that, in addition to volume, the geometry of this region has important consequences for the robustness and the fragility of a network. We develop an approximation within which we could algebraically specify the feasible region. We analyze the segment polarity gene network to illustrate our approach. The study of random walks in the parameter space and how they exit the feasible region provide us with a rich perspective on the different modes of failure of this network model. In particular, we found that, between two alternative ways of activating Wingless, one is more robust than the other. Our method provides a more complete measure of robustness to parameter variation. As a general modeling strategy, our approach is an interesting alternative to Boolean representation of biochemical networks.
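
    The volume-based robustness measure and the random-walk exit analysis described above can be illustrated on a toy two-dimensional "feasible region"; the functionality condition below is invented purely for illustration, not taken from the segment polarity model.

        import numpy as np

        rng = np.random.default_rng(1)

        def functional(theta):
            # Toy stand-in for "the network is functional": an arbitrary
            # algebraic condition carving a feasible region out of 2D space.
            k1, k2 = theta
            return (k1 * k2 > 0.5) and (k1 + k2 < 3.0)

        # Volume measure: feasible fraction of a parameter box.
        samples = rng.uniform(0.0, 3.0, size=(20000, 2))
        feasible = np.array([functional(s) for s in samples])
        print("feasible fraction:", feasible.mean())

        # Geometry measure: how long random walks stay inside the region.
        def exit_time(theta, step=0.05, max_steps=5000):
            for t in range(max_steps):
                theta = theta + rng.normal(scale=step, size=2)
                if not functional(theta):
                    return t
            return max_steps

        times = [exit_time(s) for s in samples[feasible][:200]]
        print("median exit time:", np.median(times))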

  10. Distributionally Robust Return-Risk Optimization Models and Their Applications

    Directory of Open Access Journals (Sweden)

    Li Yang

    2014-01-01

    Full Text Available Based on the risk control of conditional value-at-risk, distributionally robust return-risk optimization models with box constraints on the random vector are proposed. They describe uncertainty in both the distribution form and the moments (mean and covariance matrix) of the random vector. It is difficult to solve them directly. Using the conic duality theory and the minimax theorem, the models are reformulated as semidefinite programming problems, which can be solved by interior point algorithms in polynomial time. An important theoretical basis is therefore provided for applications of the models. Moreover, an application of the models to a practical example of portfolio selection is considered, and the example is evaluated using a historical data set of four stocks. Numerical results show that the proposed methods are robust and the investment strategy is safe.
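
    The models above are solved exactly as semidefinite programs; as a hedged illustration of the underlying CVaR risk measure only, the sketch below solves the classical sample-based Rockafellar-Uryasev linear program for CVaR-minimal portfolio weights. All return data are simulated, not the paper's four-stock history.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(0)
        N, n = 200, 4                                # return scenarios, assets
        R = rng.normal(0.002, 0.02, size=(N, n))     # simulated daily returns
        beta, target = 0.95, 0.0005

        # Variables z = [w (n weights), alpha, u (N excess losses)].
        c = np.r_[np.zeros(n), 1.0, np.full(N, 1.0 / ((1.0 - beta) * N))]
        A_ub = np.hstack([-R, -np.ones((N, 1)), -np.eye(N)])  # u_i >= loss_i - alpha
        b_ub = np.zeros(N)
        A_ub = np.vstack([A_ub, np.r_[-R.mean(axis=0), 0.0, np.zeros(N)]])
        b_ub = np.r_[b_ub, -target]                  # mean return >= target
        A_eq = np.r_[np.ones(n), 0.0, np.zeros(N)][None, :]   # weights sum to 1
        bounds = [(0, 1)] * n + [(None, None)] + [(0, None)] * N

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=bounds, method="highs")
        print("weights:", np.round(res.x[:n], 3), " CVaR:", round(res.fun, 4))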

  11. Robust facial landmark detection based on initializing multiple poses

    Directory of Open Access Journals (Sweden)

    Xin Chai

    2016-10-01

    Full Text Available For robot systems, robust facial landmark detection is the first and critical step for face-based human identification and facial expression recognition. In recent years, the cascaded-regression-based method has achieved excellent performance in facial landmark detection. Nevertheless, it still has certain weakness, such as high sensitivity to the initialization. To address this problem, regression based on multiple initializations is established in a unified model; face shapes are then estimated independently according to these initializations. With a ranking strategy, the best estimate is selected as the final output. Moreover, a face shape model based on restricted Boltzmann machines is built as a constraint to improve the robustness of ranking. Experiments on three challenging datasets demonstrate the effectiveness of the proposed facial landmark detection method against state-of-the-art methods.

  12. Data-driven strategies for robust forecast of continuous glucose monitoring time-series.

    Science.gov (United States)

    Fiorini, Samuele; Martini, Chiara; Malpassi, Davide; Cordera, Renzo; Maggi, Davide; Verri, Alessandro; Barla, Annalisa

    2017-07-01

    Over the past decade, continuous glucose monitoring (CGM) has proven to be a very resourceful tool for diabetes management. To date, CGM devices are employed for both retrospective and online applications. Their use allows to better describe the patients' pathology as well as to achieve a better control of patients' level of glycemia. The analysis of CGM sensor data makes possible to observe a wide range of metrics, such as the glycemic variability during the day or the amount of time spent below or above certain glycemic thresholds. However, due to the high variability of the glycemic signals among sensors and individuals, CGM data analysis is a non-trivial task. Standard signal filtering solutions fall short when an appropriate model personalization is not applied. State-of-the-art data-driven strategies for online CGM forecasting rely upon the use of recursive filters. Each time a new sample is collected, such models need to adjust their parameters in order to predict the next glycemic level. In this paper we aim at demonstrating that the problem of online CGM forecasting can be successfully tackled by personalized machine learning models, that do not need to recursively update their parameters.
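
    A minimal sketch of the non-recursive, personalized alternative the paper argues for: fit one model per patient on lagged CGM samples and keep its parameters fixed at prediction time. Ridge regression and all signal parameters below are illustrative assumptions, not the authors' models or data.

        import numpy as np
        from sklearn.linear_model import Ridge

        def make_lagged(series, n_lags=6, horizon=6):
            # n_lags past samples -> value `horizon` steps ahead
            # (30 min ahead at 5-min CGM sampling).
            X, y = [], []
            for t in range(n_lags, len(series) - horizon):
                X.append(series[t - n_lags:t])
                y.append(series[t + horizon])
            return np.array(X), np.array(y)

        rng = np.random.default_rng(0)
        t = np.arange(2000)                  # simulated 5-min CGM trace (mg/dL)
        cgm = 120 + 35 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 5, t.size)

        X, y = make_lagged(cgm)
        split = int(0.8 * len(X))
        model = Ridge(alpha=1.0).fit(X[:split], y[:split])  # fit once per patient
        rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
        print(f"30-min-ahead RMSE: {rmse:.1f} mg/dL")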

  13. Microgrid Stability Controller Based on Adaptive Robust Total SMC

    OpenAIRE

    Su, Xiaoling; Han, Minxiao; Guerrero, Josep M.; Sun, Hai

    2015-01-01

    This paper presents a microgrid stability controller (MSC) in order to provide existing distributed generation units (DGs) the additional functionality of working in islanding mode without changing their control strategies in grid-connected mode and to enhance the stability of the microgrid. Microgrid operating characteristics and mathematical models of the MSC indicate that the system is inherently nonlinear and time-variable. Therefore, this paper proposes an adaptive robust total sliding-mode control (ARTSMC) system for the MSC.

  14. Robust Trajectory Option Set planning in CTOP based on Bayesian game model

    KAUST Repository

    Li, Lichun; Clarke, John-Paul; Feron, Eric; Shamma, Jeff S.

    2017-01-01

    The Federal Aviation Administration (FAA) rations capacity to reduce en route delay, especially those caused by bad weather. This is accomplished via Collaborative Trajectory Options Program (CTOP) which has been recently developed to provide a mechanism for flight operators to communicate their route preferences for each flight via a Trajectory Option Set (TOS), as well as a mechanism for the FAA to assign the best possible route within the set of trajectories in the TOS for a given flight, i.e. the route with the lowest adjusted cost after consideration of system constraints and the requirements of all flights. The routes assigned to an airline depend not only on the TOS's for its own flights but also on the TOS's of all other flights in the CTOP, which are unknown. This paper aims to provide a detailed algorithm for the airline to design its TOS plan which is robust to the uncertainties of its competitors' TOS's. To this purpose, we model the CTOP problem as a Bayesian game, and use Linear Program (LP) to compute the security strategy in the Bayesian game model. This security strategy guarantees the airline an upper bound on the sum of the assigned times. The numerical results demonstrate the robustness of the strategy, which is not achieved by any other tested strategy.
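
    For the special case of a zero-sum matrix game with known payoffs, the security strategy mentioned above reduces to the classical minimax mixed strategy and can be computed with a small LP; the sketch below shows that computation (the payoff matrix is hypothetical, and the paper's Bayesian-game formulation generalizes this to uncertain competitor types).

        import numpy as np
        from scipy.optimize import linprog

        def security_strategy(A):
            # Mixed security strategy for the row player (maximizer) of a
            # zero-sum matrix game with payoff matrix A.
            m, n = A.shape
            c = np.r_[np.zeros(m), -1.0]                 # maximize v
            A_ub = np.hstack([-A.T, np.ones((n, 1))])    # v <= x . A[:, j]
            A_eq = np.r_[np.ones(m), 0.0][None, :]       # probabilities sum to 1
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n), A_eq=A_eq, b_eq=[1.0],
                          bounds=[(0, None)] * m + [(None, None)], method="highs")
            return res.x[:m], -res.fun

        # Hypothetical payoffs: rows = candidate TOS plans, columns =
        # competitor TOS profiles, entries = negated total assigned delay.
        A = np.array([[-3.0, -5.0],
                      [-4.0, -2.0]])
        x, v = security_strategy(A)
        print("security strategy:", np.round(x, 3), " guaranteed value:", round(v, 2))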

  16. Robust matching for voice recognition

    Science.gov (United States)

    Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.

    1994-10-01

    This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.

  17. Management strategies for fibromyalgia

    OpenAIRE

    Le Marshall KF; Littlejohn GO

    2011-01-01

    Kim Francis Le Marshall, Geoffrey Owen Littlejohn. Departments of Rheumatology and Medicine, Monash Medical Centre and Monash University, Victoria, Australia. Date of preparation: 14 June 2011. Clinical question: What are the effective, evidence-based strategies available for the management of fibromyalgia? Conclusion: There are a number of management strategies available with robust evidence to support their use in clinical practice. Definition: Fibromyalgia is a complex pain syndrome characterized ...

  18. Planning schistosomiasis control: investigation of alternative sampling strategies for Schistosoma mansoni to target mass drug administration of praziquantel in East Africa.

    Science.gov (United States)

    Sturrock, Hugh J W; Gething, Pete W; Ashton, Ruth A; Kolaczinski, Jan H; Kabatereine, Narcis B; Brooker, Simon

    2011-09-01

    In schistosomiasis control, there is a need to geographically target treatment to populations at high risk of morbidity. This paper evaluates alternative sampling strategies for surveys of Schistosoma mansoni to target mass drug administration in Kenya and Ethiopia. Two main designs are considered: lot quality assurance sampling (LQAS) of children from all schools; and a geostatistical design that samples a subset of schools and uses semi-variogram analysis and spatial interpolation to predict prevalence in the remaining unsurveyed schools. Computerized simulations are used to investigate the performance of sampling strategies in correctly classifying schools according to treatment needs and their cost-effectiveness in identifying high prevalence schools. LQAS performs better than geostatistical sampling in correctly classifying schools, but at a higher cost per high-prevalence school correctly classified. It is suggested that the optimal surveying strategy for S. mansoni needs to take into account the goals of the control programme and the financial and drug resources available.
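
    An LQAS design of the kind compared above classifies a school from a small fixed sample using a decision value; the sketch below computes the resulting classification probabilities for an assumed lot size and threshold (both numbers are illustrative, not the survey's actual design parameters).

        from scipy.stats import binom

        # LQAS decision rule (sketch): sample n children per school and flag
        # the school for mass treatment if more than d test positive.
        n, d = 15, 3            # hypothetical lot size and decision value
        for true_prev in (0.05, 0.10, 0.25, 0.50):
            p_flag = 1.0 - binom.cdf(d, n, true_prev)
            print(f"prevalence {true_prev:4.0%} -> P(classified high) = {p_flag:.2f}")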

  19. Robust real-time pattern matching using Bayesian sequential hypothesis testing.

    Science.gov (United States)

    Pele, Ofir; Werman, Michael

    2008-08-01

    This paper describes a method for robust real time pattern matching. We first introduce a family of image distance measures, the "Image Hamming Distance Family". Members of this family are robust to occlusion, small geometrical transforms, light changes and non-rigid deformations. We then present a novel Bayesian framework for sequential hypothesis testing on finite populations. Based on this framework, we design an optimal rejection/acceptance sampling algorithm. This algorithm quickly determines whether two images are similar with respect to a member of the Image Hamming Distance Family. We also present a fast framework that designs a near-optimal sampling algorithm. Extensive experimental results show that the sequential sampling algorithm performance is excellent. Implemented on a Pentium 4 3 GHz processor, detection of a pattern with 2197 pixels, in 640 x 480 pixel frames, where in each frame the pattern rotated and was highly occluded, proceeds at only 0.022 seconds per frame.
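
    A much-simplified, fixed-threshold cousin of the sequential test described above: sample pixel pairs in random order and stop as soon as the extrapolated thresholded-difference count rules out similarity. The thresholds below are arbitrary, and this heuristic rule is only a stand-in for the paper's Bayesian-optimal rejection/acceptance scheme.

        import numpy as np

        def similar_sequential(p, q, max_dist=200, alpha=1.2, seed=0):
            # Count thresholded pixel differences in random order; reject
            # early once the extrapolated count rules out similarity.
            # Acceptance (unlike the paper's optimal test) simply runs to
            # the end of the scan here.
            rng = np.random.default_rng(seed)
            mismatches = 0
            for k, i in enumerate(rng.permutation(p.size), start=1):
                mismatches += abs(int(p.flat[i]) - int(q.flat[i])) > 20
                if mismatches * p.size > alpha * max_dist * k:
                    return False, k               # early rejection
            return mismatches <= max_dist, p.size

        rng = np.random.default_rng(1)
        patch = rng.integers(0, 255, size=(47, 47))
        noisy = np.clip(patch + rng.integers(-10, 10, size=patch.shape), 0, 255)
        print(similar_sequential(patch, noisy))        # similar: full scan
        print(similar_sequential(patch, 255 - patch))  # dissimilar: stops fast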

  20. Reagent-Less and Robust Biosensor for Direct Determination of Lactate in Food Samples.

    Science.gov (United States)

    Bravo, Iria; Revenga-Parra, Mónica; Pariente, Félix; Lorenzo, Encarnación

    2017-01-13

    Lactic acid is a relevant analyte in the food industry, since it affects the flavor, freshness, and storage quality of several products, such as milk and dairy products, juices, or wines. It is the product of lactose or malo-lactic fermentation. In this work, we developed a lactate biosensor based on the immobilization of lactate oxidase (LOx) onto N,N′-Bis(3,4-dihydroxybenzylidene)-1,2-diaminobenzene Schiff base tetradentate ligand-modified gold nanoparticles (3,4DHS-AuNPs) deposited onto screen-printed carbon electrodes, which exhibit a potent electrocatalytic effect towards hydrogen peroxide oxidation/reduction. 3,4DHS-AuNPs were synthesized within a unique reaction step, in which 3,4DHS acts as reducing/capping/modifier agent for the generation of stable colloidal suspensions of Schiff base ligand-AuNPs assemblies of controlled size. The ligand, in addition to its reduction action, provides a robust coating to gold nanoparticles and a catalytic function. Lactate oxidase (LOx) catalyzes the conversion of l-lactate to pyruvate in the presence of oxygen, producing hydrogen peroxide, which is catalytically oxidized at 3,4DHS-AuNPs modified screen-printed carbon electrodes at +0.2 V. The measured electrocatalytic current is directly proportional to the concentration of peroxide, which is related to the amount of lactate present in the sample. The developed biosensor shows a detection limit of 2.6 μM lactate and a sensitivity of 5.1 ± 0.1 μA·mM⁻¹. The utility of the device has been demonstrated by the determination of the lactate content in different matrixes (white wine, beer, and yogurt). The obtained results compare well to those obtained using a standard enzymatic-spectrophotometric assay kit.
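
    Since the electrocatalytic current is reported to be directly proportional to lactate concentration, converting a measured current to a concentration is a one-line calibration; the sketch below uses the reported sensitivity and detection limit, while the blank current and sample reading are made up for illustration.

        # Calibration arithmetic using the reported sensitivity (5.1 uA/mM)
        # and detection limit (2.6 uM); the sample reading is hypothetical.
        SENSITIVITY_UA_PER_MM = 5.1
        LOD_MM = 0.0026

        def lactate_mm(i_sample_ua, i_blank_ua=0.0):
            c = (i_sample_ua - i_blank_ua) / SENSITIVITY_UA_PER_MM
            return c if c >= LOD_MM else float("nan")   # below detection limit

        print(f"{lactate_mm(12.8):.2f} mM")   # hypothetical diluted yogurt sample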

  2. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

    Modern technologies such as the Internet, corporate intranets, data warehouses, ERP systems, satellites, digital sensors, embedded systems and mobile networks are all generating such a massive amount of data that it is becoming very difficult to analyze and understand it all, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution: using a fraction of the computing resources, sampling can often provide the same level of accuracy. The sampling process requires much care because there are many factors involved in determining the correct sample size. The approach proposed in this paper tries to find a solution to this problem. Based on a statistical formula, after setting some parameters, it returns a sample size called the sufficient sample size, which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
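
    The abstract does not give the statistical formula used; as a hedged illustration only, Cochran's classical sample-size formula with a finite-population correction shows how a "sufficient sample size" grows only weakly with population size.

        import math

        def cochran_sample_size(N, p=0.5, e=0.02, z=1.96):
            # Cochran's formula with finite-population correction; p is the
            # assumed proportion, e the margin of error, z the z-score for
            # the desired confidence level.
            n0 = z ** 2 * p * (1 - p) / e ** 2
            return math.ceil(n0 / (1 + (n0 - 1) / N))

        for N in (10_000, 1_000_000, 100_000_000):
            print(f"population {N:>11,} -> sufficient sample {cochran_sample_size(N):,}")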

  3. Using Linked Survey Paradata to Improve Sampling Strategies in the Medical Expenditure Panel Survey

    Directory of Open Access Journals (Sweden)

    Mirel Lisa B.

    2017-06-01

    Full Text Available Using paradata from a prior survey that is linked to a new survey can help a survey organization develop more effective sampling strategies. One example of this type of linkage or subsampling is between the National Health Interview Survey (NHIS and the Medical Expenditure Panel Survey (MEPS. MEPS is a nationally representative sample of the U.S. civilian, noninstitutionalized population based on a complex multi-stage sample design. Each year a new sample is drawn as a subsample of households from the prior year’s NHIS. The main objective of this article is to examine how paradata from a prior survey can be used in developing a sampling scheme in a subsequent survey. A framework for optimal allocation of the sample in substrata formed for this purpose is presented and evaluated for the relative effectiveness of alternative substratification schemes. The framework is applied, using real MEPS data, to illustrate how utilizing paradata from the linked survey offers the possibility of making improvements to the sampling scheme for the subsequent survey. The improvements aim to reduce the data collection costs while maintaining or increasing effective responding sample sizes and response rates for a harder to reach population.

  4. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    Science.gov (United States)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-02-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which can ease the pressure generated by the large-scale data. The big data of a faulty roller bearing’s vibration signals is firstly reduced by a down-sample strategy while preserving the fault features by selecting peaks to represent the data segments in time domain. However, a problem arises in that the fault features may be weaker than before, since the noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which makes the fault features unable to be extracted by commonly-used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which can make a signal enhancement and reduce the sample sizes further. Moreover, it is capable of detecting fault features from a small number of samples based on orthogonal matching pursuit approach, which can overcome the shortcomings of the multiple down-sample algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults.
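
    The recovery step the abstract attributes to orthogonal matching pursuit can be reproduced on a toy signal that is sparse in a cosine basis; the sketch below down-samples far below the full rate and reconstructs the signal (the sparse "fault" spectrum is synthetic, not bearing data).

        import numpy as np
        from scipy.fft import idct
        from sklearn.linear_model import OrthogonalMatchingPursuit

        n, m = 1024, 128                     # full length, retained samples
        c_true = np.zeros(n)
        c_true[[50, 120]] = [1.0, 0.5]       # toy sparse spectrum
        Psi = idct(np.eye(n), axis=0, norm="ortho")   # sparsifying basis
        x = Psi @ c_true                     # full-rate "vibration" signal

        rng = np.random.default_rng(0)
        keep = np.sort(rng.choice(n, size=m, replace=False))  # random down-sampling
        A, y = Psi[keep, :], x[keep]         # sensing matrix and measurements

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8, fit_intercept=False)
        omp.fit(A, y)
        x_rec = Psi @ omp.coef_
        print("relative error:", np.linalg.norm(x_rec - x) / np.linalg.norm(x))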

  6. Robustness and strategies of adaptation among farmer varieties of African Rice (Oryza glaberrima) and Asian Rice (Oryza sativa) across West Africa.

    Science.gov (United States)

    Mokuwa, Alfred; Nuijten, Edwin; Okry, Florent; Teeken, Béla; Maat, Harro; Richards, Paul; Struik, Paul C

    2013-01-01

    This study offers evidence of the robustness of farmer rice varieties (Oryza glaberrima and O. sativa) in West Africa. Our experiments in five West African countries showed that farmer varieties were tolerant of sub-optimal conditions, but employed a range of strategies to cope with stress. Varieties belonging to the species Oryza glaberrima - solely the product of farmer agency - were the most successful in adapting to a range of adverse conditions. Some of the farmer selections from within the indica and japonica subspecies of O. sativa also performed well in a range of conditions, but other farmer selections from within these two subspecies were mainly limited to more specific niches. The results contradict the rather common belief that farmer varieties are only of local value. Farmer varieties should be considered by breeding programmes and used (alongside improved varieties) in dissemination projects for rural food security.

  8. Decisional tool to assess current and future process robustness in an antibody purification facility.

    Science.gov (United States)

    Stonier, Adam; Simaria, Ana Sofia; Smith, Martin; Farid, Suzanne S

    2012-07-01

    Increases in cell culture titers in existing facilities have prompted efforts to identify strategies that alleviate purification bottlenecks while controlling costs. This article describes the application of a database-driven dynamic simulation tool to identify optimal purification sizing strategies and visualize their robustness to future titer increases. The tool harnessed the benefits of MySQL to capture the process, business, and risk features of multiple purification options and better manage the large datasets required for uncertainty analysis and optimization. The database was linked to a discrete-event simulation engine so as to model the dynamic features of biopharmaceutical manufacture and impact of resource constraints. For a given titer, the tool performed brute force optimization so as to identify optimal purification sizing strategies that minimized the batch material cost while maintaining the schedule. The tool was applied to industrial case studies based on a platform monoclonal antibody purification process in a multisuite clinical scale manufacturing facility. The case studies assessed the robustness of optimal strategies to batch-to-batch titer variability and extended this to assess the long-term fit of the platform process as titers increase from 1 to 10 g/L, given a range of equipment sizes available to enable scale intensification efforts. Novel visualization plots consisting of multiple Pareto frontiers with tie-lines connecting the position of optimal configurations over a given titer range were constructed. These enabled rapid identification of robust purification configurations given titer fluctuations and the facility limit that the purification suites could handle in terms of the maximum titer and hence harvest load. Copyright © 2012 American Institute of Chemical Engineers (AIChE).
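
    The brute-force sizing search can be sketched as plain enumeration: score every candidate column size and cycle count, discard configurations that cannot process the batch or that break the schedule, and keep the cheapest survivor. Every number below (titer, volumes, capacities, costs, time limits) is invented for illustration, not taken from the case studies.

        from itertools import product

        TITER_G_L, VOLUME_L, CAPACITY_G_L = 5.0, 2000.0, 35.0
        BED_HEIGHT_CM, HOURS_PER_CYCLE, WINDOW_H = 20.0, 4.0, 24.0

        def cost_if_feasible(diameter_cm, n_cycles):
            bed_volume_l = 3.14159 * (diameter_cm / 2.0) ** 2 * BED_HEIGHT_CM / 1000.0
            if TITER_G_L * VOLUME_L > n_cycles * bed_volume_l * CAPACITY_G_L:
                return None                  # column too small for the batch
            if n_cycles * HOURS_PER_CYCLE > WINDOW_H:
                return None                  # breaks the schedule
            return bed_volume_l * 10.0 + n_cycles * bed_volume_l * 2.0  # resin + buffer

        options = {(d, c): cost_if_feasible(d, c)
                   for d, c in product([60, 80, 100], [1, 2, 3, 4])}
        feasible = {k: v for k, v in options.items() if v is not None}
        best = min(feasible, key=feasible.get)
        print("best (diameter cm, cycles):", best, "cost:", round(feasible[best], 1))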

  9. Robust Growth Determinants

    OpenAIRE

    Doppelhofer, Gernot; Weeks, Melvyn

    2011-01-01

    This paper investigates the robustness of determinants of economic growth in the presence of model uncertainty, parameter heterogeneity and outliers. The robust model averaging approach introduced in the paper uses a flexible and parsimonious mixture modeling that allows for fat-tailed errors compared to the normal benchmark case. Applying robust model averaging to growth determinants, the paper finds that eight out of eighteen variables found to be significantly related to economic growth ...

  10. International Conference on Robust Rank-Based and Nonparametric Methods

    CERN Document Server

    McKean, Joseph

    2016-01-01

    The contributors to this volume include many of the distinguished researchers in this area. Many of these scholars have collaborated with Joseph McKean to develop underlying theory for these methods, obtain small sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, including robust nonparametric rank-based procedures through Bayesian and big data rank-based analyses. Areas of application include biostatistics and spatial areas. Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems. Research into these procedures has culminated in complete analyses for many of the models used in practice including linear, generalized linear, mixed, and nonlinear models. Settings are both multivariate and univariate. With the development of R packages in these areas, computation of these procedures is easily shared with r...

  11. A novel one-step strategy toward ZnMn2O4/N-doped graphene nanosheets with robust chemical interaction for superior lithium storage

    International Nuclear Information System (INIS)

    Wang, Dong; Zhou, Weiwei; Zhang, Yong; Wang, Yali; Wu, Gangan; Yu, Kun; Wen, Guangwu

    2016-01-01

    Ingenious hybrid electrode design, especially realized with a facile strategy, is appealing yet challenging for electrochemical energy storage devices. Here, we report the synthesis of a novel ZnMn2O4/N-doped graphene (ZMO/NG) nanohybrid with sandwiched structure via a facile one-step approach, in which ultrafine ZMO nanoparticles with diameters of 10–12 nm are well dispersed on both surfaces of N-doped graphene (NG) nanosheets. Note that one-step synthetic strategies are rarely reported for ZMO-based nanostructures. Systematical control experiments reveal that the formation of well-dispersed ZMO nanoparticles is not solely ascribed to the restriction effect of the functional groups on graphene oxide (GO), but also to the presence of ammonia. Benefitting from the synergistic effects and robust chemical interaction between ZMO nanoparticles and N-doped graphene nanosheets, the ZMO/NG hybrids deliver a reversible capacity up to 747 mAh g⁻¹ after 200 cycles at a current density of 500 mA g⁻¹. Even at a high current density of 3200 mA g⁻¹, an unrivaled capacity of 500 mAh g⁻¹ can still be retained, corroborating the good rate capability. (paper)

  12. Measuring strategies for learning regulation in medical education: scale reliability and dimensionality in a Swedish sample.

    Science.gov (United States)

    Edelbring, Samuel

    2012-08-15

    The degree of learners' self-regulated learning and dependence on external regulation influence learning processes in higher education. These regulation strategies are commonly measured by questionnaires developed in other settings than in which they are being used, thereby requiring renewed validation. The aim of this study was to psychometrically evaluate the learning regulation strategy scales from the Inventory of Learning Styles with Swedish medical students (N = 206). The regulation scales were evaluated regarding their reliability, scale dimensionality and interrelations. The primary evaluation focused on dimensionality and was performed with Mokken scale analysis. To assist future scale refinement, additional item analysis, such as item-to-scale correlations, was performed. Scale scores in the Swedish sample displayed good reliability in relation to published results: Cronbach's alpha: 0.82, 0.72, and 0.65 for self-regulation, external regulation and lack of regulation scales respectively. The dimensionalities in scales were adequate for self-regulation and its subscales, whereas external regulation and lack of regulation displayed less unidimensionality. The established theoretical scales were largely replicated in the exploratory analysis. The item analysis identified two items that contributed little to their respective scales. The results indicate that these scales have an adequate capacity for detecting the three theoretically proposed learning regulation strategies in the medical education sample. Further construct validity should be sought by interpreting scale scores in relation to specific learning activities. Using established scales for measuring students' regulation strategies enables a broad empirical base for increasing knowledge on regulation strategies in relation to different disciplinary settings and contributes to theoretical development.
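
    Cronbach's alpha, the reliability coefficient reported above, is straightforward to compute from an item-score matrix; the data below are simulated from a single latent trait, purely for illustration.

        import numpy as np

        def cronbach_alpha(items):
            # items: (n_respondents, n_items) score matrix.
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var / total_var)

        rng = np.random.default_rng(0)
        latent = rng.normal(size=(206, 1))                       # one simulated trait
        scores = latent + rng.normal(scale=0.8, size=(206, 7))   # 7 noisy items
        print(round(cronbach_alpha(scores), 2))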

  13. Robust sawtooth period control based on adaptive online optimization

    International Nuclear Information System (INIS)

    Bolder, J.J.; Witvoet, G.; De Baar, M.R.; Steinbuch, M.; Van de Wouw, N.; Haring, M.A.M.; Westerhof, E.; Doelman, N.J.

    2012-01-01

    The systematic design of a robust adaptive control strategy for the sawtooth period using electron cyclotron current drive (ECCD) is presented. Recent developments in extremum seeking control (ESC) are employed to derive an optimized controller structure and offer practical tuning guidelines for its parameters. In this technique a cost function in terms of the desired sawtooth period is optimized online by changing the ECCD deposition location based on online estimations of the gradient of the cost function. The controller design does not require a detailed model of the sawtooth instability. Therefore, the proposed ESC is widely applicable to any sawtoothing plasma or plasma simulation and is inherently robust against uncertainties or plasma variations. Moreover, it can handle a broad class of disturbances. This is demonstrated by time-domain simulations, which show successful tracking of time-varying sawtooth period references throughout the whole operating space, even in the presence of variations in plasma parameters, disturbances and slow launcher mirror dynamics. Due to its simplicity and robustness the proposed ESC is a valuable sawtooth control candidate for any experimental tokamak plasma, and may even be applicable to other fusion-related control problems. (paper)
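
    A minimal perturbation-based extremum seeking loop illustrating the idea: dither the actuator, demodulate the measured cost to estimate its gradient, and descend, with no model of the plant. The sawtooth-period map and all tuning constants below are invented toy values, not the paper's controller settings.

        import numpy as np

        def tau_saw(u):                       # unknown to the controller
            return 0.10 + 0.8 * (u - 0.3) ** 2

        tau_ref, u = 0.12, 0.10               # reference period, initial location
        a, omega, gain, dt = 0.01, 5.0, 0.4, 0.05
        grad_est = 0.0
        for k in range(4000):
            t = k * dt
            probe = u + a * np.sin(omega * t)            # dither the actuator
            J = (tau_saw(probe) - tau_ref) ** 2          # measured cost
            demod = J * np.sin(omega * t) / a            # demodulated gradient
            grad_est += dt * 2.0 * (demod - grad_est)    # low-pass filter
            u -= dt * gain * grad_est                    # descend the cost
        print(f"deposition location u = {u:.3f}, period = {tau_saw(u):.3f}")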

  14. Microgrid Stability Controller Based on Adaptive Robust Total SMC

    DEFF Research Database (Denmark)

    Su, Xiaoling; Han, Minxiao; Guerrero, Josep M.

    2015-01-01

    This paper presents a microgrid stability controller (MSC) in order to provide existing DGs the additional functionality of working in islanding mode without changing their control strategies in grid-connected mode and to enhance the stability of the microgrid. Microgrid operating characteristics and mathematical models of the MSC indicate that the system is inherently nonlinear and time-variable. Therefore, this paper proposes an adaptive robust total sliding-mode control (ARTSMC) system for the MSC. It is proved that the ARTSMC system is insensitive to parametric uncertainties and external disturbances. The MSC provides fast dynamic response and robustness to the microgrid. When the system is operating in grid-connected mode, it is able to improve the controllability of the exchanged power between the microgrid and the utility grid, while smoothing DG’s output power. When the microgrid is operating...

  15. Robust chaotic control of Lorenz system by backstepping design

    International Nuclear Information System (INIS)

    Peng, C.-C.; Chen, C.-L.

    2008-01-01

    This work presents a robust chaotic control strategy for the Lorenz chaos via backstepping design. Backstepping technique is a systematic tool of control law design to provide Lyapunov stability. The concept of extended system is used such that a continuous sliding mode control (SMC) effort is generated using backstepping scheme. In the proposed control algorithm, an adaptation law is applied to estimate the system parameter and the SMC offers the robustness to model uncertainties and external disturbances so that the asymptotical convergence of tracking error can be achieved. Regarding the SMC, an equivalent control algorithm is chosen based on the selection of Lyapunov stability criterion during backstepping approach. The converging rate of error state is relative to the corresponding dynamics of sliding surface. Numerical simulations demonstrate its advantages to a regulation problem and an orbit tracking problem of the Lorenz chaos
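
    The flavor of the sliding-mode part of the design can be shown on a toy tracking problem: add a control input to the Lorenz x-equation, cancel the known drift, and apply a switching term on the sliding surface s = x - x_d. This is a simplified stand-in, not the paper's full backstepping construction with parameter adaptation.

        import numpy as np

        sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
        k_sw, dt = 5.0, 1e-3
        x, y, z = 1.0, 1.0, 1.0
        for i in range(20000):
            t = i * dt
            xd, xd_dot = 2.0 * np.sin(t), 2.0 * np.cos(t)   # reference orbit
            s = x - xd                                      # sliding surface
            u = -sigma * (y - x) + xd_dot - k_sw * np.sign(s) - 10.0 * s
            dx = sigma * (y - x) + u                        # controlled equation
            dy = x * (rho - z) - y
            dz = x * y - beta * z
            x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        print(f"tracking error at t = 20 s: {x - 2.0 * np.sin(20.0):+.4f}")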

  16. Performance and robustness of hybrid model predictive control for controllable dampers in building models

    Science.gov (United States)

    Johnson, Erik A.; Elhaddad, Wael M.; Wojtkiewicz, Steven F.

    2016-04-01

    A variety of strategies have been developed over the past few decades to determine controllable damping device forces to mitigate the response of structures and mechanical systems to natural hazards and other excitations. These "smart" damping devices produce forces through passive means but have properties that can be controlled in real time, based on sensor measurements of response across the structure, to dramatically reduce structural motion by exploiting more than the local "information" that is available to purely passive devices. A common strategy is to design optimal damping forces using active control approaches and then try to reproduce those forces with the smart damper. However, these design forces, for some structures and performance objectives, may achieve high performance by selectively adding energy, which cannot be replicated by a controllable damping device, causing the smart damper performance to fall far short of what an active system would provide. The authors have recently demonstrated that a model predictive control strategy using hybrid system models, which utilize both continuous and binary states (the latter to capture the switching behavior between dissipative and non-dissipative forces), can provide reductions in structural response on the order of 50% relative to the conventional clipped-optimal design strategy. This paper explores the robustness of this newly proposed control strategy through evaluating controllable damper performance when the structure model differs from the nominal one used to design the damping strategy. Results from the application to a two-degree-of-freedom structure model confirms the robustness of the proposed strategy.
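
    The clipped-optimal baseline that the hybrid MPC strategy is compared against reduces, per time step, to a simple rule: command the actively designed force only when the damper can actually dissipate it. A sketch, under the sign convention that a dissipative device force opposes the relative velocity across it:

        import numpy as np

        def clipped_force(f_desired, rel_velocity, f_max):
            # Pass the desired force through only in the dissipative
            # quadrant; a controllable damper cannot add energy.
            if f_desired * rel_velocity < 0:
                return float(np.clip(f_desired, -f_max, f_max))
            return 0.0

        for fd, v in [(-800.0, 0.1), (500.0, 0.1), (1500.0, -0.2)]:
            f = clipped_force(fd, v, f_max=1000.0)
            print(f"desired {fd:7.1f} N at v = {v:5.2f} m/s -> {f:7.1f} N")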

  17. Catch, effort and sampling strategies in the highly variable sardine fisheries around East Java, Indonesia.

    NARCIS (Netherlands)

    Pet, J.S.; Densen, van W.L.T.; Machiels, M.A.M.; Sukkel, M.; Setyohady, D.; Tumuljadi, A.

    1997-01-01

    Temporal and spatial patterns in the fishery for Sardinella spp. around East Java, Indonesia, were studied in an attempt to develop an efficient catch and effort sampling strategy for this highly variable fishery. The inter-annual and monthly variation in catch, effort and catch per unit of effort

  18. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    Science.gov (United States)

    Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.

    2014-01-01

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes
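
    The two-dimensional variable-density subsampling idea can be illustrated by a random Cartesian mask whose sampling probability decays with k-space radius; the density law and target acceleration below are illustrative choices, not the paper's exact schemes.

        import numpy as np

        def variable_density_mask(ny, nz, accel=4.0, power=2.0, seed=0):
            # Sampling probability decays with k-space radius (density varied
            # in two dimensions), scaled to a target acceleration factor.
            rng = np.random.default_rng(seed)
            ky, kz = np.meshgrid(np.linspace(-1, 1, ny),
                                 np.linspace(-1, 1, nz), indexing="ij")
            r = np.sqrt(ky ** 2 + kz ** 2) / np.sqrt(2.0)
            prob = (1.0 - r) ** power
            prob *= (ny * nz / accel) / prob.sum()
            return rng.random((ny, nz)) < np.clip(prob, 0.0, 1.0)

        mask = variable_density_mask(128, 96)
        print(f"achieved acceleration: {mask.size / mask.sum():.2f}x")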

  19. Robust sliding-window reconstruction for accelerating the acquisition of MR fingerprinting.

    Science.gov (United States)

    Cao, Xiaozhi; Liao, Congyu; Wang, Zhixing; Chen, Ying; Ye, Huihui; He, Hongjian; Zhong, Jianhui

    2017-10-01

    To develop a method for accelerated and robust MR fingerprinting (MRF) with improved image reconstruction and parameter matching processes. A sliding-window (SW) strategy was applied to MRF, in which signal and dictionary matching was conducted between fingerprints consisting of mixed-contrast image series reconstructed from consecutive data frames segmented by a sliding window, and a precalculated mixed-contrast dictionary. The effectiveness and performance of this new method, dubbed SW-MRF, was evaluated in both phantom and in vivo. Error quantifications were conducted on results obtained with various settings of SW reconstruction parameters. Compared with the original MRF strategy, the results of both phantom and in vivo experiments demonstrate that the proposed SW-MRF strategy either provided similar accuracy with reduced acquisition time, or improved accuracy with equal acquisition time. Parametric maps of T1, T2, and proton density of comparable quality could be achieved with a two-fold or more reduction in acquisition time. The effect of sliding-window width on dictionary sensitivity was also estimated. The novel SW-MRF recovers high quality image frames from highly undersampled MRF data, which enables more robust dictionary matching with reduced numbers of data frames. This time efficiency may facilitate MRF applications in time-critical clinical settings. Magn Reson Med 78:1579-1588, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  20. Robust Power Management Control for Stand-Alone Hybrid Power Generation System

    International Nuclear Information System (INIS)

    Kamal, Elkhatib; Adouane, Lounis; Aitouche, Abdel; Mohammed, Walaa

    2017-01-01

    This paper presents a new robust fuzzy energy management control strategy for stand-alone hybrid power systems. It consists of two levels: a centralized fuzzy supervisory controller, which generates the power references for each decentralized robust fuzzy controller. The hybrid power system comprises a photovoltaic panel and wind turbine as renewable sources, a micro turbine generator and a battery storage system. The proposed control strategy is able to satisfy the load requirements based on a fuzzy supervisor controller and manage power flows between the different energy sources and the storage unit by respecting the state of charge and the variation of wind speed and irradiance. The centralized controller is designed based on If-Then fuzzy rules to manage and optimize the hybrid power system production by generating the reference power for the photovoltaic panel and wind turbine. The decentralized controller is based on the Takagi-Sugeno fuzzy model and permits us to stabilize each photovoltaic panel and wind turbine in the presence of disturbances and parametric uncertainties and to optimize the tracking reference which is given by the centralized controller level. The sufficient stability conditions are formulated as linear matrix inequalities using Lyapunov stability theory. The effectiveness of the overall proposed method is finally demonstrated through a stand-alone hybrid power system (SAHPS) example. (paper)

  1. Sampling in forests for radionuclide analysis. General and practical guidance

    Energy Technology Data Exchange (ETDEWEB)

    Aro, Lasse (Finnish Forest Research Inst. (METLA) (Finland)); Plamboeck, Agneta H. (Swedish Defence Research Agency (FOI) (Sweden)); Rantavaara, Aino; Vetikko, Virve (Radiation and Nuclear Safety Authority (STUK) (Finland)); Straalberg, Elisabeth (Inst. Energy Technology (IFE) (Norway))

    2009-01-15

    The NKS project FOREST was established to prepare a guide for sampling in forest ecosystems for radionuclide analysis. The aim of this guide is to improve the reliability of datasets generated in future studies by promoting the use of consistent, recommended practices, thorough documentation of field sampling regimes and robust preparation of samples from the forest ecosystem. The guide covers general aims of sampling, the description of major compartments of the forest ecosystem and outlines key factors to consider when planning sampling campaigns for radioecological field studies in forests. Recommended and known sampling methods for various sample types are also compiled and presented. The guide focuses on sampling practices that are applicable in various types of boreal forests, robust descriptions of sampling sites, and documentation of the origin and details of individual samples. The guide is intended for scientists, students, forestry experts and technicians who appreciate the need to use sound sampling procedures in forest radioecological projects. The guide will hopefully encourage readers to participate in field studies and sampling campaigns, using robust techniques, thereby fostering competence in sampling. (au)

  3. Design of Robust Adaptive Array Processors for Non-Stationary Ocean Environments

    National Research Council Canada - National Science Library

    Wage, Kathleen E

    2009-01-01

    The overall goal of this project is to design adaptive array processing algorithms that have good transient performance, are robust to mismatch, work with low sample support, and incorporate waveguide...

  4. Field screening sampling and analysis strategy and methodology for the 183-H Solar Evaporation Basins: Phase 2, Soils

    International Nuclear Information System (INIS)

    Antipas, A.; Hopkins, A.M.; Wasemiller, M.A.; McCain, R.G.

    1996-01-01

    This document provides a sampling/analytical strategy and methodology for Resource Conservation and Recovery Act (RCRA) closure of the 183-H Solar Evaporation Basins within the boundaries and requirements identified in the initial Phase II Sampling and Analysis Plan for RCRA Closure of the 183-H Solar Evaporation Basins

  5. Modelling of in-stream nitrogen and phosphorus concentrations using different sampling strategies for calibration data

    Science.gov (United States)

    Jomaa, Seifeddine; Jiang, Sanyuan; Yang, Xiaoqiang; Rode, Michael

    2016-04-01

    It is known that a good evaluation and prediction of surface water pollution is mainly limited by the monitoring strategy and the capability of the hydrological water quality model to reproduce the internal processes. To this end, a compromise sampling frequency, which can reflect the dynamical behaviour of leached nutrient fluxes responding to changes in land use, agriculture practices and point sources, and an appropriate process-based water quality model are required. The objective of this study was to test the identification of hydrological water quality model parameters (nitrogen and phosphorus) under two different monitoring strategies: (1) a regular grab-sampling approach and (2) regular grab-sampling with additional monitoring during the hydrological events using automatic samplers. First, the semi-distributed hydrological water quality HYPE (Hydrological Predictions for the Environment) model was successfully calibrated (1994-1998) for discharge (NSE = 0.86), nitrate-N (lowest NSE for nitrate-N load = 0.69), particulate phosphorus and soluble phosphorus in the Selke catchment (463 km2, central Germany) using the regular grab-sampling approach (biweekly to monthly for nitrogen and phosphorus concentrations). Second, the model was successfully validated during the period 1999-2010 for discharge, nitrate-N, particulate phosphorus and soluble phosphorus (lowest NSE for soluble phosphorus load = 0.54). Results showed that when additional sampling during the events with random grab-sampling approach was used (period 2011-2013), the hydrological model could reproduce only the nitrate-N and soluble phosphorus concentrations reasonably well. However, when additional sampling during the hydrological events was considered, the HYPE model could not represent the measured particulate phosphorus. This reflects the importance of suspended sediment during the hydrological events increasing the concentrations of particulate phosphorus. The HYPE model could

  6. Regional Scale Modelling for Exploring Energy Strategies for Africa

    International Nuclear Information System (INIS)

    Welsch, M.

    2015-01-01

    KTH Royal Institute of Technology was founded in 1827 and is the largest technical university in Sweden, with five campuses and around 15,000 students. KTH-dESA combines outstanding knowledge in the field of energy systems analysis, as demonstrated by successful collaborations with many (UN) organizations. Regional scale modelling for exploring energy strategies for Africa includes: assessing renewable energy potentials; analysing investment strategies; assessing climate resilience; comparing electrification options; providing web-based decision support; and quantifying energy access. It is concluded that strategies are required to ensure a robust and flexible energy system (-> no-regret choices); that capacity investments should be in line with national and regional strategies; that climate change is important to consider, as it may strongly influence the energy flows in a region; and that long-term models can help identify robust energy investment strategies and pathways that can help assess future markets and the profitability of individual projects.

  7. Design optimization of a robust sleeve antenna for hepatic microwave ablation

    International Nuclear Information System (INIS)

    Prakash, Punit; Webster, John G; Deng Geng; Converse, Mark C; Mahvi, David M; Ferris, Michael C

    2008-01-01

    We describe the application of a Bayesian variable-number sample-path (VNSP) optimization algorithm to yield a robust design for a floating sleeve antenna for hepatic microwave ablation. Finite element models are used to generate the electromagnetic (EM) field and thermal distribution in liver given a particular design. Dielectric properties of the tissue are assumed to vary within ± 10% of average properties to simulate the variation among individuals. The Bayesian VNSP algorithm yields an optimal design that is a 14.3% improvement over the original design and is more robust in terms of lesion size, shape and efficiency. Moreover, the Bayesian VNSP algorithm finds an optimal solution while saving 68.2% of the simulation evaluations compared to the standard sample-path optimization method.

  8. Robustness of structures

    DEFF Research Database (Denmark)

    Vrouwenvelder, T.; Sørensen, John Dalsgaard

    2009-01-01

    After the collapse of the World Trade Centre towers in 2001 and a number of collapses of structural systems in the beginning of the century, robustness of structural systems has gained renewed interest. Despite many significant theoretical, methodical and technological advances, structural...... of robustness for structural design such requirements are not substantiated in more detail, nor has the engineering profession been able to agree on an interpretation of robustness which facilitates its quantification. A European COST action TU 601 on ‘Robustness of structures' was started in 2007...... by a group of members of the CSS. This paper describes the ongoing work in this action, with emphasis on the development of a theoretical and risk-based quantification and optimization procedure on the one side and a practical pre-normative guideline on the other....

  9. Many-objective robust decision making for water allocation under climate change.

    Science.gov (United States)

    Yan, Dan; Ludwig, Fulco; Huang, He Qing; Werners, Saskia E

    2017-12-31

    Water allocation is facing profound challenges due to climate change uncertainties. To identify adaptive water allocation strategies that are robust to climate change uncertainties, a model framework combining many-objective robust decision making and biophysical modeling is developed for large rivers. The framework was applied to the Pearl River basin (PRB), China, where sufficient flow to the delta is required to reduce saltwater intrusion in the dry season. Before identifying and assessing robust water allocation plans for the future, the performance of ten state-of-the-art MOEAs (multi-objective evolutionary algorithms) was evaluated for the water allocation problem in the PRB. The Borg multi-objective evolutionary algorithm (Borg MOEA), a self-adaptive optimization algorithm, had the best performance during the historical periods; it was therefore selected to generate new water allocation plans for the future (2079-2099). This study shows that robust decision making using carefully selected MOEAs can help limit saltwater intrusion in the Pearl River Delta. However, the framework could perform poorly due to larger than expected climate change impacts on water availability. Results also show that subjective design choices by the researchers and/or water managers could potentially affect the performance of the model framework and cause the most robust water allocation plans to fail under future climate change. Developing robust allocation plans in a river basin suffering from increasing water shortage requires researchers and water managers to characterize well both the future climate change of the study regions and the vulnerabilities of their tools. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Advanced process monitoring and feedback control to enhance cell culture process production and robustness.

    Science.gov (United States)

    Zhang, An; Tsang, Valerie Liu; Moore, Brandon; Shen, Vivian; Huang, Yao-Ming; Kshirsagar, Rashmi; Ryll, Thomas

    2015-12-01

    It is a common practice in biotherapeutic manufacturing to define a fixed-volume feed strategy for nutrient feeds, based on historical cell demand. However, once the feed volumes are defined, they are inflexible to batch-to-batch variations in cell growth and physiology and can lead to inconsistent productivity and product quality. In an effort to control critical quality attributes and to apply process analytical technology (PAT), a fully automated cell culture feedback control system has been explored in three different applications. The first study illustrates that frequent monitoring and automatically controlling the complex feed based on a surrogate (glutamate) level improved protein production. More importantly, the resulting feed strategy was translated into a manufacturing-friendly manual feed strategy without impact on product quality. The second study demonstrates the improved process robustness of an automated feed strategy based on online bio-capacitance measurements for cell growth. In the third study, glucose and lactate concentrations were measured online and were used to automatically control the glucose feed, which in turn changed lactate metabolism. These studies suggest that the auto-feedback control system has the potential to significantly increase productivity and improve robustness in manufacturing, with the goal of ensuring process performance and product quality consistency. © 2015 Wiley Periodicals, Inc.

  11. A simple nonstationary-volatility robust panel unit root test

    NARCIS (Netherlands)

    Demetrescu, Matei; Hanck, Christoph

    2012-01-01

    We propose an IV panel unit root test robust to nonstationary error volatility. Its finite-sample performance is convincing even for many units and strong cross-correlation. An application to GDP prices illustrates the inferential impact of nonstationary volatility. (C) 2012 Elsevier B.V. All rights reserved.

  12. Robust optimization of robotic pick and place operations for deformable objects through simulation

    DEFF Research Database (Denmark)

    Bo Jorgensen, Troels; Debrabant, Kristian; Kruger, Norbert

    2016-01-01

    for the task. The solutions are parameterized in terms of the robot motion and the gripper configuration, and after each simulation various objective scores are determined and combined. This enables the use of various optimization strategies. Based on visual inspection of the most robust solution found...

  13. A Parametric Learning and Identification Based Robust Iterative Learning Control for Time Varying Delay Systems

    Directory of Open Access Journals (Sweden)

    Lun Zhai

    2014-01-01

    Full Text Available A parametric learning based robust iterative learning control (ILC) scheme is applied to time-varying-delay multiple-input multiple-output (MIMO) linear systems. The convergence conditions are derived by using the H∞ and linear matrix inequality (LMI) approaches, and the convergence speed is analyzed as well. A practical identification strategy is applied to optimize the learning laws and to improve the robustness and performance of the control system. Numerical simulations are presented to validate the above concepts.
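
    As a minimal illustration of the learning-update idea (not the paper's H∞/LMI gain design), the sketch below applies a P-type ILC law to a hypothetical first-order plant with input delay; the plant parameters, delay d and learning gain L are assumptions chosen for illustration.

    ```python
    import numpy as np

    # P-type iterative learning control on a toy first-order SISO plant
    # y[k] = a*y[k-1] + b*u[k-d]. The learning gain L is hand-picked here;
    # in the paper it would come from the H-infinity/LMI design.

    def simulate(u, a=0.8, b=1.0, d=2, N=50):
        y = np.zeros(N)
        for k in range(1, N):
            uk = u[k - d] if k - d >= 0 else 0.0
            y[k] = a * y[k - 1] + b * uk
        return y

    N, d, L = 50, 2, 0.5
    y_ref = np.sin(np.linspace(0, 2 * np.pi, N))   # desired trajectory
    u = np.zeros(N)

    for trial in range(30):
        y = simulate(u)
        e = y_ref - y
        u[:-d] += L * e[d:]        # shift the error by the plant delay

    # error on the part of the horizon the delayed input can actually reach
    print(f"max tracking error after learning: {np.abs(e[d:]).max():.2e}")
    ```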

  14. Measuring strategies for learning regulation in medical education: Scale reliability and dimensionality in a Swedish sample

    Directory of Open Access Journals (Sweden)

    Edelbring Samuel

    2012-08-01

    Full Text Available Abstract Background The degree of learners’ self-regulated learning and dependence on external regulation influence learning processes in higher education. These regulation strategies are commonly measured by questionnaires developed in settings other than those in which they are used, thereby requiring renewed validation. The aim of this study was to psychometrically evaluate the learning regulation strategy scales from the Inventory of Learning Styles with Swedish medical students (N = 206). Methods The regulation scales were evaluated regarding their reliability, scale dimensionality and interrelations. The primary evaluation focused on dimensionality and was performed with Mokken scale analysis. To assist future scale refinement, additional item analysis, such as item-to-scale correlations, was performed. Results Scale scores in the Swedish sample displayed good reliability in relation to published results: Cronbach’s alpha: 0.82, 0.72, and 0.65 for the self-regulation, external regulation and lack of regulation scales respectively. The dimensionality of the scales was adequate for self-regulation and its subscales, whereas external regulation and lack of regulation displayed less unidimensionality. The established theoretical scales were largely replicated in the exploratory analysis. The item analysis identified two items that contributed little to their respective scales. Discussion The results indicate that these scales have an adequate capacity for detecting the three theoretically proposed learning regulation strategies in the medical education sample. Further construct validity should be sought by interpreting scale scores in relation to specific learning activities. Using established scales for measuring students’ regulation strategies enables a broad empirical base for increasing knowledge on regulation strategies in relation to different disciplinary settings and contributes to theoretical development.
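
    For reference, reliability coefficients like those quoted above can be reproduced from any respondents-by-items score matrix with a few lines of code; the sketch below uses randomly generated stand-in data, not the Inventory of Learning Styles responses.

    ```python
    import numpy as np

    # Cronbach's alpha for a scale: alpha = k/(k-1) * (1 - sum(item var)/var(sum)).
    # The data below are simulated stand-ins (one common trait plus item noise).

    def cronbach_alpha(scores):
        """scores: (n_respondents, n_items) array of item scores."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]                         # number of items
        item_vars = scores.var(axis=0, ddof=1)      # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)  # variance of scale sums
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(206, 1))                      # common trait
    items = latent + rng.normal(scale=1.0, size=(206, 7))   # 7 noisy items
    print(f"alpha = {cronbach_alpha(items):.2f}")
    ```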

  15. Efficient and robust gradient enhanced Kriging emulators.

    Energy Technology Data Exchange (ETDEWEB)

    Dalbey, Keith R.

    2013-08-01

    “Naive” or straight-forward Kriging implementations can often perform poorly in practice. The relevant features of the robustly accurate and efficient Kriging and Gradient Enhanced Kriging (GEK) implementations in the DAKOTA software package are detailed herein. The principal contribution is a novel, effective, and efficient approach to handle ill-conditioning of GEK's “correlation” matrix, RÑ, based on a pivoted Cholesky factorization of Kriging's (not GEK's) correlation matrix, R, which is a small sub-matrix within GEK's RÑ matrix. The approach discards sample points/equations that contribute the least “new” information to RÑ. Since these points contain the least new information, they are the ones which when discarded are both the easiest to predict and provide maximum improvement of RÑ's conditioning. Prior to this work, handling ill-conditioned correlation matrices was a major, perhaps the principal, unsolved challenge necessary for robust and efficient GEK emulators. Numerical results demonstrate that GEK predictions can be significantly more accurate when GEK is allowed to discard points by the presented method. Numerical results also indicate that GEK can be used to break the curse of dimensionality by exploiting inexpensive derivatives (such as those provided by automatic differentiation or adjoint techniques), smoothness in the response being modeled, and adaptive sampling. Development of a suitable adaptive sampling algorithm was beyond the scope of this work; instead adaptive sampling was approximated by omitting the cost of samples discarded by the presented pivoted Cholesky approach.
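
    A minimal sketch of the point-discarding idea follows: a greedy pivoted Cholesky factorization keeps, at each step, the point with the largest remaining conditional variance, and stops once the pivot drops below a tolerance. This illustrates the general technique, not DAKOTA's implementation; the Gaussian correlation model and tolerance are assumptions.

    ```python
    import numpy as np

    # Greedy pivoted Cholesky of a correlation matrix R. Points selected late
    # contribute little "new" information; once the pivot (the largest
    # remaining conditional variance) falls below tol, the rest are discarded.

    def pivoted_cholesky_keep(R, tol=1e-8):
        R = np.array(R, dtype=float)
        n = R.shape[0]
        perm = np.arange(n)
        L = np.zeros((n, n))
        d = np.diag(R).copy()           # remaining conditional variances
        for j in range(n):
            p = j + np.argmax(d[j:])    # pivot: largest remaining diagonal
            if d[p] < tol:
                return perm[:j]         # indices of the points worth keeping
            # bring pivot p into position j (rows/cols of R, rows of L, perm, d)
            perm[[j, p]] = perm[[p, j]]
            R[[j, p], :] = R[[p, j], :]
            R[:, [j, p]] = R[:, [p, j]]
            L[[j, p], :j] = L[[p, j], :j]
            d[[j, p]] = d[[p, j]]
            L[j, j] = np.sqrt(d[j])
            L[j + 1:, j] = (R[j + 1:, j] - L[j + 1:, :j] @ L[j, :j]) / L[j, j]
            d[j + 1:] -= L[j + 1:, j] ** 2
        return perm

    # Two nearly duplicated sample points make R ill-conditioned:
    x = np.array([0.0, 0.5, 0.5000001, 1.0])
    R = np.exp(-10.0 * (x[:, None] - x[None, :]) ** 2)   # Gaussian correlation
    # One of the two near-duplicates is dropped:
    print("kept points:", pivoted_cholesky_keep(R, tol=1e-6))
    ```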

  16. SASqPCR: robust and rapid analysis of RT-qPCR data in SAS.

    Directory of Open Access Journals (Sweden)

    Daijun Ling

    Full Text Available Reverse transcription quantitative real-time PCR (RT-qPCR is a key method for measurement of relative gene expression. Analysis of RT-qPCR data requires many iterative computations for data normalization and analytical optimization. Currently no computer program for RT-qPCR data analysis is suitable for analytical optimization and user-controllable customization based on data quality, experimental design as well as specific research aims. Here I introduce an all-in-one computer program, SASqPCR, for robust and rapid analysis of RT-qPCR data in SAS. This program has multiple macros for assessment of PCR efficiencies, validation of reference genes, optimization of data normalizers, normalization of confounding variations across samples, and statistical comparison of target gene expression in parallel samples. Users can simply change the macro variables to test various analytical strategies, optimize results and customize the analytical processes. In addition, it is highly automatic and functionally extendable. Thus users are the actual decision-makers controlling RT-qPCR data analyses. SASqPCR and its tutorial are freely available at http://code.google.com/p/sasqpcr/downloads/list.
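
    SASqPCR itself is a SAS program; as a language-neutral illustration of the relative-quantification arithmetic that such tools automate, the sketch below computes an efficiency-corrected fold change (a Pfaffl-style ratio) from hypothetical Ct values and efficiencies.

    ```python
    import numpy as np

    # Generic relative quantification arithmetic (efficiency-corrected
    # delta-delta-Ct). All Ct values and efficiencies are hypothetical.

    def relative_expression(ct_target, ct_ref, E_target=2.0, E_ref=2.0):
        """Pfaffl-style ratio: E_t^dCt_target / E_r^dCt_ref, where each
        dCt is (control mean - treated mean) for that gene."""
        d_ct_t = np.mean(ct_target["control"]) - np.mean(ct_target["treated"])
        d_ct_r = np.mean(ct_ref["control"]) - np.mean(ct_ref["treated"])
        return (E_target ** d_ct_t) / (E_ref ** d_ct_r)

    target = {"control": [24.1, 24.3, 24.0], "treated": [22.0, 22.2, 21.9]}
    ref = {"control": [18.0, 18.1, 17.9], "treated": [18.1, 18.0, 18.2]}
    print(f"fold change = {relative_expression(target, ref):.2f}")
    ```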

  17. Robustness of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    This paper describes the background of the robustness requirements implemented in the Danish Code of Practice for Safety of Structures and in the Danish National Annex to the Eurocode 0, see (DS-INF 146, 2003), (DS 409, 2006), (EN 1990 DK NA, 2007) and (Sørensen and Christensen, 2006). More...... frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements for efficiency in design and execution followed by an increased risk of human errors, has made robustness requirements for new structures essential..... According to Danish design rules, robustness shall be documented for all structures in high consequence class. The design procedure to document sufficient robustness consists of: 1) Review of loads and possible failure modes / scenarios and determination of acceptable collapse extent; 2) Review...

  18. Uncertainty, robustness, and the value of information in managing a population of northern bobwhites

    Science.gov (United States)

    Johnson, Fred A.; Hagan, Greg; Palmer, William E.; Kemmerer, Michael

    2014-01-01

    The abundance of northern bobwhites (Colinus virginianus) has decreased throughout their range. Managers often respond by considering improvements in harvest and habitat management practices, but this can be challenging if substantial uncertainty exists concerning the cause(s) of the decline. We were interested in how application of decision science could be used to help managers on a large, public management area in southwestern Florida where the bobwhite is a featured species and where abundance has severely declined. We conducted a workshop with managers and scientists to elicit management objectives, alternative hypotheses concerning population limitation in bobwhites, potential management actions, and predicted management outcomes. Using standard and robust approaches to decision making, we determined that improved water management and perhaps some changes in hunting practices would be expected to produce the best management outcomes in the face of uncertainty about what is limiting bobwhite abundance. We used a criterion called the expected value of perfect information to determine that a robust management strategy may perform nearly as well as an optimal management strategy (i.e., a strategy that is expected to perform best, given the relative importance of different management objectives) with all uncertainty resolved. We used the expected value of partial information to determine that management performance could be increased most by eliminating uncertainty over excessive-harvest and human-disturbance hypotheses. Beyond learning about the factors limiting bobwhites, adoption of a dynamic management strategy, which recognizes temporal changes in resource and environmental conditions, might produce the greatest management benefit. Our research demonstrates that robust approaches to decision making, combined with estimates of the value of information, can offer considerable insight into preferred management approaches when great uncertainty exists about
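
    The expected value of perfect information used above has a compact numerical form: it is the gap between the expected payoff when uncertainty is resolved before acting and the best expected payoff under current beliefs. The sketch below computes it for a made-up table of hypotheses, actions and payoffs; all numbers are hypothetical.

    ```python
    import numpy as np

    # Expected value of perfect information (EVPI) for a discrete decision
    # problem: rows are hypotheses about what limits the population, columns
    # are management actions. Payoffs and hypothesis weights are made up.

    payoff = np.array([  # utility of action j if hypothesis i is true
        [0.8, 0.3, 0.5],   # e.g. harvest-limited
        [0.2, 0.9, 0.4],   # e.g. water-limited
        [0.4, 0.5, 0.7],   # e.g. disturbance-limited
    ])
    prior = np.array([0.3, 0.5, 0.2])   # belief weight on each hypothesis

    best_under_uncertainty = (prior @ payoff).max()       # max_a E[u(a, s)]
    best_with_perfect_info = prior @ payoff.max(axis=1)   # E[max_a u(a, s)]
    print(f"EVPI = {best_with_perfect_info - best_under_uncertainty:.3f}")
    ```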

  19. Robust Load Cell for Discrete Contact Force Measurements of Sampling Systems and/or Instruments, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Bear Engineering proposes to develop a simple, robust, extreme environment compatible, mechanical load cell to enable the control of contact forces for placement of...

  20. Dynamics robustness of cascading systems.

    Directory of Open Access Journals (Sweden)

    Jonathan T Young

    2017-03-01

    Full Text Available A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be robust experimentally. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profile and response duration post-stimulus are robust to perturbations of certain parameters. Then, analyzing the linearized model, we elucidated the criteria for when signaling cascades will display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade's kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: 1) constraint on the rate-limiting process: the phosphatase activity in the perturbed module is not the slowest; 2) constraints on the initial conditions: the kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discussed the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it
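
    One facet of this result can be reproduced with a toy model: in a linear three-stage cascade whose last module is rate-limiting, doubling an upstream gain rescales the output amplitude but leaves the response duration unchanged. The sketch below uses hypothetical rate constants and simple Euler integration.

    ```python
    import numpy as np

    # Three-stage linear cascade:
    #   dx1/dt = k1*u(t) - g1*x1,  dx2/dt = k2*x1 - g2*x2,  dx3/dt = k3*x2 - g3*x3
    # The duration of the output is set by the slowest (rate-limiting) module,
    # so an upstream gain perturbation changes amplitude but not timing.

    def cascade(k, g, t_end=80.0, dt=0.01, pulse=5.0):
        n = int(t_end / dt)
        x = np.zeros(3)
        out = np.zeros(n)
        for i in range(n):
            u = 1.0 if i * dt < pulse else 0.0        # transient stimulus
            dx = np.array([k[0] * u    - g[0] * x[0],
                           k[1] * x[0] - g[1] * x[1],
                           k[2] * x[1] - g[2] * x[2]])
            x = x + dt * dx                           # explicit Euler step
            out[i] = x[2]
        return out

    def duration(y, dt=0.01):
        """Time the output spends above half of its own peak."""
        return dt * np.count_nonzero(y > 0.5 * y.max())

    g = [1.0, 1.0, 0.1]                        # third module is rate-limiting
    base = cascade(k=[1.0, 1.0, 1.0], g=g)
    pert = cascade(k=[2.0, 1.0, 1.0], g=g)     # 2x gain perturbation upstream
    print(f"peak ratio: {pert.max() / base.max():.2f}")          # amplitude scales
    print(f"duration: {duration(base):.1f} vs {duration(pert):.1f}")  # unchanged
    ```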

  1. Strategy as Projects

    DEFF Research Database (Denmark)

    Lund Pedersen, Carsten; Ritter, Thomas; Andersen, Torben Juul

    This paper proposes the adoption of a project-based view to analyze strategy formation and strategic renewal over time. Projects are resource-committing, empirically-traceable investments, and as such, particularly suitable for the analysis of different manifestations of intended strategies as well...... as post-hoc manifestations of deviant, even rebellious, actions taken in opposition to the initial strategy announcement. The paper presents an analytical framework (a 5x2 matrix) of ten different project categories that together allows researchers to investigate how strategic renewal is realized through...... the enactment of different types of project initiatives throughout the organization. The developed framework is validated by two field studies that outline the robustness of the proposed matrix. In addition to the demonstration of the advantages of the framework, we discuss the limitations of the strategy-as-projects...

  2. Robustness Property of Robust-BD Wald-Type Test for Varying-Dimensional General Linear Models

    Directory of Open Access Journals (Sweden)

    Xiao Guo

    2018-03-01

    Full Text Available An important issue for robust inference is to examine the stability of the asymptotic level and power of the test statistic in the presence of contaminated data. Most existing results are derived in finite-dimensional settings with some particular choices of loss functions. This paper re-examines this issue by allowing for a diverging number of parameters combined with a broader array of robust error measures, called “robust-BD”, for the class of “general linear models”. Under regularity conditions, we derive the influence function of the robust-BD parameter estimator and demonstrate that the robust-BD Wald-type test enjoys the robustness of validity and efficiency asymptotically. Specifically, the asymptotic level of the test is stable under a small amount of contamination of the null hypothesis, whereas the asymptotic power is large enough under a contaminated distribution in a neighborhood of the contiguous alternatives, thus lending support to the utility of the proposed robust-BD Wald-type test.

  3. An inexact two-stage stochastic robust programming for residential micro-grid management-based on random demand

    International Nuclear Information System (INIS)

    Ji, L.; Niu, D.X.; Huang, G.H.

    2014-01-01

    In this paper a stochastic robust optimization problem of residential micro-grid energy management is presented. Combined cooling, heating and power (CCHP) technology is introduced to satisfy various energy demands. Two-stage programming is utilized to find the optimal installed capacity investment and operation control of the CCHP plant. Moreover, interval programming and robust stochastic optimization methods are exploited to obtain interval robust solutions under different robustness levels which are feasible for uncertain data. The obtained results can help micro-grid managers minimize the investment and operation cost with lower system failure risk when facing a fluctuating energy market and uncertain technology parameters. The different robustness levels reflect the risk preference of the micro-grid manager. The proposed approach is applied to residential area energy management in North China. Detailed computational results under different robustness levels are presented and analyzed to support investment decisions and operation strategies. - Highlights: • An inexact two-stage stochastic robust programming model for CCHP management. • The energy market and technical parameter uncertainties were considered. • Investment decision, operation cost, and system safety were analyzed. • Uncertainties expressed as discrete intervals and probability distributions

  4. Testing the performance of beta diversity measures based on incidence data: the robustness to undersampling

    DEFF Research Database (Denmark)

    Bondoso Cardoso, Pedro Miguel; Borges, Paulo A. V.; Veech, Joseph A.

    2009-01-01

    computing beta diversity for selected pairs of samples. The robustness of these beta diversity accumulation curves was assessed for the purpose of finding the best measures for undersampled communities. Results The Harrison et al. β-2 and the Williams β-3 are particularly robust to undersampling...

  5. Spatiotemporal Super-Resolution Reconstruction Based on Robust Optical Flow and Zernike Moment for Video Sequences

    Directory of Open Access Journals (Sweden)

    Meiyu Liang

    2013-01-01

    Full Text Available In order to improve the spatiotemporal resolution of video sequences, a novel spatiotemporal super-resolution reconstruction model (STSR) based on robust optical flow and Zernike moments is proposed in this paper, which integrates spatial resolution reconstruction and temporal resolution reconstruction into a unified framework. The model does not rely on accurate estimation of subpixel motion and is robust to noise and rotation. Moreover, it can effectively overcome the problems of hole and block artifacts. First, we propose an efficient robust optical flow motion estimation model based on the preservation of motion details, and then we introduce a biweighted fusion strategy to implement the spatiotemporal motion compensation. Next, combined with a self-adaptive region correlation judgment strategy, we construct a fast fuzzy registration scheme based on Zernike moments for better STSR with higher efficiency; the final video sequences with high spatiotemporal resolution are then obtained by fusion of the complementary and redundant information, exploiting nonlocal self-similarity between adjacent video frames. Experimental results demonstrate that the proposed method outperforms existing methods in terms of both subjective visual and objective quantitative evaluations.

  6. Design and implementation of fixed-order robust controllers for a proton exchange membrane fuel cell system

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Fu-Cheng; Chen, Hsuan-Tsung [Department of Mechanical Engineering, National Taiwan University, No.1, Sec. 4, Roosevelt Road, 10617 Taipei (China)

    2009-03-15

    This paper applies fixed-order multivariable robust control strategies to a proton exchange membrane fuel cell (PEMFC) system, and implements the designed controllers on a microchip for system miniaturization. In previous studies, robust control was applied to guarantee system stability and to reduce hydrogen consumption for a PEMFC system. It was noted that for standard robust control design, the order of resulting H∞ controllers is dictated by the plants and weighting functions. However, for hardware implementation, controllers with lower orders are preferable in terms of computing efforts and cost. Therefore, in this paper the PEMFC is modeled as multivariable transfer matrices, then three fixed-order robust control algorithms are applied to design controllers with specified orders for a PEMFC. Finally, the designed controllers are implemented on a microchip to regulate the air and hydrogen flow rates. From the experimental results, fixed-order robust control is deemed effective in supplying steady power and reducing fuel consumption. (author)

  7. Digital Content Strategies

    OpenAIRE

    Halbheer, Daniel; Stahl, Florian; Koenigsberg, Oded; Lehmann, Donald R

    2013-01-01

    This paper studies content strategies for online publishers of digital information goods. It examines sampling strategies and compares their performance to paid content and free content strategies. A sampling strategy, where some of the content is offered for free and consumers are charged for access to the rest, is known as a "metered model" in the newspaper industry. We analyze optimal decisions concerning the size of the sample and the price of the paid content when sampling serves the dua...

  8. Towards an optimal sampling strategy for assessing genetic variation within and among white clover (Trifolium repens L.) cultivars using AFLP

    Directory of Open Access Journals (Sweden)

    Khosro Mehdi Khanlou

    2011-01-01

    Full Text Available Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants, such as the aforementioned, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using fewer than 15 samples per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of the total intra-cultivar genetic variation was covered. Based on AMOVA, a sample size of 20 per cultivar was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.
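
    The two diversity statistics above are simple functions of locus-wise allele frequencies; the sketch below computes them for biallelic (dominant-marker) loci, using hypothetical frequencies in place of the 283 scored AFLP loci.

    ```python
    import numpy as np

    # Expected heterozygosity (Nei's gene diversity) and Shannon diversity
    # index for biallelic loci, averaged over loci. Frequencies are made up.

    def expected_heterozygosity(p):
        """He = 2p(1-p) per biallelic locus, averaged over loci."""
        p = np.asarray(p)
        return np.mean(2 * p * (1 - p))

    def shannon_index(p):
        """I = -[p ln p + (1-p) ln(1-p)] per locus, averaged over loci."""
        p = np.clip(np.asarray(p), 1e-12, 1 - 1e-12)
        return np.mean(-(p * np.log(p) + (1 - p) * np.log(1 - p)))

    p = np.random.default_rng(7).uniform(0.05, 0.95, size=283)
    print(f"He = {expected_heterozygosity(p):.3f}, I = {shannon_index(p):.3f}")
    ```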

  9. A normative model for assessing competitive strategy

    Directory of Open Access Journals (Sweden)

    Ungerer, Gerard David

    2016-12-01

    Full Text Available The hyper-competitive nature of e-business has raised the need for a generic way to appraise the merit of a developed business strategy. Although progress has been made in the domain of strategy evaluation, the established literature differs over the ‘tests’ that a strategy must pass to be considered well-constructed. This paper therefore investigates the existing strategy-evaluation literature to propose a more integrated and comprehensive normative strategic assessment that can be used to evaluate and refine a business’s competitive strategy, adding to its robustness and survivability.

  10. On robust parameter estimation in brain-computer interfacing

    Science.gov (United States)

    Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert

    2017-12-01

    Objective. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain-computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the trials consisting of a few hundred EEG samples and indicating the start and end of a task. Approach. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. Main results. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. Significance. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.

  11. Robustness in cluster analysis in the presence of anomalous observations

    NARCIS (Netherlands)

    Zhuk, EE

    Cluster analysis of multivariate observations in the presence of "outliers" (anomalous observations) in a sample is studied. The expected (mean) fraction of erroneous decisions for the decision rule is computed analytically by minimizing the intraclass scatter. A robust decision rule (stable to

  12. Reliability-Based Robust Design Optimization of Structures Considering Uncertainty in Design Variables

    Directory of Open Access Journals (Sweden)

    Shujuan Wang

    2015-01-01

    Full Text Available This paper investigates structural design optimization covering both reliability and robustness under uncertainty in design variables. The main objective is to improve the efficiency of the optimization process. To address this problem, a hybrid reliability-based robust design optimization (RRDO) method is proposed. Prior to the design optimization, Sobol sensitivity analysis is used for selecting key design variables and providing response variance as well, resulting in significantly reduced computational complexity. A single-loop algorithm is employed to guarantee the structural reliability, allowing a fast optimization process. In the case of robust design, a weighting factor balances the response performance and variance with respect to the uncertainty in design variables. The main contribution of this paper is that the proposed method applies the RRDO strategy with the use of global approximation and Sobol sensitivity analysis, leading to reduced computational cost. A structural example is given to illustrate the performance of the proposed method.
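
    The Sobol screening step can be illustrated with the classic pick-freeze estimator of first-order indices (Saltelli's formulation); the test response below is a hypothetical stand-in for a structural model, not the paper's example.

    ```python
    import numpy as np

    # First-order Sobol indices via the pick-freeze (Saltelli 2010) estimator:
    #   S_i ~ mean(f(B) * (f(AB_i) - f(A))) / Var(f),
    # where AB_i is A with column i taken from B.

    def sobol_first_order(f, dim, n=200_000, seed=0):
        rng = np.random.default_rng(seed)
        A = rng.uniform(-1, 1, size=(n, dim))
        B = rng.uniform(-1, 1, size=(n, dim))
        fA, fB = f(A), f(B)
        var = np.concatenate([fA, fB]).var()
        S = np.empty(dim)
        for i in range(dim):
            AB = A.copy()
            AB[:, i] = B[:, i]                        # pick column i from B
            S[i] = np.mean(fB * (f(AB) - fA)) / var   # Saltelli estimator
        return S

    # Hypothetical response: x0 dominates, x2 is inert (never used).
    f = lambda X: 5 * X[:, 0] + 2 * X[:, 1] + X[:, 0] * X[:, 1]
    print(sobol_first_order(f, dim=3).round(2))       # roughly [0.85 0.14 0.00]
    ```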

  13. Robustness Beamforming Algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Dehghani

    2014-04-01

    Full Text Available Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach to robust adaptive minimum variance distortionless response (MVDR) beamforming is presented that is robust against uncertainties in both the steering vector and the covariance matrix. The method solves an optimization problem that contains a quadratic objective function and a quadratic constraint. The optimization problem is nonconvex but is converted into a convex optimization problem in this paper. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
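
    For comparison, a much simpler and widely used robustification of MVDR beamforming is diagonal loading of the sample covariance; the sketch below shows that baseline (not the paper's convexified QCQP) on a hypothetical 8-element array with a slightly mismatched steering vector.

    ```python
    import numpy as np

    # Diagonally loaded MVDR: w = (R + eps*I)^-1 a / (a^H (R + eps*I)^-1 a).
    # Array geometry, loading level and scenario are all hypothetical.

    def mvdr_weights(R, a, loading=0.1):
        Rl = R + loading * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
        Ri_a = np.linalg.solve(Rl, a)
        return Ri_a / (a.conj() @ Ri_a)

    M = 8                                    # sensors, half-wavelength spacing
    steer = lambda th: np.exp(1j * np.pi * np.arange(M) * np.sin(th))
    rng = np.random.default_rng(1)
    # snapshots: desired source at 0 rad, interferer at 0.5 rad, plus noise
    n_snap = 200
    X = (np.outer(steer(0.0), rng.standard_normal(n_snap))
         + 3 * np.outer(steer(0.5), rng.standard_normal(n_snap))
         + 0.1 * (rng.standard_normal((M, n_snap))
                  + 1j * rng.standard_normal((M, n_snap))))
    R = X @ X.conj().T / n_snap
    w = mvdr_weights(R, steer(0.02))         # slightly mismatched steering
    print(f"gain toward source    : {abs(w.conj() @ steer(0.0)):.2f}")
    print(f"gain toward interferer: {abs(w.conj() @ steer(0.5)):.3f}")
    ```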

  14. Systematic review and consensus guidelines for environmental sampling of Burkholderia pseudomallei.

    Directory of Open Access Journals (Sweden)

    Direk Limmathurotsakul

    Full Text Available Burkholderia pseudomallei, a Tier 1 Select Agent and the cause of melioidosis, is a Gram-negative bacillus present in the environment in many tropical countries. Defining the global pattern of B. pseudomallei distribution underpins efforts to prevent infection, and is dependent upon robust environmental sampling methodology. Our objective was to review the literature on the detection of environmental B. pseudomallei, update the risk map for melioidosis, and propose international consensus guidelines for soil sampling. An international working party, the Detection of Environmental Burkholderia pseudomallei Working Party (DEBWorP), was formed during the VIth World Melioidosis Congress in 2010. PubMed (January 1912 to December 2011) was searched using the following MeSH terms: pseudomallei or melioidosis. Bibliographies were hand-searched for secondary references. The reported geographical distribution of B. pseudomallei in the environment was mapped and categorized as definite, probable, or possible. The methodology used for detecting environmental B. pseudomallei was extracted and collated. We found that global coverage was patchy, with a lack of studies in many areas where melioidosis is suspected to occur. The sampling strategies and bacterial identification methods used were highly variable, and not all were robust. We developed consensus guidelines with the goals of reducing the probability of false-negative results, and the provision of affordable and 'low-tech' methodology that is applicable in both developed and developing countries. The proposed consensus guidelines provide the basis for the development of an accurate and comprehensive global map of environmental B. pseudomallei.

  15. Serum sample containing endogenous antibodies interfering with multiple hormone immunoassays. Laboratory strategies to detect interference

    Directory of Open Access Journals (Sweden)

    Elena García-González

    2016-04-01

    Full Text Available Objectives: Endogenous antibodies (EA) may interfere with immunoassays, causing erroneous results for hormone analyses. As (in most cases) this interference arises from the assay format, and most immunoassays, even from different manufacturers, are constructed in a similar way, it is possible for a single type of EA to interfere with different immunoassays. Here we describe the case of a patient whose serum sample contained EA that interfered with several hormone tests. We also discuss the strategies deployed to detect the interference. Subjects and methods: Over a period of four years, a 30-year-old man was subjected to a plethora of laboratory and imaging diagnostic procedures as a consequence of elevated hormone results, mainly of pituitary origin, which did not correlate with the overall clinical picture. Results: Once analytical interference was suspected, the best laboratory approaches to investigate it were sample reanalysis on an alternative platform and sample incubation with antibody-blocking tubes. Construction of an in-house ‘nonsense’ sandwich assay was also a valuable strategy to confirm interference. In contrast, serial sample dilutions were of no value in our case, while polyethylene glycol (PEG) precipitation gave inconclusive results, probably due to the use of inappropriate PEG concentrations for several of the tests assayed. Conclusions: Clinicians and laboratorians must be aware of the drawbacks of immunometric assays, and alert to the possibility of EA interference when results do not fit the clinical pattern. Keywords: Endogenous antibodies, Immunoassay, Interference, Pituitary hormones, Case report

  16. Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors

    Science.gov (United States)

    2011-04-15

    theory have led to significant interest in alternative sampling methods. Specifically, conventional sampling systems rely on the Shannon sampling theorem...28–31]. In this paper we develop strong theoretical reconstruction and robustness guarantees, in the same spirit as neoclassical guarantees provided...dimensional spaces. Although Grassmannian packing problems have been examined in the literature (e.g., in the context of frame theory [66]), to our

  17. [Identification of Systemic Contaminations with Legionella Spec. in Drinking Water Plumbing Systems: Sampling Strategies and Corresponding Parameters].

    Science.gov (United States)

    Völker, S; Schreiber, C; Müller, H; Zacharias, N; Kistemann, T

    2017-05-01

    After the amendment of the Drinking Water Ordinance in 2011, the requirements for the hygienic-microbiological monitoring of drinking water installations have increased significantly. In the BMBF-funded project "Biofilm Management" (2010-2014), we examined the extent to which established sampling strategies can, in practice, uncover drinking water plumbing systems systemically colonized with Legionella. Moreover, we investigated additional parameters that might be suitable for detecting systemic contaminations. We subjected the drinking water plumbing systems of 8 buildings with known microbial contamination (Legionella) to intensive hygienic-microbiological sampling with high spatial and temporal resolution. A total of 626 drinking hot water samples were analyzed with classical culture-based methods. In addition, comprehensive hygienic observations were conducted in each building, and qualitative interviews with operators and users were carried out. Collected tap-specific parameters were quantitatively analyzed by means of sensitivity and accuracy calculations. The systemic presence of Legionella in drinking water plumbing systems has a high spatial and temporal variability. Established sampling strategies were only partially suitable for detecting long-term Legionella contaminations in practice. In particular, the sampling of hot water at the calorifier and circulation re-entrance showed little significance in terms of contamination events. To detect the systemic presence of Legionella, the parameters stagnation (qualitatively assessed) and temperature (compliance with the 5K rule) showed better results. © Georg Thieme Verlag KG Stuttgart · New York.

  18. A sampling strategy to establish existing plant configuration baselines

    International Nuclear Information System (INIS)

    Buchanan, L.P.

    1995-01-01

    The Department of Energy's Gaseous Diffusion Plants (DOEGDP) are undergoing a Safety Analysis Update Program. As part of this program, critical existing structures are being reevaluated for Natural Phenomena Hazards (NPH) based on the recommendations of UCRL-15910. The Department of Energy has specified that current plant configurations be used in the performance of these reevaluations. This paper presents the process and results of a walkdown program implemented at DOEGDP to establish the current configuration baseline for these existing critical structures for use in subsequent NPH evaluations. These structures are classified as moderate hazard facilities and were constructed in the early 1950's. The process involved a statistical sampling strategy to determine the validity of critical design information as represented on the original design drawings such as member sizes, orientation, connection details and anchorage. A floor load inventory of the dead load of the equipment, both permanently attached and spare, was also performed as well as a walkthrough inspection of the overall structure to identify any other significant anomalies

  19. Robust tracking and distributed synchronization control of a multi-motor servomechanism with H-infinity performance.

    Science.gov (United States)

    Wang, Minlin; Ren, Xuemei; Chen, Qiang

    2018-01-01

    The multi-motor servomechanism (MMS) is a multivariable, highly coupled, nonlinear system, which makes controller design challenging. In this paper, an adaptive robust H-infinity control scheme is proposed to achieve both load tracking and multi-motor synchronization of the MMS. This control scheme consists of two parts: a robust tracking controller and a distributed synchronization controller. The robust tracking controller is constructed by incorporating a neural network (NN) K-filter observer into the dynamic surface control, while the distributed synchronization controller is designed by combining the mean deviation coupling control strategy with the distributed technique. The proposed control scheme has several merits: 1) by using the mean deviation coupling synchronization control strategy, the tracking controller and the synchronization controller can be designed individually without any coupling problem; 2) the immeasurable states and unknown nonlinearities are handled by a NN K-filter observer, where the number of NN weights is largely reduced by using the minimal learning parameter technique; 3) the H-infinity performances of the tracking error and synchronization error are guaranteed by introducing a robust term into the tracking controller and the synchronization controller, respectively. The stability of the tracking and synchronization control systems is analyzed using Lyapunov theory. Simulation and experimental results based on a four-motor servomechanism are conducted to demonstrate the effectiveness of the proposed method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Chemometric strategy for automatic chromatographic peak detection and background drift correction in chromatographic data.

    Science.gov (United States)

    Yu, Yong-Jie; Xia, Qiao-Ling; Wang, Sheng; Wang, Bing; Xie, Fu-Wei; Zhang, Xiao-Bing; Ma, Yun-Ming; Wu, Hai-Long

    2014-09-12

    Peak detection and background drift correction (BDC) are the key stages in using chemometric methods to analyze chromatographic fingerprints of complex samples. This study developed a novel chemometric strategy for simultaneous automatic chromatographic peak detection and BDC. A robust statistical method was used for intelligent estimation of the instrumental noise level and, coupled with the first-order derivative of the chromatographic signal, to automatically extract chromatographic peaks from the data. A local curve-fitting strategy was then employed for BDC. Simulated and real liquid chromatographic data were designed with various kinds of background drift and degrees of overlap between chromatographic peaks to verify the performance of the proposed strategy. The underlying chromatographic peaks can be automatically detected and reasonably integrated by this strategy. Meanwhile, chromatograms with BDC can be precisely obtained. The proposed method was used to analyze a complex gas chromatography dataset that monitored quality changes in plant extracts during a storage procedure. Copyright © 2014 Elsevier B.V. All rights reserved.
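
    The two ingredients named above, a robust noise estimate and a first-order-derivative rule, can be sketched as follows; the synthetic chromatogram, the MAD-based noise estimator and all thresholds are assumptions for illustration, not the paper's exact procedure.

    ```python
    import numpy as np

    # Derivative-based peak flagging with a robust (MAD) noise estimate.
    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 30.0, 600)                     # retention time, min
    drift = 0.02 * t + 0.5 * np.sin(t / 6.0)            # slow background drift
    peaks = (1.0 * np.exp(-((t - 8.0) / 0.4) ** 2)
             + 0.8 * np.exp(-((t - 17.0) / 0.4) ** 2))
    y = drift + peaks + 0.01 * rng.standard_normal(t.size)

    dy = np.gradient(y, t)
    # Robust noise level of the derivative via the median absolute deviation:
    sigma = 1.4826 * np.median(np.abs(dy - np.median(dy)))
    flag = np.abs(dy) > 5.0 * sigma                     # derivative rule

    # Merge nearby flagged runs (the derivative passes through zero at an apex)
    idx = np.flatnonzero(flag)
    for run in np.split(idx, np.flatnonzero(np.diff(idx) > 10) + 1):
        print(f"candidate peak near t = {t[run].mean():.1f} min")
    ```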

  1. Robust Learning of Fixed-Structure Bayesian Networks

    OpenAIRE

    Diakonikolas, Ilias; Kane, Daniel; Stewart, Alistair

    2016-01-01

    We investigate the problem of learning Bayesian networks in an agnostic model where an $\epsilon$-fraction of the samples are adversarially corrupted. Our agnostic learning model is similar to -- in fact, stronger than -- Huber's contamination model in robust statistics. In this work, we study the fully observable Bernoulli case where the structure of the network is given. Even in this basic setting, previous learning algorithms either run in exponential time or lose dimension-dependent facto...

  2. SU-E-T-452: Impact of Respiratory Motion On Robustly-Optimized Intensity-Modulated Proton Therapy to Treat Lung Cancers

    International Nuclear Information System (INIS)

    Liu, W; Schild, S; Bues, M; Liao, Z; Sahoo, N; Park, P; Li, H; Li, Y; Li, X; Shen, J; Anand, A; Dong, L; Zhu, X; Mohan, R

    2014-01-01

    Purpose: We compared conventionally optimized intensity-modulated proton therapy (IMPT) treatment plans against worst-case robustly optimized treatment plans for lung cancer. The comparison of the two IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. Methods: For each of the 9 lung cancer cases, two treatment plans were created, accounting for treatment uncertainties in two different ways: the first used the conventional method, delivery of the prescribed dose to the planning target volume (PTV) that is geometrically expanded from the internal target volume (ITV); the second employed the worst-case robust optimization scheme that addressed set-up and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of the changes in patient anatomy due to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phases and the absolute differences between these phases. The mean plan evaluation metrics of the two groups were compared using two-sided paired t-tests. Results: Without respiratory motion considered, we affirmed that worst-case robust optimization is superior to PTV-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, robust optimization still leads to dose distributions that are more robust to respiratory motion for targets, and to comparable or even better plan optimality [D95% ITV: 96.6% versus 96.1% (p=0.26), D5% - D95% ITV: 10.0% versus 12.3% (p=0.082), D1% spinal cord: 31.8% versus 36.5% (p=0.035)]. Conclusion: Worst-case robust optimization led to superior solutions for lung IMPT. Despite the fact that robust optimization did not explicitly

  3. A Numerical Study for Robust Active Portfolio Management with Worst-Case Downside Risk Measure

    Directory of Open Access Journals (Sweden)

    Aifan Ling

    2014-01-01

    Full Text Available Recently, active portfolio management problems have received close attention from many researchers due to the rapid growth of the fund industry. In this paper, we consider a numerical study of a robust active portfolio selection model with downside risk and multiple weight constraints. We compare the numerical performance of its solutions with those of the classical mean-variance tracking error model and the naive 1/N portfolio strategy, using real market data from the Chinese and other markets. We find from the numerical results that the tested active models are more attractive and robust than the compared models.

  4. Robust and transferable quantification of NMR spectral quality using IROC analysis

    Science.gov (United States)

    Zambrello, Matthew A.; Maciejewski, Mark W.; Schuyler, Adam D.; Weatherby, Gerard; Hoch, Jeffrey C.

    2017-12-01

    Non-Fourier methods are increasingly utilized in NMR spectroscopy because of their ability to handle nonuniformly-sampled data. However, non-Fourier methods present unique challenges due to their nonlinearity, which can produce nonrandom noise and render conventional metrics for spectral quality such as signal-to-noise ratio unreliable. The lack of robust and transferable metrics (i.e. applicable to methods exhibiting different nonlinearities) has hampered comparison of non-Fourier methods and nonuniform sampling schemes, preventing the identification of best practices. We describe a novel method, in situ receiver operating characteristic analysis (IROC), for characterizing spectral quality based on the Receiver Operating Characteristic curve. IROC utilizes synthetic signals added to empirical data as "ground truth", and provides several robust scalar-valued metrics for spectral quality. This approach avoids problems posed by nonlinear spectral estimates, and provides a versatile quantitative means of characterizing many aspects of spectral quality. We demonstrate applications to parameter optimization in Fourier and non-Fourier spectral estimation, critical comparison of different methods for spectrum analysis, and optimization of nonuniform sampling schemes. The approach will accelerate the discovery of optimal approaches to nonuniform sampling experiment design and non-Fourier spectrum analysis for multidimensional NMR.
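
    A toy version of the underlying idea, injecting synthetic signals of known position into empirical noise and sweeping a detection threshold to trace an ROC curve, is sketched below; the 1D "spectrum", peak amplitude and scoring rule are hypothetical stand-ins for the NMR setting.

    ```python
    import numpy as np

    # Injected synthetic signals serve as ground truth; sweeping the detection
    # threshold over all candidate positions traces an ROC curve, and its area
    # (AUC) is a scalar quality metric in the spirit of IROC.

    rng = np.random.default_rng(3)
    n, n_true = 2000, 25
    spectrum = rng.standard_normal(n)                # empirical noise proxy
    truth = np.zeros(n, dtype=bool)
    true_pos = rng.choice(n, size=n_true, replace=False)
    truth[true_pos] = True
    spectrum[true_pos] += 3.0                        # injected synthetic signals

    score = np.abs(spectrum)                         # naive detector score
    order = np.argsort(-score)                       # sweep threshold high->low
    tp = np.cumsum(truth[order])                     # true positives
    fp = np.cumsum(~truth[order])                    # false positives
    tpr, fpr = tp / truth.sum(), fp / (~truth).sum()
    print(f"AUC = {np.trapz(tpr, fpr):.3f}")
    ```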

  5. Choice of Sample Split in Out-of-Sample Forecast Evaluation

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Timmermann, Allan

    , while conversely the power of forecast evaluation tests is strongest with long out-of-sample periods. To deal with size distortions, we propose a test statistic that is robust to the effect of considering multiple sample split points. Empirical applications to predictability of stock returns......Out-of-sample tests of forecast performance depend on how a given data set is split into estimation and evaluation periods, yet no guidance exists on how to choose the split point. Empirical forecast evaluation results can therefore be difficult to interpret, particularly when several values...... and inflation demonstrate that out-of-sample forecast evaluation results can critically depend on how the sample split is determined....
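
    The split-point sensitivity is easy to demonstrate: the sketch below computes the out-of-sample MSE of a one-step AR(1) forecast for several candidate split points on a simulated series (a hypothetical stand-in for returns or inflation data); it illustrates the dependence, not the paper's robust test statistic.

    ```python
    import numpy as np

    # Out-of-sample forecast evaluation at several estimation/evaluation splits.
    rng = np.random.default_rng(4)
    T = 500
    y = np.zeros(T)
    for t in range(1, T):                      # AR(1) data-generating process
        y[t] = 0.3 * y[t - 1] + rng.standard_normal()

    for split in (100, 200, 300, 400):         # candidate split points
        phi = np.polyfit(y[:split - 1], y[1:split], 1)[0]   # in-sample AR(1) fit
        pred = phi * y[split - 1:-1]                        # one-step forecasts
        mse = np.mean((y[split:] - pred) ** 2)
        print(f"split at {split:3d}: out-of-sample MSE = {mse:.3f}")
    ```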

  6. Robust Control of Aeronautical Electrical Generators for Energy Management Applications

    Directory of Open Access Journals (Sweden)

    Giacomo Canciello

    2017-01-01

    Full Text Available A new strategy for the control of aeronautical electrical generators via sliding manifold selection is proposed, together with an associated innovative intelligent energy management strategy for efficient power transfer between two sources providing energy to aeronautical loads having different functionalities and priorities. Electric generators used for aeronautical applications involve several machines, including a main generator and an exciter. Standard regulators (PI or PID-like) are normally used for the rectification of the generator voltage to supply a high-voltage DC bus. The regulation is obtained by acting on a DC/DC converter that imposes the field voltage of the exciter. In this paper, the field voltage is fed to the generator windings by using a second-order sliding mode controller, resulting in a stable, robust (against disturbances) action and fast convergence to the desired reference. Building on this strategy, an energy management scheme is proposed that dynamically changes the voltage set point in order to intelligently transfer power between two voltage busses. Detailed simulation results are provided to show the effectiveness of the proposed energy management strategy in different scenarios.
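
    As an illustration of second-order sliding mode control in general (not the paper's exciter model), the sketch below runs a standard super-twisting law on a toy first-order voltage-tracking plant with a bounded matched disturbance; the gains, plant and disturbance are assumptions.

    ```python
    import numpy as np

    # Super-twisting (second-order sliding mode) control of a toy plant
    # x' = -x + u + d(t), with sliding variable s = x - x_ref. The control is
    # continuous, yet drives s to a small neighbourhood of zero despite d(t).

    dt, T = 1e-4, 2.0
    k1, k2 = 6.0, 40.0                       # super-twisting gains
    x, v = 0.0, 0.0                          # state and integral term
    x_ref = 1.0                              # desired output (e.g. bus voltage)
    for i in range(int(T / dt)):
        t = i * dt
        s = x - x_ref                        # sliding variable
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
        v += dt * (-k2 * np.sign(s))         # integral (twisting) term
        d = 2.0 * np.sin(5.0 * t)            # bounded matched disturbance
        x += dt * (-x + u + d)               # toy plant dynamics
    print(f"final |s| = {abs(x - x_ref):.2e}")
    ```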

  7. A Dual-Sensing Receptor Confers Robust Cellular Homeostasis

    Directory of Open Access Journals (Sweden)

    Hannah Schramke

    2016-06-01

    Full Text Available Cells have evolved diverse mechanisms that maintain intracellular homeostasis in fluctuating environments. In bacteria, control is often exerted by bifunctional receptors acting as both kinase and phosphatase to regulate gene expression, a design known to provide robustness against noise. Yet how such antagonistic enzymatic activities are balanced as a function of environmental change remains poorly understood. We find that the bifunctional receptor that regulates K+ uptake in Escherichia coli is a dual sensor, which modulates its autokinase and phosphatase activities in response to both extracellular and intracellular K+ concentration. Using mathematical modeling, we show that dual sensing is a superior strategy for ensuring homeostasis when both the supply of and demand for a limiting resource fluctuate. By engineering standards, this molecular control system displays a strikingly high degree of functional integration, providing a reference for the vast numbers of receptors for which the sensing strategy remains elusive.

  8. Outlier detection by robust Mahalanobis distance in geological data obtained by INAA to provenance studies

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Jose O. dos, E-mail: osmansantos@ig.com.br [Instituto Federal de Educacao, Ciencia e Tecnologia de Sergipe (IFS), Lagarto, SE (Brazil); Munita, Casimiro S., E-mail: camunita@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Soares, Emilio A.A., E-mail: easoares@ufan.edu.br [Universidade Federal do Amazonas (UFAM), Manaus, AM (Brazil). Dept. de Geociencias

    2013-07-01

    The detection of outliers in geochemical studies is one of the main difficulties in the interpretation of a dataset, because outliers can disturb the statistical methods. The search for outliers in geochemical studies is usually based on the Mahalanobis distance (MD), since points in multivariate space that lie farther than some predetermined distance from the center of the data are considered outliers. However, the MD is very sensitive to the presence of discrepant samples. Many robust estimators of location and covariance have been introduced in the literature, such as the Minimum Covariance Determinant (MCD) estimator. Using MCD estimators to calculate the MD leads to the so-called Robust Mahalanobis Distance (RD). In this context, the RD was used in this work to detect outliers in a geological study of samples collected at the confluence of the Negro and Solimoes rivers. The purpose was to study the contributions of the sediments deposited by the Solimoes and Negro rivers to the filling of the tectonic depressions at Parana do Ariau. For that, 113 samples were analyzed by Instrumental Neutron Activation Analysis (INAA), by which the concentrations of As, Ba, Ce, Co, Cr, Cs, Eu, Fe, Hf, K, La, Lu, Na, Nd, Rb, Sb, Sc, Sm, U, Yb, Ta, Tb, Th and Zn were determined. From the dataset it was possible to construct the ellipse corresponding to the robust Mahalanobis distance for each group of samples. The samples found outside the tolerance ellipse were considered outliers. The results showed that the Robust Mahalanobis Distance was more appropriate for the identification of outliers, since it is a more restrictive method. (author)
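
    Assuming scikit-learn is available, the screening procedure described above can be sketched as follows: the MCD-based robust Mahalanobis distance flags the planted anomalies, whereas the classical distance can mask them. The simulated two-dimensional data stand in for the INAA concentration measurements.

    ```python
    import numpy as np
    from sklearn.covariance import MinCovDet, EmpiricalCovariance

    # Robust (MCD) vs classical Mahalanobis distances for outlier screening.
    rng = np.random.default_rng(5)
    X = rng.multivariate_normal([10, 5], [[1.0, 0.6], [0.6, 1.0]], size=113)
    X[:6] += np.array([6.0, -5.0])            # a few anomalous samples

    d2_robust = MinCovDet(random_state=0).fit(X).mahalanobis(X)
    d2_classic = EmpiricalCovariance().fit(X).mahalanobis(X)

    # chi-square cutoff at 97.5% for 2 degrees of freedom (~7.38) defines
    # the tolerance ellipse; distances beyond it are flagged as outliers.
    cutoff = 7.38
    print("flagged (robust) :", np.flatnonzero(d2_robust > cutoff))
    print("flagged (classic):", np.flatnonzero(d2_classic > cutoff))
    ```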

  10. A Robust Decision-Making Technique for Water Management under Decadal Scale Climate Variability

    Science.gov (United States)

    Callihan, L.; Zagona, E. A.; Rajagopalan, B.

    2013-12-01

    Robust decision making, a flexible and dynamic approach to managing water resources in light of the deep uncertainties associated with climate variability at inter-annual to decadal time scales, is an analytical framework that detects when a system is in or approaching a vulnerable state. It provides decision makers the opportunity to implement strategies that both address the vulnerabilities and perform well over a wide range of plausible future scenarios. A strategy that performs acceptably over a wide range of possible future states is not likely to be optimal with respect to the actual future state. The degree of success--the ability to avoid vulnerable states and operate efficiently--thus depends on the skill in projecting future states and the ability to select the most efficient strategies to address vulnerabilities. This research develops a robust decision making framework that incorporates new methods of decadal-scale projection with the selection of efficient strategies. Previous approaches to water resources planning under inter-annual climate variability, which combine skillful seasonal flow forecasts with climatology for subsequent years, are not skillful for medium-term (i.e. decadal-scale) projections, so decision makers are not able to plan adequately to avoid vulnerabilities. We address this need by integrating skillful decadal-scale streamflow projections into the robust decision making framework and making the probability distribution of these projections available to the decision making logic. The range of possible future hydrologic scenarios can be defined using a variety of nonparametric methods. Once defined, an ensemble projection of decadal flow scenarios is generated from a wavelet-based spectral K-nearest-neighbor resampling approach using historical and paleo-reconstructed data. This method has been shown to generate skillful medium-term projections with a rich variety of natural variability. The current state of the system in combination with the

  11. International Conference on Robust Statistics

    CERN Document Server

    Filzmoser, Peter; Gather, Ursula; Rousseeuw, Peter

    2003-01-01

    Aspects of Robust Statistics are important in many areas. Based on the International Conference on Robust Statistics 2001 (ICORS 2001) in Vorau, Austria, this volume discusses future directions of the discipline, bringing together leading scientists, experienced researchers and practitioners, as well as younger researchers. The papers cover a multitude of different aspects of Robust Statistics. For instance, the fundamental problem of data summary (weights of evidence) is considered and its robustness properties are studied. Further theoretical subjects include e.g.: robust methods for skewness, time series, longitudinal data, multivariate methods, and tests. Some papers deal with computational aspects and algorithms. Finally, the aspects of application and programming tools complete the volume.

  12. Spatial distribution, sampling precision and survey design optimisation with non-normal variables: The case of anchovy (Engraulis encrasicolus) recruitment in Spanish Mediterranean waters

    Science.gov (United States)

    Tugores, M. Pilar; Iglesias, Magdalena; Oñate, Dolores; Miquel, Joan

    2016-02-01

    In the Mediterranean Sea, the European anchovy (Engraulis encrasicolus) plays a key role in ecological and economic terms. Ensuring stock sustainability requires the provision of crucial information, such as species spatial distribution or unbiased abundance and precision estimates, so that management strategies can be defined (e.g. fishing quotas, temporal closure areas or marine protected areas (MPAs)). Furthermore, the estimation of the precision of global abundance at different sampling intensities can be used for survey design optimisation. Geostatistics provide a priori unbiased estimations of the spatial structure, global abundance and precision for autocorrelated data. However, their application to non-Gaussian data introduces difficulties into the analysis, together with reduced robustness and unbiasedness. The present study applied intrinsic geostatistics in two dimensions in order to (i) analyse the spatial distribution of anchovy in Spanish Western Mediterranean waters during the species' recruitment season, (ii) produce distribution maps, (iii) estimate global abundance and its precision, (iv) analyse the effect of changing the sampling intensity on the precision of global abundance estimates and, (v) evaluate the effects of several methodological options on the robustness of all the analysed parameters. The results suggested that while the spatial structure was usually non-robust to the tested methodological options when working with the original dataset, it became more robust for the transformed datasets (especially for the log-backtransformed dataset). The global abundance was always highly robust and the global precision was highly or moderately robust to most of the methodological options, except for data transformation.

  13. Replica exchange enveloping distribution sampling (RE-EDS): A robust method to estimate multiple free-energy differences from a single simulation.

    Science.gov (United States)

    Sidler, Dominik; Schwaninger, Arthur; Riniker, Sereina

    2016-10-21

    In molecular dynamics (MD) simulations, free-energy differences are often calculated using free energy perturbation or thermodynamic integration (TI) methods. However, both techniques are only suited to calculating free-energy differences between two end states. Enveloping distribution sampling (EDS) presents an attractive alternative that allows the calculation of multiple free-energy differences in a single simulation. In EDS, a reference state is simulated which "envelopes" the end states. The challenge of this methodology is the determination of optimal reference-state parameters to ensure equal sampling of all end states. Currently, the automatic determination of the reference-state parameters for multiple end states is an unsolved issue that limits the application of the methodology. To resolve this, we have generalised the replica-exchange EDS (RE-EDS) approach, introduced by Lee et al. [J. Chem. Theory Comput. 10, 2738 (2014)] for constant-pH MD simulations. By exchanging configurations between replicas with different reference-state parameters, the complexity of the parameter-choice problem can be substantially reduced. A new robust scheme to estimate the reference-state parameters from a short initial RE-EDS simulation with default parameters was developed, which allowed the calculation of 36 free-energy differences between nine small-molecule inhibitors of phenylethanolamine N-methyltransferase from a single simulation. The resulting free-energy differences were in excellent agreement with values obtained previously by TI and two-state EDS simulations.
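
    For context, the EDS reference state is commonly written as a soft minimum over the end-state Hamiltonians. The LaTeX snippet below reproduces the standard form from the EDS literature (an assumption here, not taken from this abstract); the smoothness parameter s and the energy offsets E_i^R are precisely the reference-state parameters that RE-EDS must tune.

        % EDS reference-state Hamiltonian enveloping N end states H_i
        H_R(\mathbf{r}) = -\frac{1}{\beta s}\,
            \ln \sum_{i=1}^{N} \exp\!\left[-\beta s \left( H_i(\mathbf{r}) - E_i^{R} \right)\right]

        % free-energy difference between end states i and j, read off
        % from a single simulation of the reference state R
        \Delta G_{ji} = -\frac{1}{\beta}\,
            \ln \frac{\left\langle e^{-\beta (H_j - H_R)} \right\rangle_R}
                     {\left\langle e^{-\beta (H_i - H_R)} \right\rangle_R}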

  14. Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.

    Science.gov (United States)

    Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh

    2012-02-28

    Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.

  15. Robustness and Uncertainty: Applications for Policy in Climate and Hydrological Modeling

    Science.gov (United States)

    Fields, A. L., III

    2015-12-01

    Policymakers must often decide how to proceed when presented with conflicting simulation data from hydrological, climatological, and geological models. While laboratory sciences often appeal to the reproducibility of results to argue for the validity of their conclusions, simulations cannot use this strategy for a number of pragmatic and methodological reasons. However, robustness of predictions and causal structures can serve the same function for simulations as reproducibility does for laboratory experiments and field observations in either adjudicating between conflicting results or showing that there is insufficient justification to externally validate the results. Additionally, an interpretation of the argument from robustness is presented that involves appealing to the convergence of many well-built and diverse models rather than the more common version which involves appealing to the probability that one of a set of models is likely to be true. This interpretation strengthens the case for taking robustness as an additional requirement for the validation of simulation results and ultimately supports the idea that computer simulations can provide information about the world that is just as trustworthy as data from more traditional laboratory studies and field observations. Understanding the importance of robust results for the validation of simulation data is especially important for policymakers making decisions on the basis of potentially conflicting models. Applications will span climate, hydrological, and hydroclimatological models.

  16. Robust model of fresh jujube soluble solids content with near ...

    African Journals Online (AJOL)

    A robust partial least square (PLS) calibration model with high accuracy and stability was established for the measurement of soluble solids content (SSC) of fresh jujube using near-infrared (NIR) spectroscopy technique. Fresh jujube samples were collected in different areas of Taigu and Taiyuan cities, central China in ...

  17. Robust Trust in Expert Testimony

    Directory of Open Access Journals (Sweden)

    Christian Dahlman

    2015-05-01

    The standard of proof in criminal trials should require that the evidence presented by the prosecution is robust. This requirement of robustness says that it must be unlikely that additional information would change the probability that the defendant is guilty. Robustness is difficult for a judge to estimate, as it requires the judge to assess the possible effect of information that he or she does not have. This article is concerned with expert witnesses and proposes a method for reviewing the robustness of expert testimony. According to the proposed method, the robustness of expert testimony is estimated with regard to competence, motivation, external strength, internal strength and relevance. The danger of trusting non-robust expert testimony is illustrated with an analysis of the Thomas Quick Case, a Swedish legal scandal where a patient at a mental institution was wrongfully convicted of eight murders.

  18. The analytical calibration in (bio)imaging/mapping of the metallic elements in biological samples--definitions, nomenclature and strategies: state of the art.

    Science.gov (United States)

    Jurowski, Kamil; Buszewski, Bogusław; Piekoszewski, Wojciech

    2015-01-01

    Nowadays, studies of the distribution of metallic elements in biological samples are among the most important issues. There are many articles dedicated to specific analytical atomic spectrometry techniques used for mapping/(bio)imaging the metallic elements in various kinds of biological samples. However, in this literature there is a lack of articles reviewing calibration strategies and their problems, nomenclature, definitions, and the ways and methods used to obtain quantitative distribution maps. The aim of this article was to characterize the analytical calibration in the (bio)imaging/mapping of the metallic elements in biological samples, including (1) nomenclature; (2) definitions; and (3) selected, sophisticated examples of calibration strategies, with analytical calibration procedures applied in the different analytical methods currently used to study an element's distribution in biological samples/materials, such as LA ICP-MS, SIMS, EDS, XRF and others. The main emphasis was placed on the procedures and methodology of the analytical calibration strategy. Additionally, this work aims to systematize the nomenclature for the calibration terms: analytical calibration, analytical calibration method, analytical calibration procedure and analytical calibration strategy. The authors also want to popularize a division of calibration methods different from those hitherto used. This article is the first work in the literature that refers to and emphasizes the many different and complex aspects of analytical calibration problems in studies related to (bio)imaging/mapping metallic elements in different kinds of biological samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Median Robust Extended Local Binary Pattern for Texture Classification.

    Science.gov (United States)

    Liu, Li; Lao, Songyang; Fieguth, Paul W; Guo, Yulan; Wang, Xiaogang; Pietikäinen, Matti

    2016-03-01

    Local binary patterns (LBP) are considered among the most computationally efficient high-performance texture features. However, the LBP method is very sensitive to image noise and is unable to capture macrostructure information. To address these disadvantages, in this paper, we introduce a novel descriptor for texture classification, the median robust extended LBP (MRELBP). Different from the traditional LBP and many LBP variants, MRELBP compares regional image medians rather than raw image intensities. A multiscale LBP-type descriptor is computed by efficiently comparing image medians over a novel sampling scheme, which can capture both microstructure and macrostructure texture information. A comprehensive evaluation on benchmark data sets reveals MRELBP's high performance (robust to gray-scale variations, rotation changes and noise) at a low computational cost. MRELBP produces the best classification scores of 99.82%, 99.38%, and 99.77% on three popular Outex test suites. More importantly, MRELBP is shown to be highly robust to image noise, including Gaussian noise, Gaussian blur, salt-and-pepper noise, and random pixel corruption.
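
    The core idea of comparing medians rather than raw intensities can be sketched in a few lines. The Python fragment below computes an 8-neighbor LBP-style code on a median-filtered image; the radius, filter size and neighbor layout are simplifications for illustration, not the MRELBP descriptor itself.

        import numpy as np
        from scipy.ndimage import median_filter

        def median_lbp(img, radius=2, size=3):
            """8-neighbor LBP code on regional medians (illustrative only)."""
            med = median_filter(img.astype(float), size=size)  # regional medians
            h, w = med.shape
            c = med[radius:h - radius, radius:w - radius]      # center medians
            code = np.zeros_like(c, dtype=np.uint8)
            offsets = [(-radius, 0), (-radius, radius), (0, radius),
                       (radius, radius), (radius, 0), (radius, -radius),
                       (0, -radius), (-radius, -radius)]
            for bit, (dy, dx) in enumerate(offsets):
                nbr = med[radius + dy:h - radius + dy, radius + dx:w - radius + dx]
                code |= ((nbr >= c).astype(np.uint8) << bit)   # one bit per neighbor
            return code

        img = np.random.default_rng(1).integers(0, 256, (32, 32))
        hist = np.bincount(median_lbp(img).ravel(), minlength=256)  # texture feature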

  20. Robustness Analysis of Timber Truss Structure

    DEFF Research Database (Denmark)

    Rajčić, Vlatka; Čizmar, Dean; Kirkegaard, Poul Henning

    2010-01-01

    The present paper discusses robustness of structures in general and the robustness requirements given in the codes. Robustness of timber structures is also an issue, as it is closely related to Working Group 3 (Robustness of systems) of the COST E55 project. Finally, an example of a robustness evaluation of a wide-span timber truss structure is presented. This structure was built a few years ago near Zagreb and has a span of 45 m. A reliability analysis of the main members and the system is conducted, and based on this a robustness analysis is performed.

  1. Robust and Reversible Audio Watermarking by Modifying Statistical Features in Time Domain

    Directory of Open Access Journals (Sweden)

    Shijun Xiang

    2017-01-01

    Robust and reversible watermarking is a potential technique in many sensitive applications, such as lossless audio or medical image systems. This paper presents a novel robust reversible audio watermarking method that modifies statistical features in the time domain, such that the histogram of these statistical values is shifted for data hiding. Firstly, the original audio is divided into nonoverlapping equal-sized frames. In each frame, each group of three samples generates a prediction error, and a statistical feature value is calculated as the sum of all the prediction errors in the frame. The watermark bits are embedded into the frames by shifting the histogram of the statistical features. The watermark is reversible and robust to common signal processing operations. Experimental results have shown that the proposed method not only is reversible but also achieves satisfactory robustness to MP3 compression at 64 kbps and additive Gaussian noise at 35 dB.
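
    The frame-level feature construction described above can be sketched as follows in Python. The mid-sample predictor and the frame length are assumptions, since the paper's exact predictor is not reproduced here; embedding would then shift the histogram of these features to hide bits.

        import numpy as np

        def frame_features(audio, frame_len=600):
            n_frames = len(audio) // frame_len
            feats = []
            for f in range(n_frames):
                frame = audio[f * frame_len:(f + 1) * frame_len]
                g = frame[:frame_len - frame_len % 3].reshape(-1, 3)
                err = g[:, 1] - (g[:, 0] + g[:, 2]) / 2.0  # prediction error per group
                feats.append(err.sum())                     # frame statistical feature
            return np.asarray(feats)

        audio = np.random.default_rng(2).normal(size=48000)
        print(frame_features(audio)[:5])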

  2. Decreasing-Rate Pruning Optimizes the Construction of Efficient and Robust Distributed Networks.

    Directory of Open Access Journals (Sweden)

    Saket Navlakha

    2015-07-01

    Robust, efficient, and low-cost networks are advantageous in both biological and engineered systems. During neural network development in the brain, synapses are massively over-produced and then pruned back over time. This strategy is not commonly used when designing engineered networks, since adding connections that will soon be removed is considered wasteful. Here, we show that for large distributed routing networks, network function is markedly enhanced by hyper-connectivity followed by aggressive pruning, and that the global rate of pruning, a developmental parameter not previously studied by experimentalists, plays a critical role in optimizing network structure. We first used high-throughput image analysis techniques to quantify the rate of pruning in the mammalian neocortex across a broad developmental time window and found that the rate decreases over time. Based on these results, we analyzed a model of computational routing networks and show, using both theoretical analysis and simulations, that decreasing rates lead to more robust and efficient networks compared to other rates. We also present an application of this strategy to improve the distributed design of airline networks. Thus, inspiration from neural network formation suggests effective ways to design distributed networks across several domains.

  3. Stochasticity in Ca2+ increase in spines enables robust and sensitive information coding.

    Directory of Open Access Journals (Sweden)

    Takuya Koumura

    A dendritic spine is a very small structure (∼0.1 µm³) of a neuron that processes input timing information. Why are spines so small? Here, we provide functional reasons: the size of spines is optimal for information coding. Spines code input timing information by the probability of Ca2+ increases, which makes robust and sensitive information coding possible. We created a stochastic simulation model of input timing-dependent Ca2+ increases in a cerebellar Purkinje cell's spine. Spines used probability coding of Ca2+ increases rather than amplitude coding for input timing detection via stochastic facilitation, by utilizing the small number of molecules in a spine volume, where information per volume appeared optimal. Probability coding of Ca2+ increases in a spine volume was more robust against input fluctuation and more sensitive to input numbers than amplitude coding of Ca2+ increases in a cell volume. Thus, stochasticity is a strategy by which neurons robustly and sensitively code information.

  4. Histogram Equalization to Model Adaptation for Robust Speech Recognition

    Directory of Open Access Journals (Sweden)

    Suh Youngjoo

    2010-01-01

    We propose a new model adaptation method based on the histogram equalization technique for providing robustness in noisy environments. The trained acoustic mean models of a speech recognizer are adapted into environmentally matched conditions by using the histogram equalization algorithm on a single-utterance basis. For more robust speech recognition in heavily noisy conditions, trained acoustic covariance models are efficiently adapted by signal-to-noise-ratio-dependent linear interpolation between trained covariance models and utterance-level sample covariance models. Speech recognition experiments on both the digit-based Aurora2 task and a large-vocabulary task showed that the proposed model adaptation approach provides significant performance improvements compared to the baseline speech recognizer trained on clean speech data.
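
    A rough Python sketch of the two adaptation ideas above follows: (i) histogram equalization, mapping trained model means into the test condition via quantile matching, and (ii) SNR-dependent linear interpolation of covariances. The interpolation endpoints and the toy data are assumptions, not the paper's configuration.

        import numpy as np

        def heq_map(vals, train_feats, test_feats):
            """Map values through the quantile transform train -> test."""
            ranks = np.searchsorted(np.sort(train_feats), vals) / len(train_feats)
            return np.quantile(test_feats, np.clip(ranks, 0.0, 1.0))

        def adapt_cov(trained_cov, sample_cov, snr_db, snr_lo=0.0, snr_hi=20.0):
            a = np.clip((snr_db - snr_lo) / (snr_hi - snr_lo), 0.0, 1.0)
            return a * trained_cov + (1.0 - a) * sample_cov  # trust training at high SNR

        rng = np.random.default_rng(8)
        train = rng.normal(0.0, 1.0, 5000)   # clean-condition features
        test = rng.normal(1.0, 2.0, 5000)    # noisy-condition features
        print(heq_map(np.array([-1.0, 0.0, 1.0]), train, test))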

  5. Qualitative Robustness in Estimation

    Directory of Open Access Journals (Sweden)

    Mohammed Nasser

    2012-07-01

    Qualitative robustness, influence function, and breakdown point are three main concepts for judging an estimator from the viewpoint of robust estimation, and it is important as well as interesting to study the relations among them. This article presents the concept of qualitative robustness as put forward by its first proponents and traces its later development. It illustrates the intricacies of qualitative robustness and its relation to consistency, and tries to correct commonly held misunderstandings about the relation between the influence function and qualitative robustness, citing examples from the literature and providing a new counter-example. Finally, it presents a useful finite-sample and a simulated version of a qualitative robustness index (QRI). In order to assess the performance of the proposed measures, we have compared fifteen estimators of the correlation coefficient using simulated as well as real data sets.

  6. Star marketer’s impact on the market strategy choice

    OpenAIRE

    Goran, Vlašić; Hair, Joe F.; Krupka, Zoran

    2017-01-01

    We focus on understanding the role of star marketers in pursuing a market-driven vs. a market-driving strategy. Results indicate that market-driving and market-driven strategies are two approaches that can be pursued by market-oriented firms. A star marketer has a robust positive influence on market-driving strategy. In contrast, a star marketer has no meaningful influence on market-driven strategy. In short, while star marketers are very important for market-driving strategy and long term su...

  7. A Cost-Constrained Sampling Strategy in Support of LAI Product Validation in Mountainous Areas

    Directory of Open Access Journals (Sweden)

    Gaofei Yin

    2016-08-01

    Increasing attention is being paid to leaf area index (LAI) retrieval in mountainous areas. Mountainous areas present extreme topographic variability and are characterized by more spatial heterogeneity and inaccessibility compared with flat terrain. It is difficult to collect representative ground-truth measurements, and the validation of LAI in mountainous areas is still problematic. A cost-constrained sampling strategy (CSS) in support of LAI validation was presented in this study. To account for the influence of rugged terrain on implementation cost, a cost-objective function was incorporated into the traditional conditioned Latin hypercube (CLH) sampling strategy. A case study in Hailuogou, Sichuan province, China was used to assess the efficiency of CSS. Normalized difference vegetation index (NDVI), land cover type, and slope were selected as auxiliary variables to represent the variability of LAI in the study area. Results show that CSS can satisfactorily capture the variability across the site extent while minimizing field efforts. One appealing feature of CSS is that the compromise between representativeness and implementation cost can be regulated according to actual surface heterogeneity and budget constraints, which makes CSS flexible. Although the proposed method was only validated for the auxiliary variables rather than the LAI measurements, it serves as a starting point for establishing the locations of field plots and facilitates the preparation of field campaigns in mountainous areas.
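
    A greedy toy version of the cost-constrained idea can be sketched in Python: candidate plots are scored by how well the chosen set covers the auxiliary variable's quantiles, penalized by an accessibility cost (slope, here). The real method uses conditioned Latin hypercube sampling with an annealing-style search; this simplification and all weights are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        ndvi = rng.uniform(0, 1, 500)      # auxiliary variable at candidate plots
        slope = rng.uniform(0, 45, 500)    # implementation-cost proxy (degrees)
        lam, n_plots = 0.01, 20            # cost weight and sample size

        chosen = []
        for _ in range(n_plots):
            best, best_score = None, np.inf
            for i in range(len(ndvi)):
                if i in chosen:
                    continue
                trial = chosen + [i]
                # coverage error: distance between sample and target quantiles
                q = np.quantile(ndvi[trial], np.linspace(0, 1, 11))
                err = np.abs(q - np.linspace(0, 1, 11)).mean()
                score = err + lam * slope[i]   # representativeness + cost penalty
                if score < best_score:
                    best, best_score = i, score
            chosen.append(best)
        print(sorted(chosen)[:10])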

  8. Robustness of coevolution in resolving prisoner's dilemma games on interdependent networks subject to attack

    Science.gov (United States)

    Liu, Penghui; Liu, Jing

    2017-08-01

    Recently, coevolution between strategy and network structure has been established as a rule to resolve social dilemmas and reach optimal situations for cooperation. Many follow-up studies have focused on how coevolution helps networks reorganize to deter defectors, and many coevolution methods have been proposed. However, the robustness of these coevolution rules against attacks has not been studied much. Since attacks may directly influence the original evolutionary process of cooperation, robustness should be an important index when evaluating the quality of a coevolution method. In this paper, we focus on investigating the robustness of an elementary coevolution method in resolving the prisoner's dilemma game on interdependent networks. Three different types of time-independent attacks, namely edge attacks, instigation attacks and node attacks, have been employed to test its robustness. Analysis of the simulation results shows that this coevolution method is relatively robust against edge attacks and node attacks, as it successfully maintains cooperation in the population over the entire attack range. However, when the instigation probability of the attacked individuals is large or the attack range of the instigation attack is wide enough, the coevolutionary rule eventually fails to maintain cooperation in the population.

  9. Robust Programming by Example

    OpenAIRE

    Bishop , Matt; Elliott , Chip

    2011-01-01

    Robust programming lies at the heart of the type of coding called "secure programming". Yet it is rarely taught in academia. More commonly, the focus is on how to avoid creating well-known vulnerabilities. While important, that misses the point: a well-structured, robust program should anticipate where problems might arise and compensate for them. This paper discusses one view of robust programming and gives an example of how it may be taught.

  10. Robust Reliability or reliable robustness? - Integrated consideration of robustness and reliability aspects

    DEFF Research Database (Denmark)

    Kemmler, S.; Eifler, Tobias; Bertsche, B.

    2015-01-01

    products are and vice versa. For a comprehensive understanding and to use existing synergies between both domains, this paper discusses the basic principles of Reliability- and Robust Design theory. The development of a comprehensive model will enable an integrated consideration of both domains...

  11. Robust and efficient parameter estimation in dynamic models of biological systems.

    Science.gov (United States)

    Gábor, Attila; Banga, Julio R

    2015-10-29

    Dynamic modelling provides a systematic framework to understand function in biological systems. Parameter estimation in nonlinear dynamic models remains a very challenging inverse problem due to its nonconvexity and ill-conditioning. Associated issues like overfitting and local solutions are usually not properly addressed in the systems biology literature despite their importance. Here we present a method for robust and efficient parameter estimation which uses two main strategies to surmount the aforementioned difficulties: (i) efficient global optimization to deal with nonconvexity, and (ii) proper regularization methods to handle ill-conditioning. In the case of regularization, we present a detailed critical comparison of methods and guidelines for properly tuning them. Further, we show how regularized estimations ensure the best trade-offs between bias and variance, reducing overfitting, and allowing the incorporation of prior knowledge in a systematic way. We illustrate the performance of the presented method with seven case studies of different nature and increasing complexity, considering several scenarios of data availability, measurement noise and prior knowledge. We show how our method ensures improved estimations with faster and more stable convergence. We also show how the calibrated models are more generalizable. Finally, we give a set of simple guidelines to apply this strategy to a wide variety of calibration problems. Here we provide a parameter estimation strategy which combines efficient global optimization with a regularization scheme. This method is able to calibrate dynamic models in an efficient and robust way, effectively fighting overfitting and allowing the incorporation of prior information.
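
    A compact Python sketch of the two ingredients highlighted above follows, applied to a toy exponential-decay model: a crude multi-start search (a stand-in for the paper's efficient global optimization) wrapped around a Tikhonov-regularized least-squares fit. The model, bounds, weights and prior are placeholders, not the paper's case studies.

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0, 5, 30)
        k_true = 1.3
        y_obs = np.exp(-k_true * t) + np.random.default_rng(4).normal(0, 0.05, t.size)

        def residuals(theta, alpha=1e-2, prior=1.0):
            k = theta[0]
            fit = np.exp(-k * t) - y_obs
            reg = np.sqrt(alpha) * (k - prior)   # Tikhonov term pulling toward prior
            return np.append(fit, reg)

        starts = [0.1, 1.0, 5.0]                 # multi-start to escape local minima
        fits = [least_squares(residuals, [s], bounds=(0, 10)) for s in starts]
        best = min(fits, key=lambda r: r.cost)
        print(best.x)                            # regularized estimate of k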

  12. Robust passive control for Internet-based switching systems with time-delay

    Energy Technology Data Exchange (ETDEWEB)

    Guan Zhihong [Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Zhang Hao [Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China)], E-mail: ehao79@163.com; Yang Shuanghua [Department of Computer Science, Loughborough University, Loughborough LE11 3TU (United Kingdom)

    2008-04-15

    In this paper, based on a remote control and local control strategy, a class of hybrid multi-rate control models with time-delay and switching controllers is formulated, and the problem of robust passive control for this discrete system is investigated. By constructing a Lyapunov-Krasovskii functional and applying a descriptor model transformation, some new sufficient conditions in the form of LMIs are derived. A numerical example is given to illustrate the effectiveness of the theoretical result.

  13. An optimization strategy for the control of small capacity heat pump integrated air-conditioning system

    International Nuclear Information System (INIS)

    Gao, Jiajia; Huang, Gongsheng; Xu, Xinhua

    2016-01-01

    Highlights: • An optimization strategy for a small-scale air-conditioning system is developed. • The optimization strategy aims at optimizing the overall system energy consumption. • The strategy may guarantee robust control of the space air temperature. • The performance of the optimization strategy was tested on a simulation platform.
    Abstract: This paper studies the optimization of a small-scale central air-conditioning system in which the cooling is provided by a ground source heat pump (GSHP) equipped with on/off capacity control. The optimization strategy aims to optimize the overall system energy consumption while simultaneously guaranteeing the robustness of the space air temperature control, without violating the maximum number of GSHP start-ups per hour specified by customers. The set-point of the chilled water return temperature and the width of the water temperature control band are used as the decision variables for the optimization. The performance of the proposed strategy was tested on a simulation platform. Results show that the optimization strategy can reduce energy consumption by 9.59% on a typical spring day and 2.97% on a typical summer day, while enhancing the robustness of the space air temperature control compared with a basic control strategy without optimization.

  14. Interactive Control System, Intended Strategy, Implemented Strategy dan Emergent Strategy

    OpenAIRE

    Tubagus Ismail; Darjat Sudrajat

    2012-01-01

    The purpose of this study was to examine the relationship between management control systems (MCS) and strategy formation processes, namely: intended strategy, emergent strategy and implemented strategy. The focus of MCS in this study was the interactive control system. The study was based on Structural Equation Modeling (SEM) as its multivariate analysis instrument. The samples were upper-middle managers of manufacturing companies in Banten Province, DKI Jakarta Province and West Java Province. AM...

  15. Evaluation of 5-FU pharmacokinetics in cancer patients with DPD deficiency using a Bayesian limited sampling strategy

    NARCIS (Netherlands)

    Van Kuilenburg, A.; Hausler, P.; Schalhorn, A.; Tanck, M.; Proost, J.H.; Terborg, C.; Behnke, D.; Schwabe, W.; Jabschinsky, K.; Maring, J.G.

    Aims: Dihydropyrimidine dehydrogenase (DPD) is the initial enzyme in the catabolism of 5-fluorouracil (5FU) and DPD deficiency is an important pharmacogenetic syndrome. The main purpose of this study was to develop a limited sampling strategy to evaluate the pharmacokinetics of 5FU and to detect

  16. Bilinear Approximate Model-Based Robust Lyapunov Control for Parabolic Distributed Collectors

    KAUST Repository

    Elmetennani, Shahrazed

    2016-11-09

    This brief addresses the control problem of distributed parabolic solar collectors in order to maintain the field outlet temperature around a desired level. The objective is to design an efficient controller to force the outlet fluid temperature to track a set reference despite the unpredictable varying working conditions. In this brief, a bilinear model-based robust Lyapunov control is proposed to achieve the control objectives with robustness to the environmental changes. The bilinear model is a reduced order approximate representation of the solar collector, which is derived from the hyperbolic distributed equation describing the heat transport dynamics by means of a dynamical Gaussian interpolation. Using the bilinear approximate model, a robust control strategy is designed applying Lyapunov stability theory combined with a phenomenological representation of the system in order to stabilize the tracking error. On the basis of the error analysis, simulation results show good performance of the proposed controller, in terms of tracking accuracy and convergence time, with limited measurement even under unfavorable working conditions. Furthermore, the presented work is of interest for a large category of dynamical systems knowing that the solar collector is representative of physical systems involving transport phenomena constrained by unknown external disturbances.

  17. Combined Heuristic Attack Strategy on Complex Networks

    Directory of Open Access Journals (Sweden)

    Marek Šimon

    2017-01-01

    Usually, the existence of a complex network is considered an advantageous feature, and efforts are made to increase its robustness against attack. However, there also exist harmful and/or malicious networks, from social ones (spreading hoaxes, corruption, phishing, extremist ideology or terrorist support) to computer networks spreading computer viruses or DDoS attack software, and even biological networks of carriers or transport centers spreading disease among the population. A new attack strategy can therefore be used against malicious networks, as well as in a worst-case-scenario test of the robustness of a useful network. A common measure of the robustness of a network is its disintegration level after removal of a fraction of nodes, calculated as the ratio of the number of nodes in the largest remaining network component to the number of nodes in the original network. Our paper presents a combination of heuristics optimized for an attack on a complex network to achieve its greatest disintegration. Nodes are deleted sequentially based on a heuristic criterion. The efficiency of classical attack approaches is compared with that of the proposed approach on Barabási-Albert, scale-free with tunable power-law exponent, and Erdős-Rényi models of complex networks, as well as on real-world networks. Our attack strategy results in faster disintegration, counterbalanced by slightly increased computational demands.
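
    The disintegration measure described above is easy to reproduce. The small networkx sketch below deletes nodes sequentially by a heuristic criterion (recomputed degree here, as a simple stand-in for the paper's combined heuristic) and tracks the giant-component fraction; the graph model and attack fraction are illustrative choices.

        import networkx as nx

        def attack_curve(G, fraction=0.3):
            G, n0 = G.copy(), G.number_of_nodes()
            sizes = []
            for _ in range(int(fraction * n0)):
                target = max(G.degree, key=lambda kv: kv[1])[0]  # heuristic choice
                G.remove_node(target)
                giant = max(nx.connected_components(G), key=len)
                sizes.append(len(giant) / n0)    # disintegration measure
            return sizes

        G = nx.barabasi_albert_graph(500, 3, seed=5)
        print(attack_curve(G)[:5])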

  18. Self-organization principles result in robust control of flexible manufacturing systems

    DEFF Research Database (Denmark)

    Nature shows us in our daily life how robust, flexible and optimal self-organized modular constructions work in complex physical, chemical and biological systems, which successfully adapt to new and unexpected situations. A promising strategy is therefore to use such self-organization and pattern formation principles. Assignment problems with several autonomous robots and several targets are considered as a model of flexible manufacturing systems. Each manufacturing target has to be served in a given time interval by one and only one robot, and the total working costs have to be minimized (or total winnings maximized). A specifically constructed dynamical system approach (coupled selection equations) is used, which is based on pattern formation principles and results in fault-resistant and robust behaviour. An important feature is that this type of control also guarantees feasibility of the assignment solutions. In previous work...

  19. Efficient Computation of Info-Gap Robustness for Finite Element Models

    International Nuclear Information System (INIS)

    Stull, Christopher J.; Hemez, Francois M.; Williams, Brian J.

    2012-01-01

    A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.

  20. Robust Covariance Estimators Based on Information Divergences and Riemannian Manifold

    Directory of Open Access Journals (Sweden)

    Xiaoqiang Hua

    2018-03-01

    This paper proposes a class of covariance estimators based on information divergences in heterogeneous environments. In particular, the problem of covariance estimation is reformulated on the Riemannian manifold of Hermitian positive-definite (HPD) matrices. The means associated with information divergences are derived and used as the estimators. Without resorting to the complete knowledge of the probability distribution of the sample data, the geometry of the Riemannian manifold of HPD matrices is considered in mean estimators. Moreover, the robustness of mean estimators is analyzed using the influence function. Simulation results indicate the robustness and superiority of an adaptive normalized matched filter with our proposed estimators compared with the existing alternatives.
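
    One member of the family of manifold means discussed above is the log-Euclidean mean, which averages matrix logarithms in the tangent space. The Python sketch below computes it for random SPD matrices; the choice of this particular metric and the toy data are assumptions, not the paper's estimators.

        import numpy as np
        from scipy.linalg import expm, logm

        def log_euclidean_mean(mats):
            logs = [logm(M) for M in mats]         # map to tangent space
            return expm(np.mean(logs, axis=0))     # average, then map back

        rng = np.random.default_rng(6)
        samples = []
        for _ in range(10):
            A = rng.normal(size=(4, 4))
            samples.append(A @ A.T + 4 * np.eye(4))  # random SPD matrices
        print(log_euclidean_mean(samples).round(2))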

  1. Robust

    DEFF Research Database (Denmark)

    2017-01-01

    'Robust – Reflections on Resilient Architecture' is a scientific publication following the conference of the same name in November 2017. Researchers and PhD Fellows associated with the Masters programme Cultural Heritage, Transformation and Restoration (Transformation) at The Royal Danish

  2. Influence of binary mask estimation errors on robust speaker identification

    DEFF Research Database (Denmark)

    May, Tobias

    2017-01-01

    Missing-data strategies have been developed to improve the noise-robustness of automatic speech recognition systems in adverse acoustic conditions. This is achieved by classifying time-frequency (T-F) units into reliable and unreliable components, as indicated by a so-called binary mask. Different approaches have been proposed to handle unreliable feature components, each with distinct advantages. The direct masking (DM) approach attenuates unreliable T-F units in the spectral domain, which allows the extraction of conventionally used mel-frequency cepstral coefficients (MFCCs). Instead of attenuating ... Since each of these approaches utilizes the knowledge about reliable and unreliable feature components in a different way, they will respond differently to estimation errors in the binary mask. The goal of this study was to identify the most effective strategy to exploit knowledge about reliable...

  3. Robust gene selection methods using weighting schemes for microarray data analysis.

    Science.gov (United States)

    Kang, Suyeon; Song, Jongwoo

    2017-09-02

    A common task in microarray data analysis is to identify informative genes that are differentially expressed between two different states. Owing to the high-dimensional nature of microarray data, identification of significant genes has been essential in analyzing the data. However, the performance of many gene selection techniques is highly dependent on the experimental conditions, such as the presence of measurement error or a limited number of sample replicates. We have proposed new filter-based gene selection techniques by applying a simple modification to significance analysis of microarrays (SAM). To prove the effectiveness of the proposed method, we considered a series of synthetic datasets with different noise levels and sample sizes, along with two real datasets. The following findings were made. First, our proposed methods outperform conventional methods for all simulation set-ups. In particular, our methods are much better when the given data are noisy and the sample size is small. They showed relatively robust performance regardless of noise level and sample size, whereas the performance of SAM became significantly worse as the noise level became high or the sample size decreased. When sufficient sample replicates were available, SAM and our methods showed similar performance. Finally, our proposed methods are competitive with traditional methods in classification tasks for microarrays. The results of the simulation study and real data analysis have demonstrated that our proposed methods are effective for detecting significant genes and for classification tasks, especially when the given data are noisy or have few sample replicates. By employing weighting schemes, we can obtain robust and reliable results for microarray data analysis.

  4. Decision Tree and Survey Development for Support in Agricultural Sampling Strategies during Nuclear and Radiological Emergencies

    International Nuclear Information System (INIS)

    Yi, Amelia Lee Zhi; Dercon, Gerd

    2017-01-01

    In the event of a severe nuclear or radiological accident, the release of radionuclides results in contamination of land surfaces affecting agricultural and food resources. Speedy accumulation of information and guidance on decision making is essential in enhancing the ability of stakeholders to strategize for immediate countermeasure strategies. Support tools such as decision trees and sampling protocols allow for swift response by governmental bodies and assist in proper management of the situation. While such tools exist, they focus mainly on protecting public well-being and not food safety management strategies. Consideration of the latter is necessary as it has long-term implications especially to agriculturally dependent Member States. However, it is a research gap that remains to be filled.

  5. Strong and Robust Polyaniline-Based Supramolecular Hydrogels for Flexible Supercapacitors.

    Science.gov (United States)

    Li, Wanwan; Gao, Fengxian; Wang, Xiaoqian; Zhang, Ning; Ma, Mingming

    2016-08-01

    We report a supramolecular strategy to prepare conductive hydrogels with outstanding mechanical and electrochemical properties, which are utilized for flexible solid-state supercapacitors (SCs) with high performance. The supramolecular assembly of polyaniline and polyvinyl alcohol through dynamic boronate bonds yields the polyaniline-polyvinyl alcohol hydrogel (PPH), which shows remarkable tensile strength (5.3 MPa) and electrochemical capacitance (928 F g⁻¹). The flexible solid-state supercapacitor based on PPH provides a large capacitance (306 mF cm⁻² and 153 F g⁻¹) and a high energy density of 13.6 Wh kg⁻¹, superior to other flexible supercapacitors. The robustness of the PPH-based supercapacitor is demonstrated by the 100% capacitance retention after 1000 mechanical folding cycles, and the 90% capacitance retention after 1000 galvanostatic charge-discharge cycles. The high activity and robustness make the PPH-based supercapacitor a promising power device for flexible electronics. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Limited sampling strategies drawn within 3 hours postdose poorly predict mycophenolic acid area-under-the-curve after enteric-coated mycophenolate sodium.

    NARCIS (Netherlands)

    Winter, B.C. de; Gelder, T. van; Mathôt, R.A.A.; Glander, P.; Tedesco-Silva, H.; Hilbrands, L.B.; Budde, K.; Hest, R.M. van

    2009-01-01

    Previous studies predicted that limited sampling strategies (LSS) for estimation of mycophenolic acid (MPA) area-under-the-curve (AUC(0-12)) after ingestion of enteric-coated mycophenolate sodium (EC-MPS) using a clinically feasible sampling scheme may have poor predictive performance. Failure of

  7. Robustness in Railway Operations (RobustRailS)

    DEFF Research Database (Denmark)

    Jensen, Jens Parbo; Nielsen, Otto Anker

    This study considers the problem of enhancing railway timetable robustness without adding slack time and thereby increasing travel time. The approach integrates a transit assignment model to assess how passengers adapt their behaviour whenever operations are changed. First, the approach considers

  8. Robust fractional order sliding mode control of doubly-fed induction generator (DFIG)-based wind turbines.

    Science.gov (United States)

    Ebrahimkhani, Sadegh

    2016-07-01

    Wind power plants have nonlinear dynamics and contain many uncertainties such as unknown nonlinear disturbances and parameter uncertainties. Thus, it is a difficult task to design a robust reliable controller for this system. This paper proposes a novel robust fractional-order sliding mode (FOSM) controller for maximum power point tracking (MPPT) control of a doubly fed induction generator (DFIG)-based wind energy conversion system. In order to enhance the robustness of the control system, uncertainties and disturbances are estimated using a fractional order uncertainty estimator. In the proposed method a continuous control strategy is developed to achieve chattering-free fractional-order sliding-mode control, and no knowledge of the uncertainties and disturbances or their bounds is assumed. The boundedness and convergence properties of the closed-loop signals are proven using Lyapunov's stability theory. Simulation results in the presence of various uncertainties were carried out to evaluate the effectiveness and robustness of the proposed control scheme. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Robust electrochemical analysis of As(III) integrating with interference tests: A case study in groundwater

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhong-Gang [Nanomaterials and Environmental Detection Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031 (China); Department of Chemistry, University of Science and Technology of China, Hefei 230026 (China); Chen, Xing; Liu, Jin-Huai [Nanomaterials and Environmental Detection Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031 (China); Huang, Xing-Jiu, E-mail: xingjiuhuang@iim.ac.cn [Nanomaterials and Environmental Detection Laboratory, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031 (China); Department of Chemistry, University of Science and Technology of China, Hefei 230026 (China)

    2014-08-15

    Highlights: • Robust determination of As(III) in Togtoh water samples has been demonstrated. • The results were comparable to those obtained by ICP-AES. • No obvious interference was observed after a series of interference tests. • Robust stability was obtained in long-term measurements.
    Abstract: In the Togtoh region of Inner Mongolia, northern China, groundwater contains high concentrations of As (greater than 50 μg L⁻¹), causing increasing concern. This work demonstrates an electrochemical protocol for robust (efficient and accurate) determination of As(III) in Togtoh water samples using a Au microwire electrode, without the need for pretreatment or clean-up steps. Considering the complicated conditions of Togtoh water, the efficiency of the Au microwire electrode was systematically evaluated by a series of interference tests and stability and reproducibility measurements. No obvious interference with the determination of As(III) was observed; in particular, the influence of humic acid (HA) was intensively investigated. Electrode stability was also demonstrated in long-term measurements (70 days) in Togtoh water solution and under different temperatures (0-35 °C). Excellent reproducibility (RSD: 1.28%) was observed across different batches of Au microwire electrodes. The results obtained at the Au microwire electrode were comparable to those obtained by inductively coupled plasma atomic emission spectroscopy (ICP-AES), indicating good accuracy. These evaluations (efficiency, robustness, and accuracy) demonstrated that the Au microwire electrode is able to determine As(III) in real environmental samples.

  10. Microgrid Stability Controller Based on Adaptive Robust Total SMC

    Directory of Open Access Journals (Sweden)

    Xiaoling Su

    2015-03-01

    This paper presents a microgrid stability controller (MSC) in order to provide existing distributed generation units (DGs) the additional functionality of working in islanding mode without changing their control strategies in grid-connected mode, and to enhance the stability of the microgrid. Microgrid operating characteristics and mathematical models of the MSC indicate that the system is inherently nonlinear and time-variable. Therefore, this paper proposes an adaptive robust total sliding-mode control (ARTSMC) system for the MSC. It is proved that the ARTSMC system is insensitive to parametric uncertainties and external disturbances. The MSC provides fast dynamic response and robustness to the microgrid. When the system is operating in grid-connected mode, it is able to improve the controllability of the exchanged power between the microgrid and the utility grid, while smoothing the DGs' output power. When the microgrid is operating in islanded mode, it provides voltage and frequency support, while guaranteeing seamless transition between the two operation modes. Simulation and experimental results show the effectiveness of the proposed approach.

  11. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding the reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.

  12. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Science.gov (United States)

    Yao, Peng-Cheng; Gao, Hai-Yan; Wei, Ya-Nan; Zhang, Jian-Hang; Chen, Xiao-Yong; Li, Hong-Qing

    2017-01-01

    Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci for the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to the optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.05). For Imperata cylindrica and Chenopodium album, the average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcoding of globally distributed species should be increased to 11-15.

  13. Robust Manufacturing Control

    CERN Document Server

    2013-01-01

    This contributed volume collects research papers, presented at the CIRP Sponsored Conference Robust Manufacturing Control: Innovative and Interdisciplinary Approaches for Global Networks (RoMaC 2012, Jacobs University, Bremen, Germany, June 18th-20th 2012). These research papers present the latest developments and new ideas focusing on robust manufacturing control for global networks. Today, Global Production Networks (i.e. the nexus of interconnected material and information flows through which products and services are manufactured, assembled and distributed) are confronted with and expected to adapt to: sudden and unpredictable large-scale changes of important parameters which are occurring more and more frequently, event propagation in networks with high degree of interconnectivity which leads to unforeseen fluctuations, and non-equilibrium states which increasingly characterize daily business. These multi-scale changes deeply influence logistic target achievement and call for robust planning and control ...

  14. Wavelet bidomain sample entropy analysis to predict spontaneous termination of atrial fibrillation

    International Nuclear Information System (INIS)

    Alcaraz, Raúl; Rieta, José Joaquín

    2008-01-01

    The ability to predict whether an atrial fibrillation (AF) episode terminates spontaneously or not through non-invasive techniques is a challenging problem of great clinical interest, as it could avoid useless therapeutic interventions and minimize risks for the patient. The present work introduces a robust AF prediction methodology carried out by estimating, through sample entropy (SampEn), the increase in atrial activity (AA) organization prior to AF termination from the surface electrocardiogram (ECG). This regularity variation appears as a consequence of the decrease in the number of reentries wandering throughout the atrial tissue. AA was obtained from surface ECG recordings by applying a QRST cancellation technique. Next, a robust and reliable classification process for terminating and non-terminating AF episodes was developed, making use of two different wavelet decomposition strategies. Finally, the AA organization both in the time and wavelet domains (bidomain) was estimated via SampEn. The methodology was validated using a training set consisting of 20 AF recordings with known termination properties and a test set of 30 recordings. All the training signals and 93.33% of the test set were correctly classified into terminating and sustained AF, obtaining 93.75% sensitivity and 92.86% specificity. It can be concluded that spontaneous AF termination can be reliably and noninvasively predicted by applying wavelet bidomain sample entropy.
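
    Sample entropy itself is short to implement. The direct (unoptimized) Python version below quantifies signal regularity of the kind used above; the parameters m = 2 and r = 0.2 times the standard deviation follow common defaults and are assumptions here, as is the toy signal.

        import numpy as np

        def sampen(x, m=2, r_factor=0.2):
            x = np.asarray(x, dtype=float)
            r = r_factor * x.std()
            def count(mm):
                templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
                d = np.max(np.abs(templ[:, None] - templ[None, :]), axis=2)
                return (d <= r).sum() - len(templ)   # exclude self-matches
            return -np.log(count(m + 1) / count(m))  # lower value = more regular

        sig = np.sin(np.linspace(0, 30, 400)) \
            + 0.1 * np.random.default_rng(7).normal(size=400)
        print(sampen(sig))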

  15. Standard methods for sampling and sample preparation for gamma spectroscopy

    International Nuclear Information System (INIS)

    Taskaeva, M.; Taskaev, E.; Nikolov, P.

    1993-01-01

    The strategy for sampling and sample preparation is outlined: the necessary number of samples; the analysis and treatment of the results received; the quantity of analysed material according to the radionuclide concentrations and analytical methods; and the minimal quantity and kind of data needed for drawing final conclusions and making decisions on the basis of the results received. This strategy was tested in gamma spectroscopic analysis of radionuclide contamination in the region of the Eleshnitsa Uranium Mines. The water samples were taken and stored according to ASTM D 3370-82. The general sampling procedures were in conformity with the recommendations of ISO 5667. The radionuclides were concentrated by coprecipitation with iron hydroxide and by ion exchange. The sampling of soil samples complied with the rules of ASTM C 998, and their sample preparation with ASTM C 999. After preparation, the samples were sealed hermetically and measured. (author)

  16. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.

    Directory of Open Access Journals (Sweden)

    Guangwei Gao

    Full Text Available In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.

  17. Gear hot forging process robust design based on finite element method

    International Nuclear Information System (INIS)

    Xuewen, Chen; Won, Jung Dong

    2008-01-01

    During the hot forging process, the shaping property and forging quality will fluctuate because of die wear, manufacturing tolerance, dimensional variation caused by temperature and the different friction conditions, etc. In order to control this variation in performance and to optimize the process parameters, a robust design method is proposed in this paper, based on the finite element method for the hot forging process. During the robust design process, the Taguchi method is the basic robust theory. The finite element analysis is incorporated in order to simulate the hot forging process. In addition, in order to calculate the objective function value, an orthogonal design method is selected to arrange experiments and collect sample points. The ANOVA method is employed to analyze the relationships of the design parameters and design objectives and to find the best parameters. Finally, a case study for the gear hot forging process is conducted. With the objective to reduce the forging force and its variation, the robust design mathematical model is established. The optimal design parameters obtained from this study indicate that the forging force has been reduced and its variation has been controlled

  18. Robustness - theoretical framework

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Rizzuto, Enrico; Faber, Michael H.

    2010-01-01

    More frequent use of advanced types of structures with limited redundancy and serious consequences in case of failure, combined with increased requirements to efficiency in design and execution followed by increased risk of human errors, has made the need for requirements to the robustness of new structures evident. ... The aim of this fact sheet is to describe a theoretical and risk based framework to form the basis for quantification of robustness and for pre-normative guidelines.

  19. Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.

    Science.gov (United States)

    Blutke, Andreas; Wanke, Rüdiger

    2018-03-06

    In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analyses methods, including analyses not specified at the time point of sampling, represent meaningful approaches to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimen for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimen from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical
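
    As a worked illustration of the volume-weighted point-counting principle mentioned above, the Cavalieri volume estimate is a one-liner; the function and argument names below are ours, not part of the published protocol.

        def cavalieri_volume(points_per_section, t_mm, area_per_point_mm2):
            """Cavalieri principle: V = t * a(p) * sum(P), with section spacing
            t, grid area per point a(p), and P the number of grid points
            hitting the structure on each section."""
            return t_mm * area_per_point_mm2 * sum(points_per_section)

        # Hypothetical organ: 10 sections, 2 mm apart, 4 mm^2 per grid point.
        volume = cavalieri_volume([12, 18, 25, 30, 28, 22, 15, 9, 5, 2], 2.0, 4.0)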

  20. Sampling strategies to capture single-cell heterogeneity

    OpenAIRE

    Satwik Rajaram; Louise E. Heinrich; John D. Gordan; Jayant Avva; Kathy M. Bonness; Agnieszka K. Witkiewicz; James S. Malter; Chloe E. Atreya; Robert S. Warren; Lani F. Wu; Steven J. Altschuler

    2017-01-01

    Advances in single-cell technologies have highlighted the prevalence and biological significance of cellular heterogeneity. A critical question is how to design experiments that faithfully capture the true range of heterogeneity from samples of cellular populations. Here, we develop a data-driven approach, illustrated in the context of image data, that estimates the sampling depth required for prospective investigations of single-cell heterogeneity from an existing collection of samples. ...

  1. Nonlinear decentralized robust governor control for hydroturbine-generator sets in multi-machine power systems

    Energy Technology Data Exchange (ETDEWEB)

    Qiang Lu; Yusong Sun; Yuanzhang Sun [Tsinghua University, Beijing (China). Dept. of Electrical Engineering; Felix F Wu; Yixin Ni [University of Hong Kong (China). Dept. of Electrical and Electronic Engineering; Yokoyama, Akihiko [University of Tokyo (Japan). Dept. of Electrical Engineering; Goto, Masuo; Konishi, Hiroo [Hitachi Ltd., Tokyo (Japan). Power System Div.

    2004-06-01

    A novel nonlinear decentralized robust governor control for hydroturbine-generator sets in multi-machine power systems is suggested in this paper. The nonelastic water hammer effect and disturbances are considered in the modeling. The advanced differential geometry theory, nonlinear robust control theory and the dynamic feedback method are combined to solve the problem. The nonlinear decentralized robust control law for the speed governor of hydroturbine-generators has been derived. The input signals to the proposed controller are all local measurements and independent of the system parameters. The derived control law guarantees the integrated system stability with disturbance attenuation, which is significant to the real power system application. Computer tests on an 8-machine, 36-bus power system show clearly the effectiveness of the new control strategy in transient stability enhancement and disturbance attenuation. The computer test results based on the suggested controller compare favorably with those based on the conventional linear governor control. (author)

  2. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

    The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely, and subsequently reweighting the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained...
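
    The reweighting step the abstract refers to takes only a few lines of NumPy, assuming the simulation stored each sample's energy and the log of the multicanonical weight it was drawn under; all names here are illustrative.

        import numpy as np

        def boltzmann_reweight(energies, observables, log_w_sim, kT):
            """Reweight samples drawn under a multicanonical weight w_sim(E)
            back to the Boltzmann ensemble at temperature kT and return the
            reweighted average of the observable."""
            log_w = -np.asarray(energies) / kT - np.asarray(log_w_sim)
            log_w -= log_w.max()          # subtract max for numerical stability
            w = np.exp(log_w)
            return np.sum(w * np.asarray(observables)) / np.sum(w)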

  3. IVS Combination Center at BKG - Robust Outlier Detection and Weighting Strategies

    Science.gov (United States)

    Bachmann, S.; Lösler, M.

    2012-12-01

    Outlier detection plays an important role within the IVS combination. Even if the original data is the same for all contributing Analysis Centers (AC), the analyzed data shows differences due to analysis software characteristics. The treatment of outliers is thus a fine line between preserving data heterogeneity and eliminating real outliers. Robust outlier detection based on the Least Median Square (LMS) is used within the IVS combination. This method allows reliable outlier detection with a small number of input parameters. A similar problem arises for the weighting of the individual solutions within the combination process. The variance component estimation (VCE) is used to control the weighting factor for each AC. The Operator-Software-Impact (OSI) method takes into account that the analyzed data is strongly influenced by the software and the responsible operator. It makes the VCE more sensitive to the diverse input data. This method has already been set up within GNSS data analysis as well as the analysis of troposphere data. The benefit of an OSI realization within the VLBI combination and its potential in weighting factor determination has not been investigated before.
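
    For reference, a minimal random-subset implementation of Least Median of Squares line fitting with the usual Rousseeuw-style outlier cut-off; this is the generic LMS scheme, not the BKG combination software.

        import numpy as np

        def lms_line(x, y, n_trials=500, seed=0):
            """LMS fit of y = a + b*x: among random 2-point candidate fits,
            keep the one minimizing the median squared residual."""
            x, y = np.asarray(x, float), np.asarray(y, float)
            rng = np.random.default_rng(seed)
            best = (np.inf, 0.0, 0.0)
            for _ in range(n_trials):
                i, j = rng.choice(len(x), size=2, replace=False)
                if x[i] == x[j]:
                    continue
                b = (y[j] - y[i]) / (x[j] - x[i])
                a = y[i] - b * x[i]
                med = np.median((y - (a + b * x)) ** 2)
                if med < best[0]:
                    best = (med, a, b)
            return best  # (median squared residual, intercept, slope)

        def lms_outliers(x, y, cutoff=2.5):
            x, y = np.asarray(x, float), np.asarray(y, float)
            med, a, b = lms_line(x, y)
            # Robust scale from the LMS criterion (finite-sample corrected).
            s = 1.4826 * (1 + 5.0 / (len(x) - 2)) * np.sqrt(med)
            return np.abs(y - (a + b * x)) / s > cutoff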

  4. A Simple and Robust Method for Culturing Human-Induced Pluripotent Stem Cells in an Undifferentiated State Using Botulinum Hemagglutinin.

    Science.gov (United States)

    Kim, Mee-Hae; Matsubara, Yoshifumi; Fujinaga, Yukako; Kino-Oka, Masahiro

    2018-02-01

    Clinical and industrial applications of human-induced pluripotent stem cells (hiPSCs) are hindered by the lack of robust culture strategies capable of sustaining a culture in an undifferentiated state. Here, a simple and robust hiPSC-culture-propagation strategy incorporating botulinum hemagglutinin (HA)-mediated selective removal of cells deviating from an undifferentiated state is developed. After HA treatment, cell-cell adhesion is disrupted, and deviated cells detach from the central region of the colony to subsequently form tight monolayer colonies following prolonged incubation. The authors find that the temporal and dose-dependent activity of HA regulates deviated-cell removal and recoverability after disruption of cell-cell adhesion in hiPSC colonies. The effects of HA are confirmed under all culture conditions examined, regardless of hiPSC line and feeder-dependent or -free culture conditions. After routine application of our HA-treatment paradigm for serial passages, hiPSCs maintain expression of pluripotent markers and readily form embryoid bodies expressing markers for all three germ-cell layers. This method enables highly efficient culturing of hiPSCs and use of entire undifferentiated portions without having to pick deviated cells manually. This simple and readily reproducible culture strategy is a potentially useful tool for improving the robust and scalable maintenance of undifferentiated hiPSC cultures. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Development of improved space sampling strategies for ocean chemical properties: Total carbon dioxide and dissolved nitrate

    Science.gov (United States)

    Goyet, Catherine; Davis, Daniel; Peltzer, Edward T.; Brewer, Peter G.

    1995-01-01

    Large-scale ocean observing programs such as the Joint Global Ocean Flux Study (JGOFS) and the World Ocean Circulation Experiment (WOCE) today, must face the problem of designing an adequate sampling strategy. For ocean chemical variables, the goals and observing technologies are quite different from ocean physical variables (temperature, salinity, pressure). We have recently acquired data on the ocean CO2 properties on WOCE cruises P16c and P17c that are sufficiently dense to test for sampling redundancy. We use linear and quadratic interpolation methods on the sampled field to investigate what is the minimum number of samples required to define the deep ocean total inorganic carbon (TCO2) field within the limits of experimental accuracy (+/- 4 micromol/kg). Within the limits of current measurements, these lines were oversampled in the deep ocean. Should the precision of the measurement be improved, then a denser sampling pattern may be desirable in the future. This approach rationalizes the efficient use of resources for field work and for estimating gridded (TCO2) fields needed to constrain geochemical models.
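
    The redundancy test described above reduces to dropping samples, reconstructing the profile by interpolation, and checking the worst error against the +/- 4 micromol/kg accuracy; a linear-interpolation sketch with illustrative array names (assumes depth is monotonically increasing):

        import numpy as np

        def max_interp_error(depth, tco2, keep_every):
            """Keep every k-th sample, rebuild the profile by linear
            interpolation, and return the worst reconstruction error; the
            original spacing is redundant if this stays within accuracy."""
            d_sub, v_sub = depth[::keep_every], tco2[::keep_every]
            recon = np.interp(depth, d_sub, v_sub)
            return np.max(np.abs(recon - tco2))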

  6. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    Science.gov (United States)

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
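
    A schematic version of the stratified estimator, with strata defined as quantile bins of a model-derived auxiliary variable; in the paper the auxiliary variable is the output of a gene-flow model, here it is simply an array, and all names are illustrative.

        import numpy as np

        def stratified_estimate(aux, values, n_strata=4, n_per_stratum=25, seed=0):
            """Estimate a field-level mean rate by sampling locations within
            strata built from an auxiliary variable; in practice only the
            sampled entries of `values` would actually be measured."""
            aux, values = np.asarray(aux, float), np.asarray(values, float)
            rng = np.random.default_rng(seed)
            bins = np.quantile(aux, np.linspace(0, 1, n_strata + 1))
            strata = np.clip(np.digitize(aux, bins[1:-1]), 0, n_strata - 1)
            estimate = 0.0
            for k in range(n_strata):
                idx = np.flatnonzero(strata == k)
                if idx.size == 0:
                    continue  # degenerate stratum (duplicate quantile edges)
                n_k = min(n_per_stratum, idx.size)
                sampled = rng.choice(idx, size=n_k, replace=False)
                # Weight each stratum mean by its share of locations.
                estimate += (idx.size / aux.size) * values[sampled].mean()
            return estimate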

  7. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    Science.gov (United States)

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
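
    For the simplest of the uncertainty sets discussed, the interval (box) set, the robust counterpart of an inequality constraint with nonnegative variables just shifts each coefficient to its worst-case value; a toy SciPy sketch with invented numbers:

        import numpy as np
        from scipy.optimize import linprog

        # Nominal problem: max 3*x1 + 2*x2  s.t.  a @ x <= 10, x >= 0, where
        # each a_i lies in the interval [abar_i - ahat_i, abar_i + ahat_i].
        abar = np.array([2.0, 1.0])   # nominal coefficients (illustrative)
        ahat = np.array([0.4, 0.2])   # interval half-widths (illustrative)

        # Box-set robust counterpart: with x >= 0 the worst case of
        # a @ x <= b is (abar + ahat) @ x <= b.
        res = linprog(c=[-3.0, -2.0],              # linprog minimizes
                      A_ub=[abar + ahat],
                      b_ub=[10.0],
                      bounds=[(0, None), (0, None)])
        print("robust solution:", res.x, "objective:", -res.fun)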

  8. Distributed Consensus-Based Robust Adaptive Formation Control for Nonholonomic Mobile Robots with Partial Known Dynamics

    Directory of Open Access Journals (Sweden)

    Zhaoxia Peng

    2014-01-01

    Full Text Available This paper investigates the distributed consensus-based robust adaptive formation control for nonholonomic mobile robots with partially known dynamics. Firstly, multirobot formation control problem has been converted into a state consensus problem. Secondly, the practical control strategies, which incorporate the distributed kinematic controllers and the robust adaptive torque controllers, are designed for solving the formation control problem. Thirdly, the specified reference trajectory for the geometric centroid of the formation is assumed as the trajectory of a virtual leader, whose information is available to only a subset of the followers. Finally, numerical results are provided to illustrate the effectiveness of the proposed control approaches.

  9. Possibility of spoof attack against robustness of multibiometric authentication systems

    Science.gov (United States)

    Hariri, Mahdi; Shokouhi, Shahriar Baradaran

    2011-07-01

    Multibiometric systems have been recently developed in order to overcome some weaknesses of single biometric authentication systems, but the security of these systems against spoofing has not received enough attention. In this paper, we propose a novel practical method for simulation of possibilities of spoof attacks against a biometric authentication system. Using this method, we model matching scores from standard to completely spoofed genuine samples. Sum, product, and Bayes fusion rules are applied for score-level combination. The security of multimodal authentication systems is examined and compared with that of the single systems against various spoof possibilities. Although the vulnerability of fused systems increases considerably under spoofing, their robustness is generally higher than that of single-matcher systems. In this paper we show that the robustness of a combined system is not always higher than that of a single system against spoof attack. We propose empirical methods for upgrading the security of multibiometric systems, which describe how to organize and select biometric traits and matchers against various possibilities of spoof attack. These methods provide considerable robustness and present an appropriate rationale for using combined systems against spoof attacks.

  10. Robust procedures in chemometrics

    DEFF Research Database (Denmark)

    Kotwa, Ewelina

    ... properties of the analysed data. The broad theoretical background of robust procedures was given as a very useful supplement to the classical methods, and a new tool, based on robust PCA, aiming at identifying Rayleigh and Raman scatter in excitation-emission (EEM) data, was developed. The results show...

  11. Robust canonical correlations: A comparative study

    OpenAIRE

    Branco, JA; Croux, Christophe; Filzmoser, P; Oliveira, MR

    2005-01-01

    Several approaches for robust canonical correlation analysis will be presented and discussed. A first method is based on the definition of canonical correlation analysis as looking for linear combinations of two sets of variables having maximal (robust) correlation. A second method is based on alternating robust regressions. These methods are discussed in detail and compared with the more traditional approach to robust canonical correlation via covariance matrix estimates. A simulation study ...

  12. Standard operating procedures for collection of soil and sediment samples for the Sediment-bound Contaminant Resiliency and Response (SCoRR) strategy pilot study

    Science.gov (United States)

    Fisher, Shawn C.; Reilly, Timothy J.; Jones, Daniel K.; Benzel, William M.; Griffin, Dale W.; Loftin, Keith A.; Iwanowicz, Luke R.; Cohl, Jonathan A.

    2015-12-17

    An understanding of the effects on human and ecological health brought by major coastal storms or flooding events is typically limited because of a lack of regionally consistent baseline and trends data in locations proximal to potential contaminant sources and mitigation activities, sensitive ecosystems, and recreational facilities where exposures are probable. In an attempt to close this gap, the U.S. Geological Survey (USGS) has implemented the Sediment-bound Contaminant Resiliency and Response (SCoRR) strategy pilot study to collect regional sediment-quality data prior to and in response to future coastal storms. The standard operating procedure (SOP) detailed in this document serves as the sample-collection protocol for the SCoRR strategy by providing step-by-step instructions for site preparation, sample collection and processing, and shipping of soil and surficial sediment (for example, bed sediment, marsh sediment, or beach material). The objectives of the SCoRR strategy pilot study are (1) to create a baseline of soil-, sand-, marsh sediment-, and bed-sediment-quality data from sites located in the coastal counties from Maine to Virginia based on their potential risk of being contaminated in the event of a major coastal storm or flooding (defined as Resiliency mode); and (2) respond to major coastal storms and flooding by reoccupying select baseline sites and sampling within days of the event (defined as Response mode). For both modes, samples are collected in a consistent manner to minimize bias and maximize quality control by ensuring that all sampling personnel across the region collect, document, and process soil and sediment samples following the procedures outlined in this SOP. Samples are analyzed using four USGS-developed screening methods—inorganic geochemistry, organic geochemistry, pathogens, and biological assays—which are also outlined in this SOP. Because the SCoRR strategy employs a multi-metric approach for sample analyses, this

  13. A nested-PCR strategy for molecular diagnosis of mollicutes in uncultured biological samples from cows with vulvovaginitis.

    Science.gov (United States)

    Voltarelli, Daniele Cristina; de Alcântara, Brígida Kussumoto; Lunardi, Michele; Alfieri, Alice Fernandes; de Arruda Leme, Raquel; Alfieri, Amauri Alcindo

    2018-01-01

    Bacteria classified in Mycoplasma (M. bovis and M. bovigenitalium) and Ureaplasma (U. diversum) genera are associated with granular vulvovaginitis that affect heifers and cows at reproductive age. The traditional means for detection and speciation of mollicutes from clinical samples have been culture and serology. However, challenges experienced with these laboratory methods have hampered assessment of their impact in pathogenesis and epidemiology in cattle worldwide. The aim of this study was to develop a PCR strategy to detect and primarily discriminate between the main species of mollicutes associated with reproductive disorders of cattle in uncultured clinical samples. In order to amplify the 16S-23S rRNA internal transcribed spacer region of the genome, a consensual and species-specific nested-PCR assay was developed to identify and discriminate between main species of mollicutes. In addition, 31 vaginal swab samples from dairy and beef affected cows were investigated. This nested-PCR strategy was successfully employed in the diagnosis of single and mixed mollicute infections of diseased cows from cattle herds from Brazil. The developed system enabled the rapid and unambiguous identification of the main mollicute species known to be associated with this cattle reproductive disorder through differential amplification of partial fragments of the ITS region of mollicute genomes. The development of rapid and sensitive tools for mollicute detection and discrimination without the need for previous cultures or sequencing of PCR products is a high priority for accurate diagnosis in animal health. Therefore, the PCR strategy described herein may be helpful for diagnosis of this class of bacteria in genital swabs submitted to veterinary diagnostic laboratories, not demanding expertise in mycoplasma culture and identification. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Building robust functionality in synthetic circuits using engineered feedback regulation.

    Science.gov (United States)

    Chen, Susan; Harrigan, Patrick; Heineike, Benjamin; Stewart-Ornstein, Jacob; El-Samad, Hana

    2013-08-01

    The ability to engineer novel functionality within cells, to quantitatively control cellular circuits, and to manipulate the behaviors of populations, has many important applications in biotechnology and biomedicine. These applications are only beginning to be explored. In this review, we advocate the use of feedback control as an essential strategy for the engineering of robust homeostatic control of biological circuits and cellular populations. We also describe recent works where feedback control, implemented in silico or with biological components, was successfully employed for this purpose. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. The effects of ecology and evolutionary history on robust capuchin morphological diversity.

    Science.gov (United States)

    Wright, Kristin A; Wright, Barth W; Ford, Susan M; Fragaszy, Dorothy; Izar, Patricia; Norconk, Marilyn; Masterson, Thomas; Hobbs, David G; Alfaro, Michael E; Lynch Alfaro, Jessica W

    2015-01-01

    Recent molecular work has confirmed the long-standing morphological hypothesis that capuchins are comprised of two distinct clades, the gracile (untufted) capuchins (genus Cebus, Erxleben, 1777) and the robust (tufted) capuchins (genus Sapajus Kerr, 1792). In the past, the robust group was treated as a single, undifferentiated and cosmopolitan species, with data from all populations lumped together in morphological and ecological studies, obscuring morphological differences that might exist across this radiation. Genetic evidence suggests that the modern radiation of robust capuchins began diversifying ∼2.5 Ma, with significant subsequent geographic expansion into new habitat types. In this study we use a morphological sample of gracile and robust capuchin craniofacial and postcranial characters to examine how ecology and evolutionary history have contributed to morphological diversity within the robust capuchins. We predicted that if ecology is driving robust capuchin variation, three distinct robust morphotypes would be identified: (1) the Atlantic Forest species (Sapajus xanthosternos, S. robustus, and S. nigritus), (2) the Amazonian rainforest species (S. apella, S. cay and S. macrocephalus), and (3) the Cerrado-Caatinga species (S. libidinosus). Alternatively, if diversification time between species pairs predicts degree of morphological difference, we predicted that the recently diverged S. apella, S. macrocephalus, S. libidinosus, and S. cay would be morphologically comparable, with greater variation among the more ancient lineages of S. nigritus, S. xanthosternos, and S. robustus. Our analyses suggest that S. libidinosus has the most derived craniofacial and postcranial features, indicative of inhabiting a more terrestrial niche that includes a dependence on tool use for the extraction of imbedded foods. We also suggest that the cranial robusticity of S. macrocephalus and S. apella are indicative of recent competition with sympatric gracile capuchin

  16. Robustness of non-interdependent and interdependent networks against dependent and adaptive attacks

    Science.gov (United States)

    Tyra, Adam; Li, Jingtao; Shang, Yilun; Jiang, Shuo; Zhao, Yanjun; Xu, Shouhuai

    2017-09-01

    Robustness of complex networks has been extensively studied via the notion of site percolation, which typically models independent and non-adaptive attacks (or disruptions). However, real-life attacks are often dependent and/or adaptive. This motivates us to characterize the robustness of complex networks, including non-interdependent and interdependent ones, against dependent and adaptive attacks. For this purpose, dependent attacks are accommodated by L-hop percolation, where the nodes within some L-hop (L ≥ 0) distance of a chosen node are all deleted during one attack (with L = 0 degenerating to site percolation). In contrast, adaptive attacks are launched by attackers who can make node-selection decisions based on the network state at the beginning of each attack. The resulting characterization enriches the body of knowledge with new insights, such as: (i) the Achilles' Heel phenomenon is only valid for independent attacks, but not for dependent attacks; (ii) powerful attack strategies (e.g., targeted attacks and dependent attacks, dependent attacks and adaptive attacks) are not compatible and cannot help the attacker when used collectively. Our results shed some light on the design of robust complex networks.
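
    The L-hop attack model is easy to prototype with NetworkX: each attack removes a chosen node together with its ball of radius L, and an adaptive attacker re-targets using the current network state. A simplified sketch of that setup (assumes an undirected graph):

        import random
        import networkx as nx

        def l_hop_attack(g, n_attacks, L=1, adaptive=False, seed=0):
            """Run n_attacks L-hop deletions (L = 0 is plain site percolation)
            and return the surviving giant component as a fraction of the
            original network size."""
            g = g.copy()
            n0 = g.number_of_nodes()
            rng = random.Random(seed)
            for _ in range(n_attacks):
                if g.number_of_nodes() == 0:
                    break
                if adaptive:
                    # Adaptive attacker: target the current highest-degree node.
                    target = max(g.degree, key=lambda nd: nd[1])[0]
                else:
                    target = rng.choice(list(g.nodes))
                ball = nx.single_source_shortest_path_length(g, target, cutoff=L)
                g.remove_nodes_from(list(ball))
            if g.number_of_nodes() == 0:
                return 0.0
            return len(max(nx.connected_components(g), key=len)) / n0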

  17. Product integration rules at Clenshaw-Curtis and related points: A robust implementation

    International Nuclear Information System (INIS)

    Adam, G.; Nobile, A.

    1989-12-01

    Product integration rules generalizing the Fejer, Clenshaw-Curtis and Filippi quadrature rules respectively are derived for integrals with trigonometric and hyperbolic weight factors. The study puts in evidence the existence of well-conditioned, fully analytic solutions in terms of hypergeometric functions ₀F₁. An a priori error estimator is discussed which is shown both to avoid wasteful invocation of the integration rule and to increase significantly the robustness of the automatic quadrature procedure. Then, specializing to extended Clenshaw-Curtis (ECC) rules, three types of a posteriori error estimates are considered and the existence of a great risk of their failure is put into evidence by large scale validation tests. An empirical error estimator, superseding them at slowly varying integrands, is found to result in a spectacular increase in the output reliability. Finally, enhancements in the control of the interval subdivision strategy aiming at increasing code robustness are discussed. Comparison with the code DQAWO of QUADPACK, extending over a statistics of about one hundred thousand solved integrals, is illustrative of the increased robustness and error estimate reliability of our computer code implementation of the ECC rules. (author). 19 refs, 8 tabs

  18. Module-based analysis of robustness tradeoffs in the heat shock response system.

    Directory of Open Access Journals (Sweden)

    Hiroyuki Kurata

    2006-07-01

    Full Text Available Biological systems have evolved complex regulatory mechanisms, even in situations where much simpler designs seem to be sufficient for generating nominal functionality. Using module-based analysis coupled with rigorous mathematical comparisons, we propose that in analogy to control engineering architectures, the complexity of cellular systems and the presence of hierarchical modular structures can be attributed to the necessity of achieving robustness. We employ the Escherichia coli heat shock response system, a strongly conserved cellular mechanism, as an example to explore the design principles of such modular architectures. In the heat shock response system, the sigma-factor sigma32 is a central regulator that integrates multiple feedforward and feedback modules. Each of these modules provides a different type of robustness with its inherent tradeoffs in terms of transient response and efficiency. We demonstrate how the overall architecture of the system balances such tradeoffs. An extensive mathematical exploration nevertheless points to the existence of an array of alternative strategies for the existing heat shock response that could exhibit similar behavior. We therefore deduce that the evolutionary constraints facing the system might have steered its architecture toward one of many robustly functional solutions.

  19. Assessment and testing of industrial devices robustness against cyber security attacks

    International Nuclear Information System (INIS)

    Tilaro, F.; Copy, B.

    2012-01-01

    CERN (European Organization for Nuclear Research), like any organization, needs to achieve the conflicting objectives of connecting its operational network to the Internet while at the same time keeping its industrial control systems secure from external and internal cyber attacks. Device robustness represents a key link in the defense-in-depth concept, as some attacks will inevitably penetrate security boundaries and thus require further protection measures. CERN - in collaboration with Siemens - has designed and implemented a dedicated working environment, the Test-bench for Robustness of Industrial Equipment. Such tests attempt to detect possible anomalies by exploiting corrupt communication channels and manipulating the normal behavior of the communication protocols, in the same way as a cyber attacker would proceed. Our approach consists of analyzing protocol implementations by injecting malformed PDUs (Protocol Data Units) to corrupt the normal behaviour of the system. As a PDU typically has many fields, the number of possible syntactically faulty PDUs grows exponentially with the number of fields. In this document, we propose a strategy to explore this huge test domain using a hybrid approach of fuzzing and syntax techniques, specifically developed to evaluate industrial device communication robustness. So far, not all the tests can be integrated into automatic tools; human analysis and management are necessary to discover and investigate specific possible failures.
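
    The malformed-PDU idea can be illustrated with a toy mutator that packs a field description and flips one field at a time to a boundary value; the real test-bench and its protocol grammars are of course far richer, and the field layout below is hypothetical.

        import random
        import struct

        def fuzz_pdu(fields, rng=None):
            """Return one syntactically malformed PDU: mutate a single field
            of a (name, struct format, nominal value) list to a limit value."""
            rng = rng or random.Random()
            victim = rng.randrange(len(fields))
            parts = []
            for i, (name, fmt, value) in enumerate(fields):
                if i == victim:
                    bits = 8 * struct.calcsize(fmt)
                    value = rng.choice([0, (1 << bits) - 1, 1 << (bits - 1)])
                parts.append(struct.pack(fmt, value))
            return b"".join(parts)

        # Hypothetical 3-field PDU: function code, length, checksum.
        pdu = fuzz_pdu([("func", ">B", 3), ("len", ">H", 8), ("crc", ">I", 0)])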

  20. Robust Clamping Force Control of an Electro-Mechanical Brake System for Application to Commercial City Buses

    Directory of Open Access Journals (Sweden)

    Sangjune Eum

    2017-02-01

    Full Text Available This paper proposes a sensor-less robust force control method for improving the control performance of an electro-mechanical brake (EMB), which is applicable to commercial city buses. The EMB generates the accurate clamping force commanded by a driver through an independent motor control at each wheel instead of using existing mechanical components. In general, an EMB is subject to parameter variation and a backdrivability problem. For this reason, a cascade control strategy (e.g., a force-position cascade control structure) is proposed, and a disturbance observer is employed to enhance control robustness against model variations. Additionally, this paper proposes a clamping force estimation method for sensor-less control, i.e., the clamping force observer (CFO). Finally, in order to confirm the performance and effectiveness of the proposed robust control method, several experiments are performed and analyzed.

  1. Hybrid Robust Control Law with Disturbance Observer for High-Frequency Response Electro-Hydraulic Servo Loading System

    Directory of Open Access Journals (Sweden)

    Zhiqing Sheng

    2016-04-01

    Full Text Available Addressing the problem of simulating the helicopter-manipulating booster aerodynamic load, in which a high-frequency dynamic load is superimposed on a large static load, this paper studies the design of a robust controller for the electro-hydraulic loading system to realize the simulation of this kind of load. Firstly, the equivalent linear model of the electro-hydraulic loading system under assumed parameter uncertainty is established. Then, a hybrid control scheme is proposed for the loading system. This control scheme consists of a constant velocity feed-forward compensator, a robust inner loop compensator based on a disturbance observer and a robust outer loop feedback controller. The constant velocity compensator eliminates most of the extraneous force at first, and then the double-loop cascade composition control strategy is employed to design the compensated system. The disturbance observer-based inner loop compensator further restrains the disturbances, including the remaining extraneous force, and makes the actual plant track a nominal model approximately within a certain frequency range. The robust outer loop controller achieves the desired force-tracking performance and guarantees system robustness in the high frequency region. The optimized low-pass filter Q(s) is designed by using the H∞ mixed sensitivity optimization method. The simulation results show that the proposed hybrid control scheme and controller can effectively suppress the extraneous force and improve the robustness of the electro-hydraulic loading system.

  2. Appreciating the difference between design-based and model-based sampling strategies in quantitative morphology of the nervous system.

    Science.gov (United States)

    Geuna, S

    2000-11-20

    Quantitative morphology of the nervous system has undergone great developments over recent years, and several new technical procedures have been devised and applied successfully to neuromorphological research. However, a lively debate has arisen on some issues, and a great deal of confusion appears to exist that is definitely responsible for the slow spread of the new techniques among scientists. One such element of confusion is related to uncertainty about the meaning, implications, and advantages of the design-based sampling strategy that characterizes the new techniques. In this article, to help remove this uncertainty, morphoquantitative methods are described and contrasted on the basis of the inferential paradigm of the sampling strategy: design-based vs model-based. Moreover, some recommendations are made to help scientists judge the appropriateness of a method used for a given study in relation to its specific goals. Finally, the use of the term stereology to label, more or less expressly, only some methods is critically discussed. Copyright 2000 Wiley-Liss, Inc.
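
    The hallmark of a design-based strategy is that selection probabilities are fixed by the sampling design rather than by a model of the tissue; systematic uniform random sampling is the canonical example. A minimal sketch:

        import numpy as np

        def systematic_uniform_random(n_total, n_samples, seed=None):
            """Pick n_samples positions out of n_total on a regular grid with
            a random start, giving every position an equal, known inclusion
            probability (design-based, no distributional model needed)."""
            rng = np.random.default_rng(seed)
            step = n_total / n_samples
            start = rng.uniform(0, step)
            return np.floor(start + step * np.arange(n_samples)).astype(int)

    Inference from such a design requires no assumptions about how the quantity of interest is distributed across the specimen, which is precisely the contrast with model-based strategies that the article draws.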

  3. Solid-Phase Extraction Strategies to Surmount Body Fluid Sample Complexity in High-Throughput Mass Spectrometry-Based Proteomics

    Science.gov (United States)

    Bladergroen, Marco R.; van der Burgt, Yuri E. M.

    2015-01-01

    For large-scale and standardized applications in mass spectrometry- (MS-) based proteomics, automation of each step is essential. Here we present high-throughput sample preparation solutions for balancing the speed of current MS-acquisitions and the time needed for analytical workup of body fluids. The discussed workflows reduce body fluid sample complexity and apply for both bottom-up proteomics experiments and top-down protein characterization approaches. Various sample preparation methods that involve solid-phase extraction (SPE), including affinity enrichment strategies, have been automated. Obtained peptide and protein fractions can be mass analyzed by direct infusion into an electrospray ionization (ESI) source or by means of matrix-assisted laser desorption ionization (MALDI) without further need of time-consuming liquid chromatography (LC) separations. PMID:25692071

  4. Getting complete genomes from complex samples using nanopore sequencing

    DEFF Research Database (Denmark)

    Kirkegaard, Rasmus Hansen; Karst, Søren Michael; Albertsen, Mads

    Short read sequencing and metagenomic binning workflows have made it possible to extract bacterial genome bins from environmental microbial samples containing hundreds to thousands of different species. However, these genome bins often do not represent complete genomes, as they are mostly fragmented, incomplete, and often contaminated with foreign DNA, with no robust strategies to validate their quality. These `draft genomes` therefore have limited, lasting value to the scientific community, as gene synteny is broken and it remains uncertain what is missing. The genetic material most often missed is important multi-copy and/or conserved marker genes, such as the 16S rRNA gene, as sequence micro-heterogeneity prevents assembly of these genes in the de novo assembly. We demonstrate that using nanopore long reads it is now possible to overcome these issues and make complete genomes from...

  5. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Directory of Open Access Journals (Sweden)

    Peng-Cheng Yao

    Full Text Available Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to the optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of the DNA barcodes in widely distributed species. The results of random sampling showed that when sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcodes of globally distributed species should be increased to 11-15.
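
    The stabilization criterion used above (intraspecific measures plateauing at around 11-15 individuals) can be explored with a simple rarefaction-style resampling of observed haplotypes; a generic sketch, not the authors' code:

        import numpy as np

        def haplotype_accumulation(haplotypes, n_rep=200, seed=0):
            """Mean number of distinct haplotypes recovered as a function of
            the number of individuals sampled (without replacement); the
            onset of a plateau suggests an adequate sample size."""
            rng = np.random.default_rng(seed)
            haplotypes = np.asarray(haplotypes)
            curve = []
            for n in range(1, len(haplotypes) + 1):
                reps = [len(set(rng.choice(haplotypes, size=n, replace=False)))
                        for _ in range(n_rep)]
                curve.append(float(np.mean(reps)))
            return curve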

  6. Assessing a robust ensemble-based Kalman filter for efficient ecosystem data assimilation of the Cretan Sea

    KAUST Repository

    Triantafyllou, George N.; Hoteit, Ibrahim; Luo, Xiaodong; Tsiaras, Kostas P.; Petihakis, George

    2013-01-01

    An application of an ensemble-based robust filter for data assimilation into an ecosystem model of the Cretan Sea is presented and discussed. The ecosystem model comprises two on-line coupled sub-models: the Princeton Ocean Model (POM) and the European Regional Seas Ecosystem Model (ERSEM). The filtering scheme is based on the Singular Evolutive Interpolated Kalman (SEIK) filter which is implemented with a time-local H∞ filtering strategy to enhance robustness and performances during periods of strong ecosystem variability. Assimilation experiments in the Cretan Sea indicate that robustness can be achieved in the SEIK filter by introducing an adaptive inflation scheme of the modes of the filter error covariance matrix. Twin-experiments are performed to evaluate the performance of the assimilation system and to study the benefits of using robust filtering in an ensemble filtering framework. Pseudo-observations of surface chlorophyll, extracted from a model reference run, were assimilated every two days. Simulation results suggest that the adaptive inflation scheme significantly improves the behavior of the SEIK filter during periods of strong ecosystem variability. © 2012 Elsevier B.V.
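
    To give a feel for the role of inflation in an ensemble filter, the following is a stochastic ensemble Kalman update with a multiplicative inflation factor applied to the forecast anomalies. The SEIK filter used in the paper operates on a reduced set of modes and its adaptive scheme differs in detail; this sketch only illustrates the mechanism.

        import numpy as np

        def enkf_update(ens, y_obs, h, r_var, inflation=1.0, seed=0):
            """Update an (n_ens, n_state) ensemble with one scalar observation
            y = h(x) + noise of variance r_var; inflation > 1 spreads the
            forecast anomalies before the analysis step."""
            rng = np.random.default_rng(seed)
            mean = ens.mean(axis=0)
            anom = inflation * (ens - mean)       # inflate forecast spread
            ens = mean + anom
            hx = np.array([h(x) for x in ens])    # predicted observations
            p_xy = anom.T @ (hx - hx.mean()) / (len(ens) - 1)
            p_yy = np.var(hx, ddof=1) + r_var
            gain = p_xy / p_yy                    # Kalman gain, shape (n_state,)
            perturbed = y_obs + rng.normal(0.0, np.sqrt(r_var), size=len(ens))
            return ens + np.outer(perturbed - hx, gain)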

  8. A Robust Multivariable Feedforward/Feedback Controller Design for Integrated Power Control of Boiling Water Reactor Power Plants

    International Nuclear Information System (INIS)

    Shyu, S.-S.; Edwards, Robert M.

    2002-01-01

    In this paper, a methodology for synthesizing a robust multivariable feedforward/feedback control (FF/FBC) strategy is proposed for an integrated control of turbine power, throttle pressure, and reactor water level in a nuclear power plant. In the proposed method, the FBC is synthesized by the robust control approach. The feedforward control, which is generated via nonlinear programming, is added to the robust FBC system to further improve the control performance. The plant uncertainties, including unmodeled dynamics, linearization, and model reduction, are characterized and estimated. The comparisons of simulation responses based on a nonlinear reactor model demonstrate the achievement of the proposed controller with specified performance and endurance under uncertainty. It is also important to note that all input variables are manipulated in an orchestrated manner in response to a single output's setpoint change.

  9. Sustained Attention Across the Life Span in a Sample of 10,000: Dissociating Ability and Strategy.

    Science.gov (United States)

    Fortenbaugh, Francesca C; DeGutis, Joseph; Germine, Laura; Wilmer, Jeremy B; Grosso, Mallory; Russo, Kathryn; Esterman, Michael

    2015-09-01

    Normal and abnormal differences in sustained visual attention have long been of interest to scientists, educators, and clinicians. Still lacking, however, is a clear understanding of how sustained visual attention varies across the broad sweep of the human life span. In the present study, we filled this gap in two ways. First, using an unprecedentedly large 10,430-person sample, we modeled age-related differences with substantially greater precision than have prior efforts. Second, using the recently developed gradual-onset continuous performance test (gradCPT), we parsed sustained-attention performance over the life span into its ability and strategy components. We found that after the age of 15 years, the strategy and ability trajectories saliently diverge. Strategy becomes monotonically more conservative with age, whereas ability peaks in the early 40s and is followed by a gradual decline in older adults. These observed life-span trajectories for sustained attention are distinct from results of other life-span studies focusing on fluid and crystallized intelligence. © The Author(s) 2015.
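
    A standard way to separate ability from strategy in a continuous performance test is signal detection theory, with sensitivity d' as the ability measure and criterion c as the strategy measure; whether or not these are exactly the gradCPT metrics, the decomposition looks like this (hit and false-alarm rates assumed strictly between 0 and 1):

        from scipy.stats import norm

        def dprime_criterion(hit_rate, fa_rate):
            """Return (d', c): d' = z(H) - z(FA) indexes discrimination
            ability; c = -(z(H) + z(FA))/2 indexes response conservatism."""
            z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
            return z_hit - z_fa, -0.5 * (z_hit + z_fa)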

  10. Robust plasmonic substrates

    DEFF Research Database (Denmark)

    Kostiučenko, Oksana; Fiutowski, Jacek; Tamulevicius, Tomas

    2014-01-01

    Robustness is a key issue for the applications of plasmonic substrates such as tip-enhanced Raman spectroscopy, surface-enhanced spectroscopies, enhanced optical biosensing, optical and optoelectronic plasmonic nanosensors and others. A novel approach for the fabrication of robust plasmonic substrates is presented, which relies on the coverage of gold nanostructures with diamond-like carbon (DLC) thin films of thicknesses 25, 55 and 105 nm. DLC thin films were grown by direct hydrocarbon ion beam deposition. In order to find the optimum balance between optical and mechanical properties...

  11. Seroadaptive Strategies of Gay & Bisexual Men (GBM) with the Highest Quartile Number of Sexual Partners in Vancouver, Canada.

    Science.gov (United States)

    Card, Kiffer G; Lachowsky, Nathan J; Cui, Zishan; Sereda, Paul; Rich, Ashleigh; Jollimore, Jody; Howard, Terry; Birch, Robert; Carter, Allison; Montaner, Julio; Moore, David; Hogg, Robert S; Roth, Eric Abella

    2017-05-01

    Despite continued research among men with more sexual partners, little information exists on their seroadaptive behavior. Therefore, we examined seroadaptive anal sex strategies among 719 Vancouver gay and bisexual men (GBM) recruited using respondent-driven sampling. We provide descriptive, bivariable, and multivariable adjusted statistics, stratified by HIV status, for the covariates of having ≥7 male anal sex partners in the past 6 months (population fourth quartile) versus <7. Sensitivity analyses were also performed to assess the robustness of this cut-off. Results suggest that GBM with more sexual partners are more likely to employ seroadaptive strategies than men with fewer partners. These strategies may be used in hopes of offsetting risk, assessing needs for subsequent HIV testing, and balancing personal health with sexual intimacy. Further research is needed to determine the efficacy of these strategies, assess how GBM perceive their efficacy, and understand the social and health impacts of their widespread uptake.

  12. A robust standard deviation control chart

    NARCIS (Netherlands)

    Schoonhoven, M.; Does, R.J.M.M.

    2012-01-01

    This article studies the robustness of Phase I estimators for the standard deviation control chart. A Phase I estimator should be efficient in the absence of contaminations and resistant to disturbances. Most of the robust estimators proposed in the literature are robust against either diffuse
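
    A typical robust Phase I estimator of this kind pools a resistant per-subgroup spread statistic; for example, the median of subgroup interquartile ranges rescaled by the normal-consistency constant (the IQR of a normal distribution is about 1.349 sigma). This is a generic sketch, not the specific chart proposed in the article:

        import numpy as np

        def robust_sigma(phase1_subgroups):
            """Spread estimate from Phase I subgroups that resists
            contaminated subgroups: median of subgroup IQRs / 1.349."""
            iqrs = [np.subtract(*np.percentile(s, [75, 25]))
                    for s in phase1_subgroups]
            return float(np.median(iqrs) / 1.349)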

  13. Integrating Globality and Locality for Robust Representation Based Classification

    Directory of Open Access Journals (Sweden)

    Zheng Zhang

    2014-01-01

    Full Text Available The representation based classification method (RBCM) has shown huge potential for face recognition since it first emerged. The linear regression classification (LRC) method and the collaborative representation classification (CRC) method are two well-known RBCMs. LRC and CRC exploit the training samples of each class and all the training samples, respectively, to represent the testing sample, and subsequently conduct classification on the basis of the representation residual. The LRC method can be viewed as a "locality representation" method because it just uses the training samples of each class to represent the testing sample, and it cannot embody the effectiveness of the "globality representation." On the contrary, it seems that the CRC method cannot own the benefit of locality of the general RBCM. Thus we propose to integrate CRC and LRC to perform more robust representation based classification. The experimental results on benchmark face databases substantially demonstrate that the proposed method achieves high classification accuracy.
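
    The two baseline RBCMs compared above are a few lines each: LRC solves a least-squares representation per class, CRC a single ridge-regularized representation over all training samples, and both classify by the smallest class-wise residual. A NumPy sketch, assuming samples are stored as columns:

        import numpy as np

        def lrc_predict(train_by_class, test):
            """LRC: represent the test vector with each class's training
            matrix (dim x n_c) and pick the smallest residual."""
            res = []
            for X in train_by_class:
                beta, *_ = np.linalg.lstsq(X, test, rcond=None)
                res.append(np.linalg.norm(test - X @ beta))
            return int(np.argmin(res))

        def crc_predict(train_all, labels, test, lam=1e-2):
            """CRC: one collaborative code over all samples (dim x n),
            then class-wise residuals."""
            labels = np.asarray(labels)
            n = train_all.shape[1]
            beta = np.linalg.solve(train_all.T @ train_all + lam * np.eye(n),
                                   train_all.T @ test)
            classes = np.unique(labels)
            res = [np.linalg.norm(test - train_all[:, labels == c]
                                  @ beta[labels == c]) for c in classes]
            return int(classes[np.argmin(res)])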

  14. Robust Multi-Objective PQ Scheduling for Electric Vehicles in Flexible Unbalanced Distribution Grids

    DEFF Research Database (Denmark)

    Knezovic, Katarina; Soroudi, Alireza; Marinelli, Mattia

    2017-01-01

    With increased penetration of distributed energy resources and electric vehicles (EVs), different EV management strategies can be used for mitigating adverse effects and supporting the distribution grid. This paper proposes a robust multi-objective methodology for determining the optimal day...... demand response programs. The method is tested on a real Danish unbalanced distribution grid with 35% EV penetration to demonstrate the effectiveness of the proposed approach. It is shown that the proposed formulation guarantees an optimal EV cost as long as the price uncertainties are lower than....... The robust formulation effectively considers the errors in the electricity price forecast and its influence on the EV schedule. Moreover, the impact of EV reactive power support on objective values and technical parameters is analysed both when EVs are the only flexible resources and when linked with other...

  15. Star marketer’s impact on the market strategy choice

    Directory of Open Access Journals (Sweden)

    Vlašić Goran

    2017-01-01

    Full Text Available We focus on understanding the role of star marketers in pursuing a market-driven vs. a market-driving strategy. Results indicate that market-driving and market-driven strategies are two approaches that can be pursued by market-oriented firms. A star marketer has a robust positive influence on market-driving strategy. In contrast, a star marketer has no meaningful influence on market-driven strategy. In short, while star marketers are very important for market-driving strategy and long term success, they represent an unnecessary cost and provide no added value to companies focusing on market-driven strategies and short term results.

  16. Impedance Stability Assessment of Active Damping Strategies for LCL Grid-Connected Converters

    DEFF Research Database (Denmark)

    Diaz, Enrique Rodriguez; Quintero, Juan Carlos Vasquez; Guerrero, Josep M.

    2017-01-01

    The use of LCL filters is a well accepted solution to attenuate the harmonics created by the pulsewidth modulation (PWM). However, LCL filters inherently have a resonance region where the unwanted harmonics are amplified, which can compromise stability. Active damping strategies are implemented as a countermeasure. This paper analyses the robustness of the closed-loop dynamics when different active damping strategies are implemented. Due to their readiness and simplicity, the following schemes are considered: 1) filtered capacitor voltage feed-forward and 2) notch filters in cascade with the main current controller. The impedance/admittance stability formulation is used to model the system, which has been proven to be very convenient for the assessment of robustness. The design case study shows that the filtered capacitor voltage feed-forward provides a more robust implementation than the one based...

  17. Application of Robust Regression and Bootstrap in Poductivity Analysis of GERD Variable in EU27

    Directory of Open Access Journals (Sweden)

    Dagmar Blatná

    2014-06-01

    Full Text Available The GERD is one of the Europe 2020 headline indicators being tracked within the Europe 2020 strategy. The headline indicator is the 3% target for the GERD to be reached within the EU by 2020. Eurostat defines "GERD" as total gross domestic expenditure on research and experimental development as a percentage of GDP. GERD depends on numerous factors of a general economic background, namely employment, innovation and research, and science and technology. The values of these indicators vary among the European countries, and consequently the occurrence of outliers can be anticipated in corresponding analyses. In such a case, a classical statistical approach – the least squares method – can be highly unreliable, the robust regression methods representing an acceptable and useful tool. The aim of the present paper is to demonstrate the advantages of robust regression and the applicability of the bootstrap approach in regression based on both classical and robust methods.

  18. Radial line-scans as representative sampling strategy in dried-droplet laser ablation of liquid samples deposited on pre-cut filter paper disks

    Energy Technology Data Exchange (ETDEWEB)

    Nischkauer, Winfried [Institute of Chemical Technologies and Analytics, Vienna University of Technology, Vienna (Austria); Department of Analytical Chemistry, Ghent University, Ghent (Belgium); Vanhaecke, Frank [Department of Analytical Chemistry, Ghent University, Ghent (Belgium); Bernacchi, Sébastien; Herwig, Christoph [Institute of Chemical Engineering, Vienna University of Technology, Vienna (Austria); Limbeck, Andreas, E-mail: Andreas.Limbeck@tuwien.ac.at [Institute of Chemical Technologies and Analytics, Vienna University of Technology, Vienna (Austria)

    2014-11-01

    Nebulising liquid samples and using the aerosol thus obtained for further analysis is the standard method in many current analytical techniques, including devices based on inductively coupled plasma (ICP). With such a set-up, quantification via external calibration is usually straightforward for samples with an aqueous or close-to-aqueous matrix composition. However, there is a variety of more complex samples. Such samples can be found in medical, biological, technological and industrial contexts and can range from body fluids, like blood or urine, to fuel additives or fermentation broths. Specialized nebulizer systems or careful digestion and dilution are required to tackle such demanding sample matrices. One alternative approach is to convert the liquid into a dried solid and to use laser ablation for sample introduction. Up to now, this approach required the application of internal standards or matrix-adjusted calibration due to matrix effects. In this contribution, we show a way to circumvent these matrix effects while using simple external calibration for quantification. The principle of representative sampling that we propose uses radial line-scans across the dried residue, which compensates for the centro-symmetric inhomogeneities typically observed in dried spots. The effectiveness of the proposed sampling strategy is exemplified via the determination of phosphorus in biochemical fermentation media, although the universal viability of the presented measurement protocol is postulated. Detection limits using laser ablation-ICP-optical emission spectrometry were in the order of 40 μg mL⁻¹, with a reproducibility of 10% relative standard deviation (n = 4, concentration = 10 times the quantification limit). The reported sensitivity is fit-for-purpose in the biochemical context described here, but could be improved using ICP-mass spectrometry should future analytical tasks require it. Trueness of the proposed method was investigated by cross-validation with
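    A toy simulation helps to see why a radial line-scan compensates for centro-symmetric inhomogeneity. The sketch below (Python; the "coffee-ring" profile and all numbers are invented for illustration, not the authors' data) compares the drop-to-drop spread of a single centre-spot signal with that of a radial line-scan integral when the fraction of analyte deposited in the rim varies between droplets.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "coffee-ring" model of a dried droplet (all numbers invented): a
# fraction f of the analyte collects in a narrow rim, the rest spreads
# uniformly. Total analyte mass is identical for every droplet; only its
# radial redistribution varies from drop to drop.
r = np.linspace(1e-3, 1.0, 2000)       # radial coordinate, rim at r = 1
dr = r[1] - r[0]

def radial_profile(f, ring_pos=0.9, ring_width=0.05):
    ring = np.exp(-((r - ring_pos) / ring_width) ** 2)
    ring /= np.sum(ring * 2 * np.pi * r) * dr      # normalise rim mass to 1
    flat = np.full_like(r, 1.0 / np.pi)            # uniform disc of mass 1
    return f * ring + (1 - f) * flat

spot, line = [], []
for _ in range(200):
    f = rng.uniform(0.4, 0.8)          # drop-to-drop variation of the rim
    prof = radial_profile(f)
    spot.append(prof[0])               # single ablation spot at the centre
    line.append(np.sum(prof) * dr)     # radial line-scan: integral over r

rsd = lambda a: 100 * np.std(a) / np.mean(a)
print(f"centre-spot RSD: {rsd(spot):4.0f} %")
print(f"line-scan RSD:   {rsd(line):4.0f} %")
```

    The line-scan does not remove the geometry dependence entirely, but because it samples every radial zone of the residue it suppresses most of the drop-to-drop variability to which a fixed single spot is exposed.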

  19. Robustness: confronting lessons from physics and biology.

    Science.gov (United States)

    Lesne, Annick

    2008-11-01

    The term robustness is encountered in very different scientific fields, from engineering and control theory to dynamical systems to biology. The main question addressed herein is whether the notion of robustness and its correlates (stability, resilience, self-organisation) developed in physics are relevant to biology, or whether specific extensions and novel frameworks are required to account for the robustness properties of living systems. To clarify this issue, the different meanings covered by this unique term are discussed; it is argued that they crucially depend on the kind of perturbations that a robust system should by definition withstand. Possible mechanisms underlying robust behaviours are examined, either encountered in all natural systems (symmetries, conservation laws, dynamic stability) or specific to biological systems (feedbacks and regulatory networks). Special attention is devoted to the (sometimes counterintuitive) interrelations between robustness and noise. A distinction between dynamic selection and natural selection in the establishment of a robust behaviour is underlined. It is finally argued that nested notions of robustness, relevant to different time scales and different levels of organisation, allow one to reconcile the seemingly contradictory requirements for robustness and adaptability in living systems.

  20. Extortion under uncertainty: Zero-determinant strategies in noisy games

    Science.gov (United States)

    Hao, Dong; Rong, Zhihai; Zhou, Tao

    2015-05-01

    Repeated game theory has been one of the most prevailing tools for understanding long-running relationships, which are the foundation of human society. Recent works have revealed a new set of "zero-determinant" (ZD) strategies, which represent an important advance in repeated games. A ZD strategy player can exert unilateral control over two players' payoffs. In particular, he can deterministically set the opponent's payoff or enforce an unfair linear relationship between the players' payoffs, thereby always seizing an advantageous share of payoffs. One limitation of the original ZD strategy, however, is that it does not capture the notion of robustness when the game is subjected to stochastic errors. In this paper, we propose a general model of ZD strategies for noisy repeated games and find that ZD strategies have high robustness against errors. We further derive the pinning strategy under noise, by which the ZD strategy player coercively sets the opponent's expected payoff to his desired level, although his payoff-control ability declines as the noise strength increases. Due to the uncertainty caused by noise, the ZD strategy player cannot ensure his payoff to be permanently higher than the opponent's, which implies that dominant extortions do not exist even under low noise. Nevertheless, we show that the ZD strategy player can still establish a novel kind of extortion, named contingent extortion, where any increase of his own payoff always exceeds that of the opponent by a fixed percentage, and the conditions under which contingent extortion can be realized become more stringent as the noise grows stronger.
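    The payoff-control claim is easy to check numerically. The sketch below (Python; the payoff values, the opponent strategy, the extortion parameters χ and φ, and the implementation-error noise model, in which each intended move flips with probability eps, are all assumptions for illustration, not the paper's model) builds the Press-Dyson extortionate strategy, computes the stationary distribution of the two-strategy Markov chain, and shows how the enforced relation s_X − P = χ(s_Y − P) degrades as noise grows.

```python
import numpy as np

# Conventional prisoner's-dilemma payoffs (assumed).
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def noisy(p, eps):
    """Implementation noise: each intended move flips with probability eps."""
    p = np.asarray(p, dtype=float)
    return (1 - eps) * p + eps * (1 - p)

def transition_matrix(p, q):
    """Markov matrix over outcomes (CC, CD, DC, DD), labelled from X's side.
    q is given from Y's own perspective, so its CD/DC entries swap."""
    q = np.array([q[0], q[2], q[1], q[3]])
    M = np.empty((4, 4))
    for i in range(4):
        px, py = p[i], q[i]
        M[i] = [px * py, px * (1 - py), (1 - px) * py, (1 - px) * (1 - py)]
    return M

def stationary(M):
    """Left eigenvector of M for eigenvalue 1, normalised to a distribution."""
    w, v = np.linalg.eig(M.T)
    vec = np.real(v[:, np.argmin(np.abs(w - 1))])
    return vec / vec.sum()

# Press-Dyson extortionate strategy enforcing s_X - P = chi * (s_Y - P):
# p = (1, 1, 0, 0) + phi * [(S_X - P) - chi * (S_Y - P)]
chi, phi = 3.0, 0.05
p_zd = [1 - phi * (chi - 1) * (R - P),
        1 + phi * ((S - P) - chi * (T - P)),
        phi * ((T - P) - chi * (S - P)),
        0.0]
q_opp = [0.9, 0.5, 0.5, 0.1]           # arbitrary memory-one opponent

for eps in (0.0, 0.01, 0.05):
    v = stationary(transition_matrix(noisy(p_zd, eps), noisy(q_opp, eps)))
    s_x = v @ np.array([R, S, T, P])   # ZD player's long-run payoff
    s_y = v @ np.array([R, T, S, P])   # opponent's long-run payoff
    print(f"eps={eps:.2f}: s_X-P = {s_x - P:.3f}, chi*(s_Y-P) = {chi * (s_y - P):.3f}")
```

    At eps = 0 the two printed quantities coincide for any opponent, which is exactly the ZD guarantee; as eps grows they drift apart, matching the declining control ability described in the abstract.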